WORLD USABILITY DAY
SCHEDULE

SESSION MATERIALS






With an exciting line-up of engaging topics and speakers, you will not want to miss anything! Busy day or upcoming deadline? We get it. That is why the Zoom event follows an open-house format, with sessions beginning on the hour and 15-minute breaks in between. You can even sit down with us during lunchtime for an informative panel discussion.


Click the time to see what session is happening:

8:30  |  9:00  |  10:00  |  11:00  |  12:00  |  2:00  |  3:00

Session recordings and presentation slides are posted below for most of the sessions in case you were not able to attend a session, or would like to watch it again. 




8:30 - 9:00  |  Welcome from CMS Leadership

9:00 - 9:45  |  A Gold Mining Adventure – Using Natural Language Processing and Machine Learning to Find Gold in Unstructured Data

Presenter: Chris Schilstra

What is this about: Presentation. Join us to "dig for gold" in sizeable textual data sets to uncover stakeholder themes and sentiment with industry-standard accuracy rates. We will compare two thematic/sentiment model visualizations with a presentation and demo, and explore the importance of incorporating Human-Centered Design (HCD) with Artificial Intelligence (AI).

Who is this for: Federal Subject Matter Experts, analysts, and managers

Areas of interest: Data, Design, Policy, Technology

Session materials:

Unavailable at this time.

More About This Session

Join us as we "dig for gold" in sizeable textual data sets to uncover the stakeholder themes and sentiment, with industry-standard accuracy rates. Using a presentation and demo, we will share the challenges we overcame to minimize text interpretation bias by using Natural Language Processing and Machine Learning (ML). Attendees will journey through our data mining process, which maximized model accuracy when identifying themes and associated sentiment.    

We will compare two thematic/sentiment model visualizations to emphasize the criticality of Human-Centered Design in big data interpretation while highlighting interactive filters to maximize targeted views of subsets of data. 

9:45 - 10:00  |  BREAK





10:45 - 11:00  |  BREAK

10:00 - 10:45  |  Catch Me if You Can – How to Fight Fraud, Waste and Abuse using Machine Learning AND Machine TEACHING (by human)

Presenter: Cupid Chan

What is this about: Plenary Session. Machine Learning (ML) is often the focus of an Artificial Intelligence (AI) discussion, but Machine TEACHING is just as important. This session will intersect a technical conversation with a real business context: Fraud, Waste, and Abuse.

Who is this for: business users, data scientists, and designers

Areas of interest: this is the morning plenary session, and we encourage everyone to attend.

Session materials:

Recording - November 12, 2020

Slides: WUD_Chan.pdf


Read More About This Session

When talking about Artificial Intelligence (AI) in the past few years, we have focused a lot on Machine Learning (ML), but much like in our school system, learning is only half of the equation. We are missing the other half: Machine TEACHING, which is where the human-centered portion comes in. This session will reveal the findings and intersect a technical discussion with a real business context: Fraud, Waste, and Abuse (FWA).  

For a business to thrive, cutting expenses is just as necessary as increasing income. In industries like Insurance, Finance, and Healthcare, FWA is one of the top costs, costing companies billions of dollars. Traditionally, a group of experts and investigators handle this time-consuming and labor-intensive process, but the rise of AI can assist humans in identifying and preventing FWA.  

This session will show not only “why” we should use AI to fight FWA, but also “how” to achieve it practically, in three sections:

  • Supervised Learning: We will explore how basic algorithms like decision trees, logistic regressions, and support vector machines can help fight FWA. 
  • Unsupervised Learning: We will see how techniques like clustering can complement the supervised learning approach.  
  • Leveraging Generative Adversarial Networks (GAN): We will investigate the use of GAN (used in image generation) and see how this more sophisticated model can help fight FWA.  
  • The session's demonstrations utilize an open data set that is available online. We will also include the latest research in this area to strengthen the cases presented. 
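To make the supervised-learning idea concrete, here is a minimal, hypothetical sketch (not the session's demo or its open data set) that trains a logistic regression to flag synthetic fraudulent claims using two made-up features:

```python
# Hypothetical sketch: supervised learning for fraud detection on synthetic
# claims data. The features, labels, and model choice are illustrative
# assumptions, not the session's actual demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Two illustrative features: claim amount and claims filed in the past year.
amount = rng.normal(500, 150, n)
frequency = rng.poisson(2, n).astype(float)
# Synthetic label: larger amounts and higher frequencies are more likely
# to be flagged as fraudulent in this toy data-generating process.
logit = 0.004 * (amount - 500) + 0.6 * (frequency - 2)
fraud = (rng.random(n) < 1 / (1 + np.exp(-logit - 1.5))).astype(int)

X = np.column_stack([amount, frequency])
X_train, X_test, y_train, y_test = train_test_split(X, fraud, random_state=0)

# Fit a simple supervised classifier and check held-out accuracy.
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The same train/score pattern applies to the decision trees and support vector machines the session mentions; an unsupervised pass (e.g., clustering claims and inspecting outlying clusters) would complement this by surfacing suspicious cases without labels.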

The recording of this session and a copy of the presentation slides are posted above.





11:00 - 11:30  |  How Humans Make AI Work

Presenters: Ian Lowrie and Stephanie Warren

What is this about: Presentation. “Artificially intelligent” systems rely on complex combinations of humans and machines to produce the desired user experiences, posing challenges for service design and ultimately affecting overall user trust and user experience. This session will explore systems like chatbots and provide practical guidance for UX professionals working with, or curious about, Artificial Intelligence (AI).

Who is this for: designers, developers, and product owners

Areas of interest: Data, Design, Product, Technology

Session materials:

Recording - November 12, 2020

Slides: WUD_Warren_Lowrie.pdf


11:30 - 12:00  |  BREAK
Read More About This Session

Artificial intelligence (AI) seems to promise wide-ranging automation of many familiar civic technology components by providing just-in-time help to website visitors, performing audits and calculations, and personalizing services to specific users. However, in practice, these “artificially intelligent” systems generally rely on complex combinations of humans and machines to produce user experiences. This requirement poses unique challenges for service design and system transparency, with profound consequences for overall user trust and user experience.  

In this presentation, we take a sometimes-speculative look at how we have been thinking about “heteromated” systems, such as chatbots and measure calculation in Healthcare Quality Reporting and beyond, providing both a theoretical overview and some practical guidance for UX professionals working with artificially intelligent systems.


The recording of this session and a copy of the presentation slides are posted above.






12:00 - 1:15  |  Capabilities and Challenges for Machine Learning focused on Preserving Privacy and CMS Healthcare Goals

Presenters: Keith McFarland (moderator) and panelists Combiz Abdolrahimi, Steve Geller, Harlan Krumholz, MD, SM, Darryl Marshall, and Bin Shao, Ph.D.

What is this about: Panel Discussion. Machine Learning (ML) can support patient care improvement while managing costs, but there are risks involved. Join us for a panel discussion on how an ML approach can be implemented without compromising reliability, trustworthiness, and safety. The panel of professionals will share their knowledge in areas including Human-Centered Design (HCD), Health Privacy, Data, and more.

Who is this for: CMS Leadership, program leaders, technical team members, CMS contractors, quality measure specialists, and healthcare professionals

Areas of interest: this is a panel discussion, and we encourage everyone to attend.

Session materials:

Recording - November 12, 2020


Read More About This Session

Imagine a world where Medicaid and Medicare beneficiaries are empowered hand-in-hand with providers focused on improving their care while also managing costs. How can we incentivize clinicians and beneficiaries to participate proactively in the CMS CCSQ mission? 

Machine Learning (ML) can help drive this vision. ML presents all-new capability options for CMS, but it's not a panacea. There are challenges and opportunities to consider when moving from predominantly post-care quality measures and traditional analytics to a proactive ML model-driven approach. Humans are critical to the overall performance of the recommender (ML Models) systems. 

We will discuss the challenges and capabilities of four main topics: 

  • How can we train ML models to help drive the CMS CCSQ mission focus: "Steadfast focus on improving outcomes, beneficiaries' experience of care, and population health, while also aiming to reduce healthcare costs through improvement." For example, AI/ML recommender systems might include health goals, treatment outliers, fraud, waste, abuse, treatment trends, and anomaly detection. 
  • Where is the CMS data? Discuss privacy concerns of training data such as using federated learning. 
  • What data/feature engineering challenges are there inherently with Medicaid and Medicare data? 
  • Discuss which humans (actors) should participate in ML to ensure safety, reliability, and trustworthiness. How can beneficiaries remain in control of their participation and privacy, and continually improve the trustworthiness of AI/ML?   

Conclusion: Human involvement and participation are essential to making ML trustworthy, unbiased, reliable, fair, and safe.

A recording of this session is posted above. Please note there was no presentation deck for this session.






Intermission Activities

Watch Satisfy the Cat to learn more about HCD.

Have fun with AI with Akinator.

1:15 - 2:00  |  BREAK


2:45 - 3:00  |  BREAK

2:00 - 2:45  |  Federated Learning to Collect Mobile Patient-Reported Outcomes

Presenters: Dr. Rachele Hendricks-Sturrup and Dr. Sara Jordan  

What is this about: Plenary Session. Health data requires unique privacy and governance protections, and patient-reported outcomes measures (PROs/PROMs) data is no exception. We will discuss what it takes to ensure patient privacy in federated learning architectures.

Who is this for: a general audience interested in learning how to uphold privacy in federated learning architectures used to track and monitor patients’ symptoms, preferences, complaints, and/or experiences following a clinical intervention

Areas of interest: this is our afternoon plenary session, and everyone is encouraged to attend.

Session materials:

Recording - November 12, 2020

Slides: WUD_Jordan_HendricksSturrup.pdf


Read More About This Session

Health data requires unique privacy and governance protections. Certain types of health data warrant specific protections based on how and from whom the data is collected. Patient-reported outcomes measures (PROs/PROMs) data, for example, require specific protections, particularly when the data is used in or informed by machine learning regimes. A patient/human-centered, federated learning architecture is appropriate for ensuring the privacy of users’ data. However, using a federated learning approach to provide privacy without attention to private data management dimensions may compromise users’ data unexpectedly.

Users of machine learning to collect and manage PROs/PROMs should ensure that:   

  • Choices about machine learning models do not open users to attack or undue influence, and thus do not open users to liability for the interpretation of false responses; 
  • Machine learning (ML) models are not compromised, with valuable machine learning investment lost to competitors; and 
  • ML models are tested and validated to ensure the quality of unstructured PROM data, rather than influencing or skewing PROs concerning safety, symptoms, and other vital outcomes.
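As an illustrative sketch only (assuming synthetic data and a plain linear model, not the presenters' architecture), federated averaging keeps raw patient-reported responses on each local site and shares only model parameters with a central server:

```python
# Minimal federated averaging (FedAvg) sketch in NumPy. Each "site" trains
# a linear model on its own private data; only the model weights (never the
# raw responses) are sent for averaging. Data and model are illustrative.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])  # hidden relationship the sites jointly learn

def local_data(n=200):
    # Each site holds its own private (X, y) sample.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_step(w, X, y, lr=0.1, epochs=20):
    # Gradient descent on local data only; raw data never leaves the site.
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

sites = [local_data() for _ in range(5)]
w_global = np.zeros(2)
for _ in range(10):
    # Each site refines the shared model; the server averages the updates.
    updates = [local_step(w_global, X, y) for X, y in sites]
    w_global = np.mean(updates, axis=0)

print(w_global)  # should approach [2.0, -1.0]
```

Note that sharing weights alone does not guarantee privacy, which is the session's point: without attention to the other data-management dimensions above (e.g., model inversion or membership attacks on the shared updates), users' data can still be compromised.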

The recording of this session and a copy of the presentation slides are posted above.




3:45 - 4:00  |   Closing Remarks 

3:00 - 3:45  |  Using Human-Centered Machine-Learning (HCML) to Improve Data Quality & Data Governance Projects

Presenter: Edward F. O'Connor

What is this about: Presentation. Are you interested in understanding the components of a real-world, complex Machine Learning (ML) project? Join us as we walk through the process of incorporating Human-Centered Design (HCD) techniques into an ML project.

Who is this for: general audience interested in ML, business owners of data quality or governance efforts, engineers/developers, and data scientists 

Areas of interest: Data, Design, Product, Strategy, Technology

Session materials:

Recording - November 12, 2020

Slides: WUD_OConner.pdf


More About This Session

This session will serve as a practical walk-through of combining techniques from Human-Centered Design (HCD) into Machine Learning (ML) projects to create usable, trustable, and accurate systems to support real-world data quality and data governance efforts. 

The presentation will connect the dots between the: 

  • logical flow of an example system, 
  • specific open-source technologies and sub-systems, and 
  • specific recommendations to get the right people involved in each step of the process. 

 

The presentation is targeted at cross-functional teams and provides an example implementation approach and common pitfalls seen along the way. The presentation will utilize a case study focused on improving data quality pipelines and supporting data governance – but the information applies to any HCML project.  

The audience will leave with: 

  • A solid understanding of the component parts of a real-world and complex ML project – along with the typical flow through those components. 
  • Specific technology examples used in each component area, and an overview of what typically goes into ML architectures that utilize a modular and open-source approach as a precondition. 
  • Information on exactly how and where to involve users, whether building or maintaining an analogous ML system.

The recording of this session and a copy of the presentation slides are posted above.


 
 









Contact 

If you have any questions about World Usability Day, or would like to learn more about the HCD CoE, please contact us.

For the HCQIS Community:

Visit our HCD Confluence Site  -or-
our HCQIS Slack channel #hcd-share 

For all other visitors, please feel free to email us at: hcd@hcqis.org


HCD Center of Excellence 

CCSQ’s World Usability Day is planned by the HCD Center of Excellence. The HCD CoE is an organization that impacts the way CCSQ delivers policy, products, and services to its customers. Through the provision of education, support, and resources, we promote the continued implementation and use of HCD best practices.