SCHEDULE
With an exciting line-up of engaging topics and speakers, you will not want to miss anything! Do you have a busy day or an upcoming deadline? We get it. That is why the Zoom event uses an open-house format, with sessions beginning on the hour and 15-minute breaks in between. You can even sit down with us during lunchtime for an informative panel discussion.



Click the time to see what session is happening:

8:30  |  9:00  |  10:00  |  11:00  |  12:00  |  1:30  |  2:00  |  3:00


Session 1

8:30 - 9:00  |  Welcome from CMS Leadership



Session 2

9:00 - 9:45  |  A Gold Mining Adventure – Using Natural Language Processing and Machine Learning to Find Gold in Unstructured Data

Presenter: Chris Schilstra  

Presentation: Join us as we "dig for gold" in sizeable textual data sets to uncover stakeholder themes and sentiment, with industry-standard accuracy rates. Through a presentation and demo, we will compare two thematic/sentiment model visualizations and explore the importance of incorporating Human-Centered Design (HCD) with Artificial Intelligence (AI).

Areas of interest: Data, Design, Policy, Technology

Recording from November 12, 2020


More About This Session

Join us as we "dig for gold" in sizeable textual data sets to uncover the stakeholder themes and sentiment, with industry-standard accuracy rates. Using a presentation and demo, we will share the challenges we overcame to minimize text interpretation bias by using Natural Language Processing and Machine Learning (ML). Attendees will journey through our data mining process, which maximized model accuracy when identifying themes and associated sentiment.    

We will compare two thematic/sentiment model visualizations to emphasize the criticality of Human-Centered Design in big data interpretation while highlighting interactive filters to maximize targeted views of subsets of data. 





9:45 - 10:00  |  BREAK



Session 3

10:00 - 10:45  |  Catch Me if You Can – How to Fight Fraud, Waste, and Abuse Using Machine Learning AND Machine TEACHING (by Humans)

Presenter: Cupid Chan 

Plenary Session: Machine Learning (ML) is often the focus of an Artificial Intelligence (AI) discussion, but Machine TEACHING is just as important. This session will ground a technical conversation in a real business context: Fraud, Waste, and Abuse.

Areas of interest: This is the morning plenary session, and we encourage everyone to attend.


More About This Session

When talking about Artificial Intelligence (AI) in the past few years, we have focused a lot on Machine Learning (ML), but much like in our school system, learning is only half of the equation. We are missing the other half: Machine TEACHING, which is where the human-centered portion comes in. This session will share our findings and ground a technical discussion in a real business context: Fraud, Waste, and Abuse (FWA).

For a business to thrive, cutting expenses is just as necessary as increasing income. In industries like Insurance, Finance, and Healthcare, FWA is one of the largest expenses, costing companies billions of dollars. Traditionally, a group of experts and investigators handles this time-consuming and labor-intensive process, but the rise of AI can assist humans in identifying and preventing FWA.

This session will show not only “why” we should use AI to fight FWA but also “how” to achieve it practically, in three significant sections:

  • Supervised Learning: We will explore how basic algorithms like decision trees, logistic regressions, and support vector machines can help fight FWA. 
  • Unsupervised Learning: We will see how techniques like clustering can complement the supervised learning approach.  
  • Leveraging Generative Adversarial Networks (GAN): We will investigate the use of GAN (used in image generation) and see how this more sophisticated model can help fight FWA.  

The session's demonstrations use an open data set that is available online. We will also include the latest research in this area to strengthen the cases presented.





10:45 - 11:00  |  BREAK



Session 4

11:00 - 11:30  |  How Humans Make AI Work

Presenters: Ian Lowrie and Stephanie Warren 

Presentation: “Artificially intelligent” systems rely on complex combinations of humans and machines to produce the desired user experiences, posing challenges for service design and ultimately affecting overall user trust and user experience. This session will explore systems like chatbots and provide practical guidance for UX professionals working with or curious about Artificial Intelligence (AI). 

Areas of interest: Data, Design, Product, Technology


More About This Session

Artificial intelligence (AI) seems to promise wide-ranging automation of many familiar civic technology components by providing just-in-time help to website visitors, performing audits and calculations, and personalizing services to specific users. However, in practice, these “artificially intelligent” systems generally rely on complex combinations of humans and machines to produce user experiences. This requirement poses unique challenges for service design and system transparency, with profound consequences for overall user trust and user experience.  

In this presentation, we take a sometimes-speculative look at how we have been thinking about “heteromated” systems such as chatbots and measure calculation for Healthcare Quality Reporting and beyond, providing both a theoretical overview and practical guidance for UX professionals working with artificially intelligent systems.






11:30 - 12:00  |  BREAK



Session 5

12:00 - 1:15  |  Capabilities and Challenges for Machine Learning Focused on Preserving Privacy and CMS Healthcare Goals

Presenters: Keith McFarland (moderator) and panelists Combiz Abdolrahimi; Steve Geller; Harlan Krumholz, MD; Darryl Marshall; and Bin Shao, Ph.D.

Panel Discussion: Machine Learning (ML) can support patient care improvement while managing costs, but there are risks involved. Join us for a panel discussion on how an ML approach can be implemented without compromising reliability, trustworthiness, and safety. The panel of professionals will share their knowledge in areas including Human-Centered Design (HCD), Health Privacy, Data, and more.

Areas of interest: This is a panel discussion, and we encourage everyone to attend.


More About This Session

Imagine a world where Medicaid and Medicare beneficiaries are empowered to work hand-in-hand with providers focused on improving their care while also managing costs. How can we incentivize clinicians and beneficiaries to participate proactively in the CMS CCSQ mission?

Machine Learning (ML) can help drive this vision. ML presents all-new capability options for CMS, but it's not a panacea. There are challenges and opportunities to consider when moving from predominantly post-care quality measures and traditional analytics to a proactive, ML model-driven approach. Humans are critical to the overall performance of recommender systems (ML models).

We will discuss the challenges and capabilities across four main topics:

  • How can we train ML models to help drive the CMS CCSQ mission focus: "Steadfast focus on improving outcomes, beneficiaries' experience of care, and population health, while also aiming to reduce healthcare costs through improvement." For example, AML recommender systems might address health goals, treatment outliers, fraud, waste, abuse, treatment trends, and anomaly detection. 
  • Where is the CMS data? We will discuss privacy concerns around training data and approaches such as federated learning. 
  • What data and feature engineering challenges are inherent to Medicaid and Medicare data? 
  • Which humans (actors) should participate in ML to ensure safety, reliability, and trustworthiness? How can beneficiaries remain in control of their participation and privacy while continually improving the trustworthiness of AML? 

Conclusion: Human involvement and participation are essential to unbiased, trustworthy, reliable, fair, and safe ML.





1:15 - 1:30  |  BREAK



Intermission Activities

1:30 - 2:00  |  Intermission Activities

An afternoon activity session with networking, fun with AI, and more!


Session 6

2:00 - 2:45  |  Federated Learning to Collect Mobile Patient-Reported Outcomes

Presenters: Dr. Rachele Hendricks-Sturrup and Dr. Sara Jordan  

Plenary Session: Health data requires unique privacy and governance protections, and patient-reported outcomes measures (PROs/PROMs) data is no exception.  We will discuss what it takes to ensure patient privacy in federated learning architectures. 

Areas of interest: This is our afternoon plenary session, and we encourage everyone to attend.


More About This Session

Health data requires unique privacy and governance protections. Certain types of health data warrant specific protections based on how and from whom the data is collected. Patient-reported outcomes measures (PROs/PROMs) data, for example, requires particular protections, especially when it is used in or informed by machine learning regimes. A patient/human-centered, federated learning architecture is appropriate for ensuring the privacy of users’ data. However, using a federated learning approach to provide privacy without attention to private data management dimensions may compromise users’ data unexpectedly.

Those using machine learning to collect and manage PROs/PROMs should ensure that:

  • Choices about machine learning models do not open users to attack or undue influence, and thus do not expose them to liability for the interpretation of false responses; 
  • Machine learning (ML) models are not compromised, nor is valuable machine learning spending lost to competitors; and 
  • ML models are tested and validated to ensure the quality of unstructured PROM data rather than influencing or skewing PROs concerning safety, symptoms, and other vital outcomes. 





2:45 - 3:00  |  BREAK



Session 7


3:00 - 3:45  |  Using Human-Centered Machine-Learning (HCML) to Improve Data Quality & Data Governance Projects

Presenter: Edward F. O'Connor 

Presentation: Are you interested in understanding the components of a real-world and complex Machine Learning (ML) project? Join us as we walk through the implementation process of incorporating Human-Centered Design (HCD) techniques into an ML project.

Areas of interest: Data, Design, Product, Strategy, Technology


More About This Session

This session will serve as a practical walk-through of incorporating techniques from Human-Centered Design (HCD) into Machine Learning (ML) projects to create usable, trustworthy, and accurate systems that support real-world data quality and data governance efforts.

The presentation will connect the dots between: 

  • the logical flow of an example system, 
  • specific open-source technologies and sub-systems, and 
  • specific recommendations to get the right people involved in each step of the process. 

The presentation is targeted at cross-functional teams and provides an example implementation approach and highlights common pitfalls seen along the way. The presentation will utilize a case study focused on improving data quality pipelines and supporting data governance – but the information applies to any HCML project.

The audience will leave with: 

  • A solid understanding of the component parts of a real-world and complex ML project – along with the typical flow through those components. 
  • Specific technology examples used in each component area, and an overview of what typically goes into ML architectures that utilize a modular and open-source approach as a precondition. 
  • Information on exactly how and where to involve users, whether building or maintaining an analogous ML system. 


 




3:45 - 4:00  |  Closing Remarks






Contact 

If you have any questions about World Usability Day or would like to learn more about the HCD CoE, please contact us today.

For the HCQIS Community:

Visit our HCD Confluence Site  -or-
our HCQIS Slack channel #hcd-share 

For all other visitors, please feel free to email us at: hcd@hcqis.org