

Schedule





With an exciting line-up of engaging speakers and diverse topics about trust, ethics, and integrity, you will not want to miss anything! Do you have a busy day, or an upcoming deadline? We get it. That is why the Zoom event is an open-house format with sessions throughout the day and breaks in between sessions. You can even sit down with us during lunchtime for an informative panel discussion. 



8:30 – 8:45 | Welcome from CMS Leadership 

Presenter: Roni Garland-Wynegar 




8:45 – 9:30 | Keynote Presentation: Navigating the Complexity of Trust 

Presenter: Carol J. Smith  

Keynote Presentation 

Carol J. Smith from Carnegie Mellon University will explore trust and how UX practitioners can define and measure it. 

Areas of interest: data, design, leadership, product, strategy, and technology 

More About This Session

Trust is complex, transient, and personal. Context, knowledge, awareness, privacy, respect, and other fluid considerations affect trust. How can we examine this complexity in a way that supports the work of making digital experiences? What research supports this work and how can we use practices of responsible development to make systems that earn appropriate levels of trust? What is an appropriate level of trust for complex systems? This talk will examine trust and how UX practitioners can define and measure it. 

Attendees will: 

  • Understand trust and methods to explore it in their context, 
  • Explore a framework to consider transitions in trust over the course of an experience, and 
  • Have techniques to support appropriate trust in design. 





9:30 – 9:45 | Break 



9:45 – 10:15 | Customer Engagements Using Human-Centered Design 

Presenters: Suzanne Martin-Devroye and Morgan Taylor 

Presentation 

Join us to learn from the Customer-Focused Research Group who will share an overview of their work across CMS and how it informs policymaking. 

Areas of interest: design, leadership, policy, and strategy 

More About This Session


The Customer-Focused Research Group (CFRG) began its Human-Centered Design (HCD) work in 2017, leading cross-agency customer engagements that inform policymaking by understanding the customer experience, uncovering burden, and identifying opportunities for improvement. During onsite visits, customers often call the team the 'friendly feds' because they truly listen to customers' perspectives. In addition, the team co-creates with customers to ensure insights and illustrations accurately reflect their stories. 

Attendees will learn: 

  • Various ways to illustrate a customer's story, 
  • How co-creation and design activities contribute to building trust with customers, and 
  • How HCD informs policymaking and identifies areas for improvement. 





10:15 – 10:30 | Break 



10:30 – 11:00 | Accessible Insights: Democratizing User Research with Jira and Confluence  

Presenters: Lesley Humphreys and Fan Huang 

Case Study 

This case study will show how we created the repository, integrated personas, and intend for the repository to be an integral part of the growth of our HQR system and program. 

Areas of interest: design, leadership, policy, product, strategy 

More About This Session

Belief in the value of qualitative user research and personas is a core principle of human-centered design (HCD). But how often are research insights and personas used across all the disciplines that are supposed to be collaborating to create software? How often are they created and then shelved, never to be seen again by anyone outside the HCD team? Research participants trust us to make the most of their contributions to the design and development of our systems. And our user community trusts us to do the best we can to design a usable system for them. 

In the Hospital Quality Reporting Program (HQR), the HCD team has developed a research repository using Jira and Confluence that is accessible to many of our stakeholders, partners, and team members of the Application Development Organization (ADO). This case study will show how we created the repository, integrated personas, and intend for the repository to be an integral part of the growth of our HQR system and program. 

Attendees will learn: 

  • How to create a research repository with tools that are accessible to many CCSQ programs, 
  • About a governance model for the repository, and 
  • How to integrate research insights into system documentation and SAFe practices. 




11:00 – 11:15 | Break 




11:15 – 11:45 | Morning Plenary Session: In What We Trust? 

Presenter: Cupid Chan 

Morning Plenary Session 

Cupid Chan with Pistevo Decision will explore trust, ethics, and integrity as he reviews trends and challenges with Artificial Intelligence (AI) and the latest in Federated Learning.  

Areas of interest: compliance, data, design, leadership, policy, product, strategy, and technology 

More About This Session


Even though AI has advanced a lot in the past six years, the legacy AI approach has a problem: data must be consolidated in one location for the machine learning model to be trained. That means data are exposed, and the data owners lose their data privacy. Even worse, unlike tangible objects, exposed data can be replicated hundreds or thousands of times with just a click of a mouse. That makes recovery almost impossible, and hence people now treat data privacy more seriously. 

The result is insufficient data, which hinders the growth and maturity of many AI models, as they rely on data. Where should data owners draw the line to determine what they can and cannot trust? In the security discipline, there is a methodology called Zero Trust. Can this be used in AI so that we trust nobody to hold our data but can still help advance AI? 

There is a new branch of AI called Federated Learning. Instead of consolidating data to train the model, the model is pushed out to where the data is located for training. The individual results are then sent back and aggregated to form the final, useful model. Sounds very promising, right? But can this be THE solution to solve the trust issue? 
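The push-train-aggregate loop described here is commonly implemented as federated averaging (FedAvg). Below is a minimal illustrative sketch, using a toy one-feature linear model and made-up site data; none of the names or numbers reflect any real CMS system.

```python
# Federated averaging (FedAvg) sketch. Each site trains on its own data;
# only model parameters travel to the server, never the raw records.

def local_update(weights, data, lr=0.1):
    """One gradient-descent step for a one-feature linear model y = w*x + b."""
    w, b = weights
    n = len(data)
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
    return (w - lr * grad_w, b - lr * grad_b)

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average local models, weighted by data size."""
    total = sum(client_sizes)
    w = sum(cw * n for (cw, _), n in zip(client_weights, client_sizes)) / total
    b = sum(cb * n for (_, cb), n in zip(client_weights, client_sizes)) / total
    return (w, b)

# Hypothetical private datasets at two sites, both drawn from y = 2x.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0)]

global_model = (0.0, 0.0)
for _ in range(200):  # communication rounds
    local_models = [local_update(global_model, d) for d in (site_a, site_b)]
    global_model = fed_avg(local_models, [len(site_a), len(site_b)])

print(global_model)  # slope converges toward 2.0 without pooling the data
```

Each round, only the model parameters leave a site; the raw records stay put, which is the privacy property the session highlights.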

Attendees will: 

  • Understand the risks and opportunities of this technology and see In What We Trust, 
  • Learn the latest trend in AI – Federated Learning – and how this technology can provide AI model integrity, and 
  • Understand the latest trends in end-user data trust. 





11:45 – 12:00 | Break 





12:00 – 1:00 | Eroding and Rebuilding Trust: What We Can Learn from Dark Patterns and Selfish Design 

Moderator: Rob Fay 

Panel Discussion 

Rob Fay with Tantus Technologies will moderate a panel discussion on examples of bad design and how empathy-driven design can build trust in government products and services.  

Areas of interest:  compliance, data, design, leadership, policy, product, strategy, and technology  

More About This Session


According to Pew Research, public trust in the government nears record lows and the federal government has recognized the need to improve the way it serves its citizens. One response has been through the publication of OMB Circular A-11 Section 280, which guides how all agencies should prioritize managing the customer experience and improving service delivery. 

Dark patterns are designs (digital or non-digital) that erode trust by intentionally or unintentionally tricking people into doing something they don't intend, want, or need. These mistakes usually cost people money and always cost them time. The purpose of this panel is not to discuss ways that the government has failed the public. Instead, the goal is to focus on examples of bad design most often seen in the commercial space and how we might respond to these examples to rebuild the public's trust in government solutions. 

Attendees will: 

  • Hear diverse perspectives and ideas from a panel of design professionals, 
  • Have an opportunity to learn from the mistakes of others, and 
  • Understand how to leverage principles taken from lousy design to inform ideas for rebuilding trust by improving the products and services we deliver to the public.    




1:05 – 1:30 | Meet and Greet 

Social Session 

Remember the days of meeting fellow attendees at a conference to discuss ideas, socialize, make connections and contacts, and more? Join us for a brief session that will include creative introductions and breakout rooms so you can do just that. 

Areas of interest: data, design, leadership, policy, product, and strategy 


Intermission Activities

Watch Satisfy the Cat to learn more about HCD.

Have fun with AI with Akinator.




1:30 – 2:15 | Losing Patients: Trust, Compliance, and the Patient Journey 

Presenters: Hunter Whitney and Mehlika Toy, Ph.D. 

Afternoon Plenary Session 

The talk will present a case study focusing on hepatitis B patients and an effort involving researchers from Stanford University and others to determine patient-centered tools to better understand non-compliance from the patient perspective and improve the outcomes of the disease.  

Areas of interest: data, design, leadership, policy, product, and strategy 


More About This Session

The talk will present a case study focusing on hepatitis B patients and an effort involving researchers from Stanford University and others to determine patient-centered tools to better understand non-compliance from the patient perspective and improve the outcomes of the disease. In addition, the project is looking for better ways to collect, manage, and communicate public health data among providers, caregivers, and patients with hepatitis B. 

This evidence highlights the need to improve patients’ disease management and adherence to their biannual monitoring and hepatocellular carcinoma (HCC) surveillance. Progress of this nature will lead to identifying individuals eligible for treatment and to early detection of HCC. In addition, appropriate tools, such as those presented in the session, can have a meaningful impact on patient engagement and empowerment, making adherence to care plans and better outcomes more promising. 

Attendees will: 

  • Learn how human-centered design (HCD) enhances patient-provider communications and makes data more useful in a clinical setting. 
  • Understand how monitoring and managing chronic hepatitis B infections lie equally on the shoulders of the patient and the healthcare provider. 


 




2:15 – 2:30 | Break 





2:30 – 3:15 | Your Chart is a Bigot: Ethical Data Visualization in Public Health 

Presenter: Edward O’Connor 

Presentation 

This session will include a practical review of data visualization in a public health planning, decision-making, and policy-making context while focusing on fairness, equity, and measuring the efficacy of programs over time. 

Areas of interest: data, design, leadership, policy, product, and strategy 

More About This Session

This session will include a practical review of data visualization in a public health planning, decision-making, and policy-making context, focusing on fairness, equity, and measuring the efficacy of programs over time. It includes examples of nuanced, small-seeming problems in data visualization and the large problems they can create downstream. The discussion will cover specific techniques for governance, independent review, and ongoing quality improvement. 

Attendees will: 

  • Increase their ability to spot ethical issues in data visualizations and review practical examples to correct them or mitigate their harmful impacts, both at a micro level (visualization by visualization) and a macro level (project by project), 
  • Understand what to look for on their projects and how to make corrections, and 
  • Walk away (especially Data Scientists and Analysts) with new tricks and techniques for visualizing very complex datasets and machine-learning models. 





3:15 – 3:30 | Break 




3:30 – 4:00 | Content Strategy: Building Trust Through Thoughtful Communication 

Presenter: Julie Stromberg 

Case Study 

Learn how content strategy and tools contributed to building trust within a digital experience for The Center for Medicaid and CHIP Services (CMCS). 

Areas of interest: design, product, and strategy 

More About This Session

What happens when you have a Human-Centered Design (HCD) content strategist on your agile team? Great content things, of course! This presentation tells the tale of how a pair of HCD content strategists joined an existing CMCS agile team to round out the HCD capabilities and work on a new feature.  

HCD content strategy is all about ensuring that we speak with users in ways that make sense, provide the right message at the right time, and plan how to keep the conversation going after the release.  

How did we build trust within the experience? With HCD content strategy tools, including content audits, facilitated content analysis exercises, and content remediation plans, and with content design tools, including the messaging framework, UX writing, and content testing! 

Join us to learn how these tools served as a foundation for building trust with customers through thoughtful, informed communication. 

Attendees will: 

  • Understand how agile teams can benefit from the addition of an HCD content strategist and the use of content strategy and design tools, and 
  • Learn how these tools can serve as a foundation for trust with customers through thoughtful, informed communication. 

 





4:00 | Closing Remarks 





Contact 

If you have any questions about World Usability Day or would like to learn more about the HCD CoE, please contact us today.

For the HCQIS Community:

Visit our HCD Confluence Site or our HCQIS Slack channel #hcd-share 

For all other visitors, please feel free to email us at: hcd@hcqis.org