Session Materials
Session recordings and presentation slides are posted below for most of the sessions, in case you were not able to attend a session or would like to watch it again. Note: speaker presentation slides are provided as Adobe Acrobat PDF documents and are not yet 508 compliant. View the Collaborative Note Taking Mural Board, and even listen to our trust-themed playlist while you read through the presentations.
A Gold Mining Adventure – Using Natural Language Processing and Machine Learning to Find Gold in Unstructured Data
Presenter: Chris Schilstra
Presentation: Join us to "dig for gold" in sizeable textual data sets to uncover stakeholder themes and sentiment, with industry-standard accuracy rates. We will compare two thematic/sentiment model visualizations with a presentation and demo and explore the importance of incorporating Human-Centered Design (HCD) with Artificial Intelligence (AI).
Areas of interest: data, design, policy, and technology
Session materials: Unavailable at this time.

Keynote Presentation: Navigating the Complexity of Trust
Presenter: Carol J. Smith
Keynote Presentation: Carol J. Smith from Carnegie Mellon University will explore trust and how UX practitioners can define and measure it.
Areas of interest: data, design, leadership, product, strategy, and technology
Session Materials:
Recording: unavailable at this time; please contact hcd@cms.hhs.gov to request
Slides: WUD2021_Smith.pdf
Contact
If you want to learn more about the HCD CoE, please contact us today.
For the QualityNet Community: visit our HCD Confluence Site.
For all other visitors: please feel free to email us at hcd@hcqis.org.
Customer Engagements Using Human-Centered Design
Presenter: Morgan Taylor
Presentation: Join us to learn from the Customer-Focused Research Group, who will share an overview of their work across CMS and how it informs policymaking.
Areas of interest: design, leadership, policy, and strategy
Session Materials:
Recording: unavailable at this time; please contact hcd@cms.hhs.gov to request
Slides: WUD2021_Taylor
When talking about Artificial Intelligence (AI) in the past few years, we have focused a lot on Machine Learning (ML), but much like in our school system, learning is only half of the equation. We are missing the other half: Machine TEACHING, which is where the human-centered portion comes in. This session will present the findings and ground a technical discussion in a real business context: Fraud, Waste, and Abuse (FWA). For a business to thrive, cutting expenses is just as necessary as increasing income. In industries like insurance, finance, and healthcare, FWA is among the largest expenses, costing companies billions of dollars. Traditionally, a group of experts and investigators handles this time-consuming and labor-intensive process, but the rise of AI can assist humans in identifying and preventing FWA. This session will show not only "why" we should use AI to fight against FWA but also "how" to achieve it practically in three significant sections:
The recording of this session and a copy of the presentation slides are posted above.
The Customer-Focused Research Group (CFRG) started their Human-Centered Design (HCD) work in 2017, leading cross-agency customer engagements that inform policymaking by understanding the customer experience, uncovering burden, and identifying opportunities for improvement. When the team visits onsite, customers often call them the "friendly feds" because the team truly listens to their perspective. In addition, the team co-creates with customers to ensure insights and illustrations accurately reflect their stories. Attendees will learn:
Accessible Insights: Democratizing User Research with Jira and Confluence
Presenters: Lesley Humphreys and Fan Huang
Case Study: This case study will show how we created the repository, integrated personas, and intend for the repository to be an integral part of the growth of our HQR system and program.
Areas of interest: design, leadership, policy, product, and strategy
Session Materials:
Recording: unavailable at this time; please contact hcd@cms.hhs.gov to request
Slides: WUD2021_Huang_Humphreys.pdf
Morning Plenary, In What We Trust?
Presenter: Cupid Chan
Morning Plenary Session: Cupid Chan with Pistevo Decision will explore trust, ethics, and integrity as he reviews trends and challenges with Artificial Intelligence (AI) and the latest in Federated Learning.
Areas of interest: compliance, data, design, leadership, policy, product, strategy, and technology
Session Materials:
Recording: unavailable at this time; please contact hcd@cms.hhs.gov to request
Slides: WUD2021_Chan
Artificial intelligence (AI) seems to promise wide-ranging automation of many familiar civic technology components by providing just-in-time help to website visitors, performing audits and calculations, and personalizing services to specific users. However, in practice, these "artificially intelligent" systems generally rely on complex combinations of humans and machines to produce user experiences. This requirement poses unique challenges for service design and system transparency, with profound consequences for overall user trust and user experience. In this presentation, we take a sometimes-speculative look at how we have been thinking about "heteromated" systems such as chatbots and measure calculation on Healthcare Quality Reporting and beyond, providing both a theoretical overview and some practical guidance for UX professionals working with artificially intelligent systems. The recording of this session and a copy of the presentation slides are posted above.
Even though AI has advanced a lot in the past six years, the legacy AI approach has a problem: data must be consolidated in one location for the machine learning model to be trained. That means data are exposed, and the data owners lose their data privacy. Even worse, unlike tangible objects, the exposed data can be replicated potentially hundreds of thousands of times with just a click of a mouse. That makes recovery almost impossible, and hence people now treat data privacy more seriously. The result is insufficient data, which creates another dilemma: it hinders the growth and maturity of many AI models, as they rely on data. Where should data owners draw the line to determine what they can and cannot trust? In the security discipline, there is a methodology called Zero Trust. Can this be used in AI so that we trust nobody to hold our data but can still help advance AI? There is a new branch of AI called Federated Learning. Instead of consolidating data to train the model, the model is pushed out to where the data is located for training. The individual results are then sent back for aggregation to form the final useful model. Sounds very promising, right? But can this be THE solution to solve the trust issue? Attendees will:
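The federated learning flow described above — training where the data lives, with only model parameters sent back for aggregation — can be sketched in a few lines. This is a minimal illustration, not material from the session: the linear model, client data, learning rate, and round count are all made up for demonstration.

```python
import numpy as np

def local_update(w, X, y, lr=0.1):
    # One gradient-descent step on the client's private data (MSE loss).
    # Only the updated parameters leave the client, never X or y.
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_average(updates, sizes):
    # Server aggregates client models weighted by local dataset size.
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three clients, each holding private data the server never sees.
clients = []
for n in (40, 60, 100):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.05, size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(200):  # communication rounds
    updates = [local_update(w, X, y) for X, y in clients]
    w = federated_average(updates, [len(y) for _, y in clients])

print(w)  # converges close to true_w without pooling any raw data
```

The privacy claim here is only architectural (raw data stays local); as the session abstract hints, parameter updates themselves can still leak information, which is why federated learning alone is not "THE solution."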
Eroding and Rebuilding Trust: What We Can Learn from Dark Patterns and Selfish Design
Presenter: Rob Fay
Panel Discussion: Rob Fay with Tantus Technologies will moderate a panel discussion on examples of bad design and how empathy-driven design can build trust in government products and services.
Areas of interest: compliance, data, design, leadership, policy, product, strategy, and technology
Session Materials:
Recording: unavailable at this time; please contact hcd@cms.hhs.gov to request
Slides: WUD2021_Fay_Panel.pdf
Imagine a world where Medicaid and Medicare beneficiaries are empowered hand-in-hand with providers focused on improving their care while also managing costs. How can we incentivize clinicians and beneficiaries to participate proactively in the CMS CCSQ mission? Machine Learning (ML) can help drive this vision. ML presents all-new capability options for CMS, but it's not a panacea. There are challenges and opportunities to consider when moving from predominantly post-care quality measures and traditional analytics to a proactive, ML model-driven approach. Humans are critical to the overall performance of recommender systems (ML models). We will discuss the challenges and capabilities of four main topics:
Conclusion: Human involvement and participation are essential to keeping ML trustworthy, unbiased, reliable, fair, and safe. A recording of this session is posted above. Please note there was not a presentation deck for this session.
According to Pew Research, public trust in the government nears record lows, and the federal government has recognized the need to improve the way it serves its citizens. One response has been the publication of OMB Circular A-11 Section 280, which guides how all agencies should prioritize managing the customer experience and improving service delivery. Dark patterns are designs (digital or non-digital) that erode trust by intentionally or unintentionally tricking people into doing something they don't intend, want, or need. These mistakes usually cost people money and always cost them time. The purpose of this panel is not to discuss ways that the government has failed the public. Instead, the goal is to focus on examples of bad design most often seen in the commercial space and how we might respond to these examples to rebuild the public's trust in government solutions. Attendees will:
Afternoon Plenary, Losing Patients: Trust, Compliance, and the Patient Journey
Presenters: Hunter Whitney and Mehlika Toy, Ph.D.
Afternoon Plenary Session: The talk will present a case study focusing on hepatitis B patients and an effort involving researchers from Stanford University and others to determine patient-centered tools to better understand non-compliance from the patient perspective and improve the outcomes of the disease.
Areas of interest: data, design, leadership, policy, product, and strategy
Session Materials:
Recording: unavailable at this time; please contact hcd@cms.hhs.gov to request
Slides: WUD2021_Whitney_Toy
Intermission Activities
Health data requires unique privacy and governance protections. Certain types of health data warrant specific protections based on how and from whom the data is collected. Patient-reported outcomes measures (PROs/PROMs) data, for example, require specific protections, particularly when it is used in or informed by machine learning regimes. A patient/human-centered, federated learning architecture is appropriate for ensuring the privacy of users’ data. However, using a federated learning approach to provide privacy without attention to private data management dimensions may compromise users’ data unexpectedly. Users of machine learning to collect and manage PROs/PROMs should ensure that:
The recording of this session and a copy of the presentation slides are posted above.
The talk will present a case study focusing on hepatitis B patients and an effort involving researchers from Stanford University and others to determine patient-centered tools to better understand non-compliance from the patient perspective and improve the outcomes of the disease. In addition, the project is looking for better ways to collect, manage, and communicate public health data among providers, caregivers, and patients with hepatitis B. This evidence highlights the need to improve patients' disease management and adherence to their biannual monitoring and hepatocellular carcinoma (HCC) surveillance. Progress of this nature will lead to identifying individuals eligible for treatment and to early detection of HCC. In addition, appropriate tools, such as those presented in this session, can have a meaningful impact on patient engagement and empowerment, making adherence to care plans and better outcomes more promising. Attendees will:
Your Chart is a Bigot: Ethical Data Visualization in Public Health
Presenter: Edward O'Connor
Presentation: This session will include a practical review of data visualization in a public health planning, decision-making, and policy-making context while focusing on fairness, equity, and measuring the efficacy of programs over time.
Areas of interest: data, design, leadership, policy, product, and strategy
Session Materials:
Recording: unavailable at this time; please contact hcd@cms.hhs.gov to request
Slides: WUD2021_O'Connor
This session will include a practical walk-through of combining techniques from Human-Centered Design (HCD) with Machine Learning (ML) projects to create usable, trustworthy, and accurate systems that support real-world data quality and data governance efforts. The presentation will connect the dots between the:
The presentation is targeted at cross-functional teams and provides an example implementation approach and common pitfalls seen along the way. The presentation will utilize a case study focused on improving data quality pipelines and supporting data governance – but the information applies to any HCML project. The audience will leave with:
The recording of this session and a copy of the presentation slides are posted above.
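One common pattern in human-centered ML data-quality work of the kind this session describes is routing low-confidence model judgments to a human reviewer rather than acting on them automatically. The sketch below is illustrative only — the threshold, record fields, and class names are invented, not taken from the presentation.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    record_id: str
    value: str

@dataclass
class QualityTriage:
    """Route ML data-quality verdicts: auto-accept confident ones,
    queue uncertain ones for human review (threshold is illustrative)."""
    threshold: float = 0.9
    accepted: list = field(default_factory=list)
    review_queue: list = field(default_factory=list)

    def route(self, record: Record, valid_prob: float) -> None:
        if valid_prob >= self.threshold:
            self.accepted.append(record)      # model is confident: pass through
        else:
            self.review_queue.append(record)  # uncertain: a human decides

triage = QualityTriage()
triage.route(Record("a1", "2021-11-11"), 0.97)  # confident -> accepted
triage.route(Record("a2", "11/11/21??"), 0.42)  # uncertain -> human review
print(len(triage.accepted), len(triage.review_queue))  # prints "1 1"
```

Keeping a human in the loop for the uncertain cases is one concrete way the usability and trust goals above translate into pipeline design: the model handles volume, and people handle ambiguity.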
Contact
If you have any questions about World Usability Day, or would like to learn more about the HCD CoE, please contact us today.
For the HCQIS Community:
Visit our HCD Confluence Site -or-
our HCQIS Slack channel #hcd-share
Content Strategy: Building Trust Through Thoughtful Communication
Presenter: Julie Stromberg
Case Study: Learn how content strategy and tools contributed to building trust within a digital experience for the Center for Medicaid and CHIP Services (CMCS).
Areas of interest: design, product, and strategy
Session Materials:
Recording: unavailable at this time; please contact hcd@cms.hhs.gov to request
Slides: WUD2021_Stromberg.pdf