
In addition to being expensive, collecting usability metrics interferes with the goal of gathering qualitative insights to drive design decisions. As a compromise, you can measure users' ability to complete tasks. Success rates are easy to understand and represent the UX bottom line.

Numbers are powerful (even though they are often misused in user experience). They offer a simple way to communicate usability findings to a general audience. Saying, for example, that "Amazon.com complies with 72% of the e-commerce usability guidelines" is a much more specific statement than "Amazon.com has great usability, but it doesn't do everything right."

Metrics are great for assessing long-term progress on a project and for setting goals. They are an integral part of a benchmarking program and can be used to assess if the money you invested in your redesign project was well spent.

Unfortunately, there is a conflict between the need for numbers and the need for insight. Although numbers can help you communicate usability status and the need for improvements, the true purpose of a user experience practice is to set the design direction, not to generate numbers for reports and presentations. Thus, some of the best research methods for usability (and, in particular, qualitative usability testing) conflict with the demands of metrics collection.

The best usability tests involve frequent small tests, rather than a few big ones. You gain maximum insight by working with 4–5 users and asking them to think out loud during the test. As soon as users identify a problem, you fix it immediately (rather than continue testing to see how bad it is). You then test again to see if the "fix" solved the problem.

Although small tests give you ample insight into how to improve design, such tests do not generate the sufficiently tight confidence intervals that traditional metrics require. Think-aloud protocols are the best way to understand users' thinking and thus how to design for them, but the extra time it takes for users to verbalize their thoughts contaminates task-time measures. Plus, qualitative tests often involve small tweaks from one session to the next, and because of that, metrics collected in such tests are rarely measuring the same thing.

Thus, the best usability methodology is the one least suited for generating detailed numbers.

Measuring Success

One of the more common metrics used in user experience is task success or completion. This is a very simple binary metric.  When we run a study with multiple users, we usually report the success (or task-completion) rate: the percentage of users who were able to complete a task in a study.  

Like most metrics, it is fairly coarse — it says nothing about why users fail or how well they perform the tasks they did complete.

Nonetheless, success rates are easy to collect and a very telling statistic. After all, if users can't accomplish their target task, all else is irrelevant. User success is the bottom line of usability.
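As an illustration, computing a success rate from binary outcomes is simple counting. A minimal sketch in Python, with hypothetical data:

```python
# Hypothetical binary outcomes for one task: True = success, False = failure.
outcomes = [True, True, False, True, False, True, True, False, True, True]

success_rate = sum(outcomes) / len(outcomes)
print(f"Success rate: {success_rate:.0%}")  # -> Success rate: 70%
```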

Levels of Success

Success rates are easy to measure, with one major exception: How do we account for cases of partial success? If users can accomplish part of a task, but fail other parts, how should we score them?

Let's say, for example, that the users' task is to order twelve yellow roses to be delivered to their mothers on their birthday. True task success would mean just that: Mom receives a dozen roses on her birthday. If a test user leaves the site in a state where this event will occur, we can certainly score the task as a success. If the user fails to place any order, we can just as easily score the task as a failure.

But there are other possibilities as well. For example, a user might:

  • order twelve yellow tulips, twenty-four yellow roses, or some other deviant bouquet
  • fail to specify a shipping address, and thus have the flowers delivered to their own billing address
  • specify the correct address, but the wrong date
  • do everything perfectly except forget to specify a gift message to enclose with the shipment, so that mom gets the flowers but has no idea who they are from

Each of these cases constitutes some degree of failure.

If a user does not perform a task as specified, you could be strict and score it as a failure. It's certainly a simple model: Users either do everything correctly or they fail. No middle ground. Success is success, without qualification.

However, we sometimes grant partial credit for a partially successful task. It can seem unreasonable to give the same score (zero) to both users who did nothing and those who successfully completed much of the task. How to score partial success depends on the magnitude of user error.

In the flower example, we might define several levels of success:

  • complete success: the user places the order with no error, exactly as specified
  • success with one minor issue: the user places the order but omits the gift message or orders the wrong flowers
  • success with a major issue: the user places the order but enters the wrong date or delivery address  
  • failure: the user is not able to place the order

Of course, the precise levels of success will depend on the task and on your and your users' particular needs. (For example, if a survey determined that most mothers would consider it a major offense to get tulips instead of roses, you might change the rating accordingly.)

Reporting Levels of Success

To report levels of success, you simply report the percentage of users at each level. So, for example, if 35 out of 100 users completed the task with a minor issue, you would say that 35% of your users were able to complete the task with a minor issue. As with any metric, you should also report the confidence interval for that number.
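A minimal sketch of this kind of reporting in Python (the per-participant outcomes are hypothetical; note that each outcome is recorded as a word label, a point the sections below return to):

```python
from collections import Counter

# Hypothetical outcomes: one word label per participant.
outcomes = [
    "complete success", "success with minor issue", "failure",
    "success with major issue", "complete success", "success with minor issue",
]

counts = Counter(outcomes)
n = len(outcomes)
for level in ("complete success", "success with minor issue",
              "success with major issue", "failure"):
    print(f"{level}: {counts[level]}/{n} ({counts[level] / n:.0%})")
```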

[Table removed: percentage of users at each success level; the ranges represent 95% confidence intervals calculated using the Adjusted Wald method.]
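The Adjusted Wald (Agresti-Coull) interval is easy to compute: add z²/2 imaginary successes and z² imaginary trials, then apply the standard Wald formula. A sketch in Python (the 35-of-100 counts reuse the hypothetical example above):

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """Adjusted Wald (Agresti-Coull) confidence interval for a proportion;
    z = 1.96 gives a 95% interval."""
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

low, high = adjusted_wald_ci(35, 100)
print(f"35% (95% CI: {low:.0%} to {high:.0%})")  # roughly 26% to 45%
```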

Note that this method simply amounts to using multiple metrics for success instead of just one — each level of success is a separate metric.

You can also use other metrics such as number of errors; for example, you could define different error types (e.g., wrong flowers, wrong shipping address) and track the number of people who made each of these errors. Doing so may actually give you a more nuanced picture than using levels of success because you might be able to say precisely which of the different errors is more common and, thus, focus on fixing that one.
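For instance (hypothetical data and error labels), tracking error types could look like this:

```python
from collections import Counter

# Hypothetical per-participant error observations from the flower task.
errors_per_user = [
    {"wrong flowers"},
    set(),                                    # no errors
    {"wrong shipping address", "wrong date"},
    {"missing gift message"},
    {"wrong flowers"},
]

error_counts = Counter(e for errors in errors_per_user for e in errors)
n = len(errors_per_user)
for error_type, count in error_counts.most_common():
    print(f"{error_type}: {count}/{n} users ({count / n:.0%})")
```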

Do Not Use Numbers for Success Levels

A common error that people make when working with success levels is to assign numbers to them; for example, they may say:

  • complete success = 1
  • success with one minor issue = 0.66
  • success with a major issue = 0.33
  • failure = 0

And then, instead of reporting success, they simply average these success levels for their participants. In our example, they might say that the success rate is:

(20×1 + 35×0.66 + 30×0.33 + 15×0)/100 = 0.53 = 53%

This approach is wrong! The numbers that we assigned to the different levels of success are simply labels, and they form an ordinal scale, not an interval or ratio scale. That means that, even though there is an order established across these levels of success (e.g., failure is worse than success with a major issue), the numbers themselves have no mathematical meaning, and we cannot average them because we cannot guarantee that they are evenly spaced on a 0-to-1 scale (or whatever other scale we're using between complete success and complete failure). In other words, we have no reason to assume that the difference between complete success and success with a minor issue is the same as the difference between failure and success with a major issue.

Since the temptation to average numbers is so strong in real life, we strongly recommend that you assign word labels to levels of success instead of numbering them.
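To make the problem concrete, here is a hypothetical comparison in Python: two studies of 100 users produce the same "average success" of 0.53 yet describe very different experiences, which is exactly the information the averaging throws away. Study B uses the counts from the example above; Study A is invented for contrast.

```python
# Numeric labels from above (for demonstration only; do not use in practice).
labels = {"complete": 1, "minor issue": 0.66, "major issue": 0.33, "failure": 0}

study_a = {"complete": 53, "minor issue": 0, "major issue": 0, "failure": 47}
study_b = {"complete": 20, "minor issue": 35, "major issue": 30, "failure": 15}

for name, study in (("A", study_a), ("B", study_b)):
    average = sum(labels[lvl] * count for lvl, count in study.items()) / 100
    print(f"Study {name}: 'average success' = {average:.2f}, "
          f"outright failures = {study['failure']}%")
# Study A: 'average success' = 0.53, outright failures = 47%
# Study B: 'average success' = 0.53, outright failures = 15%
```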

Uncovering the Emotional Journey for the Ideal Patient Experience

Meaghan Hudak, Reading time: 4 minutes

Whenever we conduct user research in health care, the same pain points consistently emerge. The experience tends to be overwhelming, confusing, frustrating, and stressful. This is because health care is hard to understand, and the process of getting care is often inefficient: health care has usability issues.

Ideally, people want to feel in control. They desire health care that is straightforward and easy to understand, and want this care to be delivered through a convenient process. 

The presentation shared learnings on how patients define the ideal health care experience and focused on a user-driven definition of health care usability.

The keynote presentation, ‘A User-Driven Perspective on the Healthcare Experience’, was facilitated by Marty Gage, VP of Design Research at Lextant. After a brief introduction, we were presented with a discussion of the problem, the solution, and the insights learned.

The Healthcare Experience Today 

Marty starts us off by assessing the current healthcare experience. People find themselves in a series of moments throughout the patient journey, and it is not always a fluid experience.

All of the research reveals a similar emotional journey, shaped by the questions patients ask along the way:

  • Should I seek care?
  • What’s my condition?
  • What should I expect?
  • What are my treatment options?
  • How should I manage my condition?
  • Am I doing it right?



Touchpoints throughout a user's journey include people, devices, apps, websites, and medication, and they can be confusing. The current experience is the emotional outcome that results from the interactions between a person and these touchpoints. Users can be stressed and overwhelmed, making it difficult to make decisions, follow procedures, and understand complex information. We learn that there is a fundamental usability issue with the healthcare experience.

The Desired Healthcare Experience 

What is the desired healthcare experience? Marty explains that we need to dive into people's emotions, benefits, features, and sensory cues. The desired patient experience would be the emotional outcome people desire when interacting with each touchpoint.


The benefits would provide people with the positive emotions of being calm and in control, staying informed with easy-to-understand information. The sensory cues would present information that is:

  • All in one place
  • Accessible in minimal steps
  • Easy to navigate
  • Up to date
  • In simple language
  • Concise
  • Visual

The Solution

Marty asked us to assume we have a usability challenge: treat moments as use cases and think holistically about those use cases, keeping in mind the contextual/emotional, physical, cognitive, and desired outcomes.

We learn that touchpoints are the solutions in the moment. It is important to keep key touchpoints discrete and focused, with a clear architecture and hierarchy. We should involve users in determining how the information is delivered, and test for usability.

The Healthcare Usability Challenge 

We are shown examples of complex information, and, depending on the situation, stress can limit comprehension. Marty left us with these tips:

  • Embrace easy to understand
  • Design for the big picture
  • Right touchpoints for the right moments
  • Involve users in the design
  • Use information design best practices
  • Test ideas for usability


“Even baseline or bad experiences in CPG or retail would be viewed as huge wins in healthcare. There’s so much opportunity to flip the script and impact the entire experience along the way.”

Sara Saldoff, OhioHealth


If you missed Marty's presentation, check out the transcript and recording on the CCSQ World Usability Day page. This page also includes an archive of transcripts and recordings of speaker presentations, session materials, and event photos. For more information about the Human-Centered Design Center of Excellence, refer to the HCD CoE Confluence page.


MEAGHAN HUDAK 

Meaghan is a Communication Specialist supporting the CCSQ Human-Centered Design Center of Excellence (HCD CoE). Meaghan has been with the HCD CoE since January 2022. 


