
Success Rate: The Simplest Usability Metric
Jakob Nielsen & Raluca Budiu, Nielsen Norman Group, Reading time: about 6 min

In addition to being expensive, collecting usability metrics interferes with the goal of gathering qualitative insights to drive design decisions. As a compromise, you can measure users' ability to complete tasks. Success rates are easy to understand and represent the UX bottom line.
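Nielsen's full article computes success rate as the share of task attempts completed, commonly giving partial credit for partial successes. A minimal sketch of that arithmetic (the function name and the 50% partial-credit weight are illustrative assumptions, not a fixed standard):

```python
def success_rate(outcomes, partial_credit=0.5):
    """Compute a task success rate from a list of attempt outcomes.

    outcomes: list of "success", "partial", or "failure", one per attempt.
    partial_credit: weight given to partial successes (assumed 0.5 here;
    teams should define and document their own criteria).
    """
    if not outcomes:
        raise ValueError("no attempts recorded")
    score = sum(
        1.0 if o == "success" else partial_credit if o == "partial" else 0.0
        for o in outcomes
    )
    return score / len(outcomes)

# Example: 4 successes, 1 partial, 3 failures across 8 attempts
rate = success_rate(["success"] * 4 + ["partial"] + ["failure"] * 3)
print(f"{rate:.0%}")  # 4.5 / 8 = 56%
```

The key design choice is deciding in advance what counts as "partial" success; without a pre-agreed rule, the metric loses the comparability that makes it useful for benchmarking.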

Numbers are powerful (even though they are often misused in user experience). They offer a simple way to communicate usability findings to a general audience. Saying, for example, that "Amazon.com complies with 72% of the e-commerce usability guidelines" is a much more specific statement than "Amazon.com has great usability, but it doesn't do everything right."

Metrics are great for assessing long-term progress on a project and for setting goals. They are an integral part of a benchmarking program and can be used to assess if the money you invested in your redesign project was well spent.

Unfortunately, there is a conflict between the need for numbers and the need for insight. Although numbers can help you communicate usability status and the need for improvements, the true purpose of a user experience practice is to set the design direction, not to generate numbers for reports and presentations. Thus, some of the best research methods for usability (and, in particular, qualitative usability testing) conflict with the demands of metrics collection.

The best usability tests involve frequent small tests, rather than a few big ones. You gain maximum insight by working with 4–5 users and asking them to think out loud during the test. As soon as users identify a problem, you fix it immediately (rather than continue testing to see how bad it is). You then test again to see if the "fix" solved the problem.


