Success Rate: The Simplest Usability Metric
Jakob Nielsen & Raluca Budiu, Nielsen Norman Group, Reading time: about 6 min

In addition to being expensive, collecting usability metrics interferes with the goal of gathering qualitative insights to drive design decisions. As a compromise, you can measure users' ability to complete tasks. Success rates are easy to understand and represent the UX bottom line.
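The success-rate calculation itself is just the fraction of task attempts that users complete. As a minimal sketch (the data and helper name below are hypothetical, not from the article):

```python
# Illustrative sketch only: compute a simple success rate from task
# outcomes recorded during a usability test. The data and the helper
# name are hypothetical examples, not part of the article.
def success_rate(outcomes):
    """Return the fraction of task attempts marked successful (True)."""
    if not outcomes:
        raise ValueError("no task attempts recorded")
    return sum(1 for ok in outcomes if ok) / len(outcomes)

# Example: 5 users each attempted 4 tasks; True = task completed.
attempts = [True, True, False, True,
            True, False, True, True,
            True, True, True, False,
            False, True, True, True,
            True, True, False, True]
print(f"Success rate: {success_rate(attempts):.0%}")  # 15 of 20 -> 75%
```

Note that a binary pass/fail score like this deliberately ignores *how well* users completed a task; that simplicity is exactly what makes the metric easy to communicate.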

Numbers are powerful (even though they are often misused in user experience). They offer a simple way to communicate usability findings to a general audience. Saying, for example, that "Amazon.com complies with 72% of the e-commerce usability guidelines" is a much more specific statement than "Amazon.com has great usability, but it doesn't do everything right."

Metrics are great for assessing long-term progress on a project and for setting goals. They are an integral part of a benchmarking program and can be used to assess if the money you invested in your redesign project was well spent.

Unfortunately, there is a conflict between the need for numbers and the need for insight. Although numbers can help you communicate usability status and the need for improvements, the true purpose of a user experience practice is to set the design direction, not to generate numbers for reports and presentations. Thus, some of the best research methods for usability (and, in particular, qualitative usability testing) conflict with the demands of metrics collection.

The best usability tests involve frequent small tests, rather than a few big ones. You gain maximum insight by working with 4–5 users and asking them to think out loud during the test. As soon as users identify a problem, you fix it immediately (rather than continue testing to see how bad it is). You then test again to see if the "fix" solved the problem.



Assessing Human-Centered Design Adoption in the Enterprise 



Looped In: The Measuring Customer Satisfaction Initiative


Understanding Qualitative & Quantitative Research


Practitioner Profile: Hyorim Park





More Content...


Calculating ROI for Design Projects in 4 Steps

As UX professionals, we see inherent value in improving the experiences of products and services. But many people don’t see it that way. And sometimes, those people make decisions about your funding. 



Moderated vs. Unmoderated User Testing: Which is Right for You?

Both moderated and unmoderated usability tests are effective ways to get accurate feedback on an ongoing project. Which should you use?


Creating Effective Surveys

From working out what you want to achieve to providing incentives for respondents, survey design can take time. But when you don’t have hours to devote to becoming a survey-creation guru, a quick guide to the essentials is a great way to get started.



How to Improve Your Net Promoter Score (NPS)

What steps do you need to take to improve your Net Promoter Score? Learn how to benchmark your performance, track progress, and see better customer-experience results.



A Day to Explore Important Product Issues like Trust, Ethics, and Integrity

Wednesday, November 10, 2021 wasn’t just another Wednesday — it was a celebration for the CCSQ and QualityNet community. Produced by the Human-Centered Design Center of Excellence (HCD CoE), CCSQ’s 3rd Annual World Usability Day was a virtual, open-house-style conference held to honor and explore good design and the usability of products.


Customer Experience Tips

A series of videos from the world's most notorious soup villain offering tips to ensure a positive customer experience!

