What is Heuristic Evaluation?

Heuristic Evaluation is a usability engineering method for finding the usability problems in a user interface design so that they can be attended to as part of an iterative design process. Heuristic evaluation involves having a small set of evaluators examine the interface and judge its compliance with recognized usability principles (the "heuristics"). 

Why use a Heuristic Evaluation?

There are some great reasons why heuristic evaluations are helpful, especially during lulls in user research. Overall, heuristic evaluations allow you to:
  • Identify and focus on specific issues without having to speak to users
  • Discover usability problems with individual elements and how they impact the overall user experience
  • Provide quick and inexpensive feedback to designers
  • Gather and give feedback early in the design process
  • Conduct usability testing to further identify and understand problems
  • See improvements in important business metrics, such as bounce rate, user engagement, and click-through rate

Heuristic evaluations are not a replacement for usability testing or speaking to users. They provide a foundation for improving the experience alongside user testing, or before you go into a usability test.

Just because you will find issues does not necessarily mean you will get answers. Proper usability testing is essential to ensure you are building the correct solution.

How to perform a Heuristic Evaluation

In general, heuristic evaluation is difficult for a single individual to do because one person will never be able to find all the usability problems in an interface. Luckily, experience from many different projects has shown that different people find different usability problems. Therefore, it is possible to improve the effectiveness of the method significantly by involving multiple evaluators. It is certainly true that some usability problems are so easy to find that they are found by almost everybody, but there are also some problems that are found by very few evaluators. Furthermore, one cannot just identify the best evaluator and rely solely on that person's findings. First, it is not necessarily true that the same person will be the best evaluator every time. Second, some of the hardest-to-find usability problems are found by evaluators who do not otherwise find many usability problems. Therefore, it is necessary to involve multiple evaluators in any heuristic evaluation. Use three to five evaluators since one does not gain that much additional information by using larger numbers. (Nielsen Norman Group, NN/g)

Having more than one evaluator helps avoid false alarms and helps prioritize the issues found. If all the evaluators rate something as critical, that issue should be prioritized at the top of the list.

Below are several steps to conduct a heuristic evaluation:

  1. Define what you will evaluate. 
  2. Know your user's behaviors and motivations.
  3. Choose which heuristics you will use. There are several well-recognized sets of heuristics that can be applied, and you can also create your own if you are an advanced evaluator.
  4. Set up the way you will identify issues. People may perceive problems differently, and the severity of each problem could vary from one evaluator to the next. It is essential for evaluators to agree on how the different severity ratings are defined before the evaluation begins.
  5. Define the task(s). Using a task makes it easier to get into the user's perspective and allows the evaluators to remember the user's goals.
  6. Conduct the evaluation. Evaluators should work independently, never together, and go step-by-step through each interaction in every section you have decided to assess. Interact with each element and check whether it violates any of the heuristics. Keep definitions (and examples) of each heuristic on hand. Allow a few hours for a proper evaluation (a heuristic evaluation of a full product may take one or two days). It is also helpful to include annotated screenshots that visually highlight any violations.
  7. Analyze and summarize the results. Bring together all of the different evaluators and their findings. Add up the number of times each issue occurred across evaluators and the average severity of each violation; the more frequently a problem occurs and the higher its severity, the higher its priority (see the sketch after this list). For instance, if every evaluator encountered a problem with the search field and rated it as a major violation, that issue should get a higher priority than a cosmetic or minor issue.
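
To illustrate step 7, here is a minimal sketch in Python, assuming hypothetical issue names, three hypothetical evaluator reports, and a hypothetical 0-4 severity scale (none of which come from this page). It tallies how often each issue was reported and its average severity so that the most frequent, most severe violations rise to the top:

    # Minimal sketch: aggregating heuristic-evaluation findings.
    # Assumes each evaluator reports (issue, severity) pairs, where severity
    # uses a hypothetical 0-4 scale (0 = not a problem ... 4 = catastrophe).
    from collections import defaultdict

    # Hypothetical findings from three evaluators.
    findings = [
        [("search field hard to find", 3), ("low-contrast labels", 1)],
        [("search field hard to find", 4), ("no error message on empty form", 3)],
        [("search field hard to find", 3), ("low-contrast labels", 2)],
    ]

    counts = defaultdict(int)           # how many evaluators reported each issue
    severity_totals = defaultdict(int)  # summed severity per issue

    for report in findings:
        for issue, severity in report:
            counts[issue] += 1
            severity_totals[issue] += severity

    # Rank by how many evaluators reported the issue, then by average severity.
    ranked = sorted(
        counts,
        key=lambda issue: (counts[issue], severity_totals[issue] / counts[issue]),
        reverse=True,
    )

    for issue in ranked:
        avg = severity_totals[issue] / counts[issue]
        print(f"{issue}: reported by {counts[issue]} evaluator(s), average severity {avg:.1f}")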

Overall, a heuristic evaluation should produce a clear list of usability problems, which heuristics they violate, and how severely they impact the user. With this information, designers can make quick and informed changes to improve the experience.

HCD PROCESS

Immerse | Synthesize | Prototype

FOR MORE INFORMATION

What Three Heuristic Evaluations Taught Us About Iteration

TIPS


  • Establish an appropriate list of heuristics.

  • Make sure to carefully choose your evaluators. Your evaluators should not be your end users.

  • Brief the evaluators so they know exactly what they are meant to do and cover during their evaluation.

  • Record problems. Evaluators must record problems themselves, or have an observer record them, as they carry out their tasks, so that every problem encountered is tracked. Be sure to ask your evaluators to be as detailed and specific as possible when recording problems.

  • Hold a debriefing session to collate the findings of the evaluators and to establish a complete list of problems. Encourage them to suggest potential solutions for these problems on the basis of the heuristics.