Design Critiques: Encourage a Positive Culture to Improve Products
Sarah Gibbons | Reading time: about 11 mins

Open feedback is essential for a collaborative UX process. However, sharing unfinished work is naturally uncomfortable and often generates tension. The right facilitation process can foster an efficient, honest feedback loop.

What Is a Critique?

Definition: A design critique refers to analyzing a design and giving feedback on whether it meets its objectives.

A design critique usually manifests as a group conversation with the ultimate goal of improving a design. It does not mean simply judging a design.

There are two distinct breeds of design critiques: standalone critiques and design reviews. Standalone critiques are gatherings with the sole purpose of improving a particular piece of work. Design reviews, in contrast, are usually evaluations of a design based on a set of heuristics; they can be done by a usability expert or in a meeting held at the end of the creative process in order to gain approval and move forward. In this article we will focus specifically on standalone critiques.

In a standalone critique, there are two roles: the presenter and the critiquer. The presenter shares the design, while the critiquer acts as the critic, offering informed thoughts or perspectives. (Both roles can involve multiple people.) Critiques can, and should, be cross-disciplinary. They can happen at any stage in a design process, and usually there will be different critique sessions for several iterations of the same design.

Throughout this article you’ll observe 3 underlying themes of effective critiques:

  1. Clear scope for the conversation. Too often critiques become unwieldy due to lack of scope. Critiques will only prove beneficial if there are unambiguous boundaries for what can and should be critiqued. Once boundaries are set, participants, duration, and formality can be determined.
  2. Agreed-upon design objectives for the work. In order to analyze whether a design meets its goals, there must be agreement on the problem that needs to be solved. This likely means a clear understanding of users and their needs. Without these, any feedback is subjective and baseless.
  3. Conversation rather than command. Commands, or directives, can very quickly defeat the exact purpose of the critique, which is to foster open discussion in order to improve the outcome.

The word “critique” has slightly negative connotations in everyday language, but when conducted according to this definition, a design critique is a positive event that should feel good for all parties involved.

Why Critique?

It is nearly impossible to improve a design without feedback from others. Their input helps you avoid mistakes and thus create higher-quality work. The old saying rings true: two heads (or more, in this case) are better than one.

A positive culture of critique supports team building in multiple ways. First, from the get-go, everyone stays up to date and in the loop on the work. Sharing designs early allows for earlier buy-in from team members who otherwise may not feel confident about the work, and it builds team consensus. Over time, this practice creates team trust and prevents destructive egos from causing too much damage to a project.

Second, design critiques enable cooperation and collaboration. Your work can influence the work of others. For example, developers could build more extensible code in the current release if they have an understanding of what designs may come in the future. In the same way, they can question technical feasibility while the designs are still in progress and can be changed without throwing away time and money. Multiple designers working on different parts of a big project can catch possible inconsistencies across the overall user experience when they all participate in early critiques of each other’s draft designs.

Facilitating a Critique

Facilitation is a core aspect of a critique. Traditionally, facilitation is a mechanism used to manage chaotic processes. As Connor and Irizarry describe it, “[critique] facilitation is the conscious, balanced management of conversations towards a conclusion.” This management creates the structure and framework needed for productive conversations.

There are two main facilitation approaches to UX critiques:

  • Round robin. Participants share their perspectives one by one, making their way around the table. This method provides two clear advantages. First, everyone contributes. Second, the process feels democratic: you can start at a random place at the table, and anybody has a chance at going first (if not the first time, then the next time).
  • Quotas. The facilitator gathers a specific, predetermined number of positive and negative comments from each participant. For example, each participant could share two aspects of the design that seem to accurately meet users’ needs and one aspect that could be improved. This approach, in particular, should be used only as a way to initiate conversation. Once there is a natural exchange, the critique can carry on based on where the conversation goes (assuming it falls within the set scope).

A member of the team should act as a designated facilitator, in charge of the overall handling of the critique. It is best to rotate the role of facilitator from critique to critique. This prevents any one team member from always dominating the conversation or directing the outcome. Rotating facilitators can also allow introverted team members to gain experience and confidence managing the team.

Facilitators’ responsibilities will vary, but likely will include time boxing, keeping conversation on track, and negotiating any tension. Other important responsibilities are:

  • Creating, then distributing the scope and agenda for the design critique. In order for a critique to be productive, there must be a plan heading into it. Defining this plan is the facilitator’s responsibility. There are key components to line up prior to conducting a critique.

    First, make everyone aware of the critique’s scope and goals. Setting the common understanding of what the conversation should and should not cover is an important part of making the most of the team’s time. Establish rules and expectations beforehand, to make sure that participants know what a critique is and how it is run. In addition, share the work that will be critiqued — you want to avoid big surprises at the time of the critique, while also giving participants the time to really think about the work before offering feedback.

    Here’s an example of a facilitator’s email specifying a critique topic and scope. (The original screenshot was removed; the email below is a hypothetical reconstruction of the kind of message it showed.)
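
        To: Checkout redesign team
        Subject: Design critique Thursday, 2:00: checkout flow wireframes

        Hi all,

        On Thursday we’ll critique the wireframes for the new checkout flow (attached).
        Scope: the layout and sequence of the three checkout steps; visual styling and copy are out of scope for this session.
        Design goal: a first-time buyer can complete a purchase without assistance.
        Please review the wireframes before the session so we can get straight to discussion.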

    Second, purposefully choose the people who will participate in the critique. Ideally, this group will be cross-disciplinary.

  • Asking the right questions. The role of the facilitator is to ask pressing questions to ensure that the presenter is getting the right feedback. The facilitator can reformulate questions or comments that sound opinionated (“This is too red!”) or directive (“I would have done it differently!”) to relate them to the goals of the design.
    • Bad question: Yikes… that layout!
    • Reformulation: How does this layout make it easier for the user to accomplish their task quickly and efficiently?
    • Bad feedback: I love those colors, but I think that button is in the wrong spot and the overall page looks busy.
    • Reformulation: If the goal is to have the user register quickly, I’m concerned we are placing emphasis on the wrong elements and hiding the primary task by making the button hard to find.
  • Documenting the discussion. A facilitator may also act as a recorder. In some cases, someone else on the team can adopt this role. The recorder should take notes publicly, using a collaborative editing tool, and should allow all participants to add additional observations and clarification in real time.

    Here’s an example of using a spreadsheet to track the outcome of a critique session. (The original screenshot was removed; the layout below is a hypothetical reconstruction.)
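
        Feedback                          | Related design goal    | Action item                       | Owner
        Primary button is hard to find    | Quick registration     | Increase button prominence        | Designer
        Form labels use internal jargon   | First-use clarity      | Rewrite labels in plain language  | Content writer
        Page feels visually busy          | Focus on primary task  | Reduce competing elements         | Designer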

  • Following up. It is the facilitator’s job to wrap up the conversation. The follow-up should comprise notes from the critique, as well as action items moving forward. Emailing the participants or posting a summary in a collaborative space can maintain the momentum after a critique.

Presenting in a Design Critique

Presenting work, whether in a critique setting or not, makes the presenter feel vulnerable, especially if critique is not an established practice in your organization. Remember not to take feedback personally; instead, keep your mindset on improving your product. This attitude will make it easier to work through points of tension and will enable you to gain maximum value from the critique.

When you’re presenting your work in a design critique, follow these best practices:

  • Repeat objectives. Prior to starting the critique, reiterate the goals of the work. Quickly summarize personas, current pain points, user tasks, or previous work.

    As mentioned above, it is also a good idea to send out your work beforehand to avoid initial reactive feedback based on someone’s gut reaction.

  • Tell a story. Start the critique by telling your work’s story. Though this might feel silly, not only is it good practice for storytelling to customers, but it loops your audience into the problems you encountered and into your inspirations and decision points. Follow it up with specific requests for feedback: what works, what doesn’t, where you need input and suggestions. Present your work quickly and efficiently. We like to overexplain as a means of defending every decision we have made, because we are often emotionally tied to our designs. Try to be concise and to the point. After presenting, the team can always circle back to something that needs more discussion — but avoid eating up unnecessary time in the initial presentation.

    This approach to presenting will also have the added benefit of allowing your critiquers to see your work as your users may, without much explanation. During the subsequent discussions, questions and accompanying explanations will arise naturally.

  • Make your designs readily available. If your designs were not sent out prior to the meeting, make your designs available after the critique, in case extended discussion is needed. Schedule individual follow-up meetings if you need to discuss anyone’s feedback in more detail.

Making Critiques Part of Your Process

Design critiques should be a key part of the iterative creative process, but incorporating feedback into your team’s existing process is likely to hit obstacles. It is a given that you will encounter people and scenarios that make critiques difficult and frustrating. Often the critique practice goes against the overarching organizational structure, or certain team members object to it. In such cases, the best approach is to start soon and start small.

The sooner your team understands and absorbs design critiques into existing processes, the sooner your products will reap the benefits that come from these important conversations. Start now, by running critiques for your current design project — in whatever state that may be. Don’t wait for the kickoff of your next big project. Even if you can’t make substantial improvements to an in-progress project, you can refine your critique culture so that you’re better positioned for great things on that next project.

Start small by pushing for better feedback exchange within your immediate team. The more this occurs, the more likely it is to become a natural part of your process. Try dedicating 30 minutes a week to a round-robin critique of a project someone on your team has been working on.

If critique is already a successful part of your team’s process, think about inviting someone with a different background or from a different department on a rotating basis. Critique helps create a common foundation by bringing together different perspectives. Over time, not only do extended-team members get a better feel for the design process, but you also build trust and a shared vocabulary throughout the discussions.

Critique Pitfalls

Keeping a critique on track and effective is hard work. Below are bad habits that can negatively impact critiques:

  • Not agreeing on personas or objectives beforehand
  • Scheduling overly long critique sessions
  • Taking feedback personally
  • Rushing to problem solve in the moment
  • Talking only about the negatives

Conclusion

Creating a culture of honest critique takes time and investment, but it improves design by incorporating multiple perspectives. Critiquing ongoing design projects allows changes to be made before the design is final, without impacting the project cost and timeline, and ultimately ensures that the end product meets the original goal.

Use the attached critique cheat sheet as a guide for you and your team — either as a reference or to create a baseline understanding of the process amongst your team. If your team is already using some form of design critiques, use these best practices to refine and increase the effectiveness of the conversation.

Article originally published October 2016 by Nielsen Norman Group

Usability Testing: Best Practices
Meaghan Hudak | Reading time: about 3 mins

Earlier this year, I participated in my first usability test and gained some helpful insights from the experience. You may be wondering where to start or how to conduct a virtual usability test.

Let’s start with: what is usability testing?

Usability testing is a method for measuring how easy to use and user-friendly an application or product is.

A small, targeted set of end-users will test the application or product to discover any usability errors. Usability testing focuses on how easily users can accomplish their goals with a given system.

This level of testing is often performed on the current version of the product, or at the beginning of the software development life-cycle.

A group of users reviews the application so that it can be developed in accordance with what the users want from it.

From there, suggestions and improvements can be considered. To kick things off, I’ve included five points to consider during the usability testing process:

What is your goal?

What is the question you’re trying to answer with your test? Is there a design issue on a website that is hindering users? Is there a new product you want to test out? Based on your goal, pick specific tasks to give the test participant. We’ll learn much more if we watch them try to accomplish something.

Participant Recruitment

Recruiting test participants may seem daunting, but it doesn’t need to be. For starters, we only need 5 people. Jakob Nielsen explains The Magical Number of 5.
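
In Nielsen’s problem-discovery model, the proportion of usability problems found by n test users is approximately 1 - (1 - L)^n, where L is the probability that a single user uncovers a given problem (about 31% in Nielsen’s studies). With five users, that works out to 1 - 0.69^5, or roughly 85% of the problems.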

Getting more participants isn’t worth it because there are diminishing returns on the data. Focus on finding representative people. This means people who look like our users and would have a reason to do the tasks we’re testing. How do you find the right people?

The first place to look is your user base. It’s an instant pool of potential participants who care about your product. Once you’ve found participants, explain what the test is about and how long it will take.

Prep for the Usability Study

Detail what steps you want the user to take to uncover accessibility issues or challenges. You’re going to want to write a script. This ensures we’re giving the right information and reduces the chance of inconsistencies between tests.

You’ll want to record the test so you can focus on what’s happening and avoid having to take notes under pressure.

Perform the Usability Study

Welcome the participant and explain how the test will work. You want to take some of the pressure off. Explain that you are not testing them... you’re testing the website (or software application, mobile app, etc.).

If they make mistakes, it’s not their fault and the test is not punitive; we’re here to learn from their experience. Ask them to try to think out loud as they perform each task.

Explain that to ensure conditions are as real as possible, you won’t be able to offer them any advice or guidance. Explain the real-life scenario that would lead to them performing this task so they can get in the right mindset. Let them read the task out loud and begin. It’s important to remain neutral and silent as the participant takes the test. This is not about teaching them how to use the interface. You’re there to listen and watch.

Users may be critical or run into problems, but resist the urge to explain things or prompt them. If they ask you how to do something, reply with “What do you think?” or “I am interested in what you would do.”

After each test, take a step back with the participant and ask, “How’d that go?” If you have specific questions, you can retrace their steps and ask them open-ended questions like, “Why did you decide to do that there?” or “What was going through your mind at this point?”

Data Analysis

Review the recording. Did the participant complete the task successfully and efficiently? If not, what stopped them? What were their key behaviors and comments? Cross-reference and look for patterns between the different participants.

Rank the issues, identify solutions, and determine the best course of action moving forward.
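
As a minimal sketch of that ranking step (the issues, counts, and severity weights below are made up for illustration), you can score each issue by how many participants hit it and how severe it was:

```python
# Hypothetical usability issues observed across five participants:
# (description, participants affected, severity on a 1-3 scale).
issues = [
    ("Submit button hard to find", 4, 3),
    ("Error message unclear", 2, 2),
    ("Font too small on mobile", 1, 1),
]

# A simple priority score: frequency x severity, highest priority first.
for desc, affected, severity in sorted(issues, key=lambda i: i[1] * i[2], reverse=True):
    score = affected * severity
    print(f"score={score}: {desc} ({affected}/5 participants, severity {severity})")
```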

The Human-Centered Design data synthesis methodology explained:

Synthesis Process

  • Externalize the data and organize it by creating an Affinity Diagram.
  • Draw connections between the groupings to develop deeper insights and identify common themes.
  • Distill the themes, generating insight statements to summarize key learnings or findings (see the sketch below).
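
As an illustration of this flow (the observations and theme labels below are hypothetical), the externalize-group-distill steps can be mirrored in a short script:

```python
from collections import defaultdict

# Hypothetical observations captured during testing, tagged with a theme
# during affinity diagramming (the externalize step).
observations = [
    ("Couldn't find the submit button", "navigation"),
    ("Expected search on the home page", "navigation"),
    ("Unsure what 'sync' meant", "terminology"),
    ("Confused by the 'provision' label", "terminology"),
]

# Group the externalized notes by theme (the affinity step).
themes = defaultdict(list)
for note, theme in observations:
    themes[theme].append(note)

# Distill each theme into a draft insight (in practice, written by the team).
for theme, notes in themes.items():
    print(f"{theme}: {len(notes)} notes -> draft insight from: {'; '.join(notes)}")
```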

Quantitative vs. Qualitative

  • Quantitative data reflect whether the tasks were easy to perform.
  • Qualitative data consist of observational findings that identify design features that were easy or hard to use.

Task Efficiency

Measure the average (mean) time taken to complete each task. Some users may simply take longer to carry out tasks, which can skew the results by inflating the average completion time. To account for this, median times should also be calculated.
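
As a minimal sketch (the task names and timings below are hypothetical), comparing the mean and median completion time per task shows how one slow participant inflates the mean while barely moving the median:

```python
from statistics import mean, median

# Hypothetical completion times in seconds for five participants per task.
# The fifth participant on "Register an account" took far longer than the rest.
task_times = {
    "Register an account": [42, 55, 48, 51, 190],
    "Find pricing page": [20, 25, 18, 22, 30],
}

for task, times in task_times.items():
    print(f"{task}: mean={mean(times):.1f}s, median={median(times):.1f}s")
```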

By following these best practices, you will be able to implement changes that better serve your customers and users.


SARAH GIBBONS

Sarah Gibbons is Nielsen Norman Group's Chief Designer. She works at the intersection of design research, strategy, and user experience design.


MEAGHAN HUDAK 

Meaghan is an Associate Product/Program Analyst supporting the CCSQ Human-Centered Design Center of Excellence (HCD CoE). Meaghan has been with the HCD CoE since January 2022. 




