Originally published: Dec 20, 2011
Usability tests are the workhorse technique for teams to understand how their users interact with their design. The best teams conduct frequent tests, giving each team member regular opportunities to learn where their design elicits frustration and delight.
In its simplest form, a usability test is a chance to watch someone interact with the design. We focus on that user's behaviors, favoring what they do over what they say. Usability testing is the best instrument for finding out what they actually do.
If you've conducted a usability test, chances are it was a scavenger hunt test. In these tests, we give each participant a series of tasks to complete, much like asking people to play a scavenger hunt. The scavenger hunt protocol is easy to create: we identify a set of activities we'd like to see the user complete, then sit the participant down and give them the first one. If we ask them to think aloud, we can learn a lot about their thought process as they work through the design.
Scavenger hunt protocols are very useful for seeing whether the design can handle important functional needs. However, they frequently fall flat because the participants may be performing for us. They'll gladly complete the tasks, even when the tasks aren't what they'd really do in their own lives. Thus, the scavenger hunt test only works well when we're confident we know the tasks our users really do.
In interview-based usability tests, we can discover more about what users really do with our design by interactively creating a more realistic scenario than a typical scavenger hunt test allows. The method helps us learn how our users approach our design and what their own tasks are.
Interview-based testing starts with the way we recruit our participants: we only choose participants who have a current need for the design. For example, if we're testing an expense-reporting design, we'd only recruit participants who have expenses to report right now.
When they show up for our session, we'd interview them about their expenses. We'd learn how often they submit them, their organization process, and their experiences. During the interview portion (usually about one-third of the session), we talk about the expenses they need to submit today and how they'll know if they're done. With this information, we can dynamically create the tasks they'll use.
In contrast with scavenger-hunt tests, each participant ends up doing something different with the design. Because it's grounded in what these people really do, we learn more about how well the design matches their experience and thought processes.
Another variant of the scavenger hunt test lets us try out new ideas about our design. In paper prototype testing, we create an interactive paper model of the design. Participants then perform scavenger-hunt-style tasks using the mocked-up design.
While we could use electronic prototyping tools, we've found paper takes the design team to a different level. With everyone contributing design elements, the entire team is immersed in thinking about the design. (With electronic prototypes, typically only one person on the team can use the tool at once, thereby distancing everyone else from the creation process.)
Also, with paper, we can make changes really fast. We often conduct four test sessions a day, changing the design between each one. In an hour, a team can completely redesign their approach, using art skills they learned in kindergarten.
With the rise of design studios, paper prototyping (which has been around since the early 1990s) has seen a resurgence in popularity. It's a brilliant variation for trying out ideas before you make the investment of code.
Five-second tests tell us whether users understand the purpose of a design from the moment they see it. When a screen isn't immediately clear about its purpose, the user has to decode what they're supposed to do. Frustration ensues.
In a five-second test, we show users a design for, not surprisingly, five seconds. Then we ask them to write down everything they remember about what they just saw. While we can show the design on a laptop screen, we often show designs on printed pages. This makes it convenient for testing in places like cafeterias, libraries, or coffee shops.
What the user remembers (and what they don't) becomes the focus of the study. We can identify what we'd like them to remember from that initial five-second inspection, then change the design until we consistently get that in subsequent tests.
Five-second tests work best for those detailed pages deep in a web site or application. They don't work well for pages that serve multiple functions, like a web site's home page. To test home pages, we'll use a scavenger hunt or interview-based test.
We created this variation when we needed to test the online content of a luxury vacation service that also had a beautiful printed catalogue. We then discovered the method is useful for anyone who has a ton of online content and wants to find out whether it's organized in a useful and usable way.
We start with a paper catalogue version of the content. If one already exists, as it did when we were working with the vacation service, we're golden. However, we've found it's easy to create one from the pages of the site, as we once did for a state highway department.
We ask participants to go through the catalogue with a highlighter, marking anything they find of interest. We can then ask them why it's interesting and what questions they still have. We then ask them to find the same information in the online version of the content.
We can easily spot navigation issues and where, due to the online translation, the design became less usable. We also learn where additional content should go.
The nice thing about these variations is that you can mix and match them. We've combined an interview-based test of an existing design with a paper prototyping test and a quick five-second test. This way, we make the most of the time we have with each participant.
Depending on the protocols we choose, our usability testing sessions can take from fifteen minutes to several hours. Our longest usability test took 20 hours with each participant. (We spread each session over a week.) In this one-of-a-kind study, we watched programmers as they designed and built a small application. Because we saw the entire process, we learned a tremendous amount about how the designs helped and hindered the development process.
Of course, you don't have to conduct 20-hour sessions to get value. However, frequent exposure to your users will dramatically improve your design process, better inform your decision making, and help you produce inspired, innovative user experiences. What's stopping you?
What are your favorite variations on usability testing? We'd love to hear what you're doing over at our UIE Brain Sparks blog.