By Jared M. Spool
What's the easiest way to conduct a usability test? Well, you could just sit a person down (it doesn't matter who) in front of your design and ask them to do something (it doesn't matter what).
If this is so easy, why does a standard usability test contain all that other rigmarole? Because that rigmarole goes a long way toward ensuring that the test will produce quality results.
When a design has a usability problem, it's because someone made a wrong decision. They chose to take the design in a direction that creates frustration for the user. A different design choice would have prevented the frustration.
We consider a usability test to be successful when the design team members receive the information they need to make the right decision. Successful usability tests produce informed decisions.
Poor decisions have two possible outcomes: either the user experience worsens because of a change that just shouldn't have happened, or a valuable opportunity to improve the design's user experience is missed. When usability tests work, both outcomes become significantly less likely.
As we work with teams all over the globe, there are mistakes that we see frequently. These mistakes are very easy to prevent -- if only the team members realized they were making them. Here are seven of the most common mistakes:
The first mistake we see constantly is that teams don't understand when usability testing can help and when it can't. Usability testing is a tool for producing information. However, it can't effectively produce *all* types of information.
These teams often make the mistake of using usability tests to see how the users "feel" about the design. They want to know if the tested participants will favor the design, want to use it again, and share it with their friends.
While these are all important things to find out, a standard usability test is not the way to do it. We've seen instances where users were extremely frustrated with the design -- couldn't complete a single task -- yet told us they loved it. We've also come out of tests where the users completed every task quickly and effectively, but hated the design, even though they also told us they'd use it again. It's very hard to know what to change when you get results like these.
Because a usability test allows you to observe the user's actual behavior, its real forte is in telling you where the interface causes frustration. The observation of how users flow through the design provides far more actionable information than asking them if they like it or not.
You can avoid this first mistake by being clear about what you want to get out of the test. Posing a behavioral question or two, such as "Can our users apply for a mortgage without confusion?" or "Will the content reduce calls to our support center?", will dramatically improve the test results you get. The more specific the question, the better your results will be. You'll know when the design is working and what to do if it's not.
When you were in grade school, did you ever play The Game of Telephone? That's the game where someone whispers a saying (like "The spirit is willing, but the flesh is weak") into one person's ear, who then whispers it to the next person, and so on; with the last person stating aloud what they heard ("The meat has gone bad, but the wine tastes pretty good").
As people relay information to each other, it becomes subject to distortion, much like The Game of Telephone. That's exactly what happens when usability testing is conducted as an outside activity, with the team members barely paying attention.
The most successful tests are those where the team is involved at every step of the process. Team members watch each test and absorb the information as it comes, without the natural filtering and distortion that happens when they hear the results second- or third-hand.
Avoiding this mistake is simple: just make sure the team is involved. We've found this means conducting the tests as near to the team as possible (such as in a local conference room) and offering incentives to participate (food always works). Even when a team member can't attend a specific test, it should be easy for them to see the video or get a detailed summary of what happened.
Usability testing is all about seeing the design through the eyes of the test participants. As they work their way through the design, you get to see and hear what works well and where it becomes frustrating to accomplish their goals.
However, if you've recruited the wrong participant, you won't learn what you need to know. If the participant knows too much, they won't experience the problems your real users will encounter. If they don't have enough of the right experience, they'll get stuck on things your users would breeze right by.
One common mistake is to focus on demographics (such as age and income) and not look at those distinctions that make the users behave differently, such as their fluency in the design's content area. The risk is that you'll miss critical problems that are easy to fix, just because the participants you recruited didn't happen to encounter them.
When planning who to recruit, the best technique is to start by asking, "What attributes will cause one user to behave differently than another?" This can often focus the recruiting process on finding people who match the users extremely well, thus improving the quality of the test's results.
Years ago, we helped with a study of Ikea.com, looking at how people found products on the site. When we got there, they'd already started the testing process and were using tasks like "Find a bookcase." Interestingly, every participant did exactly the same thing: they went to the search box and typed "bookcase".
At our suggestion, the team made a subtle change to the instructions they were giving their participants: "You have 200+ books in your fiction collection, currently in boxes strewn around your living room. Find a way to organize them."
We instantly saw a change in how the participants behaved with the design. Most clicked through the various categories, looking for some sort of storage solution. A few used Search, typing in phrases like "Shelves" and "Storage Systems". And nobody searched on "bookcase".
The way you design tasks can have a dramatic effect on the results, without your even realizing it. In a testing situation, the participants really want to please you by following your directions. If the tasks direct participants to take a certain path, that's the way they'll go. If it's not the path real users take in the true context of the design's use, you may get distorted results.
You can get around this mistake by constantly exploring the "context of use." When designing tasks, ask yourself, "What events or conditions in the world would motivate someone to use this design?" Use the answers as the primary formation of the tasks you create.
Facilitating a usability test is a learned skill. We've never met anyone who was naturally adept at it. It's not hard to learn -- it just takes training and practice.
A good facilitator knows how to draw out exactly the right information from the participant without giving away the store. They know how to use the very limited test time to focus on those elements that will be most important to the team.
Have you ever sat through a boring test? That won't happen with a top-notch facilitator. They know how to make every minute of the test interesting and exciting for the team members observing.
Since facilitating is a learnable skill, teams can train multiple members to be good facilitators. And with practice and constructive critique, those facilitators become skilled, dramatically improving the information that comes out of every test.
A usability test can produce a wealth of valuable information, yet if the people making the design decisions aren't aware of what happened, the test has failed. Getting the information to the design team is critical.
Many usability professionals try to solve this problem by writing test reports. These reports attempt to summarize the testing and the findings into one place. The theory goes, if we write it down, everyone will have it. Unfortunately, it rarely works that way.
Most reports are never read. The few that are read usually produce more questions than answers. Writing a quality report that communicates everything clearly requires exceptional writing and composition skills, not to mention a tremendous amount of time -- things most usability professionals don't have.
We learned a long time ago that our reports were really only for archival purposes, to provide a way to review what we did years later. They weren't a tool the team was using to make decisions now.
Instead, we've developed other techniques for communicating what happened during the testing, including review sessions held right after each test, an email discussion list for talking about the test and various interpretations, and interactive workshops to review the design and what we learned. Every team is different, so we've found we need to tailor our dissemination methods to the two or three ways each team works best.
Usability testing is great at identifying problems, yet it's horrible at identifying solutions. Fortunately, we've never run into a design team that couldn't generate a half dozen possible solutions to any problem within moments of its discovery.
The problem comes with choosing which solution is best to implement. The initial test pinpointed the problem, but it can't tell you which solution will work. You need to test again, this time with a working implementation of the solution. (Often a paper mockup will suffice.)
We see teams skipping this step. Either they don't schedule a second round of testing to work out solutions or they cut corners in their process due to overconfidence. The results can be disastrous -- the solution may actually be worse than the original implementation!
Planning a round of testing, to validate any yet-to-be-discovered potential solutions, is the antidote to this problem. You need to do this before you even know what the problems will be. Of course, if you don't have any problems, then you can always cancel the testing. (Yeah, like *that* will ever happen!)
Usability testing is a serious investment of time and resources for any team. Having a clear understanding of what you want to get from it is critical to its success.
The most successful teams constantly monitor the decisions that come out of the testing process. They look at subsequent usability problems that appear and ask, "How did our process miss this? What should we change for next time?"
Only by constantly honing our skills and improving our processes can we ensure that we're getting the best value from this priceless technique.