Published: Apr 18, 2012
"Damn, I wish we'd done this a year ago." That's what I hear right after I've started a team on their first user research project. The process of learning what your users need is so powerful, senior stakeholders wish they could go back in time and create their products and services over again, this time with the insights they've just received.
Of course, you can't go back in time (yet), but you can start your own user research program right away. Once you decide to go down that road, the first thing you'll realize is how rich your choice of research methods is. Usability audits, heuristic evaluations, usability testing, field research – which one do you choose first? Which are the most effective?
Usability audits (also known as usability walkthroughs) are a simple task-based technique, yet they are the least effective. When you perform an audit, someone on the team sits down and role plays a user working through various tasks and activities with the design, looking for places where the design could frustrate or delay a user.
Let's say you're building a system for medical professionals to locate information about cancer treatments and options and share it with patients and caregivers. The team member could play the role of a nurse or counselor, looking up the right information and pretending to give it to a patient's caregiver. In the process, the team member would identify places where the information is hard to find or see, and places where it's difficult to share the information.
While it's a good technique for producing a list of things to change in the interface, the problem is knowing whether those changes will, in fact, improve your real users' experience with the design. Though the team member is role playing a user, they are doing so based on what they believe users will do with the product.
If the team member isn't a trained medical professional who does this type of work frequently, how will they know what would really frustrate someone who is? Delivering information to patients has a lot of subtlety and nuance to it. Without the skill and practice of doing it, it's unlikely they'll pick up on critical problems.
A usability audit is probably the least expensive form of user research. But it's missing a critical component: the user. Its success is based on the team members really knowing the ins and outs of the users' world. If the team is building a tool that they would use themselves, such as a project management tool for software development, then a usability audit will more likely produce a quality list of things to change.
When you have a list of changes that aren't based on what users really do, you run the risk of investing in enhancements that don't actually enhance anything. In fact, you run a strong risk of making the design worse than if you hadn't made any changes at all. (Have you ever received an update to a piece of software you loved that made it worse?)
This is why we have other research methods.
A heuristic is a rule of thumb, usually based on some research that someone has done. A heuristic evaluation takes a set of heuristics and sees if the design matches them.
For example, you might have the heuristic, "System feedback should communicate the current state of the system." You could look through your design for any instances where it's unclear what the state of the system is and recommend a change that would fix it.
Heuristic evaluations are better than usability audits because, hopefully, users were involved in the creation of the heuristic rules. However, as with usability audits, a lot of assumptions go into knowing where to look for problems and identifying them.
Some folks have the benefit of a set of heuristics that have been finely tuned over many similar projects. For example, one design firm that specializes in higher education websites has collected a set of heuristics that hold true for most university sites. They can use these rules to quickly spot problems and flaws in a design.
However, many folks try to use generic heuristics, the most popular being Molich and Nielsen's 10 Heuristics for Computer Systems, which were originally conceived in the 1980s. While it's possible to produce a list of things to change (it's always easy to rattle off what's wrong with something), as with usability audits, you don't know whether the changes will improve the design.
In fact, studies have shown the odds of making the interface worse with heuristic evaluations are pretty high for folks who haven't watched real users. This is why we like research methods that directly involve real users.
Usability testing is a more complicated type of user research than what I've listed so far, but it's still pretty easy. In its simplest form, you sit down next to a user and watch them use your design. Like many things in life, there are many flavors of doing this.
One flavor is what we call scavenger-hunt usability tests, because we ask each participant to do a specific, pre-determined task, much like you would in a scavenger hunt. We might ask them to "find the surgical options for pancreatic cancer" or "share the benefits and risks of a Whipple Procedure with the patient's caregiver." As they perform these tasks, we can tell where the design is helping them and where it's getting in their way.
We can record every place the design slows the user down or frustrates them. We can also record any place the user makes a wrong turn or a mistake. From this information, we can derive a list of things to change. With enough users and enough tasks, we can cover a lot of ground in the design.
This is better than usability audits or heuristic evaluations because we're seeing real users interact with the design. Because the participants weren't intimately familiar with the design's creation, we're essentially seeing what we've built through their eyes. They may get caught up on a term or command because it doesn't match their experience.
However, scavenger-hunt tests still require that we know what tasks to ask users to do. We create these tests from our own image of what we think the user does and needs. This means that if we're not intimately familiar with our users and their work, the technique suffers from the same flaws as the usability audit or heuristic evaluation. To compensate for that, we use a different flavor of usability test – interview-based tests.
Like the scavenger-hunt test, we ask a user to complete a series of tasks. However, unlike the scavenger-hunt tasks, we don't start with pre-determined tasks to run through.
Instead, in an interview-based test, we do what the name implies: we interview our participant. For example, we might sit down with a cancer counselor and ask them to describe the last time they had to give treatment options to a family. As they talk, we make notes on the activities they describe, the details of the situation, and the specific vocabulary they use.
Combining all of these, we then ask the user to do something very similar with the design. We use their own words (instead of the jargon we've adopted) to put the instructions together. And we use the situation they described as the background.
When we do interview-based tests, we often find huge holes in our design we hadn't seen before. Because the scenarios the users describe are more real, we can see where our design stops working in this real environment. Now we're getting closer to identifying problems and solutions that would truly improve the design.
While interview-based usability tests get us closer to what users really do and think, the technique still suffers from being done outside the real context of the user's experience. This is why field studies are the most effective way to start our user research.
A field study takes us into the user's own environment. We see what they do, how they do it, and where it's all done. We see the things they hang on their wall, the interactions they have with their co-workers or family members, and the natural chaos that exists in everyone's life.
For example, we can see what happens when people get interrupted by real-life activities, like a co-worker asking a question that needs an immediate answer. We can see that the design we've created doesn't lend itself to that kind of interruption, and users end up redoing time-consuming activities.
Our first choice is always field studies. It's the richest, most influential method. When you take a team (and especially the stakeholders) out to meet real users in their own environments, that's when you see the most change in the design. So, that's where we like to start.
The funny thing is that while they seem more expensive, field studies aren't any more expensive than well-run usability tests. However, if we can't make them happen, we then opt for interview-based tests.
For our initial studies, we stay away from scavenger-hunt usability tests until we're really sure about those top tasks. We use scavenger-hunt tests later in the project, when we're testing the fine details of the new features that were inspired by the earlier research.
We hardly ever use heuristic evaluations and never do usability audits. We just can't afford the risks associated with them. They are enticing, because they look so easy, but that's what makes them evil. You won't realize they've guided the project down the wrong road until it's too late.
There's no reason to wait to do user research. How will you start?
What does your user research program look like? Leave us your thoughts at the UIE Brain Sparks blog.