Published: Mar 07, 2006
The silence was deafening. The corporate Vice President of Marketing sat at the head of the table, with the rest of the room populated by the development team. Nobody was saying a word. They were just letting the question hang in the air: "What did we do wrong?"
It was the right question. The design recommendations seemed solid, yet sales had dropped 23% immediately after the changes were made. The recommendations came from a well-constructed set of usability tests. Everybody thought they were clear on where the problems were. Now they weren't so sure.
What they didn't know -- what they learned later -- is they had done everything right, almost. They'd recruited the right users, facilitated the test properly, and analyzed the results effectively. There was only one problem: the tasks didn't match what real users do with the site. That one problem was the source of their current pain.
The team had used a very traditional approach to task design, carefully crafting a set of scavenger-hunt tasks. These tasks ask users to find specific items on the web site.
When we first started testing on the web, we used them too. Some of our first tasks were "What's the cheapest hotel on the monorail at Walt Disney World?" or "Who is Arizona's Third District Congressional Representative?" They worked great. We could easily see if people could find the information on the site. We learned a lot about how the site worked.
Scavenger-hunt tasks work best when you've thoroughly researched the types of things people look for on the site. Our tasks came from extensive interviews and field research. Unfortunately, many times, teams just make up their tasks without doing the research. That's where the problems begin.
For us, the problem came into focus in the early days of the web, when we had the opportunity to work with one of the leading search engine sites. They had just completed a usability test series put together by their advertising agency. The agency asked folks who worked in the Wall Street district of New York to come to the Madison Avenue offices to participate in the tests.
As each participant filed into the room, the researchers sat them down in front of a machine with a browser and gave each of them the same task: "Pretend you're interested in Leonardo DiCaprio. Find something out about him you don't currently know."
Now, imagine what you would do if you were asked to perform that task. If you have no real interest in Mr. DiCaprio's life, you might enter his name into the search box and declare the task done on the first page you find with any real substance. The task would only take a few moments to complete.
However, someone with a deep interest in Leo's world -- maybe one of his stalkers, let's say -- might spend more time on the task. They would dismiss information they already knew and home in on content that was truly new and unique. We'd expect to see completely different behavior from this person.
Passion for a subject changes how participants invest in usability test tasks. That change can have profound effects on the results and the recommendations produced by the team.
One of our first attempts at countering the effects of DiCaprio tasks was to ask participants to role play. Role playing is a time-tested psychological technique for putting people into a context more conducive to gaining the information you need.
For example, we wouldn't just ask our participants to find the cheapest hotel on the monorail. We'd first explain how we wanted them to pretend they had a family with a six-year-old who loved trains. Then we told them we wanted them to pretend they were planning a trip to Disney for the kid's birthday. Finally, we said that staying at a hotel on the monorail would be a real treat for the train-loving kid and was the desired outcome.
Imagining a trip to Disney isn't hard for many people. It's harder to get into the role of shopping for the ultimate retirement fund. We could only take role playing so far.
We were quick to see that people who had passion for the tasks behaved quite differently from those who didn't. People with passion demanded more from the content on the site. They came to the task with more background, and they wanted to see more to arrive at the right outcome.
Our challenge became to control the passion in our testing process. That's when we started experimenting with interview-based tasks.
In interview-based tasks, the participant's interests are discovered, not assigned. Unlike scavenger-hunt tasks, the facilitator and participant negotiate the tasks during the test itself, instead of proceeding down a list of predefined tasks.
Because each task is drawn from the experience and interest of each participant, no two participants perform exactly the same tasks. As we're looking for the usability problems that pop up, we're also looking for how the user approaches their problem and the level of detail they require.
Surprisingly, we often see multiple participants run into the same problems. We find they rate the site consistently. Even though their tasks are radically different, they have very similar experiences.
We find the wording of their self-created tasks fascinating. As each participant designs their own tasks, they are telling us how they think about the content on the site, giving us insight into the words we choose for links and how we organize the material.
Before we can sit down with a test participant and create an interview-based task, we need the right participant. The process starts when we begin recruiting the participants for the study.
It's important to quickly identify candidates who have a passion for the subject matter we're evaluating. For example, when we were testing an e-commerce site that sold camping and hiking gear, we looked for people passionate about those activities.
We've found an open-ended interview technique works best for our recruiters. As the recruiter talks with each candidate, they probe about the candidate's knowledge on the subject and their experience with the material.
For example, when we recruited for a retirement planning site, our recruiter discussed retirement plans with each candidate, asking broad questions about their experience with retirement savings.
By starting with broad questions, the recruiter gets a good sense of whether this is a subject area the candidate is passionate about or one they haven't given much thought to. A practiced recruiter can easily determine whether a candidate is right for the study or won't meet the team's needs.
When facilitating the test, we add 30 minutes for conducting the interview and creating the tasks. The goal of the interview is to explore the participant's passion. The result is one or more tasks for the participant to perform with the design.
Starting with similar questions to those the recruiter asked, the facilitator probes the participant about their experience with the subject matter. After learning the details of their background and knowledge, the facilitator guides the discussion to current interests and tasks the user would like to accomplish. It's at that point that they jointly craft the task description.
In crafting the task description, the facilitator wants to write down the words the participant used to describe their goals. They also want to clarify what "done" means, so that it's clear to everyone if and when the task is completed. Once they've created the tasks, the facilitator directs the participant to try them, just like in other types of usability tests, looking for obstacles to attaining the participant's objective.
With interview-based tasks, participants take us down paths we never expect to go. We've learned that users often have very different ideas of what is necessary and what is important. We regularly see obstacles that, once eliminated with a quick fix, lead to dramatic improvements in the design. Terminology emerges that describes user needs in ways we hadn't previously considered.
We're happy to have interview-based tasks as one of the techniques in our toolbox. It's ideal when we don't have the resources available to do a thorough task analysis using more expensive field research methods. Alongside scavenger-hunt tasks, 5-second tests, inherent-value testing, and traditional usability tests, it gives us one more method to get the critical information teams need to build designs that truly enhance the quality of the user experience.
To dig deeper into interviewing, you'll want to check out Steve Portigal's UIE Virtual Seminar - Deep Dive Interviewing Secrets: Making Sure You Don't Leave Key Information Behind.
Steve will show you the art of asking the question. He'll help you prepare your team for any opportunity, be it formal user research or less structured, ad-hoc research. He'll also give you tips on how to work with your stakeholders and executives, who may also be meeting potential customers and users, so they know what to ask and how to listen — integrating their efforts into the research team. Visit the web site for more information.
Have you tried interview-based tasks? What insights did you gain from them? How else have you checked the assumptions that go into your work? Join the discussion by submitting a comment on UIE's Brain Sparks blog.