April 30th, 2012
The other day, I wrote about how choosing the right words in your tasks makes a critical difference to the outcome of your user research. Mike Pauley wrote a comment, asking how to make sure you’ve got the right words:
Great timing on this, as I am dealing with the same issue with a test I’m currently running. Do you have any guidelines for how to approach question wording? Other than running the same test multiple times with different wording, how do you know when task failure is the fault of your IA or the way the question is worded?
A great technique is to adopt an interview-based task design approach. You start by interviewing your participants to ask them about their previous experiences and current needs.
For example, if we were testing the IKEA site's navigation, we'd recruit people who either have purchased IKEA furniture or are likely to do so in the near future. Then, during the first 15 to 25 minutes of the session, we'd interview them about their furniture buying process and desires.
In that interview, we'd get them to use all their own terms and define their own tasks. If it's something they've done in the past, we ask them to re-enact it for us. If it's something they're planning in the near future, we ask them to show us how they think they'll go about it.
The trick is that we let them tell us the words they use. After you've done this with a few users, it's likely you'll home in on some generic ways to formulate the tasks that don't directly influence the tasks' outcomes.
Back in 2006, I wrote about interview-based tasks and recorded a podcast about how to do them. Interview-based tasks have become a very important part of our usability toolbox, specifically to deal with this problem.