Two Simple Post-Test Questions

Jared Spool

March 23rd, 2006

Sometimes you do things for so long that you forget you do them at all…

This week, I was reminded of a practice we conduct at the conclusion of a usability test. After the participant(s) have finished with their tasks and filled out whatever paperwork we give them to subjectively rate the design, our test facilitator will ask two questions:

  1. What are two things about the design that you really liked?

  2. What are two things about the design that you didn’t like?

The first question helps end the session on a positive note. This is really important, especially if things didn’t go as smoothly as everyone would have liked. The participant, along with the team, needs to focus on the positive for a few moments.

The second question helps prioritize. There may have been lots of issues, but which ones really jump out? It’s possible these will simply be the last two problems the participant encountered but, in my experience, more often than not, they are two issues that really stand out in the participant’s mind. (We often let the participant “tour” the design while attempting to answer these questions, to refresh their memory of what they just experienced.)

In addition to listening to the words they use, we also pay attention to the speed of their answer. A quick answer suggests the issue genuinely stands out for them. If it takes them a long time to think of a complaint (or a compliment), we give it less weight; it could be they were just fulfilling our request to name two things.

That doesn’t mean we disqualify a slow answer outright. Often, we’ll just make a note of it and see if it shows up as an issue with anyone else. If we don’t hear it anywhere else, then it probably goes to the bottom of our “issues list” for the design.

We started doing this technique years ago and it’s just become ingrained in our practice. It actually comes from a conflict resolution technique called Stop, Start, Continue. When dealing with individuals who are in constant conflict (such as dysfunctional work relationships), the mediator asks each person to list two things they want the other person to stop doing, two things they’d like them to start doing, and two things to continue doing. In the conflict case, you end on a positive note (continuing good behaviors). When we adapted it to testing, we inverted the polarity so the positive came at the beginning.

This simple technique of asking two questions often gives us some nice insight into where the participant’s mind is and the lasting impressions they took away from the test experience.

9 Responses to “Two Simple Post-Test Questions”

  1. mike madaio Says:

    I find it interesting how you word those questions — I always try to stay away from questions about “like” or “want”, because I find that people generally have a difficult time understanding and articulating what they truly like or want.

    Furthermore, direction in usability testing training always seems to focus more on the “pay attention to what they do, not what they say” concept.

    How does actually asking them what they like affect that?

  2. Jay Zipursky Says:

    I’ve been asking for “easiest to use” and “hardest to use” things on my written post-test questionnaire. I think I got these questions from Jeffrey Rubin’s usability testing book, and I think they are more in line with studying usability (as opposed to, say, usefulness, which I imagine Jared’s questions may uncover).

    It’s always interesting when they list something they obviously struggled with as “easy”. Sometimes it’s because it’s simply better than what they have today, and other times it seems to be because the learning curve was steep but they think it will be easier in the future.

    I do like noting the speed of their responses. I may switch to getting a verbal response.

  3. Artie Pajak Says:

    I’ve been questioning the value of the data I get from post-test questions because, like Jay said, people will say it was easy even though they’ve obviously struggled. Is it just that they don’t want to hurt someone’s feelings or what? I’m always careful to state (sometimes several times at different points) “Don’t worry about hurting anyone’s feelings”, “I didn’t design this”, “Your responses will be anonymous”, etc. but they are still reluctant to share “bad” news verbally or in writing. Maybe it just depends on the user base.

    I like Jared’s questions, because they get at data that isn’t obvious from observation. Asking people whether something is easy to use after you’ve watched whether or not it was easy for them to use is almost like asking a politician if they support children. What’s the point? You already know what the answer is.

    By asking them what they like, it seems like one of three things can happen: (1) it gets at the same data you observed (because people often lump their likes with ease-of-use); (2) it gets you nothing but useless subjective design ideas (users are not designers); or (3) it takes you on a path you weren’t expecting. The first two are pretty easy to discount. The third can give you some nuggets you can use to prop up the team or make them realize what’s most important, and, if a pattern starts emerging, it can give you some different insight into what to keep or get rid of.

  4. Eric Smith Says:

    These summary questions are also something I regularly ask every participant towards the end of a test session. Aggregating the “issue” findings across all participants and ordering them by frequency provides potent backup data for design recommendations. Invariably, one or two of the criticisms suggest improvements that the design team may not have previously considered.

    In my practice, I administer the questions slightly differently, asking for the things participants “liked” and “liked least.” Phrasing the negative question this way gets around any objection a participant might have to inventing something they didn’t like when, in actuality, they liked everything about the product/website. In my experience, most participants have little difficulty describing those less positive aspects. For a more complex website/product, I further modify the questions and ask participants for three things liked and liked least, to increase the amount of collected data and possibly expose more repetition in the feedback.

  5. Jared Spool Says:

    Mike wrote:

    direction in usability testing training always seems to focus more on the “pay attention to what they do, not what they say” concept.

    How does actually asking them what they like affect that?

    We ask the questions after we’ve seen them work with the design. We compare the answers to the observations we made. Often, they match, thereby giving us perspective into what the user perceived as well as what they did.

    In the cases where they don’t, we pay more attention to what they did than what they said. However, it tells us that what we saw doesn’t match what the user perceived, which can help us down the road when we’re trying to evaluate the severity of the issues.

  6. E. Ears Says:

    Jay said:
    It’s always interesting when they list something they obviously struggled with as “easy”.

    This can also be because they don’t want to look stupid, even though no one else would think they were. Some people assume a task is supposed to be easy, or would be considered easy by others, so they call it easy even though they didn’t find it so.

    Fear of looking stupid affects a lot of people’s behaviour!

  7. UIE Brain Sparks Says:

    The One-Minute Test

    A simple test we use to tell if everyone just sat through the same meeting.

  8. Jeff Bridgforth » Some Best Practices from Jared Spool and UIE Says:

    […] Two simple post-test questions – these are two questions they ask users after a usability test. I could see using these questions with others to evaluate a current site or in the process of a redesign. The questions are: “what are two things about the design that you liked?” and “what are two things about the design that you did not like?”. […]

  9. Meghan Ede Says:

    I’ve seen those questions for years and I’ve always had trouble with the answers.

    It seems to me that study participants often put on their “design hat” when answering and I end up with a lot of: “I liked the color of the UI, I found the features intuitive, you should change the color to blue, …” That is, I get feedback on the visual design or on the interaction design, instead of feedback that relates to the user’s own needs or context.

    In order to get around these problems, I’ve been using wrap-up questions more like:

    1. Thinking about everything you’ve seen today, what, if anything, would you like to use?
    2. What things would you rarely or never use in your own…? Why?
    3. What things did you want to use, but couldn’t? Why? What could we change so that you could use those things?
    4. Let’s say you are really trying to convince a friend to get this. How would you describe it?
    5. Now let’s say you really wanted to convince a friend to NOT get this. How would you describe it?
    6. In your opinion, who would be the ideal user for this product?

    The last three questions (#4, #5, and #6) often get some really interesting responses. In effect, they get at the same thing (which features met, or failed to meet, the user’s needs) but somehow make the feedback less personal and therefore more truthful.

    They also move the conversation away from “like” and towards “utility”. When trying to convince someone else to get the product, study participants will list what they see as the product’s best attributes. When words like “fast”, “easy”, “efficient”, and “useful” turn up, the product is generally going in the right direction. When phrases like “good color scheme” turn up, it’s often because there are few really useful things in the product.

    I’ve had studies in which participants told me that they couldn’t think of a positive thing to say to convince a friend to get the product, or that they wouldn’t inflict it on their friends. Ouch! It’s also gone the other way: participants couldn’t think of any features in the product that weren’t useful.

    The last question – about the “ideal user” – often reveals things we couldn’t get any other way. I’ve had studies in which the participants were really enthusiastic about a product, but for “someone else”. They had nothing negative to say about it, it worked well, but it didn’t meet their own needs. This often happens when the product provides solutions that are too simplistic. The app is “easy to use”, but not particularly “useful” for this highly skilled participant.
