Usability Tools Podcast: Inherent Value Tests

Jared Spool

September 17th, 2007

Recorded: September 17, 2007 from the studios of UIE
Brian Christiansen, UIE Podcast Producer
Duration: 25 min | File size: 14.5 MB

Each week in our Usability Tools Podcast, I will be sitting down with UIE’s Managing Director, Christine Perfetti to discuss tips and tools for improving your site’s user experience. The goal of our weekly podcast is to share some of the most important findings from UIE’s research on web design and usability.

This week, Christine Perfetti asked me about one of UIE’s most valuable usability testing techniques, Inherent Value Tests. Inherent Value Testing tells the team how well a web site communicates the value the designers have built into it. In this podcast, Christine and I discuss:

» How inherent value tests measure how well your site communicates your product’s value
» How inherent value tests differ from traditional usability testing techniques
» How to recruit users for this technique
» How to combine inherent value tests with other types of tasks

As always, we’re very interested in hearing from you. Do you have questions or comments about this episode? Do you have suggestions for future episodes? We want to know. Please leave a comment below or email us directly at mailbag@uie.com.

UIE’s Latest Research: If you’re interested in the topics Christine and I discuss in the podcasts, I highly suggest you sign up for our free newsletter, UIEtips, to read our latest usability and design research as soon as we publish it. We’ll also notify you in UIEtips when we publish new podcasts.

New: Survey and listener drawing!
We would like to give you and your co-workers free admission to our next Virtual Seminar program, with full, lifetime access to the archived program as well! All you need to do to be eligible is give us your feedback on your podcast listening experience. Fill out the following survey, and each week we’ll randomly select one survey participant to receive free admission to the next UIE Virtual Seminar and Archive, a $169.00 value! We appreciate your input!

Participate in our survey to win!

5 Responses to “Usability Tools Podcast: Inherent Value Tests”

  1. Alexander Says:

    The point of least astonishment: how do you anticipate how many people will be needed to reach the ‘point of least astonishment’? Knowing this would enable better planning for how many people to recruit!

  2. Alexander Says:

    For the interview based tasks in the ZipCar example, it sounds like you basically have them do the same task but you present it differently according to how they expressed themselves in the initial interview. Or, is it totally open and could you have all 6 people in the second group do totally different things?

  3. Jared Spool Says:

    Alexander asked a couple of great questions. First:

    The point of least astonishment: how do you anticipate how many people will be needed to reach the ‘point of least astonishment’? Knowing this would enable better planning for how many people to recruit!

    Great question. A few years back, Will Schroeder came up with an algorithmic approach to predicting how many users you need for a study to reach the point of least astonishment: the point where you are no longer learning anything new from the study.

    Basically, you track how much you’re learning from each test by quantifying the new things you learn. As you continue to run tests, you should start to see an increase in the number of redundant findings and a decrease in the new things you learn. If you plot these on a curve, you can extrapolate where the curve will approach zero.

    The formula is helpful, but has a flaw: you have to run tests to determine how many tests you need to run. What we’ve found in practice is that experienced usability practitioners get a sense of how many tests will be useful. If you’re not sure, err on the side of too many. If you finish the study thinking you saw new things in every session, without any real decrease in the rate at which they were coming to you, then you run more tests in the next study. Like anything useful, experience helps you work it out.

    For the interview-based tasks in the ZipCar example, it sounds like you basically have them do the same task but you present it differently according to how they expressed themselves in the initial interview. Or, is it totally open and could you have all 6 people in the second group do totally different things?

    As with all interview-based task techniques, you’ll base the range of tasks for your participants on the range of activities that emerge from the interviews. If all your users do essentially the same things, just with their own terms, you won’t see a variety in the tasks. If their needs are more diverse, your tasks will be more diverse.

    One of the attributes of interview-based tasks is that there may be tasks in your study performed by only a single participant. That makes it hard to produce statistics like, “7 out of 10 users completed the registration task.” You don’t want to use interview-based tasks if that type of quantification is important to your research.

    However, for the purposes of Inherent-Value Testing, quantification is not the primary objective. Instead, we’re trying to learn how well the design communicates its value, which is a qualitative analysis. So it’s okay if each user performs different tasks.
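[Editor’s note: The curve-extrapolation idea Jared describes in his reply can be sketched in a few lines of code. This is not Will Schroeder’s actual formula, which isn’t given here; it is a minimal illustration under assumed details: fit an exponential decay to the count of new findings in each session via log-linear least squares, then extrapolate to the session where you’d expect fewer than one new finding. The function name and the one-finding threshold are the editor’s own choices.]

```python
import math

def sessions_needed(new_per_session, threshold=1.0):
    """Estimate how many total sessions are needed before the expected
    number of new findings per session drops below `threshold`.

    Fits an exponential decay to the observed per-session counts of new
    findings (log-linear least squares), then solves for the session
    index where the fitted curve crosses the threshold. Needs at least
    two sessions of data, and counts must be positive (log of zero is
    undefined).
    """
    xs = list(range(1, len(new_per_session) + 1))
    ys = [math.log(c) for c in new_per_session]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    if slope >= 0:
        return None  # no decay observed yet: keep running sessions
    # Solve intercept + slope * s = log(threshold) for session count s.
    return math.ceil((math.log(threshold) - intercept) / slope)

# Example: 12, 8, 5, 4 new findings in the first four sessions
# extrapolates to roughly 8 sessions before new findings dry up.
print(sessions_needed([12, 8, 5, 4]))  # -> 8
```

As Jared notes, the catch is built in: you need a few sessions of data before any extrapolation is possible, which is why experienced practitioners usually rely on judgment instead.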

  4. Alexander Says:

    Very helpful answers, thanks Jared!

  5. Usability Testing Content | a.jill.ity Says:

    [...] Tests. Christine Perfetti and Jared Spool talk about this method in a couple of articles, and in a podcast.  They say you’d want to run an Inherent Value Test when your team needs to know how well a Web [...]
