UIEtips Article: A Counter-Intuitive Approach to Evaluating Design Alternatives

Jared Spool

May 19th, 2008

Every week, teams approach us looking to conduct their first usability study. Having spent months (sometimes years) arguing the value of a study with their management, they’ve finally received the necessary approval.

Under the guise of making this study as valuable as possible, these teams make the novice mistake of trying to do too much. Their ambitious approach puts the project in jeopardy. A failed usability study can send a message through the organization that the technique is too expensive and difficult to do well.

In this week’s article for our email newsletter, I talk about a team who wants to evaluate a bunch of design prototypes with their first test, resulting in far more work than they originally realized. Instead, I propose a counter-intuitive way for them to get the necessary feedback without having users compare each alternative.

You can read my article here.

Have you needed to evaluate multiple designs with limited time and budget? What would you have proposed for our clients?

Managing usability studies on a shoestring budget is just one of the great full-day topics we’ll have at the User Interface 13 Conference, this October 13-16, in Cambridge, MA. If you register by Tuesday, May 20, you’ll get a great registration price and a Flip Ultra Video Camera. See UI13 Conference site for more information.

7 Responses to “UIEtips Article: A Counter-Intuitive Approach to Evaluating Design Alternatives”

  1. Roy Zornow Says:

    Just wondering how you would structure the matrix to rate differences which depend on an upstream alternative. For example, let's say you have two multi-select tools. Assume one is very usable, but can only be accessed via a relatively unusable drop-down. Task completion suffers. How do you compensate? Test every permutation? Test each feature individually? Both of these seem to have budget implications.

    thanks,
    Roy Zornow

  2. Livia Labate Says:

    I’ve found that in situations where you are trying to test various alternatives, it is rare that you end up selecting one as the winner or one as the loser (sometimes there is a favorite, but it’s rare in my experience that the favorite is clearly “better”).

    In most situations where I’ve had to test under these circumstances, I’ve approached it as if I was testing each one as “the solution”. You find positives and negatives from all your alternatives (and hopefully you throw in a competitor for good measure, if that’s available/relevant).

    Ultimately it NEEDS to be a design team decision on how to refine towards a final solution — having an objective evaluation of the separate alternatives allows designers to create a stronger solution with the positive attributes from each version. Hopefully then, you can test again to validate how well they come together.

    The biggest risk with structuring user sessions where users are presented with multiple alternatives is that you are inherently making the USER choose, which is problematic because 1) it’s hard to determine usability issues when you’re framing the situation as a preference/judgment call, and 2) forcing users to choose is a cop-out of the designer’s responsibility.

    (waits for booing…)

    Having said that, I don’t oppose what you’re proposing, Jared, but I have to say it sounds complicated, and if these folks are testing something for the first time, there is too much build-up, so you probably want to simplify as much as possible. They’ve been waiting for so long that their expectations have probably already exceeded any realistic scenario of usability testing outcomes. Instead of trying to live up to the idealistic notion of what this test will accomplish, I’d say make it very simple and focused so they not only get good lessons/learnings from the study to inform their design decisions, but also learn how valuable more usability testing can be for them (but that it isn’t the solution for world hunger).

  3. Daniel Szuc Says:

    “You find positives and negatives from all your alternatives” – Agree.

    Before testing, and as part of the planning, did you see elements in the Home Page design alternatives that were clearly better than the original Home Page design?

    Asking this because we find that companies new to usability will often want to put every design in front of users to test it, when an expert opinion, a usability review, or referring to existing best practice may already provide adequate insights.

    It’s nice when companies become more receptive to this thinking without having to usability test every time, although I understand where usability tests are leaned on to help settle internal arguments or to help teams move a design forward.

  4. Thomas Says:

    I agree with Livia and Daniel. It sounds a bit too complicated, so why not involve an experienced external designer or UX professional to refine the number of alternatives (heuristic evaluation)… An external person could also help with what may be internal disagreements…

    I also noticed that when testing a design we often find out that the problem is not what we expected. I think it can be dangerous to rely on accurately defined design solutions rather than designs which are supposed to evolve…

  5. Steve Portigal Says:

    Interesting discussion. We’re about to go into the field to look at different alternatives, but the question isn’t which one is best (they think they already know that) but how much the different alternatives vary. There are limitations and challenges to the question, the situation, the approach, etc., but we’re taking our shot at it.

  6. Daniel Szuc Says:

    “why not involve an experienced external designer or UX professional to refine the number of alternatives (heuristic evaluation)… An external person could also help with what may be internal disagreements…”

    Can depend on the company landscape, who teams need to convince and what their knowledge of the usability approach is. Sometimes the voice of the customer is much stronger than the hired expert.

    We have sometimes conducted a UX starter activity that may not be entirely the right approach but that we know will have longer-term benefit. Also, as the company you are working with learns more about usability and the problems they are trying to solve, you can have deeper discussions about various approaches (without confusing them and yourself in the process :)

  7. Mark Baartse Says:

    It also seems to ignore the political elements. It sounds like a large, conservative organisation (that IS an assumption), and in such organisations, in my experience, there’s often low trust of their own staff. The process is still quite heavily dependent on the staff’s input and discretion, which might be a negative mark in the eyes of the management who are watching this test closely.

Add a Comment