SpoolCast: Follow-up to Conducting Usability Tests in the Wild

Jared Spool

November 21st, 2008

Recorded: November 12, 2008
Brian Christiansen, UIE Podcast Producer
Duration: 29m30s | File size: 17 MB
[ Text Transcript Available ]


Back in October we had the good fortune to host Dana Chisnell’s popular Virtual Seminar, The Quick, the Cheap, and the Insightful: Conducting Usability Tests in the Wild, where she told us you don’t have to run usability tests by the book to get great value from them. Quite a statement, considering she co-wrote the book: The Handbook of Usability Testing, Second Edition.

[If you missed the live seminar, you can purchase lifetime access, for you and your team, to the recording here.]

As happens frequently, seminar viewers sent in more excellent questions than we could answer during the session, so we sat down with Dana afterwards for a quick follow-up.

In the interview, Dana gave me great answers to these viewer questions:

  • Is there a middle ground between “classic” testing and “quick and dirty” techniques?
  • How many people do you need in these “wild” tests to create enough valuable data?
  • How should you screen subjects?
  • Should designers observe “wild” tests?
  • How do you answer critics who claim quick and dirty testing is not scientific?
  • What ethical issues are there with recording test subjects?
  • Once you get this quick data, what are the next steps?

During the podcast, Dana and I talked about ways to analyze results, and we mentioned the KJ Technique. It’s a great way to get a team on the same page about the top priorities that emerge from testing. You can find more about the technique in this article.

Are you going rogue and conducting usability tests that aren’t “by the book”? Tell us your trials and tribulations in the comments!
