Determining Usability from Analytics

Jared Spool

March 8th, 2006

Hats off to Ziya for pointing me to MapSurface this morning. It’s an interesting analytics tool, competing for the same mindshare as a plethora of other tools, such as Google Analytics and even MeasureMap.

MapSurface’s interesting take is that it reports results by layering them over the existing page using some very clever AJAX code. Nicely done. Seeing the link report mapped out on top of the page adds clarity that isn’t afforded by a mere tabular report of the same information.

However, like all of today’s available analytics tools, it can’t be much help in measuring whether a page or site is usable.

As I’m sure you’re aware, analytics can only measure what users do. They can’t measure what users are trying to do.

To assess the usability of a page or site, you need to know two things: (1) What the user wants to accomplish and (2) whether they accomplished it or not. Today’s analytics tools can’t tell you either of these things, making it impossible to use them to assess usability.

For example, in 1996, our observational studies determined that about 1 out of every 7 people who tried to find a hotel at the Walt Disney World theme park on Disney.com accidentally looked in the Disneyland theme park without realizing it. It’s still pretty much that way today. Would MapSurface (or any other analytics tool) point out that 14% of users were lost or confused? While it’s obvious in our studies, it’s impossible to determine from inspecting the analytics reports.

Tools like this can help support observational data. For example, if we see certain behaviors in our studies, we can turn to the analytics to see if confirming patterns exist. However, that’s tricky since we can’t control the sampling process. In the Disney example, there’s no way to only look at the click patterns of people intending to stay in a WDW hotel versus everyone else who comes to the site.
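The intent problem can be sketched in a few lines. Using hypothetical clickstream records (the visitor IDs and page names below are invented for illustration, not real Disney.com data), the only “success” rate an analytics tool can compute lumps together every visitor, whatever they were trying to accomplish:

```python
# Hypothetical clickstream: (visitor_id, page) pairs, as an analytics
# export might provide them. All names here are illustrative.
events = [
    ("v1", "home"), ("v1", "wdw-hotels"), ("v1", "booking-confirmed"),
    ("v2", "home"), ("v2", "disneyland-hotels"),  # lost? just browsing? the log can't say
    ("v3", "home"), ("v3", "wdw-hotels"),
]

visitors = {vid for vid, _ in events}
booked = {vid for vid, page in events if page == "booking-confirmed"}

# The denominator necessarily includes people who never meant to book.
# Intent is simply not a field in the log.
naive_rate = len(booked) / len(visitors)
print(f"{naive_rate:.2f}")  # prints 0.33
```

Whether 0.33 is a triumph or a disaster depends entirely on how many of those three visitors wanted a WDW hotel in the first place — which is exactly the information the log doesn’t contain.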

Someday, I believe we’ll see tools that can help with these problems. We’re just not there today.

[If this is a topic that interests you, I highly recommend you check out what's happening at the upcoming E-Metrics Summits. Jim Sterne's put together a great lineup this year. I wish I could go.]

5 Responses to “Determining Usability from Analytics”

  1. Hiten Shah Says:

    Jared,

    We at Crazy Egg are trying to make an effort to bridge the gap between analytics and usability, at least as much as we can without being right in front of users. Email me at info at crazyegg.com, I’d love to provide you with a further look into what we have going on.

    Hiten

  2. Jeremy Kraybill Says:

    While I agree that analytics tools can’t read the user’s mind and tell you what their desires were, they are pretty useful as a tool for validating hypotheses after releasing software. You do some in-person usability studies, you realize that users are getting confused in one of your site flows and are therefore dropping off / not buying / not completing a form, so you create a hypothesis: adding help text here, a larger green button there, less text here, etc. are going to confuse fewer users and improve flow output. You validate it with post-development/pre-release usability tests, plus post-release analytics comparisons. I find that analytics tools + in-person usability tests give you a pretty good picture of what is happening.

    In the example you cite, usability observation tests could have uncovered the problem with site confusion, and you could have tested software improvements with analytics: less confused users would result in higher booking rates in the hotel flow, and lower dropout rates in both the hotel and theme park flows. You also could have measured rate of booking for users who jumped from the hotel flow to the theme park and watched that change with rollouts of software changes.

    Saying it’s impossible to use analytics to assess usability is far-fetched; analytics are a piece of the picture, and should be used as such.
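The before/after comparison Jeremy describes can be made less vulnerable to day-to-day noise with a standard two-proportion z-test. A minimal sketch, with invented booking counts (and note the reply below: even a significant difference doesn’t rule out a shift in who happened to visit):

```python
from math import sqrt

# Hypothetical booking counts before and after a UI change (invented numbers).
before_booked, before_visits = 420, 10000
after_booked, after_visits = 505, 10000

p1 = before_booked / before_visits
p2 = after_booked / after_visits

# Pooled two-proportion z-test: is the rate change larger than chance noise?
pooled = (before_booked + after_booked) / (before_visits + after_visits)
se = sqrt(pooled * (1 - pooled) * (1 / before_visits + 1 / after_visits))
z = (p2 - p1) / se

print(f"z = {z:.2f}")  # |z| > 1.96 suggests significance at the 5% level
```

Even then, the test only says the measured rate change is unlikely to be noise; it says nothing about whether the change came from the redesign or from a different mix of visitors.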

  3. Jared Spool Says:

    Jeremy wrote:

    In the example you cite, usability observation tests could have uncovered the problem with site confusion, and you could have tested software improvements with analytics: less confused users would result in higher booking rates in the hotel flow, and lower dropout rates in both the hotel and theme park flows. You also could have measured rate of booking for users who jumped from the hotel flow to the theme park and watched that change with rollouts of software changes.

    If you could isolate those who intend to book a room at a WDW hotel from everyone else, then the analytics would be very helpful. However, millions of people visit Disney.com every day, with only a segment of them ready to book a hotel at the theme park. Plus, Disney spends millions on promoting their resorts. How do you know that the increased booking rates are due to the changes to the site and not to something else? Maybe, on that particular day, the number of people who aren’t interested in booking a hotel just dropped off and artificially raised the percentages.

    We have to be very careful when drawing inferences from analytics data that we’re not just resorting to wishful thinking and seeing what we want to see. That just hurts us in the long run.

    Until analytics packages can isolate streams by the intent of the user, I think they will continue to be very limited in their usefulness.

  4. Los textos de qweos.net» Blog Archive » El accidente, la crónica, los libros y MapSurface Says:

    [...] Jared Spool, de UIE Brain Sparks [...]

  5. Ace your user assistance - by Becky Lash, Epic Trends » Blog Archive » Web logs have limits Says:

    [...] Determining Usability from Analytics [...]

Add a Comment