Published: Nov 15, 2012
When designing for web applications, a well-implemented analytics backbone can provide incredibly useful insights. However, getting there can be a challenge for many organizations.
One of the most famous projects we’ve worked on resulted in a $300,000,000 increase in annual revenues for a major e-commerce site. We achieved that, in part, by studying the site’s analytics. However, it wasn’t easy because the team had made several common mistakes.
Our discovery started with a pattern that emerged from usability testing. As we were watching customers finish their product selections and start down the path of the checkout system, we saw them getting stuck in an early stage that no one had anticipated.
Our initial dive into the analytics showed a huge drop-off between people putting products into their carts and those who started the first screen of the checkout process. The team had assumed these people were abandoning their carts because they had no intention to buy. We conducted usability tests to help us understand why that might be.
What emerged from our tests was that during the checkout process, shoppers were getting stuck on the account sign-in page that came up first. This was our first uncovered mistake: the team had never instrumented the sign-in page to call the analytics system's capture functions, so no data was ever collected for it. It wasn't until we asked why the page wasn't showing up in the clickstream data that we discovered it was missing.
The Forgot Your Password page and its confirmation message, it turns out, weren't instrumented either. We instrumented them and waited a few weeks for the data to populate. Suddenly, we discovered that the account sign-in page was the third most popular page on the site, after the home page and shopping cart pages.
Unfortunately, we also discovered the Forgot Your Password page was now the fourth most visited page. That wasn’t seen as good news.
Page-based Analytics Create More Problems than Insights
Tools like Google Analytics or Webtrends are very powerful for content sites, where the only things a user does are visit a page, click through to another page, or leave the site. These tools can deliver insights on which pages people visited and in what order.
However, web-based applications are more complex. When implemented with something like a model/view/controller approach, it's not uncommon for one "page" to serve many purposes. A single URL can change dynamically depending on the view and model involved. In a traditional analytics tool, that one page shows up as a single undifferentiated entry, giving no insight into what is actually going on.
Similarly, these tools can’t detect non-page interactions, such as lightboxes or pop-up dialogs. Sophisticated user interactions go unreported.
The result is that the team can't trust the reports from the tool. Once the team loses confidence in their reporting data, its value is greatly diminished. A different approach is necessary.
Instrumenting Activities, Not Pages
We've found the teams that get the most out of their analytics are the ones that build their own systems. Instead of relying on the HTTP logs provided by the web server, they capture their own events: the ones they believe represent the most important activities to study.
Moving to an activity-based logging system opens up a world of opportunity for analytics. Now you can capture specific transactions and data requests. You can capture which button on the screen the user clicked next. Even rich, dynamic interactions become measurable.
This approach can extend to multi-device and mobile apps. Since it’s no longer tied to pages, but to team-defined discrete events, the team can collect both the state change and the data involved.
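The article doesn't prescribe a particular implementation, but the core of such a system is small. Here's a minimal sketch of a team-built activity logger, in Python for illustration; all names (`EventLogger`, `signin_attempt`, the field names) are hypothetical, not from the original project:

```python
import time

class EventLogger:
    """Sketch of a team-built activity logger (all names are illustrative)."""
    def __init__(self):
        self.events = []

    def log(self, user_id, event_name, **data):
        # Each event records who did what, when, plus any contextual data,
        # independent of page loads.
        self.events.append({
            "user": user_id,
            "event": event_name,
            "timestamp": time.time(),
            "data": data,
        })

logger = EventLogger()
# Two sign-in attempts on the same page -- invisible to page-based tools
logger.log("u123", "signin_attempt", username="jane", success=False)
logger.log("u123", "signin_attempt", username="jane@example.com", success=True)
```

In a real deployment, `log` would send the event to a collection endpoint rather than an in-memory list, but the shape of the record is the important part: a named activity plus its data, independent of any page view.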
For the e-commerce team, we could capture when users had trouble remembering their username and password by recording each combination they tried to authenticate with. Even though they weren't leaving the page between attempts, we could see that the average user who had trouble tried multiple usernames. We could see many users still tried email addresses, even though the system didn't accept those as account identifiers. This gave us insights we couldn't get from a page-based analytics tool.
Some of the most important activities to capture are the moments when the user receives an error message. Because client-side validation and asynchronous communication don't trigger page loads, the nature and frequency of each error would otherwise remain a mystery to the team.
Capturing each time an error is displayed, along with the data that generated it, can lead to interesting insights. Knowing which errors are most frequent can help a team discover rough spots in the design, such as poor labeling or instructions. Detecting a sudden increase in a certain error message might mean that a recent change broke an important interaction.
Recently, a team building a system administration application asked us about the best way to structure their menus. They told us their users, system admins, want immediate access to the more than 750 functions they've incorporated into the design. Some of their menus are six levels deep, which frustrates their users. Their question, in essence, was, "How do we structure the menus when everyone wants to get everywhere instantaneously?"
When resolving a problem like this, the team needs to answer two key questions: "What are the users doing today?" and "What are the users trying to do?" Well-implemented analytics can help identify what users are doing today with the design and bring clarity to what the team hears from their users about what they are trying to do.
Capturing the movement around the application immediately yielded insights. The team recorded every time a user went to a new function, keeping track of which functions they’d just left. In essence, they were measuring the desire paths through the site.
The result was a giant transition matrix, with every starting screen across the top and every destination function down the left side. The team could see how frequently users moved to each function from each screen.
Though the users claimed they wanted to get from anywhere to everywhere, the data made it clear they weren't doing that. A small group of starting screens was far more popular than the rest, by several orders of magnitude. In fact, in the initial weeks of data collection, more than 20% of the functions were never used by any user, leading the team to wonder whether those functions were needed at all.
Studying that small group of starting screens, the team saw that users regularly visited specific functions from them. Looking at the most popular transitions, the team could identify places where a big, bright button on the screen would serve better than burying the function in a menu somewhere. In some cases, it made sense to combine two screens' functionality onto a single screen, since users frequently performed the two tasks together.
These are only a few examples of the value teams get when they invest in building an analytics system for the critical activities in their application. Because off-the-shelf solutions aren't tuned for the activities built into a specific application, the team needs to make a commitment to build their own.
It's true that building the analytics platform in from the start is the best approach. However, even a system retrofitted onto specific functionality is worth the effort.
In addition to coding the data collection, the team will need to build the data warehouse to store the results and create a reporting solution to extract the data. Here’s where off-the-shelf business intelligence tools can be useful. (You’ll get to love Excel’s pivot tables, for sure.)
In the end, the insights the team can bring to the table will make for dramatically improved designs. That’s what makes it all worthwhile.
Find out more about the retailer that increased their revenues by $300 million.
Do you use Page-based Analytics? Or have you implemented your own Analytical system? What insights have you gained? Tell us about it on our blog.