Originally published: Jan 01, 1999
Sherlock Holmes once deduced the solution to a case when he noticed that something didn't happen: a dog didn't bark at night, proving to Holmes that there was no intruder.
During usability tests, everyone notices when a user fails because a feature breaks down. We don't need Holmes to solve these! But when expected things don't happen, or illogical things do happen, it can mean that developers didn't understand what the users needed, or how they would use the product.
We try not to miss these non-events. We find them in two ways:
- Establishing what we expect the users to do beforehand.
- Looking for behavior patterns that "don't make sense."
Before testing, we ask the developers how they expect users to accomplish the tasks and what features they expect users to use. In prototyping, we only build what we expect the user to use. When users don't use a feature in the prototype, it's a clue that something didn't happen.
In one test, the web site developers expected users to choose a path to information by selecting an option that classified their work: manager, webmaster, or designer. Early in the testing, users consistently chose a fourth option, the "documentation" link, not what the developers wanted or expected. It did the job without forcing users to choose.
In successive revisions, though we made the fourth option less visible, users continued to choose it. Until we removed the fourth option, we didn't think about why the users were not choosing one of the three preferred options. At that point, users began to balk; some even refused to proceed.
Only at this point did we discover the fundamental problem: users did not want to characterize themselves or their work into limited categories. If we had paid more attention to what users were not doing, instead of trying to force them to do it, we would have discovered the problem much sooner.
We tested a prototype application that let network managers perform an operation two different ways: by using a right-mouse menu or by directly dragging and dropping. The developers had put considerable effort into implementing drag-and-drop, so we agreed to note when users actually used it. They didn't. Users employed only the right-mouse menu to complete their tasks.
By watching the users and talking with them after the test, we learned that they didn't know about the drag-and-drop feature. The opening screen showed them a message about the right-mouse menu, but there was no message about drag-and-drop. Even when we told them about drag-and-drop, the users said they preferred using the right-mouse menu. So, the developers kept the drag-and-drop feature, but decided that making it work perfectly had a significantly lower priority.
In developing a complex process-modeling application, the designers decided to add print options to the tabbed dialog used to "run" the process. They thought users would welcome this as a convenience. When user after user ignored the tab and went right to File|Print, the developers decided to spare themselves the effort.
(Or, as they don't say in France, "I've seen that not happen before.") Interesting behavior and non-behavior patterns often surface after the test, when we ask ourselves, "What else didn't take place?" This is more subtle than the examples above.
While evaluating Help methods, we asked users to look for help in an application's three online books. When they did, the users failed all the tasks.
As we reviewed the results, we realized that not one user had even looked at the books' tables of contents. But we sometimes do see users going to the table of contents in printed books. From our observations, we inferred that no one tried to learn the structure of the online books or how they differed from each other.
We repeatedly saw them use the search feature, often in the wrong book, and this led to failure. Based on what we didn't see, we recommended that the developers add terms to the index, rather than focus on the table of contents.
We tested a search engine site at the Brimfield Antique Fair in Massachusetts. The antique dealers and collectors at this fair are a close-knit group. However, when we asked them to use a prototype web site to find what they were looking for at the fair, we noticed one thing that didn't happen: none of them used the site's "community" features.
Instead, they almost always searched on a topic related to their business and found their "community" in the results list. We were surprised to see that nearly every link prompted comments such as, "I know her," or "I didn't know he had a site," or "How did they get into the top 10?"
These users already had a community, based on the type of antiques they collect or sell, so they didn't need a web site to create one for them.
In a series of tests of different web search engines, users never learned the syntax that each search engine required. Instead, they brought their own syntax, a simple list of unpunctuated descriptive words, and tried it in every search box that appeared. Even when their searches failed, none of the users tried either the Help or the Advanced Search features.
This pattern mystified us until we ran another test, asking knowledgeable users to search for things they wanted and knew about. They also tried the same approach, a simple list of unpunctuated descriptive words, but it worked! They found what they were looking for.
The difference? This second group of users knew a lot about the subject area, and knew which words to use in the search. Apparently the problem was with the keywords, not the syntax. The developers learned that users, particularly knowledgeable users, may not need the advanced search feature. Therefore, the team was less concerned about how frequently this feature was used and was able to focus on other, more important issues that surfaced in the testing.