July 7th, 2008
Yesterday, I talked about how we come up with each row in our Weighted Differences Matrix. For each of these differences, we need to then assess how important it will be to the user’s experience, which we represent with a weight. The weight is a number from zero to ten, where a zero means the feature isn’t important at all and a ten means the design would fail without it. The question then becomes, how do we decide what the weight should be for each row?
The quality of the weights will depend on what information the team already has. In the case of the project I described in my presentation, the first time that team sat down to create their matrix, they had done very little user research (as in virtually none) to work from. So, to some extent, they were going to guess on the weights. However, they had been in business for years with a web site that was attracting millions of visitors a month, so they knew something about their users and the users’ needs. It’s with this information that they’d start the process of assessing weights.
Whether the team has good research to back up their assessment, or whether they are just guessing from their experience, the process is basically the same. Once we have our list of differences (as I explained in yesterday’s post), we assess the weights.
We like to use a facilitator for this process — someone who isn’t going to contribute to the weights and isn’t going to push an agenda. It can be a team member, but they need to understand that they are abstaining from giving their own opinions. (It’s ok if a team member has an agenda they want to push, they just shouldn’t be the facilitator.)
For each difference we’ve identified, the facilitator asks for the votes. The easiest way to make this happen is for everyone in the room to raise their two hands, displaying between zero and ten fingers. The facilitator then calls out each number around the room. (“7, 5, 6, 3, 8, 7”)
Then comes the fun part: The facilitator picks someone whose vote was very different from the rest and asks them to explain their rationale. So, if almost everyone voted sixes or sevens, the facilitator would ask the person with the lowest vote, say a three, to explain why they rated it so low. Then, the facilitator would ask someone at the opposite extreme to explain their rationale. The goal is to get a small debate going to bring out the differences in thinking.
The facilitator can decide when the group has heard enough of the debate. We don’t want to fixate on this, since there is still a lot to do, so moving quickly is a good thing.
After ending the debate, a second vote is taken, again by holding up fingers. Again, the facilitator reads off everyone’s fingers and then declares what the final number should be. We use the “Olympic scoring method”: throwing out the highest and lowest scores and averaging the rest.
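For the arithmetically inclined, that scoring rule is easy to sketch in a few lines of code. This is just an illustration of the trimming-and-averaging described above; the function name and the decision to raise an error on tiny groups are my own choices, not part of our process.

```python
def olympic_score(votes):
    """Drop the single highest and single lowest vote, average the rest."""
    if len(votes) < 3:
        # With fewer than three votes there's nothing left after trimming.
        raise ValueError("need at least three votes to drop the extremes")
    trimmed = sorted(votes)[1:-1]  # discard the lowest and highest scores
    return sum(trimmed) / len(trimmed)

# Using the finger counts the facilitator read out earlier:
print(olympic_score([7, 5, 6, 3, 8, 7]))  # averages 5, 6, 7, 7 → 6.25
```

Note that this only drops one vote at each extreme; if two people both vote three, one of those threes still counts toward the average.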
We like this three-step process (vote, discuss, vote again), because it keeps the discussion proportionate to the differences of opinion. When the team really agrees, the debate is really short. Only when there is real disagreement does the debate take time, but with the help of a good facilitator, it still can go quickly.
(One of the most fun moments for us is when, in the first vote, everyone goes in one direction except for a single dissenting team member. That member shares their rationale and everyone goes, “Oh, of course” and then votes with that person. It’s fun to see a team have a group-wide “aha!” moment like that.)
In Part 3, I’ll talk about how we assess the scores for each design alternative.