Originally published: Nov 14, 2013
“Create useful personas in a day? That’s crazy talk!”
They were right. It is crazy talk. Why would we spend an entire day on creating personas when we can do it in three hours, max?
If you read Part 1 of this article, then you saw the preparation needed to create the critical design reference tools of principles, personas, and scenarios. We can do this in a single day because we’re feeding right off the field research we just completed. By involving the entire team in the research and the creation of these design reference tools, everyone ends up on the same page with minimal effort.
It’s a well-known fact that design doesn’t happen unless colored sticky notes are involved. This process is no exception. We grab a bunch, in all different sizes. (Those big 4x6 pads are particularly useful, along with the little colored flags.)
We’ll also want one flip chart pad for every three people on the team and a set of markers to use with these.
Those little colored circle stickers are also handy. (We can use flag-size or small colored sticky notes when we don’t have the stickers available.)
A big room where we can stick things to the wall works best. We’ll want to post the sticky notes, printouts from the research, and the flip chart pages during the day.
Finally, remember those write-ups our team did of all the people they saw on the visits? We want a copy or two of each of those. It’s helpful if each page has a decent-sized picture of the participant, along with their name and the site where the team met them.
We use a simple rule to decide who gets a say in creating our personas, scenarios, and principles: only people who went on a minimum of two field visits. This way everyone is basing their decisions on research. If we let folks who haven’t been on visits participate, then they’ll draw from their own experiences or people we hadn’t talked to in this round of research. That reduces the chance we’ll get personas that match our audience, which, in turn, makes the reference tools less valuable.
Making this rule from the start of the project means everyone understands the price of entry. Want to help make the personas, scenarios, and design principles? Then you need to visit at least two sites and take notes.
If you haven’t figured it out, this is the secret part of our agenda. There’s lots of evidence to show the more exposure team members have to real users doing real work, the better the design. The reference tools we’re creating help us stretch the effects of that exposure, so without it, those tools are useless. (Read more about increasing exposure hours.)
On our team, we want to involve designers, developers, stakeholders, and other influencers of what our final design will be. It’s not unusual to have as many as 12 or 15 folks participating in creating our reference tools.
With this many people, we break into smaller project teams of three to five people. Each team gets one of the priorities from Part 1’s Prioritized Action List. They’ll use that priority as their project for the creation of their own personas, scenarios, and principles.
We can create the teams randomly, let them self-organize, or be deliberate (such as ensuring there’s one designer on each team). When you’re dealing with smart folks, I’ve found it doesn’t matter much who is on which team. It always works out.
A recent client focused on updating their HR software that helps managers do their performance reviews. After visiting 14 companies and interviewing 22 managers and HR representatives, we came together to make our personas, scenarios, and design principles. We prioritized the most important actions to update the software, and assigned one to each project team. Now we were ready.
Even though we call the process LiRPPS, for Lightweight Research-Based Principles, Personas, and Scenarios, we tend to create the personas first and the principles last. Everything works well when you define the personas first (including scheduling a mid-day lunch break — we keep our priorities straight). Unfortunately, the acronym for that ordering isn’t as easy to pronounce.
The first step to creating our personas is to pick out the research participants we want to consider. Each team selects their own participants, looking through the lens of their newly assigned project.
Our team’s priority was helping managers regularly collect performance information all year long. We observed many managers struggle to track what their employees had done throughout the year. Their annual performance reviews become heavily biased toward the few weeks before they sit down to write them. That’s not fair to employees who work hard all year long.
As we reviewed the participants visited during the field research, we looked for managers who struggled with this problem of tracking a year’s worth of employee information. We knew we wanted to build a tool to help with this, but we needed to understand which users would benefit most.
We used the colored dots to vote on which participants would be most critical to our project. We ranked them in order of votes, most to least.
Choosing and ranking the most relevant participants took us about half an hour. The other two project teams did the same for their priorities.
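For teams that record their dot votes digitally as well as on the wall, the ranking step boils down to counting votes per participant and sorting. A minimal sketch (the participant names and votes below are hypothetical, not from the actual study):

```python
from collections import Counter

# Each dot vote is recorded as the voted-for participant's name (hypothetical data).
votes = ["Roger (Acme)", "Dana (Initech)", "Roger (Acme)",
         "Priya (Globex)", "Dana (Initech)", "Roger (Acme)"]

# Tally the dots and rank participants from most to least votes.
ranking = Counter(votes).most_common()
for participant, count in ranking:
    print(f"{count} votes: {participant}")
```

`Counter.most_common()` returns the tally sorted descending, which matches the most-to-least ordering the teams used on the wall.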
Next, we wanted to create a list of attributes that we’ll consider for our personas. The attributes are what make the participants different from each other.
Some of our managers worked for large, multinational companies. Some worked inside of an organization with fewer than 200 employees.
So, organizational size is an attribute we put on the flip chart. Same with whether the HR department had rigorous procedures or not. And whether the institution recommended quarterly or half-year reviews.
We also looked at differences because of team organization. Some managers had large teams, others had small ones. Some had teams where everyone did similar work (like a call center) and others had teams where everyone did something different (like a multi-disciplinary design and production team).
My team was also interested in how the manager seemed to work. Some were excellent at having regular one-on-ones with all of their direct reports, while others never had them. Some were very poor at taking notes, while others were rigorous about it. All these differences went on the flip chart.
Again, we’d seen all these things in the recently conducted research. If we didn’t see it, it didn’t go into our attribute list.
The simplest way to collect these differences was to take the top two participants out of our pile and put them side-by-side. We then asked what was different between them. We’d put the differences on the flip chart, then replaced one of the participants with the next one in the pile. We repeated this until we weren’t finding anything new. The process of identifying attributes took about an hour.
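That pairwise walk through the pile is essentially a simple loop: compare the current pair, record any new differences, then swap in the next participant. A sketch under the assumption that each participant’s write-up has been reduced to a dictionary of traits (all field names and values here are hypothetical):

```python
# Hypothetical participant profiles; each key is a candidate attribute.
participants = [
    {"org_size": "large", "hr_rigor": "strict", "takes_notes": "yes"},
    {"org_size": "small", "hr_rigor": "strict", "takes_notes": "no"},
    {"org_size": "large", "hr_rigor": "loose",  "takes_notes": "no"},
]

attributes = set()
current = participants[0]
for nxt in participants[1:]:
    # Record every trait where the pair differs; a set ignores duplicates.
    for key in current:
        if current[key] != nxt.get(key):
            attributes.add(key)
    current = nxt  # replace one of the pair with the next participant in the pile

print(sorted(attributes))  # the differences worth putting on the flip chart
```

The loop stops adding attributes once consecutive pairs stop surfacing new differences, which mirrors the "repeat until nothing new" stopping rule above.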
Next, we took the list of attributes we just created and voted, using those colored dots, on the ones we thought most affect our new tool for collecting performance review data year round.
Using 4x6 sticky notes, we wrote each of the top attributes in the middle in large print. Then, we figured out the scale for the attribute and wrote the endpoints on each side.
One of our attributes was Manager’s Note-Taking Rigor. On one side of the sticky note we wrote “Takes thorough notes” and on the other side we wrote “Takes poor notes.”
Another attribute was Team Role Differences. On one side we wrote “Very similar” thinking of our call center folks, with the opposing side labeled “All different” for the production team we visited.
We stuck our sheets of attributes on the wall, going from most important (based on our earlier votes), to least. (I usually don’t bother with any attributes that didn’t garner any votes at all.)
Given a list of attributes, based on real people we observed, we could now assemble the most interesting ones to form our personas. Those personas will become the basis of future design decisions, so getting them right was worth the effort. Our goal was to have three to five personas to work with.
Using a different color for each possible persona, we grabbed small sticky notes or flags. Someone on the team would suggest a set of attributes, putting a sticky note on the side of each attribute’s scale that they thought made for an interesting persona.
One person we were interested in was a first-time manager in a large organization where HR wasn’t very supportive. This manager may have a large team of people with similar roles.
One of our team members put a purple sticky flag on the Org. Size attribute near the “Large” side of the scale, on the HR Support attribute near “Not very,” on the Manager Experience attribute near “Newbie,” and on the Team Role Differences attribute near “Very similar.” Everyone nodded in approval. “That’s just like Roger at Acme,” one team member suggested.
We came up with a name for each color, like First-time Freddy and Experienced Manager Evelyn. The colored stickies helped us see if we were biasing our personas toward certain attributes. The goal was to have a nice distribution of stickies on each important attribute’s scale.
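If you want to sanity-check that distribution digitally, each persona can be modeled as a position on every attribute’s scale (say, 0.0 for one endpoint, 1.0 for the other); a wide spread of positions per attribute means the personas aren’t all clustered on one end. A sketch with hypothetical positions (the third persona name is invented for illustration):

```python
# Each persona maps attribute -> position on its scale (0.0 = left end, 1.0 = right end).
personas = {
    "First-time Freddy":          {"org_size": 0.9, "hr_support": 0.1, "experience": 0.0},
    "Experienced Manager Evelyn": {"org_size": 0.3, "hr_support": 0.8, "experience": 0.9},
    "Note-taker Nadia":           {"org_size": 0.6, "hr_support": 0.5, "experience": 0.5},
}

# Flag attributes where all the personas cluster on one part of the scale.
for attr in ["org_size", "hr_support", "experience"]:
    positions = [p[attr] for p in personas.values()]
    spread = max(positions) - min(positions)
    status = "OK" if spread >= 0.5 else "clustered"
    print(f"{attr}: spread {spread:.1f} ({status})")
```

A "clustered" flag is the digital equivalent of noticing that all the colored stickies sit on the same side of a sticky-note scale.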
Documenting each of our personas was straightforward. A short write-up, using mildly disguised descriptions of what we saw in the research, gave us four solid personas. We could tie each persona to real organizations and managers we’d met.
We took the last half hour of our morning to share our personas with the other teams. One at a time, each team shared their personas, reviewing the top attributes for their particular project. It was fascinating to see the overlap, and the differences gave us interesting perspectives on our own personas.
Writing the scenarios is the easiest part of the LiRPPS process. It’s rarely more effort than going through our research notes and documenting stories of what we actually saw.
For each persona, we wrote up two or three scenarios. These are stories that describe what that persona would try to do with our project. They set up the context, describe the persona’s flow, and outline what successful completion looks like.
For one of our personas, First-Time Freddy, we described the following scenario:
A few weeks after Freddy started as the manager of this group, he learned in his manager’s staff meeting that the company plans to open a second call center on the other side of the country. They’ll need trainers, and Freddy wants to track who his smartest reps are, so he can be ready to suggest an elite training team.
He needs to identify the criteria that make someone a good trainer and track which people on his team excel at them. He’d like to do this every couple of weeks, so he’s ready with the data when the time comes.
This scenario was easy to write up, because we’d seen a real world instance of it. It came directly from real life.
Again, we took time at the end of the activity to compare our scenarios with those of the other teams. As often happens, they had striking similarities to what we’d written up, but were bent slightly to fit their own projects.
In the last activity of the day, we created our design principles. These are specific rules of thumb to help guide design decisions. The idea is, when the team faces a hard decision between two choices, they can choose the one that’s closest to the principle. This can also help push the team to try harder in a certain dimension, to move past a design that’s just OK to one that’s great.
Our goal was to come up with four design principles specific to our project of creating a tool for regularly collecting employee performance data. We started with a simple trick: how would we make the crappiest version of this tool?
For a few minutes, we brainstormed all the ways to create a version of our tool that would be awful to use. We started with the obvious things: It would be hard to understand the instructions. It would be really slow. It would only work on an out-of-date piece of hardware, like an Apple Newton.
After that first burst, the team fell into silence. Had we covered it all? Apparently not, because suddenly, out came some really interesting ideas: It wouldn’t let you save, so you had to complete it all at once. It wouldn’t let you jump around from employee to employee; instead you had to go through the entire team. (For Freddy and his 18 direct reports, that would be hellish.)
Over time, we came up with about 20 different things that would make whatever tool we came up with really suck to use. And it was from the fires of Hell we’d created that our understanding of what was really important emerged.
We next revisited the list of crappy features and came up with attributes for each one. A slow design meant performance was an important attribute. Not letting the user save meant interruptibility was another. We ended up with about 25 attributes.
Again, using our dots, we voted on the attributes we thought were most important. From here, we started to explore the top voted candidates. Which ones would push us to excel?
For each of the top attributes, we asked two questions: “What would push us beyond what we deliver today?” and “What pushes us into today’s bad habits?” Our goal was to come up with principles that break us of our normal habits and move us to something better than what we’ve been doing. Using the answers to these two questions, we formed principles stated as tradeoff statements.
To let a manager save their work without finishing, we’d need to stop requiring that all database updates be completed transactions. Allowing for partial transactions was something the back-end couldn’t handle today, but it could be changed. From this, the design principle We’ll prefer completion across sessions over insisting on complete transactions emerged.
We chose four principles from our list of attributes. When we were done, we had a solid understanding on how our new design would push beyond what we’d been delivering in the past.
When we compared our four principles with what the other teams came up with, we were again struck by the similarities. One principle appeared on every list, even though the teams had never discussed it with each other. A couple others appeared on more than a single list. It was clear that the research had influenced a new philosophy towards design.
After 7 hours, we’d created four personas, ten scenarios, and four design principles for our project. The two other teams had also created a similar number in the same period. That’s quite an accomplishment.
What was even more exciting was how the team was energized to start designing. They felt they had a solid idea of who their users were, what those users needed, and how they would produce better designs than ever before.
The next day, we launched a two-day design studio for our projects. Each team used the principles, personas, and scenarios they’d created to guide the studio. By the end of it, they’d created solid design ideas that propelled the team into the fastest design and development cycle they’d ever seen.
Creating Lightweight Research-Based Principles, Personas, and Scenarios (LiRPPS) is a solid technique for putting users at the center of the design process.
Jared M. Spool is the founder of User Interface Engineering. He spends his time working with the research teams at the company and helps clients understand how to solve their design problems.
What process does your team follow when creating personas, scenarios, and principles? Tell us about it at the UIE blog.