Originally published: Jun 25, 2013
I’ve seen the box on the project schedule a hundred times. It always has the same label: “Gather Requirements”. And it’s always remarkably short – scheduled for just a day or two (or sometimes less!).
When I ask the project manager what this step involves, they inevitably tell me they’ll interview the major stakeholders and gather up the requirements that emerge. It’s like going into the fields and picking the berries the project needs.
How do these major stakeholders know these requirements? Well, they just do. They’ve been thinking about it for a while (except for the ones who haven’t). They’ve talked to customers (except for most of them, who never talk to customers). They’ve talked to the sales people and the technical folks and the business modeling folks, who told them exactly what’s needed to make this product successful (but how do those folks know?).
The requirements, once gathered, are etched into a document (the Product Requirements Document or PRD) that, once published, is never to be changed or updated. It’s the gospel that will become the basis of the project. It’s what we’ll design and build to serve.
We didn’t always gather requirements and publish a PRD. In the early days we’d just start in on the project from a simple objective like “convert the static HTML intranet to run on Microsoft SharePoint.” After all, how hard can that be? We have all the pages already. We’ll just dump them into the new platform and, voilà, we’re done!
But that didn’t work. Somewhere in the middle of the project, we’d discover something we didn’t know. Something we should’ve known. Like the HR department has just bought a new goal setting package and wants it integrated. Or the policies manual is a PDF and needs to be converted.
Suddenly the project is derailed. Things we didn’t know we needed to do are added. We try to plug them into the existing architecture, but that only goes so far. Now we’re re-architecting the design. Everything is getting later and the schedule is slipping.
Adding a box to the beginning of the project was supposed to head off this schedule derailment. If we can get the HR department to tell us up front about the goal setting software and the policy manual, we can plan time to deal with it.
The problem is, adding the box to the beginning of the schedule doesn’t stop the derailment. If our goal is merely to shift blame onto the stakeholders we interviewed, saying “you didn’t tell us up front when we asked,” then we can declare mission accomplished. But if our goal is to produce a great design that delights our users and meets their needs, we need to move beyond adding a box to the schedule and adopt a set of activities that work.
When we’re gathering requirements, we’re saying the team is composed of two types of people: those who know what we need and those who need to discover it. That imbalance in team structure starts everyone off on the wrong foot.
The process requires we put faith in things that may or may not be true, without any checks or balances to see if we got it right. It requires we know everything up front and shuns the idea we could learn more as we go along. And it forces us to be ashamed that we somehow left out a detail, when in fact there was no way we could know.
For this process to work, stakeholders have to predict the future and be all-knowing. Frankly, that never works.
If we’re not gathering requirements, how will we learn what our customers and users need? How will we get the business and technology constraints?
The answer is Science (and a little extra)!
We’ve found the best teams have replaced the requirements-gathering box with four core activities that change the power dynamic. The new approach assumes that nobody knows what we need to build, though some people have ideas. If we can validate those ideas and then refine them, we all become smarter about what would delight customers and users.
The four activities are forming hypotheses, conducting research to test those hypotheses, documenting what we saw with scenarios, and using critique to validate the design is serving what we learned. What’s funny is none of this is new. We’ve known how to do these things all along. Most of us just don’t do them.
At the core of every requirement is a collection of assumptions. When our requirement reads “The home page should contain links to today’s most commonly visited pages” we’ve embedded a bunch of assumptions inside. We’re assuming that the most commonly visited pages are the things people want access to. We’re assuming that the home page is how they’ll get it. We’re even assuming that everyone needs the same home page with the same links.
How do we know these assumptions are correct? How do we know, for instance, that there isn’t another page out there that nobody accesses but is extremely useful? How do we know that people will use the home page, versus bookmarks in their browser?
By restating our assumptions as hypotheses, we’re acknowledging that we may not know everything. More importantly, we’re opening up the discussion to see what we do and don’t know.
Instead of stating a requirement, we can say, “We believe having links to the most commonly visited pages is the best way to serve the user. Let’s find out if that’s true.” A hypothesis is something to be proven or disproven, with us learning important stuff every step of the way.
Here the science really begins. When our team designs experiments to test our hypotheses, we can go observe how users work today. We can see which pages they visit. Even more important, we can see where, during their day, they could’ve used a page, but didn’t (maybe because they didn’t know about it or it was too hard to find).
We’re doing research about our users by visiting them and observing their life and work. We can do research about the business constraints and technology limitations, too. We can write little prototype programs to see if the servers can respond fast enough. We can try out various pricing packages on prospective customers to see if they can describe the differences.
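One of those throwaway prototypes might be a quick probe that checks whether a server responds within a latency budget. A minimal sketch in Python is below; the names `measure_latency` and `fast_enough`, and the half-second budget, are illustrative assumptions, not anything the article prescribes.

```python
import time
from statistics import median

def measure_latency(probe, samples=20):
    """Time repeated calls to `probe` and return the median latency in seconds.

    `probe` is any zero-argument callable that performs one request --
    e.g. a lambda wrapping urllib.request.urlopen() for a real server.
    The median is used so one slow outlier doesn't skew the result.
    """
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        probe()
        timings.append(time.perf_counter() - start)
    return median(timings)

def fast_enough(probe, budget_s=0.5, samples=20):
    """Hypothesis check: does the median response fit the latency budget?"""
    return measure_latency(probe, samples) <= budget_s
```

Against a live server, the probe could be as simple as `fast_enough(lambda: urllib.request.urlopen("https://intranet.example.com").read())` (a hypothetical URL). The point isn’t the tooling; it’s that a half-hour script turns “we assume the servers are fast enough” into a tested hypothesis.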
Each experiment makes us smarter. Which is why it’s important we involve the entire team, including stakeholders, in the experiments. We can improve their “gut” by asking them to predict the outcome of the experiment, then comparing that to what we really saw.
All this research will give us tremendous insight into how people work today. We get to see their motivations and the realities of their situation. We can begin to imagine how our new design will make their lives better.
We can capture that insight with scenarios. In each scenario, we describe what our future user needs to accomplish, why they need to accomplish it, and what challenges they need to overcome to make it happen.
Research-based scenarios are a practical way to capture requirements because they keep the team focused on what real users will do. They bring the “why” to the table, instead of trying to work with a list of often contradictory, dehumanized “whats.” The scenario becomes the mold we pour our design into. It tells us what nooks and crannies we need to fill to build a great experience.
As design solutions emerge, using critique to keep the requirements alive is essential. Great critique asks two questions: “What are we trying to do here?” and “Does this design alternative accomplish that?” It’s the former that helps us make sure we’re still on the same page for the requirements.
By spending a little time in each critique revisiting the requirements, we make sure they are still valid, especially in light of everything new we’ve learned since we started. The discussion that emerges keeps the requirements top of mind, instead of buried in a document nobody has opened for months, if at all. It’s amazing to see how the team’s creativity expands when everyone has a chance to try out different solutions against the requirement-embodying scenarios.
That box labeled “Gather Requirements” is pretty small. It’s only scheduled to take a couple of days.
The replacement activities of creating hypotheses, conducting research, creating scenarios, and running critiques will take more time. A lot more time. How do we do that when our schedules are already full?
We have to put it into context with the rest of the project. How much time will we save by getting closer to a great design faster? How much time will we get back because everyone is on the same page about why we’re doing what we’re doing?
We spread these activities evenly throughout the project, instead of cramming them into a small box up front. They make practically every other box in the project chart better and faster. In a weird twist of project physics, we end up saving time by spending time.
Most importantly, we end up with a design that uses real requirements to create a great experience. That’s what we were brought in to do in the first place.
See how to replace your product requirements document with something that works. At October’s User Interface 18 Conference, Jeff Gothelf will teach hypothesis testing, Christine Perfetti will show you how to bring research to your team, Kim Goodwin will help you craft great scenarios, and Adam Connor and Aaron Irizarry will show you the magic of great critique. Find out more at uiconf.com.
How have you dealt with requirements gathering? Let us know on our blog.