Conducting research and gathering data are crucial parts in the process of creating great design. But once you have all of the data, what do you do with it? How do you know you’re extracting the right conclusions and not leaving anything important on the table? Steve Portigal discusses the methods of synthesis and ideation to approach this crucial next step.
Adam Churchill: Welcome, everyone to another episode of the SpoolCast.
Earlier this month, Steve Portigal joined us for a virtual seminar on user research analysis techniques. In the seminar, Steve answered the question "You've done all this research. Now what?"
Too often we work hard on the process of getting great information out of our users, and then we're stuck when it comes to those critical next steps. In the seminar, Steve shows how to take those steps and explains more about the process of synthesis and ideation.
He's offered to come back and tackle some of the questions we didn't get to address in the seminar. Hello, Steve, welcome back.
Steve Portigal: Hi, there. Thanks for having me back.
Adam: Now, folks who didn't listen to this particular seminar can access it in UIE's user experience training library, which has 60 recorded virtual seminars from experts just like Steve Portigal.
Steve, for those listening who weren't able to join us for the presentation, can you give them an overview?
Steve: Sure. I think you hit two really important words when you talked about synthesis and ideation, and that was the structure I used to talk through the work that we do. So, it's that point after you've done fieldwork but before you know what it means and what kind of solutions you want to come up with.
And so that's really the process of synthesis which is going through the data itself and reaching these points, which we call opportunities - a road map, some directions without identifying specific solutions. That's kind of all under the rubric of synthesis, which actually includes analysis, so you're kind of unpacking the data into smaller parts and then putting it back together. But you're putting it back together into opportunities. What could we do?
Then that transitions into this separate activity which we're giving the label ideation. So that's where you start saying, "How could we? How could we solve this problem? How could we create something new?"
And one of the things that I talked about in solution-making, in ideation -- you know, a lot of the great brainstorming stuff that people may be familiar with: being open and being generative, looking at a variety of different types of solutions, even if you can't build them, as a way to try to understand where these opportunities could be addressed.
One of the things that I talked about was the idea of creating bundles of solutions that are organized around strategies. This is a lot of terminology to throw around, but when you have a problem, there are classes of solutions. If you want to introduce a new behavior, like walking around with a new kind of headset on that didn't exist before, you can use different kinds of strategies. Make it look like something else, or make it disappear.
Those are different kinds of design strategies, and underneath those strategies are different kinds of specific implementations. Make it out of hair. Make it look like a baseball cap. Those are specific solutions.
But the strategies of "make it disappear" and "make it look like something else" are different directional ways to go.
So we want to keep people from, "Hey, I observed this; I'm going to make that," and really, at least in terms of understanding a framework, break that down into several discrete steps so you can come up with a richer and broader set of opportunities and get to the right ones that really come from the research.
The last thing we talked about is just how do you then prioritize? If you've generated a really broad and diverse set of possible solutions that could be built, what do you do with that?
And so we talked about both using your heart and doing Post-It voting, which people may be familiar with, clustering Post-It notes from everybody in the room. And then doing a more analytical ranking in a spreadsheet where you kind of assign ranks and codes to things to sort them out. And then really look at what do the numbers tell you and what does your heart tell you and then using that as the direction for prioritizing.
That's, you know, 90 minutes of content put into three minutes.
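The spreadsheet ranking Steve describes -- assigning ranks and codes to solutions and sorting them out -- could be sketched roughly as follows. This is my illustration, not something from the seminar; the solution names are borrowed from Steve's headset example, and the criteria and scores are hypothetical.

```python
# A minimal sketch (assumptions, not the seminar's actual tool) of an
# analytical ranking: each candidate solution gets a 1-5 score per
# criterion, and we sort by total score, highest first.
solutions = {
    "Make it look like a baseball cap": {"feasibility": 4, "cost": 3, "payoff": 2},
    "Make it disappear":                {"feasibility": 2, "cost": 1, "payoff": 5},
    "Make it out of hair":              {"feasibility": 1, "cost": 2, "payoff": 1},
}

def rank_solutions(scores):
    """Return solution names sorted by total score, highest first."""
    return sorted(scores, key=lambda name: sum(scores[name].values()), reverse=True)

for name in rank_solutions(solutions):
    total = sum(solutions[name].values())
    print(f"{total:>2}  {name}")
```

The numbers only give you one half of the picture; as Steve says, you'd then compare what the totals tell you against what your heart (and the Post-It voting) tells you.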
Adam: Well, and you did a great job with it. You definitely had our audience thinking that day. There were lots of great questions from the audience. Let's try and tackle some of the ones we couldn't get to in the session.
Our friends from Expeditor's wanted to know, "How long does it typically take to get from the first interview in the field to the opportunities?" What's your timeline look like, Steve?
Steve: Yeah, so I think we're probably in the field for anywhere from a week to ten days to two weeks. It's kind of that dedicated, heads-down fieldwork, a week to two weeks, depending on scope and so on.
Right after that, we're doing the top line summary which kind of pulls out some of the biggest, earliest themes right from the raw experience of collecting the data. That's probably about two days. Then I think we're probably trying to do about two weeks of deeper analysis, deeper synthesis to try to get to a set of opportunities.
So kind of from first interview to opportunity, it's probably about three-and-a-half weeks or thereabouts.
Adam: Katie wants to know how much you involve the client in the analysis process.
Steve: That's a great question, and you know one of the things we're looking for as consultants is ways to engage and not simply take a piece of work and go off and do it and throw it back over the transom, under the transom -- whatever the positioning of the transom is.
So we're trying to involve people all along the way. The sort of typical one is bringing people out in the field. If we can bring people out in the field, we hope that we can involve them in the analysis process. And that means assigning them transcripts, giving them content from the field and asking them to go through and do their close reading of it and highlight examples and then come back and have a collaborative session with us.
You know, the client team can be very sprawling depending on how we're engaged. Typically, if we can get three or four people to join in that work, it actually goes faster, and we can get a lot of content from them, see what they're seeing.
So, yeah, we definitely like to do it. The ideation process -- which is not exactly what Katie's asking about, but it's the what-can-we-do-with-this-analysis part -- is full-on, with a large amount of client participation. We don't do that on our own. That's the kind of thing we're facilitating for them to do with us.
Adam: Early on in your presentation we talked about what you referred to as the top-line part of the process. There were a bunch of great questions left over on that. Kevin wants to know, in regards to delivering pieces of it to your stakeholders, he's heard that it's good to deliver both good and bad statements so that it's more palatable to your sensitive stakeholders. Does that play a factor in the top-line phase that you discussed?
Steve: I think that's an excellent point and I think, as consultants, you're always trying to create learning ready moments, right? Help people to hear what you want to tell them, and being sensitive to their communication styles, understand where they're sensitive, and finding out the way to say that. So I think balancing good and bad is a good technique.
I've heard people talk about the Oreo method. You know, first start off with the great stuff and then you talk about what's of concern and then you raise another great example so it's like the layers of an Oreo cookie. You know, we also sometimes do a "yes and..." We're maybe sitting around hearing ourselves thinking, "Oh my God this is terrible" and then we want to try to turn the conversation around to you set this objective, here's how you've met it, here's how you can meet it.
So we try to use, I think, language of empowerment and opportunities. This is definitely in the top-line part of it, but I think it's through the whole thing. Everything that we deliver, we want to... it's not really about burying the lead, but it's really about making it palatable. I think Kevin used a really, really great word. So I don't know if it's about striking an overall balance between good and bad, but sort of crafting a message that's relevant.
And good stuff is great. I mean, to tell people that they're doing a good job and what's really, really working is super important, and I think Kevin highlights that. Sometimes I forget that. Sometimes I forget to tell people here are the successes they're having, because I'm always looking for the new thing, what can we improve.
But to acknowledge what you've been successful with I think is really, really important from a sort of fact point of view, and so on but also just in terms of motivation and satisfaction and building excitement. So Kevin, I think, nailed it.
Adam: Svetlana wants to know who's the top-line report really for? Are there people that have to spend some time with it? Are there people that it's kind of wasted on? What are your guidelines there?
Steve: I always feel like we have kind of some concentric circles of people we're working with. I think we have these kind of gatekeepers, maybe one or two people that are working with us daily that are helping us plan the logistics and there's kind of a second ring of people that maybe are designers, maybe are some engineers that are kind of contributors.
And then we have sort of larger, director, funding-level stakeholders that we really see at the beginning and the end. So I think the top-line is really for those first couple of rings. I don't think it's for stakeholders who are much more removed and who are looking for actionability. So I think this is really about planting seeds and getting conversations going.
I remember doing a top-line discussion about a year and a half ago on this project, and there were some designers there who said, "Yeah, we can't act on any of this." Then, you know, my response was to nod and say, "Yes, that's not where the process is at. We're working towards that."
So I appreciated their hunger for being able to act on it, but I also felt like it was important for us to engage them in at least thinking about this, to start paving the road towards where we were going to act on it. So it's a long, rambling answer. I think the top-line is for the people you're working with every day and kind of some of the people they work with, but maybe not some of the people they work for.
Adam: OK, well let's take that a little bit further. Nancy wanted to know about presenting the top-line report to the stakeholders. Is it dangerous? Do you run into the problem of stakeholders jumping immediately to making design decisions and conclusions and skipping steps in the process?
Steve: Yeah. Nancy, I think, knows whereof she speaks. And, you know, we try so hard to frame every piece of content: here's where we're at in the process, here's what we've done, here's what we have not done, and so here's how we want to use this piece for this conversation. And you can say that, and I think Nancy is sort of right in that there are some people that may not hear that.
I think that from our side, from where Nancy and I are as people presenting top-lines and hearing people jump to conclusions, I think we have to listen to what's really happening. Because, you know, just like in meetings you have parking lots, right? You come up with stuff. It's not really part of the meeting, but you have a place to put it.
So I think whenever we present content to people that has implications, whether we've spelled those out yet or not, you can see and hear the gears turning. They start talking about what could we do, and that sometimes, to me, sounds like design decisions, but I think that's partly my own tendency, like Nancy's, to feel like it's a danger.
She uses this word about danger, and it may not be dangerous. I think we want to be careful to understand the difference between stakeholders engaging as problem solvers -- because that's what they do and that's how they're thinking and that's what they're tasked with -- and stakeholders jumping to conclusions and running off and acting on things.
So if we have a project timeline with milestones and we know what is going to happen when -- we're going to go do something, we're going to make decisions and build things -- then getting people to start thinking and talking and playing early on, I think, kind of goes to Katie's question about how much we engage people in the analysis process.
So getting people thinking is a way to engage them. It's hard to generalize around teams, because everybody knows their teams better than I would. So who are the people? What is the hierarchy and how do you set and manage those expectations?
I agree it's a risk. I'd like it to be kind of a measured risk where we balance getting people thinking against getting them to act prematurely.
Adam: John asks this wonderful question. From the perspective of a new researcher or someone that's new to a design team, they're following your process. What's the one thing they need to keep in mind?
Steve: This is a good question. I agree. Hopefully, this doesn't contradict what I just said about Nancy's question, but let's see. You know, I think researchers and designers that go into the field are increasingly competent in the "don't judge, don't jump to conclusions" part of field work. I think that's becoming more of the default of the practice.
You go out in the field, you're open-minded, you kind of see where things go. I guess I would advise new researchers, new designers who are taking this process on to bring that same open-mindedness into the synthesis and ideation process.
I did a workshop a few years ago with some designers where there's all these different steps we go through and I had broken it down step-wise into exercises. And exercise A, B, C, they all were about moving just the next chunk through this framework.
One group kind of waved me over, and they were very proud. They said, "We're finished. We already went to the last section." I think it's exciting, especially for someone who's a designer, oriented towards making stuff, to jump to solutions. When you're new, that can be kind of heady stuff.
What I would encourage folks to do is step back, relax, and let the process play out; really be open to a diversity of solutions and walking through the framework. In the same way that we're very good at deferring judgment in the field, we want to bring that same deferring mentality and defer solution-making until we're at ideation.
So I think if designers and researchers can bring that deference, if you will, to this process, I think that would be a great way to go from newbie to good on your way to great.
Adam: Our friends at the MathWorks want to know if the opportunities can translate into specific user requirements rather than solutions.
Steve: Yeah. I love that. I think that's a really great way to put it. We struggled for a little while about what even to call these opportunities. Someone who had really kind of coached me talked about what this idea of opportunities was about.
They actually called it the User Experience Brief. It's the design brief that says what kind of experience you want to create without saying how you're going to do it. That sounds pretty much like user requirements. It's right on. It's right on.
Adam: The folks at Edmonds.com wonder if there are tools that you recommend in addition to Excel, which you spoke about in the seminar, that provide access to the individual analysis after the study is finished. For example, if people wanted to cull through all the data on a certain topic in the study.
Steve: Yeah. That's a good one. I mean, in terms of making it something that's searchable, because you have big chunks of data. You have huge amounts of video data. You have documents like transcripts or whatever data gathering you've been doing.
The piece that we're creating is this coded Excel spreadsheet/database. If you want to be able to find something on a certain topic, you're going to have to either go through and code it, which is really what we're doing in Excel, or use something a little more brute force, like text search through a body of transcripts.
But search is not exactly the best way to retrieve all the stuff. You can't get searchable coded stuff for free. You've got to go through and do it. There are probably some more intensive tools that I'm afraid to even start dropping names of them, because I haven't used them.
But the things that I think are used in usability analysis -- and Adam, jump in if you have any ideas about any of these -- there's something called NUD*IST that I feel like -- I know it's a crazy name -- social scientists have worked with. Those aren't tools that we use, but I feel like those are tools around actively coding everything in a set of video or transcript data and creating a large database from that.
We're not as exhaustive; we're a little more opportunistic in our analysis. So we're not producing something as rigorous as that, but I would look to tools for usability analysis and see if that's something that people could pull from. Not really a recommendation, but that's my limited take on it.
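The "brute force" text search Steve contrasts with proper coding could be sketched roughly like this. This is purely my illustration -- the transcript snippets and participant names are hypothetical, and none of this comes from the seminar.

```python
# A minimal sketch (assumed example data) of brute-force keyword search
# through a body of transcripts: scan every line of every transcript and
# return the lines mentioning the topic, tagged with their source.
transcripts = {
    "participant_01": "I never use the headset outside.\nIt makes me feel self-conscious.",
    "participant_02": "The headset is fine at my desk.\nBattery life is my real concern.",
}

def search_transcripts(transcripts, keyword):
    """Return (source, line) pairs for every line containing the keyword."""
    hits = []
    for source, text in transcripts.items():
        for line in text.splitlines():
            if keyword.lower() in line.lower():
                hits.append((source, line.strip()))
    return hits

for source, line in search_transcripts(transcripts, "headset"):
    print(f"{source}: {line}")
```

As Steve notes, this only finds literal mentions; the searchable, coded version of the data doesn't come for free -- someone still has to go through and assign the codes.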
Adam: Collaborative analysis. There was a comment in the Twitter stream during the seminar that wondered if it should happen with the end users.
Steve: Yeah. That's a good question. We've sort of - in a lot of our engagements, we're kind of mashing up methodologies. As much as we can sort of draw here's what it looks like, we are picking and choosing, especially our data gathering methodologies.
And to me, if you're doing something with the end users now, you are doing some kind of field work or some kind of subsequent research, which is great. And I love iterating, I love kind of going back and forth from, what do we think it means, what does this mean to people.
I'm not sure that I want to forego the researchers and designers doing the analysis. I think if I had to take something away from this it would be, let's use these methodologies in ways that are user-facing, not kind of designer facing.
That really is very provocative, it gets me thinking. There are simple things people do where they edit video and then they show it back to the participant that they gathered it from and have that participant tell the researcher what they see in that video of themselves.
So, that's kind of a very low-fidelity analysis because you're really just creating a video edit, you're not really finding large patterns. But then you're taking that artifact back out. I think the same way that you're using these ideation sessions to produce concepts and scenarios, those start to feed into artifacts that can be used for another round of research. So, yes, it's a really provocative reframe there.
Adam: So, Olivia wanted to know a little bit more about the process during group voting. How do you pick relevant ranking factors?
Steve: Yeah, I think that's super important, right. If you were ranking things against something that no one cares about you're not really deriving value from that process. One thing that I think has been important for us is to align on the factors before we go into the ranking section, so that we know what we're going to do.
When I say align, I mean that's us and the people that are going to be in that session. We're doing idea generation with a large group but we're doing ranking with a very, very small group. My favorite would be like a maximum of three to keep it really kind of tight.
And so, I think we're trying to understand, facilitate discovery with our stakeholders and these team leads: what are their measures of success? What are their business goals? And I think there are sort of a handful -- I threw some of them out in the webinar -- of obvious ones around feasibility and cost and investment and payoff.
I think those feel vaguely standard to me in terms of, you know, how do businesses perform and how do businesses make products. I think more what I would like to get out of that facilitation is to kind of tease out the nuance. I think I gave the example in the webinar of a criteria that one team we worked with had and that criteria is about feasibility.
So, when I hear that, to me that means, can we build it? Is this Star Trek technology or is this something that we can actually build? And when we kind of tease it out, feasibility for them was actually about regulatory stuff. It was about legal feasibility.
So, I think if we can bring a starter set to these people and say, here's a bunch of things. What do you guys think? Do we agree these are important? What are the nuances here? In terms of avoiding really, really ridiculous ones, I think we have to call those out. We have to take the responsibility and say, "OK, this is not a food product, so flavor is not something we want to really think about here. We strongly advise against using that as a ranking criterion."
That's my example of a stupid one. To really kind of use our expertise to help them create a useful one. So, let's step up and take some ownership of that and we'll coach and advise where we need to.
Adam: Steve, the last question that came in from Twitter, the one that we decided to talk about, has to do with approaching the problem or the challenge. Cory says that he likes to approach things with a "how to" approach, whereas you were explaining that in your process the question was, "How can we?" Can you speak to that a little bit?
Steve: It's a great question, I love questions about kind of the nuances of the terms that we call these things because there's a lot of power in these small word choices. And I don't claim that I always have the best way of doing it.
So, these little debates might seem a little too grammar-wonky for some people, but I really like this. So, the difference between "how to" -- you know, "how to address this opportunity?" -- that might be a more open way of doing it. I feel like our role in these engagements is to champion the research, the insights, the opportunities, and through facilitating, kind of drive this into the team, into the organization.
And so, using a phrase like "how can we" -- I don't know if it's more or less open. It's certainly putting in a subject: as opposed to "how to", I'm saying "how can we". So, by asking the question that way, the onus is on the people being asked to answer that question.
If you say "how to", it's now more projective. Maybe this is super grammar-wonky, but whatever -- Cory asked the question, so let's look at it. "How to" doesn't really assign any ownership to the people in the room. Well, you know, if somebody else did something, then maybe that's how to do it.
But by asking the question as "how can we", we have to kind of dig in and really take some ownership as kind of creators and ideators as to how we might act on it. And I think as much as we're saying in the ideation session, you know, "Be really open, look at areas of business that you can't possibly address. Look beyond your constraints", ultimately at the end of the day we have to do something.
So, I think by putting that ownership there and framing it around who the team is, it ultimately will be informed by who we are and what we love and what we believe and what we've learned and what we want to do. And that hopefully has more traction. So, intellectually, as an exercise, "how to" is great. I think for facilitating and creating action, "how can we" just pushes a little bit further.
Adam: That's great, I appreciate that. It's certainly not just grammar-wonk, I think it's an important consideration.
Well, Steve thanks so much for circling back with us today.
To our audience, thanks for listening in and for your interest in and support of the UIE Virtual Seminar Program.