Erin Malone – Choosing, Designing, and Implementing Ratings & Reviews

Sean Carmichael

June 4th, 2012



Reputation is everything. On sites where users can rate and review products or services, those ratings add up to reputation. But a problem arises when a rating isn’t accompanied by a qualifying review: a user could have had a negative experience with service or shipping while the product itself was stellar.

There isn’t one feedback system that’s right for every situation. Erin Malone, author of Designing Social Interfaces, is a pioneer of effective social design. To get accurate feedback, she says, we need to eliminate biases. On a site like eBay, for instance, it’s easy for people to manipulate the system and give themselves positive reviews. Whenever you implement a rating or review system, she explains, it’s better to rate the activities of people rather than the people themselves, which helps remove those biases.

Another tricky part is that you’ll often only see reviews from the two extremes. If someone is compelled enough to leave a review, it’s usually because they either had a fantastic experience or were extremely dissatisfied. In this podcast, Erin and Jared discuss purchases that don’t hit these extremes, like an alarm clock, and why you’re not likely to see reviews of products that work as expected.

On Thursday, June 7, Erin will be sharing her knowledge and insights in a virtual seminar, Designing for Ratings & Reviews. Attend the seminar to get useful tips on the benefits and drawbacks of various ratings systems so you can choose, design, and implement the most effective one for your users. Learn more about Erin’s seminar.

Recorded: May 2012

Full Transcript.


Jared Spool: Welcome, ladies and gentlemen, to another episode of the SpoolCast. I am very, very happy today to have with me one of my best friends in the whole world, Erin Malone, who is going to be giving a virtual seminar for us on June 7th called “Designing for Ratings and Reviews.” Erin, thanks for joining me today.

Erin Malone: Thanks, Jared, for having me. It’s my pleasure to be here.

Jared: I’m very excited about this seminar, because we have had all sorts of great seminars in the past. You’ve done stuff with us for our Masters Tour. One of the things that has come up, that we get questions about all the time, is this idea of adding ratings and reviews to things.

People want to add them to the strangest things. I was using the Uber Car Service the other day and I noticed when I got out that they asked me to rate my driver. I asked the driver about it and he said, “Oh, yeah, I’m rating you, too.” I have a rating now with the car service. And so it’s strange to me, this idea that I now have a rating, and I’m not exactly sure what to do about that.

Erin: They want to make sure you’re a good customer, that you paid well or are not skipping out on the tab. It’s an interesting phenomenon to see these kinds of sites and practices popping up, because one of the things that we look at in Designing for Ratings and Reviews is to not rate people, but rather to rate the objects or the activity of people, rather than the person themselves.

Jared: Really? Why is that?

Erin: Well, the activity of a person and the quality of that activity, whether it’s a review or comments or other contributions, over time become an aggregate reputation for that person, implied without anyone actually going in and rating the person directly.

When you start to get into rating people, you get biases around personality, around perhaps one-off experiences. Maybe someone was mad, or, like in the case of these sites where people are asked to rate their teachers, maybe you didn’t do well on a test and got a bad grade, and suddenly you hate your teacher and you give them a bad rating.
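To make Erin’s point concrete, here’s a minimal sketch, in Python, of rating activities rather than people: each review or comment collects quality votes, and a member’s reputation is only ever implied by aggregating those. The class and field names are illustrative assumptions, not any real system’s API.

```python
# Sketch: reputation implied from rated activities, never from votes on the person.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class Activity:
    kind: str                                            # e.g. "review", "comment"
    quality_votes: list = field(default_factory=list)    # ratings of this activity, 1-5

    def quality(self) -> float:
        return mean(self.quality_votes) if self.quality_votes else 0.0


@dataclass
class Member:
    name: str
    activities: list = field(default_factory=list)

    def implied_reputation(self) -> float:
        """Aggregate the quality of what the member did, not votes on who they are."""
        rated = [a.quality() for a in self.activities if a.quality_votes]
        return round(mean(rated), 2) if rated else 0.0


alice = Member("alice")
alice.activities.append(Activity("review", quality_votes=[5, 4, 5]))
alice.activities.append(Activity("comment", quality_votes=[3]))
print(alice.implied_reputation())  # 3.83 -> reputation emerges from activity ratings over time
```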

Jared: Oh, that never happens with teenagers and college students, does it?

Erin: [laughs] Oh, yeah. Sure. Over time, though, this may or may not weed out the bad teachers, but you know how things are on the Internet. The information associated with you never goes away.

You may be able to look up stuff about a person 10 years later and an anomaly could turn into something that is hanging around and could have serious repercussions on that person’s reputation, on their career and ability to move forward in their jobs over time. So it’s a slippery slope.

Jared: Didn’t this start online with the eBay reputation system, right? You rate the seller, right, A++, you know, “Will buy again.”

Erin: Yes, and you see how well that worked. People can game it. People go in and give themselves ratings. They’re rating the transactions now, how did that transaction go, rather than actually rating the seller. It’s better to rate the activity than the person for that very reason.

If you’ve got, say, something about a teacher, you could rate how they present, the type of tests they give, or the curve, or something like that, but not necessarily the person, without getting into personalities and things like that.

Jared: Oh, that’s interesting. So for eBay, they could be rating the transaction, per se, and not so much the seller.

I always think it’s interesting, because I’ll get these satisfaction surveys. They’ll ask, “How was our customer service agent?” But I didn’t talk to a customer service agent; everything went just peachy keen. There’s no place to indicate that, so do I say it was great or do I say it was not great? It’s interesting.

I would guess, also, that a lot of these systems have to do with sort of the voluntary nature of it. You’re most likely to rate them when you’ve had an extreme situation, probably an extreme negative situation. Right?

Erin: Absolutely. You’re going to see both ends of the extreme, when someone’s very, very happy and someone went out of their way to solve an issue or a problem they were having, or the other end, where it was a disaster all the way around. Anything in the middle is pretty much not represented.

The idea of rating professionals in certain practices around their job is a little less troubling than rating people in a social environment, where you can end up with flame wars between community members and things like that. At least, in a job situation, for example, I recently saw on Zillow you can now rate the real estate agent.

Jared: Oh, wow.

Erin: And you know, personalities can play a part in it, but you’re also looking at what types of houses you were shown and how many people you were up against. Were they able to show you things before they went on the market? Were they very thorough in the types of comps they pulled? You’re really looking at the activity around the process of being served by a real estate agent.

Jared: Yeah, but I would think a lot of that would be biased. If I’m looking for a house and I have a particular sort of house that I’m interested in, and I go to a real estate agent and they just don’t have any houses like that, or all the houses they have like that are out of my price range, or I’m busy and I don’t respond fast enough and the house goes off the market, I might interpret that as somehow reflecting on the agent, when, in fact, it was me.

Erin: Right. Right. That’s why I don’t think it’s a good idea to rate people, because you do get all sorts of biases. It’s one thing to rate the houses and say, “This was great. This was not great.” Usually, you rate things after you buy them or after you experience them, so rating houses is a little weird.

But the experience with a real estate agent is something where you want a good personality fit with you and you want that person to be representing the kinds of houses that you’re interested in.

So it should be treated more like interviewing someone to find the right fit and less about looking at reviews or ratings on this person. It might be a really quick way to filter, but I think it raises more issues than it resolves, especially if people have bad reviews or bad ratings and you’re not really sure why.

Was it the bias of the person because they weren’t being shown the houses they really wanted, because there actually weren’t any? Was it some personality issue? Or was it really just that this person’s bad?

Jared: Right. It’s interesting that you mentioned that because, when we were in New Orleans for the IA Summit a few months ago, a bunch of us went out to dinner. I had picked the restaurant because it looked funky and it was small and the chef seemed to be a character.

When I looked through the reviews, there were a lot of positive reviews but there were a bunch of negative reviews. When I looked at all the negative reviews, they were all because they thought the chef was obnoxious. Right? Apparently, he came out and he was a character and he was grumpy and obnoxious.

To me, that’s the quality of a great little restaurant. I like that. Had I just seen the ratings without the rationale behind them, I would have thought very differently about this restaurant. Everybody who gave it a five or four-star rating went on and on and on about how amazing the food was.

Everybody who gave it a one or two star rating went on about…One dude gave it a one star rating because he apparently, by accident, came to the restaurant half an hour before it was open and the chef yelled at him and told him to get out of the restaurant. And I’m thinking, “Good for the chef.”

Erin: In some ways, reviews get the short end of the stick. Ratings are sexy. They’re very visible. You’ve got these stars in most cases, although people use different icons sometimes, depending on the context. But the power is really in the review because it does give context. It gives you a sense of an understanding of quality. With multiple reviews, you can read a range of opinions about whatever it is you’re looking at.

And I like to look at negative reviews as well, because they really tell you a lot about the character of a place and whether the reviewer is just grumpy or whether there’s really an issue. It lends a real level of credibility and authenticity to the site or the place when the negative reviews are kept and not left out or hidden, so that you’ve got that balance.

Jared: Yeah. When we were studying shoppers on Amazon, one of the things I noticed was that a lot of people went straight for the one and two star reviews…

Erin: I do that myself.

Jared: …and would skim them through and then decide whether the things that those people were grumpy about were the things that made a difference. Like there was someone that was actually shopping for a grill and they’d actually planned to buy the grill, not on Amazon but someplace else. But they were using Amazon as the way to figure out if it was the right grill to buy.

All the complaints were about the packaging, that when it arrived the packaging was all broken up and damaged. But everybody who loved the grill said they had no trouble setting it up and it was fine. It was just people who had gotten damaged packaging. And because this person wasn’t having it shipped, they didn’t care about the packaging.

Erin: Right, right. Sometimes, ratings can be a little simplistic because you only have one axis to give your opinion on, and then you have to clarify that in the review. There is a type of rating that has multiple facets, that lets you target different aspects, whether it’s shipping…

eBay, with some of its merchant ratings, is like that. Or a lot of car sites are like that, where there are a lot of different facets that people can give their opinion on, because different things matter to different people.
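Here’s a minimal sketch of the multi-faceted approach Erin describes, where one transaction gets a separate score per facet so that, say, a shipping problem doesn’t drag down an unrelated facet like item quality. The facet names are illustrative assumptions, not eBay’s actual detailed seller ratings.

```python
# Sketch: average each facet independently instead of collapsing to one star value.
FACETS = ("item_as_described", "shipping_speed", "packaging", "communication")

ratings = [
    {"item_as_described": 5, "shipping_speed": 2, "packaging": 1, "communication": 4},
    {"item_as_described": 5, "shipping_speed": 3, "packaging": 2, "communication": 5},
]


def facet_averages(rows, facets=FACETS):
    """Return a per-facet average so readers can see which aspect was the problem."""
    totals = {f: 0.0 for f in facets}
    counts = {f: 0 for f in facets}
    for row in rows:
        for f in facets:
            if f in row:
                totals[f] += row[f]
                counts[f] += 1
    return {f: round(totals[f] / counts[f], 2) for f in facets if counts[f]}


print(facet_averages(ratings))
# {'item_as_described': 5.0, 'shipping_speed': 2.5, 'packaging': 1.5, 'communication': 4.5}
# A single aggregate star would hide that the product itself is fine.
```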

Jared: That’s got to be tricky to get the facets in a way that they’re short labels but everybody’s clear on what they mean. Because if I ask you about something and you have a different definition in your head than what I had, you’re going to give me a different answer than what I think I’m looking for, right?

Erin: Absolutely. Absolutely. The clarity of what is being rated has to be there or you’re getting apples and oranges, with people thinking different things.

That’s one way the reviews part of the system can help disambiguate that. It also helps to have the facets discussed and talked about, and possibly to have definitions or other things to help disambiguate them. Having them repeated over and over again means that as someone looks through, say, a set of listings, they can start to understand what certain things mean.

Jared: Now, how did you get into working on ratings and reviews?

Erin: Well, in my time as a designer over the last gazillion years, when I worked at Alta Vista and Zip2, we were doing city guides and entertainment guides. Ratings and reviews were part of that offering in the systems that we built for the newspaper sites. We layered in community ratings as well as reviews and ratings from reviewers at the newspapers.

And then, when I was at Yahoo, I managed the platform user experience design team. The ratings, reviews, and reputation platform was part of the group whose user experience I managed, as well as personalization and social. So all those things are related and intertwined.

I’ve spent a lot of time understanding the platform implications of these things and how complicated they can be, working with product managers and developers and understanding the data and what it tells us.

Jared: Yeah, I mean, it seems to me like some systems need…I noticed this at Apple, for example, in the App Store. They really struggle because sometimes people are reviewing the latest release. You see these reviews that say, “Crashes a lot. Crashes a lot.” But then they come out with a new release that fixes it, so none of those reviews are applicable anymore, yet there they are in the tome of reviews.

Apple goes to some length to say, “Well, show me the reviews for the latest version versus previous versions,” but it’s a really hard problem, isn’t it, to figure out how to make the reviews relevant to what you’re actually going to buy when the thing that you buy can change.

Erin: It is. It is. And you know, there are different ways to address it. They’re dealing with it with some filtering. You can do that by date, by release. It comes down to: are there systems you can build that automatically hide the old reviews when a new release comes out? But if the reviews and the ratings are really about the content, and that content hasn’t changed, it’s just some technical something that’s changed, then you’re hiding good information.

So it is a struggle, and it’s something that can’t always be done 100 percent with technology. I’m a firm believer in having real people’s eyes periodically look at the stuff and go through and be cognizant of what the community is saying about…

Whether it’s other community-related content or whether it’s products, you have to be on top of this stuff. You have to be looking at what people are saying. You have to be looking at the analytics. You have to be combining everything together to make sure that what’s being shown to new people coming in is appropriate and right.

It’s hard. If you’re a top tier site like Amazon or Apple where you’ve got millions of people coming in over the course of a month, it’s a lot of data to have to go through. You have to cherry pick, but you could build in systems around new releases of things or something. It’s a hard problem. I don’t have all the answers.
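One way to picture the filtering Erin mentions is a sketch like the following: every review is kept, but readers can scope the list to the current release or a date range, so complaints about an old build don’t misrepresent the fixed version. The field names are assumptions for illustration, not the App Store’s data model.

```python
# Sketch: scope reviews by release or date instead of deleting older feedback.
from dataclasses import dataclass
from datetime import date


@dataclass
class Review:
    stars: int
    text: str
    app_version: str
    posted: date


reviews = [
    Review(1, "Crashes a lot", "2.0", date(2012, 3, 1)),
    Review(5, "Crash fixed, works great now", "2.1", date(2012, 5, 20)),
]


def for_version(items, version):
    """Show only reviews written against a specific release."""
    return [r for r in items if r.app_version == version]


def since(items, cutoff):
    """Alternative scope: only reviews posted on or after a given date."""
    return [r for r in items if r.posted >= cutoff]


print([r.text for r in for_version(reviews, "2.1")])  # ['Crash fixed, works great now']
print(len(since(reviews, date(2012, 5, 1))))          # 1
```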

Jared: Right. I remember reading that Amazon had a problem on their own site with one of the releases of the Kindle where people used the review system as a platform for complaining about DRM. It actually had nothing to do with the Kindle at all.

Erin: That’s still happening.

Jared: Oh, wow.

Erin: And that is an indicator that there are not good mechanisms for general feedback that can be easily found by users. They’re using whatever book they downloaded and the ratings and reviews for that to complain about the technical issues.

And this is the problem, because it reflects on the author and their book and on the aggregate rating for their book, which has serious monetary implications for future sales and for how they get featured in recommendations and things like that.

It’s one of the things I do talk about: there are other implications to having a ratings and reviews system. People will use it to give you feedback that is totally unrelated to the object it’s associated with. And you need to make sure there are other methods that give that technical feedback an outlet. It’s really important for Amazon.

Jared: Yeah, that’s a really good point.

See, this is interesting to me right? Because I’ve been going around telling people that designers often think in terms of discrete activities. We design to upload pictures, and then we design to view the pictures, and we design to share the pictures. We have these different, discrete activities. But from the user’s perspective, there’s an entire experience and those activities sort of meld together. They’re not as distinct as they are in the designer’s mind.

And what I’m hearing from you here is that some of what happens in reviews is a reflection that the experience of the user is not matching up with the desired experience the designers are putting together.

So you know, the designer thinks, “Well, I’m going to create this review thing to review a book,” but the user’s experience is not just reading the book and enjoying the book, but also getting the book to run on their machine and all that stuff. Those things get mixed up and then reported in the reviews. Of course, that causes issues.

Erin: Right, because their context and their top of mind is, “I just bought this book. It doesn’t work. OK, where do I tell people? I’ll just put it with this book.” Then, suddenly…

It actually has nothing to do with the quality of the book itself as a piece of content. So, all they know is, “I know I bought it. I have this page. Here’s a method for me to put my opinion out there around this whole experience.” And so they do. And it has real implications.

Jared: It reminds me of the old joke where the guy goes to the doctor and has his checkup. The doctor says, “I have bad news. You’re going to die in two weeks.”

The patient says, “I’d like a second opinion.”

He says, “OK, you’re ugly, too.”

[laughter]

Jared: To some extent, that’s what’s happening, right? It’s like, “Well, the book was OK, but the downloading sucks so I’m going to give this one star.”

Erin: Right. I’ve seen that happen in iTunes. You see it in the App Store. You see it in Google Play. Pretty much anywhere where there’s content bundled with some kind of technology, you see these kinds of issues reflected in some of the ratings and reviews because of glitches or bad experiences with the technology that doesn’t really have anything to do with the content. But there’s no other mechanism that people can find to let that be known.

Jared: Right. Right.

Erin: And so that is a problem in designing and something that really needs to be thought through when someone is thinking about adding ratings and reviews to their system. They really need to think about, “What is the type of content that I have here? Is it only consumed here? Is there e-commerce involved? Is there technology involved?”

If there are issues in those things, whether it’s packaging or shipping or some technical download or something, is there a way, maybe still associated with that object on that book page, for someone to report those technical issues?

Because, as a merchant or as a person running the site, you want to know if there are issues, because you want to get those fixed as soon as possible.

Jared: Yeah. I remember years and years ago, we were testing an early version of the Target.com site. Our participant had come across reviews while shopping for an alarm clock, and all the alarm clocks had negative reviews.

Erin: Like ’cause they work too well?

Jared: Yeah. The alarm clock reviews, one of them was…Apparently the person had been shipped a different clock than the one they thought they’d ordered. So their review was all about how it didn’t look anything like it did in the picture and it didn’t really work. “I was really unhappy, and when I went to take it back to the store the clerk wouldn’t take it. I really hate Target right now. Don’t buy this clock and don’t ever do business with Target.” Right?

That was the review for this six-dollar alarm clock that was on the site. Every single review for every single alarm clock–they sold ten of them–every single one was a negative review. It was a one or two star review. The participant turned to me in the study and said, “Why do they sell these things if nobody likes them?”

Again, I think some of it was they were unhappy with the service they got at the store, they were unhappy with aspects that had nothing to do with the clock, per se. You know, and the shipping was obviously a problem.

But I think some of it, to me, felt like, “If I bought an alarm clock from Target and it worked like I expected it to, how likely am I going to, three weeks later, go in and say, you know, I’m going to write a fabulous 200 word review of how awesome this alarm clock is?”

Erin: If you were really passionate about how awesome it was, you might do that because you want other people to know how awesome it is.

Jared: But if it’s a functional item, it’s just an alarm clock, and I got it and it does what I expect it to do, but I’m not passionate about it. In fact, for about two minutes every day, I hate it. Other than that, I don’t think about it, right? It’s not going to garner a lot of reviews.

Erin: You never know what resonates with people. I noticed this when I purchased some film yesterday from B & H Photo. They have…You can rate items and leave comments and reviews about items, but once you’re done with the transaction, they use a third-party service to send you a survey to rate the quality of the transaction.

So it is separated from the actual products and you’re given an opportunity to do that. If you give them your email address, then they’ll send you another questionnaire or survey when you get the products, and you can rate: Did you get the right thing shipped? Was it in good condition? Were there packaging issues? All that kind of stuff.

They’ve come up with a system where they’re trying to address some of those things outside the scope of the actual product on the site, and give people opportunities to let those kinds of opinions be heard, whether they’re technical or shipping or whatever, in a forum the company can take action on. It’s a little annoying, but I think it’s good because it does give people an opportunity to let those issues come to light.
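A rough sketch of that separation might look like the following: product reviews stay attached to the catalog item, while transaction and fulfillment feedback is collected against the order, so shipping complaints never pollute the product’s aggregate rating. The structures are illustrative assumptions, not B & H’s actual system.

```python
# Sketch: keep product opinions and fulfillment feedback in separate channels.
from dataclasses import dataclass


@dataclass
class ProductReview:
    product_id: str
    stars: int
    text: str


@dataclass
class TransactionSurvey:
    order_id: str
    shipped_correct_item: bool
    arrived_undamaged: bool
    comments: str


# The product page aggregates only ProductReview records...
product_reviews = [ProductReview("film-120", 5, "Lovely grain, fresh stock")]

# ...while fulfillment issues are routed to operations via the survey.
surveys = [TransactionSurvey("ord-991", True, False, "Box was crushed in transit")]

avg_stars = sum(r.stars for r in product_reviews) / len(product_reviews)
print(avg_stars)                                           # 5.0 -> unaffected by the crushed box
print([s.comments for s in surveys if not s.arrived_undamaged])
```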

Jared: Yeah, I’ve been thinking about rating my children on a regular basis. Coming up with a way of…But I guess we’re not supposed to rate people. But I could…

Erin: You could rate the activities.

Jared: I could rate their activities, the quality of…

Erin: The quality of those activities. The quality of those contributions.

Jared: The quality of their cleaning their rooms.

Erin: Yeah, how clean is it? How much time did they take?

Jared: They could rate me on the quality of nagging for cleaning their rooms.

Erin: Well, and over time, that adds up to a reputation about you without specifically saying you’re great or you suck.

Jared: That’s true. They could share it with other children to make sure I’m never their parent.

I want to thank you for taking some time to talk to us about this. I’m really looking forward to the virtual seminar. You’re going to get into a lot of details on how to implement these things and what to do, what not to do and how to encourage engagement by choosing the right system. Stuff like that, right?

Erin: Absolutely, and in great detail. It’s going to be fun. I think people will get a lot out of it and will be able to take away some tactics that they can put into place right away.

Jared: Fabulous. Well, ladies and gentlemen, if you want to catch Erin’s virtual seminar, it’s going to be June 7th at 1:30 Eastern time, which is 10:30 Pacific time. In the middle of the morning, if you’re in Australia. But you can get the recording. If you miss it on June 7th, you’ll be able to get the recording from our site, UIE.com.

You can find everything out at UIEVS, for UIE virtual seminar, dot com and see her virtual seminar and the other ones that we offer on a regular basis. Erin, thanks for taking the time to talk to me today.

Erin: Thank you for having me.

Jared: And I want to thank the audience. You guys, you’re awesome. You make us great. Thanks again for encouraging our behavior. We’ll talk to you later. Take care.
