Yelp for Robots

Can Customer Reviews Establish Trust Between People and a Self-Driving Car?

“We prefer brand new cars, but experienced drivers. So, what about self-driving cars?”

Trust is fundamental to any type of social interaction. As robots become better at assisting people in various tasks, it is essential to investigate how trust can be established between robots and the general public. A recent Wired article suggests that companies such as Uber and Airbnb have developed a reviews-and-ratings-based ecosystem that reintroduced “trust” to the public. I was curious whether people’s attitudes toward reviews of services would carry over to robots once a similar review platform existed. So on a recent trip to New York, I illustrated a fictional customer review platform and asked people on the street for their reactions to self-driving cars.


I showed participants a mock-up of a car service and explained that it was for self-driving cars. I told them that customers could review their trips (just as with Uber or Yelp) and asked which car they’d choose to ride in based on the reviews:

a. A brand new car with no reviews yet.

b. A car with 15 good reviews and no bad reviews.

c.  A car with 105 reviews and a few bad reviews.


Among a dozen participants, 8 said that Car C seemed the most reliable, even though they were told that the cars sync data all the time, which means they are all equally “experienced” and “intelligent.” When asked why option C felt the most reliable despite having some bad reviews, a number of participants responded that imperfect reviews simply seemed more realistic. It seems that the review platform led people to relate to and evaluate a robotic car much the way they would evaluate human drivers.


This small experiment is just a snapshot of how a review system could affect people’s perception of the reliability of robots. The overall positive response made me think about a bigger question: could a Yelp for robots be a key to developing the public’s trust in service robots?



The Importance Of Acknowledgements

The greatest frustration when talking to Siri is that you have no idea how much of your sentence the system has “heard.” This often leaves users repeating themselves, confused and frustrated. Though “listening” and “loading” animations indicate that the program is working, they just don’t feel enough like a message is getting through.

In conversation, we as humans constantly exchange acknowledgments. We nod, or say “Yeah” or “O.K.,” while a partner is talking, and as a result our conversational partner knows whether we are following along and paying attention. These examples are just the tip of the acknowledgment iceberg: on the whole, we express acknowledgment through voice, tone, expression, gesture, eye gaze, timing, and a range of small, unconscious muscle movements. These cues add up to a subtle communicative exchange.

Acknowledgments do not require thinking. In fact, they are often made before we have processed the information we just received. Like a natural reflex, acknowledgments show whether we are engaged in a conversation. The system is so natural, and so wonderful, that even when you intentionally try to turn it off, you cannot. And there is something deeply satisfying about getting an instant response from your interaction partner.

Technology designers are aware of the importance of steady acknowledgments in communication and have been trying to reproduce the behavior since the dawn of the computer. The clicking sounds of keys and buttons, the blinking lights, the animated “loading wheels”: these are the kinds of “artificial acknowledgments” we receive from computers. And for a while, they seemed to function just well enough. But then came voice interfaces such as Siri, Cortana, and Echo. By trying to make interaction seamless and more human, they in fact made it much harder to use by cutting out those visual cues.

The very advances in talking computers have made their failure to offer subtle cues stand out all the more starkly. By trying to resemble natural human interaction, voice interfaces raise the expectation that the computer will communicate as fluidly and expressively as a human. But because of the minimal feedback we receive, it is hard to tell whether the computer is engaging with us and whether we should pause so the program can “catch up with its thoughts.” No matter how human it sounds, it fails to give the instant satisfaction that comes from seeing a conversational partner nod.



