If you were in a burning building and a robot appeared to rescue you, would you follow it?
A new study out of Georgia Tech Research Institute (GTRI) found that the answer is yes — even if it isn’t in your best interest.
“People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault,” said Alan Wagner, a senior research engineer at GTRI. “In our studies, test subjects followed the robot’s directions even to the point where it might have put them in danger had this been a real emergency.”
In the experiment, researchers built a “Rescue Robot” and had it lead 42 volunteers to a conference room, where they were told to fill out a questionnaire. Along the way, the robot sometimes lost its way or broke down.
Soon after the volunteers reached the conference room, it filled with artificial smoke. The Rescue Robot, now brightly lit with red LEDs and using its arms as pointers, entered the room and led the subjects toward the back of the building rather than toward a doorway marked with an exit sign.
Even though the robot had clearly demonstrated earlier that it was capable of making errors, the subjects still trusted it to lead them to safety.
“We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn’t follow it during the simulated emergency,” said Paul Robinette, a research engineer who conducted the study. “Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We absolutely didn’t expect this.”
Only when the robot made errors during the simulated rescue attempt did some of the subjects question its directions.
The researchers believe the robot took on the role of an authority figure, whom people tend to trust in times of crisis.