A few years ago, the curious folks at the Radiolab show/podcast asked some kids to hold a Barbie doll, a live hamster, and a Furby robot upside down. Not surprisingly, the children were unfazed by the Barbie, holding it on its head for a long time. When it was the hamster’s turn, the kids were quick to release the squirming animal, for fear that they were hurting it (no surprise here either). The interesting part came when they held the Furby. The children said that, even though they knew it was just a toy, they worried that they were “hurting” the robot (which loudly protested being upside down), suggesting that they felt some empathy for the furry machine.
Now, a new study by a team of Japanese researchers shows that, in certain situations, children ~~are actually horrible little brats~~ may not be as empathetic towards robots as we’d previously thought, with gangs of unsupervised tykes repeatedly punching, kicking, and shaking a robot in a Japanese mall.
The researchers—from ATR Intelligent Robotics and Communication Laboratories, Osaka University, Ryukoku University, and Tokai University, in Japan—patrolled a public shopping complex in Osaka with a remotely operated Robovie 2 (a robot that is, incidentally, no stranger to abuse). Whenever somebody obstructed the robot’s path, it would politely ask the human to step aside. If the human didn’t listen, the robot moved in the opposite direction. Over the course of the study, researchers found that children were sometimes all too eager to give the robot a hard time. Particularly when in packs and unsupervised, the youngsters would intentionally block Robovie’s way.
Just look at these seemingly adorable hellmonsters refusing to let poor Robovie past:
According to the study, “Escaping from Children’s Abuse of Social Robots,” obstruction like this wasn’t nearly the worst of it. The tots’ behavior often escalated, and sometimes they’d get violent, hitting and kicking Robovie (below). They also engaged in verbal abuse, calling the robot “bad words.” (The researchers did not disclose what bad words may have been used, but they mention that one kid called the robot “idiot” eight times.)
The Japanese group didn’t just document the bullying behavior, though; they wanted to find clever ways of helping the robot avoid the abusive situations. They started by developing a computer simulation and statistical model of the children’s abuse towards the robot, showing that it happens primarily when the kids are in groups and no adults are nearby.
Next, they designed an abuse-evading algorithm to help the robot avoid situations where tiny humans might gang up on it. Literally tiny humans: the robot is programmed to run away from people who are below a certain height and escape in the direction of taller people. When it encounters a human, the system calculates the probability of abuse based on interaction time, pedestrian density, and the presence of people above or below 1.4 meters (4 feet 6 inches) in height. If the robot is statistically in danger, it changes course towards a more crowded area or a taller person. That makes it much more likely an adult is on hand to intervene when one of the little brats decides to pound the robot’s head with a bottle (which only happened a couple of times).
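To make the idea concrete, here is a toy sketch of that escape logic in Python. Only the inputs come from the article (interaction time, pedestrian density, and the 1.4-meter height cutoff); the function names, risk weights, and the simple linear risk score are all illustrative assumptions, not the researchers’ actual model.

```python
# Toy sketch of the abuse-evading logic described above.
# The 1.4 m cutoff and the three risk inputs come from the article;
# the weights and threshold below are made-up illustrative values.
from dataclasses import dataclass

CHILD_HEIGHT_M = 1.4  # height threshold from the study

@dataclass
class Pedestrian:
    height_m: float    # pedestrian height in meters
    distance_m: float  # distance from the robot in meters

def abuse_probability(interaction_time_s, density, pedestrians):
    """Crude risk estimate: grows with interaction time, crowd density,
    and nearby children; shrinks when adults (>= 1.4 m) are present."""
    children = sum(1 for p in pedestrians if p.height_m < CHILD_HEIGHT_M)
    adults = sum(1 for p in pedestrians if p.height_m >= CHILD_HEIGHT_M)
    risk = 0.02 * interaction_time_s + 0.1 * density + 0.2 * children
    risk -= 0.3 * adults
    return max(0.0, min(1.0, risk))  # clamp to a probability

def plan_escape(pedestrians, risk, threshold=0.5):
    """If risk exceeds the threshold, head for the nearest adult;
    with no adults around, fall back to a more crowded area."""
    if risk < threshold:
        return None  # keep current course
    adults = [p for p in pedestrians if p.height_m >= CHILD_HEIGHT_M]
    if adults:
        return min(adults, key=lambda p: p.distance_m)
    return "crowded_area"

# Two short kids blocking the robot, one adult a bit farther away:
crowd = [Pedestrian(1.2, 2.0), Pedestrian(1.1, 3.0), Pedestrian(1.8, 5.0)]
risk = abuse_probability(interaction_time_s=60, density=0.5, pedestrians=crowd)
target = plan_escape(crowd, risk)  # -> the 1.8 m adult
```

The key design point, per the paper, is that the robot never tries to defend itself; it simply steers toward people tall enough to be adults, outsourcing discipline to them.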
In a second paper, “Why Do Children Abuse Robots?,” based on the same Japanese mall experiment, the researchers interviewed the abusive children about their behavior. When questioned, 74 percent of the kids described the robot as “human-like” and only 13 percent as “machine-like.” Half of them said that they believed that their behavior was “stressful or painful” for the robot.
So basically, most of these kids perceive the robot they’re abusing as lifelike, and then just go ahead and abuse it anyway. While that’s a little disturbing, it appears to be in line with some child psychology research on animal abuse. Empathy for other entities may be something we learn as we age. And as for grown-ups? It looks like adults are reluctant to abuse robots that respond in a lifelike way, and empathic adults even more so.
If our actions towards robots reflect our empathy, then there’s some hope for humanity. When friendly Canadian hitchhiking robot hitchBOT was vandalized (by adults, apparently) during its most recent road trip, the incident prompted an outpouring of human support. Prior to that, the robot had successfully traveled thousands of miles by relying on the kindness of strangers, very few of whom, we assume, were unsupervised children.
“Escaping from Children’s Abuse of Social Robots,” by Dražen Brščić, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Takayuki Kanda from ATR Intelligent Robotics and Communication Laboratories and Osaka University, and “Why Do Children Abuse Robots?”, by Tatsuya Nomura, Takayuki Uratani, Kazutaka Matsumoto, Takayuki Kanda, Hiroyoshi Kidokoro, Yoshitaka Suehiro, and Sachie Yamada from Ryukoku University, ATR Intelligent Robotics and Communication Laboratories, and Tokai University, were presented at the 2015 ACM/IEEE International Conference on Human-Robot Interaction.
Dr. Kate Darling researches robot ethics and intellectual property at the MIT Media Lab. She’s a Fellow at the Harvard Berkman Center for Internet and Society and the Yale Information Society Project. Follow her on Twitter: @grok_