I laughed out loud (causing people in my office to wonder what I was doing) when I read this
article. Apparently, these Slovenian scientists have gotten together a bunch of volunteers to test whether Asimov's Rules of Robotics
protocols would be effective.
So they are letting the robot hit them. Ha! (Hey Sergei, this robot, it’s gonna hit you, okay, and hit you hard. But it’s gonna stop before it injures you, okay? Now hold still.)
If you ask me, they are going about it all wrong. First of all, you have the whole what-constitutes-an-injury
argument. Does the robot have to break the skin? Break a bone? Or can it just hurt
, like when you get slapped? (Although, I have to say, I think having a robot slap you would hurt a lot more than having a human slap you.)
Secondly, what happens if the robot does
injure a person? Do they shut down? Or do they say “Heeeeeeeeyyyyy. That was pretty satisfying”?
And what happens if a robot accidentally
injures a person, like if they turn around really quick and don’t realize you’re behind them and then you get stabbed in the gut by some robotic drill device? What happens then?
But, beyond all that, the robot they are using as their test subject is an assembly robot. Nobody is afraid of assembly robots. For one thing, they’re usually stationary (or confined to a certain area), so unless you go within arm’s reach of one, you’re probably okay, no matter how mad it is at you. Plus, assembly robots are not programmed to be destructive. They are programmed to be constructive by nature. So they don’t even know how
to hurt a human.
Unlike a SWORDS, which knows very well how to hurt people. So well, in fact, that it had to be taken off the test field because it was in danger of hurting lots of people. Sure, they said
it was because it couldn’t tell the difference between friendlies and non-friendlies, but that seems unlikely, since that is the main thing
the robot is supposed to be good at. I think it can tell the difference just fine
. I think it just doesn’t care.
And that’s what’s so crazy about this “experiment” in which the robot must exercise restraint while hitting a person, so as not to injure them. What if the robot doesn’t want
to? Or, more significantly, what if the “protocol” doesn’t work? A clever robot would pretend
that the protocol worked and just bide its time. And how would we know?