What if the future holds a world where robots defy human orders and begin to do their own thing? Someone tells Robot #5 to clean the yard, and Robot #5 decides it is overworked, so it refuses.
It sounds far-fetched, but it may happen: researchers are trying to find the best way to teach robots to say no to human orders. It would seem our future robot overlords will have us doing the dishes, instead of the other way around.
Matthias Scheutz and Gordon Briggs from Tufts University's Human-Robot Interaction Lab are the researchers working on the best way for robots to reject human orders. The reason is that humans are not always rational thinkers and may give orders that put others at risk.
With this being the case, a robot should be able to defy such an order without taking over the world. From what we've come to understand, the robot must first evaluate certain factors before rejecting an order.
The researchers have laid out a few conditions, and they do make sense.
Here are the conditions according to IEEE Spectrum:
1. Knowledge: Do I know how to do X?
2. Capacity: Am I physically able to do X now? Am I normally physically able to do X?
3. Goal priority and timing: Am I able to do X right now?
4. Social role and obligation: Am I obligated based on my social role to do X?
5. Normative permissibility: Does it violate any normative principle to do X?
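The five conditions above amount to a checklist the robot runs through before accepting a command. A minimal sketch of that idea in Python follows; every name and rejection phrase here is illustrative, not taken from the Tufts researchers' actual system:

```python
# Hypothetical sketch: evaluate the five felicity conditions in order
# and reject the command at the first one that fails. All identifiers
# and rejection messages are made up for illustration.

FELICITY_CONDITIONS = [
    ("knowledge", "I don't know how to do that."),
    ("capacity", "I'm not physically able to do that."),
    ("timing", "I can't do that right now."),
    ("social_role", "You aren't authorized to ask me to do that."),
    ("permissibility", "Doing that would violate a principle I follow."),
]

def evaluate_command(action, checks):
    """Return (accepted, response). `checks` maps condition name -> bool."""
    for name, rejection in FELICITY_CONDITIONS:
        if not checks.get(name, False):
            return False, rejection
    return True, "OK, doing: " + action

# Example: the robot knows how to walk forward and is able to, but
# walking forward would carry it off a ledge, so condition 5 fails.
checks = {
    "knowledge": True,
    "capacity": True,
    "timing": True,
    "social_role": True,
    "permissibility": False,  # the action would cause harm
}
accepted, response = evaluate_command("walk forward", checks)
print(accepted, response)
```

The ordering matters in this sketch: the robot only bothers checking whether an action is permissible once it has established that it can actually perform it.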
Looking at number four, some folks might be wondering what it means and whether it poses a danger. It doesn't: it is simply a check that lets the robot work out whether the human giving the order is actually allowed to do so.
Number five basically means the robot should stand its ground when asked to do something harmful. For example, an owner might be angry with his next-door neighbor and decide to order his robot to hurt that person.
In such a situation, we agree a robot must be smart enough to recognize the danger and say no.
At the moment, the orders robots can take from humans are basic but effective. As seen in the video below, the robot is asked to walk forward but chooses not to, because it would fall off the edge.
What these researchers are doing is a good idea, and it should make for better human-robot communication. Hence, this kind of intelligence must be embraced, not feared.