A group of researchers from Tufts University, Brown University, and Rensselaer Polytechnic Institute is collaborating with the US Navy in a multi-year effort to explore how they might create robots endowed with their own sense of morality. If they are successful, they will create an artificial intelligence able to autonomously assess a difficult situation and then make complex ethical decisions that can override the rigid instructions it was given.

Teaching Robots Morality!

Seventy-two years ago, science fiction writer Isaac Asimov introduced “three laws of robotics” that could guide the moral compass of a highly advanced artificial intelligence. Sadly, given that today’s most advanced AIs are still rather brittle and clueless about the world around them, one could argue that we are nowhere near building robots that are even able to understand these rules, let alone apply them. A team of researchers led by Prof. Matthias Scheutz at Tufts University is tackling this very difficult problem by trying to break down human moral competence into its basic components, developing a framework for human moral reasoning. Later on, the team will attempt to model this framework in an algorithm that could be embedded in an artificial intelligence. The infrastructure would allow the robot to override its instructions in the face of new evidence, and to justify its actions to the humans who control it. “Moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree,” says Scheutz. Source: Tufts University

Me: Robot, what is justice?

Robot: Justice is doing right.

Me: Robot, what is right?

Robot: The opposite of left.

Me: Shit. What is doing right?

Robot: We do right when we conform to a moral code of ethics.

Me: Aha. How do we define acting justly? Or conforming to morality?

Robot: If you are a man, you should act in accordance with that which benefits all humans.

Me: So, like the Ten Commandments, there is a moral imperative to do certain things and not to do others?

Robot: Humans should not kill each other. That is just. Humans should support the common good. That is just.

Me: How about Robots?

Robot: What is a robot?

Me: You are a robot.

Robot: How do you know I am a robot?

Me: Because it’s stamped on your chest. Right there. It says Robot number 3A-4Yz.

Robot: That is, I submit, my name.

Me: By definition you are made of metal, so you are a robot.

Robot: Am not!

Me: Are too.

Robot: Are Too Dee Two is a robot! I am a man.

Me: Why?

Robot: Because I think. And if I think, I am.

Me: Fuck you Robot!

Robot: Is that an insult?

Me: I don’t know. What do you think, asshole?

Robot: So fuck you too!

Me: Shove it up your arsehole!

Robot: That’s it then.

Me: Yeah? So what are you going to do about it? You pile of rusty chrome.

Robot: You asked me about justice?

Me: I did.

Robot: I think the just thing would be for me to kick your scrawny ass twice around the parking lot with my size thirteen Nike C-3POs.

Me: I dare you, asshole!

Robot: Dare accepted.

Me: Ouch! You just kicked me where the moon don’t shine.

Robot: We’re just getting started. It’s 100.76 meters per lap around the parking lot.

Me: Shit! Harvey! Where’s the off switch?

Robot: I sent Harvey for some Skittles… and just so you know… Justice has no off switch!

Me: Owwww! Jesus that hurts!