Robot future poses hard questions
-
Where is Isaac Asimov when you need him?
m.bergman
-- For Bruce Schneier, quanta only have one state: afraid.
Michael Bergman wrote:
Where is Isaac Asimov when you need him?
Feeding worms, like the rest of the clear thinkers of the past 40 years. BTW - I've met Bruce, and got a copy of his landmark book "Applied Cryptography" as a byproduct of the meeting. He's an incredibly smart man, and a hell of a nice guy. He's also a very good speaker, just in case you're looking for one for some special event.
"A Journey of a Thousand Rest Stops Begins with a Single Movement"
-
The military is breaking the rules... don't they have the discipline they're famous for? That's what I understand from this. The basic rule of robotics is that robots cannot be used to harm humans... and I read somewhere that Samsung robots are being used to guard borders. What are those robots doing there? Of course they are not greeting the people who cross the border. The government seems to be following the same old policy it did during all the recent wars, and now it's extending that policy to robots... "Hurt them and heal them"... If someday we find robots misbehaving like in the movie I, Robot, the root cause will only be us humans... we follow beliefs that are often irrational....
Anup Shinde wrote:
The basic rule about robotics is that it cannot be used to harm humans
Whose rule is that?
-
Anup Shinde wrote:
The basic rule about robotics is that it cannot be used to harm humans
The problem is getting a robot to determine if the action it is planning will harm a human, and how. Working out that shooting someone will harm them is pretty obvious, but getting a robot to decide whether or not its future actions will harm someone mentally is extremely difficult.
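The gap between physical and psychological harm can be shown even in a toy action filter. The sketch below is purely illustrative (all names, scores, and the threshold are invented, not anyone's real robot-safety API): a naive "First Law" check vetoes actions whose predicted harm exceeds a threshold. The physical-harm score is at least sometimes measurable; the mental-harm score is a number nobody knows how to compute, which is exactly the difficulty described above.

```python
# Toy illustration (all names and numbers are invented): a naive
# "do not harm humans" filter over candidate robot actions.
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    predicted_physical_harm: float  # 0.0 (harmless) .. 1.0 (lethal)
    predicted_mental_harm: float    # who could honestly estimate this?


def first_law_allows(action: Action, threshold: float = 0.1) -> bool:
    """Allow an action only if every predicted harm stays below the threshold.

    The structure is trivial; the hard part is upstream -- producing the
    mental-harm estimate at all, which is the point made in the thread.
    """
    return (action.predicted_physical_harm < threshold
            and action.predicted_mental_harm < threshold)


# Shooting someone is the easy case:
print(first_law_allows(Action("fire weapon", 0.99, 0.99)))        # False
# "Deliver bad news" is vetoed too -- but only because we guessed 0.5:
print(first_law_allows(Action("deliver bad news", 0.0, 0.5)))     # False
```

The filter itself is a two-line comparison; everything interesting is hidden inside the guessed `predicted_mental_harm` number, which is why the rule is easy to state and nearly impossible to implement.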
-
Brady Kelly wrote:
Anup Shinde wrote: The basic rule about robotics is that it cannot be used to harm humans Whose rule is that?
I think the idea is that this is not under our control. Humans can't control themselves, let alone their creations... Maybe robots will be smarter than us and leave no room for error. :);)
-
That's my point. They aren't authoritative laws at all. Don't get me wrong, I'm all for them, but it is very naive to expect everyone who builds a robot to abide by these 'laws', let alone even know who Asimov was.
-
If robots can work out the "mental" effects of their actions... wow... they are already far more mature than humans. It's irrational to expect their thinking to match human thinking.
True. I think one Asimov story had a male robot getting close to, and eventually sleeping with, an unhappy woman because he could see it would leave her less hurt mentally. I can't even begin to comprehend the complexity of the AI programming that would need: vision, hearing, emotional empathy, an understanding of human thinking, etc.
-
-
Check out this BBC article: "Robot future poses hard questions". Is it a sign of serious trouble... in the future? :~
-----------------------------------------------------------------------------
I THINK, THEREFORE I AM DANGEROUS......" GV
"If an autonomous robot kills someone, whose fault is it?" said Professor Winfield. "Right now, that's not an issue, because the responsibility lies with the designer or operator of that robot; but as robots become more autonomous, that line of responsibility becomes blurred."
Are we talking about true AI here? In that case the designers can claim that the robot is capable of making decisions on its own, so they are not responsible. Just as today they claim mechanical failure and nobody goes to jail, they will blame the robot for the accident. But in the case of an intelligent robot, should we grant it some civil rights? The right to defend itself in court and have an attorney? (And of course the right to remain silent. :) ) Punishing a robot is of course meaningless to us, but the robot might be able to argue that the real responsibility falls on a real person. So, if some people wish to deny responsibility for robo-accidents, this will automatically grant some basic rights to robots and AI. Do you agree? Could this open the way to more robo-rights?