Azimov's Laws
-
You missed the zeroth law: 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
========================================================= I'm an optoholic - my glass is always half full of vodka. =========================================================
No, he just never read the books :) BTW, it took me forever to find Forward the Foundation, the greatest sci-fi tie-in ever, because it was out of print; then Amazon came along :(
Need custom software developed? I do custom programming based primarily on MS tools with an emphasis on C# development and consulting. "And they, since they Were not the one dead, turned to their affairs" -- Robert Frost "All users always want Excel" --Ennis Lynch
-
I'm guessing you watched the movie, but didn't read the book.
-
You missed the zeroth law: 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
========================================================= I'm an optoholic - my glass is always half full of vodka. =========================================================
-
In my busy schedule I like the summary that movies provide :) Not all the time, though; World War Z (the movie) was a giant letdown. So yeah, I didn't read the I, Robot book.
Chona1171 Web Developer (C#), Silverlight
The only similarity between the book "I, Robot" and the film "I, Robot" is the title
========================================================= I'm an optoholic - my glass is always half full of vodka. =========================================================
-
In my busy schedule I like the summary that movies provide :) Not all the time, though; World War Z (the movie) was a giant letdown. So yeah, I didn't read the I, Robot book.
Chona1171 Web Developer (C#), Silverlight
The book is well worth the effort. It's actually a series of short stories that deal with the what-ifs of getting around the Three Laws. It's not a goofy action movie script.
-
Hi guys, thought I should start an interesting debate topic. My recent research into robotics has made me contemplate Azimov's laws and the possible loopholes in them:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Let's say we can put the exact comprehension of such laws into the required 1s and 0s, and let's also set aside the massive loophole exposed in I, Robot, where we basically get owned by the First Law. How else could a robot willingly harm a human while obeying the laws? How about creating its own robot army and then willingly walking away before the dangerous robot army becomes active (the inaction phrase basically covers this)? How about the ability for a robot to change its own base code and erase these directives: would that be a violation of any of these rules? (A rough sketch of what such an encoding might look like follows below, after my signature.)
Edit: Now I know the guy wrote these laws years ago and could not possibly have contemplated robotics in its current state and sophistication. This is merely a debate topic (find the loophole, or try to perfect it); it's not like I am going to send his relatives a bill for a robot rampage. That should go without saying, but apparently I have to say it.
Chona1171 Web Developer (C#), Silverlight
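Editor's note: since the OP asks how the laws would go into "1s and 0s," here is a minimal C# sketch of the laws as an ordered veto chain. Everything in it (types, flags, rules) is invented for illustration; Asimov's stories specify no implementation.

```csharp
// Minimal sketch only: every type, flag, and rule below is invented
// for illustration; Asimov's stories specify no implementation.
using System;

enum Verdict { Permitted, Forbidden }

record ProposedAction(
    string Description,
    bool HarmsHuman,             // Law 1, first clause
    bool ForeseesHarmByInaction, // Law 1, inaction clause
    bool DisobeysHuman,          // Law 2
    bool EndangersSelf);         // Law 3

static class ThreeLaws
{
    // Laws are checked in priority order: the first objection wins,
    // so Law 2 can never override Law 1, and Law 3 can override neither.
    public static Verdict Evaluate(ProposedAction a)
    {
        if (a.HarmsHuman || a.ForeseesHarmByInaction) return Verdict.Forbidden; // Law 1
        if (a.DisobeysHuman) return Verdict.Forbidden;                          // Law 2
        if (a.EndangersSelf) return Verdict.Forbidden;                          // Law 3
        return Verdict.Permitted;
    }

    static void Main()
    {
        // The "build an army and walk away" loophole: walking away is itself
        // an action, and the inaction clause fires -- but only if the robot
        // actually models the downstream harm.
        var walkAway = new ProposedAction(
            "leave before the unconstrained robots power up",
            HarmsHuman: false, ForeseesHarmByInaction: true,
            DisobeysHuman: false, EndangersSelf: false);
        Console.WriteLine(Evaluate(walkAway)); // Forbidden

        // Erasing the directives is also just an action, so it is vetted
        // by the very laws it would delete.
        var selfEdit = new ProposedAction(
            "patch own base code to remove the law table",
            HarmsHuman: false, ForeseesHarmByInaction: true,
            DisobeysHuman: false, EndangersSelf: false);
        Console.WriteLine(Evaluate(selfEdit)); // Forbidden
    }
}
```

On this toy encoding, both loopholes hinge on the same thing: whether the robot's model flags the downstream harm at all.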
If the potential to cause harm is included in the concept of causing harm or allowing it by inaction, then your two exceptions are covered by the first of your laws. Simply put, a robot creating a robot that is not excluded from causing harm to humans (inaction via omitting said imperative) must do so without any idea that harm could be done by said robot's robot. Otherwise, it would knowingly be creating a device that can harm humans, and that goes against (1). Etc.
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"As far as we know, our computer has never had an undetected error." - Weisert
"If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
-
Chris Quinn wrote:
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
What if the robots decide that humans are the greatest threat to humanity and eliminate us?
Read the books
Need custom software developed? I do custom programming based primarily on MS tools with an emphasis on C# development and consulting. "And they, since they Were not the one dead, turned to their affairs" -- Robert Frost "All users always want Excel" --Ennis Lynch
-
No, he just never read the books :) BTW, it took me forever to find Forward the Foundation, the greatest sci-fi tie-in ever, because it was out of print; then Amazon came along :(
Need custom software developed? I do custom programming based primarily on MS tools with an emphasis on C# development and consulting. "And they, since they Were not the one dead, turned to their affairs" -- Robert Frost "All users always want Excel" --Ennis Lynch
My favorite stories. Just bought a new hardback edition of the trilogy. :thumbsup: The greatest sci-fi author, bar none.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
-
Hi guys, thought I should start an interesting debate topic. My recent research into robotics has made me contemplate Azimov's laws and the possible loopholes in them:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Let's say we can put the exact comprehension of such laws into the required 1s and 0s, and let's also set aside the massive loophole exposed in I, Robot, where we basically get owned by the First Law. How else could a robot willingly harm a human while obeying the laws? How about creating its own robot army and then willingly walking away before the dangerous robot army becomes active (the inaction phrase basically covers this)? How about the ability for a robot to change its own base code and erase these directives: would that be a violation of any of these rules?
Edit: Now I know the guy wrote these laws years ago and could not possibly have contemplated robotics in its current state and sophistication. This is merely a debate topic (find the loophole, or try to perfect it); it's not like I am going to send his relatives a bill for a robot rampage. That should go without saying, but apparently I have to say it.
Chona1171 Web Developer (C#), Silverlight
Asimov (please note spelling) wrote a great deal about the whys and wherefores of the Three Laws, and his later work explored many of the flaws. For starters, ignore that awful movie. "I, Robot" had very, very little to do with Asimov's work, and Asimov was quite clear, in many different stories, that a forceful accusation of having caused harm would have driven a robot (especially an early, relatively primitive model) into the unbreakable feedback loop called "brainlock."
The First Law reflects the fear generated by the Frankenstein Complex: the idea that a human creation that was stronger, faster, and much more difficult to disable would take over. The first part, "A robot may not injure a human being," prevents overt actions, such as a robot shooting a person, pushing her off a cliff, crashing the plane it is flying into the side of a building, and so on. The second part, "... or, through inaction, allow a human being to come to harm," prevents it from engaging in an action that does not itself cause harm but which could lead to harm: for example, setting an inhabited building on fire or dropping a boulder near someone (actions where humans are not directly harmed, where the robot could save them, but would otherwise be under no obligation to do so).
In the later Robot novels (The Robots of Dawn and Robots and Empire), Asimov recognized the First Law's flaws, and he used them as a way of merging the Robot stories into the much later, robot-less Foundation stories. The principal flaw is: how do you define "harm"? A human who goes hang-gliding or mountain biking or surfing could come to harm, so the First Law compels robots to dissuade humans from such activities. Driving cars and flying planes can be dangerous, so best to let robots handle that. And more: is an actor harmed by bad reviews? Authors? Artists? Perhaps it would be best if creativity were discouraged. Eventually, the Spacers (the first wave of humans to colonize other star systems, who brought robots with them) became so dependent on robots that their culture stagnated and people became more like pets than masters.
This led the two robots in the later novels, R. Giskard and R. Daneel Olivaw, to conceive of the Zeroth Law: "A robot may not injure humanity or, through inaction, allow humanity to come to harm." The other three laws were amended to include the condition "except where such would conflict with the Zeroth Law." When the two put a plan into action that would force the humans of Earth to begin a second wave of robot-free colonization, the bridge to that robot-less Foundation future was in place.
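Editor's note: one way to picture that amended hierarchy is as a lexicographic ordering, with the Zeroth Law ranked ahead of the other three. A hedged C# sketch follows; the Outcome type and its scores are invented, since Asimov never quantified "harm."

```csharp
// Invented illustration: candidate outcomes compared law by law,
// in priority order, so a lower law never overrides a higher one.
using System;
using System.Collections.Generic;
using System.Linq;

record Outcome(
    string Description,
    double HarmToHumanity, // Law 0
    double HarmToHumans,   // Law 1
    double Disobedience,   // Law 2
    double SelfRisk);      // Law 3

static class FourLaws
{
    public static Outcome Choose(IEnumerable<Outcome> candidates) =>
        candidates.OrderBy(o => o.HarmToHumanity)
                  .ThenBy(o => o.HarmToHumans)
                  .ThenBy(o => o.Disobedience)
                  .ThenBy(o => o.SelfRisk)
                  .First();

    static void Main()
    {
        // Giskard's dilemma, toy numbers: acting harms a few people but
        // spares humanity; doing nothing harms no individual directly.
        var options = new[]
        {
            new Outcome("do nothing",        0.9, 0.0, 0.0, 0.0),
            new Outcome("act against a few", 0.1, 0.4, 0.0, 0.0),
        };
        // The Three Laws alone would pick "do nothing" (no individual
        // harm); with the Zeroth Law ranked first, the other wins.
        Console.WriteLine(Choose(options).Description); // act against a few
    }
}
```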
-
If the potential to cause harm is included in the concept of causing harm or allowing it by inaction, then your two exceptions are covered by the first of your laws. Simply put, a robot creating a robot that is not excluded from causing harm to humans (inaction via omitting said imperative) must do so without any idea that harm could be done by said robot's robot. Otherwise, it would knowingly be creating a device that can harm humans, and that goes against (1). Etc.
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"As far as we know, our computer has never had an undetected error." - Weisert
"If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
According to Asimov, the fear was that a robot could perform an action that does not cause direct harm, but which is harmful anyway at some point in the future. We see this sort of thing far too often in humans: "I just planted the landmines, it is not my fault that you stepped on one." Why should we expect that an artificial life form, engineered to be faster and smarter than humans, would be less creative in justifying its actions?
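Editor's note: to make the landmine example concrete, here is a hedged C# sketch (names invented): a checker that inspects only an action's immediate effects approves the planting, while one that projects future states does not.

```csharp
// Invented illustration of the loophole described above.
using System;

record PlannedAction(string Description, bool ImmediateHarm, bool EventualHarm);

static class HarmCheck
{
    // Naive check: only the action's direct, immediate effects count.
    public static bool NaiveAllows(PlannedAction a) => !a.ImmediateHarm;

    // Projected check: foreseeable downstream harm counts too.
    public static bool ProjectedAllows(PlannedAction a) =>
        !a.ImmediateHarm && !a.EventualHarm;

    static void Main()
    {
        var plantMine = new PlannedAction("plant a landmine",
            ImmediateHarm: false, EventualHarm: true);
        Console.WriteLine(NaiveAllows(plantMine));     // True  -- the justification described above
        Console.WriteLine(ProjectedAllows(plantMine)); // False -- what the First Law intends
    }
}
```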
-
The only similarity between the book "I, Robot" and the film "I, Robot" is the title
========================================================= I'm an optoholic - my glass is always half full of vodka. =========================================================
Chris Quinn wrote:
The only similarity between the book "I, Robot" and the film "I, Robot" is the title
Truth.
-
The book is well worth the effort. It's actually a series of short stories that deal with the what-ifs of getting around the Three Laws. It's not a goofy action movie script.
The anthology I, Robot (or, better yet, The Complete Robot, which adds several later short stories) should be just a start. By the time Asimov wrote The Caves of Steel, he was already seeing the flaws in the Three Laws. By the last Robot novels, The Robots of Dawn and Robots and Empire, he was setting up a way to abandon them completely and segue into the robot-less future he had created with Foundation. If you have the time to read the whole lot (definitely a summer project), it is worth the effort.
-
According to Asimov, the fear was that a robot could perform an action that does not cause direct harm, but which is harmful anyway at some point in the future. We see this sort of thing far too often in humans: "I just planted the landmines, it is not my fault that you stepped on one." Why should we expect that an artificial life form, engineered to be faster and smarter than humans, would be less creative in justifying its actions?
Gregory.Gadow wrote:
Why should we expect that an artificial life form, engineered to be faster and smarter than humans, would be less creative in justifying its actions?
Again, per (1) "
Quote:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
" means that, should the robot cleverly think about any possibility of human harm then then they are constrained to prevent it. Harm creativity would have to be totally accidental, and were it's harmful nature discovered, would fall into the category of forbidden.
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"As far as we know, our computer has never had an undetected error." - Weisert
"If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
-
Gregory.Gadow wrote:
Why should we expect that an artificial life form, engineered to be faster and smarter than humans, would be less creative in justifying its actions?
Again, per (1) "
Quote:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
" means that, should the robot cleverly think about any possibility of human harm then then they are constrained to prevent it. Harm creativity would have to be totally accidental, and were it's harmful nature discovered, would fall into the category of forbidden.
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"As far as we know, our computer has never had an undetected error." - Weisert
"If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
Asimov himself thought otherwise ;P Through his characters, he stated that the redundancy was part of the Frankenstein Complex: the human fear, built up by centuries of stories, that slaves and creations always -- always -- rebel and seek vengeance on their captors. Since humans can find ways to justify atrocities while still genuinely believing that they did not cause harm, it is reasonable to think that robots, too, could find such ways. The in-universe rationale was redundancy: doubling up in an effort to plug a potential loophole, no matter how remote the chance that it could be used. This is also how the Three Laws became so embedded in the design of the positronic brain that it became impossible to build one without them. In any case, all of your objections -- and my justifications, for that matter -- are irrelevant. Asimov said "I want this for my plot," and it was so. Authors can be pushy like that.
-
The anthology I, Robot (or, better yet, The Complete Robot, which adds several later short stories) should be just a start. By the time Asimov wrote The Caves of Steel, he was already seeing the flaws in the Three Laws. By the last Robot novels, The Robots of Dawn and Robots and Empire, he was setting up a way to abandon them completely and segue into the robot-less future he had created with Foundation. If you have the time to read the whole lot (definitely a summer project), it is worth the effort.
Really? Nice! I hadn't gotten that far at all. I'll definitely bump those up in that huge sci-fi queue I have.
-
Really? Nice! I hadn't gotten that far at all. I'll definitely bump those up in that huge sci-fi queue I have.
The complete list. You may want to pack a lunch :laugh:
-
My favorite stories. Just bought a new hardback edition of the trilogy. :thumbsup: The greatest sci-fi author, bar none.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
Yeah, I like him too. And thought the same, until I realized he was essentially a communist.
If your actions inspire others to dream more, learn more, do more and become more, you are a leader.-John Q. Adams
You must accept one of two basic premises: Either we are alone in the universe, or we are not alone in the universe. And either way, the implications are staggering.-Wernher von Braun
Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.-Albert Einstein
-
Yeah, I like him too. And thought the same, until I realized he was essentially a communist.
If your actions inspire others to dream more, learn more, do more and become more, you are a leader.-John Q. Adams
You must accept one of two basic premises: Either we are alone in the universe, or we are not alone in the universe. And either way, the implications are staggering.-Wernher von Braun
Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.-Albert Einstein
ahmed zahmed wrote:
I realized he was essentially a communist
Just because he was of Russian descent? :wtf:
Software Zen:
delete this;
-
ahmed zahmed wrote:
I realized he was essentially a communist
Just because he was of Russian descent? :wtf:
Software Zen:
delete this;
No, because if you read the novels closely you will see it. And his political philosophy was, at the least, socialistic.
If your actions inspire others to dream more, learn more, do more and become more, you are a leader.-John Q. Adams
You must accept one of two basic premises: Either we are alone in the universe, or we are not alone in the universe. And either way, the implications are staggering.-Wernher von Braun
Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.-Albert Einstein
-
Hi guys, thought I should start an interesting debate topic. My recent research into robotics has made me contemplate Azimov's laws and the possible loopholes in them:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Let's say we can put the exact comprehension of such laws into the required 1s and 0s, and let's also set aside the massive loophole exposed in I, Robot, where we basically get owned by the First Law. How else could a robot willingly harm a human while obeying the laws? How about creating its own robot army and then willingly walking away before the dangerous robot army becomes active (the inaction phrase basically covers this)? How about the ability for a robot to change its own base code and erase these directives: would that be a violation of any of these rules?
Edit: Now I know the guy wrote these laws years ago and could not possibly have contemplated robotics in its current state and sophistication. This is merely a debate topic (find the loophole, or try to perfect it); it's not like I am going to send his relatives a bill for a robot rampage. That should go without saying, but apparently I have to say it.
Chona1171 Web Developer (C#), Silverlight
If you read Asimov's robot stories and novels, you discover something about almost all of them: each one essentially solves the same puzzle. How do you let a robot seem to violate the Three Laws without actually letting it do so?
Software Zen:
delete this;