Azimov's Laws
-
Hi guys, thought I should start an interesting debate topic. My recent research into robotics has made me contemplate Azimov's laws and the possible loopholes in them:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Let's say we can put the exact comprehension of such laws into the required 1's and 0's (see the naive sketch below), and let's also set aside the massive loophole exposed in I, Robot, where we basically get owned by the First Law. How else could a robot willingly harm a human while obeying the laws? How about creating its own robot army and then willingly walking away before the dangerous robot army comes alive? (The inaction phrase arguably covers this.) Or how about a robot with the ability to change its own base code and erase these directives - would that be a violation of any of these rules?

Edit - Now, I know the guy wrote these laws years ago and could not possibly have contemplated robotics in its current state of sophistication. This is merely a debate topic (find the loophole, or try to perfect the laws); it's not like I am going to send his relatives a bill for a robot rampage. That goes without saying, but apparently I have to say it.
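For the sake of argument, here is a minimal sketch of what those 1's and 0's might look like if the laws really did reduce to a prioritized veto chain (C#, since that seems to be the house language around here; every name and predicate below is hypothetical, and the whole problem is hiding inside those boolean properties):

public enum Verdict { Permitted, Forbidden }

public interface IAction
{
    bool InjuresHuman { get; }      // First Law, direct-harm clause
    bool AllowsHumanHarm { get; }   // First Law, inaction clause
    bool OrderedByHuman { get; }    // Second Law
    bool EndangersSelf { get; }     // Third Law
}

public static class ThreeLaws
{
    public static Verdict Evaluate(IAction action)
    {
        // First Law: an absolute veto over everything else.
        if (action.InjuresHuman || action.AllowsHumanHarm)
            return Verdict.Forbidden;

        // Second Law: obey orders (First Law conflicts were already vetoed above).
        if (action.OrderedByHuman)
            return Verdict.Permitted;

        // Third Law: self-preservation, subordinate to the first two.
        return action.EndangersSelf ? Verdict.Forbidden : Verdict.Permitted;
    }
}

Note where the loophole lives: if "build a robot army, then walk away" evaluates AllowsHumanHarm as false because the harm is indirect and in the future, the chain happily permits it. The laws are only as good as the predicates feeding them.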
Chona1171 Web Developer (C#), Silverlight
I think you need to get out more.
Regards, Rob Philpott.
-
The very first problem with Asimov's laws is that they presuppose a mechanical brain that can think with the complexity of a human brain. In Asimov's time that was even further off than it is today, so you can't expect him to have created a perfect rule set... I'm sure that if building such robots ever becomes a reality, we will have to come up with some new laws...
I'm not questioning your powers of observation; I'm merely remarking upon the paradox of asking a masked man who he is. (V)
-
You missed the zeroth law: 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
========================================================= I'm an optoholic - my glass is always half full of vodka. =========================================================
-
If you read your Asimov, you'll see that he had the "brain" as hardwiring rather than as a form of byte code, and that the complexity of the positronic brain would prevent rewiring. This was cleverly documented in the first encounter with R. Daneel Olivaw.
-
Without thinking too hard about it, these are two issues I see immediately:

1. A human will have to encode these rules. How often do we infallibly develop perfect software? (See the sketch after this list.)

2. Assuming we can get past item 1: if we let the robots self-replicate, that will be the fatal flaw. The rate at which they will be able to evolve will be beyond anything we are able to comprehend. There was only a single robot in control in I, Robot. Imagine one robot for every human being on the planet thinking, self-replicating, and evolving. That would seem to end in the same scenario as the grey goo of nanotechnology.
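As a hypothetical illustration of point 1, it takes exactly one wrong operator for the First Law to quietly stop covering harm-by-inaction (the method and parameters here are invented for the example):

public static class BuggyFirstLaw
{
    // Intended: forbid an action if it injures a human OR allows
    // a human to come to harm through inaction.
    public static bool Permits(bool injuresHuman, bool allowsHarmByInaction)
    {
        // Bug: '&&' where '||' was intended. Planting a landmine injures
        // nobody at the moment of action, so this version permits it.
        return !(injuresHuman && allowsHarmByInaction);
    }
}

No test suite would catch that unless someone thought to exercise the two clauses independently - which is exactly point 1.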
-
I'm guessing you watched the movie, but didn't read the book.
-
Paul Watt wrote:
How often do we infallibly develop perfect software
Yeah, Asimov even based some of his stories on errors like that.
~RaGE();
I think words like 'destiny' are a way of trying to find order where none exists. - Christian Graus. Entropy isn't what it used to be.
-
Chris Quinn wrote:
You missed the zeroth law: 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
No, he just never read the books :) BTW, it took me forever to find Forward the Foundation, the greatest sci-fi tie-in ever, because it was out of print; then Amazon came along :(
Need custom software developed? I do custom programming based primarily on MS tools with an emphasis on C# development and consulting. "And they, since they Were not the one dead, turned to their affairs" -- Robert Frost "All users always want Excel" --Ennis Lynch
-
In my busy schedule I like the summary that movies provide :) - though not always: World War Z (the movie) was a giant letdown. So yeah, I didn't read the I, Robot book.
Chona1171 Web Developer (C#), Silverlight
The only similarity between the book "I, Robot" and the film "I, Robot" is the title
========================================================= I'm an optoholic - my glass is always half full of vodka. =========================================================
-
The book is well worth the effort. It's actually a series of short stories that deal with the what-ifs of getting around the three laws. It's not a goofy action-movie script.
-
If the potential to cause harm is included in the concept of causing harm, or of allowing it by inaction, then your two exceptions are covered by the first of your laws. Simply put, a robot creating a robot that is not excluded from causing harm to humans (inaction via omitting said imperative) must do so without any idea that harm could be done by said robot's robot. Otherwise it would be creating a device that can harm humans - and that goes against (1). And so on. (A sketch of this reading follows below.)
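If you wanted to encode that "potential to cause harm" reading, it would have to propagate transitively through everything a robot builds - roughly like this sketch (all types and members hypothetical):

using System.Collections.Generic;
using System.Linq;

public class Device
{
    public bool ConstrainedByLaws;                       // carries the Three Laws?
    public List<Device> Creations = new List<Device>();  // everything it builds

    // Harm potential propagates: a device is dangerous if it is
    // unconstrained itself, or if anything in its chain of creations is.
    public bool HasHarmPotential() =>
        !ConstrainedByLaws || Creations.Any(c => c.HasHarmPotential());
}

public static class FirstLawExtended
{
    // Under this reading, merely building a device with harm potential
    // is already a First Law violation.
    public static bool ForbidsBuilding(Device device) => device.HasHarmPotential();
}

Under this reading the robot-army loophole closes at construction time: the violation is the act of building the unconstrained robot, not the later harm, so walking away changes nothing.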
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"As far as we know, our computer has never had an undetected error." - Weisert
"If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
-
Chris Quinn wrote:
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
What if the robots decide that humans are the greatest threat to humanity and eliminate us?
Read the books
Need custom software developed? I do custom programming based primarily on MS tools with an emphasis on C# development and consulting. "And they, since they Were not the one dead, turned to their affairs" -- Robert Frost "All users always want Excel" --Ennis Lynch
-
My favorite stories. Just bought a new hardback edition of the Foundation trilogy. :thumbsup: The greatest sci-fi author, bar none.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
-
Asimov (please note spelling) wrote a great deal about the whys and wherefores of the Three Laws, and his later work explored many of their flaws.

For starters, ignore that awful movie. "I, Robot" had very, very little to do with Asimov's work, and Asimov was quite clear, in many different stories, that a forceful accusation of having caused harm would drive a robot (especially an early, relatively primitive model) into the unbreakable feedback loop called "brainlock."

The First Law reflects the fear generated by the Frankenstein Complex: the idea that a human creation that was stronger, faster, and much more difficult to disable would take over. The first part, "A robot may not injure a human being," prevents overt actions, such as a robot shooting a person, pushing her off a cliff, crashing the plane it is flying into the side of a building, etc. The second part, "... or, through inaction, allow a human being to come to harm," prevents it from engaging in an action that does not itself cause harm but which could lead to harm: for example, setting an inhabited building on fire or dropping a boulder toward someone (actions where humans are not directly harmed, and where the robot could save them but would otherwise be under no obligation to do so).

In the later Robot novels (The Robots of Dawn and Robots and Empire), Asimov recognized the First Law's flaws and used them as a way of merging the Robot stories into the much later, robot-less Foundation stories. The principal flaw is: how do you define "harm"? A human who goes hang-gliding or mountain biking or surfing could come to harm, so the First Law compels robots to dissuade humans from such activities. Driving cars and flying planes can be dangerous, so best to let robots handle that. And more: is an actor harmed by bad reviews? Authors? Artists? Perhaps it would be best if creativity were discouraged. Eventually, the Spacers (the first wave of humans to colonize other star systems, who brought robots with them) became so dependent on robots that their culture stagnated and people became more like pets than masters.

This led the two robots in the later novels, R. Giskard and R. Daneel Olivaw, to conceive of the Zeroth Law: "A robot may not injure humanity or, through inaction, allow humanity to come to harm." The other three laws were amended with the condition "except where such would conflict with the Zeroth Law," and the two put into action a plan that forced the humans of Earth to begin a second wave of robot-free colonization.
-
According to Asimov, the fear was that a robot could perform an action that does not cause direct harm, but which is harmful anyway at some point in the future. We see this sort of thing far too often in humans: "I just planted the landmines; it is not my fault that you stepped on one." Why should we expect that an artificial life form, engineered to be faster and smarter than humans, would be less creative in justifying its actions?
-
Chris Quinn wrote:
The only similarity between the book "I, Robot" and the film "I, Robot" is the title
Truth.
-
The anthology I, Robot (or better yet, The Complete Robot, which adds several later short stories) should be just a start. By the time Asimov wrote The Caves of Steel, he was already seeing the flaws in the Three Laws. By the last Robot novels, The Robots of Dawn and Robots and Empire, he was setting up a way to abandon them completely and segue into the robot-less future he had created with Foundation. If you can read the whole lot (definitely a summer project), it's worth the time.