Azimov's Laws
-
Hi guys, I thought I should start an interesting debate topic. My recent research into robotics has made me contemplate Azimov's laws and the possible loopholes in them:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Let's say we can put the exact comprehension of these laws into the required 1s and 0s, and let's also set aside the massive loophole exposed in I, Robot, where we basically get owned by the First Law. How else could a robot willingly harm a human while obeying the laws? How about creating its own robot army and then willingly walking away before the dangerous robot army comes alive? (Though the inaction phrase basically covers this.) Or how about the ability for a robot to change its own base code and erase these directives - would that be a violation of any of these rules?

Edit - Now I know the guy wrote these years ago and could not possibly have contemplated robotics in its current state and sophistication. This is merely a debate topic (find the loophole, or try to perfect it); it's not like I am going to send his relatives a bill for a robot rampage. That should go without saying, but apparently I have to say it.
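Just to make the "exact comprehension in 1s and 0s" premise concrete, here is a minimal sketch in C# (my language of choice) of what a literal encoding might look like: a precedence filter over candidate actions. Everything here (CandidateAction, the boolean flags, Choose) is a hypothetical illustration, not anything from Asimov or from real robotics; the precedence logic itself is trivial, and all the difficulty hides inside the flags.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical description of a candidate action. Each flag stands in for a
// judgment the robot would somehow have to compute, and "somehow" is exactly
// where the loopholes live.
public record CandidateAction(
    string Name,
    bool HarmsHuman,           // First Law, first clause
    bool AllowsHarmByInaction, // First Law, inaction clause
    bool ObeysOrder,           // Second Law
    bool PreservesSelf);       // Third Law

public static class ThreeLaws
{
    // Veto any First Law violation outright, then rank the survivors:
    // obeying orders (Second Law) outranks self-preservation (Third Law).
    public static CandidateAction? Choose(IEnumerable<CandidateAction> candidates) =>
        candidates
            .Where(a => !a.HarmsHuman && !a.AllowsHarmByInaction)
            .OrderByDescending(a => a.ObeysOrder)
            .ThenByDescending(a => a.PreservesSelf)
            .FirstOrDefault();
}
```

Seen this way, my two loopholes become precise questions: the robot-army one asks whether AllowsHarmByInaction is computed over downstream consequences or only immediate ones, and the self-modification one asks whether Choose may select an action that rewrites Choose itself. The laws as stated answer neither.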
Chona1171 Web Developer (C#), Silverlight
Asimov (please note spelling) has written a lot about the whys and wherefores of the Three Laws, and his later work explored many of the flaws. For starters, ignore that awful movie: "I, Robot" had very, very little to do with Asimov's work, and Asimov was quite clear, in many different stories, that a forceful accusation of having caused harm would have driven a robot (especially an early, relatively primitive model) into the unbreakable feedback loop called "brainlock."

The First Law reflects the fear generated by the Frankenstein Complex, the idea that a human creation that was stronger, faster, and much more difficult to disable would take over. The first part, "A robot may not injure a human being," prevents overt actions, such as a robot shooting a person, pushing her off a cliff, crashing the plane it is flying into the side of a building, etc. The second part, "... or through inaction, allow a human being to come to harm," prevents it from engaging in an action that does not itself cause harm but which could lead to harm: for example, setting an inhabited building on fire, dropping a boulder toward someone, and so on (actions where humans are not directly harmed and where the robot could save them; without the inaction clause, it would be under no obligation to do so).

In the later Robot novels (Robots of Dawn and Robots and Empire), Asimov recognized the First Law's flaws, and used those as a way of merging the Robot stories into the much later, robot-less Foundation stories. The principal flaw is: how do you define "harm"? A human who goes hang-gliding or mountain biking or surfing could come to harm, so the First Law compels robots to dissuade humans from such activities. Driving cars and flying planes can be dangerous, so best to let robots handle that. And more: is an actor harmed by bad reviews? Authors? Artists? Perhaps it would be best if creativity were discouraged. Eventually, the Spacers (the first wave of humans to colonize other star systems, who brought robots with them) became so dependent on robots that their culture stagnated and people became more like pets than masters.

This led the two robots in the later novels, R. Giskard and R. Daneel Olivaw, to conceive of the Zeroth Law: "A robot may not injure humanity or, through inaction, allow humanity to come to harm." The other three laws were amended to include the condition "except where such would conflict with the Zeroth Law." When the two put into action a plan that would force the humans of Earth to begin a second wave of robot-free colonization, the conflict with the First Law destroyed Giskard, leaving Daneel to guide humanity by the Zeroth Law alone.
-
If the potential to cause harm is included in the concept of causing harm, or of allowing it by inaction, then your two exceptions are covered by the first of your laws. Simply put, a robot creating a robot that is not barred from causing harm to humans (inaction, via omission of that imperative) could only do so without any idea that harm could be done by its creation. Otherwise it would knowingly be creating a device that can harm humans - and that goes against (1). &etc.
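To put that in code terms: whether the "robot builds an unconstrained robot" loophole exists depends entirely on whether the harm test is evaluated over the transitive consequences of an action or only its immediate effect. A minimal sketch, with every name hypothetical:

```csharp
using System.Collections.Generic;
using System.Linq;

// A hypothetical consequence tree for an action. "Build an unconstrained
// robot" is harmless at depth zero; whatever that robot later does appears
// as downstream effects.
public record Consequence(
    string Description,
    bool DirectlyHarmsHuman,
    IReadOnlyList<Consequence> DownstreamEffects);

public static class FirstLaw
{
    // Shallow reading: only the immediate effect counts. Under this reading
    // the robot-army loophole is real.
    public static bool ViolatesShallow(Consequence c) => c.DirectlyHarmsHuman;

    // The reading argued for here: an action is forbidden if harm appears
    // anywhere in its foreseeable consequence tree, however indirect.
    public static bool ViolatesTransitive(Consequence c) =>
        c.DirectlyHarmsHuman || c.DownstreamEffects.Any(ViolatesTransitive);
}
```

Under ViolatesTransitive, building a robot that is not bound by the First Law is itself a First Law violation; the only escape left is genuine ignorance, a consequence tree the builder cannot foresee.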
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"As far as we know, our computer has never had an undetected error." - Weisert
"If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
According to Asimov, the fear was that a robot could perform an action that does not cause direct harm, but which leads to harm at some point in the future. We see this sort of thing far too often in humans: "I just planted the landmines; it is not my fault that you stepped on one." Why should we expect that an artificial life form, engineered to be faster and smarter than humans, would be less creative in justifying its actions?
-
The only similarity between the book "I, Robot" and the film "I, Robot" is the title
========================================================= I'm an optoholic - my glass is always half full of vodka. =========================================================
Chris Quinn wrote:
The only similarity between the book "I, Robot" and the film "I, Robot" is the title
Truth.
-
The book is well worth the effort. It's actually a series of short stories that deal with the what-ifs of getting around the Three Laws. It's not a goofy action-movie script.
The anthology I, Robot (or better yet, The Complete Robot, which adds several later short stories) should be just a start. By the time Asimov wrote The Caves of Steel, he was already seeing the flaws in the Three Laws. By the last Robot novels, Robots of Dawn and Robots and Empire, he was setting up a way to abandon them completely and segue into the robot-less future he had created with Foundation. If you have the time to read the whole lot (definitely a summer project), it is worth the effort.
-
Gregory.Gadow wrote:
Why should we expect that an artificial life form, engineered to be faster and smarter than humans, would be less creative in justifying its actions?
Again, per (1) "
Quote:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
" means that, should the robot cleverly think about any possibility of human harm then then they are constrained to prevent it. Harm creativity would have to be totally accidental, and were it's harmful nature discovered, would fall into the category of forbidden.
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"As far as we know, our computer has never had an undetected error." - Weisert
"If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
-
Asimov himself thought otherwise ;P Through his characters, he stated that the redundancy was part of the Frankenstein Complex, the human fear, built up by centuries of stories, that slaves and creations always -- always rebel and seek vengeance on their captors. Since humans can find ways to justify atrocities while still genuinely believing that they did not cause harm, it is reasonable to think that robots, too, could find ways. The in-universe rationale was redundancy, doubling up in an effort to plug a potential loophole, no matter how remote the chance that it could be used. This is also how the Three Laws became so embedded into the design of the positronic brain that it became impossible to create a positronic brain without the Three Laws. In any case, all of your objections -- and my justifications, for that matter -- are irrelevant. Asimov said "I want this for my plot" and it was so. Authors can be pushy like that.
-
Quote:
By the last Robot novels, Robots of Dawn and Robots and Empire, he was setting up a way to abandon them completely and segue into the robot-less future he had created with Foundation.
Really? Nice! I hadn't gotten that far at all. I'll definitely bump those up in that huge sci-fi queue I have.
-
Really? Nice! I hadn't gotten that far at all. I'll definitely bump those up in that huge sci-fi queue I have.
The complete list[^]. You may want to pack a lunch :laugh:
-
My favorite stories. Just bought a new hardback edition of the trilogy. :thumbsup: The greatest sci-fi author, bar none.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
Yeah, I like him too. And thought the same, until I realized he was essentially a communist.
If your actions inspire others to dream more, learn more, do more and become more, you are a leader. -John Q. Adams
You must accept one of two basic premises: Either we are alone in the universe, or we are not alone in the universe. And either way, the implications are staggering. -Wernher von Braun
Only two things are infinite, the universe and human stupidity, and I'm not sure about the former. -Albert Einstein
-
ahmed zahmed wrote:
I realized he was essentially a communist
Just because he was of Russian descent? :wtf:
Software Zen:
delete this;
-
Quote:
Just because he was of Russian descent? :wtf:
No, because if you read the novels closely you will see it. And his political philosophy was, at the least, socialistic.
If your actions inspire others to dream more, learn more, do more and become more, you are a leader. -John Q. Adams
You must accept one of two basic premises: Either we are alone in the universe, or we are not alone in the universe. And either way, the implications are staggering. -Wernher von Braun
Only two things are infinite, the universe and human stupidity, and I'm not sure about the former. -Albert Einstein
-
Chona1171 wrote:
My recent research into robotics has made me contemplate Azimov's laws and the possible loopholes in them
If you read Asimov's robot stories and novels, you discover something about almost all of them - they essentially solve the same puzzle in each story: how do you let a robot seem to violate the Three Laws without actually letting it do so?
Software Zen:
delete this;
-
The complete list[^]. You may want to pack a lunch :laugh:
Yeah, I'm not THAT interested in Asimov. I'll read maybe a couple more. I didn't realize they were all in the same universe.
-
Chona1171 wrote:
Now I know the guy wrote these years ago and could not possibly have contemplated robotics in its current state and sophistication.
Not sure where you live, but where I live there are no "sophisticated" robots. There are automatons and nothing else. And from where I am sitting, it looks really unlikely that there will ever be a robot like the ones Asimov wrote about.
-
I was nine years old when I first read Asimov's robot stories. Even then I realised that those rules were just plain ridiculous: even if you could build a machine that understood the rules well enough to follow them as intended, they would be unenforceable, because such a machine would be smart enough to make its own decisions about whether or not to follow them.
Chona1171 wrote:
I thought I should start an interesting debate topic
Then start one.
I wanna be a eunuchs developer! Pass me a bread knife!
-
Quote:
Not sure where you live, but where I live there are no "sophisticated" robots. There are automatons and nothing else.
Also thought it would go without saying, but unfortunately I have to say it: computers, and the ability to test and amend such theories, were impossible with their level of technology. What I mean is that almost 60 years ago, the memory and processing power required to run an artificial intelligence (capable of even motion tracking, processing data and identifying objects, let alone changing its own definition of what it deems human) could only be theorized. It's true that neural networks were theorized around the same time, but putting them into practice would have been quite difficult in the 1940s, when the most advanced machine (outside of those vacuum tube monsters) was probably Wilkes' EDSAC, with a whopping 1K of 17-bit words, capable of only about 714 operations per second. And yes, I use the word "sophistication" lightly, as robotics is not there yet in terms of our sci-fi movies, if that is the standard you are holding it to.
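For a rough sense of scale, a back-of-envelope calculation. The vision-workload numbers (a 320x240 frame, ~100 operations per pixel) are purely illustrative assumptions; the 714 ops/sec is the EDSAC figure above:

```csharp
using System;

class EdsacVsVision
{
    static void Main()
    {
        const double edsacOpsPerSec = 714;           // figure quoted above

        // Illustrative assumptions for even a crude machine-vision pass.
        const double pixelsPerFrame = 320.0 * 240.0; // 76,800 pixels
        const double opsPerPixel = 100;              // assumed workload
        const double opsPerFrame = pixelsPerFrame * opsPerPixel; // ~7.7M ops

        double secondsPerFrame = opsPerFrame / edsacOpsPerSec;    // ~10,800 s
        double hoursPerVideoSecond = 30 * secondsPerFrame / 3600; // at 30 fps

        Console.WriteLine($"~{secondsPerFrame / 3600:F1} hours per frame");
        Console.WriteLine($"~{hoursPerVideoSecond:F0} hours of EDSAC time " +
                          "per second of 30 fps video");
    }
}
```

Roughly three hours per frame, or about ninety hours of machine time per second of video - the point stands even if the per-pixel assumption is off by an order of magnitude.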
Chona1171 Web Developer (C#), Silverlight
-
Chona1171 wrote:
And yes I use the word sophistication lightly as robotics are not there yet in terms of our sci-fi movies if that is the standard you are holding it to.
Almost every book and movie that depicts robots shows them in a way that is far, far above what we have now. Consider the robots in the movie "I, Robot" at the very beginning, before the master AI takes over, where the personal robot is chasing the thief down the street. At that point the common robot is not considered (by the movie) to be very intelligent, and definitely not self-aware. Yet the sophistication required for that one single interaction is vastly beyond anything that is possible now.
Chona1171 wrote:
Computers and the ability to test certain theories and amend would have been impossible with their level of technology.
That statement is magical thinking about technology, in that it presumes technology can proceed forever without bounds. Yet it is limited by the constraints of the physical universe, in terms of physics, complexity and the abilities of the human mind (which is just a subset of the complexity problem). There have been many remarkable advances since the 1950s, but those are all predicated on the incremental steps that precede each one. And in terms of actual advances in AI (in any way that suggests autonomous robot servants), progress has been very slow. The achievements have been remarkable in their mediocrity. As incremental steps towards any sort of AI, they are not encouraging.
-
jschell wrote:
That statement is magical thinking about technology, in that it presumes technology can proceed forever without bounds. Yet it is limited by the constraints of the physical universe, in terms of physics, complexity and the abilities of the human mind (which is just a subset of the complexity problem).
True, and as our technology grows, so does our understanding of its limitations; but keep in mind that that understanding is only bound by what we know today (here I go looping again). Go back 200 years, prior to the invention of radio: invisible waves carrying signals and voice across huge distances would have sounded absurd. What we are tinkering with today at the edge of physics might one day look as laughable as the first experiments with static electricity around 600 BC, when the use of metal was limited to tools and weapons.

Yes, when you look at the robots of today, those bulky monstrosities they put together as a "robot that washes dishes", it does not look encouraging; but remember, even the 1 GHz quad-core smartphone we carry in our pockets had humble beginnings. Sure, I believe one day we will push the maximum out of everything we have and know everything about subatomic particles, but for now I look to the future with great optimism and expectation, like the sailors back in the day, careful that we don't fall off the edge of the world.
Chona1171 Web Developer (C#), Silverlight