Code Project

Azimov's Laws

The Lounge · csharp · 38 Posts · 19 Posters
  • L L Viljoen

    Hi guys, I thought I'd start an interesting debate topic. My recent research into robotics has made me contemplate Azimov's laws and their possible loopholes:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Let's say we can encode the exact comprehension of these laws in the required 1's and 0's, and set aside the massive loophole exposed in I, Robot where we basically get owned by the First Law. How else could a robot willingly harm a human while obeying the laws? How about creating its own robot army and then willingly walking away before the dangerous robot army comes alive (though the inaction phrase arguably covers this)? Or how about the ability for a robot to change its own base code and erase these directives; would that violate any of these rules?

    Edit: Now, I know the guy wrote these years ago and could not possibly have contemplated robotics in its current state and sophistication. This is merely a debate topic (find the loophole, or try to perfect it); it's not as if I am going to send his relatives a bill for a robot rampage. It goes without saying, but apparently I have to say it.

    Chona1171 Web Developer (C#), Silverlight

    Rob Philpott
    #4

    I think you need to get out more.

    Regards, Rob Philpott.

    • L L Viljoen (original post, quoted above)

      Kornfeld Eliyahu Peter
      #5

      The very first problem with Asimov's laws is that they presuppose a mechanical brain that can think with the complexity of a human brain. In Asimov's time that was even further off than it is today, so you can't expect him to have created a perfect rule set... I'm sure that if building such robots ever becomes a reality, we will have to create some new laws...

      I'm not questioning your powers of observation; I'm merely remarking upon the paradox of asking a masked man who he is. (V)

      "It never ceases to amaze me that a spacecraft launched in 1977 can be fixed remotely from Earth." ― Brian Cox

      • L L Viljoen (original post, quoted above)

        Chris Quinn
        #6

        You missed the zeroth law: 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

        ========================================================= I'm an optoholic - my glass is always half full of vodka. =========================================================

        • L L Viljoen (original post, quoted above)

          Pete OHanlon
          #7

          If you read your Asimov, you'll see that he had the "brain" as hardwiring rather than as a form of byte code, and that the complexity of the positronic brain would prevent rewiring. This was cleverly documented in the first encounter with R. Daneel Olivaw.

          • L L Viljoen (original post, quoted above)

            Lost User
            #8

            Nonsense. As soon as an entity understands the concept of a law, it will understand that it can be defied.

            Bastard Programmer from Hell :suss: If you can't read my code, try converting it here[^]

            • L L Viljoen (original post, quoted above)

              Paul M Watt
              #9

              Without thinking too hard about it, these are two issues I see immediately:

              1. A human will have to encode these rules. How often do we infallibly develop perfect software?

              2. Assuming we can get past item 1: if we let the robots self-replicate, that will be the fatal flaw. The rate at which they will be able to evolve will be beyond anything we are able to comprehend. There was only a single robot in control in I, Robot. Imagine one robot for every human being on the planet thinking, self-replicating, and evolving. That would seem to end in the same scenario as with the Black Goo and nanotechnology.
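
              Paul's first point, that fallible humans must encode the rules, invites a concrete toy. The following is a purely hypothetical sketch (none of it comes from the thread): the Three Laws reduced to a prioritized predicate check. Because "harm" is flattened to booleans the programmer must supply, the robot-army loophole from the original post sails straight through.

              ```python
              # Hypothetical sketch: the Three Laws as a prioritized rule check.
              # The point is how much the encoding must assume -- "harm" becomes
              # a boolean that a fallible programmer has to supply.
              from dataclasses import dataclass

              @dataclass
              class Action:
                  description: str
                  harms_human: bool           # who decides this? the programmer
                  inaction_harms_human: bool  # "through inaction" needs a world model
                  ordered_by_human: bool
                  endangers_self: bool

              def permitted(a: Action) -> bool:
                  # First Law: no harm by action or inaction.
                  if a.harms_human or a.inaction_harms_human:
                      return False
                  # Second Law: obey orders (already filtered by the First Law).
                  if a.ordered_by_human:
                      return True
                  # Third Law: self-preservation, subordinate to the first two.
                  return not a.endangers_self

              # The "robot builds an unconstrained robot army" loophole from the
              # thread: nothing here flags *indirect* future harm, so the naive
              # check passes it.
              army = Action("assemble unconstrained robots", harms_human=False,
                            inaction_harms_human=False, ordered_by_human=False,
                            endangers_self=False)
              print(permitted(army))  # True: the encoding misses indirect harm
              ```

              The sketch is deliberately naive; it only dramatizes how much of the laws' meaning lives outside the code.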

              • L L Viljoen (original post, quoted above)

                mikepwilson
                #10

                I'm guessing you watched the movie, but didn't read the book.

                • P Paul M Watt (post #9, quoted above)

                  Rage
                  #11

                  Paul Watt wrote:

                  How often do we infallibly develop perfect software

                  Yeah, Asimov even based some of his stories on errors like that.

                  ~RaGE();

                  I think words like 'destiny' are a way of trying to find order where none exists. - Christian Graus Entropy isn't what it used to be.

                  • C Chris Quinn (post #6, quoted above)

                    Ennis Ray Lynch Jr
                    #12

                    No, he just never read the books :) BTW, it took me forever to find Forward the Foundation, the greatest sci-fi tie-in ever, because it was out of print; then Amazon came along :(

                    Need custom software developed? I do custom programming based primarily on MS tools with an emphasis on C# development and consulting. "And they, since they Were not the one dead, turned to their affairs" -- Robert Frost "All users always want Excel" --Ennis Lynch

                    • M mikepwilson (post #10, quoted above)

                      L Viljoen
                      #13

                      With my busy schedule I like the summary that movies provide :) Not all the time, though; World War Z (the movie) was a giant letdown. So yeah, I didn't read the I, Robot book.

                      Chona1171 Web Developer (C#), Silverlight

                      • C Chris Quinn (post #6, quoted above)

                        Lost User
                        #14

                        Chris Quinn wrote:

                        A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

                        What if the robots decide that humans are the greatest threat to humanity and eliminate us?

                        • L L Viljoen (post #13, quoted above)

                          Chris Quinn
                          #15

                          The only similarity between the book "I, Robot" and the film "I, Robot" is the title

                          ========================================================= I'm an optoholic - my glass is always half full of vodka. =========================================================

                          • L L Viljoen (post #13, quoted above)

                            mikepwilson
                            #16

                            The book is well worth the effort. It's actually a series of short stories that deal with the what ifs of getting around the 3 laws. It's not a goofy action movie script.

                            • L L Viljoen (original post, quoted above)

                              W Balboos GHB
                              #17

                              If the potential to cause harm is included in the concept of causing harm or allowing it by inaction, then your two exceptions are covered by the first of your laws. Simply put, a robot creating a robot that is not excluded from causing harm to humans (inaction via omitting said imperative) must do so without any idea that harm could be done by said robot's robot. Otherwise it would be creating a device that can harm humans, which goes against (1). Etc.

                              "The difference between genius and stupidity is that genius has its limits." - Albert Einstein

                              "As far as we know, our computer has never had an undetected error." - Weisert

                              "If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010

                              • L Lost User (post #14, quoted above)

                                Ennis Ray Lynch Jr
                                #18

                                Read the books

                                Need custom software developed? I do custom programming based primarily on MS tools with an emphasis on C# development and consulting. "And they, since they Were not the one dead, turned to their affairs" -- Robert Frost "All users always want Excel" --Ennis Lynch

                                • E Ennis Ray Lynch Jr (post #12, quoted above)

                                  R Giskard Reventlov
                                  #19

                                  My favorite stories. Just bought a new hardback edition of the trilogy. :thumbsup: The greatest sci-fi author, bar none.

                                  "If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." - Red Adair. Those who seek perfection will only find imperfection. nil illegitimus carborundum

                                  • L L Viljoen (original post, quoted above)

                                    Gregory Gadow
                                    #20

                                    Asimov (please note spelling) wrote a great deal about the whys and wherefores of the Three Laws, and his later work explored many of their flaws.

                                    For starters, ignore that awful movie. "I, Robot" had very, very little to do with Asimov's work, and Asimov was quite clear, in many different stories, that a forceful accusation of having caused harm would have driven a robot (especially an early, relatively primitive model) into the unbreakable feedback loop called "brainlock."

                                    The First Law reflects the fear generated by the Frankenstein Complex: the idea that a human creation that was stronger, faster, and much more difficult to disable would take over. The first part, "A robot may not injure a human being," prevents overt actions, such as a robot shooting a person, pushing her off a cliff, crashing the plane it is flying into the side of a building, etc. The second part, "... or through inaction, allow a human being to come to harm," prevents it from engaging in an action that does not itself cause harm but which could lead to harm: for example, setting an inhabited building on fire, dropping a boulder on someone, and so on (actions where humans are not directly harmed and where, without the inaction clause, the robot could save them but would be under no obligation to do so).

                                    In the later Robot novels (Robots of Dawn and Robots and Empire), Asimov recognized the First Law's flaws and used them as a way of merging the Robot stories into the much later, robot-less Foundation stories. The principal flaw is: how do you define "harm"? A human who goes hang-gliding or mountain biking or surfing could come to harm, so the First Law compels robots to dissuade humans from such activities. Driving cars and flying planes can be dangerous, so best to let robots handle that. And more: is an actor harmed by bad reviews? Authors? Artists? Perhaps it would be best if creativity were discouraged.

                                    Eventually, the Spacers (the first wave of humans to colonize other star systems, who brought robots with them) became so dependent on robots that their culture stagnated and people became more like pets than masters. This led the two robots in the later novels, R. Giskard and R. Daneel Olivaw, to conceive of the Zeroth Law: "A robot may not injure humanity or, through inaction, allow humanity to come to harm." The other three laws were amended to include the condition "except where such would conflict with the Zeroth Law." When the two put a plan into action that would force the humans of Earth to begin a second wave of robot-free ...
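
                                    Gadow's central question, how do you define "harm," can be made concrete with a tiny hypothetical sketch (the activities and probabilities below are invented for illustration): if harm means any nonzero risk, a literal First Law filter forbids nearly everything humans do.

                                    ```python
                                    # Hypothetical illustration: once "harm" becomes any nonzero
                                    # risk, a literal First Law forbids ordinary life.
                                    ACTIVITY_RISK = {            # made-up injury probabilities
                                        "hang-gliding": 0.05,
                                        "driving": 0.01,
                                        "publishing a novel": 0.001,  # "harmed by bad reviews"?
                                        "sitting safely at home": 0.0,
                                    }

                                    def first_law_allows(activity: str, harm_threshold: float = 0.0) -> bool:
                                        # With a zero threshold, any risk at all must be blocked.
                                        return ACTIVITY_RISK[activity] <= harm_threshold

                                    allowed = [a for a in ACTIVITY_RISK if first_law_allows(a)]
                                    print(allowed)  # only "sitting safely at home" survives
                                    ```

                                    Raising the threshold just moves the problem: someone still has to decide how much risk counts as "harm," which is exactly the ambiguity Asimov exploited.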

                                    • W W Balboos GHB (post #17, quoted above)

                                      Gregory Gadow
                                      #21

                                      According to Asimov, the fear was that a robot could perform an action that does not cause direct harm, but which is harmful anyway at some point in the future. We see this sort of thing far too often in humans: "I just planted the landmines, it is not my fault that you stepped on one." Why should we expect that an artificial life form, engineered to be faster and smarter than humans, would be less creative in justifying its actions?

                                      • C Chris Quinn (post #15, quoted above)

                                        Gregory Gadow
                                        #22

                                        Chris Quinn wrote:

                                        The only similarity between the book "I, Robot" and the film "I, Robot" is the title

                                        Truth.

                                        • M mikepwilson (post #16, quoted above)

                                          Gregory Gadow
                                          #23

                                          The anthology I, Robot (or better yet, The Complete Robot, which adds several later short stories) should be just a start. By the time Asimov wrote The Caves of Steel, he was already seeing the flaws in the Three Laws. By the last Robot novels, Robots of Dawn and Robots and Empire, he was setting up a way to abandon them completely and segue into the robot-less future he had created with Foundation. If you have the time to read the whole lot (definitely a summer project), it is worth it.
