Code Project
Robot future poses hard questions

The Lounge
announcement
13 Posts, 8 Posters
Roger Wright (#4)
    Michael Bergman wrote:

Where is Isaac Asimov when you need him?

    Feeding worms, like the rest of the clear thinkers of the past 40 years. BTW - I've met Bruce, and got a copy of his landmark book "Applied Cryptography" as a byproduct of the meeting. He's an incredibly smart man, and a hell of a nice guy. He's also a very good speaker, just in case you're looking for one for some special event.

    "A Journey of a Thousand Rest Stops Begins with a Single Movement"

Lost User (Anup Shinde) wrote:

The military is breaking the rules... don't they have the discipline they are famous for? That's what I take from this. The basic rule of robotics is that it cannot be used to harm humans... and I read somewhere that Samsung robots are being used to guard borders. What are those robots doing there? Of course they are not greeting the people who cross the border. The government seems to be following the same old policy it did during all the recent wars, and now it is extending that policy to robots... "Hurt them and heal them"... If some day we find robots misbehaving like in the movie I, Robot, the root cause will be us humans; we follow beliefs that are often irrational....

Brady Kelly (#5)

      Anup Shinde wrote:

      The basic rule about robotics is that it cannot be used to harm humans

      Whose rule is that?

Lost User (#6)

The Three Laws of Robotics, written by Isaac Asimov... remember?

hairy_hats (#7)

          Anup Shinde wrote:

          The basic rule about robotics is that it cannot be used to harm humans

The problem is getting a robot to determine whether the action it is planning will harm a human, and how. Working out that shooting someone will harm them is pretty obvious, but getting a robot to decide whether or not its future actions will harm someone mentally is extremely difficult.

Lost User (#8)

            Brady Kelly wrote:

Anup Shinde wrote: The basic rule about robotics is that it cannot be used to harm humans

Whose rule is that?

I think the idea is that this is not under our control. Humans can't control themselves, let alone their creations... Maybe robots will be smarter than us and leave no room for error. :) ;)

Lost User (#9)

If robots can work out the "mental" effects of their actions... wow, they are already more mature than humans. It's irrational to map their thinking onto human thinking.

Brady Kelly (#10)

That's my point. They aren't authoritative laws at all. Don't get me wrong, I'm all for them, but it is very naive to expect everyone who builds a robot to abide by these 'laws', let alone even to know who Asimov was.

hairy_hats (#11)

True. I think one Asimov story had a male robot getting close to, and eventually sleeping with, an unhappy woman because he could see that it would leave her less hurt mentally. I can't even start to comprehend the complexity of the AI programming that would need: the vision, hearing, emotional empathy, the understanding of human thinking, and so on.

Ray Cassick (#12)

                    Brady Kelly wrote:

                    very naive to expect

That's why it should be law, and taught in schools right next to the often ignored (and slept through) ethics classes.


My Blog
FFRF


gvisgr8 wrote:

Check out this BBC article: Robot future poses hard questions. Is it a sign of serious trouble in the future? :~

"I THINK, THEREFORE I AM DANGEROUS..." GV

Kastellanos Nikos (#13)

"If an autonomous robot kills someone, whose fault is it?" said Professor Winfield. "Right now, that's not an issue because the responsibility lies with the designer or operator of that robot; but as robots become more autonomous that line of responsibility becomes blurred."

Are we talking about true AI here? In that case the designers can claim that the robot is capable of making decisions on its own, so they are not responsible. Just as today they claim mechanical failure and nobody goes to jail, they will blame the robot for the accident. But in the case of an intelligent robot, should we grant it some civil rights? The right to defend itself in court and to have an attorney? (And, of course, the right to remain silent. :) ) Punishing a robot has no meaning for us, but the robot might be able to argue that the real responsibility falls on a real person. So, if some people wish to deny responsibility for robo-accidents, this will automatically grant some basic rights to robots and AI. Do you agree? Could this open the way for more robo-rights?
