Code Project: The Lounge
Interesting...

Forum: The Lounge
Tags: html, com, question, career
75 Posts, 29 Posters, 1 Watching
R Giskard Reventlov wrote:

    Should Robot Cars Be Programmed To Kill You If It Will Save More Lives? [^]

    "If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures

Lost User (#66)

    I think we should give cars the ability to leap-frog before worrying about smarts.

Lost User wrote:

      cosmogon wrote:

      I believe people die when their "time has come" and not at any other time. That's why people sometimes miraculously survive accidents where all odds seem to be against them. Like someone being disturbed by something on their way to catch a plane that crashes. Such "coincidences" have happened to myself a few times, and I've seen it happen to many others as well.

      What a load of drivel. If it were pre-ordained, then why would the pre-ordinance allow you to buy a plane ticket for a plane that is going to crash in the first place? And the other 237 people who do make the plane - their time was pre-ordained to be at exactly the same time? This sort of rot comes from the selective memory of humans; Miss a train because of traffic, and the train is bombed by the IRA (happened to me) - good dinner story. Miss a train and the train goes to its destination more or less on time - not really a good story at all.

CodeZombie62 (#67)

      Isn't that kind of the idea behind the "Final Destination" movies?

CodeZombie62 wrote:

        Isn't that kind of the idea behind the "Final Destination" movies?

Lost User (#68)

        I guess it is, kinda, with the added ridiculousness of them actually cheating their pre-ordained demise - which is obviously complete drivel as, if something is pre-ordained, that means it's going to happen, not that it might happen!

Jeremy Falcon wrote:

          mark merrens wrote:

          a) the fact that you are getting personal shows the weakness of your point of view

          Ok, this is my last reply since you obviously would rather argue than learn. God this sounds childish, so shame on me for entertaining you this far. My bad. But, you got personal first. Duh. What a waste of time.

          mark merrens wrote:

          you appear to have gone off an some sort of tangent.

          Of course it seems like that, you're shortsighted and blind. What else would it seem to someone who has very little life experience? Instead of arguing you could say "I don't get it", then I'd explain or attempt to or we could agree to disagree instead of acting like children. But no, I'm a luddite. That's the easy way out to avoid thinking. That must be it. A programmer that hates technology. Makes sense.

          mark merrens wrote:

          How is this any worse than maintaining that blind luck and chance are a better arbiter?

You really are blind, man. You need to step away from computers for a while to see the rest of the world you're blind to, if you honestly can't see it. Seriously, man. This ain't an insult no matter how you want to take it; it's saying you really need to open your eyes. That does not mean one hates technology in doing so, but in not doing it one has a very limited view of the world, one that is impossible to see from behind a computer screen.

          mark merrens wrote:

          Is your objection that technology is soulless and shouldn't be allowed to decide the fate of humans?

          Yeah, I'm soulless for defending the only thing with a soul. And you're not because you think something soulless should exercise the right as to whether or not a soul should exist. :rolleyes: Have fun not learning. Bye bye now!

          Jeremy Falcon

Lost User (#69)

          Jeremy Falcon wrote:

          something soulless should exercise the right as to whether or not a soul should exist.

Couldn't resist jumping in. I don't think anyone is suggesting a machine deciding who should live or die in some sort of rise-of-the-robots world, but rather allowing different actions to be taken depending on programmed criteria, such as the number of possible casualties.

Say you were driving down the street when a kid runs into the road in front of you, chasing a ball. You swerve to avoid him (as you naturally would)... and plough into a bus stop, killing two kids. If you had known there were two kids at the bus stop, would you have swerved or not? A computer could (potentially) make that call: kill one or two. Of course, there may be a third option: drive off the cliff and kill you, the driver. Maybe, armed with the previous knowledge, that's what you would have done; you would rather die than kill a child. Good call, probably. But what, now, if your child is in the car? Kill someone else's child? Kill two other children, or kill you and yours? Tough one, eh?

Using a computer to take over the decision (which it can also compute faster than you) would depend on the programming, but it might (for example) determine that a cliff plunge would certainly be fatal, as would running over the kid in front of you, but that ploughing into the bus stop has a slightly higher chance of non-fatal injuries, and so is the right call. Do you believe that we shouldn't install that sort of technology on the grounds that a machine doesn't have a soul?
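The choice described above could be sketched as a comparison of expected fatalities. Every option name, head count, and probability here is a hypothetical illustration, not real crash data:

```python
# Minimal sketch: pick the manoeuvre with the lowest expected fatalities.
# All figures below are invented for illustration.

def pick_action(options):
    """Choose the action minimising (people at risk) x (probability fatal)."""
    return min(options, key=lambda o: o["people_at_risk"] * o["p_fatal"])

options = [
    {"name": "brake_straight",     "people_at_risk": 1, "p_fatal": 1.0},  # the kid with the ball
    {"name": "swerve_to_bus_stop", "people_at_risk": 2, "p_fatal": 0.4},  # two kids, better odds
    {"name": "off_the_cliff",      "people_at_risk": 1, "p_fatal": 1.0},  # the driver
]

print(pick_action(options)["name"])  # → swerve_to_bus_stop (2 * 0.4 = 0.8 expected deaths)
```

With these invented numbers the bus stop wins, matching the post's "slightly higher chance of non-fatal injuries" reasoning; change the probabilities and a different manoeuvre wins.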

R Giskard Reventlov wrote:

Whilst I am a lifelong fan of Asimov, this has nothing to do with the three laws. The bot in the car only needs to decide how to mitigate the upcoming crash so that as few people as possible are injured or killed. It does not attempt to make judgments about the people; it only knows that it should minimize loss of life. It can communicate with the bot in the other car to determine the best course of action to take in the last second or less prior to the crash. Given that it can calculate the extent of the damage and loss of life, it has to make a decision, in concert with the other bot, as to the best course of action. That is all. It is not relevant that the people in one car may be children or the others may be pensioners. IMO, this is really no different to allowing the accident to complete and hoping that chance will preserve life and limb: this possibly gives the occupants a better chance. At least some of them. Note this is a very unlikely situation: given that cars are controlled by bots, even under freakish circumstances they will probably have sufficient time to ensure that the damage is minimal and that all of the occupants survive.

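The two-bot negotiation in the post above could be sketched as both cars agreeing on the joint manoeuvre with the fewest estimated casualties. The outcome table is entirely hypothetical:

```python
# Sketch of two car bots coordinating in the final moment: each pair of
# manoeuvres maps to an estimated total casualty count (invented numbers).

OUTCOMES = {
    ("brake",  "brake"):  3,
    ("brake",  "swerve"): 1,
    ("swerve", "brake"):  2,
    ("swerve", "swerve"): 4,
}

def negotiate(outcomes):
    """Return the joint manoeuvre minimising estimated casualties."""
    return min(outcomes, key=outcomes.get)

print(negotiate(OUTCOMES))  # → ('brake', 'swerve')
```

The point of the sketch is that neither bot judges the people involved; the table only scores outcomes, exactly as the post argues.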

Lost User (#70)

            mark merrens wrote:

            It is not relevant that the people in one car may be children or the others may be pensioners.

I can see, in fact, that this could be relevant as this technology matures (assuming the nay-sayers don't have their way and ban it on the grounds that a machine doesn't have an immortal soul). I imagine that the risk of injury or death will be different depending on the size/weight/age of the occupants, and could be taken into account. Also, I can imagine a world where the occupants' religion could be taken into account. Religious sinners would be saved first, as their death will result in an infinity of pain. Atheists next, as their death is terminal. Devout (if that's the right word) believers last, as they're going to a better place anyway.

Lost User wrote:

              I guess it is, kinda, with the added ridiculousness of them actually cheating their pre-ordained demise - which is obviously complete drivel as, if something is pre-ordained, that means it's going to happen, not that it might happen!

Stefan_Lang (#71)

              Many movies (or other story-like media) play with the idea of preordained destinies, but most often it's only a certain aspect that's preordained, and the details are left in the open - leaving room for the heroes to find a 'loophole' that somehow still fulfills the letter of the destiny, but doesn't end in catastrophe.

              GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto) Point in case: http://www.infoq.com/news/2014/02/apple_gotofail_lessons[^]

R Giskard Reventlov wrote:

                So, given that the bot is able to predict the outcome of the accident and knowing that only 2 rather than, say, 6 people will die it should not take that choice?

                Jeremy Falcon wrote:

No AI bot should ever have the ability to judge the value of life. How can it? It has no concept of it.

                It is because it is acting without emotion that it can make this decision. It is you humans who are incapable of doing that. Oh...


DavidSherwood (#72)

You have programmed the bot to make the value judgement that killing 2 people is always better than killing 6.

DavidSherwood wrote:

You have programmed the bot to make the value judgement that killing 2 people is always better than killing 6.

R Giskard Reventlov (#73)

                  It's not a value judgment: it's simple arithmetic.


R Giskard Reventlov wrote:

                    Should Robot Cars Be Programmed To Kill You If It Will Save More Lives? [^]


Carlosian (#74)

                    Thanks for posting that link, it's a fascinating topic. If it is not programmed this way and robotic cars become common, it is guaranteed that someday a car will decide "Oh dear, I will collide with that car in front of me causing damage and perhaps injuring the occupant. Look, there is a nice soft crowd of people on the sidewalk, they will cushion the blow nicely". And the next fun question is, when that happens who is liable? The driver who was watching TV on his phone while his car drove him? The car manufacturer? The company that provided the software? The programmer who wrote that particular subroutine after reading a poll on Wired that said don't risk the driver? :)

R Giskard Reventlov wrote:

                      Should Robot Cars Be Programmed To Kill You If It Will Save More Lives? [^]


RafagaX (#75)

I hope self-driving cars are smarter than that and avoid the collision altogether. Anyway, the only fair way to decide this is with a coin: if it's heads the owner survives, if it's tails, he/she doesn't... ;P Seriously, if not even a human is able to make such a decision, I don't see why a robot should do it.

After ruminating on this a bit, I came to realize that the issue as stated is pretty binary. What's really interesting is what happens if we add more variables than just life/death, such as disability or quality of life. An example would be that the robot can tell that doing manoeuvre X will save the 4 little girls standing in the street and kill the driver, but doing so will leave them disabled (for example, one will have her leg broken, another will be thrown against a wall with a protuberance at the level of the lower spine, etc.), while if it kills the 4 little girls, the driver will survive largely unscathed. Should the robot be able to make such a decision? Which would be the correct one?
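The severity-weighted variant suggested above could be sketched by scoring each person's outcome on a scale rather than just counting deaths. All weights and scenario outcomes here are hypothetical:

```python
# Sketch: instead of counting deaths, weight each person's outcome between
# 0.0 (unharmed) and 1.0 (fatal). Weights and scenarios are invented.

SEVERITY = {"unharmed": 0.0, "broken_leg": 0.3, "spinal_injury": 0.8, "death": 1.0}

def harm_score(outcomes):
    """Total severity-weighted harm for everyone affected by a manoeuvre."""
    return sum(SEVERITY[o] for o in outcomes)

# Manoeuvre A: the driver dies, the four girls survive with injuries.
a = harm_score(["death", "broken_leg", "spinal_injury", "unharmed", "unharmed"])
# Manoeuvre B: the driver is unscathed, the four girls die.
b = harm_score(["unharmed", "death", "death", "death", "death"])

print(a < b)  # → True: A is the lesser harm under these weights
```

The sketch makes the post's worry concrete: the "correct" answer flips entirely depending on who picks the weights, which is exactly the value judgement being debated.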

                      CEO at: - Rafaga Systems - Para Facturas - Modern Components for the moment...
