Code Project
Interesting...

The Lounge
Tags: html, com, question, career
75 Posts, 29 Posters, 1 Watching
In reply to Jeremy Falcon:

    mark merrens wrote:

I think that's exactly what you can do. The bot will not make judgments about the people, only about the outcome.

    Yeah totally. Let's give robots the power of God. That shouldn't cause any problems. :rolleyes:

    Jeremy Falcon

BillWoodruff wrote (#23):

    Jeremy Falcon wrote:

    Let's give robots the power of God. That shouldn't cause any problems.

    It would be interesting to see if robots whose behavior was driven by neural-nets based on the behavior of human beings who claim to act/speak for/in-the-name-of "gods" were equally blood-thirsty and came up with pogroms, ethnic cleansings, and atrocities equal to said human beings.

    “I speak in a poem of the ancient food of heroes: humiliation, unhappiness, discord. Those things are given to us to transform, so that we may make from the miserable circumstances of our lives things that are eternal, or aspire to be so.” Jorge Luis Borges

In reply to BillWoodruff (#23):

Jeremy Falcon wrote (#24):

      BillWoodruff wrote:

      were equally blood-thirsty and came up with pogroms, ethnic cleansings, and atrocities equal to said human beings.

      For being human yourself, you really have a low opinion of them. I happen to quite like humans. I think I'll stay one.


In reply to Lost User:

        cosmogon wrote:

        I believe people die when their "time has come" and not at any other time. That's why people sometimes miraculously survive accidents where all odds seem to be against them. Like someone being disturbed by something on their way to catch a plane that crashes. Such "coincidences" have happened to myself a few times, and I've seen it happen to many others as well.

What a load of drivel. If it were pre-ordained, then why would the pre-ordinance allow you to buy a plane ticket for a plane that is going to crash in the first place? And the other 237 people who do make the plane - their time was pre-ordained to be at exactly the same time? This sort of rot comes from the selective memory of humans: miss a train because of traffic, and the train is bombed by the IRA (happened to me) - a good dinner story. Miss a train and the train gets to its destination more or less on time - not really a good story at all.

cosmogon wrote (#25):

"If it were pre-ordained, then why would the pre-ordinance allow you to buy a plane ticket for a plane that is going to crash in the first place?"

It could happen that way also - you just need one event to stop you from entering that plane, and there are many possible events to choose from. The main goal is your survival; whatever means it takes to reach that goal are taken into consideration. It's a dynamic process, just as with all other events in life; it just involves some factors that we usually are not aware of. Some call it "Framework 2" - a dimension of reality where all events are coordinated in order to fulfil all individual desires. It's the opposite of pre-ordained - all events are a consequence of free will and individual choice. On the other hand, a choice does itself create some kind of pre-ordination within its own context; you can, however, change the outcome if the probabilities allow for it. I.e. you can still change your mind before you jump from that cliff; however, as soon as you have jumped there's usually no way back (unless it's not your time yet).

"And the other 237 people who do make the plane - their time was pre-ordained to be at exactly the same time?"

What's the difference between 237 people choosing to die together on a plane and 10,000 choosing to gather at a stadium to see a game of football? In both cases it's an individual choice that makes you go there; it's just the purpose that's different. Suicide, however, is generally a taboo, so choosing (usually on a subconscious level) to die in a plane crash or some other accident is an alternative and "legitimate way" to leave the planet. And it's not always all passengers in a plane crash that die. Often some survive - and often in ways that you may call miraculous. Why then choose to get on the plane in the first place? Maybe they want that experience for some subconscious reason. Why do people do skydiving? It's dangerous as hell, but it's probably an incredible experience.

Surviving death can be a great wake-up call - it can make you feel like being reborn and make you look at life in a completely new way. I know from personal experience... ;-)

In reply to cosmogon (#25):

Lost User wrote (#26):

I can't respond here, lest it become Soapbox material. Suffice it to say: "what a load of old codswallop".

In reply to R Giskard Reventlov:

            Should Robot Cars Be Programmed To Kill You If It Will Save More Lives? [^]

"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair
Those who seek perfection will only find imperfection
nils illegitimus carborundum
me, me, me
me, in pictures

Stefan_Lang wrote (#27):

Yes and no.

Yes, because a rational and impartial program will be better at judging the odds and finding the 'solution' with the least loss, most of the time - especially when that solution has to be found within a split second! Humans cannot make such a decision as quickly, because when you're forced to react, the subconscious takes over, and will always try to preserve your own, personal life, no matter how many other lives are at stake! I'm not sure how I could live with the knowledge that my own survival cost the lives of a hundred other people, especially if some of them were friends or relatives!

No, because it is humans who ultimately write the programs to make these decisions. Humans make errors, but it takes software and computers to turn such errors into catastrophes! Besides, what makes us think nobody will go ahead and manipulate that software to their own benefit, or worse, to cause catastrophic mass accidents?

The optimist in me wants to believe that the benefit of the former will outweigh the risk of the latter. But the realist tells me that one day a single incident will make me regret it.

GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
Case in point: http://www.infoq.com/news/2014/02/apple_gotofail_lessons[^]

In reply to R Giskard Reventlov (original post):

Sander Rossel wrote (#28):

Who would drive a car that can 'decide' to kill you? What if the driver of the family of four is pretty sharp today and could have dodged your car at the last split second? Too late, your car has already thrown you off a cliff... A car might be able to predict what is going to happen if everything stayed as it is now (that is, other drivers will not speed up, slow down, make a turn, etc.), but it cannot predict what others will do and what the consequences of their actions will be.

It's an OO world.

public class SanderRossel : Lazy<Person>
{
    public void DoWork()
    {
        throw new NotSupportedException();
    }
}

In reply to cosmogon (#25):

Stefan_Lang wrote (#29):

                If you truly believe the time of your death is predetermined, would you mind jumping off a cliff? I mean, doing it or not wouldn't make a difference, no? :doh:


In reply to R Giskard Reventlov:

                  I don't see why anyone would be upset about this unless they simply reacted without thinking. Firstly, if bots drove all cars only freak accidents would ever occur. Secondly, what is the difference between a bot deciding your fate and, well, fate? If you die you'll never know the difference and if you are the survivor you'll be extolling the virtues of robotic vehicles until you do die!


Sander Rossel wrote (#30):

                  mark merrens wrote:

                  I don't see why anyone would be upset about this unless they simply reacted without thinking.

Robots may be emotionless and logical 'thinking' things; humans are not :) I wouldn't know why anyone would be upset over gay marriage, over sex before marriage, over women having rights, over having a TV in your house, over working on Sundays... And those are things you can choose to do or not do. Still, people get mad to the extent that they are willing to kill others for it, just because they think it's not how it's supposed to be.


In reply to BillWoodruff:

I think that's a fascinating scenario to think about, Mark. Consider the robot-in-the-car detects loss of consciousness in the driver somehow and is able to evaluate, given the flow of traffic, that any sudden stop will result in a multi-car pile-up with major loss of life, while it is also able to conclude that a sudden sharp turn will take the vehicle off the roadway, but almost certainly kill the occupant.

Medical personnel in war, given an overflow of casualties, make rapid decisions (triage) about who gets treatment priority based on intuitive mortality assessments as well as, of course, whatever medical stats they can get. It would be interesting, to me, to know to what extent the current state-of-the-art triage strategies in war and natural disasters are using computer programs to assist evaluation.

Equally frightening is the idea of a "loyal" robot programmed to put the preservation of its owner above everyone/everything else. I observe that my mind associates the terms "loyal robot" with the typical spin-minions and henchmen/women of ... politicians.

cheers, Bill


Stefan_Lang wrote (#31):

                    BillWoodruff wrote:

                    I think that's a fascinating scenario to think about, Mark. Consider the robot-in-the-car detects loss of consciousness in the driver somehow and is able to evaluate, given the flow of traffic, that any sudden stop will result in a multi-car pile-up with major loss of life while it is also able to conclude that a sudden sharp turn will take the vehicle off the roadway, but almost certainly kill the occupant.

Sounds rather unrealistic to me:

1. If the car is robot-controlled to start with, why can't it just go on driving?
2. If stopping your car could potentially cost lives, what the hell were the other drivers/robot cars thinking?
3. If the other cars are also robot-controlled, why can't they collaborate to ensure a safe mutual slowdown?
4. I can't think of any reason why a sharp turn would be less dangerous to the rest of the traffic.

                    BillWoodruff wrote:

                    Equally frightening is the idea of a "loyal" robot programmed to put the preservation of its owner above everyone/everything else. I observe that my mind associates the terms "loyal robot" with the typical spin-minions and henchmen/women of ... politicians.

That could indeed be a problem, and car makers could in fact promote cars with 'improved survivability' for those who are willing to shell out the cash. Politicians could try to prevent that, but, realistically, by the time they can agree on workable legislation the market will already be brimming with such discriminating cars that are hard to tone down or remove.


In reply to Jeremy Falcon:

                      mark merrens wrote:

                      So, given that the bot is able to predict the outcome of the accident and knowing that only 2 rather than, say, 6 people will die it should not take that choice?

You cannot make a judgment call on that like it's a simple logical algorithm in a program. What if the person to die was your daughter, who's also pregnant, and her husband? And the people living were 6 old people who were murderers and on their way to kill more people? Oh sure, then we could have the cars cop a feel for pregnant chicks every time you start the car and require old people to sign a waiver to kiss their arse goodbye. But where does it stop? Just how far down the "let's not have to think for ourselves" rabbit hole does one have to go? Just because technology says we can.

                      mark merrens wrote:

                      It is because it is acting without emotion that it can make this decision. It is you humans who are incapable of doing that.

                      Har har. Seriously though, emotion is what makes life worth living. It's what makes being human fun. Oh wait that's an emotion. I just want to be happy. Oh wait... damn emotions getting in the way. Einstein was right if this question even has to be asked.


Stefan_Lang wrote (#32):

                      Jeremy Falcon wrote:

                      And the people living were 6 old people that were murderers and on their way to kill more people?

In that case let's just program the robotic cars of those people to fatally crash, removing them from the other cars' equations ;P Seriously, though: how do you know this is the case? And if you know, why can't the car? Why can't that other people's car? Why can't that other people's car decide and ... oh well, back to my initial statement again ;)


In reply to R Giskard Reventlov (original post):

CPallini wrote (#33):

"Damned cars, that was our second kamikaze blowing up in the parking lot."

                        Veni, vidi, vici.

In reply to Stefan_Lang (#32):

SortaCore wrote (#34):

This will just bring on more car hacking. Use a car key to send an encoded signal which overflows a correct-key-match buffer and tells the car it really needs to kill all its occupants. National security and hired assassinations made easy.

In reply to R Giskard Reventlov (original post):

Rage wrote (#35):

This is really interesting, and was already debated (to some extent) with Law Zero[^] added to the initial three Laws of Asimov. Practically, there is a huge difference in the information required to fulfill Law Zero versus Law One: you can easily evaluate the facts for one person or a bunch of people in a car, but for humanity? Maybe one of the people killed because of the AI's decision would have had a big influence on humanity's destiny (because he was a researcher, or a dictator, etc.). So we see that all 4 laws are required for the decision to be the fairest possible, but Law Zero cannot be easily implemented. This law would also be the one required to properly answer the question in your post.

                            ~RaGE();

I think words like 'destiny' are a way of trying to find order where none exists. - Christian Graus
Entropy isn't what it used to be.

In reply to R Giskard Reventlov (original post):

Gi25 wrote (#36):

Since we humans can't cope with the thought of letting a computer, in this case a car, decide whether a living creature should survive or not, why should it be able to choose whether a few more lives are more important than a few less? It'll reach the (international) news anyway, blaming the computer for its actions. So let it just gather all the information on the crash, sit back and act like a 3D camera, making sure it is 100% a human's fault someone died. My answer is no.

In reply to Jeremy Falcon:

                                mark merrens wrote:

                                Should Robot Cars Be Programmed To Kill You If It Will Save More Lives?

No. No AI bot should ever have the ability to judge the value of life. How can it? It has no concept of it. To think people actually have to ask this question.


jeroen1304 wrote (#37):

Not making a choice is a choice as well. But what if the computer has two options:

- Keep driving ahead and kill x pedestrians ('do nothing').
- Steer the car into the nearest tree and kill y passengers.

All other possibilities have been evaluated and determined to be physically impossible (speed too high, braking distance too short, trees on both sides of the road, etc.). What should the computer do when there is no 'do nothing'? If the decision of who is killed cannot be made by a computer then it must be escalated to a human. But to which human? The passengers? The pedestrians? Both have a personal interest in the decision, so neither can be trusted to be fair. Maybe the decision should be deferred to an impartial referee? The computer could warn a government official, present him with all relevant data and then let him make a choice. Or make the decision through a democratic process: ask a large number of responsible citizens what action should be taken and then take the most popular course of action. This can be done with modern technology. Just get a notification on your smartphone with a small animation of each option and then tap the one you favor. You could even disguise it as a game.
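Mechanically, the two-option dilemma described here reduces to an expected-loss comparison over the maneuvers that remain physically possible. A hypothetical sketch only - the maneuver names, feasibility flags, and casualty counts below are invented for illustration, not how any real vehicle works:

```python
# Hypothetical sketch: choose the physically possible maneuver with the
# lowest expected casualties. All names and numbers are invented.

def choose_maneuver(options):
    """options: list of (name, feasible, expected_casualties) tuples."""
    feasible = [o for o in options if o[1]]
    if not feasible:
        raise ValueError("no physically possible maneuver")
    # min() keeps the earliest entry on ties, so option order breaks ties
    return min(feasible, key=lambda o: o[2])[0]

options = [
    ("keep driving ahead", True, 3),   # kills x pedestrians ('do nothing')
    ("steer into tree",    True, 2),   # kills y passengers
    ("brake in lane",      False, 0),  # ruled out: braking distance too short
]
print(choose_maneuver(options))  # -> steer into tree
```

The arithmetic is the trivial part; the thread's whole dispute is about who supplies the casualty estimates and whether anyone should.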

In reply to R Giskard Reventlov (original post):

yiangos wrote (#38):

I'm surprised that nobody has mentioned Asimov so far (at least AFAIK, nobody mentioned him). I believe that the poll is misleading (particularly the part that says "especially if I paid for it" - that's just crap to drive people to pick the suicide choice as the "morally correct" one). The two choices set as possible outcomes to the question posed to the robot are:

1. Kill the occupant(s) only.
2. Possibly kill the occupant(s) and occupant(s) of other bot-car(s) as well.

If the Three Laws apply, then both of these choices would be rejected immediately as violating the First Law (actively killing the occupants, or by doing nothing - i.e. inaction - possibly killing others). The bot-car would probably try to steer away from ALL oncoming traffic, and ALL oncoming traffic would probably try to steer away from the bot-car. In the end all bot-cars would actively try to save their occupants and the occupants of the other bot-cars first, and themselves (i.e. the bots) second.

                                  Φευ! Εδόμεθα υπό ρηννοσχήμων λύκων! (Alas! We're devoured by lamb-guised wolves!)
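The reading above - that both offered outcomes violate the First Law, so a law-abiding bot rejects the framing outright - amounts to a rule filter applied before any optimization. A hypothetical sketch; the candidate actions and harm counts are invented for illustration, not Asimov's or anyone's actual encoding:

```python
# Hypothetical sketch: reject any action that harms a human, whether
# through action or through inaction (Asimov's First Law, crudely encoded).

def first_law_ok(action):
    return (action["harmed_by_action"] == 0
            and action["harmed_by_inaction"] == 0)

candidates = [
    {"name": "kill occupants only",         "harmed_by_action": 1, "harmed_by_inaction": 0},
    {"name": "do nothing",                  "harmed_by_action": 0, "harmed_by_inaction": 5},
    {"name": "steer away from all traffic", "harmed_by_action": 0, "harmed_by_inaction": 0},
]
allowed = [c["name"] for c in candidates if first_law_ok(c)]
print(allowed)  # -> ['steer away from all traffic']
```

If the filter leaves nothing, the poll's question simply has no lawful answer, which is the point being made.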

In reply to SortaCore (#34):

Stefan_Lang wrote (#39):

                                    The premise already is that the robotic car is programmed to kill its occupants under certain conditions (presumably to minimize the overall loss). I merely suggested additional conditions. And, yes, however these conditions are programmed, any software system can and will be hacked and abused. The question is, how much damage will be incurred through abuse, manipulation, or just honest software errors, compared to the damage these systems may avert...

                                    GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto) Point in case: http://www.infoq.com/news/2014/02/apple_gotofail_lessons[^]

                                    1 Reply Last reply
                                    0
                                    • R R Giskard Reventlov

                                      Should Robot Cars Be Programmed To Kill You If It Will Save More Lives? [^]

                                      "If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures

                                      M Offline
                                      Mike Hankey
                                      wrote on last edited by
                                      #40

                                      Interesting problem. I wonder what the person in the car that's about to slam into the SUV loaded with a family with 4 kids would do if given the choice?

                                      Along with Antimatter and Dark Matter they've discovered the existence of Doesn't Matter which appears to have no effect on the universe whatsoever! Rich Tennant 5th Wave

                                      H 1 Reply Last reply
                                      0
                                      • M Mike Hankey

                                        Interesting problem. I wonder what the person in the car that's about to slam into the SUV loaded with a family with 4 kids would do if given the choice?

                                        Along with Antimatter and Dark Matter they've discovered the existence of Doesn't Matter which appears to have no effect on the universe whatsoever! Rich Tennant 5th Wave

                                        H Offline
                                        Herbie Mountjoy
                                        wrote on last edited by
                                        #41

                                        Ok car. Drive over the cliff. Are you sure? Ah, too late... If I had purchased a 'smart' car that was stupid enough to get into such a situation, I would ask for my money back. That's assuming I survived the crash.

                                        I may not last forever but the mess I leave behind certainly will.

                                        1 Reply Last reply
                                        0
                                        • J jeroen1304

                                          Not making a choice is a choice as well. But what if the computer has only two options: keep driving ahead and kill x pedestrians ('do nothing'), or steer the car into the nearest tree and kill y passengers? All other possibilities have been evaluated and determined to be physically impossible (speed too high, braking distance too short, trees on both sides of the road, etc.). What should the computer do when there is no 'do nothing'? If the decision of who is killed cannot be made by a computer, then it must be escalated to a human. But to which human? The passengers? The pedestrians? Both have a personal interest in the decision, so neither can be trusted to be fair. Maybe the decision should be deferred to an impartial referee? The computer could alert a government official, present him with all relevant data, and then let him make the choice. Or make the decision through a democratic process: ask a large number of responsible citizens what action should be taken and then take the most popular course of action. This can be done with modern technology. Just get a notification on your smartphone with a small animation of each option and tap the one you favor. You could even disguise it as a game.

                                          K Offline
                                          Klaus Werner Konrad
                                          wrote on last edited by
                                          #42

                                          jeroen1304 wrote:

                                          But what if the computer has two options:
                                          -Keep driving ahead and kill x pedestrians. ('do nothing') -Steer the car into the nearest tree and kill y passengers.

                                          That is exactly the question posed in the article ...

                                          jeroen1304 wrote:

                                          All other possibilities have been evaluated and determined to be physically impossible. (speed too high, braking distance too short, trees on both sides of the road, etc)

                                          jeroen1304 wrote:

                                          make the decision through a democratic process. Ask a large number of responsible citizens what action should be taken and then take the most popular course of action.

                                          You cannot take this route, because there is no time for it. The decision has to be made within a fraction of a second.

                                          1 Reply Last reply
                                          0