Interesting...
-
I think that's a fascinating scenario to think about, Mark. Consider: the robot in the car detects loss of consciousness in the driver somehow, and is able to evaluate, given the flow of traffic, that any sudden stop will result in a multi-car pile-up with major loss of life, while it is also able to conclude that a sudden sharp turn will take the vehicle off the roadway but almost certainly kill the occupant.
Medical personnel in war, given an overflow of casualties, make rapid decisions (triage) about who gets treatment priority based on intuitive mortality assessments as well as, of course, whatever medical stats they can get. It would be interesting, to me, to know to what extent the current state-of-the-art triage strategies in war and natural disasters use computer programs to assist evaluation.
Equally frightening is the idea of a "loyal" robot programmed to put the preservation of its owner above everyone and everything else. I observe that my mind associates the term "loyal robot" with the typical spin-minions and henchmen/women of ... politicians. cheers, Bill
“I speak in a poem of the ancient food of heroes: humiliation, unhappiness, discord. Those things are given to us to transform, so that we may make from the miserable circumstances of our lives things that are eternal, or aspire to be so.” Jorge Luis Borges
BillWoodruff wrote:
I think that's a fascinating scenario to think about, Mark. Consider the robot-in-the-car detects loss of consciousness in the driver somehow and is able to evaluate, given the flow of traffic, that any sudden stop will result in a multi-car pile-up with major loss of life while it is also able to conclude that a sudden sharp turn will take the vehicle off the roadway, but almost certainly kill the occupant.
Sounds rather unrealistic to me:
1. If the car is robot-controlled to start with, why can't it just go on driving?
2. If stopping your car could potentially cost lives, what the hell were the other drivers/robot cars thinking?
3. If the other cars are also robot-controlled, why can't they collaborate to ensure a safe mutual slowdown?
4. I can't think of any reason why a sharp turn would be less dangerous to the rest of the traffic.
BillWoodruff wrote:
Equally frightening is the idea of a "loyal" robot programmed to put the preservation of its owner above everyone/everything else. I observe that my mind associates the terms "loyal robot" with the typical spin-minions and henchmen/women of ... politicians.
That could indeed be a problem, and car makers could in fact promote cars with 'improved survivability' for those who are willing to shell out the cash. Politicians could try to prevent that but, realistically, by the time they can agree on workable legislation the market will already be brimming with such discriminating cars, which will be hard to tone down or remove.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto) Case in point: http://www.infoq.com/news/2014/02/apple_gotofail_lessons[^]
-
mark merrens wrote:
So, given that the bot is able to predict the outcome of the accident, and knowing that only 2 rather than, say, 6 people will die, should it not take that choice?
You cannot make a judgment call on that like it's a simple logical algorithm in a program. What if the person to die was your daughter, who's also pregnant, and her husband? And the people living were 6 old people who were murderers and on their way to kill more people? Oh sure, then we could have the cars cop a feel for pregnant chicks every time you start the car and require old people to sign a waiver to kiss their arse goodbye. But where does it stop? Just how far down the "let's not have to think for ourselves" rabbit hole does one have to go? Just because technology says we can.
mark merrens wrote:
It is because it is acting without emotion that it can make this decision. It is you humans who are incapable of doing that.
Har har. Seriously though, emotion is what makes life worth living. It's what makes being human fun. Oh wait, that's an emotion. I just want to be happy. Oh wait... damn emotions getting in the way. Einstein was right if this question even has to be asked.
Jeremy Falcon
Jeremy Falcon wrote:
And the people living were 6 old people that were murderers and on their way to kill more people?
In that case let's just program these people's robotic car to fatally crash, removing them from the other cars' equations ;P Seriously, though: how do you know this is the case? And if you know, why can't the car? Why can't that other people's car? Why can't that other people's car decide and ... oh well, back to my initial statement again ;)
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto) Case in point: http://www.infoq.com/news/2014/02/apple_gotofail_lessons[^]
-
Should Robot Cars Be Programmed To Kill You If It Will Save More Lives? [^]
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
-
This is really interesting, and was already debated (to some extent) with the Law Zero[^] added to the initial three Laws of Asimov. Practically, there is a huge difference in the information required to fulfill Law Zero versus Law One: you can easily evaluate the facts for one or a bunch of people in a car, but for humanity? Maybe one of the people killed because of the AI's decision would have had a big influence on humanity's destiny (because he was a researcher, or a dictator, etc.). So we see that all 4 laws are required for the decision to be the fairest possible, but Law Zero cannot be easily implemented. This law would also be the one required to answer the question in your post properly.
~RaGE();
I think words like 'destiny' are a way of trying to find order where none exists. - Christian Graus Entropy isn't what it used to be.
-
Should Robot Cars Be Programmed To Kill You If It Will Save More Lives? [^]
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
Since we humans can't cope with the thought of letting a computer, in this case a car, decide whether a living creature should survive or not, why should it be able to choose whether a few more lives are more important than a few fewer? It'll reach the (international) news anyway, blaming the computer for its actions. So, let it just gather all the information on the crash, sit back and act like a 3D camera, making sure it is 100% a human's fault that someone died. My answer is no.
-
mark merrens wrote:
Should Robot Cars Be Programmed To Kill You If It Will Save More Lives?
No. No AI bot should ever have the ability to judge the value of life. How can it? It has no concept of it. To think people actually have to ask this question.
Jeremy Falcon
Not making a choice is a choice as well. But what if the computer has two options:
- Keep driving ahead and kill x pedestrians ('do nothing').
- Steer the car into the nearest tree and kill y passengers.
All other possibilities have been evaluated and determined to be physically impossible (speed too high, braking distance too short, trees on both sides of the road, etc.). What should the computer do when there is no 'do nothing'?
If the decision of who is killed cannot be made by a computer then it must be escalated to a human. But to which human?
- The passengers?
- The pedestrians?
Both have a personal interest in the decision, so neither can be trusted to be fair. Maybe the decision should be deferred to an impartial referee? The computer could warn a government official, present him with all relevant data and then let him make a choice. Or make the decision through a democratic process: ask a large number of responsible citizens what action should be taken and then take the most popular course of action. This can be done with modern technology. Just get a notification on your smartphone with a small animation of each option and then tap the one you favor. You could even disguise it as a game.
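The dilemma as framed here, stripped to the bone, is just "pick the feasible maneuver with the fewest expected casualties". A toy sketch, purely illustrative (the option names and casualty numbers are invented; no real autonomous-driving system works on anything this naive):

```python
def choose_maneuver(options):
    """options: list of (name, feasible, expected_casualties) tuples.
    Returns the name of the feasible option with the fewest casualties."""
    feasible = [o for o in options if o[1]]
    if not feasible:
        raise RuntimeError("no feasible maneuver")
    # Naive utilitarian rule: minimize expected casualties.
    return min(feasible, key=lambda o: o[2])[0]

options = [
    ("keep driving ahead", True, 3),   # kills x pedestrians
    ("steer into tree", True, 2),      # kills y passengers
    ("brake to full stop", False, 0),  # ruled out: braking distance too short
]
print(choose_maneuver(options))  # -> steer into tree
```

The whole debate above is, of course, about whether those casualty numbers can ever be known, and who gets to write the `min` rule.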
-
Should Robot Cars Be Programmed To Kill You If It Will Save More Lives? [^]
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
I'm surprised that nobody mentioned Asimov so far (at least AFAIK, nobody mentioned him). I believe that the poll is misleading (particularly the part that says "especially if I paid for it"; that's just crap to drive people to pick the suicide choice as the "morally correct" one). The two choices set as possible outcomes to the question posed to the robot are:
1. Kill the occupant(s) only.
2. Possibly kill the occupant(s) and occupant(s) of other bot-car(s) as well.
If the Three Laws apply, then both of these choices would be rejected immediately as violating the First Law (actively killing the occupants, or, through inaction, possibly killing others). The bot-car would probably try to steer away from ALL oncoming traffic, and ALL oncoming traffic would probably try to steer away from the bot-car. In the end all bot-cars would actively try to save their occupants and the occupants of the other bot-cars first, and themselves (i.e. the bots) second.
Φευ! Εδόμεθα υπό ρηννοσχήμων λύκων! (Alas! We're devoured by lamb-guised wolves!)
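The First-Law reading above treats the laws as hard constraints rather than a cost function: any option that harms a human, by action or inaction, is simply off the table. A toy sketch of that filtering (the option names and flags are invented for illustration):

```python
def acceptable(option):
    # First Law as a hard constraint: reject any option that injures
    # a human, or through inaction allows a human to come to harm.
    return not option["harms_humans"]

options = [
    {"name": "kill occupants only", "harms_humans": True},
    {"name": "risk occupants and others", "harms_humans": True},
    {"name": "steer away from all traffic", "harms_humans": False},
]
survivors = [o["name"] for o in options if acceptable(o)]
print(survivors)  # -> ['steer away from all traffic']
```

Under this reading both choices in the poll fail the test outright, which is exactly why the bot would keep hunting for a third option.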
-
This will just bring on more car hacking: use a car key to send an encoded signal that overflows a correct-key-match buffer and tells the car it really needs to kill all its occupants. National security and hired assassinations made easy.
The premise already is that the robotic car is programmed to kill its occupants under certain conditions (presumably to minimize the overall loss). I merely suggested additional conditions. And, yes, however these conditions are programmed, any software system can and will be hacked and abused. The question is, how much damage will be incurred through abuse, manipulation, or just honest software errors, compared to the damage these systems may avert...
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto) Case in point: http://www.infoq.com/news/2014/02/apple_gotofail_lessons[^]
-
Should Robot Cars Be Programmed To Kill You If It Will Save More Lives? [^]
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
Interesting problem. I wonder what the person in the car that's about to slam into the SUV loaded with the family with 4 kids would do if given the choice?
Along with Antimatter and Dark Matter they've discovered the existence of Doesn't Matter which appears to have no effect on the universe whatsoever! Rich Tennant 5th Wave
-
Ok car. Drive over the cliff. Are you sure? Ah, too late... If I had purchased a 'smart' car that was stupid enough to get into such a situation, I would ask for my money back. That's assuming I survived the crash.
I may not last forever but the mess I leave behind certainly will.
-
jeroen1304 wrote:
But what if the computer has two options:
-Keep driving ahead and kill x pedestrians. ('do nothing') -Steer the car into the nearest tree and kill y passengers.Exactly this was the question in the article ...
jeroen1304 wrote:
All other possibilities have been evaluated and determined to be physically impossible. (speed too high, braking distance too short, trees on both sides of the road, etc)
jeroen1304 wrote:
make the decision through a democratic process. Ask a large number of responsible citizens what action should be taken and then take the most popular course of action.
You cannot take this route, because there is no time for it. The decision has to be made in a fraction of a second.
-
Should Robot Cars Be Programmed To Kill You If It Will Save More Lives? [^]
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
I better stop kicking the tires. :-D
-
I'd rather it spent its cycles slowing the car.
You'll never get very far if all you do is follow instructions.
I believe the assumption is that it is beyond that - the accident is going to happen.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
-
mark merrens wrote:
I don't see why anyone would be upset about this unless they simply reacted without thinking.
Well you can be their beta tester. Have fun!
Jeremy Falcon
Surely the lives of the many outweigh the lives of the one?
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
-
I think this is a spurious situation, arising from our innate tendency to anthropomorphise the 'robot'. I don't believe any robot car will ever* be programmed to make this sort of decision in this way.
A car will never be able to know who the passengers of another car are, for privacy reasons. They will be (are?) programmed to do everything possible to safely avoid a collision. If the anti-collision routines of both cars cannot avoid colliding, the severity of the crash should be vastly diminished (via braking, evasive action etc., faster than any human could). On some very rare occasions (barring programming errors) a serious crash will be unavoidable, and will occur.
A car will never* make any decision about the people riding in it, or in any other vehicle.
* at least until a sentient AI is created.
Yeah, think that was pretty much already said.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
-
Indeed, though I think everyone is overthinking this. The bots will do everything to prevent an accident, and I doubt that they would ever be given the power to decide whether the occupants of car A will live and those of car B die. Still, it's fun to discuss the possibilities.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
-
I think car technology will improve safety long before the AI will be able to decide about one's fate, so the odds are that the situation of having to make the choice will never happen.
~RaGE();
I think words like 'destiny' are a way of trying to find order where none exists. - Christian Graus Entropy isn't what it used to be.
-
mark merrens wrote:
Surely the lives of the many outweigh the lives of the one?
Not always, and giving a car the power of God, when a car can't feel compassion or anything for that matter, is a bad idea. I'd rather have one person saved who actually did something useful for the world than 5 who were freeloaders. Acting like the issue is so cut and dried is a very primitive way of looking at life.
mark merrens wrote:
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur."
Hey at least we agree on this!
Jeremy Falcon
-
Should Robot Cars Be Programmed To Kill You If It Will Save More Lives? [^]
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
"Save the girl!" I doubt we'll ever be able to program all the factors that should be considered into that equation of who should die and who is worth preserving. Worse, as soon as that gets programmed into cars, someone somewhere will abuse it by deciding that their life is more valuable than N others and force that to get written into the programming. I don't so much mean individuals as classes of people: should we preserve doctors over McDonald's clerks, or political leaders over soldiers? No, cars (or robots in general) should not make these kinds of value-of-human-life decisions. They're better left to us humans, who will make them with incomplete information and totally subjectively, just like we've always done.
We can program with only 1's, but if all you've got are zeros, you've got nothing.