Interesting...
-
I think this is a spurious situation, arising from our innate tendency to anthropomorphise the 'robot'. I don't believe any robot car will ever* be programmed to make this sort of decision in this way. A car will never be able to know who the passengers of another car are, for privacy reasons. They will be (are?) programmed to do everything possible to safely avoid a collision. If the anti-collision routines of both cars cannot avoid a collision, the severity of the crash should be vastly diminished (via braking, evasive action, etc., applied faster than any human could manage). On some very rare occasions (barring programming errors) a serious crash will be unavoidable, and will occur. A car will never* make any decision about the people riding in it, or in any other vehicle.
* at least until a sentient AI is created.
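For what it's worth, here is a minimal sketch of what such an anti-collision routine might reduce to (the maneuver list and impact-speed numbers are invented purely for illustration): it only tries to minimise predicted crash severity, and it never needs to know anything about who is riding in either vehicle.

```python
# Minimal sketch of a severity-minimising anti-collision routine.
# The candidate maneuvers and their predicted impact speeds are invented
# for illustration; a real system would compute them from sensor data.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    predicted_impact_speed_kmh: float  # 0.0 means the collision is avoided entirely

CANDIDATES = [
    Maneuver("full brake", 18.0),
    Maneuver("brake + swerve left", 0.0),
    Maneuver("brake + swerve right", 32.0),
]

def choose_maneuver(candidates):
    """Pick the action with the lowest predicted impact speed.

    Note: nothing here looks at the occupants of either vehicle."""
    return min(candidates, key=lambda m: m.predicted_impact_speed_kmh)

if __name__ == "__main__":
    best = choose_maneuver(CANDIDATES)
    print(f"Selected: {best.name} ({best.predicted_impact_speed_kmh} km/h at impact)")
```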
Yeah, think that was pretty much already said.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
-
This is really interesting, and was already debated (to some extent) when the Law Zero[^] was added to Asimov's initial three Laws. Practically, there is a huge difference in the information required to fulfil Law Zero versus Law One: you can easily evaluate the facts for one person or a handful of people in a car, but for humanity? Maybe one of the people killed because of the AI's decision would have had a big influence on humanity's destiny (because he was a researcher, or a dictator, etc.). So we see that all four laws are required for the decision to be the fairest possible, but Law Zero cannot easily be implemented. It would also be the law required to properly answer the question in your post.
~RaGE();
I think words like 'destiny' are a way of trying to find order where none exists. - Christian Graus Entropy isn't what it used to be.
Indeed, though I think everyone is overthinking this. The bots will do everything to prevent an accident, and I doubt they would ever be given the power to decide that the occupants of car A will live and those of car B will die. Still, it's fun to discuss the possibilities.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
-
Indeed, though I think everyone is overthinking this. The bots will do everything to prevent an accident, and I doubt they would ever be given the power to decide that the occupants of car A will live and those of car B will die. Still, it's fun to discuss the possibilities.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
I think car technology will improve safety long before an AI is able to decide anyone's fate, so odds are the situation of having to make the choice will never arise.
~RaGE();
I think words like 'destiny' are a way of trying to find order where none exists. - Christian Graus Entropy isn't what it used to be.
-
Surely the lives of the many outweigh the lives of the one?
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
mark merrens wrote:
Surely the lives of the many outweigh the lives of the one?
Not always, and giving a car the power of God, when a car can't feel compassion or anything for that matter, is a bad idea. I'd rather have one person saved who actually did something useful for the world than five who were freeloaders. Acting like the issue is so cut and dried is a very primitive way of looking at life.
mark merrens wrote:
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur."
Hey at least we agree on this!
Jeremy Falcon
-
Should Robot Cars Be Programmed To Kill You If It Will Save More Lives? [^]
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
"Save the girl!" I doubt we'll ever be able to program all factors that should be considered into that equation of who should die and who is worth preserving. Worse, as soon as that gets programmed into cars, someone somewhere will abuse it by deciding that their life is more valuable than N others and force that to get written into the programming. I don't so much mean individuals, as classes of people -- should we preserve doctors over McDonalds clerks, or political leaders over soldiers? No, cars (or robots in general) should not make these kinds of value-of-human-life decisions. They're better left to us humans, who will make them with incomplete information and totally subjectively, just like we've always done.
We can program with only 1's, but if all you've got are zeros, you've got nothing.
-
mark merrens wrote:
Surely the lives of the many outweigh the lives of the one?
Not always, and giving a car the power of God, when a car can't feel compassion or anything for that matter, is a bad idea. I'd rather have one person saved who actually did something useful for the world than five who were freeloaders. Acting like the issue is so cut and dried is a very primitive way of looking at life.
mark merrens wrote:
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur."
Hey at least we agree on this!
Jeremy Falcon
I think you're being a luddite. I can't see what difference it makes: would you rather leave it to chance?
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
-
"Save the girl!" I doubt we'll ever be able to program all factors that should be considered into that equation of who should die and who is worth preserving. Worse, as soon as that gets programmed into cars, someone somewhere will abuse it by deciding that their life is more valuable than N others and force that to get written into the programming. I don't so much mean individuals, as classes of people -- should we preserve doctors over McDonalds clerks, or political leaders over soldiers? No, cars (or robots in general) should not make these kinds of value-of-human-life decisions. They're better left to us humans, who will make them with incomplete information and totally subjectively, just like we've always done.
We can program with only 1's, but if all you've got are zeros, you've got nothing.
patbob wrote:
No, cars (or robots in general) should not make these kinds of value-of-human-life decisions.
Why? What difference does it make? In any case, I believe it will happen as more and more cars become 'intelligent', and especially once they no longer require human intervention. You get in and tell it where you want to go, sit back, and read a book or watch a movie. The reality is that, except under the most randomly freakish conditions, there are unlikely to be any more vehicular accidents once the bots take charge.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
-
I think you're being a luddite. I can't see what difference it makes: would you rather leave it to chance?
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
mark merrens wrote:
I think you're being a luddite. I can't see what difference it makes: would you rather leave it to chance?
And I know you're being blind and shortsighted. Might want to go experience more life then try again.
Jeremy Falcon
-
mark merrens wrote:
I think you're being a luddite. I can't see what difference it makes: would you rather leave it to chance?
And I know you're being blind and shortsighted. Might want to go experience more life then try again.
Jeremy Falcon
Quote:
And I know you're being blind and shortsighted. Might want to go experience more life then try again.
What? Have you not understood any of this? Apparently not!
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
-
Quote:
And I know you're being blind and shortsighted. Might want to go experience more life then try again.
What? Have you not understood any of this? Apparently not!
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
mark merrens wrote:
What? Have you not understood any of this? Apparently not!
If that's what you must believe to rationalize your point of view, go right on ahead blind man.
Jeremy Falcon
-
mark merrens wrote:
What? Have you not understood any of this? Apparently not!
If that's what you must believe to rationalize your point of view, go right on ahead blind man.
Jeremy Falcon
a) The fact that you are getting personal shows the weakness of your point of view, and b) you appear to have gone off on some sort of tangent. What, exactly, is your objection to robots, under very specific circumstances, deciding that the result of an accident could be somewhat mitigated (i.e. more people will live) by taking a specific course of action at the last moment? How is this any worse than maintaining that blind luck and chance are a better arbiter? Is your objection that technology is soulless and shouldn't be allowed to decide the fate of humans?
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
-
a) The fact that you are getting personal shows the weakness of your point of view, and b) you appear to have gone off on some sort of tangent. What, exactly, is your objection to robots, under very specific circumstances, deciding that the result of an accident could be somewhat mitigated (i.e. more people will live) by taking a specific course of action at the last moment? How is this any worse than maintaining that blind luck and chance are a better arbiter? Is your objection that technology is soulless and shouldn't be allowed to decide the fate of humans?
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
mark merrens wrote:
a) the fact that you are getting personal shows the weakness of your point of view
Ok, this is my last reply since you obviously would rather argue than learn. God this sounds childish, so shame on me for entertaining you this far. My bad. But, you got personal first. Duh. What a waste of time.
mark merrens wrote:
you appear to have gone off an some sort of tangent.
Of course it seems like that; you're shortsighted and blind. What else would it seem like to someone who has very little life experience? Instead of arguing you could say "I don't get it", then I'd explain, or attempt to, or we could agree to disagree instead of acting like children. But no, I'm a luddite. That's the easy way out to avoid thinking. That must be it. A programmer who hates technology. Makes sense.
mark merrens wrote:
How is this any worse than maintaining that blind luck and chance are a better arbiter?
You really are blind, man. If you honestly can't see it, you need to step away from computers for a while to see the rest of the world you're blind to. Seriously, man. This ain't an insult no matter how you want to take it; it's saying you really need to open your eyes. Doing that does not mean one hates technology, but not doing it leaves one with a very limited view of the world, one that's impossible to see from behind a computer screen.
mark merrens wrote:
Is your objection that technology is soulless and shouldn't be allowed to decide the fate of humans?
Yeah, I'm soulless for defending the only thing with a soul. And you're not, because you think something soulless should have the right to decide whether or not a soul should exist. :rolleyes: Have fun not learning. Bye bye now!
Jeremy Falcon
-
a) The fact that you are getting personal shows the weakness of your point of view, and b) you appear to have gone off on some sort of tangent. What, exactly, is your objection to robots, under very specific circumstances, deciding that the result of an accident could be somewhat mitigated (i.e. more people will live) by taking a specific course of action at the last moment? How is this any worse than maintaining that blind luck and chance are a better arbiter? Is your objection that technology is soulless and shouldn't be allowed to decide the fate of humans?
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
mark merrens wrote:
Is your objection that technology is soulless and shouldn't be allowed to decide the fate of humans?
Actually I read that last part wrong in a lightning-fast attempt to move on... Here's the short answer to that: yes! So yay, my bad, twice.
Jeremy Falcon
-
mark merrens wrote:
Is your objection that technology is soulless and shouldn't be allowed to decide the fate of humans?
Actually I read that last part wrong in a lightning-fast attempt to move on... Here's the short answer to that: yes! So yay, my bad, twice.
Jeremy Falcon
Just in case you can't resist because your ego is so massive. See, here you are. Intimating that you are a luddite is not getting personal: it is an observation. However, I do believe you are an arrogant twat incapable of understanding anything but your own perspective. Good luck with that: you'll need it in real life.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
-
It's not the power of God, it's the power of reason and making logical judgments based on probability.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
mark merrens wrote:
making logical judgments based on probability
Yeah... I prefer having some odds to no odds at all. So... slam them.
To alcohol! The cause of, and solution to, all of life's problems - Homer Simpson ---- Our heads are round so our thoughts can change direction - Francis Picabia
-
If you truly believe the time of your death is predetermined, would you mind jumping off a cliff? I mean, doing it or not wouldn't make a difference, no? :doh:
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto) Point in case: http://www.infoq.com/news/2014/02/apple_gotofail_lessons[^]
One could always jump off the cliff, and then manage to miss the ground. Just don't forget to take a towel!
-
Should Robot Cars Be Programmed To Kill You If It Will Save More Lives? [^]
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
As some of you know, or should know, this is the First Law of Robotics as stated by Isaac Asimov in his series of books, I, Robot. It has absolutely nothing to do with Will Smith in his I, Robot movie of a few years ago. This is one of three laws. The second and third I am a little hazy about, but one says that a robot can protect itself so long as it does not interfere with the First Law. In other words, the robots are programmed to be subservient to human life, even if it means destroying themselves. I like these laws. I fear that the drones now used to kill purported terrorists will eventually be changed from being under human control to being autonomous, and that can lead to all sorts of disasters. By the way, I just reread Asimov's book, "Caves of Steel", and found it to be a fresh and realistic portrayal of the future, even by today's standards, although it did not include PCs and cell phones. It assumed interstellar travel without any mention of the means by which this is done, à la Star Trek warp drive. I recommend the book and others of his Robot series to all.
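For anyone who doesn't have the ordering to hand, here is a loose, purely illustrative sketch (not Asimov's actual wording) of checking a proposed action against the laws in priority order, rejecting it at the first law it violates.

```python
# Loose illustration of the laws as a priority-ordered rule check.
# The wording and the toy predicate below are invented for illustration.

LAWS = [
    ("First Law",  "do not injure a human or, through inaction, allow one to come to harm"),
    ("Second Law", "obey human orders, unless that conflicts with the First Law"),
    ("Third Law",  "protect your own existence, unless that conflicts with the First or Second Law"),
]

def evaluate(action, violates):
    """Check an action against each law in priority order; reject on the first violation."""
    for name, text in LAWS:
        if violates(action, name):
            return f"rejected: '{action}' violates the {name} ({text})"
    return f"permitted: '{action}'"

# Toy predicate: swerving into pedestrians violates the First Law.
print(evaluate("swerve into pedestrians",
               lambda action, law: law == "First Law" and "pedestrians" in action))
print(evaluate("brake hard", lambda action, law: False))
```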
-
As some of you know, or should know, this is the First Law of Robotics as stated by Isaac Asimov in his series of books, I, Robot. It has absolutely nothing to do with Will Smith in his I, Robot movie of a few years ago. This is one of three laws. The second and third I am a little hazy about, but one says that a robot can protect itself so long as it does not interfere with the First Law. In other words, the robots are programmed to be subservient to human life, even if it means destroying themselves. I like these laws. I fear that the drones now used to kill purported terrorists will eventually be changed from being under human control to being autonomous, and that can lead to all sorts of disasters. By the way, I just reread Asimov's book, "Caves of Steel", and found it to be a fresh and realistic portrayal of the future, even by today's standards, although it did not include PCs and cell phones. It assumed interstellar travel without any mention of the means by which this is done, à la Star Trek warp drive. I recommend the book and others of his Robot series to all.
Whilst I am a lifelong fan of Asimov, this has nothing to do with the three laws. The bot in the car only needs to decide how to mitigate the upcoming crash so that as few people as possible are injured or killed. It does not attempt to make judgments about the people; it only knows that it should minimize loss of life. It can communicate with the bot in the other car to determine the best course of action to take in the last second or less before the crash. Given that it can calculate the extent of the damage and loss of life, it has to make a decision, in concert with the other bot, as to the best course of action. That is all. It is not relevant that the people in one car may be children or the others may be pensioners. IMO, this is really no different from allowing the accident to play out and hoping that chance will preserve life and limb: this possibly gives the occupants, or at least some of them, a better chance. Note this is a very unlikely situation. Given that cars are controlled by bots, even under freakish circumstances they will probably have sufficient time to ensure that the damage is minimal and that all of the occupants survive.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
-
patbob wrote:
No, cars (or robots in general) should not make these kinds of value-of-human-life decisions.
Why? What difference does it make? In any case, I believe it will happen as more and more cars become 'intelligent', and especially once they no longer require human intervention. You get in and tell it where you want to go, sit back, and read a book or watch a movie. The reality is that, except under the most randomly freakish conditions, there are unlikely to be any more vehicular accidents once the bots take charge.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures
Accidents are pretty chaotic things, involving not only physics, but imperfectly maintained machines that behave unpredictably when under stress, and humans who make split-second decisions and behave unpredictably when under stress. Given this, no machine can reliably determine the outcome, so how can it know whether some action it can make would truly reduce the human injury quotient of an accident?
We can program with only 1's, but if all you've got are zeros, you've got nothing.
-
Accidents are pretty chaotic things, involving not only physics, but imperfectly maintained machines that behave unpredictably when under stress, and humans who make split-second decisions and behave unpredictably when under stress. Given this, no machine can reliably determine the outcome, so how can it know whether some action it can make would truly reduce the human injury quotient of an accident?
We can program with only 1's, but if all you've got are zeros, you've got nothing.
It is supposed to be hypothetical, and is more about whether they should be allowed to do that than whether they actually could.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. Those who seek perfection will only find imperfection nils illegitimus carborundum me, me, me me, in pictures