Robotic age poses ethical dilemma
-
Protection against VB?
-
Is your point that smart bombs are robots and therefore have rights? I disagree; that's why we need Jeffersonian principles running the US. The founders never intended for a bunch of morally deficient liberals to elevate robots to citizenry. Jefferson never considered robots human equals... they were just for doinking.
led mike
-
An ethical code (Robot Ethics Charter) to prevent humans abusing robots, and vice versa, is being drawn up by South Korea. [^] What would CP members like to see in this Charter? What legal rights should robots have?
Isaac Asimov's 3 Laws of Robotics http://www.auburn.edu/~vestmon/robotics.html[^]
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
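As an aside, the strict priority ordering of the three Laws can be sketched as an ordered veto chain. Everything below (the `Action` fields and the `permitted` function) is a hypothetical illustration of that ordering, not anything from Asimov:

```python
# Toy sketch of Asimov's Three Laws as an ordered veto chain.
# All field names and predicates are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would this action injure a human?
    inaction_harms: bool = False    # would *not* acting let a human come to harm?
    ordered_by_human: bool = False  # was this action ordered by a human?
    destroys_self: bool = False     # would this action destroy the robot?

def permitted(action: Action) -> bool:
    # First Law: never harm a human...
    if action.harms_human:
        return False
    # ...and never allow harm through inaction: here, refusing
    # to act would itself violate the First Law, so the robot must act.
    if action.inaction_harms:
        return True
    # Second Law: obey human orders (First Law already screened above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation is the lowest priority.
    return not action.destroys_self
```

Note how the order of the `if` checks encodes the Laws' precedence: an order from a human (Second Law) can never override the harm check (First Law), because the harm check runs first.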
-
Colin Angus Mackay wrote:
For example, he has likes and dislikes. He dislikes being called a "robot". He likes being called an "android"
Correction: For example, he has "likes" and "dislikes." He "dislikes" being called a 'robot.' He "likes" being called an 'android.' "Data" is a human being pretending to be a machine-that-can-think. There will never actually be a machine-that-can-think, because 'computation' is not 'thinking.'
Ilíon wrote:
There will never actually be a machine-that-can-think, because 'computation' is not 'thinking.'
Please then explain, scientifically correctly of course since we're being a pedant like yourself, what exactly is the process of 'thinking'?
Rhys A cult is a religion with no political power. Tom Wolfe Behind every argument is someone's ignorance. Louis D. Brandeis
-
oilFactotum wrote:
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
This law would make robots subservient to human beings always, regardless of the robot's capabilities. At what point do you decide something is intelligent enough to have independent rights? Is intelligence enough? What about being self-aware? Of course, we don't really have a good definition for intelligence applied to nonhumans, in my opinion. And while there are tests that show self-awareness, I don't think that failing them proves that the entity isn't self-aware. It just doesn't display something we recognize as obvious self-awareness.
The evolution of the human genome is too important to be left to chance idiots like CSS.
-
Tim Craig wrote:
This law would make robots subservient to human beings always regardless of the robot's capabilities.
Yes, it would. Quite appropriately, I think.
Tim Craig wrote:
At what point do you decide something is intelligent enough to have independent rights?
A manufactured entity? Probably never. How would you test for self awareness? Especially a computer that was programmed to mimic self awareness.
-
Was this intended to be a reply to me? I got it in email but don't know where it broke. If so, I think the whole idea absurd and was mocking it.
-- Rules of thumb should not be taken for the whole hand.
dan neely wrote:
Was this intended to be a reply to me?
Yes.
dan neely wrote:
I got it in email but don't know where it broke.
It's a known CP bug
dan neely wrote:
I think the whole idea absurd and was mocking it.
I know... I was just continuing the mocking... and then some :-D
led mike
-
-
Ilíon wrote:
There will never actually be a machine-that-can-think, because 'computation' is not 'thinking.'
Perhaps the uncomfortable question is, if it's indistinguishable from 'thinking', then how do we know that there IS a difference ?
Christian Graus - Microsoft MVP - C++ Metal Musings - Rex and my new metal blog "I am working on a project that will convert a FORTRAN code to corresponding C++ code.I am not aware of FORTRAN syntax" ( spotted in the C++/CLI forum )
-
Rhys666 wrote:
Please then explain, scientifically correctly of course since we're being a pedant like yourself, what exactly is the process of 'thinking'?
Being an asshole on internet forums? :~
-
Christian Graus wrote:
Perhaps the uncomfortable question is, if it's indistinguishable from 'thinking', then how do we know that there IS a difference ?
Yur'oe tinhknig of 'cpmotunig' in the retsirtecd and dveriaitve secne of the aviittcy wcihh is dnoe on or wtih mreodn eecotirnlc cpumteors. As I alsmot aylwas do, I was unisg 'cpumiotng' in the mroe bsiac sncee of cniountg and the relus by wchih ctnoiung ootpreains are premfored. But tehn, wehn you get dwon to it, taht is all a merdon ecoteirnlc cpmteour deos, it jsut does teshe tihgns ftsater tahn erailer mcaheniacl ctpumeors did. And, of csoure, tshee erailer mcaeniachl ctpueomrs to wchih I rfeer did nnohtig at all, it was the hmaun mnid dinog evyerhintg whcih was dnoe. And, by the smae tekon, meordn ecotirnlec cpumteors do not raelly do aytinhng; it is aaign a hmaun mnid dniog evyerhintg wichh is dnoe. Deos an etlercic psuh-btuton coalctular 'tihnk?' Deos a manccaihel psuh-btuton coalctular 'tihnk?' Does a sidle-rlue 'tnihk?' Deos a picnel and ppear 'tnhik?' Deos a mahncecail acabus 'tihnk?' If the Cenishe had invented a peowred acubas, wihch ddni't ruriqee a hmaun to mvoe the bttouns, wulod you say taht it 'tihnks?' Of crouse not! So, why do you wnat to inagime taht a mreodn eecotirnlc cptumeor can 'tinhk' or taht smoe hopathietcyl furute cptumeor wlil 'tnihk?' 'Ctpmutaioon' is not 'tkinhing.' Is taht ralely so dciiulfft to garsp?
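The jumbled spelling above relies on a familiar trick: each word keeps its first and last letters, only the interior letters are shuffled, and readers reconstruct the words by pattern matching. A minimal sketch of that transformation (function names are my own):

```python
# Scramble the interior letters of each word, keeping first and last fixed,
# as in the well-known "jumbled words are still readable" trick.
import random
import re

def jumble_word(word: str) -> str:
    """Shuffle a word's interior letters; first and last stay put."""
    if len(word) <= 3:
        return word  # nothing meaningful to shuffle
    interior = list(word[1:-1])
    random.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def jumble_text(text: str) -> str:
    # Only touch alphabetic runs; punctuation and spacing pass through.
    return re.sub(r"[A-Za-z]+", lambda m: jumble_word(m.group()), text)
```

Because the shuffle is a permutation, every output word is an anagram of its input with the outer letters anchored, which is exactly what makes the scrambled post above decodable at a glance.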
-
It was "worker," not "slave," to be picky.
-
oilFactotum wrote:
A manufactured entity?
So "manufactured" is the critical quality? What about "artificial" lifeforms that are more akin the "traditional" wet chemical processes we're used to?
oilFactotum wrote:
How would you test for self awareness?
The only test I currently know of is the famous mirror test. So far the only animals to pass are the great apes, and much of that is probably due to their close functional relationship with us, so that we can reasonably interpret the nonverbals given off when the light goes on. As I said, I don't think failing that test proves an animal, or anything else, is not self-aware. All it says is that the animal has the analytical facilities to recognize the image in the mirror as itself. A creature capable of communicating could convey whether it is aware of itself.
oilFactotum wrote:
Especially a computer that was programmed to mimic self awareness.
When does "mimicry" become as good as the original? Humans wanted to fly and originally tried it by mimicking birds. When we actually flew, it was by another method. It's still flight. And in some aspects, it surpasses what birds can do. If you're talking about the crude ELIZA attempts at mimicking intelligence, those aren't really what I'm trying to discuss.
The evolution of the human genome is too important to be left to chance idiots like CSS.
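The "crude attempts at mimicking intelligence" mentioned above refers to ELIZA-style chatbots, which worked by simple keyword pattern substitution rather than anything like understanding. A minimal sketch in that spirit, with the rules invented here purely for illustration:

```python
# ELIZA-style canned reflection: match a keyword pattern, echo part of
# the input back inside a template. Rules are invented for illustration.
import re

RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI think (.+)", re.IGNORECASE), "What makes you think {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    """Return the canned reflection for the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

The illusion of conversation comes entirely from the human reading meaning into the echoes, which is why programs like this are a poor benchmark for the "mimicry as good as the original" question.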
-
Ilíon wrote:
So, why do you wnat to inagime taht a mreodn eecotirnlc cptumeor can 'tinhk' or taht smoe hopathietcyl furute cptumeor wlil 'tnihk?' 'Ctpmutaioon' is not 'tkinhing.'
A computer may not yet be as good as the human mind at pattern matching, but it's ridiculous to say that thinking is not computing. All you displayed here is pattern matching, and with the advent of quantum computers, I'd say wait and see.
This statement was never false.
-
Trollslayer wrote:
It was "worker" not "slave" to be picky.
Damn! How pedantic can a person get? :cool: Even I passed that one by ;)
Rhetorical questions such as this require the mirror to answer. Help yourself.
This statement was never false.
-
That's outrageous. Until we fully understand the human mind and consciousness, we will not be able to create intelligent robots that deserve the same rights as a human. We can create a computer program that can seem intelligent, can seem to think, and can control a mechanical body of some kind, but it is just software running on a computer. If I develop my own robot, then I should be able to beat it with a hammer when it makes me mad. This also brings up an important point: what if the robot's software is running on my desktop computer and it's controlling a virtual 3D robot body on my screen? I can tell you I wouldn't be going to jail if I deleted it or made it beat up another virtual robot's body. It's just dumb. It is extremely unlikely software and hardware will ever feel and experience the world, no matter how smart it may seem.
█▒▒▒▒▒██▒█▒██ █▒█████▒▒▒▒▒█ █▒██████▒█▒██ █▒█████▒▒▒▒▒█ █▒▒▒▒▒██▒█▒██
-
Tim Craig wrote:
So "manufactured" is the critical quality?
Yeah, I would say that is a critical quality. Perhaps if we get our technology to the point of building Star Trek-like Datas or Cylons, that would need reconsideration. Consider the ramifications of immortal artificial life forms more intelligent than any human could ever hope to be: how would they not end up ruling humans?
Tim Craig wrote:
What about "artificial" lifeforms that are more akin the "traditional" wet chemical processes we're used to?
I don't know. Mechanical intelligence seems to be an easier task than creating an intelligent "artificial wet chemical" (biological?) lifeform. That would be a completely different issue in my mind. And that could be even more dangerous, especially if it were self-replicating.
Tim Craig wrote:
So far the only animals to pass are the great apes
There was an elephant in the news recently that seemed to recognize itself in a mirror.
Tim Craig wrote:
Humans wanted to fly and originally tried it by mimicing birds
Humans learned to fly, but we are still not birds. Mimicking self-awareness or intelligence doesn't make it intelligent or self-aware.
Tim Craig wrote:
If you're talking about the crude ELISA attempts at mimicing intelligence, those aren't really what I'm trying to discuss.
No, I assume you're talking about technologies that are decades away at the very least.
-
John, at the moment your reply is correct. But what if these robots are permitted to eventually develop "free will"?
If a robot (or computer) becomes sentient, then everything changes.
Anna :rose: Linting the day away :cool: Anna's Place | Tears and Laughter "If mushy peas are the food of the devil, the stotty cake is the frisbee of God"
-
Pattern matching is coming on in leaps and bounds as well. The algorithms are being refined, massive repositories of information are being built up, and calculation speed is always rising. To say that machines will never reach the level of complexity and ability of a human is to ignore the pattern of growth your brain should be perceiving :P The debatable part is whether or not a man-made brain will ever be able to truly "feel emotion" and have "free will", but then again it's debatable whether we ourselves truly have that, or whether we are just responding to our surroundings the way we are programmed to, like any other animal, just in a more complex way.
-
Ask them to suggest their own charter. Rich:laugh: