Philosophical Friday
-
Say you had a very, very powerful computer and went about emulating the human brain with it: 100 billion neurons, each with a thousand-odd synapses, and then you get into the mucky business of all the different neurotransmitters (yes, SQL Server Data Center edition may be required). Anyway, upon hitting F5 you find yourself able to converse with your emulation, which may be a tad annoyed that it's been reincarnated as a bit of software. Would this emulation have consciousness, feelings and motivations? In order to function properly you would expect so, but how can you create that when, at the end of the day, all the computer is doing is a set of simple operations involving registers and memory? For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts? Complexity is often just a product of large amounts of simplicity, but I can't personally make the mental leap from simple operations on a computer or a neuron to the forming of the idea of self. Perhaps one for the pub, it is Friday after all. Mine's a pint.
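For what it's worth, the numbers in the post invite a quick back-of-envelope storage estimate. This is only a sketch: the 4-bytes-per-synapse weight is purely an assumption (real synapses carry far more state than one float), but it gives a feel for the scale.

```python
# Back-of-envelope storage estimate for the brain emulation described above.
NEURONS = 100e9            # 100 billion neurons, per the post
SYNAPSES_PER_NEURON = 1000 # a thousand-odd synapses each, per the post
BYTES_PER_SYNAPSE = 4      # assumption: one 4-byte weight per synapse

total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
print(f"{total_bytes / 1e12:.0f} TB")  # prints "400 TB"
```

So roughly 400 TB just for the synaptic weights, before any neurotransmitter state - the SQL Server Data Center quip may not be far off.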
Regards, Rob Philpott.
-
Well, I can't speak for the rest of you, but I like to think I am.
Regards, Rob Philpott.
-
If it's "you" that was recreated in this machine - place yourself in "his" metaphorical shoes. "You" would wake up, being conscious, but you would be paralyzed (you can't feel your legs or arms), blind (no eyes), deaf (no ears). You would feel extreme panic as your mind no longer registers the inhalation and exhalation of breath. Eventually - if you didn't go mad - you would calm down and realize that something else is going on; this is when "you" would perhaps realize that you were only a recreation - a simulation of the original. Under these circumstances, if the original attempted to communicate with me, I'd tell myself to FRO.
-
Rob Philpott wrote:
For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts?
Which philosophers have been pondering for a very long time - definitely since before anything like a computer existed. The basic discussion is: given that person A is talking to person B, how does A know that B is conscious? How does A know that B thinks the same way that A does? How does A know that, even though B seems to be discussing everything in a reasonable way, B is in fact understanding the same concepts that A is trying to convey? The "Turing Test" is an experimental technique defined in an attempt to at least arrive at an equivalence in behavior.
-
Are you a deity? If not, I'm afraid I do not accept your explanation.
Regards, Rob Philpott.
Rob Philpott wrote:
Are you a deity? If not, I'm afraid I do not accept your explanation.
Philosophically, that is basically an invalid refutation of the statement. Your statement embodies the assumption that you are in fact an intelligent being rather than just some words in a forum. It also assumes that, even if you are an intelligent entity, one must be a deity and not just very smart to be capable of modeling you.
-
Why not? At the end of the day, a biological human brain is just a big bag of chemicals. There's no magic involved.
harold aptroot wrote:
Why not? At the end of the day, a biological human brain is just a big bag of chemicals. There's no magic involved.
Until you provide a working definition of intelligence and a test that demonstrates it one way or the other then your statement is nothing more definitive than whether you like vanilla ice cream or not.
-
-
No magic, no. They say that 95% of the brain is in the unconscious, just processing, and that's fine. But perception and emotion - can't see it myself. Like I said, it's like the sum is more than the parts. If you exclude the divine there has to be an interesting science of how things move up a level, and if we do ever manage to create AI there will be lots of moral questions about the worth of what's created vs. the worth of the human.
Regards, Rob Philpott.
Rob Philpott wrote:
Like I said, it's like the sum is more than the parts.
And if you put gasoline and a bunch of steel on the ground, you are still not going to be able to drive it from NY to Chicago; but, presumably, you can do exactly that with an automobile built from the same ingredients.
Rob Philpott wrote:
If you exclude the divine there has to be an interesting science of how things move up a level, and if we do ever manage to create AI there will be lots of moral questions about the worth of what's created vs. the worth of the human.
The number of things that people attach morality to probably isn't infinite but it is certainly big enough that enumerating it would be endless. So I fail to see how that matters.
-
Rob Philpott wrote:
there will be lots of moral questions about the worth of what's created vs. the worth of the human
Ok, well here's something to consider: an AI does not notice its death. You can make it stop updating, which it can't notice because it's not updating (from its perspective, time stopped). If you then delete its state, well... as far as the AI is concerned, none of that ever happened; from one moment to another it just ceased to exist.
Rob Philpott wrote:
Like I said, it's like the sum is more than the parts.
I wouldn't really say so, I mean, we like to think of it as special somehow, but that's just our bias in favour of ourselves. (sort of like a watered-down version of Vitalism)
harold aptroot wrote:
Ok, well here's something to consider: an AI does not notice its death.
Precluding some supernatural explanation, I seriously doubt that humans "notice" their own death. As for computers noticing the death of others, there is speculation that the recent Twitter hack about a White House attack, which caused a drop in the stock market, was automated - in that computers acted on the information that an attack had occurred. If that is true, then it would seem very unlikely that they wouldn't also react to the death of certain individuals.
-
What does intelligence have to do with anything? Mere humans certainly aren't intelligent
harold aptroot wrote:
What does intelligence have to do with anything? Mere humans certainly aren't intelligent
That sounds like a term definition problem. Your response was in some way related to the statement "Would this emulation have consciousness, feelings and motivations?" Your statement certainly didn't seem to indicate that you thought humans didn't experience that. And excluding any philosophical meanderings, my statement of "intelligence" refers to whatever embodies the above concepts.
-
Rob Philpott wrote:
Would this emulation have consciousness, feelings and motivations?
Probably yes. However, it would have to be instructed to do so. Any human being or animal is influenced by its surroundings. The factors are quite unlimited, as everything that you notice has, directly or indirectly, an impact on you (however small it might be). What leads a being to develop consciousness, feelings and motivations is probably the more interesting question. How can you influence something to provoke feelings in the future? I suppose motivation is primarily driven by feelings. (Be it only to have some relief at the end of the month because you can afford your rent? ;)) As I see it, an emulation of a human brain would have to go through a whole process of growing up (probably accelerated by even more powerful hardware? ;P)
Rob Philpott wrote:
Perhaps one for the pub, it is Friday after all. Mine's a pint.
Sadly, this has to wait for another few hours :beer:
Nicholas Marty wrote:
However, it would have to be instructed to do so.
That is a supposition. Either humans arrive at those qualities by being instructed, they learn them themselves, or they are innate. And one can suppose that the first two are certainly possible for a machine intelligence. Myself, I doubt the last, because there are in fact observable differences of that nature between humans when one looks at culture and language. And if you stick a human at birth into a sensory deprivation environment and leave them there until they are 25, I seriously doubt they would have anything. They would probably be behaviorally brain dead.
-
As Dr Who himself once said: they built a copy of a human brain once, exact in every detail; it was the size of London and it didn't work. The answer is no. Life is life, you can't give it, and that's one reason why you shouldn't take it away. The most perfect model of a human brain you'll find is a human brain that died 2 minutes ago, and it's quite as useless as 2 pounds of jelly for anything except anatomy lessons.
"The secret of happiness is freedom, and the secret of freedom, courage." Thucydides (B.C. 460-400)
Matthew Faithfull wrote:
The answer is no, life is life, you can't give it and that's one reason why you shouldn't take it away.
So exactly how do you continue to live? Or does your definition of "life" not include cows, chickens, broccoli and carrots?
-
Self-awareness seems to be rather difficult to accomplish. You could say that the earth is a large, complicated processor, and it was only able to produce one species (that we know of) that could invent the internet. I believe that if it were possible for humans to create self-awareness - the ability to post on electronic forums - the earth would have already done so. Wait... I think I remember seeing a snake posting on Twitter: https://twitter.com/BronxZoosCobra[^] Well, there goes that argument. Yes, clearly self-awareness is replicable by humans, because snakes are posting on forums. Next discussion please.
madmatter wrote:
I believe that if it were possible for humans to create self-awareness - the ability to post on electronic forums
That is a confused definition. Scientifically there are more precise definitions for "self-awareness" and there are tests for it. Humans are not the only species that pass those tests.
-
harold aptroot wrote:
What does intelligence have to do with anything? Mere humans certainly aren't intelligent
That sounds like a term definition problem. Your response was in some way related to the statement "Would this emulation have consciousness, feelings and motivations?" Your statement certainly didn't seem to indicate that you thought humans didn't experience that. And excluding any philosophical meanderings, my statement of "intelligence" refers to whatever embodies the above concepts.
Ok, fine, be serious about it.. I'd say humans experience those things by definition, because that's what the words were invented for. Seems sort of self-centered to me, but whatever. So back to the bag of chemicals: there's nothing else in there, so that has to be the part that's inducing all those aspects of consciousness.
-
jschell wrote:
Precluding some supernatural explanation then I seriously doubt that humans "notice" their own death.
Yes, the same thing can obviously be said about any consciousness - it can't simultaneously be dead and be perceiving anything.
harold aptroot wrote:
Yes, the same thing can obviously be said about any consciousness - it can't simultaneously be dead and be perceiving anything.
At this point all I can say is that I don't understand what your point was in bringing death into the discussion as it relates to an AI.
-
I don't WANT to, I just feel that we NEED to in order to understand these philosophical questions. Actually, a sample size of one is pretty useless. We should torture hundreds of AI, just to be sure. ;P