Philosophical Friday
-
THAT WAS MY QUOTE!! :mad:
The United States invariably does the right thing, after having exhausted every other alternative. - Winston Churchill
America is the only country that went from barbarism to decadence without civilization in between. - Oscar Wilde
Wow, even the French showed a little more spine than that before they got their sh*t pushed in.[^] - Colin Mullikin
Heheheheheheheheheh.
“Education is not the piling on of learning, information, data, facts, skills, or abilities - that's training or instruction - but is rather making visible what is hidden as a seed”
“One of the greatest problems of our time is that many are schooled but few are educated” - Sir Thomas More (1478 – 1535)
-
No magic, no. They say that 95% of the brain is in the unconscious, just processing, and that's fine. But perception and emotion - I can't see it myself. Like I said, it's like the sum is more than the parts. If you exclude the divine there has to be an interesting science of how things move up a level, and if we do ever manage to create AI there will be lots of moral questions about the worth of what's created vs. the worth of the human.
Regards, Rob Philpott.
Rob Philpott wrote:
there will be lots of moral questions about the worth of what's created vs. the worth of the human
Ok, well here's something to consider: an AI does not notice its death. You can make it stop updating, which it can't notice because it's not updating (from its perspective, time stopped). If you then delete its state, well... as far as the AI is concerned, none of that ever happened; from one moment to another it just ceased to exist.
Rob Philpott wrote:
Like I said, it's like the sum is more than the parts.
I wouldn't really say so. I mean, we like to think of it as special somehow, but that's just our bias in favour of ourselves (sort of like a watered-down version of Vitalism).
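A toy sketch of that point (Python; the agent and its fields are invented purely for illustration): a stepped agent only experiences time through its own updates, so a pause between updates - or deletion afterwards - is simply not representable in its state:

    import time

    class ToyAgent:
        def __init__(self):
            self.tick = 0        # the only clock the agent has
            self.memory = []     # everything it could ever "notice"

        def update(self):
            self.tick += 1
            self.memory.append("tick %d" % self.tick)

    agent = ToyAgent()
    agent.update()
    time.sleep(2)                # the host pauses the world; nothing in the
                                 # agent's state can record that this happened
    agent.update()
    print(agent.memory)          # ['tick 1', 'tick 2'] - consecutive, no gap
    del agent                    # delete the state: now even that never "happened"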
-
Say you had a very, very powerful computer and went about emulating the human brain with it: 100 billion neurons, each with a thousand-odd synapses, and then you get into the mucky business of all the different neurotransmitters (yes, SQL Server Data Center edition may be required). Anyway, upon hitting F5 you find yourself able to converse with your emulation, which may be a tad annoyed that it's been reincarnated as a bit of software. Would this emulation have consciousness, feelings and motivations? In order to function properly you would expect so, but how can you create that when, at the end of the day, all the computer is doing is a set of simple operations involving registers and memory? For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts? Complexity is often just a product of large amounts of simplicity, but I can't personally make the mental leap from simple operations on a computer or a neuron to the forming of the idea of self. Perhaps one for the pub, it is Friday after all. Mine's a pint.
Regards, Rob Philpott.
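For flavour, here's a minimal sketch (Python, with made-up sizes and constants - nothing biologically faithful) of the kind of update loop such an emulation would run; at bottom it really is just multiply, add and compare:

    import random
    random.seed(1)

    N = 8                                     # stand-in for 100 billion
    weights = [[random.uniform(-1.0, 1.0) for _ in range(N)] for _ in range(N)]
    potential = [0.0] * N                     # membrane potential per neuron
    THRESHOLD, LEAK = 0.5, 0.9

    fired = [True] + [False] * (N - 1)        # poke one neuron to start
    for step in range(5):
        # the "simple operations": multiply, add, compare - nothing more
        inputs = [sum(weights[j][i] for j in range(N) if fired[j]) for i in range(N)]
        potential = [LEAK * p + x for p, x in zip(potential, inputs)]
        fired = [p >= THRESHOLD for p in potential]
        potential = [0.0 if f else p for f, p in zip(fired, potential)]  # reset on spike
        print(step, "".join("#" if f else "." for f in fired))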
As Dr Who himself once said: they built a copy of a human brain once, exact in every detail; it was the size of London and it didn't work. The answer is no - life is life, you can't give it, and that's one reason why you shouldn't take it away. The most perfect model of a human brain you'll find is a human brain that died 2 minutes ago, and it's quite as useless as 2 pounds of jelly for anything except anatomy lessons.
"The secret of happiness is freedom, and the secret of freedom, courage." Thucydides (B.C. 460-400)
-
Rob Philpott wrote:
there will be lots of moral questions about the worth of what's created vs. the worth of the human
Ok, well here's something to consider: an AI does not notice its death. You can make it stop updating, which it can't notice because it's not updating (from its perspective, time stopped). If you then delete its state, well... as far as the AI is concerned, none of that ever happened; from one moment to another it just ceased to exist.
Rob Philpott wrote:
Like I said, it's like the sum is more than the parts.
I wouldn't really say so. I mean, we like to think of it as special somehow, but that's just our bias in favour of ourselves (sort of like a watered-down version of Vitalism).
Quote:
Ok, well here's something to consider: an AI does not notice its death. You can make it stop updating, which it can't notice because it's not updating (from its perspective, time stopped). If you then delete its state, well... as far as the AI is concerned, none of that ever happened; from one moment to another it just ceased to exist.
I would argue that in the case of sudden death, humans lack the ability to know that they're dead (at the strictly human level - if existence continues, it's post-human). Now, if life is slowly failing, a human can detect and anticipate the end of life, but so could a sufficiently sophisticated AI. With the proper inputs, I think a machine could anticipate death at least as well as a human.
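A sketch of the "proper inputs" idea (Python; the health readings and the extrapolation rule are invented for illustration) - a machine that can see its own declining signal can predict when it hits zero:

    def steps_until_failure(readings):
        # linear extrapolation of a declining health signal toward zero
        if len(readings) < 2:
            return None
        decline_per_step = (readings[0] - readings[-1]) / (len(readings) - 1)
        if decline_per_step <= 0:
            return None                      # not declining; nothing to anticipate
        return readings[-1] / decline_per_step

    health = [100, 92, 85, 77, 70]           # made-up readings from a failing part
    print("predicted steps left: %.1f" % steps_until_failure(health))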
-
Say you had a very, very powerful computer and went about emulating the human brain with it: 100 billion neurons, each with a thousand-odd synapses, and then you get into the mucky business of all the different neurotransmitters (yes, SQL Server Data Center edition may be required). Anyway, upon hitting F5 you find yourself able to converse with your emulation, which may be a tad annoyed that it's been reincarnated as a bit of software. Would this emulation have consciousness, feelings and motivations? In order to function properly you would expect so, but how can you create that when, at the end of the day, all the computer is doing is a set of simple operations involving registers and memory? For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts? Complexity is often just a product of large amounts of simplicity, but I can't personally make the mental leap from simple operations on a computer or a neuron to the forming of the idea of self. Perhaps one for the pub, it is Friday after all. Mine's a pint.
Regards, Rob Philpott.
All I can ask, in return, is: if the Hulk lifts Thor while Thor is holding Mjolnir, does that mean that the Hulk lifted Mjolnir?
I was brought up to respect my elders. I don't respect many people nowadays.
CodeStash - Online Snippet Management | My blog | MoXAML PowerToys | Mole 2010 - debugging made easier
-
Say you had a very, very powerful computer and went about emulating the human brain with it: 100 billion neurons, each with a thousand-odd synapses, and then you get into the mucky business of all the different neurotransmitters (yes, SQL Server Data Center edition may be required). Anyway, upon hitting F5 you find yourself able to converse with your emulation, which may be a tad annoyed that it's been reincarnated as a bit of software. Would this emulation have consciousness, feelings and motivations? In order to function properly you would expect so, but how can you create that when, at the end of the day, all the computer is doing is a set of simple operations involving registers and memory? For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts? Complexity is often just a product of large amounts of simplicity, but I can't personally make the mental leap from simple operations on a computer or a neuron to the forming of the idea of self. Perhaps one for the pub, it is Friday after all. Mine's a pint.
Regards, Rob Philpott.
If you could create such a thing, then yes, it would have emotions and all the rest of it. The philosophy comes in at the step before that: if our brain is more than a physical object, we'd never be able to create a simulation. You couldn't just plug a human brain into a beige box, though. Large parts of our brain are linked to our physical form (not just the obvious way of moving bits and feeling what happens to them, but all our senses - our sense of space, movement, even time - are linked to how we are put together).
There's nothing intrinsically difficult about the idea of emergent complexity; we can see it in various places already. For example, ants are simple, but the complexity of an ant colony is not (similarly with bees storing information about good flower areas); genes and chromosomes are individually simple, to the level we chemically understand them, and yet put them together and you encode complex organisms; market trades are individually very simple, and yet the behaviour of markets as a whole is not.
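Conway's Game of Life is perhaps the cleanest small demonstration of that: two trivial per-cell rules, yet structures like the glider below move across the grid even though no rule mentions movement (a minimal Python sketch, purely illustrative):

    from collections import Counter

    def step(live):
        # two trivial rules per cell; everything else is emergent
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for generation in range(4):
        print(generation, sorted(glider))
        glider = step(glider)      # the "glider" drifts across the grid,
                                   # though no rule mentions movement at all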
-
Say you had a very, very powerful computer and went about emulating the human brain with it: 100 billion neurons, each with a thousand-odd synapses, and then you get into the mucky business of all the different neurotransmitters (yes, SQL Server Data Center edition may be required). Anyway, upon hitting F5 you find yourself able to converse with your emulation, which may be a tad annoyed that it's been reincarnated as a bit of software. Would this emulation have consciousness, feelings and motivations? In order to function properly you would expect so, but how can you create that when, at the end of the day, all the computer is doing is a set of simple operations involving registers and memory? For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts? Complexity is often just a product of large amounts of simplicity, but I can't personally make the mental leap from simple operations on a computer or a neuron to the forming of the idea of self. Perhaps one for the pub, it is Friday after all. Mine's a pint.
Regards, Rob Philpott.
Self-awareness seems to be rather difficult to accomplish. You could say that the earth is a large, complicated processor, and it was only able to produce one species (that we know of) that could invent the internet. I believe that if it were possible to create self-awareness - the ability to post on electronic forums - the earth would have already done so more than once. Wait... I think I remember seeing where a snake was posting on Twitter https://twitter.com/BronxZoosCobra[^] - well, there goes that argument. Yes, clearly self-awareness is replicable by humans, because snakes are posting on forums. Next discussion please.
-
All I can ask, in return, is: if the Hulk lifts Thor while Thor is holding Mjolnir, does that mean that the Hulk lifted Mjolnir?
I was brought up to respect my elders. I don't respect many people nowadays.
CodeStash - Online Snippet Management | My blog | MoXAML PowerToys | Mole 2010 - debugging made easier
Dude, that's Modus Ponens[^] So yes, I can confirm this.
Regards, Rob Philpott.
-
:laugh: Very good. Morally dead for the most part, but still highly toxic. If only we could find a landfill site that would take them.
"The secret of happiness is freedom, and the secret of freedom, courage." Thucydides (B.C. 460-400)
-
Well, I can't speak for the rest of you, but I like to think I am.
Regards, Rob Philpott.
-
Say you had a very, very powerful computer and went about emulating the human brain with it: 100 billion neurons, each with a thousand-odd synapses, and then you get into the mucky business of all the different neurotransmitters (yes, SQL Server Data Center edition may be required). Anyway, upon hitting F5 you find yourself able to converse with your emulation, which may be a tad annoyed that it's been reincarnated as a bit of software. Would this emulation have consciousness, feelings and motivations? In order to function properly you would expect so, but how can you create that when, at the end of the day, all the computer is doing is a set of simple operations involving registers and memory? For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts? Complexity is often just a product of large amounts of simplicity, but I can't personally make the mental leap from simple operations on a computer or a neuron to the forming of the idea of self. Perhaps one for the pub, it is Friday after all. Mine's a pint.
Regards, Rob Philpott.
If it's "you" that was recreated in this machine - place yourself in "his" metaphorical shoes. "You" would wake up, being conscious, but you would be paralyzed (you can't feel your legs or arms), blind (no eyes), deaf (no ears). You would feel extreme panic as your mind no longer registers the inhalation and exhalation of breath. Eventually - if you didn't go mad - you would calm down and and realize that something else is going on, this is when "you" would perhaps realize that you were only a recreation - a simulation of the original. Under these circumstances, if the original attempted to communicate with me, I'd tell myself to FRO.
-
Say you had a very, very powerful computer and went about emulating the human brain with it: 100 billion neurons, each with a thousand-odd synapses, and then you get into the mucky business of all the different neurotransmitters (yes, SQL Server Data Center edition may be required). Anyway, upon hitting F5 you find yourself able to converse with your emulation, which may be a tad annoyed that it's been reincarnated as a bit of software. Would this emulation have consciousness, feelings and motivations? In order to function properly you would expect so, but how can you create that when, at the end of the day, all the computer is doing is a set of simple operations involving registers and memory? For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts? Complexity is often just a product of large amounts of simplicity, but I can't personally make the mental leap from simple operations on a computer or a neuron to the forming of the idea of self. Perhaps one for the pub, it is Friday after all. Mine's a pint.
Regards, Rob Philpott.
Rob Philpott wrote:
For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts?
Which philosophers have been pondering for a very long time - definitely since before anything like a computer existed. The basic discussion is: given that person A is talking to person B, how does A know that B is conscious? How does A know that B thinks the same way that A does? How does A know that, even though B seems to be discussing everything in a reasonable way, B is in fact understanding the same concepts that A is trying to convey? The "Turing Test" is an experimental technique defined in an attempt to at least arrive at an equivalence in behavior.
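Reduced to a skeleton, the setup looks something like this (Python; both respondents are stand-ins invented for the example) - the judge only ever gets behaviour over the channel, never internals:

    import random

    def respondent_a(prompt):      # stand-in for the human
        return "I think so, but it's hard to put into words."

    def respondent_b(prompt):      # stand-in for the machine
        return "I think so, but it's hard to put into words."

    hidden = [respondent_a, respondent_b]
    random.shuffle(hidden)         # the judge never sees which is which

    for i, ask in enumerate(hidden):
        print("Respondent %d: %s" % (i, ask("Are you conscious?")))
    # If the judge can't beat chance from transcripts alone, the machine
    # "passes" - an equivalence in behavior, and nothing deeper than that.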
-
Are you a deity? If not, I'm afraid I do not accept your explanation.
Regards, Rob Philpott.
Rob Philpott wrote:
Are you a deity? If not, I'm afraid I do not accept your explanation.
Philosophically, that is basically an invalid refutation of the statement. Your statement embodies the assumption that you are in fact an intelligent being rather than just some words in a forum. And it also assumes that, even if you are an intelligent entity, one must be a deity - and not just very smart - to be capable of modeling you.
-
Why not? At the end of the day, a biological human brain is just a big bag of chemicals. There's no magic involved.
harold aptroot wrote:
Why not? At the end of the day, a biological human brain is just a big bag of chemicals. There's no magic involved.
Until you provide a working definition of intelligence, and a test that demonstrates it one way or the other, your statement is no more definitive than whether you like vanilla ice cream or not.
-
No magic, no. They say that 95% of the brain is in the unconscious, just processing, and that's fine. But perception and emotion - I can't see it myself. Like I said, it's like the sum is more than the parts. If you exclude the divine there has to be an interesting science of how things move up a level, and if we do ever manage to create AI there will be lots of moral questions about the worth of what's created vs. the worth of the human.
Regards, Rob Philpott.
Rob Philpott wrote:
Like I said, it's like the sum is more than the parts.
And if you put gasoline and a bunch of steel on the ground, you are still not going to be able to drive it from NY to Chicago - but, presumably, you could once the same parts were arranged into an automobile.
Rob Philpott wrote:
If you exclude the divine there has to be an interesting science of how things move up a level, and if we do ever manage to create AI there will be lots of moral questions about the worth of what's created vs. the worth of the human.
The number of things that people attach morality to probably isn't infinite, but it is certainly big enough that enumerating them would be endless. So I fail to see how that matters.