Electronic Consciousness?
-
Creating an artificial consciousness is a far easier task, because you are not constrained to follow any particular implementation. It just has to have the external interface of a consciousness. That is what the Turing Test is about: defining an acceptance test for an artificial consciousness. It avoids metaphysical arguments about what consciousness *is* and effectively says: if it looks like a duck and it quacks like a duck... See? Way easier.
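Just to make the "external interface" point concrete, here is a toy sketch. Everything in it (the ConsciousAgent interface, the judge callback, the pass criterion) is made up for illustration; it is not a claim about Turing's actual protocol, only about testing behaviour rather than implementation.

```cpp
#include <functional>
#include <string>
#include <vector>

// Hypothetical interface: the test only cares about observable behaviour,
// not how the agent is implemented internally.
struct ConsciousAgent
{
    virtual ~ConsciousAgent() = default;
    virtual std::string reply(const std::string& prompt) = 0;
};

// A Turing-style acceptance test: a judge compares two transcripts without
// knowing which came from the machine, and returns true when it believes
// the first one came from a human.
bool passesImitationGame(ConsciousAgent& candidate,
                         ConsciousAgent& human,
                         const std::vector<std::string>& prompts,
                         const std::function<bool(const std::string&,
                                                  const std::string&)>& judgeThinksFirstIsHuman)
{
    std::size_t fooled = 0;
    for (const auto& p : prompts)
    {
        if (judgeThinksFirstIsHuman(candidate.reply(p), human.reply(p)))
            ++fooled;
    }
    // "Quacks like a duck": pass if the judge is fooled about as often as not.
    return fooled * 2 >= prompts.size();
}
```

Nothing in that signature asks how `candidate` works inside; that is the whole point.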
SeattleC++ wrote:
Creating an artificial consciousness is a far easier task, because you are not constrained to follow any particular implementation. It just has to have the external interface of a consciousness.
Since both are very, very difficult, the fact that one might or might not be easier doesn't change the fact that it is still very, very difficult. Moreover, recognizing that consciousness exists is far different from understanding it. And although understanding it might lead to creation, there is no guarantee.
SeattleC++ wrote:
That is what the Turing Test is about: defining an acceptance test for an artificial consciousness. It avoids metaphysical arguments about what consciousness *is* and effectively says: if it looks like a duck and it quacks like a duck...
Not really. In particular, notice the third subtopic at the following link, which notes that people have already used that 'test' to decide that programs are intelligent: http://en.wikipedia.org/wiki/Turing_test#Weaknesses_of_the_test
-
patbob wrote:
It doesn't need to know we exist to do something that's catastrophic to us
Then, by definition, we would know it exists. And if we know it exists we can certainly do something to it.
We'd know, but only after the fact... and only maybe. We'd need to know with certainty that an unnatural event happened. We'd know it was unnatural if A) our model didn't predict it, B) we're very certain our model isn't incorrect, and C) we don't just toss that data point out as anomalous. You hit most of the stoplights green on the way home tonight... natural event, or unnatural? How can you tell?
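Spelled out as code, it might look something like the sketch below (the quantity being modelled, the confidence numbers, and the threshold are all invented for the example):

```cpp
#include <cmath>

// Toy sketch of conditions A, B and C above. What the model predicts, and
// the numbers used, are made up for illustration.
struct Observation
{
    double predicted;        // what our model of the world said would happen
    double actual;           // what we actually measured
    double modelConfidence;  // 0..1, how sure we are the model itself is right (B)
    bool   keptDataPoint;    // we didn't just toss it out as an outlier (C)
};

bool looksUnnatural(const Observation& obs, double surpriseThreshold = 3.0)
{
    const bool predictionMissed = std::fabs(obs.actual - obs.predicted) > surpriseThreshold; // (A)
    const bool trustTheModel    = obs.modelConfidence >= 0.95;                               // (B)
    return predictionMissed && trustTheModel && obs.keptDataPoint;                           // (C)
}
```

The stoplight run is exactly the case where (A) might trip but (B) fails: our model of traffic lights is too weak to be 0.95 sure, so a lucky string of greens never gets flagged as unnatural.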
We can program with only 1's, but if all you've got are zeros, you've got nothing.
-
I see no reason why it wouldn't be possible. After all, the human brain is nothing but a big processor that obeys the laws of physics. Man-made systems with similar capabilities of cognition, affectivity, introspection... should be able to support consciousness. Wikipedia supplies interesting material on this topic (https://en.wikipedia.org/wiki/Mind_uploading). Moving consciousness from one being to another is something we (I) don't understand at the moment, and it seems to raise paradoxical situations.
I remember the book Neverness (I think it was by Alan Dean Foster), which I read several years ago. In this book a sentence was quoted several times: "If our brain were simple enough that we could understand it, we would be so simple that we couldn't." So maybe creating and teaching an AI is possible, but I don't think copying/transferring would work.
-
patbob wrote:
We'd know, but only after the fact...
That is a pretty big 'what if'. You are suggesting that it did something awful but did nothing detectable at all before that. Sort of like suggesting that humans went from a club straight to a full nuclear missile launch.
-
I am curious, from the point of view of people from a programming sort of environment, how many of us believe that it will be possible to store a consciousness electronically. This could be either to store an existing consciousness (as in copying or backing up an existing mind electronically) or to develop a completely new consciousness that never existed biologically. I'm trying to stay away from the far-reaching philosophical and moral implications of doing either of these things. That's a massive can of worms. I just wondered how many coders actually think it will ever be possible and how many think it is something that could never be achieved. I think it will be possible.
Please see the page at http://tmor.exnihilum.org/ and reply here if you would be interested in working together with me on this long-overdue project. Cheers, TJL
-
Let's just hope it's not a touch screen.
Depends upon who's doing the touching. :)
-
Depends upon who's doing the touching. :)
Good Point. :) As long as they wash their hands.