Extreme Artificial Intelligence
-
Nice insight. Humans are self-aware because of the senses and the way the brain analyses the perceptions it receives. If robots are fitted with the necessary sensors and with programs to analyse that sensory data and re-program themselves accordingly, then we will not be far from self-aware programs and robots.
-
This purpose is chosen by the self. Which criteria decide the purpose for an AI?
a beautiful signature
Chosen by default: every being is designed to reproduce, leave a good legacy, and be happy.
-
Our intelligence is not a result of computations. We are consummate pattern-matchers. Here is a really simplified version of how it goes: when we perceive something, it causes a certain bunch of sensory neurons to fire, which correspond directly to that perception. The neurons connected to those sensory neurons fire in turn if they recognize a pattern there -- for example, some neurons only fire if they see a vertical bar traveling from left to right, or other specific patterns like that. Then the next level of connected neurons fires if it recognizes a particular pattern in the level before it, and so forth. We learn by building up patterns of patterns. The match to a pattern pops up automatically; in other words, perceiving and recalling a matching previous pattern happen together because the perception and the recall are linked by sharing the same set of neurons in the middle.

For an example of how this works, take driving. When you first got behind the wheel as a kid, everything seemed very unfamiliar. All the knobs were confusing, and you probably had to concentrate to remember which pedal was which. You probably had trouble recognizing following distances and when to turn to fit into a parking space and that kind of thing. But with practice, your brain began to recognize and store the patterns of driving, until almost all of driving became subconscious pattern-matching -- the lines on the road should be at particular distances, the feel of the brake matches how quickly or slowly the car comes to a stop, et cetera. We don't have to think about any of these things because they match stored patterns in our minds. We don't have to consciously think about anything unless it breaks our expectations. Unexpected or unknown things draw our attention because they defy the patterns we know.

In contrast, a computer is terrible at pattern-matching. Many, many man-years went into the Google search algorithm, but really, what it's doing is trying to mimic the natural human ability to glance over a list and recognize what you are looking for in it. This very basic ability has to be painstakingly coded into the computer. If you lined up a bunch of toys and asked a preschooler to hand you the "meanest one," the preschooler would be able to match his or her idea of "meanness" to the various traits of the toys and decide which one is the most mean. The computer, on the other hand, has no ability to take the concept of "mean" and expand it to apply to a toy, *unless a human writes an algorithm describing exactly how to do so*.
Well, I just had to give you +5 for that. Yes, current computer programs are like you have described; what you just described is called the standard model of vision. I am currently researching computer vision, and I am trying to integrate figure-ground discrimination into the algorithms as efficiently as possible. The reason computers are bad at pattern matching is that programmers just haven't figured out how to efficiently tell a computer how to do that. We might not need new hardware: such algorithms can be hardware-accelerated using Graphics Processing Units, and introducing parallel processing and better methods/algorithms will start solving the problem of perception by computers. Don't blame the computers; blame us programmers for their shortcomings.
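To make the layered "pattern of patterns" idea quoted above a little more concrete, here is a minimal sketch in Python. It is illustrative only: the frame size, the bar template and the two hand-written levels are my own assumptions, not anyone's actual vision code. Level 1 fires wherever it sees a vertical bar in a single frame; level 2 fires only if those level-1 responses shift rightwards across frames, i.e. a bar travelling from left to right.

# Minimal sketch (illustrative only): a two-level "pattern of patterns" detector.
# Level 1 fires where it sees a vertical bar in a single frame; level 2 fires
# only if those level-1 responses move rightwards across successive frames.
import numpy as np

def level1_vertical_bar(frame):
    """Return the columns where a full-height vertical bar of 1s is present."""
    return [c for c in range(frame.shape[1]) if np.all(frame[:, c] == 1)]

def level2_left_to_right(frames):
    """Fire (return True) if the level-1 bar position increases frame to frame."""
    positions = []
    for frame in frames:
        cols = level1_vertical_bar(frame)
        if not cols:
            return False           # pattern broken: no bar in this frame
        positions.append(cols[0])  # track the left-most bar
    return all(b > a for a, b in zip(positions, positions[1:]))

def make_frame(bar_col, height=5, width=8):
    """Build a toy binary frame containing a single vertical bar."""
    frame = np.zeros((height, width), dtype=int)
    frame[:, bar_col] = 1
    return frame

moving = [make_frame(c) for c in (1, 3, 5)]  # bar drifts rightwards
static = [make_frame(2) for _ in range(3)]   # bar stays put
print(level2_left_to_right(moving))          # True  -- higher-level pattern matched
print(level2_left_to_right(static))          # False -- no left-to-right motion

The point of the toy example is only that each level matches patterns in the output of the level below it; in a human the equivalent detectors are learned rather than hand-coded, which is exactly the gap the next posts argue about.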
-
Well, I just had to give you +5 for that. Yes, current computer programs are like you have described; what you just described is called the standard model of vision. I am currently researching computer vision, and I am trying to integrate figure-ground discrimination into the algorithms as efficiently as possible. The reason computers are bad at pattern matching is that programmers just haven't figured out how to efficiently tell a computer how to do that. We might not need new hardware: such algorithms can be hardware-accelerated using Graphics Processing Units, and introducing parallel processing and better methods/algorithms will start solving the problem of perception by computers. Don't blame the computers; blame us programmers for their shortcomings.
You are sort of missing the point of the conversation here. When I said "we have to change the very basis for how they work," I didn't mean the hardware. While those ant-like robots are the most humanlike in intelligence, scaling up that hardware to human size would be technically infeasible. I meant that, if the goal is to have computers have humanlike intelligence, then the whole way the system works would have to change, hardware and software both.

You said you are "trying to integrate figure-ground discrimination into the algorithms as efficiently as possible." In other words, the way the computer you are using "thinks," it requires you to tell it what patterns exist, what to do when it sees them, what to do when it doesn't see them, et cetera. This "intelligence" of precisely following a programmer's algorithm is simply not humanlike. The extent to which that computer is humanlike depends, as you said, on your own programming ability to impose a small part of *your* human intelligence onto the computer. The computer will only be able to mimic the small portion of your intelligence that you are able to give it, and not one whit more. The program you eventually write will not really be doing the same thing you are doing in your own head at all.

You are not following any kind of algorithm when you look at a cute kitten and say, "Awwwwww." It's simply that, the way your perception works, the kitten triggered enough of a pattern linked to the "Awwww" emotion to evoke enough of said emotion that you were made consciously aware of it; and, following a pattern of "what I do when I feel sufficient awwwwwww," and said pattern not being overridden by other patterns (such as "what I do when I'm in front of my boss"), you triggered the pattern for saying the sounds of "Awwwwww" in a particular tone of voice. This networking of patterns and behavior is the basis for humanlike intelligence.

So, for a computer to be humanlike, we have to throw out our current programming techniques and start fresh with an attempt to create computer architecture (by which I mean both hardware and software) with a pliable, adaptable artificial network that "learns as it goes" not because some algorithm tells it what it should be learning, but because learning and processing are one and the same. That's how a human mind works: every time you perceive something, you reinforce or change patterns simply through the act of perception. The human mind is in a state of constant change. This is what I mean by changing the very basis for how computers work.
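For what "learning and processing are one and the same" could look like in code, here is a minimal sketch of a single Hebbian unit. The input size, learning rate and threshold are my own toy assumptions; this is not a model of the brain, and real Hebbian learning also needs weight normalisation to stop the weights growing without bound. The point is only that every call that processes an input also changes the weights, so a repeated pattern becomes progressively easier to trigger.

# Minimal sketch (illustrative only): a unit where processing *is* learning.
import numpy as np

class HebbianUnit:
    def __init__(self, n_inputs, rate=0.1, threshold=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random(n_inputs) * 0.1    # weak initial connections
        self.rate = rate
        self.threshold = threshold

    def perceive(self, x):
        """Process an input and learn from it in the same step (Hebb's rule)."""
        activation = float(self.w @ x)
        self.w += self.rate * activation * x   # co-active connections strengthen
        return activation > self.threshold     # does the unit "recognise" the input?

unit = HebbianUnit(n_inputs=4)
pattern = np.array([1.0, 1.0, 0.0, 1.0])       # the same perception, over and over
responses = [unit.perceive(pattern) for _ in range(20)]
print(responses)  # starts out False, flips to True once the pattern has been worn in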
-
You are sort of missing the point of the conversation here. When I said "we have to change the very basis for how they work," I didn't mean the hardware. While those ant-like robots are the most humanlike in intelligence, scaling up that hardware to human size would be technically infeasible. I meant that, if the goal is to have computers have humanlike intelligence, then the whole way the system works would have to change, hardware and software both.

You said you are "trying to integrate figure-ground discrimination into the algorithms as efficiently as possible." In other words, the way the computer you are using "thinks," it requires you to tell it what patterns exist, what to do when it sees them, what to do when it doesn't see them, et cetera. This "intelligence" of precisely following a programmer's algorithm is simply not humanlike. The extent to which that computer is humanlike depends, as you said, on your own programming ability to impose a small part of *your* human intelligence onto the computer. The computer will only be able to mimic the small portion of your intelligence that you are able to give it, and not one whit more. The program you eventually write will not really be doing the same thing you are doing in your own head at all.

You are not following any kind of algorithm when you look at a cute kitten and say, "Awwwwww." It's simply that, the way your perception works, the kitten triggered enough of a pattern linked to the "Awwww" emotion to evoke enough of said emotion that you were made consciously aware of it; and, following a pattern of "what I do when I feel sufficient awwwwwww," and said pattern not being overridden by other patterns (such as "what I do when I'm in front of my boss"), you triggered the pattern for saying the sounds of "Awwwwww" in a particular tone of voice. This networking of patterns and behavior is the basis for humanlike intelligence.

So, for a computer to be humanlike, we have to throw out our current programming techniques and start fresh with an attempt to create computer architecture (by which I mean both hardware and software) with a pliable, adaptable artificial network that "learns as it goes" not because some algorithm tells it what it should be learning, but because learning and processing are one and the same. That's how a human mind works: every time you perceive something, you reinforce or change patterns simply through the act of perception. The human mind is in a state of constant change. This is what I mean by changing the very basis for how computers work.
I get your view, "new hardware and software", but any machine can be simulated on a digital computer before it is made, so that pliable, adaptable artificial network that "learns as it goes" can be emulated on already existing digital computers with the right coding. That's the point I'm putting across here. And one can also use already existing GPUs to accelerate the processing speed of the simulation.
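A minimal sketch of that emulation argument, under my own assumptions (NumPy arrays standing in for whatever the real network would use): the same "learn as it goes" update written as plain matrix operations over a whole layer and a batch of inputs. Because it is just dense linear algebra, the identical code could be offloaded to a GPU by swapping the array library (for example CuPy, which mirrors much of NumPy's API) -- which is the kind of acceleration meant above, not new hardware.

# Minimal sketch (illustrative only): a whole adaptive layer emulated with matrix ops.
import numpy as np

def simulate_layer(inputs, weights, rate=0.001):
    """One pass of a layer over a batch: processing and adapting in the same step."""
    activations = inputs @ weights                # (batch, units): the "processing"
    weights += rate * (inputs.T @ activations)    # Hebbian-style batch update: the "learning"
    return activations, weights

rng = np.random.default_rng(1)
batch = rng.random((256, 64))                     # 256 perceptions of 64 sensory inputs each
weights = rng.random((64, 32)) * 0.01             # connections to 32 higher-level pattern units
for _ in range(5):                                # repeated exposure reshapes the layer
    activations, weights = simulate_layer(batch, weights)
print(activations.shape, float(weights.mean()))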
-
+5, but with an addendum: some animals rely on hereditary knowledge. Our brain being an animal-like one, I'd say that the reptile in there might be hard enough to simulate.
Bastard Programmer from Hell :suss:
-
Hey guys & ladies (to be gender insensitive), a theoretical thought: if a computer program simulates the human brain very accurately, does that make the program self-aware?
I believe there are aspects of a biological organism that just cannot be fully replicated in electronics. We may get to the point where CPUs and software can replicate the processing power of a human brain (like one post says: we are as close to that as Earth is to the edge of the universe), but I don't think it would ever be "alive" or aware of itself. Of course, this is my non-professional opinion, so take it with a grain of salt. This is indeed a very interesting and thought-provoking post. :thumbsup::thumbsup::thumbsup::thumbsup::thumbsup:
-
But the brain uses some form of neural computation to generate self-awareness. Don't you think that anything with short-term memory is self-aware? Imagine we erase some part of a person's short-term memory: are they going to know that they did what they just did at that moment?
That person would still be aware of self. He or she would just be confused ;).
-
I believe there are aspects of a biological organism that just cannot be fully replicated in electronics. We may get to the point where CPUs and software can replicate the processing power of a human brain (like one post says: we are as close to that as Earth is to the edge of the universe), but I don't think it would ever be "alive" or aware of itself. Of course, this is my non-professional opinion, so take it with a grain of salt. This is indeed a very interesting and thought-provoking post. :thumbsup::thumbsup::thumbsup::thumbsup::thumbsup:
Yeah, it is hard to get around this thought. I was thinking that if you can hold a meaningful conversation with a machine and it can recognize you and respond to your emotions, I don't see why it shouldn't be considered aware of its environment and what's going on, at least.
-
That person would still be aware of self. He or she would just be confused ;).
Yeah, but I think memory has something to do with self-awareness.
-
Hey guys & ladies (to be gender insensitive), a theoretical thought: if a computer program simulates the human brain very accurately, does that make the program self-aware?
You know, you've looked into a can of worms that people have been looking into for many, many years. I would argue that once a computer passes the Turing test, it will probably demand human rights or "intelligent lifeform" rights and will probably get some form of legal protection. I would call it self-aware; it would probably call itself self-aware. Good question. And judging by the number of responses, many other people are interested too.
-
Yes, it can be hard to do such a simulation on a single computer, but I also think self-awareness could be achievable with programs not anywhere near as complex as the brain.
-
You know, you've looked into a can of worms that people have been looking into for many, many years. I would argue that once a computer passes the Turing test, it will probably demand human rights or "intelligent lifeform" rights and will probably get some form of legal protection. I would call it self-aware; it would probably call itself self-aware. Good question. And judging by the number of responses, many other people are interested too.
Nice reply, Tim Yen. :thumbsup:
-
Good to see some support :laugh:
-
Hey guys & ladies (to be gender insensitive), a theoretical thought: if a computer program simulates the human brain very accurately, does that make the program self-aware?
When you build it, you should ask it.
-
When you build it, you should ask it.
Good one, but I think it could argue that it is self-aware and convince a lot of people, and I think if it did that then it deserves to be considered self-aware. :thumbsup:
-
Yeah, but I think memory has something to do with self-awareness.
Would that mean that people with Alzheimer's disease aren't self-aware? ;)
-
Would that mean that people with Alzheimer's disease aren't self-aware? ;)
No, they are self-aware, because Alzheimer's disease affects long-term memory, but short-term memory may be responsible for self-awareness.
“Be at war with your vices, at peace with your neighbors, and let every new year find you a better man.”
-
Hey guys & ladies (to be gender insensitive), a theoretical thought: if a computer program simulates the human brain very accurately, does that make the program self-aware?
One newbie mistake is looking at this from *only* a computer science aspect. Defining consciousness, as well as answering certain fundamental questions such as how it arises and how it is maintained, is currently being researched very heavily. Anyone coming back with solely "In my opinion, *blah* defines consciousness" will be summarily dismissed ;P Thanks, Sean
-
One newbie mistake is looking at this from *only* a computer science aspect. Defining consciousness, as well as answering certain fundamental questions such as how it arises and how it is maintained, is currently being researched very heavily. Anyone coming back with solely "In my opinion, *blah* defines consciousness" will be summarily dismissed ;P Thanks, Sean
I'm afraid I did not look at this from "only" a computer science perspective; I have researched neural sensory processing as well. And I don't seem to get your point; the reply is not clear. :)
“Be at war with your vices, at peace with your neighbors, and let every new year find you a better man.”