artificial intelligence is a myth!!!
-
You have studied AI, so you should know that AI does not try to replicate human intelligence anyway. Mostly for one reason: Artificial Intelligence is hard, but Artificial Stupidity is much much harder. :-) Jokes aside, in AI we want to write systems that solve problems as well as or better than humans do. We do not want to create a human mind with all the unnecessary baggage that comes with it. So in a sense it doesn't quite matter whether it's possible or not; research in the field is not going in that direction because it wouldn't bring any benefit.
-+ HHexo +-
hhexo wrote:
Jokes aside, in AI we want to write systems that solve problems as well as or better than humans do. We do not want to create a human mind with all the unnecessary baggage that comes with it. So in a sense it doesn't quite matter whether it's possible or not; research in the field is not going in that direction because it wouldn't bring any benefit.
Creating an intelligence that could match that of a human was one of the early goals of AI. However, as you correctly point out, that's no longer the case, and it changed because early AI researchers promised "human-like intelligence" in a short span of time (10 or 20 years, I guess). When their predictions didn't come true, they lost funding for their research, and that's why they shifted focus to more manageable goals, like creating expert systems (narrow-field experts) that could replace expert humans (like a chess player).
-
True, current computers cannot go that far. But I believe dreams will come true when quantum computers go commercial.
I can't wait for a quantum positronic brain. :-D
-
jschell wrote:
So why isn't there an AI now that is at least as smart as, for example, a dog?
In a way, we have[^]. Any field that relies on another can only advance so far on its own, just as AI relies to some extent on neuroscience and electrical engineering. For example, the science behind something like a warp drive works out; we can even figure out how much energy it would take. The only problem is that we don't know how to apply the energy in a way that would actually do it.
Using conventional computer components, we know how much electricity would be needed to run a computer that may be equal to the processing power of the human brain. Check out the article.
-
lewax00 wrote:
Because as we all know, fields never improve and never become more advanced.
Look at the advances in medicine in the past 50 years. And the past 200. Look at the advances in computers in the past 50 years. Look at the advances in bio-engineering in the past 50 years. Look at the 'advances' in parapsychology in the past 50 years. New sciences which can produce results tend to advance quickly. Those that can't - don't. AI is a new science. So why isn't there an AI now that is at least as smart as, for example, a dog?
A dog is a highly intelligent creature. If we could emulate a dog, a human wouldn't be much of an advancement.
-
Firstly, define "human intelligence". Personally I believe we shall never get to "human intelligence"; whatever we make will clearly be a machine intelligence, which will be spooky/odd/weird/inhuman, along the lines of what the "uncanny valley" is for modelling human physical features. Perhaps you mean "human consciousness". Because we haven't agreed on what defines an alternative intelligence, as we've not met any human-equivalent species on this planet (or if we have, we made them extinct, like the early hominids), your initial question is unanswerable. Humans will eventually create programs that will become self-aware and human-like in abilities, but I think **that** particular feat will be more of an accident than by design.
There's certainly an aspect of intelligence that involves responding to changes in the environment. I believe for this reason that many "AI" examples are actually forms of information processing. Connect the algorithm to a different bit feed, and it will never identify the patterns and adapt to produce output of value to its "caretakers" (the humans that keep the power on). Achieving that form of intelligence requires the ability to distinguish between the self and the environment. That is an aspect of consciousness, which is really nothing mysterious, it's just awareness of change. Of course, that requires some kind of semantic memory (not just the ability to store and retrieve tokens whose meaning is known only to others) to do the differencing against, and self-consciousness requires a capacity for self-introspection. I like the cockroach example somebody threw up earlier, but I think that it gives an incomplete picture of the challenges and opportunities ahead of us. The human mind, as any accomplished Buddhist will tell you, is a far more complex thing than the study of the brain alone can explain.
-
Map this: "Human General Intelligence" (a.k.a. Common Sense) :-D. Seriously, I believe he refers to the fact that we still don't have Terminator-like intelligences running around us, and I believe it's not for lack of tools but for lack of processing power.
Well, if that is what he is referring to, that is just silly. Just because a system can walk around and talk to humans does not make it an AI system. In addition, just because it cannot walk around and talk to humans does not mean it is not an AI system. But yes, those systems do not yet exist, mostly because of processing power. It is coming though.
Computers have been intelligent for a long time now. It just so happens that the program writers are about as effective as a room full of monkeys trying to crank out a copy of Hamlet.
-
OK, I don't know if you are trying to confuse me, playing dumb, or if it is a genuine question; I will assume the last one. Let's stay with the example you gave of current AI, the social network algorithms. They are based on a set of rules. Let's say (for the sake of simplicity) that if the user is from North America and is male, the algorithm will "decide" to show a beer advertisement. Of course the algorithm may take tons of rules into account; I used only two because I want to keep this simple. Now, if the one who had to decide which advertisement to show to the users was a human being, he might decide to show different advertising even though he was only instructed to take into account location and gender; he might also take into account new parameters without being told to do so, like age, politics, religion, etc. So with this simple example (maybe not the best), what I'm trying to say is that a part of our intelligence is the capability to break the rules, which I called free will (maybe not the best translation because English is not my native language). I hope I have made my point at least a little bit clearer.
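Just to make that concrete, here is a minimal sketch (purely illustrative; the `User` fields and the `choose_ad` rule are invented for the example) of the kind of fixed rule the post describes. Note that the program can only "decide" within the rules it was given; it will never spontaneously start weighing age, politics or religion the way a human ad buyer might:
```python
from dataclasses import dataclass

@dataclass
class User:
    region: str   # e.g. "north_america"
    gender: str   # e.g. "male"
    age: int      # present in the data, but the rules below never look at it

def choose_ad(user: User) -> str:
    # The algorithm only "decides" within the rules it was given:
    # region + gender in, advert out. It cannot decide on its own to
    # consult any other attribute.
    if user.region == "north_america" and user.gender == "male":
        return "beer_ad"
    return "generic_ad"

print(choose_ad(User(region="north_america", gender="male", age=30)))  # beer_ad
print(choose_ad(User(region="europe", gender="female", age=45)))       # generic_ad
```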
Can you break the ruleset that allows you to create and break new rules?
-
For all of you having problems with the idea of AI actually learning like we do, please look up Genetic Algorithms on Wikipedia (http://en.wikipedia.org/wiki/Genetic_algorithm). Not only do they work extremely well, they learn, IMHO, much faster than back-propagation (which paradigm was being used earlier to make a point...). Kiwsa
GAs (and GPs) only 'appear' to learn. They're just efficient searchers. So efficient they cheat if you don't get the fitness function correct :p
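For anyone who hasn't played with one, here is a minimal GA sketch (the target string, population size and mutation rate are made up for illustration). As the reply above says, it is really just a fitness-guided search, but it does converge on an answer it was never explicitly given:
```python
import random

TARGET = "hello world"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate: str) -> int:
    # Number of characters that already match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def crossover(a: str, b: str) -> str:
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

# Start from a random population and evolve it.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    parents = population[:50]                      # selection: keep the fittest
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(150)]   # breed the rest

print(generation, population[0])
```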
-
The very existence of free will is still hotly contested in neuroscience, psychology, philosophy, and theology. Even theoretical physicists weigh in on the controversy every now and then. If I take your definition of free will, which is not a universally accepted definition, what you are really describing is a system that is non-deterministic. While difficult, it is possible to create programs that are also non-deterministic, so I would argue that, by your definition of free will, a program is indeed possible which has free will. A lot of research into emergent behavior has gone down this path. Even if one wanted to just simulate what you call free will, all one would have to do is insert a rule which says that all other rules can be broken (e.g. a statistical-weight-driven system).
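As a toy illustration of that last idea, a statistically weighted meta-rule that lets any other rule be broken might look something like this (the rules and the probability are invented for the example):
```python
import random

# Ordinary rules the agent is "supposed" to follow.
rules = {
    "show_beer_ad":   lambda user: user.get("region") == "north_america",
    "show_coffee_ad": lambda user: user.get("age", 0) >= 18,
}

BREAK_RULE_PROBABILITY = 0.1  # the meta-rule: any rule may be ignored

def decide(user: dict) -> list:
    chosen = []
    for name, rule in rules.items():
        follows_rule = rule(user)
        # Meta-rule: with some probability, do the opposite of what the rule says.
        if random.random() < BREAK_RULE_PROBABILITY:
            follows_rule = not follows_rule
        if follows_rule:
            chosen.append(name)
    return chosen

print(decide({"region": "north_america", "age": 30}))
```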
The "Free Will" in your brain is the result of different portions of your brain, vying for attention and voting to decide the next course of action. While one section of your brain is voting to enforce rules and stay on task, another may be voting to go out and eat some ice cream. Parts of your brain are constantly voting to break rules. Though this is a component of our intelligence, it's not an example of rational intelligence.
-
Any programmer who thinks that we are going to reach human intelligence with ifs, switches, elses, and for loops is either crazy or has inhaled a pound of cocaine
I think you totally underestimate the power of computability. Our brains are following physical and chemical laws that can be simulated by means of if-then conditions, for loops and mathematical calculations. Just take a look at the protein-folding simulations that can be carried out today. I just think it's a matter of having enough computing and networking power, as well as enough low-level understanding of how a real neuron operates, for the singularity to become a reality.
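As a tiny, admittedly crude illustration of that point, here is a leaky integrate-and-fire style neuron written with nothing but arithmetic, a for loop and an if (all parameters here are arbitrary):
```python
# Crude leaky integrate-and-fire neuron: just arithmetic, a loop and an if.
threshold = 1.0      # membrane potential at which the neuron "fires"
leak = 0.9           # fraction of potential retained each time step
potential = 0.0

inputs = [0.3, 0.4, 0.0, 0.5, 0.2, 0.6, 0.1, 0.0, 0.7, 0.4]  # incoming stimulation

for t, stimulus in enumerate(inputs):
    potential = potential * leak + stimulus   # integrate input, leak a little
    if potential >= threshold:                # condition: fire and reset
        print(f"t={t}: spike!")
        potential = 0.0
    else:
        print(f"t={t}: potential={potential:.2f}")
```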
-
Yes, I did, and that is what makes me think that. I am not a troll; that's my particular point of view after watching the current state of the art of AI.
To what extent? I might have said the same thing after my one required undergrad class on the subject. Now that I'm just a thesis away from a masters concentrated on machine intelligence, I have a slightly different perspective. First, I don't believe that the limits of AI are inherent to the techniques themselves. Rather, AI is limited by the ability of us humans to provide effective mathematical models and enough hardware to evaluate them, both of which are constantly being improved. Consider that it took about three BILLION years for the human brain to develop its current algorithms and resources. I'd say AI is progressing at a fantastic rate by comparison! Secondly, you denigrate the traditional program control structures (if, while, for, etc.), but are you really so sure our brains work any differently?
-
Any programmer who thinks that we are going to reach human intelligence with ifs, switches, elses, and for loops is either crazy or has inhaled a pound of cocaine
In principle there is no limit to how intelligent a machine can become. What is a myth though is the idea that intelligence is consciousness.
-
Free will is not as free as you may think. Free will is a decision that is reached by analyzing your current environment (hormonal balances, current blood pressure, etc. taken into account as well), processing the current data, measuring the outcome against the cost of achieving the preferred goal, and making a decision based on that threshold. Free will is an extremely complex mathematical algorithm.
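Put as toy code (all the numbers, option names and the scoring rule are invented for illustration), that description of a decision is essentially a cost/benefit score checked against a threshold:
```python
# Toy version of the "free will" decision described above: weigh each option's
# expected benefit against its cost, biased by the current internal state.
state = {"hunger": 0.7, "stress": 0.4}   # stand-ins for hormones, blood pressure, ...

options = {
    "eat_ice_cream": {"benefit": 0.8, "cost": 0.3},
    "stay_on_task":  {"benefit": 0.6, "cost": 0.1},
}

THRESHOLD = 0.2

def score(option: dict) -> float:
    # Internal state biases the outcome, just as hormones bias ours.
    return option["benefit"] * (1 + state["hunger"]) - option["cost"] * (1 + state["stress"])

decision = max(options, key=lambda name: score(options[name]))
if score(options[decision]) >= THRESHOLD:
    print("decision:", decision)
else:
    print("decision: do nothing")
```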
-
Any programmer who thinks that we are going to reach human intelligence with ifs, switches, elses, and for loops is either crazy or has inhaled a pound of cocaine
We are like programs, but instead of generating an unexpected error, when we don't understand something we say: "God made it that way"
To alcohol! The cause of, and solution to, all of life's problems - Homer Simpson ---- Our heads are round so our thoughts can change direction - Francis Picabia
-
jschell wrote:
So why isn't there an AI now that is at least as smart as, for example, a dog?
In a way, we have[^]. Any field that relies on another can only advance so far on its own, just as AI relies to some extent on neuroscience and electrical engineering. For example, the science behind something like a warp drive works out; we can even figure out how much energy it would take. The only problem is that we don't know how to apply the energy in a way that would actually do it.
lewax00 wrote:
No we haven't. All that is doing is attempting to correlate some extremely rough measure to another. No more relevant than attempting to quantify your intelligence to that of a chimpanzee by weighing the brain.
lewax00 wrote:
Any field that relies on another can only advance so far on its own,
However the correlation is not direct. And computer science (which has more impact than electrical engineering) has advanced greatly but still no AI.
-
jschell wrote:
Just curious - where you live do a lot of people have two way wrist radios?
Certainly not the case where I am.
Unless you live in another universe, it's certainly possible where you are. How common it is was never in dispute.
jschell wrote:
I can also note that none of the following exist either
- flying cars
- PSI powers
- Faster than light travel
- Aliens
- Superheroes
- Miniature people living in a dome
- Many, many other things.
First off: Flying cars - it's been done, just not in a way efficient enough for consumers (plus other issues like requiring additional licenses). Aliens - unless you're omniscient, you don't know that. It honestly seems very self centered to assume we're the only planet with life in the universe. Second, just because it doesn't exist now, does that mean it can never exist? Modern computers didn't exist 200 years ago, therefore they clearly cannot exist now and this conversation can't be taking place.
lewax00 wrote:
Flying cars - it's been done, just not in a way efficient enough for consumers (plus other issues like requiring additional licenses)
The point is that flying cars do not exist in the way they were so often depicted in media. Just as wrist phones do not exist. Again... hindsight is a wonderful thing, but cherry-picking a few items from literature that match current culture ignores the vast, vast number of things that do not and probably never will exist.
lewax00 wrote:
Aliens - unless you're omniscient, you don't know that. It honestly seems very self centered to assume we're the only planet with life in the universe.
I know that there are no aliens wandering the streets interacting with humans on a daily basis. Despite a HUGE number of media depictions of that in the past.
lewax00 wrote:
Second, just because it doesn't exist now, does that mean it can never exist?
New sciences that can have practical results produce those results rapidly. Computers and bio-engineering are examples. New sciences that will not have results do not. Parapsychology is an example of that. The science of AI has been around since computers were invented. The results from that do NOT suggest that it will ever meet the common perception of the definition of "AI". Results that have come from it can have practical applications but do not meet the common definition. And there has not been any progress that would suggest that goal will be reached.
lewax00 wrote:
Modern computers didn't exist 200 years ago, therefore they clearly cannot exist now and this conversation can't be taking place.
This conversation would not have been taking place on computers 50 years ago. And it would have been far different on the computers of even 25 years ago. The difference is that computers have advanced significantly in that time. AI science has existed just as long.
-
jschell wrote:
There is no artificial intelligence. And with the current state of that study there never will be.
I would not give up that easily. We know one system that has declared itself to be intelligent. It is made up of sophisticated miniature switching units, known as neurons. We can emulate the switching function more or less precisely on a computer. But understanding and emulating this smallest unit of the system is not the key to intelligence. A brain is a network of large neural networks, so complex that it's unlikely that we can simply design a similar network and emulate it. But we do know the algorithm that has configured our brains. It's called evolution, and we can also emulate it. If we disregard the amount of time it may require, and also the capacity of the computer which could do those emulations, I still think that it is possible to get results. And if that is true, those results could just as well be aliens from another planet, because they have been bred to survive and adapt to an emulated environment.
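As a rough, miniature illustration of "emulating evolution on emulated neurons" (everything here, the tiny one-neuron network, the OR task and the parameters, is an invented toy, nowhere near the scale the post is talking about):
```python
import random

def forward(weights, x1, x2):
    # A two-input, one-output "neuron": weighted sum plus bias, squashed by a step.
    w1, w2, b = weights
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

def fitness(weights):
    # Task: reproduce the OR function. Fitness = number of correct cases.
    cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    return sum(forward(weights, x1, x2) == y for (x1, x2), y in cases)

# "Evolution": random population, keep the fittest, mutate the survivors.
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 4:
        break
    survivors = population[:10]
    population = survivors + [[w + random.gauss(0, 0.3) for w in random.choice(survivors)]
                              for _ in range(20)]

print("generations:", generation, "best weights:", population[0])
```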
At least artificial intelligence already is superior to natural stupidity
CDP1802 wrote:
And if that is true, those results could just as well be aliens from another planet, because they have been bred to survive and adapt to an emulated environment.
There could be an alternative universe as well. However, I am not discussing fantasy. I am discussing the current state of mainstream research into AI that has been going on on this planet for 50 years. And in that domain there have not been any significant breakthroughs, and thus expecting that one will occur in any useful time from now is nothing but wishful thinking.
-
lewax00 wrote:
No we haven't. All that is doing is attempting to correlate some extremely rough measure to another. No more relevant than attempting to quantify your intelligence to that of a chimpanzee by weighing the brain.
lewax00 wrote:
Any field that relies on another can only advance so far on its own,
However the correlation is not direct. And computer science (which has more impact than electrical engineering) has advanced greatly but still no AI.
jschell wrote:
All that is doing is attempting to correlate some extremely rough measure to another.
Then give me a definition which can measure the intelligence of a dog and compare it to a computer. Not to mention, an equally complex neural network in a computer could theoretically outperform a biological one, just on the fact that the biological one also has to do a lot more things, like regulate bodily functions (which obviously a computer does not have to do).
jschell wrote:
However the correlation is not direct. And computer science (which has more impact than electrical engineering) has advanced greatly but still no AI.
Perhaps, but I'd argue neuroscience has an even bigger impact. How can we imitate what we don't understand? With greater understanding of how a brain works we may have better insight into how to replicate its functions.
-
Don't you mean "unimpeachable"? Neil.
Yes, I guess I put my big foot in my mouth this time. Dave.
-
The sentence structure doesn't indicate that. Dave.