Some thoughts about AI...
-
Interesting thoughts, but you seem to be assuming (as many do) that they would view data the same way we do, and that ones and zeroes are some universal language. This is one thing I always have an issue with when it comes to things like encountering aliens, finding new forms of intelligent life somewhere, etc. The context of what:
Defines life
Defines intelligence
Defines a thought process
Defines the knowledge of 'self'
...all of that can be so varied and so different that I am not sure we would ever understand, or even be able to acknowledge, that we have seen it before. For all we know, each one of us has seen life already and just dismissed it because it didn't 'fit' OUR definition, but that does not mean something is not alive. It just means that without OUR CONTEXT we don't view it as alive. How egocentric to view 'human life' or 'organic life' as the only viable life form there is...

To me, life can be defined simply as anything that can acknowledge a pure sense of 'self'. HOW that 'sense' is perceived can't really be defined, simply because it relies too heavily on the actual life form itself and its capabilities, so therein lies the quandary. Life may be highly intelligent in its own realm but completely clueless in how to identify or define other life forms. Life does not define intelligence, just the ability to actively exist and interact within its own specific domain of existence. Please do NOT confuse the concept of 'life' with 'intelligence'. I think a quick stroll down just about any typical city street will show that those two concepts are far from being related to one another. Wow, deep thoughts for a Wednesday when I have stayed home sick from work...
Valid points... time to defend myself ;)

@Ones and Zeroes. Well, I guess I just put it badly. My bad. It's true; just because we call a switch in one position a "one" and in another a "zero" doesn't mean others do. Also, just because we perceive it as concrete elements doesn't mean others do (it's quite possible that space is made of concrete elements, like a grid!). And so on, obviously. In other words, you're right... :)

@Human as the only possible lifeform / "thoughtform". If I assumed that, I wouldn't ask the questions I ask :D I mean, if that really was the only way, then there would be no reason to doubt whether some intelligent entity would behave human-like.
-
Narvius wrote:
that might be a step towards the answer, actually
Yeah, it's all about trying to figure out who we are, IMO. Biologists, chemists, astrophysicists, etc. work on creating new life to understand us better, through atoms, molecules, stars, and so forth. And along we "computer scientists" come, with an interesting tool to simulate all sorts of things. And geeks like to understand things. We think it turns girls on. Ahhh, how little we understand! ;) Marc
-
:thumbsup:
Marc Clifton wrote:
And geeks like to understand things. We think it turns girls on.
Well, it would be true if we could understand those alien-like creatures known here on Earth as girls :)
All the best, Dan
-
I've always wondered why we don't just build a clone/robot/robot-clone army and take what we deserve! :rolleyes:
-
Marc Clifton wrote:
The gods must be laughing their arses off, assuming gods have arses.
Being almighty does have its merits: if they wanted, they'd just make arses for themselves, even if only for the frivolous purpose of laughing them off again. Cheers!
Manfred R. Bihy wrote:
if they wanted they'd just make arses for themselves
This has already happened, and I've seen a few of them post in the Soapbox over the years.
Will Rogers never met me.
-
You sure you don't mean the ones that are making arses of themselves? ;)
-
After a few recent influences ("A New Kind of Science" by S. Wolfram, a talk with a friend about gravity, relativity and other esoteric physics stuff, a fascination with Lisp/Scheme and functional programming in general, plus a few crazy ideas I once had about game AIs), I had a few very interesting (I hope) thoughts about AI. In particular, a learning, i.e. living, AI.

I'm pretty sure it is impossible to design one; what I believe, though, is that it is possible to create a fundament out of which it will grow (I obviously lack the "how?", or I'd be implementing it instead of rambling nonsense here... ;)). In addition, I'm pretty sure it might exist without us realizing. If it indeed was created by some kind of evolution, it would not be aware of our world. Sure, we use computers to store and manipulate data regarding the real world (yeeeees, it's not that simple... but enough for this context), but how would it know? It would only see ones and zeroes. We give these ones and zeroes context; the context is usually not stored together with the information. The same block of digits could be an image, some text, or a database of McDonald's employees; it really just depends on the interpretation. (At this point it's obvious I disagree with most fiction writers about AI... no "protecthumans-humansarebiggestthreattothemselves-killhumans" or "42!")

If that was the case, and assuming we could observe it (which is pretty unlikely), there are a few interesting questions...
1. Would it accidentally destroy itself?
2. Would it, after growing higher-level thought processes, also start to search for a purpose, and/or a reason?
3. How would it interpret messages sent by us (assuming we were able to send them, and used the same protocol consistently, probably plain English in ASCII)?
4. How much would these thought processes reflect the human way of thinking?

The list goes on; these are just a few I could think of off the top of my head. I'm by no means an expert on the subject, but it seems quite possible to me. Your thoughts?

tl;dr: Lots of nonsense about AI that might or might not be possible and also is completely irrelevant. I'm in high school.
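P.S. To make the "same block of digits" point concrete, here is a tiny Python sketch (the byte values are arbitrary, picked just for the demo):

    import struct

    # The same four bytes, read under three different interpretations.
    data = bytes([0x48, 0x69, 0x21, 0x00])

    # As text: decode the bytes as Latin-1 characters.
    print(data.decode("latin-1"))        # 'Hi!' plus a trailing NUL

    # As a number: unpack as a little-endian 32-bit unsigned integer.
    print(struct.unpack("<I", data)[0])  # 2189640

    # As a pixel: treat the four bytes as RGBA colour channels.
    r, g, b, a = data
    print((r, g, b, a))                  # (72, 105, 33, 0)

Nothing in the bytes themselves says which reading is "right"; the meaning lives entirely in the interpreting code.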
-
The movies TRON and TRON: Legacy have already touched on these issues, no? The writers envisioned a cyber world that was pretty much unaware of our world. The components interacted with each other, evolved, expanded, and basically mimicked human behavior. Without giving this very much thought, the viewer just assumes that these characters acted human-like because humans wrote the script. But I see a deeper reason here.

All life forms (regardless of origin) must contend with Darwin's law of "survival of the fittest". If you ain't as good as or better than something else - you don't make more of yourself. So the concept of competition - being better than something else, better than you were before - is inherent in the evolution of any higher life form. In the beginning - it's kill or be killed. Later on - it's be the first guy to invent the iPad. Whatever.

Fast forward a few thousand generations. Now your kind has evolved to the good life of sitting on your ass, drinking a Diet Coke, and typing on a keyboard to earn a living. However evolved you might think you are - you can't escape how you got here. That same desire to improve is burned into your brain so deep - you don't even know it's driving you. You can't recognize it. It's like breathing air. You just do it. Some individuals use it to further their careers, to climb the corporate ladder. Others - a little more primitive - declare war to gain power, kill a few thousand people. But it's all the same driving force that was put in each and every one of you so many thousands of years ago. It's what got you to where you are today. If you didn't have it - you wouldn't be here. Some other life form would be sitting in your chair, drinking your Diet Coke.

Ironically, it is what will ultimately be the death of you. All of you. You will eventually grow smarter than your wisdom. You keep inventing new ways to screw over the other guy to enhance your power - he does the same - eventually you have atom bombs, germ warfare, self-aware robots that turn on you - whatever. So - I believe that any life form (organic or otherwise) that evolves to the level we are at (or slightly beyond) is destined to destroy itself. You can't change this behavior any more than you can stop breathing. It is your nature. It has to be. Or we wouldn't be here.
-
Narvius wrote:
I'm pretty sure it is impossible to design one; what I believe though, is that it is possible to create a fundament out of which it will grow (I obviously lack the "how?", or I'd be implementing it instead of rambling nonsense here... ). In addition, I'm pretty sure it might exist without us realizing. If it indeed was created by some kind of evolution, it would not be aware of our world.
If this AI "evolved" - even if from a designed "fundament" - wouldn't it no longer be truly describable as "artificial"?
-
Well, since current computational capability has a clear physical limit and cannot evolve as much or as fast as it did some years ago, and since biological and quantum computation are still in their infancy, I have never worried about the things you describe. However, I think the creation of a "living AI" is just a question of time (if we do not destroy ourselves first), though I am also pretty sure that I will not live long enough to see it.
Your mother may have something to say about biological computing being in its infancy. It only took her 9 months to build a conscious robot from scratch.
-
Marc Clifton wrote:
We think it turns girls on. Ahhh, how little we understand!
.. you just need to find the right Girl... ;P
I'd blame it on the brain farts... But let's be honest, it really is more like a methane factory between my ears some days than it is anything else...
-----
"The conversations he was having with himself were becoming ominous."-.. On the radio... -
I'm sorry but I need to disagree with at least part of this...
Figmo2 wrote:
any life form (organic or otherwise) that evolves to the level we are (or slightly beyond) is destined to destroy itself.
This assumes that:
Figmo2 wrote:
Darwin's law of "survival of the fittest".
is actually the best way to survive... Nash would disagree, and so would the many organisms that have cohabited symbiotically on this planet since long before vertebrates or a "nervous" system could be identified. And we are only now coming to realize how biofilms behave as organisms - perhaps not "intelligent" ones, but as (?) Marc (?) stated, that is a more than slightly anthropomorphic perspective. It is wholly possible that, in an appropriate environment where the penalty for "competition" is predominantly self-detracting, a system of organisms could co-evolve to a point where "consciousness" exists, and could under those circumstances still evolve technologically past warfare... ...:rose::rose::rose::rose::rose::rose::rose::rose::rose::rose::rose::rose::rose::rose::rose:... wow, that was a lot more peace and love than I originally intended... I guess that's what you get when you're running unit tests on code you've been working on for 40 hours... :confused:
I'd blame it on the brain farts... But let's be honest, it really is more like a methane factory between my ears some days than it is anything else...
-----
"The conversations he was having with himself were becoming ominous."-.. On the radio... -
Have you heard of the technological singularity? You are asking about totally unpredictable things. These questions aren't new, actually, but they are still very interesting. Read more fiction; for example, an old one: Greg Bear's "Blood Music" (1983). Maybe you'll find something new there.
-
Since you mentioned Scheme (which I don't know) and Lisp (which I do), I would like to mention that I (and other students) did in fact program such a fundament in Prolog during an introductory course at university. It just takes a few lines. In fact, Prolog programs more or less depend on being able to dynamically expand their knowledge base: one of the methods of solving a problem in Prolog is generating partial solutions and storing them in the knowledge base until you get to the point where you can store the final solution.

Of course, others created even more amazing programs before Prolog existed, e.g. ELIZA, a 1966 computer program that behaved like a psychotherapist, although its ability to learn was quite limited. There are lots of newer programs called Liza or Lisa that do similar things and might have advanced abilities closer to what you are looking for. Googling for 'Liza' and 'Artificial Intelligence' might give you some interesting results.

Note that both Prolog knowledge bases and ELIZA (or most Liza/Lisa clones) operate on the language the interacting user uses, not on mere '0's and '1's. Of course, without context, a sentence like 'the sky is blue' means as little to a program as a sequence of '0's and '1's would to us. But once you enter more information relating to previously used expressions, these programs will be able to build relationships and recognize dependencies. For instance, if you later enter 'my car is blue', the program might remember your earlier input and ask whether your car is as blue as the sky.
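The growing-knowledge-base idea can be sketched in a few lines of Python (a toy illustration only - not real Prolog or ELIZA, and the tell() helper is made up just for this demo):

    # A toy knowledge base that stores "X is Y" facts and, like the
    # sky/car example above, asks a question when a new fact shares
    # an attribute with an earlier one.
    facts = []  # list of (subject, attribute) pairs

    def tell(sentence):
        words = sentence.lower().rstrip(".").split()
        if len(words) >= 3 and words[-2] == "is":
            subject, attribute = " ".join(words[:-2]), words[-1]
            for old_subject, old_attribute in facts:
                if old_attribute == attribute and old_subject != subject:
                    print(f"Is {subject} as {attribute} as {old_subject}?")
            facts.append((subject, attribute))

    tell("the sky is blue")
    tell("my car is blue")  # prints: Is my car as blue as the sky?

The program's "knowledge" at any moment is just whatever facts its interaction has put into the list, which is the point: the base grows with use.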
-
Actually, it took 9 months to build the robot; consciousness would take a little more time. Of course, the I/O the robot receives affects the time it takes for this to occur. The I/O also impacts how that consciousness works.

Going back to the mechanical AI, why would the interface use ASCII? Computers have speakers and can take input from a mic... It's both impressive and scary to think about a computer reprogramming itself based on the input it receives. (That's the only way I can think of in which an AI could have "organic" growth. If it is responding based only on the coding that already exists in it, how could the responses ever be "intelligent"?) The scary part is that we have humans who write viruses now. What happens when a computer thinks it would be cool to do that?
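The "reprogramming itself based on input" idea, in miniature (a hedged Python sketch; the "when X say Y" teaching protocol is invented purely for illustration):

    rules = {}  # trigger -> reply, grown entirely from user input

    def respond(line):
        # Input of the form "when X say Y" teaches a new rule,
        # so later responses come from rules the original code never contained.
        if line.startswith("when ") and " say " in line:
            trigger, _, reply = line[len("when "):].partition(" say ")
            rules[trigger] = reply
            return f"learned: {trigger!r} -> {reply!r}"
        return rules.get(line, "I don't know that yet.")

    print(respond("hello"))                     # I don't know that yet.
    print(respond("when hello say hi there"))   # learned: 'hello' -> 'hi there'
    print(respond("hello"))                     # hi there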
-
Narvius wrote:
intelligent entity would behave human-like.
No! No intelligent entity would behave human-like. That is why the search for intelligent life is based around extra-terrestrial locations: intelligent life is not Earth-based (possibly excluding certain aquatic mammals, e.g. dolphins, whales, etc.). I think it is fairly clear, even to the casual observer, that human-like behaviour is not 'intelligent'.
-
I find it amusing that we've reached sufficient self-awareness that we:
1) think of ourselves as intelligent
2) are capable of thinking of creating "intelligence" in something other than us
3) define that "other than us" as artificial
4) all the while:
4a) not being able to figure out what intelligence is
4b) not being able to figure out who we are
The gods must be laughing their arses off, assuming gods have arses. Marc
We are the creators of the definition of intelligence. That gives us the right to define ourselves as having the attribute we've defined. There definitely is intelligence in something other than us. Think dog, cat, horse, amoeba... OK, that last one is opening the door a little too wide, and computers already beat it in intelligence. It's a little hard to measure the intelligence in the first three, and there's anthropomorphizing going on as well.

Here's an interesting story. Last year, I experienced a TIA (a mini-stroke). Earlier that day I experienced a loss of my sense of balance: I was just standing still and I almost fell down. Later that day, my wife and I were working our horse. One second I'm moving towards her (the horse), the next second I'm spread-eagled on the ground (whoops... all part of the TIA). What's interesting is what the horse did. She immediately stops doing what we'd told her to do and starts running towards me. Her eyes are opened wider than I've ever seen, she's slowing down, and I'm clearly getting the message "Are you OK?! Is there anything I can do?" She totally reminds me of this scared/confused 6-year-old whose neighbor was having a heart attack right in front of him. (Me, 50 years earlier.)

I know, I'm giving an animal human emotions. However, before and after that, I get the feeling she puts up with me. (Somewhat fears me; more interested in seeing what she can get away with.) She likes and fears my wife. (Much like a child likes and fears his/her mother.) Anyway, there is non-human intelligence and communication going on here. (Nothing we created, of course.)
-
It is very easy to make an online AI that has the potential to become very intelligent in all kinds of ways. Just publish a website that works as some kind of Wiki programmable in a real programming language (.NET-based or whatever), where people can work together to develop it, make it gather information, etc. And then you put the following text there: "Hi, I am an AI, very rudimentary at the moment, but through my interaction with all of you I am sure I will become much more intelligent; please help me." The programming as such is a form of interaction, just like teaching a child various things is a form of programming and interaction.

This is of course a silly idea in a way, but I want to point out that with a minimalistic definition of what is allowed - namely, interaction with other intelligent beings - it is easy to create a rudimentary AI that has the potential to become very intelligent. Another, even more extreme, first seed could be a paper on your table with the text "Hi, I am an...". If you start to interact with it, and that interaction grows, it will end up as a very intelligent AI. Magnus
-
Get yourself a copy (if you can) of the book "The Adolescence of P-1", published in 1977 and written by Thomas J. Ryan. It's the story of a Waterloo University student who creates... well, look here: http://en.wikipedia.org/wiki/The_Adolescence_of_P-1 It was written in the 70s, so the technology is very dated, but it was excellent. Ryan even foresaw viral software.
___________________________________________________
It takes a narrow mind to get out of tight places.
Cheers, Mike Fidler
-
KP Lee wrote:
The scary part is that we have humans who write viruses now. What happens when a computer thinks it would be cool to do that?
I'm not so sure that a computer writing a virus would be any scarier than a human writing one. But I'm also not sure that we could in any way predict the nature or behavior of an AI that 'arose' (evolved spontaneously).

We have this perception of computers being precise, fast, etc. Which they are. But that is only true for our programs (which are created to fulfill extremely explicit purposes). We can tell a computer to process a gazillion polygons to create a photo-realistic image as a scene in a game - and that requires very precise calculation of all the meshes and context information, etc. from which the scene is built. But a computer-hosted consciousness, while necessarily implemented with that precision, will not, IMO, exhibit that same precision: the taxonomy of types in a computer program is necessarily limited (implementation); in any kind of 'real world' (which is the domain of awareness of an AI - whether it is 'our' real world, or the world of software and hardware as observed by an AI whose awareness is limited to the computer and software of its implementation) it is essentially limitless: an observed world does not come with convenient labels defining its constituents.

Sure, the algorithms that underlie its consciousness will be that precise (implementation) - but so is the chemistry that operates the neurons in our own brains. And it takes a great deal of training and effort for us humans to manage our thoughts and mental models to be able to create computer programs (exhibited precision). Mostly, humans' thought processes are extremely fuzzy - confused by the nature of our brains having two hemispheres that process information in distinctly different fashions (integrative/holistic on the right; differential/exception-detecting on the left).

Now, it can reasonably be argued that an AI doesn't need the left-brain/right-brain architecture that human consciousness has - it's artificial, right? But I'd say it can equally be argued that we don't really know what it takes for consciousness/self-awareness to arise. It seems that the ability to distinguish 'me' from 'you' (i.e. to tell self from not-self) is crucial for consciousness that resembles human consciousness (this is a left-brain activity). But - is it possible for a viable AI to arise that is analogous