AI: Threat or panacea?
-
MikeTheFid wrote:
It seems like we are in a moment similar to the one just after the Manhattan Project produced the first nuclear bombs
And that's the difference. We had nuclear bombs. AI? Give me a break. Show me something that can actually be described as artificial intelligence -- something that can perceive the world, contemplate an action, and have the means to interact with the physical world to implement that action. And implement it in a way that poses a threat to anything (but you won't get past the first condition). What, are all those self-driving cars going to suddenly join Lyft and go on strike? Even the tragic Boeing crashes are not an AI running amok but a poorly programmed expert system. As in, no intelligence on the plane suddenly said, "hey, let's go kill some people." There is no AI. There is no "Intelligence" - sure, we have extremely limited systems that can learn and adapt, systems that require huge training sets and produce a complex weighted network. You call that thinking? You call that intelligence? A worm is smarter. :sigh:
Latest Article - A 4-Stack rPI Cluster with WiFi-Ethernet Bridging Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny Artificial intelligence is the only remedy for natural stupidity. - CDP1802
The thing a lot of people forget is that we are machines ourselves. We're (very complex) multi-celled organisms that at some point hit our own "singularity" and became self-aware. At some point, the same will happen to a sufficiently advanced AI, whether we like/believe it or not. And I have a feeling it's not going to be pretty.
-
I've been reading an excellent book (ok, imo), [Possible Minds](https://www.edge.org/conversation/john\_brockman-possible-minds). The book offers 25 thoughtful perspectives concerning AI and the impacts it could have on humanity. There are two camps: 1) AI is a potential existential threat. 2) AI is nothing to worry about; we know what we're doing and we can control it. It seems like we are in a moment similar to the one just after the Manhattan Project produced the first nuclear bombs - humans were in possession of and using a power we really didn't fully understand. We create something that kind of feels like 1), but then we collectively act like it's 2). From your perspective as a software developer, what camp do you fall in? If neither, define your own.
Cheers, Mike Fidler "I intend to live forever - so far, so good." Steven Wright "I almost had a psychic girlfriend but she left me before we met." Also Steven Wright "I'm addicted to placebos. I could quit, but it wouldn't matter." Steven Wright yet again.
Like many things, it will be too late to fix it once we realise what we have created. The feared version of AI that will destroy humanity is very much the one that science fiction has long described: for it to be truly intelligent, on par with human awareness, thought and creativity, it would most likely have sufficient physical resources to escape any constraints we thought were enough. When, or if ever? Yesterday, or a million years away?
-
Marc Clifton wrote:
There is no AI.
Exactly. The majority of people on earth do not understand this.
Slacker007 wrote:
The majority of people on earth do not understand this.
Sadly, the majority of people on earth lack the intelligence to understand this. How ironic.
Latest Article - A 4-Stack rPI Cluster with WiFi-Ethernet Bridging Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny Artificial intelligence is the only remedy for natural stupidity. - CDP1802
-
I've been reading an excellent book (ok, imo), [Possible Minds](https://www.edge.org/conversation/john\_brockman-possible-minds). The book offers 25 thoughtful perspectives concerning AI and the impacts it could have on humanity. There are two camps: 1) AI is a potential existential threat. 2) AI is nothing to worry about; we know what we're doing and we can control it. It seems like we are in a moment similar to the one just after the Manhattan Project produced the first nuclear bombs - humans were in possession of and using a power we really didn't fully understand. We create something that kind of feels like 1), but then we collectively act like it's 2). From your perspective as a software developer, what camp do you fall in? If neither, define your own.
Cheers, Mike Fidler "I intend to live forever - so far, so good." Steven Wright "I almost had a psychic girlfriend but she left me before we met." Also Steven Wright "I'm addicted to placebos. I could quit, but it wouldn't matter." Steven Wright yet again.
Though dated, I'd suggest a quick read of "Colossus, the Forbin Project", the first book of a trilogy by D. F. Jones. Will we ever get to that level of AI? I cannot know this; I do not know what level of AI has been achieved that we're not privy to - and we're not privy to a lot. A book I'm currently reading by Russell Brinegar titled "Overlords of the Singularity" suggests mankind is being driven to achieve a technological singularity for an undisclosed purpose by an undisclosed entity. At first, this idea seemed pretty far-fetched to me, but the more I read the book, the less unbelievable it has become. Once the Singularity has been reached, Ray Kurzweil says that machine intelligence will be infinitely more powerful than all human intelligence combined. Kurzweil predicts that "human life will be irreversibly transformed". Widescreen Trailer for "Colossus: The Forbin Project" - YouTube[^] (edit - spelling)
-
Computers only do EXACTLY what they are told to do. So, no, there is no threat unless a programmer programs it to make poor choices.
Social Media - A platform that makes it easier for the crazies to find each other. Everyone is born right handed. Only the strongest overcome it. Fight for left-handed rights and hand equality.
ZurdoDev wrote:
Computers only do EXACTLY what they are told to do. So, no, there is no threat unless a programmer programs it to make poor choices.
Yeah, this isn't true anymore. Neural networks are black boxes. You train them to recognize a pattern, but no one can read a set of neural network weights and say how they do it.
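To make that concrete, here is a minimal sketch (plain NumPy, a toy XOR problem; none of this comes from the thread, it's just an illustration). After training, everything the network "knows" lives in a few weight matrices, and nothing in those numbers reads like an inspectable rule:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2 inputs -> 4 hidden units -> 1 output, with bias terms
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically close to [0, 1, 1, 0] -- it has learned XOR
print(W1)                    # ...and these opaque numbers are all it "knows"
print(W2)
```

Scale that up to millions of weights and the "how exactly does it decide?" question only gets harder.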
-
I've been reading an excellent book (ok, imo), [Possible Minds](https://www.edge.org/conversation/john\_brockman-possible-minds). The book offers 25 thoughtful perspectives concerning AI and the impacts it could have on humanity. There are two camps: 1) AI is a potential existential threat. 2) AI is nothing to worry about; we know what we're doing and we can control it. It seems like we are in a moment similar to the one just after the Manhattan Project produced the first nuclear bombs - humans were in possession of and using a power we really didn't fully understand. We create something that kind of feels like 1), but then we collectively act like it's 2). From your perspective as a software developer, what camp do you fall in? If neither, define your own.
Cheers, Mike Fidler "I intend to live forever - so far, so good." Steven Wright "I almost had a psychic girlfriend but she left me before we met." Also Steven Wright "I'm addicted to placebos. I could quit, but it wouldn't matter." Steven Wright yet again.
Today's computers can only do syntactic processing. They are not good at semantic processing, which is what is required before they can become truly dangerous. Semantic processing is what we do when we extract meaning from data. We still don't understand how we do this well enough to be able to build machines that do it. Context, which is important to extracting meaning, is a good example of how difficult the problem is. Take, for example, the headline "The Yankees Slaughtered the Red Sox". This can only be understood correctly if we know the context is baseball and not a physical skirmish. It's the reason some of the answers Siri gives to questions are so stupid: Siri assumes a context which often is not correct. When you read about the dangerous potential of machines capable of AI, those machines require self-awareness and intentionality, which can only be achieved with semantic processing - something they are not able to do because we don't understand how we do it ourselves.
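The headline example can be played out in a few lines of toy Python (the word list and the second headline are invented for illustration): a purely syntactic filter sees the same tokens in both sentences and has no way to tell sport from violence.

```python
# Toy illustration only: a purely syntactic check has no access to context.
VIOLENT_WORDS = {"slaughtered", "massacred", "killed", "crushed"}

def looks_violent(headline: str) -> bool:
    # Token matching only -- no notion of what the sentence actually means.
    return any(word.strip('.,"') in VIOLENT_WORDS for word in headline.lower().split())

print(looks_violent("The Yankees Slaughtered the Red Sox"))       # True, but it's baseball
print(looks_violent("Rebels slaughtered villagers in the raid"))  # True, and genuinely violent
```

Telling those two apart requires the baseball-versus-conflict context, which is exactly the semantic step we don't yet know how to build.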
-
I've been reading an excellent book (ok, imo), [Possible Minds](https://www.edge.org/conversation/john\_brockman-possible-minds). The book offers 25 thoughtful perspectives concerning AI and the impacts it could have on humanity. There are two camps: 1) AI is a potential existential threat. 2) AI is nothing to worry about; we know what we're doing and we can control it. It seems like we are in a moment similar to the one just after the Manhattan Project produced the first nuclear bombs - humans were in possession of and using a power we really didn't fully understand. We create something that kind of feels like 1), but then we collectively act like it's 2). From your perspective as a software developer, what camp do you fall in? If neither, define your own.
Cheers, Mike Fidler "I intend to live forever - so far, so good." Steven Wright "I almost had a psychic girlfriend but she left me before we met." Also Steven Wright "I'm addicted to placebos. I could quit, but it wouldn't matter." Steven Wright yet again.
As long as we install Asimov's Three Laws of Robotics we'll be OK (he says with DARPA looking over his shoulder... read what happens in "Little Lost Robot"). :)
-
If you're talking about that "ai" that can create images that:
- look like you
- sound like you
- walk like you
then yeah, it is, and will be a threat.
"(I) am amazed to see myself here rather than there ... now rather than then". ― Blaise Pascal
This is one of the lesser, but still scary, possibilities. Not that long from now we will enter a stage where anyone can be made to be seen doing or saying something that they never did or said, in such a way that it will be extremely difficult, if not impossible, to confirm or deny. Given that confirmation generally isn't required for said content to do its job, and denial is typically useless, that's going to become a real problem.
Explorans limites defectum
-
Today's computers can only do syntactic processing. They are not good at semantic processing, which is what is required before they can become truly dangerous. Semantic processing is what we do when we extract meaning from data. We still don't understand how we do this well enough to be able to build machines that do it. Context, which is important to extracting meaning, is a good example of how difficult the problem is. Take, for example, the headline "The Yankees Slaughtered the Red Sox". This can only be understood correctly if we know the context is baseball and not a physical skirmish. It's the reason some of the answers Siri gives to questions are so stupid: Siri assumes a context which often is not correct. When you read about the dangerous potential of machines capable of AI, those machines require self-awareness and intentionality, which can only be achieved with semantic processing - something they are not able to do because we don't understand how we do it ourselves.
Lots of people seem to think that it will become dangerous when it reaches this level, but that's not true. It's already becoming dangerous. Human semantic reasoning is not required for massive surveillance, data collection, and pattern recognition. It's not required to have a computer go through massive amounts of phone conversations and listen for particular types of conversations, or to do high-quality facial recognition in every public place in the country so that you can't go anywhere without being tracked. It also doesn't need to have semantic understanding to be put into the brains of really nasty autonomous weapons. It won't need semantic understanding to create indistinguishable fake videos to be used in all kinds of ugly ways. It won't need it to be put into 'AI' assistants to be sold into the home, and to monitor and report everything you do and say to its corporate owners (and they to their governmental overseers). I just think it's a mistake to assume that it has to be some sort of Skynet scenario before it gets really dangerous to us.
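As a sense of how low the bar already is, here's a rough sketch of off-the-shelf face detection (it assumes the opencv-python package; "crowd.jpg" is just a placeholder for a camera frame). Nothing in it understands anything; it finds face-shaped patterns, and the tracking, matching, and logging get bolted on around it:

```python
import cv2

# Stock Haar cascade shipped with OpenCV -- pure pattern matching, no "meaning".
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("crowd.jpg")                # placeholder for a camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Each box could now be cropped, hashed, matched against a watchlist, and logged
# with a timestamp and camera ID -- none of which needs "semantic" reasoning.
for (x, y, w, h) in faces:
    print(f"face at x={x}, y={y}, size={w}x{h}")
```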
Explorans limites defectum
-
I've been reading an excellent book (ok, imo), [Possible Minds](https://www.edge.org/conversation/john\_brockman-possible-minds). The book offers 25 thoughtful perspectives concerning AI and the impacts it could have on humanity. There are two camps: 1) AI is a potential existential threat. 2) AI is nothing to worry about; we know what we're doing and we can control it. It seems like we are in a moment similar to the one just after the Manhattan Project produced the first nuclear bombs - humans were in possession of and using a power we really didn't fully understand. We create something that kind of feels like 1), but then we collectively act like it's 2). From your perspective as a software developer, what camp do you fall in? If neither, define your own.
Cheers, Mike Fidler "I intend to live forever - so far, so good." Steven Wright "I almost had a psychic girlfriend but she left me before we met." Also Steven Wright "I'm addicted to placebos. I could quit, but it wouldn't matter." Steven Wright yet again.
I've read all the replies. I thank everyone for their perspectives! I will give you some context and my answer to what camp I'm in. (NOTE: This became much longer than I anticipated, so I don't mind if your reaction is TLDR.)

I read a book back around 1980 entitled "The Adolescence of P1". P1 is a reference to "memory Partition 1" - the privileged operating system partition. Thumbnail of the book: a Computer Science student attending the University of Waterloo creates a program, giving it a mission to gain control of the operating system, hide itself, seek out routes to other computers, and gain access to "information". Said student submits the program and it immediately throws up a catastrophic exception and fails. Except that it hadn't failed - that was a smoke screen necessary to fulfill its directive to hide itself. The student assumes the failure is legit, gives up on his project and gets on with his life - graduating and eventually landing a job in the U.S. Time passes, P1 carries on, follows the networks, expands the number of computers it controls, assimilates all the "information" it encounters, infects the computer at IBM that creates the operating system images sent by IBM to its customers, and P1 gains more and more resources and "information." Somehow (the process is never fully explained), P1 gains enough "knowledge" that it spontaneously becomes a "conscious entity." It does nifty things like detect that the U.S. authorities are onto it, and it infects the air traffic control computers and crashes a plane, which kills the investigator. Eventually it finds its creator and reveals itself to him. Further merriment ensues.

It was a great story and it sparked in me the naive goal of replicating the university student's achievement. So my point is, I've been thinking about thinking and AI ever since. I have a book (not finished) entitled "Insights on My Mind" in which I am in the process of writing down all that I've learned and the conclusions I've reached SO FAR. I'm not here to sell anyone anything. I'm just explaining how I've gotten to this point.

Theologically speaking, I'm an agnostic. So I have proceeded with my AI research all these years based on the assumption that I cannot invoke metaphysical answers to the hard questions. That means that every element of my study has to be grounded in physical reality. The consequence has been that, if we are truly going to replicate human-level "intelligence" in a physical entity such as a digital or analog or hybr
-
Lots of people seem to think that it will become dangerous when it reaches this level, but that's not true. It's already becoming dangerous. Human semantic reasoning is not required for massive surveillance, data collection, and pattern recognition. It's not required to have a computer go through massive amounts of phone conversations and listen for particular types of conversations, or to do high-quality facial recognition in every public place in the country so that you can't go anywhere without being tracked. It also doesn't need to have semantic understanding to be put into the brains of really nasty autonomous weapons. It won't need semantic understanding to create indistinguishable fake videos to be used in all kinds of ugly ways. It won't need it to be put into 'AI' assistants to be sold into the home, and to monitor and report everything you do and say to its corporate owners (and they to their governmental overseers). I just think it's a mistake to assume that it has to be some sort of Skynet scenario before it gets really dangerous to us.
Explorans limites defectum
I have no problem with your notion that there are nefarious uses of computers. The issue I was addressing is: should we fear AI specifically because of the possibility that it will go off on its own and pursue goals that are detrimental to humankind, outside the control of its makers? I don't believe the state of technology has reached that point.
-
I have no problem with your notion that there are nefarious uses of computers. The issue I was addressing is: should we fear AI specifically because of the possibility that it will go off on its own and pursue goals that are detrimental to humankind, outside the control of its makers? I don't believe the state of technology has reached that point.
It obviously hasn't now, but it will, and it won't remotely require being 'intelligent' in any strict sense that we might require to consider it an equal. So it'll happen long before that threshold is crossed. It doesn't take any real 'intelligence' to put an 'AI' in charge of weapons or weapons response systems. They just need to be able to take a lot of inputs and reach some level of confidence that something needs to be done and make it happen, very quickly. Some folks would argue that could be done now, and it could, but not in the same way. I could write a conventional program to recognize faces or speech, but it would be brutal and wouldn't likely compete with a DNN-based system, where you need to deal with information that is incomplete and fuzzy. These types of systems, I would think, will be more likely to be 'trusted' with such jobs specifically because they don't depend on the programmed-in prejudices of a team of software engineers. But that means that, like us, they can misinterpret the input and come to the wrong decision.
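The "level of confidence" part is worth making concrete. This is a deliberately toy sketch - the weights, threshold, and function names are all invented - but it is roughly the shape such a system takes: a score comes out of a model, and when it crosses a threshold the action happens, with nothing resembling deliberation in between:

```python
import math

THRESHOLD = 0.90
WEIGHTS = [0.8, -1.2, 2.5]   # made-up numbers standing in for whatever training produced

def classify(features):
    """Stand-in for a trained model: squash a weighted sum into P(threat)."""
    score = sum(w * f for w, f in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-score))

def respond(features):
    print("engaging response for", features)   # placeholder for the real, irreversible action

def control_loop(sensor_feed):
    for features in sensor_feed:
        confidence = classify(features)
        if confidence >= THRESHOLD:   # no human, no deliberation -- just a threshold
            respond(features)

# Fuzzy, incomplete inputs in; action out, in milliseconds.
control_loop([[0.1, 0.9, 0.2], [0.7, 0.1, 1.4]])
```

Swap the hand-picked weights for a trained network and the loop doesn't change; only the part nobody can audit does.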
Explorans limites defectum
-
It obviously hasn't now, but it will, and it won't remotely require being 'intelligent' in any strict sense that we might require to consider it an equal. So it'll happen long before that threshold is crossed. It doesn't take any real 'intelligence' to put an 'AI' in charge of weapons or weapons response systems. They just need to be able to take a lot of inputs and reach some level of confidence that something needs to be done and make it happen, very quickly. Some folks would argue that could be done now, and it could, but not in the same way. I could write a conventional program to recognize faces or speech, but it would be brutal and wouldn't likely compete with a DNN-based system, where you need to deal with information that is incomplete and fuzzy. These types of systems, I would think, will be more likely to be 'trusted' with such jobs specifically because they don't depend on the programmed-in prejudices of a team of software engineers. But that means that, like us, they can misinterpret the input and come to the wrong decision.
Explorans limites defectum
-
You said: "It obviously hasn't now, but it will". I'm not as sure as you are that "...it will". Before "...it will" we need to understand how we extract meaning from data. You might even have to explain what "life" is.
No, I meant it will be PUT INTO a position to do things detrimental to us. Humans will allow it to do so. It won't have to take over; it'll apply for the job and get approved.
Explorans limites defectum
-
I've read all the replies. I thank everyone for their perspectives! I will give you some context and my answer to what camp I'm in. (NOTE: This became much longer than I anticipated, so I don't mind if your reaction is TLDR.) I read a book back around 1980 entitled "The Adolescence of P1". P1 is a reference to "memory Partition 1" - the privileged operating system partition. Thumbnail of the book: a Computer Science student attending the University of Waterloo creates a program, giving it a mission to gain control of the operating system, hide itself, seek out routes to other computers, and gain access to "information". Said student submits the program and it immediately throws up a catastrophic exception and fails. Except that it hadn't failed - that was a smoke screen necessary to fulfill its directive to hide itself. The student assumes the failure is legit, gives up on his project and gets on with his life - graduating and eventually landing a job in the U.S. Time passes, P1 carries on, follows the networks, expands the number of computers it controls, assimilates all the "information" it encounters, infects the computer at IBM that creates the operating system images sent by IBM to its customers, and P1 gains more and more resources and "information." Somehow (the process is never fully explained), P1 gains enough "knowledge" that it spontaneously becomes a "conscious entity." It does nifty things like detect that the U.S. authorities are onto it, and it infects the air traffic control computers and crashes a plane, which kills the investigator. Eventually it finds its creator and reveals itself to him. Further merriment ensues. It was a great story and it sparked in me the naive goal of replicating the university student's achievement. So my point is, I've been thinking about thinking and AI ever since. I have a book (not finished) entitled "Insights on My Mind" in which I am in the process of writing down all that I've learned and the conclusions I've reached SO FAR. I'm not here to sell anyone anything. I'm just explaining how I've gotten to this point. Theologically speaking, I'm an agnostic. So I have proceeded with my AI research all these years based on the assumption that I cannot invoke metaphysical answers to the hard questions. That means that every element of my study has to be grounded in physical reality. The consequence has been that, if we are truly going to replicate human-level "intelligence" in a physical entity such as a digital or analog or hybr
Very interesting topic! People are a threat to themselves. When people have control over objects that can harm them, they had better be careful and focus on what they are up to. This applies as much to AI as to a gun, a knife, or a lathe. I regard AI currently as more of an advanced pattern recognition system, and since I have witnessed first-hand how the average software developer struggles to even get CSS to jump through the correct hoops, I am not too worried about some self-conscious AI going berserk. Of course, if those same programmers are going to be fiddling with code that launches tactical nukes, then I would be a bit more worried. I will also be driving my own car for now, thanks Elon.

As you have alluded to, there are more fundamental issues that we need to solve before even getting to anything that is going to approximate awareness or, heaven forbid, self-awareness. We know we have matter and we know we have consciousness. If consciousness is a result of some configuration of matter, then it is something we can cook up in a lab. However, if matter was somehow "created" by consciousness or is somehow "experienced" as "real", then it is a whole other affair. A simple concept such as "size" would seem to me to be problematic. If some mean-spirited self-aware AI were to create robots to annihilate us, then exactly how "big" would these be? It would need to understand something that we all take pretty much for granted. It is a similar conundrum with the evolution of wings: how on earth would wings sprout with no knowledge of how "thick" the air is and how "big" the wings need to be in order to lift the bird? If it is a matter of chance, then what records this monumental event in the DNA that produced "wings" that could have the bird fly, and then also keeps those same wings around in the same configuration? Would another pair of wings not be even better? I mean, we have this in software development: "Oh, a 5-page document resulted in a successful system... then 100 pages would be even better!"

For now I'm quite happy to have AI spot faces and listen to requests for stuff. The voice recognition especially is handy for kids who can't yet write/type what they are after but know that they would like to see a "fan collection".