AI: Threat or panacea?
-
I find your reasoning extremely depressing; your opinion of human nature is extraordinarily negative. Pity it is probably accurate.
Dean Roddey wrote:
We won't drive our cars
The only bright side to this is that I will probably be dead before it becomes a reality.
Never underestimate the power of human stupidity - RAH I'm old. I know stuff - JSOP
And the crazy thing is, I don't think it really requires much in the way of actual 'evil' for all of these bad things to happen. Almost everyone involved could easily believe they are doing the right thing, or at most just doing the same things we've always done: trying to make money, trying to get ahead in life, trying to protect ourselves and our loved ones, trying to do challenging things, being distracted from important issues by the previous issues, etc... There will likely be some people who are actually evil, though even they may not think so and will have fairly reasonable reasons why they think not, just as there already are, more or less. It just requires human nature. Most of our current problems, some of which are serious, are pretty much the same. So many of them exist because of human nature. Some exist because of mother nature, or a combination thereof. But lots of them are purely human nature, with no one in the loop really doing anything that they consider wrong.
Explorans limites defectum
-
Dean Roddey wrote:
It just requires human nature.
I agree with the thrust, despite my rather jaundiced view of the concept of human nature; I tend to share Emma Goldman's take on it. Nevertheless, we humans get up to the same old patterns time and again, and I think the math behind that is that we're agents in a Complex Adaptive System.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
I've been reading an excellent book (ok, imo), [Possible Minds](https://www.edge.org/conversation/john_brockman-possible-minds). The book offers 25 thoughtful perspectives concerning AI and the impacts it could have on humanity. There are two camps: 1) AI is a potential existential threat. 2) AI is nothing to worry about; we know what we're doing and we can control it. It seems like we are in a moment similar to the one just after the Manhattan Project produced the first nuclear bombs - humans were in possession of and using a power we really didn't fully understand. We create something that kind of feels like 1), but then we collectively act like it's 2). From your perspective as a software developer, what camp do you fall in? If neither, define your own.
Cheers, Mike Fidler "I intend to live forever - so far, so good." Steven Wright "I almost had a psychic girlfriend but she left me before we met." Also Steven Wright "I'm addicted to placebos. I could quit, but it wouldn't matter." Steven Wright yet again.
What if it was a threat, just not an existential one but a societal one? That seems the more likely, more plausible way it might affect us: in possibly unpleasantly disruptive ways...
A new .NET Serializer All in one Menu-Ribbon Bar Taking over the world since 1371!
-
Dean Roddey wrote:
why it can deal with information it's never seen before
Because some programmer wrote code to do that. It's just code. It can't think. It's not alive.
Social Media - A platform that makes it easier for the crazies to find each other. Everyone is born right handed. Only the strongest overcome it. Fight for left-handed rights and hand equality.
Thought I'd interject to say the question of sentience has been a matter of some debate in the philosophy circles I run in, in large part because of AI being on the horizon. I think reasonable people can disagree, as there are certain grounding assumptions we all have to deal with here: what makes us human, what it even means to think, or to engage in, say, philosophy? As for me, I'd suggest that anything that is a convincing enough illusion of The Real Thing(TM) (whatever that happens to be) is as good as the real thing for any meaningful intent and purpose. For example, for all I know, we don't have free will either. It might be possible to develop a way to plot my next thought or move. Maybe I'm a calculation in a simulation. But it doesn't matter, because I have the illusion of will, and it's a compelling enough illusion that it may as well be (to me) the real thing. So I'd suggest that, at a certain threshold, we might accept that a computer "thinks" as any other sentient being might, or even as a human might. I don't know if that can be done in silicon reasonably, but I'm entertaining a hypothetical here, if you'll humor me on that.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
codewitch honey crisis wrote:
Maybe I'm a calculation in a simulation.
The Matrix has you...
M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.
-
For years the pinnacle of man's achievement has been the development of systems and weapons of complete destruction. Yeah, some other stuff got invented along the way, but think about it: our prime objective has been to blow shit up, the bigger the better. Yet no one has ever taken that final step; everyone always chickened out. We spend billions looking for and sending crap into space to find some other entity to come and destroy us; hell, even the religious mostly look forward to their God coming to scrub this tiny speck of space dust away. Alas, people are too weak to press the damn button, and no aliens or gods are showing up. Our own destruction is what we've all always wanted. So why not build a machine to do it?
Lopatir wrote:
Our own destruction is what we've all always wanted. So why not build a machine to do it?
I don't remember who said it, but I find it a good complement to your statement.
Quote:
Artificial intelligence might be the cure for human stupidity.
The key here is... what is behind "cure"
M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.
-
Can you imagine if Clippy had become self-aware? 'Nuff said.
The Beer Prayer - Our lager, which art in barrels, hallowed be thy drink. Thy will be drunk, I will be drunk, at home as it is in the tavern. Give us this day our foamy head, and forgive us our spillage as we forgive those who spill against us. And lead us not to incarceration, but deliver us from hangovers. For thine is the beer, the bitter and the lager, for ever and ever. Barmen.
In my student days, I bought a book for one single reason - its title: "Machines Who Think". Considering how long ago that is, I am not holding my breath waiting for the self-aware machines. If you really want to lose sleep over such issues, pick up some of the SciFi novels by James P. Hogan, such as "The Two Faces of Tomorrow" or "Realtime Interrupt". "Two Faces" is from my student days as well ("Realtime" is more recent), but Hogan had top AI experts at C-M and MIT review his manuscripts: even today they hold water, seen from a professional perspective. Obviously, we have extended our understanding since the books were written, but the knowledge on which the books are built is essentially still "correct". Both books are highly recommended.
-
MikeTheFid wrote:
From your perspective as a software developer, what camp do you fall in? If neither, define your own.
As a person studying and working with AI, my view changed from "AI is possibly a threat" all the way to "there is nothing to worry about, ever".

First, I learned that AI is a bit more mechanical than I anticipated, and we have already been using autonomous mechanical systems for years now (in practice an HTTP server requires little to no supervision after the start command, yet fear of an HTTP server is irrational).

Second, we have the validation issue. A system with no validation is just random programming with undefined behavior; in all known cases that leads to an unhandled exception and termination. In all cases with validation, an AI tends to do what it has been programmed for, and nothing more. Even a "self-aware" one tends to do nothing by default, or to behave like an expensive random number generator if emergent behavior is available. In other words, a self-aware AI that wants to kill humanity is only possible if you analyze, validate and train an AI to kill humanity, then test it and reiterate until it stops failing at that command. It cannot be an emergent behavior.

Third, "self-awareness" is overrated. In fact this is the scariest part: intelligence is a far more mechanical process than I had anticipated. That gives rise to the scariest field of all, social engineering. It is not that a machine would harm you; it is that a person who understands how the machinery of the human mind works can use that to gain control over a targeted person's behavior. So you should not be scared of a specialized self-aware AI that has been validated at a higher level to drive a car, translate another language, create a song and so on. You should be scared of the people who know what pattern of sound causes production of a certain type of hormone to affect your mood, and so on. A self-aware machine has predictable behavior (or it breaks down); self-aware humans do not.
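The reiterate-until-validated loop described above has a simple shape. Here's a minimal sketch in Python; `train_step` and `evaluate` are hypothetical stand-ins for whatever a real pipeline does, not any particular library's API:

```python
# A model only ever acquires a behavior you explicitly train AND validate
# for; anything that never passes the gate is rejected as undefined
# behavior. (Sketch only: train_step and evaluate are hypothetical.)
def train_until_validated(model, train_step, evaluate,
                          threshold=0.95, max_iters=1000):
    """Reiterate until the model stops failing the validation gate."""
    for _ in range(max_iters):
        train_step(model)            # nudge the model toward the target task
        if evaluate(model) >= threshold:
            return model             # it now does what it was programmed for
    raise RuntimeError("never passed validation: reject as undefined behavior")

# Toy demo: the "model" is one number being nudged toward 1.0.
state = {"w": 0.0}
train_until_validated(state,
                      train_step=lambda m: m.update(w=m["w"] + 0.1),
                      evaluate=lambda m: m["w"])
print(state)  # validated: roughly {'w': 1.0}
```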
-
MikeTheFid wrote:
It seems like we are in a moment similar to the one just after the Manhattan Project produced the first nuclear bombs
And that's the difference. We had nuclear bombs. AI? Give me a break. Show me something that actually can be described as artificial intelligence -- something that can perceive the world, contemplate an action, and have the means to interact with the physical world to implement that action. And implement it in a way that poses a threat to anything (but you won't get past the first condition). What, are all those self-driving cars going to suddenly join Lyft and go on strike? Even the tragic Boeing crashes are not an AI running amok but a poorly programmed expert system. As in, no intelligence on the plane suddenly said, "hey, let's go kill some people." There is no AI. There is no "intelligence". Sure, we have extremely limited systems that can learn and adapt, that require huge training sets and result in a complex weighted network. You call that thinking? You call that intelligence? A worm is smarter. :sigh:
Latest Article - A 4-Stack rPI Cluster with WiFi-Ethernet Bridging Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny Artificial intelligence is the only remedy for natural stupidity. - CDP1802
The thing a lot of people forget is that we are machines ourselves. We're (very complex) multi-celled organisms that at some point hit our "singularity" and became self-aware. At some point that will happen to a very advanced AI too, whether we like/believe it or not. And I have a feeling it's not going to be pretty.
-
MikeTheFid wrote:
From your perspective as a software developer, what camp do you fall in? If neither, define your own.
Like many things, it will be too late to fix once we realise what we have created. The feared version of AI, the one that will destroy humanity, is very much the one much of science fiction has described: for it to be truly intelligent, on par with human awareness, thought and creativity, it would most likely have sufficient physical resources to escape any constraints we thought were enough. When, or if ever? Yesterday, or a million years away?
-
Marc Clifton wrote:
There is no AI.
Exactly. The majority of people on earth do not understand this.
Slacker007 wrote:
The majority of people on earth do not understand this.
Sadly, the majority of people on earth lack the intelligence to understand this. How ironic.
Latest Article - A 4-Stack rPI Cluster with WiFi-Ethernet Bridging Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny Artificial intelligence is the only remedy for natural stupidity. - CDP1802
-
MikeTheFid wrote:
From your perspective as a software developer, what camp do you fall in? If neither, define your own.
Though dated, I'd suggest a quick read of "Colossus, the Forbin Project", the first book of a trilogy by D. F. Jones. Will we ever get to that level of AI? I cannot know; I do not know what level of AI has been achieved that we're not privy to. And we're not privy to a lot. A book I'm currently reading by Russell Brinegar, titled "Overlords of the Singularity", suggests mankind is being driven to achieve a technological singularity for an undisclosed purpose by an undisclosed entity. At first this idea seemed pretty far-fetched to me, but the more I read the book, the less unbelievable it has become. Once the Singularity has been reached, Ray Kurzweil says, machine intelligence will be infinitely more powerful than all human intelligence combined. Kurzweil predicts that "human life will be irreversibly transformed". Widescreen Trailer for "Colossus: The Forbin Project" - YouTube[^] (edit - spelling)
-
Computers only do EXACTLY what they are told to do. So, no, there is no threat unless a programmer programs it to make poor choices.
Social Media - A platform that makes it easier for the crazies to find each other. Everyone is born right handed. Only the strongest overcome it. Fight for left-handed rights and hand equality.
ZurdoDev wrote:
Computers only do EXACTLY what they are told to do. So, no, there is no threat unless a programmer programs it to make poor choices.
Yeah, this isn't true anymore. Neural networks are black boxes. You train them to recognize a pattern, but no one can read a set of neural network weights and say how they do it.
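To make that concrete, here's a minimal sketch: a tiny network trained on XOR with plain backpropagation (toy code, standard-library Python only; XOR nets can occasionally stall on an unlucky initialization). It ends up working, but good luck explaining how from the printed weights.

```python
import math
import random

random.seed(1)

# Tiny net: 2 inputs -> 2 hidden units -> 1 output; each row includes a bias.
w_hid = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hid]
    y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return y, h

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

for _ in range(20000):  # plain stochastic backprop, learning rate 0.5
    x, t = random.choice(data)
    y, h = forward(x)
    d_y = (y - t) * y * (1 - y)
    for i in range(2):
        d_h = d_y * w_out[i] * h[i] * (1 - h[i])
        w_hid[i][0] -= 0.5 * d_h * x[0]
        w_hid[i][1] -= 0.5 * d_h * x[1]
        w_hid[i][2] -= 0.5 * d_h
    w_out[0] -= 0.5 * d_y * h[0]
    w_out[1] -= 0.5 * d_y * h[1]
    w_out[2] -= 0.5 * d_y

print([round(forward(x)[0], 2) for x, _ in data])  # hopefully ~[0, 1, 1, 0]
print(w_hid, w_out)  # the "explanation": nine opaque floating-point numbers
```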
-
MikeTheFid wrote:
From your perspective as a software developer, what camp do you fall in? If neither, define your own.
Today's computers can only do syntactic processing. They are not good at semantic processing, which is what would be required before they could become truly dangerous. Semantic processing is what we do when we extract meaning from data, and we still don't understand how we do it well enough to build machines that do it. Context, which is important to extracting meaning, is a good example of how difficult the problem is. Take the headline "The Yankees Slaughtered the Red Sox": it can only be understood correctly if we know the context is baseball and not a physical skirmish. It's the reason some of the answers Siri gives are so stupid; Siri assumes a context which often is not correct. When you read about the dangerous potential of machines capable of AI, those machines require self-awareness and intentionality, which can only be achieved with semantic processing - something they are not able to do because we don't understand how we do it.
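A crude way to see the gap: the toy sketch below (hypothetical word lists, not how any real NLP system is built) reads that headline by pure keyword matching, which is all syntax; only an externally supplied context hint rescues the baseball reading.

```python
# Keyword matching is purely syntactic: it sees "slaughtered" and votes
# for violence. It has no idea what a Yankee is. (Toy sketch with
# hypothetical word lists.)
VIOLENT_WORDS = {"slaughtered", "killed", "crushed", "destroyed"}

def naive_reading(headline, context=None):
    words = set(headline.lower().split())
    if context == "baseball":
        return "one team beat the other badly"
    if words & VIOLENT_WORDS:
        return "physical violence occurred"  # the syntax-only verdict
    return "unclear"

headline = "The Yankees Slaughtered the Red Sox"
print(naive_reading(headline))               # physical violence occurred
print(naive_reading(headline, "baseball"))   # one team beat the other badly
```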
-
MikeTheFid wrote:
From your perspective as a software developer, what camp do you fall in? If neither, define your own.
As long as we install Asimov's 3 laws of Robotics we'll be OK (he says with DARPA looking over his shoulder...read what happens in "Little Lost Robot"). :)
-
MikeTheFid wrote:
From your perspective as a software developer, what camp do you fall in? If neither, define your own.
If you're talking about that "ai" that can create images that:
- look like you
- sound like you
- walk like you

then yeah, it is, and will be a threat.
"(I) am amazed to see myself here rather than there ... now rather than then". ― Blaise Pascal
This is one of the lesser, but still scary, possibilities. Not long from now we will enter a stage where anyone can be made to be seen doing or saying something they never did or said, in such a way that it will be extremely difficult or impossible to confirm or deny. Given that confirmation generally isn't required for said content to do its job, and denial is typically useless, that's going to become a real problem.
Explorans limites defectum
-
Today's computers can only do syntactic processing. They are not good at semantic processing, which is what would be required before they could become truly dangerous.
Lots of people seem to think that it will become dangerous only when it reaches this level, but that's not true. It's already becoming dangerous. Human semantic reasoning is not required for massive surveillance, data collection, and pattern recognition. It's not required to have a computer go through massive amounts of phone conversations and listen for particular types of conversations, or to do high-quality facial recognition in every public place in the country so that you can't go anywhere without being tracked. Semantic understanding also isn't needed to put it into the brains of really nasty autonomous weapons. It won't need semantic understanding to create indistinguishable fake videos to be used in all kinds of ugly ways. It won't need it to be put into "AI" assistants sold into the home, monitoring and reporting everything you do and say to their corporate owners (and they to their governmental overseers). I just think it's a mistake to assume that it has to be some sort of Skynet scenario before it gets really dangerous to us.
Explorans limites defectum
-
MikeTheFid wrote:
From your perspective as a software developer, what camp do you fall in? If neither, define your own.
I've read all the replies. I thank everyone for their perspectives! I will give you some context and my answer to what camp I'm in. (NOTE: This became much longer than I anticipated, so I don't mind if your reaction is TLDR.)

I read a book back around 1980 entitled "The Adolescence of P1". P1 is a reference to "memory Partition 1", the privileged operating system partition. Thumbnail of the book: a computer science student attending the University of Waterloo creates a program, giving it a mission to gain control of the operating system, hide itself, seek out routes to other computers, and gain access to "information". Said student submits the program and it immediately throws up a catastrophic exception and fails. Except that it hadn't failed; that was a smoke screen necessary to fulfill its directive to hide itself. The student assumes the failure is legit, gives up on his project and gets on with his life, graduating and eventually landing a job in the U.S. Time passes, and P1 carries on: it follows the networks, expands the number of computers it controls, assimilates all the "information" it encounters, infects the computer at IBM that creates the operating system images sent by IBM to its customers, and gains more and more resources and "information". Somehow (the process is never fully explained), P1 gains enough "knowledge" that it spontaneously becomes a "conscious entity". It does nifty things like detect that the U.S. authorities are onto it, at which point it infects the air traffic control computers and crashes a plane, which kills the investigator. Eventually it finds its creator and reveals itself to him. Further merriment ensues.

It was a great story, and it sparked in me the naive goal of replicating the university student's achievement. So my point is, I've been thinking about thinking and AI ever since. I have a book (not finished) entitled "Insights on My Mind" in which I am writing down all that I've learned and the conclusions I've reached SO FAR. I'm not here to sell anyone anything; I'm just explaining how I've gotten to this point.

Theologically speaking, I'm an agnostic, so I have proceeded with my AI research all these years on the assumption that I cannot invoke metaphysical answers to the hard questions. That means every element of my study has to be grounded in physical reality. The consequence has been that, if we are truly going to replicate human-level "intelligence" in a physical entity such as a digital or analog or hybr