AI: Threat or panacea?
-
It's absolutely a threat. Not in and of itself, any more than a knife is. But it's a huge threat just because of human nature. Anyone here who thinks that the baby stuff we have now is indicative of what's to come is fooling themselves. You only have to look at the massive progress made over the last decade or so and project it forward, even at a non-increasing rate, to know what it's going to be like. And more likely it will continue to improve quite non-linearly.

Will it really be intelligent? Not really, IMO. But that doesn't matter. It'll be capable of reacting to massive amounts of input, finding patterns very fast, and making decisions. That will make it irresistible to a lot of players who don't have our best interests at heart. And despite the fact that there will by then be thousands of books and movies (fiction and non-fiction) predicting the bad consequences of putting such AIs (or whatever you want to call them) in charge of dangerous toys, or in charge of us, it's going to happen as sure as the sun rises. Even if every government says it's not going to do it, it'll still be done secretly on the assumption that everyone else is doing it secretly. And it'll become an arms race, both in the weapons world and in surveillance (both business and government).

Everyone will have an 'AI' assistant in their homes which will effectively know everything they do and say, when they do and say it, and to whom. People will happily pay $1000 a pop to install something that no government could ever get away with forcing them to install. And then everyone will immediately get to work hacking them. Massive resources will be (and pretty much already are) devoted to correlating the uncountable petabytes of data that will be flowing, which will capture everything you do online, as a consumer, on social media, etc... and ultimately in your own home. Everywhere you go you will be recognized by facial recognition systems. We won't drive our cars or fly our airplanes anymore.

Leaving aside weapons systems, most of these things will be happily adopted and paid for by us. Many of the people working on them or financing them will have intentions no worse than a great interest in making them happen (just as with the bomb), or just old-fashioned greed. But it'll all be a huge system of surveillance and control just waiting to be abused. And it all will be, eventually. It will be far, far too juicy a target or tool. Every government and business and criminal will want a piece of it.
I find your reasoning extremely depressing; your opinion of human nature is extraordinarily negative. Pity it is probably accurate.
Dean Roddey wrote:
We won't drive our cars
The only bright side to this is that I will probably be dead before it becomes a reality.
Never underestimate the power of human stupidity - RAH I'm old. I know stuff - JSOP
-
And the crazy thing is, I don't think it really requires much in the way of actual 'evil' for all of these bad things to happen. Almost everyone involved could easily believe that they are doing the right thing, or at most just doing the same things we've always done, e.g. trying to make money, trying to get ahead in life, trying to protect ourselves and our loved ones, trying to do challenging things, being distracted from important issues by the previous issues, etc... There will likely be some people who are actually evil, though even they may not think so and may have fairly reasonable reasons why they think not, same as there already are, more or less. It just requires human nature. Most of our current problems, some of which are serious, are pretty much the same. So many of them exist because of human nature. Some exist because of mother nature, or a combination of the two. But lots of them are purely human nature, with no one in the loop really doing anything that they consider wrong.
Explorans limites defectum
-
I agree with the thrust despite my rather jaundiced view of the concept of human nature. I tend to share Emma Goldman's take on it. Nevertheless we humans get up to the same old patterns time and again, but I think the math behind that is because we're agents in a Complex Adaptive System.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
I've been reading an excellent book (ok, imo), [Possible Minds](https://www.edge.org/conversation/john_brockman-possible-minds). The book offers 25 thoughtful perspectives concerning AI and the impacts it could have on humanity. There are two camps: 1) AI is a potential existential threat. 2) AI is nothing to worry about; we know what we're doing and we can control it. It seems like we are in a moment similar to the one just after the Manhattan Project produced the first nuclear bombs - humans were in possession of and using a power we really didn't fully understand. We create something that kind of feels like 1), but then we collectively act like it's 2). From your perspective as a software developer, what camp do you fall in? If neither, define your own.
Cheers, Mike Fidler "I intend to live forever - so far, so good." Steven Wright "I almost had a psychic girlfriend but she left me before we met." Also Steven Wright "I'm addicted to placebos. I could quit, but it wouldn't matter." Steven Wright yet again.
What if it were a threat, just not an existential one but a societal one? That seems more likely: there are many possible ways it might affect society in unpleasantly disruptive ways...
A new .NET Serializer All in one Menu-Ribbon Bar Taking over the world since 1371!
-
Dean Roddey wrote:
why it can deal with information it's never seen before
Because some programmer wrote code to do that. It's just code. It can't think. It's not alive.
Social Media - A platform that makes it easier for the crazies to find each other. Everyone is born right handed. Only the strongest overcome it. Fight for left-handed rights and hand equality.
Thought I'd interject to say the question of sentience has been a matter of some debate in the philosophy circles I run in, in large part because of AI being on the horizon. I think reasonable people can disagree, as there are certain grounding assumptions we all have to deal with here in terms of what makes us human, and what it even means to think, or to engage in, say, philosophy. As for me, I'd suggest that anything that is a convincing enough illusion of The Real Thing(TM) (whatever that happens to be) is as good as the real thing for any meaningful intent and purpose. For example, for all I know, we don't have free will either. It might be possible to develop a way to plot my next thought or move. Maybe I'm a calculation in a simulation. But it doesn't matter, because I have the illusion of will, and it's a compelling enough illusion that it may as well be (to me) the real thing. So I'd suggest that, at a certain threshold, we might accept that a computer "thinks" as any other sentient being might, or even as a human might. I don't know if that can be done in silicon reasonably, but I'm entertaining a hypothetical here, if you'll humor me.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
codewitch honey crisis wrote:
Maybe I'm a calculation in a simulation.
The Matrix has you...
M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.
-
For years the pinnacle of man's achievement has been the development of systems and weapons of complete destruction. Yeah, some other stuff got invented along the way, but think about it: our prime objective has been to blow shit up, the bigger the better. Yet no one has ever taken that final step; we always chickened out. We spend billions sending crap into space looking for some other entity to come and destroy us; hell, even the religious mostly look forward to their God coming to scrub this tiny speck of space dust away. Alas, people are too weak to press the damn button, and no aliens or gods are showing up. Our own destruction is what we've all always wanted. So why not build a machine to do it?
Lopatir wrote:
Our own destruction is what we've all always wanted. So why not build a machine to do it?
I don't remember who said it, but I find it a good complement to your statement.
Quote:
Artificial intelligence might be the cure for human stupidity.
The key here is... what is behind "cure"
M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.
-
Can you imagine if Clippy had become self-aware? 'Nuff said.
The Beer Prayer - Our lager, which art in barrels, hallowed be thy drink. Thy will be drunk, I will be drunk, at home as it is in the tavern. Give us this day our foamy head, and forgive us our spillage as we forgive those who spill against us. And lead us not to incarceration, but deliver us from hangovers. For thine is the beer, the bitter and the lager, for ever and ever. Barmen.
In my student days, I bought a book for one single reason - its title: "Machines Who Think". Considering how long ago that is, I am not holding my breath waiting for the self-aware machines. If you really want to lose sleep over such issues, pick up some of the SciFi novels by James P. Hogan, such as "The Two Faces of Tomorrow" or "Realtime Interrupt". "Two Faces" is from my student days as well ("Realtime" is more recent), but Hogan had the top AI experts at Carnegie Mellon and MIT review his manuscripts: even today they hold water, seen from a professional perspective. Obviously, we have extended our understanding since the books were written, but the knowledge on which the books are built is essentially still "correct". Both books are highly recommended.
-
As a person studying and working with AI, my view changed from "AI is possibly a threat" all the way to "there is nothing to worry about, ever".

First, I learned that AI is rather more mechanical than I anticipated. And we have already been using autonomous mechanical systems for years now (in practice an HTTP server requires little to no supervision after the start command, yet fear of an HTTP server is irrational).

Second, we have the validation issue. A system with no validation is just random programming with undefined behavior. In all known cases that leads to an unhandled exception and termination. In all cases with validation, AI tends to do what it has been programmed for. And nothing more. Even a "self-aware" system tends to do nothing by default, or behaves like an expensive random number generator if emergent behavior is available. In other words, a self-aware AI that wants to kill humanity is only possible if you analyze, validate and train an AI to kill humanity, and then test it and reiterate until it stops failing at that command. It cannot be an emergent behavior.

Third, "self-awareness" is overrated. In fact this is the scariest part: intelligence is a far more mechanical process than I anticipated. This gives rise to the scariest field, "social engineering". It is not that a machine would harm you, but that a person who understands how the machinery of the human mind works can use that to gain control over a targeted person's behavior. Therefore, you should not be scared of a specialized self-aware AI that has been validated at a higher level to drive a car, translate from another language, compose a song and so on. You should be scared of the people who know what pattern of sound can cause production of a certain type of hormone to affect your mood, and so on. A self-aware machine has predictable behavior (or it breaks down); self-aware humans do not.
-
MikeTheFid wrote:
It seems like we are in a moment similar to the one just after the Manhattan Project produced the first nuclear bombs
And that's the difference. We had nuclear bombs. AI? Give me a break. Show me something that actually can be described as artificial intelligence -- something that can perceive the world, contemplate an action, and have the means to interact with the physical world to implement that action. And implement it in a way that poses a threat to anything (but you won't get past the first condition). What, are all those self-driving cars going to suddenly join Lyft and go on strike? Even the tragic Boeing crashes are not an AI running amok but a poorly programmed expert system. As in, no intelligence on the plane suddenly said, "hey, let's go kill some people." There is no AI. There is no "Intelligence" -- sure, we have extremely limited systems that can learn and adapt, that require huge training sets and produce a complex weighted network. You call that thinking? You call that intelligence? A worm is smarter. :sigh:
Latest Article - A 4-Stack rPI Cluster with WiFi-Ethernet Bridging Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny Artificial intelligence is the only remedy for natural stupidity. - CDP1802
The thing a lot of people forget is that we are machines ourselves. We're (very complex) multi-celled organisms that at some point got our "singularity" and became self-aware. At some point that will happen to a sufficiently advanced AI too, whether we like/believe it or not. And I have a feeling it's not going to be pretty.
-
Like many things, it will be too late to fix once we realise what we have created. The feared version of AI, the one that will destroy humanity, is very much the one that much science fiction has described: for it to be truly intelligent, on a par with human awareness, thought and creativity, it would most likely have sufficient physical resources to escape any constraints we thought were enough. When, or if ever? Yesterday, or a million years away?
-
Marc Clifton wrote:
There is no AI.
Exactly. The majority of people on earth do not understand this.
Slacker007 wrote:
The majority of people on earth do not understand this.
Sadly, the majority of people on earth lack the intelligence to understand this. How ironic.
Latest Article - A 4-Stack rPI Cluster with WiFi-Ethernet Bridging Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny Artificial intelligence is the only remedy for natural stupidity. - CDP1802
-
Though dated, I'd suggest a quick read of "Colossus: The Forbin Project", the first book of a trilogy by D. F. Jones. Will we ever get to that level of AI? I cannot know; I don't know what level of AI has been achieved that we're not privy to. And we're not privy to a lot. A book I'm currently reading by Russell Brinegar, titled "Overlords of the Singularity", suggests mankind is being driven to achieve a technological singularity for an undisclosed purpose by an undisclosed entity. At first, this idea seemed pretty far-fetched to me, but the more I read of the book, the less unbelievable it has become. Once the Singularity has been reached, Ray Kurzweil says that machine intelligence will be infinitely more powerful than all human intelligence combined, and he predicts that "human life will be irreversibly transformed". Widescreen Trailer for "Colossus: The Forbin Project" - YouTube[^]
-
Computers only do EXACTLY what they are told to do. So, no, there is no threat unless a programmer programs it to make poor choices.
Social Media - A platform that makes it easier for the crazies to find each other. Everyone is born right handed. Only the strongest overcome it. Fight for left-handed rights and hand equality.
ZurdoDev wrote:
Computers only do EXACTLY what they are told to do. So, no, there is no threat unless a programmer programs it to make poor choices.
Yeah, this isn't true anymore. Neural networks are black boxes. You train them to recognize a pattern, but no one can read a set of neural network weights and say how they do it.
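To make that concrete, here's a minimal sketch (Python with scikit-learn assumed; the XOR task and layer size are just illustrative): the trained network gets the right answers, but its learned weights are opaque arrays of real numbers with no readable rules in them.

```python
# Minimal sketch: train a tiny neural net on XOR, then dump its weights.
# The model's behavior was shaped by training, not spelled out by a
# programmer -- and nothing in the raw weights says *how* it works.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR

clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=1000, random_state=1)
clf.fit(X, y)

print(clf.predict(X))  # typically [0 1 1 0] -- the learned behavior
for i, w in enumerate(clf.coefs_):
    print(f"layer {i} weights:\n{w}")  # opaque real numbers, no 'rules'
```

Nobody "told" it the XOR rule; it fell out of the training loop, which is why "computers only do exactly what they are told" doesn't quite cover this class of software.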
-
Today's computers can only do syntactic processing. They are not good at semantic processing, which is what would be required before they can become truly dangerous. Semantic processing is what we do when we extract meaning from data. We still don't understand how we do this well enough to be able to build machines that do it. Context, which is central to extracting meaning, is a good example of how difficult the problem is. Take the headline "The Yankees Slaughtered the Red Sox". It can only be understood correctly if we know the context is baseball and not a physical skirmish. It's the reason why some of the answers Siri gives to questions are so stupid: Siri assumes a context which often is not correct. When you read about the dangerous potential of machines capable of AI, those machines require self-awareness and intentionality, which can only be achieved with semantic processing; something they are not able to do because we don't understand how we do it ourselves.
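A toy sketch of the same point (Python, with made-up word lists just for illustration): a purely syntactic matcher fires on the baseball headline exactly as it would on a report of real violence, because it compares tokens, not meanings.

```python
# Naive, purely syntactic 'violence detector': it matches words,
# not meanings, so it cannot tell a box score from a massacre.
def looks_violent(text):
    violent_words = {"slaughtered", "massacred", "killed"}
    return bool(set(text.lower().split()) & violent_words)

print(looks_violent("The Yankees Slaughtered the Red Sox"))       # True (false alarm)
print(looks_violent("Rebels slaughtered dozens in the capital"))  # True
# Both match identically; the context that separates them lives
# outside the words themselves -- the semantic part we don't know
# how to build.
```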
-
As long as we install Asimov's 3 laws of Robotics we'll be OK (he says with DARPA looking over his shoulder...read what happens in "Little Lost Robot"). :)
-
If you're talking about that "ai" that can create images that:
- look like you
- sound like you
- walk like you

then yeah, it is, and will be a threat.
"(I) am amazed to see myself here rather than there ... now rather than then". ― Blaise Pascal
This is one of the lesser, but still scary, possibilities. Not that long from now we will enter a stage where anyone can be made to be seen doing or saying something that they never did or said, in such a way that it will be extremely difficult or impossible to confirm or deny. Given that confirmation generally isn't required for said content to do its job, and denial is typically useless, that's going to become a real problem.
Explorans limites defectum
-
Lots of people seem to think it will only become dangerous when it reaches that level of semantic understanding, but that's not true. It's already becoming dangerous. Human-style semantic reasoning is not required for massive surveillance, data collection, and pattern recognition. It's not required to have a computer go through massive numbers of phone conversations and listen for particular types of conversations, or to do high-quality facial recognition in every public place in the country so that you can't go anywhere without being tracked. It also doesn't need semantic understanding to be put into the brains of really nasty autonomous weapons. It won't need semantic understanding to create indistinguishable fake videos to be used in all kinds of ugly ways. It won't need it to be put into 'AI' assistants sold into the home, to monitor and report everything you do and say to their corporate owners (and they to their governmental overseers). I just think it's a mistake to assume that it has to be some sort of Skynet scenario before it gets really dangerous to us.
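To underline how little "intelligence" that takes, here's a deliberately crude sketch (the watchlist and transcripts are invented for illustration): a few lines of blind pattern matching can flag conversations at scale with zero understanding of what anyone meant.

```python
# Blind surveillance sketch: flag any transcript that mentions a
# watched term. No comprehension anywhere -- just pattern matching,
# which is exactly what makes it cheap to do at massive scale.
import re

WATCHLIST = re.compile(r"\b(protest|rally|encryption)\b", re.IGNORECASE)

transcripts = [
    ("alice", "Let's meet at the rally on Saturday."),
    ("bob", "Dinner at seven? I'll cook."),
]

for speaker, text in transcripts:
    if WATCHLIST.search(text):
        print(f"flagged {speaker!r}: {text}")  # flags alice; no 'thinking' involved
```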
Explorans limites defectum