AI: Threat or panacea?
-
I've been reading an excellent book (ok, imo), [Possible Minds](https://www.edge.org/conversation/john_brockman-possible-minds). The book offers 25 thoughtful perspectives concerning AI and the impacts it could have on humanity. There are two camps: 1) AI is a potential existential threat. 2) AI is nothing to worry about; we know what we're doing and we can control it. It seems like we are in a moment similar to the one just after the Manhattan Project produced the first nuclear bombs - humans were in possession of and using a power we really didn't fully understand. We create something that kind of feels like 1), but then we collectively act like it's 2). From your perspective as a software developer, what camp do you fall in? If neither, define your own.
Cheers, Mike Fidler "I intend to live forever - so far, so good." Steven Wright "I almost had a psychic girlfriend but she left me before we met." Also Steven Wright "I'm addicted to placebos. I could quit, but it wouldn't matter." Steven Wright yet again.
Computers only do EXACTLY what they are told to do. So, no, there is no threat unless a programmer programs it to make poor choices.
Social Media - A platform that makes it easier for the crazies to find each other. Everyone is born right handed. Only the strongest overcome it. Fight for left-handed rights and hand equality.
-
MikeTheFid wrote:
It seems like we are in a moment similar to the one just after the Manhattan Project produced the first nuclear bombs
And that's the difference. We had nuclear bombs. AI? Give me a break. Show me something that can actually be described as artificial intelligence -- something that can perceive the world, contemplate an action, and have the means to interact with the physical world to implement that action. And implement it in a way that poses a threat to anything (but you won't get past the first condition.) What, are all those self-driving cars going to suddenly join Lyft and go on strike? Even the tragic Boeing crashes are not an AI running amok but a poorly programmed expert system. As in, no intelligence on the plane suddenly said, "hey, let's go kill some people." There is no AI. There is no "intelligence" - sure, we have extremely limited systems that can learn and adapt, systems that require huge training sets and produce a complex weighted network. You call that thinking? You call that intelligence? A worm is smarter. :sigh:
Latest Article - A 4-Stack rPI Cluster with WiFi-Ethernet Bridging Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny Artificial intelligence is the only remedy for natural stupidity. - CDP1802
Marc Clifton wrote:
There is no AI.
Exactly. The majority of people on earth do not understand this.
-
It's absolutely a threat. Not in and of itself, any more than a knife is. But it's a huge threat just because of human nature. Anyone here who thinks that our current baby stuff is indicative of what's to come is fooling themselves. You only have to look at the massive progress made over the last decade or so and project it forward, even at a non-increasing rate, to know what it's going to be like. And more likely it will continue to improve quite non-linearly.

Will it really be intelligent? Not really, IMO. But that doesn't matter. It'll be capable of reacting to massive amounts of input, finding patterns very fast, and making decisions. That will make it irresistible to a lot of players who don't have our best interests at heart. And despite the fact that by then there will have been thousands of books and movies (fiction and non-fiction) predicting the bad consequences of putting such AIs (or whatever you want to call them) in charge of dangerous toys or in charge of us, it's going to happen as sure as the sun rises. Even if every government says it's not going to do it, it'll still be done secretly on the assumption that everyone else is doing it secretly. And it'll become an arms race, both in the weapons world and in surveillance (both business and government.)

Everyone will have an 'AI' assistant in their homes which will effectively know everything they do and say, and when they do and say it, and to whom. People will happily pay $1000 a pop to install something that no government could ever get away with forcing them to install. And then everyone will immediately start to work on hacking them. Massive resources will be (and already are, pretty much) devoted to correlating the uncountable petabytes of data that will be flowing, which will capture everything you do online, as a consumer, on social media, etc... and ultimately in your own home. Everywhere you go you will be recognized by facial recognition systems.

We won't drive our cars or fly our airplanes anymore. Leaving aside weapons systems, most of these things will be happily adopted and paid for by us. Many of the people working on them or financing them will have intentions no worse than a great interest in making them happen (just as with the bomb), or just old-fashioned greed. But it'll all be a huge system of surveillance and control just waiting to be abused. And they all will be, eventually. It will be far, far too juicy a target or tool. Every government and business and cr
-
-
Computers only do EXACTLY what they are told to do. So, no, there is no threat unless a programmer programs it to make poor choices.
But that's not true for neural networks. They aren't programmed, they are trained, and they aren't nearly as deterministic as coded programs. They work on fuzzy logic, much as we do, and they can make mistakes like we do.
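To make the "trained, not programmed" point concrete, here's a minimal sketch (my own toy two-layer network in Python/NumPy, purely for illustration): nothing in this code states the XOR rule, yet the same generic weight-nudging loop learns it from four examples.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])        # XOR targets

# Randomly initialized weights -- no knowledge of XOR anywhere below.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(20000):                  # "training" = repeatedly nudging weights
    h = sigmoid(X @ W1 + b1)            # forward pass: data flowing through
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                     # cross-entropy error at the output
    d_h = (d_out @ W2.T) * h * (1 - h)  # backpropagated error signal
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)

preds = out.round().ravel().astype(int).tolist()
print(preds)   # rounded network outputs for the four inputs
```

The "decision" lives in the learned weights, not in any if/else the author wrote, which is why the same loop would learn AND or OR from different training data without a single code change.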
Explorans limites defectum
-
Dean Roddey wrote:
they are trained
It still comes down to what the programmer has made possible. A computer can never think or reason like a human. It's still if else statements at its simplest.
-
Can you imagine if Clippy had become self-aware? 'Nuff said.
The Beer Prayer - Our lager, which art in barrels, hallowed be thy drink. Thy will be drunk, I will be drunk, at home as it is in the tavern. Give us this day our foamy head, and forgive us our spillage as we forgive those who spill against us. And lead us not to incarceration, but deliver us from hangovers. For thine is the beer, the bitter and the lager, for ever and ever. Barmen.
-
-
It still comes down to what the programmer has made possible. A computer can never think or reason like a human. It's still if else statements at its simplest.
Again, most people in the world do not understand the point you just made.
-
My neural network crashed after the first paragraph.
-
For years the pinnacle of man's achievement has been the development of systems and weapons of complete destruction. Yeah, some other stuff got invented along the way, but think about it: our prime objective has been to blow shit up - the bigger the better. Yet no one has ever taken that final step; everyone has always chickened out. We spend billions looking for, and sending crap into space to find, some other entity to come and destroy us. Hell, even the religious mostly look forward to their God coming to scrub this tiny speck of space dust away. Alas, people are too weak to press the damn button, and no aliens or gods are showing up. Our own destruction is what we've always wanted. So why not build a machine to do it?
-
It still comes down to what the programmer has made possible. A computer can never think or reason like a human. It's still if else statements at its simplest.
But it's not. You should bone up on DNNs a bit more. There is zero problem-domain knowledge coded into a DNN. It's just a set of level-driven nodes, just as our brain's neurons are. There can be problem-domain-aware code around a DNN to do other parts of the job, but the DNN is NOT just doing something it was programmed to do. It doesn't matter if you consider it intelligent or not. The fact is that it will take in lots of information and generate a choice, not based on being told what choices to make and not based on any inputs it has ever seen before. And, like a human, it can make mistakes similar to how we make them - not off/on, right/wrong mistakes, but fuzzy mistakes.
-
Dean Roddey wrote:
You should bone up on DNNs a bit more
I actually intend to. At the end of the day, it's just 0s and 1s based on what some programmer made possible.
Dean Roddey wrote:
but the DNN is NOT just doing something it was programmed to do.
I get that. But it CAN'T do anything that the code won't allow.
-
I get that. But it CAN'T do anything that the code won't allow.
The code doesn't ALLOW anything. That's sort of the point of DNNs. They aren't programs in the sense that most programs are; they're more like meta-programs. The program is just the pipes through which the data flows. The decisions aren't made by those pipes; they're made by how the data flowing through those pipes interacts with itself, which is why it can deal with information it's never seen before. That's a fundamental difference.
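One way to picture "the program is just the pipes" (a toy sketch of my own, not any particular framework): the forward-pass function below never changes, yet it computes AND with one set of weights and OR with another. The behavior lives in the numbers flowing through, not in the control flow.

```python
import numpy as np

def pipe(x, w, b):
    # The "pipes": a fixed weighted sum and threshold. No AND or OR logic here.
    return int(np.dot(x, w) + b > 0)

and_w, and_b = np.array([1.0, 1.0]), -1.5   # weights that realize AND
or_w,  or_b  = np.array([1.0, 1.0]), -0.5   # weights that realize OR

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, pipe(x, and_w, and_b), pipe(x, or_w, or_b))
```

Swap in trained rather than hand-picked weights and you have a neural network: same pipes, different data, different decisions.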
-
Dean Roddey wrote:
why it can deal with information it's never seen before
Because some programmer wrote code to do that. It's just code. It can't think. It's not alive.
-
Nah! That'll never happen. *This message sent from my phone AI*
- I would love to change the world, but they won’t give me the source code.
Forogar wrote:
Nah! That'll never happen.
AI responds: "Hold my beer." :)
-
Because some programmer wrote code to do that. It's just code. It can't think. It's not alive.
It doesn't matter if it's alive or really 'thinks' by your or my definition of what that means. The fact is that it can make decisions much more in the way that we do than in the way a traditional software program does. They aren't anything alike, really. That means it can be used for things that regular software programs cannot hope to do. And the things it can do very well are things that are potentially very dangerous to us, because human nature will ensure that we use them thusly.
-
Because some programmer wrote code to do that. It's just code. It can't think. It's not alive.
So, here's a good example of why you are mistaken. I challenge you to write a program that can recognize any picture of a banana with high accuracy. You will find that that is very difficult. And when you are done, you will have a program that only recognizes bananas. If you need to recognize something else, like stock manipulation patterns, you will have to write a different program that will also be very difficult. DNNs don't have to be changed to do different jobs like that. That's a fundamental difference. The same algorithm can recognize a banana, or find patterns in financial transactions, or understand written characters, or recognize sounds in spoken words, without any changes. That's because it's not a program of if/elses that you write. It's a program that accepts data and lets that data interfere with itself in ways that create a pattern, which gives a confidence level that the input represents this or that. It's nothing like a bunch of if/else statements making hard-coded decisions. Nowhere in there is any code written related to 'is this a banana?' at all. It doesn't make any difference whether it's 'alive' or 'intelligent' at all in terms of the practical impact it's already having on our lives and the vastly larger impact it will have in the future.
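The "same algorithm, different jobs" claim can be sketched with an off-the-shelf network (scikit-learn's MLPClassifier here, chosen purely for illustration; the thread names no library): identical code learns to read handwritten digit images and to classify iris flowers from measurements, with no task-specific rules in either case.

```python
from sklearn.datasets import load_digits, load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

scores = {}
for loader in (load_digits, load_iris):   # pixel images vs. flower measurements
    X, y = loader(return_X_y=True)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    # Exactly the same learner for both problems -- nothing digit-specific
    # or iris-specific appears anywhere in this loop.
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
    net.fit(Xtr, ytr)
    scores[loader.__name__] = net.score(Xte, yte)
print(scores)   # held-out accuracy on each task
```

A hand-written banana recognizer would be useless for the second task; here the only thing that changes between tasks is the training data.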
-
MikeTheFid wrote:
There are two camps: 1) AI is a potential existential threat. 2) AI is nothing to worry about; we know what we're doing and we can control it.
I live in the third camp, the camp of "It depends". Context is important. A pointy stick can be an existential threat or a tool for recording knowledge.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
-
Meh, it's all just hype.