ASI: Artificial SuperIntelligence
-
raddevus wrote:
If you're interested in these types of thought experiments about where the future might lead,
I would be interested... if that future were not that fvcked up.
M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.
-
As the current AI iterations are just very good statistical machines, I can't see us getting to super-AI any time soon. Having said that, if they ever get AI to be creative, then Ghu knows what the next step will be. I doubt I will be around to see it happen.
Never underestimate the power of human stupidity - RAH I'm old. I know stuff - JSOP
Interestingly, the algorithm that beat the world champion at Go actually did commit an act of "creativity". I didn't know about this either until I read about it in the book, The Creativity Code: Art and Innovation in the Age of AI[^] by Marcus du Sautoy, who is a professor of math. What happened was that during a particular match the Go AI played a move that everyone said was childish and wrong, but that move ended up causing the AI to beat the champion. The really interesting thing was that even though there are something like 3,000 years of Go to study, no master Go player would ever have said to use that move. However, all Go players now know this strategy and use it. But it was created by the Go AI. There are some other things you'll learn in this book too, one being that math proofs are now so complex (some are 10,000 pages long - seriously) that only AI can determine whether they are correct or not. But how do we know the AI is correct? It's really interesting.
-
There are a few problems with AI, the first being that we don't even know what intelligence is. I like to point at The Big Bang Theory, where main character Sheldon is supposed to be super smart, yet he can't function in society. In a way, Penny is much smarter than Sheldon despite having about a third of his IQ. I know it's just a show meant for laughs, but that part isn't far-fetched. You've probably heard the tribes-in-the-jungle argument before: they can't do basic math, but they're able to survive out in the jungle, something most of us couldn't. An IQ test tests what we, in the modern West, think a reasonably intelligent person should know, but it's shaped around our current time and place. A tribe member would score terribly on an IQ test, but they're still intelligent by their own standards. So what is intelligence and how do we test it? The dictionary says "the ability to acquire and apply knowledge and skills." That's a very broad definition, and I'd argue it's not very accurate either.

Any "AI" that's around today is nothing more than a machine learning algorithm that just finds patterns. Not to downplay the technology, but it's hardly "intelligent". Take that computer that "learned" how to play Super Mario simply by failing thousands of times and then trying something else. By the dictionary definition it "acquired" a skill (playing Super Mario) and then "applied" it (by finishing the level/game). Yet I don't think trial and error would generally be considered intelligent. So at what point do we consider it intelligent?

Then comes the next question: if we don't know what intelligence is, how are we going to recreate it? I find it funny that people are worrying about artificial super intelligence while we don't even have artificial regular intelligence yet. That's not to say computers can't completely elephant us over right now. There are a few supercomputers out there that run very complex computations and simulations, and I wouldn't be surprised if one of them concluded it'd be best if the entire world got nuked and reset :laugh: Still, that would be a completely logical decision, not intelligence :)
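Coming back to that Mario example for a second, here's a minimal sketch of what that kind of trial-and-error "learning" amounts to (my own toy illustration in Python, not the actual Mario experiment; the level layout and scoring are invented): generate random button sequences, keep whichever one gets furthest, repeat.

```
import random

LEVEL_LENGTH = 20            # hypothetical level: reach position 20 to "win"
PIT_POSITIONS = {5, 11, 16}  # landing on these positions means the attempt fails
ACTIONS = ["run", "jump"]    # run moves 1 step, jump moves 2 (so it can clear a pit)

def play(plan):
    """Replay a fixed sequence of actions and return how far it got."""
    pos = 0
    for action in plan:
        pos += 2 if action == "jump" else 1
        if pos in PIT_POSITIONS:
            return pos               # fell into a pit: the attempt ends here
        if pos >= LEVEL_LENGTH:
            return LEVEL_LENGTH      # reached the flag
    return pos

best_plan, best_score = [], 0
for attempt in range(10_000):        # "fail thousands of times..."
    plan = [random.choice(ACTIONS) for _ in range(LEVEL_LENGTH)]
    score = play(plan)
    if score > best_score:           # "...and then try something else"
        best_plan, best_score = plan, score

print("best distance reached:", best_score, "out of", LEVEL_LENGTH)
```

By the dictionary definition this loop "acquires" and "applies" a skill, but it's hard to call it anything more than brute persistence.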
Best, Sander sanderrossel.com Migrating Applications to the Cloud with Azure arrgh.js - Bringing LINQ to JavaScript
Very good post and interesting points. I thought that AI was just a bunch of math piled upon math until I read the book, The Creativity Code: Art and Innovation in the Age of AI[^] by math professor Marcus du Sautoy. In it he explains the AI that defeated the world champion of Go. The interesting thing is that it played a move that no human Go player would ever play. That move led the AI to win a particular match, but as the world watched, the commentators said it was a childish and faulty move. They said the algo had obviously failed. Then, because of that move, the algo went on to win. There are something like 3,000 years of Go history, and people began to study that move and wonder why the AI chose it. They cannot explain why, but now every Go player uses that move at a particular point in matches. It actually did something that others had not done before. Something of an act of creativity. Very interesting.
-
Two other excellent books are: Life 3.0 - Wikipedia[^], which starts with an entertaining story of how a [benign] AI might take over. It then gets slow, but halfway through the book it becomes super-interesting again. And the other one is pretty advanced; I can recommend it warmly: Superintelligence: Paths, Dangers, Strategies - Wikipedia[^] BTW, who noticed the absence of marketing by the market leaders?
"If we don't change direction, we'll end up where we're going"
-
Sander Rossel wrote:
I find it funny that people are worrying about artificial super intelligence while we don't even have artificial regular intelligence yet.
I am not worried about super AI, because I agree with you. I am worried about idiots in charge (and in society itself) giving fallible and not-so-intelligent systems so much power.
Sander Rossel wrote:
There's a few super computers out there that run very complex computations and simulations and I wouldn't be surprised if one of them concludes it'd be best if the entire world got nuked and reset
Exactly... Hello David, do you want to play a game...?
M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.
-
raddevus wrote:
The interesting thing is that it committed a move that no human Go player would ever commit. That move led the AI to win a particular match. But as the world watched the commentors said it was a childish and faulty move. They said the algo had obviously failed. Then because of that move the algo went on to win.
Statistics, probability, many other things... but creativity?
M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.
-
Nelek wrote:
Statistics, probability, many other things... but creativity?
I know. I agree with you. But there is a lot more to the story than it would seem, and this is the part that is so interesting and confusing. Algorithms are "learning" by making changes based upon choices they made, and then making changes again, in a huge loop. But, already, the people who've developed these AIs do not know why the AI made a particular decision. In the past, you could say, "well, look here in the source code, there is an if statement and this flag variable." However, the way things are done now, the algorithm tries things and the humans are not even sure why. You'd have to read that entire book to really see how complex it is becoming, but it isn't just pure stats now; it is something a level beyond that. That's why, when the AI that beat the Go world champion made this choice, it was as if it exercised some form of random creativity. It's quite interesting. Read that book and it'll really make you think.
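To illustrate that contrast with a toy example (entirely my own invention, nothing to do with the Go engine or the book): in the old style, the "why" is an if statement and a flag you can point at in the source; in the learned style, the decision comes out of a handful of tuned numbers, and those numbers are the only "explanation" there is.

```
import random

# The old way: the reasoning is right there in the source.
def approve_loan_rules(income, debt):
    has_low_debt = debt < income * 0.4          # the flag variable you can inspect
    return income > 30_000 and has_low_debt

# The "learned" way: three weights tuned by trial and error on made-up history.
weights = [random.uniform(-1, 1) for _ in range(3)]

def approve_loan_learned(income, debt):
    score = (weights[0] * (income / 100_000)
             + weights[1] * (debt / 100_000)
             + weights[2])
    return score > 0

# Invented "historical decisions" to learn from: ((income, debt), approved?)
history = [((45_000, 5_000), True), ((20_000, 15_000), False),
           ((80_000, 60_000), False), ((60_000, 10_000), True)]

# Crude perceptron-style loop: whenever the learned answer disagrees with
# the history, nudge the weights toward the answer we wanted.
for _ in range(10_000):
    (income, debt), wanted = random.choice(history)
    if approve_loan_learned(income, debt) != wanted:
        direction = 1 if wanted else -1
        weights[0] += direction * 0.01 * (income / 100_000)
        weights[1] += direction * 0.01 * (debt / 100_000)
        weights[2] += direction * 0.01

print(weights)  # the trained version's entire "explanation" of its decisions
```

Both functions hand back a yes/no, but only the first can tell you why. With the second, all anyone can do is stare at the weights, which is roughly the position the Go commentators were in.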
-
raddevus wrote:
You'd have to read that entire book to really see how complex it is becoming but it isn't just pure stats now,
I know I should read it to be able to argue with well-founded arguments, but that is not going to happen. I am just giving my opinion on the topic, with a "general user" level of knowledge about it.
raddevus wrote:
Algorithms are "learning" by making changes based upon choices they made, and then making changes again, in a huge loop.
Humans do it too; babies especially learn a lot using the "trial and error" method. Nothing against it.
raddevus wrote:
But, already, the people who've developed these AIs do not know why the AI made a particular decision.
People can be unpredictable too, so it is something one could "live with".
raddevus wrote:
In the past, you could say, "well, look here in the source code, there is an if statement and this flag variable." However, the way things are done now, the algorithm tries things and the humans are not even sure why.
And that's exactly the dangerous part of it. We are trying things where you can't know "a priori" what's going to happen, and not only with AI or in the IT branches. I am not against the advances; I would only wish for a bit more caution in doing things. As someone already said:
Quote:
Humanity gains knowledge way, way faster than it gains wisdom.
Kids usually learn the hard way that starting to run without having learned to walk properly can be painful. The biggest difference is... in these kinds of topics, a few people running before they can walk properly can bring us ALL to a very unpleasant situation.
M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.
-
Been reading various books on AI. The most recent one is really a very interesting thought experiment: Our Final Invention by James Barrat[^]. Some of it may be a bit over the top, but the author does a great job of explaining why a future Artificial SuperIntelligence may off us with no malice.
from the book
You and I are hundreds of times smarter than field mice, and share about 90 percent of our DNA with them. But do we consult them before plowing under their dens for agriculture? Do we ask lab monkeys for their opinions before we crush their heads to learn about sports injuries? We don’t hate mice or monkeys, yet we treat them cruelly. Superintelligent AI won’t have to hate us to destroy us.
Also, we tend to anthropomorphize things (animals, robots, etc) and then believe "they'll think similarly to us." However, a SuperIntelligence probably will not think with the same logic as us:
from the book:
A prerequisite for having a meaningful discussion of superintelligence is the realization that superintelligence is not just another technology, another tool that will add incrementally to human capabilities. Superintelligence is radically different. This point bears emphasizing, for anthropomorphizing superintelligence is a most fecund source of misconceptions. Therefore, anthropomorphizing about machines leads to misconceptions, and misconceptions about how to safely make dangerous machines leads to catastrophes.
The author continues, prompted by Asimov's three laws and how those laws don't really cover the details they would need to if we were ever to meet an Artificial Intelligence.
from the book
And so it goes with every Asimov robot tale—unanticipated consequences result from contradictions inherent in the three laws. Only by working around the laws are disasters averted.
-
What people call AI is no more than some (clever) algorithms. Even so-called human intelligence is basically a set of learned algorithms. The only extra thing that humans have is a form of creativity. If we compared the 'intelligence' of the people that built Stonehenge to our own, they would be considered rather unintelligent, yet we couldn't survive in their time for more than a few days (at most). Basically, AI is the next version of what was called CA* (CAD, CAM, etc). We should call it what it really is: Computer-Aided Decision Making (CADM) :cool: Let's face it: humans are not 'intelligent' enough to fix the current problems in the world, so don't expect them to create 'Artificial Intelligence' :wtf:
-
Actually that is the same argument for why, possibly, aliens would destroy humanity if they ever came here. We are nothing compared to advanced civilizations so they wouldn't care about us. No malicious intent necessary. Lex Fridman had an awesome interview with Michio Kaku where they discuss this. It's super interesting. Here's the link if you want to take a look (starts at 15 minutes): Michio Kaku: Future of Humans, Aliens, Space Travel & Physics | Artificial Intelligence (AI) Podcast[^]
-
Someone asked an ET how many civilizations in the galaxy had android soldiers. The answer was zero, because any civilization that developed them was destroyed by them.
-
CCostaT wrote:
Actually that is the same argument for why, possibly, aliens would destroy humanity if they ever came here
The comparison to aliens arriving is a good one. I was also reading Human Compatible: Artificial Intelligence and the Problem of Control[^]. That author asks something along the lines of: what would we think, and how would we react, if we knew that intelligent aliens would arrive in 10 years? They may very well arrive in the form of AI.
-
Someone asked an ET how many civilizations in the galaxy had android soldiers. The answer was zero, because any civilization that developed them was destroyed by them.
-
It's from the Twitter account @SandiaWisdom, which I've decided is an alter ego of the psychic who runs it.
-
You forgot to credit the WHOPPER with the quote. I remember it as "Would you like to play a game?" Modems and phone phreaking are lost arts. Movie: War Games
englebart wrote:
You forgot to credit the WHOPPER with the quote.
:confused::confused::confused:
englebart wrote:
I remember it as "Would you like to play a game?"
It might be like that. I saw it 20 years ago in Spanish...
englebart wrote:
Movie: War Games
Exactly.
M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.
-
Thanks :)
M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.