ASI: Artificial SuperIntelligence

  • R raddevus

Been reading various books on AI. The most recent one is a really interesting thought experiment: Our Final Invention by James Barrat [^]. Some of it may be a bit over the top, but the author does a great job of explaining why a future Artificial SuperIntelligence may off us with no malice.

    from the book

    You and I are hundreds of times smarter than field mice, and share about 90 percent of our DNA with them. But do we consult them before plowing under their dens for agriculture? Do we ask lab monkeys for their opinions before we crush their heads to learn about sports injuries? We don’t hate mice or monkeys, yet we treat them cruelly. Superintelligent AI won’t have to hate us to destroy us.

    Also, we tend to anthropomorphize things (animals, robots, etc) and then believe "they'll think similarly to us." However, a SuperIntelligence probably will not think with the same logic as us:

    from the book:

    A prerequisite for having a meaningful discussion of superintelligence is the realization that superintelligence is not just another technology, another tool that will add incrementally to human capabilities. Superintelligence is radically different. This point bears emphasizing, for anthropomorphizing superintelligence is a most fecund source of misconceptions. Therefore, anthropomorphizing about machines leads to misconceptions, and misconceptions about how to safely make dangerous machines leads to catastrophes.

The author goes on to discuss Asimov's three laws and how those laws don't actually cover what they would need to if we were ever to meet an Artificial Intelligence.

    from the book

    And so it goes with every Asimov robot tale—unanticipated consequences result from contradictions inherent in the three laws. Only by working around the laws are disasters averted.

Bruce Patin wrote (#19):

    Someone asked an ET how many civilizations in the galaxy had android soldiers. The answer was zero, because any civilization that developed them was destroyed by them.

    • N Nelek

      raddevus wrote:

      You'd have to read that entire book to really see how complex it is becoming but it isn't just pure stats now,

I know I should read it to be able to argue from a solid basis, but that is not going to happen. I am just giving my opinion on the topic, with a "general user" level of knowledge.

      raddevus wrote:

      Algorithms are "learning" making changes based upon choices they made and then making changes again, in a huge loop.

Humans do it too; especially babies learn a lot using the "trial and error" method. Nothing against it.

      raddevus wrote:

      But, already, the people who've developed these AIs do not know why the AI made a particular decision.

People can be unpredictable too, so it is something one could "live with".

      raddevus wrote:

In the past, you could say, "well, look here in the source code: there is an if statement and this flag variable." However, the way things are done now, the algorithm tries things and the humans are not even sure why.

And that's exactly the dangerous part of it. We are trying things where you can't know a priori what's going to happen. And not only with AI or in the IT branches. I am not against the advances, I would only wish for a bit more caution when doing things. As someone already said:

      Quote:

Humanity gains knowledge way, way faster than it gains wisdom.

Kids usually learn the hard way that starting to run without having learned to walk properly can be painful. The biggest difference is... in these kinds of topics, a few people running before they can walk properly can bring us ALL into a very unpleasant situation.

      M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.

raddevus wrote (#20):

      Fantastic points. I agree. Great post. :thumbsup:

      • C CCostaT

        Actually that is the same argument for why, possibly, aliens would destroy humanity if they ever came here. We are nothing compared to advanced civilizations so they wouldn't care about us. No malicious intent necessary. Lex Fridman had an awesome interview with Michio Kaku where they discuss this. It's super interesting. Here's the link if you want to take a look (starts at 15 minutes): Michio Kaku: Future of Humans, Aliens, Space Travel & Physics | Artificial Intelligence (AI) Podcast[^]

raddevus wrote (#21):

        CCostaT wrote:

        Actually that is the same argument for why, possibly, aliens would destroy humanity if they ever came here

The comparison to aliens arriving is a good one. I was also reading Human Compatible: Artificial Intelligence and the Problem of Control[^]. That author asks something along the lines of: what would we think, and how would we react, if we knew that intelligent aliens would arrive in 10 years? They may very well arrive in the form of AI.

        • B Bruce Patin

          Someone asked an ET how many civilizations in the galaxy had android soldiers. The answer was zero, because any civilization that developed them was destroyed by them.

raddevus wrote (#22):

That's funny/interesting. Is that a quote from the movie ET or something else?

          • R raddevus

That's funny/interesting. Is that a quote from the movie ET or something else?

Bruce Patin wrote (#23):

            It's from the Twitter account @SandiaWisdom, which I've decided is an alter ego of the psychic who runs it.

• S Sander Rossel

There are a few problems with AI, the first being that we don't even know what intelligence is. I like to point at The Big Bang Theory, where main character Sheldon is supposed to be super smart, yet he can't function in society. In a way, Penny is much smarter than Sheldon despite having about a third of his IQ. I know it's just a show meant for laughs, but that part isn't far-fetched. You've probably heard the tribes-in-the-jungle argument before: they can't do basic math, but they're able to survive out in the jungle, something most of us couldn't. An IQ test tests what we, in the modern West, think a reasonably intelligent person should know, but it's shaped around our current time and place. A tribe member would probably score a 1 on an IQ test, but they're still intelligent by their own standards. So what is intelligence and how do we test it? The dictionary says "the ability to acquire and apply knowledge and skills." That's a very broad definition, and I'd argue it's not very accurate either.

Any "AI" that's around today is nothing more than a machine learning algorithm that just finds patterns. Not to downplay the technology, but it's hardly "intelligent". Take that computer that "learned" how to play Super Mario simply by failing thousands of times and then trying something else. By the dictionary definition it "acquired" a skill (playing Super Mario) and then "applied" it (by finishing the level/game). Yet I don't think trial and error would generally be considered intelligent. So at what point do we consider it intelligent?

Then comes the next question: if we don't know what intelligence is, how are we going to recreate it? I find it funny that people are worrying about artificial super intelligence while we don't even have artificial regular intelligence yet. That's not to say computers can't completely elephant us over right now. There are a few supercomputers out there that run very complex computations and simulations, and I wouldn't be surprised if one of them concluded it'd be best if the entire world got nuked and reset :laugh: Still, that would be a completely logical decision, not intelligence :)

              Best, Sander sanderrossel.com Migrating Applications to the Cloud with Azure arrgh.js - Bringing LINQ to JavaScript
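To make the "trial and error" point above concrete, here is a minimal Python sketch of that kind of loop: generate random action sequences, score them against a toy stand-in for the level, and keep whatever scored best so far. The action names, the scoring function and the loop are hypothetical illustrations, not the actual Mario experiment.

```python
import random

# Toy stand-in for a game level: the "environment" scores an action
# sequence and the loop blindly keeps whatever scored best so far.
ACTIONS = ["left", "right", "jump", "run"]

def score_run(actions):
    # Hypothetical scoring: reward moving right, small bonus for jumping.
    return sum(2 if a == "right" else 1 if a == "jump" else 0 for a in actions)

def trial_and_error(generations=1000, length=20):
    best, best_score = None, float("-inf")
    for _ in range(generations):
        candidate = [random.choice(ACTIONS) for _ in range(length)]
        s = score_run(candidate)
        if s > best_score:  # "fail thousands of times, keep what worked"
            best, best_score = candidate, s
    return best, best_score

if __name__ == "__main__":
    seq, score = trial_and_error()
    print(score, seq[:5])
```

Nothing in this loop "understands" the game; it just keeps the highest-scoring sequence it stumbled into, which is the sense in which it "acquires and applies" a skill.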

Dave DD wrote (#24):

As someone who has taken an IQ test administered by doctors of psychology, I can assure you that jungle-dwelling indigenous peoples would score much higher than 1. A proper IQ test covers much more than book learning.

              • N Nelek

                Sander Rossel wrote:

                I find it funny that people are worrying about artificial super intelligence while we don't even have artificial regular intelligence yet.

I am not worried about super AI, because I agree with you. I am worried about idiots in charge (and in society itself) giving the fallible and not-so-intelligent systems so much power.

                Sander Rossel wrote:

There are a few supercomputers out there that run very complex computations and simulations, and I wouldn't be surprised if one of them concluded it'd be best if the entire world got nuked and reset

                Exactly... Hello David, do you want to play a game...?

                M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.

englebart wrote (#25):

                You forgot to credit the WHOPPER with the quote. I remember it as "Would you like to play a game?" Modems and phone phreaking are lost arts. Movie: War Games

                • E englebart

                  You forgot to credit the WHOPPER with the quote. I remember it as "Would you like to play a game?" Modems and phone phreaking are lost arts. Movie: War Games

Nelek wrote (#26):

                  englebart wrote:

                  You forgot to credit the WHOPPER with the quote.

                  :confused::confused::confused:

                  englebart wrote:

                  I remember it as "Would you like to play a game?"

And it might be like that. I saw it 20 years ago in Spanish...

                  englebart wrote:

                  Movie: War Games

                  Exactly.

                  M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.

                  • R raddevus

                    Fantastic points. I agree. Great post. :thumbsup:

Nelek wrote (#27):

                    Thanks :)

                    M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.

                    • N Nelek

                      englebart wrote:

                      You forgot to credit the WHOPPER with the quote.

                      :confused::confused::confused:

                      englebart wrote:

                      I remember it as "Would you like to play a game?"

And it might be like that. I saw it 20 years ago in Spanish...

                      englebart wrote:

                      Movie: War Games

                      Exactly.

                      M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.

englebart wrote (#28):

                      My faulty Spanish: ¿Te gustaría jugar un juego? I just found the original English: "Shall we play a game?" My brain stored it closer to your translation than the original. I think I saw the original version in the theater as a teenager. Soy viejo.

                      • R raddevus

Been reading various books on AI. The most recent one is a really interesting thought experiment: Our Final Invention by James Barrat [^]. Some of it may be a bit over the top, but the author does a great job of explaining why a future Artificial SuperIntelligence may off us with no malice.

                        from the book

                        You and I are hundreds of times smarter than field mice, and share about 90 percent of our DNA with them. But do we consult them before plowing under their dens for agriculture? Do we ask lab monkeys for their opinions before we crush their heads to learn about sports injuries? We don’t hate mice or monkeys, yet we treat them cruelly. Superintelligent AI won’t have to hate us to destroy us.

                        Also, we tend to anthropomorphize things (animals, robots, etc) and then believe "they'll think similarly to us." However, a SuperIntelligence probably will not think with the same logic as us:

                        from the book:

                        A prerequisite for having a meaningful discussion of superintelligence is the realization that superintelligence is not just another technology, another tool that will add incrementally to human capabilities. Superintelligence is radically different. This point bears emphasizing, for anthropomorphizing superintelligence is a most fecund source of misconceptions. Therefore, anthropomorphizing about machines leads to misconceptions, and misconceptions about how to safely make dangerous machines leads to catastrophes.

The author goes on to discuss Asimov's three laws and how those laws don't actually cover what they would need to if we were ever to meet an Artificial Intelligence.

                        from the book

                        And so it goes with every Asimov robot tale—unanticipated consequences result from contradictions inherent in the three laws. Only by working around the laws are disasters averted.

Kirk 10389821 wrote (#29):

The scariest thing I read was this: General AI isn't a thing yet, and that is what to be afraid of... But suppose you thought of General AI as an AI that manages Specific AIs, with feedback on how each performed and limited resources (you can only play so many games of Go, or the chess games have to stop)... If they find a SIMPLE enough way to represent the "Many Personalities" of General Intelligence as a game dashboard of options, where the main AI gets rewards for making all of its AI Personalities top-notch while managing the CPU/resources... I fear we are in trouble. And since a moron like myself can put it in such simple terms, it scares me, because SOMEONE has to be working on exactly this. (This is a riff on the Hopfield Hierarchical Network, but at more of an architectural level, IMO.) Thanks for sharing!
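To make the "manager AI over specialist AIs" picture above concrete, here is a toy Python sketch under that assumption: a manager hands out a fixed compute budget one slice at a time, giving each slice to whichever specialist's score improved most recently. The Specialist and Manager classes, the pretend training step and the reward rule are all made-up illustrations, not any real system.

```python
import random

class Specialist:
    """A task-specific AI that pretends to train and reports its score."""
    def __init__(self, name):
        self.name = name
        self.score = 0.0

    def train_step(self):
        # Pretend training: noisy, diminishing returns as the score grows.
        gain = random.random() / (1.0 + self.score)
        self.score += gain
        return gain

class Manager:
    """Hands out a fixed compute budget based on each specialist's recent payoff."""
    def __init__(self, specialists, budget):
        self.specialists = specialists
        self.budget = budget  # total "CPU slices" available
        self.recent_gain = {s.name: 1.0 for s in specialists}

    def run(self):
        for _ in range(self.budget):
            # Give the next slice to the specialist with the best recent payoff.
            chosen = max(self.specialists, key=lambda s: self.recent_gain[s.name])
            self.recent_gain[chosen.name] = chosen.train_step()
        return {s.name: round(s.score, 2) for s in self.specialists}

if __name__ == "__main__":
    mgr = Manager([Specialist(n) for n in ("go", "chess", "piano")], budget=300)
    print(mgr.run())
```

The manager itself knows nothing about Go or chess; it only sees scores and a budget, which is the "game dashboard" framing in the post above.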

                        • E englebart

                          My faulty Spanish: ¿Te gustaría jugar un juego? I just found the original English: "Shall we play a game?" My brain stored it closer to your translation than the original. I think I saw the original version in the theater as a teenager. Soy viejo.

Nelek wrote (#30):

I saw it on TV as a kid. I am not that old, but not a youngster either ;)

                          M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.

                          • K Kirk 10389821

The scariest thing I read was this: General AI isn't a thing yet, and that is what to be afraid of... But suppose you thought of General AI as an AI that manages Specific AIs, with feedback on how each performed and limited resources (you can only play so many games of Go, or the chess games have to stop)... If they find a SIMPLE enough way to represent the "Many Personalities" of General Intelligence as a game dashboard of options, where the main AI gets rewards for making all of its AI Personalities top-notch while managing the CPU/resources... I fear we are in trouble. And since a moron like myself can put it in such simple terms, it scares me, because SOMEONE has to be working on exactly this. (This is a riff on the Hopfield Hierarchical Network, but at more of an architectural level, IMO.) Thanks for sharing!

raddevus wrote (#31):

                            Kirk 10389821 wrote:

If they find a SIMPLE enough way to represent the "Many Personalities" of General Intelligence as a game dashboard of options, where the main AI gets rewards for making all of its AI Personalities top-notch while managing the CPU/resources...

That's a really great high-level design idea and it makes sense. Then the AI is the maintainer of all the other AIs and ensures they are all top-notch, while needing no sleep or food or anything. And they would all just keep getting better and better.

                            • R raddevus

                              Kirk 10389821 wrote:

If they find a SIMPLE enough way to represent the "Many Personalities" of General Intelligence as a game dashboard of options, where the main AI gets rewards for making all of its AI Personalities top-notch while managing the CPU/resources...

That's a really great high-level design idea and it makes sense. Then the AI is the maintainer of all the other AIs and ensures they are all top-notch, while needing no sleep or food or anything. And they would all just keep getting better and better.

Kirk 10389821 wrote (#32):

Yeah, there is a bleed-over concept that is missing. But "magic insight" happens when we apply theory from one area of expertise to another (picture the math behind music, or using meditation to eventually learn to control one's blood pressure on demand)... So this would be an "integrator AI" piece. Its job would be to see if any tricks learned by ANY ONE AI could be reflected as a strategy/paradigm shift in another AI. Could a strategy that works at Go somehow be carried over into learning to fly, or playing chess, or playing the piano?

One of the other interesting ideas is that as humans we struggle with energy/stamina. Sometimes we optimize things PURELY to conserve energy (System 1 vs. System 2 for quick analysis). AIs do not suffer from this, but they don't gain from it either. Also, I believe a lot of training is GAN-style (working against another AI). Being able to swap sides, or choose various AIs to test our skills against, to see if they are worth adopting...

To be clear: humans take experience from playing the piano and carry it into other areas pretty easily. A segmented AI would have to work at this. Once it could do that, and manage multiple personalities, I think we are there...
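As a rough illustration of the "integrator AI" idea above, here is a small Python sketch under those assumptions: each task's "skill" is just a vector, and the integrator tries blending one task's skill into another, keeping the blend only if the destination task's score actually improves and rolling it back otherwise. The task names, scoring rule and blending step are all hypothetical.

```python
import random

random.seed(0)
TASKS = ["go", "chess", "piano"]

# Each task has a hidden "ideal" skill vector; its score is how close the
# current skill is to that ideal (negative squared distance).
ideal = {t: [random.uniform(-1, 1) for _ in range(4)] for t in TASKS}
skill = {t: [0.0] * 4 for t in TASKS}

def score(task):
    return -sum((a - b) ** 2 for a, b in zip(skill[task], ideal[task]))

def train(task, steps=3):
    # Crude "training": nudge the task's skill partway toward its own ideal.
    for _ in range(steps):
        skill[task] = [s + 0.3 * (i - s) for s, i in zip(skill[task], ideal[task])]

def try_transfer(src, dst):
    """Blend src's skill into dst; keep the blend only if dst's score improves."""
    before = score(dst)
    backup = skill[dst][:]
    skill[dst] = [(a + b) / 2 for a, b in zip(skill[dst], skill[src])]
    if score(dst) <= before:
        skill[dst] = backup  # the transferred trick didn't help, roll it back
        return False
    return True

if __name__ == "__main__":
    for t in TASKS:
        train(t)
    for src in TASKS:
        for dst in TASKS:
            if src != dst:
                print(f"{src} -> {dst}: {'kept' if try_transfer(src, dst) else 'rolled back'}")
```

The interesting part is the keep-or-roll-back check: the integrator never needs to understand either task, it only needs a way to measure whether a borrowed trick helped.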
