Code Project
ASI: Artificial SuperIntelligence

The Lounge · 32 Posts · 14 Posters

This topic has been deleted. Only users with topic management privileges can see it.
raddevus wrote:

That's funny/interesting. Is that a quote from the movie ET or something else?

Bruce Patin (#23):

It's from the Twitter account @SandiaWisdom, which I've decided is an alter ego of the psychic who runs it.
Sander Rossel wrote:

There are a few problems with AI, the first being that we don't even know what intelligence is. I like to point at The Big Bang Theory, where main character Sheldon is supposed to be super smart, yet he can't function in society. In a way, Penny is much smarter than Sheldon despite having about a third of his IQ. I know it's just a show meant for laughs, but that part isn't far-fetched. You've probably heard the tribes-in-the-jungle argument before: they can't do basic math, but they're able to survive out in the jungle, something most of us couldn't. An IQ test tests what we, in the modern West, think a reasonably intelligent person should know, but it's shaped around our current time and place. A tribe member wouldn't score a 1 on an IQ test, but they're still intelligent by their own standards.

So what is intelligence, and how do we test it? The dictionary says "the ability to acquire and apply knowledge and skills." That's a very broad definition, and I'd argue it's not very accurate either. Any "AI" that's around today is nothing more than a machine learning algorithm that finds patterns. Not to downplay the technology, but it's hardly "intelligent". Take the computer that "learned" how to play Super Mario simply by failing thousands of times and then trying something else. By the dictionary definition it "acquired" a skill (playing Super Mario) and then "applied" it (by finishing the level/game). Yet I don't think trial and error would generally be considered intelligent. So at what point do we consider it intelligent?

Then comes the next question: if we don't know what intelligence is, how are we going to recreate it? I find it funny that people are worrying about artificial super intelligence while we don't even have artificial regular intelligence yet. That's not to say computers can't completely elephant us over right now. There are a few supercomputers out there that run very complex computations and simulations, and I wouldn't be surprised if one of them concluded it'd be best if the entire world got nuked and reset :laugh: Still, that would be a completely logical decision, not intelligence :)

      Best, Sander sanderrossel.com Migrating Applications to the Cloud with Azure arrgh.js - Bringing LINQ to JavaScript
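Sander's Super Mario example, "learning" purely by failing and retrying, can be sketched as a toy trial-and-error loop. Everything here (the level, the moves, the survival check) is invented for illustration; it is not how the actual Mario experiment worked, just the pattern-finding-by-failure idea in a few lines:

```python
import random

# Hypothetical "level": the sequence of moves that survives it.
LEVEL = ["run", "jump", "run", "duck", "jump"]
MOVES = ["run", "jump", "duck"]

def attempt(plan):
    """Return how many moves succeeded before the agent 'died'."""
    for i, (move, needed) in enumerate(zip(plan, LEVEL)):
        if move != needed:
            return i
    return len(LEVEL)

best, failures = [], 0
while len(best) < len(LEVEL):
    # Keep the prefix known to work, fill the rest with random guesses.
    plan = best + [random.choice(MOVES) for _ in range(len(LEVEL) - len(best))]
    survived = attempt(plan)
    if survived > len(best):
        best = plan[:survived]  # remember everything that survived
    else:
        failures += 1

print(f"solved after {failures} failed runs: {best}")
```

By the dictionary definition the loop "acquires" and "applies" a skill, yet it is nothing but memorized failure.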

Dave DD (#24):

As someone who has taken an IQ test administered by doctors of psychology, I can assure you that jungle-dwelling indigenous peoples would score much higher than 1. A proper IQ test covers much more than book learning.

Nelek wrote:

Sander Rossel wrote:

I find it funny that people are worrying about artificial super intelligence while we don't even have artificial regular intelligence yet.

I am not worried about super AI, because I agree with you. I am worried about idiots in charge (and in society itself) giving these fallible and not-so-intelligent systems so much power.

Sander Rossel wrote:

There's a few super computers out there that run very complex computations and simulations and I wouldn't be surprised if one of them concludes it'd be best if the entire world got nuked and reset

Exactly... Hello David, do you want to play a game...?

M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.

englebart (#25):

        You forgot to credit the WHOPPER with the quote. I remember it as "Would you like to play a game?" Modems and phone phreaking are lost arts. Movie: War Games

Nelek (#26):

englebart wrote:

You forgot to credit the WHOPPER with the quote.

:confused::confused::confused:

englebart wrote:

I remember it as "Would you like to play a game?"

And it might be like that. I saw it 20 years ago in Spanish...

englebart wrote:

Movie: War Games

Exactly.


raddevus wrote:

Fantastic points. I agree. Great post. :thumbsup:

Nelek (#27):

            Thanks :)



englebart (#28):

My faulty Spanish: "¿Te gustaría jugar un juego?" I just found the original English: "Shall we play a game?" My brain stored it closer to your translation than to the original. I think I saw the original version in the theater as a teenager. Soy viejo (I'm old).

raddevus wrote:

I've been reading various books on AI. The most recent, Our Final Invention by James Barrat, is really a very interesting thought experiment. Some of it may be a bit over the top, but the author does a great job of explaining why a future Artificial SuperIntelligence may off us with no malice.

From the book:

                You and I are hundreds of times smarter than field mice, and share about 90 percent of our DNA with them. But do we consult them before plowing under their dens for agriculture? Do we ask lab monkeys for their opinions before we crush their heads to learn about sports injuries? We don’t hate mice or monkeys, yet we treat them cruelly. Superintelligent AI won’t have to hate us to destroy us.

                Also, we tend to anthropomorphize things (animals, robots, etc) and then believe "they'll think similarly to us." However, a SuperIntelligence probably will not think with the same logic as us:

From the book:

                A prerequisite for having a meaningful discussion of superintelligence is the realization that superintelligence is not just another technology, another tool that will add incrementally to human capabilities. Superintelligence is radically different. This point bears emphasizing, for anthropomorphizing superintelligence is a most fecund source of misconceptions. Therefore, anthropomorphizing about machines leads to misconceptions, and misconceptions about how to safely make dangerous machines leads to catastrophes.

The author continues with Asimov's three laws and how those laws don't actually cover the details they would need to if we were ever to meet an Artificial Intelligence.

From the book:

                And so it goes with every Asimov robot tale—unanticipated consequences result from contradictions inherent in the three laws. Only by working around the laws are disasters averted.

Kirk 10389821 (#29):

The scariest thing I read was this. Given that General AI isn't a thing yet, and that is what to be afraid of... But also given that, if you think about what General AI is, it's actually an AI that manages specific AIs with feedback on how each performed, and limited resources (you can only play so many games of Go, or the chess games have to stop)...

If they find a SIMPLE enough way to represent the "many personalities" of general intelligence as a game dashboard of options, where the main AI gets rewarded for making all of its AI personalities top-notch while managing the CPU/resources... I fear we are in trouble. And since a moron like myself can put it in such simple terms, it scares me, because SOMEONE has to be working on exactly this. (This is a riff on the Hopfield Hierarchical Network, but at more of an architectural level, IMO.)

Thanks for sharing!
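Kirk's dashboard idea, a main AI that spends a limited compute budget on specialist AIs and is judged on their scores, can be sketched in a few lines. The specialists, skill numbers, and training model below are all invented for illustration, not any real system:

```python
import random

random.seed(0)  # deterministic for the sake of the example

# Current skill of each specialist AI, on a made-up 0..1 scale.
specialists = {"go": 0.2, "chess": 0.5, "piano": 0.1}
BUDGET = 30  # total training slots the controller may spend

def train(skill):
    """One training slot: noisy improvement with diminishing returns."""
    return min(1.0, skill + random.uniform(0.0, 0.1) * (1.0 - skill))

for _ in range(BUDGET):
    # Greedy controller: always spend the next slot on the weakest AI,
    # since its reward depends on making ALL personalities top-notch.
    weakest = min(specialists, key=specialists.get)
    specialists[weakest] = train(specialists[weakest])

print({name: round(skill, 2) for name, skill in specialists.items()})
```

A real controller would presumably use something smarter than pure greedy allocation (a bandit-style strategy, say), but even this toy version captures the reward-for-balanced-competence loop Kirk describes.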

Nelek (#30):

I saw it on TV as a kid. I am not that old, but not a youngster either ;)


raddevus (#31):

Kirk 10389821 wrote:

If they find a SIMPLE enough way to represent the "Many Personalities" of General Intelligence as a game dashboard of option, where the main AI gets rewards for making all of his AI Personalities top-notch, while managing the CPU/Resources...

That's a really great high-level design idea and it makes sense. The AI becomes the maintainer of all the other AIs and ensures they are all top-notch, while needing no sleep or food or anything. And they would all just keep getting better and better.

Kirk 10389821 (#32):

Yeah, there is a bleed-over concept that is missing. But "magic insight" happens when we apply theory from one area of expertise to another (picture the math behind music, or using meditation to eventually learn to control one's blood pressure on demand).

So this would be an "integrator AI" piece. Its job would be to see if any tricks learned by ANY ONE AI could be reflected as a strategy/paradigm shift in another AI. Could a strategy that works at Go somehow be optimized into learning to fly, or playing chess, or playing the piano?

Another interesting idea is that as humans we struggle with energy/stamina. Sometimes we optimize things PURELY to conserve energy (System 1 vs. System 2 for quick analysis). AIs do not suffer from this, but they don't gain from it either. Also, I believe a lot of training is adversarial, GAN-style (working against another AI). Being able to swap sides, or choose various AIs to test our skills against, to see if they are worth adopting...

To be clear: humans take experience from playing piano and carry it into other areas pretty easily. A segmented AI would have to work at this. Once it could do that, and manage multiple personalities, I think we are there...
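The "integrator AI" pass Kirk describes, trying each specialist's trick in every other domain and adopting it only where it actually scores better, could look like this toy loop. The domains, "tricks", and the score table are entirely made up for illustration:

```python
def score(domain, trick):
    """Toy evaluation: how well a trick performs in a domain (0..1)."""
    table = {
        ("go", "plan_ahead"): 0.9, ("chess", "plan_ahead"): 0.8,
        ("piano", "plan_ahead"): 0.3, ("go", "drill_scales"): 0.2,
        ("chess", "drill_scales"): 0.2, ("piano", "drill_scales"): 0.9,
    }
    return table.get((domain, trick), 0.0)

# What each specialist currently relies on (chess is stuck with a bad fit).
adopted = {"go": "plan_ahead", "chess": "drill_scales", "piano": "drill_scales"}

# Integrator pass: audition every known trick in every other domain,
# keeping it only where it measurably beats the incumbent.
for trick in set(adopted.values()):
    for domain in adopted:
        if score(domain, trick) > score(domain, adopted[domain]):
            adopted[domain] = trick  # cross-domain transfer paid off

print(adopted)
```

Here the Go specialist's "plan_ahead" trick transfers to chess, while piano keeps its own, which is the kind of cross-pollination humans do almost for free.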
