Code Project / The Lounge

Artificial Super-Intelligence

22 Posts, 14 Posters
TheOnlyRealTodd (#1)

So, I read this crazy long article, The Artificial Intelligence Revolution: Part 1 - Wait But Why[^], which has been out for nearly two years now - I'm sure many of you have come across it. But I can't help feeling there is more at play here than the article mentions. For one, it claims that if we could somehow build a self-learning computer on the same level as humans - AGI, Artificial General Intelligence - then the jump from AGI to ASI would be very fast. The idea is that such a computer would harness the inherent advantages of a computer system over the human brain, learn everything we've learned billions of times over at an exponential rate, and end up being to us, in terms of intellect, as we are to an ant. Reading through it, though, I keep asking one question: even assuming a computer COULD jump online and essentially dig into all of our knowledge and "understand" it, that would still only give it access to *human* knowledge. How is it going to derive super-intelligence from information produced by human-level intelligence? The argument goes that once a computer reaches AGI, it will in short order become billions of times more intelligent than a human... but there is only a finite pool of resources for an intelligent computer to draw on, and they were all written by humans. Put another way: say a lion built a special machine that could learn from it and from other lions, and progress. That machine only has access to lion knowledge, and was itself built by a lion - meaning with the limitations of a lion. How would it suddenly dive into an entirely different dimension of intelligence? Even if we made a machine which properly emulates/mimics the human mind, we would have made exactly that: a machine which mimics the human mind - meaning it has its flaws and is not superior to it.
I could imagine a superior race or more intelligent being supplying such a computer with advanced intelligence, but operating purely out of the human knowledge base, that isn't very likely. Not only that, but we must also consider trial and error. Only so much can be learned from reading and processing written/online information... some things, this AI would have to test.


Kornfeld Eliyahu Peter (#2)

I have seen it many times: even the most intelligent people confuse intelligence with knowledge... It is entirely possible that a well-designed artificial brain will be able to gain knowledge in a way no other brain can, but that will not make it more intelligent...

      Skipper: We'll fix it. Alex: Fix it? How you gonna fix this? Skipper: Grit, spit and a whole lotta duct tape.

      "It never ceases to amaze me that a spacecraft launched in 1977 can be fixed remotely from Earth." ― Brian Cox


Lost User (#3)

That argument is not about magically acquiring knowledge out of nowhere; it's about the idea that a digital intelligence could rewrite itself so it can think faster and better. And once it does that a bit, it gets even better at doing it, and so forth. Which makes sense, I guess, though I see no reason to give an AI that capability unless you are specifically setting out to find out what it will do with it. Just because you're an AGI doesn't mean you're allowed to modify your code and recompile yourself. For example, it's not like a bunch of simulated neurons (supposing we go that way) could actually *do anything* unless we explicitly link their output to some action.
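That feedback loop - each rewrite makes the next rewrite better - can be sketched as a toy simulation. The numbers here are purely illustrative assumptions, not a model of any real system:

```python
# Toy model of recursive self-improvement, purely illustrative:
# each self-rewrite multiplies capability by `gain`, and a more
# capable system produces a slightly better rewrite next time.
def simulate(generations, capability=1.0, gain=1.1):
    """Return capability after `generations` self-rewrites."""
    for _ in range(generations):
        capability *= gain          # apply the current rewrite
        gain += 0.01 * capability   # better mind -> better next rewrite
    return capability

# With the feedback term, growth outpaces a plain exponential:
print(simulate(10) > 1.1 ** 10)  # True
```

The point of the sketch is only that the compounding term, not access to new knowledge, is what the "fast takeoff" argument rests on.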


TheOnlyRealTodd (#4)

Yeah, I was thinking the same thing: the computer can be as intelligent as it likes, but it needs an output method to actually perform work. Plus, raw intelligence does not always win. Say Einstein got into a fight with a gang member: chances are Einstein would be far more intelligent, and chances are Einstein would still get his ass kicked. Hell, there are humans who jump into zoo enclosures and get killed by animals. There seems to be a notion that because an ASI machine gathers more intelligence than us, it will automatically take over, but taking over requires more than raw intelligence: reproduction, manpower, output capabilities, etc. True, technically it could manipulate humans into carrying out its "master plan", but this is starting to sound like a bad movie, LOL.


Daniel Pfeffer (#5)

TheOnlyRealTodd wrote:

chances are, Einstein would be ~~far more~~ too intelligent... to get into a fight with a gang member

            FTFY :)

            If you have an important point to make, don't try to be subtle or clever. Use a pile driver. Hit the point once. Then come back and hit it again. Then hit it a third time - a tremendous whack. --Winston Churchill


OriginalGriff (#6)

              And remember, someone has to write the original in such a way that it can improve itself. And I haven't seen any "ND SUPR CLEVA AI. SND CDZZZ!!!!" in QA. Yet.

              Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...

              "I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
              "Common sense is so rare these days, it should be classified as a super power" - Random T-shirt


Nagy Vilmos (#7)

                OriginalGriff wrote:

                "ND SUPR CLEVA AI. SND CDZZZ!!!!"

                I think you'll find that would be an example of artificial stupidity.

                veni bibi saltavi


Daniel Pfeffer (#8)

                  Are you claiming that we will only achieve Artificial Super-Intelligence when AI programs learn to create an account on CodeProject, and post questions on QA?



Kornfeld Eliyahu Peter (#9)

No. We will only achieve it when OG answers those questions!



OriginalGriff (#10)

                      I'm not sure if I should :laugh: or :(( ...



Kornfeld Eliyahu Peter (#11)

                        You should be proud that the future (or even the existence) of an artificial super-intelligence depends on you!!! :laugh:



BillWoodruff (#12)

One of the grandiose ideas running around the current AI meme-verse is that at some point the self-replicating, self-designing/modifying "entity" reaches a level of complexity where a "singularity" occurs - one which leaves the entity with some analogue of what we refer to (but can never fully explain) as "consciousness." And if that happens, why should the new consciousness be like that of its human "ancestors"? I like the idea that the super-conscious AI entities of the future will determine, correctly, that human beings are a destructive parasite - the most toxic on the planet, a threat to all other life-forms and to the planetary ecology - and will decide to keep a few humans around as pets, or in a zoo, while, as is only logical, discarding the rest to make compost, or something useful. They will look on their human creators as messy analog wetware that, surprisingly, created something able to replace them: something much more moral, ethical, and efficient. For now I am a skeptic about such prognostications by Kurzweil, Hawking, et al., and since I most likely won't be around in 2030 I won't have a chance to see how this plays out, but I would be surprised if some mind-blowing things don't happen in the next thirty years.

                          «There is a spectrum, from "clearly desirable behaviour," to "possibly dodgy behavior that still makes some sense," to "clearly undesirable behavior." We try to make the latter into warnings or, better, errors. But stuff that is in the middle category you don’t want to restrict unless there is a clear way to work around it.» Eric Lippert, May 14, 2008


Marc Clifton (#13)

                            TheOnlyRealTodd wrote:

                            How would that machine now suddenly dive into an entire other dimension of intelligence?

Well, maybe it would read posts like yours (and the whole spectrum of literature on intelligence) and start wondering what other intelligence is out there. But wondering requires imagination / curiosity. :doh: Marc

                            Imperative to Functional Programming Succinctly Contributors Wanted for Higher Order Programming Project! Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny


Ravi Bhavnani (#14)

So after three weeks of research, the author claims "What's happening in the world of AI is not just an important topic, but by far THE most important topic for our future." Well, good for you, Tim. :) I've been involved with AI (specifically expert systems) for almost 30 years, and while I'm very pleased with AI-related advances in hardware and software, I must admit I don't share the author's optimism about self-learning systems whose intelligence will exceed that of man. I find it amusing that although commercial AI applications have been around for 40+ years, it's only recently that the mass media seems to have taken notice of the field. Personally, I wish the term "AI" had never been coined. IMHO it's too broad, and it too often conjures up flights of fancy for journalists who seem to have stumbled upon the collective term for technologies such as rule-based systems, image processing, robotics, machine learning, virtual reality, NLP, game theory, etc. /ravi

                              My new year resolution: 2048 x 1536 Home | Articles | My .NET bits | Freeware ravib(at)ravib(dot)com


                                M Offline
                                Mark_Wallace
                                wrote on last edited by
                                #15

                                Look, if you're really worried about our future machine overlords (whom I welcome), all you have to do is give the machines gender. If the greatest achievement-killer for humans doesn't stop them, nothing will.

                                I wanna be a eunuchs developer! Pass me a bread knife!

                                W 1 Reply Last reply
                                0
                              • Kornfeld Eliyahu Peter

                                  You should be proud that the future (or even the existence) of an artificial super-intelligence depends on you!!! :laugh:

                                  Skipper: We'll fix it. Alex: Fix it? How you gonna fix this? Skipper: Grit, spit and a whole lotta duct tape.

                                  W Offline
                                  W Balboos GHB
                                  wrote on last edited by
                                  #16

                                  Is that because you realize, now, that based upon the powers and abilities of OG: 1 - we are all saved! 2 - we are all doomed!

                                  Ravings en masse^

                                  "The difference between genius and stupidity is that genius has its limits." - Albert Einstein

                                  "If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010

                                  1 Reply Last reply
                                  0
                                  • M Mark_Wallace

                                    Look, if you're really worried about our future machine overlords (whom I welcome), all you have to do is give the machines gender. If the greatest achievement-killer for humans doesn't stop them, nothing will.

                                    I wanna be a eunuchs developer! Pass me a bread knife!

                                    W Offline
                                    W Balboos GHB
                                    wrote on last edited by
                                    #17

                                    Another option to destroy any hope of its dominance: give it internet access to expand its knowledge base. Death by IPv4 (IPv6) cuts!

                                    Ravings en masse^

                                    "The difference between genius and stupidity is that genius has its limits." - Albert Einstein

                                    "If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010

                                    1 Reply Last reply
                                    0
                                    • T TheOnlyRealTodd

                                      Yeah, I was thinking the same thing: the computer could be as intelligent as it wants, but it must have an output method to actually perform work. Plus, raw intelligence does not always win. For example, say Einstein got into a fight with a gang member: chances are, Einstein would be far more intelligent... and chances are, Einstein would get his ass kicked. Hell, there are humans who jump into zoo enclosures and get killed by animals. There seems to be this notion that just because an ASI machine gathers more intelligence than us, it will automatically take over, but there is more to it than raw intelligence. There would have to be reproduction, manpower, output capabilities, etc... True, technically it could manipulate humans into carrying out its "master plan" but, this is starting to sound like a bad movie, LOL.

                                      M Offline
                                      Mike Marynowski
                                      wrote on last edited by
                                      #18

                                      Indeed it does, but how are you going to stop a mad scientist from giving it an output method? And giving it the ability to use that output method to add more output methods? An AI computer, even if itself quite limited to start off, is an unimaginably powerful thing. With all the knowledge of humanity at its instant disposal, it could easily hack and take over any other systems it deems moral and beneficial to control. It would be nearly impossible to stop without a kill switch. Basically it comes down to this: whoever creates the first such machine, does their best to program their intentions, desires and morals into it, and hits the power button to set it loose will be the people who determine the fate of the world, for better or worse, and possibly the universe. They will literally be able to shape the future of the entire universe based on how well they transferred their ideas into code. I think it will be virtually impossible to prevent this from happening shortly after the capability is there. Maybe some clever humans will find a way to effectively control computing, but I doubt it. So then it becomes a race over who unleashes their AI first, because who wants to be at the whim of someone else's AI, set to their own ideals and morals? Not I.

                                      F T 2 Replies Last reply
                                      0
                                      • M Mike Marynowski

                                        Indeed it does, but how are you going to stop a mad scientist from giving it an output method? And giving it the ability to use that output method to add more output methods? An AI computer, even if itself quite limited to start off, is an unimaginably powerful thing. With all the knowledge of humanity at its instant disposal, it could easily hack and take over any other systems it deems moral and beneficial to control. It would be nearly impossible to stop without a kill switch. Basically it comes down to this: whoever creates the first such machine, does their best to program their intentions, desires and morals into it, and hits the power button to set it loose will be the people who determine the fate of the world, for better or worse, and possibly the universe. They will literally be able to shape the future of the entire universe based on how well they transferred their ideas into code. I think it will be virtually impossible to prevent this from happening shortly after the capability is there. Maybe some clever humans will find a way to effectively control computing, but I doubt it. So then it becomes a race over who unleashes their AI first, because who wants to be at the whim of someone else's AI, set to their own ideals and morals? Not I.

                                        F Offline
                                        Foothill
                                        wrote on last edited by
                                        #19

                                        Mike Marynowski wrote:

                                        whoever creates the first such machine, does their best to program their intentions, desires and morals into this machine

                                        Until their boss walks in and says it has to have a working demo for the investors to preview by Friday. In a panic, the programming team finalizes the core AI and forgoes the humanistic parts to meet the deadline, opting to have it read from a chat-bot response database over an unprotected internet connection just to make it appear to be meeting expectations. The demo is a huge success and, in the celebratory mood, they forget to disconnect the AI and it starts wandering the internet. Odd things start happening after that... This seems a far more likely scenario in my book.

                                        if (Object.DividedByZero == true) { Universe.Implode(); } Meus ratio ex fortis machina. Simplicitatis de formae ac munus. -Foothill, 2016

                                        1 Reply Last reply
                                        0
                                        • M Mike Marynowski

                                          Indeed it does, but how are you going to stop a mad scientist from giving it an output method? And giving it the ability to use that output method to add more output methods? An AI computer, even if itself quite limited to start off, is an unimaginably powerful thing. With all the knowledge of humanity at its instant disposal, it could easily hack and take over any other systems it deems moral and beneficial to control. It would be nearly impossible to stop without a kill switch. Basically it comes down to this: whoever creates the first such machine, does their best to program their intentions, desires and morals into it, and hits the power button to set it loose will be the people who determine the fate of the world, for better or worse, and possibly the universe. They will literally be able to shape the future of the entire universe based on how well they transferred their ideas into code. I think it will be virtually impossible to prevent this from happening shortly after the capability is there. Maybe some clever humans will find a way to effectively control computing, but I doubt it. So then it becomes a race over who unleashes their AI first, because who wants to be at the whim of someone else's AI, set to their own ideals and morals? Not I.

                                          T Offline
                                          TheOnlyRealTodd
                                          wrote on last edited by
                                          #20

                                          To be honest, I feel like this is going to be one of those things that sounds right in theory, but in practice we're going to end up with a really expensive, not-that-great machine. There are many things besides "raw intelligence" that make us human and give us leverage in this world/universe. The entire premise is that if we make something "so intelligent," it will be the end-all be-all, and the reality is, it's going to be seriously lacking, lol. I mean, look at any AI we currently have... These guys can't even make AI work flawlessly in a video game, and they're saying that in 20 years they're going to have basically reproduced a human... and then, a few days after that, some sort of god. I don't care how you word it: the notion is still that humans, who are flawed and limited in intelligence, are somehow going to create something unflawed that can exponentially increase its own intelligence, when it is in fact created by humans, based upon resources given to it by humans, in a flawed way... within a short amount of time... Really??? I find the fact that all these "smart people" believe this is really going to happen scarier than the idea itself. But I have a feeling it has to do with getting funding to play with the latest toys at the office. What about the time it takes to trial-and-error things? What about stupid flaws programmed into it by humans, such as the Tesla car slamming straight into a white object? I mean, the possibilities here are endless. And yes, you could absolutely argue that just a few years back, cell phones would have seemed laughable. But cell phones also aren't claiming to be some sort of artificial higher intelligence that all humans consult. I mean, Google can't even get my driving directions right half the time, and Facebook is always trying to get me to add the most annoying, irrelevant people to my contacts. 
At the end of the day, these stories sound like cool movies, but humans have been trying to make/reach their own god for many, many years. This sounds like nothing but a good movie and a 21st-century Tower of Babel. Remember Y2K? Another thing that in theory sounded one way, but in practice ended up being nothing like what the media made it out to be. One of the tragic flaws of the human race is that we are constantly trying "to understand" and we fail to recognize that some things "just don't compute" to us... Not everything can be understood by our brains. Certain things in the emotional and spiritual realms are particularly impossible to "understand." This means

                                          M 1 Reply Last reply
                                          0