
Artificial Super-Intelligence

TheOnlyRealTodd wrote:

To be honest, I feel like this is going to be one of those things that sounds great in theory, but in practice we're going to end up with a really expensive, not-that-great machine. There are many things besides "raw intelligence" that make us human and give us leverage in this world/universe. The entire premise is that if we make something "intelligent enough" it will be the end-all be-all, and the reality is it's going to be seriously lacking, lol. I mean, look at any AI we currently have... These guys can't even make an AI work flawlessly in a video game, and they're saying that in 20 years they'll basically have reproduced a human... and then, a few days after that, have some sort of god.

I don't care how you word it: the notion is still that humans, who are flawed and limited in intelligence, are somehow going to create something that is unflawed and able to exponentially increase its intelligence, when it is in fact created by humans, based upon resources given to it by humans, in a flawed way... and within a short amount of time... Really??? I find the fact that all these "smart people" even believe this is going to happen scarier than the idea itself. But I have a feeling it has to do with getting funding to play with the latest toys at the office. What about the time it takes to trial-and-error things? What about stupid flaws programmed into it by humans, such as the Tesla car slamming straight into a white object? The possibilities here are endless.

And yes, you could absolutely argue that just a few years back, cell phones would have seemed laughable. But cell phones also aren't claiming to be some sort of artificial higher intelligence that all humans consult. I mean, Google can't even get my driving directions right half the time, and Facebook is always trying to get me to add the most annoying, irrelevant people to my contacts. At the end of the day, these stories sound like cool movies, but humans have been trying to make/reach their own god for many, many years. This sounds like nothing but a good movie and a 21st-century Tower of Babel. Remember Y2K? Another thing that sounded one way in theory, but in practice ended up being nothing like the media made it out to be.

One of the tragic flaws of the human race is that we are constantly trying "to understand" and we fail to recognize that some things "just don't compute" for us... Not everything can be understood by our brains. Certain things in the emotional and spiritual realms are particularly impossible to "understand." This means

Mike Marynowski wrote (#21):

Yeah, but nature iterated from ant-level intelligence to human-level intelligence somehow, without the need for a greater intelligence to create us, didn't it? Nature basically said "modify yourself with each generation, and the output creature that is best at surviving continues on and repeats this process". A similar process can be programmed into a computer, and the result could be more intelligent than the inputs (a toy sketch of that kind of loop follows this post). You are correct that many things are beyond our understanding. The whole idea behind the singularity is that we actually won't understand what the AI is doing after we let it loose; our minds will indeed be incapable of fully understanding what it is doing. If it is programmed with the proper "seed" goals, such as "maximize the happiness of humans without bringing harm to any living creatures" (this is very simplified, and AI researchers are split on whether something like this can be programmed effectively, but I think it is possible), then the result will theoretically be a bunch of happy humans, without us actually understanding the mechanisms by which the AI is accomplishing this task.

What I think is much more interesting than letting an AI loose like this is slowly augmenting and replacing our brains with modules that interface directly with technology. Then we slowly, piece by piece, become the AI ourselves. This would lead to a much more controlled ascent into the singularity, but with this approach we bring all our human flaws into the process as well. Could be good, could be very, very bad. What I'm hoping is that when we all begin to connect our minds together in this way, the vast amount of information and processing that will become directly available to our expanded minds, along with the capability to eventually transmit ideas and thoughts directly between each other, will result in a beautiful new age of empathy and understanding between our minds, to the point where our human flaws are minimized and eventually disappear as we become one. What a lovely way that would be to meet the end of the universe :)
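The "modify yourself with each generation and keep whatever survives best" loop described above is essentially an evolutionary algorithm. Below is a minimal, purely illustrative Python sketch of that idea; the target string, alphabet, population size, and mutation rate are arbitrary choices made for the example (not anything from the post), and real evolutionary or self-improving systems are of course far more elaborate.

```python
import random

# Illustrative constants; none of these come from the discussion above.
TARGET = "artificial super-intelligence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz -"
POP_SIZE = 200
MUTATION_RATE = 0.02


def fitness(candidate: str) -> int:
    # Count how many characters already match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))


def mutate(candidate: str) -> str:
    # Randomly perturb a few characters, mimicking generational variation.
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in candidate
    )


def evolve() -> str:
    # Start from random "creatures" and repeatedly breed from the fittest one.
    population = [
        "".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(POP_SIZE)
    ]
    generation = 0
    while True:
        best = max(population, key=fitness)
        if best == TARGET:
            return f"solved in {generation} generations"
        # The best survivor seeds the next generation, with mutations.
        population = [mutate(best) for _ in range(POP_SIZE)]
        generation += 1


if __name__ == "__main__":
    print(evolve())
```

This is the classic "weasel program" style of demonstration: random variation plus selection climbs toward a goal even though no individual step is intelligent, which is the point being made about nature iterating without a designer.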

TheOnlyRealTodd wrote:

So, I read this crazy long article, The Artificial Intelligence Revolution: Part 1 - Wait But Why, which has actually been out for nearly two years now - I'm sure many of you have come across it. But I can't help but feel that there are things at play in this whole thing other than what the article mentions... For one, it says that if we could somehow make a self-learning computer on the same level as humans - which would be called AGI, Artificial General Intelligence - then the jump from AGI to ASI would be super fast. Basically, this computer would harness the inherent advantages of a computer system over the human brain and essentially learn everything we've learned, and billions of times more, at an exponential rate, at which point the computer would be to us as we are to an ant in terms of intellect.

However, when reading through this, I can't help but ask: assuming a computer COULD jump online and essentially dig into all of our knowledge and "understand" it, that would still only give it access to human knowledge. How is it going to derive super-intelligence from only having access to human-intelligence information? Supposedly the argument is that if a computer reaches AGI, then in short order it will suddenly be billions of times more intelligent than a human... But there are only a finite number of actual resources for an intelligent computer to access, and they are written by humans. In other words, say a lion builds a special machine that could learn from it and from other lions and progress... But it only has access to lion knowledge and was itself built by a lion - meaning with the limitations of a lion. How would that machine suddenly dive into an entirely different dimension of intelligence? Even if we made a machine which properly emulates/mimics the human mind... then we are making a machine which mimics the human mind - meaning it has flaws and is not superior to it. I feel like a superior race or more intelligent being could supply such a computer with advanced intelligence, but operating out of the human knowledge base, that isn't quite likely. Not only that, but we must also consider trial and error. Only so much can be learned from reading and processing written/online information... Some things, this AI would have to tes

Mikhael Loo wrote (#22):

Having human problems that create human pain is what motivates human solutions that humans value. An AI with AI problems doesn't seem relevant. Lions and humans both have pain and suffering built into their design. Intelligence is only one part of a complete package that must also include empathy for the host of problems there are to solve using intelligence. "Solve all suffering = destroy all life" is pretty intelligent, but not so empathetic.
