Philosophical Friday

The Lounge
Tags: question, database, sql-server, sysadmin, algorithms
41 Posts, 14 Posters

  • R Rob Philpott

    No magic, no. They say that 95% of the brain is in the unconscious, just processing, and that's fine. But perception and emotion - I can't see it myself. Like I said, it's like the sum is more than the parts. If you exclude the divine, there has to be an interesting science of how things move up a level, and if we do ever manage to create AI, there will be lots of moral questions about the worth of what's created vs. the worth of the human.

    Regards, Rob Philpott.

    jschell
    #31

    Rob Philpott wrote:

    Like I said, it's like the sum is more than the parts.

    And if you put gasoline and a bunch of steel on the ground, you still are not going to be able to drive it from NY to Chicago - but, presumably, you can do exactly that with an automobile.

    Rob Philpott wrote:

    If you exclude the divine, there has to be an interesting science of how things move up a level, and if we do ever manage to create AI, there will be lots of moral questions about the worth of what's created vs. the worth of the human.

    The number of things that people attach morality to probably isn't infinite, but it is certainly big enough that enumerating them would be endless. So I fail to see how that matters.

    • L Lost User

      Rob Philpott wrote:

      there will be lots of moral questions about the worth of what's created vs. the worth of the human

      Ok, well here's something to consider: an AI does not notice its death. You can make it stop updating, which it can't notice because it's not updating (from its perspective, time stopped). If you then delete its state, well... as far as the AI is concerned, none of that ever happened; from one moment to another it just ceased to exist.
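
      A minimal sketch of that "stop updating" idea, purely illustrative (this Agent class and its step() method are hypothetical, not any real API): the agent exists only while its update loop runs, so pausing the loop or deleting the state is something only an outside observer can register.

      import copy

      class Agent:
          """A toy 'AI' modelled as nothing more than a state-update loop."""

          def __init__(self):
              self.state = {"ticks": 0, "memory": []}

          def step(self, observation):
              # The agent only 'does' anything when this update actually runs.
              self.state["ticks"] += 1
              self.state["memory"].append(observation)

      agent = Agent()
      agent.step("hello")

      # Pause: simply stop calling step(), optionally freezing a copy of the state.
      # Outside, time keeps passing; inside, nothing is experienced, because
      # noticing would itself require an update.
      snapshot = copy.deepcopy(agent.state)

      # Resume later and the agent carries on as if no time had passed at all.
      agent.step("world")

      # Delete its state instead and there is no update left in which the agent
      # could register that this ever happened.
      del agent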

      Rob Philpott wrote:

      Like I said, it's like the sum is more than the parts.

      I wouldn't really say so, I mean, we like to think of it as special somehow, but that's just our bias in favour of ourselves. (sort of like a watered-down version of Vitalism)

      jschell
      #32

      harold aptroot wrote:

      Ok, well here's something to consider: an AI does not notice its death.

      Precluding some supernatural explanation then I seriously doubt that humans "notice" their own death. As for computers noticing the death of others, there is speculation that the recent Twitter hack about a White House attack, which caused a drop in the stock market, was automated - in that computers acted on the information that an attack had occurred. If that is true, then it would seem very unlikely that they wouldn't also react to the death of certain individuals.

      • L Lost User

        What does intelligence have to do with anything? Mere humans certainly aren't intelligent

        jschell
        #33

        harold aptroot wrote:

        What does intelligence have to do with anything? Mere humans certainly aren't intelligent

        That sounds like a term definition problem. Your response was in some way related to the statement "Would this emulation have consciousness, feelings and motivations?" Your statement certainly didn't seem to indicate that you thought humans didn't experience that. And excluding any philosophical meanderings, my use of "intelligence" refers to whatever embodies the above concepts.

        • N Nicholas Marty

          Rob Philpott wrote:

          Would this emulation have consciousness, feelings and motivations?

          Probably yes. However, it would have to be instructed to do so. Any human being or animal is influenced by its surroundings. The factors are practically unlimited, as everything that you notice has a direct or indirect impact on you (however small it might be). What leads a being to develop consciousness, feelings and motivations is probably the more interesting question. How can you influence something to provoke feelings in the future? I suppose motivation is primarily driven by feelings (be it only to have some relief at the end of the month because you can afford your rent? ;)). As I see it, an emulation of a human brain would have to go through a whole process of growing up (probably accelerated by even more powerful hardware? ;P)

          Rob Philpott wrote:

          Perhaps one for the pub, it is Friday after all. Mine's a pint.

          Sadly, this has to wait for another few hours :beer:

          jschell
          #34

          Nicholas Marty wrote:

          However, it would have to be instructed to do so.

          That is a supposition. Either humans arrive at those qualities by being instructed, they learn them themselves, or they are innate. And one can suppose that the first two are certainly possible for a machine intelligence. Myself, I doubt the last, because there are in fact observable differences of that nature between humans when one looks at culture and language. And if you stick a human at birth into a sensory deprivation environment and leave them there until they are 25, I seriously doubt they would have any of those qualities. They would probably be behaviorally brain dead.

          • M Matthew Faithfull

            As Dr Who himself once said: they built a copy of a human brain once, exact in every detail; it was the size of London and it didn't work. The answer is no, life is life, you can't give it and that's one reason why you shouldn't take it away. The most perfect model of a human brain you'll find is a human brain that died 2 minutes ago, and it's quite as useless as 2 pounds of jelly for anything except anatomy lessons.

            "The secret of happiness is freedom, and the secret of freedom, courage." Thucydides (B.C. 460-400)

            jschell
            #35

            Matthew Faithfull wrote:

            The answer is no, life is life, you can't give it and that's one reason why you shouldn't take it away.

            So exactly how do you continue to live? Or does your definition of "life" not include cows, chickens, broccoli and carrots?

            • M madmatter

              Self-awareness seems to be rather difficult to accomplish. You could say that the earth is a large, complicated processor, and it was only able to produce one species (that we know of) that could invent the internet. I believe that if it were possible for humans to create self-awareness - the ability to post on electronic forums - the earth would have already done so. Wait... I think I remember seeing where a snake was posting on Twitter: https://twitter.com/BronxZoosCobra - well, there goes that argument. Yes, clearly self-awareness is replicable by humans, because snakes are posting on forums. Next discussion, please.

              jschell
              #36

              madmatter wrote:

              I believe that if it were possible for humans to create self-awareness - the ability to post on electronic forums

              That is a confused definition. Scientifically there are more precise definitions for "self-awareness" and there are tests for it. Humans are not the only species that pass those tests.

              • J jschell

                harold aptroot wrote:

                Ok, well here's something to consider: an AI does not notice its death.

                Precluding some supernatural explanation then I seriously doubt that humans "notice" their own death. As for computers noticing the death of others, there is speculation that the recent Twitter hack about a White House attack, which caused a drop in the stock market, was automated - in that computers acted on the information that an attack had occurred. If that is true, then it would seem very unlikely that they wouldn't also react to the death of certain individuals.

                Lost User
                #37

                jschell wrote:

                Precluding some supernatural explanation then I seriously doubt that humans "notice" their own death.

                Yes, the same thing can obviously be said about any consciousness - it can't simultaneously be dead and be perceiving anything.

                • J jschell

                  harold aptroot wrote:

                  What does intelligence have to do with anything? Mere humans certainly aren't intelligent

                  That sounds like a term definition problem. Your response was in some way related to the statement "Would this emulation have consciousness, feelings and motivations?" Your statement certainly didn't seem to indicate that you thought humans didn't experience that. And excluding any philosophical meanderings, my use of "intelligence" refers to whatever embodies the above concepts.

                  Lost User
                  #38

                  Ok, fine, be serious about it.. I'd say humans experience those things by definition, because that's what the words were invented for. Seems sort of self-centered to me, but whatever. So back to the bag of chemicals: there's nothing else in there, so that has to be the part that's inducing all those aspects of consciousness.

                  • L Lost User

                    jschell wrote:

                    Precluding some supernatural explanation then I seriously doubt that humans "notice" their own death.

                    Yes, the same thing can obviously be said about any consciousness - it can't simultaneously be dead and be perceiving anything.

                    jschell
                    #39

                    harold aptroot wrote:

                    Yes, the same thing can obviously be said about any consciousness - it can't simultaneously be dead and be perceiving anything.

                    At this point all I can say is that I don't understand what your point was in bringing death into the discussion as it relates to an AI.

                    • L Lost User

                      So.. you want to torture an AI? :laugh:

                      Testing 1 2 uh 7
                      #40

                      I don't WANT to, I just feel that we NEED to in order to understand these philosophical questions. Actually, a sample size of one is pretty useless. We should torture hundreds of AIs, just to be sure. ;P

                      • T Testing 1 2 uh 7

                        I don't WANT to, I just feel that we NEED to in order to understand these philosophical questions. Actually, a sample size of one is pretty useless. We should torture hundreds of AIs, just to be sure. ;P

                        Lost User
                        #41

                        Since they're AIs anyway, you could probably torture them automatically. Millions of AIs could be tortured efficiently that way :)
