Code Project - The Lounge
Philosophical Friday

Tags: question, database, sql-server, sysadmin, algorithms
41 Posts 14 Posters 0 Views 1 Watching
  • R Rob Philpott

    Say you had a very, very powerful computer and went about emulating the human brain with it: 100 billion neurons, each with a thousand-odd synapses, and then you get into the mucky business of all the different neurotransmitters (yes, SQL Server Data Center edition may be required). Anyway, upon hitting F5 you find yourself able to converse with your emulation, which may be a tad annoyed that it's been reincarnated as a bit of software. Would this emulation have consciousness, feelings and motivations? In order to function properly you would expect so, but how can you create that when, at the end of the day, all the computer is doing is a set of simple operations involving registers and memory? For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts? Complexity is often just a product of large amounts of simplicity, but I can't personally make the mental leap from simple operations on a computer or a neuron to the forming of the idea of self. Perhaps one for the pub, it is Friday after all. Mine's a pint.

    Regards, Rob Philpott.
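Rob's "simple operations involving registers and memory" can be made concrete with a toy sketch: a leaky integrate-and-fire neuron, the crudest standard cartoon of a spiking cell. Everything here (the constants, the reset rule, the function name) is illustrative only, nowhere near a neurotransmitter-level model:

```python
# Toy leaky integrate-and-fire neuron. The membrane potential leaks toward a
# resting value, accumulates input current, and emits a "spike" (then resets)
# when it crosses a threshold. Constants are illustrative, not biophysical.
def simulate_neuron(inputs, threshold=1.0, leak=0.9, v_rest=0.0):
    """Return a list of booleans: did the neuron spike at each time step?"""
    v = v_rest
    spikes = []
    for current in inputs:
        v = v_rest + leak * (v - v_rest) + current  # decay toward rest, add input
        if v >= threshold:
            spikes.append(True)
            v = v_rest  # reset after spiking
        else:
            spikes.append(False)
    return spikes

print(simulate_neuron([0.05] * 10))  # weak drive: never reaches threshold
print(simulate_neuron([0.3] * 10))   # stronger drive: fires periodically
```

Wire 100 billion of these together with a thousand-odd synapses each and you have Rob's scale problem; whether the result also has a self is exactly the question.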

    P Offline
    Pete OHanlon
    wrote on last edited by
    #18

    All I can ask, in return, is: if the Hulk lifts Thor while Thor is holding Mjolnir, does this mean that the Hulk lifted Mjolnir?

    I was brought up to respect my elders. I don't respect many people nowadays.
    CodeStash - Online Snippet Management | My blog | MoXAML PowerToys | Mole 2010 - debugging made easier

    R 1 Reply Last reply
    0
    • R Rob Philpott

      Say you had a very, very powerful computer and went about emulating the human brain with it. […]

      B Offline
      BobJanova
      wrote on last edited by
      #19

      If you could create such a thing, then yes, it would have emotions and all the rest of it. The philosophy comes in at the step before that: if our brain is more than a physical object, we'd never be able to create a simulation. You couldn't just plug a human brain into a beige box, though. Large parts of our brain are linked to our physical form - not just the obvious way of moving bits and feeling what happens to them, but all our senses, our sense of space, movement, even time are linked to how we are put together. There's nothing intrinsically difficult about the idea of emergent complexity; we can see it in various places already. For example, ants are simple but the complexity of an ant colony is not (similarly with bees storing information about good flower areas); genes and chromosomes are individually simple, to the level we understand them chemically, and yet put them together and you encode complex organisms; market trades are individually very simple, and yet the behaviour of the market as a whole is not.
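The emergent-complexity examples above have a classic programmable cousin: Conway's Game of Life, where two trivial local rules (a dead cell with exactly three live neighbours is born; a live cell with two or three neighbours survives) produce structures like the glider that travel coherently across the grid. A minimal sketch:

```python
from collections import Counter

def step(live):
    """One Game of Life generation. live is a set of (x, y) cells."""
    # Count how many live neighbours each candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
# After four generations the glider reappears shifted one cell diagonally.
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # → True
```

No cell "knows" it is part of a glider, just as no ant knows the colony's plan; the coherent behaviour only exists at the level of the whole.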

      1 Reply Last reply
      0
      • R Rob Philpott

        Say you had a very, very powerful computer and went about emulating the human brain with it. […]

        M Offline
        madmatter
        wrote on last edited by
        #20

        Self-awareness seems to be rather difficult to accomplish. You could say that the earth is a large, complicated processor, and it was only able to produce one species (that we know of) that could invent the internet. I believe that if it were possible for humans to create self-awareness - the ability to post on electronic forums - the earth would have already done so. Wait... I think I remember seeing a snake posting on Twitter https://twitter.com/BronxZoosCobra[^] Well, there goes that argument. Yes, clearly self-awareness is replicable by humans, because snakes are posting on forums. Next discussion please.

        J 1 Reply Last reply
        0
        • P Pete OHanlon

          All I can ask, in return, is: if the Hulk lifts Thor while Thor is holding Mjolnir, does this mean that the Hulk lifted Mjolnir?

          I was brought up to respect my elders. I don't respect many people nowadays.
          CodeStash - Online Snippet Management | My blog | MoXAML PowerToys | Mole 2010 - debugging made easier

          R Offline
          Rob Philpott
          wrote on last edited by
          #21

          Dude, that's Modus Ponens[^]. So yes, I can confirm this.

          Regards, Rob Philpott.

          1 Reply Last reply
          0
          • S S Houghtelin

            Matthew Faithfull wrote:

            The most perfect model of a human brain you'll find is a human brain that died 2 minutes ago and it's quite as useless as 2 pounds of jelly for anything except anatomy lessons.

            Here are some good examples: Here[^], Here[^] and Here[^]

            It was broke, so I fixed it.

            M Offline
            Matthew Faithfull
            wrote on last edited by
            #22

            :laugh: Very good. Morally dead for the most part, but still highly toxic. If only we could find a landfill site that would take them.

            "The secret of happiness is freedom, and the secret of freedom, courage." Thucydides (B.C. 460-400)

            1 Reply Last reply
            0
            • T Testing 1 2 uh 7

              Quote:

              Ok, well here's something to consider: an AI does not notice its death. You can make it stop updating, which it can't notice because it's not updating (from its perspective, time stopped). If you then delete its state, well.. as far as the AI is concerned, none of that ever happened; from one moment to another it just ceased to exist.

              I would argue that in the case of a sudden death, humans lack the ability to know that they're dead (from the strictly human level - if existence continues it's post-human). Now, if life is slowly failing, a human can detect and anticipate the end of life, but so could a sufficiently sophisticated AI. With the proper inputs, I think a machine could anticipate death at least as well as a human.

              L Offline
              Lost User
              wrote on last edited by
              #23

              So.. you want to torture an AI? :laugh:

              T 1 Reply Last reply
              0
              • R Rob Philpott

                Say you had a very, very powerful computer and went about emulating the human brain with it. […]

                L Offline
                Lost User
                wrote on last edited by
                #24

                Are human beings self-aware?

                R 1 Reply Last reply
                0
                • L Lost User

                  Are human beings self-aware?

                  R Offline
                  Rob Philpott
                  wrote on last edited by
                  #25

                  Well, I can't speak for the rest of you, but I like to think I am.

                  Regards, Rob Philpott.

                  1 Reply Last reply
                  0
                  • R Rob Philpott

                    Say you had a very, very powerful computer and went about emulating the human brain with it. […]

                    V Offline
                    Vark111
                    wrote on last edited by
                    #26

                    If it's "you" that was recreated in this machine, place yourself in "his" metaphorical shoes. "You" would wake up conscious, but paralyzed (you can't feel your legs or arms), blind (no eyes) and deaf (no ears). You would feel extreme panic as your mind no longer registers the inhalation and exhalation of breath. Eventually - if you didn't go mad - you would calm down and realize that something else is going on; this is when "you" would perhaps realize that you were only a recreation, a simulation of the original. Under these circumstances, if the original attempted to communicate with me, I'd tell myself to FRO.

                    1 Reply Last reply
                    0
                    • R Rob Philpott

                      Say you had a very, very powerful computer and went about emulating the human brain with it. […]

                      J Offline
                      jschell
                      wrote on last edited by
                      #27

                      Rob Philpott wrote:

                      For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts?

                      Which philosophers have been pondering for a very long time - definitely before anything like a computer existed. The basic discussion is: given that person A is talking to person B, how does A know that B is conscious? How does A know that B thinks the same way A does? How does A know that, even though B seems to be discussing everything in a reasonable way, B is in fact understanding the same concepts that A is trying to convey? The "Turing Test" is an experimental technique defined in an attempt to at least arrive at an equivalence in behaviour.

                      1 Reply Last reply
                      0
                      • R Rob Philpott

                        Are you a deity? If not, I'm afraid I do not accept your explanation.

                        Regards, Rob Philpott.

                        J Offline
                        jschell
                        wrote on last edited by
                        #28

                        Rob Philpott wrote:

                        Are you a deity? If not, I'm afraid I do not accept your explanation.

                        Philosophically, that is basically an invalid refutation of the statement. Your statement embodies the assumption that you are in fact an intelligent being rather than just some words in a forum. It also assumes that, even if you are an intelligent entity, one must be a deity - and not merely very smart - to be capable of modeling you.

                        1 Reply Last reply
                        0
                        • L Lost User

                          Why not? At the end of the day, a biological human brain is just a big bag of chemicals. There's no magic involved.

                          J Offline
                          jschell
                          wrote on last edited by
                          #29

                          harold aptroot wrote:

                          Why not? At the end of the day, a biological human brain is just a big bag of chemicals. There's no magic involved.

                          Until you provide a working definition of intelligence and a test that demonstrates it one way or the other then your statement is nothing more definitive than whether you like vanilla ice cream or not.

                          L 1 Reply Last reply
                          0
                          • J jschell

                            harold aptroot wrote:

                            Why not? At the end of the day, a biological human brain is just a big bag of chemicals. There's no magic involved.

                            Until you provide a working definition of intelligence and a test that demonstrates it one way or the other then your statement is nothing more definitive than whether you like vanilla ice cream or not.

                            L Offline
                            Lost User
                            wrote on last edited by
                            #30

                            What does intelligence have to do with anything? Mere humans certainly aren't intelligent

                            J 1 Reply Last reply
                            0
                            • R Rob Philpott

                              No magic, no. They say that 95% of the brain is in the unconscious, just processing, and that's fine. But perception and emotion - I can't see it myself. Like I said, it's like the sum is more than the parts. If you exclude the divine there has to be an interesting science of how things move up a level, and if we do ever manage to create AI there will be lots of moral questions about the worth of what's created vs. the worth of the human.

                              Regards, Rob Philpott.

                              J Offline
                              jschell
                              wrote on last edited by
                              #31

                              Rob Philpott wrote:

                              Like I said, it's like the sum is more than the parts.

                              And if you put gasoline and a bunch of steel on the ground, you are still not going to be able to drive it from NY to Chicago; but, presumably, you can with an automobile made of the same parts.

                              Rob Philpott wrote:

                              If you exclude the divine there has to be an interesting science of how things move up a level, and if we do ever manage to create AI there will be lots of moral questions about the worth of what's created vs. the worth of the human.

                              The number of things that people attach morality to probably isn't infinite but it is certainly big enough that enumerating it would be endless. So I fail to see how that matters.

                              1 Reply Last reply
                              0
                              • L Lost User

                                Rob Philpott wrote:

                                there will be lots of moral questions about the worth of what's created vs. the worth of the human

                                Ok, well here's something to consider: an AI does not notice its death. You can make it stop updating, which it can't notice because it's not updating (from its perspective, time stopped). If you then delete its state, well.. as far as the AI is concerned, none of that ever happened; from one moment to another it just ceased to exist.

                                Rob Philpott wrote:

                                Like I said, it's like the sum is more than the parts.

                                I wouldn't really say so, I mean, we like to think of it as special somehow, but that's just our bias in favour of ourselves. (sort of like a watered-down version of Vitalism)
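The "an AI does not notice its death" point above is a property of any program whose entire state can be serialized. A minimal sketch of pausing, forking and deleting such a state - the `Agent` class here is invented purely for illustration:

```python
import pickle

class Agent:
    """A trivial stand-in for a simulated mind: its whole existence is a
    state dict that only changes when step() is called."""
    def __init__(self):
        self.state = {"tick": 0, "memory": []}

    def step(self, observation):
        self.state["tick"] += 1
        self.state["memory"].append(observation)

a = Agent()
a.step("hello")

snapshot = pickle.dumps(a.state)  # pause: from the agent's view, time stops
a.step("world")                   # the running copy carries on

restored = pickle.loads(snapshot)
print(restored["tick"])           # → 1: the frozen copy never saw "world"
del snapshot                      # deletion: nothing in it ever "noticed"
```

Between `dumps` and `loads`, nothing happens *for* the snapshot; that is the whole of harold's argument in executable form.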

                                J Offline
                                jschell
                                wrote on last edited by
                                #32

                                harold aptroot wrote:

                                Ok, well here's something to consider: an AI does not notice its death.

                                Precluding some supernatural explanation, I seriously doubt that humans "notice" their own death. As for computers noticing the death of others: there is speculation that the recent Twitter hack about a White House attack, which caused a drop in the stock market, was acted on automatically - in that computers traded on the information that an attack had occurred. If that is true, it seems very likely that they would also react to the death of certain individuals.

                                L 1 Reply Last reply
                                0
                                • L Lost User

                                  What does intelligence have to do with anything? Mere humans certainly aren't intelligent

                                  J Offline
                                  jschell
                                  wrote on last edited by
                                  #33

                                  harold aptroot wrote:

                                  What does intelligence have to do with anything? Mere humans certainly aren't intelligent

                                  That sounds like a term-definition problem. Your response was in some way related to the statement "Would this emulation have consciousness, feelings and motivations?" Your statement certainly didn't seem to indicate that you thought humans didn't experience those. And, excluding any philosophical meanderings, my use of "intelligence" refers to whatever embodies the above concepts.

                                  L 1 Reply Last reply
                                  0
                                  • N Nicholas Marty

                                    Rob Philpott wrote:

                                    Would this emulation have consciousness, feelings and motivations?

                                    Probably yes. However, it would have to be instructed to do so. Any human being or animal is influenced by its surroundings. The factors are pretty much unlimited, as everything you notice has, directly or indirectly, an impact on you (however small it might be). What leads a being to develop consciousness, feelings and motivations is probably the more interesting question. How can you influence something to provoke feelings in the future? I suppose motivation is primarily driven by feelings (be it only some relief at the end of the month because you can afford your rent ;)). As I see it, an emulation of a human brain would have to go through a whole process of growing up (probably accelerated by even more powerful hardware? ;P)

                                    Rob Philpott wrote:

                                    Perhaps one for the pub, it is Friday after all. Mine's a pint.

                                    Sadly, this has to wait for another few hours :beer:

                                    J Offline
                                    jschell
                                    wrote on last edited by
                                    #34

                                    Nicholas Marty wrote:

                                    However, it would have to be instructed to do so.

                                    That is a supposition. Either humans arrive at those qualities by being instructed, they learn them themselves, or they are innate. One can suppose that the first two are certainly possible for a machine intelligence. Myself, I doubt the last, because there are in fact observable differences of that nature between humans when one looks at culture and language. And if you stuck a human at birth into a sensory-deprivation environment and left them there until they were 25, I seriously doubt they would have any of it. They would probably be behaviorally brain dead.

                                    1 Reply Last reply
                                    0
                                    • M Matthew Faithfull

                                      As Dr Who himself once said: they built a copy of a human brain once, exact in every detail; it was the size of London and it didn't work. The answer is no. Life is life; you can't give it, and that's one reason why you shouldn't take it away. The most perfect model of a human brain you'll find is a human brain that died 2 minutes ago and it's quite as useless as 2 pounds of jelly for anything except anatomy lessons.

                                      "The secret of happiness is freedom, and the secret of freedom, courage." Thucydides (B.C. 460-400)

                                      J Offline
                                      jschell
                                      wrote on last edited by
                                      #35

                                      Matthew Faithfull wrote:

                                      The answer is no, life is life, you can't give it and that's one reason why you shouldn't take it away.

                                      So exactly how do you continue to live? Or does your definition of "life" not include cows, chickens, broccoli and carrots?

                                      1 Reply Last reply
                                      0
                                      • M madmatter

                                        Self-awareness seems to be rather difficult to accomplish. […]

                                        J Offline
                                        jschell
                                        wrote on last edited by
                                        #36

                                        madmatter wrote:

                                        I believe that if it were possible for humans to create self-awareness - the ability to post on electronic forums

                                        That is a confused definition. Scientifically, there are more precise definitions of "self-awareness", and there are tests for it. Humans are not the only species that pass those tests.

                                        1 Reply Last reply
                                        0
                                        • J jschell

                                          harold aptroot wrote:

                                          Ok, well here's something to consider: an AI does not notice its death.

                                          Precluding some supernatural explanation, I seriously doubt that humans "notice" their own death. […]

                                          L Offline
                                          Lost User
                                          wrote on last edited by
                                          #37

                                          jschell wrote:

                                          Precluding some supernatural explanation, I seriously doubt that humans "notice" their own death.

                                          Yes, the same thing can obviously be said about any consciousness - it can't simultaneously be dead and be perceiving anything.

                                          J 1 Reply Last reply
                                          0