Code Project

Philosophical Friday

The Lounge
Tags: question, database, sql-server, sysadmin, algorithms
41 Posts, 14 Posters
  • C CMullikin

    THAT WAS MY QUOTE!! :mad:

    The United States invariably does the right thing, after having exhausted every other alternative. -Winston Churchill
    America is the only country that went from barbarism to decadence without civilization in between. -Oscar Wilde
    Wow, even the French showed a little more spine than that before they got their sh*t pushed in.[^]
    -Colin Mullikin

    Keith Barrow wrote (#12):

    Heheheheheheheheheh.

    “Education is not the piling on of learning, information, data, facts, skills, or abilities - that's training or instruction - but is rather making visible what is hidden as a seed”
    “One of the greatest problems of our time is that many are schooled but few are educated”

    Sir Thomas More (1478 – 1535)

    • R Rob Philpott

      No magic, no. They say that 95% of the brain is in the unconscious, just processing, and that's fine. But perception and emotion - I can't see it myself. Like I said, it's like the sum is more than the parts. If you exclude the divine there has to be an interesting science of how things move up a level, and if we do ever manage to create AI there will be lots of moral questions about the worth of what's created vs. the worth of the human.

      Regards, Rob Philpott.

      Lost User wrote (#13):

      Rob Philpott wrote:

      there will be lots of moral questions about the worth of what's created vs. the worth of the human

      Ok, well here's something to consider: an AI does not notice its death. You can make it stop updating, which it can't notice because it's not updating (from its perspective, time stopped). If you then delete its state, well... as far as the AI is concerned, none of that ever happened; from one moment to another it just ceased to exist.

      Rob Philpott wrote:

      Like I said, it's like the sum is more than the parts.

      I wouldn't really say so, I mean, we like to think of it as special somehow, but that's just our bias in favour of ourselves. (sort of like a watered-down version of Vitalism)

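The "stop updating" argument above can be made concrete with a toy sketch (not from the thread, purely illustrative): an agent whose only clock is its own update count cannot observe a pause, however long it lasts in real time.

```python
import pickle

# Toy illustration: an agent whose only notion of time is its own
# update counter. Freezing it, waiting, and resuming leaves no trace
# visible from the inside.
class Agent:
    def __init__(self):
        self.ticks = 0          # the agent's internal clock

    def update(self):
        self.ticks += 1

a = Agent()
for _ in range(5):
    a.update()

frozen = pickle.dumps(a)        # snapshot the state, then "stop updating"
# ... an arbitrarily long real-world gap could pass here ...
b = pickle.loads(frozen)        # resume from the snapshot
b.update()

print(b.ticks)                  # 6 - the internal clock shows no gap
```

Deleting `frozen` instead of restoring it is the thread's point: there is no update during which the agent could register that anything happened.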
      • R Rob Philpott

        Say you had a very very powerful computer and went about emulating the human brain with it. 100 billion neurons each with a thousand odd synapses, then you get into the mucky business of all the different neurotransmitters (Yes, SQL Server Data Center edition may be required). Anyway, upon hitting F5 you find yourself able to converse with your emulation which may be a tad annoyed that it’s been reincarnated as a bit of software. Would this emulation have consciousness, feelings and motivations? In order to function properly you would expect so, but how can you create that when at the end of the day all the computer is doing is a set of simple operations involving registers and memory? For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts? Complexity is often just a product of large amounts of simplicity, but I can't personally make the mental leap from simple operations on a computer or a neuron to the forming of the idea of self. Perhaps one for the pub, it is Friday after all. Mine's a pint.

        Regards, Rob Philpott.

        Matthew Faithfull wrote (#14):

        As Dr Who himself once said: they built a copy of a human brain once, exact in every detail; it was the size of London and it didn't work. The answer is no - life is life, you can't give it, and that's one reason why you shouldn't take it away. The most perfect model of a human brain you'll find is a human brain that died 2 minutes ago, and it's quite as useless as 2 pounds of jelly for anything except anatomy lessons.

        "The secret of happiness is freedom, and the secret of freedom, courage." Thucydides (B.C. 460-400)

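Rob's figures above invite a quick back-of-envelope storage estimate. A sketch only: the 4 bytes per synapse is an assumption (one float32 weight), and it ignores neurotransmitter state entirely.

```python
# Back-of-envelope: storage for the connectivity of Rob's emulation.
neurons = 100_000_000_000        # 1e11 neurons, per Rob's figure
synapses_per_neuron = 1_000      # "a thousand odd synapses"
bytes_per_weight = 4             # assumption: one float32 per synapse

total_bytes = neurons * synapses_per_neuron * bytes_per_weight
print(f"{total_bytes / 1024**4:,.0f} TiB")   # ~364 TiB for the weights alone
```

So even before modelling any chemistry, the weights alone are in the hundreds of terabytes, which is roughly why the Data Center edition joke lands.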
        • M Matthew Faithfull

          As Dr Who himself once said: they built a copy of a human brain once, exact in every detail; it was the size of London and it didn't work. The answer is no - life is life, you can't give it, and that's one reason why you shouldn't take it away. The most perfect model of a human brain you'll find is a human brain that died 2 minutes ago, and it's quite as useless as 2 pounds of jelly for anything except anatomy lessons.

          "The secret of happiness is freedom, and the secret of freedom, courage." Thucydides (B.C. 460-400)

          S Houghtelin wrote (#15):

          Matthew Faithfull wrote:

          The most perfect model of a human brain you'll find is a human brain that died 2 minutes ago and it's quite as useless as 2 pounds of jelly for anything except anatomy lessons.

          Here are some good examples: Here[^], Here[^] and Here[^]

          It was broke, so I fixed it.

          • L Lost User

            Rob Philpott wrote:

            there will be lots of moral questions about the worth of what's created vs. the worth of the human

            Ok, well here's something to consider: an AI does not notice its death. You can make it stop updating, which it can't notice because it's not updating (from its perspective, time stopped). If you then delete its state, well... as far as the AI is concerned, none of that ever happened; from one moment to another it just ceased to exist.

            Rob Philpott wrote:

            Like I said, it's like the sum is more than the parts.

            I wouldn't really say so, I mean, we like to think of it as special somehow, but that's just our bias in favour of ourselves. (sort of like a watered-down version of Vitalism)

            Testing 1 2 uh 7 wrote (#16):

            Quote:

            Ok, well here's something to consider: an AI does not notice its death. You can make it stop updating, which it can't notice because it's not updating (from its perspective, time stopped). If you then delete its state, well... as far as the AI is concerned, none of that ever happened; from one moment to another it just ceased to exist.

            I would argue that in the case of a sudden death, humans lack the ability to know that they're dead (from the strictly human level - if existence continues it's post-human). Now, if life is slowly failing, a human can detect and anticipate the end of life, but so could a sufficiently sophisticated AI. With the proper inputs, I think a machine could anticipate death at least as well as a human.

            • L Lost User

              Rob Philpott wrote:

              there will be lots of moral questions about the worth of what's created vs. the worth of the human

              Ok, well here's something to consider: an AI does not notice its death. You can make it stop updating, which it can't notice because it's not updating (from its perspective, time stopped). If you then delete its state, well... as far as the AI is concerned, none of that ever happened; from one moment to another it just ceased to exist.

              Rob Philpott wrote:

              Like I said, it's like the sum is more than the parts.

              I wouldn't really say so, I mean, we like to think of it as special somehow, but that's just our bias in favour of ourselves. (sort of like a watered-down version of Vitalism)

              BobJanova wrote (#17):

              A person or animal doesn't notice its death either. Certain brain problems that cause you to lose your memory can cause your past not to exist, too.

              • R Rob Philpott

                Say you had a very very powerful computer and went about emulating the human brain with it. 100 billion neurons each with a thousand odd synapses, then you get into the mucky business of all the different neurotransmitters (Yes, SQL Server Data Center edition may be required). Anyway, upon hitting F5 you find yourself able to converse with your emulation which may be a tad annoyed that it’s been reincarnated as a bit of software. Would this emulation have consciousness, feelings and motivations? In order to function properly you would expect so, but how can you create that when at the end of the day all the computer is doing is a set of simple operations involving registers and memory? For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts? Complexity is often just a product of large amounts of simplicity, but I can't personally make the mental leap from simple operations on a computer or a neuron to the forming of the idea of self. Perhaps one for the pub, it is Friday after all. Mine's a pint.

                Regards, Rob Philpott.

                Pete OHanlon wrote (#18):

                All I can ask in return is: if the Hulk lifts Thor while Thor is holding Mjolnir, does that mean that the Hulk lifted Mjolnir?

                I was brought up to respect my elders. I don't respect many people nowadays.
                CodeStash - Online Snippet Management | My blog | MoXAML PowerToys | Mole 2010 - debugging made easier

                • R Rob Philpott

                  Say you had a very very powerful computer and went about emulating the human brain with it. 100 billion neurons each with a thousand odd synapses, then you get into the mucky business of all the different neurotransmitters (Yes, SQL Server Data Center edition may be required). Anyway, upon hitting F5 you find yourself able to converse with your emulation which may be a tad annoyed that it’s been reincarnated as a bit of software. Would this emulation have consciousness, feelings and motivations? In order to function properly you would expect so, but how can you create that when at the end of the day all the computer is doing is a set of simple operations involving registers and memory? For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts? Complexity is often just a product of large amounts of simplicity, but I can't personally make the mental leap from simple operations on a computer or a neuron to the forming of the idea of self. Perhaps one for the pub, it is Friday after all. Mine's a pint.

                  Regards, Rob Philpott.

                  BobJanova wrote (#19):

                  If you could create such a thing, then yes, it would have emotions and all the rest of it. The philosophy comes in at the step before that: if our brain is more than a physical object, we'd never be able to create a simulation. You couldn't just plug a human brain into a beige box, though. Large parts of our brain are linked to our physical form (not just the obvious way of moving bits and feeling what happens to them, but all our senses - our sense of space, movement, even time - are linked to how we are put together). There's nothing intrinsically difficult about the idea of emergent complexity; we can see it in various places already. For example, ants are simple but the complexity of an ant colony is not (similarly with bees storing information about good flower areas); genes and chromosomes are individually simple, to the level we chemically understand them, and yet put together they encode complex organisms; market trades are individually very simple and yet the behaviour of markets as a whole is not.

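BobJanova's emergent-complexity point can be seen in miniature in Conway's Game of Life (a sketch, not something cited in the thread): two trivial local rules, yet a five-cell "glider" travels coherently across the grid.

```python
from collections import Counter

# Conway's Game of Life: every cell obeys two trivial local rules
# (birth on exactly 3 live neighbours, survival on 2 or 3), yet the
# "glider" pattern moves coherently across the grid.
def step(live):
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the glider reappears shifted one cell down-right.
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

Nothing in the two rules mentions "motion", yet motion is exactly what the aggregate exhibits - the same shape of argument as the ant colony and the market.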
                  • R Rob Philpott

                    Say you had a very very powerful computer and went about emulating the human brain with it. 100 billion neurons each with a thousand odd synapses, then you get into the mucky business of all the different neurotransmitters (Yes, SQL Server Data Center edition may be required). Anyway, upon hitting F5 you find yourself able to converse with your emulation which may be a tad annoyed that it’s been reincarnated as a bit of software. Would this emulation have consciousness, feelings and motivations? In order to function properly you would expect so, but how can you create that when at the end of the day all the computer is doing is a set of simple operations involving registers and memory? For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts? Complexity is often just a product of large amounts of simplicity, but I can't personally make the mental leap from simple operations on a computer or a neuron to the forming of the idea of self. Perhaps one for the pub, it is Friday after all. Mine's a pint.

                    Regards, Rob Philpott.

                    madmatter wrote (#20):

                    Self-awareness seems to be rather difficult to accomplish. You could say that the earth is a large complicated processor, and it was only able to produce one species (that we know of) that could invent the internet. I believe that if it were possible for humans to create self-awareness - the ability to post on electronic forums - the earth would have already done so. Wait... I think I remember seeing a snake posting on Twitter https://twitter.com/BronxZoosCobra[^] - well, there goes that argument. Clearly self-awareness is replicable by humans, because snakes are posting on forums. Next discussion please.

                    • P Pete OHanlon

                      All I can ask in return is: if the Hulk lifts Thor while Thor is holding Mjolnir, does that mean that the Hulk lifted Mjolnir?

                      I was brought up to respect my elders. I don't respect many people nowadays.
                      CodeStash - Online Snippet Management | My blog | MoXAML PowerToys | Mole 2010 - debugging made easier

                      Rob Philpott wrote (#21):

                      Dude, that's Modus Ponens[^]. So yes, I can confirm this.

                      Regards, Rob Philpott.

                      • S S Houghtelin

                        Matthew Faithfull wrote:

                        The most perfect model of a human brain you'll find is a human brain that died 2 minutes ago and it's quite as useless as 2 pounds of jelly for anything except anatomy lessons.

                        Here are some good examples: Here[^], Here[^] and Here[^]

                        It was broke, so I fixed it.

                        Matthew Faithfull wrote (#22):

                        :laugh: Very good. Morally dead for the most part, but still highly toxic. If only we could find a landfill site that would take them.

                        "The secret of happiness is freedom, and the secret of freedom, courage." Thucydides (B.C. 460-400)

                        • T Testing 1 2 uh 7

                          Quote:

                          Ok, well here's something to consider: an AI does not notice its death. You can make it stop updating, which it can't notice because it's not updating (from its perspective, time stopped). If you then delete its state, well... as far as the AI is concerned, none of that ever happened; from one moment to another it just ceased to exist.

                          I would argue that in the case of a sudden death, humans lack the ability to know that they're dead (from the strictly human level - if existence continues it's post-human). Now, if life is slowly failing, a human can detect and anticipate the end of life, but so could a sufficiently sophisticated AI. With the proper inputs, I think a machine could anticipate death at least as well as a human.

                          Lost User wrote (#23):

                          So.. you want to torture an AI? :laugh:

                          • R Rob Philpott

                            Say you had a very very powerful computer and went about emulating the human brain with it. 100 billion neurons each with a thousand odd synapses, then you get into the mucky business of all the different neurotransmitters (Yes, SQL Server Data Center edition may be required). Anyway, upon hitting F5 you find yourself able to converse with your emulation which may be a tad annoyed that it’s been reincarnated as a bit of software. Would this emulation have consciousness, feelings and motivations? In order to function properly you would expect so, but how can you create that when at the end of the day all the computer is doing is a set of simple operations involving registers and memory? For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts? Complexity is often just a product of large amounts of simplicity, but I can't personally make the mental leap from simple operations on a computer or a neuron to the forming of the idea of self. Perhaps one for the pub, it is Friday after all. Mine's a pint.

                            Regards, Rob Philpott.

                            Lost User wrote (#24):

                            Are human beings self-aware?

                            • L Lost User

                              Are human beings self-aware?

                              Rob Philpott wrote (#25):

                              Well, I can't speak for the rest of you, but I like to think I am.

                              Regards, Rob Philpott.

                              • R Rob Philpott

                                Say you had a very very powerful computer and went about emulating the human brain with it. 100 billion neurons each with a thousand odd synapses, then you get into the mucky business of all the different neurotransmitters (Yes, SQL Server Data Center edition may be required). Anyway, upon hitting F5 you find yourself able to converse with your emulation which may be a tad annoyed that it’s been reincarnated as a bit of software. Would this emulation have consciousness, feelings and motivations? In order to function properly you would expect so, but how can you create that when at the end of the day all the computer is doing is a set of simple operations involving registers and memory? For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts? Complexity is often just a product of large amounts of simplicity, but I can't personally make the mental leap from simple operations on a computer or a neuron to the forming of the idea of self. Perhaps one for the pub, it is Friday after all. Mine's a pint.

                                Regards, Rob Philpott.

                                Vark111 wrote (#26):

                                If it's "you" that was recreated in this machine, place yourself in "his" metaphorical shoes. "You" would wake up, conscious, but you would be paralyzed (you can't feel your legs or arms), blind (no eyes), deaf (no ears). You would feel extreme panic as your mind no longer registers the inhalation and exhalation of breath. Eventually - if you didn't go mad - you would calm down and realize that something else is going on; this is when "you" would perhaps realize that you were only a recreation, a simulation of the original. Under these circumstances, if the original attempted to communicate with me, I'd tell myself to FRO.

                                • R Rob Philpott

                                  Say you had a very very powerful computer and went about emulating the human brain with it. 100 billion neurons each with a thousand odd synapses, then you get into the mucky business of all the different neurotransmitters (Yes, SQL Server Data Center edition may be required). Anyway, upon hitting F5 you find yourself able to converse with your emulation which may be a tad annoyed that it’s been reincarnated as a bit of software. Would this emulation have consciousness, feelings and motivations? In order to function properly you would expect so, but how can you create that when at the end of the day all the computer is doing is a set of simple operations involving registers and memory? For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts? Complexity is often just a product of large amounts of simplicity, but I can't personally make the mental leap from simple operations on a computer or a neuron to the forming of the idea of self. Perhaps one for the pub, it is Friday after all. Mine's a pint.

                                  Regards, Rob Philpott.

                                  jschell wrote (#27):

                                  Rob Philpott wrote:

                                  For me, this is a pertinent philosophical question - how do you end up with something which is more than the sum of its parts?

                                  Which philosophers have been pondering for a very long time - definitely since before anything like a computer existed. The basic discussion is: given that person A is talking to person B, how does person A know that B is conscious? How does A know that B thinks the same way that A does? How does A know that, even though B seems to be discussing everything in a reasonable way, B is in fact understanding the same concepts that A is trying to convey? The "Turing Test" is an experimental technique defined in an attempt to at least arrive at an equivalence in behavior.

                                  • R Rob Philpott

                                    Are you a deity? If not, I'm afraid I do not accept your explanation.

                                    Regards, Rob Philpott.

                                    jschell wrote (#28):

                                    Rob Philpott wrote:

                                    Are you a deity? If not, I'm afraid I do not accept your explanation.

                                    Philosophically, that is basically an invalid refutation of the statement. Your statement embodies the assumption that you are in fact an intelligent being rather than just some words in a forum. It also assumes that, even if you are an intelligent entity, one must be a deity - and not just very smart - to be capable of modeling you.

                                    • L Lost User

                                      Why not? At the end of the day, a biological human brain is just a big bag of chemicals. There's no magic involved.

                                      jschell wrote (#29):

                                      harold aptroot wrote:

                                      Why not? At the end of the day, a biological human brain is just a big bag of chemicals. There's no magic involved.

                                      Until you provide a working definition of intelligence, and a test that demonstrates it one way or the other, your statement is nothing more definitive than whether you like vanilla ice cream or not.

                                      • J jschell

                                        harold aptroot wrote:

                                        Why not? At the end of the day, a biological human brain is just a big bag of chemicals. There's no magic involved.

                                        Until you provide a working definition of intelligence, and a test that demonstrates it one way or the other, your statement is nothing more definitive than whether you like vanilla ice cream or not.

                                        Lost User wrote (#30):

                                        What does intelligence have to do with anything? Mere humans certainly aren't intelligent.

                                        • R Rob Philpott

                                          No magic, no. They say that 95% of the brain is in the unconscious, just processing, and that's fine. But perception and emotion - I can't see it myself. Like I said, it's like the sum is more than the parts. If you exclude the divine there has to be an interesting science of how things move up a level, and if we do ever manage to create AI there will be lots of moral questions about the worth of what's created vs. the worth of the human.

                                          Regards, Rob Philpott.

                                          jschell wrote (#31):

                                          Rob Philpott wrote:

                                          Like I said, it's like the sum is more than the parts.

                                          And if you put gasoline and a bunch of steel on the ground, you are not going to be able to drive it from NY to Chicago - but, presumably, you can with an automobile assembled from the same parts.

                                          Rob Philpott wrote:

                                          If you exclude the divine there has to be an interesting science of how things move up a level, and if we do ever manage to create AI there will be lots of moral questions about the worth of what's created vs. the worth of the human.

                                          The number of things that people attach morality to probably isn't infinite, but it is certainly big enough that enumerating them would be endless. So I fail to see how that matters.
