What are we doing to our kids?

The Lounge
Tags: learning, testing, business, tools, help
62 Posts 32 Posters
  • B BillWoodruff

    "we" are doing the same thing to "our kids" that you are doing in the Lounge: increasing methane production.

    «The mind is not a vessel to be filled but a fire to be kindled» Plutarch

    H Offline
    haughtonomous
    wrote on last edited by
    #39

    You may be - I recommend a change of diet. The rest of us are comparing and exchanging thoughts and ideas. I believe it's called 'intelligence'.

    1 Reply Last reply
    0
    • N nepdev

      Now that ChatGPT can write essays better than school kids, and has answers to lots of questions, it seems to me that it could also answer the exam questions kids are getting in school or college. Of course, the AI proponents are going to praise this as proof of how "intelligent" ChatGPT is - it's so good, it could pass a college exam! But is it? Isn't it rather a poor comment on what nonsense we are doing in schools? Is schooling really meant to be repeating random facts, regurgitating what you have been told so you can spit it out again on an exam paper? Is this "learning"? If you think that's learning, THEN of course ChatGPT is "intelligent". Even Einstein apparently said, "Most of my work came from imagination, not logical thinking. And if you have problems with mathematics, I assure you mine are still greater."

      A school should prepare kids for life, give them some competence they can use, some knowledge they can apply, and make them curious to create and use their imagination. Cramming data down their throats is, in my opinion, NOT what a school should do.

      It's just another example of how "automation" takes something away from humans. But is it really taking something away, or is it rather pointing out that this was, after all, not really human work to begin with? Was it human to die as a slave while carrying stones to the pyramids in Egypt, or rowing the Roman boats? Certainly it wasn't - and now it's replaced by machines. It certainly created some unemployment, I guess - the really stupid people were then unemployed. But what business does anyone have to be stupid? That's where schools come in. But now they just turn kids into parrots, easily replaced by chatbots. Maybe ChatGPT just points out that "robotic" repetition really does not have a place in our schools. Something needs to change here, doesn't it?

      P Offline
      Peter Adam
      wrote on last edited by
      #40

      Two problems:

      - I'm sure everyone here has already met a very intelligent person who, in a span of a few days or weeks, came to two mutually exclusive, perfectly logical outcomes, each of which changed the course of the project. Probably multiple times during the project. Some things had to be decided AND written down. In history, too.
      - The real professions have to be based on "repeating random facts": there is no use for a lawyer who has to search through constitutional law during the course of a divorce, nor for a doctor who builds up the treatment for a cold from the basics of microbiology.

      Also, learning "useless" things is the same exercise for the brain as doing reps of a workout. Solving crosswords is useful against dementia, for example. Learning, and reciting what you have learned, keeps the gears running.

      1 Reply Last reply
      0
      • S Slacker007

        nepdev wrote:

        mindless robot

        wow, just wow. :confused: :doh: :sigh:

        P Offline
        PhilipOakley
        wrote on last edited by
        #41

        Just what is this consciousness that makes you Human? Does this question assert an untruth? :rolleyes:

        M 1 Reply Last reply
        0
        • N nepdev

          [nepdev's post ("Now that ChatGPT can write essays better than school kids..."), quoted in full above]

          G Offline
          Gary Wheeler
          wrote on last edited by
          #42

          _"In a stunning announcement today, a plug-in for the ChatGPT AI was released. If you have access to a 3D printer and the raw materials, you can now have the AI create for you your own children. You can select from a board range of physical and mental characteristics for your child: gender (or lack thereof), ethnicity, demeanor, intelligence, and so on. The crowd was shocked when the presenter from the plug-in company jumped into the 3D printer's material hopper, and was then reconstituted as his own child. After a few moments of confusion (an issue to be corrected in version 1.1 according to company officials) the child continued the presentation."

          Software Zen: delete this;

          _

          1 Reply Last reply
          0
          • N nepdev

            [nepdev's post ("Now that ChatGPT can write essays better than school kids..."), quoted in full above]

            P Offline
            Paul Kemner
            wrote on last edited by
            #43

            Most of the certifications I had to take were worthless rote BS. I had to memorize the properties of different optical cables - they're so expensive that you'd never just order them without making sure they were the right kind.

            Minor point: the Egyptian pyramids were not built by slaves. They were built by corvée labor, drafted during the flood season of the Nile - the equivalent of a tax in a non-monetary economy. The Egyptians did massive cattle drives from the Delta and fed the workers far more beef than they would ever have seen otherwise in their lives. They were also paid in beer.

            1 Reply Last reply
            0
            • S Slacker007

              Don't you see that ChatGPT is only going to get "smarter" with time? Don't you see that? It's passing all the tests - barely, but passing. It won't be long at all before it passes all the tests with 100% scores. Humans make silly mistakes, like forgetting to remove all the gauze from a site before sewing up. AI bots will not forget. I will be laughing at all of this, especially at you haters and doubters, every day till I die.

              B Offline
              Bill Castle
              wrote on last edited by
              #44

              The easiest way to see what these "AI" models are doing is to look at the "art" models. The systems are very good at finding source material to steal (and there are lawsuits filed), but the rest is just a merge/morph operation with no real understanding of the material.

              Sticking with the art model: if you tell it to "draw" a woman with red hair wearing a black dress, you'll get several reasonable representations. However, if you tell it to draw a Christmas parade, you'll get something that looks OK from a distance, but the people all have warped faces, or too many arms, or some such issue. This is because the system doesn't actually understand the material.

              It's the same thing with the text models. You tell it you want a paper on the theology of bed bugs, and it'll dutifully go out and find a bunch of source material on theology and bed bugs and attempt to merge these concepts into something that "sounds right". The result is a final product that is as nonsensical as the original input. GIGO.

              Now, if you take this initial technology and use it to train the next model on the concepts of "person", "dog", "car", "love", etc., you might get another step closer. However, there still isn't a reasoning engine in the mix. Until there is, these toys won't be able to pass the Turing test. For all the fluff and thunder in the news, there are just as many stories of how easily these simple models can be tripped up, fooled and twisted. The true danger of "AI" at this point in time is how much people believe that it exists.

              S 1 Reply Last reply
              0
              • D Daniel Pfeffer

                Humans feel most comfortable at a temperature of ~20-22 degrees Centigrade (293-295 Kelvin), and have a body temperature of 37 degrees Centigrade (310 Kelvin). Given 8.5 billion people, each of whom produces ~100 W of heat, we have for the total usable energy: 8.5 * 10^9 * 100 W * (310 - 295) / 295 ≈ 43 GW. This is the total energy production of ten large power stations. From this, you need to subtract the energy required for growing and distributing food, and for waste elimination, for all those bodies. 'The Matrix' is not very efficient at power production. :sigh:

                Freedom is the freedom to say that two plus two make four. If that is granted, all else follows. -- 6079 Smith W.
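
                A minimal back-of-the-envelope check of the figure above, in Python. The population, per-person heat output and temperatures are the ones quoted in the post, and the efficiency factor (T_body - T_ambient) / T_ambient is the one the post uses; a strict Carnot limit would divide by T_body instead, giving roughly 41 GW. Variable names are illustrative only.

                # Sanity check of the "humans as a power source" figure above.
                # All inputs come from the post; nothing here is measured data.
                people = 8.5e9            # world population
                heat_per_person = 100.0   # W of metabolic heat per person
                t_body = 310.0            # K (37 degrees Centigrade)
                t_ambient = 295.0         # K (~22 degrees Centigrade)

                usable_fraction = (t_body - t_ambient) / t_ambient   # efficiency factor used in the post
                total_usable = people * heat_per_person * usable_fraction   # watts

                print(f"Usable power: {total_usable / 1e9:.1f} GW")   # prints ~43.2 GW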

                A Offline
                atali
                wrote on last edited by
                #45

                Bender from Futurama: "Wouldn't it be better to use a potato? Or a battery?"

                1 Reply Last reply
                0
                • B Bill Castle

                  [Bill Castle's post ("The easiest way to see what these "AI" models are doing..."), quoted in full above]

                  S Offline
                  Slacker007
                  wrote on last edited by
                  #46

                  We as humans steal too. Artists steal ALL THE TIME; it's called "inspiration". Most of you guys are critiquing and criticizing AI's abilities now, but I can only hope that you are all intelligent enough to see past the now, and into what it can and will be doing in the near future. AI - angel to some, demon to others.

                  F J 2 Replies Last reply
                  0
                  • N nepdev

                    [nepdev's post ("Now that ChatGPT can write essays better than school kids..."), quoted in full above]

                    J Offline
                    john morrison leon
                    wrote on last edited by
                    #47

                    They used to grind us down with pages of sums because they needed us to be calculating machines. Now they grind us down with bullshit because they need bullshitters. Now that AI really is a master of bullshit, they should leave us to be human beings.

                    1 Reply Last reply
                    0
                    • S Slacker007

                      [Slacker007's post ("We as humans steal too..."), quoted in full above]

                      F Offline
                      fgs1963
                      wrote on last edited by
                      #48

                      Slacker007 wrote:

                      Most of you guys are critiquing and criticizing AI's abilities now, but I can only hope that you are all intelligent enough to see past the now, and into what it can and will be doing in the near future.

                      I'm old enough to remember the critiques from old mainframe programmers when PCs were introduced in the late '70s / early '80s. Much the same attitude.

                      1 Reply Last reply
                      0
                      • P PhilipOakley

                        Just what is this consciousness that makes you Human? Does this question assert an untruth? :rolleyes:

                        M Offline
                        Member_5893260
                        wrote on last edited by
                        #49

                        I find it interesting that questions like this are being asked in context of a glorified search engine with fancy language output.

                        1 Reply Last reply
                        0
                        • N nepdev

                          [nepdev's post ("Now that ChatGPT can write essays better than school kids..."), quoted in full above]

                          M Offline
                          Member_5893260
                          wrote on last edited by
                          #50

                          Well, it's a glorified search engine with a fancy language-output algorithm. But people are easily pleased... It's interesting that it (apparently) passed the Turing test recently. It is possible, of course, to point out that the Turing test can equally well be applied to the person administering it as to the machine being tested... that ChatGPT passed says more about the current educational level of the species than it does about the AI itself.

                          J 1 Reply Last reply
                          0
                          • N nepdev

                            [nepdev's post ("Now that ChatGPT can write essays better than school kids..."), quoted in full above]

                            M Offline
                            Mirko796
                            wrote on last edited by
                            #51

                            As I am getting older I am learning that we have handed too much of our lives over to "automatism", and it is not a good thing, since we are now getting negative returns. For example:

                            - We don't walk or do any physical activity. Why should we, when everything is so much easier with a car, or with an elevator to get to the 3rd floor, or with an electric chainsaw when you just need to cut through a one-inch-thick branch... you get the point... and eventually our body and brain suffer. (I just started to run a few years ago to try to counter this, and I'm still impressed with the overall gains I get back.)
                            - We don't think deeply. Why should we, when everything is available on Google (and via ChatGPT now)? So we slowly lose our imagination, since we don't have a need to use it.
                            - We don't talk to people face to face; it's easier to automate this by using chats and all the other sorts of modern tools.

                            I do like ChatGPT; it's the easiest way to get an answer to some fairly complex questions, BUT you must validate it like any other answer you get instead of following it blindly (so that is not ChatGPT's problem). ChatGPT will become even better, and that's not a bad thing, but we should treat it as we should treat cars - and not use it for every elephant thing; we still must try to think for the sake of thinking.

                            However... people are lazy, and kids are even "lazier" (they are very good at finding the path that consumes the least energy - I have two of them... oh boy...), and I am really afraid of what ChatGPT and similar tech will do to them, for the reasons listed above.

                            J 1 Reply Last reply
                            0
                            • S Slacker007

                              [Slacker007's post ("Don't you see that ChatGPT is only going to get "smarter" with time?..."), quoted in full above]

                              J Offline
                              jschell
                              wrote on last edited by
                              #52

                              Slacker007 wrote:

                              Don't you see that ChatGPT is only going to get "smarter" with time?

                              I see claims about that. And claims that self driving cars are just around the corner. And flying cars are just around the corner. And autonomous robots are just around the corner (got to love the marketing videos of the robot company that has them dancing and opening doors.)

                              Slacker007 wrote:

                              Humans make silly mistakes,

                              And when they attempt to predict the future that is where they fail all the time. Even the near future.

                              In 1970 Marvin Minsky told Life Magazine, “from three to eight years we will have a machine with the general intelligence of an average human being.”

                              1 Reply Last reply
                              0
                              • S Slacker007

                                AI bot haters and doubters remind me of this historical event: Get A Horse! America’s Skepticism Toward the First Automobiles | The Saturday Evening Post[^]

                                 J Offline
                                jschell
                                wrote on last edited by
                                #53

                                Slacker007 wrote:

                                AI bot haters and doubters remind me of this historical event:

                                Hindsight successes do not prove predictions of the future.

                                1 Reply Last reply
                                0
                                • S Slacker007

                                   [Slacker007's post ("We as humans steal too..."), quoted in full above]

                                   J Offline
                                  jschell
                                  wrote on last edited by
                                  #54

                                  Slacker007 wrote:

                                  but I can only hope that you are all intelligent enough to see past the now,

                                   I am intelligent enough to know that these sorts of claims show up every 10 years or so. And none of them pan out. Moreover, there are hundreds and even thousands (or more) of claims every single year about something that will 'revolutionize' this, that, or the other thing. However, there is no such thing as a 'revolutionary' development. Everything new is built on the achievements of the past. This latest cycle of AI is not in fact new. The companies involved have been trying to make these models better for years, if not decades. And yet the current level is all that they have achieved.

                                  1 Reply Last reply
                                  0
                                  • C charlieg

                                    After having taught my children basic math, algebra, geometry and trig, I absolutely despise the move away from "rote memorization". The add/sub/mult/div tables drilled the fundamental facts and got children past the mechanical details, which is what actually allowed them to think. They get to algebra - which is a fascinating time - and rather than wrestle with basic arithmetic, they can focus on the abstract concepts. Creative thinking, as it were... that golden midpoint. But Bureaucracy gets paid for elephanting basic concepts to prove they need a job.

                                    Charlie Gilley “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759 Has never been more appropriate.

                                    J Offline
                                    jschell
                                    wrote on last edited by
                                    #55

                                    charlieg wrote:

                                    I absolutely despise walking away from "rote memorization"

                                    Not to mention: how do you teach them discipline, drive, focus, and working towards a goal and completing it? It certainly seems to me that taking a test once a week to regurgitate what was covered in the previous week (or month) has a chance of producing some positive improvement in the qualities I mentioned.

                                    charlieg wrote:

                                    Creative thinking as it were

                                    I have never seen any studies of the alternative approaches that could demonstrate creativity was actually being taught. But you certainly can measure whether a 10-year-old knows how to add two numbers together.

                                    1 Reply Last reply
                                    0
                                    • F fgs1963

                                      Slacker007 wrote:

                                      I will be laughing at all of this, especially at you haters and doubters, every day till I die.

                                      Agreed. I'm amazed by the number of developers and computer scientists (supposedly smart people) that are burying their heads in the sand on this one. Automation and robotics will be eliminating physical/manual jobs soon enough. AI will be eliminating MANY white-collar jobs in roughly the same timespan. The world needs to figure out what to do with 8.5 billion idle humans.

                                      M Offline
                                      Matt Bond
                                      wrote on last edited by
                                      #56

                                      Obligatory XKCD.com

                                      Bond Keep all things as simple as possible, but no simpler. -said someone, somewhere

                                      1 Reply Last reply
                                      0
                                      • P PIEBALDconsult

                                        I'll need to give that more thought. Personally, I'm unclear on what constitutes instinct anyway, so I may be a bit lost. As to choice, I'd still be unsure where to draw the line. For instance: When a pack of predators attacks the weakest members of a herd of prey, is that instinct or choice? Wouldn't instinct demand they attack the largest/meatiest? Is attacking the weakest members a learned strategy? This reminds me of "A Beautiful Mind". I think humans have probably lost much of the instinct our ancestors must have had and replaced it with learned knowledge. Maybe that's what makes the difference today, but there still must have been chooser-zero who had the ability and acted on it. Probably some bratty kid refusing to eat his mammoth.

                                        J Offline
                                        jschell
                                        wrote on last edited by
                                        #57

                                        PIEBALDconsult wrote:

                                        Is attacking the weakest members a learned strategy?

                                        It certainly is learned for the larger carnivorous mammals. Packs are the easiest place to see this, but even among non-pack animals, the younger ones often have to survive on smaller prey because they keep picking the wrong prey animal to attack.

                                        1 Reply Last reply
                                        0
                                        • M Member_5893260

                                          [Member_5893260's post ("Well, it's a glorified search engine..."), quoted in full above]

                                          J Offline
                                          jschell
                                          wrote on last edited by
                                          #58

                                          Dan Sutton wrote:

                                          it's interesting that it (apparently) passed the Turing test recently:

                                          But that is not new. See Eugene Goostman. Moreover humans can fail to be recognized as intelligent as well. https://www.nbcnews.com/tech/tech-news/humans-mistake-humans-machines-during-turing-tests-n163206[^]

                                          M 1 Reply Last reply
                                          0