What are we doing to our kids?
-
Slacker007 wrote:
I will be laughing at all of this, especially at you haters and doubters, every day till I die.
Agreed. I'm amazed by the number of developers and computer scientists (supposedly smart people) who are burying their heads in the sand on this one. Automation and robotics will eliminate physical/manual jobs soon enough. AI will eliminate MANY white-collar jobs in roughly the same timespan. The world needs to figure out what to do with 8.5 billion idle humans.
fgs1963 wrote:
The world needs to figure out what to do with 8.5 billion idle humans.
M.D.V. ;) If something has a solution... why do we have to worry about it? If it has no solution... for what reason would we worry? Help me to understand what I'm saying, and I'll explain it better to you. Rating helpful answers is nice, but saying thanks can be even nicer.
-
I'll need to give that more thought. Personally, I'm unclear on what constitutes instinct anyway, so I may be a bit lost. As to choice, I'd still be unsure where to draw the line. For instance: when a pack of predators attacks the weakest members of a herd of prey, is that instinct or choice? Wouldn't instinct demand they attack the largest/meatiest? Is attacking the weakest members a learned strategy? This reminds me of "A Beautiful Mind". I think humans have probably lost much of the instinct our ancestors must have had and replaced it with learned knowledge. Maybe that's what makes the difference today, but there still must have been a chooser-zero who had the ability and acted on it. Probably some bratty kid refusing to eat his mammoth.
PIEBALDconsult wrote:
Is attacking the weakest members a learned strategy?
The most energy for the least work, plus some morale/long-term thinking? Human babies could grow up and eventually help us, so we let them be; some animals think very differently. (This sort of went dark awfully fast.) At any rate: AI could definitely do that, since it's basically just math: Calculus of variations - Wikipedia[^]
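The "most energy for the least work" idea can be written down as a trivial optimisation. A minimal sketch in Python - all the prey names and numbers below are made up purely for illustration:

```python
# Hypothetical prey options: estimated energy gain (kcal) vs. effort to catch (kcal).
# The "rational" pick maximises net return per unit of effort - no instinct required,
# just arithmetic, which is exactly the sort of thing an AI (or a pack shaped by
# selection pressure) can do.
prey = {
    "large healthy bull": {"gain": 200_000, "effort": 50_000},  # meaty, but costly to catch
    "weak straggler":     {"gain": 120_000, "effort": 8_000},   # less meat, far cheaper
    "average adult":      {"gain": 150_000, "effort": 25_000},
}

def score(option):
    # Net energy gained per unit of energy spent.
    return (option["gain"] - option["effort"]) / option["effort"]

best = max(prey, key=lambda name: score(prey[name]))
print(best)  # the weak straggler wins by a wide margin
```

Attacking the weakest member falls straight out of the arithmetic, no "choice" needed.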
-
Now that ChatGPT can write essays better than school kids, and has answers to lots of questions, it seems to me that it could also answer the exam questions kids are getting in school or college. Of course, the AI proponents are going to praise this as proof of how "intelligent" ChatGPT is - it's so good, it could pass a college exam! But is it? Isn't it rather a poor comment on the nonsense we are doing in schools? Is schooling really meant to be repeating random facts, regurgitating what you have been told so you can spit it out again on an exam paper? Is this "learning"? If you think that's learning, THEN of course ChatGPT is "intelligent". Even Einstein apparently said "most of my work came from imagination, not logical thinking. And if you have problems with mathematics, I assure you mine are still greater."
A school should prepare kids for life, give them some competence they can use, some knowledge they can apply, and make them curious to create and use their imagination. Cramming data down their throats is, in my opinion, NOT what a school should do.
It's just another example of how "automation" takes something away from humans. But is it really taking something away, or is it rather pointing out that this was, after all, not really human work to begin with? Was it human to die as a slave carrying stones to the pyramids in Egypt, or rowing the Roman boats? Certainly it wasn't - and now it's replaced by machines. It certainly created some unemployment, I guess - the really stupid people were then unemployed. But what business does anyone have being stupid? That's where schools come in. Yet now they just turn kids into parrots, easily replaced by chatbots. Maybe ChatGPT just points out that "robotic" repetition really does not have a place in our schools. Something needs to change here, doesn't it?
"we" are doing the same thing to "our kids" that you are doing in the Lounge: increasing methane production.
«The mind is not a vessel to be filled but a fire to be kindled» Plutarch
-
I think you only have to look at how ChatGPT actually works - it tries to figure out the "best" next word in the sentence. That is NOT how to reason or think. Do you think that way? Certainly not - I would guess you have a CONCEPT first, before you open your mouth. ChatGPT has no concept. It is just word babble. Thinking is not talking, no matter how many "scientists" may tell you that the way we think is through words. Einstein did not. And what about musicians? They don't think "now I need to put an F# semiquaver here in this position" (and if they do, their music is balderdash).
-
PIEBALDconsult wrote:
But that means that "something else" must separate humans (and probably our extinct proto-human ancestors) from "the lower animals" -- but not "intelligence".
The simplest concept of what that "something else" is, is the ability to choose. Animals generally respond instinctively (they can be trained to not respond instinctively, but that's still not choice.) We humans are unique in that we can choose not to respond by instinct. Ooh, that cake looks delicious, I'm going to eat it. Or, nice cake, but I'm watching my calories. Or I'm lactose intolerant so eating that would not be a good idea. Conversely, my cat loves to chew on certain plant leaves regardless of how many times he barfs them up later. Therefore, I would say that intelligence is making good choices based on knowledge and skill, and also making poor choices for reasons we are conscious of but choose to ignore.
As your cat does with the leaves.
Latest Article:
SVG Grids: Squares, Triangles, Hexagons with scrolling, sprites and simple animation examples
-
fgs1963 wrote:
The world needs to figure out what to do with 8.5 billion idle humans.
Humans feel most comfortable at a temperature of ~20-22 degrees Centigrade (293-295 Kelvin), and have a body temperature of 37 degrees Centigrade (310 Kelvin). Given 8.5 billion people, each of whom produces ~100 W of heat, the total usable energy is: 8.5 × 10^9 × 100 × (310 − 295) / 295 ≈ 43 GW. This is the total energy production of ten large power stations. From this, you need to subtract the energy required for growing & distributing food and eliminating waste for all those bodies. 'The Matrix' is not very efficient at power production. :sigh:
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows. -- 6079 Smith W.
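The back-of-envelope figure above checks out; here it is reproduced in a few lines of Python, using exactly the post's numbers and its efficiency factor:

```python
# Assumptions (all taken from the post above): 8.5e9 people, ~100 W of body
# heat each, body at 310 K, ambient at 295 K, and the post's efficiency
# factor (T_body - T_ambient) / T_ambient.
people = 8.5e9
heat_per_person_w = 100.0
t_body_k, t_ambient_k = 310.0, 295.0

usable_w = people * heat_per_person_w * (t_body_k - t_ambient_k) / t_ambient_k
print(f"{usable_w / 1e9:.0f} GW")  # prints "43 GW"
```

(A strict Carnot bound would divide by T_body rather than T_ambient, giving ≈41 GW - either way, 'The Matrix' economics don't improve.)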
-
If it's not linked, it's "unknowable". There will be a new movement to not record anything; we'll just exchange information with winks and nods so it can't be used.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
So the ubiquitous cameras will be used by the AI to learn the "winks and nods" language. Resistance is futile!
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows. -- 6079 Smith W.
-
Me? I'm cashing out my home equity to invest in HVAC and septic. Everyone wants to be warm, to be cool, and to have their toilets work.
Charlie Gilley “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759 Has never been more appropriate.
charlieg wrote:
invest in HVAC and septic.
Go one level lower, and become a plumber. Those HVAC and septic companies need skilled workers!
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows. -- 6079 Smith W.
-
In my teaching days (community college level), I created the final exams. Each problem was structured according to Bloom's taxonomy of learning. The students could bring any printed material to the exam - the course textbook or anything else.
The 'a)' question asks for simple facts that can usually be copied directly from the course textbook: 'knowledge'.
'b)' is for the student to show that (s)he understands the meaning of facts: 'comprehension'.
'c)' asks the student to demonstrate how the understanding of facts is used to solve a specific task: 'application'.
'd)' asks for an explanation of a suitable breakdown of a complex situation/system: 'analysis'.
'e)' asks the student to combine various elements / principles / ... into a larger, more complex whole to create a solution: 'synthesis'.
'f)' asks for a critical, 'professional' evaluation of some technique / solution / ...: 'assessment'.
In recent years, many people have put 'synthesis' (a.k.a. 'creating') at the very top of the learning pyramid, with 'assessment' (a.k.a. 'evaluation') at the second level from the top. I tend to agree more with the original ordering: I have met lots of people - both in programming and in other fields - spouting creative ideas like a fountain, producing lots of results, yet completely unable to do any sort of critical evaluation / assessment of either their own creations or the works of others. And the other way around: in order to make a true assessment, you cannot be a stranger to the process of synthesizing a whole from constituents; you must master it quite well. Assessment is going a step further in mastering your field.
It did take some practice & experience to create exams suitable for telling how far up the pyramid a student could climb, but after a handful of exams, once I got a grip on how to phrase the a) - f) questions, I could quite easily see where a candidate started to fall off, no longer mastering that kind of questioning.
Or, the candidate was a true master in the area presented in 1a) to 1f), but didn't handle problem 2) much beyond 2d) - clearly weaker in synthesis and assessment in that area. I am quite sure that ChatGPT would handle most problems up to the c) level quite well - although I guess I could fool it with 'trick questions' that a human would handle. For d) to f), it is not that difficult to phrase the problem statement so that a mechanical, robotic search would easily be revealed as a fake. If I were creating exams nowadays, I would
The toughest exams I did at university were "open book" exams where we could take any printed materials (books) in with us. The exams tested our understanding of the material and ability to use it in novel ways. In other words it tested our mastery of knowledge, not the knowledge itself, i.e. our ability to think.
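The a) - f) ladder described above can be sketched as a small grading structure. The level names follow Bloom's taxonomy in the original ordering the poster prefers; the question stems are hypothetical illustrations, not the original exam wording:

```python
# Bloom's taxonomy levels as used in the exam structure described above,
# in the original ordering (assessment at the very top).
BLOOM_LEVELS = [
    ("a", "knowledge",     "State the definition of ..."),
    ("b", "comprehension", "Explain in your own words what ... means."),
    ("c", "application",   "Use ... to solve the following task."),
    ("d", "analysis",      "Break the following system down into ..."),
    ("e", "synthesis",     "Combine ... into a complete solution for ..."),
    ("f", "assessment",    "Critically evaluate the following solution ..."),
]

def highest_level_passed(marks):
    """Given per-sub-question pass/fail results, report how far up the
    pyramid the candidate climbed before falling off."""
    passed = None
    for letter, name, _stem in BLOOM_LEVELS:
        if not marks.get(letter, False):
            break
        passed = name
    return passed

# A candidate strong through application, weak from analysis onward:
print(highest_level_passed({"a": True, "b": True, "c": True, "d": False}))
```

By the poster's estimate, a chatbot's marks would typically stop at "application".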
-
"we" are doing the same thing to "our kids" that you are doing in the Lounge: increasing methane production.
You may be - I recommend a change of diet. The rest of us are comparing and exchanging thoughts and ideas. I believe it's called 'intelligence'.
-
Now, when ChatGPT can write essays better than school kids, and has answers to lots of questions, it seems to me that it could also answer the exam questions kids are getting in school or college.
Two problems:
- I'm sure everyone here has already met a very intelligent person who, in a timespan of a few days or weeks, came to two mutually exclusive, perfectly logical outcomes, which changed the course of the project. Probably multiple times during the project. Some things had to be decided AND written down. In history, too.
- The real professions have to be based on "repeating random facts": there is no use for a lawyer who has to search the constitutional laws during a divorce case, nor for a doctor who builds up the treatment for a cold from the basics of microbiology.
Also, learning "useless" things is the same exercise for the brain as doing reps of a workout. Solving crosswords is useful against dementia, for example. Learning, and reciting what was learned, keeps the gears running.
-
nepdev wrote:
mindless robot
wow, just wow. :confused: :doh: :sigh:
Just what is this consciousness that makes you Human? Does this question assert an untruth? :rolleyes:
-
Now, when ChatGPT can write essays better than school kids, and has answers to lots of questions, it seems to me that it could also answer the exam questions kids are getting in school or college.
"In a stunning announcement today, a plug-in for the ChatGPT AI was released. If you have access to a 3D printer and the raw materials, you can now have the AI create your own children for you. You can select from a broad range of physical and mental characteristics for your child: gender (or lack thereof), ethnicity, demeanor, intelligence, and so on. The crowd was shocked when the presenter from the plug-in company jumped into the 3D printer's material hopper, and was then reconstituted as his own child. After a few moments of confusion (an issue to be corrected in version 1.1, according to company officials) the child continued the presentation."
Software Zen:
delete this;
-
Now, when ChatGPT can write essays better than school kids, and has answers to lots of questions, it seems to me that it could also answer the exam questions kids are getting in school or college.
Most of the certifications I had to take were worthless rote BS. I had to memorize the properties of different optical cables - they're so expensive that you'd never just order them without making sure they were the right kind. Minor point: the Egyptian pyramids were not built by slaves. They were built by corvée labor, drafted during the flood season of the Nile - the equivalent of a tax in a non-monetary economy. They ran massive cattle drives from the Delta and fed the workers far more beef than they would ever have seen otherwise in their lives. They were also paid in beer.
-
Don't you see that ChatGPT is only going to get "smarter" with time? Don't you see that? It's passing all the tests - barely, but passing. It won't be long at all before it passes all the tests with 100% scores. Humans make silly mistakes, like forgetting to remove all the gauze from a site before sewing up. AI bots will not forget. I will be laughing at all of this, especially at you haters and doubters, every day till I die.
The easiest way to see what these "AI" models are doing is by looking at the "art" models. The systems are very good at finding source material to steal (and there are lawsuits filed), but the rest is just a merge/morph operation with no real understanding of the material. Sticking with the art models: if you tell one to "draw" a woman with red hair wearing a black dress, you'll get several reasonable representations. However, if you tell it to draw a Christmas parade, you'll get something that looks OK from a distance, but the people all have warped faces, or too many arms, or some such issue. This is because the system doesn't actually understand the material. It's the same thing with the text models. You tell it you want a paper on the theology of bed bugs, and it'll dutifully go out and find a bunch of source material on theology and bed bugs and attempt to merge these concepts into something that "sounds right". The final product will be as nonsensical as the original input. GIGO. Now, if you take this initial technology and use it to train the next model on the concepts of "person", "dog", "car", "love", etc., you might get another step closer. However, there still isn't a reasoning engine in the mix. Until then, these toys won't be able to pass the Turing test. For all the fluff and thunder in the news, there are just as many stories of how easily these simple models can be tripped up, fooled and twisted. The true danger of "AI" at this point in time is how much people believe that it exists.
-
Humans feel most comfortable at a temperature of ~20-22 degrees Centigrade (293-295 Kelvin), and have a body temperature of 37 degrees Centigrade (310 Kelvin).
-
The easiest way to see what these "AI" models are doing is by looking at the "art" models.
We as humans steal too. Artists steal ALL THE TIME; it's called "inspiration". Most of you are critiquing and criticizing AI's abilities now, but I can only hope that you are all intelligent enough to see past the now, and into what it can and will be doing in the near future. AI - angel to some, demon to others.
-
Now, when ChatGPT can write essays better than school kids, and has answers to lots of questions, it seems to me that it could also answer the exam questions kids are getting in school or college.
They used to grind us down with pages of sums because they needed us to be calculating machines. Now they grind us down with bullshit because they need bullshitters. Now that AI really is a master of bullshit, they should leave us to be human beings.
-
Slacker007 wrote:
Most of you guys are critiquing and criticizing AI's abilities now, but I can only hope that you are all intelligent enough to see past the now, and into what it can and will be doing in the near future.
I'm old enough to remember the critiques from old mainframe programmers when PCs were introduced in the late '70s / early '80s. Much the same dismissiveness.
-
Just what is this consciousness that makes you Human? Does this question assert an untruth? :rolleyes:
I find it interesting that questions like this are being asked in context of a glorified search engine with fancy language output.