The question that no LLM can answer and why it is important
-
Mind Prison:
“Which episode of Gilligan’s Island was about mind reading?”
As long as we have more knowledge about syndicated sitcoms, the AIs will never win
-
After skimming the article, its central point seems to be that LLMs can't reason. They merely do a great job summarizing and regurgitating the data that they've scanned, so it's hard to know how accurate they are. This hardly comes as a surprise given the many documented, deranged responses from various LLMs.
Robust Services Core | Software Techniques for Lemmings | Articles
The fox knows many things, but the hedgehog knows one big thing.
-
See if I can make the following make some sense. LLMs currently GENERATE output, they don't reference it. They were built to simulate human thinking, in a fashion. It's like asking someone, "Give me a sentence." The person has to figure out the context; they know what a sentence should be, then fill in content to form one that is either coherent or WTF. Some people are great at this and many are not. But ask for a sentence BASED on a Shakespeare play and that narrows down the content, and the top-of-mind words (the most popular ones that person knows) steer the sentence. Ask them to recite a line from Shakespeare and it gets more specific still: can the person recall a line verbatim, or do they use what they know about how a Shakespeare sentence is built to mix recall with construction?

The LLMs skip the reference step, jumble all the data together, and spit something out. Hence the lack of factual data. Contrast that with a Google index, which links to specific data. Mix the two and suggestion comes into play, which is the next step. Load an LLM with your domain data and someone can ask, "Give me a wedding dress with a nice pattern," and it will write, "This one is recommended for late spring, and here is the link." Copyright issues currently make that kind of direct reference a challenge to do en masse for web pages, except with individual creators, who are the very creators the LLMs are pushing out.
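To make the "mix the two" idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the catalog, the tag-overlap scoring, and the template that stands in for the generative step are invented for illustration, not any real product's API.

    # Hypothetical sketch: pair a generator with an index that points at
    # specific records, so the answer can cite a real link instead of a
    # jumbled recollection. Catalog entries and URLs are made up.
    CATALOG = [
        {"name": "A-line lace gown",   "tags": {"lace", "pattern", "spring"},
         "url": "https://example.com/dress/1"},
        {"name": "Satin sheath dress", "tags": {"satin", "plain", "summer"},
         "url": "https://example.com/dress/2"},
        {"name": "Floral tulle gown",  "tags": {"floral", "pattern", "spring"},
         "url": "https://example.com/dress/3"},
    ]

    def retrieve(query_words, catalog):
        """The index step: score each item by tag overlap with the query."""
        scored = [(len(query_words & item["tags"]), item) for item in catalog]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return scored[0][1]  # a specific record, with a link to cite

    def generate_reply(item):
        """The generative step: phrase the pick in natural language.
        A real system would hand `item` to an LLM; a template stands in."""
        return (f"This one, the {item['name']}, is recommended for late "
                f"spring, and here is the link: {item['url']}")

    query = {"wedding", "dress", "nice", "pattern", "spring"}
    print(generate_reply(retrieve(query, CATALOG)))
    # -> This one, the A-line lace gown, is recommended for late spring, ...

The point of the split is that the link comes from the index, not from the generator, so that part at least can't be jumbled.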
-
The dirty little secret of LLM training is that it is essentially stacking the deck and then managing to pull an ace... maybe more than 50% of the time. But it's still "luck". We've rigged the game by telling the model what we're most likely talking about when we say x. This is a fundamental issue in contexts where it absolutely shouldn't be allowed to be wrong (like handing it its own reins). Ask it if a football is a sphere :)
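A toy version of that rigged game, for the sake of argument: a bigram model counts which word follows which in a contrived five-sentence corpus, then completes a prompt with the statistically likeliest next word. Nothing anywhere checks whether the answer is true. The corpus and the model are made-up illustrations, not how any production LLM is actually built.

    # Hypothetical sketch: the "stacked deck" as a toy bigram model. The
    # corpus is contrived so that "sphere" usually follows "a"; the model
    # then calls a football a sphere because that's the likeliest card.
    from collections import Counter, defaultdict

    corpus = ("a ball is a sphere . a globe is a sphere . "
              "a marble is a sphere . an orange is a sphere . "
              "a football is a prolate spheroid .").split()

    # Count next-word frequencies: this is the deck we stack in training.
    follows = defaultdict(Counter)
    for prev, word in zip(corpus, corpus[1:]):
        follows[prev][word] += 1

    def complete(prompt, steps=1):
        words = prompt.split()
        for _ in range(steps):
            words.append(follows[words[-1]].most_common(1)[0][0])
        return " ".join(words)

    print(complete("a football is a"))  # -> "a football is a sphere"

Four spheres outvote one spheroid, so the likely answer beats the correct one; scaled up and smoothed out, that is roughly the mechanism being complained about.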