Code Project – The Insider News

The question that no LLM can answer and why it is important

Tags: question, ai-models
5 Posts · 4 Posters
Kent Sharkey
#1

    Mind Prison[^]:

    “Which episode of Gilligan’s Island was about mind reading?”

    As long as we have more knowledge about syndicated sitcoms, the AIs will never win

In reply to Kent Sharkey (#1):

Greg Utas
#2

      After skimming the article, its central point seems to be that LLMs can't reason. They merely do a great job summarizing and regurgitating the data that they've scanned, so it's hard to know how accurate they are. This hardly comes as a surprise given the many documented, deranged responses from various LLMs.

      Robust Services Core | Software Techniques for Lemmings | Articles
      The fox knows many things, but the hedgehog knows one big thing.


In reply to Kent Sharkey (#1):

maze3
#3
See if I can make the following make sense: LLMs currently GENERATE output, they don't reference it. They have been built to simulate human thinking, in a fashion. If you ask someone to give you a sentence, that person has to figure out the context. They know what a sentence should be, and then fill in the content to form it, coherent or not. Some people are great at this and many are not. But if you ask for a sentence BASED on a Shakespeare play, that narrows the content, and the top-of-mind words (the most popular ones that person knows) steer the sentence. Add "recite a line from Shakespeare" and it gets more specific still: can the person recall a line verbatim, or do they use what they know about how a Shakespeare sentence is constructed and mix recall with construction? LLMs skip the reference step, jumble all the data together, and spit something out. Hence the lack of factual grounding. Contrast that with Google's index, which links to specific data. Mix the two and suggestion comes into play, which is the next step: load an LLM with your domain data and someone can ask "give me a wedding dress with a nice pattern", and it will write "this one is recommended for late spring, and here is the link". Copyright issues make that kind of direct reference a challenge at scale for web pages, unless you deal with individual creators, which is exactly who the LLMs are pushing out.
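The generate-versus-reference distinction above can be put in code. This is a toy sketch, not any real LLM or search engine: the word table, the index contents, and both function names are made up for illustration.

```python
import random

# Toy "LLM": picks the next word by sampling from learned probabilities.
# The output looks fluent but carries no link back to any source.
NEXT_WORD = {
    "to": {"be": 0.6, "go": 0.3, "sleep": 0.1},
    "be": {"or": 0.7, "not": 0.3},
}

def generate(word, steps, rng):
    out = [word]
    for _ in range(steps):
        choices = NEXT_WORD.get(out[-1])
        if not choices:
            break  # no learned continuation; stop generating
        words = list(choices)
        weights = [choices[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

# Toy "index": maps a query straight to stored text plus a citation,
# the way a search engine links to a specific page.
INDEX = {"hamlet quote": ("To be, or not to be", "Hamlet, Act 3, Scene 1")}

def reference(query):
    return INDEX.get(query)  # verbatim text with a source, or nothing

rng = random.Random(42)
print(generate("to", 3, rng))     # fluent but unsourced
print(reference("hamlet quote"))  # exact text with a citation
```

The "mix the two" idea in the post is essentially retrieval plus generation: look the facts up in the index first, then let the generator phrase the answer around them.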

In reply to Greg Utas (#2):

jochance
#4
The dirty little secret of LLM training is that it is essentially stacking the deck and then managing to pull an ace, maybe more than 50% of the time. But it's still "luck". We've rigged the game by telling it what we're most likely talking about when we say X. This is a fundamental issue in contexts where it absolutely shouldn't be allowed to be wrong (like handing it its own reins). Ask it if a football is a sphere :)
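The "stacked deck" framing can be put in rough numbers. This is a toy calculation assuming each token is independently correct with a fixed probability, which real models don't satisfy, but it shows why per-token luck compounds badly over a long answer:

```python
# If each token is "right" with probability p, a whole answer of n
# independent tokens is right with probability p**n, which decays fast.
def p_all_correct(p_token: float, n_tokens: int) -> float:
    return p_token ** n_tokens

for n in (1, 10, 50, 100):
    print(n, round(p_all_correct(0.95, n), 3))
# 1   0.95
# 10  0.599
# 50  0.077
# 100 0.006
```

Even a deck stacked to 95% per token gives barely better than a coin flip over ten tokens, which is one way to read "pulling an ace maybe more than 50% of the time".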

          1 Reply Last reply
          0
In reply to maze3 (#3):

jochance
#5
The domain-data point you bring up at the end is why I think general AI driven by LLMs isn't coming anytime soon, and if it does, we should probably smash it with as giant and hard a hammer as we can find.
