Code Project
Ethicists wonder if LLM makers have a legal duty to ensure reliability

The Insider News
Tags: html, com, ai-models, question, announcement
7 Posts, 6 Posters
Kent Sharkey (#1) wrote:

    Tech Xplore[^]:

    A trio of ethicists at the University of Oxford's Oxford Internet Institute has published a paper in the journal Royal Society Open Science questioning whether the makers of LLMs have legal obligations regarding the accuracy of the answers they give to user queries.

    People don't listen to ethicists, so why should AI?


Nelek (#2) wrote:

      Article wrote:

      whether the makers of LLMs have legal obligations regarding the accuracy of the answers they give to user queries.

      Yes, please.

M.D.V. ;)
If something has a solution... why do we have to worry about it? If it has no solution... for what reason should we worry?
Help me to understand what I'm saying, and I'll explain it better to you.
Rating helpful answers is nice, but saying thanks can be even nicer.


David ONeil (#3) wrote:

        But they have no problem with LLM makers sucking up shitloads of copyrighted content without consent? Then they are not ethicists. They are pundits.

        Quote:

        The researchers also suggest that LLMs used in high-risk areas such as health care should only be trained on truly useful data, such as academic journals

        Ha ha ha ha! Ha ha ha ha ha! Ha ha! Ha ha ha!

        Our Forgotten Astronomy | Object Oriented Programming with C++ | Wordle solver


Richard Andrew x64 (#4) wrote:

          I think the responsibility should lie with the person using the AI information. If they rely on it improperly to make an important move, then it is they who should pay the price, not the providers of the AI.

          The difficult we do right away... ...the impossible takes slightly longer.


Daniel Pfeffer (#5) wrote:

            At the current level of AI, when they can do no more than act as aids to decision-making, I agree with you. What happens when AI models take into account so many factors that no mere human can second-guess them?

            Freedom is the freedom to say that two plus two make four. If that is granted, all else follows. -- 6079 Smith W.


Richard Andrew x64 (#6) wrote:

To answer your question: I believe judgement is still a uniquely human ability, and I don't see AI acquiring it at any point in the future.

              The difficult we do right away... ...the impossible takes slightly longer.

Richard Andrew x64 wrote:

I think the responsibility should lie with the person using the AI information. If they rely on it improperly to make an important move, then it is they who should pay the price, not the providers of the AI.

obermd (#7) wrote:

Yes and no. When I asked Bing Search for the results of the New Hampshire primary, it flat-out lied to me.
