Ethicists wonder if LLM makers have a legal duty to ensure reliability
-
Tech Xplore[^]:
A trio of ethicists at the University of Oxford's Oxford Internet Institute has published a paper in the journal Royal Society Open Science questioning whether the makers of LLMs have legal obligations regarding the accuracy of the answers they give to user queries.
People don't listen to ethicists, so why should AI?
-
Article wrote:
whether the makers of LLMs have legal obligations regarding the accuracy of the answers they give to user queries.
Yes, please.
M.D.V. ;) If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry about it? Help me to understand what I'm saying, and I'll explain it better to you. Rating helpful answers is nice, but saying thanks can be even nicer.
-
But they have no problem with LLM makers sucking up shitloads of copyrighted content without consent? Then they are not ethicists. They are pundits.
Quote:
The researchers also suggest that LLMs used in high-risk areas such as health care should only be trained on truly useful data, such as academic journals.
Ha ha ha ha! Ha ha ha ha ha! Ha ha! Ha ha ha!
Our Forgotten Astronomy | Object Oriented Programming with C++ | Wordle solver
-
I think the responsibility should lie with the person using the AI information. If they rely on it improperly to make an important move, then it is they who should pay the price, not the providers of the AI.
The difficult we do right away... ...the impossible takes slightly longer.
-
At the current level of AI, when models can do no more than act as aids to decision-making, I agree with you. But what happens when AI models take into account so many factors that no mere human can second-guess them?
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows. -- 6079 Smith W.
-
To respond to your question: I believe that judgement is still a uniquely human ability, and I don't see AI acquiring it at any time in the future.
The difficult we do right away... ...the impossible takes slightly longer.