Thoughts on current chatty AI
-
The reports of totally made-up answers that are so confident and well written have me thinking. These AIs seem to have no concept of the difference between fact and fiction. Being asked to produce copy is, to them, the same as being asked for factual content. They regularly plagiarize: taking bits from the question and bits from other writings, they assemble responses as if everything were just copy, even when asked for simple facts, whole excerpts of historical documents or scientific studies, or calculations. Perhaps what is needed is a sort of 'scholar:' tag, so that when asked for answers, the AI won't make things up.
-
Cpichols wrote:
These AI seem to have no concept of the difference between fact and fiction.
Common sense and baseline IQ will tell anyone that AI and AI "Chat" are still very new and in constant development and progression. Eventually, someday (soon?) it will be perfected. To judge it now is premature at best.
-
Cpichols wrote:
These AI seem to have no concept of the difference between fact and fiction.
Why would they? They get fed the internet: GIGO applies!
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony "Common sense is so rare these days, it should be classified as a super power" - Random T-shirt AntiTwitter: @DalekDave is now a follower!
-
Cpichols wrote:
These AI seem to have no concept of the difference between fact and fiction.
AI is just a learning algorithm: it learns to predict the correct output from previously seen inputs. The problem is that it needs a massive amount of data to do this. What I would love to see is an AI that could read the documentation and answer your questions from it. Or better: you just type what you want to do into a console in the respective program, and the AI works out what you mean and does it.
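The "predict outputs from previously seen inputs" idea can be sketched with a toy example. This is a hypothetical 1-nearest-neighbour lookup, nothing like how a real chatbot works, but it shows why such a system can only echo patterns it has already seen:

```python
# Toy "learner": training just stores examples; prediction returns the
# stored answer whose input shares the most words with the query.
# No understanding is involved -- which is why it needs lots of data.
def train(examples):
    # examples: list of (input_text, output_text) pairs
    return list(examples)

def predict(model, query):
    def overlap(a, b):
        return len(set(a.split()) & set(b.split()))
    best = max(model, key=lambda pair: overlap(pair[0], query))
    return best[1]

model = train([
    ("what is the capital of france", "Paris"),
    ("what is two plus two", "4"),
])
print(predict(model, "capital of france"))  # -> Paris
```

Feed it a question it has never seen and it still confidently returns *something* — the closest stored answer — which is a crude analogue of the made-up-answer problem.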
-
Cpichols wrote:
These AI seem to have no concept of the difference between fact and fiction....
Err... not sure I see much difference between what you are describing and basically every clickbait site. Plus quite a few other sites. Not to mention posts by many individuals.
-
You could try this AI, but most likely a lot of rework is required!? Introducing GitHub Copilot X · GitHub[^]
I was thinking more generally. If you have, say, a GIS program, you could just type "I want to create a plot of the current view" and it just does it. The AI you suggested seems to be a coder and code reviewer?
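A crude sketch of that interface idea — a hypothetical keyword dispatcher, nothing like a real GIS console or an actual language model, just the "type what you want, the program does it" shape:

```python
# Hypothetical command console: map free-text requests to program
# actions by keyword matching. A real system would use an actual
# language model; this only illustrates the interface idea.
def plot_current_view():
    return "plotting current view"

def export_layers():
    return "exporting layers"

INTENTS = {
    ("plot", "view"): plot_current_view,
    ("export", "layers"): export_layers,
}

def handle(command):
    words = set(command.lower().split())
    for keywords, action in INTENTS.items():
        if all(k in words for k in keywords):
            return action()
    return "sorry, I don't understand"

print(handle("I want to create a plot of the current view"))
```

The hard part an AI would add is mapping arbitrary phrasings to the right action instead of relying on fixed keywords.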
-
My link was for the coder. I do not use it - but you can sign up for the technical preview. They also announced an AI for docs: GitHub Next | Copilot for Docs[^] But you can only join the waitlist (whatever that means). :confused:
Maybe I can ask the AI for docs what it means. Oh snap :laugh: I would also love to just give an AI a plot and ask it to make a game out of it in the style of Witcher 3 :)
-
Cpichols wrote:
These AI seem to have no concept of the difference between fact and fiction.
Donald Knuth Asked ChatGPT 20 Questions. What Did We Learn? - The New Stack[^]
>64 Some days the dragon wins. Suck it up.
-
Cpichols wrote:
These AI seem to have no concept of the difference between fact and fiction.
The key word in the name is "Artificial". Anyone with half a brain knows that these machines have nothing anywhere close to intelligence.
-
Cpichols wrote:
These AI seem to have no concept of the difference between fact and fiction.
There is no "one" AI. Each one is custom-tailored, by a "creator", to pursue their agenda; probably at your expense. At a minimum, it captures "you" while you're busy conversing with "it" (i.e. the "creator's" data banks).
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
-
What I would love to see is an AI that could read the documentation and be able to just answer your questions from this documentation.
A few months from now we should start seeing "Note: This answer is deprecated, please update to version 4.101" :)
-
There is no "one" AI. Each one is custom-tailored, by a "creator", to pursue their agenda; probably at your expense.
Takes me back to the "good ol' days" as a student, when we "copied" answers from a book only to realize the answer didn't work for our app's needs... The AI back then was called "... 101 for Dummies".
-
Cpichols wrote:
These AI seem to have no concept of the difference between fact and fiction.
That is totally incorrect -- "AI" have no concept of anything. There is no "intelligence" in "artificial intelligence". In simplest terms, any "AI" is just a huge, nested if-then-else. When programming, it's up to the programmer to ensure that an if-then-else is testing the correct things, and is testing them correctly. If there is any point of failure, the results will be wrong at least some of the time.

Machine learning works by feeding it massive amounts of data, and later indicating which is correct and which is not. As has been pointed out, it gets better with training. The problem is that it will never be 100% correct, yet people are already trusting these systems as being so. There is no discrimination, just a lot of tests that must be correct, yet can't be.
-
That is totally incorrect -- "AI" have no concept of anything. ... The problem is that it will never be 100% correct, yet people are already trusting these systems as being so. There is no discrimination, just a lot of tests that must be correct, yet can't be.
This is the point right here. They can't be trusted. Specifically, they can't be trusted to "discern" (via if/else or otherwise come to a conclusion about) the difference between factual sources (historical documents, scientific studies, current events) and fictional ones.
-
The key word in the name is "Artificial". Anyone with half a brain knows that these machines have nothing anywhere close to intelligence.
Can you pass the bar exam?