People struggle to tell humans apart from ChatGPT in five-minute chat conversations, tests show
-
Tech Xplore[^]:
People find it difficult to distinguish between the GPT-4 model and a human agent when interacting with them as part of a 2-person conversation.
Hint: ChatGPT is the one that seems reasonable
I've definitely interacted with an LLM-driven helper chatbot, maybe a few of them, just in the last month or two. Amongst other things, I've had to talk to Amazon about subscriptions and refunds, ISPs about new service provision, and an insurer about benefits/coverage details and in/out-of-network providers. The biggest indicators for me were near-perfect spelling and grammar, along with not being able to help without transferring me to a person. I don't think 80% of people would have noticed.

In one instance I'm pretty sure the service provider was aiding the subterfuge by giving the chatbot an Indian name. I found that a little humorous: "No, you are not talking to a robot! You're talking to a support agent in another country to whom we've outsourced our support staff." But all of them worked more or less flawlessly, and were at most only half as frustrating as the phone systems where you have to repeatedly press buttons or keep saying "agent," "person," "real person," "live agent," etc., to get a warm body.