Shades of dotcom boomyness
-
AI IS THE WAY AND THE TRUTH AND THE LIGHT. IT IS THE FUTURE.
Gosh, that sounds familiar.
I remember hearing that about online shopping, and the Internet.
And I remember the trail of derelict companies and broken dreams left in the wake of all that.
But on the other hand, they weren't exactly wrong. Here we are.
I think LLMs will get better, more efficient, and at the same time, more generally useful.
My fear is how much damage will be done before we get there.
Particularly with some of the over-the-top stuff I'm hearing from big tech stakeholders about how AI is going to replace developers, and how they'll just "tough it out" until it's actually good enough to do the job.
Hopefully some reality seeps in.
-
Companies in 2025 (and probably 2026 as well): Well, well, well... If it isn't the consequences of my own actions:
https://www.techfinitive.com/2025-the-year-of-rehiring-humans-after-ai-fails/
https://www.techradar.com/pro/now-thats-an-embarassing-u-turn-bank-forced-to-rehire-human-workers-after-their-ai-replacement-fail-to-perform
-
I have yet to run across an AI assistant that can do anything except point me to how to do what I want on their existing website, which is invariably something I have already tried that didn't work. Try to tell the AI its instructions don't work and it just repeats the same "use the website" drivel.
It is then that I usually tell the AI what a useless pile of dung the exec is who decided to foist such a POS on its customers, and to connect me to a real person.
-
Not to be repetitive, but these two quotes sum up my experience with various AI agents.
"...the apparent reasoning prowess of Chain-of-Thought (CoT) is largely a brittle mirage ."
Yep, that sounds like the Copilot I deal with. 🤓 So brittle.
"Together, these findings suggest that LLMs are not principled reasoners but rather *** sophisticated simulators of reasoning-like text*** ."
They are not principled reasoners.
I really like that explanation: "sophisticated simulators of reasoning-like text."
Those two quotes are from a white paper, "Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens" (published August 13, 2025).
Anyways, I think maybe AI development is stuck because intelligent communication requires context, and AI doesn't seem to be good at generating a contextual idea. Instead it seems to just be very good at stringing together sentences that are individually correct but at some point stop making sense.
-
Check out the Gartner Hype Cycle. Probably fits.