I signed up for Anthropic's Claude model - might go with Kagi's Claude next. Some observations
-
The machine is not a liar, it's a bullshitter. The intent is different, but the result is often the same.
There's a notes section Anthropic has, which I guess it uses to seed all your prompts. Mine says "the right question is more important than the right answer. don't be afraid to say I don't know."
Not sure how effective it is in practice. It's not great at imperative development, but it's a set monster: give it a functional problem vs. an imperative one and it will absolutely go to town on it.
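To give a feel for the distinction, here's my own illustrative contrast (not Claude's output, and the word-count task is just a stand-in): it's far more reliable when you ask for the set-oriented LINQ version than when you ask it to hand-roll the mutating loop.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Demo
{
    static void Main()
    {
        string[] lines = { "the quick brown fox", "the lazy dog jumps the fence" };

        // Set-oriented / functional style: describe the result you want.
        var wordCounts = lines
            .SelectMany(l => l.Split(' ', StringSplitOptions.RemoveEmptyEntries))
            .GroupBy(w => w)
            .ToDictionary(g => g.Key, g => g.Count());

        // Imperative style: spell out the mutation step by step.
        var counts = new Dictionary<string, int>();
        foreach (var line in lines)
        {
            foreach (var word in line.Split(' ', StringSplitOptions.RemoveEmptyEntries))
            {
                counts.TryGetValue(word, out var n);
                counts[word] = n + 1;
            }
        }

        Console.WriteLine(wordCounts["the"]); // 3 either way
    }
}
```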
I used it for research because I don't have a math background, and I was exploring some new frontiers in Chomsky type 3 (discrete, deterministic finite automata), a subject I know a lot about fundamentally. But I'm taxed when it comes to extending the algorithms, because my grasp of them isn't always complete, especially at the edges, and the math fails me. Well, Claude was able to explore some techniques with me from "The Dragon Book" (not its actual title, but if you know what this is, you know what this is) and implement them with little to no guidance from me, which allowed me to learn the things I didn't understand and come to a whole new class of understandings about the way FSMs work, which furthered my research goals.
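For anyone unfamiliar with the territory, here's a minimal sketch of the kind of machinery involved; it's not my research code, just a toy table-driven DFA runner, which is where the Dragon Book's subset-construction and minimization material ends up.

```csharp
using System;

// Toy table-driven DFA (illustrative only). States are rows, input symbols are
// columns, and -1 means "no transition" (the machine rejects).
// This particular machine accepts the regular language (ab)+ over {a, b}.
static class Dfa
{
    static readonly int[,] Transitions =
    {
        //        'a'  'b'
        /* 0 */ {  1,  -1 },   // start state
        /* 1 */ { -1,   2 },
        /* 2 */ {  1,  -1 },   // accepting state
    };
    static readonly bool[] Accepting = { false, false, true };

    public static bool Match(string input)
    {
        int state = 0;
        foreach (char ch in input)
        {
            int sym = ch == 'a' ? 0 : ch == 'b' ? 1 : -1;
            if (sym < 0) return false;            // symbol outside the alphabet
            state = Transitions[state, sym];
            if (state < 0) return false;          // dead end: no transition
        }
        return Accepting[state];
    }
}
```

So `Dfa.Match("abab")` is true, while `Dfa.Match("a")` and `Dfa.Match("")` are false; everything interesting in this space is about how you build and shrink that table.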
It's not a substitute for knowledge. You have to know enough about the subject to tell when it's bullshitting you. Claude would have led me down counterproductive rabbit holes a lot more often than it did if I hadn't been vigilant and aware enough of the subject matter to say "hey, wait a minute! what's that about?!" Even with that knowledge, I ended up being glad I saved work-in-progress copies of my code before making major changes it suggested.
And unit test, unit test, unit test. The thing doesn't like handling edge cases, so you'd better test for those especially. Also, I've tried getting it to write unit tests for me, and in the end it's questionable whether that saved me any time: it wrote a bunch of bad tests, and I had to review all of them anyway.
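The kind of edge cases I mean are the boring ones it quietly skips. Hypothetical xUnit tests against the toy DFA sketch above:

```csharp
using Xunit;

// Hypothetical edge-case tests for the toy DFA above -- exactly the inputs
// I find the AI forgets about unless you demand them explicitly.
public class DfaEdgeCaseTests
{
    [Fact]
    public void EmptyInput_IsRejected() => Assert.False(Dfa.Match(""));

    [Fact]
    public void SymbolOutsideAlphabet_IsRejected() => Assert.False(Dfa.Match("abc"));

    [Fact]
    public void PrefixOfAcceptedString_IsRejected() => Assert.False(Dfa.Match("a"));

    [Fact]
    public void RepeatedPattern_IsAccepted() => Assert.True(Dfa.Match("abab"));
}
```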
-
Hi HTCW, I recently used ChatGPT to help me convert/rewrite an MVC site to Blazor Server, and I must say it was very helpful. But, as you say, you have to watch what it gives you carefully; if you tell it exactly what you don't want, it actually will improve its offerings. Nice to see old names appearing here.
-
Hi HTCW, I recently used ChatGPT to help me convert/rewrite an MVC site to Blazor Server, and I must say it was very helpful. But, as you say, you have to watch what it gives you carefully; if you tell it exactly what you don't want, it actually will improve its offerings. Nice to see old names appearing here.
@pkfox Good to see you!
I don't trust ChatGPT's model, but then the last one I tried was 4. Everyone I talked to about it said to try Claude Sonnet 4, so I did, and in some ways it really impressed me. Of course, immediately afterward it would do something pants-on-head stupid. XD
-
@pkfox said in I signed up for Anthropic's Claude model - might go with Kagi's Claude next. Some observations:
Hi HTCW, I recently used ChatGPT to help me convert/rewrite an MVC site to Blazor Server, and I must say it was very helpful. But, as you say, you have to watch what it gives you carefully; if you tell it exactly what you don't want, it actually will improve its offerings. Nice to see old names appearing here.
I've spent several weeks experimenting with AI vibe coding and have learned that it is good at small tasks but can't keep focus on larger tasks. You can constrain it through tracking and guidance documents (Microsoft's Visual Studio and VS Code call them copilot-instructions.md). The current issue is twofold:
- Constraints and guidance turn into hundreds of lines of input.
- How much each AI agent can remember before forgetting: a lot gets lost when they summarise.
So the further into a session they go, the less they remember, the more they improvise, and you lose a lot of control.
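To give an idea of what that guidance ends up looking like, here's an illustrative (made-up, heavily trimmed) excerpt of the kind of file I mean; in practice mine runs to hundreds of lines, which is exactly the problem:

```markdown
<!-- .github/copilot-instructions.md -- illustrative excerpt, not my real file -->
- Target .NET 8 / C# 12; nullable reference types are enabled everywhere.
- Use the project's own mediator abstractions; do NOT introduce MediatR.
- Never add new NuGet packages without asking first.
- Every public API needs XML doc comments and a matching unit test.
- Prefer small, reviewable changes; stop and summarise before refactoring across files.
```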
So here is an example with Claude Sonnet 4, the best of the AI models at the moment: I have a library that I wrote recently called Blazing.Mediator. I used the docs as a guide that I gave to the AI. Everything starts well. However, if I let it run for a while and there are one or more summarisations, Claude switches to coding MediatR patterns. Overly opinionated! Then there is the cost...
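For anyone who hasn't used it, the pattern Claude drifts back to is the stock MediatR request/handler boilerplate, roughly like this (ordinary MediatR usage with made-up names, not Blazing.Mediator's API):

```csharp
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Stock MediatR request/handler shape -- the thing Claude reverts to after a
// summarisation, even when told to use the project's own mediator library.
public record OrderDto(int OrderId);

public record GetOrderQuery(int OrderId) : IRequest<OrderDto>;

public class GetOrderHandler : IRequestHandler<GetOrderQuery, OrderDto>
{
    public Task<OrderDto> Handle(GetOrderQuery request, CancellationToken cancellationToken)
        => Task.FromResult(new OrderDto(request.OrderId));
}
```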
I find that if I let it loose on my code adding new features, I spend a lot of time cleaning up after the AI and lose any gains made.
I now keep it to simple or repetitive tasks - wire-framing, prototyping, initial UI, comments, converting code, fixing errors/warnings, rubber ducking, etc. There are some things it can do quicker than you can, without creating more work for you.