top | item 46236484

snet0|2 months ago

To say that a model won't solve a problem is unfair. Claude Code, with Opus 4.5, has solved plenty of problems for me.

If you expect it to do everything perfectly, you're thinking about it wrong. If you can't get it to do anything perfectly, you're using it wrong.

jacquesm|2 months ago

That means you're probably asking it to do very simple things.

baq|2 months ago

I can confidently say that anecdotally you’re completely wrong, but I’ll also allow a very different definition of ‘simple’ and/or attempting to use an unpopular environment as a valid anecdotal counterpoint.

camdenreslink|2 months ago

Sometimes you do need to (as a human) break down a complex thing into smaller simple things, and then ask the LLM to do those simple things. I find it still saves some time.

djeastm|2 months ago

Possibly, but a lot of value comes from doing very simple things faster.

snet0|2 months ago

If you define "simple thing" as "thing an AI can do", then yes. Everyone just shifts the goalposts in these conversations; it's infuriating.