dvt|4 days ago
I wouldn't be so sure about that.
In my experience, agents consistently make awful architectural decisions, both in code and beyond (even in contexts like: what should I cook for a dinner party?). They default to the most obvious "midwit senior engineer" decisions, which I would strike down in an instant in an actual meeting; they over-engineer; they are overly focused on versioning and legacy support (from APIs to DB schemas--even if you're working on a brand new project); and they are absolutely obsessed with levels of indirection on top of levels of indirection. The definition of code bloat.
Unless you're working on the most bottom-of-the-barrel problems (which, to be fair, we all are, at least in part: a dashboard React app, some boring UI boilerplate, etc.), you still need to write your own code.
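As a caricature of the indirection complaint above (my own construction, not something from the thread), here is the same one-line pricing formula written directly and then buried under a strategy interface, a factory, and a "service" wrapper. All names are made up:

```python
# Direct: one line does the job.
def shipping_cost(weight_kg: float) -> float:
    return 5.0 + 1.2 * weight_kg

# Over-engineered: the same arithmetic behind three layers of indirection.
class CostStrategy:
    def cost(self, weight_kg: float) -> float:
        raise NotImplementedError

class FlatPlusPerKg(CostStrategy):
    def __init__(self, base: float, per_kg: float):
        self.base, self.per_kg = base, per_kg

    def cost(self, weight_kg: float) -> float:
        return self.base + self.per_kg * weight_kg

class StrategyFactory:
    @staticmethod
    def default() -> CostStrategy:
        return FlatPlusPerKg(5.0, 1.2)

class ShippingService:
    def __init__(self, factory=StrategyFactory):
        self._strategy = factory.default()

    def shipping_cost(self, weight_kg: float) -> float:
        return self._strategy.cost(weight_kg)

# Both compute the identical number; only one of them is five classes deep.
assert shipping_cost(2.0) == ShippingService().shipping_cost(2.0)
```

The layered version isn't wrong, which is what makes it hard to argue against in review; it's just three abstractions paying rent on one formula.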
drc500free|4 days ago
In lieu of understanding the whole architecture, they assume that there was intent behind the current choices... which is a good assumption on their training data where a human wrote it, and a terrible assumption when it's code that they themselves just spit out and forgot was their own idea.
steve_adams_86|4 days ago
This is one reason why it blows me away that people actually ship stuff they've never looked at. You can be certain it's riddled with the craziest garbage Claude is hiding away for eternity.
hinkley|4 days ago
Mediocrity in, mediocrity out.
gjm11|4 days ago
LLM output could be like that. (I am not claiming that it actually is; I haven't looked carefully enough at enough of it to tell.) Humans writing code do lots of bad things, but any specific error will usually be made by only a minority of them, so something like averaging would wash it out.
If (1) it's correct to think of LLMs as producing something like average-over-the-whole-internet code and (2) the mechanism above is operative -- and, again, I am not claiming that either of those is definitely true -- then LLM code could be much higher quality than average, but would seldom do anything that's exceptionally good in ways other than having few bugs.
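A toy simulation of that averaging mechanism (my sketch, not gjm11's; all numbers are invented): each of many authors independently botches each "spot" in a program with some probability, and the majority choice at each spot almost never contains any one author's idiosyncratic error.

```python
import random

random.seed(0)
N_AUTHORS, N_SPOTS, P_ERR = 1000, 50, 0.2

# True means "this author made an error at this spot".
authors = [[random.random() < P_ERR for _ in range(N_SPOTS)]
           for _ in range(N_AUTHORS)]

# The "averaged" program keeps the majority choice at each spot.
averaged_errors = sum(
    sum(a[s] for a in authors) > N_AUTHORS / 2
    for s in range(N_SPOTS)
)
typical_errors = sum(authors[0])

print(typical_errors, averaged_errors)  # one author errs ~10 times; the majority, ~0
```

Of course, this only models errors that are independent across authors; it says nothing about mistakes (or bad design habits) that most of the training population shares, which would survive the averaging intact.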
parliament32|4 days ago
> even though Memory.md has the AWS EC2 instance and instructions well defined
I will second that, despite the endless harping about the usefulness of CC, it's really not good at anything that hasn't been done to death a couple thousand times (in its training set, presumably). It looks great at first blush, but as soon as you start adding business-specific constraints or get into unique problems without prior art, the wheels fall off the thing very quickly and it tries to strongarm you back into common patterns.
dvt|4 days ago
I'm doing it right now, and tbh working on greenfield projects purely using AI is extremely token-hungry (constantly nudging the agent, for one) if you want actual code quality and not a bloated piece of garbage[1][2].
[1] https://imgur.com/a/BBrFgZr
[2] https://imgur.com/a/9Xbk4Y7
xg15|4 days ago
I mean, DB schema versioning is one of the things that you can dismiss as "I won't need it" for a long time - until you do need it, at which point it will be a major pain to add.
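For what it's worth, the minimal version of the schema versioning being described fits in a few lines. This is a hypothetical sketch using sqlite3; the table and migration contents are made up:

```python
import sqlite3

# Ordered, append-only list of migrations; each runs exactly once.
MIGRATIONS = [
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE users ADD COLUMN email TEXT",
]

def migrate(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT version FROM schema_version").fetchone()
    current = row[0] if row else 0
    # Apply only the migrations the database hasn't seen yet.
    for version, sql in enumerate(MIGRATIONS[current:], start=current + 1):
        conn.execute(sql)
        conn.execute("DELETE FROM schema_version")
        conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # idempotent: re-running applies nothing new
cols = [r[1] for r in conn.execute("PRAGMA table_info(users)")]
print(cols)  # ['id', 'name', 'email']
```

The pain of retrofitting this later comes less from the mechanism, which is trivial, and more from reconstructing what "version 0" of an already-deployed schema actually was.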
vessenes|4 days ago