top | item 47019063

jackfranklyn | 15 days ago

The bit about "we have automated coding, but not software engineering" matches my experience. LLMs are good at writing individual functions but terrible at deciding which functions should exist.

My project has a C++ matching engine, Node.js orchestration, Python for ML inference, and a JS frontend. No LLM suggested that architecture - it came from hitting real bottlenecks. The LLMs helped write a lot of the implementation once I knew what shape it needed to be.

Where I've found AI most dangerous is the "dark flow" the article describes. I caught myself approving a generated function that looked correct but had a subtle fallback to rate-matching instead of explicit code mapping. Two different tax codes both had an effective rate of 0, so the rate-match picked the wrong one every time. That kind of domain bug won't get caught by an LLM because it doesn't understand your data model.
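The failure mode described here can be made concrete with a small sketch. This is a hypothetical reconstruction, not the commenter's actual code: the tax code names, rates, and `resolve_tax_code` function are all invented to illustrate how a silent rate-matching fallback picks the wrong one of two zero-rate codes.

```python
# Hypothetical sketch of the bug described above: a lookup with a silent
# fallback from explicit code mapping to rate-matching. All names invented.

TAX_CODES = {
    "EXEMPT": 0.0,       # genuinely tax-exempt
    "ZERO_RATED": 0.0,   # taxable, but at 0% (different reporting rules)
    "STANDARD": 0.2,
}

def resolve_tax_code(code, rate):
    """Prefer the explicit code; fall back to matching by effective rate."""
    if code in TAX_CODES:
        return code
    # Subtle bug: when two codes share a rate, this silently returns
    # whichever entry was inserted first -- here, always "EXEMPT".
    for name, r in TAX_CODES.items():
        if r == rate:
            return name
    raise KeyError(f"no tax code for rate {rate}")

# A record whose code string doesn't match exactly falls through to
# rate-matching and comes back as the wrong zero-rate code:
resolve_tax_code("ZERO-RATED", 0.0)  # returns "EXEMPT", not "ZERO_RATED"
```

The function looks correct at review time because it always returns *a* plausible code; only domain knowledge (that the two zero-rate codes mean different things) reveals the bug.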

Architecture decisions and domain knowledge are still entirely on you. The typing is faster though.

zozbot234 | 15 days ago

> LLMs are good at writing individual functions but terrible at deciding which functions should exist.

Have you tried explicitly asking them about the latter? If you just tell them to code, they aren't going to work on figuring out the software engineering part: it's not part of the goal that was directly reinforced by the prompt. They aren't really all that smart.

fatata123 | 14 days ago

Injecting bias into an already biased model doesn’t make decisions smarter, it just makes them faster.

mettamage | 14 days ago

> Architecture decisions and domain knowledge are still entirely on you. The typing is faster though.

Also, it prevents repetitive strain injury. At least, it does for me.