top | item 47022970

jackfranklyn | 14 days ago

I build accounting automation tools and this resonates hard. The codebase has ~60 backend services handling things like pattern matching, VAT classification, and invoice reconciliation - stuff where a subtle bug doesn't crash anything; it just silently posts the wrong number to someone's accounts.

Vibe coding would be catastrophic here. Not because the AI can't write the code - it usually can - but because the failure mode is invisible. A hallucinated edge case in a tax calculation doesn't throw an error. It just produces a slightly wrong number that gets posted to a real accounting platform and nobody notices until the accountant does their review.
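To make that concrete, here's a hypothetical sketch of the kind of silent failure I mean (the function, categories, and rates are invented for illustration, not from any real codebase):

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical VAT rates; real rules are far more nuanced.
VAT_RATES = {"standard": Decimal("0.20"), "reduced": Decimal("0.05")}

def vat_due(net: Decimal, category: str) -> Decimal:
    # Silent bug: an unknown category falls back to the standard rate
    # instead of raising, so a miscategorized invoice never errors --
    # it just posts a plausible-looking wrong number.
    rate = VAT_RATES.get(category, VAT_RATES["standard"])
    return (net * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

vat_due(Decimal("100.00"), "zero-rated")  # should be 0.00, returns 20.00
```

Nothing crashes, the ledger still balances, and the wrong figure sails through to the accounting platform.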

Where I've found AI genuinely useful is as a sophisticated autocomplete. I write the architecture, define the interfaces, handle the domain logic myself. Then I'll use it to fill in boilerplate, write test scaffolding, or explore an API I'm not familiar with. The moment I hand it the steering wheel on anything domain-specific, things go sideways fast.

The article's point about understanding your codebase is spot on. When something breaks at 2am in production, "the AI wrote that part" isn't an answer. You need to be able to trace through the logic yourself.

rafaelmn|14 days ago

> Vibe coding would be catastrophic here. Not because the AI can't write the code - it usually can - but because the failure mode is invisible. A hallucinated edge case in a tax calculation doesn't throw an error. It just produces a slightly wrong number that gets posted to a real accounting platform and nobody notices until the accountant does their review.

How is that different from handwritten code? Sounds like something you deal with architecturally (audit trails, review, rollback) and with tests.

ryan_n|14 days ago

It’s shocking to me that people even ask this type of question. How do you not see the difference between a machine that will hallucinate something random when it doesn’t know the answer vs. a human who will reason through things and find the correct answer?

kamaal|14 days ago

>>How is that different from handwritten code?

I think the point he is trying to make is that you can't outsource your thinking to an automated process and also trust it to make the right decisions at the same time.

In places where a number, a fraction, or a non-binary outcome is involved, there is an aspect of growing the code base over time with human knowledge and failure.

You could argue that speed of writing code isn't everything; often being correct and stable matters more. For example, a banking app doesn't have to be written and shipped fast, but it has to be done right. ECG machines, money, and meatspace safety automation all fall into this category.

jayd16|14 days ago

One major difference is the code has an owner who might consider what needs a test or ask questions if they don't understand.

To argue that all work is fungible because perfection cannot be achieved is actually a pretty out there take.

Replace your thought experiment with "Is one shot consultant code different from expert code?" Yes. They are different.

Code review is good and needed for human code, right? But if it's "vibe coded", suddenly it's not important? The differences are clear.

skydhash|14 days ago

With handwritten code, humans know what they don’t know. If you want a constant or a formula, you don’t invent or guess it; you ask the domain expert.

tqian|14 days ago

Humans make such mistakes slowly. It's much harder to catch the "drift" introduced by an LLM because it happens so quickly and silently. By the time you notice something is wrong, it has already become the foundation for more code. You are then looking at a full rewrite.

vidarh|14 days ago

If the failure mode is invisible, that is a huge risk with human developers too.

Where vibe coding is a risk, it's generally because it exposes a systemic risk that was always there but had so far been successfully hidden, revealing failing risk management.

anthonypasq96|14 days ago

I agree, and it's strange that this failure mode continually gets lumped onto AI. The whole point of long-term software engineering was to make sure the context in a particular person's head doesn't determine whether a new employee can contribute to a codebase. It turns out everything we do to ensure that for a human also works for an agent.

As far as I can tell, the only reason AI agents currently fail is that they don't have access to the undocumented context inside people's heads, and if we can just properly put that in text somewhere, there will be no problems.

aljarry|14 days ago

>the failure mode is invisible

Only if you are missing tests for what counts for you. And that's true for both dev-written code, and for vibed code.
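For example, a golden-value test pins the number you actually care about, regardless of who (or what) wrote the implementation (the function and figures here are hypothetical, just to show the shape):

```python
from decimal import Decimal

def vat_due(net: Decimal, rate: Decimal) -> Decimal:
    # Implementation under test -- could be handwritten or vibe-coded.
    return (net * rate).quantize(Decimal("0.01"))

def test_vat_golden_values():
    # Golden values checked once against the domain expert / tax tables.
    # A silent regression in vat_due becomes a visible test failure.
    assert vat_due(Decimal("100.00"), Decimal("0.20")) == Decimal("20.00")
    assert vat_due(Decimal("19.99"), Decimal("0.20")) == Decimal("4.00")

test_vat_golden_values()
```

The hard part isn't the assertion; it's knowing which values "count for you" in the first place.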

CPLX|14 days ago

Who writes the tests?

jmt710|14 days ago

I have zero issues with things going sideways, even on the most complicated tasks. I don't understand why people struggle so much; it's easy to get it to do the right thing without having to hand-hold. You just need to be better at what you're asking for.

tomjen3|14 days ago

Sounds like you need to add more tests to your code. The AI is pretty good at that.

manmal|14 days ago

> A hallucinated edge case in a tax calculation doesn't throw an error.

Would double-entry bookkeeping not catch this?

failingforward|14 days ago

Not necessarily. Double entry bookkeeping catches errors in cases where an amount posted to one account does not have an equally offsetting post in another account or accounts (i.e., it catches errors when the books do not balance). It would not on its own catch errors where the original posted amount is incorrect due to a mistaken assumption, or if the offset balances but is allocated incorrectly.
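A minimal sketch of why (accounts and amounts invented for illustration): the trial-balance check only verifies that total debits equal total credits, so a wrong-but-offsetting amount passes it.

```python
# Each entry: (account, debit, credit). A journal posting books the same
# total to both sides, so the ledger balances by construction.
def balanced(entries):
    return sum(d for _, d, _ in entries) == sum(c for _, _, c in entries)

# VAT was computed at 20% instead of the correct 5%: the *amount* is
# wrong, but debits still equal credits, so double entry is satisfied.
wrong_posting = [
    ("expense",   100.00,   0.00),
    ("vat_input",  20.00,   0.00),  # should be 5.00
    ("payable",     0.00, 120.00),
]
balanced(wrong_posting)  # True -- the error is invisible to this check
```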

2Gkashmiri|14 days ago

Do you by any chance work on open source accounting?