It has been a while, but I remember a project of mine trying to port an FTP client to a 'secure compiler' (this was long before Rust, and probably a distant ancestor of Checked C). In theory, a successful port would have been much more resilient to certain classes of bugs (and maybe even attacks). This was also the era when formal verification was trying to take off in industry. After wading through an impressive number of compiler errors (again, the code was technically compatible) and attempts to fix them, I eventually surrendered and acknowledged that, at the very least, this was beyond my abilities.
I probably would have gotten much further just rewriting it from scratch.
entrustai | 9 hours ago
The secure compiler relocated complexity into the porting process, and that relocated complexity turned out to be harder than the original problem. Rewriting from scratch would have been cheaper because you'd be working with complexity you generated and understood, rather than auditing complexity the tool imposed.
This is exactly the dynamic playing out now with LLM-generated code at scale. The syntax is free. The verification is expensive. And the verification is harder precisely because the complexity isn't yours — it arrived from a system whose reasoning you can't inspect and whose decisions you didn't make.
What you ran into in that FTP port is what every engineering team using LLMs for production code will eventually run into. You just got there thirty years early.