Krssst|27 days ago
OSes and compilers have a deterministic public interface. They obey a specification developers know, so they can be relied on: developers can write correct software that depends on them even without knowing their internal behavior. Generative AI does not have those properties.
refactor_master|27 days ago
So whether you’re writing the code out by hand from the spec or asking an LLM to do it is beside the point, if the code is considered a means to an end, which is what the post above yours was getting at.
skydhash|27 days ago
Also, the code is not a means to an end. It’s going to run somewhere, doing stuff someone wants done reliably and precisely. The overall goal was always to invest some programmer time and salary in order to free up more time for others, not for everyone to start babysitting stuff.
signatoremo|27 days ago
Which spec? Is there a spec that says that if you use a particular set of libraries you’ll get a response in under 10 milliseconds? You can’t even know that for sure if you roll your own code with no third-party libraries.
Bugs are, by definition, issues that arise when developers expect their code to do one thing but it does another, because of an unforeseen combination of factors. Yet we are all OK with that. That’s why we accept AI code: it works well enough.
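A classic illustration of the kind of bug being described, where the code does one thing while the developer expects another (a hypothetical sketch, not taken from the thread):

```python
def append_item(item, bucket=[]):
    # Bug: the default list is created once, at function definition
    # time, and then shared across every call that omits the argument.
    bucket.append(item)
    return bucket

first = append_item("a")   # looks fine: ["a"]
second = append_item("b")  # surprise: ["a", "b"] - state leaked between calls
```

The developer expected each call to start from a fresh empty list; an unforeseen combination of factors (default-argument evaluation semantics plus a mutable value) makes it behave differently.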
skydhash|27 days ago
There can be. But you’d have to map the library calls to opcodes and then count the cycles. That’s what people do when they care about that particular optimization. They measure and make guarantees.
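The measurement half of that claim can be sketched in a few lines. This is a hypothetical harness (the function name, run count, and 10 ms budget are assumptions, not from the thread) that times a workload repeatedly and reports a tail latency, the kind of number you would need before making any response-time guarantee:

```python
import statistics
import time

def measure_p99_ms(fn, runs=1000):
    """Time fn() many times and return the 99th-percentile latency in ms."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    # quantiles(n=100) yields 99 cut points; index 98 is the 99th percentile
    return statistics.quantiles(samples, n=100)[98]

# Example: check a trivial workload against an assumed 10 ms budget
p99 = measure_p99_ms(lambda: sum(range(1000)))
within_budget = p99 < 10.0
```

Cycle-accurate guarantees for hard real-time systems go further (worst-case execution time analysis rather than percentiles), but measurement like this is the usual starting point.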
Krssst|26 days ago
For OSes: POSIX, or the MSDN documentation for Windows.
Compiler bugs and OS bugs are extremely rare, so we can rely on them to follow their spec.
AI bugs are very much expected even when the "spec" (the prompt) is correct; and since the prompt is written in imprecise human language, likely by people who are not used to writing precise specifications, it is probably either mistaken or underspecified.
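The kind of spec reliance being described here can be shown in miniature. Python's documentation guarantees that sorted() is a stable sort, so code can depend on that property without ever reading the interpreter's internals (a minimal illustration, not from the thread):

```python
# The Python language documentation guarantees sorted() is stable:
# records with equal keys keep their original relative order. This is
# a documented contract we rely on without reading CPython's source.
records = [("b", 2), ("a", 1), ("b", 1), ("a", 2)]
by_key = sorted(records, key=lambda r: r[0])
# The two "a" records and the two "b" records keep their original order:
assert by_key == [("a", 1), ("a", 2), ("b", 2), ("b", 1)]
```

That is the difference: the spec states the guarantee, the implementation honors it, and software built on top stays correct.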