
kamaal | 14 days ago

>>How is that different from handwritten code?

I think the point he is trying to make is that you can't outsource your thinking to an automated process and also trust it to make the right decisions at the same time.

Wherever a number, a fraction, or any non-binary outcome is involved, there is an aspect of growing the code base over time with human knowledge and human failure.

You could argue that speed of writing code isn't everything; often being correct and stable is more important. For example, a banking app doesn't have to be written and shipped fast, but it has to be done right. ECG machines, money, and meatspace safety automation all come under this.


rafaelmn | 14 days ago

Replace LLM with employee in your argument - what changes? Unless everyone at your workplace owns the system they are working on. That is a very high bar, and maybe 50% of devs I've worked with are capable of owning a piece of non-trivial code, especially if they didn't write it.

Reality is you don't solve these problems by relying on everyone to be perfect - everyone slips up. To achieve results consistently you need processes/systems to assure quality.

Safety-critical systems should be even better equipped to adopt this, because they already have the systems to promote correct outputs.

The problem is those systems weren't built for LLMs specifically, so the unexpected failure cases and the volume might not be a perfect fit - but then you work on adapting the quality control system.

kamaal | 14 days ago

>>Replace LLM with employee in your argument - what changes?

I mentioned this part in my comment. You cannot trust an automated process to do a thing and expect the same process to verify that it did it right. This applies to any automated process, not just code.

This is not the same as manufacturing, where you make the same part thousands of times. In code, the automated process makes a specific customised thing only once, and it has to be right.
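The separation being argued for - the thing that produces an artifact must not be the thing that certifies it - can be sketched concretely. A minimal sketch in Python (all names hypothetical): the "generated" function stands in for output of an automated process, and the verifier checks it against properties and an oracle the generator never saw, rather than asking the generator to grade itself.

```python
import random
from collections import Counter

def generated_sort(xs):
    """Stand-in for code produced by an automated process
    (hypothetical; here, a hand-rolled insertion sort)."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def independent_check(fn, trials=200):
    """Independent verifier: checks properties of fn's output
    using random inputs and the stdlib `sorted` as an oracle,
    without any knowledge of how fn was written."""
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        out = fn(xs)
        # Output must be a permutation of the input...
        if Counter(out) != Counter(xs):
            return False
        # ...and must agree with an independently trusted oracle.
        if out != sorted(xs):
            return False
    return True

print(independent_check(generated_sort))  # True
```

The point of the sketch is only the structure: `independent_check` would catch a broken `generated_sort` precisely because it shares no logic with it.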

>>The problem is those systems weren't built for LLMs specifically, so the unexpected failure cases ...

We are not just talking about failures. There is a space between success and failure that the LLM can easily slip into.