ascar | 1 year ago

It's more like using semi-autonomous driving features, with the car relying entirely on your expertise to realise and correct when it's making a mistake. The main difference is risk: your own life at stake versus bugs in some production system.

kyleyeats | 1 year ago

But why are you pushing errors to production? You know you're allowed to fix the LLM's code output, right?

If a robot could paint your house, but made three small errors, would you refuse to use it? Or would you just fix the three small errors by painting over them?

There's some kind of John Henry complex going on in this AI discussion.

ascar | 1 year ago

You're working under the assumption that you'll be able to find the errors. I've personally always found reviewing code much harder than writing it, and we already push tons of bugs to production in code that was both written and reviewed by humans.