top | item 47126188

mehagar | 7 days ago

I would normally agree, but I think the "code is a liability" quote assumes that humans are reading and modifying the code. If AI tools are also reading and modifying their own code, is that still true?

OptionOfT | 7 days ago

You have to be able to express the change you want in natural language. This is not always possible due to ambiguity.

Beyond that, you eventually run into the same issue we humans run into: the context window runs out.

But we as software engineers have learned to abstract components away to reduce the cognitive load of writing code. E.g., when you write a file, you no longer deal with syscalls.
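To make the syscall example concrete, here is a minimal sketch (filenames are illustrative): the high-level file API versus the roughly syscall-level I/O it abstracts away.

```python
import os

# High-level abstraction: the file object handles buffering, text
# encoding, and cleanup for us.
with open("example_high.txt", "w") as f:
    f.write("hello\n")

# Roughly what that abstraction hides: file-descriptor I/O, where
# os.open/os.write/os.close map closely to the open(2)/write(2)/close(2)
# syscalls and we manage the descriptor ourselves.
fd = os.open("example_low.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
try:
    os.write(fd, b"hello\n")
finally:
    os.close(fd)
```

Both produce the same file contents, but the first version lets you stop thinking about descriptors, flags, and permissions entirely, which is exactly the cognitive-load reduction being described.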

This is different with AI. It doesn't abstract things away, which means a single change request can lead the AI to make a LOT of edits following the same pattern, and that can change behavior in ways you haven't anticipated, tested, or even seen yet.

And because it's so much code to review, it doesn't get the same scrutiny.

qudat | 7 days ago

What happens when there’s a service outage and you cannot debug code without an agent?

simonw | 7 days ago

Switch to a rival service that doesn't have an outage. There are at least half a dozen competent hosted LLM vendors for coding now (Anthropic, OpenAI, Gemini, Mistral, Kimi, MiniMax, Qwen, ...)

duggan | 7 days ago

As with any service outage outside their control, people will find other things to do until it's over.