I would normally agree, but I think the "code is a liability" quote assumes that humans are reading and modifying the code. If AI tools are also reading and modifying their own code, is that still true?
You have to be able to express the change you want in natural language. This is not always possible due to ambiguity.
On top of that, you eventually run into the same issue we humans run into: the context window runs out.
But we as software engineers have learned to abstract away components to reduce the cognitive load when writing code. E.g., when you write a file, you don't deal with syscalls anymore.
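To make the syscall point concrete, here's a minimal sketch in Python (filename and contents are made up for illustration): the one-liner on top is what we actually write, while the lower half is roughly the file-descriptor-level work the abstraction hides.

```python
import os

# High-level: buffering, encoding, and the syscall interface are abstracted away.
with open("greeting.txt", "w") as f:
    f.write("hello\n")

# Low-level: roughly what the abstraction hides (still simplified).
fd = os.open("greeting.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
try:
    data = "hello\n".encode("utf-8")
    written = 0
    while written < len(data):  # write() may write fewer bytes than requested
        written += os.write(fd, data[written:])
finally:
    os.close(fd)
```

Both halves produce the same file, but only the first one is what most of us keep in our heads day to day.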
This is different with AI. It doesn't abstract things away, so a change you request might lead it to modify the same pattern in a LOT of places, and that can alter behavior in ways you haven't anticipated, haven't tested, or haven't even seen yet.
And because there's so much code to review, it doesn't get the same scrutiny.
Switch to a rival service that doesn't have an outage. There are at least half a dozen competent hosted LLM vendors for coding now (Anthropic, OpenAI, Gemini, Mistral, Kimi, MiniMax, Qwen, ...).