top | item 47105158


james_marks | 9 days ago

Doesn’t the loom metaphor still hold? A badly operated loom will create bad fabric the same way badly used AI will make unsafe, unscalable programs.

Anything that can be automated can be automated poorly, but we accept that trained operators can use looms effectively.


tokenless | 9 days ago

The difference is that the loom is performing linear work.

Programming is famously non-linear: small teams build billion-dollar companies through tech choices that avoid the need to scale up headcount.

Yes, you need marketing, strategy, investment, sales, etc. But on the engineering side, good choices mean big savings and scalability with few people.

The loom doesn't have these choices. There is no "make a billion t-shirts a day" option for a well-configured loom.

Now AI might end up on either side of this. It may be too sloppy to compete with very smart engineers, or it may become so good that, like chess, no one can beat it. At that point, let it do everything and run the company.

WJW | 9 days ago

Anything that can be automated can be automated poorly, indeed. But while it has been proven that textile manufacturing can be automated well (or at least better than a hand weaver ever could), the jury is still out on whether programming can be sufficiently automated at all. Even if programming can be completely automated, it's also unclear whether the current LLM strategy will be enough, or whether we'll have another 30-year AI winter before something better comes along.

bigstrat2003 | 9 days ago

The difference is that one can make good cloth with a loom using less effort than before. With AI, one has to choose between less effort and good quality. You can't get both.

rerdavies | 9 days ago

Have you actually seriously tried using an AI? It really isn't that hard to get good code with less effort using an AI. Just manage the scope of the tasks you give it. And of course, review the code that it generates. And of course, do NOT vibe code.

I've also grown quite fond of the "review the selected code (that I wrote) and make suggestions for improvements, but don't actually make any changes" prompt. Or "is this code correct?" And AIs are exceptionally good at doing large-scale code refactoring. So I am actually producing even better code with less effort.

Yes, it requires good judgement, something that you learn by doing, and a sense of what an AI can and cannot handle. Although I am, truthfully, falling behind the curve on that: coding AIs are making major leaps in the complexity they can deal with and the quality you can expect from them, and that changes on pretty much a monthly basis. I was quite amazed to get a C++ port of a 3,900-line Python library written to professional standards in about 5 prompts total, including .deb packaging, test cases, and .md API documentation.

If you are basing your judgements on anything earlier than Claude 4.5 Sonnet (or, on the ChatGPT side, anything prior to 5.2 Codex, which seems to be the first of the ChatGPT models that is halfway comparable to Claude 4.5 Sonnet), then you urgently need to give it another try. Avoid any of the lite models. The difference between old and new models is dramatic. (I'm currently still figuring out what Claude 4.6 Sonnet is capable of; I haven't yet had a chance to feed it something difficult.)

sixtyj | 9 days ago

The loom is a good metaphor.