throwmeaway820 | 1 month ago
I don't want to sound glib, but one could simply not let an LLM execute arbitrary code without reviewing it first, or only let it execute code inside an isolated environment designed to run untrusted code.
The idea of letting an LLM execute code it's dreamt up, with no oversight, in an environment you care about, is absolutely bananas to me.
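Concretely, the sandbox route doesn't have to be elaborate. Here's a minimal Python sketch combining both mitigations: show the generated code to a human first, then run it in a throwaway Docker container with no network and capped resources. The image name, limits, and mount paths are illustrative assumptions, not a hardened setup:

    import subprocess
    import tempfile

    def run_untrusted(code: str) -> subprocess.CompletedProcess:
        # Show the generated code to a human reviewer before anything runs.
        print(code)
        if input("Run this? [y/N] ").strip().lower() != "y":
            raise RuntimeError("rejected by reviewer")
        # Write the approved code to a temp file so the container can mount it.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        # Throwaway container: no network, capped memory/CPU, read-only
        # filesystem, script mounted read-only.
        return subprocess.run(
            ["docker", "run", "--rm",
             "--network=none",
             "--memory=256m",
             "--cpus=1",
             "--read-only",
             "-v", f"{path}:/sandbox/script.py:ro",
             "python:3.12-slim",  # illustrative base image
             "python", "/sandbox/script.py"],
            capture_output=True, text=True, timeout=30,
        )

Obviously a real setup would want more (seccomp profiles, output size limits), but even this removes the "arbitrary code in an environment you care about" failure mode.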
blibble | 1 month ago
But if a skilled human has to check everything it does, then "AI" becomes worthless.
hence... YOLO
Terr_ | 1 month ago
Well, perhaps not worthless, but certainly not "a trillion-dollar revolution that will let me fire 90% of my workforce and then execute my Perfect Rich Guy Visionary Ideas without any more pesky back-talk."
That said, the "worth" it brings to the shareholders will likely be a downgrade for everybody else, both workers and consumers, because:
> The market’s bet on AI is that an AI salesman will visit the CEO of Kaiser and make this pitch: “Look, you fire 9/10s of your radiologists [...] and the remaining radiologists’ job will be to oversee the diagnoses the AI makes at superhuman speed, and somehow remain vigilant as they do so, despite the fact that the AI is usually right, except when it’s catastrophically wrong.
> “And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop.’ It’s their signature on the diagnosis.”
> This is a reverse centaur, and it’s a specific kind of reverse-centaur: it’s what Dan Davies [calls] an “accountability sink.” The radiologist’s job isn’t really to oversee the AI’s work, it’s to take the blame for the AI’s mistakes.
-- https://doctorow.medium.com/https-pluralistic-net-2025-12-05...
ertian | 1 month ago
Even as a more experienced dev, I like having a second pair of eyes on critical commands...
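For shell commands, that second pair of eyes can be as simple as a confirmation prompt on anything matching a review list. A hedged Python sketch; NEEDS_REVIEW and run_with_review are made-up names and the list is illustrative, not a real safety boundary:

    import shlex
    import subprocess

    NEEDS_REVIEW = {"rm", "dd", "mkfs", "chmod", "curl"}  # illustrative, not exhaustive

    def run_with_review(command: str) -> None:
        tokens = shlex.split(command)
        # Gate "critical" commands behind an explicit human confirmation.
        if tokens and tokens[0] in NEEDS_REVIEW:
            if input(f"About to run {command!r}. Proceed? [y/N] ").strip().lower() != "y":
                print("skipped")
                return
        subprocess.run(tokens, check=True)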