greenpizza13 | 4 months ago
> AI hallucinations are one of the best bits of PR ever. The term reframes critical errors to anthropomorphise the machine, as that is essentially what an AI hallucination is: the machine getting it significantly and repeatedly wrong. Both MIT and METR found that the effort and cost required to look for, identify, and rectify these errors was almost always significantly larger than the effort the AI reduced.
> In other words, for AI (specifically generative AI) to be even remotely useful in the real world and have a hope in hell of generating revenue by augmenting workers at scale, let alone replacing them like it has promised to, it needs to cut “hallucinations” down to basically zero.
As someone who uses Claude 4.5 in Cursor every workday, I find this rings extremely hollow. Daily I think to myself, “I would never have had time to do this before.”
Have an idea for a script? You don’t have to lose a day building it. Wanna explore a feature? Make a worktree and let the agent go. It’s fundamentally changed my workflow for the better, and I don’t wanna go back, hallucinations and all.
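For anyone who hasn’t tried the worktree trick: it’s just plain `git worktree`, nothing agent-specific. A minimal sketch (the repo and branch names here are made up for illustration):

```shell
# Throwaway repo so the commands are self-contained
git init -q demo-repo && cd demo-repo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# Give the agent its own checkout on a fresh branch,
# leaving your main working tree untouched
git worktree add -q -b feature-idea ../feature-idea

git worktree list   # shows both checkouts
```

The agent can churn away in `../feature-idea` while you keep working in the main checkout; when you’re done, `git worktree remove ../feature-idea` cleans it up.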