top | item 47050449


theorchid | 12 days ago

I tried writing my first blog posts using AI. I created dozens of restrictions and rules so that it would produce human-like text, which I then edited. The text contained only my thoughts; the AI merely formatted them. But no matter how hard I tried to prohibit constructions like "It's not X, it's Y!", it kept adding them. I went through 10 drafts before I had a final version. When I stopped using AI for my texts, my productivity increased: I can now finish an essay in 1-2 drafts, about 5 times faster than with AI.

This is strikingly different from development. In development, AI increases my productivity fivefold, but in texts, it slows me down.

I wondered: maybe the problem is simply that I don't know how to write, but I do know how to develop? But the thing is, AI-assisted development relies on standard code, with recognized patterns, techniques, and architecture. It does what (almost) the best programmer in the field would do. And its code can be checked with linters and tests. It's verifiable work.
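The verifiability point can be made concrete: AI-generated code comes with a mechanical acceptance test, while AI-generated prose does not. A minimal sketch in Python, using a hypothetical AI-written `slugify` helper (the function name and spec are illustrative, not from the post):

```python
import re

# A hypothetical AI-generated helper: turn a title into a URL slug.
def slugify(title: str) -> str:
    """Lowercase the title, drop punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Unlike prose, the output can be checked mechanically: either these
# assertions pass or they don't, with no human judgment required.
assert slugify("My First Blog Post") == "my-first-blog-post"
assert slugify("It's not X, it's Y!") == "it-s-not-x-it-s-y"
```

There is no equivalent `assert` for "this paragraph sounds like me," which is arguably why editing AI prose takes so many passes.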

But AI is not yet capable of writing text the way a living person does, because text cannot be verified.


causal | 12 days ago

Verifiability is part of it, but I think the "semantic ablation" article on the front page really captures my problem with AI-washed writing: https://www.theregister.com/2026/02/16/semantic_ablation_ai_...

I think any use of AI "unrolls" the prompt into a longer but thinner form. This is true of code too, I think, but there it's still useful, because so much of coding is boilerplate: methods that have been written a thousand times before. Great, give me the standard implementation; who cares.

But if you're doing hard algorithmic work and really trying to do novel "computer science", I suspect semantic ablation would take an unacceptable toll.