top | item 47124677

threethirtytwo | 6 days ago

The thread always opens the same way.

“If AI makes human labor obsolete, who decides who gets to eat?”

And within six comments we’re back to the sacred mantra: it can’t even solve a trick logic puzzle from 1983, therefore capitalism remains intact.

Allow me to contribute in the proud tradition of the Extremely Calm Skeptic.

First, the entire premise is unserious. Labor cannot become obsolete because the model does not “understand.” I know this because someone on Twitter asked it a riddle about a barber and it got confused. An entity that fumbles a barber paradox is clearly incapable of displacing accountants, paralegals, translators, mid-level engineers, support staff, or analysts.

Second, demos are misleading. Yes, it can draft contracts, generate production code, summarize regulatory filings, build internal tools, design marketing campaigns, and tutor students. But those are not real jobs. Real jobs are the parts that feel difficult and validating when I do them. The fact that those parts are shrinking is a coincidence.

Third, intelligence is not the bottleneck. The bottleneck is vibes. And regulation. And GPU supply. And “human judgment.” There will always be a final layer of ineffable judgment that only carbon-based life can provide. If pressed for examples, I will gesture broadly.

Fourth, labor markets adapt. We replaced elevator operators and invented social media managers. Therefore if large chunks of cognitive labor become cheap, the economy will effortlessly invent millions of new roles titled “Senior Human in the Loop.” The transition will be smooth. There will be no political consequences. History has a flawless track record here.

As for the eating question, that only becomes serious if labor is no longer the main mechanism for income distribution. And that won’t happen, because the models hallucinate sometimes. When something occasionally makes an error, it cannot possibly be economically transformative. By that standard, humans have been non-disruptive for millennia.

If I’m being honest, the resistance has less to do with token prediction and more to do with self-preservation. I invested years building scarce skills. Scarcity is flattering. If intelligence becomes abundant, that flattery evaporates. So I do what any rational actor would do: redefine scarcity.

When it automates my junior tasks, that’s augmentation. When it handles mid-level tasks, that’s assistance. When it approaches senior tasks, that’s hype. If it ever clears that bar, I’ll discover a higher one.

This is not fear. This is prudent analysis performed while quietly pasting my entire codebase into three different models before standup.

So who decides who gets to eat?

If productive capacity detaches from human effort, ownership becomes the obvious lever. That’s not speculative. That’s how capital has always worked. But acknowledging that would mean treating the premise seriously.

Much easier to point at a cherry-picked failure and conclude that intelligence on tap changes nothing.

Anyway, back to my workflow where the fake autocomplete drafts the spec, writes the code, generates the tests, and explains the tradeoffs while I reassure myself that the important part was my supervision.