
Show HN: The World After 3, 5, 10, 25, 50, and 100 Years Ft. AI

7 points | mandarwagh | 5 months ago | mandar.cloud

AI is arguably the greatest invention in modern human history. Humanity has always evolved in hockey-stick curves, with each major discovery unlocking an entirely new trajectory of progress.

But what does this mean for us, humans?

Dive in for more info here ⬇ https://www.mandar.cloud/blog.html?slug=the-world-after-3-5-...

20 comments


sonofhans|5 months ago

You know, for many decades — centuries, even — people have had ideas like the ones you have here, mandarwagh. You extrapolate ideas into the future, think really hard about them, and try to lay out a compelling vision.

Typically people also wrap character and whatnot around this skeleton and call it a “science fiction short story.” That also requires you to justify parts of the narrative, though; otherwise people might claim that what you’re writing is unrealistic.

mandarwagh|5 months ago

Good point, and you are right that a lot of futurism reads like sci-fi. That said, this piece is not just imaginative storytelling, it is mechanism-based forecasting. The timeline links observable trends—rapid LLM capability gains, falling inference costs, cloud APIs that make deployment trivial, and huge economic incentives to replace repeatable knowledge work—with plausible policy and social responses, like UBI and regulatory lag. History shows these transitions can compress once the cost/benefit threshold is crossed, think smartphones, cloud services, or the sudden shift to remote work during COVID. So yes, the dates are aggressive, but the logic is empirical: if the technical and economic levers align, adoption can be much faster than we intuitively expect. If you want a stronger case, I can add a clear assumptions list and evidence anchors for each step.

BirAdam|5 months ago

This presumes quite a bit. As it stands, AGI has not been achieved, and this article is claiming that, by 2028, 90% of all knowledge-worker jobs will be done by an AI system.

Even were that to happen, it is unlikely that a UBI would be put in place, or, if it were, that it would be successful. An Ouroboros of taxing owners to pay the public who buy from the owners wouldn't work. The reality is that were all workers replaced by AI, the economy would collapse. Then the owners of the systems would be forced to liquidate their assets, GPU prices would crater, and the AI means of production would be redistributed into the hands of the public. Small models would then drive productivity in the new wave of startups following the crash. This pattern could repeat many times.

mandarwagh|5 months ago

True, AGI is not here yet, but displacement does not require AGI, only AI that is "good enough" for repetitive cognitive tasks. We are already seeing this in coding, design, legal review, and customer support. As for UBI, history shows that when disruptive tech collapses existing structures, new redistribution mechanisms eventually emerge, whether through taxation, social programs, or crashes that reset ownership. The cycle you describe is possible, but it reinforces the core point: once AI becomes cheaper than human labor, waves of disruption are inevitable, whether cushioned by UBI or followed by crashes and redistribution.

sema4hacker|5 months ago

> By 2028, AI already performs 90% of the jobs that once required intelligent and knowledgeable humans.

Two and a half years from now? Sounds VERY optimistic.

mandarwagh|5 months ago

It sounds optimistic, but exponential adoption curves suggest otherwise. In just 18 months since ChatGPT, AI has already displaced roles in coding, design, research, and support. Once businesses see it can fully replace repetitive work at near-zero cost, adoption compresses fast. The real barrier is not the tech but social and regulatory adaptation.

And even if I get this wrong, it's just a thought experiment and has a 50% chance.

Aurornis|5 months ago

90% of jobs replaced by AI in 3 years? UBI in 5 years?

Why do all of these articles have completely unrealistic timeframes? This feels like someone trying their hand at the https://ai-2027.com/ project, which was based on some mathematically flimsy models that have been widely debunked.

mandarwagh|5 months ago

Timeframes always feel unrealistic when you’re in the middle of an exponential curve. Smartphones, cloud, and even remote work adoption looked “impossible” until they suddenly became default. AI doesn’t need AGI to displace jobs, it only needs to be cheaper and good enough at scale, and that threshold is already being crossed.

jmfldn|5 months ago

A fun read, but wildly implausible. Perhaps there are other frontier technologies out there that get us even a fraction of this. But if we're talking this time horizon, I assume we mean LLMs or some other related thing? Are you joking?

mandarwagh|5 months ago

LLMs are the visible tip, but underneath we have multimodal models, agent frameworks, robotics integration, and rapidly falling compute costs. Frontier tech rarely looks plausible at first—flight, the internet, even smartphones did not.

The point is not that LLMs themselves take us to 2125, but that they are the spark in a chain of exponential advances that will.