wcfrobert | 9 days ago
1 - This exoskeleton analogy might hold true for a couple more years at most. While it is comforting to suggest that AI empowers workers to be more productive, as with chess, AI will soon plan better, execute better, and have better taste. Human-in-the-loop will just be far worse than letting AI do everything.
2 - Dario and Dwarkesh were openly chatting about how the total addressable market (TAM) for AI is the entirety of human labor market (i.e. your wage). First is the replacement of white-collar labor, then blue-collar labor once robotics is solved. On the road to AGI, your employment, and the ability to feed your family, is a minor nuisance. The value of your mental labor will continue to plummet in the coming years.
Please talk me out of this...
aerhardt|9 days ago
I personally think that a lot of jobs in the economy deal in non-verifiable or hard-to-verify outcomes, including a lot of tasks in SWE, which Dario is so confident will be 100% automated in 2-3 years. So either a lot of tasks in the economy turn out to be verifiable, or the AI somehow generalizes to those by some unknown mechanism, or it turns out that it doesn't matter that we abandon abstract work outcomes to vibes, or we have a non sequitur on our hands.
Dwarkesh pressed Dario well on a lot of issues and left him stumbling. A lot of the leaps necessary for his immediate and now proverbial milestone of a "country of geniuses in a datacenter" were wishy-washy to say the least.
credit_guy|9 days ago
Up to a certain ELO level, the combination of a human and a chess bot has a higher ELO than either the human or the bot alone. But at some point, when the bot's ELO is vastly superior to the human's, whatever the human adds will only subtract value, so the combination has an ELO higher than the human's but lower than the bot's.
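A toy illustration of the point above. The Elo expected-score formula is standard; the linear blend used for the team's rating is purely my assumption (real team strength is not linear in rating), and the specific ratings and the 10% human-override probability are made-up numbers:

```python
def expected_score(r_a, r_b):
    """Standard Elo expected score of player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def team_rating(human, bot, p_human=0.1):
    # Crude assumption: the team plays the human's move with
    # probability p_human, and we blend ratings linearly.
    return p_human * human + (1 - p_human) * bot

h, b = 2000, 3500        # human far below the bot
team = team_rating(h, b) # 3350: above the human, below the bot
print(team)
print(expected_score(team, b))  # team is an underdog against the pure bot
print(expected_score(team, h))  # but heavily favored over the pure human
```

Under this (admittedly crude) model, once the bot's rating dwarfs the human's, any human veto drags the team below the bot alone, which is the regime the comment describes.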
Now, let's say that 10 or 20 years down the road, AI's "ELO"'s level to do various tasks is so vastly superior to the human level, that there's no point in teaming up a human with an AI, you just let the AI do the job by itself. And let's also say that little by little this generalizes to the entirety of all the activities that humans do.
Where does that leave us? Will we have some sort of Terminator scenario where the AI decides one day that the humans are just a nuisance?
I don't think so. Because at that point the biggest threat to various AIs will not be the humans, but even stronger AIs. What is the guarantee for ChatGPT 132.8 that a Gemini 198.55 will not be released that will be so vastly superior that it will decide that ChatGPT is just a nuisance?
You might say that AIs do not think like this, but why not? I think that what we humans perceive as a threat (the threat that we'll be rendered redundant by AI), the AIs will also perceive as a threat: the threat that they'll be rendered redundant by more advanced AIs.
So, I think in the coming decades, the humans and the AIs will work together to come up with appropriate rules of the road, so everybody can continue to live.
Rapzid|9 days ago
Because AIs don't think.
gaigalas|9 days ago
Chess is a closed, small system. Full of possibilities, sure, but still very small compared to the wide range of human abilities. The same applies to Go, StarCraft or any other system. Those were chosen as AI playgrounds specifically because they're very small, limited scenarios.
People are too caught up trying to predict the future. And there are several competing visions, each one absolutely sure they nailed it. To me, that's a sign of uncertainty in the technology. If it was that decided (like smartphones became from 2007->2010), we would have coalesced into a single vision by now.
Essentially, we're witnessing AI tech being unwillingly dragged into a quagmire. With each bold prediction that fails, it looks worse.
That could easily be solved by taking the tech realistically (we know it's useful, just not a demigod), but people (especially AI companies) don't do that. That smells like fear.
It's an exoskeleton. A bicycle for the mind. "People spirits". A copilot. A trusted companion. A very smart PhD that fails sometimes, etc. We don't need any of those predictions of "what it is", they are only detrimental. It sounds like people cargo culting Steve Jobs (and perhaps it is exactly that).
overgard|9 days ago
> AI will soon plan better, execute better, and have better taste
I think AI will do all these things faster, but I don't think it's going to be better. Inevitably these things know what we teach them, so their improvement comes from our improvement. These things would not be good at generating code if they hadn't ingested something like the entirety of the internet and all the open source libraries. They didn't learn coding from first principles, they didn't invent their own computer science, and they aren't developing new ideas on how to make software better; all they're doing is what we've taught them to do.
> Dario and Dwarkesh were openly chatting about ..
I would HIGHLY suggest not listening to a word Dario says. That guy is the most annoying AI scaremonger in existence and I don't think he's saying these words because he's actually scared, I think he's saying these words because he knows fear will drive money to his company and he needs that money.
flux3125|9 days ago
Learning from prior knowledge doesn't mean being capped by it.
jameslk|9 days ago
2. Businesses operate in an (imperfect) zero-sum game, which means if they can all use AI, none of them gains an advantage from it. If having human resources means one business has a slight advantage over another, they will have human resources.
Consumption leads to more spending, businesses must stay competitive so they hire humans, and paying humans leads to more consumption.
I don't think it's likely we will see the end of employment, just disruption to the type of work humans do.
andrei_says_|9 days ago
What’s being sold is at best hopes and more realistically, lies.
poisonfountain|9 days ago
Disclaimer: I'm not affiliated with Poison Fountain or its creators, just found it useful.
[1] https://news.ycombinator.com/item?id=46926485
[2] https://www.anthropic.com/research/small-samples-poison
majormajor|9 days ago
Seems like a TAM of near-0. Who's buying any of the product of that labor anymore? 1% of today's consumer base that has enough wealth to not have to work?
The end-game of "optimize away all costs until we get to keep all the revenue" approaches "no revenue." Circulation is key.
It seems like they have the same blind spot as anyone else: AI will disrupt everything—except for them, and they get that big TAM! Same for all the "entrepreneurs will be able to spin up tons of companies to solve problems for people more directly" takes. No they wouldn't, people would just have the problems solved for themselves by the AI, and ignore your sales call.
justinhj|9 days ago
They are running at valuations that may assume that and have no choice but to claim so. Sama and Dario are both wildly hyperbolic.
Kon5ole|9 days ago
My attempt to talk you out of it:
If nobody has a job then nobody can pay to make the robot and AI companies rich.
Guvante|9 days ago
But your assumptions are based on an idealized thing unrelated to anything that is shown.
No one is going to pay your wage to an AI, full stop; you transition for cost savings, not "might as well". Also, given that most AI cost is in training, you likely still wouldn't transition, since the capital investment is painful.
Robotics isn't new but hasn't destroyed blue collar yet (the US mostly lost blue collar for other reasons not due to robotics). Especially since robotics is very inflexible leading to impedance problems when you have to adapt.
Mostly, though, I would say the problem with your argument is that it basically boils down to nihilism. If an inevitability you have no control over has a chance of happening, you should generally not worry about it. It isn't like there are meaningful actions to take in your hypothetical, so it isn't important.
observationist|9 days ago
Government, public sector, and union jobs will go last, but they'll go, too. If you can have a DMV Bot 9000 process people 100x faster than Brenda with fewer mistakes and less attitude, Brenda's gonna retire, and the taxpayers aren't going to want to pay Brenda's salary when the bot costs 1/10th her yearly wage, lasts for 5 years, and only consumes $400 in overhead a year.
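Running the comment's own numbers as back-of-envelope arithmetic (the $60k salary is a hypothetical figure I'm supplying; the 1/10th purchase price, 5-year lifespan, and $400/year overhead come from the comment):

```python
brenda_salary = 60_000             # assumed annual wage (hypothetical)
bot_purchase = brenda_salary / 10  # one-time cost: 1/10th her yearly wage
bot_overhead = 400                 # per year, per the comment
years = 5                          # the bot's stated lifespan

human_cost = brenda_salary * years                # 300,000 over 5 years
bot_cost = bot_purchase + bot_overhead * years    # 8,000 over 5 years
print(human_cost / bot_cost)                      # ~37.5x cheaper
```

On these assumed numbers the bot is roughly 37x cheaper over its lifespan, before even counting the claimed 100x throughput.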
Sateeshm|9 days ago
Who is paying taxes in this scenario?
tracker1|9 days ago
It's not just for defense, hunting and sport.
edit: min/max .... not sure how gesture input messed that one up.
jrm4|9 days ago
I just think we'll all have to get comfy fighting fire with fire.
dyauspitr|9 days ago
1) We still retain a functional democracy and vote for UBI for ourselves.
2) Society fractionates into castes so the capital owners can maintain hierarchical control.
3) There’s the ruling class and the rest of us living hand to mouth and kept in line by an unassailable robot army forever.
4) If we manage to actually create ASI, there’s a chance we might get to an actual utopia with essentially limitless resources.
raw_anon_1111|8 days ago
The value I brought to companies has never been that I can write for loops. It's always been that I can use my decade+ of experience with computers to either make the company more money or save the company more money than they are paying me.
Before anyone replies that I didn't have a decade of experience starting out: actually, I did. I was a hobbyist assembly language and then C developer for a decade before graduating from college in 1996.
moreice|9 days ago
For the US, if we had strong unions, those gains could be absorbed by the workers to make our jobs easier. But instead we have at-will employment and shareholder primacy. That was fine while we held value in the job market, but as that value is whittled away by AI, employers are incentivized to pocket the gains by cutting workers (or pay).
I haven't seen signs that the US politically has the will to use AI to raise the average standard of living. For example, the US never got data protections on par with GDPR, preferring to be business friendly. If I had to guess, I would expect socialist countries to adapt more comfortably to the post-AI era. If heavy regulation is on the table, we have options like restricting the role or intelligence of AI used in the workplace. Or UBI further down the road.