top | item 47038199

mjr00 | 13 days ago

> Microsoft’s AI CEO is saying AI is going to take everybody’s job. And Sam Altman is saying that AI will wipe out entire categories of jobs. And Matt Shumer is saying that AI is currently like Covid in January 2020—as in, “kind of under the radar, but about to kill millions of people”.

> I legitimately feel like I am going insane when I hear AI technologists talk about the technology. They’re supposed to market it. But they’re instead saying that it is going to leave me a poor, jobless wretch, a member of the “permanent underclass,” as the meme on Twitter goes.

They are marketing it. The target customer isn't the user paying $20 for ChatGPT Pro, though; the customers are investors and CEOs, and their marketing is "AI is so powerful and destructive that if you don't invest in AI, you will be left behind." FOMO at its finest.

Frost1x|13 days ago

Tech has slowly been moving that way anyway. In terms of ROI, you’re often much better off targeting whales and large clients than trying to become the ubiquitous market service for consumers. Competition there is fierce and consumers are comparatively poor, so you need huge volume to succeed.

Meanwhile if you go fishing for niche whales, there’s less competition and much higher ROI per sale. That’s why a lot of tech isn’t really consumer friendly: it’s not really targeting consumers, it’s targeting other groups that extract wealth from consumers in other ways. You’re selling it to grocery stores because people need to eat, the stores have the revenue to pay you, and they see the value proposition of dynamic pricing on consumers and all sorts of other things. You’re marketing it for analyzing civilian communications to prying governments that want more control. You’re selling it to employers who want to minimize labor costs and maximize revenue, because they often have millions or billions, and small industry monopolies exist all around. Just find your niche whales to go hunting for.

And right now I’d say a lot of people in tech are happy to implement these things, but at some point it’s going to bite you too. You may be helping build dynamic pricing for Kroger because you shop at Aldi, but at some point all of this will affect you as well, because you’re also a laboring consumer.

ffsm8|13 days ago

The reason why whaling and government contracts are increasingly the best options available is that the wealth of the working class has mostly been extracted... And with less and less disposable wealth available to the populace, targeting them for products gets increasingly competitive as anything non-essential gets ignored

It's a self-reinforcing loop, and the politicians would rather reduce taxes on the rich than reverse that trend

At least that's how it looks to me

braebo|13 days ago

It’s capitalism moving everything that way. Always has been and will continue to until we’re all hooked up to tubes paying taxes with ectoplasm.

AstroBen|13 days ago

The marketing is clearly affecting individual developers, too. There's a mass psychosis happening

mjr00|13 days ago

Maybe. I'm actually a big fan of Claude/Codex and use them extensively. The author of the article says the same.

> To be clear: I like and use AI when it comes to coding, and even for other tasks. I think it’s been very effective at increasing my productivity—not as effective as the influencers claim it should be, but effective nonetheless.

It's hard to get measured opinions. The most vocal opinions online are either "I used 15 AI agents to vibe code my startup, developers are obsolete" or "AI is completely useless."

My guess is that most developers (who have tried AI) have an opinion somewhere between these two extremes, you just don't hear them because that's not how the social media world works.

dgxyz|13 days ago

I think this is reality.

None of our much-promoted AI initiatives have resulted in any ROI. In fact they have cost a pile of cash so far and delivered nothing.

crystal_revenge|13 days ago

> There's a mass psychosis happening

There absolutely is but I'm increasingly realizing that it's futile to fight it.

The thing that surprises me is that people are simultaneously losing their minds over AI agents while almost no one is exploring playing around with what these models can really do.

Even if you restrict yourself to small, open models, there is so much unexplored territory in messing with their internals. The entire world of open image/video generation is pretty much ignored by all but a very narrow niche of people, but it has so much potential for creating interesting stuff. Even restricting yourself to an API endpoint, isn't there something more clever we can be doing than badly re-implementing code that already exists on GitHub through vibe coding?

But nobody in the hype-fueled mind rot part of this space remotely cares about anything real being done with gen AI. Vague posting about your billion agent setup and how you've almost entered a new reality is all that matters.

thefilmore|13 days ago

> There's a mass psychosis happening

Any guesses on how long this lasts?

verdverm|13 days ago

Ai psychosis or ai++ psychosis?

co_king_5|13 days ago

[deleted]

bubblewand|13 days ago

This is also what OpenAI’s “safety” angle was all about.

“Ohhhh this is so scary! It’s so powerful we have to be very careful with it!” (Buy our stuff or be left behind, Mr. CEO, and invest in us now or lose out)

viccis|13 days ago

Anthropic has been the most histrionic about this; their big blog post about needing to make sure their models don't feel emotionally abused by users is the most fatuous example.

scrollop|13 days ago

GPT-2: "too dangerous to release"

brabel|13 days ago

Can confirm. We don’t know if AI really is about to make programmers who write code by hand obsolete, but we sure as hell fear our competitors will ship features 10x faster than us. So what is the logical next step? Invest lots of money in AI, or keep hoping it’s a fad and risk being left in the dust, even if you think that risk is fairly small?

dgxyz|13 days ago

Perhaps stop entering into saturated markets and using AI to try and shortcut your way to the moon?

There's no way any LLM code generator can replace a moderately complex system at this point, and looking at the rate of progress, this hasn't improved much recently. Getting one to reason about even a simple part of a business domain is still quite difficult.

AstroBen|13 days ago

Why is it an all or nothing decision?

Do a small test: if you're 10x faster then keep going. If not, shelve it for a while and maybe try again later

SoftTalker|13 days ago

So, what I don't get is, taking it to its logical conclusion, if AI takes all the jobs then who are your customers? Who will buy your stock? Who will buy the software that all the developers you used to employ used to write? How do these CEOs and investors see this playing out?

coldtea|13 days ago

Like late-stage capitalism generally thinks of these things: by then you'll have sold your company and be living the life...

That such a collapse of the consumption economy, even with white-collar jobs cut by a "mere" 30%, would also mean a collapse of the stock market, society, infrastructure, and even basic safety doesn't enter the mind.

cmiles8|13 days ago

You’re not supposed to ask such logical questions. It kills the AI vibe.

parpfish|13 days ago

something i wonder about with AI taking jobs --

similar to the ATM example in the article (and my experience with ai coding tools), the automation will start out by handling the easiest parts of our jobs.

eventually, all the easy parts will be automated and the overall headcount will be reduced, but the actual content of the remaining job will be a super-distilled version of 'all the hard parts'.

the jobs that remain will be harder to do and it will be harder to find people capable or willing to do them. it may turn out that if you tell somebody "solve hard problems 40hrs a week"... they can't do it. we NEED the easy parts of the job to slow down and let the mind wander.

zozbot234|13 days ago

There's plenty of jobs like this already. They'll want to keep you around even if you're not doing much most of the time, because you can still solve the hard problems as they arise and grow organizational capital in other ways.

linguae|13 days ago

I’m also concerned about the continuing enshittification of software. Even without LLMs, we’ve had to endure slapdash software. Even Apple, which used to be perfectionistic, has slipped. I feel enshittification is a result of a lack of meaningful competition for many software products due to moats such as proprietary file formats and protocols, plus network effects. “Move fast and break things” software development methodologies don’t help.

LLMs will help such teams move and break things even faster than before. I’m not against the use of LLMs in software development, but I’m against their blind use. However, when there is pressure to ship as fast as possible, many will be tempted to take shortcuts and not thoroughly analyze the output of their LLMs.

dv_dt|13 days ago

Saying it will take jobs is the marketing line to CEOs - more than you will be left behind.

hmmmmmmmmmmmmmm|13 days ago

Except entry level jobs are already getting wiped out.

bpodgursky|13 days ago

You guys can hate him, but Alex Karp of Palantir had the most honest take on this recently which was basically:

"Yes, I would love to pause AI development, but unless we get China to do the same, we're f***, and there's no advantage unilaterally disarming" (not exact, but basically this)

You can assume bad faith on the parts of all actors, but a lot of people in AI feel similarly.

testbjjl|13 days ago

In China, I wonder if the same narrative is happening: no new junior devs, threats of obsolescence, etc. Or do they collectively see the future differently?

biophysboy|13 days ago

The reason you think it's honest is that you already believed it.

NoGravitas|12 days ago

> You can assume bad faith on the parts of all actors, but a lot of people in AI feel similarly.

Or claim to.

tonyedgecombe|13 days ago

Yeah but it’s in his interest to encourage an arms race with China.

coldtea|13 days ago

"I would love to stop getting your money, but consider the children/China/some disaster/various scenarios. That's why you should continue to shower me with billions".

didntknowyou|13 days ago

oh please. people said that about the moon, and nuclear weapons too. and yet it's the one side that has a track record of using new technology to intimidate.

heraldgeezer|13 days ago

HN has become so Marxist they hate the country they live in

VieEnCode|12 days ago

And a great deal of the talking shops/podcasts/keynotes/nonprofits around AI existential risk are all part of this same play. They receive funding from the large AI companies so that the latter can continue to talk their own books using this angle.

im3w1l|13 days ago

When trying to infer people's motives, don't just look at what they are doing. Look also at what they aren't doing: alternatives they had and rejected.

If marketing it was the sole objective there are many other stories they could have told, but didn't.

vpribish|13 days ago

what are a couple of those alternatives?

whateveracct|13 days ago

> the customers are investors and CEOs

100%. i have a basically unlimited Claude balance at work. I do not think of cost except for fun. CEO thinks every engineer has to use AI because nobody is gonna just be using text editors alone in the future.

spamizbad|13 days ago

Also: They've figured out they can "force" AI adoption top-down at many workplaces. They don't need to convince you or even your boss - they just need the C-suite to mandate it.

empressplay|13 days ago

It's worse than that. It's ultimately a military technology. The end-game here is to use it offensively and / or defensively against other countries. Whoever establishes dominance first wins. And so you have to push adoption, so that it gets tested and can be iterated. But this isn't about making money (they are losing it like crazy!) This is end-of-the world shit and about whoever will be left standing once all the dominoes fall -- if they ever fall (let's hope they don't!)

But it's tacitly understood we need to develop this as soon as we can, as fast as we can, before those other guys do. It's a literal arms race.

monkpit|13 days ago

Yeah, if you consider a military-grade AI/LLM with access to all military info sources, able to analyze them all much quicker than a human… there’s no way this isn’t already either in progress or in use today.

Probably only a matter of time until there’s a Snowden-esque leak saying AI is responsible for drone assassinations against targets selected by AI itself.

daze42|13 days ago

This 100%. We're in the middle of an AI Manhattan Project and if "we" give up or slow down, another company or country will get AGI before "us" and there's no coming back after that. If there's a chance AGI is possible, it doesn't make sense to let someone else take the lead no matter how dangerous it could be.

big_paps|13 days ago

One often forgets this.

saltcured|13 days ago

With all the wackiness around AI, is this some Mutually Assured Delusion doctrine?

coldtea|13 days ago

>The end-game here is to use it offensively and / or defensively against other countries.

Against other countries? The biggest endgame is own population control. That has always been the biggest problem/desire of elites, not war with other countries.

qnleigh|13 days ago

Yeah I guess the subtext is 'AI is going to take over so much of the market that it's risky to hold anything else.'

apaosjns|13 days ago

Sam Altman is a known sociopath who has no problem achieving his goals by any means necessary. His prior business dealings (and repeated patterns with OpenAI) are evidence of this.

Shumer is of a similar stock but less capable, so he gets caught in his lies.

I’m still shocked people work with Altman knowing his history, but given the Epstein files etc. it’s no surprise. Our elite class is entirely rotten.

Best advice is trust what you see in front of your face (as much as you can) and be very skeptical of anything else. Everyone involved has agendas and no morals.

verdverm|13 days ago

I'm shocked how congratulatory things were for OpenClaw joining Altman Inc

OptionOfT|13 days ago

FOMO was literally built into Bitcoin. In the beginning mining was a lot easier, and then it slowly got harder.

But what I really hate about AI and how most people talk about it is that if one day it does what the advertisements say, all white collar jobs collapse.

steve1977|13 days ago

> all white collar jobs collapse

Then everything collapses. The carpenter will also be out of work if more than half of his client base cannot afford his work anymore.