
Generative AI hype peaking?

100 points | bwestergard | 1 year ago | bjornwestergard.com

136 comments

[+] o_nate|1 year ago|reply
There's an old game in the investing world of trying to time the top of a stock bubble by picking out the most breathless headlines and magazine covers, looking for statements such as the famous 1929 quote from two weeks before the market crash: "Stock prices have reached what looks like a permanently high plateau." By that metric, we may be getting close to the top of the AI hype bubble, with headlines such as the one I saw recently in the NY Times for an Ezra Klein column: "The Government Knows A.G.I. Is Coming".
[+] cyberlurker|1 year ago|reply
Listening to his podcast on the topic was so disappointing. I think Ezra is a smart guy, but he doesn’t understand the field and the entire premise of the long discussion was that LLMs are going to get us to AGI.
[+] gh0stcat|1 year ago|reply
The AGI piece from Ezra was frustrating, to the point that after listening to him talk about technology in this podcast, I started to question the quality of his knowledge in domains I know far less about.
[+] kurthr|1 year ago|reply

[deleted]

[+] breckenedge|1 year ago|reply
This article is way too light on details. Does it conflate Nvidia's stock price with interest in generative AI? New use cases are arriving every month. Nine months ago I was amazed by Cursor and was leading the effort to get my team to switch to it. Three months ago Cursor had added agents and I was again demonstrating their benefits to my colleagues. Now I'm using Cline + Claude 3.7 and am more productive than I've ever been, and I haven't even touched MCPs yet.

Definitely not yet peaked, IMO. That said, I don't see it fully replacing developers in the next 1-2 years; it still gets caught in loops way too often and makes silly mistakes.

[+] bwestergard|1 year ago|reply
Thanks for your comment.

I am arguing that the hype has peaked, and that there will likely be a pullback in investment in the next year. This is not to say the technology has "peaked", which I'm not sure one could even define precisely.

Important technologies emerged during each past "AI summer" and did not disappear during the subsequent "AI winter". LISP is more popular than ever, despite the collapse of the hype around symbolic-reasoning AI decades ago.

As I mention in the OP, I think productivity enhancing tools for developers are one of the LLM applications that is here to stay. If I didn't think so, I wouldn't be concerned about the impact on skill development among developers.

https://en.wikipedia.org/wiki/AI_winter

[+] Etheryte|1 year ago|reply
I would say the hype has started to fall off, as it's becoming increasingly obvious that AGI is not around the corner, but meanwhile practical use cases keep getting better and better. The more we understand the strengths and weaknesses, the better we can exploit them, and even if the models themselves have hit the scaling wall, I think tooling around them is far from done.
[+] maxglute|1 year ago|reply
Peaking hype = investors think generative AI may replace billions instead of trillions of dollars of economic activity within the return window they're looking at.
[+] MangoCoffee|1 year ago|reply
AI is just a tool. It's not going to replace human coders anytime soon.
[+] yubblegum|1 year ago|reply
> Nvidia’s stock price

The market may be pricing in a possible takeover of Taiwan by China.

[+] daedrdev|1 year ago|reply
Stocks are down because the president of the US has entered a costly trade war, actually.
[+] tenpies|1 year ago|reply
What do you make of something like Reddit (RDDT, down 15% at this moment)?

It's unaffected by tariffs, but its insane valuation is driven by the narrative that Reddit posts can be used to train AI. Without that narrative, you have a semi-toxic collection of forums, and the valuation would probably be somewhere in the millions at best, not the current $20B.

[+] bwestergard|1 year ago|reply
No disagreement from me there. But for the year to date the Nasdaq composite is down less than 4%, whereas NVIDIA is down 20%.
[+] SubiculumCode|1 year ago|reply
A highly unpredictable trade war, plus rattling every international ally, convincing nations across the world to choose military platforms other than ours because Trump could just turn them off on a whim, and raising the risk of political instability that pushes nations to take their investment elsewhere. Our economy is a ticking time bomb under Trump.
[+] hnthrow90348765|1 year ago|reply
>We may look back in a decade and lament how self-serving and short-sighted employers stopped hiring less experienced workers, denied them the opportunity to learn by doing, and thereby limited the future supply of experienced developers.

I think bootcamps will bloom again and companies will hire people from them. The bootcamp pipeline is way faster than a 4-year degree and is easy to spin up if the industry decides the dev pipeline needs more juniors. Most businesses don't need CompSci degrees for the implementation work because it's mostly CRUD apps, so the degree is often just a signal of intellect.

This model has a few advantages for employers (provided the bootcamps aren't being predatory), like ISAs and referrals. Bootcamp reputations probably need some work, though.

What I think will go away is the bootstraps idea that you can self-teach, do projects by yourself, cold-apply to junior positions, and expect an interview on merit alone. You'll need to network to get an 'in' at a company, but that can be slow. Or do visible open source work, which is also slow.

[+] ike2792|1 year ago|reply
The problem with bootcamps right now is that they provide no predictive value. If I hire someone with a CS degree from, say, Stanford who has 2-3 internships and a few semester-long projects under their belt, that gives me reasonable confidence as a manager that the person has what it takes to solve problems with software and communicate well. Bootcamp candidate resumes are all basically identical, and the projects are so heavily coached and curated that it is difficult to figure out how much the candidate actually knows.
[+] vunderba|1 year ago|reply
Perhaps, but I have my doubts. With new job postings receiving as many as hundreds of applicants each, a university degree can significantly help in whittling down the list.

Potential employers can (and do) verify your educational background, such as a degree from an accredited university. Even if you had a certificate from a "legitimate code camp" (though I'd argue they're about as valuable as an online TEFL cert), they have no way to verify it.

[+] rvz|1 year ago|reply
This is the year 1999 again. You have companies valued at tens of billions with no product AND no revenue.

There is also a race to zero, where the best AI models keep getting cheaper and big tech is there attempting to kill your startup (again) by lowering prices until it's free, for as long as they want.

More of the YC startups being accepted are so-called AI startups that are just vehicles for OpenAI to copy the best ones, while the other 90% of them die.

This is an obvious bubble waiting to burst, with Big Tech coming out stronger, frontier AI companies becoming a new elite group ("Big AI"), and the rest of the so-called startups getting wiped out.

[+] _ea1k|1 year ago|reply
If the average person still hasn't ridden in a self-driving car, assembled by Figure 02-style robots, through a drive-thru with AI ordering, then we aren't even close to seeing the real peak here.

>100x growth ahead for sure.

[+] hylaride|1 year ago|reply
Most people (in the world) hadn't been on the internet in 2000 when the dot-com crash happened. Barely half the US population was even online at that point. We're probably nowhere near the peak of AI ability or usage, but that doesn't mean there hasn't been a lot of mal-investment or that things can't commodify.

Huge amounts of internet growth still happened after the 2000 crash, but networking gear and fiber optics became a commodity play, meaning the ROI shifted. The companies that survived, including Amazon and Google, ended up piggybacking on the over-investment on the cheap.

Even going way back, the real productive growth of the American railroads didn't happen until after the Panic of 1873, once the overbuilding had been rationalized.

[+] rco8786|1 year ago|reply
The author is not claiming AI has peaked, only that the hype has peaked.
[+] vessenes|1 year ago|reply
What's crazy is we are very close to this right now, especially if you count industrial robots: BYD's production is almost totally autonomous, and I believe Tesla is close as well.
[+] khrbrt|1 year ago|reply
None of those cases are "generative" AI.
[+] qoez|1 year ago|reply
One thing I'd love to short is the idea that we're going to have a second AI winter. Lots of people predict it, but I believe this time is actually a real step-function innovation (whereas last time it was a very distant research project and the money dried up because of competition with the much more lucrative internet, which was growing at the same time).
[+] somewhereoutth|1 year ago|reply
It will be the third major winter - there was one around 1974-1980 as well as 1987-2000.

I don't believe the previous 'summers' entailed quite the scale of [mal]investment that has occurred this time, so the impending winter will be correspondingly savage.

[+] secretmark|1 year ago|reply
Why would you need to short it? If you believe that is true just go long on AI stocks or buy calls on these companies
[+] cenobyte|1 year ago|reply
Anyone who thinks the hype has peaked is obviously too young to remember the dotcom bubble.

It will get so much worse before it starts to fade.

Infecting every commercial, movie plot, and article that you read.

I can still hear the Yahoo yodel in my head from radio and TV commercials.

[+] zzzeek|1 year ago|reply
Sorry, did you not notice the advertisement for "AI Startup School" at the bottom of Hacker News? Ixnay on the egativity-nay, my friend!
[+] xyst|1 year ago|reply
> … AI Startup School will gather 2000 of the top CS undergrads, masters, and PhD candidates in AI to hear from speakers including Elon Musk

What a blunder by YC. What is this tool going to add to the conversation?

Hope he gets removed from the speaker list.

[+] siliconc0w|1 year ago|reply
IMO, Grok and 4.5 show that we've reached the end of reasonable pre-training scaling. We'll see how far we can get with RL in post-training, but I suspect we're pretty close to maxed out there and will start seeing diminishing returns. The rest is just inference efficiency, porting the gains to smaller models, and building the right app-layer infrastructure to take advantage of the technology.

I do think we're overbuilding on Nvidia, and the CUDA moat isn't as big as people think: inference workloads will dominate, and purpose-built inference accelerators will be preferred in the next hardware cycle.

[+] ypeterholmes|1 year ago|reply
So Deep Research and the latest reasoning models don't deserve mention here? I wish there was accountability on the internet, so that people posting stuff like this can be held accountable a year from now.
[+] skepticATX|1 year ago|reply
The industry has only itself to blame. When you promise literal utopia and inevitably don't deliver, you can't be surprised by what happens next.
[+] _cs2017_|1 year ago|reply
Skeptical as I am about generative AI, the quality of this particular article (in terms of evidence provided, logic, insights, etc.) is substantially lower than what ChatGPT / Gemini DeepResearch can generate. If I were grading, I'd rate an average (unedited) AI DeepResearch report at 3/10 and the headline article at 1/10.
[+] zekenie|1 year ago|reply
Idk I used Claude Code recently and revised all my estimates. Even if the models stop getting better today I think every product has years of runway before they incorporate these things effectively.
[+] gdubs|1 year ago|reply
Something I've been saying for two years now is that AI is the most over-hyped and the most under-hyped technology, simultaneously.

On the one hand, it has been two years of "x is cooked because this week y came out..."; on the other hand, there are people who seem to have formed their opinions based on ChatGPT 3.5 and have never checked in again on state-of-the-art LLMs.

In the same time period, social media has done its thing of splitting people into camps on the matter. So, people – broadly speaking, no not you wise HN reader – are either in the "AI is theft and slop" camp or the "AI will bring about infinite prosperity" camp.

Reality is way more nuanced, as usual. There are incredible things you can do today with AI that would have seemed impossible twenty years ago. I can quickly make some python script that solves a real-world problem for me, by giving fuzzy instructions to a computer. I can bounce ideas off of an LLM and, even if it's not always 'correct', it's still a valuable rubber-ducky.

If you look at the pace of development – compare MidJourney images from a few years ago to the relatively stable generative video clips being created today – it's really hard to say with a straight face that things aren't progressing at a dizzying rate.

I can kind of stand in between these two extreme points of view, and paradigm-shift myself into them for a moment. It's not surprising that creative people who have been promised a wonderful world from technology are skeptical – lots of broken promises and regressions from big tech over the past couple of decades. Also unclear why suddenly society would become redistributive when nobody has to work anymore, when the trend has been a concentration of wealth in the hands of the people who own the algorithms.

On the other hand, there is a lot of drudgery in modern society. There's a lot of evolution in our brains that biases us toward roaming around picking berries, playing music, and dancing with our little bands. Sitting in traffic to go sit in a small phone booth and review spreadsheets is something a lot of people would happily outsource to an AI.

The bottom line – if there is one – is that uncertainty and risk are also huge opportunities. But, it's really hard for anyone to say where all of this is actually headed.

I come back to the simultaneity of over-hyped/under-hyped.

[+] DanHulton|1 year ago|reply
I guess my biggest worry is that the ones doing the outsourcing of all this "drudgery" are unlikely to be the workers who are currently being paid to do the work, but the owners who no longer have to pay them.

The rest of society and our economy doesn't seem to be adjusting to hundreds of thousands or millions of people being "outsourced", so it's not likely there will be a lot of playing music and dancing for these people, though you may be more prescient than either of us are comfortable with, with the "berry picking" prediction...

[+] OldGreenYodaGPT|1 year ago|reply
Peaked? Nah, it's barely started. Wait till we get decent SWE agents reliably writing good code, probably later this year or next. Once AI moves beyond simple boilerplate, the productivity boost will be huge. Too soon to call hype when we've barely scratched the surface.
[+] bluefirebrand|1 year ago|reply
I asked copilot to write me a Typescript function today

I had two defined types, both with the exact same field names. The only difference was that one had field names written in snake_case and the other in camelCase. Otherwise they were exactly the same.

I wanted a function that would take an object of the snake_case type, and output an object of the camelCase type. The object only had about 10 fields

It missed about half of the fields, and inserted fields that didn't even exist on either object

You cannot convince me that AI is anywhere near that level if it cannot even generate a function that converts "is_enabled" to "isEnabled" inside an object.

Every time I try this stuff I'm so disappointed with it. It makes me think anyone who is hyped about it is an absolute fraud who does not know at all what they are doing.
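
For what it's worth, the conversion described above is only a few lines of TypeScript. This is a minimal sketch; the type and field names are hypothetical stand-ins, not the commenter's actual code:

    // Hypothetical types standing in for the two described above.
    type SnakeCaseRow = {
      is_enabled: boolean;
      display_name: string;
      created_at: string;
    };

    type CamelCaseRow = {
      isEnabled: boolean;
      displayName: string;
      createdAt: string;
    };

    // "is_enabled" -> "isEnabled"
    const snakeToCamel = (key: string): string =>
      key.replace(/_([a-z])/g, (_, c: string) => c.toUpperCase());

    // Copy every field, renaming the keys. The cast assumes the two types
    // really do share field names, as the comment describes.
    function toCamelCase(row: SnakeCaseRow): CamelCaseRow {
      return Object.fromEntries(
        Object.entries(row).map(([k, v]) => [snakeToCamel(k), v])
      ) as CamelCaseRow;
    }

    console.log(toCamelCase({ is_enabled: true, display_name: "beta", created_at: "2025-01-01" }));
    // { isEnabled: true, displayName: 'beta', createdAt: '2025-01-01' }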

[+] ninetyninenine|1 year ago|reply
I still say it’s too early to tell.

It took a decade to reach LLMs. It will likely be another decade for AGI. There is still clear trendline progress, and we have a clear real-world target in actual human-level intelligence, which exists, so we know it can be done.

[+] bigfishrunning|1 year ago|reply
A decade? What do you consider your start point here? Minsky wrote his NN thesis in 1954.
[+] th0ma5|1 year ago|reply
Is saying that you're critical of AI the new approach to being uncritical of it?
[+] debacle|1 year ago|reply
I think at this point in the wave, criticism starts to pop up here and there, but it's still decried. For the next 12-18 months, the momentum of the white-hot VC injections of the last few years will sustain the wave. By '27 or '28, the unicorn payoffs in the space will arrive, and by '30 "everyone" will know that AI has been overhyped for a while.

This person is just trying to get ahead of the game!

[+] codingwagie|1 year ago|reply
People are just click-farming with these posts. The technology is ~4 years old. We are in the infancy of this, with hundreds of billions in capital behind making these systems work. It's one of the biggest innovations of the last 100 years.

I propose an internet ban for anyone calling the generative AI top, and a public tar and feathering.

[+] dwedge|1 year ago|reply
If you had said the last 20 years, then maybe, but the last 100 years? For a really good autocomplete?

Just think about the innovations over the last 100 years and how the world looked in 1925.

[+] __loam|1 year ago|reply
People have been saying it's still early in crypto for over a decade.

This much capital being poured into something and having very little to show for it is actually a bad sign, not a positive.

Putting it on the same shelf as the transistor, the jet engine, and the nuclear bomb is pretty funny. It's a probabilistic token generator. Relax.

[+] parliament32|1 year ago|reply
>We are in the infancy of this, with hundreds of billions of capital behind making these systems work

Just like IoT, just like web3, just like blockchain, just like...

[+] munchler|1 year ago|reply
Agreed. To me, this is reminiscent of the "dot-com bubble" 25 years ago. The internet changed the world permanently, even if the stock market got ahead of itself for a few years. The same is true of generative AI.

https://en.wikipedia.org/wiki/Dot-com_bubble