I worked at a mom and pop ISP in the 90s. Lucent did seem at the forefront of internet equipment at the time. We used Portmaster 3s to handle dial up connections. We also looked into very early wireless technology from Lucent.
Something I wanted to mention, only somewhat tangential: the Telecommunications Act of 1996 forced telecommunications companies to lease out their infrastructure. It massively reduced the prices an ISP had to pay to get a T1, because, suddenly, there was competition. I think a T1 went from 1800 a month in 1996, to around 600 a month in 1999. It was a long time ago, so my memory is hazy.
But, wouldn't you know it, the telecommunications companies sued the FCC, and the Telecommunications Act was gutted in 2003.
> I think a T1 went from 1800 a month in 1996, to around 600 a month in 1999. It was a long time ago, so my memory is hazy.
It varied a lot by region. At the mom and pop ISP I worked at, we went from paying around $1,500/month for a T1 to $500 to, eventually around $100/month for the T1 loop to the customer plus a few grand a month for an OC12 SONET ring that we used to backhaul the T1 (and other circuits) back to our datacenter.
But, all of it was driven by the Telecommunications Act requirement for ILECs to sell unbundled network facilities - all of the CLECs we purchased from were using the local ILEC for the physical part of the last mile for most (> 75%) of the circuits they sold us.
One interesting thing that happened was that for a while in the late 90’s, when dialup was still a thing, we could buy a voice T1 PRI for substantially less than a data T1 ($250 for the PRI vs $500 for the T1.) The CLEC’s theory was our dialup customers almost all had service from the local ILEC, and the CLEC would be paid “reciprocal compensation” fees by the ILEC for the CLEC accepting calls from them.
In my market, when the telecommunications act reform act was gutted, the ILEC just kept on selling wholesale/unbundled services to us. I think they had figured out at that point that it was a very profitable line of business if they approached it the right way.
The gutting of telecom competition, the allowance of total monopoly power, was a travesty of the court system. The law was quite plain and clear, and the courts decided all on their own that, well, since fiber to the home is expensive to deploy, they were going to overrule the legislative body. Courts aren't supposed to be able to overturn laws they don't like as they please, but that's what happened here.
Regarding the price of connectivity, it's also worth mentioning that while T1 and other T-carrier and OCx connections remained in high use, 1996-1999 is also the period when DSL became readily available and was a very fine choice for many needs. This certainly created significant cost pressure on other connectivity options.
I worked at a startup during the telecom boom. Then, startups were getting acquired by the likes of Cisco before the startup had a deployed product. And, back then, IPOs were the only form of liquidity event and engineers were locked up for 6 months. The lucky ones had their startups go IPO or get acquired with enough time to spare to get out before the ensuing bust. After the bust, funding dried up and most startups folded including the one I worked at. There was wipeout and desolation for a few years. Subsequently, green shoots started appearing in the form of a new wave of tech companies.
You're implying only 4 years of regulation was enough to shift the balance of power between telecoms and smaller ISPs.
If it's true that this regulation was what helped jumpstart the internet it's an interesting counterpoint to the apocalyptic predictions of people when these regulations are undone. (net neutrality comes to mind as well)
I've never heard anyone claim before that just having these laws on the books for a small period of time is "enough".
> Fiber networks were using less than 0.002% of available capacity, with potential for 60,000x speed increases. It was just too early.
I doubt we will see unused GPU capacity. As soon as we can prompt "Think about the codebase over night. Try different ways to refactor it. Tomorrow, show me your best solution." we will want as much GPU time at the current rate as possible.
If a minute of GPU usage is currently $0.10, a night of GPU usage is 8 * 60 * 0.1 = $48. Which might very well be worth it for an improved codebase. Or a better design of a car. Or a better book cover. Or a better business plan.
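A quick back-of-the-envelope check of that arithmetic (the $0.10/minute rate is the parent comment's hypothetical, not a real price):

```python
# Cost of letting a model "think about the codebase overnight".
# The per-minute rate is an assumed, illustrative figure.
rate_per_minute = 0.10      # dollars per GPU-minute (hypothetical)
hours_overnight = 8         # one night of compute

cost = hours_overnight * 60 * rate_per_minute
print(f"${cost:.2f}")       # $48.00
```

At that price, a nightly run is cheap relative to a day of engineering time, which is the crux of the argument.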
I'd argue we very certainly will. Companies are gobbling up GPUs like there's no tomorrow, assuming demand will remain stable and continue growing indefinitely. Meanwhile LLM fatigue has started to set in, models are getting smaller and smaller and consumer hardware is getting better and better. There's no way this won't end up with a lot of idle GPUs.
> As soon as we can prompt "Think about the codebase over night. Try different ways to refactor it. Tomorrow, show me your best solution." we will want as much GPU time at the current rate as possible.
That is nothing. Coding is done via text. Very soon people will use generative AI for high-resolution movies. Maybe even HDR and high FPS (120, maybe?). Such videos will very likely cost in the range of $100-$1000 per minute. And will require lots and lots of GPUs. The US military (and I bet others as well) is already envisioning generative AI use for creating a picture of the battlespace. This type of generation will be even more intensive than high-resolution videos.
> "Try different ways to refactor it. Tomorrow, show me your best solution."
The cost/benefit analysis doesn't add up for two reasons:
First, a refactored codebase works almost the same as a non-refactored one; that is, the tangible benefit is small.
Second, how many times are you going to refactor the codebase? Once and... that's it. There's simply no need for that much compute for lack of sufficient beneficial work.
That is, the present investments are going to waste unless we automate and robotize everything. I'm OK with that, but it's not where the industry is going.
I've seen lots of claims about AI coding skill, but that one might be able to improve (and not merely passably extend) a codebase is a new one. I'd want to see it before I believe it.
This is such a short-sighted take, glaringly omitting a crucial ingredient in learning and improvement, for humans and machines alike: feedback loops.
And you can't really hack / outsmart feedback loops.
Just because something is conceptually possible doesn't make it optimal; it is interaction with the rest of the real world that separates a possible solution from an optimal one.
The low-hanging fruit (obvious incremental improvements) might be quickly implemented by LLMs based on established patterns in their training data.
That doesn't get you from 0 to 1 dollar, though, and that's what it's all about.
This is the fundamental error I see people making. LLMs can’t operate independently today, not on substantive problems. A lot of people are assuming that they will some day be able to, but the fact is that, today, they cannot.
The AI bubble has been driven by people seeing the beginning of an S-curve and combining it with their science-fiction fantasies about what AI is capable of. Maybe they’re right, but I’m skeptical, and I think the capabilities we see today are close to as good as LLMs are going to get. And today, it’s not good enough.
I’ve never understood why time is the metric people are using here. If LLMs get so much better we can “run them overnight”, what makes you think that they won’t also get faster and so they accomplish exactly what you’re talking about in 5 minutes?
I just had to double check (have not been paying attention for a couple of years) but indeed it seems GPU underutilization remains a fact and the numbers are pretty significant. The main issue is being memory-bound, so the compute sits idle.
Knowing the history of past bubbles is only mildly informative. The dotcom bubble was different from the railroad bubble, etc.
The only thing to keep in mind is that all of this is about business and ROI.
Given the colossal investments, even if the companies' finances are healthy and not fraudulent, the economic returns have to be unprecedented or there will be a crash.
I agree, but would like to maybe build out that theory. When we start talking about the mechanisms of the past we end up over-constraining the possibility space. There were a ton of different ways the dotcom bubble COULD have played out, and only one way it did. If we view the way it did as the only way it possibly could have, we'll almost certainly miss the way the next bubble will play out.
I’m concerned that the accounting differences mentioned between Lucent and Nvidia, Microsoft, OpenAI, Google just mean we have gotten much better at lying and misrepresenting things as true. Then the bubble pops and you get the real numbers and we are all like “yep it was the same thing all over again”.
I was there in the middle of the dotcom crash and the telecoms crash, which was much worse. Fiber does not rust, and while there was vast overcapacity, not all of it was lit, or indeed worth lighting. Ten years later, thanks to DWDM, there were 8-strand cables where only 2 strands were lit, albeit with many more wavelengths than envisaged before. Even though demand had grown.
How much is a 10 year old GPU worth? Where is the “dwdm but for GPUs?”.
There truly are interesting times and we have the benefit of being in them.
Just so I understand correctly, you mean that with DWDM 2 strands of cables were equivalent to 8 since DWDM as a multiplexing technology increased the capacity of each fiber, right?
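Roughly, yes. Here is a sketch of that multiplication; the 40-channel and 10 Gbps figures are invented for illustration, not numbers from the comments above:

```python
# DWDM lets one fiber pair carry many independent wavelengths,
# so 2 lit strands can exceed what a whole 8-strand cable carried
# when each pair held a single signal. All figures are illustrative.
strands_in_cable = 8
lit_pairs = 1                 # only 2 strands (one pair) actually lit
wavelengths_per_pair = 40     # assumed 40-channel DWDM system
gbps_per_wavelength = 10      # assumed 10G per wavelength

dwdm_capacity_gbps = lit_pairs * wavelengths_per_pair * gbps_per_wavelength
legacy_capacity_gbps = (strands_in_cable // 2) * gbps_per_wavelength  # one signal per pair

print(dwdm_capacity_gbps, legacy_capacity_gbps)  # 400 40
```

So under these assumed numbers, a single lit pair carries 10x what the fully lit cable once did, which is why most strands stayed dark.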
> How much is a 10 year old GPU worth? Where is the “dwdm but for GPUs?”.
From other sources cited in TFA it seems GPUs won't last 3 years, let alone 10! But I think we know what the "DWDM for GPUs" is -- it's the processing efficiency gains that we've seen over the last few years which keeps driving the per-token prices sharply down.
At a "telecom of telecom" we (they) were still lighting up dark fiber 15 years later (2015) when mobile data for cell carriers finally created enough demand. Hard to fathom the amount of overbuild.
The only difference is fiber optic lines remained useful the whole time. Will these cards have the same longevity?
New fiber isn't significantly more power efficient. The other side of the coin is that backhoes haven't become more efficient since the fiber was buried.
In 2005, telecom was a cash cow because of long-distance charges, and if your mechanical phone switch was paid off you were printing money (regulations guaranteed revenue).
This didn't last much longer, and many places were trying to diversify into managed services (Datadog-style monitoring of companies' on-prem network and server equipment, etc.), which they call "unregulated" revenue.
As with anything in business, irrational exuberance can kill you.
I think the high-density data centers that are being built to support the hyperscalers are more analogous to the dark fiber overbuild. When you lit that fiber in 2015, you (presumably) were not using a line card bought back in 1998.
I think the fundamental issue is the uncertainty of achieving AGI with baked in fundamentals of reasoning.
Almost 90% of topline investments appear to be geared around achieving that in the next 2-5 years.
If that doesn’t come to pass soon enough, investors will lose interest.
Interest has been maintained by continuous growth in benchmark results.
Perhaps this pattern can continue for another 6-12 months before fatigue sets in; there are no new math olympiads left to claim a gold medal on…
What's next is to show real results in true software development, cancer research, robotics.
I am highly doubtful the current model architecture will get there.
AGI is not near. At best, the domains where we send people to years of grad school so that they can do unnatural reasoning tasks in unmanageable knowledge bases, like law and medicine, will become solid application areas for LLMs. Coding, most of all, will become significantly more productive. Thing is, once the backlog of shite code gets re-engineered, the computing demand for new code creation will not support bubble levels of demand for hardware.
If you speak with AI researchers, they all seem reasonable in their expectations.
... but I work with non-technical business people across industries and their expectations are NOT reasonable. They expect ChatGPT to do their entire job for $20/month and hire, plan, budget accordingly.
12 months later, when things don't work out, their response to AI goes to the other end of the spectrum -- anger, avoidance, suspicion of new products, etc.
Enough failures and you have slowing revenue growth. I think if companies see lower revenue growth (not even drops!), investors will get very very nervous and we can see a drop in valuations, share prices, etc.
Hyperscalers are only spending less than half of their operating cash flows on AI capex. Full commitment to achieving AGI within a few years would look much different.
The biggest issue with Nvidia is that their revenue is not recurring, but the market is treating their stock (which is correlated with all semi stocks) as if it were, when it may be a one-time massive capex investment lasting 1-2 years.
It's as simple as that; this is why it's just not possible for this to continue.
NVDA stock does not trade at a huge multiple. Only 25x EPS despite very rapid top line growth and a dominant position at the eve of possibly the most important technology transition in the history of humankind. The market is (and has been) pricing in a slowdown.
The same could be said of the COVID darlings: Zoom, Peloton, etc. They got bid up assuming the present would continue into the future. That is the nature of markets. Same story with fake-meat companies. Across time you will find this pattern (3D printing, etc.), all ushering in some new faddish technology. It also explains the investments into OpenAI as a hedge against a capex slowdown, so there is a captive customer.
This. It’s basic economics. The second there’s a blip, the market will be flooded with cheap used GPUs and there will be zero reason to buy new ones. At that point it will be impossible for Nvidia to sustain their revenue numbers.
The one thing I don't understand is this assumption that demand for GPUs for training is going to keep growing at the rate it has grown so far.
I get the demand for new applications, which require inference, but nowadays with so many good (if not close to SOTA) models available for free and the ability to run them on consumer hardware (apple M4 or AMD Max APUs), is there any demand for applications that justify a crazy amount of investment in GPUs?
Inference will be cheapest when run in a shared cloud environment, simply due to the LLM roofline. Thus, most B2B use cases are likely to be datacenter-based, like AWS today.
Of course, CERN is still going to use their FPGAs hyper-optimized for their specific trigger model at the LHC, and Apple is going to use a specialized low-power ASIC running a quantized model for "Hey Siri," but I meant the majority use case.
Apologies for the second reply, but it also occurs to me that reinforcement learning is the new battleground. Look at the changes between o1, o3 and GPT-5 thinking. Sonnet 3.7, Sonnet 4, and Sonnet 4.5. And so forth.
I expect models will get larger again once everyone is doing their inference on B200s, but the RL training budget is where the insatiable appetite sits right now.
There was a similar circular effect in the dot com boom around ads. VCs poured money into startups, which put the money into ads on Yahoo and other properties. Yahoo was getting huge revenue from the ads, which pumped up the stock price. The rising price and revenue, and hence stock price of Yahoo pumped up the market for other dot coms, as it proved you could make money on the Internet, so the market for dot com IPOs was strong. That drew more VC money. More VC money meant more ads.
> However, what’s become clear is that OpenAI plans to pay for Nvidia’s graphics processing units (GPUs) through lease arrangements, rather than upfront purchases
I wish someone here could explain it to a dummy like me. Nvidia tells OpenAI: here's some GPUs, can you pay for them over 5 years. How is this an "investment" by Nvidia? That reference keeps calling this an investment, but what they describe is a lease agreement. Why do they call it an investment? What am I missing?
That Nvidia has to front the costs of the product at the beginning and arguably the risk that the allocation of assets end up not being paid off (bankruptcy, etc.) By carrying those costs early and the associated risk, Nvidia expects a return on that. If the risk is realized they'll lose but otherwise they'll gain. That has all the hallmarks of an investment.
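One way to see that logic is to sketch the expected present value of the lease stream under default risk. Every number below is invented for illustration; none comes from the actual Nvidia/OpenAI arrangement:

```python
# Toy lessor's view: future lease payments are discounted for both
# the time value of money and the chance the lessee stops paying.
upfront_price = 100.0        # hypothetical cash-sale price today
annual_lease = 25.0          # hypothetical payment per year, for 5 years
discount_rate = 0.08         # assumed cost of capital
p_default_per_year = 0.05    # assumed yearly default probability

expected_npv = 0.0
p_still_paying = 1.0
for year in range(1, 6):
    p_still_paying *= (1 - p_default_per_year)   # lessee still solvent
    expected_npv += annual_lease * p_still_paying / (1 + discount_rate) ** year

# Nominal payments total $125, but the risk- and time-adjusted value
# can land below the $100 cash sale; that gap is the risk the lessor carries.
print(round(expected_npv, 2))
```

Fronting the hardware in exchange for risky, deferred payments in hope of a return is exactly the shape of an investment, even if the paperwork says "lease."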
Are these companies developing InfiniBand-class interconnects to pair with their custom chips? Without equivalent fabric, they can’t replace NVIDIA GPUs for large-scale training.
A recent Huang podcast went into this, making the point that custom chips won't be competitive with Nvidia's, as Nvidia is now making specialized chips instead of just "GPUs".
By that I mean, those were the last consoles where performance improvements delivered truly new experiences, where the hardware mattered.
Today, any game you make for a modern system is a game you could have made for the PS3/Xbox 360 or perhaps something slightly more powerful.
Certainly there have been experiences that use new capabilities that you can’t literally put on those consoles, but they aren’t really “more” in the same way that a PS2 offered “more” than the PlayStation.
I think in that sense, there will be some kind of bubble. All the companies that thought that AI would eventually get good enough to suit their use case will eventually be disappointed and quit their investment. The use cases where AI makes sense will stick around.
It’s kind of like how we used to have pipe dreams of certain kinds of gameplay experiences that never materialized. With our new hardware power we thought that maybe we could someday play games with endless universes of rich content. But now that we are there, we see games like Starfield prove that dream to be something of a farce.
> By that I mean, those were the last consoles where performance improvements delivered truly new experiences, where the hardware mattered.
The PS3 is the last console to have actual specialized hardware. After the PS3, everything is just regular ol' CPU and regular ol' GPU running in a custom form factor (and a stripped-down OS on top of it); before then, with the exception of the Xbox, everything had customized coprocessors that are different from regular consumer GPUs.
> By that I mean, those were the last consoles where performance improvements delivered truly new experiences, where the hardware mattered.
I hope that's where we are, because that means my experience will still be valuable and vibe coding remains limited to "only" tickets that take a human about half a day, or a day if you're lucky.
Given the cost needed for improvements, it's certainly not implausible…
…but it's also not a sure thing.
I tried "Cursor" for the first time last week, and just like I've been experiencing every few months since InstructGPT was demonstrated, it blew my mind.
My game metaphor is 3D graphics in the 90s: every new release feels amazing*, such a huge improvement over the previous release, but behind the hype and awe there was enough missing for us to keep that cycle going for a dozen rounds.
We did get more: the return of VR couldn't have been possible without drastically improved hardware.
But the way it stayed niche shows that it's not just about new gameplay experiences.
Compare with the success of Wii Sports and Wii Fit, which I would guess managed it better, though through a different kind of hardware than you are thinking about?
And I kind of expect the next Nintendo console to have a popular AR glasses option, which also would only have been made possible thanks to improving hardware (of both kinds).
I wonder if the buying customers of Nvidia are going to find themselves left with the overcapacity. Certainly people are waking up to LLM challenges, and as budgets focus more on useful applications and smaller language models, how much of that demand will remain?
Also, depreciation schedules beyond the useful life of an asset may not be fraud, but I'd call them a bit too creative for my liking.
Glad to see Tom's blog on HN - as usual a great write up. A number of us have been chatting about this for several months now, and the take is fairly sober.
Meta commentary, but I've grown weary of how commentary by actual domain experts in our industry is underrepresented and underdiscussed on HN in favor of emotionally charged takes.
Calling a VC a "domain expert" is like calling an alcoholic a "libation engineer." VC blogs are, in the best case, mildly informative, and in the worst, borderline fraudulent (the Sequoia SBF piece being a recent example, but there are hundreds).
The incentives are, even in a true "domain expert" case (think: doctors, engineers, economists), often opaque. But when it comes to VCs, this gets ratcheted up by an order of magnitude.
This reminds me of SGI at the peak of the dot-com bubble.
SGI (Silicon Graphics) made the 3D hardware that many companies relied on for their own businesses, in the days before Windows NT and Nvidia came of age.
Alias|Wavefront and Discreet were two companies whose product cycles were very tied to the SGI product cycles, with SGI having some ownership, whether wholly owned or spun out (as SGI collapsed). I can't find the reporting from the time, but it seemed to me that the SGI share price was propped up by product launches from the likes of Alias|Wavefront or Discreet. Equally, the 3D software houses seemed to have share prices propped up by SGI product launches.
There was also the small matter of insider trading. If you knew the latest SGI boxes were lemons, you could place your bets on the 3D software houses accordingly.
Eventually Autodesk, Computer Associates and others owned all the software, or, at least, the user bases. Once upon a time these companies were on the stock market and worth billions, but then they became just another bullet point in the Autodesk footer.
My prediction is that a lot of AI is like that, a classic bubble, and, when the show moves on, all of these AI products will get shoehorned into the three companies that will survive, with competition law meaning that it will be three rather than two eventual winners.
Equally, much like what happened with SGI, Nvidia will eventually come a cropper when the valuations built on today's hype and hubris fail to deliver.
But the answer is, "kinda"? There are similarities, but the AI buildout is worse in some ways (more concentration, GPU-backed debt) and better in others (capacity is being used, vendors actually have cash flow).
The conclusion:
> Unlike the telecom bubble, where demand was speculative & customers burned cash, this merry-go-round has paying riders.
Seems a little short sighted to me. IMO, there is a definite echo, but we are in the mid-late stage, not the end stage.
It's simply not fair to compare Lucent at the end of a bubble with Nvidia in the middle, and that is what the author did.
Looking at the last chapter of the essay, there was a lot of illegal activity by Lucent in the run-up to the collapse. Today, we won't know the list of shady practices until the bubble bursts. I doubt Tom could legally speculate; he'd likely be sued into oblivion if he even hinted at malfeasance by these trillion-dollar companies.
The smartest finance folks I know say that this “irrational exuberance” works until it doesn’t. Meaning nobody really thinks it’s sustainable, but companies and VCs chasing the AI hype bubble have backed themselves into a corner where the only way to stop the bubble from bursting is to keep inflating the bubble.
The fate of the bubble will be decided by Wall Street not tech folks in the valley. Wall Street is already positioning itself for the burst and there’s lots of finance types ready to call party over and trigger the chaos that lets them make bank on the bubble’s implosion.
These finance types (family offices, small secret investment funds) eat clueless VCs throwing cash on the fire for lunch… and they’re salivating at what’s ahead. It’s a “Big Short” once in 20-30 years type opportunity.
>These finance types (family offices, small secret investment funds) eat clueless VCs throwing cash on the fire for lunch… and they’re salivating at what’s ahead. It’s a “Big Short” once in 20-30 years type opportunity.
No - it's very hard to successfully bet against anything in finance, and VCs and non-public investments are particularly hard. When you go long, you simply buy something and hold it until you decide to sell. If you short, you have to worry about borrowing shares, paying short fees, and having unlimited risk.
How would you even begin to bet against OpenAI specifically? The closest proxy I can think of is shorting NVDA.
There's also nobody whose job it is to make big one-time shorts. Like you said, it's a once in 20-30 years opportunity, so no one builds a hedge fund dedicated to sitting around for decades waiting for that opportunity. There will certainly be exceptions, and maybe they'll make a Big Short 2 about the scrappy underdogs who saw the opening and timed it perfectly. But the vast majority of Wall Street desperately wants the party to continue.
> have backed themselves into a corner where the only way to stop the bubble from bursting is to keep inflating the bubble.
They are not in any corner. They rightly believe that they won't be allowed to fail. There's zero cost to inflating the bubble. If they tank a loss, it's not their money and they'll go on to somewhere else. If they get lucky (maybe skillful?) they get out of the bubble before anyone else, but get to ride it all the way to the top.
The only way they lose is if they sit by and do nothing. The upside is huge, and the downside is non-existent.
Oracle's announcement of a $300B purchase commitment from OpenAI, followed soon by a $100B investment into OpenAI.
The pace and size of these announcements is reaching a fever-pitch which seems like an attempt to keep the music playing.
Great points. I am bullish on AI but also wary of accounting practices. Tom says Nvidia's financials are different from Lucent's but that doesn't mean we shouldn't be wary.
The Economist has a great discussion on depreciation assumptions having a huge impact on how the finances of the cloud vendors are perceived[1].
Revenue recognition and expectations around Oracle could also be what bursts the bubble. Coreweave or Oracle could be the weak point, even if Nvidia is not.
The telecom bubble built infrastructure for demand that didn't exist yet; they built anticipating a need that didn't come in time.
The GPU bubble is different. Nvidia is actually selling GPUs in spades. So it's not comparable to the telecom bubble. Now the question remains: how many more GPUs can they sell? That depends on the kinds of services that are built and how their adoption takes off. So, is it a bubble or just frothy at the top? There is definitely going to be a pullback and some adjustment, but I cannot say how bad it will be.
Distinguishing that in-hindsight Lucent was committing accounting fraud and present firms aren't is a load-bearing assumption here; for all we know the big players in the AI bubble just haven't been outed yet.
Some great insights with some less interesting in there. I didn’t know about the SPVs, that’s sketchy and now I wanna know how much of that is going on. The MIT study that gets pulled out for every critical discussion of AI was an eye roll for me. But very solid analysis of the quants.
How much of a threat custom silicon is to Nvidia remains an open question to me. I kinda think, by now, we can say they're similar but different enough to coexist in the competitive compute landscape?
With all the major players like Amazon, Microsoft and Alphabet going for their own custom chips, and restrictions on selling to China, it will be interesting to see how Nvidia does.
I personally would prefer China to get to parity on node size and get competitive with nvidia. As that is the only way I see the world not being taken over by the tech oligarchy.
The custom chips don’t seem to be gaining traction at scale. On paper the specs look good but the ecosystem isn’t there. The bubble popping and flooding the market with CUDA GPUs means it will make even less sense to switch.
TLDR: Lucent was committing various forms of accounting fraud, had an unhealthy cash flow position, and had their primary customers on economically dangerous ground. Nvidia meanwhile appears to be above board, has strong cash flow, and has extremely strong, dominant customers (e.g., customers that could reduce spending but can survive a downturn). Therefore there's no clear takeaway: similarities but also differences. Risk and a lot of debt, as well as hyperscalers insulating themselves from some of that risk... but at the same time a lot more cash to burn.
One of the things about the market before AI was that capital had limited growth opportunities. Tech, which was basically a universe of scaled-out CRUD apps, was where capital kept flowing back.
AI is a lot more useful than hyper-scaled-up CRUD apps. Comparing this to the past is really overfitting imho.
The only argument against accumulating GPUs is that they get old and stop working. Not that it sucks, not that it’s not worth it. As in, the argument against it is actually in the spirit of “I wish we could keep the thing longer”. Does that sound like there’s no demand for this thing?
The AI thesis requires getting on board with what Jensen has been saying:
1) We have a new way to do things
2) The old ways have been utterly outclassed
3) If a device has any semblance of compute power, it will need to be enhanced, updated, or wholesale replaced with an AI variant.
There is no middle ground to this thesis. There is no “and we’ll use AI here and here, but not here, therefore we predictably know what is to come”.
Get used to the unreal. Your web apps could truly one day be generated frame by frame by a video model. Really. The amount of compute we’ll need will be staggering.
> Your web apps could truly one day be generated frame by frame by a video model. Really. The amount of compute we’ll need will be staggering.
We've technically been able to play board games by entering our moves into our telephones, sending them to a CPU to be combined, then printing out a new board on paper to conform to the new board state. We do not do this because it would be stupid. We cannot count on people starting to do this to save the paper, printer, and ink industries. Some things are not done because they are worthless.
hackthemack|4 months ago
Something I wanted to mention, only somewhat tanget. The Telecommunications Act of 1996 forced telecommunication companies to lease out their infrastructure. It massively reduced the prices an ISP had to pay to get T1, because, suddenly, there was competition. I think a T1 went from 1800 a month in 1996, to around 600 a month in 1999. It was a long time ago, so my memory is hazy.
But, wouldn't you know it, the Telecommunication companies sued the FCC and the Telecommunications Act was gutted in 2003
https://en.wikipedia.org/wiki/Competitive_local_exchange_car...
marcusb|4 months ago
It varied a lot by region. At the mom and pop ISP I worked at, we went from paying around $1,500/month for a T1 to $500 to, eventually around $100/month for the T1 loop to the customer plus a few grand a month for an OC12 SONET ring that we used to backhaul the T1 (and other circuits) back to our datacenter.
But, all of it was driven by the Telecommunications Act requirement for ILECs to sell unbundled network facilities - all of the CLECs we purchased from were using the local ILEC for the physical part of the last mile for most (> 75%) of the circuits they sold us.
One interesting thing that happened was that for a while in the late 90’s, when dialup was still a thing, we could buy a voice T1 PRI for substantially less than a data T1 ($250 for the PRI vs $500 for the T1.) The CLEC’s theory was our dialup customers almost all had service from the local ILEC, and the CLEC would be paid “reciprocal compensation” fees by the ILEC for the CLEC accepting calls from them.
In my market, when the Telecommunications Act reforms were gutted, the ILEC just kept on selling wholesale/unbundled services to us. I think they had figured out at that point that it was a very profitable line of business if they approached it the right way.
jauntywundrkind|4 months ago
Regarding the price of connections, it's also worth mentioning that while T1 and other T-carrier and OCx connections remained in high use, 1996-1999 was also the period when DSL became readily available and was a very fine choice for many needs. This certainly created significant cost pressure on other connectivity options.
bwfan123|4 months ago
awongh|4 months ago
If it's true that this regulation was what helped jumpstart the internet, it's an interesting counterpoint to the apocalyptic predictions people make when these regulations are undone. (Net neutrality comes to mind as well.)
I've never heard anyone claim before that just having these laws on the books for a small period of time is "enough".
nroets|4 months ago
But the price war was inevitable. And the telecoms bubble was highly likely in any case.
Telecoms investment was a response to crazy valuations of dot-com stocks.
mg|4 months ago
If a minute of GPU usage is currently $0.10, a night of GPU usage is 8 * 60 * 0.1 = $48. Which might very well be worth it for an improved codebase. Or a better design of a car. Or a better book cover. Or a better business plan.
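The arithmetic above can be sketched out directly. The $0.10/minute rate is the commenter's hypothetical figure, not a quoted price, and the function name here is just for illustration:

```python
RATE_PER_MINUTE = 0.10  # dollars; hypothetical rate from the comment
MINUTES_PER_HOUR = 60

def overnight_cost(hours: int = 8, rate_per_minute: float = RATE_PER_MINUTE) -> float:
    """Cost in dollars of running one GPU for `hours` at `rate_per_minute`."""
    return hours * MINUTES_PER_HOUR * rate_per_minute

print(f"${overnight_cost():.2f}")  # $48.00
```

At those rates, a full month of nightly runs would be about $1,440 per GPU, which is the scale at which the "is the improved codebase worth it?" question actually gets decided.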
Mo3|4 months ago
I'd argue we very certainly will. Companies are gobbling up GPUs like there's no tomorrow, assuming demand will remain stable and continue growing indefinitely. Meanwhile LLM fatigue has started to set in, models are getting smaller and smaller and consumer hardware is getting better and better. There's no way this won't end up with a lot of idle GPUs.
credit_guy|4 months ago
That is nothing. Coding is done via text. Very soon people will use generative AI for high resolution movies. Maybe even HDR and high FPS (120 maybe?). Such videos will very likely cost in the range of $100-$1000 per minute. And will require lots and lots of GPUs. The US military (and I bet others as well) are already envisioning generative AI use for creating a picture of the battlespace. This type of generation will be even more intensive than high resolution videos.
bigbadfeline|4 months ago
The cost/benefit analysis doesn't add up for two reasons:
First, a refactored codebase works almost the same as a non-refactored one; that is, the tangible benefit is small.
Second, how many times are you going to refactor the codebase? Once and... that's it. There's simply no need for that much compute for lack of sufficient beneficial work.
That is, the present investments are going to waste unless we automate and robotize everything. I'm OK with that, but it's not where the industry is going.
skrebbel|4 months ago
I've seen lots of claims about AI coding skill, but that one might be able to improve (and not merely passably extend) a codebase is a new one. I'd want to see it before I believe it.
thenaturalist|4 months ago
And you can't really hack / outsmart feedback loops.
Just because something is conceptually possible doesn't make it optimal; interaction with the rest of the real world is what separates a possible solution from an optimal one.
The low hanging fruits/ obvious incremental improvements might be quickly implemented by LLMs based on established patterns in their training data.
That doesn't get you from 0 to 1 dollar, though, and that's what it's all about.
cantor_S_drug|4 months ago
jdlshore|4 months ago
This is the fundamental error I see people making. LLMs can’t operate independently today, not on substantive problems. A lot of people are assuming that they will some day be able to, but the fact is that, today, they cannot.
The AI bubble has been driven by people seeing the beginning of an S-curve and combining it with their science-fiction fantasies about what AI is capable of. Maybe they’re right, but I’m skeptical, and I think the capabilities we see today are close to as good as LLMs are going to get. And today, it’s not good enough.
ccorcos|4 months ago
yubblegum|4 months ago
stephc_int13|4 months ago
The only thing to keep in mind is that all of this is about business and ROI.
Given the colossal investments, even if the companies finances are healthy and not fraudulent, the economic returns have to be unprecedented or there will be a crash.
They are all chasing a golden goose.
jauntywundrkind|4 months ago
delusional|4 months ago
I agree, but would like to maybe build out that theory. When we start talking about the mechanisms of the past, we end up over-constraining the possibility space. There were a ton of different ways the dotcom bubble COULD have played out, and only one way it did. If we view the way it did play out as the only way it possibly could have, we'll almost certainly miss the way the next bubble will play out.
Mistletoe|4 months ago
Printerisreal|4 months ago
nickdothutton|4 months ago
How much is a 10 year old GPU worth? Where is the “dwdm but for GPUs?”.
These truly are interesting times, and we have the benefit of being in them.
keeda|4 months ago
> How much is a 10 year old GPU worth? Where is the “dwdm but for GPUs?”.
From other sources cited in TFA it seems GPUs won't last 3 years, let alone 10! But I think we know what the "DWDM for GPUs" is -- it's the processing efficiency gains that we've seen over the last few years which keeps driving the per-token prices sharply down.
pragmatic|4 months ago
The only difference is fiber optic lines remained useful the whole time. Will these cards have the same longevity?
(I have no idea just sharing anecdata)
Zigurd|4 months ago
heisenbit|4 months ago
The article cites anecdotal 1-2 years due to the significant stress.
pragmatic|4 months ago
This didn't last that much longer, and many places were trying to diversify into managed services (think Datadog for companies' on-prem network and server equipment, etc.), which they call "unregulated" revenue.
As with anything in business, irrational exuberance can kill you.
hyghjiyhu|4 months ago
mjcl|4 months ago
narmiouh|4 months ago
Almost 90% of topline investment appears to be geared toward achieving that in the next 2-5 years.
If that doesn't come to pass soon enough, investors will lose interest.
Interest has been maintained by continuous growth in benchmark results. Perhaps this pattern can continue for another 6-12 months before fatigue sets in; there are no new math olympiads to claim a gold medal on.
What's next is to show real results in true software development, cancer research, robotics.
I am highly doubtful the current model architecture will get there.
Zigurd|4 months ago
cl42|4 months ago
If you speak with AI researchers, they all seem reasonable in their expectations.
... but I work with non-technical business people across industries and their expectations are NOT reasonable. They expect ChatGPT to do their entire job for $20/month and hire, plan, budget accordingly.
12 months later, when things don't work out, their response to AI goes to the other end of the spectrum -- anger, avoidance, suspicion of new products, etc.
Enough failures and you have slowing revenue growth. I think if companies see lower revenue growth (not even drops!), investors will get very very nervous and we can see a drop in valuations, share prices, etc.
xadhominemx|4 months ago
stevenhuang|4 months ago
digitcatphd|4 months ago
Simple as this - as to why it's just not possible for this to continue.
xadhominemx|4 months ago
bwfan123|4 months ago
Workaccount2|4 months ago
Their margin is ridiculous and they are still unable to meet demand.
JCM9|4 months ago
cyanydeez|4 months ago
Obviously its a bubble but thats meaningless for anyone but the richest to manage.
The rest of us are just ants.
rglullis|4 months ago
I get the demand for new applications, which require inference, but nowadays with so many good (if not close to SOTA) models available for free and the ability to run them on consumer hardware (apple M4 or AMD Max APUs), is there any demand for applications that justify a crazy amount of investment in GPUs?
porridgeraisin|4 months ago
Of course, CERN is still going to use their FPGA hyper-optimized for their specific trigger model for the LHC, and Apple is going to use a specialized low-power ASIC running a quantized model for "Hey Siri," but I meant the majority use case.
Leynos|4 months ago
I expect models will get larger again once everyone is doing their inference on B200s, but the RL training budget is where the insatiable appetite sits right now.
bix6|4 months ago
Leynos|4 months ago
cameldrv|4 months ago
ekjhgkejhgk|4 months ago
> However, what’s become clear is that OpenAI plans to pay for Nvidia’s graphics processing units (GPUs) through lease arrangements, rather than upfront purchases
I wish someone here could explain it to a dummy like me. Nvidia tells OpenAI: here's some GPUs, can you pay for them over 5 years? How is this an "investment" by Nvidia? That reference keeps calling this an investment, but what they describe is a lease agreement. Why do they call it an investment? What am I missing?
sbuttgereit|4 months ago
foundart|4 months ago
At the end Nvidia retains ownership of what are probably very low value assets.
Contrast that with car leases: there is a robust market for used cars.
Nvidia is in effect financing the GPUs by not requiring the full payment up front.
Do the lease payments add up to the total cost?
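Whether the lease payments add up is really a time-value-of-money question: a lessor earns a financing return when the discounted value of the payment stream matches or exceeds the upfront price. A minimal sketch with entirely hypothetical numbers (a $100M purchase price, five annual payments, an assumed 8% discount rate; none of these figures are from the article or any Nvidia filing):

```python
def present_value(payment: float, rate: float, periods: int) -> float:
    """PV of an ordinary annuity: equal payments discounted at `rate`."""
    return sum(payment / (1 + rate) ** t for t in range(1, periods + 1))

purchase_price = 100.0  # $M, hypothetical upfront cost of the GPUs
annual_payment = 25.0   # $M per year for 5 years, hypothetical
discount_rate = 0.08    # assumed cost of capital

pv = present_value(annual_payment, discount_rate, 5)
# If the PV of the payments is at or above the purchase price, the
# lessor is whole; anything above is effectively the interest charged
# for fronting the hardware.
print(f"PV of lease payments: ${pv:.1f}M vs ${purchase_price:.1f}M upfront")
```

This is why financing-by-lease gets described as an "investment": Nvidia is extending credit, bearing the risk that the lessee can't pay, in exchange for a return embedded in the payment stream.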
sails|4 months ago
This is surely the most important line in the piece? In what world would this much demand not lead to alternatives emerging?
(Assuming the upside, yes if the demand is not there in two years then yes it’s all going to burn)
monkeydust|4 months ago
pgspaintbrush|4 months ago
whp_wessel|4 months ago
https://open.spotify.com/episode/2ieRvuJxrpTh2V626siZYQ?si=2...
dangus|4 months ago
By that I mean, those were the last consoles where performance improvements delivered truly new experiences, where the hardware mattered.
Today, any game you make for a modern system is a game you could have made for the PS3/Xbox 360 or perhaps something slightly more powerful.
Certainly there have been experiences that use new capabilities that you can’t literally put on those consoles, but they aren’t really “more” in the same way that a PS2 offered “more” than the PlayStation.
I think in that sense, there will be some kind of bubble. All the companies that thought that AI would eventually get good enough to suit their use case will eventually be disappointed and quit their investment. The use cases where AI makes sense will stick around.
It’s kind of like how we used to have pipe dreams of certain kinds of gameplay experiences that never materialized. With our new hardware power we thought that maybe we could someday play games with endless universes of rich content. But now that we are there, we see games like Starfield prove that dream to be something of a farce.
jcranmer|4 months ago
The PS3 is the last console to have actual specialized hardware. After the PS3, everything is just regular ol' CPU and regular ol' GPU running in a custom form factor (and a stripped-down OS on top of it); before then, with the exception of the Xbox, everything had customized coprocessors that are different from regular consumer GPUs.
ben_w|4 months ago
I hope that's where we are, because that means my experience will still be valuable and vibe coding remains limited to "only" tickets that take a human about half a day, or a day if you're lucky.
Given the cost needed for improvements, it's certainly not implausible…
…but it's also not a sure thing.
I tried "Cursor" for the first time last week, and just like I've been experiencing every few months since InstructGPT was demonstrated, it blew my mind.
My game metaphor is 3D graphics in the 90s: every new release feels amazing*, such a huge improvement over the previous release, but behind the hype and awe there was enough missing for us to keep that cycle going for a dozen rounds.
* we used to call stuff like this "photorealistic": https://www.reddit.com/r/gaming/comments/ktyr1/unreal_yes_th...
BlueTemplar|4 months ago
But the way it stayed niche shows how it's not just about new gameplay experiences.
Compare with the success of Wii Sports and Wii Fit, which I would guess managed it better, though through a different kind of hardware than you are thinking about?
And I kind of expect the next Nintendo console to have a popular AR glasses option, which also would only have been made possible thanks to improving hardware (of both kinds).
MASNeo|4 months ago
Also, depreciation schedules that extend beyond the useful life of an asset may not be fraud, but I'd call it a bit too creative for my liking.
Time will tell.
alephnerd|4 months ago
Meta commentary, but I've grown weary of how commentary by actual domain experts in our industry is underrepresented and underdiscussed on HN in favor of emotionally charged takes.
dvt|4 months ago
Calling a VC a "domain expert" is like calling an alcoholic a "libation engineer." VC blogs are, in the best case, mildly informative, and in the worst, borderline fraudulent (the Sequoia SBF piece being a recent example, but there are hundreds).
The incentives are, even in a true "domain expert" case (think: doctors, engineers, economists), often opaque. But when it comes to VCs, this gets ratcheted up by an order of magnitude.
Theodores|4 months ago
SGI (Silicon Graphics) made the 3D hardware that many companies relied on for their own businesses, in the days before Windows NT and Nvidia came of age.
Alias|Wavefront and Discreet were two companies whose product cycles were tied into the SGI product cycles, with SGI having some ownership, whether wholly owned or spun out (as SGI collapsed). I can't find the reporting from the time, but it seemed to me that the SGI share price was propped up by product launches from the likes of Alias|Wavefront or Discreet. Equally, the 3D software houses seemed to have share prices propped up by SGI product launches.
There was also the small matter of insider trading. If you knew the latest SGI boxes were lemons, then you could place your bets on the 3D software houses accordingly.
Eventually Autodesk, Computer Associates and others owned all the software, or at least the user bases. Once upon a time these companies were on the stock market and worth billions, but then they became just another bullet point in the Autodesk footer.
My prediction is that a lot of AI is like that, a classic bubble, and, when the show moves on, all of these AI products will get shoehorned into the three companies that will survive, with competition law meaning that it will be three rather than two eventual winners.
Equally, much like what happened with SGI, Nvidia will eventually come a cropper when the valuations built on today's hype and hubris fail to deliver.
nextworddev|4 months ago
foundart|4 months ago
Certainly it suggests that “this time is different” without saying it in a quotable fashion.
The metrics it provides seem useful. What are the metrics it is missing?
mooreds|4 months ago
But the answer is, "kinda"? There are similarities, but the AI buildout is worse in some ways (more concentration, GPU backed debt) and better in others (capacity is being used, vendors actually have cash flow).
The conclusion:
> Unlike the telecom bubble, where demand was speculative & customers burned cash, this merry-go-round has paying riders.
Seems a little short sighted to me. IMO, there is a definite echo, but we are in the mid-late stage, not the end stage.
It's simply not fair to compare Lucent at the end of a bubble with Nvidia in the middle, and that is what the author did.
If you haven't listened to the referenced interview between Thompson and Kedrosky, I'd do so: https://www.theringer.com/podcasts/plain-english-with-derek-...
spaceballbat|4 months ago
JCM9|4 months ago
The fate of the bubble will be decided by Wall Street not tech folks in the valley. Wall Street is already positioning itself for the burst and there’s lots of finance types ready to call party over and trigger the chaos that lets them make bank on the bubble’s implosion.
These finance types (family offices, small secret investment funds) eat clueless VCs throwing cash on the fire for lunch… and they’re salivating at what’s ahead. It’s a “Big Short” once in 20-30 years type opportunity.
ProjectArcturis|4 months ago
No - it's very hard to successfully bet against anything in finance, and VCs and non-public investments are particularly hard. When you go long, you simply buy something and hold it until you decide to sell. If you short, you have to worry about borrowing shares, paying short fees, and having unlimited risk.
How would you even begin to bet against OpenAI specifically? The closest proxy I can think of is shorting NVDA.
There's also nobody whose job it is to make big one-time shorts. Like you said, it's a once in 20-30 years opportunity, so no one builds a hedge fund dedicated to sitting around for decades waiting for that opportunity. There will certainly be exceptions, and maybe they'll make a Big Short 2 about the scrappy underdogs who saw the opening and timed it perfectly. But the vast majority of Wall Street desperately wants the party to continue.
delusional|4 months ago
They are not in any corner. They rightly believe that they won't be allowed to fail. There's zero cost to inflating the bubble. If they tank a loss, it's not their money and they'll go on to somewhere else. If they get lucky (maybe skillful?) they get out of the bubble before anyone else, but get to ride it all the way to the top.
The only way they lose is if they sit by and do nothing. The upside is huge, and the downside is non-existent.
bwfan123|4 months ago
cl42|4 months ago
The Economist has a great discussion on depreciation assumptions having a huge impact on how the finances of the cloud vendors are perceived[1].
Revenue recognition and expectations around Oracle could also be what bursts the bubble. Coreweave or Oracle could be the weak point, even if Nvidia is not.
[1] https://www.economist.com/business/2025/09/18/the-4trn-accou...
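The depreciation point can be made concrete with a toy straight-line calculation. The numbers here are hypothetical, chosen only to show the mechanism: stretching the assumed useful life of the same GPU fleet from three years to six halves the annual depreciation charge, which flatters reported earnings without changing a dollar of cash spent.

```python
def annual_depreciation(cost: float, salvage: float, years: int) -> float:
    """Straight-line depreciation: equal annual expense over the asset's life."""
    return (cost - salvage) / years

fleet_cost = 30.0  # $B of GPUs, hypothetical; assume zero salvage value

three_year = annual_depreciation(fleet_cost, 0.0, 3)  # aggressive, matches ~3yr lifespan claims
six_year = annual_depreciation(fleet_cost, 0.0, 6)    # generous schedule

# Same hardware, same cash outlay - but the longer schedule reports
# $5B/year less expense, and therefore $5B/year more operating income.
print(three_year - six_year)  # 5.0
```

If the GPUs actually wear out in 1-3 years, as some anecdotes in the thread suggest, the longer schedule defers real costs into future periods, which is exactly the accounting risk the Economist piece is pointing at.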
ekjhgkejhgk|4 months ago
https://capitalgains.thediff.co/p/vendor-financing
yalogin|4 months ago
The GPU bubble is different. Nvidia is actually selling GPUs in spades, so it's not comparable to the telecom bubble. Now the question remains: how many more GPUs can they sell? That depends on the kind of services that are built and how their adoption takes off. So is it a bubble, or just frothy at the top? There is definitely going to be a pullback and some adjustment, but I cannot say how bad it will be.
mwkaufma|4 months ago
brazukadev|4 months ago
dehrmann|4 months ago
Really?! I'm not used to chips having such a short lifespan.
metadat|4 months ago
davedx|4 months ago
How much of a threat custom silicon is to Nvidia remains an open question to me. I kinda think, by now, we can say they're similar but different enough to coexist in the competitive compute landscape?
alephnerd|4 months ago
Nvidia has also begun trying to enter the custom silicon sector as well, but it's still largely dominated by Broadcom, Marvell, and Renesas.
xbmcuser|4 months ago
I personally would prefer China to get to parity on node size and get competitive with nvidia. As that is the only way I see the world not being taken over by the tech oligarchy.
JCM9|4 months ago
rossdavidh|4 months ago
redwood|4 months ago
ivape|4 months ago
AI is a lot more useful than hyper scaled up crud apps. Comparing this to the past is really overfitting imho.
The only argument against accumulating GPUs is that they get old and stop working. Not that it sucks, not that it’s not worth it. As in, the argument against it is actually in the spirit of “I wish we could keep the thing longer”. Does that sound like there’s no demand for this thing?
The AI thesis requires getting on board with what Jensen has been saying:
1) We have a new way to do things
2) The old ways have been utterly outclassed
3) If a device has any semblance of compute power, it will need to be enhanced, updated, or wholesale replaced with an AI variant.
There is no middle ground to this thesis. There is no “and we’ll use AI here and here, but not here, therefore we predictably know what is to come”.
Get used to the unreal. Your web apps could truly one day be generated frame by frame by a video model. Really. The amount of compute we’ll need will be staggering.
pessimizer|4 months ago
We've technically been able to play board games by entering our moves into our telephones, sending them to a CPU to be combined, then printing out a new board on paper to conform to the new board state. We do not do this because it would be stupid. We can not depend on people starting to do this saving the paper, printer, and ink industries. Some things are not done because they are worthless.