I had my formative years in programming when memory usage was something you still worried about as a programmer. And then memory expanded so much that all kinds of “optimal” patterns for programming just became nearly irrelevant. Will we start to actually consider this in software solutions again as a result?
You're right in terms of fitting your program to memory, so that it can run in the first place.
But in performance work, the speed of RAM relative to computation has dropped so much that it's common wisdom to treat today's cache as the RAM of old (and today's RAM as the disk of old, etc.).
In software performance work it's been all about hitting the cache for a long time. LLMs aren't too amenable to caching though.
I've actively started to use Outlook and Teams through Chrome to free up some of my RAM; it easily saves 3-4 GB. It's gotten ridiculous how much RAM basic tools are using, leaving nothing for doing actual work.
I doubt it. I predict in a few years, maybe sooner, one/some of the AI companies buying up the supply will either have achieved their goal or collapsed, and then the market will be flooded with a glut of memory driving prices low again. Or, conversely, the demand stays high for a sustained period of time and the suppliers just increase supply. There's no hard bill of materials/technical reasons for the memory prices to be this high, unlike 20+ years ago.
> And then memory expanded so much that all kinds of “optimal” patterns for programming just became nearly irrelevant.
I don't think that ever happened. Using a relatively sparse amount of memory translates into better cache utilization, which in turn usually improves performance drastically.
And in embedded work, being good with memory management can make the difference between 'works' and 'fails'.
I never really bought in to the anti-Leetcode crowd’s sentiment that it’s irrelevant. It has always mattered as a competitive edge: against other job candidates if you’re an employee, or against the competition if you’re a company. It only looked irrelevant because opportunities were everywhere during ZIRP, but good times never last.
It's not like most developers are wasting memory for fun by using Electron etc. It's just the simplest way to deploy applications that require frequent multiplatform changes. Until you get Apple to approve native app changes faster and Linux users to agree on a framework, app distribution, etc., it's the optimal way to ship a product and not just a program.
RAM didn't get more expensive to produce. It just got more desirable. The prices will come down again when supply responds. It may take some time, but it will happen eventually.
We would have, if expensive memory were a long-term trend. It is not: eventually the supply will expand to match demand. There is no fundamental lack of raw materials underlying the issue; it is just a demand shock.
I just heard a podcast where they talked about how powerful our devices are today, yet they don't feel faster than they did 15 years ago, and that it's because of what you describe here.
When I practice Leetcode problems, I remember the best solution was usually the one that optimized CPU (time) instead of memory, meaning building an index of the data in memory instead of repeatedly iterating over the main data structure. I thought: OK, that's fine, it's normal; you can (could) always buy more RAM, but you can't buy more time.
But then, I think there is no single right answer; there will always be a trade-off, case by case, depending on the context.
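A minimal sketch of that trade-off, using the classic two-sum problem (function names are mine, purely illustrative): the brute-force version spends time, while the indexed version spends memory on a hash map to buy the time back.

```python
# Brute force: O(n^2) time, O(1) extra memory.
def two_sum_slow(nums, target):
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (i, j)
    return None

# Index in memory: O(n) time, O(n) extra memory for the value -> index map.
def two_sum_fast(nums, target):
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return (seen[target - x], i)
        seen[x] = i
    return None
```

Same answer either way; which one is "best" depends on whether time or memory is the scarce resource in context.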
Android is investing significantly in reducing the memory usage of its next release, simply because the BOM cost of RAM for its low-end partners is becoming prohibitive.
I think Europe should invest into manufacturing RAM. RAM isn't going anywhere, all of modern compute uses it. This would be an opportunity to create domestic supply of it.
The worry is that these high prices aren't going to last long, and by the time you've spent years building the capacity, prices will have plummeted, making your facility uneconomical to run.
Ram will always be in some demand, but that doesn't mean it's viable for everyone to start building production.
Aren’t Chinese manufacturers already expanding their capacity? Given that Samsung and SK Hynix have left that market in the pursuit of HBM4 chips, China is going to rule this market. At least that’s what analysts are saying.
It should. And it should enact the political reforms that would make large capital projects like fabs possible. The current confederacy is proving just as much a stepping stone for Europe as it was for America. I’m not saying a fully united Europe should emerge, but a system of vetoes is barely a system at all.
Idea: Take the money that Germany promised to Intel if they build a state of the art fab. Instead, ask SK Hynix, Samsung, or Micron to build a DRAM fab in Germany.
Europe needs to focus on energy and fixing their supply chain first. And the deregulation push keeps getting delayed. Just the other day the main person behind it in Germany got sacked because of internal power struggles, in part because of the Greens (part of the coalition).
For context, the German manufacturing sector is losing something like 15k jobs PER MONTH.
> I think Europe should invest into manufacturing RAM. RAM isn't going anywhere, all of modern compute uses it. This would be an opportunity to create domestic supply of it.
It's easy to build factories, much more difficult to train the engineers required to run them... and let's not even talk about all the crazy regulations and environmental rules at the EU level that make that task even more difficult, because yes, chip factories do pollute... a lot.
Countries like South Korea or Taiwan have adapted their legislation, tax codes, and environmental regulations to allow such factories to operate easily. The EU and EU countries will never do that... better to outsource pollution and claim they care about the planet...
> I think Europe should invest into manufacturing RAM ... This would be an opportunity to create domestic supply of it
How?
Most foundries across Asia and the US are being given subsidies that outstrip those the EU is providing, and the only mega-foundry project in Europe was canceled by Intel last year [0].
Additionally, much of the backend work like OSAT and packaging is done in ASEAN (especially Malaysia), Taiwan, China, and India. As much of the work for memory chips is backend work (OSAT and packaging), this is a field the EU simply cannot compete in: it has FTAs with the US, Japan, South Korea, India, and Vietnam, so any EU attempt would be crushed well before it could replicate the process.
Furthermore, much of the IP in the memory space is owned by Korean, Japanese, Taiwanese, Chinese, and American champions who are largely investing either domestically or in Asia, as was seen with MUFG's announcement earlier today to create a dedicated end-to-end semiconductor fund specifically to unify Japan, Taiwan, and India into a single fab-to-fabless ecosystem [1]. SoftBank announced something similar to unify the US, Japan, Malaysia, and India into a similar end-to-end ecosystem as well a couple weeks ago [2]. Meanwhile, South Korea is trying to further shore up their domestic capacity [3] via subsidies and industrial policy.
When Japanese, Korean, and Taiwanese technology and capital partners are uninterested in investing in building European capacity, American technology and capital partners have pulled out of similar initiatives in Europe, and the EU is working to ban Chinese players [4], what can the EU even do?
----
Edit: can't reply
> Why are you overlooking European semiconductor champions
Because they don't have the IP for the flash memory supply chain. And whatever capacity and IP they have in chip design, front-end fab, or back-end fab is domiciled in the US, ASEAN, and India.
> STMicroelectronics
Power electronics and legacy nodes (28nm and above) for IoT and embedded applications.
> Infineon
Power electronics and legacy nodes (28nm and above) for automotive applications.
> NXP
Power electronics and legacy nodes (28nm and above) for embedded applications.
> All of them are skilled enough to build and operate a DRAM fab in Europe. A bunch of EU dev banks can lend the monies to get it built.
They don't have the IP. Much of the IP for the memory space is owned by Japanese, American, Korean, Taiwanese and Chinese companies.
Additionally, most Asian funds own both the IP and capital (often with government backing), making European attempts futile.
Essentially, the EU would have to start from scratch, decades behind countries with which it already has FTAs, countries that expanded capacity well before the EU and would thus be able to crush any incipient European competitor.
Back in the day if you could find a deal on defective RAM (that wasn't going to degrade further?), Linux could be configured to avoid the defects. Unfortunately this isn't allowed with secure/UEFI boot.
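For the curious, GRUB still documents this as the badram feature. A sketch of what that configuration looked like on a Debian-style setup; the addresses here are made-up placeholders, not real defect locations:

```shell
# /etc/default/grub -- addresses below are illustrative placeholders.
# Each pair is <address>,<mask>; memory regions matching a pair are
# excluded from the map GRUB hands to the kernel.
GRUB_BADRAM="0x01234567,0xfefefefe"

# Then regenerate the boot configuration:
#   sudo update-grub
```

Which physical ranges to list would come from a memtest run against the specific defective module.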
I'm working on my side product [1], where I was exploring a Rockchip that required external memory (just 1 GB) which went from $3 to $32 and completely destroyed the economics for me. I settled on one with embedded memory and optimized my code instead :)
I suspect game development will be similar: game companies will optimize their games, given that customers' cards are not going to be refreshed for a while or will be too expensive.
For reference, Octopart is useful to track prices from many distributors, linked below [0] is a commonly used memory (1G) for Rockchip, Amlogic, Allwinner on many Radxa and Orange Pis.
Part of the reason programs use so much memory is because of optimization, but of a different kind. Memory is fast-ish, so if you know or think that you will require X Y Z anyway then just load it in RAM. And, if you think you might need it later, don't bother unloading it. Just keep it around.
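In Python, that "load it once and keep it around" strategy is a one-liner with the standard library's functools.lru_cache; parse_config here is a hypothetical stand-in for some expensive load:

```python
from functools import lru_cache

# Trade RAM for speed: cache every result indefinitely instead of
# redoing the expensive work on each call.
@lru_cache(maxsize=None)
def parse_config(name: str) -> dict:
    # stand-in for reading and parsing a file from disk
    return {"name": name, "parsed": True}

parse_config("app")   # first call does the work
parse_config("app")   # repeat call is just a dictionary lookup
```

The cached entries are never unloaded unless you call parse_config.cache_clear(), which is exactly the "don't bother unloading it" behavior described above.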
Garbage collectors also use similar strategies. Collecting garbage is expensive, so just don't until you need to. The extra memory usage in this case isn't a downside, it's an upside. Your code runs faster.
That's how Java and .NET are able to achieve insane performance in some benchmarks, like within 50% of native. They're not collecting garbage, and their allocators are actually faster than malloc.
If you've ever run a Java program at a consistent 90% heap usage, you'll notice it absolutely grinds to a halt. I'm talking orders of magnitude slower. Naturally, this isn't highlighted in benchmarks, but it illustrates the power of allocating more memory.
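A rough illustration of the "defer collection" strategy in CPython, whose cyclic garbage collector can likewise be paused during an allocation-heavy burst (the workload below is invented for the example; real wins depend heavily on the program):

```python
import gc

def build_graph(n):
    # Allocate many small container objects; with the cyclic GC paused,
    # CPython skips its periodic generation-0 scans during this burst.
    nodes = [{"id": i, "edges": []} for i in range(n)]
    for i in range(1, n):
        nodes[i]["edges"].append(nodes[i - 1])
    return nodes

gc.disable()          # defer cycle collection during the hot loop
try:
    graph = build_graph(100_000)
finally:
    gc.enable()       # pay the collection cost once, at a convenient time
    gc.collect()
```

Note this only defers CPython's cycle detector; reference counting still frees acyclic garbage immediately, so the analogy to a JVM heap is loose.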
Only a matter of time before you hear about shipping trucks full of RAM going missing or being stolen. China is opening up more production, but I don’t see any relief coming soon.
China is fundamentally limited by lack of ASML machines due to a ban. China can only help if they can recreate ASML machines used to produce RAM with good yield and small enough features.
There were years in the 1990s and early 2000s when it was easy to get faster runtimes by using more memory, back when even a multi-user system like Linux or BSD was typically running one main program at a time. We had multiple hardware web servers, multiple hardware email servers, and multiple hardware database servers. Getting the most CPU performance out of the system for your applications was the order of the day.
Now, almost everything on the server side is a VM or a container. We have lots of neighbors who want to share the CPU and the RAM, and the RAM is the bigger constraint because the CPUs have 192 cores and each of those cores does a dozen times as much work as a decade ago. Heck, we used to have the memory controller on the motherboard and the last level of cache was a chip or module of SRAM outside the CPU.
We also have a situation now in which the CPU's speed multiple over RAM has skyrocketed, but the caches have gotten far larger and much smarter. Smaller things, arranged differently in RAM, make programs run faster because they make better use of the cache.
Now that RAM is expensive and shared, and program and data sizes and arrangements are bound to cache behavior, optimization can lean heavily into RAM again.
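The locality point can be sketched even in Python, though pointer indirection mutes the effect there compared with C or Rust. Both functions below compute the same sum, but walk memory in different orders:

```python
N = 500
grid = [[1] * N for _ in range(N)]  # N x N table of ones

def sum_row_major(g):
    # Walk each inner list sequentially: consecutive accesses tend to
    # touch nearby memory, which is the cache-friendly pattern.
    return sum(sum(row) for row in g)

def sum_col_major(g):
    # Jump between inner lists on every access: same arithmetic,
    # worse locality.
    return sum(g[i][j] for j in range(N) for i in range(N))
```

In a language with flat arrays the row-major walk is dramatically faster; here it mainly illustrates that the access pattern, not the data, is what changes.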
Some of these arguments hold true for desktop systems as well.
I have wondered for years when the time will come that instead of such huge and smart caches, someone will just put basically register-speed RAM on the chip and swap to motherboard RAM the way we swap to disk. HBM is somewhere close, being a substrate stacked in the package but not in the CPU die itself.
The joke is that Apple RAM pricing is now close to market level; they still have margin in there even at market prices, and they are notorious for supply chain management and locking in contracts/prices ahead of time. So I doubt Apple will change anything here short term.
On the flip side if you're buying a new computer in 2026 - it's going to be even harder to justify not getting a MacBook, the chips are already 2 years ahead of PC, the price of base models was super competitive, now that the ram is super expensive even the upgraded versions are competitive with the PC market. Oh and Windows is turning to an even larger pile of shit on a daily basis.
Apple also uses a different kind of RAM (IIRC a custom LPDDR5X to use with their unified-memory SoC), not the same kind as the commodity RAM that everyone is putting in their PCs. So they aren’t competing with everyone on it. Plus they probably locked in their rates back in 2024 with their suppliers.
I might not have bought NVDA or timed BTC correctly, but at least I have 512 GB of DDR5 in my server and 128 GB in my Macbook Pro haha. The reality is that these are insanely huge amounts of RAM. I'm glad to have them because I don't need these tab suspender extensions a bunch of my friends use, but really I'd prefer if GPUs were a bit cheaper, and server hardware was generally easier to get. An SXM5 based motherboard is really hard to get these days despite the fact that you can get super powered Epyc 9755s for comparatively nothing.
It reminds me of the heady days of Thai floods when hard drives were inaccessible.
Recently ordered a number of machines with 32 GB of RAM. Wanted 64, but was told prices couldn't be guaranteed, nor could delivery dates. Under the pressure of urgency, settled for whatever was available that day.
Yes the normal term I’ve always heard and used for what they’re talking about is “BOM cost”, i.e. the combined cost of every item on the BOM to make a single unit.
A conspiracy theory I’m entertaining right now is that hogging RAM manufacturing by AI companies is not so much because they _need_ the RAM, but because they want to cripple existing and potential competitors, and that includes on-device models.
One thing that might support this is the fact AI companies are purchasing uncut wafers of DRAM. One use might be to hoard and stockpile them somewhere in a cave, so that no one else gets to them.
Another thing that might support this is that precisely the same strategy had been in use by software companies during the COVID hiring fever. Companies used to hire people for ridiculous pay with little actual work to perform so that among other things, competitors wouldn’t whisk those people away and be at an advantage.
This, of course, ended with massive layoffs once the reckoning came about, and I’m wondering about what is going to happen when (there’s no “if”) the reckoning comes for big AI, too.
This is like 1993, when I bought a 486-DX2 with a mighty(!!! ;-) 4 MB of RAM. MEGAbytes, not GIGAbytes :-D
(Graphics card memory back then was 256 KB, 512 KB, or 1024 KB: amounts we have today as L1 cache on throwaway CPUs.)
There were multiple periods in history when we experienced RAM price shocks.
1. RAM was relatively cheap between 1985 and 1987, hovering around $100-150 for 1MB using 256Kbit chips. Then 1987 anti-dumping laws lined up with fabs upgrading to lower-yielding new 1Mbit chips and things got crazy. In 1988, 256Kbit chips went from $3.5 to $7 in less than a month. Some companies coped better than others. Atari was the first to offer a computer shipping with 1MB below $1000, thanks to Tramiel's little secret of smuggling RAM from Japan and skirting anti-dumping restrictions :) Even Sun Microsystems was caught buying that smuggled RAM from Tramiel.
2. 4MB was $150 in January 1992; the lowest it went was $100 in December 1992, and it was back to $130 in December 1994.
3. The September 1999 Jiji earthquake.
https://en.wikipedia.org/wiki/1999_Jiji_earthquake#Economic_...
https://www.edn.com/panic-buying-sets-dram-prices-on-wild-ri...
https://www.eetimes.com/dram-prices-rise-sharply-following-t...
128MB DIMM prices: May 1997 $300. July 1998 $150. July 1999 $99. September-December 1999 $300. May 2000 $89.
Then overproduction combined with dot-com bust liquidations started flooding the market: Feb 2001 $59; by Aug 2001 a _256MB_ module was $49; Feb 2002, 256MB was $34. Finally, April 2003 hit the absolute bottom with $39 _512MB_ DIMMs.
Sadly, now is not like any of those times. It's as if the Jiji earthquake lasted a couple of years straight.
Isn't there a full-wafer AI chip mainframe for data centers now that blows anything needing RAM out of the water?
I don't understand why the RAM shortage exists if companies have surpassed Nvidia.
I think China is about to step in and take every last bit of non-ai market share, and then when the bubble bursts companies like micron and samsung are going to be begging governments for a bail out.
I think we’re at the peak, or close to it, for these memory shenanigans. OpenAI, which is largely responsible for the shortage, just doesn’t have the capital to pay for it. It’s only a matter of time before the chickens come home to roost and the bill is due. OpenAI is promising hundreds of billions in capex but has nowhere near that cash on hand, and its cash flow is abysmal considering the spend.
Unless there is a true breakthrough, beyond AGI into superintelligence, on existing or near-term hardware, I just don’t see how “trust me bro” can keep its spending party going. Competition is incredibly stiff, and it’s pretty likely we’re at the point of diminishing returns without an absolute breakthrough.
The end result is going to be RAM prices tanking in 18-24 months. The only upside will be for consumers who will likely gain the ability to run much larger open source models locally.
Kind of funny: with the help of AI, I found some historical price sheets and 'designed' a computer, 70s-to-80s style, and I was blown away by how large the RAM cost was. A huge part of the BOM. It totally changed the way I think about computer design in this area, and why some decisions were made.
I guess something needs to be done about the RAM (and to a degree SSD/NAND) production cartel if it can so easily take hostage a major part of society and starve it of components critical to a functioning modern society.
The last spike in RAM price was after an earthquake in Taiwan in April 2024. Now, the shortage will continue until about 2027 when new factories will start shipping.
Expensive PCs/home servers mean more people on mobile crap plus someone else's cloud, which means students who don't get to learn PCs and FLOSS when they have the time, and so on. That's the real point.
I read that Apple will start feeling the heat in the third quarter of this year although nobody knows for sure. That will either shrink their margins a bit or iPhones prices will go up.
Most base level smartphones are loss leaders and wouldn't be severely impacted and upper-tier smartphones tend to be priced at their true value. It's the mid-tier SKUs that get impacted.
Additionally, depending on which country you live in, telecom vendors reduce the upfront cost of the phone purchase and make up the difference via contracts.
The Verge had some good coverage of this, but TL;DR: probably. Flagship phones may not rise in price to fully reflect it; manufacturers might cut costs elsewhere, like keeping the same camera, or eat some of the cost increase in their margin.
Big tech wants all the chips, and they get them. That is a Stalinist level of absurd planning.
People are missing the point. Mega-corporations distort the market. This is not capitalism; this is old aristocratic rule by power. If all these monopolies were divided into smaller chunks and regulated so they could not abuse that power, we would not be here.
This situation is not normal, big tech is currently above the law and above the market economy and if they fail their plan is to make us pay *AGAIN* for their bad decisions. All businesses and individuals are already paying higher prices for big tech folly, we will be left with the bill when the AI boom fails, too.
This is a fairly odd statement, given that BOMs are managed in manufacturing systems, and for accounting and engineering purposes, in multiple different ways. This can be anything from sales data for a client to information for the guys on the factory floor or for the accountants. There are sales BOMs, manufacturing BOMs, procurement BOMs, nested BOMs, etc., all for different parts of the business process. You would have BOMs within the organisation where that share was probably nearly 70%, and others where it was 0%!
I asked ChatGPT directly how it was fair that OpenAI bought 40% of the world’s RAM supply.
It denied this saying that the figures quoted were estimates only, that such massive RAM contracts would be easily obtainable public knowledge and that primarily the recent price increases were mostly cyclical in nature.
Any truth to this?
Edit to add: I am actually curious; I was under the impression that this 40% story going around was true and confirmed, rather than just hyperbole or speculation.
As 'just' a user of MS-DOS in the 1990s, fiddling with QEMM was a bit of a craft to get what you wanted to run in the memory you had.
* https://en.wikipedia.org/wiki/QEMM
(Also, DESQview was awesome.)
I do embedded Linux, and RAM usage is a major concern; same for other embedded applications.
I’m partying like it’s the 90s, on a 32-bit processor and a couple hundred MB of ram.
It's just a cartel cycle: reap the profits now, then eliminate all investment into competitors when a flood of cheap RAM "suddenly" appears.
https://www.goodram.com/en/
You don't see their products in stores too often as they're focused on B2B - particularly the automotive sector.
That being said I have a 128GB memory stick from this manufacturer and I hope they make the most out of this windfall.
[0] - https://www.it-daily.net/shortnews-en/intel-officially-cance...
[1] - https://www.digitimes.com/news/a20260224VL219/taiwan-talent-...
[2] - https://asia.nikkei.com/economy/trade-war/trump-tariffs/soft...
[3] - https://www.digitimes.com/news/a20251230PD220/semiconductor-...
[4] - https://www.ft.com/content/eb677cb3-f86c-42de-b819-277bcb042...
Note that it won't help you if your workload makes use of all your RAM at once.
If you have a bunch of stuff running in the background, it will help a lot.
I get a compression factor of 2 to 3 at all times with zstd. I calculated the benefit to be as if I had 20 GB of extra RAM for what I do.
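A toy illustration of compression ratios using the standard library's zlib (zram itself typically uses lzo, lz4, or zstd, and real memory pages are far less repetitive than this sample, hence the more modest 2-3x above):

```python
import zlib

# Highly repetitive data compresses extremely well; real mixed memory
# pages compress far less, so treat this ratio as an upper bound.
page_like = b"GET /api/v1/items HTTP/1.1\r\nHost: example.test\r\n" * 4096
packed = zlib.compress(page_like, level=6)
ratio = len(page_like) / len(packed)
```

The same arithmetic is how one can estimate "effective extra RAM": compressed-swap capacity scales with the achieved ratio on your actual working set.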
https://www.gnu.org/software/grub/manual/grub/html_node/badr...
1. https://x.com/_asadmemon/status/1989417143398797424
[0] https://octopart.com/part/nanya/NT6AN256T32AV-J2
A) Programmers will get their shit together and start shipping lean software.
OR
B) New laptops will become neutered thin clients, and all the heavy lifting will be done by cloud service providers.
Which one seems more likely?
We can't get any new chips. At all. We can't launch our new product because nobody could afford the memory even if we could get some.
Incredible.
Raise your hand if you have been there too! :-))
estimator7292|4 days ago
I think it ran at 433MHz, and I could overclock it to almost 700.
Those were the days!
rasz|4 days ago
1. RAM was relatively cheap between 1985 and 1987, hovering around $100-150 for 1MB using 256Kbit chips. Then 1987 anti-dumping laws lined up with fabs upgrading to lower-yielding new 1Mbit chips and things got crazy. In 1988, 256Kbit chips went from $3.5 to $7 in less than a month. Some companies coped better than others. Atari was the first to offer a computer shipping with 1MB below $1000, thanks to Tramiel's little secret of smuggling RAM from Japan and skirting anti-dumping restrictions :) Even SUN Microsystems was caught buying that smuggled RAM from Tramiel.
2. 4MB was $150 in January 1992; the lowest it went was $100 in December 1992, and it was back to $130 by December 1994.
3. September 1999 Jiji earthquake.
https://en.wikipedia.org/wiki/1999_Jiji_earthquake#Economic_...
https://www.edn.com/panic-buying-sets-dram-prices-on-wild-ri...
https://www.eetimes.com/dram-prices-rise-sharply-following-t...
128MB DIMM prices: May 1997 $300. July 1998 $150. July 1999 $99. September-December 1999 $300. May 2000 $89.
Then overproduction combined with dot-com bust liquidations started flooding the market: Feb 2001 $59, and by Aug 2001 a _256MB_ module was $49. Feb 2002, 256MB was $34. Finally, April 2003 hit the absolute bottom with $39 _512MB_ DIMMs.
Sadly, now is not like any of those times. It's as if the Jiji earthquake lasted a couple of years straight.
haxtormoogle|5 days ago
Infiniti20|5 days ago
Ray20|5 days ago
Guess what's inside these chips and what equipment they're made on.
tw04|5 days ago
agentifysh|5 days ago
throwaway2037|5 days ago
tonyedgecombe|5 days ago
asimovDev|5 days ago
rubyn00bie|5 days ago
Unless there is a true breakthrough, beyond AGI into superintelligence, on existing or near-term hardware, I just don't see how "trust me bro" can keep its spending party going. Competition is incredibly stiff, and it's pretty likely we're at the point of diminishing returns without an absolute breakthrough.
The end result is going to be RAM prices tanking in 18-24 months. The only upside will be for consumers who will likely gain the ability to run much larger open source models locally.
panick21_|5 days ago
m4rtink|5 days ago
mvanbaak|5 days ago
tamimio|5 days ago
2010s: so much memory, programmers used electron and chrome wrapping everything in js.
2026: so little memory, programmers have to optimize AI code to run properly.
tsoukase|4 days ago
kkfx|5 days ago
throwaway2037|5 days ago
malshe|5 days ago
gib444|5 days ago
alephnerd|5 days ago
Additionally, depending on which country you live in, telecom vendors reduce the upfront cost of the phone purchase and make up the difference via contracts.
holysoles|5 days ago
https://www.theverge.com/tech/880812/ramageddon-ram-shortage...
They discussed it on the decoder podcast as well.
cedws|5 days ago
snvzz|5 days ago
Behold, the RAM cost is being optimized with AI.
IAmGraydon|4 days ago
westurner|5 days ago
re-thc|5 days ago
wraptile|5 days ago
jld|5 days ago
rationalist|5 days ago
https://downloadmoreram.com
Idk if the owner changed or what, but the website used to be more comical.
unknown|5 days ago
[deleted]
shablulman|5 days ago
[deleted]
Frieren|5 days ago
People are missing the point. Mega-corporations distort the market. This is not capitalism; this is old aristocratic rule by power. If all these monopolies were divided into smaller chunks and regulated so they could not abuse that power, we would not be here.
This situation is not normal. Big tech is currently above the law and above the market economy, and if they fail, their plan is to make us pay *AGAIN* for their bad decisions. All businesses and individuals are already paying higher prices for big tech's folly, and we will be left with the bill when the AI boom fails, too.
SolubleSnake|5 days ago
Fr0styMatt88|5 days ago
It denied this, saying that the figures quoted were only estimates, that such massive RAM contracts would be easily obtainable public knowledge, and that the recent price increases were mostly cyclical in nature.
Any truth to this?
Edit to add: I am actually curious; I was under the impression that this 40% story going around was true and confirmed, rather than just hyperbole or speculation.
ozgrakkurt|5 days ago
binaryturtle|5 days ago
sourcegrift|5 days ago