roadbuster | 2 months ago
And? Why should they be obligated to pay for all the middleman steps from fab down to module? That includes wafer-level test, packaging, post-packaging test, module fabrication, and module-level test (DC, AC, parametric). There's nothing illegal or sketchy about saying, "give me the wafers, I'll take care of everything else myself."
> not even allocated to a specific DRAM standard yet
DRAM manufacturers design and fabricate chips to sell into a standardized, commodity market. There's no secret evolutionary step, occurring after the wafers are etched, that turns chips into something adhering to DDR4, 5, 6, 7, 8, 9.
> It’s not even clear if they have decided yet on how or when they will finish them into RAM sticks or HBM
Who cares?
hamandcheese | 2 months ago
Do you think that's fine, or do you think that implication is wrong and OpenAI does actually plan to deploy 40% of the world's DRAM supply?
roadbuster | 2 months ago
You have no evidence of that. Even taken at face value, the idea of "cornering the market" on a depreciating asset with no long-term value isn't a war strategy, it's flushing money down the toilet. Moreover, there's a credible argument that OpenAI wanted to secure capacity in an essential part of its upstream supply chain to ensure stable prices for itself. That's not "cornering the market" either; it's securing stability for its own growth.
Apple used to buy up almost all of TSMC's leading-edge semiconductor process capacity. It wasn't to resell capacity to everyone else; it was to secure capacity for themselves (particularly for new product launches). Nvidia has been doing the same since the CUDA bubble took off (they have, in effect, two entire fabs' worth of leading-edge production just for their GPUs/accelerators). Have they been "cornering" the deep sub-micron foundry market?
noosphr | 2 months ago
If what we've heard is true, that they've had no acceptable pre-training runs in the last two years, then trying to increase the memory for training by two orders of magnitude is just a rehash of what got them from GPT-2 to GPT-3.