I have no idea of the likely price, but (IMO) this is the sort of disruption that Intel needs to aim at if it's going to make some sort of dent in this market. If they could release this for around the price of a 5090, it would be very interesting.
> If they could release this for around the price of a 5090
This is not targeted at consumers. It's competing with nVidia's high-RAM workstation cards. Think $10K price range, not $1-2K.
The 160GB of LPDDR5X chips alone is expensive enough that they couldn't release this at the $2K price point unless they felt like giving it away (which they don't).
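A quick back-of-envelope supports that, with assumed DRAM pricing (the $/GB figures below are placeholders, not quotes):

    # Memory BOM alone, under assumed LPDDR5X prices per GB.
    # Contract pricing varies widely; treat these as illustrative.
    capacity_gb = 160
    for usd_per_gb in (4, 6, 8):
        print(f"at ${usd_per_gb}/GB: ${capacity_gb * usd_per_gb:,} for DRAM alone")

Even at the low end, that's a big slice of a hypothetical $2K card's bill of materials before the GPU die, board, cooling, or any margin.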
Xe3P, as far as I remember, is built in Intel's own fabs, as opposed to Xe3 at TSMC. That could give them a huge advantage: they'd possibly be the only competitor not fighting for the same TSMC wafers.
Funny they still call them graphics cards when they're really... I don't know, matmul cards? Tensor cards? TPUs?
Well, that sums it up, maybe: what those really are are CUDA cards.
Dude, this is asinine. Graphics cards have been doing matrix and vector operations since they were invented. No one had a problem with calling matrix multipliers graphics cards until it became cool to hate AI.
Graphics cards haven't ever done graphics. Graphics is a screen thing. Nobody looks at their graphics card to see little pictures. So they are still misnamed, but they've always been misnamed. They do BLAS.
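The "they do BLAS" point in miniature, with numpy standing in for the GPU (purely illustrative):

    # Transforming vertices is a matrix multiply: the same GEMM
    # primitive that ML workloads hammer, just with a 4x4 matrix.
    import numpy as np

    c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
    M = np.array([[c, -s, 0, 2],   # rotate 90 degrees about Z,
                  [s,  c, 0, 0],   # then translate +2 along x
                  [0,  0, 1, 0],
                  [0,  0, 0, 1]])

    verts = np.array([[1, 0, 0, 1],   # homogeneous vertex positions,
                      [0, 1, 0, 1],   # one per row,
                      [0, 0, 1, 1]]).T  # transposed to shape (4, n)

    print(M @ verts)  # one matmul moves every vertex at once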
Any business people here that can explain why companies announce products a year before their release? I can understand getting consumers excited, but it also tells competitors what you are doing, giving them time to make changes of their own. What's the advantage here?
In this case there is no risk of anyone stealing Intel's ideas or even reacting to them.
First, they're not even an also-ran in the AI compute space. Nobody is looking to them for roadmap ideas. Intel does not have any credibility, and no customer is going to be going to Nvidia and demanding that they match Intel.
Second, what exactly would the competitors react to? The only concrete technical detail is that the cards will hopefully launch in 2027 and have 160GB of memory.
The cost of doing this is really low, and the value of potentially getting into the pipeline of people looking to buy data center GPUs in 2027 soon enough to matter is high.
If customers know your product exists before they can buy it, they may wait for it. If they don't learn your product will exist until the day they can buy it, they'll buy the competitor's product today and you lose the sale.
Samples of new products also have to go out to third-party developers and reviewers ahead of time so that third-party support is ready for launch day, and that stuff is going to leak to competitors anyway, so there's little point in not making it public.
If you're Intel-sized, it's gonna leak. If you announce it first, you get to control the message.
The other thing is enterprise sales is ridiculously slow. If Intel wants corporate customers to buy these things, they've got to announce them ~a year ahead, in order for those customers to buy them next year when they upgrade hardware.
It can also prevent competitors from entering a particular space. I was told as an undergraduate that UNIX was irrelevant because the upcoming Windows NT would be POSIX-compliant. It took a _very_ long time before that happened (and for a very flexible definition of "compliant"), but the pointy-headed bosses thought that buying Microsoft was the future. And at first glance the upcoming NT _looked_ as if its TCO would be much lower than AIX, HP-UX, or Solaris.
Then of course Linux took over everywhere except the desktop.
Any discussion of an Intel entry into discrete graphics cards needs to at least _mention_ Intel's repeated history of abandoning discrete graphics cards.
The GPU market is not what it used to be; it's not some checkbox an executive needs to tick to say "we are doing something".
The chips are so valuable now that NVIDIA will end up owning a chunk of every major tech company; everyone is throwing cash and shares at them as fast as they can.
Yeah, Intel's problem is that this is (at least) the third time they've announced a new ML accelerator platform, and the first two got shitcanned. At this point I wouldn't even glance at an Intel product in this space until it had been on the market for at least five years and several iterations, to be somewhat sure it isn't going to be killed, and Intel's current leadership inspires no confidence that they'll wait that long for success.
I’m personally just thinking about how they treated their embedded Keem Bay line. Totally shitcanned without warning. I doubt they consider this a core market to the degree that they will endure bad sales numbers for a while.
It'll either be "cheap" like the DGX Spark (with crap memory bandwidth) or overpriced, with the bus width of an M4 Max and Intel's 50%-margin rhetoric.
What price is this sitting at? Because if its software support is decent, then Intel might have just managed to break into hardware for AI on the edge. Examples: self-hosted LLM finetuning and RAG on an old Dell or HP server with these types of cards in it.
> Examples: self-hosted LLM finetuning and RAG on an old Dell or HP server with these types of cards in it.
This won’t be in the price range of an old Dell server or a fun impulse buy for a hobbyist. 160GB of raw LPDDR5X chips alone is not cheap.
This is a server/workstation-grade card and the price is going wherever the market will allow. Consider that an nVidia card with a bit over half the RAM is going to cost $8K or more. That price point is probably the floor for where this lands, too.
Gelsinger had a long-term, realistic plan. He was out around 11 months ago. You can't magic up a new GPU in that timeframe: those projects have 3+ year pipelines for CPUs, and I assume GPUs are a bit shorter, but not by much.
Whatever is happening with new products today must've been started before he left.
How does LPDDR5X (this Xe3P card) compare with GDDR7 (Nvidia's flagships) when it comes to inference performance?
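For single-user token generation the answer is mostly just memory bandwidth. A minimal sketch, assuming ~273 GB/s for this card and ~1792 GB/s for a 5090-class GDDR7 part (both ballpark assumptions, not official specs):

    # Upper bound on single-stream decode speed for a memory-bound LLM:
    # each generated token streams all active weights once, so
    # tokens/s <= bandwidth / model size.
    def max_decode_tok_s(bandwidth_gb_s, model_size_gb):
        return bandwidth_gb_s / model_size_gb

    model_gb = 70  # e.g. a 70B-parameter model at ~1 byte/weight (assumed)
    for name, bw in [("LPDDR5X, ~273 GB/s", 273),
                     ("GDDR7, ~1792 GB/s", 1792)]:
        print(f"{name}: <= {max_decode_tok_s(bw, model_gb):.1f} tok/s")

Prefill and batched serving are compute-bound rather than bandwidth-bound, so the gap there tracks FLOPS instead.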
Local inference is an interesting proposition because today, in real life, the Nvidia H100 and AMD MI300 clusters operated by OpenAI and Anthropic run in batching mode, which slows users down: queries wait until enough similarly sized queries have arrived. Local inference requires no such waiting, so per-request latency can potentially be lower.
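A toy model of that queueing cost, with made-up numbers (real schedulers use continuous batching, so this overstates the wait):

    # If a server dispatches once B requests have accumulated and requests
    # arrive at rate lam per second, the average request queues for about
    # (B - 1) / (2 * lam) before compute even starts.
    def avg_batch_wait_s(batch_size, arrivals_per_s):
        return (batch_size - 1) / (2 * arrivals_per_s)

    print(f"{avg_batch_wait_s(32, 100):.3f} s")  # ~0.155 s of pure queueing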
Honestly, Intel just has to build a GPU with an insane amount of VRAM. It doesn't even have to be the fastest to compete... just a ton of VRAM, dirt cheap.
It's gonna be slowwww. It's gonna be what, 273 GB/s of memory bandwidth at most? Might as well buy an AMD Ryzen AI Max+ 395 with 128GB right now for the same inference performance and slightly less VRAM.
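For reference, 273 GB/s is what falls out of an assumed 256-bit bus running LPDDR5X-8533 (neither number is a confirmed spec):

    # Peak bandwidth = transfer rate x bus width.
    transfers_per_s = 8533e6   # LPDDR5X-8533, i.e. 8533 MT/s (assumed)
    bus_bits = 256             # assumed bus width
    bandwidth_gb_s = transfers_per_s * (bus_bits / 8) / 1e9
    print(f"{bandwidth_gb_s:.0f} GB/s")  # ~273 GB/s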
They made a dent in the HPC market / Top500 with Intel Max.
It will be interesting to see if they can make a dent in the AI inference market (presumably datacenter/enterprise).
That don't run CUDA?
Semiconductors are like container ships: extremely slow and hard to steer. You plan today the products you'll release in 2030.
Intel has practically nothing to show for an AI capex boom for the ages. I suspect Intel is talking about this early just to claim a shred of AI relevance.
If you're planning a supercomputer to be built in 2027, you want to look at what's on the roadmap.
Stock number go up
https://www.linkedin.com/posts/storagereview_storagereview-a...
Makes me wonder whether Gelsinger put all this in motion, or if the new CEO lit a fire under everyone. Kind of a shame if it's the former...
To me, the price point is what matters. It's going to be slow with LPDDR5X; the 5090 today is much faster. But sure, big RAM.
An RTX Pro 6000 with 96GB of RAM will be much faster.
So I'm thinking the price point lands below the 6000 and above the 5090.