I don't understand the name "Strix". It's a name that Asus, a GPU and motherboard partner of theirs, uses (used?) for their products. It's impossible for me to read "AMD Strix" and not think of it as some Asus GPU with an AMD chip in it, or some motherboard for AMD sockets.
Aren't there enough syllables out there to invent a combination which doesn't collide with your own board partners?
Reminds me of when I was at Amazon: one of our tablets was codenamed Thor. We could ask the device what its codename was, and we special-cased some functionality for the tablet we built. But it was the same code we used for the Android app, and it turned out some other tablet manufacturer had used the codename Thor, and all of a sudden the code was super broken on that device.
I don't think AMD really uses the name "Strix Halo" to market it to a large audience; it's just an internal codename. Two other recent internal names are "Hawk Point" and "Dragon Range", where Hawk and Dragon are names that MSI and PowerColor use to market GPUs as well. Heck, PowerColor even exclusively sells AMD cards under the "Red Dragon" name!
AMD's marketing names, especially for their mobile chips, are so deliberately confusing that it makes way more sense for press and enthusiasts to keep referring to the chip by its internal code name than by whatever letter/number/AI nonsense AMD's marketing department comes up with.
For me the question is: what does this mean for the future of desktop CPUs? High-bandwidth unified memory seems very compelling for many applications, but the GPU doesn't have as much juice as a separate unit. Are we going to see more of these supposedly-laptop APUs finding their way into desktops, and essentially a bifurcation of desktops into APUs and discrete CPU/GPUs? Or will desktop CPUs also migrate to becoming APUs?
iGPUs have been getting ever closer to entry level and even mid-range GPUs.
In addition, there's interest in having a lot of memory for LLM acceleration. I expect both CPUs to gain more LLM acceleration capabilities and desktop PC memory bandwidth to increase from its current, rather slow, dual-channel 64-bit DDR5-6000 status quo.
We're already hearing the first rumors for Medusa Halo coming in 2026 with 50% more bandwidth than Strix Halo.
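For a rough sense of the gap being described, here's a back-of-envelope peak-bandwidth calculation. The bus widths and transfer rates are assumptions taken from publicly reported configurations (dual-channel DDR5-6000 on desktops, 256-bit LPDDR5X-8000 on Strix Halo), not figures from the comments above:

```python
def peak_gb_s(bus_bits: int, mt_s: int) -> float:
    """Theoretical peak bandwidth in GB/s: bytes per transfer x transfers per second."""
    return bus_bits / 8 * mt_s / 1000

# Dual-channel desktop DDR5-6000: 2 x 64-bit = 128-bit effective bus
desktop = peak_gb_s(128, 6000)
# Strix Halo's reported 256-bit LPDDR5X-8000 configuration
strix_halo = peak_gb_s(256, 8000)

print(f"desktop DDR5-6000: {desktop:.0f} GB/s")    # 96 GB/s
print(f"Strix Halo:        {strix_halo:.0f} GB/s")  # 256 GB/s
```

Real-world figures are lower than these theoretical maxima, but the roughly 2.5x gap is what makes the wide-bus APU interesting.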
Strix Halo is impressive, but it isn't AMD going all out on the concept. Strix Halo's die area (~300 mm²) is roughly the same as estimates for Apple's M3 Pro die area. The M3 Max and M3 Ultra are two and four times that size, respectively.
In a next iteration AMD could look into doubling or quadrupling the memory channels and GPU die area, as Apple has done. AMD is already a pioneer in the chiplet technology that Apple also uses to scale up. So there's lots of room to grow, at even higher cost.
APUs are going to replace low-end video cards, because those cards no longer make economic or technical sense.
Historically those cards had a narrow memory bus and about a quarter or less of the video memory of high-end (not even halo) cards from the same generation.
That narrow memory bus puts their max memory bandwidth at a level comparable to desktop DDR5 with 2 DIMMs. At the same time, a quarter of high end is just 4GB of VRAM, which is not enough even for low detail settings in many games and prevents upscaling/frame generation from working.
From a manufacturing standpoint, low-end GPUs aren't great either: memory controllers, video output, and a bunch of other non-compute components don't scale with process node.
At the same time, unified memory and bypassing PCIe benefit iGPUs greatly. You don't have to build an entire card with its own power delivery and cooler - you just slightly beef up the existing ones.
tl;dr: sub-$200 GPUs are dead and will be replaced by APUs. I won't be surprised if APUs start nibbling at the lower mid-range market too in the near future.
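To put numbers on the "comparable bandwidth" claim above, a minimal sketch. The example card spec (a 64-bit bus with 16 Gbps GDDR6, like some recent entry-level parts) is an assumption for illustration, not a figure from the comment:

```python
def peak_gb_s(bus_bits: int, mt_s: int) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return bus_bits / 8 * mt_s / 1000

# Entry-level card: 64-bit bus, 16 Gbps/pin GDDR6 (assumed example)
low_end_card = peak_gb_s(64, 16000)   # 128 GB/s
# Desktop DDR5-6000 with 2 DIMMs (dual channel, 2 x 64-bit)
dual_dimm = peak_gb_s(128, 6000)      # 96 GB/s
```

With the two figures in the same ballpark, an iGPU fed from fast system memory loses little against a narrow-bus discrete card while avoiding the PCIe copy and the extra board entirely.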
Having a system-level cache for low-latency transfer of data between CPU and GPU could be very compelling for some applications, even if the overall GPU power is lower than a dedicated card's. That doesn't seem to be the case here, though?
>> Are we going to see more these supposedly laptop APUs finding their way into desktops, and essentially a bifurcation of desktops into APUs and discrete CPU/GPUs?
I sure hope so. We could use a new board form factor that omits the GPU slot. My case puts the power connector and button over that slot area on the back, so that part isn't completely wasted, but the board area is. This has seemed like a good idea to me for a long time.
This can also be a play against nVidia. When mainstream systems use "good enough" integrated GPUs and get rid of that slot, there is no place for nVidia except in high-end systems.
The bifurcation is already happening. The last few years have seen lots of miniPC/NUC like products being released.
One of (many) factors that were holding back this form factor was the gap in iGPU/GPU performance. However with the frankly total collapse of the low end GPU market in the last 3-4 years, there's a much larger opening for iGPUs.
I also think that within the gaming space specifically, a lot of the chatter around the Steam Deck helped reset expectations. Like if everyone else is having fun playing games at 800p low/medium, then you suddenly don't feel so bad playing at maybe 1080p medium on your desktop.
I don't really like these "lightly edited" machine transcripts. There are transcription errors in many paragraphs, which just adds that little bit of extra friction when reading.
Interesting read, and interesting product. If I understand it right, this seems like it could be at home in a spiritual successor to the Hades Canyon NUCs. I always thought those were neat.
I wish Chips and Cheese would clean up transcripts instead of publishing verbatim. Maybe I'll use the GPU on my Strix Halo to generate readable transcripts of Chips and Cheese interviews.
Although I appreciate the drive for a small profile, I wonder where the limits are if you put a big tower cooler on it; seeing as the broad design direction is laptops or consoles, I doubt there's too much left on the table. I think that highlights a big challenge: is there a sizeable enough market for it, or can you pull in customers from other segments to buy a NUC instead? You'd need a certain amount of mass manufacturing with a highly integrated design to make it worthwhile.
Yeah, would it have killed them to read over it just once? Can they not find a single school kid to do it for lunch money or something? Hell, I'll do it for free - I've read this article twice now, and I read everything they put out the moment it hits my inbox.
I really want LPDDR5X (and future better versions) to become standard on desktops, alongside faster and more-numerous memory controllers to increase overall bandwidth. Why hasn't CAMM gotten anywhere?
I also really want an update to typical form factors and interconnects of desktop computers. They've been roughly frozen for decades. Off the top of my head:
- Move to single-voltage power supplies at 36-57 volts.
- Move to bladed power connectors with fewer pins.
- Get rid of the "expansion card" and switch to twinax ribbon interconnects.
- Standardize on a couple sizes of "expansion socket" instead, putting the high heat-flux components on the bottom side of the board.
- Redesign cases to be effectively a single ginormous heatsink with mounting sockets to accept things which produce heat.
- Kill SATA. It's over.
- Use USB-C connectors for both power and data for internal peripherals like disks. Now there's no difference between internal and external peripherals.
Framework asked AMD if they could use CAMM for their new Framework Desktop.
AMD actually humored the request and did some engineering, with simulations. According to Framework, the memory bandwidth on the simulations was less than half of the soldered version.
That would completely defeat the entire point of the chip - the massive 256-bit bus, ideal for AI and other GPU-heavy tasks, is what allows this chip to offer the features it does.
This is also why Framework has apologized for the non-upgradability but said it can't be helped - so enjoy fair and reasonable RAM prices. Previously it had been speculated that CAMM had a performance penalty, but hearing Framework's engineer say on video that it was that bad was fairly shocking.
> - Move to single-voltage power supplies at 36-57 volts.
Why? And why not 12V? Please be specific in your answers.
> - Get rid of the "expansion card" and switch to twinax ribbon interconnects.
If you want that, it's available right now. Look for a product known as "PCI Express Riser Cable". Given that the "row of slots to slot in stiff cards" makes for nicely-standardized cases and card installation procedures that are fairly easy to understand, I'm sceptical that ditching slots and moving to riser cables for everything would be a benefit.
> - Kill SATA. It's over.
I disagree, but whatever. If you just want to reduce the number of ports on the board, mandate Mini SAS HD ports that are wired into a U.2 controller that can break each port out into four (or more) SATA connectors. This will give folks who want it very fast storage, but also allow the option to attach SATA storage.
> - Use USB-C connectors for both power and data for internal peripherals like disks.
God no. USB-C connectors are fragile as all hell and easy to mishandle. I hate those stupid little almost-a-wafer blades.
> - Standardize on a couple sizes of "expansion socket" instead...
What do you mean? I'm having trouble envisioning how any "expansion socket" would work well with today's highly-variably-sized expansion cards. (I'm thinking especially of graphics accelerator cards of today and the recent past, which come in a very large array of sizes.)
> - Redesign cases to be effectively a single ginormous heatsink with mounting sockets...
See my questions to the previous quote above. I currently don't see how this would work.
There's a rumor that future desktops will use LPDDR6 (with CAMMs, presumably) instead of DDR6. Of course CAMMs will be slower, so they might "only" run at ~8000 MT/s while soldered LPDDR6 runs at >10000.
Fascinating how Strix Halo feels like AMD's spiritual successor to their ATI merger dreams - finally delivering desktop-class graphics and CPU power in a genuinely portable form factor. Can't wait to see where it pushes laptop capabilities.
I think having a (small desktop) system with Strix Halo plus a GPU to accelerate prompt processing could be a good combo, avoiding the weakness of the Mac Ultra. The Strix Halo has 16 PCIe lanes.
People keep saying "to compete with Apple", which of course is nonsense. Apple isn't even in second or third place in laptop market share, last I checked.
So why build powerful laptops? Simple: people want powerful laptops. Remoting to a desktop isn't really a slam dunk experience, so having sufficient local firepower to do real work is a selling point. I do work on both a desktop and a laptop and it's nice being able to bring a portable workstation wherever I might need it, or just around the house.
"Mobile CPU" has recently come to mean more than laptops. The Steam Deck validated the market for handheld gaming computers, and other OEMs have joined the fray. Even Microsoft intends to release an XBox-branded portable. I think there's an market opportunity for better-than-800p handheld gaming, and Strix Halo is perfectly positioned for it - I wouldn't bet against the handheld XBox running in this very processor.
128GB is actually a step down. The previous generation (of sorts), Strix Point, had a maximum memory capacity of 256GB.
The mini-PC market (which basically all uses laptop chips) seems pretty robust (especially in Asia/China). They've basically torn out the bottom of the traditional small form factor market.
Seems like Apple's M2 is a sweet spot for AI performance at 800 GB/s of memory bandwidth, and one can be had for under $1,500 refurbished with 64 gigs of RAM.
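For decode (token generation), dense LLM inference is roughly memory-bandwidth bound: each generated token requires streaming the full weight set. A back-of-envelope ceiling, where the model size and quantization are illustrative assumptions rather than measured figures:

```python
def tokens_per_s_ceiling(bandwidth_gb_s: float, weights_gb: float) -> float:
    """Upper bound on decode speed for a bandwidth-bound dense model."""
    return bandwidth_gb_s / weights_gb

model_gb = 40  # e.g. a ~70B-parameter model at ~4-bit quantization (assumed)

print(f"800 GB/s class machine: ~{tokens_per_s_ceiling(800, model_gb):.0f} tok/s ceiling")
print(f"256 GB/s (Strix Halo):  ~{tokens_per_s_ceiling(256, model_gb):.0f} tok/s ceiling")
```

Actual throughput lands below these ceilings (compute, KV-cache reads, and software overhead all eat into it), but the ratio explains why bandwidth is the headline number for local LLM boxes.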
AMD Ryzen AI MAX 300 is the product name. This thread is just continuing to use the code name.
AMD is captured.
Pay for memory once, and avoid all the copying around between CPU/GPU/NPU for mixed algorithms, and have the workload define the memory distribution.
Too bad there isn't a full PCIe slot (there might not be enough bandwidth left) :(.
I wouldn't be surprised if Minisforum also offers a motherboard.
https://frame.work/products/desktop-diy-amd-aimax300
This solves those problems but apparently uncovers a new one.
Framework (I believe) made one of these into a purchasable desktop.
So it's probably closer to RTX 4050m than an RTX 4070m.
Also, the Strix Halo laptop is nearly 3x more expensive than the RTX 4060 laptop. So expect to pay more instead of less.
https://www.youtube.com/watch?v=RycbWuyQHLY