Cool that it's possible, but the performance characteristics are basically unusable. For an 8192-token prompt they report ~1.5 minutes time-to-first-token and then 8.30 tk/s from there. For context, ChatGPT is typically <<1s TTFT and ~50 tk/s.
Given that the APU only has 4 memory channels, isn't this setup comically starved for bandwidth? By the same token, wouldn't you expect performance to scale approximately linearly as you add additional boxes? And wouldn't you be better off with smaller nodes (i.e. less RAM and CPU power per box)?
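A hedged back-of-envelope on the bandwidth point. The interface figures assume Strix Halo's 256-bit LPDDR5X-8000, and the model size and quantization are illustrative assumptions, not from TFA — decode speed on one box is roughly capped by memory bandwidth divided by bytes read per token:

```python
# Memory-bandwidth ceiling for single-box decode.
# All figures are illustrative assumptions, not taken from TFA.
bw_bytes_per_s = 256e9   # 256-bit LPDDR5X-8000 ~= 256 GB/s theoretical
active_params = 30e9     # parameters touched per generated token (assumed)
bytes_per_param = 0.5    # ~4-bit quantized weights

max_tok_per_s = bw_bytes_per_s / (active_params * bytes_per_param)
print(round(max_tok_per_s, 1))  # ceiling in the low tens of tokens/s
```

Under those assumptions the single-box ceiling is ~17 tk/s, so the reported 8.30 tk/s is within a factor of two of what the memory system could deliver at all.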
If I'm right about that, then if you're willing to go in for somewhere in the vicinity of $30k (24x the Max 385 model), you should be able to achieve ChatGPT performance.
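The arithmetic behind the $30k figure can be sketched as follows. It assumes TFA's cluster is 4 boxes, that throughput scales perfectly linearly with box count, and a ~$1250 per-box price — all assumptions for illustration:

```python
# Back-of-envelope for "$30k gets ChatGPT-class decode speed".
# Assumes a 4-box baseline cluster, perfectly linear scaling, and
# ~$1250 per Max 385 box (all assumptions, not from TFA).
cluster_tps = 8.3    # tokens/s reported for the baseline cluster
target_tps = 50.0    # rough ChatGPT-like decode speed
boxes_now = 4

scale = target_tps / cluster_tps          # ~6x more throughput needed
boxes_needed = round(boxes_now * scale)   # 24 boxes
est_cost = boxes_needed * 1250            # $30,000
print(boxes_needed, est_cost)             # → 24 30000
```

Note this only scales decode throughput; prefill (time-to-first-token) may not improve linearly the same way.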
I've never understood the obsession with token/s. I'm fine with asking a question and then going on to another task (which might be making coffee).
Even with a cloud-based LLM where the response is pretty snappy, I still find that I wander off and return when I am ready to digest the entire response.
It's sad that NDA fetishist Broadcom has a de-facto monopoly on PCIe fabric switches; otherwise we would have had functional open-source drivers for at least the simpler topologies for a while now. You could set up cheap FNN topologies by using the (usually NVMe-targeted) bifurcation support on hosts to get several x4 ports, with only a comparatively cheap retimer out to "mini SAS HD" (the square-shaped 4-lane connectors) or QSFP+ ports. Generic DAC cables for those standards then give a few meters of reach; even Skylake-era SAS ones (nominally 12 GT/s, vs. 16 GT/s for PCIe 4.0) should typically manage PCIe 4. That's just under 64 Gbit/s from each link, with typical desktop/gaming systems delivering 3~5 such links without complaint next to a dGPU (that one at fewer than full lanes).
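The "just under 64 Gbit/s" figure follows from the PCIe 4.0 line rate and its 128b/130b encoding overhead:

```python
# Usable bandwidth of one bifurcated x4 PCIe 4.0 link,
# as referenced in the comment above.
lanes = 4
gt_per_s = 16.0        # PCIe 4.0 line rate per lane
encoding = 128 / 130   # 128b/130b coding: ~1.5% overhead

gbit_per_s = lanes * gt_per_s * encoding
print(round(gbit_per_s, 1))  # → 63.0, i.e. "just under 64 Gbit/s"
```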
> Though only 5gig Ethernet? Can’t they do usb-c / thunderbolt 40 Gb/s connections like Macs?
Does the network speed matter that much when TFA talks about outputting a few tens of tokens per second? Ain't 5 Gbit/s plenty for that? (I understand the need to load the model but that'd be local already right?)
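A rough sketch of why 5 GbE can indeed be plenty during decode. With pipeline parallelism, only a hidden-state vector crosses each hop per generated token; the hidden size and dtype below are illustrative assumptions, not from TFA:

```python
# Per-hop network traffic during pipelined decode.
# Hidden size, dtype, and token rate are illustrative assumptions.
hidden = 8192        # model hidden dimension (assumed)
bytes_per_act = 2    # fp16/bf16 activations
tok_per_s = 50       # generous decode rate

traffic = hidden * bytes_per_act * tok_per_s   # bytes/s per hop
link = 5e9 / 8                                 # 5 GbE ~= 625 MB/s
print(traffic / link)                          # well under 1% utilization
```

Prefill is chattier, and tensor parallelism (all-reduce per layer) is a different story, but for pipelined token generation the link sits nearly idle.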
I've been pretty happy with my Framework Desktop, though I managed to snag it before RAM prices shot through the roof. Currently, a tricked out model is around $2500.
Mine sees more use as a Steam machine, but it can run decently large models. Ollama was trivial to get working, and qwen3-coder-next spits out paragraphs of text/code in seconds. I don't really do anything with that, but it's fun to mess around with. (LLMs are still pretty bad at assembly language.)
You can buy a 128GB mainboard from framework for $2300, so maybe somewhere a bit over $9k by the time you've got power, storage, cables, racks (they sell those too). I was thinking about getting into one of these Strix Halo setups but decided to go a slightly different route with a lot higher TDP, better throughput, and a bit less VRAM.
> Minisforum likes to turn itself off every couple of weeks, not sure why yet
AFAICT, the answer is "because Minisforum". I don't know if they have a design principle that they should run their systems near the edge of the thermal envelope or what, but Minisforum is the only brand I've had consistent trouble with stability on. My last one got to where it stopped booting altogether, just looped. Since then I've written off Minisforum as a brand, just not worth the hassle.
Framework has gone fully down the Apple consumerization route of unrepairability and unupgradeability: a nonstandard machine, soldered-on RAM, and no meaningful PCIe slots. There's only the superficial appearance of longevity and future-proofing when it's really yet another silo. There's no way to add IB, FC, or 100/400 GbE NICs to these machines. 5 GbE is a joke. Non-ECC RAM is a joke.
elcritch|1 day ago
Though only 5gig Ethernet? Can’t they do usb-c / thunderbolt 40 Gb/s connections like Macs?
tills13|1 day ago
How much is one of these gonna run me?
zeta0134|1 day ago
https://frame.work/desktop
jcgrillo|1 day ago
https://frame.work/products/framework-desktop-mainboard-amd-...
verdverm|1 day ago
This is a good list, I like my Beelink a lot, my Minisforum likes to turn itself off every couple of weeks, not sure why yet.
https://www.techradar.com/pro/there-are-15-amd-ryzen-ai-max-...
---
Performance is pretty bad (<10 tk/s) and context is quite limited. Still, good to see progress.
Prompt Size (tokens) | TTFT (s), Flash Attention Disabled | TTFT (s), Flash Attention Enabled
4096 | 53.7 | 39.7
8192 | Out of Memory (OOM) | 90.5
16384 | Out of Memory (OOM) | 239.1
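One thing worth noting in the table: TTFT grows faster than linearly with prompt length, consistent with attention prefill cost scaling superlinearly. A quick check of the ratios:

```python
# TTFT scaling from the flash-attention-enabled column above:
# each 2x in prompt length costs more than 2x in prefill time.
ttft = {4096: 39.7, 8192: 90.5, 16384: 239.1}

r1 = ttft[8192] / ttft[4096]    # ~2.3x for 2x the prompt
r2 = ttft[16384] / ttft[8192]   # ~2.6x for the next doubling
print(round(r1, 2), round(r2, 2))
```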