top | item 47104667

Show HN: Llama 3.1 70B on a single RTX 3090 via NVMe-to-GPU bypassing the CPU

395 points | xaskasdf | 8 days ago | github.com

Hi everyone, I'm kinda involved in retrogaming, and during some experiments I ran into the following question: "Would it be possible to run transformer models bypassing the CPU/RAM, connecting the GPU directly to the NVMe?"

This is the result of that question and some weekend vibecoding (the linked library repository is in the readme as well). It seems to work, even on consumer GPUs, though it should work better on professional ones.
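Not the author's code, but the core idea can be sketched on the CPU side with a memory-mapped file: only one layer's weights are resident at a time, and each slice is pulled from disk on demand. The real project DMAs NVMe to GPU via GPUDirect Storage; the file name, shapes, and the tanh "layer" below are all invented for illustration.

```python
# Toy sketch of the streaming idea: keep only one transformer layer's
# weights in memory at a time, mapping each slice straight from disk.
# CPU/NumPy analogy only; the real path is NVMe -> GPU DMA.
import numpy as np

N_LAYERS, DIM = 4, 8
rng = np.random.default_rng(0)

# Write fake per-layer weight matrices to one flat file standing in for the NVMe.
weights = rng.standard_normal((N_LAYERS, DIM, DIM)).astype(np.float32)
weights.tofile("fake_model.bin")

# Memory-map the file: slicing faults pages in on demand instead of
# loading the whole model up front.
mapped = np.memmap("fake_model.bin", dtype=np.float32,
                   shape=(N_LAYERS, DIM, DIM), mode="r")

x = rng.standard_normal(DIM).astype(np.float32)
for layer in range(N_LAYERS):
    w = np.asarray(mapped[layer])   # stream this layer's weights in
    x = np.tanh(w @ x)              # stand-in for the layer's forward pass
    del w                           # drop it before touching the next layer

print(x.shape)
```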

101 comments


01100011|8 days ago

Yeah, GPUDirect should allow you to DMA straight to a storage device.

I wonder... what if the m.2 storage was actually DRAM? You probably don't need persistence for spilling a model off the GPU. How would it fare vs just adding more host memory? The m.2 ram would be less flexible, but would keep the system ram free for the CPU.

javchz|8 days ago

Yeah, a ramdisk would probably work wonders. It's a shame Intel Optane didn't become a standard; those types of workflows would be amazing for it.

lmeyerov|7 days ago

This is exactly what I was wondering

I gave a talk a few years ago at dask summit (conf?) on making the stars align with dask-cudf here. We were helping a customer accelerate log analytics by proving out our stack for nodes that look roughly like: parallel ssd storage arrays (30 x 3 GB/s?) -> GPUDirect Storage -> 4 x 30 GB/s PCIe (?) -> 8 x A100 GPUs, something like that. It'd be cool to see the same thing now in the LLM world, such as a multi-GPU MoE, or even a single-GPU one for that matter!

ElectricalUnion|8 days ago

Isn't "m.2 storage but DRAM" (hopefully meaning NVMe/PCIe rather than SATA speeds) already a thing as Compute Express Link (CXL), just not in this specific m.2 form factor? If only RAM weren't silly expensive right now, one could use 31 GB/s of additional bandwidth per NVMe connector.

randomtoast|8 days ago

0.2 tok/s is fine for experimentation, but it is not interactive in any meaningful sense. For many use cases, a well-quantized 8B or 13B that stays resident will simply deliver a better latency-quality tradeoff.
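For a sense of scale, here's the latency math behind that point (the 300-token reply length and the 30 tok/s figure for a resident small model are assumed round numbers, not from the post):

```python
# Rough interactivity math: time to generate a 300-token reply
# at different decode speeds.
for label, tok_per_s in [("70B streamed from NVMe", 0.2),
                         ("8B resident in VRAM", 30.0)]:
    seconds = 300 / tok_per_s
    print(f"{label}: {seconds:.0f} s (~{seconds / 60:.1f} min)")
```

At 0.2 tok/s a modest reply takes 25 minutes; at 30 tok/s it takes 10 seconds.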

xaskasdf|8 days ago

Yeah, actually I wanted to see if this was possible at all. I managed to get around 3000 tokens/s on a PS2 with classic transformers, since the Emotion Engine is capable of 32-bit addresses, but it only has like 32MB of RAM. So I ran into the question of why that was so fast when I couldn't get that speed even with small models, and the deal is that the data went straight from memory to the GPU. That's the main difference with how a regular computer does inference: it has to go through the CPU every time. As I mentioned too, on professional cards you can avoid these problems naturally, since they have instructions precisely for this, but sadly I don't have 30k bucks to spare on a GPU :(

Wuzado|8 days ago

I can imagine a couple scenarios in which a high-quality, large model would be much preferred over lower latency models, primarily when you need the quality.

tyfon|8 days ago

I didn't really understand the performance table until I saw the top ones were 8B models.

But 5 seconds/token is quite slow, yeah. I guess this is for low-RAM machines? I'm pretty sure my 5950X with 128 GB RAM can run this faster on the CPU, with some layers / prefill on the 3060 GPU I have.

I also see that they claim the process is compute bound at 2 seconds/token, but that doesn't seem correct with a 3090?

fluoridation|8 days ago

That's slower than just running it off CPU+GPU. I can easily hit 1.5 tokens/s on a 7950X+3090 and a 20480-token context.

rl3|8 days ago

Nice. I've been looking at doing something similar, more on the order of running a 1T model with less than half the available VRAM.

One workup indicated it was theoretically possible to modify a piece of SGLang's routing layer to support JIT predict-ahead expert swaps from Gen5 NVMe storage straight into GPU memory.

I'm hoping that proves true. The setup relies on NVIDIA Dynamo, so NIXL primitives are available to support that.

Curious if anyone's tried this already.

xaskasdf|8 days ago

That would be nice to see. Actually, I was thinking about getting another 3090 and a mobo upgrade, since I'm bottlenecked by PCIe 3, to try to run GLM 4.7 or 5 at q4_k_m; it should be possible.

jacquesm|8 days ago

This is an interesting area for experiments. I suspect that in the longer term, model optimization (knowing which bits you can leave out without affecting the functioning of the model) will become the dominant area of research, just like it did with compression algorithms, because effectively a model is a lossy compression scheme.

And that's good because that increases democratization of AI away from the silos that are being created.

serendip-ml|7 days ago

The compression analogy is interesting. Another way of looking at it could be fine-tuning as "knowing what to leave out" - a 3B model for example tuned for a narrow task doesn't need the capacity that makes 70B good at many things.

civicsquid|8 days ago

Really cool. I'm wondering: what background did you need to be able to think of the question that resulted in this project?

I know you said you're involved in some retrogaming and were experimenting, but as someone who works in a world where hardware is pretty heavily abstracted away, even if I got into retrogaming I don't know that I'd consider that there may be a systems improvement lying around. Beyond the creative aspect, it feels like there is some systems and hardware background that helped put the idea together (and I'd be interested to go learn some of that systems/hardware knowledge myself).

xaskasdf|7 days ago

This was the experiment itself https://github.com/xaskasdf/ps2-llm

The idea was basically to run an LLM on a PS2. Then I ran into some problems, such as the 32MB RAM cap and the 4MB VRAM cap, so I had to figure out a way to stream layers during the forward pass. Given that the PS2 manages to feed the VRAM directly with 32-bit addresses, it gave an insane amount of tok/s, and then I wondered if I could do the same on my puter.

rustyhancock|7 days ago

I wonder too; DMA played a huge role in most older gaming consoles, when the CPUs were far more sluggish.

Perhaps that's what made them think to try.

Perhaps also the current batch of smart memory cards, which on the PS2 I believe have quite complex DMA capabilities to stream game data from the SD card.

Wuzado|8 days ago

I wonder: could this be used for a multi-tier MoE? E.g. active + most-used experts in VRAM, often-used in RAM, and less-used on NVMe?

rao-v|8 days ago

Yeah, I've often wondered why folks aren't training two-tier MoEs for VRAM + RAM. We already have designs for shared experts, so it can't be that hard to implement a router that allocates 10x or 100x as often to "core" experts vs the "nice to have" experts. I suppose balancing during training is tricky, but some sort of custom loss on the router layers should work.

I've also wondered why the routers aren't trained to be serially consistent, so you can predict layers to swap into VRAM a few layers ahead to maximize available bandwidth.
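A toy version of that biased-router idea, with made-up expert counts and bias value (nothing here is from an actual MoE implementation): adding a fixed logit bias to a small "core" set makes it absorb most tokens while the "cold" experts are hit rarely enough to live in a slower tier.

```python
# Toy two-tier routing: bias the router's logits so a small set of
# "core" experts (kept in VRAM) is picked far more often than the
# "cold" experts (spilled to RAM/NVMe). All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N_EXPERTS, N_CORE, CORE_BIAS = 32, 4, 3.0  # bias applied in logit space

def route(token_logits):
    """Pick the top-1 expert after adding a fixed bias to core experts."""
    biased = token_logits.copy()
    biased[:N_CORE] += CORE_BIAS
    return int(np.argmax(biased))

picks = [route(rng.standard_normal(N_EXPERTS)) for _ in range(10_000)]
core_share = sum(p < N_CORE for p in picks) / len(picks)
print(f"core experts take {core_share:.0%} of tokens")
```

Without the bias, 4 of 32 experts would take ~12% of tokens; with it they take the large majority, which is the kind of skew that would let the cold tier sit on slower storage.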

davideom0414|7 days ago

Really interesting experiment, I should have done this before. Do you have numbers on effective throughput vs. PCIe theoretical bandwidth? I'm curious whether this is primarily latency-bound or bandwidth-bound in practice. Can someone tell me?

xaskasdf|7 days ago

Actually it's purely bandwidth-bound. The major bottleneck of the whole process, for me in this case, is the B450 mobo I've got: it's only capable of PCIe 3, and only x8 on the GPU's PCIe lanes instead of x16, so I'm capped until I get an X570 maybe. I should get around double or triple the token speed with that upgrade alone.
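A rough sanity check of the bandwidth-bound claim, assuming every decoded token has to stream the full quantized weight set over the link (the ~4 bits/param quantization and the practical link speeds below are rough assumptions, not measurements from the post):

```python
# Back-of-envelope: if decode must pull all quantized weights per token,
# speed is just bytes / link bandwidth.
weights_gb = 70e9 * 0.5 / 1e9   # 70B params at ~4 bits/param -> ~35 GB
links = {                        # rough practical throughput, GB/s
    "PCIe3 x8": 7.9,
    "PCIe3 x16": 15.8,
    "PCIe4 x16": 31.5,
}
for name, gbps in links.items():
    s_per_tok = weights_gb / gbps
    print(f"{name}: {s_per_tok:.1f} s/token ({1 / s_per_tok:.2f} tok/s)")
```

On PCIe 3 x8 this lands around 4-5 s/token, which matches the numbers people quote in the thread, and x16 or PCIe 4 roughly doubles or quadruples it.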

jauntywundrkind|8 days ago

Could be neat to see what happens giving the 8B like 6GB of VRAM instead of 10GB. Something in between, where you still need NVMe, but not like the 3x ratio of the 70B model on 23GB.

Nice work. PCI-P2P (GPU-Direct (tm)) is such great stuff. Cool to see!

7777777phil|7 days ago

Cool hack, but 0.5 tok/s on a 70B when a 7B does 30+ on the same card. NVIDIA's own research says 40-70% of agentic tasks could run on sub-10B models, and the quality gap has closed fast.

Aurornis|7 days ago

Cool project. Can you provide more details about your DKMS patching process for consumer GPUs? This would be fun to try out, but I’d need some more details on that patch process first.

xaskasdf|7 days ago

I updated the documentation to provide more info on the patching process; I added the patches themselves too, and provided some risk info about them.

spwa4|7 days ago

I've often wondered about doing this with extreme compression. What if you did extreme compression + decompression on the GPU? Because you're leaving a lot of compute unused.

xaskasdf|7 days ago

I did it, but with different quantization compressions, and it ran into quality issues. I will rerun with the same quants to see if that fixes it. But most of the compute that looks unused is actually busy rotating layers, which the CPU swaps in from RAM; that keeps layers warm and ready to use while inferencing, and discards already-used ones.
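The "keep the next layer warm" overlap the author describes can be sketched with a background thread that fetches layer k+1 while layer k computes. This is a pure NumPy/threading stand-in with invented shapes and file name, not the project's actual NVMe-to-GPU path.

```python
# Double-buffered layer streaming: prefetch layer k+1 from disk while
# layer k's forward pass runs, so the fetch hides behind the compute.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

N_LAYERS, DIM = 4, 8
rng = np.random.default_rng(0)
weights = rng.standard_normal((N_LAYERS, DIM, DIM)).astype(np.float32)
weights.tofile("fake_model2.bin")
mapped = np.memmap("fake_model2.bin", dtype=np.float32,
                   shape=(N_LAYERS, DIM, DIM), mode="r")

def load_layer(i):
    return np.asarray(mapped[i])    # the "slow" storage read

x = rng.standard_normal(DIM).astype(np.float32)
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(load_layer, 0)      # warm the first layer
    for layer in range(N_LAYERS):
        w = future.result()                  # wait for the current layer
        if layer + 1 < N_LAYERS:
            future = pool.submit(load_layer, layer + 1)  # prefetch next
        x = np.tanh(w @ x)                   # compute overlaps the fetch

print(x.shape)
```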

nathan_compton|7 days ago

I'm not sure, but I suspect that LLM weights don't compress all that well. The intuition here is that training an LLM is compression of the training data into the weights, so they are probably very information dense already. Can't squeeze them down much.
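That intuition is easy to poke at with stock compression: bytes that look random barely shrink. Gaussian float32 arrays here are only a stand-in for real checkpoint tensors, but they show the effect.

```python
# Sanity check on "weights are already information-dense": gzip-style
# compression barely dents random-looking float data. The only slack
# comes from the low-entropy sign/exponent bytes; the mantissa bits
# are effectively incompressible.
import zlib
import numpy as np

rng = np.random.default_rng(0)
raw = rng.standard_normal(1_000_000).astype(np.float32).tobytes()
ratio = len(zlib.compress(raw, level=9)) / len(raw)
print(f"compressed to {ratio:.0%} of original size")
```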

stuaxo|7 days ago

Interesting. Can AMD GPUs do direct io like this?

sylware|7 days ago

Isn't that linux DMA buf?

timzaman|7 days ago

Umm, sorry, but the CPU can easily keep up shuttling data to/from your NVMe, especially ancient Gen3 PCIe. Not sure why you'd do this.

xaskasdf|7 days ago

Did you even read anything? hahaha

MarcLore|7 days ago

[deleted]

YetAnotherNick|6 days ago

No it is not. CPU and GPU overhead is close to 0 anyways if you are loading weights at 10GB/s.

fabifabulous|7 days ago

NVMe drives are much, much slower than RAM, especially unified/soldered RAM.

3abiton|7 days ago

To be fair, llama.cpp has had this feature for over a year now. It just applies to GGUF.

xaskasdf|7 days ago

I've got an M3; I will test it on Metal and check how it goes.

umairnadeem123|8 days ago

[deleted]

esquire_900|8 days ago

Cost-wise it does not seem very effective. 0.5 tokens/sec (the optimized one) is 1800 tokens an hour, while an active 3090 + system draws about 200-300 watts. Running 1800 tokens on OpenRouter at $0.40 per million tokens for Llama 3.1 (3.3 costs less) is about $0.00072. That money buys you about 2-3 watt-hours of electricity (in the Netherlands).

Great achievement for privacy inference nonetheless.
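Redoing that comparison explicitly (the API price is the comment's OpenRouter figure; the ~$0.30/kWh Dutch electricity price is an assumption):

```python
# Energy-vs-API math: what the API cost of an hour's worth of local
# tokens buys in electricity.
tok_per_s = 0.5
tokens_per_hour = tok_per_s * 3600          # 1800 tokens
api_usd_per_mtok = 0.40                     # comment's OpenRouter price
api_cost = tokens_per_hour / 1e6 * api_usd_per_mtok
usd_per_kwh = 0.30                          # assumed NL retail price
wh_for_same_money = api_cost / usd_per_kwh * 1000

print(f"local: 1 h at 200-300 W for {tokens_per_hour:.0f} tokens")
print(f"API:   same tokens cost ${api_cost:.5f} "
      f"(~{wh_for_same_money:.1f} Wh of electricity)")
```

So one hour of local inference burns 200-300 Wh to produce tokens whose API price covers only ~2.4 Wh, a roughly 100x energy-cost gap.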

eleventyseven|7 days ago

Are you taking into account energy costs of running a 3090 at 350 watts for a very long time?

turingsroot|8 days ago

[deleted]

Aurornis|7 days ago

> No cuBLAS means they wrote their own GEMM kernels, which is a massive undertaking

Not to diminish the impressiveness of this overall project, but it says right up front that these were vibe coded and the Opus 4.6 co-author lines are right in the commit messages. Those pieces were adapted from existing work via LLM, which is exactly the right use in a proof of concept project like this.

snovv_crash|7 days ago

Please don't use LLMs to post on HN...