top | item 46562763


01092026 | 1 month ago

Cool, so you're basically doing local on-site deployments? The H100s are nice. I'm not that rich, so I have a 4xV100 32GB SXM2 server, dual socket - it's OK for inference. You can get one with V100s, RAM, etc. for $10-$12k all in, used.

I run the largest models I can - DeepSeek now, adding a few more soon. Being able to run a premier high-end model locally is the main interest; a 70B model is pointless unless it's a specific task-based model - text-to-speech, etc.

I'm more interested in ditching Nvidia for AMD CPUs+GPUs, but not even with ROCm - just running the weights in OpenGL / Vulkan compute shaders. Faster, more control, better performance for MY architecture, etc. That's the goal.
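For context, the core op behind this approach is just a matrix-vector multiply over the weight buffers. A minimal sketch of what that could look like as a Vulkan/OpenGL compute shader (GLSL 4.5; the buffer layouts and names here are my own illustration, not any particular runtime's):

```glsl
#version 450
// Computes y = W * x, one row per invocation, fp32 for simplicity.
// Real local-inference stacks usually store W quantized (e.g. 4-bit)
// and dequantize inside the shader; this sketch skips that.
layout(local_size_x = 64) in;

layout(std430, binding = 0) readonly buffer Weights { float W[]; }; // rows x cols, row-major
layout(std430, binding = 1) readonly buffer Input   { float x[]; }; // cols
layout(std430, binding = 2) writeonly buffer Output { float y[]; }; // rows

layout(push_constant) uniform Dims { uint rows; uint cols; };

void main() {
    uint row = gl_GlobalInvocationID.x;
    if (row >= rows) return;
    float acc = 0.0;
    for (uint c = 0; c < cols; ++c)
        acc += W[row * cols + c] * x[c];
    y[row] = acc;
}
```

Dispatch with ceil(rows/64) workgroups. The appeal over ROCm/CUDA is that this runs on any driver with compute-shader support, at the cost of writing the kernels and the memory management yourself.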

I don't think many people are running models locally, maybe outside of a company? I guess you're company/industry focused; I'm just a programmer doing this personally.

People don't see a need, I guess? It's complicated. Well - actually it's NOT complicated if you have lots of money to buy all the right stuff, brand new, etc.

For regular guys like me, we have to be creative to get shit running in the best way we can - it's all we can afford.
