item 46560936

Blue_Cosma | 1 month ago

Our main driver and hypothesis was demand from regulated industries. We worked with a few large enterprise clients in defence and manufacturing, mostly for R&D and IP-sensitive use cases.

Our stack changes per project, adapting to client needs and infra: Llama 70B on a Mac Studio M1 with Ollama in 2024, vLLM on 4xH100 private cloud for larger deployments. Most recently, we've been working on a custom workstation with 2x RTX PRO 6000 Blackwell Max-Q + 1.1TB DDR5 to run larger models locally using SGLang and KTransformers.
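As a rough illustration of why the hardware sizing above tracks the model size (a back-of-envelope sketch of my own, not from the post; the ~20% overhead factor and byte-per-parameter figures are assumptions):

```python
def est_vram_gb(params_b, bytes_per_param, overhead=1.2):
    """Back-of-envelope VRAM estimate: weights * dtype size,
    plus ~20% assumed headroom for KV cache, activations, and runtime."""
    return params_b * bytes_per_param * overhead

# 70B model in FP16 (2 bytes/param): fits a 4xH100 (4x80 GB) node
print(round(est_vram_gb(70, 2)))    # -> 168 (GB)

# Same model at ~4-bit quantization (~0.5 bytes/param): fits 2x 96 GB cards
print(round(est_vram_gb(70, 0.5)))  # -> 42 (GB)
```

Anything past those numbers is where offloading tools like KTransformers and a large DDR5 pool come in.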

The question isn't rhetorical; I'm trying to understand whether the demand we see in regulated sectors is the whole market, or whether there's broader adoption I'm missing.


01092026 | 1 month ago

Cool, so you're basically doing local onsite deployments? The H100s are nice. I'm not that rich, so I have a 4xV100 32GB SXM2 server, dual socket; it's OK for inference. You can get one with V100s, RAM, etc. for $10-12k all-in with used parts.

I run the largest models I can: DeepSeek for now, with a few more coming soon. Being able to run a premier high-end model locally is the main interest; a 70B model is pointless unless it's a specialized task-specific model, text-to-speech or whatever.

I am more interested in ditching Nvidia for AMD CPUs+GPUs, but not even via ROCm: just running the weights in OpenGL / Vulkan compute shaders. Faster, more control, better performance for MY architecture, etc. This is the goal.
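For what it's worth, the inner loop such a shader would implement is mostly dequantize-then-matmul over quantized weight blocks. A toy NumPy sketch of blockwise 8-bit quantization (my own illustration, not any real weight format):

```python
import numpy as np

def quantize_q8(w, block=32):
    """Toy blockwise 8-bit quantization: per-block absmax scale + int8 codes."""
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale[scale == 0] = 1.0  # avoid divide-by-zero on all-zero blocks
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_q8(q, scale):
    """What the compute shader would do per block before the matmul."""
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)
q, s = quantize_q8(w)
err = np.abs(dequantize_q8(q, s) - w).max()
print(err < 0.05)  # -> True: reconstruction error stays small
```

The appeal of doing this in plain Vulkan/GL compute is exactly that the kernel is this simple; the hard part is memory layout and scheduling, not the math.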

I don't think many people are running models locally outside of a company. You're company/industry focused; I'm just a programmer doing this personally.

People don't see a need, I guess? It's complicated. Well, actually it's NOT complicated if you have lots of money to buy all the right stuff brand new.

For regular guys like me, we have to be creative to get shit to run the best way; it's all we can afford.