btbuildem|18 days ago

> doesn't make financial sense to self-host

I guess that's debatable. I regularly run out of quota on my Claude Max subscription. When that happens, I can sort of kind of get by with my modest setup (2x RTX 3090) and quantized Qwen3.

And this does not even account for privacy and availability. I'm in Canada, and as the US is slowly consumed by its spiral of self-destruction, I fully expect at some point a digital iron curtain will go up. I think it's prudent to have alternatives, especially with these paradigm-shattering tools.

jsheard|18 days ago

I think AI may be the only place you could get away with calling a 2x350W GPU rig "modest".

That's like ten normal computers worth of power for the GPUs alone.

bigyabai|18 days ago

> That's like ten normal computers worth of power for the GPUs alone.

Maybe if your "computer" in question is a smartphone? Remember that the M3 Ultra is a 300w+ chip that won't beat one of those 3090s in compute or raster efficiency.

dymk|18 days ago

That's maybe a few dollars to tens of dollars in electricity per month depending on where in the US you live
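A rough sketch of that electricity estimate, using hypothetical usage hours and a ballpark US residential rate (both are assumptions, not figures from the thread):

```python
# Rough monthly electricity cost for a 2x 350W GPU rig.
# HOURS_PER_DAY and RATE_USD_PER_KWH are illustrative assumptions.
GPU_WATTS = 2 * 350          # the two GPUs at full load
HOURS_PER_DAY = 4            # assumed active inference time per day
RATE_USD_PER_KWH = 0.17      # ballpark US residential electricity rate

kwh_per_month = GPU_WATTS / 1000 * HOURS_PER_DAY * 30
monthly_cost = kwh_per_month * RATE_USD_PER_KWH
print(f"~${monthly_cost:.2f}/month")
```

Under these assumptions the GPUs land in the low tens of dollars per month; heavier use or pricier power scales the number linearly.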

kataklasm|18 days ago

Did you even try to read and understand the parent comment? They said they regularly run out of quota on the exact subscription you're advising they subscribe to.

wongarsu|18 days ago

Buying hardware for training (or gaming) makes a lot of sense, and once you have the hardware, self-hosting inference on it is an easy step.

But if you have to factor in hardware costs, self-hosting doesn't seem attractive. All the models I can self-host I can browse on OpenRouter and instantly find a provider offering great prices. With most of the cost being in the GPUs themselves, it just makes more sense to have others do it with better batching and GPU utilization.

zozbot234|18 days ago

If you can get near 100% utilization for your own GPUs (i.e. you're letting requests run overnight and not insisting on any kind of realtime response) it starts to make sense. OpenRouter doesn't have any kind of batched requests API that would let you leverage that possibility.

int_19h|17 days ago

Anthropic has very tight limits, so you're basically using the worst (pricing-wise) SOTA cloud model as your baseline. I have $200 subs for both Claude and OpenAI, and I also bump into limits with Claude all the time, whether coding or research. With Codex, I ran into the limit once so far, and that's in a month of very heavy (sometimes literally 24 hours around the clock, leaving long-running tasks overnight) use.

sheepscreek|17 days ago

I bought the Gemini Ultra to try for a month (at the discounted price). I have been using it non-stop for Opus 4.6 Thinking, which is much better than Gemini 3 Pro (High), and it's been a blast. The most I've managed to consume is 60% of my 5-hour quota. That was with 2-3 instances in parallel.

I hope too many of us don't do this and cause Google to add limits! My hope is that Google sees the benefit in this and goes all in - continues to let people decide which Google-hosted model to use, including their own.

mythz|18 days ago

Did the napkin math on M3 Ultra ROI when DeepSeek V3 launched: at $0.70/2M tokens and 30 tps, a $10K M3 Ultra would take ~30 years of non-stop inference to break even - without even factoring in electricity. Clearly people aren't self-hosting to save money.

I've got a lite GLM sub $72/yr which would require 138 years to burn through the $10K M3 Ultra sticker price. Even GLM's highest cost Max tier (20x lite) at $720/yr would buy you ~14 years.
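The napkin math above can be reproduced directly (all figures come from this comment; electricity is ignored, as noted):

```python
# Break-even math for a $10K M3 Ultra vs. DeepSeek V3 API pricing at launch.
m3_ultra_price = 10_000            # USD sticker price
api_price_per_token = 0.70 / 2e6   # $0.70 per 2M tokens
tokens_per_second = 30             # local inference speed

tokens_to_break_even = m3_ultra_price / api_price_per_token
seconds = tokens_to_break_even / tokens_per_second
years = seconds / (3600 * 24 * 365)
print(f"{years:.1f} years of non-stop inference to break even")

# Subscription comparison: years of fees to reach the sticker price.
print(f"lite GLM ($72/yr):  {10_000 / 72:.0f} years")
print(f"GLM Max ($720/yr):  {10_000 / 720:.0f} years")
```

The ~30-year figure also assumes the $10K machine keeps its 30 tps pace non-stop; any idle time pushes break-even further out.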

ljosifov|18 days ago

Everyone should do the calculation for themselves. I too pay for a couple of subs. But I'm noticing that having an agent work for me 24/7 changes the calculation somewhat. Often not taken into account: the price of input tokens. To produce 1K of code for me, the agent may need to churn through 1M tokens of codebase. IDK if that will be cached by the API provider or not, but that makes a 5-7x price difference. OK discussion today about that and more: https://x.com/alexocheema/status/2020626466522685499
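The caching effect described above can be sketched numerically. The prices and cache discount here are illustrative assumptions, not any provider's actual rates:

```python
# Why input tokens dominate agent costs: an illustrative sketch.
# All prices below are hypothetical, not any provider's real rates.
INPUT_PRICE = 0.60 / 1e6          # $/token, uncached input (assumed)
CACHED_PRICE = INPUT_PRICE / 6    # assume cached input is ~6x cheaper
OUTPUT_PRICE = 2.40 / 1e6         # $/token, output (assumed)

# One agent turn: read 1M tokens of codebase, emit 1K tokens of code.
input_tokens, output_tokens = 1_000_000, 1_000

uncached = input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE
cached = input_tokens * CACHED_PRICE + output_tokens * OUTPUT_PRICE
print(f"uncached: ${uncached:.4f}, cached: ${cached:.4f}, "
      f"ratio: {uncached / cached:.1f}x")
```

With a 1000:1 input-to-output ratio, nearly all of the per-turn cost is input, so whether the provider caches the codebase context swings the bill by roughly the claimed 5-7x.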

wongarsu|18 days ago

And it's worth noting that you can get DeepSeek at those prices from DeepSeek (Chinese), DeepInfra (US with Bulgarian founder), NovitaAI (US), AtlasCloud (US with Chinese founder), ParaSail (US), etc. There is no shortage of companies offering inference, with varying levels of trustworthiness, certificates and promises around (lack of) data retention. You just have to pick one you trust

DeathArrow|18 days ago

I don't think an Apple PC can run full DeepSeek or GLM models.

Even if you quantize the hell out of the models to fit in the memory, they will be very slow.

oceanplexian|18 days ago

Doing inference with a Mac Mini to save money is more or less holding it wrong. Of course if you buy some overpriced Apple hardware it’s going to take years to break even.

Buy a couple real GPUs and do tensor parallelism and concurrent batch requests with vllm and it becomes extremely cost competitive to run your own hardware.
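The setup described above might look something like this launch command; the model name and flag values are illustrative, and flags can vary by vLLM version:

```shell
# Hypothetical vLLM launch across 2 GPUs with concurrent batching.
# Model name and flag values are illustrative.
vllm serve Qwen/Qwen3-32B-AWQ \
  --tensor-parallel-size 2 \
  --max-num-seqs 64 \
  --gpu-memory-utilization 0.90
```

Tensor parallelism splits the weights across both cards, and allowing many concurrent sequences is what amortizes the hardware cost per token.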

Aurornis|18 days ago

> I regularly run out of quota on my claude max subscription. When that happens, I can sort of kind of get by with my modest setup (2x RTX3090) and quantized Qwen3.

When talking about fallback from Claude plans, the correct financial comparison would be the same model hosted on OpenRouter.

You could buy a lot of tokens for the price of a pair of 3090s and a machine to run them.

bigyabai|18 days ago

> You could buy a lot of tokens for the price of a pair of 3090s and a machine to run them.

That's a subjective opinion, to which the answer is "no you can't" for many people.

visarga|18 days ago

Your $5,000 PC with 2 GPUs could have bought you 2 years of Claude Max, a model much more powerful and with longer context. In 2 years you could make that investment back in pay raise.

tw1984|18 days ago

> In 2 years you could make that investment back in pay raise.

you can't be a happy uber driver making more money in the next 24 months by having a fancy car fitted with the best FSD in town when all cars in your town have the same FSD.

benterix|18 days ago

> In 2 years you could make that investment back in pay raise.

Could you elaborate? I fail to grasp the implication here.

dymk|18 days ago

This claim has so many assumptions mixed in that it's utterly useless.

7thpower|18 days ago

Unless you already had those cards, it probably still doesn't make sense from a purely financial perspective, unless there are other factors you're accounting for.

Doesn’t mean you shouldn’t do it though.

flaviolivolsi|18 days ago

How does your quantized Qwen3 compare in code quality to Opus?

Aurornis|18 days ago

Not the person you’re responding to, but my experience with models up through Qwen3-coder-next is that they’re not even close.

They can do a lot of simple tasks in common frameworks well. Doing anything beyond basic work will just burn tokens for hours while you review and reject code.

btbuildem|18 days ago

It's just as fast, but not nearly as clever. I can push the context size to 120k locally, but quality of the work it delivers starts to falter above say 40k. Generally you have to feed it more bite-sized pieces, and keep one chat to one topic. It's definitely a step down from SOTA.