top | item 39491782

abra0 | 2 years ago

I was thinking of doing something similar, but I'm a bit sceptical about how the economics work out. On vast.ai, renting a 3x3090 rig is $0.6/hour. The electricity cost of operating this in e.g. Germany is somewhere around $0.05/hour. If the OP paid 1700 EUR for the cards, the break-even point would be around (haha) 3090 hours in, or ~129 days, assuming non-stop usage. It's probably cool to do that if you have a specific goal in mind, but to tinker around with LLMs and for unfocused exploration I'd advise folks to just rent.
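The break-even arithmetic above can be sketched as follows (all figures are the commenter's estimates; owning only pays back the hourly *savings* over renting):

```python
# Break-even between renting and owning a 3x3090 rig,
# using the figures quoted in the comment above.
rent_per_hour = 0.60         # vast.ai 3x3090, USD/hour
electricity_per_hour = 0.05  # running locally in Germany, USD/hour
hardware_cost = 1700         # purchase price of the cards (EUR, ~USD)

savings_per_hour = rent_per_hour - electricity_per_hour  # 0.55/hour

breakeven_hours = hardware_cost / savings_per_hour
breakeven_days = breakeven_hours / 24

print(f"{breakeven_hours:.0f} hours ≈ {breakeven_days:.0f} days")
# → 3091 hours ≈ 129 days of non-stop usage
```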


imiric|2 years ago

> On vast.ai renting a 3x3090 rig is $0.6/hour. The electricity price of operating this in e.g. Germany is somewhere about $0.05/hour.

Are you factoring in the varying power usage in that electricity price?

The electricity cost of operating locally will vary depending on the actual system usage. When idle, it should be much cheaper. Whereas in cloud hosts you pay the same price whether the system is in use or not.

Plus, with cloud hosts reliability is not guaranteed, especially with vast.ai, where you're renting other people's home infrastructure. You might get good bandwidth and availability on one host, but when that host disappears, you'd better hope you made a backup (which vast.ai charges for separately), and then you need to spend time restoring it to another, hopefully equally reliable, host, which can take hours depending on the amount of data and bandwidth.

I recently built an AI rig and went with 2x3090s, and am very happy with the setup. I evaluated vast.ai beforehand, and my local experience is much better, while my electricity bill is not much higher (also in EU).

KeplerBoy|2 years ago

Well, rented cloud instances shouldn't be idling in the first place.

abra0|2 years ago

Well, if you are not using a rented machine for a period of time, you should release it.

Agreed on reliability and data transfer, that's a good point.

Out of curiosity, what do you use a 2x3090 rig for? Bulk, non-time-sensitive inference on down-quantized models?

algo_trader|2 years ago

> built an AI rig and went with 2x3090s,

Is there a go-to card for small (1-2B parameter) models?

Something with much better FLOPS/$ but purposely crippled with low memory.

whimsicalism|2 years ago

with runpod/vast, you can request a set amount of time - generally if I request from Western EU or North America the availability is fine on the week-to-month timescale.

fwiw I find runpod's vast clone significantly better than vast and there isn't really a price premium.

mirekrusin|2 years ago

For me "economics" are:

- if I have it locally, I'll play with it

- if not, I won't (especially with my data)

- if I have something ready for a long run I may or may not want to send it somewhere (it's not going to be on 3090s for sure if I send it)

- if I have a requirement to have something public, I'd probably go for per-usage pricing with e.g. [0].

[0] https://www.runpod.io/serverless-gpu

kkielhofner|2 years ago

With the current more-or-less hard dependency on CUDA, and thus Nvidia hardware, it's about making sure you actually have the hardware available consistently.

I've had VERY hit-or-miss results with vast.ai, and I'm convinced people are gaming their evaluation numbers, because when the rubber meets the road it's very clear performance isn't what it's claimed to be. Then you still need to be able to actually get the machines...

whimsicalism|2 years ago

use runpod and yeah i think vast.ai has some scams, especially in the asian and eastern european nodes.

wiradikusuma|2 years ago

For me the economics is when I'm not using it to do AI stuff, I can use it to play games with max settings.

Unfortunately my CFO (a.k.a Wife) does not share the same understanding.

ejb999|2 years ago

I fear that someday I will die and my wife will sell off all my stuff for what I said I paid for it.

(not really, but it is a joke I read someplace and I think it applies to a lot of couples).

segmondy|2 years ago

Unless you are training, you never hit peak wattage. When inferring, the power draw is still minimal. I'm running inference now and using 20%. GPU 0 is drawing more because I have it as the main GPU. Idle power sits at about 5%.

Device 0 [NVIDIA GeForce RTX 3060] PCIe GEN 3@16x RX: 0.000 KiB/s TX: 55.66 MiB/s GPU 1837MHz MEM 7300MHz TEMP 43°C FAN 0% POW 43 / 170 W GPU[|| 5%] MEM[|||||||||||||||||||9.769Gi/12.000Gi]

Device 1 [Tesla P40] PCIe GEN 3@16x RX: 977.5 MiB/s TX: 52.73 MiB/s GPU 1303MHz MEM 3615MHz TEMP 22°C FAN N/A% POW 50 / 250 W GPU[||| 9%] MEM[||||||||||||||||||18.888Gi/24.000Gi]

Device 2 [Tesla P40] PCIe GEN 3@16x RX: 164.1 MiB/s TX: 310.5 MiB/s GPU 1303MHz MEM 3615MHz TEMP 32°C FAN N/A% POW 48 / 250 W GPU[|||| 11%] MEM[||||||||||||||||||18.966Gi/24.000Gi]
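The POW columns above translate into an hourly electricity cost roughly like this (the ~0.35 EUR/kWh rate is an assumed German household price, not a figure from the thread):

```python
# Hourly electricity cost from the nvtop readings above.
rate_eur_per_kwh = 0.35  # assumed German household electricity price

inference_watts = 43 + 50 + 48   # POW column during light inference
peak_watts = 170 + 250 + 250     # combined power limits if fully loaded

def cost_per_hour(watts):
    return watts / 1000 * rate_eur_per_kwh

print(f"inference: {cost_per_hour(inference_watts):.3f} EUR/h, "
      f"peak: {cost_per_hour(peak_watts):.3f} EUR/h")
# light inference draws roughly a fifth of the worst-case wattage
```

Note that the light-inference figure (~0.05 EUR/h) lines up with the electricity estimate in the top comment.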

KuriousCat|2 years ago

When you computed the break-even point, did you factor in that you still own the cards and can resell them? I bought my 3090s for $1000, and after a year I think they'd go for more on the open market if I resold them now.

ametrau|2 years ago

Interesting. I checked it out. The providers running your Docker container have access to all your data.

lostmsu|2 years ago

I just made a clone of diskprices.com for GPUs specifically for AI training, and it has a power and depreciation calculator: https://gpuprices.us

You can expect a GPU to last 5 years. So for a 128-day break-even you are only looking at ~7% utilization. If you are doing training runs, I think you are going to beat that easily.
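The utilization claim can be checked with the thread's own numbers (the 5-year lifetime is this commenter's assumption):

```python
# Required utilization to break even over an assumed 5-year card lifetime.
breakeven_hours = 1700 / (0.60 - 0.05)  # ~3091 h, from the top comment
lifetime_hours = 5 * 365 * 24           # assumed 5-year lifetime

utilization = breakeven_hours / lifetime_hours
print(f"{utilization:.1%}")  # → 7.1%
```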

P.S. Coincidentally or not, shortly after it got mentioned on Hacker News, Best Buy ran out of both RTX 4090s and RTX 4080s. They used to top the chart. Turns out at decent utilization they win due to the electricity costs.

cyanydeez|2 years ago

the current economics is a lowball to get customers. it's absolutely not going to be the market price once commercial interests have locked in their products.

but if you're just goofing around and not planning to create anything production worthy, it's a great deal.

whimsicalism|2 years ago

> the current economics is a lowball to get customers.

vast.ai is basically a clearinghouse. they are not doing some VC subsidy thing

in general, community clouds are not suitable for commercial use.

verticalscaler|2 years ago

Well maybe you could rent it out to others for 256 days at $0.3/hour, tinker, and sell it for parts after you get bored with it. ;)

Luc|2 years ago

Breakeven point would be less than 128 days due to the (depreciating) resale value of the rig.

segmondy|2 years ago

Well, almost. GPUs have not been depreciating. The cost of 3090s and 4090s has gone up. Folks are selling them for what they paid or even more. With the recent 40-series SUPER cards from Nvidia, I'm not expecting any new releases for a year. AMD & Intel still have a ways to go before major adoption. Startups are buying up consumer cards. So I sadly expect prices to stay more or less the same.

karolist|2 years ago

He can use these cards for 128 days non-stop and resell them, claiming back the purchase price almost fully, since OP bought them cheap. Buying doesn't mean you use the GPUs to the point where they end up worth 0. Yes, there is a risk of a GPU dying, but c'mon... Renting is money you will never see again.