top | item 42852717

roosgit | 1 year ago

Renting could be a good choice to get started. I used to rent a g4dn.xlarge instance from AWS (for Stable Diffusion, not LLMs). More affordable options are Runpod and Vast.ai.

I started with a local system using llama.cpp on CPU alone, and for short questions and answers it was OK for me. Because (in 2023) I didn't know if LLMs would be any good, I chose cheap components: https://news.ycombinator.com/item?id=40267208

Since AWS was getting pretty expensive, I also bought an RTX 3060 (12GB), an extra 16GB of RAM (for a total of 32GB), and a superfast 1TB M.2 SSD. The total cost of the components was around €620.
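A rough break-even calculation makes the rent-vs-buy trade-off concrete. The ~€620 build cost is from the comment above; the ~$0.53/hr g4dn.xlarge on-demand rate and the exchange rate are illustrative assumptions, not quoted figures:

```python
# Rough rent-vs-buy break-even for the ~€620 local build described above.
# ASSUMPTIONS (not from the comment): g4dn.xlarge on-demand at ~$0.526/hr
# (us-east-1 ballpark) and a 0.92 EUR/USD exchange rate.
build_cost_eur = 620
eur_per_usd = 0.92           # assumed exchange rate, for illustration only
rent_usd_per_hr = 0.526      # assumed g4dn.xlarge on-demand rate
rent_eur_per_hr = rent_usd_per_hr * eur_per_usd

breakeven_hours = build_cost_eur / rent_eur_per_hr
print(f"Break-even after ~{breakeven_hours:,.0f} hours of rented GPU time")
```

Under those assumptions the local box pays for itself after roughly 1,300 hours of rental, i.e. a couple of months of heavy use.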

Here are some basic LLM performance numbers for my system:

https://news.ycombinator.com/item?id=41845936

https://news.ycombinator.com/item?id=42843313


dconden | 1 year ago

You can find even more affordable + reliable cloud GPU options on Shadeform (YC S23).

It's a GPU marketplace that lets you compare and deploy on-demand instances from big names like Lambda, Scaleway, Crusoe, etc. with a single account.

Super useful for finding the best pricing per GPU type and deploying.

There are H100s for under $2 an hour, and H200s for under $3 an hour. Lots of lighter GPU options too (e.g. A5000s for $0.25/hr).
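To put those hourly rates in perspective, here is a quick back-of-envelope monthly cost at the prices quoted above (assuming an instance left running 24/7 for a 30-day month; real bills vary with usage and provider):

```python
# Monthly cost at the hourly rates quoted above, assuming 24/7 usage
# over a 30-day month. Rates are the "under $X/hr" figures, taken at face value.
rates_usd_per_hr = {"H100": 2.00, "H200": 3.00, "A5000": 0.25}
hours_per_month = 24 * 30  # 720 hours

for gpu, rate in rates_usd_per_hr.items():
    print(f"{gpu}: ${rate * hours_per_month:,.2f}/month if left running 24/7")
```

The spread is large: an always-on H100 runs about $1,440/month at that rate, while an A5000 is closer to $180/month, which is why shutting instances down between sessions matters so much.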