top | item 45343616


suladead | 5 months ago

I built pretty much this exact rig myself, but now it's gathering dust. Any other uses for it besides local LLMs?

Tepix | 5 months ago

Sell it? There are people who want a rig like this.

dotnet00 | 5 months ago

The 3090 in my server (Ollama on it only gets occasional use nowadays, since I have dual 5080s on my work desktop) also handles hardware-accelerated transcoding in Plex, and is in the process of being set up to monitor my 3D printers for failures via camera.

Am also considering setting up Home Assistant with LLM support again.

asimovDev | 5 months ago

Play DnD by yourself with Llama as a DM
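A solo campaign like this can be driven by a local Ollama server's chat endpoint. The sketch below is a minimal, hypothetical example: the system prompt wording, the model name `llama3.2`, and the helper names are my assumptions, not anything from the thread; only the `http://localhost:11434/api/chat` endpoint and payload shape come from Ollama's documented API.

```python
import json
import urllib.request

def build_dm_request(model: str, history: list, player_input: str) -> dict:
    """Assemble an Ollama chat payload where the model plays dungeon master.

    `history` is the running list of prior {"role": ..., "content": ...} turns.
    The system prompt here is just one plausible framing, not a canonical one.
    """
    system = {
        "role": "system",
        "content": ("You are a Dungeon Master running a solo D&D campaign. "
                    "Describe scenes vividly, then ask the player what they do."),
    }
    messages = [system] + history + [{"role": "user", "content": player_input}]
    return {"model": model, "messages": messages, "stream": False}

def ask_dm(payload: dict, url: str = "http://localhost:11434/api/chat") -> str:
    """Send the request to a local Ollama server and return the DM's reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

if __name__ == "__main__":
    # Requires an Ollama server running locally with the model pulled.
    payload = build_dm_request("llama3.2", [], "I enter the tavern.")
    print(ask_dm(payload))
```

Appending each reply back onto `history` before the next `build_dm_request` call is what keeps the campaign coherent across turns.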

robotswantdata | 5 months ago

Heating

ProllyInfamous | 5 months ago

I use an older machine/GPU for wintertime heating, mining Monero (xmrig).

Should one get lucky and mine the next valid block, that pays the entire month's electricity. Since an electric space heater would already be consuming the exact same number of kWh as this GPU, there is no "negative cost" to operate it.
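The cost-parity argument above is simple arithmetic: at equal wattage, the GPU's electricity bill is exactly the heat you would have paid for anyway. A quick sketch, with purely hypothetical numbers (the comment gives no actual wattage or rate):

```python
def monthly_kwh(watts: float, hours_per_day: float = 24, days: int = 30) -> float:
    """Energy consumed over a month, in kWh."""
    return watts * hours_per_day * days / 1000

def monthly_cost(watts: float, price_per_kwh: float) -> float:
    """Electricity cost of running the load around the clock for a month."""
    return monthly_kwh(watts) * price_per_kwh

gpu_watts = 300   # hypothetical draw of an older GPU under mining load
price = 0.15      # hypothetical electricity rate in $/kWh

energy = monthly_kwh(gpu_watts)        # 216.0 kWh
cost = monthly_cost(gpu_watts, price)  # $32.40
```

A 300 W space heater running the same hours would draw the same 216 kWh, so the marginal cost of mining is zero while heat is wanted; any mined XMR is pure upside.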

This machine/GPU used to be my main workhorse, and still has Llama 3.2 available via Ollama. But even with HBM, 8GB of VRAM isn't really relevant in LLM-land.
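The 8GB ceiling can be made concrete with a back-of-the-envelope weight-size calculation. The rule of thumb below (weights = parameters × bits / 8, plus roughly 20% overhead for KV cache and activations) is my assumption, not anything stated in the thread:

```python
def approx_vram_gb(params_billion: float, bits_per_weight: float,
                   overhead: float = 1.2) -> float:
    """Rough VRAM needed to run a model: weight bytes plus ~20% overhead.

    A hypothetical rule of thumb; real usage varies with context length,
    quantization scheme, and runtime.
    """
    weight_gb = params_billion * bits_per_weight / 8  # GB for weights alone
    return weight_gb * overhead

# An 8B model at 4-bit quantization: ~4.8 GB -> fits on an 8 GB card.
# A 70B model at 4-bit: ~42 GB -> far beyond it, hence "not relevant".
small = approx_vram_gb(8, 4)
large = approx_vram_gb(70, 4)
```

By this estimate an 8GB card handles small quantized models like a 3B or 8B, but nothing in the size class people usually mean by "LLM-land".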

winkelmann | 5 months ago

3D rendering and fluid simulation stuff could be interesting.

thiago_fm | 5 months ago

Playing games; it has a good graphics card.