The 3090 in my server (the Ollama install on it only sees occasional use nowadays, since I have dual 5080s on my work desktop) also handles hardware-accelerated transcoding in Plex, and is in the process of being set up to monitor my 3D printers for failures via camera.
Am also considering setting up Home Assistant with LLM support again.
I use an older machine/GPU for wintertime heating by mining Monero (xmrig).

Should I get lucky and guess the next valid block, that pays the entire month's electricity. Since an electric space heater would already be consuming exactly the same kWh as this GPU, there is no "negative cost" to operating it.

This machine/GPU used to be my main workhorse, and still has Llama 3.2 available via Ollama, but even with HBM, 8GB of VRAM isn't really relevant in LLM-land.
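The "free heat" argument can be sketched with a quick back-of-envelope calculation. All figures here (GPU draw, electricity rate) are illustrative assumptions, not the commenter's actual numbers:

```python
# Back-of-envelope for mining-as-heating economics.
# GPU_DRAW_W and PRICE_PER_KWH are assumed values for illustration.

GPU_DRAW_W = 250          # assumed steady power draw of the mining GPU, watts
HOURS_PER_MONTH = 24 * 30 # round 30-day month
PRICE_PER_KWH = 0.15      # assumed electricity rate, USD per kWh

kwh = GPU_DRAW_W / 1000 * HOURS_PER_MONTH   # energy used in a month
cost = kwh * PRICE_PER_KWH                  # what that energy costs

print(f"{kwh:.0f} kWh/month -> ${cost:.2f}")
# A resistive space heater dissipating the same 250 W for the same hours
# draws exactly the same energy, so the marginal cost of mining while
# heating is zero; any block reward found is pure upside.
```

The key point is that resistive heating converts electricity to heat at effectively 100% efficiency, and so does a GPU at full load, so the two are interchangeable as heaters for the same wattage.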