Jimbabwe | 2 years ago | on: Mistral-8x7B-Chat
Meh, I bought this 5 years ago because there was a sale on 10TB hard drives and I thought, “Why shouldn’t I become a data hoarder?” And now it runs Home Assistant, Frigate, MeTube, and Jellyfin, and if it doesn’t work for Ollama then I’ll probably just deal with it, lol.
Jimbabwe | 2 years ago | on: Mistral-8x7B-Chat
Thanks, I’ll look into it! Especially if the llama.cpp route is a dud, like the other response says it will be. My little QNAP clunker handles all the self-hosting stuff I throw at it, but I won’t be surprised if it has simply met its match.
Jimbabwe | 2 years ago | on: Mistral-8x7B-Chat
Thanks! I was just following the thread about their recent addition of OpenCL support and was on the verge of trying it out last weekend. I’ll definitely continue once I’m home again!
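[For anyone landing here later, a minimal sketch of that OpenCL path, assuming the CLBlast backend llama.cpp shipped around this time; the .gguf filename is purely illustrative:]

    # Build llama.cpp with the CLBlast (OpenCL) backend; requires the
    # CLBlast and OpenCL dev packages to be installed first.
    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    make LLAMA_CLBLAST=1

    # Run a quantized model, offloading some layers to the GPU via -ngl.
    # Filename is an example; pick whatever quantization fits in RAM.
    ./main -m models/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf -ngl 8 -p "Hello"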
Jimbabwe | 2 years ago | on: Mistral-8x7B-Chat
There’s probably a better place to ask this highly specific technical question, but I’m avoiding Reddit these days, so I’m just throwing it out here. I’ve been trying to run these models in a container, but it’s very slow, I believe because of the lack of GPU acceleration. All the instructions I find are for Nvidia GPUs, and my server is a QNAP TVS-473e with an embedded AMD CPU/GPU (I know, I know). The only good news is that I’ve upgraded the RAM to 32GB, and I have a 1TB SSD. Any idea how I can get my own self-hosted LLM/chat service running on this funky hardware? The Nvidia/Docker option requires installing the Nvidia container runtime alongside Docker, but I can’t find an AMD equivalent.
Thanks. Sorry for the wall of text nobody cares about.
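[One hedged sketch for readers with the same question: Ollama’s stock Docker image runs CPU-only with no extra container runtime at all, which sidesteps the Nvidia-runtime problem entirely, just slowly. The ROCm variant at the end is an assumption and almost certainly won’t cover an embedded QNAP APU:]

    # CPU-only Ollama: no GPU runtime needed, just Docker.
    docker run -d --name ollama \
      -v ollama:/root/.ollama \
      -p 11434:11434 \
      ollama/ollama

    # Pull and chat with a small quantized model (fits easily in 32GB RAM).
    docker exec -it ollama ollama run mistral

    # AMD GPU variant (assumption: requires a ROCm-supported GPU,
    # which embedded APUs generally are not):
    # docker run -d --device /dev/kfd --device /dev/dri \
    #   -v ollama:/root/.ollama -p 11434:11434 ollama/ollama:rocm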
Jimbabwe | 8 years ago | on: Ask HN: A good primer on cryptocurrencies?