top | item 39361667

markab21 | 2 years ago

I've found myself more and more using local models rather than ChatGPT; it was pretty trivial to set up Ollama+Ollama-WebUI, which is shockingly good.
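Once Ollama is serving locally, its HTTP API on port 11434 is all you need. A minimal sketch (assumes Ollama is running with a `mistral` model pulled; the default endpoint and request shape are Ollama's, the helper names are mine):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> dict:
    # Minimal non-streaming request body for Ollama's /api/generate endpoint.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # POSTs to the local Ollama server and returns the generated text.
    body = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("mistral", "Write a haiku about local LLMs."))
```

Ollama-WebUI talks to this same API, so the two setups coexist on one box.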

I'm so tired of arguing with ChatGPT (or what was Bard) to even get simple things done. SOLAR-10B or Mistral works just fine for my use cases, and I've wired up a direct connection to Fireworks/OpenRouter/Together for the occasions when I need more than my local hardware can run (Mixtral MoE, 70B code/chat models).
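That wiring can be as simple as routing by model name: small models go to the local Ollama endpoint, big ones to a hosted provider. Both Ollama and OpenRouter expose OpenAI-compatible `/v1` bases, so only the URL changes. A hypothetical routing sketch (the model sets and the `route` helper are illustrative, not from the comment):

```python
# Prefer local inference; fall back to a hosted OpenAI-compatible
# provider (OpenRouter here) for models too big for local hardware.
LOCAL_MODELS = {"mistral", "solar:10.7b"}          # assumed to fit locally
REMOTE_MODELS = {"mixtral-8x7b", "codellama-70b"}  # routed to a hosted provider

def route(model: str) -> str:
    # Returns the OpenAI-compatible base URL to use for a given model name.
    if model in LOCAL_MODELS:
        return "http://localhost:11434/v1"    # Ollama's OpenAI-compatible endpoint
    if model in REMOTE_MODELS:
        return "https://openrouter.ai/api/v1"  # hosted fallback
    raise ValueError(f"unknown model: {model}")
```

Any OpenAI-style client can then be pointed at `route(model)` with the appropriate API key, so switching between local and hosted is one string.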

chrisallenlane | 2 years ago

Same here. I've found that I currently only want to use an LLM to solve relatively "dumb" problems (boilerplate generation, rubber-ducking, etc), and the locally-hosted stuff works great for that.

Also, I've found that GPT has become much less useful as it has gotten "safer." Too often I'd ask "How do I do X?" only to be told "You shouldn't do X." That's a frustrating waste of time, so I cancelled my GPT-4 subscription and went fully self-hosted.