tygra | 6 months ago
Of course, you need decent hardware to run LLMs locally, but you don’t need a super high-end computer to host qwen3:30b or gpt-oss:20b. You don’t even need a GPU for those models, as long as you’ve got a modern CPU. And they’re already pretty solid for writing and coding.
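Those tags look like Ollama model names, so as a rough illustration, here's a minimal sketch of what a local chat call could look like with the `ollama` Python package (this assumes Ollama is installed, its server is running, and you've pulled the model first with `ollama pull qwen3:30b`):

```python
# Minimal sketch: chatting with a locally hosted model through the
# `ollama` Python package (pip install ollama). Assumes the Ollama
# server is already running and the model has been pulled, e.g.:
#   ollama pull qwen3:30b
import ollama

response = ollama.chat(
    model="qwen3:30b",  # or "gpt-oss:20b"
    messages=[{"role": "user", "content": "Write a haiku about CPUs."}],
)

# Print the assistant's reply text.
print(response["message"]["content"])
```

Since Ollama wraps llama.cpp, it falls back to CPU inference when no GPU is detected, so the same call should work on a GPU-less box; it'll just be slower.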