
tygra | 6 months ago

Nice.

Of course, you need decent hardware to run LLMs locally, but you don't need a super high-end computer to host qwen3:30b or gpt-oss:20b. You don't even need a GPU for those models, as long as you've got a modern CPU and enough RAM: both are mixture-of-experts models with only ~3B parameters active per token, so CPU inference stays usable. And they're already pretty solid for writing and coding.
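
If you want to poke at one of these from code, here's a minimal sketch in Python, assuming the model is served with Ollama (those tags are Ollama model names) on its default port; the prompt is just a placeholder.

    import requests

    # Ask a locally hosted model for a completion via Ollama's
    # /api/generate endpoint (assumes `ollama pull qwen3:30b` has
    # already been run and the server is on its default port, 11434).
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen3:30b",
            "prompt": "Write a quicksort in Python.",
            "stream": False,  # one JSON object instead of a token stream
        },
        timeout=300,
    )
    resp.raise_for_status()
    print(resp.json()["response"])

The same call works for gpt-oss:20b; just swap the model name.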
