stuckinhell | 4 months ago
My job just got me and our entire team a DGX Spark. I'm impressed by how easy it is to use ollama to run models I couldn't run on my laptop. gpt-oss:120b is shockingly better than I expected it to be, based on running the 20b model on my laptop.
The DGX has changed my mind about the future being small specialized models.
jasonjmcghee | 4 months ago
Are you shocked because that isn't your experience?
From the article, it sounds like ollama runs CPU inference, not GPU inference. Is that the case for you?
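(For anyone wanting to check this themselves: a quick sketch, assuming ollama is installed and a model is already pulled. `ollama ps` reports a PROCESSOR column showing the CPU/GPU split for loaded models, and on NVIDIA hardware `nvidia-smi` shows whether VRAM is actually being used.)

```shell
# Load the model briefly so it stays resident, then inspect where it runs.
# The PROCESSOR column in `ollama ps` shows e.g. "100% GPU" or "100% CPU".
ollama run gpt-oss:120b "hello" >/dev/null
ollama ps

# On NVIDIA hardware, near-zero VRAM usage while the model is loaded
# would suggest inference is falling back to the CPU.
nvidia-smi
```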