top | item 46828650


zeroxfe | 1 month ago

I've been using this model (as a coding agent) for the past few days, and it's the first time I've felt that an open source model really competes with the big labs. So far it's been able to handle most things I've thrown at it. I'm almost hesitant to say that this is as good as Opus.


rubslopes | 1 month ago

Also my experience. I've been going back and forth between Opus and Kimi for the last few days, and, at least for my CRUD webapps, I would say they are both on the same level.

armcat | 1 month ago

Out of curiosity, what kind of specs do you have (GPU / RAM)? I saw the requirements and it's beyond my budget, so I'm "stuck" with smaller Qwen coders.

Carrok | 1 month ago

Not OP, but OpenCode and DeepInfra seem like an easy way.

observationist | 1 month ago

API costs for these big models from third-party hosts tend to be a lot lower than API calls to the big four American platforms. You definitely get more bang for your buck.

kristianp | 29 days ago

Note that Kimi K2x is natively 4-bit int, which reduces the memory requirements somewhat.

tgrowazay | 1 month ago

Just pick up any >240GB VRAM GPU off your local Best Buy to run a quantized version.

> The full Kimi K2.5 model is 630GB and typically requires at least 4× H200 GPUs.
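For a rough sense of where those numbers come from, here's a back-of-the-envelope sketch. The ~1T total parameter count for the K2 family is an assumption, and this counts weights only (no KV cache, activations, or serving overhead):

```python
# Back-of-the-envelope weight-memory math. Assumptions: ~1T total
# parameters (reported ballpark for the Kimi K2 family), weights only --
# real deployments need headroom for KV cache and activations on top.

PARAMS = 1.0e12  # assumed total parameter count
GIB = 1024**3

def weight_gib(params: float, bits_per_weight: float) -> float:
    """Memory for the weights alone at a given precision, in GiB."""
    return params * bits_per_weight / 8 / GIB

for label, bits in [("fp16", 16), ("fp8", 8), ("int4", 4)]:
    gib = weight_gib(PARAMS, bits)
    h200s = gib / 141  # one H200 has 141 GB of HBM3e
    print(f"{label}: ~{gib:,.0f} GiB  (~{h200s:.1f} H200s for weights alone)")
```

At int4 the weights alone come out to roughly 470 GiB, in the same ballpark as the 630GB figure quoted above once you add serving overhead, while fp16 would need around 1.9 TiB, which is why the native 4-bit format matters so much here.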

timwheeler | 1 month ago

Did you use Kimi Code or some other harness? I used it with OpenCode and it was bumbling around through some tasks that Claude handles with ease.

zedutchgandalf | 1 month ago

Are you on the latest version? They pushed an update yesterday that greatly improved Kimi K2.5's performance. It's also free for a week in OpenCode, sponsored by their inference provider.

thesurlydev | 1 month ago

Can you share how you're running it?

eknkc | 1 month ago

I've been using it with OpenCode. You can access it either through your Kimi Code subscription (flat fee), a moonshot.ai API key (per token), or OpenRouter. OpenCode works beautifully with the model.

Edit: as a side note, I only installed OpenCode to try this model, and I gotta say it's pretty good. Didn't think it'd be as good as Claude Code, but it's just fine. Been using it with Codex too.

zeroxfe | 1 month ago

Running it via https://platform.moonshot.ai -- using OpenCode. They also have super cheap monthly plans at kimi.com, but I'm not using those because I already have Codex and Claude monthly plans.

indigodaddy | 1 month ago

I've been using K2.5 Thinking via a Nano-GPT subscription and `nanocode run`, and it's working quite nicely. No issues with tool calling so far.

gigatexal | 1 month ago

Yeah, I'm curious too. Claude Code is so good, and the ecosystem so "it just works", that I'm willing to pay them.