top | item 46991615


mythz | 17 days ago

Really looked forward to this release as MiniMax M2.1 is currently my most used model thanks to it being fast, cheap and excellent at tool calling. Whilst I still use Antigravity + Claude for development, I reach for MiniMax first in my AI workflows, GLM for code tasks and Kimi K2.5 when deep English analysis is needed.

Not self-hosting yet, but I prefer using Chinese OSS models for AI workflows because of the potential to self-host in future if needed. Also using it to power my openclaw assistant since IMO it has the best balance of speed, quality and cost:

> It costs just $1 to run the model continuously for an hour at 100 tokens/sec. At 50 tokens/sec, the cost drops to $0.30.
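The quoted figures are easier to compare with API pricing once converted to dollars per million tokens; a quick sanity-check (the $/hour and tokens/sec numbers are from the quote above, the conversion is mine):

```python
# Convert a continuous-generation cost into $/million tokens.
def cost_per_million(dollars_per_hour: float, tokens_per_sec: float) -> float:
    tokens_per_hour = tokens_per_sec * 3600
    return dollars_per_hour / tokens_per_hour * 1_000_000

# $1/hour at 100 tok/s -> ~$2.78 per million tokens
print(round(cost_per_million(1.00, 100), 2))
# $0.30/hour at 50 tok/s -> ~$1.67 per million tokens
print(round(cost_per_million(0.30, 50), 2))
```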



algo_trader | 17 days ago

> MiniMax first in my AI workflows, GLM for code tasks and Kimi K2.5

It's good to have these models to keep the frontier labs honest! Can I ask if you use the API or a monthly plan? Do the monthly plans throttle/reset?

edit: I agree that MM2.1 is the most economical, and K2.5 generally the strongest

mythz | 17 days ago

Using a coding plan, haven't noticed any throttling and very happy with the performance. They publish the quotas for each of their plans on their website [1]:

- $10/mo: 100 prompts / 5 hours

- $20/mo: 300 prompts / 5 hours

- $50/mo: 1000 prompts / 5 hours

[1] https://platform.minimax.io/docs/guides/pricing-coding-plan
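Since the quotas are per 5-hour window, choosing a tier reduces to matching your peak prompt rate; a hypothetical helper (tier prices and quotas copied from the list above, the selection logic is my own sketch):

```python
# Pick the cheapest coding plan whose 5-hour prompt quota covers peak usage.
# Tiers are (price in $/month, prompts per 5-hour window), from the list above.
PLANS = [(10, 100), (20, 300), (50, 1000)]

def cheapest_plan(prompts_per_5h: int):
    for price, quota in PLANS:
        if prompts_per_5h <= quota:
            return price
    return None  # usage exceeds every published tier

print(cheapest_plan(80))    # -> 10
print(cheapest_plan(250))   # -> 20
print(cheapest_plan(2000))  # -> None
```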

user2722 | 17 days ago

Incredibly cheap!

I'll have to look for it on OpenRouter.

amunozo | 17 days ago

For the moment it's free in Opencode, if you want to try it.