Liwink | 5 months ago
From OpenRouter last week:
* xAI: Grok Code Fast 1: 1.15T
* Anthropic: Claude Sonnet 4: 586B
* Google: Gemini 2.5 Flash: 325B
* Sonoma Sky Alpha: 227B
* Google: Gemini 2.0 Flash: 187B
* DeepSeek: DeepSeek V3.1 (free): 180B
* xAI: Grok 4 Fast (free): 158B
* OpenAI: GPT-4.1 Mini: 157B
* DeepSeek: DeepSeek V3 0324: 142B
simonw | 5 months ago
For all I know there are a couple of enormous whales on there who, should they decide to switch from one model to another, will instantly impact those overall ratings.
I'd love to have a bit more transparency about volume so I can tell if that's what is happening or not.
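The whale concern is measurable if per-key volume data were available. A minimal sketch, assuming a hypothetical per-key token-usage table (OpenRouter does not publish this), of how much of a model's traffic the heaviest keys account for:

```python
from collections import Counter

def top_share(tokens_by_key: dict[str, int], k: int = 10) -> float:
    """Fraction of total tokens consumed by the k heaviest API keys.

    `tokens_by_key` is a hypothetical per-key usage mapping; only the
    aggregate leaderboard numbers are actually public.
    """
    total = sum(tokens_by_key.values())
    if total == 0:
        return 0.0
    top = Counter(tokens_by_key).most_common(k)
    return sum(count for _, count in top) / total

# One whale dominating a model's traffic: a single key switching
# providers would move the leaderboard by 90%.
usage = {"whale": 900, "a": 25, "b": 25, "c": 25, "d": 25}
print(round(top_share(usage, k=1), 2))  # 0.9
```

A high top-1 or top-10 share would confirm that a few accounts can swing the rankings; a low one would suggest the totals reflect broad adoption.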
minimaxir | 5 months ago
A "weekly active API keys" metric, faceted by model/app, would be a useful data point for measuring real-world popularity, though.
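The proposed metric is a distinct-count per facet. A minimal sketch, assuming a hypothetical request log of (model, api_key) pairs (the real logs are not public):

```python
from collections import defaultdict

def weekly_active_keys(requests: list[tuple[str, str]]) -> dict[str, int]:
    """Count distinct API keys per model over one week of request logs.

    Each entry is a hypothetical (model, api_key) pair; repeat requests
    from the same key count only once, so a single whale hammering one
    model contributes 1, not its token volume.
    """
    keys_by_model: dict[str, set[str]] = defaultdict(set)
    for model, api_key in requests:
        keys_by_model[model].add(api_key)
    return {model: len(keys) for model, keys in keys_by_model.items()}

log = [
    ("grok-code-fast-1", "key1"),
    ("grok-code-fast-1", "key1"),  # repeat request, same key
    ("grok-code-fast-1", "key2"),
    ("claude-sonnet-4", "key3"),
]
print(weekly_active_keys(log))  # {'grok-code-fast-1': 2, 'claude-sonnet-4': 1}
```

Unlike token totals, this count is insensitive to a single heavy user, which is exactly why it would complement the leaderboard.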
crazysim | 5 months ago
People are lazy about pointing to the latest name.
minimaxir | 5 months ago
Both apps have offered usage for free for a limited time:
https://blog.kilocode.ai/p/grok-code-fast-get-this-frontier-...
https://cline.bot/blog/grok-code-fast
NitpickLawyer | 5 months ago
It's also cheap enough that it doesn't really matter.
coder543 | 5 months ago
I would rather use a model that is good than a model that is free, but different people have different priorities.