hsaliak | 7 days ago

Google's Pro service (no idea about Ultra, and I have no intention to find out) is riddled with 429s. They have generous quotas, for sure, but they give you very low priority. For example, I still don't have access to Gemini 3.1 from that endpoint. It's completely uncharacteristic of Google.

I analyzed 6k HTTP requests on the Pro account; 23% of those were hit with 429s. (Not via Gemini CLI, but from my own agent using Code Assist.) The gemini-cli has a default retry backoff of 5s. That's verifiable in the code, and it's a lot.
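A 5s base backoff compounds quickly when nearly a quarter of requests come back 429. As a rough sketch of the pattern being described (the names and structure here are mine, not gemini-cli's actual code), an exponential backoff loop looks like:

```python
import time


def with_backoff(call, is_rate_limited, max_retries=5, base_delay=5.0,
                 sleep=time.sleep):
    """Invoke call(); on a rate-limited result, sleep and retry.

    Delay doubles each attempt (5s, 10s, 20s, ...), starting from the
    5s default mentioned above. `sleep` is injectable so the behavior
    can be tested without actually waiting.
    """
    for attempt in range(max_retries):
        result = call()
        if not is_rate_limited(result):
            return result
        if attempt < max_retries - 1:
            sleep(base_delay * (2 ** attempt))
    # Out of retries: hand the last rate-limited result back to the caller.
    return result
```

With a 23% 429 rate, even one retry per failed request adds roughly 5s x 0.23 of extra latency per request on average, which is why the default feels so heavy in practice.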

I don't touch the Antigravity endpoint. Unlike Code Assist, it's clear that they are subsidizing that one for user acquisition on that tool, so perhaps it's OK for them to ban users from it.

I like their models, but they also degrade. It's quite easy to tell when the models are 'smart' and capacity is available, and when they are 'stupid'. They likely clamp thinking when they are capacity-strapped.

Yes, the models are smart, but you really can't "build things", despite the marketing, if you actively beat back your users for trying. I spent a decade at Google, and it's sad to see how they are executing here, despite having solid models in gemini-3-flash and gemini-3.1.

gck1|7 days ago

> Yes the models are smart, but you really can't "build things" despite the marketing if you actively beat back your users for trying

I think this is the most important takeaway from this thread, and at some point it will come back to bite Google and Anthropic.

OpenAI seems to have realized this and is actively trying to do the opposite. They welcomed OpenCode the same day Anthropic banned them, and X is full of tweets from people saying the Codex $20 plan is more generous than Anthropic's $200 one, etc.

If you told me this story a year ago without naming companies, I would tell you it's OpenAI banning people and Google burning cash to win the race.

And it's not like their models are winning any awards in the community either.

lukeschlather|7 days ago

My impression is there's a definite shortage of GPUs, and if OpenAI is more reliable it's because they have fewer customers relative to the number of GPUs they have. I don't think Google is handing out 429s because they are worried about overspending; I think it's because they literally cannot serve the requests.

tom_m|7 days ago

You can build plenty with the Google AI Pro plan and Antigravity. Yeah, there are some limits that should be even higher, but you can still build stuff.

mannanj|7 days ago

It's unfortunate, though, that they lie and deceive by calling themselves "Open"AI when they are in fact closed. And the whole non-profit-to-profit conversion and the Microsoft deals are just untrustworthy and unethical.

They also actively employ dark strategies in cooperation with the CIA, and who knows when they will pull the rug out from under you again.

Do you really trust a foundationally rotten group of people who avoid accountability?

tempaccount420|7 days ago

I'm guessing at least 50% of the "users" of Antigravity are actually OpenCode users exploiting the OAuth flow and endpoint. Must be infuriating to them if they're subsidizing it.

The OpenCode plugin (8.7k stars btw!) even advertises "Multi-account support — add multiple Google accounts, auto-rotates when rate-limited"[1]

[1] https://github.com/NoeFabris/opencode-antigravity-auth/blob/...

ingatorp|7 days ago

I've stopped using Gemini models altogether because of this. I've been using Claude Code with MiniMax M2.5 for a while now, and I couldn't be happier. I haven't noticed any drop in output quality, and the biggest advantage is that even the $10 plan is pretty generous. I haven't been hit with a rate limit, not even once, and I'm a pretty heavy user. I also tried GLM 5.0, but I hit the rate limit there pretty early on.

AJRF|6 days ago

One thing with GLM 5 is that they seem to do this weird thing where, when your account has just been opened, it limits you really heavily; then this gets lifted later.

I had buyer's remorse when I kept getting rate-limited on GLM 5 for the first hour or two, but since then I've not had a single rate limit, and I am using it very heavily.

harshitaneja|7 days ago

Just adding for context: I use Gemini Ultra, and across all models from Gemini 3.1 Pro to Claude Opus 4.6, I have never hit 429s, and hitting model quota limits is incredibly rare; it only happens if I am trying to run 3 projects at once. While not the biggest agentic coding fan, I have been toying with these tools and running them for at least 7-8 hours a day, if not longer.

oofbey|7 days ago

I’ve often suspected these models of getting dumber when the service is under high load, but I’ve never seen actual measured results or proof. Anybody know of real published data here?

transcriptase|7 days ago

ChatGPT was brutal for this a couple of years ago. You could tell when it went into “lazy mode” during peak usage periods.

Suddenly instead of writing the code you asked for it would give some generic bullet points telling you to find a library to do what you asked for and read the documentation.

sva_|7 days ago

It is indeed somewhat sad/ridiculous to see that my GH Copilot Pro grants me access to Gemini 3.1, but my Google AI Pro does not.