Don't want to sound rude, but any time someone says this I assume they haven't tried agentic coding tools and are still copy-pasting coding questions into a web input box
I would be really curious to know what tools you've tried and are using where gemini feels better to use
It's good enough if you don't go wild and allow LLMs to produce 5k+ lines in one session.
In a lot of industries, you can't afford this anyway, since all code has to be carefully reviewed. A lot of models are great when you do isolated changes with 100-1000 lines.
Sometimes it's okay to ship a lot of code from LLMs, especially for the frontend. But, there are a lot of companies and tasks where backend bugs cost a lot, either in big customers or direct money. No model will allow you to go wild in this case.
My experience is that on large codebases that get tricky problems, you eventually get an answer quicker if you can send _all_ the context to a relevant large model to crunch on it for a long period of time.
Last night I was happily coding away with Codex after writing off Gemini CLI yet again due to weirdness in the CLI tooling.
I ran into a very tedious problem that all of the agents failed to diagnose and were confidently patching random things as solutions back and forth (Claude Code - Opus 4.6, GPT-5.3 Codex, Gemini 3 Pro CLI).
I took a step back, used a Python script to extract all of the relevant codebase, and popped open the browser and had Gemini-3-Pro set to Pro (highest) reasoning, and GPT-5.2 Pro crunch on it.
They took a good while thinking.
But, they narrowed the problem down to a complex interaction between texture origins, polygon rotations, and a mirroring implementation that was causing issues for one single "player model" running through a scene and not every other model in the scene. You'd think the "spot the difference" would make the problem easier. It did not.
I then took Gemini's proposal and passed it to GPT-5.3-Codex to implement. It actually pushed back and said "I want to do some research because I think there's a better code solution to this". Wait a bit. It solved the problem in the most elegant and compatible way possible.
So, that's a long winded way to say that there _is_ a use for a very smart model that only works in the browser or via API tooling, so long as it has a large context and can think for ages.
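The "extract all of the relevant codebase" step can be sketched roughly like this. This is a minimal illustration, not the commenter's actual script; the file extensions, the byte budget, and the `collect_sources` name are all assumptions:

```python
from pathlib import Path

def collect_sources(root, exts=(".py", ".cpp", ".glsl"), max_bytes=2_000_000):
    """Concatenate matching source files under `root` into one big string,
    with per-file headers so the model can refer back to locations."""
    chunks, total = [], 0
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file() or path.suffix not in exts:
            continue
        text = path.read_text(errors="replace")
        if total + len(text) > max_bytes:
            break  # stay under the model's context budget
        chunks.append(f"\n===== {path} =====\n{text}")
        total += len(text)
    return "".join(chunks)
```

Paste the resulting blob into the web UI along with the problem description; the per-file headers let the model cite which file it suspects.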
You need to stick Gemini in a straitjacket; I've been using https://github.com/ClavixDev/Clavix. With something like that, even Gemini 3 Flash becomes usable. Without it, it more often than not just loses the plot.
Every time I've tried to use agentic coding tools it's failed so hard I'm convinced the entire concept is a bamboozle to get customers to spend more tokens.
I've had the same experience with editing shaders. ChatGPT has absolutely no clue what's going on and it seems like it randomly edits shader code. It's never given me anything remotely usable. Gemini has been able to edit shaders and get me a result that's not perfect, but fairly close to what I want.
Have you compared it with Claude Code at all? Is there a subscription model for Gemini similar to Claude's? Does it have an agent like Claude Code or ChatGPT Codex? What are you using it for? How does it do with large contexts? (Claude Code has a 1 million token context.)
I tried Claude Opus but at least for my tasks, Gemini provided better results. Both were way better than ChatGPT. Haven't done any agents yet, waiting on that until they mature a bit more.
On top of every version of Gemini, you also get both Claude models and GPT-OSS 120B. If you're doing webdev, it'll even launch a (self-contained) Chrome to "see" the result of its changes.
I haven't played around with Codex, but it blows Claude Code's finicky terminal interface out of the water in my experience.
m00x|9 days ago
Coding has been vastly improved in 3.0 and 3.1, but, as usual, Google won't give us the full juice.
landl0rd|9 days ago
- yes
- yes (not quite as good as CC/Codex, but you can swap in the API instead of using gemini-cli)
- same stuff as them
- better than the others; Google got long (1M-token) context right before anyone else and doesn't charge two kidneys, an arm, and a leg like Anthropic
airstrike|9 days ago
but Claude and Claude Code are different things