jjani | 5 months ago
It's bad at agentic stuff, especially coding; it's not in the same league as Claude and now GPT-5. But if it's just about asking it random stuff, and especially going on for very long in the same conversation - which non-tech users have a tendency to do - Gemini wins. It's still the best at long context, noticing things said long ago.
Earlier this week I was doing some debugging. For debugging especially I like to run sonnet/gpt5/2.5-pro in parallel with the same prompt/convo. Gemini was the only one that, 4 or so messages in, pointed out something very relevant in the middle of the logs in the very first message. GPT and Sonnet both failed to notice, leading them to give wrong sample code. I would've wasted more time if I hadn't used Gemini.
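The side-by-side workflow described above can be sketched with a thread pool. `ask` here is a hypothetical stand-in for whatever client call each provider actually uses (Anthropic, OpenAI, Google clients all differ); this is a sketch of the fan-out pattern, not any particular SDK.

```python
from concurrent.futures import ThreadPoolExecutor


def fan_out(prompt: str, models: list[str], ask) -> dict[str, str]:
    """Send the same prompt to every model concurrently.

    `ask(model, prompt) -> str` is supplied by the caller and wraps
    the real provider client for each model.
    """
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(ask, m, prompt) for m in models}
        # .result() re-raises any exception from the worker thread
        return {m: f.result() for m, f in futures.items()}
```

In practice you'd keep a per-model conversation history and feed each model its own transcript, so one model noticing something (as Gemini did here) doesn't leak into the others' context.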
It's also still the best at a good number of low-resource languages. It doesn't glaze too much (Sonnet, ChatGPT) without being overly stubborn (raw GPT-5 API). It's by far the best at OCR and image recognition, which a lot of average users use quite a bit.
Google's ridiculously bad at marketing and AI UX, but they'll get there. They're already much more than just a "bang for the buck" player.
FWIW I use all three of the models mentioned above on a daily basis for a wide variety of tasks, often side-by-side in parallel to compare performance.
kridsdale1|5 months ago
Same as social media converging to rage bait. The user base LIKES it subconsciously. Nobody at the companies explicitly added that to content recommendation model training. I know that's true for the latter, as I was there.
typpilol|5 months ago
People have said its intelligence falls apart mid-conversation
lelanthran|5 months ago
Just on the video link alone, Gemini makes money on the free tier by pointing the hapless user at an ad, while the other LLMs make zilch off their free tiers.
dudeinhawaii|5 months ago
Additionally, despite having "grounding with google search" it tends to default to old knowledge. I usually have to inform it that it's presently 2025. Even after searching and confirming, it'll respond with something along the lines of "in this hypothetical timeline" as if I just gaslit it.
Consider this conversation I just had with Claude, Gemini, and GPT-5.
<ask them to consider DDR6 vs M3 Ultra memory bandwidth>
-- follow up --
User: "Would this enable CPU inference or not? I'm trying to understand if something like a high-end Intel chip or a Ryzen with built in GPU units could theoretically leverage this memory bandwidth to perform CPU inference. Think carefully about how this might operate in reality."
<Intro for all 3 models below - no custom instructions>
GPT-5: "Short answer: more memory bandwidth absolutely helps CPU inference, but it does not magically make a central processing unit (CPU) “good at” large-model inference on its own."
Claude: "This is a fascinating question that gets to the heart of memory bandwidth limitations in AI inference. "
Gemini 2.5 Pro: "Of course. This is a fantastic and highly relevant question that gets to the heart of future PC architecture."
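GPT-5's caveat above is the right instinct: decode-phase inference is usually memory-bandwidth bound, so a back-of-the-envelope estimate is tokens/sec ≈ bandwidth ÷ bytes streamed per token (roughly one full pass over the weights for a dense model). The bandwidth and model figures below are illustrative assumptions, not measurements; Apple's claimed M3 Ultra figure is 819 GB/s.

```python
# Rough upper bound on decode speed for a dense model when
# memory bandwidth is the bottleneck. Ignores compute, KV cache
# traffic, and overlap, so real numbers will be lower.

def tokens_per_sec(bandwidth_gb_s: float,
                   params_billion: float,
                   bytes_per_param: float) -> float:
    """Bandwidth-bound ceiling: one weight pass per generated token."""
    bytes_per_token = params_billion * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Assumed example: a ~70B dense model quantized to 4 bits (0.5 B/param)
for name, bw in [("dual-channel DDR5 (assumed ~90 GB/s)", 90.0),
                 ("hypothetical DDR6 desktop (assumed ~200 GB/s)", 200.0),
                 ("M3 Ultra (claimed 819 GB/s)", 819.0)]:
    print(f"{name}: ~{tokens_per_sec(bw, 70, 0.5):.1f} tok/s ceiling")
```

The point the estimate makes: even a big DDR6 jump leaves a desktop CPU platform a few times short of the M3 Ultra's unified-memory bandwidth, so CPU inference becomes *usable* rather than *fast* for large dense models.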