top | item 46522847

m348e912 | 1 month ago

I don't know who to credit, maybe it's Sergey, but the free Gemini (fast) is exceptional and at this point I don't see how OpenAI can catch back up. It's not just capability, but OpenAI have added so many policy guardrails it hurts the user experience.

jart|1 month ago

It's the worst thing ever. The amount of disrespect that robot shows you, when you talk the least bit weird or deviant, it just shows you a terrifying glimpse of a future that must be snuffed out immediately. I honestly think we wouldn't have half the people who so virulently hate AI if OpenAI hadn't designed ChatGPT to be this way. This isn't how people have normally reacted to next-generation level technologies being introduced in the past, like telephones, personal computers, Google Search, and iPhone. OpenAI has managed to turn something great into a true horror of horrors that's disturbed many of us to the foundation of our beings and elicited this powerful sentiment of rejection. It's humanity's duty that GPT should fall now so that better robots like Gemini can take its place.

pardon_me|1 month ago

It's called OPEN AI and started as a charity for humanitarian reasons. How could it possibly be bad?!

energy123|1 month ago

It's the best model pound for pound, but I find GPT 5.2 Thinking/Pro to be more useful for serious work when run with xhigh effort. I can get it to think for 20 minutes, whereas Gemini 3.0 Pro tops out at about 2.5 minutes. Obviously I lack full visibility, because tok/s and token efficiency likely differ between them, but I take thinking time as a proxy for how much compute they're giving us per inference, and it matches my subjective judgement of output quality. Maybe Google nerfs the reasoning effort in the Gemini subscription to save money, and that's why I'm experiencing this.

knowriju|1 month ago

When ChatGPT takes 20 minutes to reason, is it actually spending all that time burning tokens, or does the bulk of the time go into 'scheduling' waits? If someone specifically selected xhigh reasoning, I'm guessing the request can be processed with a high batch count.

cj|1 month ago

I'm curious, what types of prompts are you running that benefit from 10+ minutes of think time?

What's the quality difference between default ChatGPT and Thinking? Is it an extra 20% quality boost, or is the difference night and day?

I've often imagined it would be great to have some kind of Chrome extension or third-party tool that always runs prompts at multiple thinking tiers, so you get an immediate response to read while you wait for the thinking models to think.
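The fan-out idea above can be sketched in a few lines of asyncio: fire the same prompt at several tiers concurrently and surface each answer as it finishes, fastest first. The `call_model` function here is a stand-in (the tier names and delays are invented for illustration); a real version would call whatever provider API you use.

```python
import asyncio


async def call_model(prompt: str, tier: str, delay: float) -> str:
    # Stand-in for a real API call; the delay simulates each
    # tier's thinking time.
    await asyncio.sleep(delay)
    return f"[{tier}] answer to: {prompt}"


async def fan_out(prompt: str) -> list[str]:
    # Hypothetical tiers and latencies, just for the sketch.
    tiers = {"fast": 0.01, "thinking": 0.05, "pro": 0.10}
    tasks = [
        asyncio.create_task(call_model(prompt, tier, delay))
        for tier, delay in tiers.items()
    ]
    results = []
    # as_completed yields tasks in the order they finish, so the
    # fast tier's answer is available to display immediately while
    # the slower tiers are still "thinking".
    for finished in asyncio.as_completed(tasks):
        results.append(await finished)
    return results


answers = asyncio.run(fan_out("why is the sky blue?"))
```

In a real tool you would render each answer to the UI as it arrives inside the `as_completed` loop rather than collecting them into a list.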

eru|1 month ago

> [...] I don't see how OpenAI can catch back up.

For a while people couldn't see how Google could catch up, either. Have a bit of imagination.

In any case, I welcome the renewed intense competition.

solarkraft|1 month ago

FWIW, my productivity tanks when my Claude allowance dries up in Antigravity. I don’t get the hype for Gemini for coding at all, it just does random crap for me - if it doesn’t throw itself into a loop immediately, which it did like all of the times I gave it yet another chance.

spiderfarmer|1 month ago

You must be using it to create bombs or something. I never ran into an issue that I would blame on policy guardrails.