Joeri | 17 hours ago

I already switched to Claude a while ago. Didn't bring along any context, just switched subscriptions, walked away from ChatGPT and haven't touched it again. Turned out to be a non-event; there really is no moat.

I switched not because I thought Claude was better at doing the things I want. I switched because I have come to believe OpenAI are a bad actor and I do not want to support them in any way. I’m pretty sure they would allow AGI to be used for truly evil purposes, and the events of this week have only convinced me further.

kdheiwns|15 hours ago

Yesterday was my first time trying it. One thing that felt a bit strange to me was that I asked it something and the response was just one paragraph. Which isn't bad or anything, but it felt... strange? Like I always need to preface a ChatGPT/Gemini/whatever question with "Briefly, what is..." or it gives me enough fluff to fill a 5-page high school essay. But I didn't need to do that and just got an answer that was to the point, without loads of shit that's barely related.

And the weirdest thing that I noticed: instead of skimming the response to try finding what was relevant, I just straight up read it. Kind of felt like I got a slight amount of focus ability back.

Accuracy is something I can't really compare yet (all chatbots feel generally the same for non-pro level queries), but so far, I'm fairly satisfied.

HarHarVeryFunny|7 hours ago

I use Gemini all the time, but I have to say it's got verbal diarrhea and an EXTREMELY annoying trait of wanting to lead the conversation rather than just responding to what YOU want to do. At the end of every response Gemini will always suggest a "next step", in effect trying to second-guess where you want the conversation to go. I'd much rather have an AI that just did what it was asked and let me decide what to ask next (often nothing - maybe it was just a standalone question!).

Apparently this annoying "next step" behavior is driven by the system prompt: the other day I was running Gemini 3 Thinking, and it was displaying its thoughts, which included a reminder to itself to check that it was maintaining a consistent persona and to make sure it had suggested a next step. I'd love to know the thought process of whoever at Google thought this would make for a natural or useful conversation flow! Could you imagine trying to have a conversation with a human who insisted on doing this?!

layer8|12 hours ago

One issue is that Claude’s web search abilities are more limited, for example it can’t search Reddit and Stack Overflow for relevant content.

Sharlin|12 hours ago

Heh, a while ago I wondered why ChatGPT had started to reply tersely, almost laconically. Then I remembered that I had explicitly told it to be brief by default in the custom personality settings… I also noticed that there are now various sliders to control things like how many emojis or bullet-point lists ChatGPT should use, which I thought was amusing. Anyway, these tools can be customized to adopt just about any style; there's no need to always prefix questions with "Briefly" or similar.

lkbm|8 hours ago

Yeah, I've always been a little confused why people use ChatGPT so heavily. It's better than it used to be (maybe thanks to custom configuration), but it still tends to respond like it's writing a Wikipedia article.

Wikipedia articles on demand are great, but not usually what I want.

davidee|2 hours ago

That tracks for me; longtime Claude and Claude Code Pro subscriber (not all of it has been good - but that's neither here nor there).

Over the last few iterations of Sonnet and Opus, Anthropic has definitely trained me to ask it to explain something "in detail" (or even "in great detail") when I want as much nuance as possible.

It used to be the inverse - way too much detail when I didn't want it.

skeledrew|11 hours ago

Yep, the experience is quite something. Another thing I've noticed, and you likely soon will too, is that Claude only attempts a follow-up if one is needed or the prompt is structured for it. Meanwhile, ChatGPT always prompts you with a choice of next steps. That can be nice, as sometimes the options contain improvements you never thought of and would like, but in lengthy conversations with a detailed plan it does things really piecemeal, as though trained to maximize engagement instead of getting to a final solution.

esperent|13 hours ago

> Which isn't bad or anything but it felt... strange?

On the contrary, it's great. It's fully capable of outputting a wall of text when required, so instead of feeling like I'm talking to something with a minimum word count requirement, I get an appropriately sized response to the task at hand.

mavamaarten|14 hours ago

In my limited experience, that's mostly since the 4.6 release. I noticed that with the same prompt it answers much more briefly. A bit jarring indeed, but I prefer it. Less BS and filler, and less electricity burned for nothing.

KellyCriterion|17 hours ago

> there really is no moat.

For ChatGPT and Gemini, yes.

But Claude has a very deep and big one: it's the only model that gets production-ready output from the first detailed prompt. Yesterday I had used up my tokens by noon, so I tried some output from Gemini & Co. I presented a working piece of code which is already in production:

1. Without noticing, it changed things like "Touple.First.Date.Created" and "Touple.Second.Date.Created" to "Touple.FirstDate" and "Touple.SecondDate", rendering the code broken

2. There was a const list of 12 definitions for a given context; when told to rewrite the function, it just cut 6 of these 12 definitions, making the code not compile - I asked why they were cut: "Sorry, I was just too lazy typing" ?? LOL

3. There is a list holding some items, "_allGlobalItems" - in the function it simply changed the name to "_items", and the code didn't compile

As said, a working version of a similar function was given up front.

With Claude, I never have such issues.

ptnpzwqd|16 hours ago

I have used Claude (incl. Opus 4.6) fairly extensively, and Claude still spits out quality that is far below what I would call production-ready - littered with smaller issues, plus the occasional larger blunder. Particularly when doing anything non-trivial, and even when guiding it in detail (although that admittedly reduces the number of larger structural issues).

Maybe it is tech-stack dependent (I have mostly used it with C#/.NET), but I have heard people say the same for C#. The only conclusion I have been able to draw from this is that people have very different definitions of production-ready, but I would really like to see some concrete evidence of Claude one-shotting a larger/complex C# feature or the like (with or without detailed guidance).

AlecSchueler|16 hours ago

> It's the only model that gets production-ready output from the first detailed prompt. Yesterday I had used up my tokens by noon, so I tried some output from Gemini & Co. I presented a working piece of code which is already in production:

One often hears that LLMs shine at greenfield code generation but all start to struggle when working with pre-existing code. It could be that this wasn't a like-for-like comparison.

That said, I do personally find that Claude produces far better results than its competitors.

ben_w|17 hours ago

That's been my experience too. I'm using the recent free trial of OpenAI Plus to vibe code, and from this I would say that if Claude Code is a junior with 1-3 years of experience, OpenAI's Codex is like a student coder.

otabdeveloper4|15 hours ago

> It's the only model that gets production-ready output from the first detailed prompt.

That's, just, like, your opinion, man.

littlestymaar|17 hours ago

> But for Claude, they have a very deep & big one: It's the only model that gets production-ready output from the first detailed prompt

That's not a moat though. Claude itself wasn't there 6 months ago and there's no reason to think Chinese open models won't be at this level in a year at most.

To keep its current position, Claude has to keep improving at the same pace as its competitors.

jccx70|11 hours ago

[deleted]

crossroadsguy|15 hours ago

I wrote off ChatGPT/OpenAI because of Sam Altman and those eyeball-scan things - so sort of even before all this was a rage and centre stage. Sometimes it's just a gut feeling, and while it may not always be accurate, if something doesn't "feel" right, maybe it is not right. No one else is all good either, but what I mean to say is that there are some entities/people who repeatedly don't feel right, who have things attached to them that never felt right, and you get a combined "gut feeling". At least that's how it was for me.

Buttons840|10 hours ago

I love no moat!

One day I'd like to set up a server in my basement that just runs a few really, really nice models, and then get some friends and co-workers to pay me $10 a month for unlimited access.

All with the understanding that if you hog the entire server I'm going to kick you off, and if you generate content that makes the feds knock on my door I'm turning over the server logs and your information. Don't be an idiot, and this can be a good thing between us friends.

It would be like running a private Minecraft server. Trust means people can usually just do what they want in an unlimited way, but "unlimited" doesn't necessarily mean you can start building an x86 processor out of redstone and lagging the whole server. And you can't make weird naked statues everywhere either.

Usually these things aren't issues among a small group. Usually the private server just means more privacy and less restriction.

afcool83|10 hours ago

Amazing how analogous this is to the early Internet, when people started running web servers out of their basements and then eventually graduated to being their own dial-in ISPs…

jacquesm|15 hours ago

> I’m pretty sure they would allow AGI to be used for truly evil purposes

It's perfectly possible that 'truly evil purposes' were the goal all along. Slogans and ethics departments are mere speed bumps on the way to generational wealth.

rustyhancock|17 hours ago

I know this will necessarily be a very unpopular opinion here.

I think HN in particular, as a crowd, is very vulnerable to the halo effect and groupthink when it comes to Anthropic.

Even being generous they are only very minimally a "better actor" than OpenAI.

However, we are so enthralled by their product that we tend to let that view bleed over into their ethics.

Saying you want your tools used in line with the US constitution, within the US, on one particular point is hardly a high moral bar - it's self-preservation.

All Anthropic have said is:

1. No mass domestic surveillance of Americans.

2. No fully autonomous lethal weapons yet.

My goodness, that's what passes for a high moral standard? Is anything that doesn't hit those very carefully worded points really not "evil"?

JauntyHatAngle|17 hours ago

Let's generalise a bit more here - every company at any time could completely heel-turn and do awful things. Even my favourite private companies (e.g. Valve) have done things that I would consider evil.

However, I think I'm not alone in generally wanting to do good while also wanting convenience. I know that really every bit of consumption I do is probably negative in some way, and there is no truly "apolitical" action anyone can take.

But can't I at least get annoyed and take my money somewhere else for the short amount of time another company is doing it better?

Yes, if OpenAI suddenly leaps forward with Codex and pounds Anthropic into the dust, I'll likely switch back despite my moral grievances. But when I can get mildly motivated to jump over to something that - to me - seems like a better morality, without much cost to me, I'll do it.

earthnail|17 hours ago

Well, they did stand up to the US administration and lost a lot of money in the process. That takes courage. They clearly were being bullied into compliance, and they stood their ground.

You can see the significance of this if you look at German Nazi history. If more companies had stood up to the administration, the Nazi state would have been significantly harder to build.

In my opinion, what Anthropic did is not a small thing at all.

jacquesm|15 hours ago

It's not high. But it is higher.

ekianjo|14 hours ago

Let's not forget they also lobby to forbid models from China and pretend that distillation is stealing. But somehow, just because they said no on two points, the majority of HN folks regard them as virtuous.

bko|10 hours ago

I never understood the point of this kind of comment. It doesn't add any value to the discussion. It's basically two paragraphs with a presupposition (OpenAI bad) and how the author is virtuous for canceling his subscription. No explanation, argument, or nuance. It's just virtue signaling. Actually... I guess I do know the point of this kind of comment. I just don't know why these comments get upvoted, even if you do agree OpenAI bad.

bossyTeacher|15 hours ago

I tried Claude recently (after they dropped the nonsensical requirement to give them your phone number) and I was surprised to see how significantly less sycophantic it was. ChatGPT, unless you are talking hard science, tends to be overly agreeable. Claude questions you a lot (you ask for x and it asks you things like: why are you interested in x; or, based on our previous convo, x might not be suitable for you; or, I see your point, but based on our previous convo, y is better than x; etc.). ChatGPT rarely does that.

Of course, there's also OpenAI being run by openly questionable people, while Dario so far doesn't seem anywhere near as bad, even if none of them are angels.

Gooblebrai|16 hours ago

Claude still doesn't have image generation?

Sammi|15 hours ago

Image generation isn't what most devs spend most of their time on?

wongarsu|12 hours ago

It is semi-competent at making SVGs, which are the only kind of images I really need in dev work.

For marketing or personal stuff I do sometimes want images, but I don't really mind going somewhere else for that

nkmnz|15 hours ago

Interesting. Have been using Gemini, Gpt and Claude extensively in parallel and never noticed that.

toss1|11 hours ago

I'm switching over to Claude from OpenAI, and I don't care. OpenAI's image generation is terrible anyway. Just try to get it to generate something to scale, like a cabinet for a specific kitchen or bathroom space. Give it all the explicit constraints, initial sketches, etc. it wants.

The results are laughably bad.

Sure, it does get some of the tones and features, but any kind of actual real-world constraint is so far off, and the dimension indicators it includes would be hilarious if they weren't so bad.

samiv|14 hours ago

I did the same thing and cancelled my OpenAI plan today. Besides boycotting it for their latest grifting, I also found it didn't really produce much value in my use cases.

Moving back to doing this archaic thing called using my own brain to do my work. Shocking.

mannanj|11 hours ago

Yes, they have a great marketing team and a powerful astroturfing presence though, especially with the recent "Claude beat up OpenClaw! OpenAI is supporting the community by buying it!" nonsense.

Though tbh, I hardly feel Claude is innocent either. When their safety engineer/leader left, I didn't see any statement from the Anthropic team - not one addressing his legitimate points for why he left. Instead we got an eager over-push in the media cycle of "Anthropic standing up to DOD! Here's why you can trust us!"

It all sounds too similar to propaganda and astroturfing to me.

neya|17 hours ago

[deleted]

pants2|10 hours ago

OpenAI actually does have two excellent OSS models; Anthropic does not. Not that OpenAI is 'open' per se, but it's more so than Anthropic. Also compare Codex vs Claude Code on extensibility.

Mashimo|15 hours ago

What is your definition of NPC behavior?

bsder|16 hours ago

> I swear HN is just a bunch of fanboys full of NPC behavior.

Why are you assuming these are real people and not NPCs?

The amount of money flowing around AI is staggering. To believe that the AI companies aren't flooding all the social media zones with propaganda is disingenuous.