(no title)
Joeri | 17 hours ago
I switched not because I thought Claude was better at doing the things I want. I switched because I have come to believe OpenAI are a bad actor and I do not want to support them in any way. I’m pretty sure they would allow AGI to be used for truly evil purposes, and the events of this week have only convinced me further.
kdheiwns|15 hours ago
And the weirdest thing that I noticed: instead of skimming the response to try finding what was relevant, I just straight up read it. Kind of felt like I got a slight amount of focus ability back.
Accuracy is something I can't really compare yet (all chatbots feel generally the same for non-pro level queries), but so far, I'm fairly satisfied.
HarHarVeryFunny|7 hours ago
Apparently this annoying "next step" behavior is driven by the system prompt, since the other day I was running Gemini 3 Thinking, and it was displaying its thoughts, which included a reminder to itself to check that it was maintaining a consistent persona, and to make sure that it had suggested a next step. I'd love to know the thought process of whoever at Google thought that this would make for a natural or useful conversation flow! Could you imagine trying to have a conversation with a human who insisted on doing this?!
layer8|12 hours ago
Sharlin|12 hours ago
lkbm|8 hours ago
Wikipedia articles on demand are great, but not usually what I want.
davidee|2 hours ago
Over the last few iterations of Sonnet and Opus, anthropic has definitely trained me to ask it to explain something "in detail" (or even "in great detail") when I want as much nuance as possible.
It used to be the inverse - way too much detail when I didn't want it.
skeledrew|11 hours ago
esperent|13 hours ago
On the contrary, it's great. It's fully capable of outputting a wall of text when required, so instead of feeling like I'm talking to something that has a minimum word count requirement, I get an appropriate sized response to the task at hand.
mavamaarten|14 hours ago
KellyCriterion|17 hours ago
For ChatGPT and Gemini, yes.
But Claude has a very deep and big one: it's the only model that gets production-ready output on the first detailed prompt. Yesterday I used up my tokens by noon, so I tried some output from Gemini & Co. I presented a working piece of code which is already in production:
1. It changed things like "Touple.First.Date.Created" and "Touple.Second.Date.Created" to "Touple.FirstDate" and "Touple.SecondDate" without noticing, rendering the code broken.
2. There was a const list of 12 definitions for a given context; when told to rewrite the function, it just cut 6 of those 12 definitions, so the code didn't compile. I asked why they were cut: "Sorry, I was just too lazy typing"?? LOL
3. There is a list holding some items, "_allGlobalItems" - in the function it simply renamed it to "_items", and the code didn't compile.
As I said, a working version of a similar function was given up front.
With Claude, I never have such issues.
ptnpzwqd|16 hours ago
Maybe it is tech-stack dependent (I have mostly used it with C#/.NET), but I have heard people say the same about C#. The only conclusion I have been able to draw from this is that people have very different definitions of production-ready, but I would really like to see some concrete evidence of Claude one-shotting a larger/complex C# feature or the like (with or without detailed guidance).
AlecSchueler|16 hours ago
One does often hear that where LLMs shine is greenfield code generation, but that they all start to struggle when working with pre-existing code. It could be that this wasn't a like-for-like comparison.
That said, I do personally feel Claude produces far better results than its competitors.
ben_w|17 hours ago
otabdeveloper4|15 hours ago
That's, just, like, your opinion, man.
littlestymaar|17 hours ago
That's not a moat though. Claude itself wasn't at this level 6 months ago, and there's no reason to think Chinese open models won't be there within a year at most.
To keep its current position, Claude has to keep improving at the same pace as the competition.
jccx70|11 hours ago
[deleted]
crossroadsguy|15 hours ago
Buttons840|10 hours ago
One day I'd like to set up a server in my basement that just runs a few really, really nice models, and then get some friends and co-workers to pay me $10 a month for unlimited access.
All with the understanding that if you hog the entire server I'm going to kick you off, and if you generate content that makes the feds knock on my door I'm turning over the server logs and your information. Don't be an idiot, and this can be a good thing between us friends.
It would be like running a private Minecraft server. Trust means people can usually just do what they want in an unlimited way, but "unlimited" doesn't necessarily mean you can start building an x86 processor out of redstone and lagging the whole server. And you can't make weird naked statues everywhere either.
Usually these things aren't issues among a small group. Usually the private server just means more privacy and less restriction.
afcool83|10 hours ago
jacquesm|15 hours ago
It's perfectly possible that 'truly evil purposes' were the goal all along. Slogans and ethics departments are mere speed bumps on the way to generational wealth.
rustyhancock|17 hours ago
I think HN in particular, as a crowd, is very vulnerable to the halo effect and groupthink when it comes to Anthropic.
Even being generous they are only very minimally a "better actor" than OpenAI.
However, we are so enthralled by their product that we tend to let the view bleed over to their ethics.
Saying "we want our tools used in line with the US constitution within the US" on one particular point is hardly a high moral bar; it's self-preservation.
All Anthropic have said is:
1. No mass domestic surveillance of Americans.
2. No fully autonomous lethal weapons yet.
My goodness that's what passes for a high moral standard? Really anything that doesn't hit those very carefully worded points is not "evil"?
JauntyHatAngle|17 hours ago
However, I would think I'm not alone in that I'm generally wanting to do good while also wanting convenience, I know that really every bit of consumption I do is probably negative in some ways, and there is no real "apolitical" action anyone can take.
But can't I at least get annoyed and take my money somewhere else for the short amount of time another company is doing it better?
Yes, if OpenAI suddenly leaps forward with Codex and pounds Anthropic into the dust, I'll likely switch back despite my moral grievances. But in a situation where I can get mildly motivated to jump over to something that - to me - seems like better morality, without much cost to me, I'll do it.
earthnail|17 hours ago
You can see the significance of this if you look at German Nazi history. If more companies had stood up to the administration, the Nazi state would have been significantly harder to build.
In my opinion, what Anthropic did is not a small thing at all.
jacquesm|15 hours ago
ekianjo|14 hours ago
unknown|12 hours ago
[deleted]
bko|10 hours ago
bossyTeacher|15 hours ago
Of course, there's also OpenAI being run by openly questionable people, while Dario so far seems nowhere near as bad, even if none of them are angels.
Gooblebrai|16 hours ago
Sammi|15 hours ago
wongarsu|12 hours ago
For marketing or personal stuff I do sometimes want images, but I don't really mind going somewhere else for that
nkmnz|15 hours ago
toss1|11 hours ago
The results are laughably bad.
Sure, it does get some of the tones and features, but any kind of actual real-world constraint is way off, and the dimension indicators it includes would be hilarious if they weren't so bad.
oldpersonintx|15 hours ago
[deleted]
samiv|14 hours ago
Moving back to doing this archaic thing called using my own brain to do my work. Shocking.
mannanj|11 hours ago
Though tbh I hardly feel Claude is innocent either. When their safety engineer/leader left, I didn't see any statement from the Anthropic team - not one - addressing his legitimate points about why he left. Instead we got an eager over-push in the media cycle of "Anthropic standing up to DOD! Here's why you can trust us!"
It all sounds too similar to propaganda and astroturfing to me.
neya|17 hours ago
[deleted]
pants2|10 hours ago
Mashimo|15 hours ago
bsder|16 hours ago
Why are you assuming these are real people and not NPCs?
The amount of money flowing around AI is staggering. To believe that the AI companies aren't flooding all the social media zones with propaganda is disingenuous.