JohnnyMarcone | 25 days ago
It appears they trend in the right direction:
- Have not kissed the Ring.
- Oppose blocking AI regulation that others support (e.g. they do not support banning state AI laws [2]).
- Committing to no ads.
- Willing to risk a Defense Department contract over objections to use for lethal operations [1]
The things that are concerning:
- Palantir partnership (I'm unclear about what this actually is) [3]
- Have shifted stances as competition increased (e.g. seeking authoritarian investors [4])
It's inevitable that they will have to compromise on values as competition increases, and I struggle to parse the difference between marketing and actually caring about values. If an organization cares about values, it's suboptimal not to highlight that at every point via marketing. The commitment to no ads is obviously good PR, but if it comes from a place of values, it's a win-win.
I'm curious, how do others here think about Anthropic?
[2]https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-reg...
[3]https://investors.palantir.com/news-details/2024/Anthropic-a...
mrdependable|25 days ago
Not that I've got some sort of hate for Anthropic. Claude has been my tool of choice for a while, but I trust them about as much as I trust OpenAI.
JohnnyMarcone|25 days ago
rhubarbtree|25 days ago
I’m not saying this is how it will play out, but this reads as lazy cynicism - which is a self-realising attitude and something I really don’t admire about our nerd culture. We should be aiming higher.
qudat|25 days ago
zombot|25 days ago
So, ideally, not at all?
libraryofbabel|25 days ago
And company execs can hold strong principles and act to push companies in a certain direction because of them, although they are always acting within a set of constraints and conflicting incentives in the corporate environment and maybe not able to impose their direction as far as they would like. Anthropic's CEO in particular seems unusually thoughtful and principled by the standards of tech companies, although of course as you say even he may be pushed to take money from unsavory sources.
Basically it's complicated. 'Good guys' and 'bad guys' are for Marvel movies. We live in a messy world and nobody is pure and independent once they are enmeshed within a corporate structure (or really, any strong social structure). I think we all know this, I'm not saying you don't! But it's useful to spell it out.
And I agree with you that we shouldn't really trust any corporations. Incentives shift. Leadership changes. Companies get acquired. Look out for yourself and try not to tie yourself too closely to anyone's product or ecosystem if it's not open source.
yoyohello13|25 days ago
That's the main reason I stick with iOS. At least Apple talks about caring about privacy. Google/Android doesn't even bother to talk about it.
Jayakumark|25 days ago
https://www.anthropic.com/news/anthropic-s-recommendations-o...
Also, Codex CLI and Gemini CLI are open source; Claude Code never will be. It's their moat, and even though it's reportedly 100% written by AI, the creator says it will never be open-sourced. Their model is: use ours, be it the model or Claude Code, but don't ever try to replicate it.
skerit|25 days ago
Epitaque|25 days ago
[deleted]
throwaw12|25 days ago
- Blocking access to others (Cursor, OpenAI, opencode)
- Asking to regulate hardware chips more, so that they don't get good competition from Chinese labs
- Partnerships with Palantir and the DoD, as if it weren't obvious how these organizations use technology and for what purposes.
At this scale, I don't think there are good companies. My hope is in open models, and the only labs doing good on that front are Chinese labs.
mym1990|25 days ago
signatoremo|25 days ago
As it's often said: if there's no such thing as a free product, you are the product. AI training is expensive, even for Chinese companies.
esbranson|25 days ago
> Asking to regulate hardware chips more
> partnerships with [the military-industrial complex]
> only labs doing good in that front are Chinese labs
That last one is a doozy.
derac|25 days ago
Zambyte|25 days ago
falloutx|25 days ago
There are no good guys; Anthropic is one of the worst of the AI companies. Their CEO is continuously threatening all of the white-collar workers, and they have engineers playing the 100x-engineer game on Xitter. They work with Palantir and support ICE. If anything, Chinese companies are ethically better at this point.
delaminator|25 days ago
Perhaps your moral bubble is not universal.
insane_dreamer|25 days ago
skybrian|25 days ago
They’re moving towards becoming load-bearing infrastructure, at which point answering specific questions about what you should do about it becomes rather situational.
deaux|25 days ago
rowyourboat|25 days ago
easterncalculus|25 days ago
No one who believes this should be in any position of authority in the AI space. Anthropic's marketing BS has basically been taken as fact on this website since they started and it's just so tiring to watch this industry fall for the same nonsense over and over and over again.
Anthropic is younger. That's why they're not doing ads. As soon as their spending ramps up in pursuit of AGI goals they won't actually reach, they will start running ads and begging the taxpayer for even more money.
adriand|25 days ago
I’m very pleased they exist and have this mindset and are also so good at what they do. I have a Max subscription - my most expensive subscription by a wide margin - and don’t resent the price at all. I am earnestly and perhaps naively hoping they can avoid enshittification. A business model where I am not the product gives me hope.
nilkn|25 days ago
astrange|25 days ago
Claude is somewhat sycophantic but nowhere near 4o levels. (or even Gemini 3 levels)
unknown|25 days ago
[deleted]
hackernews90210|23 days ago
I always find the CEO using hype and fear as his marketing strategy. A year ago, he came out and predicted a "blood-bath" for white-collar jobs. It seems designed to create a sense of anxiety on the receiving end.
agluszak|25 days ago
raincole|25 days ago
Hell, OpenAI was the good guy.
JumpinJack_Cash|25 days ago
Google delivered on their promise, and OpenAI, well, it's too soon to tell, but it's looking good.
The name OpenAI and its structure are relics from a world where the prevailing sentiment was heavy preoccupation with the potential accidental release of an AGI.
Now that it's time for products, the name and the structure no longer serve the goal.
unknown|25 days ago
[deleted]
4d4m|24 days ago
cedws|25 days ago
[0]: https://news.ycombinator.com/item?id=46873708
2001zhaozhao|25 days ago
Opencode ought to have similar usage patterns to Claude Code, being very similar software (if anything, Opencode would use fewer tokens, as it doesn't have some fancy features from Claude Code like plan files and background agents). Any subscription usage-pattern "abuses" that you can do with Opencode can also be done by running Claude Code automatically from the CLI. Therefore, restricting Opencode wouldn't really save Anthropic money; it would just move problem users from automatically calling Opencode to automatically calling CC. The move seems to be purely one to restrict subscribers from using competing tools and enforce a vertically integrated ecosystem.
In fact, their competitor OpenAI has already realized that Opencode is not really dissimilar from other coding agents, which is why they are comfortable officially supporting Opencode with their subscription in the first place. Since Codex is already open-source and people can hack it however they want, there's no real downside for OpenAI to support other coding agents (other than lock-in). The users enter through a different platform, use the service reasonably (spending a similar amount of tokens as they would with Codex), and OpenAI makes profit from these users as well as PR brownie points for supporting an open ecosystem.
In my mind being in control of the tools I use is a big feature when choosing an AI subscription and ecosystem to invest into. By restricting Opencode, Anthropic has managed to turn me off from their product offerings significantly, and they've managed to do so even though I was not even using Opencode. I don't care about losing access to a tool I'm not using, but I do care about what Anthropic signals with this move. Even if it isn't the intention to lock us in and then enshittify the product later, they are certainly acting like it.
The thing is, I am usually a vote-with-my-wallet person who would support Anthropic for its values even if they fall behind significantly compared to competitors. Now, unless they reverse course on banning open-source AI tools, I will probably revert to simply choosing whichever AI company is ahead at any given point.
I don't know whether Anthropic realizes how much these moves are pissing off its most loyal fanbase of conscientious consumers. Sure, we care about AI ethics and safety, but we also care about being treated well as consumers.
b3ing|25 days ago
drawfloat|25 days ago
mhb|25 days ago
romanovcode|24 days ago
> The things that are concerning: - Palantir partnership (I'm unclear about what this actually is) [3]
Dude, you cannot put these two sentences together. The defense department stance was either a fluke or a PR stunt. If they partner with Palantir, they absolutely do not care that their tech is going to be used for killing and other horrible deeds.
A company with morals (which does not exist, BTW) would never partner with Palantir.
marxisttemp|25 days ago
fragmede|25 days ago
yuiasdfj|25 days ago
[deleted]
threetonesun|25 days ago
mirekrusin|25 days ago