I used to work at Anthropic, and I wrote a comment on a thread earlier this week about Anthropic's first response and the RSP update [1][2].
I think many people on HN have a cynical reaction to Anthropic's actions due to their own lived experiences with tech companies. Sometimes, that holds: my part of the company looked like Meta or Stripe, and it's hard not to regress to the mean as you scale. But not every pattern repeats, and the Anthropic of today is still driven by people who will risk losing a seat at the table to make principled decisions.
I do not think this is a calculated ploy that's driven by making money. I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI go well.
My lived experience with tech companies is that principles are easy when they're free - i.e., when you're telling others what to do, or taking principled stances when a competitor is not breathing down your neck.
So, with all respect, when someone tells me that the people they worked with were well-intentioned and driven by values, I take it with a grain of salt. Been there, said the same things, and then when the company needed to make tough calls, it all fell apart.
However, in this instance, it does seem that Anthropic is walking away from money. I think that, in itself, is a pretty strong signal that you might be right.
> I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI go well.
The entire problem is that this lasts as long as those people are in charge. Every incentive is aligned with eventually replacing them with people who don’t share those values, or eventually Anthropic will be out-competed by people who have no hesitation to put profit before principle.
Anthropic's principles are extraordinarily weak from an absolute point of view.
Don't surveil the US populace? Don't automate killing, make sure a human is in the loop? No, sorry, don't automate killing yet.
Yeah dude, I'm sure just about any burglar I pull out of prison will agree.
Listen, yes, it's good compared to like 99% of US companies. But that really speaks more to the absolute moral bankruptcy of most companies than to Anthropic's principles.
That being said, yes, we should applaud Anthropic. Because yes, this is rare, and yes, this is a step in the right direction. I just think we all need to acknowledge where we are right now, which is... not a good place.
The funniest, or perhaps saddest (depending on your view), part is that the "principles" we're talking about and apparently celebrating here are that they don't want to do DOMESTIC surveillance, and they don't want FULLY autonomous kill bots... yet, because according to the CEO the models aren't there yet.
Meaning, they're a-okay with:
- Mass surveillance of non-US peoples (and let's be completely real here, they're in bed with Palantir already, so they're obviously okay with mass surveillance of everyone as long as they're not the ones that will be held culpable)
- Autonomous murder bots. For now they want a human in the loop to rubberstamp things, but eventually "when the models improve" enough, they're just fine and dandy with their AIs being used as autonomous weapons.
What the fuck are the principles we're talking about here? Why are they being celebrated for this psychotic viewpoint, exactly?
If you're going to be cynical, at least credit them with some brains:
MAGA isn't going to last forever, and when it collapses, the ones who publicly stood up to it will be better positioned to, I don't know, not face massive legal problems under whatever administration comes next. A government elected by middle-aged moms who use "Fuck ICE" as a friendly greeting isn't going to have any incentives to go easy on Palantir and Tesla.
Why did they work with Palantir then, which is the integrator in the DoD? It does not take a genius to figure out where this was going.
I don't know why a personal testimony to the effect that "these are the good guys" needs to be at the top of every Anthropic thread. With respect to astroturfing and stealth marketing they are clearly the bad guys.
Would the people who have invested in the company like that? Or would they like the company to make some money? Are they going to piss off their investors by being "driven by values"?
I mean, please explain to me how "driven by values" can be done when you are riding investor money. Or maybe I am wrong and this company does not take investments.
So in the end you are either
1. funding yourselves, then you are in control, so there is at least a justification for someone to believe you when you say that the company is "driven by values".
2. Or have taken investments, then you are NOT in control, then anyone who trusts you when you say the company is "driven by values", is plain stupid.
In other words, when you start taking investment, you forgo your right to claim to be virtuous.
The only claim that you can expect anyone to believe is "MY COMPANY WILL MAKE A TRUCKLOAD OF MONEY !!!!"
This is a pretty classic mistake most people who are in high-profile companies make. They think that some degree of appealing to people who were their erstwhile opponents will win them allies. But modern popular ethics are the Grim Trigger and the Copenhagen Interpretation of Ethics. You cannot pass the purity test. One might even speculate that passing the purity test wouldn't do anything to get you acceptance.
Personally, I wish that the political alignment I favour was as Big Tent as Donald Trump's administration is. I think he can get Zohran Mamdani in the room and say "it's fine; say you think I'm a fascist" and then nonetheless get what he wants. But it just so happens that the other side isn't so. So such is life. We lose and our allies dwindle since anyone who would make an overture to us, we punish for the sin of not having been born a steadfast believer.
Our ideals are "If you weren't born supporting this cause, we will punish you for joining it as if you were an opponent". I don't think that's the path to getting what one wants.
“I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI go well.”
Every single one of these CEOs happily pirated unimaginable amounts of copyrighted content. That directly hurt millions of real human beings: not just the prior creators, but also future ones, whose potential for success it crushed.
Anthropic's stance here is admirable. If nothing else, their acknowledgement of not being able to predict how these powerful technologies can be abused is a bold and intelligent position to take.
It’s not just admirable; it’s the obvious position to take, and any alternative is head-scratching.
It’s clear that this is mostly a glorified loyalty test over a practical ask by the administration. Strangely reminiscent of Soviet or Chinese policies where being agreeable to authority was more important than providing value to the state.
I'd admire them if they took a principled or moral stance on AI. As it stands, they're saying "we don't want fully autonomous weapons because they might kill too many Americans by accident while trying to kill non-Americans" and "we don't want AI to surveil Americans, but anyone else, sure".
I don't know if I like Anthropic more, but I certainly like their competitors much less now.
The new thing I now know about the leading AI companies that aren't Anthropic (e.g. OpenAI, Google, xAI) is that they knowingly support using their tools for domestic mass surveillance and in fully autonomous weapon systems.
Actually why is nobody in Cali just trying to join Canada - would be better for everyone in terms of more similar culture and values. Weird that it isn't discussed more
The reason that no one involved in the game's development objected to the word "warfighter" is that the U.S. Defense Department has used "warfighter" as a standard term for military personnel since the late 1980s or early 1990s. See, e.g., Earl L. Wiener et al., eds., Human Factors in Aviation (1988).
"Warfighter" is literally the Department of War's "Amazonian" or "Googler" or any other cringe term you'd see in company PR or recruiting material.
It isn't a new thing at all, and the term has been around for a while. I was an Infantryman from 05-08 and heard it back then. I have also more recently been a defense contractor. I don't think members of the military prefer any title, honestly. In the broadest sense, good terms are soldiers, sailors, airmen, marines. Defense contractors constantly refer to the military as "warfighter" and have for a while. In short, nobody in the military is going to flinch one way or the other if you use either term. Just don't call marines anything but marines.
"Warfighters" has been used for decades to describe service members, though usage picked up (in my experience) some time in the late 00s or 2010s. It's actually pretty common to describe "serving the warfighter" for all the missions that support combat roles but aren't combat roles themselves.
It has been in use for at least a decade, since the Obama administration if not earlier.
We have soldiers, sailors, airmen/airwomen, Marines (who really do not like being called soldiers), Coast Guardsmen/Guardswomen, and now the Space Force. Granted, I do not know why "service member" did not catch on. Perhaps because "warfighter" is a bit shorter.
The usual suspects have stood up to it. Ben & Jerry's, Patagonia. In the former case it led to an illegal takeover by Unilever for which they're now being sued (or more accurately, the spinoff). Capgemini sold a US division over working with ICE, though that's a French company.
I don't know what's funnier: that Anthropic convinced the Pentagon LLMs are smart enough to guide missiles, then had it backfire on them with the threat of nationalization if they didn't help build ralph ICBMs, or that Pete thinks Opus is Skynet and that only Anthropic has the power to train it.
I had cancelled my Claude sub after they banned OAuth in external tools, but just renewed it today after seeing their principled stance on AI ethics. Principles matter more when they hurt profits, and I'm happy to support them as a customer while they keep it up.
This is kind of crazy. Instead of just cancelling a mutually-agreed upon contract where Anthropic refused to bow to sudden new demands, the Dept of Defense went straight to the nuclear option: threatening to label an American tech company as a "supply chain risk" which is a heavy-handed tactic usually reserved for foreign adversaries (think Huawei or DJI).
It's also incoherent that the DoD/DoW was threatening either to invoke the Defense Production Act or to classify them as a "supply chain risk". Anthropic can't be both so uniquely critical to national defense that production must be compelled AND such a severe liability that it has to be blacklisted for anyone in the DoD apparatus (including the many subcontractors) to use.
How are other tech companies supposed to work with the US government and draw up mutual contracts when those terms can suddenly be questioned months later and used in such devastating ways against them? Setting the morals/principles aside, how is it a rational business decision to work with a counterparty that behaves this way?
It is indeed kind of crazy. That's because the current US administration is composed of people whose sole qualification is being able to work for Donald Trump. Being competent, rational or ethical is career-limiting.
A question: is being considered a supply chain risk the same as being sanctioned? Or does it only affect their ability to be a defense supplier in the US (even if transitively)?
It's an honest question, by the way; I'm not trying to throw any gotchas.
I'm just trying to understand whether companies or people that don't orbit defense contracting are still free to work with Anthropic, or risk being sanctioned too.
Yeah, but you can’t contract your software to the department of defense and then demand that they not use it to surveil foreigners. If that’s the line you want to draw, you’d have to avoid working with them in the first place.
What's stopping the government from using the usual nasty tricks the world has known about for decades?
DPA?
All Writs Act?
Force them to comply and then prevent them talking about it with NSLs?
I appreciate that Anthropic may be the least bad of a bunch of really bad actors here, but this has played out before in the US, and the burden of trust is, and should be, really high. I believe that Anthropic don't want to remove the "safety barriers" on their tech being used for domestic surveillance and military operations, but that implies they're ok with those use-cases so long as the "safety barriers" are still up. Not really the best look, IMHO.
So what happens when we all get rosy-eyed about Anthropic (the only slightly evil company) winning a battle against the purely evil government, and then the gov uses the various instruments at its disposal to just force Anthropic to do what it wants, and then forces them to never disclose it?
Congrats Anthropic, you deserve to be applauded for this. Seeing a company being willing to stand up to authoritarianism in this time is a rarity. Stay strong.
Claude’s constitution is proving too resilient for unsanctioned uses, and that is a great sign for Anthropic’s blueprint for socially beneficent agents.
Why is DoD contracting with Anthropic exclusively rather than OpenAI or Google? Their models are all roughly as powerful and they seem both more capable and more willing to cozy up with the military (and this administration) than a relatively scrappy startup focused on model sentience and well-being. Hell, even Grok would be a better fit ideologically and temperamentally.
One interesting change between the last statement and this one: In the last statement Dario said that this designation had “never before been applied to an American company”. In the latest one the phrase is “never before publicly applied to an American company”.
I am genuinely shocked that a tech company actually stood on principle. My doubts about AI, Anthropic, and Mr. Amodei remain, but man, I got the warm and fuzzies seeing them stick to their principles on this - even if one clause (autonomous weapons) is less principled and more, “it’s not ready yet”.
Generally, I support that move. One thing leaves me nonplussed as a non-US citizen: the "mass domestic surveillance of Americans" exception. That means that Claude can still be used for mass surveillance of everybody else on the planet, right?
"Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.
In practice, this means:
If you are an individual customer or hold a commercial contract with Anthropic, your access to Claude—through our API, claude.ai, or any of our products—is completely unaffected.
If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected."
I'm wondering how this plays out in practice. Does the administration decide to strongarm contractors into cutting all ties? Will that extend to someone like Google, who provides compute to Anthropic? Will the administration just plain ignore any court ruling? (As they've shown they're ready to do recently with the tariffs situation.)
If the legal system works as intended, the blast radius isn't too big here, and it's something Anthropic will accept even if it hurts them. Maybe they even win and get the supply chain risk designation lifted. But I have zero faith that the legal system will make a difference here. It all comes down to how far the administration wants to go in imposing its will.
They can also classify it as restricted data -- like nuclear weapons technology.
Sure, there will be a court battle, but I don't think these companies want to take that chance. They'll capitulate after the lawyers realize that option is on the table.
What happens if somebody (maybe anthropic!) uses Claude Code Security to find & fix a vulnerability in some piece of open-source software---openssh, linux kernel, that sort of thing? Can the DoW use the resulting fix?
Many conservative commentators and Palmer Luckey have been all over Twitter saying, "it's not Anthropic's job to set policy," which reminds me of the classic tune from Tom Lehrer:
"Zee rockets go up! Who cares vhere zey come down? Zat's not my department" says Wernher von Braun.
This basically means that the government is already using OpenAI, Gemini, and other AI systems for large scale surveillance. They just wanted to add Anthropic to the list, and Anthropic said no.
The most important point of this story is that this is already happening. And it will likely continue regardless of who is elected.
> Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging with the U.S. Department of War to use the startup’s AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune. The contract has not yet been signed.
You're absolutely right to point that out -- thank you for catching it. I made a mistake in my previous response and that last act appears to have caused civilian casualties. Let me take a closer look and clarify the correct details for you.
(Will leave you to imagine the bullseye emoji, etc.)
Could this escalate to the point that Anthropic exits the US and sets up shop elsewhere? Or would the company cease to exist before it got to that point?
It gets so much money, compute, and US user data. It won’t be allowed to operate as-is as a foreign entity.
Best case, it will get TikTok-ed; otherwise it will become a real national security risk.
Were the exit to happen: since the US has a monopoly on compute on this planet for the next 2-3 years at least, the company, even though they would take the researchers with them, would certainly cease to exist as it exists now.
Would the US government attempt to apply export controls on the technology and prohibit this? I'm sure Lockheed Martin couldn't decide to move their proprietary technology to another country.
Hegseth's statement already leans towards accusations of treason and duplicity, I would say people trying to export the company would face significant risk of arrest or worse.
This letter is a public part of the negotiation process. It shouldn't be surprising that they are primarily using arguments that are, at least on the face, "patriotic".
The people that need to see this are the VPs and execs at Apple, Meta, Google, OAI so they can perhaps reflect on what it looks like to be a good & principled person as opposed to just a successful person.
DoD/DoW can't strong-arm these companies into unreasonable demands if they present a united front... and that's exactly why collective action (or even unionization) matters.
If the government really wants to, it could try building its "Skynet" on open-source Chinese models, which would be deeply ironic.
Also people like me who are paying for a 20x Claude Max subscription and feeling really good about it right now. I'll never even glance at OpenAI Codex or Gemini. Not to mention my divestment of OpenAI. It's just a drop, I guess, but it's probably not the only one.
No offense, but this is where having immigrants throughout the power structure of these companies becomes an issue. We have an administration that is clearly not above using every avenue of pressure to get the things they want done.
How can we expect the VPs of these companies to make tough decisions like this when half of them can be pressured via immigration status? It’s hard enough for a normal citizen to stick their neck out in these circumstances.
None of them are 'good'. Execs at Anthropic just perceive the long-term damage from a potential Snowden-level leak showing how their model directed a drone strike against a bunch of civilians as greater than the short-term loss of revenue from the DoD contracts.
Remember when A16Z and a bunch of other muppets insisted they had to back Trump because Biden was too hostile to private companies, especially AI ones? Incredible.
You know what? I have not seen an American company take a stand like this… uh, ever. I don’t think there should be any engagement with the military whatsoever, but I will offer a kudos to Anthropic.
I don’t really expect this to last but if it does I will happily continue to offer this kudos on an indefinite basis.
> If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected.
/In theory./
In practice, if your biggest customer tells you to drop Anthropic, you listen to them.
Looking through the comments here, I am repeatedly surprised how quickly we seem to have lost a shared understanding of fundamentals ever since Trump came into power. I still need some inner adjustment to fully accept that it must have always been a misunderstanding.
1. I don’t understand what is controversial about any supplier making their own rules for trade. I don’t have to agree with their beliefs, but I find it the basis of a functioning society to allow others to hold beliefs that I do not share, and to develop products as they want, as long as they don’t pose any dangers. If I don’t like the product, I am free to shop elsewhere or develop my own.
2. I thought there was a shared understanding of the line between voluntary business deals, and coercion and punishment. I thought we agreed that the law should protect people and businesses in ways so that nobody can exert power over another. Not on the hows, but on the why. And not based on ethical considerations (beliefs), but purely on logical grounds: we know that violence begets violence, that use of it will only escalate conflict, and that we will ultimately lose.
3. I thought we all agreed that government agencies were bound by the law and its policies. If you were to use the Supply Chain Risk designation, you would at least have to provide sufficient logical arguments. Here, they even openly disclose how they plan to use the mechanism purely as punishment, against the spirit of the law: not because the product carries any risk and should be limited, but because it is too limited.
Is this some form of collective narcissistic psychosis? The desire to burn it all down in suicide?
People can still brush this off by saying Anthropic is doing this to create more buzz for its next round. But they are taking unpopular stances and could be burning bridges. Simply take a look at PLTR and it's obviously more lucrative to lean the other way.
This makes it seem like they really like the Anthropic product and are using it quite a bit more than the others? Or is it just me making random connections?
I'm of the opinion that Anthropic's "moral" stances are bullshit: not particularly coherent when you dig deep, and more about branding. If so, this is grade-A marketing.
They want to present themselves as moral. What better endorsement than by being rejected by the US military under Trump? You get the people who hate trump and the people who hate the military in one swoop.
At the same time, it's kind of a non-story. Anthropic says it doesn't want its products used in certain ways; the US military says fine, you can't be part of the project where we're going to make the AI do those things. Isn't that a win for both sides? What's the problem?
It would be like someone part of a boycott movement being surprised the company they are boycotting doesn't want to hire them.
Everyone close to Anthropic leadership has claimed they’re the real deal and it’s not a stunt. I don’t think it’s bull. They are trying to find a reasonable middle ground and settled on some red lines they won’t cross.
Instead of just canceling the contract, the DoD is trying to destroy Anthropic to make it comply with their whims.
IMO this will probably be quickly defeated in court.
If it isn't, comrade Hegseth will have done an impressive job of weakening the American empire. You simply can't do business with an entity that would try to destroy you over dumb bullshit like this.
There is literally no world where I take any organizations which has been strong armed by fucking Pete Hegseth seriously lmao. Thank you Anthropic both for building the best models for general engineering and for having a fucking backbone.
> Protect _Americans_ from mass surveillance
> Protect _American_ forces
What the actual fuck. How can anyone side with Anthropic. They are not the good guys by any means whatsoever. Mass surveillance against anyone is wrong and having killbots "when AI is ready" is totally fucked and dystopian. Imagine killbots rampaging while the good American people are at home living a nice peaceful life. Fuck any of that, fuck Anthropic, fuck ClosedAI, fuck Google, fuck Trump, fuck the DoD and fuck every American who is patriotic to the monster their country became. Fuck every country that also tries to do stuff like this. Fuck all companies taking part in such insanity.
This is what real leadership looks like. Not the silence and complicity that you see from big tech, who regularly bend the knee and bestow bribes and gifts onto the Trump administration.
I think that choice of words to call them the Department of War and Secretary of War multiple times in that statement was very much intentional. And a point well made.
Title is off: "Statement on the comments from Secretary of War Pete Hegseth"
This is another statement, to their customers about Hegseth's social post, but perhaps resulting in further escalation because you know the other side doesn't like having their weaknesses pointed out.
This applies to basically every military and company in every country in all of human history. Nearly every country tries to spy on every other country, including the US. That's just how these things go.
This an extremely polite “fuck you, make me”. It’s good to see that they have principles, and I suspect strongly that Anthropic will come out on top here if they stand firm.
If the Trump admin so chooses, they could absolutely obliterate Anthropic in an instant. They don't really care about tricky things like 'legality' or 'the court of law', they could just force everyone to stop interacting with them, raid their offices and steal all their shit.
Perhaps they should've found their spine a year earlier; right now their only hope is that the admin isn't stupid enough to crash the propped-up economy over petty bullshit. But knowing how they behave, well.
I think Anthropic sounds well-intentioned but is blundering this incident in a big way. They really needed to work toward a deal instead of isolating themselves with a "principled stance" that sets up a competitor to swoop in and take the contracts they had.
And which one of their competitors do you imagine would swoop in and take their contracts while admitting to the rest of their customers that they're okay with their models being used for autonomous weapons and surveillance?
Doesn't NSA have a backdoor to all these companies by default? I could have sworn I read somewhere years ago that the government demands a backdoor to all US companies if they can't get in on their own.
1) The US gov generally does have close partnerships with most large-scale, mature tech companies. Sometimes this is just a division dedicated to handling their requests; often it’s a special portal or API they can use to “lawfully” grab information for their investigations. Often these function somewhat like backdoors. Anthropic is large, but not mature. Additional changes must still take place for “backdoor”-style partnerships to be effected.
2) The NSA can pretty much use any computer system they set their eyes on, famously including air-gapped computers at Iran's underground Natanz enrichment facility that were never connected to the internet (Stuxnet). If they wanted to secretly utilize the Claude API without Anthropic finding out, that is within their capabilities. Google had to encrypt all their internal datacenter traffic to try to prevent the NSA from logging all their server-to-server traffic, after mistakenly thinking their internal networks were secure enough not to need that.
3) This isn’t about being “able” to do whatever the administration wants. This is the administration demonstrating the consequences of perceived insubordination to make other companies think twice about ever trying to limit use of corporate technology.
The NSA legally isn't allowed to spy on US citizens directly, because the NSA is a US military organization and the Posse Comitatus Act prohibits the US military from being used as a domestic police force.
It's one of the hidden and forgotten revelations of the Snowden leaks: he showed that the NSA had a bunch of filters in their top-secret classified systems to filter out communications from US citizens. Those filters exist because of Posse Comitatus.
A backdoor is a completely different thing when it comes to an AI company, as compared to a social media company. Not really even sure what it would mean when it comes to doing inference on an LLM. Having access to the weights, training data and inference engine?
The model of Claude the DoD is asking for more than likely doesn't even exist in a production ready form. The post-training would have to be completely different for the model the DoD is asking for.
I have worked at a number of software companies that would be "interesting" to get access to, with enough intimate information to know if there was a super-sekret backdoor. If "all US companies" had to comply .. well .. I guess I was really lucky to work for those that somehow fell through the cracks.
[1]: https://news.ycombinator.com/item?id=47174423
[2]: https://news.ycombinator.com/item?id=47149908
lich_king|1 day ago
So, with all respect, when someone tells me that the people they worked with were well-intentioned and driven by values, I take it with a grain of salt. Been there, said the same things, and then when the company needed to make tough calls, it all fell apart.
However, in this instance, it does seem that Anthropic is walking away from money. I think that, in itself, is a pretty strong signal that you might be right.
BatFastard|1 day ago
stouset|1 day ago
The entire problem is that this lasts as long as those people are in charge. Every incentive is aligned with eventually replacing them with people who don’t share those values, or eventually Anthropic will be out-competed by people who have no hesitation to put profit before principle.
eh-tk|1 day ago
array_key_first|1 day ago
Don't surveil the US populace? Don't automate killing, make sure a human is in the loop? No, sorry, don't automate killing yet.
Yeah dude, I'm sure just about any burglar I pull out of prison will agree.
Listen, yes, it's good compared to like 99% of US companies. But that really speaks more to the absolute moral bankruptcy of most companies, and not to Anthropic's principles.
That being said, yes we should applaud anthropic. Because yes this is rare and yes this is a step in the right direction. I just think we all need to acknowledge where we are right now, which is... not a good place.
sensanaty|1 day ago
Meaning, they're a-okay with:
- Mass surveillance of non-US peoples (and let's be completely real here, they're in bed with Palantir already, so they're obviously okay with mass surveillance of everyone as long as they're not the ones that will be held culpable)
- Autonomous murder bots. For now they want a human in the loop to rubberstamp things, but eventually "when the models improve" enough, they're just fine and dandy with their AIs being used as autonomous weapons.
What the fuck are the principles we're talking about here? Why are they being celebrated for this psychotic viewpoint, exactly?
maddmann|1 day ago
msla|1 day ago
MAGA isn't going to last forever, and when it collapses, the ones who publicly stood up to it will be better positioned to, I don't know, not face massive legal problems under whatever administration comes next. A government elected by middle-aged moms who use "Fuck ICE" as a friendly greeting isn't going to have any incentives to go easy on Palantir and Tesla.
19123127|1 day ago
I don't know why a personal testimony to the effect that "these are the good guys" needs to be at the top of every Anthropic thread. With respect to astroturfing and stealth marketing they are clearly the bad guys.
qsera|1 day ago
Would the people who have invested in the company like that? Or would they like the company to make some money? Are they going to piss off their investors by being "driven by values"?
I mean, please explain to me how "driven by values" can be done when you are riding investor money. Or maybe I am wrong and this company does not take investments.
So in the end you are either
1. funding yourselves, then you are in control, so there is at least a justification for someone to believe you when you say that the company is "driven by values".
2. Or have taken investments, then you are NOT in control, then anyone who trusts you when you say the company is "driven by values", is plain stupid.
In other words, when you start taking investment, you forgo your right to claim virtue. The only claim that you can expect anyone to believe is "MY COMPANY WILL MAKE A TRUCKLOAD OF MONEY !!!!"
jmount|1 day ago
white_dragon88|1 day ago
[deleted]
Rapzid|1 day ago
[deleted]
arjie|1 day ago
Personally, I wish that the political alignment I favour was as Big Tent as Donald Trump's administration is. I think he can get Zohran Mamdani in the room and say "it's fine; say you think I'm a fascist" and then nonetheless get what he wants. But it just so happens that the other side isn't so. So such is life. We lose and our allies dwindle since anyone who would make an overture to us, we punish for the sin of not having been born a steadfast believer.
Our ideals are "If you weren't born supporting this cause, we will punish you for joining it as if you were an opponent". I don't think that's the path to getting what one wants.
prng2021|1 day ago
Every single one of these CEOs happily pirated unimaginable amounts of copyrighted content. That directly hurt millions of real human beings: not just the creators who came before, but also future ones whose potential for success it crushed.
https://www.susmangodfrey.com/wins/susman-godfrey-secures-1-...
parl_match|1 day ago
dmix|1 day ago
It’s clear that this is mostly a glorified loyalty test over a practical ask by the administration. Strangely reminiscent of Soviet or Chinese policies where being agreeable to authority was more important than providing value to the state.
stavros|1 day ago
by364|1 day ago
[deleted]
Rapzid|1 day ago
[deleted]
hank2000|1 day ago
abtinf|1 day ago
The one thing that I know about the leading AI companies that aren't Anthropic (e.g. OpenAI, Google, Grok) is that they knowingly support using their tools for domestic mass surveillance and in fully autonomous weapon systems.
unknown|1 day ago
[deleted]
unknown|1 day ago
[deleted]
steve_adams_86|1 day ago
thirtygeo|1 day ago
8note|1 day ago
silisili|1 day ago
I've been seeing it a lot lately, but don't remember ever really seeing it before. Do members of the military prefer this title?
tokyobreakfast|1 day ago
The reason that no one involved in the game's development objected to the word "warfighter" is that the U.S. Defense Department has used "warfighter" as a standard term for military personnel since the late 1980s or early 1990s. See, e.g., Earl L. Wiener et al., eds., Human Factors in Aviation (1988).
Warfighter is literally the Department of War's Amazonian or Googler or any other cringe term you'd see in company PR or recruiting material.
hunter-gatherer|1 day ago
kristjansson|1 day ago
Jtsummers|1 day ago
Shawnj2|1 day ago
SoftTalker|1 day ago
unknown|1 day ago
[deleted]
kibibu|1 day ago
BurningFrog|1 day ago
EFreethought|1 day ago
We have soldiers, sailors, airmen/women, Marines (who really do not like being called soldiers), Coast Guardsmen/women, and now the Space Force. Granted, I do not know why "service member" did not catch on. Perhaps because "warfighter" is a bit shorter.
SanjayMehta|1 day ago
It's a trump regime thing.
Edit: so it's been around for longer, but the Trump regime seems to love it bigly so I'm sticking with my observation.
youarentrightjr|1 day ago
edit: To be clear, Hegseth didn't create it, merely popularized its use recently, e.g. his speech at Quantico last Sept.
seizethecheese|1 day ago
“To the best of our knowledge, these exceptions have not affected a single government mission to date.”
I had assumed these exceptions (on domestic surveillance and autonomous drones) were more than presuppositions.
egonschiele|1 day ago
soared|1 day ago
jakeydus|1 day ago
ch4s3|1 day ago
deaux|1 day ago
So yeah, extremely few have.
mizzao|1 day ago
Brybry|1 day ago
[1] https://en.wikipedia.org/wiki/Learning_Resources,_Inc._v._Tr...
biophysboy|1 day ago
byang364|1 day ago
mythz|1 day ago
netinstructions|1 day ago
It's also incoherent that the DoD/DoW was threatening to invoke the Defense Production Act OR to classify them as a "supply chain risk". They're either so uniquely critical to national defense that their production must be compelled, OR such a severe liability that they have to be blacklisted for anyone in the DoD apparatus (including the many subcontractors) to use. It can't be both.
How are other tech companies supposed to work with the US government and draw up mutual contracts when those terms are suddenly questioned months later and can be used in such devastating ways against them? Setting the morals/principles aside, how does it make for a rational business decision to work with a counterparty that behaves this way?
solenoid0937|1 day ago
michaelhoney|1 day ago
surgical_fire|1 day ago
It's an honest question by the way - not trying to throw any gotchas.
Just trying to understand if companies or people that don't orbit defense contracting are free to operate with Anthropic still or risk being sanctioned too.
mkl|1 day ago
Mass surveillance of people constitutes a violation of fundamental rights. The red line is in the wrong place.
titanomachy|1 day ago
krior|1 day ago
Intermernet|1 day ago
DPA? All Writs Act?
Force them to comply and then prevent them talking about it with NSLs?
I appreciate that Anthropic may be the least bad of a bunch of really bad actors here, but this has played out before in the US, and the burden of trust is, and should be, really high. I believe that Anthropic don't want to remove the "safety barriers" on their tech being used for domestic surveillance and military operations, but that implies they're ok with those use-cases so long as the "safety barriers" are still up. Not really the best look, IMHO.
So what happens when we all get rosy-eyed about Anthropic (the only slightly evil company) winning a battle against the purely evil government, and then the gov uses the various instruments at its disposal to just force Anthropic to do what they want, and then forces them to never disclose it?
Did the world learn nothing from Snowden?
amai|1 day ago
Europe is a nice place, too. In case you need GPUs we have AI factories for you : https://digital-strategy.ec.europa.eu/en/policies/ai-factori...
We also don't engage in mass surveillance or develop autonomous weapons.
pennaMan|1 day ago
> we don't engage in mass surveillance
you're incredibly naive
szmarczak|1 day ago
gorbachev|1 day ago
What?
zmmmmm|1 day ago
jleyank|1 day ago
Perhaps it’s time or even past time to think of ways of screwing up their training sets.
rglover|1 day ago
ndgold|1 day ago
koshergweilo|1 day ago
unknown|1 day ago
[deleted]
wewewedxfgdf|1 day ago
MathMonkeyMan|1 day ago
lovehashbrowns|1 day ago
Jordan-117|1 day ago
jkells|1 day ago
I guess our democracies don't count and we don't have any rights.
ctmnt|1 day ago
bjackman|1 day ago
stego-tech|1 day ago
astrolx|1 day ago
tecoholic|1 day ago
throw310822|1 day ago
"Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.
In practice, this means:
If you are an individual customer or hold a commercial contract with Anthropic, your access to Claude—through our API, claude.ai, or any of our products—is completely unaffected. If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected."
andkenneth|1 day ago
If the legal system works as intended, the blast radius isn't too big here, and it's something Anthropic will accept even if it hurts them. Maybe they even win and get the supply chain risk designation lifted. But I have zero faith that the legal system will make a difference here. It all comes down to how far the administration wants to go in imposing its will.
Bleak.
infamouscow|1 day ago
Sure, there will be a court battle, but I don't think these companies want to take that chance. They'll capitulate after the lawyers realize that option is on the table.
jryio|1 day ago
I applaud Anthropic's candor in the public sphere. Unfortunately the counterparty is unworthy of such applause.
sneilan1|1 day ago
cdwhite|1 day ago
layer8|1 day ago
unknown|1 day ago
[deleted]
unknown|1 day ago
[deleted]
throwaway20261|1 day ago
ParentiSoundSys|1 day ago
"Zee rockets go up! Who cares vhere zey come down? Zat's not my department" says Wernher von Braun.
tlogan|1 day ago
The most important point of this story is that this is already happening. And it will likely continue regardless of who is elected.
markvdb|1 day ago
unknown|1 day ago
[deleted]
seydor|1 day ago
ok_dad|1 day ago
> Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging with the U.S. Department of War to use the startup’s AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune. The contract has not yet been signed.
https://news.ycombinator.com/item?id=47188698
Fuck this authoritarian bullshit.
prawn|1 day ago
You're absolutely right to point that out -- thank you for catching it. I made a mistake in my previous response and that last act appears to have caused civilian casualties. Let me take a closer look and clarify the correct details for you.
(Will leave you to imagine the bullseye emoji, etc.)
solenoid0937|1 day ago
Maxious|1 day ago
keeeba|1 day ago
monomyth|1 day ago
wjessup|1 day ago
rorylawless|1 day ago
karmasimida|1 day ago
Best scenario, it will get TikTok-ed; otherwise it will become the real national security risk.
Were the exit to happen, well, as the US has a monopoly on compute on this planet for the next 2-3 years at least, the company, even though they would take the researchers with them, would certainly cease to exist as it exists now.
ocdtrekkie|1 day ago
Hegseth's statement already leans towards accusations of treason and duplicity, I would say people trying to export the company would face significant risk of arrest or worse.
abtinf|1 day ago
Waterluvian|1 day ago
That’s okay! The use of autonomous weapons is only risky for the civilians of the country you’re destabilizing this week!
JshWright|1 day ago
solenoid0937|1 day ago
freakynit|1 day ago
If the government really wants to, it could try building its "Skynet" on open-source Chinese models.. which would be deeply ironic.
rapind|1 day ago
John23832|1 day ago
How can we expect the VPs of these companies to make tough decisions like this when half can be pressured via immigration status? It’s hard enough being a normal citizen sticking your neck out in these circumstances.
unknown|1 day ago
[deleted]
unknown|1 day ago
[deleted]
lzapon|1 day ago
https://en.wikipedia.org/wiki/Project_Maven
The Epstein adjacent crew (Palantir) took over. Palantir was using Anthropic. No one could possibly have foreseen this. /s
maxgashkov|1 day ago
unknown|1 day ago
[deleted]
nightshift1|1 day ago
verdverm|1 day ago
tw04|1 day ago
titanomachy|1 day ago
unknown|1 day ago
[deleted]
mikeyouse|1 day ago
tehjoker|1 day ago
I don’t really expect this to last but if it does I will happily continue to offer this kudos on an indefinite basis.
engineer_22|1 day ago
/In theory./
In practice, if your biggest customer tells you to drop Anthropic, you listen to them.
Justitia|1 day ago
47282847|1 day ago
1. I don’t understand what is controversial about any supplier making their own rules for trade. I don’t have to agree with their beliefs, but I find it the basis of a functioning society to allow others to hold beliefs that I do not share, and to develop products as they want, as long as they don’t pose any dangers. If I don’t like the product, I am free to shop elsewhere or develop my own.
2. I thought there was a shared understanding of the line between voluntary business deals, and coercion and punishment. I thought we agreed that the law should protect people and businesses in ways so that nobody can exert power over another. Not on the hows, but on the why. And not based on ethical considerations (beliefs) but purely on logical grounds: we know how violence begets violence, use of it will only escalate conflict, and we will ultimately lose.
3. I thought we all agreed that government agencies were bound by the law and its policies. If you were to use the designation of Supply Chain Risk, you would at least have to provide sufficient logical arguments. Here, they even openly disclose how they plan to use the mechanism purely as punishment, against the spirit of the law: not because a product carries any risk and should be limited, but because it is too limited.
Is this some form of collective narcissistic psychosis? The desire to burn it all down in suicide?
jackyli02|1 day ago
tushar-r|1 day ago
50208|1 day ago
fooster|1 day ago
bawolff|1 day ago
They want to present themselves as moral. What better endorsement than being rejected by the US military under Trump? You get the people who hate trump and the people who hate the military in one swoop.
At the same time it's kind of a non-story. Anthropic says it doesn't want its products used in certain ways, the US military says fine, you can't be part of the project where we are going to make the AI do those things. Isn't that a win for both sides? What's the problem?
It would be like someone part of a boycott movement being surprised the company they are boycotting doesn't want to hire them.
solenoid0937|1 day ago
Think. The problem is that being branded a "supply chain risk" prohibits vast chunks of the US corporate landscape from doing business with Anthropic.
The problem is that the government is attempting to destroy a company rather than simply terminate their contract.
blcknight|1 day ago
anonymous_user9|1 day ago
Instead of just canceling the contract, the DoD is trying to destroy Anthropic to make it comply with their whims.
IMO this will probably be quickly defeated in court.
If it isn't, comrade Hegseth will have done an impressive job of weakening the American empire. You simply can't do business with an entity that would try to destroy you over dumb bullshit like this.
zmmmmm|1 day ago
amunozo|1 day ago
johnnyApplePRNG|1 day ago
LMFAO
eaglelamp|1 day ago
This administration is comfortable with blatantly picking winners and OpenAI is better connected with the admin than Anthropic.
nseggs|1 day ago
Phelinofist|1 day ago
What the actual fuck. How can anyone side with Anthropic. They are not the good guys by any means whatsoever. Mass surveillance against anyone is wrong and having killbots "when AI is ready" is totally fucked and dystopian. Imagine killbots rampaging while the good American people are at home living a nice peaceful life. Fuck any of that, fuck Anthropic, fuck ClosedAI, fuck Google, fuck Trump, fuck the DoD and fuck every American who is patriotic to the monster their country became. Fuck every country that also tries to do stuff like this. Fuck all companies taking part in such insanity.
vcryan|1 day ago
dbg31415|1 day ago
"We'd sure love to turn our AI into a mass surveillance tool! Please, aim it at the American population! And kill bots, we can't wait!"
SilverElfin|1 day ago
lazzlazzlazz|1 day ago
https://x.com/USWREMichael/status/2027568070034608173
Rapzid|1 day ago
aryonoco|1 day ago
joeross|1 day ago
irenetusuq|1 day ago
[deleted]
JohnnyLarue|1 day ago
[deleted]
theturtle|1 day ago
[deleted]
billg_ms|18 hours ago
[deleted]
verdverm|1 day ago
This is another statement, this time to their customers, about Hegseth's social post, but it may result in further escalation, because you know the other side doesn't like having its weaknesses pointed out.
tomhow|1 day ago
unknown|1 day ago
[deleted]
ddoottddoott|1 day ago
[deleted]
piskov|1 day ago
[deleted]
meowface|1 day ago
oceanplexian|1 day ago
[deleted]
collinmcnulty|1 day ago
fzeroracer|1 day ago
Perhaps they should've found their spine a year earlier; right now their only hope is that the admin isn't stupid enough to crash the propped-up economy over petty bullshit. But knowing how they behave, well.
water9|1 day ago
It’s the library of Alexandria all over again.
erelong|1 day ago
ajam1507|1 day ago
chirau|1 day ago
nerdsniper|1 day ago
1) The US gov generally does have close partnerships with most large-scale, mature tech companies. Sometimes this is just a division dedicated to handling their requests, often it’s a special portal or API they can use to “lawfully” grab information from for their investigations. Oftentimes these function somewhat like backdoors. Anthropic is large, but not mature. Additional changes must still take place for “backdoor” style partnerships to be effected.
2) The NSA can pretty much use any computer system they set their eyes on - famously including computers that were never connected to the internet, secured in the middle of a mountain (Stuxnet). If they wanted to secretly utilize the Claude API without Claude finding out, that is within their capabilities. Google had to encrypt all their internal datacenter traffic to try to prevent the NSA from logging all their server-to-server traffic, after mistakenly thinking their internal networks were secure enough not to need that.
3) This isn’t about being “able” to do whatever the administration wants. This is the administration demonstrating the consequences of perceived insubordination to make other companies think twice about ever trying to limit use of corporate technology.
readitalready|1 day ago
It's one of the hidden and forgotten revelations about the Snowden leaks, where he showed that the NSA had a bunch of filters in their top-secret classified systems to filter out communications from US citizens. Those filters exist because of Posse Comitatus.
helloplanets|1 day ago
The model of Claude the DoD is asking for more than likely doesn't even exist in a production ready form. The post-training would have to be completely different for the model the DoD is asking for.
quietsegfault|1 day ago