
Statement on the comments from Secretary of War Pete Hegseth

1154 points| surprisetalk | 1 day ago |anthropic.com

353 comments


lebovic|1 day ago

I used to work at Anthropic, and I wrote a comment on a thread earlier this week about Anthropic's first response and the RSP update [1][2].

I think many people on HN have a cynical reaction to Anthropic's actions due to their own lived experiences with tech companies. Sometimes, that holds: my part of the company looked like Meta or Stripe, and it's hard not to regress to the mean as you scale. But not every pattern repeats, and the Anthropic of today is still driven by people who will risk losing a seat at the table to make principled decisions.

I do not think this is a calculated ploy that's driven by making money. I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI to go well.

[1]: https://news.ycombinator.com/item?id=47174423

[2]: https://news.ycombinator.com/item?id=47149908

lich_king|1 day ago

My lived experience with tech companies is that principles are easy when they're free - i.e., when you're telling others what to do, or taking principled stances when a competitor is not breathing down your neck.

So, with all respect, when someone tells me that the people they worked with were well-intentioned and driven by values, I take it with a grain of salt. Been there, said the same things, and then when the company needed to make tough calls, it all fell apart.

However, in this instance, it does seem that Anthropic is walking away from money. I think that, in itself, is a pretty strong signal that you might be right.

BatFastard|1 day ago

I applaud Anthropic's choice. Choosing principle over money is a hard choice. I love Anthropic's products and wish them success!

stouset|1 day ago

> I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI to go well.

The entire problem is that this lasts as long as those people are in charge. Every incentive is aligned with eventually replacing them with people who don’t share those values, or eventually Anthropic will be out-competed by people who have no hesitation to put profit before principle.

array_key_first|1 day ago

Anthropic's principles are extraordinarily weak from an absolute point of view.

Don't surveil the US populace? Don't automate killing, make sure a human is in the loop? No, sorry, don't automate killing yet.

Yeah dude, I'm sure just about any burglar I pull out of prison will agree.

Listen, yes, it's good compared to like 99% of US companies. But that really speaks more to the absolute moral bankruptcy of most companies than to Anthropic's principles.

That being said, yes we should applaud anthropic. Because yes this is rare and yes this is a step in the right direction. I just think we all need to acknowledge where we are right now, which is... not a good place.

sensanaty|1 day ago

The funniest, or perhaps saddest (depending on your view), part is that the "principles" we're talking about and apparently celebrating here are that they don't want to do DOMESTIC surveillance, and they don't want FULLY autonomous kill bots... yet, because according to the CEO the models aren't there yet.

Meaning, they're a-okay with:

- Mass surveillance of non-US peoples (and let's be completely real here, they're in bed with Palantir already, so they're obviously okay with mass surveillance of everyone as long as they're not the ones that will be held culpable)

- Autonomous murder bots. For now they want a human in the loop to rubberstamp things, but eventually "when the models improve" enough, they're just fine and dandy with their AIs being used as autonomous weapons.

What the fuck are the principles we're talking about here? Why are they being celebrated for this psychotic viewpoint, exactly?

maddmann|1 day ago

This is an absolute rarity these days. Very appreciative of the true leadership on display here

msla|1 day ago

If you're going to be cynical, at least credit them with some brains:

MAGA isn't going to last forever, and when it collapses, the ones who publicly stood up to it will be better positioned to, I don't know, not face massive legal problems under whatever administration comes next. A government elected by middle-aged moms who use "Fuck ICE" as a friendly greeting isn't going to have any incentives to go easy on Palantir and Tesla.

19123127|1 day ago

Why did they work with Palantir then, which is the integrator in the DoD? It does not take a genius to figure out where this was going.

I don't know why a personal testimony to the effect that "these are the good guys" needs to be at the top of every Anthropic thread. With respect to astroturfing and stealth marketing they are clearly the bad guys.

qsera|1 day ago

>driven by values

Would the people who have invested in the company like that? Or would they like the company to make some money? Are they going to piss off their investors by being "driven by values"?

I mean, please explain to me how "driven by values" can be done when you are riding on investor money. Or maybe I'm wrong and this company doesn't take investments.

So in the end you are either

1. funding yourselves, then you are in control, so there is at least a justification for someone to believe you when you say that the company is "driven by values".

2. Or have taken investments, then you are NOT in control, then anyone who trusts you when you say the company is "driven by values", is plain stupid.

In other words, when you start taking investment, you forgo your right to claim virtue. The only claim that you can expect anyone to believe is "MY COMPANY WILL MAKE A TRUCKLOAD OF MONEY!!!!"

jmount|1 day ago

So many tech companies have the "high values" screed that it really just seems like a standard step in the money plan.

Rapzid|1 day ago

[deleted]

arjie|1 day ago

This is a pretty classic mistake most people who are in high-profile companies make. They think that some degree of appealing to people who were their erstwhile opponents will win them allies. But modern popular ethics are the Grim Trigger and the Copenhagen Interpretation of Ethics. You cannot pass the purity test. One might even speculate that passing the purity test wouldn't do anything to get you acceptance.

Personally, I wish that the political alignment I favour was as Big Tent as Donald Trump's administration is. I think he can get Zohran Mamdani in the room and say "it's fine; say you think I'm a fascist" and then nonetheless get what he wants. But it just so happens that the other side isn't so. So such is life. We lose and our allies dwindle since anyone who would make an overture to us, we punish for the sin of not having been born a steadfast believer.

Our ideals are "If you weren't born supporting this cause, we will punish you for joining it as if you were an opponent". I don't think that's the path to getting what one wants.

prng2021|1 day ago

“I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI to go well.”

Every single one of these CEOs happily pirated unimaginable amounts of copyrighted content. That directly hurt millions of real human beings: not just prior creators, but future ones, whose potential for success it crushed.

https://www.susmangodfrey.com/wins/susman-godfrey-secures-1-...

parl_match|1 day ago

Anthropic's stance here is admirable. If nothing else, their acknowledgement of not being able to predict how these powerful technologies can be abused is a bold and intelligent position to take.

dmix|1 day ago

It’s not just admirable it’s the obvious position to take and any alternative is head scratching.

It’s clear that this is mostly a glorified loyalty test over a practical ask by the administration. Strangely reminiscent of Soviet or Chinese policies where being agreeable to authority was more important than providing value to the state.

stavros|1 day ago

I'd admire them if they took a principled or moral stance on AI. As it stands, they're saying "we don't want fully autonomous weapons because they might kill too many Americans by accident while trying to kill non-Americans" and "we don't want AI to surveil Americans, but anyone else, sure".

by364|1 day ago

[deleted]

hank2000|1 day ago

Stay strong Anthropic. We just like you more for this.

abtinf|1 day ago

I don't know if I like Anthropic more, but I certainly like their competitors much less now.

The new thing that I know about leading AI companies that aren't Anthropic (i.e. OpenAI, Google, Grok, etc) is that they knowingly support using their tools for domestic mass surveillance and in fully autonomous weapon systems.

steve_adams_86|1 day ago

Anthropic is welcome to set up shop here in Canada! I hear Victoria BC is great. Absolutely brimming, overflowing with technology talent

thirtygeo|1 day ago

Actually why is nobody in Cali just trying to join Canada - would be better for everyone in terms of more similar culture and values. Weird that it isn't discussed more

8note|1 day ago

whats going on round tectoria/viatec nowadays? im looking to go buy a house there next

silisili|1 day ago

Not to intentionally sidetrack the conversation, but when did we start calling service members 'warfighters?'

I've been seeing it a lot lately, but don't remember ever really seeing it before. Do members of the military prefer this title?

tokyobreakfast|1 day ago

https://languagelog.ldc.upenn.edu/nll/?p=4339

The reason that no one involved in the game's development objected to the word "warfighter" is that the U.S. Defense Department has used "warfighter" as a standard term for military personnel since the late 1980s or early 1990s: Thus Earl L. Wiener et al., Eds. Human Factors in Aviation, 1988

Warfighter is literally the Department of War's Amazonian or Googler or any other cringe term you'd see in company PR or recruiting material.

hunter-gatherer|1 day ago

It isn't a new thing at all, and the term has been around for a while. I was an Infantryman from 05-08 and heard it back then. I have also more recently been a defense contractor. I don't think members of the military prefer any title, honestly. In the most broad sense, good terms are soldiers, sailors, airmen, marines. Defense Contractors constantly refer to the military as "warfighter" and have for a while. In short, nobody in the military is going to flinch one way or the other if you use either term. Just don't call marines anything but marines.

kristjansson|1 day ago

They want to make sure the whole Diversity of our armed forces (soldiers, sailors, marines, …) feel an Equitable and Inclusive share of the mention.

Jtsummers|1 day ago

"Warfighters" has been used for decades to describe service members, though usage picked up (in my experience) some time in the late 00s or 2010s. It's actually pretty common to describe "serving the warfighter" for all the missions that support combat roles but aren't combat roles themselves.

Shawnj2|1 day ago

I’ve always heard this term in use from a defense contractor

SoftTalker|1 day ago

It's a term that's been used at least back to the Bush 43 administration, probably older than that.

kibibu|1 day ago

I always associate it with fighter aircraft

EFreethought|1 day ago

It has been in use for at least a decade, since the Obama administration if not earlier.

We have soldiers, sailors, airmen/women, Marines (who really do not like being called soldiers), Coast Guardsmen/women, and now the Space Force. Granted, I do not know why "service member" did not catch on. Perhaps because "warfighter" is a bit shorter.

SanjayMehta|1 day ago

Around the time Hegseth was appointed secretary of war. It's a trump thing.

Edit: so it's been around for longer, but the Trump regime seems to love it bigly so I'm sticking with my observation.

It's a trump regime thing.

youarentrightjr|1 day ago

It's a Hegseth malapropism, which is why it's slightly disturbing that Dario continues to use it.

edit: To be clear, Hegseth didn't create it, merely has popularized its use recently. Eg his speech at Quantico last Sept

seizethecheese|1 day ago

This part stood out to me:

“To the best of our knowledge, these exceptions have not affected a single government mission to date.”

I had assumed these exceptions (on domestic surveillance and autonomous drones) were more than presuppositions.

egonschiele|1 day ago

Heck yeah, so happy to see Anthropic fighting. This is what real leadership looks like. I'd love to see the same from Google and OpenAI.

soared|1 day ago

Is this the first company to actually face to face stand up to the current administration?

jakeydus|1 day ago

Costco has been. When every other major company was scuttling their DEI initiatives Costco doubled down. Doesn’t seem to have impacted them yet.

ch4s3|1 day ago

No, a few law firms targeted by EOs fought them in court last year and won.

deaux|1 day ago

The usual suspects have stood up to it. Ben & Jerry's, Patagonia. In the former case it led to an illegal takeover by Unilever for which they're now being sued (or more accurately, the spinoff). Capgemini sold a US division over working with ICE, though that's a French company.

So yeah, extremely few have.

mizzao|1 day ago

Harvard is an analogue in the academic sphere, if you include organizations beyond just companies.

biophysboy|1 day ago

Hundreds of companies have filed lawsuits against the admin over the tariffs.

byang364|1 day ago

I don't know what's funnier: that Anthropic convinced the Pentagon LLMs are smart enough to guide missiles, then had it backfire on them with the threat of nationalization if they didn't help build ralph ICBMs, or that Pete thinks Opus is Skynet and that only Anthropic has the power to train it.

mythz|1 day ago

I had cancelled my Claude sub after they banned OAuth in external tools, but just renewed it today after seeing their principled stance on AI ethics. Principles matter more when they hurt profits; happy to support them as a customer whilst they keep theirs.

netinstructions|1 day ago

This is kind of crazy. Instead of just cancelling a mutually-agreed upon contract where Anthropic refused to bow to sudden new demands, the Dept of Defense went straight to the nuclear option: threatening to label an American tech company as a "supply chain risk" which is a heavy-handed tactic usually reserved for foreign adversaries (think Huawei or DJI).

It's also incoherent that the DoD/DoW was threatening to invoke the Defense Production Act OR classifying them as "supply chain risk". They're either too uniquely critical to national defense OR they're such a severe liability that they have to be blacklisted for anyone in the DoD apparatus (including the many subcontracts) to use.

How are other tech companies supposed to work with the US government and draw up mutual contracts when those terms are suddenly questioned months later and can be used in such devastating ways against them? Setting morals/principles aside, how does it make rational business sense to work with a counterparty that behaves this way?

solenoid0937|1 day ago

Are they just threatening to label? It seems to me like they have already labeled.

michaelhoney|1 day ago

It is indeed kind of crazy. That's because the current US administration is composed of people whose sole qualification is being able to work for Donald Trump. Being competent, rational or ethical is career-limiting.

surgical_fire|1 day ago

A question - is being considered a supply chain risk the same as being sanctioned? Or does it only affect their ability to be a defense supplier in the US (even if transitively)?

It's an honest question, by the way - not trying to throw any gotchas.

Just trying to understand whether companies or people that don't orbit defense contracting are free to operate with Anthropic still, or risk being sanctioned too.

mkl|1 day ago

> we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights

Mass surveillance of people constitutes a violation of fundamental rights. The red line is in the wrong place.

titanomachy|1 day ago

Yeah, but you can’t contract your software to the department of defense and then demand that they not use it to surveil foreigners. If that’s the line you want to draw, you’d have to avoid working with them in the first place.

krior|1 day ago

Americans cannot even be bothered to care about Americans; what makes you think they can be bothered to care about foreigners?

Intermernet|1 day ago

What's stopping the government from using the usual nasty tricks the world has known about for decades?

DPA? All Writs Act?

Force them to comply and then prevent them talking about it with NSLs?

I appreciate that Anthropic may be the least bad of a bunch of really bad actors here, but this has played out before in the US, and the burden of trust is, and should be, really high. I believe that Anthropic don't want to remove the "safety barriers" on their tech being used for domestic surveillance and military operations, but that implies they're ok with those use-cases so long as the "safety barriers" are still up. Not really the best look, IMHO.

So what happens when we all get rosy eyed about Anthropic (the only slightly evil company) winning a battle against the purely evil government, and then the gov use the various instruments at their disposal to just force anthropic to do what they want, and then force them to never disclose it?

Did the world learn nothing from Snowden?

amai|1 day ago

Dear Anthropic,

Europe is a nice place, too. In case you need GPUs we have AI factories for you : https://digital-strategy.ec.europa.eu/en/policies/ai-factori...

We also don't engage in mass surveillance or develop autonomous weapons.

pennaMan|1 day ago

> europe

> we don't engage in mass surveillance

you're incredibly naive

szmarczak|1 day ago

ASML is European too! Then they could make a deal with them. wink wink

gorbachev|1 day ago

> We also don't engage in mass surveillance

What?

zmmmmm|1 day ago

Congrats Anthropic, you deserve to be applauded for this. Seeing a company being willing to stand up to authoritarianism in this time is a rarity. Stay strong.

jleyank|1 day ago

Just don’t help big brother see more. If you job leads to such results, think hard whether that’s what you should be doing.

Perhaps it’s time or even past time to think of ways of screwing up their training sets.

rglover|1 day ago

Was bracing for another rug pull around all this, but kudos to Dario and co for their continued vigilance. Refreshing to see.

ndgold|1 day ago

Claude’s constitution is proving too resilient for unsanctioned uses, and that is a great sign for Anthropic’s blueprint for socially beneficent agents.

koshergweilo|1 day ago

It's kind of sad how an AI Startup defers more to its constitution than the actual government

wewewedxfgdf|1 day ago

Remember "small government"?

MathMonkeyMan|1 day ago

Smaller government has always been code for bigger me, at least in recent American politics. Now me is government, so bigger government.

lovehashbrowns|1 day ago

Happy to be a paying Anthropic customer right now.

Jordan-117|1 day ago

Why is DoD contracting with Anthropic exclusively rather than OpenAI or Google? Their models are all roughly as powerful and they seem both more capable and more willing to cozy up with the military (and this administration) than a relatively scrappy startup focused on model sentience and well-being. Hell, even Grok would be a better fit ideologically and temperamentally.

jkells|1 day ago

But of course, wholesale surveillance on the rest of the world is fine.

I guess our democracies don't count and we don't have any rights.

ctmnt|1 day ago

One interesting change between the last statement and this one: In the last statement Dario said that this designation had “never before been applied to an American company”. In the latest one the phrase is “never before publicly applied to an American company”.

bjackman|1 day ago

How do you imagine a secret designation would work..?

stego-tech|1 day ago

I am genuinely shocked that a tech company actually stood on principle. My doubts about AI, Anthropic, and Mr. Amodei remain, but man, I got the warm and fuzzies seeing them stick to their principles on this - even if one clause (autonomous weapons) is less principled and more, “it’s not ready yet”.

astrolx|1 day ago

Generally, I am supportive of that move. One thing leaves me nonplussed as a non-US citizen: the "mass domestic surveillance of Americans" exception. That means that Claude can still be used for mass surveillance of everybody else on the planet, right?

tecoholic|1 day ago

Yup. That’s pretty much how they make money. (Edit - in the context of contracting services to US government)

throw310822|1 day ago

From the statement:

"Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.

In practice, this means:

If you are an individual customer or hold a commercial contract with Anthropic, your access to Claude—through our API, claude.ai, or any of our products—is completely unaffected. If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected."

andkenneth|1 day ago

I'm wondering how this plays out in practice. Does the administration decide to strongarm contractors into cutting all ties? Will that extend to someone like Google, who provides compute to Anthropic? Will the administration just plain ignore any court ruling (as they've shown they're ready to do recently with the tariffs situation)?

If the legal system works as intended, the blast radius isn't too big here, and it's something Anthropic will accept even if it hurts them. Maybe they even win and get the supply chain risk designation lifted. But I have zero faith that the legal system will make a difference here. It all comes down to how far the administration wants to go in imposing its will.

Bleak.

infamouscow|1 day ago

They can also classify it as restricted data -- like nuclear weapons technology.

Sure, there will be a court battle, but I don't think these companies want to take that chance. They'll capitulate after the lawyers realize that option is on the table.

jryio|1 day ago

This is an appropriate response to unreasonable behavior.

I applaud Anthropic's candor in the public sphere. Unfortunately the counterparty is unworthy of such applause.

sneilan1|1 day ago

I'm a lot happier now being an anthropic customer.

cdwhite|1 day ago

What happens if somebody (maybe anthropic!) uses Claude Code Security to find & fix a vulnerability in some piece of open-source software---openssh, linux kernel, that sort of thing? Can the DoW use the resulting fix?

throwaway20261|1 day ago

I had subscriptions to both Anthropic and Openai. Cancelled my openai subscriptions. Companies without a modicum of ethics deserve to go extinct.

ParentiSoundSys|1 day ago

Many conservative commentators and Palmer Luckey have been all over Twitter saying, "it's not Anthropic's job to set policy," which reminds me of the classic tune from Tom Lehrer:

"Zee rockets go up! Who cares vhere zey come down? Zat's not my department" says Wernher von Braun.

tlogan|1 day ago

This basically means that the government is already using OpenAI, Gemini, and other AI systems for large scale surveillance. They just wanted to add Anthropic to the list, and Anthropic said no.

The most important point of this story is that this is already happening. And it will likely continue regardless of who is elected.

markvdb|1 day ago

Hours ago, OpenAI raised $110B.

seydor|1 day ago

This has been an exceptional publicity campaign for anthropic, among others

ok_dad|1 day ago

Don't worry, OpenAI will kneel for the king:

> Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging with the U.S. Department of War to use the startup’s AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune. The contract has not yet been signed.

https://news.ycombinator.com/item?id=47188698

Fuck this authoritarian bullshit.

prawn|1 day ago

Can just see it now.

You're absolutely right to point that out -- thank you for catching it. I made a mistake in my previous response and that last act appears to have caused civilian casualties. Let me take a closer look and clarify the correct details for you.

(Will leave you to imagine the bullseye emoji, etc.)

solenoid0937|1 day ago

Hopefully this causes an exodus of top talent from OpenAI. Anthropic needs all the help it can get.

keeeba|1 day ago

The gap between Anthropic and the other guys keeps growing

monomyth|1 day ago

based on the replies so far, hacker news is ideologically captured

wjessup|1 day ago

Any commentary about how adversaries won't have regulations?

rorylawless|1 day ago

Could this escalate to the point that Anthropic exits the US and sets up shop elsewhere? Or would the company cease to exist before it got to that point?

karmasimida|1 day ago

It gets so much money, compute, and US user data. It won't be allowed to operate as is as a foreign entity.

Best scenario, it will get TikTok-ed; otherwise it will become the real national security risk.

Were the exit to happen, well, as the US has a monopoly on compute on this planet for the next 2-3 years at least, the company, even though they would take the researchers with them, would certainly cease to exist as it does now.

ocdtrekkie|1 day ago

Would the US government attempt to apply export controls on the technology and prohibit this? I'm sure Lockheed Martin couldn't decide to move their proprietary technology to another country.

Hegseth's statement already leans towards accusations of treason and duplicity, I would say people trying to export the company would face significant risk of arrest or worse.

abtinf|1 day ago

Every other country is significantly less free than the US. America is freedom's last stand.

Waterluvian|1 day ago

> Allowing current models to be used in this way would endanger America’s warfighters and civilians.

That’s okay! The use of autonomous weapons is only risky for the civilians of the country you’re destabilizing this week!

JshWright|1 day ago

This letter is a public part of the negotiation process. It shouldn't be surprising that they are primarily using arguments that are, at least on the face, "patriotic".

solenoid0937|1 day ago

The people that need to see this are the VPs and execs at Apple, Meta, Google, OAI so they can perhaps reflect on what it looks like to be a good & principled person as opposed to just a successful person.

freakynit|1 day ago

DoD/DoW can't strong-arm these companies into unreasonable demands if they present a united front... and that's exactly why collective action (or even unionization) matters.

If the government really wants to, it could try building its "Skynet" on open-source Chinese models.. which would be deeply ironic.

rapind|1 day ago

Also people like me, who are paying for a 20x Claude Max subscription and feeling really good about it right now. I'll never even glance at OpenAI Codex or Gemini. Not to mention my divestment from OpenAI. It's just a drop, I guess, but it's probably not the only one.

John23832|1 day ago

No offense, but this is where having immigrants throughout the power structure of these companies becomes an issue. We have an administration that is clearly not above using all avenues to apply pressure to get the things they want done.

How can we expect the VPs of these companies to make tough decisions like this when half can be pressured via immigration status? It's hard enough being a normal citizen sticking your neck out in these circumstances.

lzapon|1 day ago

Google walked out in 2018 from project Maven, which is what this is about:

https://en.wikipedia.org/wiki/Project_Maven

The Epstein adjacent crew (Palantir) took over. Palantir was using Anthropic. No one could possibly have foreseen this. /s

maxgashkov|1 day ago

None of them are 'good'. Execs at Anthropic just weigh the long-term damage from a potential Snowden-level leak showing how their model directed a drone strike against a bunch of civilians as higher than the short-term loss of revenue from the DoD contracts.

tw04|1 day ago

I just want to point out how 1984 fascist dictatorship it still feels to call it “the department of war”. That’s not normal. None of this is normal.

titanomachy|1 day ago

In 1984, they called it the “ministry of peace”. If anything “defense” is more euphemistic than “war”.

mikeyouse|1 day ago

Remember when A16Z and a bunch of other muppets insisted they had to back Trump because Biden was too hostile to private companies, especially AI ones? Incredible.

tehjoker|1 day ago

You know what? I have not seen an American company take a stand like this... uh, ever. I don't think there should be any engagement with the military whatsoever, but I will offer a kudos to Anthropic.

I don’t really expect this to last but if it does I will happily continue to offer this kudos on an indefinite basis.

engineer_22|1 day ago

> If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected.

/In theory./

In practice, if your biggest customer tells you to drop Anthropic, you listen to them.

Justitia|1 day ago

I'm not sure if OpenAI knows that scooping this might hurt their brand by a lot.

47282847|1 day ago

Looking through the comments here, I am repeatedly surprised how quickly we seem to have lost a shared understanding of fundamentals ever since Trump came into power. I still need some inner adjustment to fully accept that it must always have been a misunderstanding.

1. I don’t understand what is controversial about any supplier making their own rules for trade. I don’t have to agree with their beliefs, but I find it the basis of a functioning society to allow others to hold beliefs that I do not share, and to develop products as they want, as long as they don’t pose any dangers. If I don’t like the product, I am free to shop elsewhere or develop my own.

2. I thought there was a shared understanding of the line between voluntary business deals and coercion and punishment. I thought we agreed that the law should protect people and businesses so that nobody can exert power over another. Not on the hows, but on the why. And not based on ethical considerations (beliefs) but purely on logical grounds: we know how violence begets violence; use of it will only escalate conflict, and we will ultimately lose.

3. I thought we all agreed that government agencies were bound by the law and its policies. If you were to use the designation of Supply Chain Risk, you would at least have to sufficiently provide logical arguments. Here, they even openly disclose how they plan to use the mechanism purely as punishment, against the spirit of the law, not because a product carries any risk and should be limited, but because it is too limited.

Is this some form of collective narcissistic psychosis? The desire to burn it all down in suicide?

jackyli02|1 day ago

People can still brush this off by saying Anthropic is doing this to create more buzz for its next round. But they are taking unpopular stances and could be burning bridges. Simply take a look at PLTR and it's obviously more lucrative to lean the other way.

tushar-r|1 day ago

This makes it seem like they really like the Anthropic product and are using it quite a bit more than the others? Or is it just me making random connections?

50208|1 day ago

This is what fighting early stage fascism looks like.

fooster|1 day ago

early stage? brownshirts shooting a woman in the face in her car, for the crime of driving off, is not early stage my dude.

bawolff|1 day ago

I'm of the opinion that Anthropic's "moral" stances are bullshit, not particularly coherent when you dig deep, and more about branding. If so, this is grade A marketing.

They want to present themselves as moral. What better endorsement than by being rejected by the US military under Trump? You get the people who hate trump and the people who hate the military in one swoop.

At the same time it's kind of a non-story. Anthropic says it doesn't want its products used in certain ways; the US military says fine, you can't be part of the project where we are going to make the AI do those things. Isn't that a win for both sides? What's the problem?

It would be like someone part of a boycott movement being surprised the company they are boycotting doesn't want to hire them.

solenoid0937|1 day ago

> What's the problem?

Think. The problem is that being branded a "supply chain risk" prohibits vast chunks of the US corporate landscape from doing business with Anthropic.

The problem is that the government is attempting to destroy a company rather than simply terminate their contract.

blcknight|1 day ago

Everyone close to Anthropic leadership has claimed they’re the real deal and it’s not a stunt. I don’t think it’s bull. They are trying to find a reasonable middle ground and settled on some red lines they won’t cross.

anonymous_user9|1 day ago

> What's the problem?

Instead of just canceling the contract, the DoD is trying to destroy Anthropic to make it comply with their whims.

IMO this will probably be quickly defeated in court.

If it isn't, comrade Hegseth will have done an impressive job of weakening the American empire. You simply can't do business with an entity that would try to destroy you over dumb bullshit like this.

zmmmmm|1 day ago

that doesn't even remotely represent what is happening here.

amunozo|1 day ago

So again: mass domestic surveillance of Americans is bad, but otherwise it's okay? Disgusting.

johnnyApplePRNG|1 day ago

>We have tried in good faith to reach an agreement with the Department of War

LMFAO

eaglelamp|1 day ago

Anthropic knew they were going to lose this contract to OpenAI, and this is an attempt to salvage publicity from the loss.

This administration is comfortable with blatantly picking winners and OpenAI is better connected with the admin than Anthropic.

nseggs|1 day ago

There is literally no world where I take seriously any organization that has been strong-armed by fucking Pete Hegseth, lmao. Thank you Anthropic, both for building the best models for general engineering and for having a fucking backbone.

Phelinofist|1 day ago

> Protect _Americans_ from mass surveillance

> Protect _American_ forces

What the actual fuck. How can anyone side with Anthropic. They are not the good guys by any means whatsoever. Mass surveillance against anyone is wrong and having killbots "when AI is ready" is totally fucked and dystopian. Imagine killbots rampaging while the good American people are at home living a nice peaceful life. Fuck any of that, fuck Anthropic, fuck ClosedAI, fuck Google, fuck Trump, fuck the DoD and fuck every American who is patriotic to the monster their country became. Fuck every country that also tries to do stuff like this. Fuck all companies taking part in such insanity.

vcryan|1 day ago

Amazing that Pete Hegseth is even a person that anyone would ever need to take seriously.

dbg31415|1 day ago

ChatGPT wasted no time bending over backwards to appease Trump.

"We'd sure love to turn our AI into a mass surveillance tool! Please, aim it at the American population! And kill bots, we can't wait!"

SilverElfin|1 day ago

This is what real leadership looks like. Not the silence and complicity that you see from big tech, who regularly bend the knee and bestow bribes and gifts onto the Trump administration.

Rapzid|1 day ago

Hegseth is the ultra-unqualified Secretary of Defense. Defense. JFC, even when "pushing back" everyone is capitulating.

aryonoco|1 day ago

I think the choice to call them the Department of War and Secretary of War multiple times in that statement was very much intentional. And a point well made.

joeross|1 day ago

Hegseth is so pathetic.

verdverm|1 day ago

Title is off: "Statement on the comments from Secretary of War Pete Hegseth"

This is another statement, to their customers about Hegseth's social post, but perhaps resulting in further escalation because you know the other side doesn't like having their weaknesses pointed out.

tomhow|1 day ago

Fixed, thanks!

piskov|1 day ago

[deleted]

meowface|1 day ago

This applies to basically every military and company in every country in all of human history. Nearly every single other country tries to spy on every single other country, including on the US. That's just how these things go.

collinmcnulty|1 day ago

This is an extremely polite "fuck you, make me". It's good to see that they have principles, and I suspect strongly that Anthropic will come out on top here if they stand firm.

fzeroracer|1 day ago

If the Trump admin so chooses, they could absolutely obliterate Anthropic in an instant. They don't really care about tricky things like 'legality' or 'the court of law', they could just force everyone to stop interacting with them, raid their offices and steal all their shit.

Perhaps they should've found their spine a year earlier; right now their only hope is that the admin isn't stupid enough to crash the propped-up economy over petty bullshit. But knowing how they behave, well.

water9|1 day ago

I fundamentally do not like the idea of one adult determining what knowledge another adult is entitled to.

It’s the library of Alexandria all over again.

erelong|1 day ago

I think Anthropic sounds well-intentioned but is blundering this incident in a big way. They really needed to work harder toward a deal instead of isolating themselves with a "principled stance" that sets up a competitor to swoop in and take the contracts they had.

ajam1507|1 day ago

And which one of their competitors do you imagine would swoop in and take their contracts while admitting to the rest of their customers that they're okay with their models being used for autonomous weapons and surveillance?

chirau|1 day ago

Doesn't the NSA have a backdoor into all these companies by default? I could have sworn I read somewhere years ago that the government demands a backdoor into all US companies if they can't get in on their own.

nerdsniper|1 day ago

3 parts to this:

1) The US gov generally does have close partnerships with most large-scale, mature tech companies. Sometimes this is just a division dedicated to handling their requests; often it's a special portal or API they can use to "lawfully" grab information for their investigations. Oftentimes these function somewhat like backdoors. Anthropic is large, but not mature. Additional changes must still take place for "backdoor"-style partnerships to be effected.

2) The NSA can pretty much use any computer system they set their eyes on, famously including computers that were never connected to the internet, secured in the middle of a mountain (Stuxnet). If they wanted to secretly utilize the Claude API without Anthropic finding out, that is within their capabilities. Google had to encrypt all their internal datacenter traffic to try to prevent the NSA from logging all their server-to-server traffic, after mistakenly thinking their internal networks were secure enough not to need that.

3) This isn’t about being “able” to do whatever the administration wants. This is the administration demonstrating the consequences of perceived insubordination to make other companies think twice about ever trying to limit use of corporate technology.

readitalready|1 day ago

The NSA legally isn't allowed to spy on US citizens directly, because the NSA is a US military organization and the Posse Comitatus Act prohibits the US military from being used as a US policing force.

It's one of the hidden and forgotten revelations about the Snowden leaks, where he showed that the NSA had a bunch of filters in their top-secret classified systems to filter out communications from US citizens. Those filters exist because of Posse Comitatus.

helloplanets|1 day ago

A backdoor is a completely different thing when it comes to an AI company, as compared to a social media company. Not really even sure what it would mean when it comes to doing inference on an LLM. Having access to the weights, training data and inference engine?

The model of Claude the DoD is asking for more than likely doesn't even exist in a production-ready form. The post-training would have to be completely different for the model the DoD is asking for.

quietsegfault|1 day ago

I have worked at a number of software companies that would be "interesting" to get access to, with enough intimate information to know if there was a super-sekret backdoor. If "all US companies" had to comply .. well .. I guess I was really lucky to work for those that somehow fell through the cracks.