So the Pentagon is strongarming a company into cooperation? That reminds me of how my alcoholic neighbor used to treat his family. It's almost as if someone let a mean drunk be in charge of the Pentagon.
Without reading every word of every embedded tweet, a part missing from the conversation is HOW they are strongarming.
It isn't in private. It's a public threat in the court of public opinion to apply societal pressure on the company. They are attempting to reshape Anthropic's decision into a tribal one, and hurt the brand's reputation within the tribe unless it capitulates.
> Without reading every word of every embedded tweet, a part missing from the conversation is HOW they are strongarming.
There are two possibilities:
> The government would likely argue that dropping the contractual restrictions doesn't change the product. Claude is the same model with the same weights and the same capabilities—the government just wants different contractual terms. […] Anthropic would likely argue the opposite: that its usage restrictions are part of what Claude is as a commercial service, and that Claude-without-guardrails is a product it doesn't offer to anyone. On this view, the government is asking for a new product, and the statute doesn't clearly authorize that.
and
> The more extreme possibility would be the government compelling Anthropic to retrain Claude—to strip the safety guardrails baked into the model's training, not merely modify the access terms. Here the characterization question seems easier: a retrained model looks much more like a new product than dropping contractual restrictions does. Admittedly, the government has a textual argument in its favor: the DPA's definitions of "services" include “development … of a critical technology item,” and the government could frame retraining Claude as exactly that. Whether courts would accept that framing, especially in light of the major questions doctrine, is another matter.

* https://www.lawfaremedia.org/article/what-the-defense-produc...

* https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950
A more extreme situation: could the DPA be used to nationalize the model so the government has ownership, and then allow access to more amenable AI players?
The top line of the article gives a big old hint: Anthropic signed a contract with the “Killing people” part of the government and now they’re putting on a show. No contract, no leverage.
The only threat the Pentagon has is to terminate the contract.

We don’t know this
It seems like an unfortunate reality that being a government contractor puts any company in any country at the whim of their government. AFAIK every government has 'pulled the rug out' from at least some contractors at some point.
The government as a whole 'strong-arms' many of its counterparties in a variety of situations; this is unfortunately nothing new, and far from an innovation by Hegseth. A more clearly illegal example (because the government was acting as a regulator, not a purchaser) is Operation Choke Point, though there are many others: https://en.wikipedia.org/wiki/Operation_Choke_Point
COVID and election discourse was (and is) massively influenced by foreign actors, and social media companies were disinclined to take action on that front, as it was good engagement. Thus the government was motivated to do something about it. This leads us to now, where we're looking at ID/citizenship requirements for much of what people consider "the internet".
Isn't this just whataboutism? I can't tell if you're defending the practice described in the post, trying to distract from it, or just going off on a tangent for no reason.
As if governments throughout history haven't constantly used threats to gain leverage? No need to take a personal shot at the guy in charge when this is SOP throughout the administration.
I don't like the "guy in charge" anyway but it's not clear the other major party would stand united against this if they were in power. While I believe they'd probably have hearings and debate it more, this may be one of those issues where the defense establishment usually gets what it wants no matter which party is in control. One party protesting an issue when they're in the minority can just be performative "point scoring" against their opposition - not a guarantee of what result they'd participate in engineering if they were in power.
Much like FISA court-enabled unaccountable surveillance, this may be another of the increasing number of things where neither major party will actually stop it. In terms of real-world outcomes, it doesn't much matter whether the party in control has just enough of their members (in the safest seats) vote with the minority to pass an unpopular measure or if they all vote for it. When the votes are stage managed in advance, the count being close is merely optics to further the narrative that the two major parties represent meaningfully different outcomes on every major issue.