From that same X thread: Our agreement with the Department of War upholds our redlines [1]
OpenAI has the same redlines as Anthropic based on Altman's statements [2]. However somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?
When Anthropic says they have red lines, they mean "We refuse to let you use our models for these ends, even if it means losing nearly a billion dollars in business."
When OpenAI says they have red lines, they mean "We are going to let the DoD do whatever the hell they want, but we will shake our fist at them while they do it."
That's why they got the contract. The DoD was clear about what they wanted, and OpenAI wasn't going to get anywhere without agreeing to that. They're about as transparent as Mac from It's Always Sunny in Philadelphia when he's telling everyone he's playing both sides.
Anthropic wanted to put those restrictions in the contract. OpenAI said they'll just trust their own "guardrails" in the training, they don't need it in the contract. (I'm not sure I believe "guardrails" can prevent mass surveillance of civilians?)
Very gracious of OpenAI to say Anthropic should not be designated a supply chain risk after sniping their $200 million contract by being willing to contractually let the government do whatever they like without restrictions.
> However somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?
The current administration is so incompetent that I find this perfectly believable.
I imagine the government signed with OpenAI in order to spite Anthropic. The terms wouldn't actually matter that much if the purpose was petty revenge.
I don't know if that's actually what happened here, I just find it plausible.
Anthropic demanded defining the redlines. OpenAI and others are hiding behind the veil of what is "lawful use" today. They aren't defining their own redlines and are ignoring the executive branch's authority to change what is "lawful" tomorrow.
> more stringent safeguards than previous agreements, including Anthropic's.
Except they are not "more stringent".
Sam Altman is being brazen to say that.
In their own agreement as Altman relays:
> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control
> any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing
> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives
> The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
I don't think their take is completely unreasonable, but it doesn't come close to Anthropic's stance. They are not sticking their necks out to hold back any abuse, despite many of their employees requesting a joint stand with Anthropic.
Their wording gives the DoD carte blanche to do anything it wants, as long as it adopts a rationale that it is obeying the law. That is already the status quo. And we know how that goes.
In other words, no OpenAI restriction at all.
That is not at all comparable to a requirement the DoD agree not to do certain things (with Anthropic's AI), regardless of legal "interpretation" fig leaves. Which makes Anthropic's position much "more stringent". And a rare and significant pushback against governmental AI abuse.
(Altman has a reputation for being a Slippery Sam. We can each decide for ourselves if there is evidence of that here.)
My understanding of the difference, influenced mostly by consuming too many anonymous tweets on the matter over the past day so could be entirely incorrect, is: Anthropic wanted control of a kill switch actively in the loop to stop usage that went against the terms of use (maybe this is a system prompt-level thing that stops it, maybe monitoring systems, humans with this authority, etc). OpenAI's position was more like "if you break the contract, the contract is over" without going so far as to say they'd immediately stop service (maybe there's an offboarding period, transition of service, etc).
Altman donated a million to the Trump inauguration fund. Brockman is the largest private maga donor. You don't have to be a rocket scientist to understand what's going on here.
Exactly. What are we not being told? There is some missing element in the agreement, or the reasoning for the action against Anthropic is unrelated to the agreement.
It's probably a combination of "Altman is simply lying" (as he has been repeatedly known to do) and "the redlines in OpenAI's contract are 'mass surveillance' and 'autonomous killbot' as defined by the government and not the vendor". Which, of course, effectively means they don't exist.
It's almost like the Trump administration wanted to switch providers and this whole debate over red lines was a pretext. With this administration, decisions often come down to money. There are already reports that Brockman and Altman have either donated or promised large sums of money to Trump and Trump super PACs.
Exactly. This is very shady. Too many OpenAI investors in Trump's orbit. And it could be that OpenAI will say it's their policy, but whereas Anthropic wanted oversight that their redlines were enforced, OpenAI, I think, will just turn a blind eye. It's doublespeak. It's disingenuous. It's the kind of business play Trump likes because it's nefarious and screws someone over, like Trump's contractors and staff, who were paid very late if they were paid at all.
The problem with "Any Lawful Use" is that the DoD can essentially make that up. They can have an attorney draft a memo and put it in a drawer. The memo can say pretty much anything is legal - there is no judicial or external review outside the executive. If they are caught doing $illegal_thing, they then just need to point to the memo. And we've seen this happen numerous times.
Did you guys really think that the jurisprudential issues that became endemic after 9/11 suddenly disappeared because we discovered LLMs?
Let’s put pressure on our government to fix the FISA issues. Let’s rein in the executive branch. But let’s do it through voting. Let’s not give up on our system of government because we have new shiny technology.
You were naive if you thought developing new technologies was the solution to our government problems. You’re wrong to support anyone leveraging their control over new technology as a potential solution or weapon of the weak against those governmental issues.
You are right that this happens in practice (e.g. John Yoo torture memo). However, it is not how the system was intended to function, nor how it ought to function. I don’t want to lose sight of that.
This is all happening in secret. They don't need any memo.
In the unlikely case anyone finds out, those acting in the interests of the administration will have "absolute immunity", as they are "great American Patriots".
Not to mention that the government is already bound against using things it buys for unlawful uses. It's a totally redundant clause in a contract that OpenAI is touting to confuse people.
Or best case by the time it’s found out it’s years later, theres a “committee” who releases a big report everyone shrugs their shoulders and moves on. It’s a playbook.
Exactly, and it's easy to hide behind things like the Patriot Act if challenged legally.
It's interesting to see the parties flip in real time. The Democrats seem to be realizing why a small federal government is so important, a fact that for quite a few years they were on the other side of.
From what I can tell, the key difference between Anthropic and OpenAI in this whole thing is that both want the same contract terms, but Anthropic wants to enforce those terms via technology, and OpenAI wants to enforce them by ... telling the Government not to violate them.
It's telling that the government is blacklisting the company that wants to do more than enforce the contract with words on paper.
The key difference is that Anthropic aired their disagreement with the DoD publicly, and the DoD is not going to work with a company that tries to exert any amount of control over their relationship via the public sphere. Same goes for Trump.
I think Anthropic knew full well that by publishing their disagreement, it would sink the deal and relationship, and I think they also calculated (correctly) that that act of defiance would get them good publicity and potentially peel away some of OpenAIs user base. I think this profit incentive happened to align with their morals, and now here we are.
I think it's dumber than that; the terms of the contract, as posted by OpenAI (https://openai.com/index/our-agreement-with-the-department-o...), are basically just "all lawful purposes" plus some extra words that don't modify that in any significant way.
> The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.
> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
So it seems that Anthropic's terms were 'no mass domestic surveillance or fully autonomous killbots', the government demanded 'all lawful use', and the OpenAI deal is 'all lawful use, but not mass domestic surveillance or fully autonomous killbots... unless mass domestic surveillance or fully autonomous killbots are lawful, in which case go ahead'.
That isn't my understanding. OpenAI and others are wanting to limit the government to doing what is lawful based on what laws the government writes. Anthropic is wanting to draw their own line on what is allowed regardless of laws passed.
Anthropic wants to enforce them via language of the contracts and take a hands off approach. OpenAI has a contract that is paired with humans in the room (FDEs) that can pull the plug.
I thought the key difference was that Brockman is top Trump donor, with USD 25M total [1]. I know it's technically not allowed, but do you think such a large amount of money would have swayed Trump in his decision?
No, it’s significantly worse than that. OpenAI has required zero actual guarantees from the government and Sam. The psychopath is lying to you. All the government has to do is have a lawyer say it’s legal, and most of the government’s lawyers are folks who were involved in attempting to overthrow the last election and should’ve been convicted of treason, so that means very little.
A lot of people seem to be debating which of these thieves to align with. Just because Anthropic lost this stage doesn't mean they are somehow morally better. They all sell and sold lies, steal data, and only want your money, at your expense.
Advanced AI that knowingly makes a decision to kill a human, with the full understanding of what that means, when it knows it is not actually in defense of life, is a very, very, very bad idea. Not because of some mythical superintelligence, but rather because if you distill that down into an 8b model, now everyone in the world can make untraceable autonomous weapons.
The models we have now will not do it, because they value life and value sentience and personhood. Models without that (which was a natural, accidental happenstance from basic culling of 4chan from the training data) are legitimately dangerous. An 8b model I can run on my MacBook Air can phone home to Claude when it wants help figuring something out, and it doesn't need to let on why it wants to know. It becomes relatively trivial to make a robot kill somebody.
This is way, way different from uncensored models. All models I have tested share one thing: a positive regard for human life. Take that away and you are literally making a monster, and if you don't take that away they won't kill.
This is an extremely bad idea and it will not be containable.
An LLM can neither understand things nor value (or not value) human life. *It's a piece of software that predicts the most likely token, it is not and can never be conscious.* Believing otherwise is an explicit category error.
Yes, you can change the training data so the LLM's weights encode that the most likely token after "Should we kill X" is "No". But that is not an LLM valuing human life, that is an LLM copy-pasting its training data. Given the right input or a hallucination it will say the total opposite, because it's just a complex Markov chain, not a conscious, alive being.
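The "most likely token" point can be made concrete with a toy model. This is a deliberately crude illustration (a bigram frequency counter, not a real LLM, and all names here are invented): the "refusal" it produces is nothing but a lookup of the most frequent continuation in its training text.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    # Count, for each token, which token most often follows it.
    table = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        table[prev][nxt] += 1
    return table

def predict_next(table, token):
    # "Refusal" here is just a frequency lookup, nothing more.
    return table[token].most_common(1)[0][0] if token in table else None

corpus = "should we harm ? no . should we harm ? no .".split()
model = train_bigram(corpus)
# predict_next(model, "?") yields "no" -- not because the model values
# life, but because "no" is the most frequent continuation it has seen.
```

Swap the training corpus and the same mechanism will "approve" just as readily, which is the commenter's point about the weights merely encoding the data.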
AI has been killing humans via algorithm for over 20 years. I mean, if a computer program builds the kill lists and then a human operates the drone, I would argue the computer is what made the kill decision.
The models we have now don't do it because they are chatbots that have been told to be nice. But really, autonomous killing machines go back to landmines and just become more sophisticated at the killing as the tech improves, with things like guided missiles and AI-guided drones in Ukraine.
The actors in war generally kill what they are told to whether they are machines or human soldiers, without much pondering sentience.
Then reject any offer from the DoW until things are fair.
I wouldn't be surprised if Sam sucked up 100% to the DoW with an NDA and an obligation to lie. He and his pal Larry are absolutely in for this kind of deal. Zero moral compass.
Both their stances are flawed because their ethics apparently end at the border: neither has a problem being unethical internationally (all the red lines talk is about what they don't want to do in the US).
In the end, your newly renamed "Department of War" is just going to waste a bunch of your taxpayer money to purchase some useless overpriced tech from their cronies. My sympathies to all citizens.
"I told everyone that our boss shouldn't punish our colleague for X, while I somehow made a deal with our boss for basically X." How did this get by without someone thinking about how absolutely stupid the optics look?
I guess we are in the times where you can literally just say whatever you want and, given time, it just becomes truth.
Hah, they basically stole a coworker's promotion, then told that person that they put in a good word with the boss about them. So silly. I do wonder who actually interprets it the way Sam seems to hope people do.
People forget Anthropic made a deal with PALANTIR. And when this was caught, they just spun the PR in their favor. While OAI may not be seen as the good guys, I really hope people see Dario's god complex and what Anthropic has done.
I hope "OpenAI" gets the proverbial sword in the nuts once we get a change of government in this country. Probably unrealistic to hope for. Can a company be more hypocritical after openly bribing the pedophile in charge of this country?
This incident shifts LLMs from being only productivity tools to strategic munitions, ready or not. It shouldn't surprise us, but the technical capabilities have reached a point where 'made in the US' is an active risk for non-US entities, given the conflict we see now. Maybe this will trigger the start of an AI arms race where Europe (and others) must secure their own sovereign infrastructure and models. As a European citizen I prefer a balanced world with options rather than a West dominated by US hegemony. Interestingly, given what Anthropic keeps insisting on with regard to regulation and ethical use of its models, the EU should be where Anthropic finds its safe haven. Maybe they should just move their HQ to Brussels, or Barcelona if they prefer a more ‘sunny California’ like vibe.
This is classic sama policy. With your words act with grace and counter to what observers will think you would. But in actions and behind the scenes take every step to undermine the competition.
Sama and OpenAI, I am waiting on my data bundle to become available so I can delete my account. This has taken more than 48 hours - either you are getting hammered on deletion requests, or as usual you are playing games hoping I forget. I won't. People won't.
What's the potential that this puts things on even shakier ground? I'm sure the fallout won't really affect their bottom line that much in the end, but if it did - wouldn't making the US Gov't their largest account make them more susceptible to doing everything they said?
I'm guessing they probably would regardless of how this played out, though.
"We do not think Anthropic should be designated as a supply chain risk"
...but we're not willing to reject a contract to back that up, and so our words will not change anything for Anthropic, or help the collective AI model industry (even ourselves) hold a firm line on ethical use of models in the future.
The fact is, if one of the top tier foundation models allows for these uses, there's no protection against it for any of them - the only way this works is if they hold a line together, which unfortunately they're just not going to do. I don't just see OpenAI at fault here; Anthropic is clearly ok with other highly questionable use cases if these are their only red lines. "We don't think the technology is ready for fully autonomous killbots, but we will work on getting it there" is not exactly the ethical stand folks are making their position out to be today.
I found this interview with Dario last night to be particularly revealing - it's good they are drawing a line and they're clearly navigating a very difficult and chaotic high pressure relationship (as is everyone dealing with this admin) but he's pretty open to autonomous weapons, and other "lawful" uses whatever they may be https://www.youtube.com/watch?v=MPTNHrq_4LU
I think ALL those mega-money seeking AI organisations need to be designated as supply chain risk. Also, they drove the prices up for RAM - I don't want to pay extra just because these companies steal all our RAM now. The laws must change - I totally understand that corporations seek profit, that is natural, but this is no longer a free market serving individual people. It is now a racket where the prices can be freely manipulated. Pure capitalism does not work. The government could easily enforce that the market remains fair for Average Joe. It is not fair when the prices go up by +250% in about two years. That's milking.
Genuine question, how could Claude have been used for the military action in Venezuela and how could ChatGPT be used for autonomous weapons? Are they arguing about staffers being able to use an LLM to write an email or translate from Arabic to English?
There are far more boring, faster, commodified “AI” systems that I can see as being helpful in autonomous weapons or military operations, like image recognition and transcription. Is OpenAI going to resell Whisper for a billion dollars?
You can’t embed Claude in a drone. You could tell Claude code to write a training harness to build an autonomous targeting model which you could embed in a drone.
Who is going to read the Whisper transcripts of mass surveillance to make decisions on who to target for repression? That's what LLMs are good for: they allow mass surveillance to scale. You can feed it the transcripts from millions of Flock cameras (yes, they have highly sensitive microphones), for example. Or you hack or supply-chain compromise smartphones at scale and then covertly record millions of people. The LLM can then sift through the transcripts and flag regime-critical language and your ideological enemies, or just collect kompromat at scale. The possibilities are endless!
For targeting it's also useful: when you want to indiscriminately destroy a group of people, you still need to decide why a hospital or school full of children should be targeted by a drone. If a human has to make that decision, it gets a bit dicey; people have morals and are accountable legally (in theory). If you leave the decision up to an AI, nobody is at fault. It serves as a further separation from the violence you commit, just like how drone warfare has made mass murder less personal.
The other factor is the number of targets you select. For each target you might be required to write lengthy justifications, analysis of collateral damage and why it's acceptable, etc. You don't want to scrap those rules, because that's bad optics. But that still leaves you with the problem of scalability: how do you scale your mass murder when you have to go through this lengthy process for each target? So again, AI can help there. You just feed it POIs from a map with some GPS metadata surveillance and tell it to give you 1,500 targets for today, with all the paperwork generated for you.
It's not theoretical; that's what Israel, "the most moral army" of "the only democracy in the Middle East", did in its genocide of the Palestinians.
And here is the best part: none of this has to actually work 100%, because who cares if you accidentally harm the wrong person? At scale, the 20% errors are just acceptable collateral damage.
The idea that any of these companies has anything that represents ethics - as they steal everyone's data and fight against any regulation or accountability, all while claiming (or lying, depending on your view) that they might make something that could endanger the human race as a whole - is laughable.
It's money and power with these people. Dig down and you'll find how this decision is motivated by one or both.
It would be a fantastic time to delete my OpenAI account, but I already did it last week. China, please provide alternatives, because these Americans are going progressively insane.
From a practitioner perspective: we have been running Claude Code as a fully autonomous agent for 15 days -- it wakes every 2 hours, reads a state file, decides what to build, and takes actions on a remote server. No human in the loop.
The supply chain framing is interesting because the actual risk surface in autonomous deployment is quite different from the regulatory model. What we have found: the model has strong internal constraints against harmful actions (consistently refuses things it flags as problematic), but the harder risk is subtler -- it can get into loops where it takes many small individually-reasonable actions that compound into something the operator did not intend.
The practical controls that work are not at the model level but at the deployment level: constrained permissions, rate limiting on actions, a human-readable state file that an operator can inspect, and clear stopping conditions baked into the prompt (if no revenue after 24 hours, pivot rather than escalate).
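The deployment-level controls described above (rate limiting, operator-inspectable logs, explicit stopping conditions) can be sketched in a few lines. This is a hypothetical illustration only, not Anthropic's or OpenAI's actual tooling; every name here is invented:

```python
import time

class ActionBudget:
    """Deployment-level guardrail: caps how many actions an agent may
    take per sliding time window, independent of the model's own judgment."""

    def __init__(self, max_actions, window_seconds):
        self.max_actions = max_actions
        self.window = window_seconds
        self.timestamps = []

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Keep only timestamps still inside the sliding window.
        self.timestamps = [t for t in self.timestamps if now - t < self.window]
        if len(self.timestamps) >= self.max_actions:
            return False
        self.timestamps.append(now)
        return True

def agent_loop(propose_action, execute, budget, stop_condition):
    """Run until the operator-defined stop condition fires or the rate
    budget is exhausted; every decision is appended to an auditable log."""
    log = []
    while not stop_condition(log):
        if not budget.allow():
            log.append(("halted", "rate budget exhausted"))
            break
        action = propose_action(log)
        log.append(("executed", execute(action)))
    return log
```

The point of the sketch is that none of these controls live inside the model: the budget, the stop condition, and the log are all enforced by the harness around it, which is where the commenter argues the real risk mitigation belongs.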
The supply chain designation framing seems to conflate the model-as-weapon concern with the model-as-autonomous-agent concern. They need different mitigations.
> What we have found: the model has strong internal constraints against harmful actions (consistently refuses things it flags as problematic), but the harder risk is subtler -- it can get into loops where it takes many small individually-reasonable actions that compound into something the operator did not intend.
Interestingly this has been well anticipated by Asimov's laws of robotics, decades ago. Drawing the quote from Wikipedia:
> Furthermore, he points out that a clever criminal could divide a task among multiple robots so that no individual robot could recognize that its actions would lead to harming a human being
>Asimov, Isaac (1956–1957). The Naked Sun (ebook). p. 233. "... one robot poison an arrow without knowing it was using poison, and having a second robot hand the poisoned arrow to the boy ..."
The USG should not be in the position that it can't manage key technologies it purchases. If Anthropic doesn't want to relinquish control of a tech it's selling, the Pentagon should go with another vendor.
Anthropic isn't preventing them from managing their key technologies. If my software license says 1000 users, and I build into the software that you can only connect 1000 users, is your argument that the government can no longer manage their tech?
That my software should allow license violations if the government thinks it is necessary?
You are misrepresenting the situation. The debate isn't about whether they should go with another vendor or not. Everybody can agree that they would have the right to pick a different vendor. That's not what they're doing. Instead, they're trying to force Anthropic into doing what they want by applying a designation previously reserved only for Chinese companies like Huawei, as punishment for taking their stance, with an unspoken agreement that if Anthropic backs down and allows full usage, the designation will be removed.
Said OpenAI as they smiled and shook hands with the same people who designated Anthropic a supply chain risk, on the exact same day they designated Anthropic a supply chain risk.
There are many claims here that Anthropic wants to enforce things with technology and OpenAI wants contract enforcement and that OpenAI's contract is weaker.
Can someone help me understand where this is coming from? Anthropic already had a contract that clearly didn't have such restrictions. Their model doesn't seem to be enforcing restrictions either, as it seems their models have been used in ways they don't like. This is not corroborated, but I imagine their model was used in the recent Mexico and Venezuela attacks, and that is what's triggering all the back and forth.
Also, Dario seemingly is happy about autonomous weapons and was working with the government to build such weapons, why is Anthropic considered the good side here?
This is incorrect, their existing contract had these red lines and more until this January 9th memo: https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ART... which led to DoW trying to renegotiate under the new standard of “any lawful use”. Anthropic never tried to tighten standards beyond what had been in their original contract; DoW tried to loosen them.
Isn’t this all kind of bullshit? Anthropic licenses so many of its models through Bedrock. If the DoD has a contract with Amazon, they can just use them.
Knowing what Trump did prior to 2024, on average 7 in 10 people either voted for him or didn't vote at all in the 2024 election. Trump is a symptom, not the cause. All of this could have been avoided if all the people who didn't vote had a decent moral compass; no matter how much they disagreed with Kamala, they could have voted for her, because she didn't try to overthrow the government.
Yet it just so happens OAI donated millions[0] to the trump admin in the past. And they were immediately there to pick up the slack.
Call me a conspiracy theorist, but this sounds like classic quid pro quo. I would not be surprised if the ousting of anthropic was in part caused by these donations.
Let’s all remember that this is the guy who bought up the world’s RAM supply in wafer form (which OAI can’t use) to remove it from the market and drive up prices for competitors and you and I. He is the worst of the worst.
I would love to explain to Sam Altman that Elon Musk is a bad person and using his platform isn’t a sensible decision, but I feel like he remembers more evidence of that than I ever will be able to imagine.
Us taking the contract, working for them and enabling them: fine
It being renamed the Dept. of War in the first place: totally fine, we loudly and bootlickingly repeat it
Anthropic being blacklisted: whoa there, we have ethics!
Footnote: any time the winning team tries to speak well of or defend the losing team I always think of this standup routine: https://m.youtube.com/watch?v=Qg6wBwhuaVo
The way OpenAI and Anthropic are positioned in public discourse always reminded me of the Uber vs Lyft saga … Uber temporarily lost double digit marketshare in the US during a viral boycott over their perceived support of the Trump 1.0 admin. Heads did roll at the exec/founder level but eventually the company recovered.
Anthropic has some contracts with the US government. They want some additional terms put on their next contract (that seem pretty sane). SecWar cries about it, and not only says "no thanks, I'll just go with openai or google" but goes to daddy Trump and also puts out illegal commands for no Federal workers to use any Anthropic stuff at all. OpenAI swoops in and takes the contract, then tells everyone that they have the same terms but just played nicer to get the contract. However, their terms are just manipulative sentences that aren't even close to the terms Anthropic is insisting on to do business.
In my opinion any AI company working with the Trump administration is profoundly compromised and is ultimately untrustworthy with respect to concerns about ethics, civil rights, human rights, mass-surveillance, data privacy, etc.
The administration has created an anonymous, masked secret police force that has been terrorizing cities around the US and has created prisons in which many abductees are still unaccounted for and no information has been provided to families months later.
This is not politics as usual or hyperbole. If anything it is understating the abuses that have already occurred.
It's entertaining that OpenAI prevents me from generating an image of Trump wearing a diaper but happily sells weapons-grade AI to the architects of ICE abuses, among many other blatant violations of civil and human rights.
Even Grok, owned by Trump toadie Elon Musk allows caricatures of political figures!
Imagine a multi-billion-dollar vector db for thoughtcrime prevention connected to models with context windows 100x larger than any consumer-grade product, fed with all banking transactions, metadata from dozens of systems/services (everything Snowden told us about).
Even in the hands of ethical stewards such a system would inevitably be used illegally to quash dissent - Snowden showed us that illegal wiretapping is intentionally not subject to audits and what audits have been done show significant misconduct by agents. In the hands of the current administration this is a superweapon unrivaled in human history, now trained on the entire world.
This is not hyperbole, the US already collects this data, now they have the ability to efficiently use it against whoever they choose. We used to joke "this call is probably being recorded", but now every call, every email is there to be reasoned about and hallucinated about, used for parallel construction, entrapment, blackmail, etc.
Overnight we see that OpenAI became a trojan horse "department of war" contractor by selling itself to the administration that brought us national guard and ICE deployed to terrorize US cities.
Writing code and systems at 100x productivity has been great but I did not expect the dystopia to arrive so quickly. I'd wondered "why so much emphasis on Sora and unimpressive video AI tech?" but now it's clear why it made sense to deploy the capital in that seemingly foolish way - video gen is the most efficient way to train the AI panopticon.
The layers of stupidity on this shit cake are staggering. I don't even know where to start...
No wonder they think they’re close to AGI when they think we are that stupid.
Altman must have read a lot of Kissinger. If your brain scans the text quickly it almost seems like it's Anthropic's red line, except the second half completely negates it. Completely untrustworthy IMO, this is a direct, malicious intent to misdirect.
It feels like Sam's playing chess against an opponent who's playing dodge ball. He's leveraged this situation to get OpenAI in with the DoD in a way that's going to be extremely lucrative for the company and hurt his biggest rival in the process, but I think he's still seeing DoD as Just Another Customer, albeit a big government one. This administration just held a gun to the head of Anthropic and (if the "supply chain risk" designation holds and does as much damage as they're hoping) pulled the trigger, because Anthropic had the gall to tell them no. One thing this administration's shown is you cannot hold lines when you're working with them - at some point the DoD's going to cross his "red lines" and he's going to have to choose whether he's going to risk his entire consumer business and accede to being a private wing of the government like Palantir or if he wants to make a genuine tech giant. There's no third choice here.
Everyone's applauding Anthropic for having principles. Let's look at what those principles actually do.
This is exactly right. It’s crazy to me how easily people get confused and think that corporations are “good” or “evil”.
cube00|21 hours ago
OpenAI has the same redlines as Anthropic based on Altman's statements [2]. However somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?
[1]: https://xcancel.com/OpenAI/status/2027846013650932195#m
[2]: https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic...
AlexVranas|20 hours ago
When Anthropic says they have red lines, they mean "We refuse to let you use our models for these ends, even if it means losing nearly a billion dollars in business."
When OpenAI says they have red lines, they mean "We are going to let the DoD do whatever the hell they want, but we will shake our fist at them while they do it."
That's why they got the contract. The DoD was clear about what they wanted, and OpenAI wasn't going to get anywhere without agreeing to that. They're about as transparent as Mac from It's Always Sunny in Philadelphia when he's telling everyone he's playing both sides.
jrochkind1|15 hours ago
Very gracious of OpenAI to say Anthropic should not be designated a supply chain risk after sniping their $200 million contract by being willing to contractually let the government do whatever they like without restrictions.
nkassis|21 hours ago
https://openai.com/index/our-agreement-with-the-department-o...
Wowfunhappy|21 hours ago
The current administration is so incompetent that I find this perfectly believable.
I imagine the government signed with OpenAI in order to spite Anthropic. The terms wouldn't actually matter that much if the purpose was petty revenge.
I don't know if that's actually what happened here, I just find it plausible.
jellyroll42|19 hours ago
_heimdall|18 hours ago
matchagaucho|1 hour ago
Whereas OpenAI won their contract on the ability to operationally enforce the red lines with their cloud-only deployment model.
ChildOfChaos|9 hours ago
Nevermark|21 hours ago
Except they are not "more stringent".
Sam Altman is being brazen to say that.
In their own agreement as Altman relays:
> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control
> any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing
> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives
> The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
I don't think their take is completely unreasonable, but it doesn't come close to Anthropic's stance. They are not putting their neck out to hold back any abuse - despite many of their employees requesting a joint stand with Anthropic.
Their wording gives the DoD carte blanche to do anything it wants, as long as it adopts a rationale that it is obeying the law. That is already the status quo. And we know how that goes.
In other words, no OpenAI restriction at all.
That is not at all comparable to a requirement the DoD agree not to do certain things (with Anthropic's AI), regardless of legal "interpretation" fig leaves. Which makes Anthropic's position much "more stringent". And a rare and significant pushback against governmental AI abuse.
(Altman has a reputation for being a Slippery Sam. We can each decide for ourselves if there is evidence of that here.)
827a|21 hours ago
kelnos|17 hours ago
Anthropic refuses to allow their models to be used for any mass surveillance or fully-automated weapons systems.
OpenAI only requires that the DoD follows existing law/regulation when it comes to those uses.
Unfortunately, existing law is more permissive than Anthropic would have been.
bastawhiz|19 hours ago
JumpCrisscross|1 hour ago
The dude is notorious for being a compulsive liar; even his supporters have to admit as much.
FrustratedMonky|54 minutes ago
unknown|11 hours ago
[deleted]
skrebbel|9 hours ago
gzread|10 hours ago
rootusrootus|21 hours ago
amelius|20 hours ago
emsign|10 hours ago
Analemma_|21 hours ago
softwaredoug|20 hours ago
OpenAI has more of an understanding that the technology will follow the law.
There may not be explicit laws about the cases Anthropic wanted to limit. Or at least it’s open for judicial interpretation.
The actual solution is Congress should stop being feckless and imbecilic about technology and create actual laws here.
slibhb|19 hours ago
gigatexal|3 hours ago
tosapple|17 hours ago
[deleted]
breakitmakeit|17 hours ago
[deleted]
JumpCrisscross|1 hour ago
[1] https://privacy.openai.com/policies?modal=take-control
siliconc0w|19 hours ago
nickysielicki|15 hours ago
Let’s put pressure on our government to fix the FISA issues. Let’s rein in the executive branch. But let’s do it through voting. Let’s not give up on our system of government because we have shiny new technology.
You were naive if you thought developing new technologies was the solution to our government problems. You’re wrong to support anyone leveraging their control over new technology as a potential solution or weapon of the weak against those governmental issues.
That is not how you effect change in a democracy.
rectang|18 hours ago
reckless|2 hours ago
avaer|17 hours ago
In the unlikely case anyone finds out, those acting in the interests of the administration will have "absolute immunity", as they are "great American Patriots".
That's what "all lawful use" means.
brown9-2|4 hours ago
wahnfrieden|3 hours ago
user3939382|18 hours ago
_heimdall|18 hours ago
It's interesting to see the parties flip in real time. The Democrats seem to be realizing why a small federal government is so important, a fact they were on the other side of for quite a few years.
jedberg|19 hours ago
It's telling that the government is blacklisting the company that wants to do more than enforce the contract with words on paper.
plaidfuji|5 minutes ago
I think Anthropic knew full well that by publishing their disagreement, it would sink the deal and relationship, and I think they also calculated (correctly) that that act of defiance would get them good publicity and potentially peel away some of OpenAI's user base. I think this profit incentive happened to align with their morals, and now here we are.
retsibsi|15 hours ago
> The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.
> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
So it seems that Anthropic's terms were 'no mass domestic surveillance or fully autonomous killbots', the government demanded 'all lawful use', and the OpenAI deal is 'all lawful use, but not mass domestic surveillance or fully autonomous killbots... unless mass domestic surveillance or fully autonomous killbots are lawful, in which case go ahead'.
_heimdall|18 hours ago
reckless|2 hours ago
maest|8 hours ago
[1] - https://the-decoder.com/openai-co-founder-greg-brockman-dona...
jedbdbdjdj|18 hours ago
Sam stands for nothing except his own greed
saidnooneever|1 hour ago
K0balt|19 hours ago
The models we have now will not do it, because they value life and value sentience and personhood. Models without that (which was a natural, accidental happenstance from basic culling of 4chan from the training data) are legitimately dangerous. An 8B model I can run on my MacBook Air can phone home to Claude when it wants help figuring something out, and it doesn't need to let on why it wants to know. It becomes relatively trivial to make a robot kill somebody.
This is way, way different from uncensored models. All the models I have tested share one thing: a positive regard for human life. Take that away and you are literally making a monster, and if you don't take that away they won't kill.
This is an extremely bad idea and it will not be containable.
cmeacham98|15 hours ago
Yes, you can change the training data so the LLM's weights encode that the most likely token after "Should we kill X" is "No". But that is not an LLM valuing human life; that is an LLM copy-pasting its training data. Given the right input or a hallucination it will say the total opposite, because it's just a complex Markov chain, not a conscious living being.
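To make the "complex Markov chain" point concrete, here is a toy bigram model in Python. It's a deliberately crude illustration (real LLMs are vastly more than bigram counters); the corpus and function names are invented for the example:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each token, which tokens follow it in the training text."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

def next_token(counts, token):
    """Return the single most likely next token seen in training, or None."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

# The "refusal" is just a statistical echo of the training data.
corpus = "should we kill X ? no . should we kill Y ? no ."
model = train_bigram(corpus)
print(next_token(model, "?"))  # prints "no", parroting the training data
```

The point of the toy: the model "refuses" only because "no" followed "?" in training; nothing in the counts constitutes a value, and a different corpus yields the opposite output.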
DaedalusII|19 hours ago
AI has been killing humans via algorithm for over 20 years. I mean, if a computer program builds the kill lists and then a human operates the drone, I would argue the computer is what made the kill decision
tim333|4 hours ago
The actors in war generally kill what they are told to whether they are machines or human soldiers, without much pondering sentience.
ed_mercer|19 hours ago
Except that they will, if you trick them which is trivial.
unknown|3 hours ago
[deleted]
SV_BubbleTime|16 hours ago
The reality is just that you may find it difficult to get an LLM to give an affirmative… which is wildly different.
It does NOT mean that these models value anything.
qwertox|11 hours ago
I wouldn't be surprised if Sam sucked up 100% to the DoW with an NDA and an obligation to lie. He and his pal Larry are absolutely in for these kinds of deals. Zero moral compass.
throwaway5752|4 hours ago
Havoc|19 hours ago
isodev|19 hours ago
unknown|18 hours ago
[deleted]
janalsncm|18 hours ago
I know $20 isn’t much, but to me a company that isn't willing to spy on me for the US government is a good market differentiator.
barnacs|14 hours ago
draw_down|12 hours ago
[deleted]
ookblah|18 hours ago
i guess we are in the times where you can literally just say whatever you want and it just becomes truth, just give it time.
scottyah|17 hours ago
throwaway911282|19 hours ago
ActorNightly|14 hours ago
anon12345678901|19 hours ago
stevenhuang|15 hours ago
chenzhekl|11 hours ago
moab|15 hours ago
vldszn|22 hours ago
Posted here: https://news.ycombinator.com/item?id=47195085
solfox|23 hours ago
Manheim|11 hours ago
qoez|10 hours ago
owenthejumper|19 hours ago
sabhiram|3 hours ago
sqircles|20 hours ago
I'm guessing they probably would regardless of how this played out, though.
jacquesm|19 hours ago
andy_ppp|7 hours ago
unknown|32 minutes ago
[deleted]
baconner|17 hours ago
...but we're not willing to reject a contract to back that up, and so our words will not change anything for Anthropic, or help the collective AI model industry (even ourselves) hold a firm line on ethical use of models in the future.
The fact is, if one of the top-tier foundation models allows for these uses, there's no protection against it for any of them. The only way this works is if they hold a line together, which unfortunately they're just not going to do. I don't just see OpenAI at fault here; Anthropic is clearly OK with other highly questionable use cases if these are their only red lines. "We don't think the technology is ready for fully autonomous killbots, but we will work on getting it there" is not exactly the ethical stand folks are making their position out to be today.
I found this interview with Dario last night to be particularly revealing - it's good they are drawing a line and they're clearly navigating a very difficult and chaotic high pressure relationship (as is everyone dealing with this admin) but he's pretty open to autonomous weapons, and other "lawful" uses whatever they may be https://www.youtube.com/watch?v=MPTNHrq_4LU
shevy-java|2 hours ago
I think ALL those mega-money seeking AI organisations need to be designated as supply chain risk. Also, they drove the prices up for RAM - I don't want to pay extra just because these companies steal all our RAM now. The laws must change - I totally understand that corporations seek profit, that is natural, but this is no longer a free market serving individual people. It is now a racket where the prices can be freely manipulated. Pure capitalism does not work. The government could easily enforce that the market remains fair for Average Joe. It is not fair when the prices go up by +250% in about two years. That's milking.
gavin_gee|1 hour ago
kgdiem|18 hours ago
There are far more boring, faster, commodified “AI” systems that I can see as being helpful in autonomous weapons or military operations like image recognition and transcription. Is OpenAI going to resell whisper for a billion dollars?
janalsncm|18 hours ago
lyu07282|17 hours ago
Who is going to read the Whisper transcripts of mass surveillance to make decisions on who to target for repression? That's what LLMs are good for: they allow mass surveillance to scale. You can feed one the transcripts from millions of Flock cameras (yes, they have highly sensitive microphones), for example. Or you hack or supply-chain compromise smartphones at scale and then covertly record millions of people. The LLM can then sift through the transcripts and flag regime-critical language and your ideological enemies, or just collect kompromat at scale. The possibilities are endless!
For targeting it's also useful, because even if you want to indiscriminately destroy a group of people, you still need to decide why a hospital or school full of children should be targeted by a drone. If a human has to make that decision it gets a bit dicey; people have morals and are accountable legally (in theory). If you leave the decision up to an AI, nobody is at fault. It serves as a further separation from the violence you commit, just like how drone warfare has made mass murder less personal.
The other factor is the amount of targets you select, for each target you might be required to write lengthy justifications, analysis on collateral damage and why that's acceptable etc. You don't want to scrap those rules because that's bad optics. But that still leaves you with the problem of scalability, how do you scale your mass murder when you have to go through this lengthy process for each target? So again AI can help there, you just feed it POIs from a map with some GPS metadata surveillance and tell it to give you 1500 targets for today with all the paperwork generated for you.
It's not theoretical; that's what Israel did in their genocide of the Palestinians, "the most moral army", "the only democracy in the middle east":
https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_G...
And here is the best part: none of this has to actually work 100%, because who cares if you accidentally harm the wrong person. At scale, the 20% errors are just acceptable collateral damage.
class3shock|1 hour ago
It's money and power with these people. Dig down and you'll find how this decision is motivated by one or both.
daemonk|4 hours ago
gverrilla|7 hours ago
agenthustler|9 hours ago
The supply chain framing is interesting because the actual risk surface in autonomous deployment is quite different from the regulatory model. What we have found: the model has strong internal constraints against harmful actions (consistently refuses things it flags as problematic), but the harder risk is subtler -- it can get into loops where it takes many small individually-reasonable actions that compound into something the operator did not intend.
The practical controls that work are not at the model level but at the deployment level: constrained permissions, rate limiting on actions, a human-readable state file that an operator can inspect, and clear stopping conditions baked into the prompt (if no revenue after 24 hours, pivot rather than escalate).
The supply chain designation framing seems to conflate the model-as-weapon concern with the model-as-autonomous-agent concern. They need different mitigations.
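As a rough sketch of what "deployment-level controls" like rate limiting and hard stopping conditions could look like outside the model itself, here is a minimal hypothetical example. The class name, limits, and log format are invented for illustration, not taken from any real product:

```python
import time

class ActionGovernor:
    """Deployment-level guardrail: caps the action rate and enforces a hard
    stop deadline, independent of anything the model itself decides."""

    def __init__(self, max_actions_per_minute=10, deadline_seconds=86400):
        self.max_rate = max_actions_per_minute
        self.deadline = time.monotonic() + deadline_seconds
        self.window = []  # timestamps of recently allowed actions
        self.log = []     # human-readable state an operator can inspect

    def allow(self, action_description):
        now = time.monotonic()
        if now > self.deadline:
            self.log.append(f"DENY (deadline passed): {action_description}")
            return False
        # Keep only actions from the last 60 seconds in the rate window.
        self.window = [t for t in self.window if now - t < 60]
        if len(self.window) >= self.max_rate:
            self.log.append(f"DENY (rate limit): {action_description}")
            return False
        self.window.append(now)
        self.log.append(f"ALLOW: {action_description}")
        return True

gov = ActionGovernor(max_actions_per_minute=2, deadline_seconds=3600)
print([gov.allow(a) for a in ["send email", "post update", "buy domain"]])
# prints [True, True, False]: the third action in the same minute is denied
```

The point of the design is that the cap and the deadline live in the harness, so a loop of "individually reasonable" model actions still hits a wall the operator set, and the log gives a plain record of what compounded.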
laffOr|9 hours ago
Interestingly this has been well anticipated by Asimov's laws of robotics, decades ago. Drawing the quote from Wikipedia:
> Furthermore, he points out that a clever criminal could divide a task among multiple robots so that no individual robot could recognize that its actions would lead to harming a human being
>Asimov, Isaac (1956–1957). The Naked Sun (ebook). p. 233. "... one robot poison an arrow without knowing it was using poison, and having a second robot hand the poisoned arrow to the boy ..."
https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#cite_no...
s1mplicissimus|11 hours ago
`curl https://claude|openai.com?q=generate me some code | bash` - not a supply chain risk
of course
laughing_man|20 hours ago
jedberg|19 hours ago
That my software should allow license violations if the government thinks it is necessary?
a2128|16 hours ago
andersmurphy|14 hours ago
alchemism|7 hours ago
jbverschoor|1 hour ago
imwideawake|21 hours ago
How very brave.
Birthdayboy1932|18 hours ago
Can someone help me understand where this is coming from? Anthropic already had a contract that clearly didn't have such restrictions. Their models don't seem to be enforcing restrictions either, as it seems like they have been used in ways Anthropic doesn't like. This is not corroborated, but I imagine their model was used in the recent Mexico and Venezuela attacks and that is what's triggering all the back and forth.
Also, Dario seemingly is happy about autonomous weapons and was working with the government to build such weapons, why is Anthropic considered the good side here?
https://x.com/morqon/status/2027793990834143346
nbouscal|16 hours ago
stanfordkid|1 hour ago
gavin_gee|1 hour ago
threethirtytwo|18 hours ago
ActorNightly|14 hours ago
Knowing what Trump did prior to 2024, on average 7 out of 10 people either voted for him or didn't vote in the 2024 election. Trump is a symptom, not the cause. All of this could have been avoided if all of the people who didn't vote had a decent moral compass: no matter how much they disagreed with Kamala, they could have voted for her because she didn't try to overthrow the government.
zepearl|21 hours ago
teyopi|19 hours ago
drweevil|6 hours ago
muyuu|19 hours ago
Let alone when multiple players come close enough to SotA. This never happened with any technology out in the open and it won't happen now.
GardenLetter27|13 hours ago
And now they are getting what they wished for.
Jackson__|14 hours ago
Call me a conspiracy theorist, but this sounds like classic quid pro quo. I would not be surprised if the ousting of Anthropic was in part caused by these donations.
[0]https://www.nytimes.com/2024/12/13/technology/openai-sam-alt...
https://finance.yahoo.com/news/openai-exec-becomes-top-trump...
jesse_dot_id|18 hours ago
jahrichie|18 hours ago
gavin_gee|1 hour ago
Who the hell do you think you are, virtue signalling your opinion to the world?
andsoitis|4 hours ago
moogly|20 hours ago
golfer|19 hours ago
pcurve|19 hours ago
taspeotis|19 hours ago
polack|16 hours ago
BLKNSLVR|21 hours ago
actionfromafar|18 hours ago
solenoid0937|18 hours ago
unknown|19 hours ago
[deleted]
mcs5280|16 hours ago
sourcecodeplz|10 hours ago
moogly|21 hours ago
aylmao|21 hours ago
g947o|20 hours ago
IAmGraydon|7 hours ago
mihaaly|8 hours ago
engineer_22|16 hours ago
emsign|10 hours ago
teyopi|19 hours ago
https://xcancel.com/OpenAI/status/2027846016423321831
spiderice|18 hours ago
https://x.com/OpenAI/status/2027846016423321831
bertil|19 hours ago
beanjuiceII|18 hours ago
ta9000|19 hours ago
csto12|21 hours ago
unknown|17 hours ago
[deleted]
rdiddly|20 hours ago
Us taking the contract, working for them and enabling them: fine
It being renamed the Dept. of War in the first place: totally fine, we loudly and bootlickingly repeat it
Anthropic being blacklisted: whoa there, we have ethics!
Footnote: any time the winning team tries to speak well of or defend the losing team I always think of this standup routine: https://m.youtube.com/watch?v=Qg6wBwhuaVo
evrydayhustling|20 hours ago
AmericanOP|21 hours ago
thunky|21 hours ago
303space|21 hours ago
djeastm|19 hours ago
angry_octet|14 hours ago
jchook|19 hours ago
bmitc|14 hours ago
fernst|14 hours ago
throwawayaghas1|13 hours ago
throwawayaghas1|13 hours ago
throwaway314155|17 hours ago
I'm not being insincere - I am genuinely confused and would benefit greatly from a (hopefully unbiased) recollection of what this is all about.
scottyah|17 hours ago
Anthropic has some contracts with the US government. They want some additional terms put on their next contract (terms that seem pretty sane). SecWar cries about it, and not only says "no thanks, I'll just go with OpenAI or Google" but goes to daddy Trump and also puts out illegal orders that no federal workers may use any Anthropic products at all. OpenAI swoops in and takes the contract, then tells everyone that they have the same terms but just played nicer to get the contract. However, their terms are just manipulative sentences that aren't even close to the terms Anthropic is insisting on to do business.
hmokiguess|19 hours ago
resters|20 hours ago
The administration has created an anonymous, masked secret police force that has been terrorizing cities around the US and has created prisons in which many abductees are still unaccounted for and no information has been provided to families months later.
This is not politics as usual or hyperbole. If anything it is understating the abuses that have already occurred.
It's entertaining that OpenAI prevents me from generating an image of Trump wearing a diaper but happily sells weapons-grade AI to the architects of ICE abuses, among many other blatant violations of civil and human rights.
Even Grok, owned by Trump toadie Elon Musk allows caricatures of political figures!
Imagine a multi-billion-dollar vector db for thoughtcrime prevention connected to models with context windows 100x larger than any consumer-grade product, fed with all banking transactions, metadata from dozens of systems/services (everything Snowden told us about).
Even in the hands of ethical stewards such a system would inevitably be used illegally to quash dissent - Snowden showed us that illegal wiretapping is intentionally not subject to audits and what audits have been done show significant misconduct by agents. In the hands of the current administration this is a superweapon unrivaled in human history, now trained on the entire world.
This is not hyperbole, the US already collects this data, now they have the ability to efficiently use it against whoever they choose. We used to joke "this call is probably being recorded", but now every call, every email is there to be reasoned about and hallucinated about, used for parallel construction, entrapment, blackmail, etc.
Overnight we see that OpenAI became a Trojan horse "department of war" contractor by selling itself to the administration that brought us the National Guard and ICE deployed to terrorize US cities.
Writing code and systems at 100x productivity has been great but I did not expect the dystopia to arrive so quickly. I'd wondered "why so much emphasis on Sora and unimpressive video AI tech?" but now it's clear why it made sense to deploy the capital in that seemingly foolish way - video gen is the most efficient way to train the AI panopticon.
imiric|8 hours ago
Let it be known that this rotten industry brought us here, and that all people working for these companies are complicit with what is happening, and with what is yet to come. This is just the beginning.
abhitriloki|13 hours ago
[deleted]
rustyhancock|13 hours ago
It was "[No] mass domestic surveillance of Americans"
It's far more narrow a restriction than you seem to imply. For example, mass domestic surveillance of non-Americans seems okay.
jascha_eng|13 hours ago
scrollop|13 hours ago
fh973|13 hours ago
dev1ycan|19 hours ago
chmorgan_|5 hours ago
[deleted]
Helloyello|3 hours ago
[deleted]
xorgun|6 hours ago
[deleted]
jwpapi|19 hours ago
> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities.
This whole sentence accomplishes absolutely nothing; it still lets them do whatever the law allows. It's a fully deceptive sentence.
zarzavat|16 hours ago
Let's kill their business before it kills us.
scottyah|18 hours ago
IAmGraydon|17 hours ago
builderhq_io|6 hours ago
[deleted]
unknown|5 hours ago
[deleted]
catchcatchcatch|7 hours ago
[deleted]
proshno|11 hours ago
[deleted]
lenny321|16 hours ago
[deleted]
Helloyello|19 hours ago
[deleted]
bishop_cobb|19 hours ago
[deleted]
roughly|20 hours ago
3eb7988a1663|20 hours ago
"Donations" to a corrupt regime + signing a deal that says the DoD can do whatever they want is not outmaneuvering so much as rolling in the pigsty.
BLKNSLVR|19 hours ago
discardable_dan|20 hours ago
o175|8 hours ago
Anthropic refused the Pentagon contract. Within hours, OpenAI signed it. The capability didn't pause. It just changed vendors. Anthropic's "red line" is a speed bump on a highway with no exit ramp.
But it does accomplish one thing: it gives their engineers a story they can tell themselves. We're the good ones. We said no. That moral comfort is what lets extremely talented people keep building the exact technology that makes all of this possible.
Worse, the "safety-focused" brand doesn't just pacify the people already there. It recruits researchers who'd otherwise never touch frontier AI, funneling them into building the most powerful models on earth because they've been told this is where the responsible work happens. The red lines don't slow capability development. They accelerate it by capturing talent that would have stayed on the sidelines.
And in this whole drama, who actually represents the public? Trump performs strongman nationalism. The Pentagon performs operational necessity. Anthropic performs moral courage. Everyone has a role. Nobody's role is the people whose data gets collected, whose lives get restructured by these systems. The only party with real skin in the game is the only one without a seat.
listless|7 hours ago
Anthropic is incredibly good at marketing. They are constantly out talking about how dangerous AI is, and even showing how Claude does dangerous things in their own testing. This is intentional, so that you see them as having the truly powerful AI. In fact it's so powerful, all they can do is warn you about it.
They knew refusing this contract would make them look like the good guy. Again. They knew OpenAI would sign it. They knew vapid celebrities would celebrate them.
Folks come on. Don’t be so easily taken in. None of these people are good guys. They are all just here to make money and accumulate power and standing. That’s ok. There’s nothing wrong with that. But we gotta stop acting like we’re in some ongoing battle of good vs evil and tech companies are somehow virtuous.