WarmWash | 1 day ago
It looks like Anthropic wanted to be able to verify the terms of their own volition, whereas OpenAI was fine with letting the government police itself.
From the DoD's perspective, they don't want a situation where a target is being tracked and then the screen goes black because the Anthropic committee decided this is out of bounds.
nextaccountic | 16 hours ago
Anthropic didn't want a kill switch; they wanted contractual guarantees (the kind you can take to court). This administration just doesn't want accountability, that's all.
It was OpenAI that said they prefer to rely on guardrails (the kind that stop the AI from working if you violate the terms) rather than on contracts. The same OpenAI that was awarded the contract now.
WarmWash | 1 day ago
So the government's stance is: "We already have laws and procedures in place; we don't want, and can't have, a CEO also be part of those checks."
I don't think this outcome would have been any different under a normal blue government either. Definitely with less mudslinging, though.
syllogism | 1 day ago
The government's not free to say, "We'll blow up your business with a false accusation if you don't give us the terms we want (and then use the Defense Production Act to commandeer the product anyway)." How much more blatantly authoritarian does it get than that?