(no title)
nilkn|3 days ago
I take that to mean they don't want the military using Claude to decide who to kill. As a hyperbolic yet frankly realistic example, they don't want Claude to make a mistake and direct the military to kill innocent children accidentally identified as narco-terrorists.
At least, that's the most charitable interpretation of everything going on. I suspect they are also worried that the sitting administration wants to use AI to help it execute a full autocratic takeover of the United States, so the administration is attempting to kill one of the world's most innovative companies to set an example and pressure other AI labs into letting their technology be used for such purposes.
unknown|3 days ago
[deleted]
blhack|3 days ago
nilkn|3 days ago
Obviously the military wants to use it for that purpose since they couldn't accept Anthropic's extremely limited terms.
One can easily and immediately infer that the answers to both of your questions are yes.
ImPostingOnHN|3 days ago
If the DoD did not want those things, it would not be forcing a contract renegotiation to include them, at great cost to the government.
sigmar|3 days ago
Here's the Chief Pentagon Spokesman pointing to the same verbiage and reiterating that they won't agree to those terms of use.
mcphage|3 days ago