The real tension here is governance, not model quality. If the Pentagon wants "all lawful purposes," vendors need contract-level guardrails: explicit prohibited uses, independent audit logs, and kill-switch rights. Otherwise "safety-first" is mostly branding.
Anthropic has drawn two red lines: no mass surveillance of Americans and no fully autonomous weapons ... OpenAI, Google and xAI—have agreed to loosen safeguards ... Pentagon has demanded that AI be available for “all lawful purposes.”
I predict (Kalshi?) that Anthropic will ultimately be ejected from the Pentagon running. Morals and ethics be damned, the others will likely tell their workers: anyone who doesn't agree will be escorted out the door. Corporate America. Just wait for the next genocidal operation where AI is found contributing to the mass murder. Cuba?