
Anthropic's safety-first ethos collided with the Pentagon

5 points | lonelyasacloud | 8 days ago | scientificamerican.com

5 comments


lyaocean | 8 days ago

Real tension here is governance, not model quality. If the Pentagon wants "all lawful purposes," vendors need contract-level guardrails: explicit prohibited uses, independent audit logs, and kill-switch rights. Otherwise "safety-first" is mostly branding.

lonelyasacloud | 8 days ago

From the article ...

Anthropic has drawn two red lines: no mass surveillance of Americans and no fully autonomous weapons ... OpenAI, Google and xAI—have agreed to loosen safeguards ... Pentagon has demanded that AI be available for “all lawful purposes.”

DivingForGold | 8 days ago

I predict (Kalshi?) that Anthropic will ultimately be ejected from the Pentagon running. Morals and ethics be damned, the others will likely tell their workers that anyone who doesn't agree will be escorted out the door. Corporate America. Just wait for the next genocidal operation where AI is found contributing to the mass murder. Cuba?

ungreased0675 | 8 days ago

This seems to mostly be about the Pentagon sticking to the principle of not letting a private corporation tell it what it can or can't do.