This is most likely because getting SaaS software to conform to federal regulations and to provide the security guarantees the US military requires is difficult and expensive. FedRAMP is onerous.
It's a little weird, too, because Claude definitely isn't the only model approved for use on classified systems in general; xAI and OpenAI both have approved models, at the very least.
They always focused on safety (their own safety). They only backed off from US military work once the bad press hit. As usual, they are not an ethical company. I can't say that's uniquely bad, since all corporations are the same; just don't buy the illusion they create.
thephyber|5 days ago
And LLM products are new-ish. It suggests that Anthropic made federal government contracts a priority while OpenAI, Alphabet, and AWS didn't.
LordDragonfang|5 days ago
https://www.anthropic.com/news/claude-gov-models-for-u-s-nat...
https://support.claude.com/en/articles/13756069-public-secto...
LordDragonfang|5 days ago
https://devblogs.microsoft.com/azuregov/azure-openai-authori...
https://x.ai/news/government
skeptic_ai|5 days ago
If you look at my post history you can see I’m always calling them out about how sketchy they are.