jrochkind1 | 18 hours ago
Very gracious of OpenAI to say Anthropic should not be designated a supply chain risk after sniping their $200 million contract by being willing to contractually let the government do whatever they like without restrictions.
lostnground | 15 hours ago
Symmetry | 11 hours ago
Barbing | 15 hours ago
Right, wouldn't they need a moderation layer that could, for example, trip if it analyzed and labeled too many banal English conversations?
They really gave training credit for guardrails? I mean, it could perhaps reject prompts about designing social credit systems sometimes, but I can't imagine realistic mitigations against mass domestic surveillance generally.