This frames it as Pentagon vs. Anthropic, but the actual problem is upstream. If we tell companies they must prevent all possible harm, we're setting them up with two bad options: nerf the model and silently lose value nobody can quantify, or leave it alone and get blamed for every bad outcome. We don't want nerfed models either. That's what DoW is saying.
derektank|3 days ago