Sure they do. Even-handedness is not some uniquely American value. And anyway they recognize that their current analysis has a US-specific slant; it's still a good place to start, especially as so much of the world follows US culture and politics.
It's probably the case that Anthropic's staff has political biases, but that doesn't mean they can't aim for neutrality and professionalism. Honestly my opinion of Anthropic has gone up a lot from reading this blog post (and it was already pretty high). Claude 1 was wild in terms of political bias, but it got so much better, and this effort is absolutely the right way to go. It's very encouraging that the big model companies are making these kinds of efforts. I believe OpenAI already did something similar, or at least has publicly talked about the importance of even-handedness.
Years ago I worked for Google and left partly because I saw the writing on the wall for its previous culture of political neutrality, which I valued more than any 20% time or free lunch. Over the next ten years Google became heavily manipulated by the left to brainwash its users, first internally, then in peripheral products like News, then finally in core web search. It is by far the most distressing thing they've done. I worried for a long time that AI companies would go the same way, but it does seem like they recognize the dangers of that. It's not just about their users, it's about employees being able to get along too. Apparently Googlers are trying to cancel Noam Shazeer right now for not being left wing enough, so the risks of political bias to maintaining the skill base are very real.
I think the most interesting question is where the market demand is. Musk is trying to train Grok to prioritize "truth" as an abstract goal, whereas the other companies are trying to maximize social acceptability. The latter feels like a much more commercially viable strategy, but I can see there being a high-end market for truth-trained LLMs in places like finance, where being right is more important than being popular. The model branding strategies might be limiting here: can one brand name cover models trained for very different personalities?
mike_hearn|3 months ago
ml-anon|3 months ago
[deleted]