iNic | 1 month ago
The problem is not that no one is trying to solve the issues you mentioned, but that they are genuinely hard to solve. You would probably have to bring large class-action lawsuits, which is expensive and risky (if one fails, it becomes harder to sue again). Anthropic can make its own models safe, and PauseAI can organize some protests, but neither can easily stop Grok from producing endless CSAM.
[1] https://www.anthropic.com/news/protecting-well-being-of-user...
[2] https://www.anthropic.com/research/team/societal-impacts
mossTechnician | 1 month ago
I do appreciate you pointing out the Risks page, though, as it disproves my hyperbole about ignoring present-day harms entirely. Still, I was disheartened that the page appears to list only harms they believe "could be mitigated by a Pause" (emphasis mine).
[0]: https://pauseai.info/proposal
[1]: https://pauseai.info/dangerous-capabilities