Shrezzing | 1 year ago
Looking through Anthropic's publication history, their work on alignment & safety has been pretty out in the open, and collaborative with the other major AI labs.
I'm not certain your view is especially contrarian here, as it mostly aligns with research Anthropic is already doing, openly talking about, and publishing. Some of the points you've made are addressed in detail in the post you're replying to.