xpe | 2 days ago
So reframe I did. (I don’t think those articles you cited are worth any more attention than I’ve already given them.)
My most blunt editorializing would be this: most people would be better grounded if they read AI alignment and safety books by Stuart Russell, Nick Bostrom, Brian Christian, Eliezer Yudkowsky, and Nate Soares. If you’ve read others that you recommend, please let me know. I’ve read many that I don’t usually recommend.
As far as long-form articles go, I recommend Paul Christiano, Zvi Mowshowitz, and anyone with the fortitude to make predictions while sharing their models (like the AI 2027 crew).
I recommend browsing the “Best of Year” collections (or whatever they are called) on the AI Alignment Forum and LessWrong. They are my go-tos for smart, informed writing on AI. For posts with more than, say, 100 votes, the quality bar is tremendously higher than almost anywhere else I’ve seen, including mainstream sources with great reputations.
In conclusion, I would rather point to interesting people to read and places to engage.