mshron | 2 years ago
https://ai.objectives.institute/whitepaper
It’s weird to have been working on a paper for almost a year and have it launch into this environment, but uptake has been good. My hope is that we will continue to see more nuance around different kinds of alignment risks in the near future. There’s a wide spectrum between biased statistical models and paperclip-maximizing overlords, with lots of bad-but-not-existentially-catastrophic things in between that the public should want to keep a pulse on.
walleeee | 2 years ago
> In some sense, we’re already living in a world of misaligned optimizers
I understand this is an academic paper given to nuance and understatement, but for any drive-by readers, this is true in an extremely literal sense, with very real consequences.