mshron | 2 years ago

Some of us do! Check out a whitepaper on that exact point:

https://ai.objectives.institute/whitepaper

It’s weird to have been working on a paper for almost a year and have it launch into this environment, but uptake has been good. My hope is that we will continue to see more nuance around different kinds of alignment risks in the near future. There’s a wide spectrum between biased statistical models and paperclip-maximizing overlords, and lots of bad but not existentially catastrophic things for the public to want to keep a pulse on.

walleeee | 2 years ago

Thanks! Looks like good work. I hope this idea continues to get traction:

> In some sense, we’re already living in a world of misaligned optimizers

I understand this is an academic paper given to nuance and understatement, but for any drive-by readers, this is true in an extremely literal sense, with very real consequences.