
eholk | 8 months ago

For what it's worth, I've read both Bostrom's Superintelligence and AI 2027. Reading Superintelligence was interesting and for me really drove home how hard setting aligned goals for an AI is, but the timelines seemed far enough out that it wasn't likely to be something that would matter in my lifetime.

AI 2027 was much more impactful on me. It probably helps that I read it the same week I started playing with agent mode on GitHub Copilot. Seeing what AI can already do, especially compared to six months ago, and then seeing their projections made AI seem like something much more worth paying attention to.

Yeah, getting from here to being killed by rogue AI nanobots in less than five years still seems pretty far-fetched to me. But each of the steps in their scenario didn't seem completely outside the realm of possibility.

So for me personally, my 80% confidence interval includes both things stagnating pretty much where they are now and something more like AI 2027. I suspect we'll be fine, but AGI seems like a real enough possibility that it's worth working on a contingency plan.
