I think both approaches are useful. AI2027 presents a specific timeline in which a) the trajectory of tech is at least somewhat empirically grounded, and b) each step of the plot arc is plausible. There's a chance of it being convincing to a skeptic who had otherwise thought of the whole "rogue AI" scenario as a kind of magical thinking.
eholk|8 months ago
AI 2027 was much more impactful on me. It probably helps that I read it the same week I started playing with agent mode on GitHub Copilot. Seeing what AI can already do, especially compared to six months ago, and then seeing their projections made AI seem like something much more worth paying attention to.
Yeah, getting from here to being killed by rogue AI nanobots in less than five years still seems pretty far fetched to me. But each of the steps in their scenario didn't seem completely outside the realm of possibility.
So for me personally, my 80% confidence interval includes both things stagnating pretty much where they are now and something more like AI 2027. I suspect we'll be fine, but AGI seems like a real enough possibility that it's worth working on a contingency plan.
kypro|8 months ago
Unfortunately there's a huge number of people who get obsessed with details and then nitpick. I see this with Eliezer Yudkowsky all the time, where 90% of the criticism of his views is just nitpicking his weaker predictions while ignoring his stronger predictions about the core risks that could result in those bad outcomes. I think Yudkowsky opens himself up to this, though, because he often makes very detailed predictions about how things might play out, and that's largely why he's so controversial, in my opinion.
I really liked AI 2027 personally. I thought the tabletop exercises in particular were a nice heuristic for predicting how actors might behave in certain scenarios. I also agree that it presented a plausible narrative for how things could play out. I'm also glad they didn't wimp out with the bad ending. Another problem I have with people who are concerned about AI risk is that they shy away from speaking plainly about the fact that, if things go poorly, in a few years your loved ones will probably be either dead, in suspended animation on a memory chip, or in a literal digital hell.