top | item 18735955


ShannonAlther | 7 years ago

This article leans too heavily on the current state of the art to extrapolate toward possible futures, and it shouldn't. AI is dangerous, sure, but the examples we're getting here are a Dota 2 team that lost to humans in a modified version of the game, a blog post that doesn't adequately explain what the heck was accomplished, and Atari bots that button-mashed their way into exploits that let them directly change registers in the game (all very cool, but not super relevant).

It kind of buries the lede that we're one or two technical achievements away from the apocalypse (learning from unlabelled training sets, say) and instead opens with "AI sentencing guidelines are biased against black defendants", as if that exists in the same universe as Skynet IRL. IME, the people who are worried about AI risk at all are overwhelmingly in the "what if it disenfranchises minorities even more?" camp; only a tiny fraction are concerned about having the atmosphere packed with graphite superconductors in the next fifteen years.

Any article like this should probably start from the assumption that the audience thinks AI is that cute thing in their iPhone that dials the local pizza joint instead of your mother when you try to speak to it. Whether AI will eventually let Donald Trump sustain himself eternally on a golden throne as the God Emperor of America is a red herring and needs to be deflected, like, immediately, or else people get on their own hobby horse about how AI represents a turning point for the bendy-straws community. All the right notes are in here, but they're at the end for some reason.
