top | item 35234186

TamDenholm | 2 years ago

Not that I'm defending anyone here, but couldn't almost anything be misused for nefarious purposes?

CharlesW | 2 years ago

Sure, but the difference is the kind of harm one can do, and the scale at which one can do it.

aaroninsf | 2 years ago

This is the thing.

AI is a general-purpose accelerant and force multiplier. It provides a mechanism for automating, and deploying at scale, a set of attacks on our society against which we have little experience and almost no defense, nor even any good means of detection, at least not until post-mortem forensics.

The most obvious harmful avenue for this is venal criminality (which will be awful) but the real danger is in the political sphere.

There is already widespread use of AI for disinformation purposes in e.g. the Ukraine war.

I have been saying for the last N months or so that my immediate concern with AI is not AGI but augmented-intelligence applications that are leveraged enough to be destabilizing.

Specifically, I believe the 2024 election cycle in the US will be decided by AI.

We aren't ready for this.

api | 2 years ago

Thag warn sharp rock could be misused for nefarious purposes.

tppiotrowski | 2 years ago

Any new technology has some benefits and some drawbacks.

Electric cars have no direct emissions but increase mining operations in certain parts of the world. Or they can be used to plow through a public gathering of people. Or, with some rewiring, to electrocute someone. If you think hard enough, you can find nefarious purposes for almost any household item that has made your life easier.

I think what's important to focus on is the "net" benefit, but it's the outliers that feed our emotional response.

mikeg8 | 2 years ago

I don’t focus on the “net” benefits as much as I used to, in large part due to FB. For the first several years, the “net” benefit of a connected digital world where you can communicate and connect with friends anywhere sounded so fantastic. But it’s become almost consensus that the very real downsides, and their societal consequences, may mean it wasn't such a great deal after all.

The reason outliers feed our emotional response is a survival/skin-in-the-game mechanism. Parroting Taleb: all it takes is ruin once, and the game stops. It's not unreasonable to be hyper-focused on reducing long-tail risks with potentially catastrophic and unknown results. Caution and fear are warranted here.