item 38118235

chakalakasp|2 years ago

Yes and no. AI is no different than proliferating nuclear weapons or deciding to burn all the fossil fuels -- at the individual competitive level, it makes sense to do more and more of it to stay competitive; at the whole-system level, it leads to catastrophe. It's the tragedy of the commons, or, as it's been described more recently, the Moloch problem.

https://www.lesswrong.com/posts/TxcRbCYHaeL59aY7E/meditation...

Yann LeCun is one of the loudest AI open source proponents right now (which of course jibes with Meta's very deliberate open source stab at OpenAI). And when you listen to smart guys like him talk, you realize that even he doesn't really grasp the problem (or, if he does, he pretends not to).

https://x.com/ylecun/status/1718764953534939162?s=20

jph00|2 years ago

It is not at all true that "AI is no different than proliferating nuclear weapons". The project manager for the Nuclear Information Project said (https://www.vox.com/future-perfect/2023/7/3/23779794/artific...):

"""

But what we are seeing too often is a calorie-free media panic where prominent individuals — including scientists and experts we deeply admire — keep showing up in our push alerts because they vaguely liken AI to nuclear weapons or the future risk from misaligned AI to pandemics. Even if their concerns are accurate in the medium to long term, getting addicted to the news cycle in the service of prudent risk management gets counterproductive very quickly.

AI and nuclear weapons are not the same

From ChatGPT to the proliferation of increasingly realistic AI-generated images, there’s little doubt that machine learning is progressing rapidly. Yet there’s often a striking lack of understanding about what exactly is happening. This curious blend of keen interest and vague comprehension has fueled a torrent of chattering-class clickbait, teeming with muddled analogies. Take, for instance, the pervasive comparison likening AI to nuclear weapons — a trope that continues to sweep through media outlets and congressional chambers alike.

While AI and nuclear weapons are both capable of ushering in consequential change, they remain fundamentally distinct. Nuclear weapons are a specific class of technology developed for destruction on a massive scale, and — despite some ill-fated and short-lived Cold War attempts to use nuclear weapons for peaceful construction — they have no utility other than causing (or threatening to cause) destruction. Moreover, any potential use of nuclear weapons lies entirely in the hands of nation-states. In contrast, AI covers a vast field ranging from social media algorithms to national security to advanced medical diagnostics. It can be employed by both governments and private citizens with relative ease.

"""

Let's stop contributing to this "calorie-free media panic" with such specious analogies.

jay_kyburz|2 years ago

Furthermore, there is little or no defense against a full-scale nuclear attack, but a benevolent AI should be a sufficient defense against a hostile AI.

I think the true fear is that in an AI age, humans are not "useful" and the market and economy will look very different. With AI growing our food, clothing us, building us houses, and entertaining us, humans don't really have anything to do all day.

coryfklein|2 years ago

Color me surprised that the project manager for the Nuclear Information Project is in fact a subject matter expert on nuclear weapons and not AGI x-risk. Why would they be working on nuclear information if they didn't think it the most important thing?

chakalakasp|2 years ago

If you talk to the people on the bleeding edge of AI research instead of nuke-heads (I tend to be kinda deep into both communities), you'll get a better picture: yeah, a lot of people who work on AI really do think that AI is like nukes in the scale of force multiplication it will be capable of in the near to medium term, and it may well vastly exceed nukes in this regard. Even in the "good" scenarios, you're looking at a future where people with relatively small resources will have access to information that would create disruptive offensive capabilities, be it biological or technological or whatever. In worse scenarios, people aren't even in the picture any more; the AIs are just working with or fighting each other, and we are in the way.

torginus|2 years ago

Why does everybody reach for the 'nuclear weapons' comparison when there is a much more appropriate one -- encryption, specifically public key cryptography? Way back in the 90s, when Phil Zimmermann released PGP, the US government raised hell to keep it from proliferating. Would you rather live in a world where strong encryption for ordinary citizens was illegal?
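
For what it's worth, here's a toy sketch (mine, not part of the original comment) of the public key math at issue -- textbook RSA with absurdly small primes, insecure by design and for illustration only. The point is that the entire once-export-controlled technique is a few lines of arithmetic anyone can run:

    from math import gcd

    # Textbook RSA with toy primes (real keys use ~1024-bit primes or larger).
    p, q = 61, 53
    n = p * q                  # modulus; part of the public key
    phi = (p - 1) * (q - 1)    # Euler's totient of n

    e = 17                     # public exponent; must be coprime with phi
    assert gcd(e, phi) == 1
    d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

    message = 42               # a message encoded as an integer < n
    ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
    recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
    assert recovered == message

(pow(e, -1, phi) needs Python 3.8 or later.)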

chakalakasp|2 years ago

Because encryption is not an inherently dangerous thing. A superintelligent AI is.

It’s no different than inviting an advanced alien species to visit. Will it go well? Sure hope so, because if they don’t want it to go well, it won’t be our planet anymore.

nonameiguess|2 years ago

It's quite a bit different. Access to weapons-grade plutonium is inherently scarce. The basic techniques for producing a transformer architecture to emulate human-level text and image generation are out in the open. The only scarce resources right now preventing anyone from reproducing the research themselves from scratch are the data and compute required to do it. But data and compute aren't plutonium. They aren't inherently scarce. Unless we shut down all advances in electronics and communications, period, shutting down AI research only stops it until data and compute are sufficiently abundant that anyone can do what currently only OpenAI and a few pre-existing giants can do.

What does that buy us? An extra decade?
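
To make the "out in the open" point concrete, here's a minimal sketch (mine, not the commenter's) of scaled dot-product self-attention, the core operation of the transformer architecture, in plain numpy -- toy sizes, no training. What it omits is exactly the scarce part: the trained weights and the data.

    import numpy as np

    def self_attention(x, w_q, w_k, w_v):
        """Scaled dot-product self-attention. x: (seq_len, d_model) embeddings."""
        q, k, v = x @ w_q, x @ w_k, x @ w_v           # queries, keys, values
        scores = q @ k.T / np.sqrt(k.shape[-1])       # scaled similarity scores
        scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ v                            # mix values by attention weight

    rng = np.random.default_rng(0)
    seq_len, d_model = 4, 8
    x = rng.normal(size=(seq_len, d_model))
    w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    print(self_attention(x, w_q, w_k, w_v).shape)     # (4, 8)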

I don't know where this leaves us. If you're in the MIRI camp believing AI has to lead to a runaway intelligence explosion to unfathomable godlike abilities, I don't see a lot of hope. If you believe that is inevitable, then as far as I'm concerned, it's truly inevitable. First, because I think formally provable alignment of an arbitrary software system with "human values," however nebulously you might define that, is fundamentally impossible. Second, even if it were possible, it's also fundamentally impossible to guarantee in perpetuity that all implementations of a system will forever adhere to your formal proof methods. For 50 years, we haven't even been able to get developers to consistently use strnlen. As far as I can tell, if sufficiently advanced AI can take over its light cone and extinguish all value from the universe, or whatever they're up to now on the worry scale, then it will do so.
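
One way to make the "fundamentally impossible" claim precise -- my gloss, not the commenter's -- is the classic halting-problem diagonalization; by Rice's theorem the same argument rules out any total, always-correct checker for nontrivial behavioral properties of arbitrary programs, which is what a formal alignment proof for arbitrary software would have to be:

    def halts(program, argument):
        """Hypothetical perfect analyzer: True iff program(argument) halts."""
        raise NotImplementedError("assume, for contradiction, this is total and correct")

    def contrarian(program):
        # Do the opposite of whatever the analyzer predicts about self-application.
        if halts(program, program):
            while True:
                pass       # loop forever if predicted to halt
        # otherwise halt immediately

    # contrarian(contrarian) contradicts any answer halts() could give, so no
    # total, correct halts() exists; Rice's theorem extends the same diagonal
    # argument to every nontrivial semantic property checker.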

nonameiguess|2 years ago

I guess I should add, because so few people do: this is what I believe, but it's entirely possible I'm wrong, so by all means, MIRI, keep trying. If you'd asked anyone in the world except three men before 1975 whether public key cryptography was possible, they'd have said no, but here we are. Wow me with your math.

CamperBob2|2 years ago

"AI is no different than proliferating nuclear weapons"

I mean, once the discussion goes THIS far off the rails of reality, where do we go from here?

Mistletoe|2 years ago

Can someone outline how AI could actually harm us directly? I don’t believe for a second the sci-fi-novel nonsense about self-replicating robots that we can’t unplug. My Roomba can’t even do its very simple task without getting caught on the rug. I don’t know of any complicated computing cluster or machine that wouldn’t implode without human intervention on an almost daily basis.

If we are talking about AI stoking human fears and weaknesses to make them do awful things, then ok I can see that and am afraid we have been there for some time with our algorithms and AI journalism.

hyeonwho22|2 years ago

Why not? It has already been shown that AI can be (mis)used to identify good candidates for chemical weapons [1]. Next in the pipeline is obviously some religious nut (who would not otherwise have the capability) using it to design a virus that doesn't set off alarms at the gene synthesis / custom construct companies, and then learning to transfect it.

More banally, state actors can already use open source models to efficiently create misinformation. It took, what, 60,000 votes to swing the US election in 2016? Imagine what astroturfing can be done with 100x the labor thanks to LLMs.

[1] https://dx.doi.org/10.1038/s42256-022-00465-9

chasd00|2 years ago

AI isn't anything like nuclear weapons. One possible analogy I can draw is how scientists generally agreed to hold off on attempting to clone a human. Then that one guy in China did, and everyone got on his case so badly that he hasn't done it again (that we know of). In the best case, I could see AI regulation taking that form, as in: release a breakthrough, get derided so much you leave the field. God, what a sad world to live in.