chakalakasp|2 years ago
https://www.lesswrong.com/posts/TxcRbCYHaeL59aY7E/meditation...
Yann LeCun is one of the loudest AI open-source proponents right now (which of course jibes with Meta's very deliberate open-source stab at OpenAI). And when you listen to smart guys like him talk, you realize that even he doesn't really grasp the problem (or if he does, he pretends not to).
jph00|2 years ago
"""
But what we are seeing too often is a calorie-free media panic where prominent individuals — including scientists and experts we deeply admire — keep showing up in our push alerts because they vaguely liken AI to nuclear weapons or the future risk from misaligned AI to pandemics. Even if their concerns are accurate in the medium to long term, getting addicted to the news cycle in the service of prudent risk management gets counterproductive very quickly.
## AI and nuclear weapons are not the same
From ChatGPT to the proliferation of increasingly realistic AI-generated images, there’s little doubt that machine learning is progressing rapidly. Yet there’s often a striking lack of understanding about what exactly is happening. This curious blend of keen interest and vague comprehension has fueled a torrent of chattering-class clickbait, teeming with muddled analogies. Take, for instance, the pervasive comparison likening AI to nuclear weapons — a trope that continues to sweep through media outlets and congressional chambers alike.
While AI and nuclear weapons are both capable of ushering in consequential change, they remain fundamentally distinct. Nuclear weapons are a specific class of technology developed for destruction on a massive scale, and — despite some ill-fated and short-lived Cold War attempts to use nuclear weapons for peaceful construction — they have no utility other than causing (or threatening to cause) destruction. Moreover, any potential use of nuclear weapons lies entirely in the hands of nation-states. In contrast, AI covers a vast field ranging from social media algorithms to national security to advanced medical diagnostics. It can be employed by both governments and private citizens with relative ease.
"""
Let's stop contributing to this "calorie-free media panic" with such specious analogies.
jay_kyburz|2 years ago
I think the true fear is that in an AI age, humans are not "useful" and the market and economy will look very different. With AI growing our food, clothing us, building us houses, and entertaining us, humans don't really have anything to do all day.
chakalakasp|2 years ago
It’s no different than inviting an advanced alien species to visit. Will it go well? Sure hope so, because if they don't want it to go well, it won't be our planet anymore.
nonameiguess|2 years ago
What does that buy us? An extra decade?
I don't know where this leaves us. If you're in the MIRI camp, believing AI has to lead to a runaway intelligence explosion and unfathomable godlike abilities, I don't see a lot of hope. If you believe that is inevitable, then as far as I'm concerned, it truly is inevitable. First, because I think formally provable alignment of an arbitrary software system with "human values," however nebulously you might define that, is fundamentally impossible; but even if it were possible, it's also fundamentally impossible to guarantee in perpetuity that every implementation of a system will forever adhere to your formal proof methods. For 50 years, we haven't even been able to get developers to consistently use strnlen. As far as I can tell, if sufficiently advanced AI can take over its light cone and extinguish all value from the universe, or whatever they're up to now on the worry scale, then it will do so.
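(For anyone who missed the strnlen aside: here is a minimal C sketch of the point, using a hypothetical buffer of my own. strlen keeps scanning until it finds a NUL byte, so an unterminated buffer means an out-of-bounds read; strnlen takes an explicit limit, yet decades on, the unbounded version still shows up everywhere.)

```c
#define _POSIX_C_SOURCE 200809L  /* for the strnlen() declaration */
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Hypothetical buffer: 8 bytes of data, no NUL terminator. */
    char buf[8] = {'h', 'e', 'l', 'l', 'o', ' ', 'a', 'i'};

    /* strlen(buf) would keep scanning past the end of buf looking for a
     * '\0' that isn't there -- undefined behavior, exactly the class of
     * bug the bounded variant was meant to retire. */

    /* strnlen() never reads more than the caller-supplied limit. */
    size_t n = strnlen(buf, sizeof buf);
    printf("strnlen reports %zu bytes\n", n);  /* prints 8 */
    return 0;
}
```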
CamperBob2|2 years ago
I mean, once the discussion goes THIS far off the rails of reality, where do we go from here?
Mistletoe|2 years ago
If we are talking about AI stoking human fears and weaknesses to make people do awful things, then OK, I can see that, and I'm afraid we have been there for some time with our algorithms and AI journalism.
hyeonwho22|2 years ago
More banally, state actors can already use open-source models to create misinformation efficiently. It took, what, 60,000 votes to swing the US election in 2016? Imagine what astroturfing can be done with 100x the labor, thanks to LLMs.