top | item 46412817

AlexErrant | 2 months ago

This is a deeply unserious book. It gives no concrete outline that leads to extinction. I agree with the overall premise that IFF we give inscrutable black boxes the ability to self-replicate, build their own data centers, and generate their own power, we're doomed. However, I see no hint that people (or governments) will give black boxes complete autonomy with no safeguards or kill switches.

Frankly, if we give black boxes the ability to manipulate atoms with no oversight, we _deserve_ to go extinct. The first thing we should do if we achieve AGI is to take it apart to see how it works (to make it safe). I believe that's one of the first things a frontier lab will do because it's our nature as curious monkeys.


Imustaskforhelp | 2 months ago

> Frankly, if we give black boxes the ability to manipulate atoms with no oversight, we _deserve_ to go extinct.

Well, we are giving them the ability to manipulate all aspects of a computer (i.e., giving them computer access), and we all know how that went (spoiler, or maybe not much of one for those who know: NOT GOOD).

For the uninitiated, Rob Pike goes nuclear over GenAI: https://news.ycombinator.com/item?id=46392115

and Rob Pike got spammed with an AI slop "act of kindness": https://news.ycombinator.com/item?id=46394867

AlexErrant | 2 months ago

Hm, perhaps I was unclear.

AI absolutely is capable of doing damage, and _is_ currently doing damage: perpetuating inequality, generating fake news, violating privacy, raising questionable IP/rights issues, etc. These are more pressing concerns than the idea that someday we will give AI the ability to manufacture nano-mosquitos that will poison us all, as Yudkowsky suggested on a recent podcast. He's so busy fantasizing about sci-fi that he's lost touch with the damage AI is currently doing.