AlexErrant | 2 months ago
Frankly, if we give black boxes the ability to manipulate atoms with no oversight, we _deserve_ to go extinct. The first thing we should do if we achieve AGI is to take it apart to see how it works (to make it safe). I believe that's one of the first things a frontier lab will do because it's our nature as curious monkeys.
Imustaskforhelp | 2 months ago
Well, we're already giving them the ability to manipulate all aspects of a computer (aka giving them computer access), and we all know how that went (spoiler, or maybe not so much of a spoiler for those who know: NOT GOOD).
For the uninitiated, Rob Pike goes nuclear over GenAI: https://news.ycombinator.com/item?id=46392115
and then Rob Pike got spammed with an AI-slop "act of kindness": https://news.ycombinator.com/item?id=46394867
AlexErrant | 2 months ago
AI absolutely is capable of doing damage, and _is_ currently doing damage: perpetuating inequality, generating fake news, violating privacy, raising questionable IP/rights issues, etc. These are more pressing than the idea that someday we'll give AI the ability to manufacture nano-mosquitos that poison us all, as Yudkowsky suggested on a recent podcast. He's so busy fantasizing about sci-fi that he's lost touch with the damage it's doing right now.