top | item 38757139

MicolashKyoka | 2 years ago

I may not personally like it, but it's the pragmatic thing to do if you're leading OpenAI. The gov will eventually get involved; better to get in front of it.

However, no monopoly or anything of the sort will form: our society's institutions wouldn't be able to properly enforce one, it's too easy for weights to leak, and there's too much alpha for people not to fight back against it. Not worried.

The AI safety crowd's problem is that they're focused on a hypothetical entity that will take over humanity, imagining all sorts of ridiculous la-la-land scenarios (some of their "thought leaders" even advocating strikes on datacenters to save humanity???), when the real risk is criminals/terrorists being empowered by AI's cognitive ability. For that, we'll need the gov involved one way or another.

Geisterde | 2 years ago

I appreciate your attitude around the infeasibility of monopolizing software, and agree with you on that and on how hysterically people are acting about AI.

I can't picture a case where charges would need to be brought under a novel framework that couldn't be captured by existing laws. If you deploy a computer virus, that's a crime; if you hack someone's bank account, that's a crime; if you slander someone, that's a crime.

I'm personally not convinced that any of the public efforts around AI safety have been honest. If these people were serious then, like you say, they wouldn't be chasing evil sci-fi AI overlords. I can think of some wonderful use cases to counter things like AI-generated fakes; I'm actually uncomfortably surprised that no social media company has added a banner below every picture saying whether scans indicate it was deepfaked. Twitter could scan a picture when you post it and show what a variety of algorithms think. That would have a real shot at improving information asymmetry at the social level.
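The "show what a variety of algorithms think" idea could be sketched roughly like this. Everything here is hypothetical: the detector functions are stand-in stubs returning fixed scores, and the names `detector_a`/`scan_image` are invented for illustration; a real deployment would call actual deepfake-detection models.

```python
# Hypothetical sketch of the multi-detector banner proposal above.
# Each detector returns a score in [0, 1]: the (pretend) probability
# that the image is synthetic. These stubs use fixed values.

def detector_a(image_bytes: bytes) -> float:
    return 0.82

def detector_b(image_bytes: bytes) -> float:
    return 0.64

def detector_c(image_bytes: bytes) -> float:
    return 0.30

DETECTORS = {"model-A": detector_a, "model-B": detector_b, "model-C": detector_c}

def scan_image(image_bytes: bytes, threshold: float = 0.5) -> str:
    """Run every detector and render one banner line for the post."""
    scores = {name: fn(image_bytes) for name, fn in DETECTORS.items()}
    flagged = sum(1 for s in scores.values() if s >= threshold)
    parts = ", ".join(f"{name}: {score:.0%}" for name, score in scores.items())
    return f"{flagged}/{len(scores)} detectors flag this image as possibly synthetic ({parts})"

print(scan_image(b"\x89PNG..."))
# -> 2/3 detectors flag this image as possibly synthetic (model-A: 82%, model-B: 64%, model-C: 30%)
```

Showing the per-model spread rather than one yes/no verdict is the point: the detectors will disagree, and surfacing that disagreement is itself useful information.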