kshacker | 13 days ago
We built phone lines and got Spam and "Do Not Call" registries.
We built the Internet and got Ads, Scams, and Spoofing.
We built Google Search and got SEO gaming.
We built Facebook and it was Hijacked to influence elections.
We built AirTags to track our keys, and people used them for Stalking.
We built High-Frequency Trading and got Flash Crashes.
We built Encryption to protect data and got Ransomware.
We built Engagement Algorithms and got a Mental Health Crisis.
We built Planes, and people flew them into buildings. We only stay in the air today because of Rigorous Debugging: maintenance, reinforced doors, and intense security.
We built Globalization and got Offshore Scammers calling to "unblock" our Social Security numbers.
Are we going to speed-run the age of AI without someone trying to hijack it, especially with the "move fast and break things" mentality?
I am sure teams are working on it. State actors and non-state actors alike. Let's check the headlines in 2028.
AI will need "analyze plan" and "analyze query" levels of detail as safeguards—similar to voter-verified paper ballots—but with millions of queries running every minute, how do we keep up?
entech | 13 days ago
Unfortunately, a lot of the guardrails are developed (or rather refined) only in response to bad outcomes actually occurring.
I feel that we are much better at dealing with obvious physical harms, like planes being hijacked or viruses damaging your data, than with higher-level impacts that are not clearly felt and where you cannot hold anyone accountable.
The fact that there are not even basic regulations around AI is truly mind-boggling. We simply acknowledge phenomena like AI psychosis and suicidal ideation and move on. We ban drugs and install barriers on bridges, but somehow if AI causes it, it's seemingly OK. Let's just hope Sam and Dario care enough to fix it.
kshacker | 12 days ago