yxre|2 years ago

Another article conflating AGI risks with ML risks. AGI has catastrophic possibilities. ML risks can range from accidents to annoyances.

Other than that, good insights from someone who works with self-driving cars.

mlyle|2 years ago

I don't think self-driving cars are a major risk for catastrophe.

But I'm also not sure we can say, in general, that ML is safe to the point that we can rule out catastrophe. Probably not-- but there are a lot of things we can't exclude. What if it completely poisons our discourse and ability to govern? What if it breaks labor markets and leads to profound unrest? What if it just plain screws up resource allocation and leads to famine? etc.

Ordinary mechanization and industrialization were pretty bad in a whole lot of ways (and the jury is still out on whether we will thread the needle and be OK or whether it will lead to catastrophe). ML and automation could arguably be a fair bit worse.

yxre|2 years ago

For the most part, ML doesn't control high-risk systems without a human in the loop. The worst cases for ML so far have been stock market flash crashes from algorithmic trading.

It's entirely possible that an update to self-driving cars' algorithms causes a day of chaos as the cars lose control and crash. That's about the worst-case scenario.

I agree that the secondary effects of ML systems are going to be far worse than the primary ones. We'll just have to see how it goes.