My reason for posting is that I was appalled by this interview, which I think was quite irresponsible. I've lost all respect for Lex Fridman for playing the hapless dupe "just asking questions" and not pushing back harder.
Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368
I'm over halfway through it, and it seems to me that Lex just can't wrap his mind around the warning that Eliezer is trying to give him. He's so in love with AI that he just can't fathom how things could go wrong.
I'm convinced the threat is real, but have no idea what the timeline is. I hope that, as with most things, we'll skate by, stop calling it AI once it happens, and treat it like any other tool. But I strongly doubt that will happen.
I suspect what will actually happen is that peak oil will catch us off guard, and we won't have the spare power available to train GPT7, and that will avert the singularity.
Having finished the episode, I think it's quite clear that Lex just doesn't understand the argument, or doesn't want to. He's so used to the idea of falling in love with an AI that he can't see the danger.
I see the danger; let me give an analogy.
What if, according to the laws of physics, it were possible to make a thermonuclear weapon out of beach sand using a microwave oven?
That's something so absurd that we'd never figure it out, but AGI could. That scale of dangerously destabilizing knowledge could show up at any time from a superintelligent AGI.
It's bad enough that nation-states have the resources to make civilization-ending weapons. I think AGI could super-empower those with access to it.
---
On the other hand, what if it were possible to make unlimited clean energy using beach sand, a microwave oven, and some whiskey as a catalyst? AGI could make that future possible as well.