xondono | 8 months ago
By the same logic, we should worry about the sun not coming up tomorrow, since we know the following to be true:
- The sun consumes hydrogen in nuclear reactions all the time.
- The sun has a finite amount of hydrogen available.
There are a lot of unjustifiable assumptions baked into those axioms, such as that we’re anywhere close to superintelligence, or that the sun is anywhere close to running out of hydrogen.
AFAIK we haven’t even seen “AI trying to escape”, we’ve seen “AI roleplays as if it’s trying to escape”, which is very different.
I’m not even sure you can create a prompt scenario without the prompt itself biasing the response toward faking an escape.
I think it’s hard at this point to maintain the claim that “LLMs are intelligent”; they’re clearly not. They might be useful, but that’s another story entirely.
LordDragonfang | 8 months ago
Nowhere in my post did I imply a timeline for this. The first predicate of the argument is when we eventually do develop ASI. You can make plans for that, same as you can make plans for when the sun eventually runs out of hydrogen. The difference is, we can look at the sun and say that at the rate it's burning, we've got maybe billions of years, and then look at AI improvements, extrapolate, and assume we've got less than 100 years. If we had any indication the sun was going to nova in 100 years, people would be way more worried.
binary132 | 8 months ago