vampirical | 2 years ago

Just a bystander, but you just quoted this:

> My whole premise is that it doesn’t have better than human intelligence. Everyone glosses over how we go from ChatGPT 4 failing its own unit tests to geometrically self-improving hyper-intelligence.

Then did exactly what the quote is talking about, assuming we achieve better-than-human intelligence:

> But you seem to think there's some barrier that means that even if it was a better-than-human programmer (implicitly for general-intelligence reasons), it wouldn't be able to geometrically self-improve.

The question is how exactly we go from below-human intelligence to above-human intelligence. Without a clear path to that reality, making trade-offs to protect against low-probability worst-case scenarios that require it doesn’t look like a good deal.

The reality that, right now, human-level intelligences have a hard time even keeping these below-human-intelligence systems operating seems like a useful checkpoint. Maybe we can hold off on doing distasteful things like consolidating control until they can at least wipe their own bottoms, as it were.

lmm | 2 years ago

> Then did exactly what the quote is talking about, assuming we achieve better than human intelligence.

Well, sure, because (a) the article covers that, and (b) their whole argument makes no sense if their position is that AI simply can't ever achieve better-than-human intelligence. If the AI is never intelligent enough to improve its own code, then none of the operational-complexity stuff matters!

> The question is how exactly we go from below-human intelligence to above-human intelligence.

The same way we got to the current level of artificial intelligence: old-fashioned hard work by smart people. The point is that, if you accept that slightly-better-than-human AI would be geometrically self-improving, then by the time we have slightly-better-than-human AI it's already too late to do anything.