This view may ultimately be right, but it largely ignores the currently observed trends in capability gains[0], scaling laws[1], and phenomena like grokking[2]. I'm seeing an increasing number of researchers (myself included) moving to stances like: "there is a scary possibility that we may solve all the benchmarks we come up with for AI... without understanding anything fundamentally deep about what intelligence is.
A bummer for those like me who see AI as a fantastic way to unlock deeper insights into human intelligence" @Thom_Wolf [3]
[0] https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-thin...
[1] https://www.lesswrong.com/posts/6Fpvch8RR29qLEWNH/chinchilla...
[2] https://twitter.com/_akhaliq/status/1479265403142553601
[3] https://twitter.com/TacoCohen/status/1584499066410790912