It depends on what you'd consider fundamental. It's true that most of our advances since the mid-80s have been about improving the robustness and the data- and compute-efficiency of the training process through better architectures and learning algorithms, and in principle, in the limit of infinite data and compute, you could have taken a model from 1986 and scaled it up to do everything our current models do. In that sense there have been no "fundamental" advances.

On the other hand, in the limit of infinite size and complexity most mathematical functions can be represented by hash maps, yet to say that there have been no fundamental advances in programming since the invention of hash maps in the fifties would seem like an odd claim to make.
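The hash-map point can be made concrete with a short sketch (a hypothetical illustration of the limiting argument, assuming a finite domain; `square` is just a stand-in for any function):

```python
# In the limit, any function over a finite domain can be "represented"
# by a hash map: tabulate every input/output pair once, then replace
# all computation with lookup. Correct, but it says nothing about how
# you'd ever build the table efficiently -- which is where all the
# actual progress lives.

def square(x: int) -> int:
    return x * x

domain = range(1000)
table = {x: square(x) for x in domain}  # exhaustive tabulation

# Lookup now reproduces the function exactly on its domain.
assert all(table[x] == square(x) for x in domain)
```

The analogy to scaling a 1986-era model is the same: representability in the limit is cheap; the advances are in making the construction tractable.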