clickok|5 years ago
One thing that should be learned from the bitter lesson is
the great power of general purpose methods, of methods that
continue to scale with increased computation even as the
available computation becomes very great.
The point isn't that improvements in our algorithms are unnecessary or unhelpful, but rather that the algorithms we should focus on are those capable of scaling with arbitrary amounts of compute/data.
Neural networks, for example, show a nearly constant rate of improvement (given an appropriate architecture) as more resources are added.
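That "constant rate of improvement" is usually meant on a log scale: empirically, loss tends to fall as a power of compute, so each multiplicative increase in resources buys a roughly fixed improvement. A minimal sketch of that idea (the constant and exponent below are made up for illustration, not measured values):

```python
import math

def scaling_law_loss(compute, a=100.0, alpha=0.05):
    """Hypothetical power-law scaling: L(C) = a * C^(-alpha).

    a and alpha are illustrative placeholders, not fitted constants.
    """
    return a * compute ** (-alpha)

# On a log-log plot this is a straight line: every 10x increase in
# compute reduces log-loss by the same fixed amount, which is what
# "constant rate of improvement" means in practice.
budgets = [10 ** k for k in range(3, 9)]
for c in budgets:
    print(f"compute={c:>12,}  loss={scaling_law_loss(c):.3f}")
```

The key property is not the particular constants but the shape: no plateau appears as compute grows, which is exactly the "keeps scaling" criterion the comment describes.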
KKKKkkkk1|5 years ago
clickok|5 years ago
Rich used to be very bullish on neural nets, then somewhat dismissive of them (due to the fragility/inadequacy of FCNs), and then increasingly enthusiastic as renewed interest demonstrated that those problems could be overcome, e.g., through better initialization, better training, and (as you note) different architecture choices. His main concern was whether a method could keep working as more resources became available; otherwise you would, tautologically, end up with something short of true artificial intelligence.
The important thing is that the technique can scale with increasing data or compute without hitting a hard or soft limit.