It’s funny, because while technically correct, you may turn out to be fundamentally wrong that there’s a limit to the value we can find in parallel compute. After all, we may have line of sight to AGI through parallel scale alone.
What you've identified is known as Gustafson's Law. If you can keep solving larger and larger problems, your solution time doesn't necessarily go down, but "speedup" is potentially uncapped.
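To make that concrete, here's a minimal sketch of Gustafson's Law next to Amdahl's Law (the function names and the 5% serial fraction are just illustrative assumptions, not from the thread). Gustafson's scaled speedup S(N) = N − s·(N − 1) grows without bound as you add processors, because the problem grows to fill them; Amdahl's fixed-size speedup caps at 1/s:

```python
def gustafson_speedup(n_procs: int, serial_fraction: float) -> float:
    """Scaled speedup: the problem grows with the machine, so the
    parallel part fills all n_procs and only the serial part drags."""
    return n_procs - serial_fraction * (n_procs - 1)

def amdahl_speedup(n_procs: int, serial_fraction: float) -> float:
    """Fixed-size speedup: the problem stays the same, so speedup
    is capped at 1 / serial_fraction no matter how many processors."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# With a 5% serial fraction, Amdahl plateaus near 20x while
# Gustafson keeps climbing as the problem scales with the machine.
for n in (10, 100, 1000):
    print(f"N={n}: Gustafson {gustafson_speedup(n, 0.05):.2f}x, "
          f"Amdahl {amdahl_speedup(n, 0.05):.2f}x")
```

The punchline is in the loop: at 1000 processors Amdahl gives under 20x, while Gustafson's scaled speedup is over 950x, which is the "potentially uncapped" part.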
eslaught|2 years ago
Of course there may be some practical limit to Gustafson's Law, but I don't think we've found it yet, at least in many scientific domains.