ethn | 10 months ago
> This is irrelevant for AI, because people throw more hardware at bigger problems
AGI is a fixed problem, namely Solomonoff induction. Further, Amdahl's law is not a limitation confined to software, nor to any particular supercomputer; it constrains any parallel computation.
Both training and inference rely on parallelization, and LLM inference has multiple serialization points per layer. Végh (2019) quantifies how Amdahl's law limits the performance of large neural networks [1]. He further states:
"A general misconception (introduced by successors of Amdahl) is to assume that Amdahl's law is valid for software only". It applies to a neural network just as it applies to the problem of self-driving cars.
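As a minimal sketch of the serial-fraction limit at issue here (the function name and the example parallel fraction of 95% are my own illustration, not figures from the paper):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: speedup with parallel fraction p on n workers.

    The serial fraction (1 - p) bounds speedup at 1 / (1 - p)
    no matter how many workers are added.
    """
    return 1.0 / ((1.0 - p) + p / n)

# With 95% of the work parallelizable, speedup can never exceed 20x,
# even with effectively unlimited hardware.
for n in (8, 64, 1024):
    print(f"{n:>5} workers: {amdahl_speedup(0.95, n):.2f}x")
```

This is why "throw more hardware at it" flattens out: going from 64 to 1024 workers here buys roughly 15.4x to 19.6x, still short of the 20x ceiling set by the serial fraction.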
> These two sentences contradict each other
There is no contradiction, only a misunderstanding of what "eviscerates" means; and even granting that incorrect definition and the threshold test it yields, the point still applies.
1. https://pmc.ncbi.nlm.nih.gov/articles/PMC6458202/
Further reading on Amdahl's law w.r.t LLM:
2. https://medium.com/@TitanML/harmonizing-multi-gpus-efficient...
3. https://pages.cs.wisc.edu/~sinclair/papers/spati-iiswc23-tot...
lud_lite | 10 months ago
ethn | 10 months ago
However, in this article I contend that those limitations have posed little adversity in the field, given the success of the latest models. As a result, it may be a bit premature to be concerned about them.