kuang_eleven | 3 years ago
In the majority of use cases, your runtime is dominated by I/O; for the rest, either the hot loops are low-level functions written in other languages and wrapped in Python (numpy, etc.), or you genuinely have a problem Python is a terrible fit for (e.g. low-level graphics programming or embedded).
Why bother making a new variant language with limitations and no real benefit?
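A minimal illustration of the wrapped-low-level-code point (my own example, not from the thread): summing a million floats with an interpreted Python loop versus the builtin `sum`, whose loop runs inside the C implementation.

```python
import time

data = [float(i) for i in range(1_000_000)]

# Pure-Python loop: every iteration executes interpreter bytecode.
def py_sum(xs):
    total = 0.0
    for x in xs:
        total += x
    return total

t0 = time.perf_counter()
a = py_sum(data)
t1 = time.perf_counter()
b = sum(data)  # the loop happens in C; the interpreter is entered once
t2 = time.perf_counter()

print(f"pure Python: {t1 - t0:.4f}s  builtin sum: {t2 - t1:.4f}s")
assert a == b == 499_999_500_000.0
```

On CPython the builtin version is typically several times faster for identical arithmetic, which is why pushing the loop into numpy or a C extension usually erases most of the "Python is slow" cost.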
lylejantzi3rd | 3 years ago
https://twitter.com/id_aa_carmack/status/1503844580474687493...
kuang_eleven | 3 years ago
More specifically, framework overhead (Python + PyTorch in this case) often becomes the bottleneck when tensors are comparatively small [1]. That article also argues that the overhead largely doesn't scale with problem size, so it would only matter when running very small tensor operations under a very tight latency budget. That's rare in practice, but if it does describe your situation, then sure, that's a good reason not to use Python as-is!
1. https://horace.io/brrr_intro.html
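To make the scaling claim concrete, here is a toy cost model (the constants are invented for illustration, not measured): a fixed per-op dispatch overhead plus work proportional to tensor size.

```python
# Hypothetical numbers purely for illustration.
OVERHEAD_US = 10.0      # fixed Python/framework dispatch cost per op
PER_ELEMENT_US = 1e-5   # cost of the actual math, per tensor element

def op_time_us(n_elements: int) -> float:
    """Total time for one tensor op under the toy model."""
    return OVERHEAD_US + PER_ELEMENT_US * n_elements

for n in (1_000, 100_000, 10_000_000):
    total = op_time_us(n)
    print(f"n={n:>10,}  total={total:9.2f}us  "
          f"overhead={OVERHEAD_US / total:6.1%}")
```

The fixed overhead dominates tiny ops (nearly 100% of the total at n=1,000) and shrinks to a rounding error for large ones, which matches the claim: per-op overhead only hurts when the tensors are small and the latency budget is tight.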
pdonis | 3 years ago
In any case, if your program is waiting on network or file I/O, who cares whether the CPU could have executed one FLOP's worth of bytecode or 9.75 million FLOPs' worth of native instructions in the meantime?
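This point can be demonstrated directly by comparing wall-clock time with CPU time across a simulated I/O wait. The sketch below uses `time.sleep` as a stand-in for a blocking network or disk call.

```python
import time

wall_start = time.perf_counter()
cpu_start = time.process_time()

time.sleep(0.2)  # stand-in for waiting on a socket or file

wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start

print(f"wall: {wall:.3f}s  cpu: {cpu:.3f}s")
# Nearly all of the elapsed time was spent blocked, not computing,
# so the interpreter's execution speed was irrelevant here.
assert cpu < wall
```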
chlorion | 3 years ago
You can open something like htop right now and it will show how much CPU time each process on your system has actually used. On my system, the vast majority of processes spend most of their time doing nothing.
Is that true for all software? Of course not! My compositor, for example, spends a lot of time doing software compositing, which is fairly expensive, and the stats show that quite clearly.
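For reference, htop derives those per-process figures from `/proc/<pid>/stat` on Linux. A minimal, Linux-only sketch reading our own process's entry (field positions per proc(5)):

```python
import os

# Split after the parenthesised comm field, since the process name
# may itself contain spaces or parentheses.
with open("/proc/self/stat") as f:
    rest = f.read().rsplit(")", 1)[1].split()

# rest[11] and rest[12] are utime and stime (stat fields 14 and 15),
# measured in clock ticks.
hz = os.sysconf("SC_CLK_TCK")
utime_s = int(rest[11]) / hz
stime_s = int(rest[12]) / hz
print(f"user CPU: {utime_s:.2f}s  system CPU: {stime_s:.2f}s")
```

For a mostly-idle process those two numbers stay near zero no matter how long it has been running, which is exactly what the htop columns show.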