top | item 34191320


kuang_eleven | 3 years ago

Look, Carmack is a genius in his own corner, but he is taking that quote vastly out of context. The actual linked article[1] is quite fascinating, and does point to overhead costs as a potential bottleneck, but that specific quote has more to do with GPUs being faster than CPUs than with anything about Python in particular.

More specifically, overhead (Python + PyTorch in this case) is often a bottleneck when tensors are comparatively small. The article also claims that overhead largely doesn't scale with problem size, so it would only matter when running very small tensor operations under a very tight latency requirement. This is... rare in practice, but if it does occur, then sure, that's a good reason not to use Python as-is!
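You can see the shape of that argument in a quick timing sketch. This uses NumPy rather than PyTorch purely so it runs anywhere (the function names and sizes here are my own illustration, not from the article): the per-call dispatch cost from Python is roughly fixed, so it dominates for tiny matrices and vanishes into the noise for large ones.

```python
import time
import numpy as np

def avg_matmul_time(n, reps):
    """Average wall-clock time of one n x n matmul over `reps` calls."""
    a = np.ones((n, n), dtype=np.float32)
    b = np.ones((n, n), dtype=np.float32)
    start = time.perf_counter()
    for _ in range(reps):
        a @ b  # each call pays the same Python/dispatch overhead
    return (time.perf_counter() - start) / reps

# Tiny tensors: almost all of the per-call time is overhead, not math.
small = avg_matmul_time(4, reps=2000)

# Large tensors: the actual compute dwarfs the fixed per-call overhead.
large = avg_matmul_time(256, reps=50)

print(f"4x4:     {small * 1e6:.2f} us/call")
print(f"256x256: {large * 1e6:.2f} us/call")
```

The total time per call grows with problem size, but the overhead component stays roughly constant, which is why it only bites when the tensors are tiny and the latency budget is tight.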

1. https://horace.io/brrr_intro.html
