kuang_eleven | 3 years ago
More specifically, overhead (Python + PyTorch in this case) is often a bottleneck when the tensors involved are comparatively small. It also claims that overhead largely doesn't scale with problem size, so the overhead would only matter when running very small tensor operations in PyTorch under a very tight latency requirement. This is... rare in practice, but when it does occur, then sure, that's a good reason not to use Python as-is!
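A quick way to see the effect: per-call dispatch overhead is roughly constant, while compute time scales with tensor size, so tiny ops are overhead-dominated and large ops are compute-dominated. A minimal sketch of that comparison, using NumPy as a stand-in (the same per-call overhead argument applies to PyTorch; sizes and iteration counts are arbitrary choices here):

```python
import timeit
import numpy as np

# Per-call overhead (interpreter + dispatch) is roughly fixed,
# while the matmul's compute cost grows with the matrix size.
small = np.random.rand(4, 4)
large = np.random.rand(1024, 1024)

# Average per-call time for a tiny matmul: mostly overhead.
t_small = timeit.timeit(lambda: small @ small, number=1000) / 1000
# Average per-call time for a big matmul: mostly actual compute.
t_large = timeit.timeit(lambda: large @ large, number=10) / 10

print(f"4x4 matmul:       {t_small * 1e6:.1f} us per call")
print(f"1024x1024 matmul: {t_large * 1e6:.1f} us per call")
```

The tiny matmul's per-call time barely changes if you shrink the matrix further, which is the "overhead doesn't scale with problem size" point: below some size, you're paying for the call, not the math.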