top | item 32641993

37ef_ced3 | 3 years ago

For small-scale transformer CPU inference you can use, e.g., Fabrice Bellard's https://bellard.org/libnc/

Similarly, for small-scale convolutional CPU inference, where you only need, say, 20 ResNet-50 inferences (batch size 1) per second per CPU (cloud CPUs cost about $0.015 per hour), you can use an inference engine designed for that purpose, e.g., https://NN-512.com

You can expect about 2x the performance of TensorFlow or PyTorch.
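As a rough sanity check on the figures above, here is a back-of-envelope sketch (assuming the stated 20 batch-1 ResNet-50 inferences per second and $0.015 per CPU-hour):

```python
# Back-of-envelope cost per inference for batch-1 ResNet-50 on a cloud CPU.
# Inputs are the figures from the comment: 20 inferences/s, $0.015 per CPU-hour.
inferences_per_second = 20
dollars_per_hour = 0.015

inferences_per_hour = inferences_per_second * 3600   # 72,000 per CPU-hour
cost_per_inference = dollars_per_hour / inferences_per_hour
cost_per_million = cost_per_inference * 1_000_000

print(f"${cost_per_million:.2f} per million inferences")  # about $0.21
```

So at those rates, a million ResNet-50 inferences costs on the order of twenty cents of CPU time.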


tombert | 3 years ago

Is there a thing that Fabrice Bellard hasn't built? I had no idea that he was interested in something like machine learning, but I guess I shouldn't have been surprised because he has built every tool that I use.