sabhiram | 2 years ago
We design an inference accelerator which more or less accomplishes this by quantizing input tensors into logarithmic space. This allows the multiplications (especially in convolution) to be optimized into very simple adders. This (and a few other tricks) has a very dramatic impact on how much compute density we achieve while keeping power very low. We keep the tensors in our quantized space throughout the layers of the network and convert the outputs as required on the way out of the ASIC.
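Not the poster's actual design, but a toy sketch of the underlying idea: if values are stored as (sign, integer log2 exponent), a product becomes an exponent addition plus a sign XOR, so no hardware multiplier is needed. The function names and the 5-bit exponent width here are illustrative assumptions.

```python
import numpy as np

def log_quantize(x, bits=5):
    """Quantize to logarithmic space: keep the sign, round log2(|x|)
    to the nearest integer, and clip to a signed `bits`-wide exponent.
    (Illustrative; real accelerators use more careful rounding.)"""
    sign = np.sign(x)
    exp = np.clip(np.round(np.log2(np.abs(x) + 1e-30)),
                  -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return sign, exp.astype(int)

def log_mul(a, b):
    """Multiply two log-quantized values using only an add:
    exponents add; signs multiply (a XOR in hardware)."""
    (sa, ea), (sb, eb) = a, b
    return sa * sb, ea + eb

def dequantize(sign, exp):
    """Convert back from log space to a real value."""
    return sign * (2.0 ** exp)

# 8 * 4 = 32 computed as exponent addition 3 + 2 = 5
a = log_quantize(np.array(8.0))
b = log_quantize(np.array(4.0))
s, e = log_mul(a, b)
print(dequantize(s, e))  # 32.0
```

For inputs that are not exact powers of two the rounding step introduces quantization error, which is why (as the comment notes) this approach needs specialized training to hold task-level accuracy.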
We achieve impressive task level performance, but this requires some specialized training and model optimizations.
Very cool to see ideas like this propagate more into the mainstream.
KRAKRISMOTT | 2 years ago
sabhiram | 2 years ago
kragen | 2 years ago
discrete convolution is cₙ = Σᵢaᵢbₙ₋ᵢ
in a matrix multiplication cᵢⱼ = Σₖ aᵢₖbₖⱼ, the indexes into the input matrices are only the output indexes and the dummy variable themselves; they are never formed from sums or differences of indices and dummy variables the way the n−i is in convolution
however, convolution is a matrix multiplication, specifically multiplication by the circulant matrix of the convolution kernel
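A quick numerical check of that claim, as a sketch (the `circulant` helper here is hypothetical, not from any library): circular convolution by a kernel k gives the same result as multiplying by the circulant matrix whose entry (n, i) is k[(n−i) mod N].

```python
import numpy as np

def circulant(k):
    """Build the circulant matrix C with C[n, i] = k[(n - i) mod N]."""
    n = len(k)
    first_row = np.roll(k[::-1], 1)  # first_row[i] = k[-i mod n]
    return np.stack([np.roll(first_row, i) for i in range(n)])

a = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, 2.0, 0.0, 0.0])

# direct circular convolution: c[n] = sum_i a[i] * k[(n - i) mod N]
direct = np.array([sum(a[i] * k[(n - i) % 4] for i in range(4))
                   for n in range(4)])

# same result as a matrix-vector product with the circulant of k
via_matrix = circulant(k) @ a
print(np.allclose(direct, via_matrix))  # True
```

Note this is the *circular* case; ordinary (linear) convolution corresponds instead to multiplication by a Toeplitz matrix, or by a circulant after zero-padding.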
hth, hand