sabhiram | 2 years ago

Fascinating paper.

We design an inference accelerator which more or less accomplishes this by quantizing input tensors into logarithmic space. This allows the multiplications (in convolution especially) to be optimized into very simple adders. This (and a few other tricks) has a dramatic impact on the compute density we achieve while keeping power very low. We keep the tensors in our quantized space throughout the layers of the network and convert the outputs as required on the way out of the ASIC.
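
A minimal numpy sketch of the general idea (the encoding and bit width here are illustrative assumptions, not our actual hardware format): store a sign and an integer log2 exponent instead of a linear value, and each product reduces to an integer add of exponents.

    import numpy as np

    # Hypothetical 4-bit log2 encoding; real hardware formats differ.
    def log_quantize(x, bits=4):
        sign = np.sign(x)
        mag = np.abs(x)
        exp = np.round(np.log2(np.where(mag > 0, mag, 1e-30)))
        exp = np.clip(exp, -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
        return sign, exp.astype(np.int8)

    def log_multiply(sa, ea, sb, eb):
        # The "multiplier": multiply signs, add integer exponents.
        return sa * sb, ea + eb

    def log_to_linear(sign, exp):
        return sign * np.exp2(exp.astype(np.float32))

    a = np.random.randn(3, 3).astype(np.float32)
    w = np.random.randn(3, 3).astype(np.float32)
    sa, ea = log_quantize(a)
    sw, ew = log_quantize(w)
    sp, ep = log_multiply(sa, ea, sw, ew)
    # Accumulation still happens in linear space (here, at full precision).
    print(log_to_linear(sp, ep).sum(), (a * w).sum())

The rounding introduced by the log2 encoding is part of what the specialized training mentioned below has to absorb.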

We achieve impressive task-level performance, but this requires some specialized training and model optimizations.

Very cool to see ideas like this propagate more into the mainstream.

KRAKRISMOTT | 2 years ago

Isn't matrix multiplication already a convolution? You are rotating the right-hand-side matrix 90 degrees anticlockwise and then convolving it over the LHS matrix from top to bottom.

sabhiram | 2 years ago

The point above regarding convolution had to do specifically with accelerating 3x3-and-larger convolution operations, as the products and the accumulation can be done in a few clock cycles if set up with care and love.
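
As a concrete (software-only, illustrative) picture of that unit, here is the work behind a single 3x3 output tap: nine products feeding one accumulator, which a pipeline can schedule into a few cycles when the nine multiplies run in parallel.

    import numpy as np

    # One output element of a 3x3 convolution (cross-correlation form,
    # as used in CNNs): nine multiply-accumulates.
    def conv3x3_at(img, ker, r, c):
        acc = 0.0
        for di in range(3):
            for dj in range(3):
                acc += img[r + di, c + dj] * ker[di, dj]
        return acc

    img = np.arange(25, dtype=np.float32).reshape(5, 5)
    ker = np.full((3, 3), 1.0 / 9.0, dtype=np.float32)  # box filter
    print(conv3x3_at(img, ker, 1, 1))  # mean of the central 3x3 patch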

kragen | 2 years ago

no, it is not, and i am not

discrete convolution is cₙ = Σᵢ aᵢbₙ₋ᵢ

in a matrix multiplication, (ab)ᵢⱼ = Σₖ aᵢₖbₖⱼ: there is no way in which the indexes into the input matrices are formed from sums or differences of indices and dummy variables

however, convolution is a matrix multiplication, specifically multiplication by the circulant matrix of the convolution kernel
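
a quick numpy check of the circulant claim (circular convolution, so the matrix stays square; all names here are just for the sketch):

    import numpy as np

    # column j of the circulant matrix is the kernel rotated down by j,
    # so C[i, j] = k[(i - j) mod n], and (C @ x)[i] = Σⱼ k[(i-j) mod n] x[j],
    # i.e. the circular convolution of k with x
    def circulant(k):
        return np.stack([np.roll(k, j) for j in range(len(k))], axis=1)

    rng = np.random.default_rng(0)
    k = rng.standard_normal(8)
    x = rng.standard_normal(8)

    via_matrix = circulant(k) @ x
    via_fft = np.fft.ifft(np.fft.fft(k) * np.fft.fft(x)).real

    print(np.allclose(via_matrix, via_fft))  # True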

hth, hand