top | item 22094741


overlords | 6 years ago

Vowpal Wabbit has been doing this 'hashing trick' since the 2000s.

It also has feature interactions, which are the same thing as a layer in transformers (an all-against-all matrix).

So it seems like they are still catching up to where John Langford and crew were over a decade ago.

And, the vowpal wabbit approach is extremely fast to train because it's only doing stochastic gradient descent on a linear function - linear regression. Transformers are much slower to train.
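To make the speed claim concrete, here's a minimal sketch (not VW's actual code) of why a hashed linear model is cheap to train: plain stochastic gradient descent on a sparse linear regression, where each update only touches the features present in the example.

```python
# One SGD step for squared loss on a sparse linear model.
# example maps feature index -> value; weights is a dense array.
def sgd_update(weights, example, label, lr=0.1):
    pred = sum(weights[i] * v for i, v in example.items())
    err = pred - label
    for i, v in example.items():
        weights[i] -= lr * err * v  # only touched features are updated
    return pred

weights = [0.0] * 8
for _ in range(20):  # repeated passes over one example converge quickly
    sgd_update(weights, {1: 1.0, 3: 2.0}, label=1.0)
```

Each step is O(number of nonzero features), with no matrix multiplies at all, which is why this trains orders of magnitude faster than a transformer layer.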

EDIT: Downvoters, please see my last leaf below for why they're effectively the same. The guy responding here seems unfamiliar with all the functionality of vowpal wabbit.


marcinzm|6 years ago

The Google paper's hashing has, as best I can see, nothing to do with the Vowpal Wabbit's 'hashing trick.'

The VW hashing trick is about hashing your input data (ie: words, fields, etc.) into an array to lower storage requirements and deal with novel data at run time.
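That trick can be sketched in a few lines: hash feature strings straight into a fixed-size weight array. (VW itself uses MurmurHash with 2^18 slots by default; md5 here is just an easy stand-in.)

```python
# Sketch of the hashing trick: map arbitrary feature strings into a
# fixed-size weight array. Collisions are tolerated, and a feature never
# seen during training still gets a slot at run time.
import hashlib

NUM_BITS = 18            # VW's default is 18 bits -> 2**18 slots
NUM_SLOTS = 1 << NUM_BITS

def feature_index(feature: str) -> int:
    """Map an arbitrary feature name to one of NUM_SLOTS array indices."""
    digest = hashlib.md5(feature.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SLOTS

weights = [0.0] * NUM_SLOTS

# Even "field=unseen_value" maps somewhere, no vocabulary needed:
for f in ["word=wabbit", "word=vowpal", "field=unseen_value"]:
    weights[feature_index(f)] += 1.0
```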

The Google paper is about ordering the intermediate states of the neural network (ie: vectors) while preserving distance. This is done so you can chunk the resulting ordered list and perform computations on individual chunks (and their neighbors).
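A rough sketch of that mechanism: random-projection LSH gives each vector an integer bucket code, sorting by code puts similar vectors next to each other, and the sorted list is split into chunks that can be processed locally. (Illustrative only; the Reformer's actual scheme differs in detail.)

```python
# Random-projection LSH: each vector's bucket code is the packed sign
# pattern of its projections onto random hyperplanes. Vectors that point
# in similar directions get the same code.
import numpy as np

def lsh_codes(vectors, planes):
    """Pack the signs of random projections into one integer per vector."""
    bits = (vectors @ planes) > 0
    return bits @ (1 << np.arange(planes.shape[1]))

rng = np.random.default_rng(0)
planes = rng.normal(size=(4, 8))   # 8 random hyperplanes in 4-d
vecs = rng.normal(size=(16, 4))

codes = lsh_codes(vecs, planes)
order = np.argsort(codes)          # similar vectors end up adjacent
chunks = np.array_split(order, 4)  # then compute within each chunk
```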

The only thing in common I see is the fact they both use the word hashing.

overlords|6 years ago

They are doing the same thing - using less memory by hashing.

The hashing trick in VW hashes multiple distinct words into one integer; not the same as the Reformer, but similar to how the Reformer puts similar vectors together.

With VW's ngram/skipgram features, you get the same kind of effect: similar strings hash to the same value.

So locality-sensitive hashing is roughly the same thing as ngram/skipgram on strings plus the hashing trick.

overlords|6 years ago

Downvoters, please see

http://matpalm.com/resemblance/simhash/

https://en.wikipedia.org/wiki/SimHash

SimHash, a type of locality-sensitive hashing, uses hash functions on ngrammed data.

That is exactly what Vowpal Wabbit does.
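The claimed combination can be sketched directly: SimHash over character ngrams. Similar strings share most of their ngrams, so their fingerprints agree on most bits.

```python
# SimHash: majority vote over the hash bits of every ngram. Strings that
# share most ngrams get fingerprints with small Hamming distance.
import hashlib

def ngrams(text, n=3):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def simhash(text, bits=32):
    counts = [0] * bits
    for g in ngrams(text):
        h = int(hashlib.md5(g.encode("utf-8")).hexdigest(), 16)
        for b in range(bits):
            counts[b] += 1 if (h >> b) & 1 else -1
    # bit b is set if more ngrams voted 1 than 0 on it
    return sum(1 << b for b in range(bits) if counts[b] > 0)

def hamming(a, b):
    return bin(a ^ b).count("1")
```

Two strings differing in one character share almost all their trigrams, so nearly every bit's majority vote is unchanged.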

overlords|6 years ago

I'm going to write this out more clearly, because I'm still getting downvotes for my correct answer.

Why neural networks? https://en.wikipedia.org/wiki/Universal_approximation_theore...

Can polynomials do this? (Yes) https://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_theo...

What are transformers and attention? https://pathmind.com/wiki/attention-mechanism-memory-network

Attention = polynomial (x^2, x^3, etc.)

Polynomial = interaction. VW flag --interactions

1 layer transformer = xx. (x^2)

2 layer transformer = xxx. (x^3)

3 ... etc
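The VW side of this claimed equivalence is easy to make concrete: quadratic interactions multiply every feature against every other feature, which is the degree-2 "all against all" term. A minimal sketch (feature names are illustrative; VW builds such product features internally via its interaction options):

```python
# "All against all" degree-2 feature interactions: every pair of input
# features becomes a product feature, i.e. the x^2 terms the comment
# equates with one attention layer.
from itertools import combinations_with_replacement

def quadratic_features(x):
    """x maps feature name -> value; add all pairwise product features."""
    out = dict(x)  # keep the linear (degree-1) terms
    for (a, va), (b, vb) in combinations_with_replacement(x.items(), 2):
        out[a + "*" + b] = va * vb
    return out

feats = quadratic_features({"x1": 2.0, "x2": 3.0})
# feats contains: x1, x2, x1*x1, x1*x2, x2*x2
```

Stacking the construction again on its own output gives the degree-3 (x^3) terms, matching the "2 layer = xxx" line above.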

What is reformer? Transformer where LSH is applied.

One type of LSH is SimHash: ngrams of strings, followed by a 32-bit hash.

Vowpal Wabbit -n flag for ngrams.

vw -interact xxx -n2 -n3 and you get ngrams + 32 bit hash doing SGD over a vector.

This vector is equivalent to a 2 layer reformer.

Non-linear activation is not needed because polynomials are already nonlinear.

So vw + interact + ngrams (almost)= reformer encoder. (if reformer uses SimHash, then they are identical).

Transformer/Reformer have an advantage: the encoder-decoder can learn from unlabeled data.

However, you can get similar results from unlabeled data with preprocessing: introduce noise into the data, then treat it as a noise/non-noise binary classification task. (It can even be thought of as reinforcement learning, with the 0-1 labels as the reward, using VW's contextual bandits functionality. This can then do what GANs do: climb from noise to perfection.)
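The noise/non-noise construction can be sketched like this (purely illustrative; the corruption rate and corpus are made up):

```python
# Build a binary dataset from unlabeled text by pairing each genuine line
# (label 1) with a randomly corrupted copy (label 0). The noise rate of
# 0.3 is arbitrary.
import random

random.seed(0)
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def corrupt(text, p=0.3):
    """Replace each character with a random one with probability p."""
    return "".join(random.choice(ALPHABET) if random.random() < p else c
                   for c in text)

unlabeled = ["the quick brown fox", "jumps over the lazy dog"]
dataset = []
for line in unlabeled:
    dataset.append((line, 1))            # genuine text -> positive
    dataset.append((corrupt(line), 0))   # noised copy  -> negative
```

A linear classifier (e.g. VW) trained on this constructed task learns features of "real" text without any human labels.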

junipertea|6 years ago

I guess the simpler proof that they are the same thing would be: Do they work the same?