top | item 28677802

Vectors are over, hashes are the future of AI

66 points | jsilvers | 4 years ago | sajari.com

25 comments


sdenton4|4 years ago

hm. I'd like to believe, but the arguments here seem a bit obtuse.

No one measures vector distance using the hamming distance on binary representations. That's silly. We use L1 or L2, usually, and the binary encoding of the numbers is irrelevant.
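
For reference (my own sketch, not from the thread), here's what "L1 or L2" means on float vectors, versus Hamming distance, which only makes sense on discrete codes such as sign bits:

```python
import numpy as np

a = np.array([0.1, 0.9, -0.3])
b = np.array([0.2, 0.8, -0.1])

# The distances people actually use on float vectors:
l1 = np.abs(a - b).sum()        # Manhattan distance
l2 = np.linalg.norm(a - b)      # Euclidean distance

# Hamming distance counts disagreeing positions in a discrete code,
# e.g. the sign bits of each coordinate:
ham = int(np.count_nonzero((a > 0) != (b > 0)))
```

The binary encoding of the floats never enters into l1 or l2; Hamming only appears once you deliberately binarize.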

It sounds like the LSH is maaaaybe equivalent to vector quantization. In which case this would be a form of regularization, which sometimes works well, and sometimes meh.

mish15|4 years ago

I was part of the team behind the article above. Happy to answer questions.

In terms of accuracy, it totally depends on the resolution needed. We can get >99% of exact-L2 accuracy waaaaay faster with 1/10 of the memory overhead. For what we are doing that is the perfect trade-off.

In terms of LSH, we tried projection hashing and quantization and were always disappointed.

tomnipotent|4 years ago

Goal is to end up with binary encoding so that Hamming distances approximate Euclidean nearest neighbors (so basically L2). Combine with quantized SIFT/GIST for images and you end up with a fast-and-dirty model with decent results.
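
A minimal sketch of that goal (my example, not tomnipotent's code) using random-hyperplane LSH (SimHash), where the Hamming distance between codes tracks the angle between the original vectors, a common stand-in for L2 nearest neighbours:

```python
import numpy as np

rng = np.random.default_rng(0)

def simhash(v, planes):
    # One bit per random hyperplane: which side of the plane does v fall on?
    return (planes @ v > 0).astype(np.uint8)

d, nbits = 64, 32
planes = rng.standard_normal((nbits, d))

x = rng.standard_normal(d)
y = x + 0.05 * rng.standard_normal(d)   # near neighbour of x
z = rng.standard_normal(d)              # unrelated vector

ham_xy = int(np.count_nonzero(simhash(x, planes) != simhash(y, planes)))
ham_xz = int(np.count_nonzero(simhash(x, planes) != simhash(z, planes)))
# ham_xy should almost always come out much smaller than ham_xz
```

The expected fraction of differing bits is proportional to the angle between the two vectors, which is what makes Hamming search on the codes a cheap proxy for the original metric.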

onlyrealcuzzo|4 years ago

How would these ML predictions be explainable?

foxes|4 years ago

I like to speculate about reasons this might or might not make sense, at several levels, though I'm mostly just conjecturing. The fact that everything works is very interesting, but it seems so hard to come up with anything concrete.

You have a map from some high dimensional vector space ~ k^N -> H, some space of hashes. H sort of looks one dimensional. I assume that the interesting geometry of your training data actually lies on a relatively low dimensional subvariety/subset of k^N, so maybe it's not actually that bad? It could be a really twisted and complicated curve.

However, you still need to somehow preserve the relative structure, right? Things that are far apart in k^N need to be far apart in H. It seems like you want the map to be at least approximately an isometry, although there are things like space-filling curves that might achieve this to some degree.
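
As a toy illustration of the space-filling-curve point (my own example): a Z-order (Morton) code interleaves coordinate bits into one integer, and nearby grid points usually, though not always, end up with nearby codes:

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of x and y into one Z-order (Morton) code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)      # x bits go to even positions
        code |= ((y >> i) & 1) << (2 * i + 1)  # y bits go to odd positions
    return code

# Neighbouring points get close codes; distant points get distant codes,
# but the locality preservation is only approximate (it breaks at
# power-of-two boundaries), so it's not a true isometry.
near_a, near_b = morton2d(3, 4), morton2d(3, 5)
far = morton2d(30, 1)
```

This is exactly the "approximate, not exact" locality the parent is gesturing at: good enough for coarse bucketing, not for preserving all pairwise distances.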

Also maybe even though H looks low dimensional, it can actually capture quite a bit (if your data is encoded as a coefficient of a power of 2, you could think of powers of 2 as some sort of basis, so maybe it is also pretty high dimensional).

useful|4 years ago

Contrastive and triplet loss are pretty cool for generating hashes. I'd imagine the trick they are alluding to is rewriting the loss function to be more aware of locality instead of just trying to minimize/maximize distance.
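
For anyone unfamiliar, the standard triplet loss looks like this (a plain-numpy sketch, not whatever the article's authors used):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge on d(a,p) - d(a,n) + margin: zero once the positive is
    already margin-closer to the anchor than the negative is."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same-class example, close to the anchor
n = np.array([3.0, 0.0])   # different-class example, far away
loss = triplet_loss(a, p, n)   # 0.0: this triplet is already well separated
```

A "locality-aware" variant would presumably weight or threshold these distances so the embedding preserves neighbourhood structure rather than just pushing classes apart.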

Or they are just shingling different ML hash functions, which is kinda lazy.

LuisMondragon|4 years ago

Hi, my interest got piqued. I'm developing a similarity feature where I compare embeddings of a sentence and its translation. I wanted to know if the hashing method would be faster than the PyTorch multiplication by which I get the sentence similarities. Going from strings to bytes, hashing and comparing is very fast. But if I get the embeddings, turn them into bytes, hash them and compare them, both methods take almost the same time.
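
For context, the dense-embedding baseline being compared against is just one matrix multiply over normalised rows (a numpy sketch with toy random stand-ins for the sentence embeddings):

```python
import numpy as np

# Toy stand-ins for sentence / translation embeddings (768-d, BERT-style).
rng = np.random.default_rng(1)
src = rng.standard_normal((5, 768)).astype(np.float32)
tgt = rng.standard_normal((5, 768)).astype(np.float32)

# Row-normalise, then one matmul yields all pairwise cosine similarities.
src /= np.linalg.norm(src, axis=1, keepdims=True)
tgt /= np.linalg.norm(tgt, axis=1, keepdims=True)
sims = src @ tgt.T          # sims[i, j] = cos(src_i, tgt_j)
```

Since BLAS makes this extremely fast at small scale, hashing only pays off once the comparison count is large enough to amortise the hashing step, which matches what the commenter observed.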

I used this Python library: https://github.com/trendmicro/tlsh.

mish15|4 years ago

You pay a decent cost to do the hash; it's a compression algorithm of sorts. But the data is a fraction of the size and comparison is way faster. If you do many of these, or compare the same ones more than once, you amortise the cost very quickly.
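
The amortisation argument as a toy cost model (the numbers are made up for illustration; only the shape of the break-even formula matters):

```python
# One-off cost to hash a vector, vs per-comparison costs (arbitrary units).
hash_cost = 50.0   # paid once per vector
cmp_hash  = 0.1    # comparing two compact hashes
cmp_float = 2.0    # comparing two raw float vectors

def total_hashed(n):
    return hash_cost + n * cmp_hash

def total_float(n):
    return n * cmp_float

# Break-even: hash_cost + n * cmp_hash = n * cmp_float
break_even = hash_cost / (cmp_float - cmp_hash)   # ~26 comparisons here
```

Past the break-even comparison count, every further lookup against the same hashed vector is nearly free, which is why repeated queries amortise so quickly.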

andyxor|4 years ago

This idea goes back to "sparse distributed memory" developed by NASA research in the 80s. It's a content-addressable memory where content hashes are encoded & decoded by neural network and similar items are in proximity to each other in the embedding space, and similarity measured via Hamming distance. https://en.wikipedia.org/wiki/Sparse_distributed_memory

a-dub|4 years ago

using fancy neural nets for learning hash functions from data is indeed pretty cool, but hash functions fit to data isn't new. see "perfect hash functions."

lsh is most famously used for approximating jaccard distances, which even if you're not doing stuff like looking at lengths or distances in l1 or l2, is still a vector operation.
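
The Jaccard case is MinHash, which is worth seeing concretely (my own minimal sketch, using Python's built-in hash rather than proper hash families):

```python
import random

def minhash_signature(items, num_hashes=128, seed=0):
    """One min-hash per salt; P(signatures agree at position i) ≈ Jaccard."""
    rnd = random.Random(seed)
    salts = [rnd.getrandbits(32) for _ in range(num_hashes)]
    return [min(hash((s, x)) for x in items) for s in salts]

def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = set("the quick brown fox jumps".split())
b = set("the quick brown fox sleeps".split())
sa, sb = minhash_signature(a), minhash_signature(b)
est = estimated_jaccard(sa, sb)
# True Jaccard is 4/6 ≈ 0.67; the estimate lands in that neighbourhood.
```

Banding the signatures into buckets is then what turns this into actual LSH, as the Ullman textbook chapter lays out.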

lsh is best described in jeff ullman's mining massive datasets textbook (available free online), which describes how it was used for webpage deduplication in the early days at google.

jonbaer|4 years ago

I feel like I have been talking about LSH for years

tomrod|4 years ago

I recall Uber using it in a white paper writeup.

Is the speedup that remarkable? I'd be curious as to the increase in speed versus loss of precision.

jgalt212|4 years ago

let's just take it to its logical extension and make every model just one big look up table (with hashes as keys). /s

sayonaraman|4 years ago

Whoever wrote the article must have done a cursory search at best, I'm surprised they didn't mention semantic hashing by Salakhutdinov & Hinton (2007) https://www.cs.utoronto.ca/~rsalakhu/papers/semantic_final.p...

Edit: also, talking about LSH, must check out FAISS library https://github.com/facebookresearch/faiss and the current SOTA http://ann-benchmarks.com/

LuisMondragon|4 years ago

I'm intrigued by FAISS. Do you know if I can use BERT embeddings with FAISS?

mish15|4 years ago

We will add neural hash based ANN to that repo when we get time. I expect HNSW to get pushed out in time for many reasons.

mish15|4 years ago

Interesting. Hadn’t seen it actually. BOW is problematic as a starting point, but this is neat and a long time ago too.