hm. I'd like to believe, but the arguments here seem a bit obtuse.
No one measures vector distance using the hamming distance on binary representations. That's silly. We use L1 or L2, usually, and the binary encoding of the numbers is irrelevant.
It sounds like the LSH is maaaaybe equivalent to vector quantization. In which case this would be a form of regularization, which sometimes works well, and sometimes meh.
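To illustrate the point about raw binary representations: Hamming distance on IEEE-754 bit patterns is a poor proxy for numeric distance, which is easy to see with a tiny Python sketch (illustrative only):

```python
import struct

def hamming_bits(a: float, b: float) -> int:
    """Hamming distance between the IEEE-754 bit patterns of two doubles."""
    ab = struct.unpack("<Q", struct.pack("<d", a))[0]
    bb = struct.unpack("<Q", struct.pack("<d", b))[0]
    return bin(ab ^ bb).count("1")

# Numerically close values can differ in many more bits than distant ones:
close = hamming_bits(1.0, 0.9999999)  # tiny L2 gap, many flipped mantissa bits
far = hamming_bits(1.0, 2.0)          # larger L2 gap, only exponent bits flip
```

So learned binary codes are compared with Hamming distance by design; nobody does it on the raw float encoding.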
I was part of the above article. Happy to answer questions.
In terms of accuracy, it totally depends on the resolution needed. We can get >99% accuracy relative to exact L2 waaaaay faster, with 1/10 of the memory overhead. For what we are doing that is the perfect trade-off.
In terms of LSH, we tried projection hashing and quantization and were always disappointed.
Goal is to end up with binary encoding so that Hamming distances approximate Euclidean nearest neighbors (so basically L2). Combine with quantized SIFT/GIST for images and you end up with a fast-and-dirty model with decent results.
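A minimal sketch of one standard way to get that property, sign-of-random-projection (SimHash-style) codes, where the expected Hamming distance between codes is proportional to the angle between the original vectors; the bit count and toy data here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_codes(X, n_bits=256):
    """Binary codes from the signs of random projections.

    For unit vectors, E[Hamming(code_i, code_j)] is proportional to the
    angle between x_i and x_j, which tracks L2 on the unit sphere.
    """
    d = X.shape[1]
    planes = rng.standard_normal((d, n_bits))  # one random hyperplane per bit
    return X @ planes > 0                      # boolean code matrix

# toy data: unit vectors at increasing angles from the first one
angles = np.array([0.0, 0.1, 1.0, 3.0])
X = np.stack([np.cos(angles), np.sin(angles)], axis=1)
codes = lsh_codes(X)

# Hamming distance to the first point grows with angular (hence L2) distance
ham = [(codes[0] != codes[i]).sum() for i in range(4)]
```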
I like to speculate about reasons this might or might not make sense, at several levels, although I'm mostly just conjecturing. The fact that everything works is very interesting, but it seems so hard to come up with something concrete.
You have a map from some high dimensional vector space ~ k^N -> H, some space of hashes. H sort of looks one dimensional. I assume that the interesting geometry of your training data actually lies on a relatively low dimensional subvariety/subset of k^N, so maybe it's not actually that bad? It could be a really twisted and complicated curve.
However you still need to somehow preserve the relative structure, right? Things that are far apart in k^N need to be far apart in H. Seems like you want the map to at least approximately be an isometry. Although there are things like space filling curves that might do this to some degree.
Also maybe even though H looks low dimensional, it can actually capture quite a bit (if your data is encoded as a coefficient of a power of 2, you could think of powers of 2 as some sort of basis, so maybe it is also pretty high dimensional).
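As a toy illustration of the space-filling-curve idea above, here is a Z-order (Morton) key for 2D integer points. It is only approximately locality-preserving, which is exactly the tension being described: it cannot be an isometry, but it keeps most neighbors close.

```python
def morton_key(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of (x, y) into a single Z-order (Morton) key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)      # x bits go to even positions
        key |= ((y >> i) & 1) << (2 * i + 1)  # y bits go to odd positions
    return key

# neighbors along x usually map to nearby keys:
a, b = morton_key(5, 7), morton_key(6, 7)   # 59 and 62
# but crossing a high-order bit boundary causes a big jump in key space:
c, d = morton_key(7, 7), morton_key(8, 8)   # 63 and 192
```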
Contrastive and triplet losses are pretty cool for generating hashes. I'd imagine the trick they are alluding to is rewriting the loss function to be more aware of locality instead of just trying to minimize/maximize distance.
Or they are just shingling different ML hash functions, which is kinda lazy.
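For anyone unfamiliar, a minimal NumPy sketch of the triplet loss mentioned above; the margin and toy points are arbitrary, and in a learned-hashing setup this would be applied to pre-binarization embeddings (the sign/threshold step happens after training):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge on squared distances: pull the positive closer to the anchor
    than the negative by at least `margin`, else incur a penalty."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])  # close to the anchor
n = np.array([2.0, 0.0])  # far from the anchor
good = triplet_loss(a, p, n)  # 0.0: margin already satisfied
bad = triplet_loss(a, n, p)   # positive penalty: roles swapped
```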
Hi, my interest got piqued. I'm developing a similarity feature where I compare embeddings of a sentence and its translation. I wanted to know if the hashing method would be faster than the PyTorch multiplication by which I get the sentence similarities. Going from strings to bytes, hashing, and comparing is very fast. But if I get the embeddings, turn them into bytes, hash them and compare them, both methods take almost the same time.
You pay a decent cost to do the hash; it’s a compression algorithm of sorts. But the data is a fraction of the size and comparison is way faster. If you do many of these, or compare the same ones more than once, you amortise the cost very quickly.
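A rough NumPy sketch of that amortization; sign binarization stands in here for whatever learned hash one actually uses, and the sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def to_binary_codes(embeddings):
    """One-off 'hashing' cost: binarize by sign and pack 8 bits per byte.
    A 512-dim float32 embedding (2048 bytes) shrinks to 64 bytes (32x)."""
    return np.packbits(embeddings > 0, axis=-1)

def hamming(query_code, codes):
    """Per-comparison cost: XOR + popcount over the packed bytes."""
    return np.unpackbits(query_code ^ codes, axis=-1).sum(axis=-1)

emb = rng.standard_normal((1000, 512)).astype(np.float32)
codes = to_binary_codes(emb)       # pay the compression cost once...
dists = hamming(codes[0], codes)   # ...then every comparison is cheap
```

So the break-even point in the parent question comes from hashing each embedding once and comparing its code many times, not from a single hash-and-compare round trip.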
This idea goes back to "sparse distributed memory" developed by NASA research in the 80s. It's a content-addressable memory where content hashes are encoded & decoded by neural network and similar items are in proximity to each other in the embedding space, and similarity measured via Hamming distance. https://en.wikipedia.org/wiki/Sparse_distributed_memory
using fancy neural nets for learning hash functions from data is indeed pretty cool, but hash functions fit to data isn't new. see "perfect hash functions."
lsh is most famously used for approximating jaccard distances, which even if you're not doing stuff like looking at lengths or distances in l1 or l2, is still a vector operation.
lsh is best described in jeff ullman's mining massive datasets textbook (available free online), which describes how it was used for webpage deduplication in the early days at google.
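A minimal MinHash sketch of that Jaccard approximation; Python's built-in hash() stands in for a proper hash family here, so this is illustrative only:

```python
import random

def minhash_signature(items, n_hashes=128, seed=0):
    """MinHash: for each of n salted hash functions, keep the minimum hash
    over the set. The probability that two signatures agree at one position
    equals the Jaccard similarity of the underlying sets."""
    rnd = random.Random(seed)
    salts = [rnd.getrandbits(64) for _ in range(n_hashes)]
    return [min(hash((salt, x)) for x in items) for salt in salts]

def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

A = set("the quick brown fox jumps".split())
B = set("the quick brown fox sleeps".split())
# true Jaccard = |A ∩ B| / |A ∪ B| = 4 / 6 ≈ 0.67
sa, sb = minhash_signature(A), minhash_signature(B)
est = estimated_jaccard(sa, sb)  # concentrates around 0.67 as n_hashes grows
```

Banding the signature rows then gives the actual LSH step used for deduplication: documents are candidates only if they collide in at least one band.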
LuisMondragon|4 years ago
I used this Python library: https://github.com/trendmicro/tlsh.
tomrod|4 years ago
Is the speedup that remarkable? I'd be curious as to the increase in speed versus loss of precision.
sayonaraman|4 years ago
Edit: also, talking about LSH, must check out FAISS library https://github.com/facebookresearch/faiss and the current SOTA http://ann-benchmarks.com/
nanis|4 years ago
It didn't.[1]
[1]: https://www.merriam-webster.com/words-at-play/pique-vs-peak-...