item 42496722

rekoros | 1 year ago

Yep, I believe it

HNSW has been great for relatively small datasets (a website with 30K pages), but it’s a dangerous stretch for anything bigger

bluecoconut|1 year ago

I don’t quite understand this - by 30k pages, is this the number of entries in your index? Did you mean 30M?

At the <100k scale I just compute the inner products directly; I don't mess with vector stores or added complexity. No ANN algo needed; they'll all be slower than actual exact kNN with re-ranking. (10k vectors × 768 dims × 4 bytes ≈ 30MB; a scan over that plus a sort runs in ~100us or faster.) Frankly, at the 5k scale I've even sent the vectors to the client and done the search client-side in JS.
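A minimal sketch of that brute-force approach in NumPy (the sizes and names here are illustrative, not from the comment):

```python
import numpy as np

# Illustrative scale: 10k vectors x 768 float32 dims ~= 30MB in memory.
rng = np.random.default_rng(0)
corpus = rng.standard_normal((10_000, 768)).astype(np.float32)
query = rng.standard_normal(768).astype(np.float32)

def exact_topk(vectors: np.ndarray, q: np.ndarray, k: int = 10) -> np.ndarray:
    """Exact kNN by inner product: one matvec plus a partial sort."""
    scores = vectors @ q                      # (N,) inner products in one pass
    top = np.argpartition(-scores, k)[:k]     # unordered top-k, O(N)
    return top[np.argsort(-scores[top])]      # order only the k winners

ids = exact_topk(corpus, query, k=10)
```

The `argpartition` avoids a full O(N log N) sort; only the k winners get sorted.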

Often, I find I use an ANN algo/index to get my nearest ~10k, then do the final re-ranking with more expensive algorithms/compute in that reduced space.
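That coarse-then-rerank pattern can be sketched like this; the 1-bit sign quantization standing in for the ANN stage is my own simplification, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
corpus = rng.standard_normal((100_000, 128)).astype(np.float32)
query = rng.standard_normal(128).astype(np.float32)

# Cheap approximate stage: score against sign-quantized vectors
# (a crude stand-in for whatever ANN index produces the shortlist).
signs = np.sign(corpus)
coarse = signs @ np.sign(query)

# Shortlist the nearest ~1k by the cheap score.
candidates = np.argpartition(-coarse, 1_000)[:1_000]

# Expensive stage: exact inner products, but only over the shortlist.
exact = corpus[candidates] @ query
reranked = candidates[np.argsort(-exact)][:10]
```

The expensive scoring touches 1% of the corpus, which is the whole point of the two-stage setup.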

The original HNSW paper was testing/benchmarking at the 5M-15M scales. That’s where it shines compared to alternatives.

When pushing toward the 1B scale (I have an instance at 200M), the memory consumption does become a frustration (100GB of RAM usage), forcing you to vertically scale the nodes that hold the index. But it's still very fast and good. I wouldn't call it "dangerous", just "expensive".

Interestingly though, I found that the usearch package worked great: it let me split and offload indexes into separate files on disk, which greatly lowered RAM usage. Latency is still quite good on average, but has some spikes (e.g. a nearest-10k query on the 200M dataset can sometimes take ~1-3 seconds).

VoVAllen|1 year ago

Hi, I'm the author of the article. Please check out our vector search extension for Postgres, VectorChord [1]. It's based on RaBitQ (a new quantization method) plus IVF. It achieves 10ms-level latency for top-10 searches on a 100M dataset, and 100ms-level latency when using SSD with limited memory.

[1] https://blog.pgvecto.rs/vectorchord-store-400k-vectors-for-1...
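This isn't VectorChord's implementation, but the IVF half of that design can be sketched in a few lines: cluster the corpus into inverted lists, then at query time probe only the few lists whose centroids are closest (real systems additionally compress the list entries, e.g. with RaBitQ). Sizes and parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
corpus = rng.standard_normal((5_000, 32)).astype(np.float32)
query = rng.standard_normal(32).astype(np.float32)

# --- build: a few Lloyd iterations of k-means to form inverted lists ---
n_lists = 32
centroids = corpus[rng.choice(len(corpus), n_lists, replace=False)]
for _ in range(5):
    assign = np.argmin(((corpus[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    for c in range(n_lists):
        members = corpus[assign == c]
        if len(members):
            centroids[c] = members.mean(axis=0)
lists = [np.flatnonzero(assign == c) for c in range(n_lists)]

# --- search: probe the closest lists, then score those candidates exactly ---
n_probe = 8
probe = np.argsort(((centroids - query) ** 2).sum(-1))[:n_probe]
cand = np.concatenate([lists[c] for c in probe])
top10 = cand[np.argsort(-(corpus[cand] @ query))][:10]
```

With `n_probe = 8` of 32 lists, the exact scoring touches roughly a quarter of the corpus; tuning `n_lists`/`n_probe` trades recall against latency.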

rekoros|1 year ago

You're dealing with much larger datasets than I have, so far - mine is only a few million vectors. I have a hard constraint on resources, so I had to get things working quickly in a relatively gutless environment.

aabhay|1 year ago

What systems would you recommend for larger deployments?

rekoros|1 year ago

I ended up breaking up/sharding HNSW across multiple tables, but I'm dealing with many distinct datasets, each one just small enough to make HNSW effective in terms of index build/rebuild performance.

The article suggests IVF for larger datasets; that's the direction I'd certainly explore, but I haven't personally had to deal with it. HNSW sharding/partitioning might actually work even for a very large dataset, where each query becomes a parallelized map/reduce operation over the shards.
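The map/reduce shape of that query is simple to sketch. Here each shard's search is computed exactly as a stand-in for a per-shard HNSW lookup; shard counts and sizes are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
shards = [rng.standard_normal((5_000, 64)).astype(np.float32) for _ in range(4)]
query = rng.standard_normal(64).astype(np.float32)

def shard_topk(shard_id, vectors, q, k=10):
    """Map step: one shard's local top-k (stand-in for a per-shard HNSW query)."""
    scores = vectors @ q
    local = np.argpartition(-scores, k)[:k]
    return [(float(scores[i]), shard_id, int(i)) for i in local]

# Map: query every shard (in production these run in parallel).
partials = [hit for sid, vs in enumerate(shards)
            for hit in shard_topk(sid, vs, query)]

# Reduce: merge the per-shard results into a global top-k.
global_top10 = sorted(partials, key=lambda t: -t[0])[:10]
```

The merge is correct because every global top-k hit is necessarily in its own shard's local top-k, so the reduce step only ever has to sort k × n_shards candidates.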

redskyluan|1 year ago

why not try milvus? you get multiple index types - SIMD-based brute-force search, IVF, HNSW, and DiskANN - and you never have to bother with scaling