anyeung | 2 years ago
What do y'all think? For millions of high-dimensional vectors, what new precomputable and incrementally updatable data structures do you think will be useful?
Personally, I'm looking for efficient nearest-neighbour search and other efficient 'intersection' tools (e.g. finding the vector most similar to a group of n vector embeddings, or finding the m precomputed groups that are similar by some distance measure to a specific embedding).
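One simple baseline for the "most similar to a group" case is to score database vectors against the group's normalized centroid. This is just a sketch with a hypothetical helper (`nearest_to_group` is my name, not from any library), using brute-force cosine similarity in NumPy:

```python
import numpy as np

def nearest_to_group(db: np.ndarray, group: np.ndarray, k: int = 1) -> np.ndarray:
    """Indices of the k database vectors most cosine-similar
    to the centroid of a group of query embeddings."""
    centroid = group.mean(axis=0)
    centroid /= np.linalg.norm(centroid)               # unit-length centroid
    db_norm = db / np.linalg.norm(db, axis=1, keepdims=True)
    sims = db_norm @ centroid                          # cosine similarity per row
    return np.argsort(-sims)[:k]                       # top-k, most similar first

# Toy example: 4 database vectors; the query group points near db[0].
db = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.7, 0.7]])
group = np.array([[0.9, 0.1], [1.0, 0.2]])
print(nearest_to_group(db, group, k=2))  # → [0 3]
```

Brute force is O(N·d) per query, so at millions of vectors you'd put the same scoring behind an approximate index; graph-based indexes like HNSW (e.g. hnswlib, or FAISS's HNSW variants) support incremental insertion, which matches the "incrementally updatable" requirement.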