top | item 39432798


yujian | 2 years ago

oh yeah this is a great question, I get this a lot when I do my talks about RAG stuff

the way I see it is if you have a small amount of data (<10,000 vectors) then it's all the same and you should stick with the technology you are most familiar with

once you get more than that, you may want to consider a vector database

the reason that vector databases exist is because vector search is a highly compute-intensive task. in a regular database setting you almost never have to run compute per row; the database is mostly doing exact-match lookups, which an index resolves cheaply

however, because vector search is predicated on finding *similar* vectors, and because exact vector matches are unlikely, you end up scoring the query against many stored vectors - and that scoring is the part you have to optimize
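to make the cost concrete: a naive similarity search has to compute a distance metric against every stored vector, so each query is O(n * d) work. here's a toy pure-Python sketch (hypothetical data, cosine similarity) - not how any of these databases actually do it, just the baseline they're trying to beat:

```python
import math
import random

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def brute_force_search(query, vectors, k=3):
    # score the query against EVERY stored vector: O(n * d) per query,
    # which is what stops scaling once n gets large
    scored = sorted(
        ((cosine_similarity(query, v), i) for i, v in enumerate(vectors)),
        reverse=True,
    )
    return [i for _, i in scored[:k]]

random.seed(0)
db = [[random.random() for _ in range(8)] for _ in range(1000)]
top = brute_force_search(db[42], db)  # query with an exact copy of vector 42
print(top[0])  # 42 - the identical vector has cosine similarity 1.0
```

fine at a few thousand vectors, painful at a few million - hence the specialized indexes.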

if you're building on a sql/nosql database you find yourself managing the indexing, the distance computations, and the load balancing yourself

pgvector manages much of that for you, but because it's bolted onto a relational engine that wasn't built for this workload, it doesn't manage it very efficiently at scale - an extra system ends up being built on top

as many experienced software engineers will tell you, adding complexity doesn't necessarily make something better, and adds more points of failure

purpose-built vector databases like the ones in the article (e.g. milvus, chroma, weaviate) are built with this compute challenge in mind, and that becomes useful as the amount of data you have grows
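to give a feel for what "built with this compute challenge in mind" means: these systems lean on approximate indexes (IVF, HNSW, etc.) that avoid scanning every vector. here's a toy IVF-style sketch where random sample vectors stand in for trained centroids (real systems use k-means training, quantization, and a lot more) - only the few closest buckets get scanned per query:

```python
import math
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

def build_ivf(vectors, n_lists=10):
    # pick random vectors as centroids (a crude stand-in for k-means)
    # and bucket every vector under its most similar centroid
    centroids = random.sample(vectors, n_lists)
    buckets = [[] for _ in range(n_lists)]
    for i, v in enumerate(vectors):
        best = max(range(n_lists), key=lambda c: cosine(v, centroids[c]))
        buckets[best].append(i)
    return centroids, buckets

def ivf_search(query, vectors, centroids, buckets, k=3, n_probe=2):
    # scan only the n_probe buckets whose centroids are closest to the
    # query, instead of all n vectors - approximate, but much cheaper
    probed = sorted(range(len(centroids)),
                    key=lambda c: cosine(query, centroids[c]),
                    reverse=True)[:n_probe]
    candidates = [i for c in probed for i in buckets[c]]
    candidates.sort(key=lambda i: cosine(query, vectors[i]), reverse=True)
    return candidates[:k]

random.seed(1)
db = [[random.random() for _ in range(8)] for _ in range(1000)]
centroids, buckets = build_ivf(db, n_lists=10)
result = ivf_search(db[7], db, centroids, buckets)
print(result[0])  # 7 - the identical vector sits in the first probed bucket
```

the n_probe knob is the classic tradeoff: probe more buckets for better recall, fewer for more speed. managing that tradeoff (plus sharding, replication, filtering) is basically the product.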


stevekaram | 2 years ago

I'd also add that a huge use for LLMs and vectors in the enterprise is building queries against production data. Keeping the vector DB external to your RDBMS or other production data store is a chance to improve performance without extra latching and other performance hits against the same database you count on for day-to-day business. Like external, super-smart indexes.