Author of Ubicloud's managed Postgres service here. I'm not sure whether you were referring to SATA SSDs or to typical cloud database setups when you said "other, more typical storage technologies", so I'll share my perspective on both.
If you compare NVMe SSDs and SATA SSDs, NVMe SSDs are an order of magnitude faster. The maximum theoretical limit of the SATA III bus is ~6 Gbit/s, versus 32 Gbit/s for Gen 3 NVMe, 64 Gbit/s for Gen 4, and 128 Gbit/s for Gen 5.
For typical database setups offered by cloud providers, the situation is different, though. Most of the time, those setups use network-attached storage such as EBS on AWS or Premium SSDs on Azure. They suffer from the additional network hop and are also subject to throughput limits (which can sometimes be raised by paying significantly more). No matter what type of SSD backs them, the extra network hop significantly slows down reads and writes.
At Ubicloud, we use local NVMe SSDs, which is why we are able to achieve high read/write performance. However, as ngalstyan4 suggested, benchmarking is required to make more definitive claims.
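For a rough sense of the gap between the bus limits above, here is a back-of-envelope comparison (illustrative arithmetic only; the NVMe figures assume the usual x4 PCIe lane configuration, and real-world throughput is lower due to protocol and encoding overhead):

```python
# Theoretical bus bandwidth comparison, SATA III vs. NVMe generations.
# Numbers are the raw link-rate limits quoted above, not measured throughput.
buses = {
    "SATA III": 6e9,            # ~6 Gbit/s
    "NVMe PCIe Gen3 x4": 32e9,  # 32 Gbit/s
    "NVMe PCIe Gen4 x4": 64e9,  # 64 Gbit/s
    "NVMe PCIe Gen5 x4": 128e9, # 128 Gbit/s
}

sata = buses["SATA III"]
for name, bits_per_s in buses.items():
    gbytes_per_s = bits_per_s / 8 / 1e9  # bits -> gigabytes
    print(f"{name}: {gbytes_per_s:.2f} GB/s theoretical, "
          f"{bits_per_s / sata:.1f}x SATA III")
```

Even a Gen 3 NVMe drive has over 5x the theoretical bandwidth of the SATA III bus, and Gen 5 is over 21x.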
Neither is it a good demonstration of things that the people who currently maintain Postgres are experienced in doing. Companies should be reluctant to manage their own vector indexes until this becomes a more mainstream skillset.
This excellent blog post[1] demonstrates the complexities of scaling HNSW indexes and shows that at a certain point you need to switch to IVF-PQ, which has vastly different performance and accuracy characteristics.
beoberha|1 year ago
At this point, Postgres has clearly caught up and the VCs are going to do everything it takes to hold on.
ngalstyan4|1 year ago
But at least anecdotally, it made a ton of difference. We met a <200 ms latency budget with Ubicloud NVMes, but had to wait seconds for an answer to the same query with GCP persistent disks or local SSDs.
[1] https://aws.amazon.com/blogs/big-data/choose-the-k-nn-algori...
ngalstyan4|1 year ago
> I don’t think “get moar ram” is a good response to that particular critique.
I don't think the blog post suggested "get more RAM" as a response, but I'm happy to clarify if you can share more details!
> Indexing in Postgres is legitimately painful
Lantern is here to make the process seamless and remove most of the pain for people building LLM/AI applications. Examples:
1. We build tools to remove the guesswork of HNSW index sizing. E.g. https://lantern.dev/blog/calculator
2. We analyze typical patterns people use when building LLM apps and suggest better practices. E.g. https://lantern.dev/blog/async-embedding-tables
3. We build alerts and triggers into our cloud database that automate the discovery of many issues via heuristics.
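As a rough illustration of why index sizing benefits from tooling, a common back-of-envelope estimate for HNSW memory is the raw vectors plus the per-node graph links. This is a generic sketch, not Lantern's actual calculator, whose model may differ:

```python
def hnsw_memory_estimate_bytes(n_vectors, dim, m=16,
                               bytes_per_float=4, link_bytes=8):
    """Rough HNSW footprint: raw vectors + per-node graph links.

    Assumes ~2*m links per node on the base layer; upper layers add only
    a small constant factor, so they are ignored here. Real indexes vary
    by implementation.
    """
    vector_bytes = n_vectors * dim * bytes_per_float
    graph_bytes = n_vectors * 2 * m * link_bytes
    return vector_bytes + graph_bytes

# e.g. 1M OpenAI-style 1536-dim float vectors with m=16:
est = hnsw_memory_estimate_bytes(1_000_000, 1536, m=16)
print(f"{est / 1e9:.1f} GB")  # prints "6.4 GB"
```

The point of such an estimate is that for large collections the index does not fit comfortably in RAM, which is when teams start considering quantized alternatives like IVF-PQ.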