
Postgres Just Cracked the Top Fastest Databases for Analytics

378 points | moonikakiss | 1 year ago | mooncake.dev

120 comments

[+] bhouston|1 year ago|reply
Just to be clear, standard SQL databases are not great for large-scale analytics. I know from first-hand experience and a lot of pain.

We tried using Postgres for large-scale analytics at my previous company https://threekit.com but it was an absolute pain. Basically, we started to collect detailed analytics and thus had a rapidly growing table of around 2B records of user events during their sessions. As it grew past 500 million records, it turned out to be impossible to query this table in anything close to real time - it was basically untouchable because it was so slow.

I know I could have used some type of daily aggregation combined with a weekly aggregation, etc., to roll up the data incrementally. A dev tried this and yeah, it hid the slow queries, but then it became inflexible in terms of reporting. And writing and maintaining these cron jobs is a lot of work.

But then I tried using BigQuery on my https://web3dsurvey.com side project: I just recorded raw data and then wrote queries to do real-time aggregation (with a caching layer) in a large variety of ways. And it was near-instant, easy, and also very cheap.

So then I deployed this strategy over at Threekit.com, and it was also easy, fast, and cheap. Even more flexible queries than Web3DSurvey, at basically no developer or infrastructure cost. No more need for aggregation cron jobs or trying to decide ahead of time how users want to slice and dice the data. Real-time, flexible queries on the fly via BigQuery are the way.
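
To give a flavor of what I mean by on-the-fly aggregation, here is a rough sketch (table and column names are made up for illustration; the BigQuery version is nearly identical to this Postgres-flavored SQL):

    -- Daily counts per event type over the stored window, computed at query time
    SELECT
      date_trunc('day', event_time) AS day,
      event_type,
      count(*)                      AS events,
      count(DISTINCT session_id)    AS sessions
    FROM user_events
    WHERE event_time >= now() - interval '90 days'
    GROUP BY 1, 2
    ORDER BY 1, 2;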

Also, the BigQuery bill for https://web3dsurvey.com is like $0.25/month, and it is dealing with millions of records in its 3-month window of stored data. Whereas just running the cheapest Postgres server on Google Cloud is like >$25/month, and it is a slow one.

I would never go back to traditional SQL for analytics - it was hard, slow, expensive, and inflexible. The worst of all worlds.

[+] atombender|1 year ago|reply
Did you use plain Postgres tables or a columnar extension like Timescale, Citus, or pg_mooncake?

Nobody in their right mind would argue that Postgres without columnar storage is good for analytics. However, it looks like these extensions can be quite decent, at least for queries (you might still run into write-performance issues due to Postgres' OLTP architecture, but I've never benchmarked it). In OLAP terms the size of your data is tiny, and this would probably work just fine.

Personally I'm a huge fan of dedicated databases like ClickHouse. BigQuery gets very expensive as your data and query volume grows.

[+] dig1|1 year ago|reply
As someone dealing with billions of records on it, BigQuery is far from cheap; Google will not charge you much for storage, but they will charge you for queries and data transfer.

AFAIK, the cheapest Postgres server on GCP is very expensive compared to the usual Postgres installation (price/performance).

[+] rapfaria|1 year ago|reply
> 2B records of user events during their sessions. As it grew past 500 million records, it turned out to be impossible to query this table in anything close to real time - it was basically untouchable because it was so slow.

This is a solved problem, and it seems the technical folks over there lacked the skills to make it work. Having indexes is just the tip of the iceberg. Composite indexes, partitioning, sharding, caching, etc., can bring reads down to a few seconds on disk.
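
A minimal sketch of what that can look like in Postgres (a hypothetical table, for illustration only):

    -- Range-partition the event table by time
    CREATE TABLE user_event (
      user_id    BIGINT      NOT NULL,
      event_type TEXT        NOT NULL,
      created_at TIMESTAMPTZ NOT NULL,
      payload    JSONB
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE user_event_2024_01 PARTITION OF user_event
      FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

    -- Composite index serving per-user, time-ordered queries
    CREATE INDEX user_event_user_created_idx ON user_event (user_id, created_at);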

[+] TuringNYC|1 year ago|reply
>> Just to be clear, standard SQL databases are not great for large-scale analytics.

What do you mean by "standard" SQL? Were you using an OLTP or OLAP db when you faced these difficulties? Also, what makes BigQuery not a "standard SQL database" -- they use GoogleSQL, true, but many DBs have slight SQL variants.

[+] zhousun|1 year ago|reply
Thanks for the comment, but you are mixing up some terminology.

The core idea of mooncake is to build upon an open columnar format + a substitutable vectorized engine, while natively integrating with Postgres.

So it is indeed closer to BigQuery (especially the newer BigQuery with Iceberg tables) than to a 'standard SQL database'. It has all the nice properties of BigQuery (object-store-native, columnstore, vectorized execution...), and scaling is also not impossible.

[+] Simon_O_Rourke|1 year ago|reply
> it was basically untouchable because it was so slow.

I've worked with a 5B row table on Snowflake with (maybe) no indexes, and while somewhat slow you could still run reasonable queries on it in a minute or two.

[+] Galanwe|1 year ago|reply
> a rapidly growing table of around 2B records of user events during their sessions. As it grew past 500 million records, it turned out to be impossible to query this table in anything close to real time

I mean, I don't know what you call "close to real time", or what kind of queries you ran, but I have Postgres serving requests from a 20B-row table just fine, with some light tweaking of indexes and partitions (I'm by no means a DBA).

Unless we dig much deeper than that into the specifics of the data and queries, I don't think any conclusion can be drawn from either experience.

[+] sgarland|1 year ago|reply
> Basically, we started to collect detailed analytics and thus had a rapidly growing table of around 2B records of user events during their sessions. As it grew past 500 million records, it turned out to be impossible to query this table in anything close to real time

Analytics isn't typically something that needs real-time capabilities, for one.

> a rapidly growing table [emphasis mine]

I think I see part of the problem here. If you had a single table, that means it's completely denormalized, so your schema probably looked something like this (or wider):

    CREATE TABLE UserEvent (
      id UUID PRIMARY KEY,
      user_id UUID NOT NULL,
      user_ip_address TEXT NOT NULL,
      user_agent TEXT NOT NULL,
      event_data JSONB,
      created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
      event_type TEXT
    );

    CREATE INDEX UserEvent_user_id_idx ON UserEvent (user_id);
    CREATE INDEX UserEvent_created_at_idx ON UserEvent (created_at);
  
The JSON blob might be anywhere from a few hundred bytes to well over a kilobyte, and probably duplicates data already present as a scalar, like IP address, user agent string, timestamp, etc. I'll use the middle ground and say the JSONB objects are on average 500 bytes when stored. Now, the rest.

A UUID, if stored as its native type (or BINARY(16) in MySQL - don't sleep on this, MySQL folks; it makes a huge difference at scale) is 16 bytes. That's double the size of a BIGINT, and quadruple the size of an INT4. Also, unless you're using UUIDv7 (or UUIDv1, but no one does), it's not k-sortable. Since Postgres doesn't cluster tuples around the PK [yes, I know all indices in Postgres are technically secondary] like MySQL/InnoDB does, this doesn't immediately thrash the B+tree in the same way, but it does thrash the visibility map, and it does bloat the WAL. There are various arguments for why you shouldn't use a monotonic integer as a surrogate key, but IMO they're largely overblown, and there are workarounds to not publicly disclose it.

IPv4 addresses, stored in dotted-quad as a string, are a maximum of 15 characters, storing in 16 bytes as TEXT or VARCHAR. If stored instead in the Postgres native INET type, that drops to 7 bytes, plus you get built-in validation. If you had INT4 UNSIGNED available (as MySQL does natively), you could even store them in their numeric form and save another 3 bytes, though you lose the validation.

User Agent strings are huge, usually over 100 bytes. They're also not that unique, relatively speaking. Even if you need to know the patch version of the browser, anyone with a browser doing automatic updates is going to stay more or less in sync. The point is this could easily be a lookup table, with either a SMALLINT (2^15 - 1 maximum values, or 2^16 - 1 if you use unsigned values; possible with an extension in Postgres) or an INT (2^31 -1 maximum values) as the PK.

Not going to touch on JSON objects because the things you might want to know are endless. TOAST and de-TOAST can be slow; if you need low latency, you should normalize your data.

There may or may not be extracted scalars, which can be beneficial during queries. Again, though, lookup tables (or even native ENUM types, if the values are limited) are crucial at scale.

As it stands, the table will have an average row size of 664 bytes (assuming an average of 12 bytes stored for the IP, 100 bytes stored for the UA, 500 bytes stored for the JSONB, and 12 bytes stored for the event type). That's 332 GB for 500,000,000 rows. You could shave a couple of bytes off by aligning columns [0], which saves 1 GB. If the IP addresses and UA strings were lookup tables, each with an INT4, that saves 104 bytes per row. If you made the PK for the table a BIGINT, that saves another 8 bytes per row. The total savings between column alignment and basic normalization is 114 bytes per row, or 57 GB.
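
For illustration, here is roughly the normalized variant that arithmetic assumes (hypothetical DDL; fixed-width columns first, per the alignment article in [0]):

    -- INT4 lookup tables for IPs and user agents; BIGINT surrogate key instead of a UUID
    CREATE TABLE ip_address (
      id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
      ip INET NOT NULL UNIQUE
    );

    CREATE TABLE user_agent (
      id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
      ua TEXT NOT NULL UNIQUE
    );

    CREATE TABLE user_event (
      id            BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
      user_id       UUID        NOT NULL,
      created_at    TIMESTAMPTZ NOT NULL DEFAULT now(),
      ip_address_id INTEGER     NOT NULL REFERENCES ip_address (id),
      user_agent_id INTEGER     NOT NULL REFERENCES user_agent (id),
      event_type    TEXT,
      event_data    JSONB
    );

    -- FK columns need their own indices if you query or enforce them at scale
    CREATE INDEX user_event_user_id_idx ON user_event (user_id);
    CREATE INDEX user_event_created_at_idx ON user_event (created_at);
    CREATE INDEX user_event_ip_address_id_idx ON user_event (ip_address_id);
    CREATE INDEX user_event_user_agent_id_idx ON user_event (user_agent_id);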

This doesn't touch on the indices, either. If you're using PG 13+, you get B+tree de-duplication [1] for free, which can help with some denormalized data, but not if you have anything with high cardinality, like a timestamp, or a UUID. With lookup tables, you would of course need to index those FKs (whether or not you're enforcing constraints), which adds some size, but is still a huge net gain.

> I know I could have used some type of daily aggregation combined with a weekly aggregation, etc., to roll up the data incrementally. A dev tried this and yeah, it hid the slow queries, but then it became inflexible in terms of reporting. And writing and maintaining these cron jobs is a lot of work.

And shifting your entire analytics workload isn't a lot of work? Between ROLLUP [2] and MATERIALIZED VIEW [3], which can be refreshed automatically via a cron job, this doesn't seem that burdensome.
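
A minimal sketch of that kind of rollup (hypothetical names, assuming the denormalized table from earlier):

    -- Daily rollup as a materialized view
    CREATE MATERIALIZED VIEW daily_event_counts AS
    SELECT
      date_trunc('day', created_at) AS day,
      event_type,
      count(*) AS events
    FROM UserEvent
    GROUP BY 1, 2;

    -- A unique index is required for CONCURRENTLY, and speeds up reads
    CREATE UNIQUE INDEX daily_event_counts_uq ON daily_event_counts (day, event_type);

    -- Run from cron (or pg_cron) without blocking readers
    REFRESH MATERIALIZED VIEW CONCURRENTLY daily_event_counts;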

> Also, the BigQuery bill for https://web3dsurvey.com is like $0.25/month, and it is dealing with millions of records in its 3-month window of stored data.

Then you're in the free tier (<= 1 TiB/month of processed data), because after that it's $6.25/TiB. Also worth noting there is a massive difference between millions of rows and billions of rows. The former can be handled by practically any RDBMS on any hardware, with a completely unoptimized schema. The latter requires some thought if you want it to be performant.

This isn't at all to say that specialized DBs don't have their place, because they absolutely do. If you need a KV store, use a KV store, not an RDBMS. If you need OLAP, use something designed for OLAP. The difference is scale. At startup or side project scale, you can easily do everything (including pub/sub) with an RDBMS, and if you put thought into its design and usage, you can take it a lot farther than you'd think. Eventually, you may hit a point where it's counter-productive to do so, and then you should look into breaking tasks out.

The issue I see happening time and time again is that devs have little to no expertise in DBs of any kind, but since everyone says "Postgres is all you need," they decide to use it for everything, except they don't know what they're doing. If you do that, yeah, you're gonna have problems fairly early on, and then you'll either throw your hands up and decide you really need a bevy of specialized DBs, caches, and message queues (which introduces a lot of complexity), or you'll vertically scale the DB. If you choose the latter, by the time you hit scaling limits, you're easily spending $25K/month on the DB alone. If you opt to hire someone with DB expertise at this point, you'll spend about that, if not more, in personnel costs, and not only will it take them weeks if not months to unravel everything, your devs will be constantly complaining that queries are now "too complex" because they have to do some JOINs, and they're being told to stop chucking everything into JSON. If instead you took at most a week to learn some RDBMS basics, by (a) reading the manual front to back and (b) getting hands-on experience trying things out, you could almost certainly get much farther on much less.

[0]: https://www.enterprisedb.com/blog/rocks-and-sand

[1]: https://www.postgresql.org/docs/current/btree.html#BTREE-DED...

[2]: https://www.postgresql.org/docs/current/queries-table-expres...

[3]: https://www.postgresql.org/docs/current/rules-materializedvi...

[+] cyptus|1 year ago|reply
Same. Started writing data into Parquet to analyze from there, with a big gain in performance and hosting costs.
[+] saisrirampur|1 year ago|reply
Sai from PeerDB/ClickHouse here. Nice to see the progress on this project! I wanted to leave a few notes:

For analytics on transactional data, it looks like you'd still need to use logical replication (https://github.com/Mooncake-Labs/pg_mooncake/issues/90). Logical replication is somewhat similar to an ETL/CDC experience, though it's more Postgres-native. Managing logical replication at a production grade isn't trivial — it's quite common for customers to use PeerDB for homogeneous replication due to performance, manageability and observability issues with logical replication.

One potential value of extensions is Postgres compatibility for your queries. However, I'm curious about the SQL coverage on columnar tables — for example, advanced constructs like partitioned tables, prepared statements, correlated subqueries, RCTEs, triggers, and more. While it seems there’s ongoing work to improve compatibility, achieving full Postgres compatibility is challenging. In some cases, you might need to fall back to Postgres (instead of DuckDB), sacrificing performance for certain queries.

The biggest challenge we faced at Citus was the constant trade-off between Postgres compatibility and performance — both are complex and constantly moving targets. This was the key reason why Citus couldn't compete with purpose-built databases like Snowflake, ClickHouse, and SingleStore. While DuckDB didn't exist when we built Citus, it's still fundamentally different from Postgres. Even though the DuckDB dialect is somewhat similar to Postgres (as is Snowflake's), retrofitting two distinct databases — each evolving independently in pursuit of the world's best performance — isn't trivial.

In short, relying on ETL (logical replication) without providing full Postgres compatibility raises the question: is it better to lean on a purpose-built database with a laser focus on performance, or adopt a more generalized approach?

Anyway, I come from a different school of thought — using the right tool for the right job. That said, I love seeing all the progress and evolution in the Postgres community — Postgres will always be my first love!

[+] zhousun|1 year ago|reply
Hi, Zhou From Mooncake labs here.

Love your work on PeerDB, and it's inspiring the evolution of pg_mooncake (logical replication will be the killer feature for V2).

The core idea of mooncake is to build upon an open columnar format + a substitutable vectorized engine, while natively integrating with Postgres:

1. For small devs, we allow the whole stack to be embedded as a Postgres extension for ease of use

2. For enterprises, our stack is also a purpose-built stack, similar to PeerDB + ClickHouse, not a more generalized approach

We allow a gradual transition from 1 to 2.

[+] theLiminator|1 year ago|reply
Do you believe that things like cedardb might one day remove the distinction between OLAP and OLTP dbs?
[+] dleeftink|1 year ago|reply
> To enhance query execution speed, we embedded DuckDB as the execution engine for columnstore queries

So is it Postgres or DuckDB that cracked the analytics top?

[+] moonikakiss|1 year ago|reply
well, pg_mooncake is a Postgres extension, and Postgres + pg_mooncake is still just Postgres. Users deploy pg_mooncake as a Postgres extension and write and query all tables through psql.

Fast analytic databases need two key things: columnar storage and a vectorized execution engine. We introduce a columnstore table access method in Postgres (with data stored in Parquet) and execute queries on those tables using DuckDB.

By leveraging DuckDB's execution engine, we avoid reinventing vectorized query execution while keeping everything managed through Postgres.
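
For illustration, the user-facing flow looks roughly like this (a sketch assuming the v0.1 syntax, where the table access method is called `columnstore`; table and column names are made up):

    CREATE EXTENSION IF NOT EXISTS pg_mooncake;

    -- Columnstore table: data lands in Parquet, scans go through DuckDB
    CREATE TABLE user_events_analytics (
      event_time TIMESTAMPTZ,
      event_type TEXT,
      session_id UUID
    ) USING columnstore;

    -- Plain SQL through psql; the aggregation itself is executed by DuckDB
    SELECT event_type, count(*) AS events
    FROM user_events_analytics
    GROUP BY event_type;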

More on our architecture: https://www.mooncake.dev/blog/how-we-built-pgmooncake

[+] tonyhart7|1 year ago|reply
Both??? I mean, seeing their homepage, this is clearly their business model, no?
[+] nikita|1 year ago|reply
This is an exciting project. A few highlights:

- Query processor is DuckDB: as long as it translates the PG type system to the DuckDB type system well, it will be very fast.

- Data is stored on S3 in Parquet with Delta or Iceberg metadata. This is really cool. You don't need to push analytical data through the WAL; only metadata goes into the WAL. This means fast loading, at least in theory, and compatibility with the whole Delta/Iceberg ecosystem.

- Once they build real-time ingest, you can just push time series into this system and you don't need a second system like ClickHouse.
[+] tarun_anand|1 year ago|reply
Data is also stored in the local filesystem. How does that compare with vanilla PG and/or S3 based storage?
[+] dsiegel2275|1 year ago|reply
A question that I have had for a while and can't seem to find an answer to: for teams that are using various columnar store extensions to turn Postgres into a viable OLAP solution - are they doing so in the same Postgres instance they use for OLTP? Or are they standing up a separate Postgres instance?

I'm trying to understand if there is any potential performance impact on the OLTP workload by including the OLAP in the same process.

[+] bach4ants|1 year ago|reply
And further, with this pg_mooncake extension allowing you to store the data in S3, is Postgres simply providing compute to run DuckDB? I suppose it's also providing a standardized interface and "data catalog."
[+] owenthejumper|1 year ago|reply
What's the business model? It's an extension that's MIT-licensed, yet it has a company and a VC behind it. Sounds like a rug pull waiting to happen.
[+] tesch1|1 year ago|reply
Counterpoint: It's actually very astute of them to start like this.

In order to have a viable business model they need to create value for users. Users are intelligent and will not even consider trying something that has no exit path, should the company disappear.

What every company hopes to have is customers who are thrilled with the value they deliver and tell everyone what a great deal it is so the company can grow.

What no company hopes for is to end up like HashiCorp: spending more investment dollars than the value they provide, never achieving profitability, and eventually just pissing everyone off, with everyone trying to make the best of a leveraged situation until they end up having to pull the rug. The user's leverage in that situation is something like OpenTofu, made possible by the license, same as what's being offered here.

The price of the big columnar dbs is very high, so there's a lot of ground to capture / value to arbitrage/offer in this space - as evidenced by other comments in this thread, how fast the benchmarks are changing, the likelihood of memory prices coming down, etc.

Aside from that, you have to wonder, big picture, whether the AI space will put significant downward pressure on memory and compute prices, with everyone wanting to run local LLMs; that might change some fundamental tradeoffs made in DB systems. If in 10 years I can get a phone with 1 TB of fast-access RAM to ask Siri how to sauté mushrooms, what will 99% of us need a columnar store for?

[+] zhousun|1 year ago|reply
Zhou from mooncake labs here.

Mooncake is built upon open table formats and substitutable query engines, so it doesn't need to be just a Postgres extension.

pg_mooncake will stay open source under MIT; for small devs where everything fits in their Postgres, we hope everyone enjoys it!

And we would love to help companies that have outgrown Postgres transition to the modern stack with Postgres + mooncake (potentially outside PG) + Iceberg/Delta.

[+] spapas82|1 year ago|reply
As others have mentioned, the results are from an extension, not plain Postgres.

Unfortunately, using an extension that's not "part" of Postgres (the way pg_stat_statements is) is not trivial for most people, since there are both technical and legal issues.

[+] kyrra|1 year ago|reply
What are the legal issues?
[+] xkgt|1 year ago|reply
I read the title a couple of times and I'm still not sure it isn't misleading. The benchmarks are not just for Postgres but for Postgres with the Mooncake extension. There are also other results for Postgres with different extensions. While it does rank among the top fastest databases, it is not the fastest and not even within the top 10.
[+] jot|1 year ago|reply
How is this different from Crunchy Warehouse which is also built on Postgres and DuckDB?

https://www.crunchydata.com/products/warehouse

[+] moonikakiss|1 year ago|reply
Architecturally, it's very similar. We have the same vision! And they've done a great job, especially around writing to Iceberg. Some differences:

1. Our extension is fully open source; I believe they've open-sourced bits of their stack.

2. We are unopinionated about open table formats: Iceberg and Delta.

3. In v0.2, we will support small write workloads. This will open up time-series and HTAP workloads.

[+] mslot|1 year ago|reply
It's a similar idea, but Crunchy Data Warehouse was built by several founding engineers of Citus, which lets us speedrun through it :)

It's a generally available (very solid) product powering some large production workloads, with fully transactional Iceberg, and auto-compaction. All SQL queries and almost all Postgres features are fully supported on Iceberg tables.

We are also seeing interesting patterns emerging with the ability to load/query csv/json/parquet/shapefile/... directly from S3 in combination with pg_parquet and pg_incremental. For instance, incrementally and transactionally loading CSV files that show up in S3 into Iceberg, or periodically exporting from Postgres to Parquet and then querying it with the data warehouse.
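
For a rough idea of the pg_parquet side (a sketch only: the bucket path is made up, and exact options may vary by version, so check the docs):

    -- Export a table to Parquet on S3...
    COPY user_event TO 's3://my-bucket/exports/user_event.parquet' WITH (format 'parquet');

    -- ...and load a Parquet file back into a table
    COPY user_event FROM 's3://my-bucket/exports/user_event.parquet' WITH (format 'parquet');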

[+] rubenvanwyk|1 year ago|reply
I think they use pure Parquet, not Iceberg tables.
[+] bigtones|1 year ago|reply
Looks like they're currently No. 12 on the ClickBench benchmark they cite - so not quite in the top 10, three weeks later.
[+] dcreater|1 year ago|reply
All you need is postgres: part 73
[+] antonmks|1 year ago|reply
It is not really Postgres; the queries are run on DuckDB. Yeah, DuckDB is really fast for analytical queries.
[+] moonikakiss|1 year ago|reply
Well, pg_mooncake is a Postgres extension. So it's really just Postgres :)

We embed DuckDB in our extension as the vectorized execution engine on columnstore tables.

Why rebuild vectorized execution when DuckDB has a lot of GREAT properties for this workload (similar syntax to Postgres, embeddability)?

Here's our architecture: https://www.mooncake.dev/blog/how-we-built-pgmooncake

[+] dcreater|1 year ago|reply
Yes, but this is an extension that abstracts that away for the end user and allows them to just use psql.
[+] polskibus|1 year ago|reply
How does mooncake work with standard OLTP workloads? Can I use Postgres for OLTP, add mooncake, and expect DuckDB-level performance for OLAP queries? I know that SAP HANA has several layers of storage and automatic movement of data between them to allow for such a performant OLTP/OLAP hybrid, and I think this is the holy grail for a cheap/open-source DB. Users need OLTP first but don't want to add Kafka + ClickHouse or similar pipelines when OLAP is needed.
[+] mirekrusin|1 year ago|reply
Why isn't TimescaleDB mentioned anywhere? Genuine question.
[+] rubenvanwyk|1 year ago|reply
Looking at the effort being put in it seems there’s a massive market for proper Postgres analytics / columnstore, which is very telling of the pain involved in moving data between systems.
[+] osigurdson|1 year ago|reply
It would be great if Postgres had native columnar storage.
[+] tarun_anand|1 year ago|reply
Will this work alongside Citus? If yes, what benefits/disadvantages would it have? I presume at some point workloads do need a cluster.
[+] xiphias2|1 year ago|reply
It's cool proof that it's worth it for the Postgres authors to spend the effort to add vectorization to a default columnstore.