The DuckDB-as-a-storage-engine approach is clever because it lets you keep your existing MySQL connections, tooling, and replication topology while routing analytical queries to a columnar engine underneath. That's a much easier sell operationally than standing up a separate analytics database and building a sync pipeline. The real question is how they handle consistency between the InnoDB and DuckDB copies of the same data; that's where every hybrid OLTP/OLAP system either shines or quietly loses rows.
Nice question! We did spend a lot of time considering the issue of data consistency.
In MySQL replication, GTIDs are crucial for ensuring that no transaction is missed or replayed. We handle this in two scenarios, depending on whether the binlog is enabled:
- log_bin is OFF: We ensure that transactions in DuckDB are committed before the GTID is written to disk (in the mysql.gtid_executed table). Furthermore, after crash recovery, we perform idempotent writes to DuckDB for a window of transactions (the principle is similar to an upsert, or delete+insert), as illustrated in the sketch after this list. Therefore, at any moment after crash recovery, we can guarantee that the data in DuckDB is consistent with the primary database.
- log_bin is ON: Unlike the previous scenario, we no longer rely on the `mysql.gtid_executed` table; we use the binlog directly for GTID persistence. However, a new problem arises: binlog persistence occurs before the storage engine commits. Therefore, we created a table in DuckDB to record the last valid binlog position. If the DuckDB transaction fails to commit, the binlog is truncated back to that position. This ensures that the data in DuckDB is consistent with the contents of the binlog.
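Here is a minimal Python sketch of this recovery logic, using the duckdb client library; the duckdb_binlog_pos table, the decoded txn object, and the pk-keyed delete+insert are illustrative assumptions standing in for AliSQL's in-server C++ implementation, not its actual code:

    import duckdb

    con = duckdb.connect("replica.duckdb")

    # Hypothetical metadata table: the last binlog position whose
    # transaction is known to be durably committed in DuckDB.
    con.execute("""
        CREATE TABLE IF NOT EXISTS duckdb_binlog_pos (
            log_file VARCHAR, log_pos BIGINT
        )
    """)

    def apply_transaction(con, txn):
        """Replay one binlog transaction so re-applying it is harmless.

        txn is an assumed, already-decoded event object. During the
        post-crash replay window every row change is applied as
        delete-then-insert keyed on the primary key, so a transaction
        that already committed before the crash yields the same state.
        """
        con.execute("BEGIN")
        for change in txn.row_changes:
            con.execute(f"DELETE FROM {change.table} WHERE pk = ?",
                        [change.pk])
            if change.kind in ("insert", "update"):
                placeholders = ", ".join("?" for _ in change.new_row)
                con.execute(
                    f"INSERT INTO {change.table} VALUES ({placeholders})",
                    change.new_row,
                )
        # Advance the recorded binlog position inside the same DuckDB
        # transaction, so data and position commit atomically. On
        # restart the server can truncate the binlog back to this
        # position if the engine commit never happened (log_bin=ON).
        con.execute("DELETE FROM duckdb_binlog_pos")
        con.execute("INSERT INTO duckdb_binlog_pos VALUES (?, ?)",
                    [txn.log_file, txn.end_log_pos])
        con.execute("COMMIT")  # only after this may the GTID be persisted

The two invariants mirror the scenarios above: the position (or GTID) only becomes durable after the DuckDB commit, and replay is idempotent, so a crash between the two steps cannot make the copies diverge.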
Therefore, if the `gtid_executed` on the replica server matches that of the primary database, then the data in DuckDB will also be consistent with the primary database.
On this page, we describe how to implement a read-only columnar-store (DuckDB) node leveraging the MySQL binlog mechanism: https://github.com/alibaba/AliSQL/blob/master/wiki/duckdb/du... In this implementation, we have made extensive optimizations for binlog batch transmission, write operations, and more.
Why I Believe MySQL is More Suited than PostgreSQL for DuckDB Integration
Currently, there are three mainstream solutions in the ecosystem: pg_duckdb, pg_mooncake, and pg_lake. However, they face several critical hurdles. First, PostgreSQL's logical replication is not mature enough—falling far behind the robustness of its physical replication—making it difficult to reliably connect a PG primary node to a DuckDB read-only replica via logical streams.
Furthermore, PostgreSQL lacks a truly mature pluggable storage engine architecture. While it provides the Table Access Method as an interface, it does not offer standardized support for primary-replica replication or Crash Recovery at the interface level. This makes it challenging to guarantee data consistency in many production scenarios.
MySQL, however, solves these issues elegantly:
Native Pluggable Architecture: MySQL was born with a pluggable storage engine design. Historically, MySQL pivoted from MyISAM to InnoDB as the default engine specifically to leverage InnoDB's row-level MVCC. While previous columnar attempts like InfoBright existed, they didn't reach mass adoption. Adding DuckDB as a native columnar engine in MySQL is a natural progression. It eliminates the need for "workaround" architectures seen in PostgreSQL, where data must first be written to a row-store before being converted into a columnar format.
The Power of the Binlog Ecosystem: MySQL’s "dual-log" mechanism (Binlog and Redo Log) is a double-edged sword; while it impacts raw write performance, the Binlog provides unparalleled support for the broader data ecosystem. By providing a clean stream of data changes, it facilitates seamless replication to downstream systems. This is precisely why OLAP solutions like ClickHouse, StarRocks, and SelectDB have flourished within the MySQL ecosystem.
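To make the "clean stream of data changes" point concrete, here is a minimal sketch of tailing a binlog from Python with the open-source python-mysql-replication library (the connection settings, server_id, and printed fields are placeholder assumptions); this decoded row stream is what Debezium-style pipelines, and a downstream DuckDB-engine node, consume:

    from pymysqlreplication import BinLogStreamReader
    from pymysqlreplication.row_event import (
        DeleteRowsEvent, UpdateRowsEvent, WriteRowsEvent,
    )

    # Placeholder settings; requires binlog_format=ROW on the primary
    # and a user with replication privileges.
    stream = BinLogStreamReader(
        connection_settings={"host": "127.0.0.1", "port": 3306,
                             "user": "repl", "passwd": "secret"},
        server_id=4242,   # must be unique among the primary's replicas
        blocking=True,    # keep waiting for new events, like a replica
        only_events=[WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent],
    )

    # Each event arrives as decoded row images: no triggers, no dual
    # writes, no log scraping -- the property that downstream OLAP
    # systems (ClickHouse, StarRocks, a DuckDB engine node) rely on.
    for event in stream:
        for row in event.rows:
            print(event.schema, event.table, type(event).__name__, row)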
Seamless HTAP Integration: When using DuckDB as a MySQL storage engine, the Binlog ecosystem remains fully compatible and intact. This allows the system to function as a data warehouse node that can still "egress" its own Binlog. In an HTAP (Hybrid Transactional/Analytical Processing) scenario, a primary MySQL node using InnoDB can stream Binlog directly to a downstream MySQL node using the DuckDB engine, achieving a perfectly compatible and fluid data pipeline.
Does this continuously feed DuckDB data from transactional workloads, akin to what SAP HANA does? If so, that would be huge - people spend a lot of time trying to stitch transactional data to warehouses using Kafka/Debezium.
BTW, it would be great to hear apavlo’s opinion on this.
Yes, a MySQL-DuckDB columnar read-only node will continuously receive data from the transactional workload via the binlog.
People will then no longer need to maintain tools like Kafka/Debezium to sync between the two nodes.
HTAP is here! It seems like these hybrid databases are slowly gaining adoption, which is really cool to see.
The most interesting part of this is the improvements to transaction handling they seem to have made in https://github.com/alibaba/AliSQL/blob/master/wiki/duckdb/du... (it's also a good high-level breakdown of MySQL internals). Ensuring that the sync between the primary tables and the analytical ones is fast and, most importantly, transactional is awesome to see.
I don't think this is meaningfully HTAP; it's gluing together two completely different databases under a single interface. As far as I can tell, it doesn't provide transactional or consistency guarantees different from what you'd get with something like Materialize.
This isn't new either; people have been building OLAP storage engines into MySQL/Postgres for years, e.g., pg_ducklake and timescale.
How I see SQL databases evolving over the next 10 years:
1. Integrate an off-the-shelf OLAP engine:
   - forward OLAP queries to it
   - deal with continued issues keeping the two datasets in sync
2. Rebase the OLTP and OLAP engines onto a unified storage layer:
   - the storage layer supports page-aligned row-oriented files, column-oriented files, and remote files
   - still have data and semantic inconsistencies due to running two engines
3. Merge the engines:
   - a policy automatically archives old records to a compressed column-oriented file format
   - an option moves archived record files to remote object storage, fetched on demand
   - queries seamlessly integrate data from freshly updated records and archived records (a sketch of this follows the list)
   - the only noticeable difference is that queries for very old records take a few seconds longer to return results
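A toy sketch of step 3's archive policy using the duckdb Python library (the table, cutoff, and file names are invented, and a real policy would write dated files rather than overwrite a single one): cold rows are copied into a compressed Parquet file and deleted from the hot table, and a view stitches the two halves back into one logical table:

    import duckdb

    con = duckdb.connect("tiered.duckdb")
    con.execute("""
        CREATE TABLE IF NOT EXISTS events_hot (
            id BIGINT, ts TIMESTAMP, payload VARCHAR
        )
    """)

    def archive_old_rows(con, cutoff_date):
        # Move cold rows into a compressed column-oriented file...
        con.execute(f"""
            COPY (SELECT * FROM events_hot WHERE ts < DATE '{cutoff_date}')
            TO 'events_archive.parquet' (FORMAT PARQUET, COMPRESSION ZSTD)
        """)
        # ...then drop them from the hot, row-oriented side.
        con.execute(f"DELETE FROM events_hot WHERE ts < DATE '{cutoff_date}'")

    # One logical table over fresh and archived records; the Parquet
    # half could equally live in remote object storage, fetched on demand.
    con.execute("""
        CREATE OR REPLACE VIEW events AS
        SELECT * FROM events_hot
        UNION ALL
        SELECT * FROM 'events_archive.parquet'
    """)

    archive_old_rows(con, "2024-01-01")
    print(con.execute("SELECT count(*) FROM events").fetchone())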
Can Tiger Data be used just as a simple column store?
All I want is effectively what ClickHouse does, in PG. I have a single table that I need fast counts on; ClickHouse can do the counts fast, but I have to go through the entire sync/replication setup to get that.
A quick scan of Timescale always made it seem like it was really set up only for that use case, and using it another way would be a bit of a struggle.
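For the narrow "fast counts without a sync pipeline" case, one option (not what the AliSQL work does) is DuckDB's mysql extension, which attaches to the live database and runs the aggregation itself. A minimal Python sketch with placeholder connection details, with the caveat that rows still stream over the MySQL protocol, so this trades ClickHouse-grade scan speed for having no replication plumbing at all:

    import duckdb

    con = duckdb.connect()
    con.execute("INSTALL mysql")
    con.execute("LOAD mysql")
    # Placeholder DSN; attaches the live MySQL database as a catalog.
    con.execute(
        "ATTACH 'host=127.0.0.1 user=app database=mydb' AS src (TYPE mysql)"
    )
    # The count runs in DuckDB over the attached table: no Kafka,
    # no Debezium, no second copy of the data to keep in sync.
    print(con.execute("SELECT count(*) FROM src.my_table").fetchone())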
One option is TiDB. It supports columnar data alongside row-based data. It is MySQL-compatible, but not based on the MySQL codebase, so it's not quite what you asked for.
On a drive-by glance, it looks like a more tightly integrated version of a PSQL FDW for DuckDB, with vector storage - meets Vespa. I find it interesting that they went with extending MySQL instead of the FDW route on PSQL.
I’m quite certain that if DuckDB had been open-sourced and reached stability around 2020, TiDB would have chosen DuckDB instead of ClickHouse.
Just guessing, but it probably wasn't planned as open source.
The real version control history might be full of useless internal Jira ticket references, confidential information about products, in Mandarin, not even in git... there's a thousand reasons to surface only a minimal fake git version history, hand-crafted from major releases.
Quickly becoming my least-favorite account. If you’re going to have a schtick, have a schtick. Write your comments in an old-timey voice or iambic pentameter or whatever, include a signature, ASCII art, lean into being annoying.
At the moment I use PG + Tiger Data; I couldn't find a MySQL equivalent, so this serves as one.
For about a year now, MariaDB releases have included a vector storage type, so it will be interesting to see how its performance compares with what Alibaba did.
Just wanted to plug that. Given how often Postgres is plugged on HN, I think people ignore how versatile MariaDB is.
And I get the benefit of resiliency and DR for free.
If you are developing for MySQL and you are using Java/Kotlin/Clojure/Scala, consider this as well.
Let's all hope Ali will pick it up :)
I'm fully invested in Postgres, though.