
Using ClickHouse to scale an events engine

237 points | wyndham | 1 year ago | github.com

97 comments

[+] mritchie712|1 year ago|reply
> Recently, the most interesting rift in the Postgres vs OLAP space is [Hydra](https://www.hydra.so), an open-source, column-oriented distribution of Postgres that was very recently launched (after our migration to ClickHouse). Had Hydra been available during our decision-making time period, we might’ve made a different choice.

There will likely be a good OLAP solution (possibly implemented as an extension) in Postgres in the next year or so. A few companies are working on it (Hydra, Parade[0], Tembo, etc.).

0 - https://www.paradedb.com/

[+] ddorian43|1 year ago|reply
I don't think `tembo` is working on it though, probably just hosting an existing extension.
[+] Mortiffer|1 year ago|reply
So are ParadeDB and Hydra using the same codebase, or just a similar approach?
[+] joshstrange|1 year ago|reply
I feel like with all the Clickhouse praise on HN that we /must/ be doing something fundamentally wrong because I hate every interaction I have with Clickhouse.

* Timeouts (only 30s???) unless I used the cli client

* Cancelling rows - Just kill me, so many bugs and FINAL/PREWHERE are massive foot-guns

* Cluster just feels annoying and fragile; don't forget "ON CLUSTER" or you'll have a bad time

Again, I feel like we must be doing something wrong but we are paying an arm and a leg for that "privilege".

[+] nsguy|1 year ago|reply
What is your use case? If you're deleting rows, that already feels like it may not be the intended use case. I think of ClickHouse as taking in a firehose of immutable data that you want to aggregate/analyze/report on. Let's say a million records per second. To make up an example: the orientation, speed, and acceleration of every Tesla vehicle in the world, in real time, every second.
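For that kind of firehose, the table design is append-only from the start. A hypothetical schema for the telemetry example above (names and types are illustrative, not from the article) might look like:

```sql
-- Immutable, append-only telemetry: one row per vehicle per second.
-- Partitioning by day keeps merges and TTL-based expiry cheap.
CREATE TABLE vehicle_telemetry
(
    ts           DateTime,
    vehicle_id   UInt64,
    orientation  Float32,
    speed        Float32,
    acceleration Float32
)
ENGINE = MergeTree
PARTITION BY toYYYYMMDD(ts)
ORDER BY (vehicle_id, ts);
```

Nothing here ever gets updated in place; old data ages out by dropping whole partitions rather than deleting rows.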
[+] ergonaught|1 year ago|reply
Most, though certainly not all, problems I see with ClickHouse usage come from pretending it is another database or that it is intended for other use cases.
[+] orf|1 year ago|reply
From the docs on FINAL:

> However, using FINAL is sometimes necessary in order to produce accurate results

Welp.
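For context, the quote is about engines like ReplacingMergeTree, where deduplication happens at some unspecified time during background merges. A minimal sketch of the trade-off (table and values are hypothetical):

```sql
CREATE TABLE events_dedup
(
    id      UInt64,
    payload String,
    version UInt64
)
ENGINE = ReplacingMergeTree(version)
ORDER BY id;

-- Until parts are merged, both versions of a row may come back:
SELECT * FROM events_dedup WHERE id = 42;

-- FINAL forces deduplication at query time: accurate, but slower,
-- which is why it's both "sometimes necessary" and a foot-gun.
SELECT * FROM events_dedup FINAL WHERE id = 42;
```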

[+] shin_lao|1 year ago|reply
It's meant to store immutable data, and isn't great if you need low-latency updates. Also it's quirky in some ways.
[+] zX41ZdbW|1 year ago|reply
Interesting about "Timeouts (only 30s???)" - most likely, this is a limit configured explicitly for a user on your server. You can control it with the `max_execution_time` setting; by default, it is unlimited.

For example, I've set it up, along with many more limitations for my public playground https://play.clickhouse.com/, and it allows me to, at least, make it public and not worry much.

It could also be a configuration of a proxy if you connect through a proxy. ClickHouse has built-in HTTP API, so you can query it directly from the browser or put it behind Cloudflare, etc... Where do you host ClickHouse?
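Since the HTTP API takes settings as query parameters, the limit can also be set per request. A small sketch (the hostname is a placeholder; only the URL construction is shown, not the network call):

```python
from urllib.parse import urlencode

def build_query_url(host: str, query: str, max_execution_time: int) -> str:
    """Build a ClickHouse HTTP API URL with a per-query time limit (seconds)."""
    params = urlencode({
        "query": query,
        "max_execution_time": max_execution_time,  # 0 means unlimited
    })
    return f"https://{host}:8443/?{params}"

url = build_query_url("clickhouse.example.com", "SELECT 1", 300)
# The URL can then be fetched with urllib.request.urlopen(url), curl, etc.
```

If a proxy sits in front, its own timeout still applies regardless of this setting.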

[+] alecfong|1 year ago|reply
What foot guns have you run into with FINAL?
[+] HermitX|1 year ago|reply
Is ClickHouse a suitable engine for analyzing events? Absolutely; as long as you're analyzing a single large table, it is definitely fast enough. However, you might want to consider the cost of maintaining an OSS ClickHouse cluster, especially when you need to scale up, as the operational costs can be quite high.

If your analysis in Postgres was based on multiple tables and required a lot of JOIN operations, I don't think ClickHouse is a good choice. In such cases, you often need to denormalize multiple data tables into one large table in advance, which means complex ETL and maintenance costs.

For these more common scenarios, I think StarRocks (www.StarRocks.io) is a better choice. It's a Linux Foundation open-source project, with single-table query speeds comparable to ClickHouse (you can check Clickbench), and unmatched multi-table join query speeds, plus it can directly query open data lakes.

[+] jakearmitage|1 year ago|reply
> consider the cost of maintaining an OSS ClickHouse cluster

I mean... it is pretty straightforward: 40-60 lines of Terraform, plus Ansible with templates for the proper configs (fed the IPs exported from Terraform, so the nodes can see each other), and you are done.

What else could you possibly need? Backing up is built into it with S3 support: https://clickhouse.com/docs/en/operations/backup#configuring...

Upgrades are a breeze: https://clickhouse.com/docs/en/operations/update

People insist that OMG MAINTENANCE I NEED TO PAY THOUSANDS FOR MANAGED is better, when in reality, it is not.
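The backup story really is a one-liner once the S3 endpoint is configured per the linked docs (bucket and credentials below are placeholders):

```sql
BACKUP TABLE events
    TO S3('https://my-bucket.s3.amazonaws.com/backups/events',
          'access_key_id', 'secret_access_key');

-- Restoring is symmetric:
RESTORE TABLE events
    FROM S3('https://my-bucket.s3.amazonaws.com/backups/events',
            'access_key_id', 'secret_access_key');
```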

[+] breadchris|1 year ago|reply
ClickHouse is awesome, but as the post shows, some code is involved in getting the data there.

I have been working on Scratchdata [1], which makes it easy to try out a column database to optimize aggregation queries (avg, sum, max). We have helped people [2] take their Postgres with 1 billion rows of information (1.5 TB) and significantly reduce their real-time data analysis query time. Because their data was stored more efficiently, they saved on their storage bill.

You can send data as a curl request and it will get batch-processed and flattened into ClickHouse:

curl -X POST "http://app.scratchdata.com/api/data/insert/your_table?api_ke..." --data '{"user": "alice", "event": "click"}'

The founder, Jay, is super nice and just wants to help people save time and money. If you give us a ring, he or I will personally help you [3].

[1] https://www.scratchdb.com/ [2] https://www.scratchdb.com/blog/embeddables/ [3] https://q29ksuefpvm.typeform.com/to/baKR3j0p?typeform-source...

[+] wiredfool|1 year ago|reply
My first big win for ClickHouse was replacing a 1.2 TB, billion+ row PostgreSQL DB. It was static data with occasional full-replacement loads. We got the DB down to ~60 GB, with query speeds about 45x faster.

Now, the Postgres schema wasn't ideal, and we could have saved ~3x on it, with corresponding speed increases for queries, via a refactor similar to the ClickHouse schema, but that wasn't really enough to move the needle to near-real-time queries.

Ultimately, the entire clickhouse DB was smaller than the original postgres primary key index. The index was too big to fit in memory on an affordable machine, so it's pretty obvious where the performance is coming from.

[+] alooPotato|1 year ago|reply
We use BigQuery a lot for internal analytics and we've been super happy. I don't see a lot of love for BigQuery on HN and I wonder why. Tons of features, no hassle and easy to throw a bunch of TB at it.

I guess maybe the cost?

[+] mnahkies|1 year ago|reply
I'm a big fan of big query as well, but the cost can be problematic if you're not careful.

Generally speaking I've found it manageable if you make good use of partitioning and do incremental aggregation (we use dbt, though you have to do some macro gymnastics to make the partition key filter eligible for pruning due to restrictions on use of dynamic values https://docs.getdbt.com/docs/build/incremental-models)
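A hedged sketch of what that looks like in dbt on BigQuery (model and column names are hypothetical; note that `_dbt_max_partition` is only declared for you under the `insert_overwrite` incremental strategy):

```sql
-- models/events_daily.sql (hypothetical)
{{ config(
    materialized='incremental',
    incremental_strategy='insert_overwrite',
    partition_by={'field': 'event_date', 'data_type': 'date'}
) }}

select event_date, count(*) as events
from {{ source('app', 'raw_events') }}
{% if is_incremental() %}
  -- _dbt_max_partition is a BigQuery scripting variable, so the predicate
  -- is static enough for the planner to prune partitions.
  where event_date >= date_sub(_dbt_max_partition, interval 1 day)
{% endif %}
group by event_date
```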

It's also important to monitor your cost and watch for the point where switching from the per-tb queried pricing model to slots makes sense.

[+] RadiozRadioz|1 year ago|reply
Probably also because it is proprietary and only exists in one cloud platform.
[+] doo_daa|1 year ago|reply
We are lucky enough to be able to run BigQuery with flat rate billing. It's incredibly powerful and it's a really good example of SaaS and Serverless done right. It just works.
[+] lysecret|1 year ago|reply
Yep love it too, especially with external data on GCS. Costs this way are very low. And the convenience is amazing (getting caches you can stream from for every query is a godsend)
[+] wodenokoto|1 year ago|reply
I was quite surprised that other clouds don’t have an easy to get started analytics data warehouse solution like big query.
[+] drewda|1 year ago|reply
This change may make sense for Lago as a hosted multi-tenant service, as offered by Lago the company.

Simultaneously this change may not make sense for Lago as an open-source project self-hosted by a single tenant.

But that may also mean that it effectively makes sense for Lago as a business... to make it harder to self host.

I don't at all fault Lago for making decisions to prioritize their multi-tenant cloud offering. That's probably just the nature of running open-source SaaS these days.

[+] config_yml|1 year ago|reply
Exactly, I've seen this at Sentry where you now have to run Kafka, Clickhouse, Redis, PG, Zookeeper, memcached and what have you. I get it, but the amount of baggage to handle is a bit difficult.
[+] stephen123|1 year ago|reply
How were they doing millions of events per minute with Postgres?

I'm struggling with pg write performance ATM and want some tips.

[+] Ozzie_osman|1 year ago|reply
If you're not already doing this: remove unnecessary indices, partition the table, batch your inserts/updates, or try COPY instead of INSERT.
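The batching and COPY parts can be sketched in a few lines of Python. The serialization below targets the text format COPY expects; the database client (e.g. psycopg2's `copy_from`) is assumed and only referenced in a comment:

```python
import io

def to_copy_buffer(rows):
    """Serialize rows into the tab-separated text format for COPY ... FROM STDIN."""
    buf = io.StringIO()
    for row in rows:
        buf.write("\t".join(str(col) for col in row) + "\n")
    buf.seek(0)
    return buf

def batched(rows, size):
    """Yield rows in chunks so each COPY stays a bounded transaction."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

rows = [(1, "click"), (2, "view"), (3, "click")]
for batch in batched(rows, 2):
    buf = to_copy_buffer(batch)
    # with psycopg2: cur.copy_from(buf, "events", columns=("id", "event"))
```

COPY skips much of the per-statement overhead of individual INSERTs, which is where most of the write-throughput win comes from.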
[+] unixhero|1 year ago|reply
Turn off indexing and other table-level optimizations.
[+] whalesalad|1 year ago|reply
What’s your hardware? RDS? Nvme storage?
[+] mathnode|1 year ago|reply
And if you use MariaDB, just enable columnstore. Why not treat yourself to s3 backed storage while you are there?

It is extremely cost effective when you can scale a different workload without migrating.

[+] hipadev23|1 year ago|reply
This is no shade to postgres or maria, but they don’t hold a candle to the simplicity, speed, and cost efficiency of clickhouse for olap needs.
[+] samber|1 year ago|reply
I'm curious: how many rows does Lago store in its CH cluster? Do they collect data for fighting fraud?

PG can handle a billion rows easily.

[+] didip|1 year ago|reply
OLAP databases need to be able to handle billions of rows per hour/day.

I super love PG but PG is too far away from that.

[+] JosephRedfern|1 year ago|reply
Reading between the lines, given they're talking > 1 million rows per minute, I'd guess on the order of trillions of rows rather than billions (assuming they retain data for more than a couple of weeks).
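The back-of-the-envelope arithmetic behind that guess: at a sustained 1 million rows per minute, one year is roughly half a trillion rows, so a few years of retention lands in the trillions.

```python
rows_per_minute = 1_000_000
rows_per_year = rows_per_minute * 60 * 24 * 365
print(rows_per_year)  # 525_600_000_000, i.e. ~0.5 trillion per year
```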
[+] jacobsenscott|1 year ago|reply
PG can handle billions of rows for certain use cases, but not easily. Generally you can make things work but you definitely start entering "heroic effort" territory.
[+] jackbauer24|1 year ago|reply
Scale is becoming more and more important: not just for cost, but also as a key technical lever for absorbing unexpected traffic and reducing the cost of manual operations.
[+] andretti1977|1 year ago|reply
I have a tangentially related question, since I don't use an OLAP DB: is deleting data really that hard to perform? Is the storage necessarily immutable?

If so, is it a GDPR-compliant storage solution? I ask because GDPR compliance may require data deletion (or at least anonymization).

[+] FridgeSeal|1 year ago|reply
Columnar DBs want data to be contiguous on disk, and deletes cause the rest of the data in that "block" to be rewritten (imagine deleting a chunk out of the middle of an Excel table: you've got to move everything else up).

This, in turn, creates read+write load. Modern OLAP DBs often support deletes, usually via mitigating strategies that minimise the extra work: mark tainted rows, exclude them from queries, and clean up asynchronously; etc.
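The mark-and-clean-up-later strategy described above can be shown with a toy block (names and structure are illustrative, not any particular engine's internals):

```python
class ColumnBlock:
    def __init__(self, values):
        self.values = list(values)
        self.tombstones = set()        # row indices marked as deleted

    def delete(self, index):
        self.tombstones.add(index)     # cheap: no data is moved yet

    def scan(self):
        # Queries filter out tombstoned rows, so deletes are visible
        # immediately even though the data is still on disk.
        return [v for i, v in enumerate(self.values) if i not in self.tombstones]

    def compact(self):
        # Asynchronous cleanup: rewrite the block contiguously,
        # paying the read+write cost once, off the query path.
        self.values = self.scan()
        self.tombstones.clear()

block = ColumnBlock([10, 20, 30])
block.delete(1)
assert block.scan() == [10, 30]   # delete visible before any rewrite
block.compact()                   # now the dead row is physically gone
```

This is also why GDPR-style deletion is possible in these systems, just deferred: the data is truly gone only after compaction runs.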

[+] dangoodmanUT|1 year ago|reply
deleting this comment because apparently jokes are not received well here