top | item 29207940

rxin | 4 years ago

There's an official TPC process to audit and review benchmark results. This debate could be settled most easily by everybody participating in the official benchmark, as we (Databricks) did.

The official review process is significantly more complicated than just offering a static dataset that has been highly optimized for answering the exact set of queries. It includes data loading, data maintenance (inserting and deleting data), a sequential query test, and a concurrent query test.

You can see the description of the official process in this 141 page document: http://tpc.org/tpc_documents_current_versions/pdf/tpc-ds_v3....

Consider the following analogy: professional athletes compete in the Olympics, where official judges and a lot of stringent rules and checks ensure fairness. That's the real arena, and that's what we (Databricks) have done with the official TPC-DS world record. In data warehouse systems, for example, data loading, ordering, and updates can affect performance substantially, so it's most useful to compare both systems on the official benchmark.

But what’s really interesting to me is that even Snowflake’s self-reported number ($267) is still more expensive than Databricks’ numbers ($143 on spot, $242 on demand). This is despite Databricks’ cost being calculated on our enterprise tier, while Snowflake used their cheapest tier without any enterprise features (e.g., disaster recovery).
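
Using only the self-reported figures quoted above, a quick back-of-the-envelope comparison (my own arithmetic, not from either vendor's report) looks like this:

```python
# Self-reported TPC-DS costs quoted above, in USD.
snowflake = 267
databricks_spot = 143
databricks_on_demand = 242

# How much more expensive Snowflake's number is, relative to each
# Databricks figure.
vs_spot = snowflake / databricks_spot            # roughly 1.87x
vs_on_demand = snowflake / databricks_on_demand  # roughly 1.10x

print(f"Snowflake vs Databricks spot:      {vs_spot:.2f}x")
print(f"Snowflake vs Databricks on-demand: {vs_on_demand:.2f}x")
```

So even on the most favorable comparison for Snowflake (Databricks on-demand pricing), their self-reported cost still comes out about 10% higher.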

Edit: added link to audit process doc

_dark_matter_ | 4 years ago

Thanks for the additional context here. As someone who works for a company that pays for both Databricks and Snowflake, I will say that these results don't surprise me.

Spark has always been infinitely configurable, in my experience. There are probably tens of thousands of possible configurations; everything from Java heap size to Parquet block size.
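
To make that concrete, here are a few real Spark properties of the kind being described, in spark-defaults.conf form. The values are arbitrary illustrations, not tuning recommendations:

```
# spark-defaults.conf -- illustrative values only, not tuning advice
spark.executor.memory              8g         # JVM heap per executor
spark.driver.memory                4g         # JVM heap for the driver
spark.sql.shuffle.partitions       200        # default shuffle parallelism
spark.sql.files.maxPartitionBytes  128m       # input split size when reading files
spark.hadoop.parquet.block.size    134217728  # Parquet row-group size, in bytes
```

Each of these interacts with workload shape, cluster size, and file layout, which is why tuning Spark by hand can become a full-time job.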

Snowflake is the opposite: you can't even specify partitions! There is only clustering.

For a business, running Snowflake is easy because engineers don't have to babysit it, and we like it because now we're free to work on more interesting problems. Everybody wins.

Unless those problems are DB optimization. Then Snowflake can actually get in your way.

rxin | 4 years ago

Totally. Simplicity is critical. That’s why we didn’t build Databricks SQL on Spark.

As a matter of fact, we took the extreme approach of not allowing customers (or ourselves) to set any of the known knobs. We want to force ourselves to build a system that runs well out of the box and still beats data warehouses on price-performance. The official result involved no tuning: we partitioned by date, loaded the data in, provisioned a Databricks SQL endpoint, and that’s it. No additional knobs or settings. (In fact, Snowflake’s own sample TPC-DS dataset has more tuning than ours did: they clustered by multiple columns specifically to optimize for the exact set of queries.)

gibneyMI | 4 years ago

Credit to you for these amazing benchmark scores via an official process. You've certainly proved to naysayers such as Stonebraker that lakes and warehouses can be combined in a performant manner!

Shame on you for quoting a fake, non-official score for Snowflake in your blog post, with crude insinuations to make it seem like an apples-to-apples comparison.

I run a BI org in an F500 company that uses both Databricks and Snowflake on AWS. I can tell you that such dishonest shenanigans detract from your truly noteworthy technical achievements and make me not want to buy your stuff, for lack of integrity. Not very long ago, Azure+GigaOM did a similar blog post with fake numbers on AWS Redshift, and it resulted in my department, and a number of large F500 enterprises that I know of, moving away from Synapse for the same lack of integrity.

On many occasions, I've felt that Databricks product management and sales teams lack integrity (especially the folks from Uber & VMW), and moves like this only amplify that impression. Your sales guys use arm-twisting tactics to meet quotas, and your PM execs are clueless about your technology and industry. My suggestion is to overhaul some of these teams and cull the rot - it is taking away from the great work your engineers and Berkeley research teams are doing.

uvdn7 | 4 years ago

Snowflake claims the Snowflake result Databricks quoted was not audited. It’s not that Databricks’ numbers were artificially good, but rather that Snowflake’s number was unreasonably bad.