Accidentally exponential behavior in Spark

75 points | drob | 4 years ago | heap.io

33 comments

d_burfoot|4 years ago

There are two lessons you could learn from this episode:

1. Use shallow trees and the clever workaround presented in the article.

2. Don't use Spark for tasks that require complex logic.

People should trace out the line of reasoning that leads them to use tools like Spark. It is convoluted and contingent - it goes back to work done at Google in the early 2000s, when the key to getting good price / performance was using a large number of commodity machines. Because they were cheap, these machines would break often, so you needed some really smart fault tolerance technology like Hadoop/HDFS, which was followed by Spark.

The current era is completely different. Now the key to good price / performance is to light up machines on-demand and then shut them down, only paying for what you use - and perhaps using the spot market. You don't need to worry about storage - that's taken care of by the cloud provider, and you can't "bring the computation to the data" like in the old days, removing one of the big advantages of Hadoop/HDFS. Because jobs are doing mostly IO and networking, and because computers are just more resilient nowadays, they rarely fail because of hardware errors. So almost the entire rationale that led to Hadoop/HDFS/Spark is gone. But people still use Spark - and put up with "accidentally exponential behavior" - because the tech industry is so dominated by groupthink and marketing dollars.

thundergolfer|4 years ago

Exactly correct. I’ve got a post in the works called “Elegy for Hadoop” that traces the history back to the early 2000s and arrives at the present day, where you can easily get on-demand instances with 500 GB of RAM and use them for only your application’s lifetime. If you want 1,000 GB instead of 500 GB, it doesn’t cost 5x, it costs 2x, significantly invalidating the “need to use excess commodity hardware” premise of the distributed map-reduce architecture.

Edit: I don’t mean to suggest that there is no reason to use Spark, but ~95% of the usage in industry is unnecessary now and should be avoided.

paulbaumgart|4 years ago

Is there an alternative you’d recommend?

ngc248|4 years ago

Spark is still the best for stream-processing use cases, and if you have a large enough volume of data coming in, something like Spark is still the best for batch processing too.

gopalv|4 years ago

I've hit almost exactly the same issue with Hive, with a somewhat temporary workaround (like the one in this post): reading the expression into a list [1] and rebuilding a balanced binary tree out of it.

But we ended up implementing a single-level Multi-AND [2], so this is no longer a tree for plain AND expressions and it can be vectorized more neatly than the nested structure with a function call for each node (it looks more like a tail call than a recursive function).

The ORC CNF conversion has a similar, massively exponential item inside, which is protected by a check for 256 items or fewer [3].

[1] - https://github.com/t3rmin4t0r/captain-hook/blob/master/src/m...

[2] - https://issues.apache.org/jira/browse/HIVE-11398

[3] - https://github.com/apache/hive/blob/master/storage-api/src/j...
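
For illustration, here is a toy sketch of the "read it into a list and rebuild a balanced binary tree" workaround described above, using a made-up expression ADT rather than Hive's actual classes:

  // Toy expression ADT for illustration only - not Hive's actual classes.
  sealed trait Expr
  case class Pred(name: String) extends Expr
  case class And(left: Expr, right: Expr) extends Expr

  // Flatten a deeply nested AND chain into its list of leaves...
  def flatten(e: Expr): List[Expr] = e match {
    case And(l, r) => flatten(l) ++ flatten(r)
    case leaf      => List(leaf)
  }

  // ...then rebuild a balanced tree, dropping the depth from O(n) to O(log n).
  def rebuild(leaves: List[Expr]): Expr = leaves match {
    case Nil           => sys.error("empty expression")
    case single :: Nil => single
    case _ =>
      val (l, r) = leaves.splitAt(leaves.length / 2)
      And(rebuild(l), rebuild(r))
  }

  def balance(e: Expr): Expr = rebuild(flatten(e))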

crznp|4 years ago

It seems to me that the basic problem is that binary trees are the wrong data type. For instance, you can transform the tree to balance it:

    p1 AND (p2 AND (p3 AND notp4)) -> (p1 AND p2) AND (p3 AND notp4)

But the abstraction specifies the order of operations unnecessarily. Using general trees, I think you avoid the need to transform the order of operations and "NOT" doesn't have to be a special case:

    ALL (p1 p2 p3 NOT(p4))

Is there any reason to choose binary trees for this? (Other than inertia).
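
A minimal sketch of what such a general (n-ary) tree might look like, with made-up names for illustration:

  // Made-up ADT for illustration; AND/OR take any number of children.
  sealed trait Pred
  case class Leaf(name: String) extends Pred
  case class Not(p: Pred) extends Pred
  case class AllOf(children: Seq[Pred]) extends Pred  // AND over all children
  case class AnyOf(children: Seq[Pred]) extends Pred  // OR over the children

  // "ALL (p1 p2 p3 NOT(p4))" as a flat sequence - no nesting order to preserve or rebalance.
  val expr: Pred = AllOf(Seq(Leaf("p1"), Leaf("p2"), Leaf("p3"), Not(Leaf("p4"))))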

mlyle|4 years ago

Why not just...

  val transformedLeftTemp = transform(tree.left)

  val transformedLeft = if (transformedLeftTemp.isDefined) {
    transformedLeftTemp
  } else None

IvanVergiliev|4 years ago

Good question, the simplified example doesn't make this clear.

The real implementation has a mutable `builder` argument used to gradually build the converted filter. If we perform the `transform().isDefined` call directly on the "main" builder, but the subtree turns out to not be convertible, we can mess up the state of the builder.

The second example from the post would look roughly like this:

  val transformedLeft = if (transform(tree.left, new Builder()).isDefined) {
    transform(tree.left, mainBuilder)
  } else None

Since the two `transform` invocations are different, we can't cache the result this way.

There's a more detailed explanation in the old comment to the method: https://github.com/apache/spark/pull/24068/files#diff-5de773... .
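
As a rough, self-contained sketch of the shape being described here (toy types, not Spark's actual classes): each node does a trial `transform` on a scratch builder and, if that succeeds, a second `transform` on the real builder, so the total number of calls grows exponentially with the nesting depth.

  // Toy stand-ins for the real Tree / Builder types - for illustration only.
  sealed trait Tree
  case class Leaf(ok: Boolean) extends Tree
  case class And(left: Tree, right: Tree) extends Tree
  class Builder { def add(s: String): Unit = () }

  var calls = 0

  def transform(tree: Tree, builder: Builder): Option[String] = {
    calls += 1
    tree match {
      case Leaf(ok) =>
        if (ok) { builder.add("leaf"); Some("leaf") } else None
      case And(l, r) =>
        // Trial call on a scratch builder so a failed subtree can't corrupt `builder`,
        // then a second call on the real builder to actually emit the result.
        val left  = if (transform(l, new Builder()).isDefined) transform(l, builder) else None
        val right = if (transform(r, new Builder()).isDefined) transform(r, builder) else None
        for (a <- left; b <- right) yield s"($a AND $b)"
    }
  }

  // A nested AND chain of depth n triggers roughly 2^n transform calls.
  def chain(n: Int): Tree = if (n == 0) Leaf(true) else And(chain(n - 1), Leaf(true))

  val result    = transform(chain(20), new Builder())
  val callCount = calls  // grows exponentially with the chain depth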

lanna|4 years ago

It looks like transform(tree.left) returns an Option[Tree] already (otherwise the code would not type check) so the entire if-else in the original code seems redundant and could be replaced with:

    val transformedLeft = transform(tree.left)

eska|4 years ago

It boggles my mind that the author wrote an entire long article based on this.

The rhetorical question implying that a weird refactor of two different functions into one, followed by calling that new, non-trivial function twice for no reason, surely shouldn't affect performance... he already lost me at the premise of the article.

IvanVergiliev|4 years ago

Post author here. Let me know if you have any questions!

mjburgess|4 years ago

Is there anything you can say here about why you're running this query in spark?

Supposing spark is your ETL machinery... would it not make more sense to ETL this into a database?

throwaway81523|4 years ago

Would be nice if title said Apache Spark instead of just Spark, since there are other programs like Spark/Ada also called Spark.

dreyfan|4 years ago

Spark is this weird ecosystem of people who take absolutely trivial concepts in SQL, bury their heads in the sand and ignore the past 50 years of RDBMS evolution, and then write extremely complicated (or broken) and expensive-to-run code. But whatever it takes to get Databricks to IPO! Afterwards the hype will die down and everyone will collectively abandon it, just like MongoDB, except for the unfortunate companies with so much technical debt that they can't extricate themselves from it.

hobs|4 years ago

There's certainly some of that, and I have experienced project managers asking me to put 5GB datasets in Spark... but there's definitely a set of problems where vertical scaling is a PITA, and MPP generally breaks the SQL guarantees anyway, costs a milli, requires rewrites, etc.

When you want to process N+1 TB/PB, it's hard to throw standard relational approaches at it, imo.

SQL is strings all the way down, and testing the database itself is often a shitshow...

hrkfmud50k|4 years ago

Spark is far more testable and composable than SQL! And you even get static type checking. Plus I can read data from anywhere - local FS, S3, RDBMS, JSON, Parquet, CSV... an RDBMS could not compete.
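
For instance, in a spark-shell session (where `spark` is already defined), the same read API covers all of those sources; the paths, bucket, and JDBC details below are placeholders:

  // Same DataFrame API regardless of where the bytes live (all locations are placeholders).
  val fromJson    = spark.read.json("file:///data/events.json")
  val fromParquet = spark.read.parquet("s3a://bucket/events/")
  val fromCsv     = spark.read.option("header", "true").csv("/data/events.csv")
  val fromJdbc    = spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db:5432/app")
    .option("dbtable", "events")
    .load()

  // Static type checking: go from an untyped DataFrame to a typed Dataset.
  case class Event(userId: Long, action: String)
  import spark.implicits._
  val events = fromParquet.as[Event]  // assumes the data actually has these columns
  events.filter(_.action == "signup").show()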

scottmcdot|4 years ago

Is it best to just use spark.sql?
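
For context, this is a minimal sketch of the `spark.sql` pattern the question refers to, assuming a spark-shell session and placeholder names:

  // Register a source as a temp view, then query it with SQL text.
  spark.read.parquet("s3a://bucket/events/").createOrReplaceTempView("events")

  val signups = spark.sql(
    "SELECT user_id, count(*) AS n FROM events WHERE action = 'signup' GROUP BY user_id")
  signups.show()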