kprybol | 7 years ago

It depends on the use case. Our work primarily revolves around extending Spark with custom pipelines, models, ensembles, etc. to be deployed into our production systems (petabyte scale). Scala was really the only way to go for us.
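
As a rough illustration of what a custom pipeline stage involves, here is a minimal sketch of a Transformer in Spark ML. The ThresholdFilter name and the "value" column are hypothetical stand-ins, not the commenter's production code:

    import org.apache.spark.ml.Transformer
    import org.apache.spark.ml.param.{DoubleParam, ParamMap}
    import org.apache.spark.ml.util.Identifiable
    import org.apache.spark.sql.{DataFrame, Dataset}
    import org.apache.spark.sql.functions.col
    import org.apache.spark.sql.types.StructType

    // Hypothetical stage: drops rows whose "value" column falls below
    // a configurable threshold. Illustrative only.
    class ThresholdFilter(override val uid: String) extends Transformer {
      def this() = this(Identifiable.randomUID("thresholdFilter"))

      val threshold: DoubleParam =
        new DoubleParam(this, "threshold", "minimum value to keep")
      setDefault(threshold, 0.0)

      def setThreshold(t: Double): this.type = set(threshold, t)

      // The row-level logic of the stage.
      override def transform(ds: Dataset[_]): DataFrame =
        ds.filter(col("value") >= $(threshold)).toDF()

      // Schema is unchanged: this stage only filters rows.
      override def transformSchema(schema: StructType): StructType = schema

      override def copy(extra: ParamMap): ThresholdFilter = defaultCopy(extra)
    }

Such a stage slots into a Pipeline next to the built-in ones, e.g. new Pipeline().setStages(Array(new ThresholdFilter().setThreshold(0.5), lr)).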

sandGorgon | 7 years ago

I can understand the performance difference, but I haven't generally seen a difference when building custom pipelines and ensembles... although I grant I'm not at your scale yet.

What specific kinds of pipelines did you have trouble with in PySpark?

RBerenguel | 7 years ago

Although we decided to start using Scala specifically because PySpark was not as performant (2.0 was not so long ago), a use case I always keep in mind is aggregation (and, in general, any API that is still not solid: experimental or under active development). Python bindings are always the last to arrive, because all the groundwork is done in Scala.

We have a relatively large-scale process that takes advantage of custom-built aggregation methods on top of groupedDatasets, where we can pack a good deal of logic into the merge and reduce steps of the aggregation. We could replicate this in Python using reducers, but aggregating makes more sense semantically, which makes the code easier to understand. Also, the testing facilities for Spark code under Scala are a bit more advanced than under Python (they are not super-great, but they are better), even before considering that strong typing makes a whole class of errors impossible, right out of the compiler.
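
As a minimal sketch of that kind of typed aggregation (the Event and SumCount types and the running average are illustrative assumptions, not the actual logic), the custom work lives in the reduce and merge steps of an org.apache.spark.sql.expressions.Aggregator, and the compiler checks the types end to end:

    import org.apache.spark.sql.{Encoder, Encoders}
    import org.apache.spark.sql.expressions.Aggregator

    // Hypothetical input row and intermediate buffer types.
    case class Event(userId: String, value: Double)
    case class SumCount(sum: Double, count: Long)

    object AvgValue extends Aggregator[Event, SumCount, Double] {
      // Initial buffer for a new group.
      def zero: SumCount = SumCount(0.0, 0L)

      // reduce: fold one input row into a per-partition buffer.
      def reduce(b: SumCount, e: Event): SumCount =
        SumCount(b.sum + e.value, b.count + 1)

      // merge: combine partial buffers from different partitions.
      def merge(b1: SumCount, b2: SumCount): SumCount =
        SumCount(b1.sum + b2.sum, b1.count + b2.count)

      // finish: turn the final buffer into the output value.
      def finish(b: SumCount): Double =
        if (b.count == 0) 0.0 else b.sum / b.count

      def bufferEncoder: Encoder[SumCount] = Encoders.product[SumCount]
      def outputEncoder: Encoder[Double] = Encoders.scalaDouble
    }

Used as a typed column on a grouped Dataset: events.groupByKey(_.userId).agg(AvgValue.toColumn). Replicating the same thing in PySpark means falling back to untyped expressions or RDD-style reducers.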

I very, very rarely think of using PySpark when working with Spark (and I have way more experience with Python than with Scala). In a kitchen setting, it would be like having to prepare a cake and having to choose between a fork and a whisk. I can get it done with the fork, but I'll do a better and faster job with the whisk.