top | item 14737138


plamb | 8 years ago

SnappyData employee here -- This is essentially what we did. The main difference is that we already had a decade-old transactional K/V store that, over time, morphed into a more full-fledged in-memory database. That is what we integrated with Spark, rather than rolling a new database. The SQL layer in this database (GemFire/Geode) already had a number of optimizations we could use to speed up Spark SQL queries, even over the native Spark cache.

Like some of the other comments in this thread, the idea was to provide all the guarantees of an OLTP store (HA, ACID, scalability, mutations, etc.) alongside the powerful analytic capabilities of Spark.
