
A Data Pipeline Is a Materialized View

144 points | nchammas | 5 years ago | nchammas.com

47 comments


georgewfraser|5 years ago

Great article, one quibble: there isn’t really a clear dividing line between batch and streaming. If you process data one row at a time, that is clearly a streaming pipeline, but most systems that call themselves streaming actually process data in small batches. From a user perspective, it’s an implementational detail, the only thing you care about is the latency target.

Nearly all data sources, including the changelogs of databases, are polling-based APIs, so you’re getting data from the source in (small) batches. If your goal is to put this data into a data warehouse like Snowflake, or a system like Materialize, the lowest latency thing you can do is just immediately put that data into the destination. I sometimes see people put a message broker like Kafka in the middle of this process, thinking it’s going to imbue the system with some quality of streamyness, but this can only add latency. People are often surprised that we don’t use a message broker at Fivetran, but when you stop and think about it there’s just no benefit in this context.

nchammas|5 years ago

> If you process data one row at a time, that is clearly a streaming pipeline, but most systems that call themselves streaming actually process data in small batches. From a user perspective, it’s an implementation detail; the only thing you care about is the latency target.

Author here. 100% agreed.

As an aside, I just came across your post about how Databricks is an RDBMS [0]. I recently wrote a similar article from a slightly more abstract perspective [1].

Having worked heavily with RDBMSs in the first part of my career, I feel like so many of the concepts and patterns I learned about there are being re-expressed today with modern, distributed data tooling. And that was part of my inspiration for this post about data pipelines.

[0] https://fivetran.com/blog/databricks-is-an-rdbms

[1] https://nchammas.com/writing/modern-data-lake-database

snidane|5 years ago

Ultimately there will never be a pure streaming system processing one record at a time in the real world. Any such system contains a busy loop somewhere inside polling the source, say every 100ms, and unless it shares a lock with the source system, it can never guarantee that there won't be more items in the source queue within those 100ms intervals. Therefore all such systems are at best (micro)batch systems. Streaming systems also literally batch data into time windows when doing, e.g., a group-by operation, so they turn into batch systems then.

Pure batch systems are those where the processing window is infinite and no state is preserved. Everything is recomputed from scratch on every run. This seems to be the preferred way to do ETL, because dragging state around, where it can get accidentally polluted if not handled properly, is best avoided.

What would be more useful for real-world data processing is an "incremental batch" model, in which the processing system remembers what it has processed so far and, after comparing that against the source data, determines what to run in the next update batch.
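The "incremental batch" idea the parent describes can be sketched in a few lines. This is purely illustrative (all names are made up, and a real system would persist the state durably): the runner remembers a high-water mark and only transforms rows past it.

```python
# Illustrative sketch of an "incremental batch" runner: remember a
# high-water mark (watermark) and only transform rows newer than it.
def incremental_batch(source_rows, state, transform):
    """source_rows: iterable of (ts, record); state holds the watermark."""
    watermark = state.get("watermark", 0)
    new_rows = [(ts, rec) for ts, rec in source_rows if ts > watermark]
    if new_rows:
        state["watermark"] = max(ts for ts, _ in new_rows)
    return [transform(rec) for _, rec in new_rows]

state = {}
first = incremental_batch([(1, "a"), (2, "b")], state, str.upper)
# first == ["A", "B"]; a later run sees only the new increment:
second = incremental_batch([(1, "a"), (2, "b"), (3, "c")], state, str.upper)
# second == ["C"]
```

Wiping the state dict and re-running gives you a backfill for free, which is exactly the property the parent is asking frameworks for.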

Sadly, the industry is plagued with either pure streaming solutions, even though most data problems are not of this nature, or ETL and workflow systems that think in terms of a pure batch model. This leaves me implementing the necessary logic for incremental loads myself, while not finding these ETL frameworks very useful.

I've honestly had more luck writing scripts myself than relying on the excessively complicated ETL frameworks out there. They seem to convolute everything together, like Ruby on Rails back in the day, instead of separating concerns like a small HTTP library or web microframework.

Is there anything out there on the horizon that focuses on incremental batch processing, or, as the article points out, updating materialized views that I manage myself?

gunnarmorling|5 years ago

> there’s just no benefit in this context

I'd beg to differ; having a message broker as part of the streaming pipeline allows you to set up multiple consumers (e.g. Snowflake and Elasticsearch and something else). Also, depending on the settings, you can replay streams from the beginning.

That's why we see >90% of Debezium users running with Kafka or alternatives. There is a group of point-to-point users (e.g. for caching use cases), mostly via the Debezium engine, but it's a small minority.

corneliusphi2|5 years ago

I'd say the difference is the system's behavior in the presence of IO, and it's pretty important in my experience. Micro-batching systems hold up processing while waiting for IO, but proper streaming implementations continue using CPU for elements at other points in the stream, very roughly.

rorykoehler|5 years ago

Do you have any links you would recommend reading for designing real-time data/reporting solutions?

taeric|5 years ago

It is amusing how often we think adding work to the system will speed it up.

It is interesting when it works. :)

082349872349872|5 years ago

Sometime when I am old(er) and (somehow?) have more time, I'd like to jot down a "Rosetta Stone" of which buzzwords map to the same concepts. So often we change our vocabulary every decade without changing what we're really talking about.

> Things started out in a scholarly vein, but the rush of commerce hasn't allowed much time to think where we're going. — James Thornton, Considerations in computer design (1963)

westurner|5 years ago

Like a Linked Data thesaurus with typed, reified edges between nodes/concepts/class_instances?

Here's the WordNet RDF Linked Data for "jargon"; like the "Jargon File": http://wordnet-rdf.princeton.edu/lemma/jargon

A Semantic MediaWiki Thesaurus? https://en.wikipedia.org/wiki/Semantic_MediaWiki :

> Semantic MediaWiki (SMW) is an extension to MediaWiki that allows for annotating semantic data within wiki pages, thus turning a wiki that incorporates the extension into a semantic wiki. Data that has been encoded can be used in semantic searches, used for aggregation of pages, displayed in formats like maps, calendars and graphs, and exported to the outside world via formats like RDF and CSV.

Google Books NGram viewer has "word phrase" term occurrence data by year, from books: https://books.google.com/ngrams

mjdrogalis|5 years ago

As someone who’s spent a lot of time working on data pipelines, I think this is a great breakdown of the complexity most data engineers are facing. However, I think there are two more keys to tidying up messy pipelines in practice:

1. You need to colocate both stream processing for the data pipeline and real-time materialized view serving for the results.

2. You need one paradigm for expressing both of these things.

Let me try to describe a bit why that is.

1. You virtually always need both stream processing and view serving in practice. In the real world, you ingest data streams from across the company and generally don’t have a say about how the data arrives. Before you can do the sort of materialization the author describes, you need to rearrange things a bit.

2. Building off of (1), if these two aren’t conceptually close, it becomes hard to make the whole system hang together. You still effectively have the same mess—it’s just spread over more components.

This is something we’re working really hard on solving at Confluent. We built ksqlDB (https://ksqldb.io/), an event streaming database over Kafka that:

1. Lets you write programs that do stream processing and real-time materialized views in one place.

2. Lets you write all of it in SQL. I see a lot of people on this post longing for bash scripting, and I get it. These frameworks are way too complicated today. But to me, SQL is the ideal medium. It’s both concise and deeply expressive. Way more people are competent with SQL, too.

3. Has built-in support for connecting to external systems. One other, more mundane part of the puzzle is just integrating with other systems. ksqlDB leverages the Kafka Connect ecosystem to plug into 120+ data systems.

You can read more about how the materialization piece works in a recent blog post I wrote. https://www.confluent.io/blog/how-real-time-materialized-vie...
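To make points (1) and (2) above concrete, here is what colocating stream processing and view serving means conceptually. This is a toy Python sketch with made-up names, not ksqlDB: one object both applies events and answers lookups.

```python
from collections import defaultdict

# Toy sketch of a stream-maintained materialized view: the same object
# applies events (stream processing) and serves point lookups (view
# serving), so the two concerns live in one place.
class CountView:
    def __init__(self):
        self.counts = defaultdict(int)

    def apply(self, event_key):   # stream-processing side
        self.counts[event_key] += 1

    def lookup(self, key):        # view-serving side
        return self.counts[key]

view = CountView()
for page in ["home", "about", "home"]:
    view.apply(page)
hits = view.lookup("home")   # 2
```

When these two halves live in separate systems, you additionally have to keep the processor's state and the serving store consistent, which is the "same mess spread over more components" problem described above.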

CapriciousCptl|5 years ago

As someone who basically sticks everything possible into Postgres, this is interesting! Streaming tools don't automatically cache things you need? I guess it's about time they do! Postgres, for instance, has a robust LRU mechanism that deals with OLTP quite competently. OLAP too if your indices are thought-out.

Also, although built-in materialized views don't allow partial updates in Postgres, you can get a similar thing with normal tables and triggers. Hashrocket discussed that strategy here: https://hashrocket.com/blog/posts/materialized-view-strategi...
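The "normal tables and triggers" strategy can be sketched as follows. This uses SQLite (via Python's sqlite3) for portability rather than Postgres, so the syntax differs from what the Hashrocket post shows, and the table names are illustrative: a trigger keeps an aggregate table current on every insert, so the "view" never needs a full refresh.

```python
import sqlite3

# Trigger-maintained aggregate table: each insert into orders updates
# order_totals in place, mimicking an incrementally refreshed view.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer TEXT, amount INTEGER);
CREATE TABLE order_totals (customer TEXT PRIMARY KEY, total INTEGER);
CREATE TRIGGER orders_ins AFTER INSERT ON orders BEGIN
  INSERT OR IGNORE INTO order_totals VALUES (NEW.customer, 0);
  UPDATE order_totals SET total = total + NEW.amount
    WHERE customer = NEW.customer;
END;
""")
conn.execute("INSERT INTO orders VALUES ('alice', 10)")
conn.execute("INSERT INTO orders VALUES ('alice', 5)")
total = conn.execute(
    "SELECT total FROM order_totals WHERE customer = 'alice'"
).fetchone()[0]
# total == 15, kept current without recomputing from the base table
```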

nchammas|5 years ago

Of the traditional RDBMSs, I believe Oracle has the most comprehensive support for materialized views, including for incremental refreshes [0].

As early as 2004, developers using Oracle were figuring out how to express complex constraints declaratively (i.e. without using application code or database triggers) by enforcing them on materialized views [1].

It's quite impressive, but this level of sophistication in what materialized views can do and how they are used does not seem to have spread far beyond Oracle.

[0]: https://docs.oracle.com/database/121/DWHSG/refresh.htm#DWHSG...

[1]: https://tonyandrews.blogspot.com/2004/10/enforcing-complex-c...

snidane|5 years ago

Most problems of data engineering today would be solved by a tool in which I could define an arbitrary transformation of, say, a single daily data increment, and the system would handle the state management and loading of all of the increments, regardless of whether they came from source updates or backfills.

Data engineering really is just the maintenance of incrementally updated materialized views, but no tool out there recognizes it yet. At best they help you orchestrate and parallelize your ETLs across multiple threads and machines. They become glorified makefiles, at the cost of introducing several layers of infrastructure (e.g. Airflow) for what should have been solved by simple bash scripting.

Yet at best these tools only help with stateless batch processing. When it comes to stateful processing, which is necessary for maintaining incrementally updated materialized views and idempotent loads, I have to couple the logic of view state management (what has been loaded so far) with the logic of the actual data transformation.

The industry's usual response to the difficulties of batch ETL is: batch data processing systems are resource hungry and slow; all you need from now on is streaming.

No, actually I don't. For data analytics, pure streaming has almost no application. Data analytics is essentially compression of big data into something smaller, i.e. some form of group by. I have to wait for a window of data to close before computing anything useful. Analytics on real "real time" data over unclosed windows is confusing and useless.

So all data analytics will only ever run on groups, windows and batches of data. Therefore I need a system that helps me run data transformations on batches; more precisely, on a stream of smaller batches. I need it to react to incoming daily, hourly or minutely batches, and I need it to backfill my materialized view in case I decide to wipe it and start again.
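The "safe to re-run, safe to backfill" property comes from making each batch load idempotent. A minimal sketch (illustrative schema and names, using SQLite for portability): overwrite the partition for a given day inside one transaction, so retries and backfills never duplicate rows.

```python
import sqlite3

# Idempotent "overwrite the partition" load: deleting the day's rows
# before re-inserting makes re-runs and backfills safe.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily_view (day TEXT, value INTEGER)")

def load_day(conn, day, rows):
    with conn:  # one transaction: either the whole swap happens or none
        conn.execute("DELETE FROM daily_view WHERE day = ?", (day,))
        conn.executemany("INSERT INTO daily_view VALUES (?, ?)",
                         [(day, v) for v in rows])

load_day(conn, "2021-02-20", [1, 2, 3])
load_day(conn, "2021-02-20", [1, 2, 3])  # retry or backfill: same result
n = conn.execute("SELECT COUNT(*) FROM daily_view").fetchone()[0]
# n == 3, not 6: the second run replaced the partition
```

This is the kind of logic the parent complains about having to hand-roll in every pipeline because the frameworks don't manage it.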

You can literally do this in what was supposed to be the original system for orchestrating a bunch of programs: shell scripting. And you'll be happier for it than using the current complex frameworks. The only things you will miss are something to run distributed cron and something to distribute load to multiple machines. At least the latter can be handled by GNU parallel.

This article hits the nail on the head describing what the conceptual model for ETL actually is, and once others follow, we might finally see new frameworks, or just libraries, that greatly simplify ETL. Perhaps one day data engineering will be as simple as running an idempotent bash or python or sql script, or even close to nonexistent.

endymi0n|5 years ago

https://www.getdbt.com/ comes extremely close in my eyes and even tackles the documentation and infra-as-code aspect. We went all in half a year ago and never looked back.

tehlike|5 years ago

RavenDB gets close

smknappy|5 years ago

Great post! Just heard about this from one of our customers who slacked me with "He is describing Ascend.io!" :-)

Having spent 15+ years writing big data pipelines and building teams who do the same, I couldn't agree more... the conceptual model we're all quite comfortable with is this notion of cascading, materialized views. The challenge, however, is that they are expensive to maintain in a big data pipeline context, paid for either in system resources or developer time. The only reasonable way to achieve this is a fundamental shift away from imperative pipelines and toward declarative orchestration (a few folks mention this as well). We've seen this in other domains with technologies like React, Terraform, Kubernetes, and more, to great success.

I've written about this in tldr form @ https://www.ascend.io/data-engineering/, namely the evolution from ETL, to ELT, to (ETL)+, to Declarative. I also gave a more detailed tech talk on this topic @ https://www.youtube.com/watch?v=JcVTXC0qPwE.

For those who are interested in a longer white paper on data orchestration methodologies, namely imperative vs declarative, this is a good read: https://info.ascend.io/hubfs/Whitepapers/Whitepaper-Orchestr...

sasad|5 years ago

What are some of the limitations of dbt?

camone|5 years ago

dbt doesn't do much automation/ETL outside of the database you're working in, as other tools might be able to.

That being said, it's very powerful. I love it.

lincpa|5 years ago

[deleted]

xodast1|5 years ago

So each data pipeline is a pure function? Hmm, geez, if only we had something that was all about pure functions and how they can be used to express real-life problems.

gregw2|5 years ago

Whoever wrote this hasn't worked on medium-complicated data pipelines / ETL logic.

It's pretty non-trivial to try to make an effective-dated slowly changing dimension with materialized views.
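For readers unfamiliar with the term, an effective-dated (Type 2) slowly changing dimension keeps every historical version of a row with validity dates. A rough Python sketch of the update logic (column names are illustrative; this says nothing about whether a materialized view can express it, which is the parent's point):

```python
import datetime

# Type 2 SCD update: when an attribute changes, close the current
# version's effective range and open a new open-ended version.
def scd2_upsert(dim, key, attrs, today):
    current = next((r for r in dim
                    if r["key"] == key and r["end"] is None), None)
    if current and current["attrs"] == attrs:
        return  # nothing changed; keep the current version
    if current:
        current["end"] = today  # close out the old version
    dim.append({"key": key, "attrs": attrs, "start": today, "end": None})

dim = []
scd2_upsert(dim, 42, {"city": "Boston"}, datetime.date(2021, 1, 1))
scd2_upsert(dim, 42, {"city": "NYC"}, datetime.date(2021, 6, 1))
# dim now holds two versions of key 42: Boston, valid from 2021-01-01
# to 2021-06-01, and NYC, valid from 2021-06-01 onward.
```

The hard part in a view-based pipeline is that this update is inherently stateful: each new version depends on the previously materialized rows, not just on the source data.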

A good tool makes the medium-difficulty stuff easy, and the complicated stuff possible. Materialized views do only the former.

I would love to be wrong about this.

nchammas|5 years ago

Author here.

Are you thinking of a specific implementation of materialized views? Most implementations from traditional RDBMSs would indeed be too limiting to use as a general data pipeline building block.

The post doesn't argue that, though. It's more about using materialized views as a conceptual model for understanding data pipelines, and hinting at some recent developments that may finally make them more suitable for more widespread use.

From the conclusion:

> The ideas presented in this post are not new. But materialized views never saw widespread adoption as a primary tool for building data pipelines, likely due to their limitations and ties to relational database technologies. Perhaps with this new wave of tools like dbt and Materialize we’ll see materialized views used more heavily as a primary building block in the typical data pipeline.

atwebb|5 years ago

You can absolutely get complicated data models nailed using views. Some of the views get unwieldy and a bit long, but it can be done. The catch is in incrementally loading the aggregate tables or self-referencing (even then the underlying views are essentially functions to be included). I scanned the article and have followed materialize.io for a bit and built pipelines that handle what is essentially the awfulness of performantly updating materialized views.

I'm not a master but believe a core piece of truth for data:

Move the data as little as possible

If you can use federated query (where cost/performance is acceptable), do so. If you can use materialized views, do so. Data replication has tons of issues: you almost always have the Two Generals problem and need reconciliation / recompute procedures. If you don't move the data, the original storage is the source and it is always right.

I think I went well beyond responding directly to you, but I do think materialize.io and Delta Lake and declarative pipelines are the solution to 95% of the data problems out there.

I'm speaking conceptually about materialized views as system implementations differ.