zenlikethat | 10 months ago
One thing we do have is support for “expectations” — model-like Python steps that check data quality and can flag violations when the pipeline breaks them.
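An expectation step along these lines might look roughly as follows. This is a minimal sketch in plain Python: the function names, the table-as-list-of-dicts representation, and `run_expectations` are all illustrative, not the actual API.

```python
# Sketch of "expectation" steps: Python functions that receive a
# materialized table and report whether it meets a data-quality contract.
# All names here are hypothetical, not a real framework's API.

def expect_no_nulls(table, column):
    """Pass only if no row has a null in the given column."""
    return all(row.get(column) is not None for row in table)

def expect_in_range(table, column, lo, hi):
    """Pass only if every value falls within [lo, hi]."""
    return all(lo <= row[column] <= hi for row in table)

def run_expectations(table, expectations):
    """Evaluate each named expectation; return the names that failed."""
    return [name for name, check in expectations.items() if not check(table)]

orders = [
    {"order_id": 1, "amount": 40.0},
    {"order_id": 2, "amount": 120.0},
]
failures = run_expectations(orders, {
    "no_null_ids": lambda t: expect_no_nulls(t, "order_id"),
    "amount_in_range": lambda t: expect_in_range(t, "amount", 0, 10_000),
})
print(failures)  # an empty list means the table passed
```

The pipeline can then treat a non-empty failure list as a signal to halt or flag the run.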
tech_ken | 10 months ago
I think this is kind of the answer I was looking for, and in other systems I've actually manually implemented things like this with a "temp materialize" operator that's enabled by a "debug_run=True" flag. With the NB thing I'm basically trying to "step inside" the data pipeline, like how an IDE debugger might run a script line by line until an error hits and then drop you into a REPL located 'within' the code state.

In the notebook I'll typically try to replicate (as closely as possible) the state of the data inside some intermediate step, and will then manually mutate the pipeline between the original and branch versions to determine how the pipeline changes relate to the data changes. The dream for me would be to have something that can say "the delta on line N is responsible for X% of the variance in the output", although I recognize that's probably not a well-defined calculation in many cases. Either way, at a high level my goal is to understand why my data changes, so I can be confident that those changes are legit and not an artifact of some error in the pipeline.
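The "temp materialize" pattern described above can be sketched in a few lines of Python. Everything here is hypothetical: the `step` decorator, the `debug_run` flag, and the module-level snapshot dict are illustrative, not any particular framework's design.

```python
import functools

# Snapshots of intermediate outputs, for post-hoc inspection in a notebook/REPL.
MATERIALIZED = {}

def step(func):
    """Wrap a pipeline step so its output is captured when debug_run=True."""
    @functools.wraps(func)
    def wrapper(data, *, debug_run=False, **kwargs):
        out = func(data, **kwargs)
        if debug_run:
            # "Temp materialize": snapshot the intermediate state so you can
            # step inside the pipeline, much like a debugger dropping into a REPL.
            MATERIALIZED[func.__name__] = out
        return out
    return wrapper

@step
def filter_orders(rows, min_amount=50):
    return [r for r in rows if r["amount"] >= min_amount]

@step
def total(rows):
    return sum(r["amount"] for r in rows)

orders = [{"amount": 30}, {"amount": 70}, {"amount": 120}]
result = total(filter_orders(orders, debug_run=True), debug_run=True)
print(result)                         # final output
print(MATERIALIZED["filter_orders"])  # the intermediate, filtered rows
```

After a debug run you can poke at `MATERIALIZED` interactively and compare intermediates between the original and branched pipeline versions.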
Asserting that a set of expectations is met at multiple pipeline stages also gets pretty close, although I do think it's not entirely the same. Seems loosely analogous to the difference between unit and integration/E2E tests. Obviously I'm not going to land something with failing unit tests, but even when tests pass, the delta may include subtler (logical) changes which violate the assumptions of my users or integrated systems (ex. that their understanding of the business logic is aligned with what was actually implemented in the pipeline).
jtagliabuetooso | 10 months ago
You can automate many of these changes/tests by materializing the parent(s) of the target table, and using the SDK to produce variations of a pipeline programmatically. If your pipeline has a free parameter (say top-k=5 for some algos), you could just write a Python for loop and do something like:

    client.create_branch()
    client.run()

for each variation, materializing k versions at the end that you can inspect (client.query("SELECT MAX ...")).
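The loop above might look roughly like this. `StubClient` is a stand-in so the sketch runs on its own: `create_branch`, `run`, and `query` are mentioned in the comment, but their signatures and behavior here are invented for illustration, not the real SDK.

```python
# Sketch of a parameter sweep: one branch + run per variation of a free
# parameter, then inspect each materialized result. StubClient is a toy
# stand-in for the real SDK client; its method signatures are hypothetical.

class StubClient:
    def __init__(self):
        self.results = {}

    def create_branch(self, name):
        # The real client would fork the lake zero-copy; here we just name it.
        return name

    def run(self, pipeline, branch, **params):
        # Stand-in for executing the pipeline on the branch with these params.
        self.results[branch] = pipeline(**params)

    def query(self, sql, branch):
        # Stand-in for inspecting a branch, e.g. client.query("SELECT MAX ...").
        return max(self.results[branch])

def my_pipeline(top_k=5):
    # Toy pipeline: score some items and keep the top_k scores.
    scores = [3, 1, 4, 1, 5, 9, 2, 6]
    return sorted(scores, reverse=True)[:top_k]

client = StubClient()
for k in (3, 5, 8):
    branch = client.create_branch(f"sweep-top-{k}")
    client.run(my_pipeline, branch, top_k=k)
    print(branch, client.query("SELECT MAX ...", branch))
```

Each iteration materializes one version of the output on its own branch, so all k variations remain inspectable side by side after the sweep.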
The broader concept is that every operation in the lake is immutably stored with an ID, so every run can be replicated with the exact same data sources and the exact same code (even if it was never committed to GitHub). That also means you can run the same code while varying the data source, or run different code on the same data: all zero-copy, all in production.
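The replay property can be illustrated with a toy content-addressed store. This is a sketch of the concept only, not the actual storage layer: the run ID pins the exact code and the exact input data, and either one can be swapped at replay time.

```python
import hashlib
import json

# Toy sketch of "every run is immutably stored with an ID": a run record
# pins the exact code and the exact input data, so replaying the ID
# re-executes the same code on the same sources. Not the real storage layer.

STORE = {}  # run_id -> (source_code, input_data)

def record_run(source_code, input_data):
    payload = json.dumps({"code": source_code, "data": input_data}, sort_keys=True)
    run_id = hashlib.sha256(payload.encode()).hexdigest()[:12]
    STORE[run_id] = (source_code, input_data)
    return run_id

def replay(run_id, code=None, data=None):
    """Replay a run; optionally swap in different code or different data."""
    stored_code, stored_data = STORE[run_id]
    env = {}
    exec(code or stored_code, env)                 # same code unless overridden
    return env["transform"](data or stored_data)   # same data unless overridden

run_id = record_run("def transform(rows): return [r * 2 for r in rows]", [1, 2, 3])
print(replay(run_id))             # exact same code on the exact same data
print(replay(run_id, data=[10]))  # same code, different data source
```

Because the ID is a hash of code plus data, the same pair always maps to the same record, which is what makes runs reproducible without requiring the code to live in version control.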
As for the semantics of merges and other conflicts, we will be publishing some new research by the end of summer: look out for a new blog post and paper if you like this space!