haddr|1 year ago
I think this is the case: when you run your pipelines at scale, you want to standardize and simplify the repeatable aspects to lower the cost of managing them. You may also want to stay orthogonal to orchestrator (or triggering) engines and avoid becoming too opinionated and inflexible down the road. So this framework is exploring a sweet spot between raw Spark pipelines and low-code ETL engines.
steveBK123|1 year ago