
dm3 | 7 months ago

Looks like we're in a similar situation. What is your current go-to for setting up lean incremental data pipelines?

For me the core of the solution - parquet in an object store at rest and Arrow for IPC - hasn't changed in years, but I'm tired of re-building the whole metadata layer and job dependency graph at every new place. Of course the building blocks get smarter with time (SlateDB, DuckDB, etc.), but it's all so tiresome.
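
For concreteness, the at-rest/query core described above looks roughly like this in Python: pyarrow writing parquet to an S3-compatible store, and DuckDB querying it in place. A minimal sketch; the bucket, paths, and MinIO endpoint are all made up for illustration.

```python
import duckdb
import pyarrow as pa
import pyarrow.parquet as pq
from pyarrow import fs

# Write a partition as parquet to S3-compatible object storage.
# Credentials come from the usual AWS_* environment variables.
s3 = fs.S3FileSystem(endpoint_override="http://localhost:9000")  # e.g. a local MinIO
table = pa.table({"id": [1, 2, 3], "value": [0.1, 0.2, 0.3]})
pq.write_table(table, "my-bucket/events/date=2024-01-01/part-0.parquet", filesystem=s3)

# Query it in place with DuckDB's httpfs extension.
con = duckdb.connect()
con.execute("INSTALL httpfs")
con.execute("LOAD httpfs")
con.execute("SET s3_endpoint='localhost:9000'")
con.execute("SET s3_use_ssl=false")
con.execute("SET s3_url_style='path'")
print(con.execute(
    "SELECT count(*) FROM read_parquet('s3://my-bucket/events/**/*.parquet')"
).fetchall())
```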


benreesman | 7 months ago

Yeah, the last time I had to do this was about a year ago: parquet and Arrow on S3-compatible object stores, a bunch of metadata in postgres, the whole thing. At the time we used Prefect for orchestration, which was fine but IMHO not worth what it cost. I've also used Flyte seriously and dabbled with other things; nothing I can get really excited about recommending, it's all sort of fine but kinda meh. I used to work for a megacorp with extremely serious tooling around this, and everything I've tried in open source makes me miss it.
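
That setup, as a rough sketch: a Prefect flow that writes a parquet partition and records run metadata in postgres. The bucket path, DSN, and `pipeline_runs` table are hypothetical, not anything from my actual stack.

```python
import psycopg2
import pyarrow as pa
import pyarrow.parquet as pq
from prefect import flow, task

@task
def write_partition(path: str) -> int:
    table = pa.table({"id": [1, 2, 3]})
    pq.write_table(table, path)  # an s3:// URI works via pyarrow's filesystem support
    return table.num_rows

@task
def record_metadata(path: str, rows: int) -> None:
    # Assumes a pre-created pipeline_runs(path text, row_count int) table.
    with psycopg2.connect("dbname=pipelines") as conn:
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO pipeline_runs (path, row_count) VALUES (%s, %s)",
                (path, rows),
            )

@flow
def ingest(path: str = "s3://my-bucket/events/part-0.parquet"):
    rows = write_partition(path)
    record_metadata(path, rows)

if __name__ == "__main__":
    ingest()
```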

On the front end I've always had reasonable outcomes with `wandb` for tracking runs once you kind of get it all set up nicely, but there's a long tail of configuration and a bunch of glue code to write.
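
The happy-path `wandb` usage is genuinely just a few lines; it's the project conventions and config plumbing around it that become the long tail. A minimal sketch, with made-up project and metric names:

```python
import wandb

# One run: init with a project and config, log metrics, finish.
run = wandb.init(project="lean-pipelines", config={"model": "ridge", "alpha": 0.1})
for epoch in range(3):
    wandb.log({"epoch": epoch, "val_loss": 1.0 / (epoch + 1)})
run.finish()
```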

In this situation I'm dealing with a pretty medium amount of data and very modest model training needs (closer to `sklearn` than some mega-CUDA thing). It feels like I should be able to give someone the company card and just get one of those products with 7 programming languages at the top of the monospace text box for "here's how to log a row": we do Smart Things, now you have this awesome web dashboard, and you can give your quants this `curl foo | sh` snippet and their VSCode Jupyter will be awesome.

ZeroCool2u | 7 months ago

Just reading this as well, and I neglected to mention that the Domino thing we use has Flyte built in (they call it Flows, but it's the same thing), along with MLflow.
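
For anyone who hasn't seen it, a Flyte workflow in `flytekit` looks roughly like this - a minimal sketch with illustrative names, not anything Domino-specific:

```python
from flytekit import task, workflow

# Tasks are typed Python functions; Flyte builds the dependency graph
# from how their outputs feed each other inside the workflow.
@task
def featurize(n: int) -> float:
    return n * 0.5

@task
def train(x: float) -> float:
    return x + 1.0

@workflow
def pipeline(n: int = 10) -> float:
    return train(x=featurize(n=n))
```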