item 28853179
tgdn | 4 years ago

Have you looked into Spark? There are managed Spark offerings on AWS/GCP (for example Databricks). Spark lets you do exactly what you're describing.

Define the minimum/maximum number of nodes and the machine capacity (RAM/CPU), and let Spark handle the scaling for you.
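To make that concrete, here's a rough sketch of what that looks like with Spark's dynamic allocation via spark-submit. The executor counts, memory/core sizes, and my_job.py are illustrative placeholders, not a recommendation; the managed platforms (Databricks etc.) expose the same min/max knobs through their cluster UI instead.

```shell
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
  --executor-memory 8g \
  --executor-cores 4 \
  my_job.py
```

Spark then adds executors when tasks queue up and releases them when they sit idle, within the min/max bounds you set. (The shuffleTracking flag is one way to satisfy dynamic allocation's shuffle requirement on Spark 3.x; on older versions you'd run the external shuffle service instead.)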

It gives you a Jupyter-like runtime for working on possibly massive datasets. That said, Spark is perhaps too much for what you're looking for. Kubernetes with Airflow/dbt could also be an option, for example for ETL/ELT pipelines.

ekns | 4 years ago

Ideally I'd like to extend at least the illusion of an ad hoc PC/workstation into the cloud. To me that seems like less effort, until I reach some ridiculous scale that requires more engineering and setup anyway.