These code examples aren't fully documented yet (which is why we've not linked them in the documentation), but you can take a look at a (more-real) implementation of Paxos here: https://github.com/hydro-project/hydro/blob/main/hydro_test/.... We're also working on building more complex applications like a key-value store.
If there is an intermediate language in the middle with its own runtime, does that mean we lose everything Rust brings?
I thought this would introduce a language for choreographing separate Rust binaries into a consistent and functional distributed system, but it looks more like you're writing DFIR the whole way through, not just as glue.
Hi, I'm one of the PhD students leading the work on Hydro!
DFIR is more of a middle-layer DSL that allows us (the high-level language developers) to re-structure your Rust code to make it more amenable to low-level optimizations like vectorization. Because DFIR operators (like map, filter, etc.) take in Rust closures, we can pass those through all the way from the high-level language to the final Rust binaries. So as a user, you never interact with DFIR.
This is really exciting. Is anyone familiar with this space able to point to prior art? Have people built similar frameworks in other languages?
I know various people have worked on dataflow. I remember thinking Materialize was very cool, and I've used Kafka Streams at work before, and I remember thinking that a framework probably made sense for stitching this all together.
From first glance it looks conceptually pretty similar to some work in the data-science space, I'm thinking of spark (which they mention in their docs) and dask.
My knee-jerk excitement is that this has the potential to be pretty powerful, specifically because it's based on Rust and so can play really nicely with other languages. Spark runs on the JVM, which is a good choice for portability but still introduces a bunch of complexity, and Dask runs in Python, which is a fairly hefty dependency you'd almost never bring in unless you're already on Python.
In terms of distributed Rust, I've also had a look at Lunatic before, which seems good but probably a bit lower-level than what Hydro is going for (although I haven't really done anything beyond basic noodling around with it).
- Describe a dataflow graph just like Timely
- Comes from a more "semantic dataflow" kind of heritage (frp, composition, flow-of-flows, algebraic operators, proof-oriented) as opposed to the more operationally minded background of Timely
- Has a (very) different notion of "progress" than Timely, focused instead on ensuring the compositions are generative in light of potentially unbounded streaming inputs
- In fact, Flo doesn't really have any notion of "timeliness", no timestamping at all
- Supports nested looping like Timely, though via a very different mechanism. The basic algebra is extremely non-cyclic, but the nested streams/graphs formalism allows for iteration.
The paper also makes a direct comparison with DBSP, which as I understand it, is also part of the Timely/Naiad heritage. Similar to Timely, the authors suggest that Flo could be a unifying semantic framework for several other similar systems (Flink, LVars, DBSP).
So I'd say that the authors of Flo are aware of Naiad/Timely and took inspiration from its nested iterative graphs, but little else.
Their latest paper [0] refers to Naiad (Timely Dataflow) a few times, e.g.:
"Inspired by ingress/egress nodes in Naiad [34], nested streams can be processed by nested dataflow graphs, which iteratively process chunks of data sourced from a larger stream with support for carrying state across iterations."
So each "process" is deployed as a separate binary, and presumably runs as a separate OS process?
If so, this seems somewhat problematic in terms of increased overhead.
How is fast communication achieved?
Some fast shared memory IPC mechanism?
Also, I don't see anything about integration with async?
For better or worse, the overwhelming majority of code dealing with networking has migrated to async. You won't find good non-async libraries for many things that need networking.
By “distributed” I assumed it meant “distributed,” as in on entirely separate machines, thus necessitating each component running as an independent process.
Currently, Hydro is focused on networked applications, where most parallelism is across machines rather than within them. So there is some extra overhead if you want single-machine parallelism. It's something we definitely want to address in the future, via shared memory as you mentioned.
At POPL 2025 (last week!), an undergraduate working on Hydro presented a compiler that automatically compiles blocks of async-await code into Hydro dataflow. You can check out that (WIP, undocumented) compiler here: https://github.com/hydro-project/HydraulicLift
Looks really cool, and I can see a few ways to use it, especially the deploy part, which seems unique. Looking forward to more fleshed-out documentation, especially the seemingly crucial Streams, Singletons, and Optionals part.
One of the Hydro creators here. Ballista (and the ecosystem around Arrow and Parquet) is much more focused on analytical query processing, whereas Hydro brings concepts from the query-processing world to the implementation of distributed systems. Our goal isn't to execute a SQL query, but rather to treat your distributed systems code (e.g. a microservice implementation) as if it were a SQL query. Integration with Arrow and Parquet is definitely planned on our roadmap, though!
Not sure what problem this is solving. For real applications one would need something like ray.io for Rust. Academia people: let's make another dataflow framework.
conor-23|1 year ago
https://www.youtube.com/watch?v=YpMKUQKlak0&ab_channel=ACMSI...
sitkack|1 year ago
https://rise.cs.berkeley.edu/projects/
Most data processing and distributed systems have some sort of link back to the research this lab has done.
stefanka|1 year ago
[0] https://github.com/TimelyDataflow/timely-dataflow
leicmi|1 year ago
[0] https://hydro.run/papers/flo.pdf
GardenLetter27|1 year ago
The latter benefits a lot from building on top of Apache Arrow and Apache Datafusion.