
Building a modern durable execution engine from first principles

98 points | whoiskatrin | 11 months ago | restate.dev

26 comments


dang|11 months ago

One of the authors worked on Apache Flink but is too modest to include that interesting detail! So I'm adding it here. Hopefully he won't mind.

sewen|11 months ago

All of the Restate co-founders come from various stages of Apache Flink.

Restate is in many ways a mirror image to Flink. Both are event-streaming architectures, but otherwise make a lot of contrary design choices.

(This is not really helpful for understanding what Restate does for you, but it is an interesting tidbit about the design.)

       Flink      |      Restate
  ----------------+---------------------
    analytics     |  transactions
  coarse-grained  |  fine-grained
    snapshots     |  quorum replication
  throughput-     |  latency-
    optimized     |  sensitive
  app and Flink   |  disaggregated code
  share a process |  and framework
       Java       |      Rust

the list goes on...

jedberg|11 months ago

Hi, CEO of DBOS here (we’re a Restate competitor).

I really enjoyed this blog post. The depth of the technical explanations is appreciated. It really helps me understand the choices you’ve made and why.

We’ve obviously made some different design choices, but each solution has its place.

digdugdirk|11 months ago

DBOS was the first thing I thought of when I saw this.

From a DBOS perspective, can you explain what the differences are between the two?

randomcatuser|11 months ago

I have to say, the examples are really good: https://github.com/restatedev/examples

For someone (me) who is new to distributed systems and to what durable execution even is, I learned a lot (both from the blog post & the examples!). Thanks!

ChatGPT also helped :)

oulipo|11 months ago

Interesting, how does it compare to Inngest and DBOS?

p10jkle|11 months ago

Hey, I work on Restate. There are lots of differences throughout the architecture and the developer experience, but the one most relevant to this article is that Restate is itself a self-contained distributed stream-processing engine, which it uses to offer extremely low-latency durable execution with strong consistency across AZs/regions. Other products tend to layer on top of existing stores, and inherit both the good and the bad of those stores when it comes to throughput/latency/multi-region/consistency.

We are putting a lot of work into high throughput, low latency, distributed use cases, hence some of the decisions in this article. We felt that this necessitated a new database.

agentultra|11 months ago

Isn’t this just event sourcing? Why not re-use the terminology from there?
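The two are related but not identical: event sourcing persists domain events and rebuilds state from them, while durable execution journals the result of each side-effecting step so a retry can resume mid-handler without re-running completed steps. A minimal Python sketch of the journal/replay mechanic (names like `Context.run` are made up here, not any SDK's actual API):

```python
# Durable execution keeps a journal of completed step results; on retry,
# recorded results are replayed instead of re-executing side effects.
class Journal:
    def __init__(self):
        self.entries = []

class Context:
    def __init__(self, journal, crash_after=None):
        self.journal = journal
        self.pos = 0
        self.crash_after = crash_after  # simulate a process crash

    def run(self, step_fn):
        if self.pos < len(self.journal.entries):
            result = self.journal.entries[self.pos]  # replay recorded result
        else:
            result = step_fn()                       # execute once, then journal
            self.journal.entries.append(result)
        self.pos += 1
        if self.crash_after is not None and self.pos >= self.crash_after:
            raise RuntimeError("simulated crash")
        return result

def handler(ctx):
    a = ctx.run(lambda: 21)        # imagine: a call to an external API
    return ctx.run(lambda: a * 2)  # a second journaled step

journal = Journal()
try:
    handler(Context(journal, crash_after=1))  # crashes after the first step
except RuntimeError:
    pass
assert journal.entries == [21]            # the first result survived the crash
assert handler(Context(journal)) == 42    # retry replays step 1, runs step 2
assert journal.entries == [21, 42]
```

The replay loop is the part that goes beyond classic event sourcing: the journal records step *results*, not just events, so external calls are not repeated on recovery.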

solatic|11 months ago

Can you elaborate more on your persistence layer?

One of the good reasons why most products will layer on top of an established database like Postgres is because concerns like ACID are solved problems in those databases, and the database itself is well battle-hardened, Jepsen-tested with a reputation for reliability, etc. One of the reasons why many new database startups fail is precisely because it is so difficult to get over that hump with potential customers - you can't really improve reliability until you run into production bugs, and you can't sell because it's not reliable. It's a tough chicken-and-egg problem.

I appreciate you have reasons to build your own persistence layer here (like a push-based model), but doesn't doing so expose you to the same kind of risk as a new database startup? Particularly when we're talking about a database for durable execution, for which, you know, durability is a hard requirement?

sewen|11 months ago

Indeed, the persistence layer is sensitive, and we take this pretty seriously.

All data is persisted via RocksDB. Not only the materialized state of invocations and journals, but even the log itself uses RocksDB as the storage layer for its sequence of events. We do that to benefit from the insane testing and hardening that Meta has done (they run millions of instances). We are currently even trying to understand which operations and code paths Meta uses most, and to adapt our code to use those, to get the best-tested paths possible.
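As one illustration of how a log can sit on top of an ordered KV store like RocksDB (this is the general technique, not Restate's actual key schema): encode (log id, sequence number) as fixed-width big-endian keys, so the store's lexicographic byte ordering matches append order and a range scan reads the log back in sequence.

```python
import struct

def log_key(log_id: int, seq: int) -> bytes:
    # Fixed-width big-endian encoding: lexicographic byte order of the
    # keys equals numeric (log_id, seq) order, which is exactly the
    # sort order an LSM store like RocksDB maintains.
    return struct.pack(">QQ", log_id, seq)

# Stand-in for an ordered KV store, using a dict plus sorted iteration.
store = {}
for seq, event in enumerate([b"ev-a", b"ev-b", b"ev-c"]):
    store[log_key(7, seq)] = event

# Range-scan log 7 in order, as a storage-engine iterator would.
events = [v for k, v in sorted(store.items())
          if log_key(7, 0) <= k < log_key(8, 0)]
assert events == [b"ev-a", b"ev-b", b"ev-c"]
```

With this layout, appending is a single put at the next sequence number, and trimming the log is a range delete, both cheap operations in an LSM engine.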

The more sensitive part would be the consensus log, which only comes into play if you run distributed deployments. In a way, that puts us in a similar boat to companies like Neon: having a reliable single-node storage engine, but having to build the replication and failover around it. But that is also the value-add over most databases.

We do actually use Jepsen internally for a lot of testing.

(Side note: Jepsen might be one of the most valuable things that this industry has - the value it adds cannot be overstated)

xwowsersx|11 months ago

Looks very interesting. How does it compare to Temporal?

sewen|11 months ago

There are a few dimensions where this is different.

(1) The design is a fully self-contained stack, event-driven, with its own replicated log and embedded storage engine.

That lets it ship as a single binary that you can use without dependencies (on your laptop or in the cloud). It is really easy to run.

It also scales out by starting more nodes. Every layer scales hand in hand, from log to processors. (When running distributed, you should give it an object store to offload data.)

The goal is a really simple and lightweight way to run yourself, while incrementally scaling to very large setups when necessary. I think that is non-trivial to do with most other systems.

(2) Restate pushes events, whereas Temporal pulls activities. This is to some extent a matter of taste, though the push model works very naturally with serverless functions (Lambda, CF Workers, fly.io, ...).
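The push/pull difference can be caricatured in a few lines of Python (a toy sketch, not either system's real protocol): in a pull model a long-running worker must be polling a queue to receive work, while in a push model the engine invokes the handler directly, which is why stateless, spin-up-on-request serverless functions fit it naturally.

```python
import queue

# Pull model: a worker long-polls a task queue owned by the engine.
# Work only happens while a worker process is up and polling.
tasks = queue.Queue()
tasks.put({"activity": "charge_card", "input": 100})

def pull_worker():
    task = tasks.get(timeout=1)
    return f"ran {task['activity']}"

# Push model: the engine calls the handler with the request directly,
# so the handler can be a stateless function invoked per event.
def push_handler(request):
    return f"ran {request['activity']}"

assert pull_worker() == "ran charge_card"
assert push_handler({"activity": "charge_card"}) == "ran charge_card"
```

In the pull model, latency includes the polling round-trip and requires warm workers; in the push model, the engine controls when and where the handler runs.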

(3) Restate models services and stateful functions, not workflows. This means you can model logic that keeps state for longer than the scope of a workflow (you get a K/V store transactionally integrated with durable execution). It also supports RPC and messaging between functions (exactly-once, integrated with the durable execution).
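A rough sketch of what "K/V state transactionally integrated with durable execution" means (illustrative only; the class and method names here are invented, not Restate's API): per-key state outlives any single invocation, and the engine commits the state change together with the invocation's journal, so a crash can never leave one without the other.

```python
# Toy keyed-state context: mutations are staged during the handler and
# committed atomically at the end (in a real engine, in the same
# transaction as the journal entries, giving exactly-once updates).
class KeyedContext:
    def __init__(self, store, key):
        self.store = store
        self.key = key
        self.staged = dict(store.get(key, {}))  # snapshot of this key's state

    def get(self, name, default=None):
        return self.staged.get(name, default)

    def set(self, name, value):
        self.staged[name] = value

    def commit(self):
        self.store[self.key] = self.staged

def add_to_cart(ctx, item):
    items = ctx.get("items", [])
    ctx.set("items", items + [item])

store = {}
ctx = KeyedContext(store, "user-17")
add_to_cart(ctx, "book")
ctx.commit()
# A later invocation for the same key sees the state left by the first.
ctx2 = KeyedContext(store, "user-17")
add_to_cart(ctx2, "pen")
ctx2.commit()
assert store["user-17"]["items"] == ["book", "pen"]
```

The point of the staging/commit split is that an invocation which crashes before commit leaves no partial state behind, matching the all-or-nothing semantics of the journaled execution.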

(4) The event-driven runtime, together with the push model, gets fairly good latencies (low overhead of durable execution).