item 40709181

Making Serverless Orchestration 25x Faster

44 points | secondrow | 1 year ago | dbos.dev

31 comments


plandis|1 year ago

> You simply write your workflow as a TypeScript function which calls your steps, implemented in other TypeScript functions. The framework automatically instruments each step to record its output in the database after executing.

In my mind, the key value proposition of orchestration is that it is a solution for business processes whose various parts have different owners. The fact that you need to tightly couple the logic for all your various states in DBOS sounds like it's solving, at best, part of the problem, in my opinion.
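The quoted model (the framework records each step's output after it executes) can be sketched roughly like this. All names here (`runWorkflow`, `StepStore`) are hypothetical illustrations, not the DBOS API, and a real implementation would checkpoint to a database rather than an in-memory map:

```typescript
type StepStore = Map<number, unknown>; // step index -> recorded output

// Run steps in order; if a step's output was already recorded by a
// previous (interrupted) run, reuse it instead of re-executing the step.
async function runWorkflow(
  steps: Array<() => Promise<unknown>>,
  store: StepStore,
): Promise<unknown[]> {
  const results: unknown[] = [];
  for (let i = 0; i < steps.length; i++) {
    if (store.has(i)) {
      results.push(store.get(i)); // recovery: skip the completed step
    } else {
      const out = await steps[i]();
      store.set(i, out); // checkpoint the output before moving on
      results.push(out);
    }
  }
  return results;
}
```

Re-running the workflow with the same store then resumes from the first unrecorded step, which is the "automatically resume from where they left off" behavior the article describes.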

alanyilunli|1 year ago

Pardon my naivety, but why was TypeScript chosen as the interface for writing transactional workflows? I can't help but think that a backend language more popular for those use cases would be more relevant.

Maybe the idea is that those who are using those other languages may have other workarounds already?

Zambyte|1 year ago

The overlap for the target audience of TypeScript and "serverless" systems seems much larger than the overlap of the target audience of "backend languages" and "serverless".

bnchrch|1 year ago

I think the naivety lies in underestimating how many backend systems are written in TypeScript, and how much of the developer population knows / uses TS.

In either sense, it's a lot.

(Aside: The backend has plenty of bad languages. For example Python is considered a backend language but is considerably worse than most languages in speed, ergonomics, transitive dependencies, and so on... )

mind-blight|1 year ago

This seems really cool. I've just been running into scenarios where this kind of durable execution is really helpful. I've been doing basic things with RabbitMQ plus a job server, but there are definitely limitations.

I'd love to hear from folks who have experience using things like Beam or Spark. One of the biggest pain points I've encountered is that there are dozens of "mature" products to solve this problem that all differ slightly in their setup and tradeoffs.

robertlagrant|1 year ago

I always thought Temporal[0] would be a brilliant choice for that sort of durable processing.

[0] https://temporal.io

chipdart|1 year ago

> This seems really cool. I've just been running into scenarios where this kind of durable execution is really helpful.

You might find Azure's Durable Functions right up your alley. With Durable Functions you can break workflows into activities which are invoked from orchestrator functions or other activities like regular functions, but the runtime handles the orchestration and state machine updates.

jahewson|1 year ago

These docs don’t fill me with confidence. That’s… weird given who’s behind the project. What gives?

> Workflows provide the following reliability guarantees:

> 1. They always run to completion.

You can’t guarantee that.

> 2. […] Regardless of what failures occur during a workflow's execution, it executes each of its transactions once and only once.

“Executed” is the wrong word here, if the database goes down half way through a transaction it’s neither executed zero times nor once.

> 3. Communicators execute at least once but are never re-executed after they successfully complete. If a failure occurs inside a communicator, the communicator may be retried

That’s not what at-least-once means? In 2. “execute” means “run to completion” but here the logic only works if it means “try”.

> Workflows must be deterministic: if called multiple times with the same inputs, they should always do the same thing.

Deterministic is the wrong word here, the correct word is idempotent. e.g. A simple counting function is deterministic but not idempotent.
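The counting example above can be made concrete (a hypothetical illustration, not code from the docs): `increment` is deterministic but not idempotent, while `setToTen` is idempotent.

```typescript
let count = 0;

// Deterministic: the same starting state always yields the same result.
// Not idempotent: applying it twice differs from applying it once.
function increment(): number {
  return ++count;
}

// Idempotent: applying it a second time leaves the state unchanged.
function setToTen(): number {
  count = 10;
  return count;
}
```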

KraftyOne|1 year ago

Author here--thanks for the feedback! We'll update the docs to clarify the first three points assume that the database and application always restart and return online if they go offline.

For the last point, we'll clarify we mean that the code of the workflow function must be deterministic. For example, a workflow shouldn't make an HTTP GET request and use its result to determine what to do next, even though that's technically idempotent. Instead, it should make the HTTP request in a communicator (https://docs.dbos.dev/tutorials/communicator-tutorial) so its output can be saved and reused during recovery if necessary.
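The point about wrapping nondeterministic calls can be sketched as follows. The names here (`recorded`, `Journal`) are hypothetical, not the DBOS communicator API: the idea is that the external call's output is journaled once and replayed during recovery, so the workflow's branching stays deterministic even if the live result would now differ.

```typescript
type Journal = Map<string, unknown>;

// Run a nondeterministic action through a journal: replay the recorded
// output during recovery instead of calling the outside world again.
async function recorded<T>(
  journal: Journal,
  key: string,
  action: () => Promise<T>,
): Promise<T> {
  if (journal.has(key)) return journal.get(key) as T; // replay on recovery
  const result = await action();
  journal.set(key, result); // persist before the workflow acts on it
  return result;
}

async function workflow(journal: Journal, fetchStatus: () => Promise<string>) {
  // The branch depends only on the journaled value, so a re-run after a
  // crash takes the same path even if a live request would answer differently.
  const status = await recorded(journal, "status-check", fetchStatus);
  return status === "ok" ? "proceed" : "abort";
}
```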

FpUser|1 year ago

>"Increasingly, developers are using reliable workflows to help build applications. Reliable workflows are programs that always run to completion–if they’re interrupted, they automatically resume from where they left off."

This "increasingly" has been in wide use for ages. Personally I was doing it in the 90s

ldjkfkdsjnv|1 year ago

Temporal is the final boss when it comes to orchestration technology

jpgvm|1 year ago

Maybe; that remains to be seen. I do, however, have high hopes for durable execution as a model, even if Temporal doesn't end up being the eventual winner.

FridgeSeal|1 year ago

> You simply write your workflow as a TypeScript function

Infinitely slower if you don't use TypeScript though lol.

hxboo|1 year ago

Is it comparing apples to apples? DBOS looks more like Spring than AWS Lambda imo.

secondrow|1 year ago

It depends what part of DBOS you're looking at. DBOS Transact is the framework (TypeScript) used to develop apps/workflows such as those in the benchmark.

DBOS Cloud hosts and executes DBOS Transact apps/workflows, à la AWS Lambda + Step Functions. So it is apples to apples. Functionally, DBOS Cloud is like Lambda and Step Functions in one.

localfirst|1 year ago

Is it just me, or am I seeing less and less serverless showing up in roles? Seems like there was a big rush during the hype around 2021, and people went back to EC2/Kubernetes.

jpgvm|1 year ago

That is because that is exactly what happened. It was tried, people went too far and tried to build entire applications in FaaS, and it was largely unsuccessful. Cue a bunch of migrations onto k8s to contain costs and get back control over process lifetime, better integration with existing monitoring/tracing, etc, etc.

I am probably the farthest from a fan of serverless, but I have developed some appreciation for all the tech that went into it and have found some good use cases for serverless and serverless-like things.

The one I am most bullish on is serverless at the edge. Edge compute is too expensive when provisioned the traditional way (as static reserved memory + CPU etc) and the kind of tasks you want to do at the edge (request manipulation, early AuthZ, etc) are amenable to serverless requirements/limitations. Cloudflare Workers is what I am primarily familiar with but I imagine Lambda@Edge and Fastly's solution are similar.

Is serverless dead? No. But the hype around building whole apps on Lambda and that actually being good is.

orthecreedence|1 year ago

I think people realized that they had ultimately reinvented PHP and sometimes a stateful app server is just fine for your 100 req/day app.