Inngest is very cool, but if you are interested in the topic and absolutely want to self-host, two other high-performance, open-source, mature durable execution engines that enable all of the above are Temporal and Windmill. Both can use PostgreSQL as the queue (Temporal can use Cassandra too); the rest can be done by leveraging the transactional properties of PostgreSQL, such as atomic counters for the concurrency keys (https://github.com/windmill-labs/windmill/blob/main/backend/...) or re-queuing jobs that haven't progressed when they should have (most likely because the worker crashed).
There are very cool things you can do today without too much complexity using the primitives that modern databases offer. Should one rebuild it for their startup? No. But if you were to extract the very core of the durable execution engine of Windmill, for instance, it would actually be surprisingly reasonable, given that PostgreSQL does the heavy lifting. I strongly believe the benefits of our platforms mostly come from being standardized, opinionated, and working out of the box in a way that makes everything fit together, rather than from the overall engineering complexity.
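To illustrate the re-queuing trick with a rough sketch (hypothetical schema, and sqlite3 standing in for PostgreSQL just to keep it self-contained and runnable): jobs carry a worker heartbeat, and a single atomic UPDATE flips any stalled run back onto the queue.

```python
import sqlite3
import time

# Hypothetical schema: a single jobs table is the whole queue.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE jobs (
        id        INTEGER PRIMARY KEY,
        status    TEXT NOT NULL DEFAULT 'queued',  -- queued | running | done
        last_ping REAL                             -- worker heartbeat (epoch seconds)
    )
""")
# A worker claimed this job, heartbeated once, then crashed 10 minutes ago.
db.execute("INSERT INTO jobs (status, last_ping) VALUES ('running', ?)",
           (time.time() - 600,))
db.commit()

TIMEOUT = 300  # seconds without a heartbeat before we assume the worker died

# The recovery pass: one atomic UPDATE re-queues every stalled job so another
# worker can claim it. In PostgreSQL the claim itself would typically use
# SELECT ... FOR UPDATE SKIP LOCKED; the recovery pass looks the same.
with db:
    db.execute(
        "UPDATE jobs SET status = 'queued', last_ping = NULL "
        "WHERE status = 'running' AND last_ping < ?",
        (time.time() - TIMEOUT,),
    )

print(db.execute("SELECT status FROM jobs").fetchone()[0])  # -> queued
```

Because the UPDATE is a single transaction, two recovery passes racing each other can't double-dispatch the same job.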
Having built my own queue-based data processing system and worked through a lot of pain in the past, I will say that I'm a huge fan of Inngest and what they are doing.
As for the article, I think the main point being driven home here is:
"building a system with queues requires much more than just the queue itself"
I would imagine almost anyone who has built a production grade queue-based message processing system would agree.
For the majority of software being developed, I would say the investment to build all of that yourself just doesn't make sense. Obviously there will be exceptions, but Inngest gives you incredible power at a very simple layer of abstraction.
We keep reaching for them because they work fine most of the time. This article would have been less click-bait, and possibly more persuasive, if instead of "QuEuEs R oVeR" it had simply explored some advanced scenarios that are tricky to solve and shown how your product makes them worth another line item on our monthly bill.
But you also spend a lot of cycles building and maintaining the ancillary features that make queues powerful. Early- to mid-stage companies especially need to focus more on business logic and less on reinventing wheels.
> They offer reliability through guaranteed delivery, persistence, and dead letter queues, so developers know they aren't sending workloads into a black hole.
I disagree with this reason to use queues. If this is the only reason for using SQS or RabbitMQ or similar, perhaps the application is over-engineered.
If you want reliability, and that alone, use a transaction-based system.
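To sketch what "transaction-based" can mean here (hypothetical schema, sqlite3 standing in for a real RDBMS): write the business row and its follow-up work item in one transaction, so either both exist or neither does — no broker needed for the delivery guarantee.

```python
import sqlite3

# Hypothetical schema: the "work to do" lives next to the business data,
# so one transaction either records both or neither -- no black hole.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL NOT NULL);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY, order_id INTEGER NOT NULL,
                         task TEXT NOT NULL, done INTEGER NOT NULL DEFAULT 0);
""")

def place_order(total: float) -> int:
    # Atomic: the order and its follow-up task commit together or not at all.
    with db:
        cur = db.execute("INSERT INTO orders (total) VALUES (?)", (total,))
        db.execute("INSERT INTO outbox (order_id, task) VALUES (?, 'send_email')",
                   (cur.lastrowid,))
        return cur.lastrowid

order_id = place_order(9.99)

# A worker later polls the outbox, performs each task, and marks it done.
pending = db.execute("SELECT order_id, task FROM outbox WHERE done = 0").fetchall()
print(pending)  # -> [(1, 'send_email')]
```

If the insert fails, the task never exists; if the worker crashes, the undone row is still there on the next poll — reliability from the transaction, not from a queue product.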
This press-release content marketing appears hung up on some mythical, perfect ESB system containing kitchen-sink cross-cutting concerns.
There are many tools in the toolbox for backend infrastructure: NoSQL stores (memcache/redis/keydb), distributed lock managers (ZooKeeper), Kafka, RabbitMQ, ejabberd, 0MQ, nng. Some scale better than others, and some are more atomic or durable than others. OLTP and infrastructure orchestration have different needs. Sometimes, cross-cutting concerns can be added by gating the sender, the receiver, or both with "controller"-like middleware proxies or modifications.
That was a lot of reading to discover they've built a workflow engine. They don't want to call it a workflow engine because there are a million of those. So they came up with a new name for a workflow engine.
Spot on. This is what Durable Functions does on Azure and it's brilliant for implementing complicated business processes and handling multiple events in one flow, in a resilient way where the logic is easy to follow.
One catch is that you're going to have to version your code if your workflows/orchestrations run for days, or if there are no windows without running workflows. And there's no built-in support for this, so expect to duplicate your entire workflow for each new version, so old runs can finish on the old code.
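A minimal sketch of that version-pinning pattern (all names here are hypothetical, not Durable Functions APIs): each run records the workflow version it started on, and the dispatcher always routes it back to that code path, so long-running instances finish on the logic they began with.

```python
def process_order_v1(state):
    # Old behavior, kept around only for runs that started before the deploy.
    return {**state, "shipped": True}

def process_order_v2(state):
    # New behavior: also send an invoice. Only runs started on v2 see this.
    return {**state, "shipped": True, "invoiced": True}

WORKFLOWS = {1: process_order_v1, 2: process_order_v2}
CURRENT_VERSION = 2

def start_run(state):
    # The version is pinned at start time and persisted with the run.
    return {"version": CURRENT_VERSION, "state": state}

def step(run):
    # Every resume looks up the pinned version, never the latest one.
    return WORKFLOWS[run["version"]](run["state"])

old_run = {"version": 1, "state": {"order": 42}}  # started before the deploy
new_run = start_run({"order": 43})
print(step(old_run))  # -> {'order': 42, 'shipped': True}
print(step(new_run))  # -> {'order': 43, 'shipped': True, 'invoiced': True}
```

Once no runs pinned to version 1 remain, `process_order_v1` can be deleted — which is exactly the duplication-and-drain chore described above.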
zer00eyz|1 year ago
We moved away from ESBs because they were opaque. And this seems very opaque.
All that work you put into queues... it's ugly, but it's transparent. You can rationalize it; you're not abstracting things away into magic.
1letterunixname|1 year ago
"Use good judgement" is the prime directive.