top | item 43115397

p10jkle | 1 year ago

As discussed in the article, we built our own storage engine from the ground up because we believe it achieves better performance by taking advantage of the properties of our workload (streaming data, a single writer, etc.) instead of shoehorning it into a DBMS. Our performance goals are very high throughput (hundreds of thousands of actions per second, scaling horizontally) with very low latency (e.g., a 40 ms p90 under load for a three-step workflow).
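To make the "single writer" point concrete, here is a minimal sketch of the design the comment describes: all producers enqueue actions, and exactly one writer thread appends them to a log, so the hot path needs no locking or general-purpose DBMS machinery. All names here are illustrative assumptions, not the actual engine's API.

```python
# Hypothetical sketch of a single-writer, append-only action log.
# Names (SingleWriterLog, append, flush) are illustrative, not the real API.
import json
import queue
import threading

class SingleWriterLog:
    """Append-only action log drained by exactly one writer thread."""

    def __init__(self):
        self._queue = queue.Queue()
        self._log = []  # stand-in for an on-disk, fsync'd segment file
        self._writer = threading.Thread(target=self._drain, daemon=True)
        self._writer.start()

    def append(self, action: dict) -> None:
        # Any number of producers may call this; only the single
        # writer thread ever touches self._log.
        self._queue.put(json.dumps(action))

    def _drain(self) -> None:
        while True:
            record = self._queue.get()
            self._log.append(record)  # a real engine would fsync here
            self._queue.task_done()

    def flush(self) -> list:
        self._queue.join()  # block until every queued action is recorded
        return list(self._log)

log = SingleWriterLog()
for step in ("reserve", "charge", "ship"):
    log.append({"workflow": "order-1", "step": step})
records = log.flush()
```

Because only one thread writes, ordering within the log is trivially total and appends can be batched, which is where the throughput claim comes from.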

jedberg | 1 year ago

> instead of shoehorning it into a DBMS

Disclaimer: I'm the CEO of the aforementioned DBOS.

That's an interesting way to phrase it. We like to think that we've taken advantage of 50 years of development on DBMSs by optimizing how the database is used. We also take advantage of the fact that your application is already accessing the database for application data; we sit right next to it, not on another service. So our added latency is in the single-digit milliseconds (an order of magnitude faster than any external solution).

Since we are on the same database as your application data, our throughput scales seamlessly with your application as you scale your database to meet its needs. It's part of our lightweight promise for durability -- no external services required.
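A minimal sketch of the idea described above, using sqlite3 as a stand-in database: each workflow step checkpoints its result in a table that lives alongside the application's own tables, so a retried workflow replays completed steps from the checkpoint instead of re-executing them, and no external orchestration service is involved. The table and function names here are illustrative assumptions, not DBOS's actual schema or API.

```python
# Hedged sketch: workflow checkpoints stored in the same database as
# application data. Names (step_checkpoints, run_step) are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, status TEXT)")  # app data
db.execute("""
    CREATE TABLE step_checkpoints (
        workflow_id TEXT,
        step_name   TEXT,
        output      TEXT,
        PRIMARY KEY (workflow_id, step_name)
    )""")  # workflow state, right next to the app's tables

def run_step(workflow_id, step_name, fn):
    # If this step already ran, replay its recorded output (idempotent retry).
    row = db.execute(
        "SELECT output FROM step_checkpoints WHERE workflow_id=? AND step_name=?",
        (workflow_id, step_name)).fetchone()
    if row:
        return row[0]
    output = fn()
    # Checkpoint is a local write in the same database -- no round trip
    # to a separate orchestration service.
    db.execute("INSERT INTO step_checkpoints VALUES (?, ?, ?)",
               (workflow_id, step_name, output))
    db.commit()
    return output

def place_order():
    db.execute("INSERT OR REPLACE INTO orders VALUES ('o1', 'placed')")
    return "placed"

first = run_step("wf-1", "place_order", place_order)
second = run_step("wf-1", "place_order", place_order)  # replayed, not re-run
```

The latency argument in the comment follows from this shape: the checkpoint write shares a connection (and can share a transaction) with the application's own writes, so durability costs one local commit rather than a network hop.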