anonymars | 11 months ago

How would that work without risking loss of committed transactions?

> Fully durable transaction commits are synchronous and report a commit as successful and return control to the client only after the log records for the transaction are written to disk. Delayed durable transaction commits are asynchronous and report a commit as successful before the log records for the transaction are written to disk. Writing the transaction log entries to disk is required for a transaction to be durable. Delayed durable transactions become durable when the transaction log entries are flushed to disk.

https://learn.microsoft.com/en-us/sql/relational-databases/l...

TwoPhonesOneKid | 11 months ago

Typically the SQL engine will allow flexibility on this. Not all transactions need to prioritize write-to-disk confirmation over throughput. If you're collecting observability metrics, for instance, these don't have the same data coherency constraints your app model (accounts etc.) demands. In that case you can accept the logical commit and the tiny chance it might not actually hit the disk. Postgres at least allows customizing this per transaction, I believe, although I'm not quite sure how it works if you compose transactions with distinct synchronization constraints.
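In PostgreSQL the per-transaction knob is `synchronous_commit`, which `SET LOCAL` scopes to the current transaction; a minimal sketch (the `metrics` table is hypothetical):

```sql
BEGIN;
-- This transaction alone returns from COMMIT before the WAL flush;
-- the records become durable shortly afterward.
SET LOCAL synchronous_commit = off;
INSERT INTO metrics (ts, value) VALUES (now(), 42);
COMMIT;
```

A crash in the small window after `COMMIT` returns can lose this transaction, but it can never corrupt the database, which is why it's an acceptable trade for low-value writes like metrics.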

anonymars | 11 months ago

Sure, but the comment I responded to was lamenting that the commits are not asynchronous by default. The documentation I linked to was all about the considerations and behavior for asynchronous commits.

toast0 | 11 months ago

Write data for transaction 1, write data for transaction 2, fsync, signal commit success for transactions 1 and 2.

Up to you how many transactions you want in a batch.
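The batching described above (group commit) can be sketched in a few lines; names here are illustrative, not any real engine's API:

```python
# Group commit sketch: buffer log records for several transactions,
# fsync once, then acknowledge all of them. One disk sync amortizes
# the durability cost across the whole batch.
import os

class GroupCommitLog:
    def __init__(self, path):
        self.f = open(path, "ab")
        self.pending = []          # transaction ids awaiting the next fsync

    def write(self, txn_id, record: bytes):
        self.f.write(record)       # buffered write; NOT yet durable
        self.pending.append(txn_id)

    def commit_batch(self):
        self.f.flush()
        os.fsync(self.f.fileno())  # one sync covers every pending transaction
        done, self.pending = self.pending, []
        return done                # only now is it safe to signal success

log = GroupCommitLog("wal.bin")
log.write(1, b"txn1-data")
log.write(2, b"txn2-data")
print(log.commit_batch())          # -> [1, 2]
```

The trade-off is latency for the first transaction in the batch versus total fsyncs per second, which is exactly the tuning question the replies below get into.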

koolba | 11 months ago

You missed a number of steps. The transactions are independent, so they signal completion (to trigger the commit fsync) independently.

You can have the first transaction wait a bit to see if any other commits can be batched in the same fsync. However that’s off by default as the assumption is you want the transaction to complete as fast as possible.

At least that’s how PostgreSQL implements it.
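The "wait a bit" knob in PostgreSQL is `commit_delay` (paired with `commit_siblings`), and as noted it's off by default (`commit_delay = 0`). Shown here as server settings with illustrative non-default values:

```sql
-- postgresql.conf-style settings (commit_delay also settable per session
-- by superusers). Values below are examples, not recommendations.
SET commit_delay = 1000;    -- wait up to 1000 microseconds before the WAL fsync
SET commit_siblings = 5;    -- ...but only if at least 5 other txns are active
```

The sibling check avoids paying the delay when there's nothing to batch with.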

marsovo | 11 months ago

Sure, but now when transaction 1 is "committed", it isn't actually guaranteed to be there in the face of interruption. That's a big change to the default behavior...