top | item 47113787

lateforwork | 7 days ago

> Postgres and MySQL don't default to serializable

Oracle and SQL Server also default to read committed, not serializable. Serializable looks good in textbooks but is rarely used in practice.
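The classic reason people reach for serializable anyway is write skew: two transactions each read a consistent snapshot, see an invariant holding, and write disjoint rows, so weaker isolation levels let both commit. A toy Python sketch of the effect, assuming nothing beyond the standard library (the doctors-on-call scenario and names are illustrative, not from any real schema):

```python
# Write skew under non-serializable isolation: both transactions decide
# against a private snapshot, then write disjoint rows, so there is no
# write-write conflict for the database to detect.

oncall = {"alice": True, "bob": True}  # invariant: at least 1 doctor on call

def request_leave(snapshot, doctor):
    """Decide against a private snapshot; return the write to apply, or None."""
    if sum(snapshot.values()) >= 2:  # per my snapshot, someone else stays
        return (doctor, False)
    return None

# Both transactions take their snapshot before either commits.
snap_a, snap_b = dict(oncall), dict(oncall)
write_a = request_leave(snap_a, "alice")
write_b = request_leave(snap_b, "bob")

# Under read committed / snapshot isolation both writes commit.
for w in (write_a, write_b):
    if w is not None:
        oncall[w[0]] = w[1]

print(sum(oncall.values()))  # 0 -- the invariant is broken
```

A serializable database would abort one of the two transactions instead of letting the invariant break.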

mike_hearn | 5 days ago

One reason Oracle uses it is that read committed scales horizontally whilst allowing very large transactions. You can just keep adding write masters.

The best implementation of serializable transactions I've seen is in FoundationDB, but it comes with serious costs. Transactions are limited in size and duration to the point where many normal database operations are disallowed by the system and require app-layer workarounds (at which point, of course, you lose serializability). And in many cases you do need cluster locks for other purposes anyway.
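The app-layer workaround alluded to above is typically batching: a bulk operation too large for one transaction gets split into many small ones, each within the limits. A minimal Python sketch of the pattern and why it gives up serializability (the fake in-memory "database" and batch size are assumptions for illustration):

```python
# Splitting one logical bulk delete into many small transactions to fit
# a per-transaction size limit. Each run_txn call stands in for one
# committed transaction.

db = list(range(10))  # rows to delete as one logical operation
BATCH = 3             # illustrative per-transaction row limit

def run_txn(rows_to_delete):
    """One small transaction, within the size limit."""
    for r in rows_to_delete:
        db.remove(r)

pending = list(db)
while pending:
    batch, pending = pending[:BATCH], pending[BATCH:]
    run_txn(batch)
    # Between batches, concurrent readers can observe a half-deleted
    # state, so the bulk delete is no longer serializable as a single
    # logical operation.

print(db)  # []
```

The whole set of rows does get deleted, but only transaction-by-transaction; the intermediate states are visible to everyone else, which is exactly the loss of serializability the comment describes.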

zadikian | 4 days ago

Spanner has similar limitations on xact size, maybe for this reason?

zadikian | 7 days ago

Yeah, the only examples I know of it being default are Spanner and Cockroach, which are for a different use case.