Dave_Rosenthal | 9 months ago
> Another big problem I found with modeling real apps was the five second transaction timeout. This is not, as you might expect, a configurable value. It's hard-coded into the servers and clients. This turns into a hugely awkward limitation and routinely wrecks your application logic and forces you to implement very tricky concurrency algorithms inside your app, just to do basic tasks. For example, computing most reports over a large dataset does not work with FoundationDB because you can't get a consistent snapshot for more than five seconds!
I'm pretty sure that the 5-second transaction timeout is configurable with a knob. You just need enough RAM to hold the key-range information for the transaction timeout period. Basically: throughput * transaction_time_limit <= RAM, since FDB's conflict detection (the isolation check at commit time) runs entirely in memory.
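To see why the timeout and the RAM budget trade off against each other, here is a back-of-envelope calculation of that constraint. All the numbers (transaction rate, conflict ranges per transaction, bytes per range entry) are illustrative assumptions, not FDB measurements:

```python
def resolver_ram_needed(txn_per_sec, ranges_per_txn, bytes_per_range, time_limit_s):
    """Bytes of RAM needed to retain conflict ranges for the timeout window.

    Models the constraint throughput * transaction_time_limit <= RAM:
    every key range touched in the last `time_limit_s` seconds must stay
    in memory so commits can be checked for conflicts against it.
    """
    return txn_per_sec * ranges_per_txn * bytes_per_range * time_limit_s

# Hypothetical workload: 50k txn/s, 10 conflict ranges each, ~100 bytes/range.
five_sec_window = resolver_ram_needed(50_000, 10, 100, 5)
one_min_window  = resolver_ram_needed(50_000, 10, 100, 60)

print(five_sec_window / 2**30)  # ~0.23 GiB for a 5 s window
print(one_min_window / 2**30)   # ~2.8 GiB -- 12x the RAM at the same throughput
```

The point is only the linear scaling: raising the timeout from 5 s to 60 s multiplies the in-memory conflict state by 12 at constant throughput.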
But the other reason that 5 seconds is the default is that, e.g., one-hour read/write transactions don't really make sense in an optimistic-concurrency world. This is the downside of optimistic concurrency. The upside is that your system never gets blocked by badly behaved long-running transactions, which is a serious issue in real production systems.
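The shape of optimistic concurrency being described is roughly: read freely, record what you read, and at commit time abort and retry if anything you read was overwritten. A toy sketch of that loop over an in-memory store (this is my own illustration, not FDB's actual implementation):

```python
import itertools

class ConflictError(Exception):
    pass

class Store:
    """Toy versioned store: each key remembers the version that last wrote it."""
    def __init__(self):
        self.data = {}                 # key -> (value, write_version)
        self.version = itertools.count(1)

    def read(self, key):
        return self.data.get(key, (None, 0))

    def commit(self, read_set, writes):
        # Optimistic check: abort if any key we read was overwritten since.
        for key, seen_version in read_set.items():
            if self.data.get(key, (None, 0))[1] != seen_version:
                raise ConflictError(key)
        v = next(self.version)
        for key, value in writes.items():
            self.data[key] = (value, v)

def run_transaction(store, body, max_retries=10):
    """Re-run the whole transaction body from scratch on each conflict."""
    for _ in range(max_retries):
        read_set, writes = {}, {}
        def get(key):
            value, ver = store.read(key)
            read_set[key] = ver        # remember what we saw
            return value
        body(get, writes)
        try:
            store.commit(read_set, writes)
            return
        except ConflictError:
            continue                   # someone else won the race; retry
    raise RuntimeError("too many conflicts")

store = Store()
run_transaction(store, lambda get, w: w.update(counter=(get("counter") or 0) + 1))
run_transaction(store, lambda get, w: w.update(counter=(get("counter") or 0) + 1))
print(store.read("counter")[0])  # 2
```

Nothing ever holds a lock, so no transaction can block another, but a one-hour body would almost certainly find its reads stale at commit and retry forever, which is why very long optimistic transactions don't make sense.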
Finally, I think that the current "Redwood" storage engine does allow long-lived read transactions, even though the original engine backing FDB didn't.
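Absent long-lived read transactions, the usual workaround for the "reports over a large dataset" complaint in the quoted comment is to break the scan into short transactions, carrying a continuation key between them. A sketch over a plain sorted key list (not the FDB API; note each chunk would read at a fresh version, so the report is only piecewise-consistent, not one snapshot):

```python
from bisect import bisect_right

def scan_in_chunks(sorted_keys, kv, chunk_size=3):
    """Yield every key/value pair, restarting a fresh 'transaction' per chunk.

    Stands in for: begin a new short transaction for each chunk so that no
    single transaction outlives the timeout, resuming strictly after the
    last key the previous chunk returned.
    """
    cursor = ""                        # resume strictly after this key
    while True:
        # -- a new short transaction would begin here --
        start = bisect_right(sorted_keys, cursor)
        chunk = sorted_keys[start:start + chunk_size]
        if not chunk:
            return
        for key in chunk:
            yield key, kv[key]
        cursor = chunk[-1]

kv = {f"k{i:02d}": i for i in range(8)}
rows = list(scan_in_chunks(sorted(kv), kv))
print(len(rows))  # 8
```

A storage engine that keeps old versions around (as described for Redwood above) removes the need for this dance, since one read transaction can hold a single snapshot version for the whole scan.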
mike_hearn | 9 months ago
Transactions holding locks for too long are indeed a problem, though in Oracle, transactions can have priorities and steal each other's locks.
Dave_Rosenthal | 9 months ago
I don't know the details now, but it was definitely configurable when I wrote it :) I remember arguing for a default of 30 or 60 seconds, but we decided against it because it would have impacted throughput at our default RAM budget. I thought it might have been a good tradeoff to get people going, since they could tune it down (or increase the RAM budget) if they needed to scale up performance later.