Probably. I've seen it argued that TX size limits are good practice anyway, and that not having them is a design fault of SQL, but that argument is on thin ice. Transaction size and scope are usually dictated by the nature of the business logic; they aren't something you can just define to be whatever you want without consequence. An RDBMS can make atomic, correct changes to an entire very large table with no developer effort. That might hang writes for a few minutes, so depending on the nature of your application it may not be a feature you can get away with using, but if the table in question is updated by background workers rather than on a latency-sensitive path, it can be a perfectly viable thing to do (on a good database engine, so not Postgres MVCC).
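To illustrate the "atomic change to a whole table" point: here's a minimal sketch using Python's sqlite3 with a made-up `accounts` table. The whole-table UPDATE runs inside one transaction, so either every row is changed or (on error) none are; no per-row batching logic is needed from the developer.

```python
import sqlite3

# Hypothetical schema for illustration: 1000 accounts, each with balance 100.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts (balance) VALUES (?)", [(100,)] * 1000)
conn.commit()

try:
    # "with conn" opens a transaction: commit on success, rollback on any error.
    with conn:
        # One statement atomically rewrites every row in the table.
        conn.execute("UPDATE accounts SET balance = balance + 10")
except sqlite3.Error:
    pass  # had the UPDATE failed midway, no rows would have changed

total = conn.execute("SELECT SUM(balance) FROM accounts").fetchone()[0]
print(total)  # 1000 rows * 110 = 110000
```

On a real server with concurrent writers, that UPDATE is where the "hang writes for a few minutes" cost shows up: the transaction holds locks (or, under MVCC, generates dead row versions) for its whole duration.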
zadikian|3 days ago
mike_hearn|1 day ago