asQuirreL|1 year ago
By manually batching the deletes, you are telling the database that the whole operation does not need to be atomic and other operations can see partial updates of it as they run. The database wouldn't be able to do that for every large delete without breaking its guarantees.
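A minimal sketch of what "manually batching the deletes" looks like in practice, using sqlite3 so it runs anywhere; the table and column names (`events`, `id`) are made up for illustration:

```python
import sqlite3

# Hypothetical table with rows we want to clear out.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (payload) VALUES (?)",
                 [("row",)] * 10_000)
conn.commit()

BATCH = 1_000
while True:
    # Each batch commits as its own transaction, so concurrent readers
    # can make progress between batches and will see partial results --
    # exactly the non-atomicity the parent comment describes.
    cur = conn.execute(
        "DELETE FROM events WHERE id IN "
        "(SELECT id FROM events LIMIT ?)", (BATCH,))
    conn.commit()
    if cur.rowcount == 0:
        break

remaining = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(remaining)  # 0
```

The trade-off is visible in the loop: a single `DELETE FROM events` would be atomic but would hold locks and log space for the whole operation, while the batched version releases both at every commit.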
wruza|1 year ago
If DELETE is so special, make special ways to manage it. Don't offload what is your competence onto a clueless user; it's a recipe for disaster. Replace DELETE with anything and it's still true.
I know a guy (not me) who deleted rows from an OLTP table that served a country's worth of clients and put it down for two days. That is completely the database's fault. If its engine were properly designed for big data, it should have refused to run that on a table with gazillions of rows and suggested a proper way to do it.

cogman10|1 year ago
If you've gone to the effort of batching things, you are still writing out those records; you are just giving the db a chance to delete them from the log.
I'd like to save my ssds that heartache and instead allow the database to just delete.
In MSSQL, in some extreme circumstances, we've partitioned our tables specifically so we can use the TRUNCATE TABLE command, as DELETE is just too expensive.
That operation can wipe GBs in seconds.
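A sketch of the partition-per-period idea behind that, again in sqlite3 (which has no TRUNCATE, so DROP + CREATE stands in for it); the table names (`log_2023`, `log_2024`) are hypothetical:

```python
import sqlite3

# Each "partition" is modeled as its own table, so discarding old data
# is a metadata operation rather than a row-by-row delete.
conn = sqlite3.connect(":memory:")
for name in ("log_2023", "log_2024"):
    conn.execute(f"CREATE TABLE {name} (id INTEGER PRIMARY KEY, msg TEXT)")
    conn.executemany(f"INSERT INTO {name} (msg) VALUES (?)",
                     [("x",)] * 5_000)
conn.commit()

# "Truncate" the old partition: deallocate it wholesale instead of
# logging and deleting every row individually.
conn.execute("DROP TABLE log_2023")
conn.execute("CREATE TABLE log_2023 (id INTEGER PRIMARY KEY, msg TEXT)")
conn.commit()

old = conn.execute("SELECT COUNT(*) FROM log_2023").fetchone()[0]
new = conn.execute("SELECT COUNT(*) FROM log_2024").fetchone()[0]
print(old, new)  # 0 5000
```

This is why TRUNCATE can wipe GBs in seconds: it deallocates pages rather than touching rows, at the cost of designing the partitioning scheme up front.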
magicalhippo|1 year ago
That said, I agree it would be nice to have a DELETE BATCH option to make it even easier.
[1]: https://learn.microsoft.com/en-us/sql/t-sql/statements/delet...