This might stem from the domain I work in (banking), but I have the opposite take. Soft delete pros to me:
* It's obvious from the schema: If there's a `deleted_at` column, I know how to query the table correctly (vs thinking rows aren't DELETEd, or knowing where to look in another table)
* One way to do things: Analytics queries, admin pages, it all can look at the same set of data, vs having separate handling for historical data.
* DELETEs are likely fairly rare by volume for many use cases
* I haven't found soft-deleted rows to be a big performance issue. Intuitively this makes sense, since indexed queries should be O(log N)
* Undoing is really easy, because all the relationships stay in place, vs data already being moved elsewhere (In practice, I haven't found much need for this kind of undo).
In most cases, I've really enjoyed going even further and making rows fully immutable, using a new row to handle updates. This makes it really easy to reference historical data.
If I were doing the logging approach described in the article, I'd use database triggers that keep a copy of every INSERTed/UPDATEd/DELETEd row in a duplicate table. This way it all stays in the same database—easy to query and replicate elsewhere.
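Sketched in SQLite for brevity (table and column names are invented; Postgres would use a trigger function instead), that trigger setup might look like:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);

-- Duplicate table holding a copy of every changed row, plus the operation.
CREATE TABLE users_log (
    op        TEXT,
    id        INTEGER,
    name      TEXT,
    logged_at TEXT DEFAULT CURRENT_TIMESTAMP
);

CREATE TRIGGER users_ins AFTER INSERT ON users BEGIN
    INSERT INTO users_log (op, id, name) VALUES ('INSERT', NEW.id, NEW.name);
END;

-- For UPDATE and DELETE, log the OLD row so no prior state is lost.
CREATE TRIGGER users_upd AFTER UPDATE ON users BEGIN
    INSERT INTO users_log (op, id, name) VALUES ('UPDATE', OLD.id, OLD.name);
END;

CREATE TRIGGER users_del AFTER DELETE ON users BEGIN
    INSERT INTO users_log (op, id, name) VALUES ('DELETE', OLD.id, OLD.name);
END;
""")

conn.execute("INSERT INTO users VALUES (1, 'alice')")
conn.execute("UPDATE users SET name = 'alicia' WHERE id = 1")
conn.execute("DELETE FROM users WHERE id = 1")

print(conn.execute("SELECT op, name FROM users_log ORDER BY rowid").fetchall())
# [('INSERT', 'alice'), ('UPDATE', 'alice'), ('DELETE', 'alicia')]
```

The main table stays clean; the log captures every prior state and can be queried or replicated like any other table.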
> DELETEs are likely fairly rare by volume for many use cases
All your other points make sense, given this assumption.
I've seen tables where 50%-70% were soft-deleted, and it did affect the performance noticeably.
> Undoing is really easy
Depends on whether undoing even happens, and whether the act of deletion and undeletion require audit records anyway.
In short, there are cases when soft-deletion works well, and is a good approach. In other cases it does not, and is not. Analysis is needed before adopting it.
Soft deletes in banking are just a Band-Aid for the much bigger problem of auditability. You may keep the original record by soft deleting it, but if you don't take care of amendments, you will still lose auditability. The correct way is to use event sourcing, with each change to an otherwise immutable state being recorded as an event, including a delete (of both an event and the object). This is even more problematic performance-wise, but syncs and snapshots exist for exactly that purpose; or you can back the main table with a separate events table, with periodic reconstructions.
If you're implementing immutable DB semantics maybe you should consider Datomic or alternatives because then you get that for free, for everything, and you also get time travel which is an amazing feature on top. It lets you be able to see the full, coherent state of the DB at any moment!
The core system at my previous employer (an insurance company) worked along the lines of the solution you outline at the end: each table is an append-only log of point-in-time information about some object. So the current state is in the row with the highest timestamp, and all previous states can be observed with appropriate filters. It's a really powerful approach.
I never got to test this, but I always wanted to explore in postgres using table partitions to store soft deleted items in a different drive as a kind of archived storage.
I'm pretty sure it is possible, and it might even yield some performance improvements.
That way you wouldn't have to worry about deleted items impacting performance too much.
I have worked with databases my entire career. I hate triggers with a passion. The issue is no one “owns” or has the authority to keep triggers clean. Eventually triggers become a dumping ground for all sorts of nasty slow code.
I usually tell people to stop treating databases like firebase and wax on/wax off records and fields willy nilly. You need to treat the database as the store of your business process. And your business processes demand retention of all requests. You need to keep the request to soft delete a record. You need to keep a request to undelete a record.
Too much crap in the database, you need to create a field saying this record will be archived off by this date. On that date, you move that record off into another table or file that is only accessible to admins. And yes, you need to keep a record of that archival as well. Too much gunk in your request logs? Well then you need to create an archive process for that as well.
These principles are nothing new. They are in line with “Generally Accepted Record Keeping Principles” which are US oriented. Other countries have similar standards.
Maybe I'm shooting for the moon, but I'd like soft delete to be some kind of built-in database feature. It would be nice to enable it on a table then choose some built-in strategies on how it's handled.
Soft-delete is a common enough ask that it's probably worth putting the best CS/database minds toward developing some OOTB feature.
Many data warehousing paradigms (e.g. Iceberg, Delta Lake, BigQuery) offer built-in "time travel," sometimes combined with scheduled table backups. That said, a lot of the teams I've worked with who want soft-delete also have other requirements that necessitate taking a custom approach (usually plain ol' SCD) instead of using the platform-native implementation.
The query complexity is the bit that catches teams off guard. You tell yourself "just add WHERE deleted_at IS NULL everywhere" but then you're six months in and someone's debugging why a report is showing ghost data because one query in a chain of 12 missed the filter.
Views help, but then you're maintaining parallel access patterns. And the moment you need to actually query deleted records (audit, support tickets, undo) you're back to bypassing your own abstractions.
Event sourcing solves this more cleanly but the operational overhead is real - most teams I've seen try it end up with a hybrid where core entities are event-sourced and everything else is just soft deleted with fingers crossed.
I'm struggling to see your point. CREATE VIEW doesn't just help; oftentimes it's exactly all you need. If you have multiple access patterns, like having to "actually query deleted records" sometimes, then someone, somewhere, at some point has to maintain invariants on those access patterns anyway. This is not rocket science. The heart of the matter is that SWEs cannot handle schemas or basic SQL to save their lives, while the analysts/BI folks who are actually somewhat well-versed in SQL have very little grasp of the inner workings of a database, and carry with them idiosyncrasies dating all the way back to the '90s.
The pot is calling the kettle black.
Forget about soft deletes for a hot minute. I can give you another super basic example where, in my experience, SWEs and BI guys both lose the plot: Type 2 slowly changing dimensions. This is actually heavily related to soft deletes, and much more common as far as access patterns are concerned. Say you want to support data updates without losing information, unless a retention policy says otherwise. For argument's sake, let's say you want to keep track of edits to the user profile. How do you do it? If you go read up on Stack Overflow, or whatever, you will come across the idea that did more violence to schemas worldwide than anything else in existence: the "audit table." So instead of performing a cheap INSERT on a normalised data structure every time you need to make a change, and perhaps reading up-to-date data from a view, you're now performing a costly UPDATE, plus an additional INSERT anyway. Why? Because apparently DISTINCT ON and composite primary keys are black magic (and anathema to ORMs in general). If you think the BI side is doing any better, think again! To them, DISTINCT ON is oftentimes no less a mystery. One moment, blink, there you go, back in the subquery hell they call home.
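To make that concrete, here's a sketch of the append-only alternative, in SQLite for brevity. DISTINCT ON is Postgres-specific, so a window function stands in for it; all names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Append-only: every profile edit is a new row; no UPDATEs, ever.
CREATE TABLE user_profile (
    user_id    INTEGER,
    email      TEXT,
    valid_from TEXT,
    PRIMARY KEY (user_id, valid_from)  -- composite primary key
);

-- Up-to-date data comes from a view over the newest row per user
-- (the window function plays the role of Postgres's DISTINCT ON).
CREATE VIEW user_profile_current AS
SELECT user_id, email
FROM (
    SELECT user_id, email,
           ROW_NUMBER() OVER (PARTITION BY user_id
                              ORDER BY valid_from DESC) AS rn
    FROM user_profile
) AS ranked
WHERE rn = 1;
""")

conn.executemany(
    "INSERT INTO user_profile VALUES (?, ?, ?)",
    [
        (1, "a@old.example", "2024-01-01"),
        (1, "a@new.example", "2024-06-01"),  # an "update" is a cheap INSERT
        (2, "b@example.com", "2024-03-01"),
    ],
)

print(conn.execute(
    "SELECT user_id, email FROM user_profile_current ORDER BY user_id"
).fetchall())
# [(1, 'a@new.example'), (2, 'b@example.com')]
```

Full edit history stays queryable in the base table, and the current state is just a cheap read through the view.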
Databases are beautiful, man.
It's a shame they are not treated with the respect they deserve.
The trigger-based approach is the only one that really works in my experience. Partition the archive table in a way that makes sense for your data and you're good to go.
Some more rules to keep it under control:
Partition table has to be append-only. Duh.
Recovering from a delete needs to be done in the application layer. The archive is meant to be a historical record, not an operational data store. Also by the time you need to recover something, the world may have changed. The application can validate that restoring this data still makes sense.
If you need to handle updates, treat them as soft deletes on the source table. The trigger captures both the old state (before update) and continues normally. Your application can then reconstruct the timeline by ordering archive records by timestamp.
Needless to say, make sure your trigger fires BEFORE the operation, not AFTER. You want to capture the row state before it's gone. And keep the trigger logic dead simple as any complexity there will bite you during high-traffic periods.
For the partition strategy, I've found monthly partitions work well for most use cases. Yearly if your volume is low, daily if you're in write-heavy territory. The key is making sure your common queries (usually "show me history for entity X" or "what changed between dates Y and Z") align with your partition boundaries.
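A stripped-down sketch of the trigger rules above, in SQLite (names invented; this omits the partitioning, which SQLite doesn't do, and keeps the trigger logic dead simple as advised):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (id INTEGER PRIMARY KEY, price REAL);

-- Append-only archive; timestamp lets the application reconstruct a timeline.
CREATE TABLE products_archive (
    id          INTEGER,
    price       REAL,
    archived_at TEXT DEFAULT (strftime('%Y-%m-%d %H:%M:%f', 'now'))
);

-- BEFORE the operation, so the old row state is captured before it's gone.
-- An UPDATE is treated like a soft delete of the old state.
CREATE TRIGGER products_upd BEFORE UPDATE ON products BEGIN
    INSERT INTO products_archive (id, price) VALUES (OLD.id, OLD.price);
END;
CREATE TRIGGER products_del BEFORE DELETE ON products BEGIN
    INSERT INTO products_archive (id, price) VALUES (OLD.id, OLD.price);
END;
""")

conn.execute("INSERT INTO products VALUES (1, 9.99)")
conn.execute("UPDATE products SET price = 12.99 WHERE id = 1")
conn.execute("DELETE FROM products WHERE id = 1")

# Reconstruct the timeline by ordering archive records (rowid breaks ties).
print(conn.execute(
    "SELECT price FROM products_archive ORDER BY archived_at, rowid"
).fetchall())
# [(9.99,), (12.99,)]
```

Restoring from the archive stays an application-layer decision, exactly as argued above; the triggers only ever append.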
I've worked at companies where soft delete was implemented everywhere, even in irrelevant internal systems... I think it's a cultural thing! I still remember a college professor scolding me on an extension project because I hadn't implemented soft delete... in his words, "In the business world, data is never deleted!!"
Databases store facts. Creating a record = new fact. "Deleting" a record = new fact. But destroying rows from tables = a disappeared fact. That is not great for most cases. In rare cases the volume of records may be a technical hurdle, in which case move facts to another database. The number of times I've wanted to destroy a large volume of facts is approximately zero.
When you start thinking of data as a potentially toxic asset with a maintenance cost to ensure it doesn't leak and cause an environmental disaster, it becomes more likely that you'd want to get rid of large volumes of facts.
Agreed. In fact I believe there should be 2 main operations in a data store: retrieve and insert. For this to actually work in practice, you probably need different types of data stores for different phases of data. Unfortunately few people have a good understanding of the Data life cycle.
I just long for DBs to evolve from "stateful" to "stateless". CQRS at the DB level.
* All inserts into append only tables. ("UserCreatedByEnrollment", "UserDeletedBySupport" instead of INSERT vs UPDATE on a stateful CRUD table)
* Declare views on these tables in the DB that present the data you want to query -- including automatically maintained materialized indices on multiple columns resulting from joins. So your "User" view is an expression involving those event tables (or "UserForApp" and "UserForSupport"), and the DB takes care of maintaining indices on these which are consistent with the insert-only tables.
* Put in archival policies saying to delete / archive events that do not affect the given subset of views. ("Delete everything in UserCreatedByEnrollment that isn't shown through UserForApp or UserForSupport")
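A plain-view sketch of the first two bullets, in SQLite with the event-table names from above. (The automatically maintained materialized indices are exactly the part today's DBs lack, so an ordinary view stands in here.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Append-only event tables; INSERT is the only write operation.
CREATE TABLE UserCreatedByEnrollment (user_id INTEGER, name TEXT, at TEXT);
CREATE TABLE UserDeletedBySupport    (user_id INTEGER, at TEXT);

-- "User" is an expression over the event tables: created and not deleted.
CREATE VIEW User AS
SELECT c.user_id, c.name
FROM UserCreatedByEnrollment c
WHERE NOT EXISTS (
    SELECT 1 FROM UserDeletedBySupport d WHERE d.user_id = c.user_id
);
""")

conn.execute("INSERT INTO UserCreatedByEnrollment VALUES (1, 'alice', '2024-01-01')")
conn.execute("INSERT INTO UserCreatedByEnrollment VALUES (2, 'bob',   '2024-01-02')")
# "Deleting" bob is just another insert into an event table.
conn.execute("INSERT INTO UserDeletedBySupport VALUES (2, '2024-02-01')")

print(conn.execute("SELECT user_id, name FROM User").fetchall())
# [(1, 'alice')]
```

The third bullet (archival policies over views) would then be a periodic job deleting event rows not reachable through any retained view.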
I tend to structure my code and DB schemas like this anyway, but lack of smoother DB support means it's currently for people who are especially interested in it.
Some bleeding edge DBs let you do at least some of this efficient and user-friendly. I.e. they will maintain powerful materialized views and you don't have to write triggers etc manually. But I long for the day we get more OLTP focus in this area not just OLAP.
At Firezone we started with soft-deletes thinking it might be useful for an audit / compliance log and quickly ran into each of the problems described in this article. The real issue for us was migrations - having to maintain the structure of deleted data alongside live data just didn't make sense, and it undermined the point of an immutable audit trail.
We've switched to CDC using Postgres which emits into another (non-replicated) write-optimized table. The replication connection maintains a 'subject' variable to provide audit context for each INSERT/UPDATE/DELETE. So far, CDC has worked very well for us in this manner (Elixir / Postgrex).
I do think soft-deletes have their place in this world, maybe for user-facing "restore deleted" features. I don't think compliance or audit trails are the right place for them however.
In simple projects where the database is only changed via an API, we just audit the API instead. It's easier to display and easier to store than tracking every DB change a single transaction makes.
A good solution here can be to utilize a view. The underlying table has the soft-delete field, and the view hides rows that have been soft deleted. Then the application doesn't need to worry about this concern all over the place.
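A minimal sketch of that pattern, in SQLite with made-up names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (
    id         INTEGER PRIMARY KEY,
    item       TEXT,
    deleted_at TEXT  -- NULL means "live"
);

-- The application queries the view and never sees soft-deleted rows.
CREATE VIEW live_orders AS
SELECT id, item FROM orders WHERE deleted_at IS NULL;
""")

conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    (1, "book", None),
    (2, "lamp", "2024-05-01"),  # soft-deleted
])

print(conn.execute("SELECT id, item FROM live_orders").fetchall())
# [(1, 'book')]
```

Audit and undelete tooling can still go straight to the base table; everything else uses the view.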
Postgres with RLS to hide soft-deleted records means that most of the app code doesn't need to know or care about them; it still issues reads, writes, and deletes to the same source table, and as far as the app knows it's working.
In my experience, archived objects are almost never accessed, and if they are, it's within a few hours or days of deletion, which leaves a fairly small chance that schema changes will have a significant impact on restoring any archived object. If you pair that with "best-effort" tooling that restores objects by calling standard "create" APIs, perhaps it's fairly safe to _not_ deal with schema changes.
Of course, as always, it depends on the system and how the archive is used. That's just my experience. I can imagine that if there are more tools or features built around the archive, the situation might be different.
I think maintaining schema changes and migrations on archived objects can be tricky in its own ways, even kept in the live tables with an 'archived_at' column, especially when objects span multiple tables with relationships. I've worked on migrations where really old archived objects just didn't make sense anymore in the new data model, and figuring out a safe migration became a difficult, error-prone project.
I like having archive/history tables. I often do something similar with job queues when persisting to a database; this way the pending table can stay small, avoiding full scans that have to skip over deleted records...
Aside, another idea that I've kicked forward for event driven databases is to just use a database like sqlite and copy/wipe the whole thing as necessary after an event or the work that's related to that database. For example, all validation/chain of custody info for ballot signatures... there's not much point in having it all online or active, or even mixed in with other ballot initiatives and the schema can change with the app as needed for new events. Just copy that file, and you have that archive. Compress the file even and just have it hard archived and backed up if needed.
Could Postgres provide a mechanism where delete works as you'd expect but you can add WITH DELETED keyword to a SELECT and it returns everything even deleted records? I guess migrations are still an issue if you want to change the structure of the DB but maybe you could provide these as part of the database too - so INSERT INTO table(col1, col2, newCol...) FROM DELETED (col1, col2, newDataNotInDeleted) WHERE id = 123 CASCADE; or something like this.
There should be a preferred way to handle this as these are clearly real issues that the database should help you to deal with.
Soft deletes are an example of where engineers unintentionally lead product instead of product leading engineering. Soft delete isn’t language used by users so it should not be used by engineers when making product facing decisions.
“Delete” “archive” “hide” are the type of actions a user typically wants, each with their own semantics specific to the product. A flag on the row, a separate table, deleting a row, these are all implementation options that should be led by the product.
> Soft delete isn’t language used by users so it should not be used by engineers when making product facing decisions.
Users generally don’t even know what a database record is. There is no reason that engineers should limit their discussions of implementation details to terms a user might use.
> “Delete” “archive” “hide” are the type of actions a user typically wants, each with their own semantics specific to the product.
Users might say they want “delete”, but then also “undo”, and suddenly we’re talking about soft delete semantics.
> A flag on the row, a separate table, deleting a row, these are all implementation options that should be led by the product.
Why would implementation details be led by product? “Undo” is an action that the user may want, which would be led by product. Not the implementation in the db.
We have soft_deleted as boolean which excludes data from all queries and last_updated which a particular query can use if it needs to.
If over 50% of your data is soft deleted then it's more like historical data for archiving purposes and yes, you need to move it somewhere else. But then maybe you shouldn't use soft delete for it but a separate "archive" procedure?
Can't most db systems just create a view over the data where archived_at is null, and this view is the table you use for 99% of your business needs (except auditing, undelete, ...)?
I've given up on soft delete -- the nail in the coffin for me was my customers' legal requirements that data is fully deleted, not archived. It never worked that well anyways. I never had a successful restore from a large set of soft-deleted rows.
> customers' legal requirements that data is fully deleted
Strange. I've only ever heard of legal requirements preventing deletion of things you'd expect could be fully deleted (in case they're needed as evidence at trial or something).
Soft deletion and privacy deletion serve different purposes.
If you leave a comment on a forum, and then delete it, it may be marked as soft-deleted so that it doesn't appear publicly in the thread anymore, but admins can still read what you wrote for moderation/auditing purposes.
On the other hand, if you send a privacy deletion request to the forum, they would be required to actually fully delete or anonymize your data, so even admins can no longer tie comments that you wrote back to you.
Most social media sites probably have to implement both of these processes/systems.
The % of records that are deleted is a huge factor.
You keep 99% and soft delete 1%? Use some sort of deleted flag. While I have not tried it, whalesalad's suggestion of a view sounds excellent. You delete 99% and keep 1%? Move it!
A view only makes sense if your RDBMS supports indexed views or the query engine is otherwise smart enough to pierce the view definition. Not all of them can do those things.
We have soft delete, with hard delete running on deletions over 45 days old. Sometimes people delete things by accident and this is the only way to practically recover that.
There are tables at $dayjob with both (begin, end) and also (incept, expire) fields. It's "on such-and-such date, X was true", but also allows for "as-of Z date, we believed that...".
Also you can have most data being currently unused even without being flagged deleted. Like if I go in to our ticketing system, I can still see my old requests that were closed ages ago.
We deal with soft delete in a Mongo app with hundreds of millions of records by simply moving the objects to a separate collection (table) separate from the “not deleted” data.
This works well especially in cases where you don’t want to waste CPU/memory scanning soft deleted records every time you do a lookup.
And avoids situations where app/backend logic forgets to apply the “deleted: false” filter.
I can see a hybrid approach working where you use a deleted_at column for soft delete, then have a process that moves this data after X days to an archive and hard deletes from the main database. This makes undeletes in the short term simple and keeps all data if needed in the future.
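A sketch of such an archival pass, in SQLite (table names and the 30-day cutoff are illustrative):

```python
import sqlite3

def archive_old_deletes(conn, days=30):
    """Move rows soft-deleted more than `days` ago into an archive table,
    then hard-delete them from the main table, in one transaction."""
    cutoff = f"-{days} days"
    with conn:  # commits on success, rolls back on error
        conn.execute("""
            INSERT INTO orders_archive
            SELECT * FROM orders
            WHERE deleted_at IS NOT NULL
              AND deleted_at <= datetime('now', ?)
        """, (cutoff,))
        conn.execute("""
            DELETE FROM orders
            WHERE deleted_at IS NOT NULL
              AND deleted_at <= datetime('now', ?)
        """, (cutoff,))

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT, deleted_at TEXT);
CREATE TABLE orders_archive (id INTEGER, item TEXT, deleted_at TEXT);
""")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    (1, "book", None),                   # live
    (2, "lamp", "2020-01-01 00:00:00"),  # soft-deleted long ago
    (3, "desk", None),                   # live
])

archive_old_deletes(conn, days=30)
print(conn.execute("SELECT id FROM orders ORDER BY id").fetchall())  # [(1,), (3,)]
print(conn.execute("SELECT id FROM orders_archive").fetchall())      # [(2,)]
```

Recent soft deletes stay trivially undeletable in place; anything older moves out of the hot table but is never lost.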
> I used to be pretty adamant about implementing soft delete for core business objects.
> However after 15 years I prefer to just back up regularly, have point in time restores and then just delete normally.
> The amount of times I have “undeleted” something are few and far between.
Similar take from me. Soft deletes sorta make sense if you have a very simple schema, but the biggest problem I have is that a soft delete leads to broken-ness - some other table now has a reference to a record in the target table that is not supposed to be visible. IOW, DB referential integrity is out the window, because we can now have references to records that should not exist!
My preferred way (for now, anyway) is to copy the record to a new audit table and nuke it in the target table in a single transaction. If the delete fails we can at least log the fact somewhere that some FK somewhere is preventing a deletion.
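A sketch of that copy-then-delete transaction, in SQLite with invented names; the FK failure is caught and can be logged instead of leaving half-applied state:

```python
import sqlite3

def delete_with_audit(conn, user_id):
    """Copy the row to users_audit and delete it from users in a single
    transaction; if an FK blocks the delete, nothing is applied."""
    try:
        with conn:  # one transaction: commit on success, rollback on error
            conn.execute(
                "INSERT INTO users_audit "
                "SELECT *, datetime('now') FROM users WHERE id = ?",
                (user_id,))
            conn.execute("DELETE FROM users WHERE id = ?", (user_id,))
        return True
    except sqlite3.IntegrityError as e:
        print(f"delete of user {user_id} blocked: {e}")  # log it somewhere real
        return False

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE users_audit (id INTEGER, name TEXT, deleted_at TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY,
                     user_id INTEGER REFERENCES users(id));
""")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
conn.execute("INSERT INTO users VALUES (2, 'bob')")
conn.execute("INSERT INTO orders VALUES (10, 2)")  # bob has an order

print(delete_with_audit(conn, 1))  # True: copied to audit, then deleted
print(delete_with_audit(conn, 2))  # False: FK from orders blocks it, rolled back
```

Because the audit copy and the delete share a transaction, a blocked delete leaves no stray audit row behind.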
With soft deletes, all sorts of logic rules and constraints are broken.
We have an offline-first infrastructure that replicates the state to possibly offline clients. Hard deletes were causing a lot of fun issues with conflicts, where a client could "resurrect" a deleted object. Or deletion might succeed locally but fail later because somebody added a dependent object. There are ways around that, of course, but why bother?
Soft deletes can be handled just like any regular update. Then we just periodically run a garbage collector to hard-delete objects after some time.
That's why adding a DELETE FROM ... RETENTION UNTIL <date> for SQL would be very nice, combining both hard and soft delete with an internal TTL to reduce the impact
The trigger architecture is actually quite interesting, especially because cleanup is relatively cheap. As far as compliance goes, it's also simple to declare that "after 45 days, deletions are permanent" as a catch-all, and then you get to keep restores. For example, I think (IANAL) the CCPA gives you a 45-day buffer for right-to-erasure requests.
Now instead of chasing down different systems and backups, you can simply ensure your archival process runs regularly and you should be good.
I would never recommend my method for every type of application nor perhaps even most. However, I have had great success with not using soft deletes at all. I just write the records to a duplicate table then hard delete the records from the main table.
Of course, in a system with 1000s of tables, I would not likely do this. But for simpler systems, it's been quite a boon.
Both the article and many comments here seem to miss that UPDATE deletes data -- the previous value of the field being updated -- which is a serious problem if soft-delete is your tool to keep old data. If you actually want historical data, you'll need logs or go straight to event sourcing.
I don't know, pruning based on age and restoring by writing a new row based on the soft deleted one seems less complex than the cascade handling in the trigger solution.
I have a love/hate relationship with soft deleted. There are cases where it’s not really a delete but rather a historical fact. For example, let’s say I have a table which stores an employee’s current hourly rate. They are hired at say $15/hour, then go to $17 six months later, then to $20/hour three months later. All of these three things are true and I want to be able to query which rate the employee had on a specific date even after their rate had changed. When I have a starts_on and an ends_on dates and the latter is nullable, with some data consistency logic I can create a linear history of compensation and can query historical and current data the same exact way. I also get
But this is such a huge PITA, because you constantly have to mind whether any given object has this setup or not, and what if related objects have different start/end dates? And something like a scheduled raise to $22/hour next year can get funny: if I then try to insert that just for July it will be $24/hour, that would take my single record for next year and split it into two, and then you've got to figure out which row gets the original ID and which is the new one.
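For concreteness, the starts_on/ends_on lookup described above might be (SQLite, illustrative names, half-open intervals):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hourly_rates (
    employee_id INTEGER,
    rate        REAL,
    starts_on   TEXT,
    ends_on     TEXT   -- NULL = still in effect
);
""")
conn.executemany("INSERT INTO hourly_rates VALUES (?, ?, ?, ?)", [
    (1, 15.0, "2023-01-01", "2023-07-01"),
    (1, 17.0, "2023-07-01", "2023-10-01"),
    (1, 20.0, "2023-10-01", None),
])

def rate_on(conn, employee_id, day):
    # Half-open intervals: a row covers starts_on <= day < ends_on,
    # so historical and current rates are queried the same way.
    row = conn.execute("""
        SELECT rate FROM hourly_rates
        WHERE employee_id = ?
          AND starts_on <= ?
          AND (ends_on IS NULL OR ends_on > ?)
    """, (employee_id, day, day)).fetchone()
    return row[0] if row else None

print(rate_on(conn, 1, "2023-08-15"))  # 17.0
print(rate_on(conn, 1, "2024-01-01"))  # 20.0
```

The consistency logic the comment mentions (no gaps, no overlapping intervals per employee) is exactly the part the database won't enforce for you, and it's where the pain starts.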
Another alternative to this is a pattern where you store the current state and separately you store mutations. So you have a compensation table and a compensation_mutations table which says how to evolve a specific row in a compensation table and when. The mutations for anything in the future can be deleted but the past ones cannot which lets you reconstruct who did what, when, and why. But this also has drawbacks. One of them is that you can’t query historical data the same way as current data. You also have to somehow apply these mutations (cron job? DB trigger?)
And of course there are database extensions that allow soft deletes but I have never tried them for vague portability reasons (as if anyone ever moved off Postgres).
One thing that often gets forgotten in the discussions about whether to soft delete and how to do it is: what about analysis of your data? Even if you don't have a data science team, or even a dedicated business analyst, there's a good chance that somebody at some point will want to analyze something in the data. And there's a good chance that the analysis will either be explicitly "intertemporal" in that it looks at and compares data from various points in time, or implicitly in that the data spans a long time range and you need to know the states of various entities "as of" a particular time in history. If you didn't keep snapshots and you don't have soft edits/deletes you're kinda SoL. Don't forget the data people down the line... which might include you, trying to make a product decision or diagnose a slippery production bug.
My brother's now ex-wife learned the hard way about the challenges of soft delete. Too bad about the contents of that SQLite database, but his knowing was for the better.
There is another solution I use all the time: move deleted records to their own table. You probably don't need to do this for all tables. It allows you to not pepper your codebase with where clauses or statuses, everything works as intended, and you can easily restore records deleted by mistake, which is the original intent anyways. You can easily set this up by using a trigger at the database level in almost every database, that just works.
The hidden cost we battle in e-commerce isn't just DB storage/performance, it's Search Index Pollution. We treat 'availability' as a complex state machine (In Stock, Backorder, Discontinued-but-visible, Soft Deleted). Trying to map this logic directly into a Postgres query with WHERE deleted_at IS NULL works for CRUD, but it creates massive friction for discovery.
We found that strict CQRS/Decoupling is the only way to scale this. Let the operational DB keep the soft-deletes for audit/integrity (as mentioned by others), but the Search Index must be a clean, ephemeral projection of only what is currently purchasable.
Trying to filter soft-deletes at query time inside the search engine is a recipe for latency spikes.
TLDR: Soft deletes look easy, but they spread complexity everywhere. Actually deleting data and archiving it separately often keeps databases simpler, faster, and easier to maintain.
I think one of our problems is getting users to delete stuff they don’t need anymore.
I think partitioning soft-deleted rows onto separate storage is likely unnecessary for most use cases and is mostly a RAM saving measure, but it could help in some cases.
rawgabbit|1 month ago
I usually tell people to stop treating databases like firebase and wax on/wax off records and fields willy nilly. You need to treat the database as the store of your business process. And your business processes demand retention of all requests. You need to keep the request to soft delete a record. You need to keep a request to undelete a record.
Too much crap in the database, you need to create a field saying this record will be archived off by this date. On that date, you move that record off into another table or file that is only accessible to admins. And yes, you need to keep a record of that archival as well. Too much gunk in your request logs? Well then you need to create an archive process for that as well.
These principles are nothing new. They are in line with “Generally Accepted Record Keeping Principles” which are US oriented. Other countries have similar standards.
hnthrow0287345|1 month ago
Soft-delete is a common enough ask that it's probably worth putting the best CS/database minds to developing some OOTB feature.
Centigonal|1 month ago
unknown|1 month ago
[deleted]
jackfranklyn|1 month ago
Views help, but then you're maintaining parallel access patterns. And the moment you need to actually query deleted records (audit, support tickets, undo) you're back to bypassing your own abstractions.
Event sourcing solves this more cleanly but the operational overhead is real - most teams I've seen try it end up with a hybrid where core entities are event-sourced and everything else is just soft deleted with fingers crossed.
tucnak|1 month ago
The pot is calling the kettle black.
Forget about soft deletes for a hot minute. I can give you another super basic example where in my experience SWE's and BI guys both lose the plot: Type 2 slowly-changing dimensions. This is actually heavily related to soft deletes, and much more common as far as access patterns are concerned. Say, you want to support data updates without losing information unless specified by a retention policy. For argument's sake, let's say you want to keep track of edits in the user profile. How do you do it? If you go read up on Stackoverflow, or whatever, you will come across the idea that did more violence to schemas worldwide than anything else in existence, "audit table." So instead of performing a cheap INSERT on a normalised data structure every time you need to make a change, and perhaps reading up-to-date data from a view, you're now performing costly UPDATE, and additional INSERT anyway. Why? Because apparently DISTINCT ON and composite primary keys are black magic (and anathema to ORM's in general.) If you think on BI side they're doing any better, you think wrong! To them, DISTINCT ON is oftentimes a mystery no less. One moment, blink, there you go, back in the subquery hell they call home.
Databases are beautiful, man.
It's a shame they are not treated with the respect they deserve.
patates|1 month ago
Some more rules to keep it under control:
Partition table has to be append-only. Duh.
Recovering from a delete needs to be done in the application layer. The archive is meant to be a historical record, not an operational data store. Also by the time you need to recover something, the world may have changed. The application can validate that restoring this data still makes sense.
If you need to handle updates, treat them as soft deletes on the source table. The trigger captures the old state (before the update) and the operation continues normally. Your application can then reconstruct the timeline by ordering archive records by timestamp.
Needless to say, make sure your trigger fires BEFORE the operation, not AFTER. You want to capture the row state before it's gone. And keep the trigger logic dead simple as any complexity there will bite you during high-traffic periods.
For the partition strategy, I've found monthly partitions work well for most use cases. Yearly if your volume is low, daily if you're in write-heavy territory. The key is making sure your common queries (usually "show me history for entity X" or "what changed between dates Y and Z") align with your partition boundaries.
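Putting the trigger rules together, a minimal sketch (SQLite via Python so it's runnable here; a real Postgres setup would write into a partitioned archive table, and the names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    -- Append-only archive; in Postgres this would be partitioned by archived_at.
    CREATE TABLE orders_archive (
        id INTEGER, status TEXT, op TEXT, archived_at INTEGER
    );

    -- Fire BEFORE the operation so the old row state is still available.
    -- Logic kept dead simple: one INSERT, nothing else.
    CREATE TRIGGER orders_archive_delete BEFORE DELETE ON orders
    BEGIN
        INSERT INTO orders_archive
        VALUES (OLD.id, OLD.status, 'DELETE', strftime('%s','now'));
    END;

    -- Updates are treated the same way: capture the pre-update state.
    CREATE TRIGGER orders_archive_update BEFORE UPDATE ON orders
    BEGIN
        INSERT INTO orders_archive
        VALUES (OLD.id, OLD.status, 'UPDATE', strftime('%s','now'));
    END;
""")
conn.execute("INSERT INTO orders VALUES (1, 'open')")
conn.execute("UPDATE orders SET status = 'shipped' WHERE id = 1")
conn.execute("DELETE FROM orders WHERE id = 1")

history = conn.execute(
    "SELECT id, status, op FROM orders_archive ORDER BY rowid").fetchall()
print(history)  # [(1, 'open', 'UPDATE'), (1, 'shipped', 'DELETE')]
```

Recovery stays in the application layer: nothing here writes back into `orders`.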
talesmm14|1 month ago
salomonk_mur|1 month ago
Storage is cheap. Never delete data.
mrkeen|1 month ago
rorylaitila|1 month ago
pixl97|1 month ago
dpark|1 month ago
There are many legitimate reasons to delete data. The decision to retain data forever should not be taken lightly.
keithluu|1 month ago
dagss|1 month ago
* All inserts into append only tables. ("UserCreatedByEnrollment", "UserDeletedBySupport" instead of INSERT vs UPDATE on a stateful CRUD table)
* Declare views on these tables in the DB that present the data you want to query -- including automatically maintained materialized indices on multiple columns resulting from joins. So your "User" view is an expression involving those event tables (or "UserForApp" and "UserForSupport"), and the DB takes care of maintaining indices on these which are consistent with the insert-only tables.
* Put in archival policies saying to delete / archive events that do not affect the given subset of views. ("Delete everything in UserCreatedByEnrollment that isn't shown through UserForApp or UserForSupport")
I tend to structure my code and DB schemas like this anyway, but lack of smoother DB support means it's currently for people who are especially interested in it.
Some bleeding-edge DBs let you do at least some of this efficiently and in a user-friendly way. I.e. they will maintain powerful materialized views and you don't have to write triggers etc. manually. But I long for the day we get more OLTP focus in this area, not just OLAP.
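A toy version of the first two bullets (SQLite via Python; the automatically maintained materialized indices are the part today's OLTP databases don't give you, so this view is simply recomputed on read):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Append-only event tables, named after what actually happened.
    CREATE TABLE UserCreatedByEnrollment (user_id INTEGER, name TEXT, at INTEGER);
    CREATE TABLE UserDeletedBySupport   (user_id INTEGER, reason TEXT, at INTEGER);

    -- "User" is an expression over the event tables, not a stateful CRUD table.
    CREATE VIEW UserForApp AS
    SELECT c.user_id, c.name
    FROM UserCreatedByEnrollment c
    WHERE NOT EXISTS (SELECT 1 FROM UserDeletedBySupport d
                      WHERE d.user_id = c.user_id);
""")
conn.execute("INSERT INTO UserCreatedByEnrollment VALUES (1, 'alice', 100)")
conn.execute("INSERT INTO UserCreatedByEnrollment VALUES (2, 'bob', 101)")
conn.execute("INSERT INTO UserDeletedBySupport VALUES (2, 'gdpr request', 200)")

users = conn.execute(
    "SELECT user_id, name FROM UserForApp ORDER BY user_id").fetchall()
print(users)  # [(1, 'alice')]
```

An archival policy in this scheme is then a DELETE on the event tables restricted to rows that no retained view can see.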
jperras|1 month ago
https://martinfowler.com/eaaDev/EventSourcing.html
jamilbk|1 month ago
We've switched to CDC on Postgres, which emits into another (non-replicated) write-optimized table. The replication connection maintains a 'subject' variable to provide audit context for each INSERT/UPDATE/DELETE. So far, CDC has worked very well for us in this manner (Elixir / Postgrex).
I do think soft-deletes have their place in this world, maybe for user-facing "restore deleted" features. I don't think compliance or audit trails are the right place for them however.
d0100|1 month ago
whalesalad|1 month ago
elyobo|1 month ago
maxchehab|1 month ago
The data archive serializes the deleted object, capturing the schema as it was at that point in time.
But fast-forward through some schema changes: now your system has to migrate the archived objects to the current schema?
buchanae|1 month ago
Of course, as always, it depends on the system and how the archive is used. That's just my experience. I can imagine that if there are more tools or features built around the archive, the situation might be different.
I think maintaining schema changes and migrations on archived objects can be tricky in its own ways, even kept in the live tables with an 'archived_at' column, especially when objects span multiple tables with relationships. I've worked on migrations where really old archived objects just didn't make sense anymore in the new data model, and figuring out a safe migration became a difficult, error-prone project.
tracker1|1 month ago
Aside, another idea that I've kicked forward for event driven databases is to just use a database like sqlite and copy/wipe the whole thing as necessary after an event or the work that's related to that database. For example, all validation/chain of custody info for ballot signatures... there's not much point in having it all online or active, or even mixed in with other ballot initiatives and the schema can change with the app as needed for new events. Just copy that file, and you have that archive. Compress the file even and just have it hard archived and backed up if needed.
andy_ppp|1 month ago
There should be a preferred way to handle this as these are clearly real issues that the database should help you to deal with.
3rodents|1 month ago
“Delete” “archive” “hide” are the type of actions a user typically wants, each with their own semantics specific to the product. A flag on the row, a separate table, deleting a row, these are all implementation options that should be led by the product.
dpark|1 month ago
Users generally don’t even know what a database record is. There is no reason that engineers should limit their discussions of implementation details to terms a user might use.
> “Delete” “archive” “hide” are the type of actions a user typically wants, each with their own semantics specific to the product.
Users might say they want “delete”, but then also “undo”, and suddenly we’re talking about soft delete semantics.
> A flag on the row, a separate table, deleting a row, these are all implementation options that should be led by the product.
None of these are terms an end user would use.
monkpit|1 month ago
nitwit005|1 month ago
It's fairly common in some industries to get support requests to recover lost data.
antonvs|1 month ago
nottorp|1 month ago
We have soft_deleted as boolean which excludes data from all queries and last_updated which a particular query can use if it needs to.
If over 50% of your data is soft deleted then it's more like historical data for archiving purposes and yes, you need to move it somewhere else. But then maybe you shouldn't use soft delete for it but a separate "archive" procedure?
hapidjus|1 month ago
One reason is that you might want to know when it was last updated before it was deleted.
alkonaut|1 month ago
arethuza|1 month ago
ntonozzi|1 month ago
zahlman|1 month ago
Strange. I've only ever heard of legal requirements preventing deletion of things you'd expect could be fully deleted (in case they're needed as evidence at trial or something).
theLiminator|1 month ago
wavemode|1 month ago
If you leave a comment on a forum, and then delete it, it may be marked as soft-deleted so that it doesn't appear publicly in the thread anymore, but admins can still read what you wrote for moderation/auditing purposes.
On the other hand, if you send a privacy deletion request to the forum, they would be required to actually fully delete or anonymize your data, so even admins can no longer tie comments that you wrote back to you.
Most social media sites probably have to implement both of these processes/systems.
sedatk|1 month ago
LorenPechtel|1 month ago
If you keep 99% and soft delete 1%, use some sort of deleted flag. While I have not tried it, whalesalad's suggestion of a view sounds excellent. If you delete 99% and keep 1%, move it!
da_chicken|1 month ago
clickety_clack|1 month ago
tbrownaw|1 month ago
Also you can have most data being currently unused even without being flagged deleted. Like if I go in to our ticketing system, I can still see my old requests that were closed ages ago.
cj|1 month ago
This works well especially in cases where you don’t want to waste CPU/memory scanning soft deleted records every time you do a lookup.
And avoids situations where app/backend logic forgets to apply the “deleted: false” filter.
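Both concerns can be handled in the database itself: a partial index so live-row lookups never touch soft-deleted entries, and a view so application code can't forget the filter. A sketch (SQLite via Python; table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (
        id INTEGER PRIMARY KEY,
        name TEXT,
        soft_deleted INTEGER NOT NULL DEFAULT 0
    );

    -- Partial index: only live rows are indexed, so lookups on live data
    -- don't pay for the soft-deleted ones.
    CREATE INDEX items_live ON items (name) WHERE soft_deleted = 0;

    -- A view so queries can't forget the "deleted: false" filter.
    CREATE VIEW live_items AS
    SELECT id, name FROM items WHERE soft_deleted = 0;
""")
conn.executemany("INSERT INTO items (name, soft_deleted) VALUES (?, ?)",
                 [("keep", 0), ("gone", 1)])
live = conn.execute("SELECT name FROM live_items").fetchall()
print(live)  # [('keep',)]
```

Postgres supports the same `CREATE INDEX ... WHERE` syntax for partial indexes.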
vjvjvjvjghv|1 month ago
t1234s|1 month ago
iterateoften|1 month ago
However after 15 years I prefer to just back up regularly, have point in time restores and then just delete normally.
The amount of times I have “undeleted” something are few and far between.
lelanthran|1 month ago
> However after 15 years I prefer to just back up regularly, have point in time restores and then just delete normally.
> The amount of times I have “undeleted” something are few and far between.
Similar take from me. Soft deletes sorta make sense if you have a very simple schema, but the biggest problem I have is that a soft delete leads to broken-ness: some other table now has a reference to a record in the target table that is not supposed to be visible. IOW, DB referential integrity is out the window, because we can now have references to records that should not exist!
My preferred way (for now, anyway) is to copy the record to a new audit table and nuke it in the target table in a single transaction. If the delete fails we can at least log the fact somewhere that some FK somewhere is preventing a deletion.
With soft deletes, all sorts of logic rules and constraints are broken.
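A minimal sketch of that copy-then-nuke transaction (SQLite via Python; table names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE users_audit (id INTEGER, email TEXT, deleted_at INTEGER);
""")
conn.execute("INSERT INTO users VALUES (1, 'a@example.com')")

def hard_delete_with_audit(conn, user_id):
    # Copy-then-delete in a single transaction: either both happen or neither.
    try:
        with conn:  # commits on success, rolls back on exception
            conn.execute(
                "INSERT INTO users_audit "
                "SELECT id, email, strftime('%s','now') FROM users WHERE id = ?",
                (user_id,))
            conn.execute("DELETE FROM users WHERE id = ?", (user_id,))
    except sqlite3.IntegrityError as exc:
        # e.g. an FK elsewhere still references the row: log the fact and surface it
        print(f"delete of user {user_id} blocked: {exc}")
        raise

hard_delete_with_audit(conn, 1)
remaining = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
archived = conn.execute("SELECT COUNT(*) FROM users_audit").fetchone()[0]
print(remaining, archived)  # 0 1
```

The live table keeps its referential integrity guarantees, and a blocked delete fails loudly instead of leaving a half-hidden row behind.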
cyberax|1 month ago
We have an offline-first infrastructure that replicates the state to possibly offline clients. Hard deletes were causing a lot of fun issues with conflicts, where a client could "resurrect" a deleted object. Or deletion might succeed locally but fail later because somebody added a dependent object. There are ways around that, of course, but why bother?
Soft deletes can be handled just like any regular update. Then we just periodically run a garbage collector to hard-delete objects after some time.
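The garbage-collector pass is essentially one statement run on a schedule. A sketch (SQLite via Python; the 30-day grace period is an arbitrary choice):

```python
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE objects (id INTEGER PRIMARY KEY, deleted_at INTEGER)")  # NULL = live

now = int(time.time())
conn.executemany("INSERT INTO objects VALUES (?, ?)",
                 [(1, None),              # live
                  (2, now - 90 * 86400),  # soft-deleted 90 days ago
                  (3, now - 3600)])       # soft-deleted an hour ago

GRACE_SECONDS = 30 * 86400  # tombstones survive 30 days so clients can sync

def garbage_collect(conn, now):
    # Hard-delete anything soft-deleted longer ago than the grace period.
    with conn:
        cur = conn.execute(
            "DELETE FROM objects WHERE deleted_at IS NOT NULL AND deleted_at < ?",
            (now - GRACE_SECONDS,))
    return cur.rowcount

collected = garbage_collect(conn, now)
print(collected)  # 1 (only the 90-day-old tombstone)
```

Keeping the tombstone around for the grace period is what lets offline clients see the deletion as an ordinary update instead of resurrecting the object.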
stevefan1999|1 month ago
nemothekid|1 month ago
Now instead of chasing down different systems and backups, you can simply ensure your archival process runs regularly and you should be good.
hirvi74|1 month ago
Of course, in a system with 1000s of tables, I would not likely do this. But for simpler systems, it's been quite a boon.
pjs_|1 month ago
moring|1 month ago
cess11|1 month ago
IgorPartola|1 month ago
But this is such a huge PITA, because you constantly have to mind whether any given object has this setup, and what if related objects have different start/end dates? And something like a scheduled raise to $22/hour for next year can get funny: if I then try to insert $24/hour just for July, that takes my single record for next year and splits it into two, and then you gotta figure out which row gets the original ID and which is the new one.
Another alternative to this is a pattern where you store the current state and separately you store mutations. So you have a compensation table and a compensation_mutations table which says how to evolve a specific row in a compensation table and when. The mutations for anything in the future can be deleted but the past ones cannot which lets you reconstruct who did what, when, and why. But this also has drawbacks. One of them is that you can’t query historical data the same way as current data. You also have to somehow apply these mutations (cron job? DB trigger?)
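A toy version of the mutations pattern, applied from a cron job (SQLite via Python; all names hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE compensation (employee_id INTEGER PRIMARY KEY, hourly_rate REAL);
    -- Each mutation says how a compensation row should evolve, and from when.
    CREATE TABLE compensation_mutations (
        id INTEGER PRIMARY KEY,
        employee_id INTEGER,
        new_rate REAL,
        effective_at INTEGER,
        applied INTEGER NOT NULL DEFAULT 0  -- past mutations are kept for audit
    );
""")
conn.execute("INSERT INTO compensation VALUES (1, 20.0)")
conn.executemany(
    "INSERT INTO compensation_mutations (employee_id, new_rate, effective_at) "
    "VALUES (?, ?, ?)",
    [(1, 22.0, 100), (1, 24.0, 200)])

def apply_due_mutations(conn, now):
    # Run periodically: fold every due, unapplied mutation into current state.
    due = conn.execute(
        "SELECT id, employee_id, new_rate FROM compensation_mutations "
        "WHERE applied = 0 AND effective_at <= ? ORDER BY effective_at",
        (now,)).fetchall()
    with conn:
        for mid, emp, new_rate in due:
            conn.execute("UPDATE compensation SET hourly_rate = ? "
                         "WHERE employee_id = ?", (new_rate, emp))
            conn.execute("UPDATE compensation_mutations SET applied = 1 "
                         "WHERE id = ?", (mid,))

apply_due_mutations(conn, now=150)  # only the $22 mutation is due yet
rate = conn.execute("SELECT hourly_rate FROM compensation "
                    "WHERE employee_id = 1").fetchone()[0]
print(rate)  # 22.0
```

The downside mentioned above shows up here too: querying what the rate was at an arbitrary past time means replaying mutations, not just reading a row.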
And of course there are database extensions that allow soft deletes but I have never tried them for vague portability reasons (as if anyone ever moved off Postgres).
nerdponx|1 month ago
cadamsdotcom|1 month ago
And perf problems are only speculative until you actually have them. Premature optimization and all that.
piratebroadcast|1 month ago
JohnLeitch|1 month ago
gizzlon|1 month ago
iamleppert|1 month ago
MarginalGainz|1 month ago
We found that strict CQRS/Decoupling is the only way to scale this. Let the operational DB keep the soft-deletes for audit/integrity (as mentioned by others), but the Search Index must be a clean, ephemeral projection of only what is currently purchasable.
Trying to filter soft-deletes at query time inside the search engine is a recipe for latency spikes.
ctxc|1 month ago
MORPHOICES|1 month ago
[deleted]
Barathkanna|1 month ago