theanzelm | 1 year ago
There are some edge cases where you might lose easy access to some data if two clients migrate concurrently, but we’re hoping to provide patterns to mitigate these
Edit: Right now it’s all up to you to implement migrations “the right way”, but we hope to provide guardrails soon
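One way such an edge case can surface (a hypothetical sketch, not the actual implementation): if a migration renames a field while another client keeps editing under the old name, a naive key-wise merge leaves the edit stranded under a key the new schema no longer reads.

```python
def migrate_v1_to_v2(doc: dict) -> dict:
    # Hypothetical migration: rename "name" -> "fullName".
    out = dict(doc)
    out["fullName"] = out.pop("name")
    out["schema"] = 2
    return out

base = {"schema": 1, "name": "Ada"}

a = migrate_v1_to_v2(dict(base))   # client A migrates...
a["fullName"] = "Ada Lovelace"     # ...then edits the new field

b = dict(base)
b["name"] = "Ada L."               # client B edits on the old schema concurrently

# Naive key-wise merge: both keys survive, but B's edit sits under
# "name", which the v2 schema no longer reads -- easy access is lost.
merged = {**a, **b}
```

The data is not destroyed, but it is stranded: nothing in the v2 schema points at the stale `name` key anymore.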
mentalgear | 1 year ago
This approach may require a central authority (with no access to user data) responsible solely for providing the schema and migration patterns as code transformations.
Since running arbitrary code from the payload introduces security risks, the migration code could be cryptographically signed, so that only valid, trusted transformation code is executed. An additional security layer would be to run the transformation code in a sandbox that can only output JSON data. (Keeping a full pre-migration copy as a backup in case something goes wrong would always be a good idea.)
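A minimal sketch of the signing idea, assuming a shared key and an HMAC standing in for a real public-key signature such as Ed25519 (the function names and key handling are illustrative, not any particular library's API):

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # hypothetical; a real system would distribute a public key

def sign_migration(code: str, key: bytes = SHARED_KEY) -> str:
    """Sign migration source so clients can verify its origin."""
    return hmac.new(key, code.encode(), hashlib.sha256).hexdigest()

def run_if_trusted(code: str, signature: str, doc: dict) -> dict:
    """Run a migration only if its signature checks out, keeping a full
    pre-migration copy as a backup. (Real sandboxing is out of scope here.)"""
    if not hmac.compare_digest(sign_migration(code), signature):
        raise ValueError("untrusted migration code, refusing to run")
    backup = json.loads(json.dumps(doc))          # pre-migration backup copy
    scope: dict = {}
    exec(code, {"__builtins__": {}}, scope)       # no builtins available to the code
    return {"migrated": scope["migrate"](doc), "backup": backup}

migration = "def migrate(doc):\n    doc['schema'] = 2\n    return doc\n"
sig = sign_migration(migration)
result = run_if_trusted(migration, sig, {"schema": 1, "name": "a"})
```

Any tampering with the migration source invalidates the signature, so the client refuses to execute it.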
Another option would be to use a transformation library for migrations; in this case, the payload would only describe (as JSON) the functions and parameters needed to move the schema from one version to the next.
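A minimal sketch of that declarative option, assuming a hypothetical whitelist of operations (`rename`, `add_default`, `set`) interpreted by the client, so no executable code ever travels in the payload:

```python
import json

# Hypothetical declarative migration: each step names a whitelisted operation.
MIGRATION_V1_TO_V2 = json.loads("""[
    {"op": "rename", "from": "username", "to": "displayName"},
    {"op": "add_default", "field": "theme", "value": "light"},
    {"op": "set", "field": "schemaVersion", "value": 2}
]""")

# The client interprets only these known, safe operations.
OPS = {
    "rename":      lambda doc, s: doc.update({s["to"]: doc.pop(s["from"])}),
    "add_default": lambda doc, s: doc.setdefault(s["field"], s["value"]),
    "set":         lambda doc, s: doc.update({s["field"]: s["value"]}),
}

def apply_migration(doc: dict, steps: list) -> dict:
    out = dict(doc)  # never mutate the caller's copy
    for step in steps:
        OPS[step["op"]](out, step)
    return out

old = {"schemaVersion": 1, "username": "ada"}
new = apply_migration(old, MIGRATION_V1_TO_V2)
# new == {"schemaVersion": 2, "displayName": "ada", "theme": "light"}
```

Because the payload is pure data, an unknown `"op"` simply fails instead of running attacker-supplied code.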
klabb3|1 year ago
Agreed, that’s the only sensible thing to do. Not sure it’s 90% though.
> but we’re hoping to provide patterns to mitigate these
Hope is not confidence-inspiring for the most difficult problem at the heart of the system. That doesn’t mean it’s impossible, but it needs to be taken seriously rather than treated as an afterthought.
Another thing you have to think about is what happens when data written under a new schema is sent to a client on an older schema. Does the “merging” work with unknown fields? Does it ignore and drop them? Or do you enforce that clients are up to date in some way, so that you never have new-data-old-software?
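One of those options, round-tripping unknown fields instead of dropping them, could look roughly like this (a naive sketch; `KNOWN_V1_FIELDS` and the last-writer-wins rule are assumptions for illustration):

```python
KNOWN_V1_FIELDS = {"schemaVersion", "title"}  # what the old client understands

def merge_preserving_unknown(local: dict, incoming: dict) -> dict:
    """An old client merges a doc written by a newer peer: it only interprets
    the fields it knows, but round-trips unknown ones instead of dropping
    them, so the newer peer's data survives the old client's writes."""
    merged = dict(incoming)                # keep unknown (newer-schema) fields
    for field in KNOWN_V1_FIELDS:
        if field in local:
            merged[field] = local[field]   # naive last-writer-wins on known fields
    return merged

old_client_doc  = {"schemaVersion": 1, "title": "notes"}
from_newer_peer = {"schemaVersion": 2, "title": "Notes", "tags": ["a"]}
merged = merge_preserving_unknown(old_client_doc, from_newer_peer)
# "tags" survives even though the v1 client doesn't know the field
```

This is the same forward-compatibility trick protobuf uses for unknown fields: don't interpret what you don't understand, but don't destroy it either.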
theanzelm | 1 year ago