top | item 39861615

radiospiel | 1 year ago

> The JSON operations are not atomic

This is the first time I've heard this. What exactly is not atomic, and is there a resource with more details?

adastral | 1 year ago

Not sure if this is what the above comment means by "atomic", but a shortcoming of Postgres' JSON support is that it will have to rewrite an entire JSON object every time a part of it gets updated, no matter how many keys the update really affected. E.g. if I update an integer in a 100MB JSON object, Postgres will write ~100MB (plus WAL, TOAST overhead, etc.), not just a few bytes. I imagine this can be a no-go for certain use cases.
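A rough Python stand-in for the write amplification described above (not actual Postgres internals): because JSONB is stored as a single serialized value, changing one field still means producing and writing the entire serialized document again.

```python
import json

# Build a large JSON document (~2 MB serialized) and change one integer.
doc = {f"key_{i}": i for i in range(100_000)}

before = json.dumps(doc)
doc["key_42"] = -1            # a few bytes of logical change
after = json.dumps(doc)

# The logical change is tiny, but the value that must be rewritten is not:
# the whole serialized document is produced again, not a small delta.
print(len(after))             # on the order of the full document size
```

In Postgres terms: an `UPDATE` with `jsonb_set` on one path still writes a new copy of the whole JSONB value (plus WAL and TOAST overhead), as the comment notes.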

bandrami | 1 year ago

It drives me batty to see people store 100MB JSON objects with a predictable internal structure as single records in an RDB rather than destructuring them and storing the fields as columns in a single record. Like, yes, you can design it the worst possible way like that, but why? But I see it all the time.

winrid | 1 year ago

It's not an issue with size. It's an issue with race conditions. With Mongo I can update a.b and a.c concurrently from different nodes and both writes will set the right values.

You can't do that with PG JSONB unless you lock the row for reading...
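A small Python simulation of the race the comment describes (in-memory, no database; `set_path` is a made-up helper standing in for a path-level operator like Mongo's `$set`): two writers that each read the whole document, modify it, and write it back will lose one update, while path-level updates to `a.b` and `a.c` compose safely in either order.

```python
import json

# Simulated stored row holding a serialized JSON document.
stored = {"doc": json.dumps({"a": {"b": 1, "c": 1}})}

# Whole-document read-modify-write: both writers read the same snapshot,
# then each writes back its own full copy. The second write clobbers the first.
snap1 = json.loads(stored["doc"])
snap2 = json.loads(stored["doc"])
snap1["a"]["b"] = 2                  # writer 1 updates a.b
snap2["a"]["c"] = 2                  # writer 2 updates a.c
stored["doc"] = json.dumps(snap1)
stored["doc"] = json.dumps(snap2)    # writer 1's change to a.b is lost
print(json.loads(stored["doc"]))     # {'a': {'b': 1, 'c': 2}}

# Path-level update (the Mongo-style $set behavior): each write reads the
# current value and touches only its own path, so both updates survive.
def set_path(doc_json, path, value):
    doc = json.loads(doc_json)
    cur = doc
    for key in path[:-1]:
        cur = cur[key]
    cur[path[-1]] = value
    return json.dumps(doc)

stored["doc"] = json.dumps({"a": {"b": 1, "c": 1}})
stored["doc"] = set_path(stored["doc"], ["a", "b"], 2)
stored["doc"] = set_path(stored["doc"], ["a", "c"], 2)
print(json.loads(stored["doc"]))     # {'a': {'b': 2, 'c': 2}}
```

This is only a sketch of the application-level pattern; whether a given Postgres `UPDATE ... jsonb_set(...)` is safe depends on where the read happens relative to the row lock, which is exactly the caveat in the comment above.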

jeltz | 1 year ago

A JSON object that is still 100 MB after compression is a pretty huge thing.