top | item 39861920

adastral | 1 year ago

Not sure if this is what the above comment means by "atomic", but a shortcoming of Postgres' JSON support is that it will have to rewrite an entire JSON object every time a part of it gets updated, no matter how many keys the update really affected. E.g. if I update an integer in a 100MB JSON object, Postgres will write ~100MB (plus WAL, TOAST overhead, etc.), not just a few bytes. I imagine this can be a no-go for certain use cases.
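A rough way to see the write amplification is a simulation in Python (the document shape and key names here are illustrative, not real Postgres internals): changing one integer key still means producing and storing a whole new serialized value.

```python
import json

# Illustrative stand-in for a large jsonb value (not literally 100MB).
doc = {"counter": 0, "payload": ["x" * 100] * 10_000}

before = json.dumps(doc)

# Update a single integer key...
doc["counter"] = 1
after = json.dumps(doc)

# ...but persisting the new value means rewriting the whole serialized
# blob, not just the few bytes that changed.
delta_bytes = sum(a != b for a, b in zip(before, after))
print(f"serialized size: {len(after):,} bytes")
print(f"bytes that actually differ: {delta_bytes}")
```

The same asymmetry holds in Postgres: the row's jsonb value (plus TOAST chunks and WAL) is rewritten in full, even though the logical change is a few bytes.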

bandrami | 1 year ago

It drives me batty to see people store 100MB JSON objects with a predictable internal structure as a single value in an RDB rather than destructuring them and storing the fields as columns of a record. Like, yes, you can design it the worst possible way like that, but why? But I see it all the time.
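The destructuring step can be sketched in Python (field names are hypothetical; real code would map the flattened pairs onto an INSERT against ordinary columns):

```python
# Sketch: flatten a JSON document with a predictable shape into
# column/value pairs suitable for ordinary RDB columns.
def flatten(obj, prefix=""):
    cols = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            # Nested objects become prefixed columns, e.g. user_id.
            cols.update(flatten(value, prefix=name + "_"))
        else:
            cols[name] = value
    return cols

doc = {"user": {"id": 7, "name": "ada"}, "score": 42}
print(flatten(doc))  # {'user_id': 7, 'user_name': 'ada', 'score': 42}
```

Once the predictable parts live in columns, an update touches only the bytes of the affected fields instead of one giant blob.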

Temporary_31337 | 1 year ago

Because of schemas. The whole point of NoSQL is that you can alter your data model without having to reload the whole database.

winrid | 1 year ago

It's not an issue with size. It's an issue with race conditions. With Mongo I can update a.b and a.c concurrently from different nodes and both writes will set the right values.

You can't do that with PG JSONB unless you lock the row for reading...
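The lost-update hazard can be sketched in Python (a simulation with plain dicts, not real Mongo or Postgres client code): two writers that each read-modify-write the whole document clobber each other, while path-level writes in the style of Mongo's $set to different keys both land.

```python
import copy

store = {"doc": {"b": 0, "c": 0}}

def whole_document_update(key, value):
    # Read-modify-write of the entire JSON value, as with an unlocked
    # read followed by rewriting the full jsonb column.
    snapshot = copy.deepcopy(store["doc"])
    snapshot[key] = value
    return snapshot

# Two nodes read concurrently, then both write back: last write wins,
# and the other node's update silently disappears.
v1 = whole_document_update("b", 1)
v2 = whole_document_update("c", 2)
store["doc"] = v1
store["doc"] = v2
lost_update = store["doc"]          # b's write vanished

# Path-level updates ($set-style) touch only the named key, so
# concurrent writes to different keys compose correctly.
store["doc"] = {"b": 0, "c": 0}
store["doc"]["b"] = 1               # node 1 sets a.b
store["doc"]["c"] = 2               # node 2 sets a.c
print(lost_update, store["doc"])
```

Avoiding the first outcome with a whole-value column requires serializing the writers, e.g. locking the row across the read and the write.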

callalex | 1 year ago

Yes but that simplified write complexity means you are pushing a ton of read complexity out to your application.

jeltz | 1 year ago

A JSON object which is 100 MB after compression is quite a huge thing.