top | item 30294259

etirk | 4 years ago

Glad you asked about the JSON datatype. The Postgres JSON type is a great addition and certainly works well, especially in situations where you have some common traits shared across something like 'products' (e.g. price): store those common traits in columns, then use the JSON type to store the uncommon attributes (e.g. 'flavor').
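A minimal sketch of that hybrid pattern in Postgres (the `products` table, column names, and values are invented for illustration):

```sql
-- Common traits as regular columns; uncommon ones in a JSONB document.
CREATE TABLE products (
    id    serial  PRIMARY KEY,
    name  text    NOT NULL,
    price numeric NOT NULL,
    attrs jsonb   NOT NULL DEFAULT '{}'
);

INSERT INTO products (name, price, attrs)
VALUES ('Seltzer', 1.99, '{"flavor": "lime", "carbonated": true}');

-- Query a JSON attribute alongside the regular columns.
SELECT name, price, attrs->>'flavor' AS flavor
FROM products
WHERE attrs->>'flavor' = 'lime';
```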

Where SFSQL really helps is:

- When new attributes are created that need indexing, you still have to know about those attributes before you can index them. An app might let users create their own attributes on the fly. In SFSQL all new attributes are indexed.

- If that JSON document contains some deeper structures (e.g. 'Brand', which contains 'Address', 'Support Phone', etc.) that you want to pull out into their own tables (now or eventually). SFSQL stores all objects (aka nodes in the document) as objects.

- SFSQL updates individual attributes of that document independently of the other attributes, meaning you could even nest a 'counter' within the original document and constantly update it efficiently.

- Simplicity. You can still store anything (just like you would in a JSON type column) yet you gain the ability to reference objects as objects and query (even new attributes) without first indexing.
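For contrast, the Postgres side of the first and third points might look like this (assuming the hypothetical `products` table with a `jsonb` column named `attrs`; key names are made up):

```sql
-- Indexing a JSON attribute with an expression index requires
-- knowing its key up front:
CREATE INDEX products_flavor_idx ON products ((attrs->>'flavor'));

-- A GIN index does cover arbitrary keys, but only for containment
-- and existence operators, not the full range an expression index supports:
CREATE INDEX products_attrs_idx ON products USING GIN (attrs);

-- Updating one nested 'counter' attribute rewrites the whole
-- document value rather than just that attribute:
UPDATE products
SET attrs = jsonb_set(attrs, '{views}',
                      to_jsonb(COALESCE((attrs->>'views')::int, 0) + 1))
WHERE id = 1;
```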

CoffeeOnWrite | 4 years ago

How do you protect against user input that produces a psychopathic volume of indexes?

etirk | 4 years ago

That's a problem that standard SQL tables encounter - chasing new columns with new or modified indexes (more info in our FAQ). SFSQL never runs into that problem, since its indexes are static in structure: no new indexes are created, even if every attribute name were unique. The result is very consistent query performance.

rmbyrro | 4 years ago

I guess that would be on the developer's side to sanitize/limit/clean in some way, like limiting the number of key-value pairs, object depth, etc.?
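One sketch of that kind of guard in Postgres (the trigger function, table name, and the limit of 50 keys are all invented for illustration; a depth limit would need an additional recursive check):

```sql
-- Hypothetical trigger that rejects documents with too many top-level keys.
CREATE FUNCTION limit_attrs() RETURNS trigger AS $$
BEGIN
    IF (SELECT count(*) FROM jsonb_object_keys(NEW.attrs)) > 50 THEN
        RAISE EXCEPTION 'attrs has too many top-level keys';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER products_limit_attrs
    BEFORE INSERT OR UPDATE ON products
    FOR EACH ROW EXECUTE FUNCTION limit_attrs();
```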