Actually, Postgres does have a JSON datatype. The only extra thing it does is ensure the data is valid JSON, but it is still technically incorrect to say that there is no JSON type.
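To illustrate the validation point (a minimal sketch; the `events` table and its columns are hypothetical), a `json` column rejects malformed input at insert time:

```sql
-- Hypothetical table using Postgres's json type
CREATE TABLE events (
    id      serial PRIMARY KEY,
    payload json NOT NULL
);

-- Valid JSON is accepted
INSERT INTO events (payload) VALUES ('{"metric": "wifi", "value": 42}');

-- Malformed JSON is rejected before it is ever stored
INSERT INTO events (payload) VALUES ('{"metric": broken');
-- ERROR:  invalid input syntax for type json
```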
In related news (here: https://news.ycombinator.com/item?id=5589593 ), the Opa framework which is based on Node.js just released support for both MongoDB and Postgres from the same application source.
Wouldn't it be a lot nicer to just use a schema and serializer? I know that's not really the point of this article, but if you're trying to get a SQL DB to operate like a NoSQL store, it feels like a lot of hand-waving.
It really depends on the data structure. If you've got something that can have an unknown number of variables in it and you're already serving up JSON, then I think using JSON is the way to go.
Another use-case: We've got hundreds of clients that send health statuses for a ton of different metrics every 10 minutes. Stuff like Wifi strength, exceptions caught/uncaught, various errors and crash reports, blah blah. Anyway, we need a flexible store for all of this stuff because we're always adding more metrics. Whatever the clients send as their request body gets added as a JSON object.
We also want to dynamically display all of these metrics. We can literally grab the data as JSON and make the keys table column headers in an HTML view. Adding new metrics can automatically be reflected in both the database and in our HTML views. We can query against new fields without changing schemas or business logic.
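A rough sketch of that pattern (assuming Postgres 9.3+ for the `->>` operator and `json_object_keys`; the `health_reports` table and metric names are made up for illustration):

```sql
-- Hypothetical health-status table; each client request body is stored as-is
CREATE TABLE health_reports (
    client_id   integer,
    reported_at timestamptz DEFAULT now(),
    data        json
);

-- Query a metric that was never declared in the schema
SELECT client_id, data->>'wifi_strength' AS wifi_strength
FROM health_reports
WHERE data->>'wifi_strength' IS NOT NULL;

-- Enumerate the keys across all reports to build column headers dynamically
SELECT DISTINCT json_object_keys(data) FROM health_reports;
```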
If you can use a more general-purpose system like Postgres and get everything you need, then that's better than relying on special-purpose NoSQL systems. The better question to ask is: why use a NoSQL system when Postgres works just fine and is useful in more situations?
The history of databases is a history of absorbing special-purpose systems into general-purpose SQL systems. XML databases were once a major topic (albeit misguided); now it's just a feature. The same goes for columnar storage, OO databases, and geospatial (Postgres is a leader in geospatial as well).
Because data integration is so incredibly valuable, it pushes strongly toward general-purpose systems and people dislike one-off special-purpose databases unless they deliver a huge amount of additional value.
Depends on your needs really. The querying capability in Postgres is so much nicer than in NoSQL DBs, and generally (or at least that's the case with Mongo) you don't get transactions.
[+] [-] andrewguenther|13 years ago|reply
http://www.postgresql.org/docs/devel/static/datatype-json.ht...
[+] [-] JohnDotAwesome|13 years ago|reply
Thanks for pointing out the error!
[+] [-] jvm|13 years ago|reply
Even that's not quite fair: it also has special syntax for queries within the JSON (`->>` etc.) and a bunch of functions: http://www.postgresql.org/docs/devel/static/functions-json.h...
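For example (a sketch assuming Postgres 9.3+, where these operators and functions are available):

```sql
-- '->' returns a json value, '->>' returns text, '#>>' walks a path as text
SELECT '{"a": {"b": 7}}'::json ->  'a';       -- {"b": 7}  (json)
SELECT '{"a": {"b": 7}}'::json ->> 'a';       -- {"b": 7}  (text)
SELECT '{"a": {"b": 7}}'::json #>> '{a,b}';   -- 7

-- One of the functions from the page linked above
SELECT json_array_length('[1, 2, 3]'::json);  -- 3
```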
[+] [-] hbbio|13 years ago|reply
[+] [-] audreyt|13 years ago|reply
[+] [-] film42|13 years ago|reply
[+] [-] joevandyk|13 years ago|reply
[+] [-] JohnDotAwesome|13 years ago|reply
[+] [-] semihandy|13 years ago|reply
[+] [-] jeffdavis|13 years ago|reply
[+] [-] jeffdavis|13 years ago|reply
It allows normal btree indexing of, for example, the values for some given key by using functional and/or partial indexes.
It also allows indexing of "contains", "contains key", "contains all of these keys" and "contains some of these keys" by using GiST and GIN indexes.
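A sketch of those index types, using hstore (where the containment/key operators and GIN support apply; the `health_reports` table and `attrs` column are hypothetical):

```sql
-- Btree via a functional index on the value stored under one key
CREATE INDEX idx_wifi ON health_reports ((attrs -> 'wifi_strength'));

-- Partial index: only rows that actually carry the key
CREATE INDEX idx_has_crash ON health_reports ((attrs -> 'crash_report'))
WHERE attrs ? 'crash_report';

-- GIN index supporting @> (contains), ? (has key),
-- ?& (has all of these keys), ?| (has any of these keys)
CREATE INDEX idx_attrs_gin ON health_reports USING gin (attrs);

-- A query that can use the GIN index
SELECT * FROM health_reports WHERE attrs @> 'status=>error';
```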
[+] [-] JohnDotAwesome|13 years ago|reply
[+] [-] pygy_|13 years ago|reply
http://www.postgresql.org/docs/9.2/static/hstore.html
You can also create custom indexing functions using the PL/$language extensions.
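One way that looks in practice (a sketch with a made-up PL/pgSQL function over a hypothetical hstore column; the function must be IMMUTABLE to be usable in an index):

```sql
-- Hypothetical immutable function deriving a value from the hstore
CREATE FUNCTION severity(attrs hstore) RETURNS text AS $$
BEGIN
    RETURN coalesce(attrs -> 'severity', 'none');
END;
$$ LANGUAGE plpgsql IMMUTABLE;

-- Index the function's result; the planner can use it for matching queries
CREATE INDEX idx_severity ON health_reports (severity(attrs));

SELECT * FROM health_reports WHERE severity(attrs) = 'critical';
```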
See here for some benchmarks comparing Mongo to several Postgresql options: https://wiki.postgresql.org/images/b/b4/Pg-as-nosql-pgday-fo...
MongoDB is not that fast.
[+] [-] joevandyk|13 years ago|reply
[+] [-] scubaguy|13 years ago|reply
[+] [-] unknown|13 years ago|reply
[deleted]