At what point did HTTP and JSON become synonymous with "simple"? Yes, a protocol and a data format can be convenient when applied to a specific problem, but that doesn't mean that everything should use them.
I've seen people refuse to use LDAP, and request a JSON (not REST) API instead, thus disregarding the security benefits of using a protocol that has twenty years of experience dealing with access control and authentication.
Postgres has a tremendously powerful and efficient network protocol. There is great value in using and understanding it, because not every problem is a nail that can be hammered with a few GET requests.
> I've seen people refuse to use LDAP, and request a JSON (not REST) API instead, thus disregarding the security benefits of using a protocol that has twenty years of experience dealing with access control and authentication.
Not really. HTTP/JSON is just a transport mechanism, and LDAP's own transport is not particularly secure either (it just uses SSL). The layers of the protocol above the transport can simply use HTTP/JSON instead, as multiple existing LDAP-to-JSON gateways show. No security benefits are lost.
> There is great value in using and understanding it, because not every problem is a nail that can be hammered with a few GET requests.
That would be a good argument against a proposal to replace the existing protocol.
Pointless extra API layers are why our software is slow.
What is the reasoning for this except "simple access"? It's not simple when it adds another API on top of an already existing one, which then needs to be kept in sync. If Android cannot use the normal binary API (which is nowadays pretty secure, compact, and bug-free), then Android needs to be fixed.
Why would it be an extra API layer? This appears to be building an HTTP API directly onto postgres, so that instead of:
app <- http -> web middleware <-> libpq <-> postgres
You would just have:
app <- http -> postgres
HTTP is also pretty secure, compact, and bug-free, and has the added advantages of being the universal firewall tunnelling protocol and implementing every authentication mechanism ever invented. It also does compression-on-the-wire, which postgres still doesn't have for its byte-packed protocol, and you don't need as much extra code to interact with it because everybody has an http client already.
What is the reasoning for using the postgres protocol? I'm not really seeing any advantages compared to HTTP.
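The compression claim above is easy to quantify. A minimal sketch (Python stdlib; the result set is made up for illustration) of what compression-on-the-wire buys for a typical repetitive JSON response:

```python
import gzip
import json

# A made-up result set: repeated key names on every row, as in a
# typical JSON API response.
rows = [{"id": i, "name": f"user{i}", "active": True} for i in range(1000)]
payload = json.dumps(rows).encode("utf-8")

# What a server would send with "Content-Encoding: gzip".
compressed = gzip.compress(payload)

print(f"raw: {len(payload)} bytes, gzipped: {len(compressed)} bytes")
```

The repeated key names mean the gzipped body is typically a small fraction of the raw one, which is exactly the case the byte-packed postgres protocol doesn't cover.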
I don't think this will catch on, nor do I think it really should. Nearly all applications out there need a robust security layer to keep people from messing with things they shouldn't have permissions to.
Additionally, you should always validate business logic on the back end. Otherwise it's trivial to, say, give yourself a 10% raise. Or confirm that you got the new high score.
Even if used as a backend protocol, there's just way too much overhead in HTTP requests and responses.
I can see a use for something like this as a dev tool, if it's only reachable from a private dev network, but aside from that I'm not sure it's that worthwhile.
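The overhead concern is easy to make concrete. A sketch comparing HTTP framing against a tiny query result (the header set is illustrative, roughly what a minimal server would emit):

```python
# A tiny query result and a plausible minimal HTTP/1.1 response around it;
# the exact headers are illustrative, not prescriptive.
result = b'[{"count": 42}]'
http_response = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: application/json\r\n"
    b"Content-Length: 15\r\n"
    b"Connection: keep-alive\r\n"
    b"\r\n"
) + result

overhead = len(http_response) - len(result)
print(f"payload: {len(result)} bytes, framing adds: {overhead} bytes")
```

For small, frequent queries the framing dwarfs the data; for bulk result sets it amortizes away, which is why the overhead argument cuts differently depending on the workload.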
Theoretically postgres already has a robust layer of security and validation, although it may not be used for the typical web app. It seems questionable to add API-level checks when the database already implements the same security features in a more tested manner. I don't want to set two levels of permissions for exactly the same thing that may behave differently.
Why not expose something similar to a prepared statement with the HTTP API? That would let you define input data types, and only queries that have been explicitly enabled could be run.
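A sketch of that whitelist idea (Python, with sqlite3 standing in for postgres; the statement names, schema, and type declarations are all invented for illustration):

```python
import sqlite3

# Only statements registered here can be run at all; each one declares
# the parameter types it accepts.
PREPARED = {
    "user_by_id": ("SELECT name FROM users WHERE id = ?", (int,)),
    "users_by_status": ("SELECT name FROM users WHERE active = ?", (bool,)),
}

def run_prepared(conn, name, params):
    """Execute a whitelisted statement after checking parameter types."""
    if name not in PREPARED:
        raise KeyError(f"unknown statement: {name}")
    sql, types = PREPARED[name]
    if len(params) != len(types) or not all(
        isinstance(p, t) for p, t in zip(params, types)
    ):
        raise TypeError("parameter types do not match the declaration")
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, active BOOL)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 1), (2, 'bob', 0)")

print(run_prepared(conn, "user_by_id", (1,)))  # [('alice',)]
```

Arbitrary SQL never reaches the server; the HTTP layer only ever names a statement and supplies typed parameters.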
Having a new application-layer transport doesn't impact any of these things. We're talking about how queries and their results are exchanged between a client and the SQL server here.
SQL is a pain to parse; having a JSON API would make things much more hackable.
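For illustration, a toy translator for a deliberately constrained JSON query shape (the query format is invented; a real API would also have to validate column names before interpolating them):

```python
# A deliberately tiny JSON query shape: {"table": ..., "where": {...}, "limit": n}.
ALLOWED_TABLES = {"users", "orders"}

def json_to_sql(query):
    """Translate the toy JSON query into parameterised SQL."""
    table = query["table"]
    if table not in ALLOWED_TABLES:
        raise ValueError(f"unknown table: {table}")
    where = query.get("where", {})
    params = list(where.values())
    sql = f"SELECT * FROM {table}"
    if where:
        # NOTE: column names are interpolated here; a real implementation
        # must whitelist them just like the table names.
        sql += " WHERE " + " AND ".join(f"{col} = ?" for col in where)
    if "limit" in query:
        sql += " LIMIT ?"
        params.append(int(query["limit"]))
    return sql, params

print(json_to_sql({"table": "users", "where": {"active": 1}, "limit": 10}))
# ('SELECT * FROM users WHERE active = ? LIMIT ?', [1, 10])
```

Handling a closed JSON shape like this is a few lines; accepting raw SQL means embedding a full parser and trusting it.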
This is quite useful to have. Several tech-oriented companies automatically create a data service when a new database/table needs to be exposed. There's no reason why PostgreSQL cannot do the same.
I'd imagine it will look something like this when it is finished: http://blog.dreamfactory.com/add-a-rest-api-to-any-sql-db-in.... The concept is similar, although I've not used it myself. Given that it is open source, perhaps PostgreSQL can learn from the user experience this product has to offer.
I work in Business Intelligence (BI), building data warehouses, and I think such functionality could be useful. I see many posts with security concerns, but in a BI/analytics environment, where you don't really modify data but just analyse it, being able to access the data without an application layer could be very useful.
IMHO one of the reasons BI departments struggle to deliver is the technical debt acquired over time. A big part of this debt is in the application layer. The data exists, but you need to jump through modelling tools to show it to your end user.
I think there is immense value in extending the capabilities of an SQL data store so that it can communicate over HTTP.
After building an extremely complex web app, which, at the end of the day, is CRUD with tonnes of business logic, I have felt the need for the following:
* Not being forced to write a JSON API layer for data coming straight out of a DB table
* Not having to design & implement SQL semantics in an HTTP API, i.e. limit & offset, or JOINing with a table to get records of an associated model, or filtering using complex logic. Basically anything that would've been straightforward if written as a bunch of SQL queries.
* Over time, as our app has grown, we have found ourselves implementing a lot of CHECK and TRIGGER constraints on the DB to ensure no bug in the app layer messes up the data invariants, EVER. Basically, sometimes I wish I could've written my app logic in the DB itself and been done with it. The old approach of exposing a custom procedure (PL/pgSQL) for each business function.
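The constraint approach in the last point can be sketched with sqlite3 standing in for postgres (the schema and the 10% rule are invented for illustration; postgres spells the trigger differently, via a PL/pgSQL function):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The invariant lives in the schema: no app bug can store a negative salary.
conn.execute("""
    CREATE TABLE employees (
        id INTEGER PRIMARY KEY,
        salary INTEGER NOT NULL CHECK (salary >= 0)
    )
""")
# Trigger: reject any single raise of more than 10%.
conn.execute("""
    CREATE TRIGGER cap_raise BEFORE UPDATE OF salary ON employees
    WHEN NEW.salary > OLD.salary * 1.10
    BEGIN
        SELECT RAISE(ABORT, 'raise exceeds 10% cap');
    END
""")
conn.execute("INSERT INTO employees VALUES (1, 1000)")

conn.execute("UPDATE employees SET salary = 1100 WHERE id = 1")  # ok: exactly 10%
try:
    conn.execute("UPDATE employees SET salary = 2000 WHERE id = 1")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)  # the invariant holds no matter what the app does
```

Whatever client talks to the database, buggy app layer or not, these rules cannot be bypassed.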
Actually, taking this a step further, I wonder why we need an OS stack in the first place. Just boot directly into the DB server and let it use the bare hardware to deliver extreme performance.
> Over time, as our app has grown, we have found ourselves implementing a lot of CHECK and TRIGGER constraints on the DB to ensure no bug in the app layer messes up the data invariants, EVER. Basically, sometimes I wish I could've written my app logic in the DB itself and been done with it. The old approach of exposing a custom procedure (PL/pgSQL) for each business function.
Having worked with an application mostly written in (Oracle) PL/SQL: frankly, don't do that. It's a pain to maintain, a pain to test, a pain to version. And PL/SQL is fairly awful. For extra pain, try DOM manipulation in PL/SQL.
> Actually, taking this a step further, I wonder why we need an OS stack in the first place. Just boot directly into the DB server and let it use the bare hardware to deliver extreme performance.
That's what unikernels are for (http://openmirage.org/), though not exactly bare metal in this case.
> I like this idea in principle, but in practice I wonder about the implications of it. Normally the database is a sort of special-purpose component with its own resources and scaling methods. I think if someone actually did apply the full power of this system to build their whole web application logic into the database server, they might run into problems if/when they need to scale up, due to coupling the database and application layer. Everyone who didn't build their web application logic in the database would pay the performance penalty for parsing/authenticating requests in a scripting language instead of in compiled C code. The database layer is one possibly rare place where people will be counting milliseconds of overhead per request as a problem.
This.
While I'm unsure of what an HTTP query API would look like, it would be hugely useful. Every language has some sort of HTTP and XML or JSON library, and so do many monitoring tools.
I probably wouldn't use it for an application, but for back office things like monitoring and stats it would be great.
Love the new json/jsonb datatypes, and this would be a great way to use those.
I'd personally prefer pg to mongo, and even without the HTTP API I could see myself using it in the future; with an HTTP API there would seem to be no reason not to use pg.
This seems like it should live as a layer above postgres for a while, and after it gets widespread adoption the project could become more of a standard part of a postgres distribution.
That would allow some time to sort out the tricky aspects like authentication and access control, and also to see what the performance hit is like and find ways to minimize it.
I'd love to see a jsonapi (http://jsonapi.org) interface to postgres. I'm mainly building Ember apps at the moment, and the Rails layer is really just an interface straight onto the database (all the logic is client side).
I was thinking of a pg-as-a-service solution (got pgaas.com/pgsafe.com) not too different from this. Does the idea have any merit? Would people be interested?
Although it is already pretty easy with a couple of lines of Perl:
https://metacpan.org/pod/Catalyst::Plugin::AutoCRUD
https://metacpan.org/pod/App::AutoCRUD
http://www.slideshare.net/ldami/app-auto-crud
:)
How does using HTTP preclude that?
> Additionally, you should always validate business logic on the back end.
Uh huh? So how is talking to the back end going to prevent that?
Compression can reduce the data size overheads, and HTTP auth can deal with security issues too.
http://labs.frickle.com/nginx_ngx_postgres/