Every project on https://supabase.io uses PostgREST. It's an amazing tool, especially when combined with Row Level Security.
We have built client libraries for JavaScript, Rust, Python, and Dart, with a few more on the way thanks to all the community maintainers.
Supabase employs the PostgREST maintainer (Steve), who is an amazing guy. He works full-time on PostgREST, and there are a few cool things coming including better JSONB and PostGIS support.
I have been using Postgraphile (similar to PostgREST but for GraphQL) for many months. The great thing about this way of building systems is that you don't spend time writing glue code, just the real business logic.
But a big pain is maintaining your database code, for example version-controlling your functions. There are no suitable linters, and testing can't be done against the Postgres functions directly but has to go through GraphQL instead.
Using tools like this will save you months of development time! Though I agree there are some systems that would not be a good idea to implement this way.
Specifically, testing the SQL side with SQL (pgTAP) and having your SQL code in files that get autoloaded to your dev DB when saved. Migrations are autogenerated* with apgdiff and managed with sqitch. The process has its rough edges, but it makes developing with this kind of stack much easier.
* You still have to review the generated migration file and make small adjustments, but it lets you work with many entities at a time in your schema without having to remember to reflect each change in a specific migration file.
> Using things like this will save you months of development time!
When you need to integrate with 3rd parties though you're right back to writing traditional backend code. So now you have extra dependencies and a splintered code base.
Yes, this automates boilerplate which is awesome for small, standalone apps, but in my experience I haven't seen months of development time saved with these tools.
Regarding linters, check plpgsql_check[1]. Also, as snuxoll mentioned, for tests there's pgTAP. Here's a sample project[2] that has some tests using it.
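For a flavor of what pgTAP tests look like, here's a minimal sketch; the schema, table, and column names are made up for illustration:

```sql
-- A pgTAP test file, run with pg_prove. Everything is rolled back at the end.
BEGIN;
SELECT plan(3);

SELECT has_table('api'::name, 'todos'::name, 'api.todos should exist');
SELECT has_column('api', 'todos', 'done', 'todos should have a done flag');
SELECT results_eq(
  'SELECT count(*)::int FROM api.todos WHERE done',
  ARRAY[0],
  'no todos are done initially'
);

SELECT * FROM finish();
ROLLBACK;
```

Because the whole test runs inside a transaction that is rolled back, it leaves the database untouched.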
> But a big pain is to maintain your database code, by example the version control of your functions.
The solution I came up with is to have a git repository in which each schema is represented by a directory and each table, view, function, etc. by a .sql file containing its DDL. Every time I make a change in the database, I make the change in the repository. It doesn't automate anything, and it doesn't save me time in the process of modifying the database (it's actually a lot of extra work), but I think it's worth it. If I want to know when, who, why and how a table has been modified over the last 2 years, I just check the logs, commit dates, authors and messages, and display the diffs if I want to see exactly what changed.
I don't use postgraphile myself, but as the author of a schema diff tool, I know a lot of people use schema diff tools to manage function changes automatically, and it seems to work pretty well for them.
I've looked quite extensively at Postgraphile, and the heavy dependency on database functions and SQL is an issue. It's really hard to write tests, and SQL itself is not the greatest programming language. The whole setup lacks so many of the affordances of modern environments.
PostgREST is great and really reduces the boilerplate around building REST APIs. They recommend implementing access rules and extra business logic using PostgreSQL-native features (row level security, functions etc) but once you get your head around that, it really speeds things up!
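To make that concrete, here's a minimal sketch of the row-level-security setup PostgREST-style apps lean on. The table, role, and column names are hypothetical, and note that the exact GUC name for JWT claims has varied across PostgREST versions (older releases exposed per-claim settings like `request.jwt.claim.sub`):

```sql
-- A role PostgREST switches into for authenticated web requests.
CREATE ROLE web_user NOLOGIN;

CREATE TABLE todos (
  id    serial PRIMARY KEY,
  -- default owner to the JWT "sub" claim PostgREST sets per request
  owner text NOT NULL DEFAULT current_setting('request.jwt.claim.sub', true),
  task  text NOT NULL
);

ALTER TABLE todos ENABLE ROW LEVEL SECURITY;

-- Each user only sees and modifies their own rows; with only USING given,
-- the same expression is also applied as the WITH CHECK for writes.
CREATE POLICY todos_owner ON todos
  USING (owner = current_setting('request.jwt.claim.sub', true));

GRANT SELECT, INSERT, UPDATE, DELETE ON todos TO web_user;
```

Once this is in place, the access rules travel with the data: any client connecting as `web_user` gets the same enforcement, not just the REST API.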
If you're interested in playing around with a PostgREST-backed API, we run a fork of PostgREST internally at Splitgraph to generate read-only APIs for every dataset on the platform. It's OpenAPI-compatible too, so you get code and UI generators out of the box (example [0]).
There's a confusion that always comes up whenever PostgREST is discussed on HN, which is many times at this point: the misconception that you use PostgREST to remove your existing stack entirely and do everything in SQL. This is not true; you're not going to send emails from SQL or run entire multi-GB ETL reductions during an HTTP request.
If you want an API call to kick off some external processing, then insert that job into a queue table and do the same thing you always did before, consume the queue out of band and run whatever process you want.
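A sketch of that queue-table pattern, with illustrative names (PostgREST would expose the function below as `POST /rpc/enqueue_email`):

```sql
-- The API call only records the job; a worker outside the request path runs it.
CREATE TABLE job_queue (
  id       bigserial PRIMARY KEY,
  kind     text NOT NULL,            -- e.g. 'send_email'
  payload  jsonb NOT NULL,
  taken_at timestamptz
);

CREATE FUNCTION enqueue_email(recipient text, subject text) RETURNS bigint AS $$
  INSERT INTO job_queue (kind, payload)
  VALUES ('send_email', jsonb_build_object('to', recipient, 'subject', subject))
  RETURNING id;
$$ LANGUAGE sql;

-- The out-of-band worker claims jobs safely, even with many workers:
-- SELECT * FROM job_queue WHERE taken_at IS NULL
--   FOR UPDATE SKIP LOCKED LIMIT 1;
```

`FOR UPDATE SKIP LOCKED` lets multiple workers poll the same table without stepping on each other's jobs.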
Another one that comes up is that somehow postgrest is "insecure". Of course, if you invert the problem, you see that postgrest is actually the most secure because it uses postgres' native security system to enforce access. That access is enforced onto your API, and you know what, it's enforced on every other client to your DB as well. That's a security unification right there. That's more secure.
What PostgREST does is let you stop spending months of time shuttling little spoonfuls of data back and forth from your tables to your frontend. It's all boilerplate; install it in a day, get used to it, and move on to all those other interesting, possibly-out-of-band tasks that you can't get to because the API work is always such a boring lift.
I am clearly missing something, but at that point, why not just allow the client (presumably a web app running in a browser, in most cases) to make requests via SQL? You would of course want to add authentication/authorization layers. You'd want record- and field-level restrictions. But if the entirety of your data is in an RDBMS, why put this "REST" paradigm in the middle? Why not just admit you're querying a database, and query the database? If the only reason is access control, it seems a simpler solution must be possible.
> I am clearly missing something, but at that point, why not just allow the client (presumably a web app running in a browser in most cases) to just make requests via SQL?
Because REST-over-HTTP is low impedance with browser-based web apps, whereas SQL is...not.
Plus, with REST, you abstract whether all, some, or none of your data is an RDBMS; the fact that you've implemented something with PostgREST doesn't mean everything related to it and linked from it is implemented the same way.
It's difficult to grant an untrusted client a level of raw SQL access that won't let them do harm.
For example, even without access to any specific table or function, even with rate limits, I can denial-of-service the server by asking it to make a massive recursive pure computation.
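One way to sketch the kind of pure computation meant here: this query touches no tables and needs no privileges, yet it will burn CPU and memory until the server kills it or falls over.

```sql
-- An unbounded recursive CTE: each step produces another row forever.
WITH RECURSIVE bomb(n) AS (
  SELECT 1
  UNION ALL
  SELECT n + 1 FROM bomb   -- no termination condition
)
SELECT count(*) FROM bomb; -- never completes on its own
```

This is exactly why raw SQL access needs things like `statement_timeout` and `work_mem` limits at minimum, and why most setups don't expose it at all.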
I often thought the same about GraphQL. Instead of building a highly curated API with business logic, optimisations and other general stuff, you are building a direct interface around how your data looks in the DB: a 1:1 mapping and CRUD interface. In software we are taught to prefer loose coupling over tight coupling, yet here people are preferring the tight coupling.
I’m not a fan of this as the user interface (API) has a tight coupling with how you store your data. Then like you say, why not just speak SQL as you have all the same issues, essentially multiple clients writing to/owning the same tables.
SQL is very complex (T-SQL is Turing-complete), meaning you can do a lot of damage: you can bring servers to a halt if it goes unchecked. It's pretty hard to restrict what can be done while keeping flexibility.
Postgres isn't 100% set up to let fully untrusted clients do stuff. For example, right now there is no access control around listen/notify. You'd have to do some baseline query pre-processing to be completely secure, which could be a non-trivial task depending on just what you want to let clients do.
I've done it in read-only internal business apps though, it's great.
I really love postgrest!
I did a lot of Django before and found myself constantly wrestling with the ORM to express what I wanted, then writing endless pages of serializers and viewsets and having to debug why they were always slow.
PostgREST has a learning curve, but the performance boost over Django is huge, and I can use more advanced DB features like triggers more easily and in a way that's native to the overall project.
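As an example of the trigger-style logic meant here, this is a minimal sketch (the `todos` table and its `updated_at` column are hypothetical; on Postgres versions before 11, write `EXECUTE PROCEDURE` instead of `EXECUTE FUNCTION`):

```sql
-- Keep an updated_at column current natively in the database,
-- so every client gets it for free, not just one ORM.
CREATE FUNCTION touch_updated_at() RETURNS trigger AS $$
BEGIN
  NEW.updated_at := now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER touch
  BEFORE UPDATE ON todos
  FOR EACH ROW EXECUTE FUNCTION touch_updated_at();
```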
I've witnessed a fully functional HTTP client wrapper written in PL/SQL on Oracle. It worked very well against REST and SOAP APIs. The codebase was a mission-critical system for a large financial organization.
Development speed was an advantage, but the trade-off was that good database developer skills are still rare, and you had to grow and teach other [junior] devs for years. They used to stick with the team much longer than the average developer, but I still believe it is a disadvantage.
As for PostgREST, the biggest issue I have with it is a DB server being available publicly on the net; I usually try my best to either place DB servers in a private network or "hide" them.
Other than that argument, it's a pleasure to develop at that low level. SQL is an important skill, and it's strange that so many devs know it only superficially.
> What about PostgREST, the biggest issue I have with it is a DB server being available publicly in the net, I usually try my best to either place DB servers in the private network or "hide" them.
I've heard this argument many times (and thought it myself), but when dealing with PostgREST it seems that if you have a proper JWT setup (which is how PostgREST handles AuthN) and use Postgres' security features (like row-level security), perhaps it should not be thought of as a rule anymore.
IMO it seems like having the API layer only assume a role and having the DB handle AuthZ would mean better security, since you can implement more fine-grained rules that are actually verified by the part of the stack that already knows the data structure.
It's also not allowing arbitrary SQL; it's translating from HTTP to SQL, so nobody can do "SET ROLE 'admin';" unless you write a specific SQL function that does that.
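Roughly, the role setup behind this (simplified; the role names below are the conventional ones from the PostgREST docs, not requirements):

```sql
-- PostgREST connects as one limited "authenticator" role and, per request,
-- switches to the role named in the JWT. It can only switch to roles it
-- has been granted, so an unlisted 'admin' role is simply unreachable.
CREATE ROLE authenticator NOINHERIT LOGIN PASSWORD 'secret';
CREATE ROLE anon NOLOGIN;      -- used for unauthenticated requests
CREATE ROLE web_user NOLOGIN;  -- used when a valid JWT names it

GRANT anon TO authenticator;
GRANT web_user TO authenticator;

-- Per request, inside the transaction, PostgREST effectively runs:
--   SET LOCAL ROLE web_user;
```

`SET ROLE` to anything not granted to `authenticator` fails at the database level, so the "admin" escalation has to be impossible by construction, not by request filtering.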
Surprisingly enough, crt.sh is pretty much all built in PL/SQL and PL/pgSQL.[1] I'm sure there are advantages to running it all from the database engine itself, but finding devs who can work on it must not be easy.
I made something[1] similar in Rust, inspired by this project.
I modified the syntax to make it more ergonomic when grouping filters with parentheses for complex, nested filters.
I also added joining of tables, renaming of columns, and more.
Big thumbs up for PostgREST. While it hasn't been completely issue free for me, it's a great project making a ton of progress and the community is very helpful and responsive.
I found myself butting heads with the limitations of the API quite a bit, but since it has a wonderful RPC feature, you can always drop a custom endpoint to do what you need to do without completely ejecting.
You can also surface other systems with PostgREST, using foreign data wrappers. This is great because you can use Postgres's rock solid role system to manage access to them. FDWs are surprisingly easy to write using Multicorn, and you can get pretty crazy with them if you're fronting a read replica (which you should be doing anyway once past the proof of concept stage).
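A sketch of what that looks like, using the built-in `postgres_fdw` rather than a Multicorn wrapper for brevity (all server, table, and role names here are hypothetical):

```sql
-- Front a read replica through a foreign data wrapper, then gate access
-- to it with ordinary Postgres grants, same as any local table.
CREATE EXTENSION postgres_fdw;

CREATE SERVER replica FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'replica.internal', dbname 'analytics');

CREATE USER MAPPING FOR web_user SERVER replica
  OPTIONS (user 'readonly', password 'secret');

CREATE FOREIGN TABLE events (id bigint, payload jsonb)
  SERVER replica
  OPTIONS (schema_name 'public', table_name 'events');

GRANT SELECT ON events TO web_user;  -- the role system gates the remote data
```

PostgREST then exposes `events` like any other table in the exposed schema, even though the rows live elsewhere.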
Anyone here using this in production? I’ve always thought it was a neat idea, but haven’t heard of anyone building their API-oriented-SaaS company around it.
I've used it for admin tools at real jobs. I also once powered a read-only "consumer" (writes came from offline web scraping) hobby project with it to make a no-backend-code React app. I see no reason it couldn't work in prod, especially if you had a CDN or HTTP cache in front.
What I found most striking is that it relies on postgres for just about everything. Content obviously (sometimes straight from tables, sometimes via db views), but also users and permissions. I'd first assumed there would be a config file a mile long but it really is all Postgres.
I've learned you're better off building software that matches exactly what you need right now and changing it over time as requirements change. These tools are great for a prototype, but you pay the debt in the future. This is coming from someone who built a system just like PostgREST for a different tech stack.
I see how it can replace 80% of the glue in my typical project. However, I am not sure how to handle the remaining, difficult 20%, particularly where to put the business logic.
I don't want to have all my business logic in the database. I don't want to write all business logic in SQL. SQL is not a good language for this, and the tooling is suboptimal: editors, version control, libraries, testing.
Is there a way to define stored procedures and triggers in some host language (as with SQLite)? Or is the recommended way to add an extra API handled by a language of choice? But I don't want to do the same things in two different ways (i.e. querying via PostgREST and querying the DB directly).
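For what it's worth, Postgres does ship procedural languages beyond SQL and PL/pgSQL: PL/Python, PL/Perl, PL/Tcl, plus extensions like PL/V8 for JavaScript. A minimal sketch in PL/Python (the `slugify` function is a made-up example; the `plpython3u` extension must be installed and is superuser-only to create):

```sql
CREATE EXTENSION plpython3u;

-- The function body is ordinary Python, running inside the server.
CREATE FUNCTION slugify(title text) RETURNS text AS $$
  import re
  return re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')
$$ LANGUAGE plpython3u;

-- SELECT slugify('Hello, World!');  -- hello-world
```

It doesn't solve the tooling story, but it does mean "business logic in the database" is not the same thing as "business logic in SQL".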
kiwicopple | 5 years ago
We recently benchmarked PostgREST for those interested: https://github.com/supabase/benchmarks/issues/2
nb: i'm a supabase cofounder
steve-chavez | 5 years ago
Clickable links to the Supabase client libraries, for those interested:
- JS: https://github.com/supabase/postgrest-js/
- Dart: https://github.com/supabase/postgrest-dart
- Rust: https://github.com/supabase/postgrest-rs
- Python: https://github.com/supabase/postgrest-py
Also, you can see how they're used together on https://pro.tzkt.io.
hrishi | 5 years ago
Did you run any loadtests that stressed the system enough to start dropping/failing requests? I'm wondering where that threshold is.
ErunamoJAZZ | 5 years ago
ruslan_talpa | 5 years ago
awb | 5 years ago
steve-chavez | 5 years ago
[1]: https://github.com/okbob/plpgsql_check
[2]: https://github.com/steve-chavez/socnet/blob/master/tests/ano...
ggregoire | 5 years ago
djrobstep | 5 years ago
snuxoll | 5 years ago
tobyhede | 5 years ago
w1 | 5 years ago
mildbyte | 5 years ago
[0] https://www.splitgraph.com/splitgraph/oxcovid19/latest/-/api...
dang | 5 years ago
chrisweekly | 5 years ago
That is a pretty awesome feature to be mentioning as an "oh yeah, also..."! :) Bookmarked.
uhoh-itsmaciek | 5 years ago
michelpp | 5 years ago
Fire-Dragon-DoL | 5 years ago
That would be a game-changer for me; a lot of software can be skipped with such a thing.
sopooneo | 5 years ago
dragonwriter | 5 years ago
piaste | 5 years ago
tiew9Vii | 5 years ago
tehlike | 5 years ago
CuriouslyC | 5 years ago
xwdv | 5 years ago
talolard | 5 years ago
otar | 5 years ago
SahAssar | 5 years ago
somurzakov | 5 years ago
robertlagrant | 5 years ago
dang | 5 years ago
2019 https://news.ycombinator.com/item?id=21435195
2017 https://news.ycombinator.com/item?id=13959156
2015 https://news.ycombinator.com/item?id=9927771
2015 https://news.ycombinator.com/item?id=8831960
fulafel | 5 years ago
Also
- written in Haskell
- a major building block for YC funded startup supabase.io (https://news.ycombinator.com/item?id=23319901)
xav0989 | 5 years ago
[1]: https://github.com/crtsh/certwatch_db
ivanceras | 5 years ago
[1]: https://github.com/ivanceras/restq
jensneuse | 5 years ago
michelpp | 5 years ago
fulafel | 5 years ago
np_tedious | 5 years ago
SahAssar | 5 years ago
CuriouslyC | 5 years ago
xgenecloud | 5 years ago
- MySQL / MariaDB
- SQL Server / MSSQL
- SQLite
- and Postgres
https://github.com/xgenecloud/xgenecloud/
We do support instant GraphQL as well!
(Full disclosure: I'm the creator.)
derefr | 5 years ago
np_tedious | 5 years ago
majkinetor | 5 years ago
https://github.com/majkinetor/postgrest-test
Also on Nix, by steve-chavez:
https://github.com/steve-chavez/postgrest-benchmark
codenesium | 5 years ago
nickthemagicman | 5 years ago
This is what it takes to implement user auth.
https://postgrest.org/en/v3.2/examples_users.html
janci | 5 years ago
specialist | 5 years ago
I'd rather have MIME type(s) for result sets. So we can tunnel over HTTPS.
Postgres Wire Protocol https://crate.io/docs/crate/reference/en/4.3/interfaces/post...
Tabular Data Stream https://en.wikipedia.org/wiki/Tabular_Data_Stream