Love the idea of having APIs flow out of a single set of schema definitions. The Rails style of speccing a model, migrations, and controller/serializer or graphql types feels overly verbose and repetitive.
To me the biggest thing these groups could do to speed adoption is flesh out the feature development / test story. For instance, the postgraphile examples have development scripts that constantly clear out the DB and no tests. Compared to Rails, it's hard to imagine how you'd iterate on a product.
Are there other reasons this hasn't seen more widespread adoption? Is there some inherent architectural flaw or just not enough incremental benefit?
We've been using it for about a year in production across four projects and it's been a major boon for productivity, because it allows our frontend developers to build simple backend features on their own. I think there are a lot of use cases where it would be a great fit, and I'm sure there are a lot of companies that could save a lot of time by using it as part of their toolkit.
Also, it doesn't have to be all or nothing. For example, when an API endpoint needs multiple mutations inside a single transaction, we implement that endpoint in a custom backend and route between it and PostgREST using nginx.
I think the approach is fine for prototyping, but the architectural flaw that can make it a bad choice for production systems is that it ties your APIs to your data model. When you need to evolve your API or extend your data model, you get to choose between dealing with potentially massive data migrations and downtime, or adding a third layer of glue in the form of database views. You would probably need to adopt a rule of no direct-to-table access, with a separate schema of views per API version, and deal with the pain of rewriting the views for all supported API versions whenever you change your data model. Or scale the database horizontally, with views on foreign data wrappers that pull from the real data sources. Which is why I think this sort of solution is marketed as a way for a Database Administrator (singular) to expose an API, rather than as a scalable approach to application development.
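A minimal sketch of the view-per-API-version pattern described above (all schema, table and column names here are illustrative):

```sql
-- Base tables live in a private schema; clients never touch them directly.
create schema internal;
create table internal.orders (
  id    bigserial primary key,
  buyer text not null,
  total numeric not null
);

-- Each API version is a schema of views over the real tables.
create schema api_v1;
create view api_v1.orders as
  select id, buyer, total from internal.orders;

-- v2 can rename a field without breaking v1 clients:
create schema api_v2;
create view api_v2.orders as
  select id, buyer as customer, total from internal.orders;
```

The cost the parent describes is that every change to `internal.orders` may require revisiting the views in every supported `api_vN` schema.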
I actually think the one schema to rule them all approach is a bad one. Different parts of the stack have different use cases and require different data representations. I think it's a fundamental flaw many ORMs fall into.
As for PostgREST and GraphQL: queries and data transformations are proof that your DB schema shouldn't be forced onto the stack and onto the clients. I would argue that GraphQL strives for the exact opposite of a single schema and tends toward a 'clients should choose their own API' approach.
> Are there other reasons this hasn't seen more widespread adoption? Is there some inherent architectural flaw or just not enough incremental benefit?
I'm a fan of the concept and would love to have a middle layer as thin as postgrest, but authentication/authorization has been my roadblock. I still use node as an app server because of passport. I had not heard of the postgrest starter kit before today and will check out https://docs.subzero.cloud/athentication-authorization-flow/
It's just that the tools you linked didn't prioritize the development workflow from the point of view of a backend developer (code in files, git, tests, migrations), but from the frontend developer's perspective (a UI to create tables).
I've been using Postgraphile along with Apollo Client with code-generated types for queries. When I change the db schema, I get compile errors in my client. Feels like I'm cheating
> The Rails style of speccing a model, migrations, and controller/serializer or graphql types feels overly verbose and repetitive.
For the record, ActiveRecord models do not specify, for example, table schemas; those are loaded from the database at runtime. And migrations are not strictly necessary: they provide schema versioning with the possibility of rollback, something that AFAIK databases don't usually offer on their own. Rails (and Ruby) is actually so non-verbose and non-repetitive that a lot of people criticise it for that very reason, usually complaining about it being too "magical".
I haven't used Rails a lot, but in Django you get a lot more from the ORM models than just database queries.
The Django ORM is used pervasively throughout django and third party libraries. You can compose and reuse querysets in Templates, Form validation, authorization and so on. You can create a CRUD admin interface and a REST API with almost no extra configuration.
It also has a very mature migration system, meaning you can have your migrations in code, and apply them in production or in multiple environments.
One problem I expect with PostgREST (haven't used it yet) is that validation and authorization may suffer. At the very least this will have to be configured either as SQL data or as SQL stored procedures, with SQL not being the strong suit of most frontend people.
They probably have safeguards against SQL injection through their API. Logic validation will be harder, and Postgres only supports rudimentary permissions, so all of this has to be implemented in stored procedures as well.
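For what it's worth, simple validation can be declared without stored procedures, using plain constraints; a sketch with made-up table and rules for illustration:

```sql
create table orders (
  id       bigserial primary key,
  quantity integer not null check (quantity > 0),
  email    text not null check (position('@' in email) > 1)
);

-- Violations surface as errors at insert/update time:
-- insert into orders (quantity, email) values (0, 'nope');  -- rejected
```

More involved business rules would indeed need triggers or stored procedures, as the parent says.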
My main concern with PostgREST is that the database is stateful and persistent, and that you have to put more code in there. That means every deployment is going to be a migration, no matter what, and you'd better not forget any of the state changes you made in your development database. Version management will certainly be harder.
Please don't get me wrong. I am sure PostgREST has perfectly valid use cases, and I like Postgres a lot. I am just jotting down a few thoughts on why I would prefer a traditional application between database and frontend.
There are trade offs when you couple your front end to your database structure.
Additionally, if you are using a lossy intermediate query representation instead of plain SQL, you will end up losing a lot of the database's performance and find yourself doing complex joins on your front end.
> Are there other reasons this hasn't seen more widespread adoption?
This approach had widespread adoption in the 1990's, most notably via PowerBuilder[0].
> Is there some inherent architectural flaw or just not enough incremental benefit?
Yes, there is an inherent architectural flaw with two-tiered systems: they force you to place complexity either in the GUI or in the RDBMS. This often leads to "Jenga Programming" maintenance techniques, which cause the code base to take on a life of its own.
> Are there other reasons this hasn't seen more widespread adoption?
I think it's because PostgreSQL got major features for web development — like JSON and RLS — not too long ago and most developers are just catching up.
JSON was added back in 2013[1] and RLS back in 2016[2].
Separately, I've got a fairly advanced setup of schema/seed lifecycle flow in a Graphile-powered application that will likely be open-sourced at some point. The simplicity of the architecture makes these things relatively easy to spin up from scratch.
Any equivalent for MySQL? I have a legacy project on MySQL 5.1 that I need to keep up. I have been using Peewee to generate the structure of the DB, but I would like to make direct API calls the way PostgREST allows.
Inherent architectural flaws:
- Complete lack of separation of concerns: you can see this in the terrible idea of simply slapping a transformation procedure onto an unspecified part of the codebase. Every time you do this, you end up maintaining code that has no architecturally defined place anywhere. See: https://postgrest.org/en/v6.0/how-tos/casting-type-to-custom...
- There is no ORM that you can control. That doesn't mean no transformation is needed; it simply means you lose control over it. All the work that was once under your control is now handled by the database layer. Of course you can go to that layer and change the logic of the framework yourself (by sending strangely formatted HTTP requests according to the docs), but then you gain nothing by using this framework. This problem stems from the same lack of separation of concerns. Basically, the framework simply ignores the fact that there is an inherent object-relational impedance mismatch. The authors are aware of the mismatch, but they sweep it under the rug by saying:
"The PostgREST URL grammar limits the kinds of queries clients can perform. It prevents arbitrary, potentially poorly constructed and slow client queries. It’s good for quality of service, but means database administrators must create custom views and stored procedures to provide richer endpoints."
This will come back and bite you in the ass when you try to work with anything slightly complicated, like inheritance. So the authors basically dump every single person in the world into the category of poor programmers, and offer their excellence in place of your lack of skill or talent. Go figure! I am always very skeptical when I read something like this. See: https://postgrest.org/en/v6.0/api.html#resource-embedding
- No testability: pl/pgSQL is a really poor language to write unit/integration tests for, mostly because it's an ancient, procedural language with no native test frameworks. Of course you can use pl/Python and others, but then again, why would you, when you could write the same thing in Python without embedding the code in Postgres, and a Python ORM would give you the same performance at the database level.
- Lack of tools for migrations, etc. The list goes on.
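To make the criticism above concrete: the in-database transformations being referred to look roughly like PostgREST's computed-column pattern, where a function taking the row type acts as a virtual column (all names here are illustrative):

```sql
create table people (
  id         bigserial primary key,
  first_name text not null,
  last_name  text not null
);

-- PostgREST exposes this as an extra field on the resource,
-- e.g. GET /people?select=id,full_name
create function full_name(p people) returns text
language sql immutable as $$
  select p.first_name || ' ' || p.last_name
$$;
```

Whether this counts as a clean extension point or as logic with "no architecturally defined place" is exactly the disagreement in this thread.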
Somebody on our team put this in production. I guess this solution has some merits if you need something quick, but in the long run it turned out to be painful. It's basically SQL over REST. Additionally, your DB schema becomes your API schema, which means you either force one for the purposes of the other or you build DB views to fix that.
That's what VIEWs are for! Well, one use-case of VIEWs, anyways.
There's nothing wrong with the schema as the API since you can use VIEWs to maintain backwards compatibility as you evolve your product.
Put another way: you will have an API, and you will need to maintain backwards compatibility. Not exposing a SQL schema as an API does not absolve you of that or make it easier to be backwards-compatible.
You might argue that you could have server-side JSON schema mapping code to help with schema transitions, and indeed that would be true, but whatever you write that code in, it's code, and writing it in SQL rather than something else works just as well.
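As a concrete sketch of the VIEW-based compatibility argument (hypothetical schema and names): after renaming a column in the base table, a view keeps the old contract alive for existing clients:

```sql
-- The rename happens in the base table...
alter table app.users rename column name to full_name;

-- ...while the published API keeps its old shape:
create or replace view api.users as
  select id, full_name as name from app.users;
```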
Having a bit of experience with OCaml, I hoped to see what production-ready Haskell code looked like in this library. I tried to read some files of the project and... IMHO "production-ready" Haskell code is still not easily readable. For example, the main file for the tests:
I don't know, maybe it lacks comments? The code is really not easy to follow if you are not using Haskell 100% of your coding time.
While the library may work well in practice, it's a maintainability red flag and, by using this library, you rely on rare Haskell programmers for the future.
Often you do not want users to have access to a whole table, but only to posts made by the user, or posts addressed to the user. I could however see this replacing Excel apps. But then you will also have to generate the user interface for it to be useful. The developer should only have to specify the views; the rest can be automated. I once made such a tool in order to save a few hundred man-hours on a tight budget, and it worked fairly well. But for most apps you want to customize every layer.
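The per-user restriction described above maps naturally onto row-level security. A hedged sketch, assuming PostgREST's per-request JWT claim settings (the table, columns and claim name are illustrative):

```sql
alter table posts enable row level security;

-- Each user sees only their own posts, or posts addressed to them.
create policy posts_visibility on posts
  for select
  using (
    author_id::text    = current_setting('request.jwt.claim.sub', true)
    or recipient_id::text = current_setting('request.jwt.claim.sub', true)
  );
```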
I like this idea, especially for prototyping a web UI against an arbitrary existing dataset. PostgREST is much more full-featured and, as a few others have commented, "production-ready" (?), but if you're into this and looking for something a bit more naive but just as accessible, I wrote a similar utility to expose some Postgres data over HTTP: https://github.com/daetal-us/grotto
I'd say it has been production-ready for some years now. There are some documented cases of companies using it in production here: http://postgrest.org/en/v6.0/#in-production.
I am (slowly) working on a project with a similar tool (Postgraphile) to eliminate most of the CRUD stuff. One thing I always wondered: how would you version control the schema itself? I settled on Sqitch - https://sqitch.org/
You don't expose your tables directly; you expose a schema that consists only of views and stored procedures.
If you really need a totally different version, then you just expose a new schema, but more often it's the same situation as in the GraphQL ecosystem: you just add a new column/view/procedure and don't delete the old one. PostgREST has the same power to describe what you want as a GraphQL API would have (via its select parameter).
I'm using the JOOQ type-safe SQL generator with PostgreSQL. My application server build script runs PostgreSQL in a Docker container, creates the database and tables, applies all migrations (via Flyway), and then invokes JOOQ which connects to the database and creates Java classes based on the tables and columns in the database.
JOOQ mostly prevents SQL syntax errors, column name errors, column type errors, supplying the wrong number of arguments, etc. These become compile-time errors.
With PostgREST and other JSON APIs, you only get run-time errors. And you rely on test coverage to check code correctness.
I prefer compile-time errors to run-time errors. I find that software utilizing compile-time checks is easier to maintain.
PostgreSQL already gives you SQL syntax errors (try creating a VIEW with a misspelled SELCT), column type errors (try doing a `select 'asdf'::int;`), and wrong-number-of-arguments errors on a stored procedure call (try adding one more argument to `select int4_sum(2, 3);`).
Thanks to PostgreSQL's transactional DDL[1], you would get all of these errors at creation time, and without any change to your database if a migration is wrong. There's no need for a SQL codegen to get this safety; it's already included.
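The transactional-DDL point can be seen directly: a migration that fails partway leaves the schema untouched (object names are illustrative):

```sql
begin;
alter table api.todos add column due_date date;
-- If this view definition had a typo or referenced a missing column,
-- the error would abort the transaction, and the ALTER above
-- would be rolled back along with it.
create or replace view api.open_todos as
  select id, title, due_date from api.todos where done = false;
commit;
```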
Btw, PostgREST is not only a JSON API. Out of the box it supports CSV, plain text and binary output, and it's extensible to other media types[2]. If you need to output XML built with pg's XML functions, you can do that with PostgREST too.
I love the idea - and it's definitely something I'll put through its paces on one of my projects shortly. Being able to separate the schema from data ingestion and data transmission is a very powerful scaling option for one of the things I'm playing with.
Don't do this with a public API with third-party clients!
This directly ties the REST API to your database schema. The whole point of having a public API (you know, an Application Programming Interface) is that you can serve your data in a controlled way, possibly totally different from your schema. The moment you change your schema a little bit, congratulations, you've broken all your clients.
So why use REST at all at this point? What is the benefit REST is bringing to the table here?
Seems like if you want a declarative API you might as well do something like a local read replica a la Firebase; that seems like the natural progression of these API-as-single-schema technologies.
Is the main reason for sticking to REST here compatibility, or is there something in the RESTful design we want to hold on to?
REST is pretty stupid, but it works over HTTP(S) and is stateless. And tools that use HTTP have nice abstraction layers already, and are very common, so it becomes simple to use.
Personally, for talking with a web front-end I would use WebSockets with long-polling as a fallback, and JSON instead of query strings for querying. It does however require yet another abstraction layer, and is more brittle and less secure than REST.
REST is a school-bus. Other methods are like exotic sports-cars.
Derek recently wrote about it [0]. The fact that you can have the thinnest client between your front-end and backend makes things incredibly flexible. If you learn PG properly, you can do 100% of data preparation on the database side and just expose it through an API. If you decide to change something, you change a view and the API immediately reflects it. No code changes, no redeployments.
I had a situation where I was implementing something quite simple: a URL shortener. I didn't use PostgREST, but I did decide to use an ORM, because it was a simple CRUD operation. It had an option to either use a generated URL or let the user specify a custom one. And it worked as expected.
But once that was completed, I decided to add extra functionality: for example, statistics like which IPs were accessing a link and how often, expiration times, etc.
I realized that the ORM encouraged me to implement all of my logic in the application, even though I would put less load on the database and make things simpler if I let the database do many things for me, using the types and functionality it provides. I am not talking here about stored procedures; I could do all operations in one, at most two, SQL statements, while the ORM had to send several. In the end I dropped SQLAlchemy (this was Python code) and just used psycopg2 directly; I didn't even bother with wrappers, just used the built-in pools. It was also easier to make my code use two endpoints, one for reading and one for writing, so I can scale it better.
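For example, the kind of statistics update described above collapses into a single statement when the database does the work (the schema here is illustrative):

```sql
-- One round trip: record a hit, creating the counter row if needed.
insert into link_hits (link_id, ip, hits)
values (42, '203.0.113.7', 1)
on conflict (link_id, ip)
do update set hits = link_hits.hits + 1;
```

An ORM typically models this as a read followed by a conditional insert or update, i.e. multiple round trips.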
I realized that the ORM did not save me much code at all; it was the same amount of code with or without it, and without the ORM I had greater control over what I was doing.
I previously believed that an ORM starts standing in the way as your application gets bigger, but that it was good for small projects. Here I realized that it doesn't bring much benefit even for simple projects.
I think a REST interface like this is doubling down on what an ORM tries to do. Maybe it could be beneficial in environments that don't have libraries for communicating with a database and can only make HTTP requests?
I am not sure it is a good idea to add PostgREST to your stack. PostgREST can only interact with your DB, so you would probably end up calling PostgREST from another REST service, and in that case you would be better off using an ORM.
You could use PostgreSQL Foreign data wrappers[1] and leverage PostgREST for a SQL Server schema.
tds_fdw[2] works pretty well for this (I've used it in a project related to open data). Basically, you'd map the MSSQL tables to pg foreign tables[3] defined in a pg schema, then expose that pg schema through PostgREST.
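A hedged sketch of that setup, assuming tds_fdw's documented options (server name, credentials, and columns are placeholders):

```sql
create extension if not exists tds_fdw;

create server mssql_legacy
  foreign data wrapper tds_fdw
  options (servername 'mssql.example.com', port '1433', database 'legacy');

create user mapping for current_user
  server mssql_legacy
  options (username 'reader', password 'secret');

-- A pg schema holding the foreign tables, to be exposed via PostgREST:
create schema mssql_api;
create foreign table mssql_api.customers (
  id   integer,
  name varchar(100)
) server mssql_legacy
  options (schema_name 'dbo', table_name 'customers');
```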
rgbrgb | 6 years ago
- https://www.graphile.org/postgraphile/
- https://hasura.io/
alexrtan | 6 years ago
Other things we use:
- dbmate for migrations: https://github.com/amacneil/dbmate
- postgrest-client, a library I developed for constructing type-safe queries to Postgrest in Elm: https://github.com/alex-tan/postgrest-client
stubish | 6 years ago

jayd16 | 6 years ago

brlewis | 6 years ago

If they correct the typo before I get to reading it, it will be https://docs.subzero.cloud/authentication-authorization-flow...

ruslan_talpa | 6 years ago
markhalonen | 6 years ago

myrryr | 6 years ago
Postgrest was better, but we had trouble getting it to work with JWT and with a reasonable permissions model.
Postgrest was DAMN fast though, it was really nice to work with.
In the end we had to write our own.
jablan | 6 years ago

bayesian_horse | 6 years ago

konschubert | 6 years ago

AdieuToLogic | 6 years ago
It turns out that this does not scale.
0 - https://en.wikipedia.org/wiki/PowerBuilder#History
agumonkey | 6 years ago
Similarly after a week of vuejs I couldn't stop thinking redux/vuex should be an embedded sqlite ...
steve-chavez | 6 years ago
[1]: https://www.postgresql.org/docs/release/9.3.0/
[2]: https://www.postgresql.org/docs/release/9.5.0/
joshuakelly | 6 years ago

babayega2 | 6 years ago

fogetti | 6 years ago

stefanchrobot | 6 years ago

cryptonector | 6 years ago
ruslan_talpa | 6 years ago

haolez | 6 years ago

antpls | 6 years ago
https://github.com/PostgREST/postgrest/blob/master/test/Main...
and
https://github.com/PostgREST/postgrest/blob/master/test/Quer...
z3t4 | 6 years ago

pointlessjon | 6 years ago

steve-chavez | 6 years ago

deepersprout | 6 years ago
- something to conveniently version control database objects
- something to conveniently debug stored procedures. Maybe directly from vscode or your preferred editor.
If those two things get solved somehow, pg could be a really awesome application server.
ruslan_talpa | 6 years ago

hippich | 6 years ago

no_wizard | 6 years ago
I highly recommend reading the source code https://github.com/PostgREST/postgrest
korijn | 6 years ago
As in, how do you change the data model without breaking clients?
ruslan_talpa | 6 years ago

fiatjaf | 6 years ago

oftenwrong | 6 years ago
For example:
"PostgREST: a web server that turns a PostgreSQL database into a REST API"
loeg | 6 years ago

mleonhard | 6 years ago

steve-chavez | 6 years ago
[1]: https://wiki.postgresql.org/wiki/Transactional_DDL_in_Postgr...
[2]: http://postgrest.org/en/v6.0/configuration.html#raw-media-ty...
thijsvandien | 6 years ago

cik | 6 years ago

consultSKI | 6 years ago

kissgyorgy | 6 years ago

jayd16 | 6 years ago

z3t4 | 6 years ago
siquick | 6 years ago

doh | 6 years ago
[0] https://sivers.org/pg2
takeda | 6 years ago
louis8799 | 6 years ago

ijidak | 6 years ago
Is there anything like this for Microsoft SQL Server?
steve-chavez | 6 years ago
[1]: https://wiki.postgresql.org/wiki/Foreign_data_wrappers
[2]: https://github.com/tds-fdw/tds_fdw/
[3]: https://github.com/tds-fdw/tds_fdw/blob/master/ForeignTableC...