onlydnaq | 6 years ago
A carefully designed database schema would be able to support all of these use cases in a way that (at least for me) seems a lot simpler than wrapping it in new abstractions. Inserting multiple objects in the same transaction? Already implemented. Updating several fields at the same time? Likewise. Getting only the entities you are interested in, together with their sub-entities? Well, that's what a relational database does.
As I said, I’m not a web developer, and I haven’t touched on use cases where a lot of different things need to happen on server operations, but all the examples in the article would be easily solvable with SQL.
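To make the point concrete, here is a minimal sketch using Python's built-in sqlite3 module (the author/post schema is purely illustrative): atomic multi-row inserts and selective joins come for free with the database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE post (
        id INTEGER PRIMARY KEY,
        author_id INTEGER NOT NULL REFERENCES author(id),
        title TEXT NOT NULL
    );
""")

# Insert multiple objects atomically: the with-block commits on
# success and rolls back if any statement fails.
with conn:
    conn.execute("INSERT INTO author (id, name) VALUES (1, 'Ada')")
    conn.executemany(
        "INSERT INTO post (author_id, title) VALUES (?, ?)",
        [(1, "First post"), (1, "Second post")],
    )

# Fetch only the fields you care about, entity plus sub-entities,
# in a single query.
rows = conn.execute("""
    SELECT author.name, post.title
    FROM author JOIN post ON post.author_id = author.id
    ORDER BY post.id
""").fetchall()
print(rows)  # → [('Ada', 'First post'), ('Ada', 'Second post')]
```

The transaction, the multi-field update, and the join are all expressed directly in SQL rather than reimplemented in an application-layer abstraction.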
strken | 6 years ago
As I understand it, most of the constraints (writes are done in isolation, you don't have transactions, etc.) are there because they're consistent with an architecture where you can't actually implement those things. Transactions are something that happens further down the chain of services, and a single mutation may or may not be implemented using transactions that may or may not be distributed across multiple services. A mutation could cause information to be sent to message queues, stored in various data stores with different technologies, cached in multiple places, persisted to a data warehouse, or have other side effects; transactions are an implementation detail.
You could use foreign data wrappers in postgres, or some other equivalent, but most of the good bits of SQL would take a lot of work or wouldn't even be possible.
gboss | 6 years ago
https://postgrest.org/en/v6.0/
BiteCode_dev | 6 years ago
It's a very young project, and as a heavy ORM user I'm not the best person to judge, but I like the way they think.
no_wizard | 6 years ago
* The schema/endpoint already exists and is being updated to a newer standard (like GraphQL or OpenAPI), and they simply reuse the schema of the existing data source (usually a database), so the API is coupled to the data source. This gains more efficient operations and possibly documentation and validation (all good things!) but misses another goal (explained momentarily).
* An API is being built from scratch and one of these schema-based standards is being used as a stand-in for the database store. In this case the tight coupling just runs in the other direction (your data source will likely be built to match the API).
What I don't seem to see is anyone talking seriously about how these standards and their associated technologies and implementations could be leveraged to create an abstract interface to data regardless of the data source's structure or schema. The strength of GraphQL, OpenAPI (formerly Swagger), JSON:API and others isn't just in pure description and validation; it's that they let you build descriptive, normalized APIs that can in effect act as middleware over any data source, keeping your data sources independent from API concerns.
The end result being that you shouldn't have to change your API schemas because you changed your data store or how you want to store your data, so you never have to version your APIs. It's supposed to make your APIs accommodating of change.
t0astbread | 6 years ago
notus | 6 years ago
BiteCode_dev | 6 years ago
In fact, because it's such a new technology, and since it's embedded into very few things, you have a lot to write by hand, and performance on a single server will suffer compared to regular SQL + REST-ish.
The concept is interesting for big distributed heterogeneous systems though, especially when you have many clients.
jarjoura | 6 years ago
It provides a full SQL query engine on top of unstructured data so you can assemble results.
Personally I would never expose the raw DB in any capacity to the front end systems for security and sanity reasons. DB schemas need to be free to change all the time and you wouldn't want to break entire services from a column type change.
ameliaquining | 6 years ago
dragonwriter | 6 years ago
Yes, that's why you expose a simple set of views as your public interface for each role, and apply the dirt simple RBAC that every multiuser RDBMS supports to limit users to the appropriate set of views.
> SQL is so computationally powerful that you can't really get that assurance.
The computational power of SQL is mostly irrelevant, because the dirt-simple security model lets you limit database effects, so that all the computational power means is that people can tie up resources with poorly (or maliciously) crafted queries, if you don't apply the simple mitigations every multiuser RDBMS has against those.
People not bothering to learn about the database engines they are using leads to putting (often poor) ad hoc security in the web layer in front of a database that isn't properly secured, because the app in front is excessively trusted. That makes things less, rather than more, secure than if people just learned to understand their DB and exposed it directly, instead of trying to achieve security through cargo-cult complexity.
It's possible to enhance database security with defense in depth and additional layers, but none of that starts with misunderstanding RDBMSs so badly as to think that they are, in an app stack, one of the hard parts to secure.
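For concreteness, a minimal sketch of the views-plus-RBAC pattern in Postgres-flavored SQL (the table, view, and role names are all illustrative):

```sql
-- Expose a per-role view, not the base table.
CREATE VIEW public_orders AS
    SELECT id, status, created_at   -- omit internal/PII columns
    FROM orders
    WHERE status <> 'draft';

-- A role for the consuming application/user.
CREATE ROLE api_reader NOLOGIN;
GRANT SELECT ON public_orders TO api_reader;
-- No grants on the base table: api_reader cannot touch orders directly.
```

With no privileges on the base table, the role can only see what the view projects, regardless of what SQL it sends.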
badrabbit | 6 years ago
siscia | 6 years ago
I really don't understand why we don't simply ship SQL queries around, maybe a very simple subset of SQL, so that it either works on most vendors or can be easily manipulated and adapted to different storage backends.
Can somebody enlighten me?
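One way to keep a shipped-SQL scheme safe is to let the engine itself enforce the allowed subset. A minimal sketch using SQLite's authorizer hook via Python's sqlite3 module (the items table and the whitelist are illustrative; a real deployment would also need resource limits):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO items (name) VALUES ('widget')")
conn.commit()

ALLOWED_TABLES = {"items"}

def read_only(action, arg1, arg2, db_name, trigger):
    # Permit bare SELECTs and reads of whitelisted tables; deny everything
    # else (writes, DDL, attaching databases, ...).
    if action == sqlite3.SQLITE_SELECT:
        return sqlite3.SQLITE_OK
    if action == sqlite3.SQLITE_READ and arg1 in ALLOWED_TABLES:
        return sqlite3.SQLITE_OK
    return sqlite3.SQLITE_DENY

conn.set_authorizer(read_only)

# A shipped query that only reads is fine...
print(conn.execute("SELECT name FROM items").fetchall())  # [('widget',)]

# ...while a write is rejected by the engine itself at prepare time.
try:
    conn.execute("DELETE FROM items")
except sqlite3.DatabaseError as e:
    print("rejected:", e)
```

The usual answers to "why not ship SQL" are vendor dialect differences, injection/abuse surface, and runaway queries; hooks like this address the abuse surface but not the other two.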
bryanrasmussen | 6 years ago
thrower123 | 6 years ago