In my opinion, GraphQL moves too much of the burden to the user of the API. It makes the most sense if the data is highly dynamic, you have a mobile app where every call is expensive, or (and this seems more common) the backend and frontend teams don't like to talk to each other. As a user, I just want to GET /foo, with a good old API token I pasted from your dev docs, and move on to the next thing. I don't want to spend time figuring out your database schema. Or perhaps I've just yet to see a single good public GraphQL API. I recently had a look at the GitHub GraphQL API and its non-existent docs (no, I don't want to introspect your data models at runtime), noped the hell out of that, and got the REST integration for the bit I needed done in an hour.
This is my experience as well. We briefly worked with GraphQL on one of our smaller projects. One of the biggest shortcomings of the API we created was the incredibly steep learning curve for teams/people not interested in using GraphQL. Instead of using the sensible features of GraphQL queries, the other team consuming our API wrote the most verbose queries they possibly could, not utilizing any of the nice convenience operations that GraphQL offers.
They were no doubt frustrated with what they thought was a needless hassle compared to REST, and I myself found a lot of the query building pretty tedious.
Why do you think it makes the most sense if “the backend and frontend teams don't like to talk to each other”, given that your biggest complaint seems to be “I don't want to spend time figuring out your database schema“? Aren’t you the frontend guy in this scenario, and not wanting/able to talk to the backend guys (the people who designed the database schema)?
How I _wish_ I could nope the hell out of our GraphQL dependency, for the reasons listed and for the frighteningly complex client libs we have to use to consume it, rather than straight up ajax/fetch.
Recently tried out GraphQL in a new web app, and ripped it all out of both frontend and backend shortly after. Unnecessary complexity over simple REST calls, with no benefits.
As a manager/business owner, I like GraphQL because it shifts the burden to the API consumer. That consumer, for us, is a front-end resource who is sometimes less expensive but very often less busy than our backend team.
That's obviously not the same formula for every company. I'm just offering it as a potential counterpoint.
Absolutely. Before GraphQL we were making a monumental effort to build a REST API. After deliberating on exactly what REST was and how we'd represent a few red-haired resources, we were spending a lot of client time fetching deep trees through resource links. Moving to GraphQL solved a lot of the administrative and philosophical headaches, considerably reduced the number of connections and the amount of wasted data, and made our client code much simpler through easily grokked queries. Highly recommend GraphQL to anyone.
I should also mention that we finished our migration ahead of schedule. It was super easy to run GraphQL alongside REST, and we quickly iterated on converting each REST call to GraphQL.
We've also found that onboarding new hires is much simpler. There's a lot of misinformation about REST, we were having to retrain people, and when they wanted to see our schema we would then have to teach them Swagger as well. With GraphQL we just send them to the official docs and our schema, which is our single source of truth for the API, and they come back a day later ready to go. Generally, GraphQL being more standardized and centrally managed has been great from a training perspective.
"We moved to GraphQL because things were bad, and now things are good. GraphQL is amazing".
I don't want this to come off as a personal attack (and I apologize if it does), but your comment contains absolutely no information whatsoever regarding a specific situation/use-case, nothing from which the rest of us can formulate our own opinions on the REST/GraphQL discussion.
This sounds a little over-the-top. There are certainly cases where REST would be the better recommendation over GraphQL. I have no idea what your specific requirements were but if building a REST API was a 'monumental effort' then GraphQL was probably a good choice for you. That does not mean that in all cases GraphQL > REST.
REST semantics are a distraction. The best possible outcome of REST is when developers are encouraged to consider the concept of "idempotency" when they stumble upon the technical definition of the "PUT" verb. Everything else is line noise.
GraphQL has a schema with types. It makes it very straightforward and approachable for all developers to reason about what an API should deliver up-front, and also easy to reason about what is and is not a breaking change. Automatic validation of queries against the schema also saves massive amounts of time in writing validation logic.
There are still things that could be better with GraphQL. The query language is... interesting. The whole thing is still very client-server asymmetric -- look at a client query syntax versus the schema syntax you'll use on the server side for three seconds and you'll immediately and viscerally know what I mean -- and that strikes me as disappointing in this age. It's still very easy for developers to fall into mental quicksand which causes them to make many individual requests even when GraphQL would let them batch things up into one. And so on.
But overall: yes, working on a GraphQL stack is an awesome experience compared to going it alone with REST and JSON and crossed fingers.
Mostly. Note: I'm using Python/Django/Graphene on the server side and Apollo on the client side.
I love how flexible it is for client developers, and because of the great client-side libraries it helps eliminate a ton of boilerplate code on the client side.
My biggest complaint has been "lost" exceptions and caching.
Because it's possible for an exception to be thrown server-side on one field while the others succeed, I've been plagued with hard-to-monitor, hard-to-find errors. I ended up writing a shim to parse the response in an attempt to get more insight into the number of errors per field (this has also been really helpful for monitoring slow queries in New Relic, since all requests go to the same endpoint, which breaks a ton of APM monitoring).
My other issue has been caching: in Apollo there are ways to say "don't use the cache for this request", but there's no way to give an object a cache TTL. My app lets users search for events that are happening near them right now, and I've run into several issues where Apollo decided that an event from yesterday should be included in a result. It happened frequently enough, even with queries that included times as an argument, that I ended up implementing a "middleware" between what Apollo gives back and the component, which felt really ugly.
You should take a look at the caching policies (fetch policy) the Apollo client provides, as well as the error policy. Neither is easy to find.
For the error policy, you basically control the behavior (per client instance or per request) when a request is considered failed: none, ignore, and all. [1]
The fetch policy gives you immense control over caching. It all depends on what you're doing, but in some cases it can even make sense to never cache any requests, depending on how your application is structured. The docs are hard to google for this, but here's a link for you. [2]
Regarding monitoring, I would recommend to check out Apollo Engine if you haven't already. Basically middleware on the server side that injects error and performance info into the extensions object of the response which is then sent to them and presented in a nice UI.
I've never looked at graphene before (I've stopped spending time learning about each fad) but this really looks like it solves a real problem I've experienced. Definitely going to try it out.
As a frontend dev, I had a positive experience with one service because the backend was far more willing to add new query options. The much publicized "only get what you ask for" part was largely irrelevant.
I am, however, unsettled at the prospect of losing all the built-in network and browser caching for idempotent calls (mostly I'm unsettled because no one else seems to seriously consider the issue - it may end up too small to matter, but I don't trust that anyone else here has honestly evaluated it).
Another poster mentioned the issue with partial errors, which sounds like something else that will not get the upfront attention it deserves while not being an immediate dealbreaker. Add to that the question of how to manage deprecation of particular query statements, since they can no longer be distinguished as distinct endpoints.
My other concern is how much magic frontend libraries provide. This magic looks great if your app is nothing more than input/output over CRUD calls, but sounds very brittle if your app has client side logic (and while perhaps a webapp should ideally avoid that, other services can also be clients.)
So far I have concerns but no concrete problems. I just worry that we won't be able to confirm their severity until we've already invested and committed, particularly when our initial adopters are so enthusiastic. At the same time, I don't want to be the guy unwilling to change and adopt new things.
We found our sweet spot in terms of enabling the flexibility of front-end devs experimenting and defining their own queries while maintaining cacheability. During development, front-end devs use the Graphiql endpoint to play around with the data and figure out exactly what they want. Once that's settled, we turn it into a persisted query that is stored on the server and keyed by a unique ID that the client apps use in production instead of the raw GraphQL payload. You add a small amount of overhead for the coordination to create the persisted queries, but we're considering even building a self service process in the future.
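The persisted-query flow described above can be sketched like this (names, the hashing scheme, and the executor signature are all illustrative, not Apollo's or anyone's actual implementation):

```python
# Sketch of server-side persisted queries: during a build step, each approved
# query is stored under a stable ID; in production, clients send only the ID
# plus variables, never raw GraphQL.

import hashlib

# Registry built at deploy time from the queries the front-end settled on.
PERSISTED_QUERIES = {}

def persist_query(query_text):
    """Store an approved GraphQL query and return its stable ID."""
    query_id = hashlib.sha256(query_text.encode("utf-8")).hexdigest()[:12]
    PERSISTED_QUERIES[query_id] = query_text
    return query_id

def handle_request(payload, execute):
    """Look up the persisted query by ID and hand it to the executor.

    `execute` is whatever GraphQL engine you use (assumed signature:
    execute(query_text, variables) -> dict).
    """
    query_id = payload["id"]
    query_text = PERSISTED_QUERIES.get(query_id)
    if query_text is None:
        return {"errors": [{"message": f"unknown persisted query {query_id}"}]}
    return execute(query_text, payload.get("variables", {}))
```

Because only registered IDs are accepted, arbitrary client-crafted queries are rejected in production while developers keep full Graphiql freedom during development.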
We decided against it. We're on a Java backend, and GraphQL in Java with an ORM is considerably problematic when trying to create efficient resolvers. We simply ran into one hurdle after another and found ourselves in diminishing returns.
The concept is great, and if you write custom SQL queries for each resolver (if necessary), properly caching things that can be cached, and use the first class citizen programming language (JavaScript), then it seems GraphQL works wonderfully.
Trying to fit it into an existing ORM paradigm with respect to complex sub-collections, lazy loading, and efficient database querying, it just didn’t work out for us.
We migrated most of our backend (written in Clojure) from REST to GraphQL (actually a homemade alternative to GraphQL, but not relevant to this discussion).
It went well, it greatly simplified both our backend and frontend code. The backend code got simpler and more stable because it no longer had to deal with "data packaging". The frontend code became more transparent, because you can now easily read what data gets exchanged, and more decoupled, because different components can independently require the data they care about. One of the biggest benefits has been the degree of independence to evolve both the client and server.
We haven't had the performance issues some people have mentioned (N+1 query etc.), because of the server-side design of our homemade GraphQL engine - in which data resolvers are batching and asynchronous by default, unlike most backend libs which approach this problem more naively. Will open-source that soon.
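The batching-by-default resolver design can be sketched like this (a toy stand-in for the DataLoader pattern, not any real library's API):

```python
# Minimal batching loader in the spirit of DataLoader. Resolvers ask for keys
# up-front; one batch function call fetches every pending key at flush time,
# turning N single-row queries into one batched query (the N+1 fix).

class BatchLoader:
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn      # batch_fn(keys) -> {key: value}
        self.pending = []

    def load(self, key):
        self.pending.append(key)
        # Deferred read: the value is only available after flush().
        return lambda: self.cache[key]

    def flush(self):
        # Deduplicate keys, preserve order, fetch everything in one call.
        self.cache = self.batch_fn(list(dict.fromkeys(self.pending)))
        self.pending = []

def fetch_authors(author_ids):
    # Stand-in for one SQL query: SELECT * FROM authors WHERE id IN (...)
    return {i: f"author-{i}" for i in author_ids}

loader = BatchLoader(fetch_authors)
# Each post's resolver requests its author; nothing is fetched yet.
deferred = [loader.load(author_id) for author_id in [1, 2, 1, 3]]
loader.flush()                 # one batched fetch for keys {1, 2, 3}
names = [d() for d in deferred]
```

Real engines do this with promises/async and per-request loader instances, but the core idea is the same: collect keys first, fetch once.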
The biggest limitations I see to GraphQL are:
- it doesn't really have a story for caching. I have some ideas for addressing that, but it hasn't been a problem for us really.
- it repeated the SQL mistake of exposing a query language based on text, not data structures. Now we have to write queries as templates instead of assembling them programmatically. This hurts both application developers and library authors.
- it doesn't really have a story for structured writes.
I tend to write publicly facing APIs so this conversation is colored a little by that. An internal API or Microservice is a different story.
I don't think it's either/or. I use both, in the same API. The two are largely compatible.
All REST APIs can be modeled in RPC style APIs and that's no different with GraphQL.
I've built APIs where I have a GraphQL facade in front of REST, and REST facades in front of GraphQL, by having all my REST endpoints be two lines:
1. GraphQL query
2. Format result as REST
That first line maps to a GraphQL query, and the second one can be standardized for all your REST endpoints.
I tend to like REST in-front of GraphQL better since it allows for some performance optimizations when you know ahead of time all the data you need to grab.
And since GraphQL can be mapped to classes you could also just skip the GraphQL query compile and use the classes directly from your REST with only a few more lines of code.
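That two-line REST-over-GraphQL facade might look like this (a sketch: the executor is a stand-in for whatever GraphQL engine you use, and the query and field names are invented):

```python
# A REST endpoint as a thin facade over a GraphQL layer. Any engine exposing
# execute(query, variables) -> {"data": ..., "errors": ...} would slot in.

USER_QUERY = """
query ($id: ID!) { user(id: $id) { id name email } }
"""

def format_as_rest(result, root_field):
    """Shared step 2: unwrap the GraphQL envelope into a plain REST body.
    This one formatter can be reused by every REST endpoint."""
    if result.get("errors"):
        return 500, {"error": result["errors"][0]["message"]}
    return 200, result["data"][root_field]

def get_user(user_id, execute):
    # Line 1: run the GraphQL query. Line 2: format the result as REST.
    result = execute(USER_QUERY, {"id": user_id})
    return format_as_rest(result, "user")
```

The formatter never changes per endpoint, so each new REST route really is just a query plus one standardized call.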
Overall I like GraphQL a lot because it allows the frontend to make fewer round trips to the server and makes it easier to exclude data you don't want (which helps when data is large). JSON:API tries to solve some of this with includes, related, and fields, but it doesn't quite allow as much expressiveness as GraphQL.
You're going to get responses from people who have invested a considerable amount of time in something they already had plans for (a "sunk cost") so I'm not sure you will get the sort of information you are after here.
I'll be the voice of someone who is actively implementing GraphQL for the first time.
We were mid-way through our project before realizing GraphQL might be a better fit for our use case so we paused for a week to play around with it and see if we could stand up something inside of our project that made sense.
GraphQL felt very "plug and play" to us. Aside from having to re-work some validation to fit into GraphQL's idea of mutations, we were mostly able to drop our existing models and logic directly in and see it working right away.
Having built very well defined REST APIs (and SOAP before that) for years, the flexibility that GraphQL offers made me feel a bit "uneasy" at first but I have come around to appreciating how much freedom it gives the front-end to only request the data they need.
I'm usually the type to shy away from flashy new doodads and stick with what I know is safe+reliable, especially in an enterprise environment, but as the project continues I'm feeling more and more confident in our choice. I suppose only time will tell though.
Also, many people have only recently adopted GraphQL, as it is relatively new. These people will not have the long-term experience necessary to assess GraphQL in hindsight.
There is an advantage to making technology decisions behind-the-curve, choosing mature, "battle-tested" technology, even if it means overlooking some known warts, and missing out on what's "hot". We can call this being a "late-adopter". I am most interested in N-years-later perspectives where N is around 3 or more.
Sure, but as always in software that's still better than the speculations of people who have no experience using it but think they can make an informed comparison.
These are taken from some of the other comments in this thread:
* As a consumer of APIs I vastly prefer REST APIs.
* Highly recommend GraphQL to anyone.
* I *love* how flexible it is for client developers.
* As a manager/business owner, I like GraphQL.
* How I _wish_ I could nope the hell out of our GraphQL dependency
* Unnecessary complexity over simple rest calls with no benefits.
* REST semantics are a distraction
* my experience is just ok.
What I see here is many different reactions from different people, presumably with different experiences and use cases on their hands. And that's perfectly OK -- REST, GraphQL, XML-RPC, heck even SOAP are just tools, and use whichever works best for your particular situation.
Just because it worked for me doesn't mean it will work great for everyone else; and if it sucks for me doesn't mean it won't be a life-saver for someone else. Those are just tools; use them as you find fit.
I'd like to add that correctly designed resolvers allow you:
- to control very easily who can fetch what, and where it's fetched (permissions)
- to fetch nested data when you need it, without writing serializers
- to help your frontend team find what they're looking for without asking the backend team every time
- to standardize your API (mutations are a huge plus here too)
FYI, we're using it in production over a Django backend (which comes with some drawbacks, since subscriptions, i.e. pushed updates, are not perfectly implemented) with our React/Apollo apps (web and native). In my opinion the overhead lies, surprisingly, more on the frontend side (writing data connectors takes longer, but is way more explicit, than consuming REST queries returning JSON) than on the backend (where you just declare resolvers, a thing you don't even need to do in Node.js, and handle permissions).
>to control very easily who can fetch what where it's fetched (permissions)
This is a piece of GraphQL I haven't been able to get my head around. Could you elaborate or point me to a good explanation of how this is implemented? Everything I found when I looked into GraphQL previously was something like "you control access to individual resources in your business layer" but never explained how.
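One common shape of the answer (a sketch, assuming resolvers receive a request context carrying the authenticated user's permissions — field and permission names are invented): the check lives right inside the resolver, which is what "your business layer" usually means here.

```python
# Field-level permission checks in resolvers. Every resolver gets the same
# context object (built once per request from the session/token), so access
# control sits next to the data it guards rather than at the endpoint level.

def require(context, permission):
    """Raise unless the request context carries the given permission."""
    if permission not in context.get("permissions", set()):
        raise PermissionError(f"missing permission: {permission}")

def resolve_name(employee, context):
    # Public field: anyone authenticated may read it.
    return employee["name"]

def resolve_salary(employee, context):
    # Restricted field: only callers with read:salary may see it.
    require(context, "read:salary")
    return employee["salary"]
```

In a real engine the framework wires `context` in for you; the point is that because GraphQL calls a resolver per field, you can gate individual fields rather than whole endpoints.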
GraphQL is great for the frontend, but moving to GraphQL involves both people and tech issues. Common mistakes made when using new technologies are made all over again.
* Watch out for bad implementation of the GraphQL API (this will definitely result in bad performance).
* Design the GraphQL schema that you want the user to see/perceive. Not every object or field in your database needs to be exposed via the API the way it is.
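The second point can be illustrated with a sketch (row and field names invented): the API type is a deliberate mapping, not a mirror of the storage row.

```python
# Design the schema users should see: the public User type exposes a chosen
# subset of the storage row, with API-friendly names and no internals.

INTERNAL_ROW = {
    "id": 7,
    "email_addr": "ada@example.com",
    "pw_hash": "$2b$12$...",        # never exposed through the API
    "created_ts": 1700000000,       # internal bookkeeping, also hidden
}

def to_api_user(row):
    """Map a storage row to the public `User` shape (illustrative)."""
    return {
        "id": str(row["id"]),        # opaque string IDs at the API boundary
        "email": row["email_addr"],  # renamed to the vocabulary users expect
    }
```

The mapping layer is where you decide, field by field, what the perceived schema is — rather than letting the database dictate it.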
My workplace is currently moving a huge monolith into a bunch of manageable components. Each of these components has its own GraphQL endpoint. Using schema-stitching, these are being stitched together into one endpoint for API users.
As a result of our codebase, we've tried GraphQL in:
* Ruby (graphql-ruby) - WATCH OUT: Relay arguments for connection fields are not exposed to the library user, so you basically have to implement your own Relay-compliant stuff if you need access to the pagination arguments from Relay. The documentation is also broken.
* Python (graphene) - No major issues so far; anything we hit, we worked around.
* Node.js (Apollo GraphQL) - OH MY BUTTERFLIES. So far, this is the ONLY library I have come across that is polished and has plenty of documentation.
* Elixir (Absinthe) - My coworker worked on this part. He did not complain. So I'm assuming he had no issues.
The "Learn * in a day" joke applies to GraphQL. As simple as GraphQL looks on the client side, it is a beast of a job to build a GraphQL backend that is optimized for production.
Server-side implementation of GraphQL is not very well documented apart from hello-worldly examples. Most of the knowledge found online is about client-side usage.
Due to the poor documentation and examples, ramping people up on GraphQL is hard. Most first iterations I've had to review were slower than our REST APIs because of unoptimized code. Sitting down with people for a few minutes solves that problem.
To ramp up people at work place, I ended up having to do this:
* Ask people to use the GitHub v4 API to check out GraphQL.
* Make them build a GraphQL server for a blog app.
* Dive straight into whatever feature/API they would build.
* Review their work a few dozen times and show them optimization tricks.
My most valuable lesson: When in doubt, dig into the source of these libraries.
My experience is just OK. As someone here put it: great for frontend devs, bad for backend devs.
If you have DB schemas on the backend (if using an ORM), get ready to duplicate them again for GraphQL.
And on the frontend, be prepared to write out every single field you need from the backend. I can imagine it may be brutal for those who have a lot of changes in their schemas.
My conclusion: since I'm a full-stack dev who does both frontend and backend, I find myself a bit more fatigued than when I was doing REST-style APIs. I find myself wanting REST from time to time, especially when I don't feel like writing out all the fields I need back, which I can't remember off the top of my head.
We've just started to dabble in GraphQL, and like many others we've seen mixed results.
On the upside, we can construct complex queries that eliminate the many consecutive RPCs you'd end up with in a traditional REST API. At scale this should work wonderfully, greatly reducing client/server latency for our realtime app.
On the downside, the tooling is still far behind. This is somewhat due to GraphQL being a younger technology so you have to give it some time. OTOH, I feel like you can get things off the ground with REST more quickly. Problems with GraphQL tend to be harder to reconcile due to the debug tooling handicap.
Some of our engineers take a little time ramping up to GraphQL due to its complex nature. This is probably a good thing in the long run though, since it stresses the importance of keeping RPCs to a minimum and eliminates having to sync or batch consecutive RPCs.
Overall I still think it's a win. The tooling should improve over time, and hopefully it will be a first-class citizen in IDEs and libraries soon. Until then, you've got to be prepared to muscle through it.
Nick Schrock here, one of the GraphQL co-creators. I agree with a lot of the criticism in terms of the difficulty of implementing GraphQL backends. I think there's a big opportunity for folks to build vertically integrated toolkits that deal with N+1 issues, integrate DataLoader natively and so forth. Good versions of these would deal with a lot of issues described here in greenfield GraphQL backends. I talked about this at the GraphQL Europe keynote last month (https://www.youtube.com/watch?v=zMa8rfXI6MM).
Current greenfield implementations are typically stacked on ORMs like Django's and RoR's, and the impedance mismatch is real. Personally I abide by the dictum that ORMs are "the Vietnam of computer science" and should be avoided at all costs for anything that will grow beyond a small app. GraphQL was not originally implemented on top of an ORM, but instead on an object model built on a key-value + edge store internal to Facebook.
In terms of other criticisms in this thread:
1. Exceptions: The default behavior in graphql-js (mimicked in other language implementations) of swallowing native exceptions by default was probably a mistake in hindsight. Whenever I've played with GraphQL using different toolsets, the first thing I do is add a wrapper function which checks for errors and then rethrows the initial exception that caused the GraphQL error, for use in testing and CI/CD contexts.
2. Caching: Personally I've always been confused about the concern with leveraging HTTP-level caching. While a clever hack, with any real app with any sort of even mildly dynamic behavior you don't want to do this. Staleness will be interpreted, rightly, as bugs by your users. If you want to replicate the behavior the most straightforward way would be to use persisted queries (described here https://blog.apollographql.com/persisted-graphql-queries-wit...) combined with HTTP GETs. With persisted queries you can encode the entire request in the query string, which should get you the HTTP-level caching you want.
3. Docs: Quite confused about this one. While particular implementations of GraphQL can be problematic the documentation of the core language (which I am not responsible for) is superb. See http://graphql.org/ and https://www.howtographql.com/.
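The exception-rethrowing wrapper described in point 1 might be sketched as follows (the decorator name and the strict/lenient split are illustrative, not graphql-js's API):

```python
# Many GraphQL servers swallow resolver exceptions into the `errors` array by
# default. In tests/CI you usually want the original traceback instead. This
# wrapper logs the failure and, when strict, re-raises the real exception.

import functools
import logging

def rethrow_in_tests(resolver, strict=True):
    """Wrap a resolver so failures surface instead of being swallowed."""
    @functools.wraps(resolver)
    def wrapped(*args, **kwargs):
        try:
            return resolver(*args, **kwargs)
        except Exception:
            logging.exception("resolver %s failed", resolver.__name__)
            if strict:
                raise          # surface the real traceback in CI
            return None        # production: degrade to a null field
    return wrapped
```

In a real server you would apply this to every resolver (or install it as middleware), flipping `strict` on in test environments only.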
Not at all (we rolled our own back end). The added complexity of building queries to join our API ended up creating such a quagmire of SQL that any dev coming in will basically have to learn our own custom ORM.
It makes many things easier and many things harder. The lack of really good backend libraries/frameworks outside of NodeJS is the most concerning thing.
Also: debugging and monitoring GraphQL APIs sucks. Considerations:
- Any subfield of a query can throw an error, but the rest of the fields can succeed, because GraphQL frameworks are allowed to run each field resolver asynchronously.
- Because of this, any GraphQL query is capable of returning multiple errors.
- Rate limiting is exceedingly difficult due to nested resolvers. I've seen solutions which involve annotating your schemas with "cost" numbers, and only allowing each query to run up to a maximum "cost" before failing by dynamically adding the costs of each field they request. Traditional rate limiting doesn't work.
- Traditional APM platforms also don't work. Prepare to adopt Apollo Engine and pay them $600/month on top of the money you're already paying New Relic or Datadog.
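The cost-annotation approach from the rate-limiting bullet can be sketched like this (field costs, the fan-out heuristic, and the parsed-query shape are all invented for illustration):

```python
# Cost-based query limiting: each field carries a cost annotation; the server
# walks the already-parsed query and rejects it when the summed cost exceeds
# a budget. Here the parsed query is modeled as a nested dict of fields.

FIELD_COST = {"user": 1, "friends": 10, "posts": 5, "title": 0}

def query_cost(selection, multiplier=1):
    """Sum annotated costs over a nested selection like
    {"user": {"friends": {"posts": {"title": {}}}}}."""
    total = 0
    for field, sub in selection.items():
        cost = FIELD_COST.get(field, 1)   # unknown fields default to 1
        total += cost * multiplier
        if sub:
            # Crude heuristic: assume list fields fan out, so children are
            # scaled by the parent field's cost.
            total += query_cost(sub, multiplier * max(cost, 1))
    return total

def check_budget(selection, budget=100):
    cost = query_cost(selection)
    if cost > budget:
        raise RuntimeError(f"query cost {cost} exceeds budget {budget}")
    return cost
```

Unlike per-request rate limiting, this rejects a single pathological deeply-nested query even though it is "one request".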
I really wish everyone would just move to gRPC and be done with it.
GraphQL just feels too tied to the datastore on the back end to be generally useful. REST/Swagger is hugely overcomplicated for the basic REST premise of moving objects back and forth.
gRPC is what REST should have been: ship objects back and forth between multiple languages with minimum fuss.
You don't have to "move" to reap GraphQL's benefits – you can just add a GraphQL layer.
I'm backend Systems Architect at a big publishing company, and my current primary project is an aggregating caching GraphQL proxy for our REST microservices.
Our front ends were making too many calls to the REST APIs, so we went overboard embedding related resources – and now they're getting too much unneeded data back, and cache invalidation is a nightmare. Sounds familiar, probably!
So we're building a GraphQL service that stitches those REST APIs together to let the caller request exactly the fields they need, from any API's resource. By caching individual resources, rather than nested multi-resource serializations, we can invalidate easily by UUID on change events – so cache TTLs can be long – and the GraphQL API's field resolvers can assemble complex responses with a few fast Redis MGETs, which are batched by DataLoaders.
This also gives us a place to centralize business logic, rather than having each front end service reimplement field formatting, resource transforms, &c. Since the REST APIs remain available as the source of truth, existing services can migrate to the GraphQL proxy at their own pace, which we hope will be an easy sell since it's so much faster.
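The per-resource caching scheme in this comment might look roughly like this (a plain dict stands in for Redis, `mget` mirrors Redis MGET semantics, and all names are illustrative):

```python
# Cache individual resources by UUID rather than nested serializations.
# Invalidation on a change event drops exactly one key, and each GraphQL
# request assembles its response from one batched multi-get.

import json

CACHE = {}  # uuid -> serialized resource (stand-in for Redis)

def cache_put(uuid, resource):
    CACHE[uuid] = json.dumps(resource)

def invalidate(uuid):
    # Change event for one resource: drop one key, nothing nested to chase.
    CACHE.pop(uuid, None)

def mget(uuids):
    # One round trip fetches every resource the resolvers asked for.
    return [json.loads(CACHE[u]) if u in CACHE else None for u in uuids]

def resolve_article(article_uuid, author_uuid):
    # Field resolvers' keys batched into a single multi-get, then assembled.
    article, author = mget([article_uuid, author_uuid])
    return {**article, "author": author}
```

Because each cache entry is a single resource, TTLs can be long and invalidation is precise — the nightmare of invalidating every nested serialization that embeds a changed resource goes away.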
I mean, you could have also just done this with what you had.
It's always weird to see this type of thing on HN. How anyone can call the way the web works "line noise" is baffling to me.
A good summary
[+] [-] haney|7 years ago|reply
I love how flexible it is for client developers and because the great client side libraries it helps to eliminate a ton of boiler plate code on the client side.
My biggest complaint has been "lost" exceptions and caching.
Because it's possible for an exception to be thrown server side on one field while the other ones succeed I've been plagued with hard to monitor/find errors. I ended up writing a shim to parse the response in an attempt to get more insight into #errors / fields (this has also been really helpful for monitoring slow queries in new relic since all requests go to the same endpoint which breaks a ton of APM monitoring).
My other issue has been around caching, in apollo there are ways to say "don't use the cache for this request", but it's not to give an object a cache ttl. My app allows users to search for events that are happening near them right now, and I've run into several issues where apollo decided that an event from yesterday should be added to a result. It happened frequently enough even with queries that included times as an argument that I ended up basically implementing a "middleware" between what apollo gives back and the component, which felt really ugly.
[+] [-] arnorhs|7 years ago|reply
For the error policy, you basically control the behavior (for each client instance or each request) when a request is considered failed: none, ignore, and all. [1]
The fetch policy allows you immense control over caching. This all depends on what you're doing, but in some instances it can even make sense to never cache any requests, depending on how your application is structured. The docs are hard to google for this, but here's a link for you [2]
[1] https://www.apollographql.com/docs/react/features/error-hand...
[2] https://www.apollographql.com/docs/react/api/react-apollo.ht...
[+] [-] syrusakbary|7 years ago|reply
[+] [-] vinayan3|7 years ago|reply
This is quite annoying because the server never throws a 500 error, so the normal logging doesn't kick in.
[+] [-] filleokus|7 years ago|reply
[+] [-] PhineasRex|7 years ago|reply
[+] [-] swalsh|7 years ago|reply
[+] [-] 11235813213455|7 years ago|reply
[deleted]
[+] [-] ergothus|7 years ago|reply
As a frontend dev, I had a positive experience with one service because the backend was far more willing to add new query options. The much publicized "only get what you ask for" part was largely irrelevant.
I am, however, unsettled at the prospect of losing all the built-in network and browser caching for idempotent calls (mostly I'm unsettled because no one else seems to seriously consider the issue - it may end up too small to matter, but I don't trust that anyone else here has honestly evaluated it).
Another poster mentioned the issue with partial errors, which sounds like something else that will not get the upfront attention it deserves, while not being an immediate dealbreaker. Add to that how to manage deprecation of particular query statements, as they can no longer be distinguished as distinct endpoints.
My other concern is how much magic frontend libraries provide. This magic looks great if your app is nothing more than input/output over CRUD calls, but sounds very brittle if your app has client side logic (and while perhaps a webapp should ideally avoid that, other services can also be clients.)
So far I have concerns but not concrete problems; I just worry that we won't be able to confirm the severity until we've already invested and committed, particularly when our initial adopters are so enthusiastic. At the same time, I don't want to be the guy unwilling to change and adopt new things.
[+] [-] jwoah12|7 years ago|reply
[+] [-] ulkesh|7 years ago|reply
The concept is great, and if you write custom SQL queries for each resolver (if necessary), properly caching things that can be cached, and use the first class citizen programming language (JavaScript), then it seems GraphQL works wonderfully.
Trying to fit it into an existing ORM paradigm with respect to complex sub-collections, lazy loading, and efficient database querying, it just didn’t work out for us.
[+] [-] valw|7 years ago|reply
It went well, it greatly simplified both our backend and frontend code. The backend code got simpler and more stable because it no longer had to deal with "data packaging". The frontend code became more transparent, because you can now easily read what data gets exchanged, and more decoupled, because different components can independently require the data they care about. One of the biggest benefits has been the degree of independence to evolve both the client and server.
We haven't had the performance issues some people have mentioned (N+1 query etc.), because of the server-side design of our homemade GraphQL engine - in which data resolvers are batching and asynchronous by default, unlike most backend libs which approach this problem more naively. Will open-source that soon.
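The batching-by-default idea (in the spirit of Facebook's DataLoader, not the poster's actual engine) can be sketched in a simplified, synchronous form:

```python
# Simplified DataLoader-style batching: individual resolver calls enqueue
# keys, and one batch function fetches them all in a single round trip,
# avoiding the N+1 query problem.
class BatchLoader:
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn  # takes a list of keys, returns a dict
        self.queue = []

    def load(self, key):
        self.queue.append(key)
        return lambda results: results[key]  # deferred accessor

    def dispatch(self):
        # One backend call for every queued key.
        results = self.batch_fn(self.queue)
        self.queue = []
        return results

# Usage: resolvers call load() as the query tree is walked, then a single
# dispatch() replaces N separate fetches.
calls = []
def fetch_users(ids):
    calls.append(ids)                 # record how many backend calls happen
    return {i: {"id": i} for i in ids}

loader = BatchLoader(fetch_users)
get_a = loader.load(1)
get_b = loader.load(2)
results = loader.dispatch()           # one call fetches both users
```

Real implementations do this asynchronously per event-loop tick, but the core trick is the same: collect keys first, fetch once.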
The biggest limitations I see to GraphQL are:
- it doesn't really have a story for caching. I have some ideas for addressing that, but it hasn't been a problem for us really.
- it repeated the SQL mistake of exposing a query language based on text, not data structures. Now we have to write queries as templates instead of assembling them programmatically. This hurts both application developers and library authors.
- it doesn't really have a story for structured writes.
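The "queries as text, not data structures" complaint can be illustrated with a toy serializer: components contribute plain data structures, and the text syntax is only rendered at the edge. All names here are hypothetical:

```python
# Toy example of assembling a GraphQL selection from data structures
# instead of string templates, rendering to text only at the last step.
def render(selection, indent=0):
    pad = "  " * indent
    lines = []
    for field, sub in selection.items():
        if sub:
            lines.append(f"{pad}{field} {{")
            lines.append(render(sub, indent + 1))
            lines.append(f"{pad}}}")
        else:
            lines.append(f"{pad}{field}")
    return "\n".join(lines)

# Components can contribute sub-selections independently and merge them
# programmatically -- no string concatenation of query templates.
avatar_fields = {"avatarUrl": None}
profile_fields = {"name": None, **avatar_fields}
query = "query {\n" + render({"user": profile_fields}, 1) + "\n}"
```

With text templates, composing selections from independent components requires string surgery; with data structures it's a dict merge.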
[+] [-] throwaway2016a|7 years ago|reply
I don't think it's either/or. I use both, in the same API. The two are largely compatible.
All REST APIs can be modeled in RPC style APIs and that's no different with GraphQL.
I've done APIs where I have a GraphQL facade in front of REST, and REST facades in front of GraphQL by having all my REST endpoints be two lines:
1. GraphQL query
2. Format result as REST
The first line maps to a GraphQL query, and the second one can be standardized across all your REST endpoints.
I tend to like REST in-front of GraphQL better since it allows for some performance optimizations when you know ahead of time all the data you need to grab.
And since GraphQL can be mapped to classes you could also just skip the GraphQL query compile and use the classes directly from your REST with only a few more lines of code.
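A sketch of that two-line REST-over-GraphQL endpoint pattern; `execute_graphql` here is a hypothetical stand-in for whatever engine you actually run (graphql-core, Apollo Server, etc.), with a canned response for illustration:

```python
import json

# Stand-in for a real GraphQL engine; returns a canned response so the
# sketch is self-contained.
def execute_graphql(query: str) -> dict:
    return {"data": {"user": {"id": "42", "name": "Ada"}}}

def format_as_rest(result: dict, root_field: str) -> str:
    # Standardized for all REST endpoints: unwrap the root field, emit JSON.
    return json.dumps(result["data"][root_field])

# A REST endpoint then really is two lines:
def get_user(user_id: str) -> str:
    result = execute_graphql(f'{{ user(id: "{user_id}") {{ id name }} }}')
    return format_as_rest(result, "user")
```

Since the REST layer knows exactly which fields it needs ahead of time, the inner query can be hand-tuned, which is where the performance optimizations the parent mentions come from.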
Overall I like GraphQL a lot because it allows the frontend to make fewer round trips to the server and makes it easier to exclude data you don't want (which helps when data is large). JSON:API tries to solve some of this with includes, related, and fields, but it doesn't quite allow as much expressiveness as GraphQL.
[+] [-] OldSchoolJohnny|7 years ago|reply
[+] [-] HEHENE|7 years ago|reply
We were mid-way through our project before realizing GraphQL might be a better fit for our use case so we paused for a week to play around with it and see if we could stand up something inside of our project that made sense.
GraphQL felt very "plug and play" to us. Aside from having to re-work some validation to fit into GraphQL's idea of mutations, we were mostly able to drop our existing models and logic directly in and see it working right away.
Having built very well defined REST APIs (and SOAP before that) for years, the flexibility that GraphQL offers made me feel a bit "uneasy" at first but I have come around to appreciating how much freedom it gives the front-end to only request the data they need.
I'm usually the type to shy away from flashy new doodads and stick with what I know is safe+reliable, especially in an enterprise environment, but as the project continues I'm feeling more and more confident in our choice. I suppose only time will tell though.
[+] [-] oftenwrong|7 years ago|reply
There is an advantage to making technology decisions behind-the-curve, choosing mature, "battle-tested" technology, even if it means overlooking some known warts, and missing out on what's "hot". We can call this being a "late-adopter". I am most interested in N-years-later perspectives where N is around 3 or more.
[+] [-] valw|7 years ago|reply
[+] [-] BerislavLopac|7 years ago|reply
Just because it worked for me doesn't mean it will work great for everyone else; and if it sucks for me doesn't mean it won't be a life-saver for someone else. Those are just tools; use them as you find fit.
[+] [-] t_fatus|7 years ago|reply
FYI, we're using it in production over a Django backend (which comes with some drawbacks, since subscriptions == pushed updates are not perfectly implemented) with our React/Apollo apps (web and native). In my opinion the overhead lies, surprisingly, more on the frontend side (writing data connectors takes longer, but is way more explicit, than using REST queries returning JSON) than on the backend (where you just declare resolvers - something you don't even need to do in Node.js - and handle permissions).
[+] [-] kej|7 years ago|reply
This is a piece of GraphQL I haven't been able to get my head around. Could you elaborate or point me to a good explanation of how this is implemented? Everything I found when I looked into GraphQL previously was something like "you control access to individual resources in your business layer" but never explained how.
[+] [-] SingAlong|7 years ago|reply
* Watch out for bad implementation of the GraphQL API (this will definitely result in bad performance).
* Design the GraphQL schema that you want the user to see/perceive. Not every object or field in your database needs to be exposed via the API the way it is.
My workplace is currently moving a huge monolith into a bunch of manageable components. Each of these components has its own GraphQL endpoint. Using schema-stitching, these are being stitched together into one endpoint for API users.
As a result of our codebase, we've tried GraphQL in:
* Ruby (graphql-ruby) - WATCH OUT: Relay arguments for connection fields are not exposed to the library user, so you basically have to implement your own Relay-compliant stuff if you need access to the pagination arguments from Relay. Also, the documentation is broken.
* Python (graphene) - We've had no issues so far. We worked around it.
* Node.js (Apollo GraphQL) - OH MY BUTTERFLIES. So far, this is the ONLY library I have come across that is polished and has plenty of documentation.
* Elixir (Absinthe) - My coworker worked on this part. He did not complain. So I'm assuming he had no issues.
The "Learn * in a day" joke applies to GraphQL. As simple as GraphQL looks on the client side, it is a beast of a job to build a GraphQL backend that is optimized for production.
Server-side implementation of GraphQL is not very well documented, apart from hello-worldly examples. Most of the knowledge found online is about client-side usage.
Due to the poor documentation/examples, ramping people up on GraphQL is hard. Most first iterations I've had to review were slower than our REST APIs because of unoptimized code. Sitting down for a few minutes solves that problem.
To ramp up people at my workplace, I ended up having to do this:
* Ask people to use the GitHub v4 API to check out GraphQL.
* Make them build a GraphQL server for a blog app.
* Dive straight into whatever feature/API they would build.
* Review their work a few dozen times and show them optimization tricks.
My most valuable lesson: When in doubt, dig into the source of these libraries.
[+] [-] jaequery|7 years ago|reply
If you have DB schemas on the backend (if using an ORM), get ready to duplicate them again for GraphQL.
And on the frontend, get prepared to write out every single field you need from the backend. I can imagine it may be brutal for those who have a lot of changes in their schemas.
My conclusion is that, since I'm a full-stack dev who does both frontend and backend, I find myself getting a bit more fatigued than when I was doing REST-style APIs. I find myself wanting REST from time to time, especially when I don't feel like writing out all the fields I need back, which I can't remember off the top of my head.
[+] [-] syvex|7 years ago|reply
On the upside, we can construct complex queries that eliminate many consecutive RPCs that you'd end up with in a traditional REST API. At scale this should work wonderfully, greatly reducing the client/server latency for our realtime app.
On the downside, the tooling is still far behind. This is somewhat due to GraphQL being a younger technology so you have to give it some time. OTOH, I feel like you can get things off the ground with REST more quickly. Problems with GraphQL tend to be harder to reconcile due to the debug tooling handicap.
Some of our engineers take a little time ramping up to GraphQL due to its complex nature. This is probably a good thing in the long run though, since it stresses the importance of keeping RPCs to a minimum and eliminates having to sync or batch consecutive RPCs.
Overall I still think it's a win. The tooling should improve over time, and hopefully it will be a first-class citizen in IDEs and libraries soon. Until then, you've got to be prepared to muscle though it.
[+] [-] schrockn|7 years ago|reply
Current greenfield implementations are typically stacked on ORMs like Django and RoR, and the impedance mismatch is real. Personally I abide by the dictum that ORMs are "the Vietnam of computer programming" and should be avoided at all costs for anything that will grow beyond a small app. GraphQL was not originally implemented on top of an ORM, but instead on an object model built on a key-value + edge store internal to Facebook.
In terms of other criticisms in this thread:
1. Exceptions: The default behavior in graphql-js (mimicked in other language implementations) of swallowing native exceptions by default was probably a mistake in hindsight. Whenever I've played with GraphQL using different toolsets the first thing I change is to add a wrapper function which checks for errors and then rethrows the initial exception that caused the GraphQL error for use in testing and CI/CD contexts.
2. Caching: Personally I've always been confused about the concern with leveraging HTTP-level caching. While a clever hack, with any real app with any sort of even mildly dynamic behavior you don't want to do this. Staleness will be interpreted, rightly, as bugs by your users. If you want to replicate the behavior the most straightforward way would be to use persisted queries (described here https://blog.apollographql.com/persisted-graphql-queries-wit...) combined with HTTP GETs. With persisted queries you can encode the entire request in the query string, which should get you the HTTP-level caching you want.
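The persisted-query idea is essentially: hash the known query text ahead of time, register it server-side, and let clients send only the hash as a GET parameter so ordinary HTTP caches can key on the URL. A rough sketch (parameter names are illustrative, not Apollo's exact wire format):

```python
import hashlib
from urllib.parse import urlencode

# At build time: hash the known query and register it with the server.
QUERY = "{ user(id: 1) { name } }"
query_hash = hashlib.sha256(QUERY.encode()).hexdigest()
persisted = {query_hash: QUERY}   # server-side registry of allowed queries

# At request time: the client sends only the hash, as a GET, so normal
# HTTP caches (browser, CDN) can cache on the URL.
url = "/graphql?" + urlencode({"extensions": query_hash})

def resolve_persisted(h: str) -> str:
    # Server looks up the real query text by its hash.
    return persisted[h]
```

As a side benefit, the registry doubles as an allowlist: arbitrary ad-hoc queries from untrusted clients can be rejected outright.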
3. Docs: Quite confused about this one. While particular implementations of GraphQL can be problematic the documentation of the core language (which I am not responsible for) is superb. See http://graphql.org/ and https://www.howtographql.com/.
[+] [-] IBCNU|7 years ago|reply
[+] [-] 013a|7 years ago|reply
Also; debugging and monitoring GraphQL APIs sucks. Considerations:
- Any subfield of a query can throw an error, but the rest of the fields can succeed, because GraphQL frameworks are allowed to run each field resolver asynchronously.
- Because of this, any GraphQL query is capable of returning multiple errors.
- Rate limiting is exceedingly difficult due to nested resolvers. I've seen solutions which involve annotating your schemas with "cost" numbers, and only allowing each query to run up to a maximum "cost" before failing by dynamically adding the costs of each field they request. Traditional rate limiting doesn't work.
- Traditional APM platforms also don't work. Prepare to adopt Apollo Engine and pay them $600/month on top of the money you're already paying New Relic or Datadog.
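The cost-annotation approach mentioned above can be sketched like this; the field weights, budget, and selection shape are all made up for illustration:

```python
# Sketch of cost-based query limiting: each field carries a cost weight,
# nested list fields multiply by their requested page size, and the query
# is rejected before execution if the total exceeds a budget.
FIELD_COSTS = {"user": 1, "friends": 2, "posts": 2}  # hypothetical weights

def query_cost(selection, multiplier=1):
    total = 0
    for field, spec in selection.items():
        first = spec.get("args", {}).get("first", 1)
        total += FIELD_COSTS.get(field, 1) * multiplier
        # Children are multiplied by the page size requested at this level.
        total += query_cost(spec.get("fields", {}), multiplier * first)
    return total

def check_budget(selection, budget=100):
    cost = query_cost(selection)
    if cost > budget:
        raise ValueError(f"query cost {cost} exceeds budget {budget}")
    return cost
```

Because the multiplier compounds through nesting, a deeply nested query with large page sizes blows the budget even though it is a single request -- which is exactly what per-request rate limiting fails to capture.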
[+] [-] xentronium|7 years ago|reply
There are some quirks (error handling), performance issues (e.g. fixing n+1 queries) and DOS concerns, but again, it isn't all that bad.
(we're using rails/graphql-ruby on backend | react/relay on frontend)
[+] [-] thermodynthrway|7 years ago|reply
GraphQL just feels too tied to the datastore on the back end to be generally useful. REST/Swagger is hugely overcomplicated for the basic REST premise of moving objects back and forth.
gRPC is what REST should have been. Ship objects back and forth between multiple languages with minimum fuss.
[+] [-] nicwolff|7 years ago|reply
I'm backend Systems Architect at a big publishing company, and my current primary project is an aggregating caching GraphQL proxy for our REST microservices.
Our front ends were making too many calls to the REST APIs, so we went overboard embedding related resources – and now they're getting too much unneeded data back, and cache invalidation is a nightmare. Sounds familiar, probably!
So we're building a GraphQL service that stitches those REST APIs together to let the caller request exactly the fields they need, from any API's resource. By caching individual resources, rather than nested multi-resource serializations, we can invalidate easily by UUID on change events – so cache TTLs can be long – and the GraphQL API's field resolvers can assemble complex responses with a few fast Redis MGETs, which are batched by DataLoaders.
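The caching scheme described above can be sketched with a plain dict standing in for Redis (the real system uses Redis MGET plus DataLoader batching; names here are illustrative):

```python
# Individual resources cached by UUID: invalidation on a change event
# deletes just that one key, and a nested response is assembled from a
# single batched multi-get rather than per-resource round trips.
cache = {}  # stands in for Redis

def cache_put(uuid, resource):
    cache[uuid] = resource

def invalidate(uuid):
    # Triggered by a change event for exactly this resource; no nested
    # multi-resource serializations to hunt down and expire.
    cache.pop(uuid, None)

def mget(uuids):
    # One batched lookup, as a DataLoader would issue against Redis MGET.
    return [cache.get(u) for u in uuids]

cache_put("a1", {"title": "Story"})
cache_put("b2", {"name": "Author"})
article, author = mget(["a1", "b2"])  # one round trip, two resources
invalidate("a1")                      # article changed; author stays cached
```

Caching at the granularity of single resources is what makes the long TTLs safe: a change event maps to exactly one key.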
This also gives us a place to centralize business logic, rather than having each front end service reimplement field formatting, resource transforms, &c. Since the REST APIs remain available as the source of truth, existing services can migrate to the GraphQL proxy at their own pace, which we hope will be an easy sell since it's so much faster.