This post feels like an uninformed and undifferentiated rant that "things are too complex". Let's start with the first paragraph: what does the JavaScript fetch API have to do with data management? How can you compare the fetch() API with Swagger (an API documentation format) and with Protobuf (a serialisation format)? That doesn't even make sense.
Second paragraph: "The UI should automatically update everywhere the data is used". Again, what does this have to do with any of the above? That is state management, yeah, and you can build proper state management with any HTTP library and any message serialisation format.
Request batching: How would that happen "automatically"? By waiting to fire requests and then batching them?
UX when fetching data: What does that have to do with any of the above? You still have to decide how your UI displays the fact that a piece of data is loading. What do you expect there to be in place? Best thing I could imagine is to have a global operations indicators a la Adobe Lightroom which tells you how many HTTP requests are in flight.
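A global operations indicator like that is easy to sketch if you're willing to wrap fetch() yourself. This is only an illustration; `createFetchTracker` and its API are made-up names, not any library's:

```javascript
// Wrap a fetch-like function and count requests in flight, so a single
// global indicator (a la Lightroom) can subscribe to the count.
function createFetchTracker(baseFetch) {
  let inFlight = 0;
  const listeners = new Set();
  const notify = () => listeners.forEach((fn) => fn(inFlight));

  const trackedFetch = async (...args) => {
    inFlight += 1;
    notify();
    try {
      return await baseFetch(...args);
    } finally {
      inFlight -= 1; // decrement on success *and* failure
      notify();
    }
  };

  return {
    fetch: trackedFetch,
    onChange(fn) { listeners.add(fn); return () => listeners.delete(fn); },
  };
}
```

The app then calls `tracker.fetch` everywhere, and one UI component subscribes via `onChange` to render the count.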
I could go on, but the last paragraph maybe best highlights the author's lack of understanding: "UI Frameworks (at this point, React has won)". If React had "won", why would we be having this discussion? React hasn't "won", because it solves only one piece of the puzzle: rendering. For every other little thing you have to incorporate another library or figure out your own solution: routing, state management, CRUD / HTTP APIs, etc. If anything, Ember.js would most closely fit the bill of incorporating most of the things the author seems to care about yet can't articulate clearly.
Data fetching, caching, consistency, and UX are all closely related. If you treat them as separate problems, you're punting the problem onto product engineers, who won't solve it well. (See the last paragraph of the post, which suggests this same idea.)
> That is state management, yeah, and you can build proper state management with any HTTP library and any message serialisation format.
You're right that to do this manually it's just state management. But to automatically update the UI, it means your client data layer (eg. Apollo) needs to know & track the identity of fields; the data layer also needs to be able to subscribe to fetches anywhere in your app, not local fetches; it also means your protocol needs to support this identity (via agreed-upon ID fields); etc. These problems are all closely related.
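As a minimal sketch of what "tracking identity" means in practice, here is a toy normalised cache, assuming every object carries an agreed-upon `id` field (the protocol-level requirement mentioned above). This is an illustration, not Apollo's actual implementation:

```javascript
// A tiny normalised cache: entities are stored by id, and any part of the
// app can subscribe to an id, regardless of which fetch wrote the data.
function createNormalisedCache() {
  const entities = new Map(); // id -> latest merged version of the object
  const watchers = new Map(); // id -> Set of callbacks

  return {
    // Merge a fetch response into the cache and notify every subscriber.
    write(objects) {
      for (const obj of objects) {
        const merged = { ...(entities.get(obj.id) || {}), ...obj };
        entities.set(obj.id, merged);
        (watchers.get(obj.id) || []).forEach((fn) => fn(merged));
      }
    },
    read(id) { return entities.get(id); },
    subscribe(id, fn) {
      if (!watchers.has(id)) watchers.set(id, new Set());
      watchers.get(id).add(fn);
      return () => watchers.get(id).delete(fn);
    },
  };
}
```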
> Request batching: How would that happen "automatically"? By waiting to fire requests and then batching them?
eg. Relay does this by statically combining requests throughout a React tree into a single request at the top of the tree. You could also do it dynamically, as you suggest. The tradeoff is often performance.
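The dynamic version can be sketched in a few lines: queue requests for one tick, then send them as a single payload. `sendBatch` here is a stand-in for whatever endpoint accepts batched queries, not any particular library's API:

```javascript
// Collect all requests issued in the same tick and send them as one batch.
function createBatcher(sendBatch) {
  let queue = [];
  let scheduled = false;

  return function request(payload) {
    return new Promise((resolve, reject) => {
      queue.push({ payload, resolve, reject });
      if (!scheduled) {
        scheduled = true;
        queueMicrotask(async () => {
          const batch = queue;
          queue = [];
          scheduled = false; // later requests start a fresh batch
          try {
            const results = await sendBatch(batch.map((b) => b.payload));
            batch.forEach((b, i) => b.resolve(results[i]));
          } catch (err) {
            batch.forEach((b) => b.reject(err));
          }
        });
      }
    });
  };
}
```

The tradeoff mentioned above is visible here: every request pays at least one tick of added latency waiting for potential batch-mates.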
> UX when fetching data
Having engineers manually define loading states doesn't scale. React is approaching this problem with Suspense, and you could imagine standard loading states when fetches are in flight.
I think the post is a bit unfortunate in its wording and this seems to have sent you off on the wrong track.
From how I read it, the post is not specifically about JS or a discussion of the specific technologies mentioned, but rather concerned with the following very general situation:
1. There's a user sitting in front of a browser.
2. There's a backend server providing data and points of interaction with that data.
I think the central point of the post is that we don't have a satisfying technical solution for this situation.
Let's take a look at one of the points you mentioned. Maybe you'll find there are actually some valid points in the post and give it a more favourable reread.
> UX when fetching data: What does that have to do with any of the above?
Here, the author of the post writes: "It’s a big burden for engineers to have to manually add loading spinners and error states, and engineers often forget."
I think it's more or less clear how you could implement this using only plain vanilla JS. Cumbersome but doable: a very manual, imperative process.
Now let's envision a technology from a possible future:
const twitterFeed = createMagicDataSource("https://twitter.com/...")
const feedComponent = magicRendererComponent(twitterFeed, state => /* HTML like declarative description of the visuals */)
Imagine this was everything you had to write in your code to get the following:
* state gets automatically loaded when the first instance of the component is created
* updates are automatically visualized in the client based on the internals of the declaration in the renderer
* you don't have to specify if the updates are done by polling, websockets or whatever: the two magic methods figure this out by themselves.
* you don't have to specify how the data is fetched in the first place: giving a URI to the `createMagicDataSource` function is enough.
* additional instances of the component don't fetch the data again
* updates are efficient: the sync method only exchanges exactly the data required, only the minimal visual updates are performed
* marking feeds as "seen" by the user is also done by magic and syncs across devices (same for other non-ephemeral ui state).
Now, I'm sure you'd agree that we are not there yet technologically. But I hope you also agree that this would be really nice.
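One of those bullets ("additional instances of the component don't fetch the data again") can at least be approximated today by sharing in-flight promises per URI. This is a rough sketch of the idea, roughly what data-fetching libraries do under the hood, not a real `createMagicDataSource`:

```javascript
// Deduplicate concurrent fetches for the same URL: every caller gets the
// same promise while a request is in flight.
const inFlight = new Map();

function dedupedFetch(url, realFetch = fetch) {
  if (!inFlight.has(url)) {
    const p = Promise.resolve(realFetch(url))
      .finally(() => inFlight.delete(url)); // allow refetching later
    inFlight.set(url, p);
  }
  return inFlight.get(url);
}
```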
I actually think the problems outlined are pretty valid.
Yeah, there are solutions to them, but I think what the author is saying is that you have to subscribe to something batteries-included, like Meteor, or otherwise implement the solutions yourself from bits and pieces.
I might intuit that 'request batching' could benefit from HTTP/2, but that's probably not what they were thinking about. And you'd have to have a lot of simultaneous HTTP requests for it to help at all.
But I'm with you that batching this in your JS should not be a browser thing, or even a built-in standard-library JS function.
One could do it themselves - maybe GraphQL-style more complex queries, or maybe just throw the requests you want to make into an array and, when it hits length = 5, send them all at once. But I don't see how that would help response times at all.
Almost all of the mentioned issues have solutions out there, but the author is jumping around the stack with no sense of true purpose. How does GraphQL impact your SPA updating views? It doesn't.
Half of the stuff he mentioned is covered by apollo-graphql and libs in that ecosystem: query batching, notifications (via subscriptions), automatic state updates via the cache, type safety with TypeScript and graphql-codegen. It doesn't seem like he even looked at the libraries he listed. :(
Ember.js seems to check off a lot of those boxes, if you are willing to use a complete solution and not do the 'pick and choose' method of react and friends. While it has lost popularity in the last few years, it has been moving forward technically. It has become leaner and meaner.
Really learning the "Ember Way" to do things can reduce some of the friction the author mentions.
Ember Data can allow you to talk to multiple API backends (differing schemas) while presenting the same model to your UI. If you use the default JsonApi[0]-based backend, you get a lot of things for free: powerful filtering, side-loading data (relationships), consistent error handling. Sometimes it can be chatty, but that's a spot where HTTP/2 can help.
Use the ember-concurrency add-on and you have a nice way to manage your requests: things like debouncing, handling loading spinners, etc.
I'm saddened to see that almost a decade after Ember's release, the front-end world still insists on rolling its own (terrible) reimplementation of it.
I would've expected that a decade later we would've settled on an Ember-like framework for the front-end and moved beyond constructing URLs and parsing JSON responses manually but apparently not.
I was just talking with someone earlier today about how I sometimes wonder if we traded off "performance" (heavy air quotes intended) for developer productivity too quickly. Ember solves a lot of the big problems of modern web dev and data handling like this blog post points out, and when I think back to my time working with React I wonder if that trade-off was really worth it.
> You shouldn’t have to ship kilobytes of metadata describing your data schema to clients in order to fetch data.
I feel like this requirement makes every other requirement impossible unless you use some kind of compiled language feature and even then you're not going to be able to have type safety without some kind of metadata.
Also, I don't think it's possible to make any one library that solves all "data fetching" scenarios. How could you reasonably fit live data (eg a stock ticker), enormous data sets (eg looking through individual analytics events), slow but largely static lookups (eg searching a library catalog), and fast but uncacheable data (eg a SPA forum) into the same API without making it horrendous?
That said, we do need some better standards. I look after a bunch of API integrations at work and we have everything from SOAP to GraphQL to downloading a magically named .xml.gzip file in an FTP directory, with at least 10 other homemade REST implementations in between, and every single one of them is basically returning the same info but in wildly incompatible ways.
At least the SOAP one crashes when they change the API without telling us :/
I'm currently experimenting with React and WebSockets and they seem to be a perfect fit.
No need to write wrappers for Fetch, network errors and reconnects can be handled on high-level, handlers for each message type can be mounted and unmounted on useEffect hooks, all back-end jobs can notify the user in realtime, all session-based client-side data can be updated in realtime (in single or multiple open tabs).
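The per-message-type handler pattern described above can be sketched framework-free. Handlers register for a type and get a deregister function back, which slots naturally into a useEffect cleanup. This is an illustrative shape, assuming JSON messages like `{ type, payload }`:

```javascript
// Route incoming WebSocket messages to handlers registered per message type.
function createMessageRouter() {
  const handlers = new Map(); // type -> Set of handler functions

  return {
    on(type, fn) {
      if (!handlers.has(type)) handlers.set(type, new Set());
      handlers.get(type).add(fn);
      return () => handlers.get(type).delete(fn); // cleanup for useEffect
    },
    // Wire this to socket.onmessage with the raw message text.
    dispatch(raw) {
      const { type, payload } = JSON.parse(raw);
      (handlers.get(type) || []).forEach((fn) => fn(payload));
    },
  };
}
```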
I'm also using uWebSockets.js[0] which is great in terms of API design, stability, and performance. Their benchmarks[1] are just convincing. Highly recommend people using ExpressJS / Koa / whatever to try it.
Not to toot our own horn, but while this mentions GraphQL with Relay / Apollo as fetching clients, with urql and its normalised cache Graphcache we've started approaching more of these problems.
Solving the problems the article mentions around fragment best practices is on our near-future roadmap, but there are some other points here that we've worked on that Apollo especially has not (yet).
Request batching is, in my humble opinion, not quite needed with GraphQL, especially with HTTP/2 and edge caching via persisted queries; however, we have stronger guarantees around commutative application of responses from the server.
We also have optimistic updates and a lot of intuitive safe guards around how these and other updates are applied to all normalised data. They're applied in a pre-determined order and optimistic updates are applied in such a way that the optimistic/temporary data can never be mixed with "permanent" data in the cache. It also prevents races by queueing up queries that would otherwise overwrite optimistic data accidentally and defers them up until all optimistic updates are completed, which will all settle in a single batch, rather than one by one.
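The core invariant described here, that optimistic data lives in its own layer and can never be confused with confirmed server data, can be sketched roughly like this. To be clear, this is a toy model of the idea, not urql's actual implementation:

```javascript
// Keep optimistic writes in a separate layer on top of confirmed data.
// Reads prefer the optimistic layer; settling a mutation promotes the
// server's value and discards the temporary one.
function createOptimisticStore() {
  const confirmed = new Map();  // data the server has acknowledged
  const optimistic = new Map(); // temporary data from pending mutations

  return {
    read(key) {
      return optimistic.has(key) ? optimistic.get(key) : confirmed.get(key);
    },
    applyOptimistic(key, value) { optimistic.set(key, value); },
    // Called when the server responds: only now does data become permanent.
    settle(key, serverValue) {
      confirmed.set(key, serverValue);
      optimistic.delete(key);
    },
    rollback(key) { optimistic.delete(key); },
  };
}
```

A real client additionally queues queries that would overwrite optimistic keys, as described above; that bookkeeping is omitted here.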
I find this article really interesting since it seems to summarise a lot of the efforts that we've also identified as "weaknesses" in normalised caching and GraphQL data fetching, and common problems that come up during development with data fetching clients that aren't aware of these issues.
Together with React and their (still experimental / upcoming) Suspense API it's actually rather easy to build consistent loading experiences as well. The same goes for Vue 3's Suspense boundaries too.
Edit: All this being said, in most cases Relay actually also does a great job on most of the criticism that the author lays out here. So if the only complaint a reader picks up is the DX around fragments and nothing else applies, this once again shows how solid Relay can be.
Isn't this like someone in 2011 saying "UI frameworks (at this point, jQuery has won)"? I think things like Svelte and other up and coming UI frameworks are still being developed because we all recognize that React is not the ultimate UI Framework. Perhaps we will never get there, but surely we can do better than where we are at.
^ React is just the new jQuery. We have people who basically don't really know HTML, CSS, or JavaScript but make React apps day-in day-out. Ten years ago the exact same thing was happening with jQuery.
I still cringe when I see questions on StackOverflow where they ask "how can I XYZ with jQuery" where XYZ is something that has absolutely nothing to do with the DOM.
I too am curious. All of the platform-specific frameworks which do networking+datastore, that I've used, check even fewer boxes. Especially when it comes to reactive UI and optimistic responses.
This post is hard to digest - it is a rant on 10 different things. All are valid hurdles but I don't see how we can just tie it all up in a bag and call it "data fetching".
Architectures are what help solve these sorts of problems - not tools or libraries. Placing the blame on tools means the real issue has not been identified.
This is possible with the Fetch API actually. You can get the chunked up contents of the request as they arrive. This link has a good example: https://javascript.info/fetch-progress
This applies to everything in the browser honestly. Why can’t I bind a variable to the DOM natively? I want the variable X to match the value of <input> and vice versa without having to set up a bunch of listeners and hope they don’t go in a loop.
sounds like you want something like Vue baked into the browser. but i'd prefer React baked into the browser! we can't have both, so it's probably best we have neither :)
seriously though, why not just write a simple wrapper around event listeners + maybe some proxy magic and get the semantics you prefer? or find a library that does that, i'm sure there's a bunch out there
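A toy version of that wrapper, for the "bind a variable to an <input>" case, might look like this. `bindToInput` is a made-up name; the element only needs `.value` and `.addEventListener`, so a real input works:

```javascript
// Two-way binding via a Proxy: writes to the returned object update the
// element, and 'input' events update the object. The equality check in the
// set trap is what keeps the two from looping.
function bindToInput(el, initial = '') {
  const target = { value: initial };
  el.value = initial;
  el.addEventListener('input', () => { target.value = el.value; });
  return new Proxy(target, {
    set(obj, prop, value) {
      obj[prop] = value;
      if (prop === 'value' && el.value !== value) el.value = value;
      return true;
    },
  });
}
```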
Server-side React components have the best data fetching story I've ever seen.
They demoed an application with a React component powered directly by a SQL query; when the client next fetched anything from that page, changes in the data coming out of the SQL query were automatically propagated to the client.
No data-fetching API: just coding like you would in a server-side app, with React handling the IO.
I hope someone starts implementing something like Drupal on top, for automatic deadlock prevention, declarative data-driven interfaces for forms, schema, etc.
You can serialize pretty complex graphs of data and UI information in a new format called "HTML."
The software capable of consuming this format is able to merge fragments of it into something called a DOM.
If you're doing something more complex than this in your app then quit whining about browsers not being convenient for you to abuse. The users don't want your crap any more than browser implementers do.
whateveracct (5 years ago):
Haxl is good prior art for automatic batching and concurrency. But it helps that Haskell makes the "when" very explicit thanks to its type system.
https://engineering.fb.com/2014/06/10/web/open-sourcing-haxl...
yrio (5 years ago):
(I'm not familiar with Ember.js)
[0] https://jsonapi.org/format/
[0] https://github.com/uNetworking/uWebSockets.js/
[1] https://github.com/uNetworking/uWebSockets/tree/master/bench...
azangru (5 years ago):
Needs a fairly capable server though, I suppose. One that can handle lots and lots of open websocket connections.
deergomoo (5 years ago):
Where React still leads handsomely is tooling (the official VS Code Vue plug-in is buggy as all hell), breadth of ecosystem, and employability.
pier25 (5 years ago):
Fetching data has nothing to do with managing state, rendering UI, or even UX. Each layer has different problems it needs to address.
And btw you can totally do GraphQL requests with fetch().
gitpusher (5 years ago):
However, by NO means should all of this be handled by one "data-fetching library" - as he suggests.
bcherny (5 years ago):
I might lump its support into a few groups:
- Full (eg. request batching)
- Full, but painful (eg. colocation)
- Partial (eg. mutation queuing)
- Non-existent (eg. durability guarantees)
(I've been using Relay Modern daily for a few years.)
iskin (5 years ago):
Logux is one example of this approach:
https://logux.io/
ficklepickle (5 years ago):
One of the many reasons I love nuxt/Vue.
dinkleberg (5 years ago):
Overall react-query is impressive for data fetching, but I’ve run into issues with mutations and cache invalidation.