I will go even further with this one. I've been working on a personal project with REST APIs on the backend and a fancy combo of TypeScript/React/Redux Saga on the frontend. Everything was shiny and cool, except the overall development process was super slow. I'd spend hours polishing components, figuring out various states, installing TS libraries, fighting the compiler, and god knows what else. It was exhausting and tiresome, until one day I said "f#%k it, I'm done". I opened my favorite search engine and typed "bootstrap premium themes". jQuery? Whatever. HTML? Fine. Pure CSS? Sure! No AJAX. No fancy reloads. No history management. Plain old <form method="POST">, good ol' cookies, and simple HTML files. Within just three weekends I was already MILES ahead of my previous stack.
Long story short: if you are working on a personal project, please consider the dumbest setup you can get away with. Even with the vast array of super polished modern frameworks on offer, it'll take you pretty far. A few more weekends and I'll probably be ready to go to prod.
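For a sense of how small the "dumb setup" can be, here is a minimal sketch (route and field names invented) of the plain-form-and-cookie flow as a stdlib WSGI app, with no JavaScript at all:

```python
from http import cookies
from urllib.parse import parse_qs

def app(environ, start_response):
    """Tiny WSGI app: GET serves a plain HTML form, POST reads it and sets a cookie."""
    if environ["REQUEST_METHOD"] == "POST":
        size = int(environ.get("CONTENT_LENGTH") or 0)
        form = parse_qs(environ["wsgi.input"].read(size).decode())
        name = form.get("name", ["stranger"])[0]
        c = cookies.SimpleCookie()
        c["name"] = name  # good ol' cookie instead of client-side state
        start_response("200 OK", [("Content-Type", "text/html"),
                                  ("Set-Cookie", c["name"].OutputString())])
        return [f"<p>Hello, {name}!</p>".encode()]
    # plain old <form method="POST">, no AJAX, no history management
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b'<form method="POST"><input name="name"><button>Go</button></form>']
```

You can serve this with `wsgiref.simple_server.make_server("", 8000, app)`; the whole "stack" fits in one file.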
Edit: remembered something fun. I have a page that needs to poll the backend and then take some action, so I thought a bit of Ajax would be fine. I opened the corresponding HTML file and started typing:
<script>
jQuery.ajax(...)
but then... hold on a minute! That's getting way too complex. META REFRESH FTW, M%F%CK%RS! :D
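For reference, the meta-refresh trick really is just one tag: the page re-requests itself every N seconds, which is all the "polling" many pages need. A sketch (the status text and interval are invented) of a server-side helper that renders such a page:

```python
def polling_page(status, interval_s=5):
    """Render a page that reloads itself every `interval_s` seconds via
    <meta http-equiv="refresh">: polling with zero JavaScript."""
    return (
        "<!doctype html><html><head>"
        f'<meta http-equiv="refresh" content="{interval_s}">'
        "</head><body>"
        f"<p>Job status: {status}</p>"
        "</body></html>"
    )
```

The browser does the polling loop for you; when the backend decides the job is done, it simply renders a different page without the refresh tag.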
Same here, except I didn't go back to that extreme. There is a wonderful, underrated middle ground: using tools such as Django + Unpoly[1] + Slippers[2] + Tailwind, or Rails + view_components + Hotwire + Tailwind, etc. You can be insanely productive while still keeping your code very maintainable.
[1] https://unpoly.com/
[2] https://mitchel.me/slippers/
People usually think that it is either a modern SPA or jQuery spaghetti. Those are two extremes. If you put 10% of the effort you were putting into building your SPA + API into better organizing a "modern traditional" stack... it can be wonderful.
> Long story short, if you are working on a personal project, please, consider the most dumb setup.
This is not limited to personal projects. I can’t recall more than a single project I’ve worked on during the last decade where front-end code was really useful. Some cool stuff, ok, but never worth the pain.
I acknowledge it can be useful for some real-time projects. Not for your CRUD-for-a-living.
You took a similar path that I did with my personal website. Kubernetes, istio, redis, OIDC, redux, sagas, you name it. All this unnecessary complexity was, however, deliberate. It was an excuse to learn about all the interesting things people are talking about. Then one day I decided the experiment was over and rewrote it from scratch in ~24 hours using plain create-react-app and Ubuntu on a $5/mo droplet. It was a valuable learning experience.
How does one reconcile moving fast like this with having fun coding? For me, I don't really want to use PHP, jQuery, etc.; I want to use TypeScript and React simply because they're more fun to use, if a little slower.
"...TypeScript/React/Redux Saga on the frontend..." This plus RxJS + websockets + in-house validation libraries + ... was the stack of my last company. For doing F**ng CRUD forms.
This was one of the main reasons why I left.
Also, consider using Rails and Turbolinks if you're familiar with them. You can get pretty far with turbolinks and the responsiveness is similar to a single page app.
Here's an idea – instead of passing the "visual page structure" to the client as JSON, use a markup language specifically designed for that purpose – HTML.
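If that idea sounds abstract, here's a minimal sketch (component and field names invented) of what "HTML over the wire" means in practice: the endpoint renders the fragment server-side, and the client just swaps it into the DOM instead of receiving JSON plus client-side templates:

```python
import html

def render_todo_fragment(todos):
    """Server-side render of one component as an HTML fragment; a thin client
    (htmx, Unpoly, Turbo, ...) can swap this straight into the page."""
    items = "".join(
        f'<li class="{"done" if t["done"] else "open"}">{html.escape(t["title"])}</li>'
        for t in todos
    )
    return f'<ul id="todos">{items}</ul>'
```

The server already knows the visual structure, so it ships markup directly; escaping happens in one place instead of in every client.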
What ends up happening though is that you have to build that general purpose API for mobile apps regardless, and right after that developers start using it to render their web app components. Rinse and repeat.
> Here's an idea – instead of passing the "visual page structure" to the client as JSON, use a markup language specifically designed for that purpose – HTML.
Why didn't you just come out and say "Use Hotwire!" or "Use LiveView!" ;)
Wait, are you saying that "html-over-the-wire" is preferable even for mobile apps, or that having mobile clients makes using "html-over-the-wire" undesirable?
I totally agree with the idea of catering your BE to service your -- most likely -- single FE client. Every rails project I have worked on has this 1:1 mapping between endpoints and entities and it drives me insane. We set up these APIs to have clean lines between models and endpoints and consequently we push all the complexity of combining all of these related entities onto the FE. Then we wonder why the FE is so complicated! If I have to make 5+ API requests for one page then there was a severe failure in planning the API.
What's even more frustrating is that this is the rationale for moving to something like GraphQL. We have engineers advocating for it because then it's "just one request per page", and it doesn't click that the framework we are using is pushing us into a less-than-favorable API design.
GraphQL is appealing from the frontend perspective, but I've yet to see a case where it would do anything but make the backend 10x harder to develop, and it doesn't solve the problem of the frontend also having to know all of the entity relationships. Since you're using Rails, it's a lot easier to just add a controller endpoint that provides the composite data (as has been mentioned elsewhere already) for one UI action/page/navigation/state change, instead of using an entity-focused API. Controllers are supposed to abstract over potentially multiple models, not just be 1:1 endpoints for each type of entity.
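As a sketch of that composite-controller idea (the model functions below are hypothetical stand-ins for real ORM calls): one controller action gathers everything a single screen needs, rather than exposing each entity separately and making the frontend join them:

```python
# Hypothetical stand-ins for real model/ORM lookups.
def fetch_project(pid):
    return {"id": pid, "name": "Apollo"}

def fetch_open_tasks(pid):
    return [{"id": 1, "title": "Ship it"}]

def fetch_members(pid):
    return [{"id": 7, "name": "Ada"}]

def project_dashboard(pid):
    """One controller action abstracting over several models: the
    'project dashboard' page makes exactly one request."""
    return {
        "project": fetch_project(pid),
        "open_tasks": fetch_open_tasks(pid),
        "members": fetch_members(pid),
    }
```

The entity relationships stay on the backend; the page receives data already shaped for that one screen.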
I may have misunderstood what you're saying, but binding your endpoints to your entities isn't a bad idea at all. For example, even when you're not making an SPA and are using good ol' server-side generated Rails, you would still have a 1:1 mapping of pages to resources, and your "app" would move you from page to page as you did stuff. This works out pretty well because your resources are what people want to interact with!
Try out Basecamp sometime - watch the routes, they're all well bound to resources, yet it feels very much like an app. The progenitors of Rails are still doing things the old way and they're pretty good at it.
I completely agree with the author. A few months back I examined the number of network requests required to populate the data for a page I was trying to improve performance on.
Let's just say it did not go well. Despite best intentions of being API-first with a SPA front-end, when you have a data-heavy and query-heavy application, it is absolutely the wrong choice ten times out of ten.
It leads to (not so) hilarious situations where the older, server-side rendered version of your app that uses jQuery absolutely demolishes your new hotness in performance. Try explaining that one with a straight face.
Did you consider endpoint composition? Instead of making 7 API requests (via fetch, or whatever), provide an endpoint that’s simply the composition of the 7 endpoints.
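One literal way to do that (handler names invented): keep the existing handlers, and add one endpoint that fans out to them server-side and merges the results, so the client makes a single request:

```python
def compose(**handlers):
    """Build one endpoint out of several existing ones: call each handler
    server-side and merge the results under its name."""
    def composed(request):
        return {name: handler(request) for name, handler in handlers.items()}
    return composed

# e.g. /dashboard becomes the composition of three existing endpoints
get_user = lambda req: {"id": req["user_id"]}
get_notifications = lambda req: ["welcome"]
get_feed = lambda req: []

dashboard = compose(user=get_user, notifications=get_notifications, feed=get_feed)
```

The seven round trips collapse into one, and the individual handlers stay reusable on their own routes.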
I feel like your last example is a known tradeoff, not an exception. Frontend frameworks never intended to solve performance; a more powerful, dev-friendly experience has usually come at the cost of performance.
They are heavier applications, not simple websites, and you pay for that in performance and often UX latency. The modern web feels slower than it did in 2010 in many situations: latency, plus lazy developers not implementing affordances for when things are loading or the page is changing. Your app is probably slow for everyone else, so you need loading spinners! Especially if you are hijacking the browser navigation.
With HTTP/2's multiplexing, it doesn't really matter how many requests the frontend fires off. Just use HTTP/2 when you have multiple requests on a page.
Just another reason to use GraphQL. I really don't understand why the industry has to spend another decade kicking and screaming until everyone transitions to it.
I've built and used APIs for full systems in all three (RPC, REST, and GraphQL), as both a creator and a consumer, and as far as I'm concerned everything else is dead.
Do you honestly think that if we got a bunch of people to comment who'd used all three, they'd all say "Yep, GraphQL, that's my favourite"? None of them is dead, because there are people who prefer each one, and ceteris is rarely paribus: if one of them (I originally wrote RPC, but I suppose it could be any of them) is already in use, say for inter-microservice communication, that's going to tilt the balance.
I mostly agree with this (provided you are sure you should be building an SPA in the first place, which is currently a massively over-used architectural pattern).
The one thing I disagree with is dismissing the idea "But we can reuse this API for the mobile app too!"
Depending on how your organization is structured, it can be common for the mobile app team to end up needing APIs that aren't being delivered promptly.
Should that happen, having a web front-end that's entirely powered by APIs can level the playing field enormously - the website can no longer "cheat" and not bother with an API, which means the mobile team will get everything they need.
I'm truly starting to think that organizations with silos around what you can code are just wrong, and prone to this over-engineering thing.
Just make your backend the shared space between your teams. There is no reason a qualified front end or mobile programmer could not at least write the controller for the endpoint they need.
It's a solid pattern. Eventually I find you end up wanting a write API for validation, and a read API for flexible querying in some applications though.
That is well-served by GraphQL mutations and queries.
BFF pattern can be more approachable and reduce client code, however, and that's a plus.
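That write/read split can be as simple as two differently shaped functions; here's a sketch (the validation rule and field names are invented) with a strict, validating write path and a flexible, filterable read path:

```python
def create_user(data):
    """Write path: strict validation, one accepted shape."""
    if not data.get("email") or "@" not in data["email"]:
        raise ValueError("valid email required")
    return {"id": 1, **data}

def query_users(users, **filters):
    """Read path: flexible field-by-field filtering for ad-hoc queries."""
    return [u for u in users if all(u.get(k) == v for k, v in filters.items())]
```

Writes stay narrow and safe while reads stay expressive, which is the same division GraphQL draws between mutations and queries.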
One of the best parts of Twitter is that they load their main content and the various sidebar widgets asynchronously. That allows you to see things that load first, and makes it appear as if something is "happening".
I don't like this concept of sending the page structure as JSON.
It would require all parts of the UI to just be stuck on "loading" while the back-end renders essentially the entire page.
If you decide to turn this into "ok well make it /page/a/component/3" then we're back to square one on the whole idea.
If everything on the page is slow to load, then I would agree. Twitter is such a crazy outlier that I can't even imagine what they have to deal with. In a usual case you'd get a rare 1-2 slow retrievals, and the rest of the page can show up immediately.
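The "show what loads first" behavior can be approximated even server-side; a sketch with asyncio (widget names and timings invented): each widget gets a time budget, and slow ones degrade to a placeholder instead of blocking the whole page:

```python
import asyncio

async def load_widget(name, delay):
    await asyncio.sleep(delay)  # stand-in for a real data fetch
    return f"<div>{name}</div>"

async def render_page(budget=0.1):
    """Render fast widgets now; anything over budget falls back to 'loading'."""
    widgets = {"main": 0.01, "trends": 0.01, "who_to_follow": 5.0}

    async def guarded(name, delay):
        try:
            return await asyncio.wait_for(load_widget(name, delay), timeout=budget)
        except asyncio.TimeoutError:
            return "<div>loading...</div>"

    results = await asyncio.gather(*(guarded(n, d) for n, d in widgets.items()))
    return dict(zip(widgets, results))
```

The 1-2 slow retrievals no longer hold the fast 90% of the page hostage, which matches the usual case described above.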
Another use case where you are very probably better to build a general-purpose API: when you want offline support, especially if you want sync. Ad-hoc offline support will cause you far more trouble than the effort of making a principled design. (As far as foundations for a sync-capable web protocol are concerned, I’d suggest JMAP as generally a good choice for client-server sync, and even if it doesn’t match your requirements, it’s good reading for those unfamiliar with sync considerations to get a feel for what you’re going to need.)
Once you get over the hump of splitting your data and app APIs, the next step is to realize that your application API can be a hypermedia API rather than a dumb JSON API, and you are off to the races:
https://htmx.org/essays/hypermedia-apis-vs-data-apis/
I'll go one step further than this - build an endpoint for each component in your frontend. That way you can re-use each of these components on multiple pages, and you end up with components that are scoped fairly narrowly.
In practice this ends up building out a reasonable approximation of what a "public" api would be. Eg, your WidgetList component forces out a /widgets endpoint, which might get re-used by some other widget too. That's fine. The point is you're still working UI-first and making the minimum viable backend.
With lots of components on a page, you might end up making multiple calls for the same information. That's also fine. You can optimize that later if it becomes a real problem.
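Concretely (routes and component names invented), per-component endpoints end up looking like a small route table, with some endpoints naturally shared once two components want the same data:

```python
WIDGETS = [{"id": 1, "name": "sprocket"}]

def widgets_endpoint():
    """Forced into existence by a WidgetList component; later reused
    by a WidgetPicker too, approximating a 'public' /widgets API."""
    return WIDGETS

ROUTES = {
    "/widgets": widgets_endpoint,            # WidgetList AND WidgetPicker
    "/widgets/count": lambda: len(WIDGETS),  # a narrowly scoped dashboard tile
}

def handle(path):
    return ROUTES[path]()
```

Each endpoint stays scoped to what one component renders, which is the UI-first, minimum-viable-backend workflow described above.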
How do you stop the spinner/load jank hell of 100 different components all requesting their own data? Not to mention, even if you're running a low traffic site so your backend performance is fine, the browser is limited to a set number of concurrent requests.
Locally, it might all go super fast, but as soon as you deploy it, that dashboard calling 30 endpoints is going to feel insanely slow just waiting for the network to become free for use.
> Have you seen that list of annoying decisions up there? For one, they are gone now.
Erm, even with that solution you still have to consider the changes that impact this schema. These problems didn't go away, they simply became masked differently.
I'd argue that the difference between "we're changing schema for all callers" vs "we're changing schema for 1 page of 1 frontend that we fully control" is so profound, as to render that whole concern moot.
I'm stuck in this quagmire and have been for 18 months. Right now it's manifesting as a React on GraalJS project written in Clojure. I've learned a lot, but actual progress is, of course, elusive.
I have done something similar with a previous project at work, though everything went through three layers: the front-end, which was reasonable, as they describe; the front backend, which just serves each page's required data; and the back backend, which was mostly the legacy version of the site's back-end but still had useful logic in it.
It ended up working out okay. The idea that each page gets its own service endpoint was a bit weird at first, but it really gave us the flexibility to avoid touching the legacy layer very often.
I just use GraphQL and freeze the queries at compile time so only known queries can be used. That stops worries about misuse from the frontend.
On the backend, for performance we have two choices:
1. Over-fetch. It's relatively cheap doing that from a cache. Most queries don't use too many different variations.
2. Optimize what data we fetch based on the node's children in the GraphQL query received. I don't think people do this often enough, but GraphQL gives you the full query as an AST, so you can use it at runtime to know what your node's children will be rendering before they're hit. Because we enforce #1, we don't have to build some general-purpose query builder; a simple check of "are you the X named query?" is good enough. Internally it looks kind of like JSON-RPC.
Am I doing it wrong? It seems to give you the best of both worlds because GraphQL APIs have great tooling and documentation behind them, and if you cut out all the 'general' purpose aspects in production you can be fairly efficient (e.g. our GraphQL API is about 5% slower than the JSON API it replaced).
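A simplified sketch of that setup (no real GraphQL library here; a frozen query is modeled as just a name plus the fields its client build selects, standing in for the real AST): unknown queries are rejected outright, and the resolver fetches only the columns the query's children will render:

```python
# "Frozen at compile time": only these named queries are allowed,
# each with the exact field selection its client build ships.
FROZEN_QUERIES = {
    "UserProfile": ["id", "name", "avatar_url"],
    "UserRow": ["id", "name"],
}

# Toy data store; note the field no frozen query ever selects.
DB = {42: {"id": 42, "name": "Ada", "avatar_url": "/a.png", "ssn": "secret"}}

def execute(query_name, user_id):
    """Reject unknown queries, then fetch only what the query renders."""
    if query_name not in FROZEN_QUERIES:
        raise PermissionError(f"unknown query: {query_name}")
    fields = FROZEN_QUERIES[query_name]  # stand-in for walking the AST
    row = DB[user_id]
    return {f: row[f] for f in fields}
```

With the general-purpose surface cut away in production, the check is a dictionary lookup rather than a query planner, which is how the overhead stays within a few percent of a plain JSON API.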
I think this is totally fine if you're willing to accept some performance trade offs in favor of productivity, you're staying mindful of limits and scoping, and just generally you're accustomed to living in the GraphQL world.
I could not agree more. This is the way I design things and it makes life so much easier. It solves so many problems by just building a separate API for each app. They can all use and re-use the same underlying functions and logic, but each API serves the data in the most convenient way for whatever page is being rendered. I would never go back.
Don't bother looking back. I would encourage you to consider GraphQL depending on your use case, but for data-heavy apps the generic API approach is the worst approach.
This resonates with my experience, which is basically doing this the wrong way and watching all of it play out the way OP warns it will.
I'm sure there are others with the opposite experience but I can vouch both for the desire and expectation of devs to build general-purpose APIs when not needed, and the ongoing pain resulting from that year after year.
I agree with this. It's tangentially related to it being so standard to refer to every JSON HTTP API as a "REST" API. REST APIs (in the strict sense) are for public integrations (but how common is that really?) / other teams in your (largish) company.
> Your business logic has now moved from being haphazardly split between frontend and backend into just backend.
This. I have spent countless hours pulling my hair out trying to understand backend business logic, only to find part of it implemented in the front end.
What's more, the supposed generic backend API makes quite a lot of assumptions about the front end orchestration, so the API can be used only in conjunction with the front end it is serving.
Now not only is your backend API not reusable, but the business logic is also split-brained between the front end and the backend.
> I wish. It’s pretty hard to measure these kinds of things in our industry. Who’s gonna maintain 2 architectures for the same software for 3 years, and compare productivity between them? All I got is a mixed bag of personal experiences. Feels inductively justifiable.
Oh wow this is sort of fascinating. Maybe we could crowd source experiments like this. Like maintain some random open source app with two different backend structures for X years and blog about it, share battle scars.
If you really need reactive global state, I'd prefer to use MobX.
It may be too heavy or whatever, but I can easily build SPAs, PWAs, and Electron apps from one place.
I forced myself not to look for the new shiny thing until I really get to a point where Quasar and Vue are not enough.
I am an amateur dev, and all my users will have evergreen browsers and fast connections, so plenty of concerns are moot.
This is easier said than done, however, as you basically need to reinvent parts of HTML, but worse (e.g. attaching event handlers to UI elements, etc.).
Or use GraphQL. Or a BFF for each client.
Your application API is churny, specific, and tuned for certain screens and user interfaces.
Your general data API is, well, general: rate limited, concerned with limiting the ability of that expressive power to damage your system, etc.
https://intercoolerjs.org/2016/01/18/rescuing-rest.html