
Tehnix | 11 months ago

> Ah, the penny drops. The idea that you can’t run a traditional server and must rely on serverless vendor if you’re “serious”

That's not at all how you should read this. Later on they give an example of exactly the kind of problem you'll run into once you start needing to horizontally scale your Next.js servers (e.g. as pods in k8s, which is not serverless):

> The issue of stale data is trickier than it seems. For example, as each node has its own cache, if you use revalidatePath in your server action or route handler code, that code would run on just one of your nodes that happens to process that action/route, and only purge the cache for that node.

Seeing as a Node.js server running Next.js with SSR or ISR (otherwise you'd just serve static files, which I personally prefer) is not known for great performance, you will quickly need to scale up your application once you hit any meaningful amount of traffic.

You can then try to keep scaling vertically to avoid the horizontal pains, but even that has limits, seeing as Node.js is single-threaded: you will run into the templating work of stringing together HTML simply taking too long (that is, compute always blocks the event loop; only I/O can be yielded).

The common solution for this in Python, Ruby, and JS/Node.js is to run more instances of your program. They could even be on the same machine still, but voilà: you are now in horizontal-scaling land, and you will run into the cache issues mentioned above.

There was not really anything in the article that should have led you to believe that this was a "serverless only" issue, so I think the bashing against Netlify here is quite unwarranted.


eddythompson80|11 months ago

> (e.g. as pods in k8s, which is not serverless):

> There was not really anything in the article that should have lead you to believe that this was a "serverless only" issue, so I think the bashing against Netlify here is quite unwarranted.

It's not, because you can use an external cache like Redis[1]. You can scale to hundreds of instances with an external Redis cache and you'll be fine. The problem is that you can't operate at Netlify's scale with a simple implementation like that. Netlify can't afford to run a Redis instance for every Next.js application without significantly cutting into their margins (not just from compute cost; running and managing millions of Redis instances at scale won't work).

Clearly Vercel has their own in-house cache service that they have priced into their model. Netlify could run a Redis instance per application, though more realistically it needs its own implementation of a multi-tenant caching service that is secure, scalable, cost-effective, and fits their operational model. They are not willing to invest in that.

[1] https://github.com/vercel/next.js/tree/canary/examples/cache...

cryptonym|11 months ago

Interesting and definitely something platforms must take into consideration.

Now back to the post: implementing a custom cache is not something Netlify is strongly complaining about. They are mostly asking for some documentation with reasonably stable APIs. Other frameworks seem to provide that.

ascorbic|11 months ago

> Netlify could run a Redis instance per application, though more realistically it needs its own implementation of a multi-tenant caching service that is secure, scalable, cost-effective, and fits their operational model. They are not willing to invest in that.

But they have done that, as they say in the post.

Disclosure: used to work at Netlify, now work at Astro