I don't really agree with the critiques. E.g. GCP will run a Docker container in a serverless fashion, so you can run any language you like using that.
The big issue, from my point of view, is that the programming model takes all the problems of microservices and adds more. Microservices make even trivial features into a distributed system, bringing all the difficulties of reasoning about code that distributed systems entail. Then add an opaque runtime and almost certainly a number of vendor specific services to handle state and other features the serverless model cannot address directly, and you have a recipe for slow development and frustrating debugging.
I don't see serverless as having failed at all. FaaS is a great model: you can run a functional monolith and easily break pieces off or integrate stuff in different languages without changing patterns. Doing it with cloud services like Lambda can be expensive, but if you have Kubernetes set up, it's the best of all worlds.
Same here! We're fans of Cloud Run at my company, but it'd be great if they made it easier to pull messages from Pub/Sub using Cloud Run. Sometimes you just don't want push.
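For anyone curious what pull looks like, here's a minimal sketch of a Pub/Sub pull consumer you could run as a Cloud Run service or job instead of receiving pushes. The project and subscription names are hypothetical, and it assumes the google-cloud-pubsub client package and credentials are available; the processing logic is kept as a separate pure function.

```python
def handle(data: bytes) -> str:
    # Pure message-processing logic, separated so it is easy to test.
    return data.decode("utf-8").strip().upper()

def run_subscriber(project: str = "my-project", sub: str = "my-sub") -> None:
    # Imported lazily: needs the google-cloud-pubsub package and GCP credentials.
    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()
    path = subscriber.subscription_path(project, sub)

    def on_message(message):
        print(handle(message.data))
        message.ack()  # acknowledge so Pub/Sub does not redeliver

    # Blocks and streams messages from the subscription until cancelled.
    subscriber.subscribe(path, callback=on_message).result()

if __name__ == "__main__":
    run_subscriber()
```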
The pricing model of serverless platforms is enough for me to not want to use them. Being charged for every little thing your app does is crazy, especially when you see horror stories of runaway bills: a function has a simple bug you didn't catch, and overnight it runs up a bill so high you can't pay it and are left praying the company feels bad for you.
I'm scared to use a serverless platform unless I don't have to put in my credit card lol.
I feel safe paying the same price every month despite my app growing. Servers are powerful these days and are very simple to maintain and scale using cloud platforms because they do it all for you lol.
I enjoyed Google's GCP for quite a long time until they removed the ability to cap expenses with a budget limit. (It used to be that if you hit your budget limit, you could make your site error out.) Now everyone on GCP is one DDoS away from a nightmare cloud bill. I'd rather use a traditional server and be able to sleep at night. I moved all my sites and client sites off GCP.
The most annoying part is that Google's infrastructure for cloud computing is so much better than the others if you're willing to work within their ecosystem. Simple deployments, version management, rollbacks, etc. There is nothing quite like it. (I'm not saying there aren't competitors, just that Google seems easiest to use.)
Article is 3.5 years old, best put the year in the title, OP!
I think the definition of serverless is too narrow (AWS lambdas/ Azure functions) and that serverless is really "I want to build apps, not manage infrastructure". That's not the same thing as putting everything onto functions.
I have a monolith we're preparing to move onto a managed container system (probably AWS App Runner). I don't want to manage K8s if I can help it; our app doesn't need complex server architecture.
Personally, I think it's just the next layer of abstraction up. Some won't benefit from serverless, in the same way some are better off with in-house tin. I know some that need custom chipsets in-house, so they can't even buy a stock rack server! However, many, many web apps don't really need the control and will probably use facets of serverless over time. But it is still a revolution for old people like me who just don't want to manage servers anymore!
AWS right now feels like the most lagging in serverless containers.
If you get a chance, try out the DX with Google Cloud Run or GKE Autopilot. Building and shipping to GCR is fast enough that it feels like a local build-run workflow. Google Cloud Run jobs are also fantastic, and you get a pretty hefty free monthly grant (~60h of compute).
Both Azure Container Apps and GCR are true scale-to-zero, whereas App Runner is not and always maintains a minimum monthly baseline cost (~$5-6).
AWS feels the most behind in terms of its serverless container workload experience (I use AWS every day professionally, but use GCP and Azure for side projects).
> its roots can be traced all the way back to 2006
Serverless was already a thing in the '70s, in the mainframe era. In the '90s the pendulum swung to servers and server farms, and nowadays it is swinging back.
> The Problems With Serverless:
> Limited Programming Languages
> Vendor Lock
> Performance
None of this is a real problem; the only thing that matters is cost-effectiveness, and currently (2024) it's still cheaper to run most of your software on "traditional VMs".
> The concept of letting users pay only for the time that code actually runs has
> been around since it was introduced as part of the Zimki PaaS in 2006, and the
> Google App Engine offered a very similar solution at around the same time.
You could argue the way you architect "serverless" services feels very similar to the way we built CGI scripts in the '90s. If you kept those scripts small, every one of them would have been the equivalent of a serverless function today.
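The parallel is easy to see side by side. A toy sketch (the names and event shape are illustrative, not any particular framework's API):

```python
# A 90s CGI-style handler and a FaaS-style handler have the same shape:
# one small function per endpoint, with no long-lived server of your own.

def cgi_style(environ: dict) -> str:
    # CGI: request details arrive via environment variables, and the
    # response is emitted as text with headers first.
    name = environ.get("QUERY_STRING", "name=world").split("=")[-1]
    return f"Content-Type: text/plain\r\n\r\nhello {name}"

def faas_style(event: dict, context=None) -> dict:
    # FaaS: a parsed event comes in, a structured response goes out.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {"statusCode": 200, "body": f"hello {name}"}
```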
Didn't old-school mainframes with discrete transistor CPUs use time-share mechanisms and charge enough for the time that people cared about minimizing it? UNIX goes back 50 years, but IIRC there was a substantial amount of "pre-history" before that too.
Serverless is a simple idea: copy a snippet to the cloud and let it auto-scale.
You know, we used to place the code in .php files, FTP it up, and call it a day. There was a ton of PHP hosting (anyone remember Dreamhost?) that could auto-scale in some way and bill by page views.
That's a very fat function to run on Lambda. What's the advantage compared to running the same container in Fargate? Compared to a VM, I guess you're saving the sysadmin time.
Do you have some recurring activities to babysit Lambda or is it zero maintenance for your team?
For me, serverless solved a problem at the opposite side of the spectrum of “scalability”. It helped me to start new projects very easily.
Since I like React and the Next framework, Vercel is a one-stop shop for me now. With their new storage feature, I can start creating a fullstack web app very easily. Frontend, functions, and Postgres, all hosted on Vercel and very easily integrated.
Having a Postgres DB makes it very easy to export to a proper server in the future, if I need it.
We cut one system’s cloud bill by two thirds by moving to virtual private servers. The amount of time/effort serverless and other x-as-a-services were saving in maintenance was not worth the cost and other constraints they caused.
>resources used on serverless frameworks are typically paid for by the minute (or even by the second). This means that clients only pay for the time they are actually running code. This contrasts favorably with the traditional cloud-based virtual machine, where often you end up paying for a machine that sits idle much of the time.
I'm going to ask something very stupid because I'm very ignorant but... when do you need this?
In my mind you have scenario 1: the app never runs because there are no users, so running it 24/7 is a waste. In this case, why cloud? Pay $5 a month for a simple web host.
Scenario 2: you are hitting 100% on a server and you need a second server, but that second server will only be used 1% of the time. Again, why not just upgrade the server with your web host?
Scenario 3: you have a bajillion users at peak hours and a tenth of a bajillion at other hours. I assume you would have a business model that affords you to hire someone to manage scaling your infrastructure?
What am I missing in this picture?
It stalled the second people realised the actual costs are significantly north of using your own infrastructure, and that's often buried in a line like "oh, by the way, you'll need a NAS Gateway and it'll cost you an additional $50k a month".
I think App Engine is one of the most underrated "serverless" offerings out there. I deploy everything on it, with almost zero concern about infra or server architecture.
When did "go fast, break things" become "go fast, be stupid"?
Serverless is like driverless.... Sure you might be in the backseat but there is a chauffeur up front and you pay a lot of money for that.
Unless you're not paying for it (directly) --- serverless makes a LOT of sense if you have lightweight tasks that can be distributed and you send them to "excess" infrastructure you already own/run.
Most critiques do not apply if you look at providers that have open-source runtimes. For services running on Deno or workerd/Cloudflare Workers, you can add your own runners on big, cost-effective servers that allow longer execution and serve core locations or certain workloads with the same code. "Limited languages" still applies, as these are limited to JS/Wasm languages. "Not being able to run entire applications" still applies only if you include the database servers, as those will need special operation. (But running databases was most of the time a separate thing from the core application anyway.)
Stuff like Fargate, GKE Autopilot and Cloud Run I would also consider "serverless" just in a weaker sense. I can see these platforms having pretty wide appeal for workloads that don't benefit from stronger control over the underlying VMs and networking.
The biggest selling point that seems somewhat legitimate is a bunch of these have scale to zero capability. This is pretty useful for on-demand code that is required very infrequently and for which cold-starts are a non-issue.
I think where they fall down is that once you run up against one of their hard constraints, the cost of moving to something without those constraints acts as a high enough activation energy that you instead contort the application architecture to work around said constraints. This, I feel, is very bad and leads to horrible results, especially when the aforementioned platform is FaaS.
In practice if you know you are going to want to run any non-trivial long-running or stateful services you are just better off going k8s from the start. The API is good, the managed options are good, life is just less complicated when you don't need to deal with proprietary bullshit.
>The biggest selling point that seems somewhat legitimate is a bunch of these have scale to zero capability. This is pretty useful for on-demand code that is required very infrequently and for which cold-starts are a non-issue.
I don't quite understand this. You can pay for one physical/virtual server and have Apache serve 2 different domain names, right? Can't you just drop your infrequent app there so it shares resources with, I don't know, your more regularly accessed website?
The only scenario I can imagine this would be useful is if you own zero servers or the code is incompatible with the server you do own for some reason.
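For reference, the sharing described above is ordinary name-based virtual hosting; a minimal Apache sketch, with placeholder domains and paths:

```apache
# Two name-based virtual hosts on one server: the rarely used app
# simply shares the machine with the busy site.
<VirtualHost *:80>
    ServerName busy-site.example.com
    DocumentRoot /var/www/busy-site
</VirtualHost>

<VirtualHost *:80>
    ServerName rare-app.example.com
    DocumentRoot /var/www/rare-app
</VirtualHost>
```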
The whole dynamic web was a bunch of PHP scripts in your home folder.
Even the early Amazon was a collection of Perl scripts glued together.
What's old is new again.
I think the serverless revolution is here, just not in the way people originally dreamed.
I guess when this was written (3+ years ago) a lot of people were still high on the marketing hype, so the reality seemed flat.
But I think we got some nice options.
(Which, of course, aren't silver bullets that solve all the problems.)
I think FaaS turned out to be a largely bad idea.