Cloud commoditized datacenters. People then tightly coupled their applications to cloud providers like AWS. After some time, containers came along and commoditized the cloud providers. Those applications no longer needed to be tightly coupled to a specific cloud provider. This reduced both costs and the risk of being invested in a single cloud provider.
Serverless (AWS Lambda) is just the cloud's way of trying to "de-commoditize" the containers that commoditized them. They want you to tightly couple your applications to their specific cloud provider again. And charge you more. How much is that function in the zip from S3 going to cost you to run in a container managed and run by AWS? And how long did you spend configuring the API Gateway (and where else can you apply what you've learned doing the configuration)? Next time you see an article fawning over serverless and saying things like "Containers just don’t matter in the serverless world" take a look at what the author does for a living. You'll start to see the same pattern I've been seeing.
Meanwhile, you could be spending less money and time treating AWS like a dumb processor with a network connection while only really configuring the open source software you already know. But it's your time and money.
If you write your Lambda function so that the interaction with the AWS Lambda service sits at the surface level of your application (i.e. the entry/exit point), and write your business logic in between, you can create a function that is quite easily transferable between cloud providers.
The vendor lock-in creeps in more around the surrounding services that the cloud provider offers. You can protect yourself from this, though, by writing good-quality hooks that have pluggable backends; e.g. writing/reading to an object store should be abstracted.
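A minimal sketch of that pattern (all names here are illustrative, not a real AWS or provider API): the handler is a thin provider-shaped shim, while the business logic and the object-store boundary are plain code that would move unchanged to another provider.

```python
# Sketch of the portability pattern described above. Everything except
# lambda_handler is provider-agnostic; only the handler knows about the
# AWS-shaped event/response. All names are made up for illustration.

import json


def make_thumbnail_plan(image_key: str) -> dict:
    """Pure business logic: no cloud SDK imports, trivially portable."""
    return {"source": image_key, "target": image_key + ".thumb.jpg"}


class ObjectStore:
    """Pluggable backend boundary: implement once per provider."""
    def put(self, key: str, data: bytes) -> None:
        raise NotImplementedError


class InMemoryStore(ObjectStore):
    """Local/test backend; an S3Store or GCSStore would mirror this."""
    def __init__(self):
        self.objects = {}

    def put(self, key, data):
        self.objects[key] = data


def lambda_handler(event, context, store=None):
    """The only AWS-shaped code: unpack the event, call the core,
    pack the response. Swapping providers means rewriting only this shim."""
    store = store or InMemoryStore()
    plan = make_thumbnail_plan(event["key"])
    store.put(plan["target"], b"...thumbnail bytes...")
    return {"statusCode": 200, "body": json.dumps(plan)}
```

With this split, moving to another FaaS means writing a new ten-line shim and a new `ObjectStore` implementation, not touching the core.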
I just don't get the "lambda locks you in" thing. You write a server the same way you always would, but like an extra 50LOC exists to make it work on lambda. So if I wanted to deploy elsewhere what would the big deal be?
Lambda also provides a lot more than containerized services, as the article mentions - I no longer have to patch my system, which is a huge operational/security burden that many companies struggle with. The time to configure API Gateway/Lambda feels trivial compared to the work involved in maintaining a patched, self-hosted FaaS solution.
I don’t understand why people complain about serverless vendor lock-in.
Let’s split lock-in into two categories:
1. Essential complexity
DynamoDB and Firebase are different products with different features and complexities. There is no “ANSI SQL” here. They are as different as they are similar. Moving from one to the other is a non-zero cost because they have different features your app would have to make up for or adopt.
2. Inessential complexity
Serverless functions don’t necessarily need different signatures and it is conceivable that a standard for HTTP evented functions could emerge. Many are working on this. I expect this to be largely resolved over the next few years.
Lastly, I’m grateful for ANSI SQL but over the last 20 years I think I’ve seen one or two clients migrate a mature Java app from one database vendor to another (excluding some very recent moves to AWS RDS). Keep in mind that JDBC is about as good an abstraction as we’ve ever had for database agnosticism.
When you choose to build a lot of complexity upon an abstraction you have to be honest with yourself: have you ever dealt with a service (storage, queues, naming, auth, etc...) that didn’t have leaky abstractions? Of those with few/zero leaky abstractions how often did you need to migrate to a different vendor? Why do we expect a rapidly evolving set of systems and services to behave like mature commodity software?
Fear of vendor lock-in is a premature optimization.
API Gateway is the literal manifestation of the worst product I have ever had the misfortune of configuring. This is mostly due to AWS's incompetence at delivering any semblance of working documentation around the product. Looking forward to their new "open source" documentation.
Another issue with Lambda is that everything goes off the rails if you need to do something that Amazon hasn’t prioritized yet. Let’s say you want a gRPC endpoint, WebSockets, HTTP/2, mutual TLS authentication, etc. Lambda won’t help you and you’ll be left building up your own patterns for those. Once you have those patterns, Lambda’s use cases can be satisfied as well.
Also, Lambda doesn’t provide any SLAs on container reuse. They could restart your container multiple times per second or every few minutes. You are at their mercy to keep your containers warm.
Finally, with the Meltdown example, would containers actually need to be patched if the parent OS gets patched, since the kernel is shared between the host and its containers? With Fargate, Amazon would patch the base OS and your containers would be safe as they get rescheduled onto patched nodes.
Serverless is nothing more than PaaS - just at a more granular deployment/operation/billing/analytics scale. PaaS has always had a certain amount of lock-in because it's an abstraction, as any abstraction naturally does.
There are also some abstractions like Kubernetes that are neutral yet also managed with extras by all the providers which is a nice middle ground.
None of this is new or groundbreaking other than the silly hype words like "serverless".
It’s a cycle.
Depending on where you start your argument, one can be the commodity or the meta-commodity.
Your point is sort of moot.
The cloud, for the next 10 to 20 years, is still about killing old IT vendors and making people comfortable putting everything they own in the cloud. Containers and serverless are but major milestones along the way.
Why is this a surprise? CGI is a great model for server programming. Lambda provides the same model packaged in a way that is enormously more convenient and powerful.
Deploying .zip files is one of my favorite features of Lambda, particularly when writing Go apps.
Docker always felt wrong with Go to me.
Why do I need a local VM, a Linux build service, an image registry and servers with container orchestration to deploy Go software? I understand how all this helps with "legacy" Rails or Java apps, but why can't I just throw a Go binary somewhere to run it?
Lambda is exactly this. I upload a cross-compiled .zip to S3 and everything else is taken care of. This was a big breakthrough for me, seeing a much simpler solution for deploys than all the container stuff.
I've been building a boilerplate app that demonstrates just how little "stuff" is needed to run a Go service on Lambda:
It actually wouldn't be too hard to build a MacOS tool for building scratch Docker containers with a cross-compiled Go binary. The Docker image format is roughly just a tar file containing a manifest JSON file and a tar file containing the root filesystem. *
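As a rough illustration of that claim, here is a hedged sketch that assembles such a tarball with nothing but the Python standard library. The manifest layout follows the v1 image spec linked elsewhere in the thread only loosely, and a real `docker load`-able image would also need the config JSON blob that the manifest references; that part is omitted here.

```python
# Simplified sketch: a "scratch" image as a tar containing a
# manifest.json plus one layer tarball holding the root filesystem.
# The repo tag and paths are made up; the config blob is omitted.

import io
import json
import tarfile


def build_scratch_image(binary_name: str, binary: bytes) -> bytes:
    # Layer: a tar containing just the (cross-compiled) binary under /app
    layer_buf = io.BytesIO()
    with tarfile.open(fileobj=layer_buf, mode="w") as layer:
        info = tarfile.TarInfo(name="app/" + binary_name)
        info.size = len(binary)
        info.mode = 0o755
        layer.addfile(info, io.BytesIO(binary))

    manifest = [{"Config": "config.json",          # not actually included here
                 "RepoTags": ["scratch-go:latest"],
                 "Layers": ["layer.tar"]}]

    # Image: an outer tar wrapping the manifest and the layer
    image_buf = io.BytesIO()
    with tarfile.open(fileobj=image_buf, mode="w") as image:
        for name, data in [("layer.tar", layer_buf.getvalue()),
                           ("manifest.json", json.dumps(manifest).encode())]:
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            image.addfile(info, io.BytesIO(data))
    return image_buf.getvalue()
```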
Sure Lambda is pretty simple (just upload a zip!) and is very similar to what Google AppEngine has been doing since 2008 (AppEngine also uses zip). But you are signing up for complete lock-in...
Also, it's depressing to note that MacOS is now the only major operating system that does not have support for native Linux containers without the use of a VM. * *
You don't need all of that shit, though. You absolutely can just throw a golang binary into a super minimal alpine image and run it wherever you've got a Docker daemon installed. You literally don't need any of the things you listed! Plus, it is simply untrue that "everything else is taken care of" in Lambda. There is plenty of one-off configuration to do ("toil").
Serverless has its upsides but it also has downsides.
In an ideal, pure, world it’s awesome. The problem is that your lambda function usually needs to use some sort of external resources. (It is pretty useless if you don’t interact with anything)
Now you have to worry about and understand how to do access control on said resources, how to collect the logs, how to collect metrics relevant to your code, how to troubleshoot and deploy the function itself, and how to integrate it with the rest of the things you have/own.
You’re just shifting complexity from one area to another.
It works great in some cases, but it’s far from being a panacea.
Reading through this comment thread, if anybody is interested in a much more streamlined “serverless” experience and superior API Gateway, check out StdLib [1]. We’ve built a product for getting APIs built and shipped, full stop.
Bonus: .tar.gz format for submitting packages instead of .zip (though our CLI handles it automatically), and a bunch of freebies - everything from automatically generated documentation [2], SDKs, simple local development and testing, and more.
Disclaimer: Am founder.
Disclaimer 2: At the risk of being borderline spammy, we’re hiring aggressively. If you’re passionate about building a future that uses serverless technologies to make software development and APIs more accessible, e-mail me. (And thank you, pre-emptively.)
I feel it is like reinventing executables, the dumb way.
We can compare that with ELF binaries or remote procedure calls, where someone else maintains the OS and makes sure that the infrastructure scales. Actually, these are not even executables, because it is still interpreted code; instead of being "compiled" it is packed and distributed with a spec much less precise than a typical platform's. All because someone has a machine ready to be deployed on and to scale ad infinitum.
Am I the only one who finds that a dumb protocol for a remote call? Could someone point out which part of this is the smart, badass thing? Perhaps it is just a weird "renascence of Operating Systems"?
Edit: I still think that we got here because it was too much work to learn about pre-existing technology or hire an expert and it was more fashionable/easy/cool/sexy to kick ass and move on.
> Perhaps it is just a weird "renascence of Operating Systems"?
"Serverless" really comes down to being a fancy inetd(8). The "new" part is all in proprietary software-as-a-service to automate server deployment, cluster management and configuration, network configuration, access control, and log collection. In the next technology cycle 6-10 years from now, people will realize that having so much of your software tied into proprietary software-as-a-service is an idea that is even worse than running on proprietary operating systems (NT, Netware, VMS) was in the 1990s.
> I feel it is like reinventing executables, the dumb way.
Consider this: in the Unix philosophy, most executables consist of taking things that should be procedures and packaging them up as programs. "Serverless" is largely about applying the Unix philosophy to SaaS - "microservices." So now you are taking things that are really procedures, packaging them up as executables, then packaging those programs up as Internet servers.
I heard or read somewhere a quip that debugging this "stack" is something like the Russian fairytale about Кощей[1]: you have to find a needle, which is in an egg, which is in a duck, which is in a hare, which is in an iron chest, which is buried under an oak tree, which is on an island.
I agree with this sentiment and feel the same way. I also find this a frustrating trend in our industry, where a lot of engineers seem to be spending more time learning about "services" from specific vendors than about things like network protocols and their workings and limitations.
I kept waiting for the author to say, "Surprise, it's a jar file. Jars are just zips, and lambda supports Java." Then I realized author really was reinventing wheels for the reader who probably doesn't know what Java is.
I am totally with you... I think we are going backwards in technical choices just because some people had pain points learning pre-existing technology or did not know how to deal with infrastructure. But if people pay for it, it's a market, I guess.
I played with Firebase (Google Cloud) Functions recently and the developer experience was pretty good. It's like if serverless (https://serverless.com/) had first party support, in the form of the Firebase CLI. Deploying was simple, the Firebase dashboard is nice. That said, I only used it for a side project, so I don't know about any pains scaling. I would definitely use it again for another side project, though.
> You had to patch your containers and your instances/servers, but you didn’t have to patch Lambda functions.
This was right after talking about Fargate. Let's compare what you do and don't have to patch in Fargate / Lambda:
Kernel / host OS: Lambda and Fargate patch that for you without any work. This is the more important patch.
Container bins/libraries: some programs, like Chrome, which have a JIT, could be exploited to read the memory of their own process. In the case of Lambda and Fargate, you only needed to patch containers/zips that contained such programs.
If you were using something like 'serverless-chrome'[0] in Lambda, you would have to update your zip file to get Chrome's workaround for Meltdown. If you had a Fargate container with headless Chrome, same deal. It's practically identical in the cited case of Meltdown.
There are many cases (like glibc or OpenSSL vulnerabilities) where containers need to be patched but Lambda can patch it for you ... but in the case of kernel exploits, Fargate and Lambda can patch equally well.
IMHO cloud functions serve a very small niche well. However, most of the people advocating for cloud functions seem to misuse them. Their use cases would be better served by a well-designed PaaS (though I admit it is hard to find a good one these days).
Don't think that building your whole service on lambda or google cloud functions with tons of code is the best idea, but it's a great shim between clients and backends. Tech changes constantly and you'll always be up against a change agent. Be nimble.
I actually find zipping and uploading manually a PITA. Started using the Serverless framework [1] recently and now I just run `serverless deploy --stage dev --aws-profile profilename` from my repo (which is an npm script).
Uploading manually is a PITA. That’s because you really should automate it using CI/CD if it’s not your local dev environment. If you are manually uploading your code for a dev environment it would be worth exploring a better local, offline framework to avoid this. Don’t upload manually.
It’s not quite as bad if you use .NET Core (which is what I’m currently using for a number of things in an integration stack). I just have to run “dotnet lambda deploy-function” with a template set up.
I started with Golang, however, and found the “out of the box” deployment process pretty painful which is part of why I switched to .NET Core.
Why would you do it manually? Write a two-line script to do it. Downloading a framework to run an npm script that does the equivalent of running `zip` followed by `scp` (or `aws s3 cp` or whatever) seems crazy to me.
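For what it's worth, here is roughly what that script looks like using only the Python standard library. The bucket name is made up, and the upload command is returned/printed rather than executed, so the transport (`aws s3 cp`, `scp`, `gsutil`, ...) stays a one-line swap.

```python
# Tiny deploy helper in the spirit of "zip followed by scp": build the
# archive with the stdlib, then hand back the upload command to run.
# The bucket name is hypothetical.

import zipfile


def package(zip_path: str, files: list[str]) -> str:
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            zf.write(f)
    # Swap this line for scp/gsutil/etc. to change transport:
    return f"aws s3 cp {zip_path} s3://my-deploy-bucket/{zip_path}"
```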
Another cool thing about Serverless is it can deploy a Python WSGI app (e.g. a Flask app) with little change in the code. It simplifies local testing and makes it easier to leave Lambda and move to dedicated hosting if I ever decide to.
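To illustrate why that works: a WSGI app is just a callable taking `environ` and `start_response`, so the very same object can be served locally, by any dedicated host, or handed to a Lambda adapter that translates API Gateway events into WSGI environs (the adapter itself is not shown here). A minimal, dependency-free sketch:

```python
# A bare WSGI app plus a local invocation helper. The app object is
# what you would hand to wsgiref locally or to a Lambda/WSGI adapter.

from wsgiref.util import setup_testing_defaults


def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from the same app everywhere"]


def call_locally(path="/"):
    # Build a minimal WSGI environ and invoke the app directly,
    # the way a local test harness (or an adapter) would.
    environ = {}
    setup_testing_defaults(environ)
    environ["PATH_INFO"] = path
    captured = {}

    def start_response(status, headers):
        captured["status"] = status

    body = b"".join(app(environ, start_response))
    return captured["status"], body
```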
When they do it for you, it's awesome, isn't it? I've been using Apex Up. It handles API Gateway config, the zip upload, SSL certs and deployment to AWS Lambda.
You know what would be better than a .zip? A .tar.gz file. Zip files cannot be unzipped without having the whole thing in memory. What's funny is that the Lambda environments (at least for Node) do not have the `zip` executable available which means that to zip/unzip something inside Lambda I need one written in javascript which has to be included in the zip file I upload to AWS instead of being able to use the native one.
You have it backwards. Zip files have a table of contents letting you unzip individual files without decompressing the whole thing, and you don't need to load it all into memory.
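That's easy to check with Python's `zipfile` module: the archive's table of contents can be listed, and a single member extracted, without decompressing anything else.

```python
# Demonstration of the parent's point: the zip central directory lets
# you list members and extract one file without touching the rest.

import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("big.bin", b"x" * 100_000)
    zf.writestr("handler.py", b"def handler(event, ctx): ...")

with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()            # read from the central directory
    small = zf.read("handler.py")    # only this member is decompressed
```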
bthornbury | 8 years ago:
Forget that they take more time to care for, and to learn how to ride, and how to feed properly.
mankash666 | 8 years ago:
You can choose to deploy your own open-source FaaS framework (like OpenWhisk or OpenFaaS) on one of these clouds, but YOU will have to:
1. Manage scaling of the underlying EC2 instances
2. Manage security patching of both Docker and the underlying OS
3. Set up a whole lot of configs
4. Manage optimizations
With a managed FaaS you're trading ops for more dev time, but someone has to be paid for the ops - it isn't free
nine_k | 8 years ago:
(For those who did not write web apps in the 1990s: CGI here is https://en.wikipedia.org/wiki/Common_Gateway_Interface)
https://github.com/nzoschke/gofaas
More thoughts about how .zip files make our lives easier are here: https://github.com/nzoschke/gofaas/blob/master/docs/dev-pack...
* https://github.com/moby/moby/blob/master/image/spec/v1.md
* * https://www.hanselman.com/blog/DockerAndLinuxContainersOnWin...
[1] https://stdlib.com/
[2] https://stdlib.com/@messagebird/lib/numbers/
appdrag | 8 years ago:
Disclosure: I'm founder
[1] https://en.wikipedia.org/wiki/Koschei
wrmsr | 8 years ago:
Hey, unrelated: what does "cannot open shared object file" mean?
bthornbury | 8 years ago:
Makes the OP's scenario much easier. Code upload and configuration are all handled for you. You run code online the same way you call a local function.
Disclaimer: I am the founder/dev/designer of CoherenceApi.
[0]: https://github.com/adieuadieu/serverless-chrome
[1] https://serverless.com/
userbinator | 8 years ago:
Where/how did you come upon that piece of misinformation?
I can still remember using PKZIP to extract archives of over 100MB, on a machine with only 4MB of RAM, many years ago.