And so ZEIT, my favorite serverless provider, keeps getting better. Highlights:
- "sub-second cold boot (full round trip) for most workloads"
- HTTP/2.0 and websocket support
- Tune CPU and memory usage, which means even smoother scaling
And all that for any service you can fit in a Docker container - which is also how you get near-perfect dev/prod parity, an often overlooked issue with other serverless deployment techniques.
On top of all that, ZEIT has one of the best developer experiences out there today. Highly recommend trying it out.
And for the perpetual serverless haters out there: this is not a product for you, FANG developer. I think people underestimate how well serverless fits for the majority of web applications in the world today. Most small-medium businesses would be better off exploring serverless than standing up their own k8s cluster to run effectively one or two websites.
I'm not a "serverless hater", but every company I've ever worked with had backend processes that were not tied to HTTP requests. I still keep actual servers around because the HTTP gateway is not the pain point. It's long-running processes, message systems, stream processing, and reporting.
That said, I look forward to the company (or side project) where "serverless" can save me from also assuming the "devops" role.
I know of one large-scale web application that is 100% driven by serverless technologies.
The user experience (as an end user using the website) is pretty terrible IMO. It often takes multiple seconds for various areas of the site to load (bound by the network). It's also super Javascript heavy and just doesn't feel good even on a fast desktop workstation.
Authentication is also a nightmare from a UX point of view. Every time I access the site it makes me click through an intermediate auth0 login screen.
I really hope this doesn't become a common trend to run web apps like this. It would make the web nearly unusable.
I'm not a "serverless" hater either. I want technology to move forward to make my life as a developer better. I don't care what tech it is, but right now I just don't get that impression from serverless set ups (both from the developer and end user POV).
Count me in as an over-engineering and selling-things-engineers-do-not-need hater.
ZEIT's very own list of benefits mentions four things: the first two are clear over-engineering bloat (plus premature optimization), and the second two exist only because of serverless itself. They were (and nothing more was mentioned in the benefits section ;) ):
* Clusters or federations of clusters
* Build nodes or build farms
* Container registries and authentication
* Container image storage, garbage collection and distributed caching
I don't know why everybody can't see the fakeness of the argument 'you need clusters, farms and hundreds of servers'. You don't. Actually you do only (contrary to your statement) if you're FANG.
Why? Because look at real-world HUGE examples. E.g. Stack Overflow (and no, your company/startup/whatever will not reach their level of traffic) can do everything on literally a dozen servers, and they admitted that in some scenarios one web server was enough. Source: https://nickcraver.com/blog/2016/02/17/stack-overflow-the-ar...
Our 10x..100x smaller companies would do perfectly on 2..4. There is no need for all this over-engineering.
The ultra-funny thing is ZEIT selling 'deployment self-heal' as the old, well-known (Windows, anybody?) and long-ridiculed recipe: it will work after a restart. Right. It's better to shut off the car engine, get out, get back in, and start again. This is 21st-century engineering :)
Am I the only one having trouble following the .gif "demos"?
When I get to the image, it is in the middle of everything and I don't really have an idea what is going on. Even watching it multiple times, I am not sure where it starts, ends, what the individual steps are.
Or is it because I just don't know enough about this stuff?
Yep. Pretty soon. You write code. You create a Dockerfile. You find a place to run the Dockerfile with your code [cheapest!]. Run it through your tests. Monitor it. The end. No VPCs, Salts, Puppets, SSHs, Chefs, horses, Ansibles, cats, EC2s, devops, noops, sysadmins, Kubernetes or chaos monkeys required.
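To make that concrete, the "code" step can be as small as a single HTTP binary; here is a minimal sketch in Go (the PORT convention is an assumption, since providers differ in how they tell your container where to listen):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// Many container platforms inject the listen port via the
	// environment; fall back to 3000 for a local `docker run`.
	port := os.Getenv("PORT")
	if port == "" {
		port = "3000"
	}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a container")
	})
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```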
Until you discover that the thing you are building requires more than a single application running in a single container and you end up building an entire "Operating System" around your containers and the circle starts all over again.
Complexity is hardly ever in the solution, but mostly in the problem. Single solutions to complex problems often ignore/forget important parts of the problem and they come back to bite you, hard.
It feels so amazingly depressing for so much sunk knowledge to just go away and become worthless. It feels like I could have learned so many more things, things I could get joy out of today and still extract useful pieces from, or build on top of, long into the future, if I'd just focused on the time-invariants of knowledge space.
Half the stuff you described there solves different problems than docker does. Not to mention that docker doesn't solve all problems in infrastructure.
I like to think of docker like git. It solves some problems with distributing data but it doesn't solve the problem of developers writing code, writing tests, pipelines, etc. Nor even problems with 3rd party APIs etc. Obviously this isn't a perfect analogy but my point is docker isn't a magical silver bullet.
I've worked in places where their solution to everything was "docker" and it honestly caused more complexity problems than it solved. That's not to say I don't like docker; in fact I've been a big advocate for containerisation long before docker was a thing. However like with any tech, the key is using the right tool for the right job instead of attacking every screw with the same hammer.
I don't think that's likely or desirable. VPCs and the ability to make a service that's not on the public internet are hugely valuable from a security standpoint. And Ansible, Terraform, etc. are great for automation and for managing configuration and architecture. It's unwise to discard everything that came before you when hopping on the shiny new bandwagon. Even if it is the future, the last generation's tools can still teach you a lot.
You still need all that, you're just paying someone else to do it for you. As the app gets more complicated you will have to do more. It is impossible to remove complexity by adding abstraction. You've only hidden it.
Whenever a solution offers you simplicity, you're giving up flexibility. That's why I'd prefer to use a more advanced but standardized open source platform like Kubernetes.
You use dat or MaidSAFE to write client side apps. The back end is end to end encrypted, secure, automatically rebalanced, uncensorable, permissionless, and people install your app and use a cryptocurrency to pay for resources.
And of course you use public key cryptography to maintain your app and upgrades propagate without needing to select a domain or host for them.
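A minimal sketch of that signed-upgrade idea, assuming an ed25519 publisher key shipped with the client (all names here are illustrative, not any particular platform's API):

```go
package main

import (
	"crypto/ed25519"
	"encoding/hex"
	"errors"
)

// verifyUpdate checks that an update bundle was signed by the app
// publisher's key, so upgrades can propagate over any transport
// (peers, gateways) without trusting a domain or host.
func verifyUpdate(pubKeyHex string, bundle, sig []byte) (bool, error) {
	pub, err := hex.DecodeString(pubKeyHex)
	if err != nil || len(pub) != ed25519.PublicKeySize {
		return false, errors.New("bad publisher key")
	}
	return ed25519.Verify(ed25519.PublicKey(pub), bundle, sig), nil
}
```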
Looks great for basic websites but it's missing the biggest and most difficult piece of cloud infrastructure. The DATABASE!
Today you'd have to open up your cloud DB provider to the world since Zeit can't provide a list of IPs to whitelist. This is a showstopper for me unfortunately.
From what I've seen, when people talk about serverless there are two camps.
The "functions make life easier" camp: serverless development and deployments have nicer properties which make them easier to reason about and eliminate entire classes of errors.
The "functions make edge computing possible" camp: serverless functions can be deployed in datacenters around the globe, close to your users, offloading compute from your core and improving latency.
Now let's talk about DB semantics. You and many other people in the first camp probably want your business logic on strongly-consistent SQL transactions. That's good, it's the right semantics for the job. But it's incompatible with the edge model where the functions are decoupled from the central datastore.
So I think that you're asking for something the community isn't mature enough to provide yet. The momentum is towards unification where we need stratification (with respect to coupling).
Most databases are quite unfit for the serverless world that's becoming a reality, where the needs shift towards global replication, flexible horizontal scalability (sharding) and vertical scalability (provisioned QPS).
We like and use CosmosDB because it fits these criteria. We anticipate that Google Spanner, CockroachDB and similar databases will become the go-tos in combination with ZEIT Now.
At Cloudflare, we're working on expanding Workers (https://www.cloudflare.com/products/cloudflare-workers/) to allow access to your existing DB servers & offer protection with Argo Tunnel (https://www.cloudflare.com/products/argo-tunnel/). We are also enabling Workers to write into Cloudflare’s globally distributed cache, reducing retrieval time for repeated query results. We hope this will be a differentiator with using Cloudflare & highly valuable for your use cases.
Amazon is building an HTTP interface to Serverless Aurora to solve this problem. You can secure it via IAM rather than network segmentation, much like DynamoDB.
> “A very common category of failure of software applications is associated with failures that occur after programs get into states that the developers didn't anticipate, usually arising after many cycles. In other words, programs can fail unexpectedly from accumulating state over a long lifespan of operation. Perhaps the most common example of this is a memory leak: the unanticipated growth of irreclaimable memory that ultimately concludes in a faulty application.”
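As a concrete instance of that failure mode, consider a hypothetical handler that appends to package-level state on every request and never trims it:

```go
package main

import "net/http"

// requestLog grows on every request and is never trimmed: the
// "accumulating state" failure. Memory use climbs over a long
// lifespan even though no individual request ever fails.
var requestLog []string

func handler(w http.ResponseWriter, r *http.Request) {
	requestLog = append(requestLog, r.URL.Path) // leak: unbounded growth
	w.Write([]byte("ok"))
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":3000", nil)
}
```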
> Serverless means never having to "try turning it off and back on again"
> Serverless models completely remove this category of issues, ensuring that no request goes unserviced during the recycling, upgrading or scaling of an application, even when it encounters runtime errors.
> How Does Now Ensure This?
> Your deployment instances are constantly recycling and rotating. Because of the request-driven nature of scheduling execution, combined with limits such as maximum execution length, you avoid many common operational errors completely.
Somehow this sounds very expensive to me (like restarting Windows 2000 every hour just to avoid a BSoD, except that here it's not that time-consuming a process), and it seems to leave caching, state management and other related requirements on the wayside for someone else to handle or recover from.
Or it’s likely that I’ve understood this wrong and that this can actually scale well for large, distributed apps of any kind. Sounds like magic if it’s that way.
> I'm not a real programmer. I throw together things until it works then I move on. The real programmers will say "Yeah it works but you're leaking memory everywhere. Perhaps we should fix that." I’ll just restart Apache every 10 requests.
> Somehow this sounds very expensive to me (like restarting Windows 2000 every hour just to avoid a BSoD, except that here it’s not that time consuming a process)
The crucial difference is that the activation cost of booting up a full OS (like Windows 2000) is massive (typically involving a chain-reaction of CPU and IO-intensive init services), whereas that is not the case for our serverless infrastructure.
What you do point out is one of the most important engineering challenges we had to solve (and continue to be focused on).
> seems to leave aside caching, state management and other related requirements on the wayside for someone else to handle or recover from.
Configuration state is the bane of server management. Ideally you want to maintain state in a system specifically designed for it - i.e. a database, or version control software.
The primary benefit of serverless is that you can pay per-second for CPU usage, right when a request comes in, instead of leaving whole OSes running all the time. This other stuff is just a bonus.
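As a rough worked example (with a made-up rate, since per-GB-second prices vary by provider): at $0.00002 per GB-second, a 512 MB instance serving 100,000 requests a month at 100 ms each bills 0.5 GB × 10,000 s = 5,000 GB-seconds, about $0.10, whereas keeping the same 512 MB running all month is roughly 1,300,000 GB-seconds, about $26.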
Awesome!
While I was at AWS Summit in NY, I asked a circle of AWS ECS/EKS users (container orchestration products) about a Docker container service that could execute like a FaaS product, and nobody seemed to know of one. I have a portion of a legacy application that's used infrequently and is too costly to decompose, but it works fine Dockerized.
I'm confused about pricing. I come from using AWS Lambda, where you pay for the amount of memory allocated for your function, how many times it runs and how long each run is.
Looking at Now, it looks like you are billed by the 'plan' that you choose, and that decides how many deployment instances you are limited to. What does a deployment instance mean for something that is 'serverless'?
EDIT: Whoops, I see that there are 'on demand' prices for deployment instances too--now I just need to figure out how deployment instances map to serverless.
So I've been messing with Fn + Clojure + Graal Native Image and I'm seeing cold start times around 300-400ms and hot runs around 10-30ms. TLS adds something like 100-150ms on top of that. I was excited about seeing improved docker start times, but it seems like you guys are pretty much at the same place I am with it.
Here's my question, being relatively ignorant of Docker's internals: _is it possible_ to improve that docker create/docker start time from 300-400 ms (all in) to <100ms? 300-400ms is kind of a lot of latency for a cold boot still, and people still do things like keepalive pings to keep functions warm, so it would be pretty great to bring that down some more.
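For reference, the keepalive trick mentioned above is usually just a timer hitting the function's URL so the provider never reclaims the warm instance; a rough sketch (the endpoint is a placeholder):

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// Ping the function every few minutes so the provider keeps a warm
// instance around, trading a trickle of invocations for cold boots.
func main() {
	const endpoint = "https://example.now.sh/ping" // placeholder URL
	for range time.Tick(4 * time.Minute) {
		resp, err := http.Get(endpoint)
		if err != nil {
			log.Printf("keep-warm ping failed: %v", err)
			continue
		}
		resp.Body.Close()
	}
}
```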
I am already running an API service on Zeit Now using a golang container that adds the binary and a csv file to a scratch image and re-reads the csv on each request (all requests take less than 0.3 seconds, so I have not optimized).
Currently I have to make sure to set min instances to 1 on the 'production' version of the API and set min instances = 0 to older versions.
I'll have to try it before knowing for sure, but it seems like switching to serverless docker would mean I no longer have to make the distinction between 'old production' and 'new production', and can keep servicing requests to very old versions of my API without an expensive (7-second) cold boot.
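The setup described above is typically a multi-stage build along these lines (a sketch under assumptions; the file names are placeholders):

```dockerfile
# Build stage: compile a static binary so it can run on scratch.
FROM golang:1.10 AS build
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o /api .

# Final image: just the binary and the data file.
FROM scratch
COPY --from=build /api /api
COPY data.csv /data.csv
EXPOSE 3000
CMD ["/api"]
```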
I did not play with it, but this actually looks pretty cool. The documentation seems to be sane/good quality.
A few questions: How would one coordinate between multiple nodes running an app? (For a scenario in which the nodes cooperate, is there some sort of discovery available so they can find each other?) For the docker case, does it support health probes? (How do you know if the app is healthy?)
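On the health-probe half of the question: whether or not Now polls one automatically, the usual convention is for the app to expose a cheap endpoint the platform (or your own monitoring) can hit; a sketch:

```go
package main

import (
	"net/http"
	"sync/atomic"
)

// ready flips to 1 once dependencies (DB, caches) are reachable, so
// an orchestrator polling /healthz can route traffic accordingly.
var ready int32

func healthz(w http.ResponseWriter, r *http.Request) {
	if atomic.LoadInt32(&ready) == 1 {
		w.WriteHeader(http.StatusOK)
		return
	}
	w.WriteHeader(http.StatusServiceUnavailable)
}

func main() {
	http.HandleFunc("/healthz", healthz)
	atomic.StoreInt32(&ready, 1) // in practice, set after real init
	http.ListenAndServe(":3000", nil)
}
```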
This is truly amazing and has so much baked in that I find myself wanting to use it. However, most apps my team and I work on require some form of persistence: uploads, logins, API rate limits, etc. These scenarios don't fit well with the serverless world. It is technically possible to offload most of this to S3, a hosted NoSQL database, and perhaps a service for dealing with images/thumbnails. By that time your monthly bill is in the three digits, though. For a big web shop this is fine, but for your average freelancer with multiple smaller, finite-resource projects it's something to consider carefully.
> A new slot configuration property which defines the resource allocation in terms of CPU and Memory, defaulting to c.125-m512 (.125 of a vCPU and 512MB of memory)
Sure sounds like needing to be concerned about the hardware. That feels like a leaky abstraction that the serverless design pattern claims or appears to take care of, but seems like it doesn't in practice. Is "serverless" the right level of abstraction? I'm not sure.
Whether it's serverless or not, the choice of hardware is very important.
For a simple web app, you might not need special CPU and memory requirements.
But a video encoding/decoding serverless app may need different CPU/memory requirements (and a GPU).
That's what this `slot` configuration property does.
The default is good enough for general-purpose apps. But if you need more, you can control it.
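Based only on the changelog line quoted above, the setting presumably lands in now.json roughly like this (the exact key name and accepted values are an assumption on my part, not confirmed syntax):

```json
{
  "type": "docker",
  "slot": "c.125-m512"
}
```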
This development was inevitable, especially since most Functions-as-a-service infrastructure was really docker (or some other) containers running in the background, spinning up on demand using a pool of servers. Azure Container Instances launched last year, and now Google Cloud Functions has serverless containers in alpha, along with the serverless addon for GKE/Kubernetes, and there should be a good bit of announcements this year as other providers follow.
The max time limits still seem pointless to me; there should be an option for unlimited uptime so the full spectrum from ephemeral function to entire app can be managed and deployed using a single flow.
There is no 'hype' word in history I hate more than serverless. I was fine with microservices and all those other hype words, but serverless is terrible.
This is more of a question to @rauchg and others who have used Now. When I use Now within the AWS realm, are the AWS services used in conjunction with Zeit Now containers subject to ingress/egress costs? It does sound like I am using a different cloud as far as AWS is concerned.
They are on the forefront of serverless.
I went from right-clicking and deploying from Visual Studio to SSH and configuring Dockerfiles, docker-compose, even nginx.conf for load balancing.
You do get more bang for the buck with such a setup, but it's too much work on infrastructure and leaves less time for development.
edit: add Kubernetes to it; although AKS and GKE being free lessens the burden, it's still too much for a software dev like me.
https://github.com/zpnk/deploy.now/issues/27
Hopefully someone from Zeit reading this can get my fix merged; it seems to be quite a popular service.
Looking forward to using your product!
What are the other fundamental differences?
So you don't scale up and down as you deploy. Once you call `now alias`, it'll take care of the scaling.
Why would I want to switch to this? What are the pros/cons of having Docker in there?
1. Once my docker container is built, am I paying to store it on Zeit?
2. How is a Dockerfile versioned? Can I update it, or do I need to redeploy?
3. Is pricing for containers granular by time, or per request to a container?
4. How can these containers talk to each other? Is there an API or method to fetch specific URLs for each container?
Looks like a great idea; I'm curious to try it out sometime.