
Running Containers on AWS Lambda

170 points | shaicoleman | 3 years ago | earthly.dev | reply

78 comments

[+] adamgordonbell|3 years ago|reply
Author here. I didn't expect this to show up here. I found containers on lambda to work really well. I can easily test them locally and they scale to zero when I'm not using them in AWS. So it's perfect for things that are idle a lot.

I have a follow-up coming where I use Go in a container, and the request speed got a lot better.

The service in this article is my HTML-to-text converter, so having a container where I could install OS dependencies was crucial to getting this working. It's covered here and here:

https://news.ycombinator.com/item?id=30829568

https://earthly-tools.com/text-mode

[+] squeaky-clean|3 years ago|reply
> I have a follow-up coming where I use Go in a container, and the request speed got a lot better.

My team transitioned a lightweight Python-based lambda from zip-based to container-based and we also saw a small (but definitely real) speedup in request time. I'm not the one who did the benchmarking, but IIRC it was about 10ms faster, from ~50ms to ~40ms or so.

edit: originally phrased it backwards to seem like container method was slower.

[+] excuses_|3 years ago|reply
I had a much better experience with GCP Cloud Run. Prepare an OCI/Docker image, type `gcloud run` with a few flags, and you're done. In 2021 they added a bunch of features which, in my opinion, make Cloud Run one of the most trivial ways of deploying containers intended for production use.
[+] nogbit|3 years ago|reply
Simply the best cloud service from any of the three big CSPs, bar none. The day AWS supports this more out of the box on Lambda (no, I will not use your required base image or signatures) will be the day containers and serverless become one (like it is already on GCP).
[+] holografix|3 years ago|reply
Prepare image…? Just let buildpacks sort it out and forget the Dockerfile ever existed.
[+] gadflyinyoureye|3 years ago|reply
We're trying this out at a large insurance company. Historically, actuarial teams created Excel workbooks, R code, and Python. Then those models were given to development teams to implement in a different language. As one might guess, there were loads of bugs and the process was slow. Now we're going to deploy an R lambda, owned by DevOps, which integrates all the I/O into dataframes. The lambda calls a calculation in R that takes those dataframes and returns a dataframe answer. If all goes well (the prototype works fine), we'll have saved probably $500k and 6 months.
[+] carfacts|3 years ago|reply
You’ll have to deal with lambda cold starts if you want it to be performant:

> When the Lambda service receives a request to run a function via the Lambda API, the service first prepares an execution environment. During this step, the service downloads the code for the function, which is stored in an internal Amazon S3 bucket (or in Amazon Elastic Container Registry if the function uses container packaging). It then creates an environment with the memory, runtime, and configuration specified. Once complete, Lambda runs any initialization code outside of the event handler before finally running the handler code.

https://aws.amazon.com/blogs/compute/operating-lambda-perfor...
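That "initialization code outside of the event handler" detail is the main lever you control. A minimal Python sketch of the split (names hypothetical):

```python
import json

# Heavy, one-time work (SDK clients, model loading, config parsing) belongs at
# module scope: Lambda runs it once per cold start, then reuses the warm
# environment for subsequent invocations.
EXPENSIVE_LOOKUP = {n: n * n for n in range(1000)}  # stand-in for real init

def handler(event, context):
    # Per-request work only; warm invocations skip everything above.
    n = int(event.get("n", 0))
    return {
        "statusCode": 200,
        "body": json.dumps({"square": EXPENSIVE_LOOKUP.get(n)}),
    }
```

The cost of the module-scope code is paid on the cold start, which is exactly the step the quoted docs describe.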

[+] mjb|3 years ago|reply
It's not entirely accurate that Lambda pulls container images from ECR at start-up time. Here's me talking about what happens behind the scenes (which, in the real world, often makes things orders of magnitude faster than a full container pull): https://www.youtube.com/watch?v=A-7j0QlGwFk

But your broader point is correct. Cold starts are a challenge, but they're one that the team is constantly working on and improving. You can also help reduce cold-start time by picking languages without heavy VMs (Go, Rust, etc), by reducing work done in 'static' code, and by minimizing the size of your container image. All those things will get less important over time, but they all can have a huge impact on cold-starts now.

Another option is Lambda Provisioned concurrency, which allows you to pay a small amount to control how many sandboxes Lambda keeps warm on your behalf: https://docs.aws.amazon.com/lambda/latest/dg/provisioned-con...
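For anyone scripting that: `put_provisioned_concurrency_config` is the real boto3 call; the helper below just builds its arguments so the sketch runs without AWS credentials (function and alias names are hypothetical):

```python
# import boto3  # assumed available in a real deploy script

def provisioned_concurrency_config(function_name, alias, count):
    # Arguments for lambda.put_provisioned_concurrency_config. The qualifier
    # must be a published version or alias, not $LATEST.
    return {
        "FunctionName": function_name,
        "Qualifier": alias,
        "ProvisionedConcurrentExecutions": count,
    }

# In a deploy script:
# boto3.client("lambda").put_provisioned_concurrency_config(
#     **provisioned_concurrency_config("my-func", "prod", 2))
```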

[+] daenz|3 years ago|reply
They have a feature called "provisioned concurrency" where basically one "instance" of your lambda (or however many you want to configure) stays running warm, so that it can handle requests quickly.

I know it defeats the conceptual purpose of serverless, but it's a nice workaround while cloud platforms work on mitigating the cold start problem.

[+] sam0x17|3 years ago|reply
If cold starts are at all an issue for whatever use-case, you can just do a warming job like we do (in our case it's built into Ruby on Jets). We find invoking every 30 seconds is enough to never have a cold start. It's still quite cheap as well. The lambda portion of our bill (with tons of platform usage) is still incredibly low / low double digits.

Just doing a warming job with no other usage falls well within free tier usage, I can confirm.
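The usual shape of a warming job is a scheduled ping plus an early return in the handler, so warm-up invocations never touch real logic. A sketch (the event marker is hypothetical; EventBridge lets you set whatever payload you like):

```python
WARMUP_SOURCE = "warmup.scheduled"  # marker set on the scheduled ping event

def handler(event, context):
    # The scheduler invokes this function every N seconds with the marker
    # payload; bail out early so warm-up invocations stay cheap.
    if event.get("source") == WARMUP_SOURCE:
        return {"warmed": True}
    return {"statusCode": 200, "body": do_real_work(event)}

def do_real_work(event):
    # Placeholder for the actual business logic.
    return f"processed {event.get('id', 'unknown')}"
```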

[+] booi|3 years ago|reply
This is definitely an issue especially with infrequently accessed functions but I've seen cold start issues regardless. I assume some scaling events will cause cold starts (measured in seconds).

There's a good argument to go with packaged code instead of containers if you can manage the development complication and versioning (cold starts measured in milliseconds).

[+] fswd|3 years ago|reply
Lambda is Greek for CGI script.
[+] mjb|3 years ago|reply
In a way, sure.

But Lambda also does things that CGI didn't do: dynamic auto-scaling, integration with queues and streams for async processing, strong security isolation with fine-grained authorization, host failure detection and recovery, datacenter failure resilience, and many other things. The interface being familiar and relatively analogous to existing things is intentional. The point is to be different where being different helps, and familiar where being different doesn't help.

[+] throwaway2016a|3 years ago|reply
As someone who has been around long enough to actually remember setting up /cgi-bin/... there is a lot more to lambdas.

They are scalable (including, importantly, down to zero!), have their own dedicated resources for each process, and are pretty efficient at being put to sleep and woken up. Plus the code is immutable by default and you get a lot out of the box like logging and setting limits.

I wouldn't start a new project with CGI at all right now but I use Lambda constantly.

[+] recuter|3 years ago|reply
In addition to what the other replies said I'd like to offer the following observation:

Lambda is the control plane running inside an "AWS OS" context, which means it has access to internal APIs with scoped permissions. Most commonly people discuss user-facing lambdas on the edge, but you are not obligated to expose it to the world.

If you do choose to go the cloud route, understand that your account activities generate quite a lot of data. The simplest example would be custom CloudWatch events generated from, say, autoscaling groups, i.e. "Server 23 has RAM usage above some moving average threshold" => kick off lambda => custom logic (send email/slack, reboot server, spin up more instances, whatever).

People who like middlebrow dismissals would say "what does it matter where the script runs, it could just as easily be running on the instance itself" - to them I say, pain is the best teacher. :)

[+] quaffapint|3 years ago|reply
We are using a container hosting .NET 6 with our lambda. We use it where I think lambdas really work well, that is, to process queue items off of SQS. It works well with the dead-letter queue as well. We don't notice any performance issues, but this is just a processor, so we don't need to worry about real-time responses either.
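The SQS-plus-DLQ pattern is language-agnostic; in Python it looks roughly like this (the `process` logic is hypothetical). Returning `batchItemFailures` uses Lambda's real partial-batch-response feature, so SQS retries only the failed messages until they hit the dead-letter queue:

```python
import json

def handler(event, context):
    # Lambda delivers SQS messages in batches under event["Records"].
    # Reporting failed message IDs lets SQS retry only those; after
    # maxReceiveCount they land in the dead-letter queue.
    failures = []
    for record in event.get("Records", []):
        try:
            body = json.loads(record["body"])
            process(body)
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}

def process(body):
    # Stand-in for the real business logic.
    if "order_id" not in body:
        raise ValueError("malformed message")
```

(Partial batch responses must be enabled on the event source mapping via `ReportBatchItemFailures`.)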
[+] cebert|3 years ago|reply
What was the thought behind processing SQS messages with a .NET 6 containerized lambda instead of a Node or Python lambda?
[+] victor106|3 years ago|reply
I found AWS Copilot made it easy to deploy, maintain, and scale on AWS Fargate.
[+] nogbit|3 years ago|reply
Until it fails, and the backing CloudFormation stack errors out and won't resolve itself for over 24 hrs. Bitten twice, never again with Copilot. Good idea they had, just a shaky foundation.
[+] vorpalhex|3 years ago|reply
I went down the rabbithole of wanting to build my own lightweight Lambda, only to wonder if Lambda is just distributed CGI by another name.
[+] dsanchez97|3 years ago|reply
I think CGI is a good high-level way to think about AWS Lambda and other serverless compute platforms. The key innovations are the integrations with cloud services and the scaling/economic model. Videos like the one linked below really demonstrate the level of optimization implemented to make this work at "cloud" scale. I think serverless compute platforms are going to become a really meaningful part of the software development ecosystem.

[1] https://www.youtube.com/watch?v=A-7j0QlGwFk&ab_channel=Insid...

[+] tyingq|3 years ago|reply
Feels pretty similar to any typical fastcgi implementation with pools, scaling up/down, separate spaces for initialization code and per-request code, etc.
[+] zevv|3 years ago|reply
And, was it?
[+] holografix|3 years ago|reply
No offence but Cloud Run has been doing this for a while?

And now Cloud Functions gen 2 as well…?

[+] fulafel|3 years ago|reply
I've found that using this sometimes causes Lambda to return 500 errors while it's reloading the container image from the registry. This might be the price for allowing large images; they've decided not to do the reload in a blocking way.
[+] Kalanos|3 years ago|reply
How long does it take to fetch the container - is it warm or cold? For AWS Batch it was taking me 1-3 min. So I was really surprised/happy to see this lambda container post.
[+] abofh|3 years ago|reply
It's warm - when you change the ImageUri or "Update Code" for the lambda definition, it downloads the container into "somewhere" lambda-y - this takes a few seconds depending on size. Startups are fairly quick, but because of the way it persists your running image in memory, your container is generally (on frequent usage) 'paused' between invocations, and resumes quite quickly.
[+] scottydelta|3 years ago|reply
Is it possible to host an app like Django inside a container on Lambda? This could help Django/Postgres apps scale horizontally easily.
[+] throwaway2016a|3 years ago|reply
Not sure about Django but on the Node.js side there are Express.js compatible libraries that let you write your app like you would Express but it's exposed via Lambda and API gateway. Good chance Python has something similar. Biggest difference is you're not dealing with HTTP (directly) but it can be abstracted to -- from a framework perspective -- act like HTTP.
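Python does: since Django speaks WSGI, an adapter just has to translate the API Gateway event into a WSGI call and the WSGI response back. A deliberately minimal sketch (real projects use libraries like Mangum or serverless-wsgi, which handle query strings, headers, binary bodies, etc.):

```python
import io

def lambda_wsgi(app):
    """Wrap a WSGI app (Django, Flask, ...) as a Lambda handler.
    Minimal sketch: ignores query strings, request headers, and much else."""
    def handler(event, context):
        body = (event.get("body") or "").encode()
        environ = {
            "REQUEST_METHOD": event.get("httpMethod", "GET"),
            "PATH_INFO": event.get("path", "/"),
            "SERVER_NAME": "lambda",
            "SERVER_PORT": "80",
            "wsgi.url_scheme": "https",
            "wsgi.input": io.BytesIO(body),
        }
        captured = {}
        def start_response(status, headers, exc_info=None):
            captured["status"] = int(status.split()[0])
            captured["headers"] = dict(headers)
        chunks = app(environ, start_response)
        # Shape the response the way API Gateway's proxy integration expects.
        return {
            "statusCode": captured["status"],
            "headers": captured["headers"],
            "body": b"".join(chunks).decode(),
        }
    return handler
```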
[+] adamgordonbell|3 years ago|reply
Yes, but is the startup time of Django an issue? Besides that you'd have to return your data in the shape that the API gateway expects from lambdas.

Returning a 500 would look something like this:

    {
        "statusCode": 500,
        "headers": {
            "content-type": "text/plain; charset=utf-8"
        },
        "body": "Some error fetching the content"
    }
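In Python, a handler producing that shape might look like this (`fetch_content` is a hypothetical stand-in for the real work):

```python
def handler(event, context):
    try:
        body = fetch_content(event)
    except Exception as err:
        # Error response in the shape API Gateway's proxy integration expects.
        return {
            "statusCode": 500,
            "headers": {"content-type": "text/plain; charset=utf-8"},
            "body": f"Some error fetching the content: {err}",
        }
    return {
        "statusCode": 200,
        "headers": {"content-type": "text/plain; charset=utf-8"},
        "body": body,
    }

def fetch_content(event):
    # Stand-in for the real fetch/convert step.
    if "url" not in event:
        raise ValueError("missing url")
    return "fetched ok"
```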
[+] gatvol|3 years ago|reply
Yes it absolutely is possible. I have been considering doing this in combination with PG Aurora v2
[+] spullara|3 years ago|reply
I have an entire video analytics pipeline running about a dozen containers for inference. Works great.
[+] faangiq|3 years ago|reply
“Why run on lambda instead of fargate? Oops, we won’t tell you.” - AWS
[+] andrew_|3 years ago|reply
This is going to throw unknowing readers for a loop, because it's a comment trying to be cheeky.

Simply put: Fargate/ECS/EC2+EB = long running tasks

Lambda = Short burst tasks with a max life of 15 minutes

Running a lambda 24/7 will nuke your credit card. Using Fargate for a scheduler/cron job that only runs 4 times a day will nuke your credit card. Use the right tool for the right job.