For reference: Lambda functions used to be billed in 100ms intervals. My Node.js function usually takes only 37-40ms to run, so this is a pretty good advancement for cost savings.
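A minimal sketch of why the move from 100ms to 1ms rounding helps a ~40ms function. The per-ms rate is the figure quoted later in this thread; real Lambda pricing also varies with memory size, which this ignores.

```python
import math

PRICE_PER_MS = 0.0000000021  # USD/ms, rate quoted elsewhere in the thread

def billed_ms(duration_ms, granularity):
    """Round a run's duration up to the billing granularity."""
    return math.ceil(duration_ms / granularity) * granularity

# A 40ms run used to be billed as a full 100ms interval; now as 40ms.
print(billed_ms(40, 100))  # 100
print(billed_ms(40, 1))    # 40
print(billed_ms(40, 1) / billed_ms(40, 100))  # 0.4 -> a 60% cost cut here
```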
Just out of curiosity, what are you getting out of your 160 million CPU cycles? Are you mostly on the CPU, or mostly waiting for something (database call or whatever)?
So this confirms there is a lot of competition in the serverless space: AWS Lambda, Azure Functions, Google Cloud Functions, serverless containers like Knative, Google Cloud Run...
Just out of curiosity, could you share what kind of things you use it for?
I've never used Lambda, but any time I have a function that needs to run in response to some event or periodically (that's what Lambda is for, right?), I set it up as a background worker specifically because it's long and slow; anything fast can be done synchronously without the overhead.
Necessary change. Writing stuff in fast languages now suddenly matters for cost, changing the landscape of when these solutions might be viable.
I've been here for a while resisting the temptation to write a sarcastic comment. Speed has been the opposite of what matters for decades. Every change is always trading speed for something else. And suddenly some offer by Amazon is going to change that? Seems unlikely.
From the research I did, here's how languages stack up in Lambda runtime (fastest first):
1. Python & JS
2. Go
3. C# & Java
I couldn't find any data on Rust.
The understanding at the time was that the Python & JS runtimes are built-in, so the interpreter is "already running." Go is the fastest of the compiled languages, but just can't beat the built-in runtimes. C# and Java fared worst, as they spin up a larger runtime that's optimized for long-running throughput.
A lot of this was based around the fact that we've seen languages become just so much more performant. This includes Go/Rust/etc, but a lot of Node.js workloads are also sub 100ms, or fast enough that they'd benefit from this pretty well.
I've had bad experiences with Go startup (i.e., cold starts). They're much more expensive than I would have expected. If Node can indeed run in 40ms (as https://news.ycombinator.com/item?id=25267211 says), then I'm surely going back to JS.
JS is a great choice for Lambda thanks to great cold performance. I’m seeing runtimes in the 40ms to 100ms range.
Most of the time in Lambda is usually spent waiting for IO, which is slow in any language. If you’re using Lambda for heavy computation, that’s not a great choice.
IME Lambda functions are mostly sitting around waiting on I/O, so I don't think it would make much of a difference for those workloads. The important technical factors for those workloads are startup time and I/O capabilities...JS is strong in both of those areas. For simple Lambda functions JS still seems like a great choice, along with Go. Rust would be overkill IMO unless you need to share a codebase or aren't I/O bound or have some other unique requirements.
I've learned that AWS pricing tends to improve over time, and I appreciate it. I just recently switched from an authorization startup to AWS Cognito because the startup kept raising their prices.
It's nice to see this drop, though I'm sure Amazon does it due to competition as well.
Exciting! As a primarily C & C++ programmer, this makes me happy. Also, I see that there are now examples for C++ that don't involve "step 1, download Node.js". Progress!
Interesting: the German version of the page (and other non-English versions, if I'm parsing them correctly) still mentions rounding up to 100ms, while the English version says 1ms.
Cache? Not yet translated? Different pricing model?
Oh, that absolutely changes the price calculation for Lambda. Historically, the 100ms minimum billing interval made Lambda significantly more expensive than EC2 for a large number of workloads.
I find it annoying to have all these pricing per second, and now per millisecond. It's really hard for my mind to visualize what `$0.0000000021 per millisec` actually is.
Being billed by the millisecond does not mean that you should give a pricing per millisecond.
I prefer Digital Ocean's or Heroku's approach of billing by the second but giving the price per month. How on earth is `$0.0000000021 per millisec` better than `$5/month, billed by the second`? If I know that my workload will be about 20% of a dedicated CPU, I know that I'll end up paying about $1 per month.
There is simply an enormous number of assumptions that would go into estimating anything else, because the millisecond is the only correct metric. Lambda billing for a month? What on earth does that say? 10 invocations running 15 minutes? 90,000 invocations running 100ms? (Those two are equivalent, btw.)
If I know my function takes around 35ms and I'll probably invoke it 5,000 times per day, then I can calculate my monthly cost: 0.0000000021 $/ms * 35 ms * 5,000 * 30 = $0.011/month.
AWS usually shows a neat example use case and what its billing would be on their pricing pages.
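The back-of-envelope estimate above, as a tiny helper. This uses only the per-ms rate quoted in the comment; real Lambda bills also include a per-request fee and scale with memory allocation, which this sketch ignores.

```python
PRICE_PER_MS = 0.0000000021  # USD per ms, rate quoted in the comment

def monthly_cost(avg_ms, invocations_per_day, days=30):
    """Estimate monthly duration cost for a function with a given average runtime."""
    return PRICE_PER_MS * avg_ms * invocations_per_day * days

# 35ms average, 5,000 invocations/day, 30 days
print(round(monthly_cost(35, 5_000), 3))  # 0.011
```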
Is there a way to use Lambda where I can run ffmpeg to watermark and downscale a 4K video? Possibly some system where I can throw a lot of compute at the job and get it done quickly. Right now it takes multiple minutes on my VPS, and scaling up the VPS for one feature, or dedicating another server to it, is overkill.
I don't see why not, assuming it finishes within the Lambda max run time (15 minutes, I believe). Using a Python script with a bundled ffmpeg binary might work. Note: you'll need to write to the Lambda's /tmp space, as that's the only writable part of the filesystem, and then upload the result to S3 or elsewhere.
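A sketch of the approach described above: build an ffmpeg command that overlays a watermark and downscales, reading and writing under Lambda's writable /tmp. The paths and target height are illustrative, and the S3 upload step is left out.

```python
def ffmpeg_watermark_cmd(src, watermark, dst, height=1080):
    """Build an ffmpeg argv that overlays a watermark and downscales to `height`."""
    return [
        "ffmpeg", "-i", src, "-i", watermark,
        # overlay the watermark at (10,10), then scale to the target height,
        # keeping the aspect ratio with an even width (-2)
        "-filter_complex", f"[0:v][1:v]overlay=10:10,scale=-2:{height}",
        "-c:a", "copy",  # pass audio through untouched
        dst,
    ]

cmd = ffmpeg_watermark_cmd("/tmp/in.mp4", "/tmp/wm.png", "/tmp/out.mp4")
# run with: subprocess.run(cmd, check=True)
```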
Lambdas are good at batch jobs where you might need to kick off a few of them but don't want a dedicated system for it. I've used them to automate manual customer support tasks that come in sporadically.
I wonder if this change will push more people to consider rewriting code in more memory- and CPU-efficient languages, for example from Java/C# to C or Rust?
Supermancho | 5 years ago
For some people. Those cost savings are made up somewhere else. Ultimately, Amazon is not a loss leader.
akh | 5 years ago
I'm interested to hear what people think about https://www.infracost.io/docs/usage_based_resources - longer term we could extend that to fetch average_request_duration from cloudwatch or datadog.
pwinnski | 5 years ago
Very tired of this: `Duration: 58.62 ms Billed Duration: 100 ms`
Very happy about this: `Duration: 48.74 ms Billed Duration: 49 ms`
valbaca | 5 years ago
https://docs.aws.amazon.com/lambda/latest/dg/best-practices....
https://medium.com/the-theam-journey/benchmarking-aws-lambda...
https://epsagon.com/development/aws-lambda-programming-langu...
https://read.acloud.guru/comparing-aws-lambda-performance-of...
Of course, benchmarks like these only go so far. Use them as a starting point for your own evaluation, not as the be-all and end-all.
k__ | 5 years ago
On the other hand, I've heard legends of sub-10ms Rust Lambdas.
4lejandrito | 5 years ago
I'm ignorant about AWS Lambda, but how do you know whether their millisecond measurements are accurate? Is there any way to verify this?
cordite | 5 years ago
A feature I'd really like next is secrets as environment variables like ECS.
Retrieving SecretsManager secrets and SSM Secure Parameters in application code is messy and provides significant friction for developers on my team.
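Not a substitute for the env-var feature, but one common workaround for the retrieval friction is to fetch each secret once per container and cache it across warm invocations. A minimal sketch; `fetch_secret` is a stand-in for the real SSM/SecretsManager call, not an AWS API.

```python
_cache = {}  # survives across warm invocations of the same container

def get_secret(name, fetch_secret):
    """Return a cached secret, calling fetch_secret(name) only on a miss."""
    if name not in _cache:
        _cache[name] = fetch_secret(name)
    return _cache[name]

# demo with a stand-in fetcher that records how often it's called
calls = []
def fake_fetch(name):
    calls.append(name)
    return f"value-of-{name}"

get_secret("db-password", fake_fetch)
get_secret("db-password", fake_fetch)
print(calls)  # ['db-password'] -- fetched only once across warm invokes
```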
kadukeitor | 5 years ago
https://callbackfy.com
It's essentially a way to save some money by avoiding long HTTP requests: it buffers requests and sends a callback when the result is complete.