AWS
x86 price: $0.0000166667 per GB-second, $0.20 per 1M requests
Arm price: $0.0000133334 per GB-second, $0.20 per 1M requests
DO: $0.0000185 per GB-second
So basically a slightly higher price per GB-second, but no cost per request.
AWS free tier provides 400,000 GB-seconds and 1 million free requests per month.
DO free tier provides 90,000 GB-seconds of usage for free per month.
So cheaper in some respects compared to AWS. Google and Azure are roughly the same as AWS.
Also, DO includes egress, which is expensive on AWS (the pricing page does not say egress costs anything).
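As a rough worked example of these rates: the workload below (3M requests/month, 128 MB memory, 200 ms per invocation) is a hypothetical assumption, and the rates and free tiers are the ones quoted above, so treat this as a sketch rather than a billing calculator.

```python
# Hypothetical workload: 3M requests/month, 128 MB memory, 200 ms/invocation.
# Rates and free tiers are the ones quoted in this thread.

def gb_seconds(requests, memory_gb, duration_s):
    """Compute-time usage, the unit both providers bill on."""
    return requests * memory_gb * duration_s

reqs = 3_000_000
usage = gb_seconds(reqs, 0.125, 0.2)  # 75,000 GB-seconds

# AWS x86: pay for usage beyond 400k GB-s free and requests beyond 1M free.
aws = (max(usage - 400_000, 0) * 0.0000166667
       + max(reqs - 1_000_000, 0) / 1_000_000 * 0.20)

# DO: pay for usage beyond 90k GB-s free; requests are not billed.
do = max(usage - 90_000, 0) * 0.0000185

print(f"AWS x86: ${aws:.2f}/mo")  # $0.40 (compute fits in the free tier)
print(f"DO:      ${do:.2f}/mo")   # $0.00 (fits in the free tier)
```

Flip the assumed workload to heavier compute (say 1 GB memory at 2 s per call) and the ranking reverses, which matches the point above: DO charges more per GB-second but nothing per request.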
Thanks, this was the information I was looking for, and I'm glad I won't be moving off AWS Lambda anytime soon.
Really puzzling why DO would announce a new product and make it more expensive than incumbents.
Typically, you would want to undercut existing competitors right out of the gate, because otherwise your competitors will do it for you: by making the first move to reduce their prices, they diminish your market.
Perhaps there is some silver lining here, but I don't understand how you launch serverless hosting and make it more expensive than AWS.
Digital Ocean's pricing model lines up more with how I expect serverless to be used. It's cheaper for APIs that are very simple and lightweight but need to be able to handle effectively unlimited traffic. I like it. I think it incentivizes people to use it correctly.
Finally! DO was seriously lagging behind some competitors in the "small, easier, cheaper than the big 3" space. Scaleway has had serverless functions and containers (which are even better, because they're easier to migrate across providers) and a message queue for a couple of years now.
Congrats on the work anyways, and let's hope they put out a Container as a Service soon too.
Will any sort of scheduled execution be possible with DO Functions? I've been looking for a way to add scheduled tasks (e.g. session cleanup) to DO App Platform apps.
Waiting for a standardized approach to serverless that lets us write the code once and deploy to _any_ cloud provider. Right now, all of these products (AWS, Azure, DO, etc.) have at least slight differences that make it impossible to switch to another provider without some code changes.
Functions aren't necessarily always used in those kinds of overarching architectures. React frameworks like Next.js use them simply to provide backend endpoints for data and logic. I think that's what they're going for: an addition to their App Platform more than anything else.
Given the max execution time of 5s and the max memory of 1GB, it seems like that is exactly what this is. Some endpoints to fetch e.g. data off the database or some processing tasks for a frontend framework, like the functions Vercel or Netlify offer.
The maximum execution duration of 5 seconds seems incredibly short!
The only time I've used "cloud functions" in the past has been for executing complex jobs that may take up to several minutes. For this, cloud functions have been very useful and easy to scale.
Functions through App Platform are fully managed, deployed with zero downtime, and offer rollbacks as well. Deployments from the CLI are fast enough that you can enable watch mode and continuously deploy as you code in your IDE.
Cold starts depend on the runtime and the size of the function, and are an area of continuous improvement (for all cloud providers).
"Serverless Business Unit at DigitalOcean" - I see a lot of "enterprise lingo" around DigitalOcean these days. Are they still the best "individual developer focused" provider?
I'd say yeah. I use and pay for their App Platform for a few personal projects. It's cheap (around $5/mo/app). It reminds me enough of Heroku that I'm happy. The built-in CI is kind of slow, but it just works and requires relatively little configuration, especially compared to AWS/GCP/Azure.
There are simpler, more specialized products, but I think DO strikes a really good balance. For example, I've hosted static sites on Cloudflare Pages, which is a solid product, but it's also rather unconfigurable; I ran into issues with their built-in CI when using a static site generator that wasn't on their supported list.
Yeah, it seems like they've changed a lot recently (since they listed on the NYSE). I feel like they are transitioning away from the "for developers" cloud toward being more of a would-be Heroku or AWS/Azure/GCP cloud.
Serverless has been kinda out of reach for non-enterprise users, but it has a lot of promise for the individual developer experience. There's no additional vendor lock-in if you use serverless platforms for typical (mini-)monolith deployments such as Django+Postgres.
I am a DO customer despite the cost being a little higher for my purposes than AWS. I like their UI, their email reminders, their pricing transparency, and their support.
I switched to DO from Linode a long time back for something that was relatively minor in hindsight, but I've never not been happy with the change.
A handful of firms are now doing this, the same sort of serverless/workers thing, which makes me hesitant to be bullish on certain players like CF, Netlify, Vercel, and the other 'up and comers'. If they're all offering it, the value of one over another drops drastically. Yeah yeah, vendor lock-in, it's fine, people have options and will stay where they want, but it feels like a lot of the big value propositions from these firms over the last year or so have been heavily focused on this aspect of the service. Is it just a case of 'there's lots of business to go around', maybe?
Serverless computing is the future of cloud computing, with more and more infrastructure management shifted to the provider rather than internal infra and ops teams. That isn't to say it's all or nothing; it's a long, enduring transition, which is why the DigitalOcean integration between Functions and App Platform is differentiated relative to some of the other vendors you mention. You don't have to choose between serverful (containers, servers, even Kubernetes) and serverless: functions, plus entirely managed services like CDNs, load balancers, containers that scale from 0 to N, SSL cert management, automatic builds and deploys, rollbacks, API gateways, object storage... and managed serverless key-value stores and databases.
So functions are really table stakes for a cloud, as are events and scheduled and background functions. You've seen this play out with every cloud provider since the arrival of AWS Lambda. Our goal is to make it scale, make it cheap, and make it secure, with DigitalOcean developer simplicity. So you are right: a cornerstone of this endeavor is the developer experience and integration with cloud services, because a cloud application consumes all of these services. As these become core competencies for a cloud, the integrations between all the services only get better and deliver more value to the customer.
I shared a few details above - with the implementation backed by a mature open source project, there are lots of details available already. Thanks for bringing this up though so we can focus on the parts of interest to the community as we roll out more technical documents on the product.
I recently learned about this term, so could someone be kind enough to give me an ELI5 of serverless usage[0], its advantages over "traditional" servers, and why I should/could care?
[0] by usage I mean, I still deploy my monolithic app, or just one function per route?
Some people go with a monolithic app, but best practice is usually one function per route.
Traditional servers come with management overhead (e.g. defining/managing/monitoring scaling); by using serverless platforms you avoid that overhead and optimize for engineer time, which is almost always your bigger cost center.
With a traditional server you keep your webserver running all the time and keep paying for the underlying infrastructure irrespective of actual usage.
You can still spawn more instances/containers on demand and autoscale, but you need to think about provisioning and how that affects your cost.
With serverless, the cost maps directly to usage and scaling is (ostensibly) taken care of for you.
If you have long-running workloads that need preallocated infrastructure and forward planning anyway, you likely don't gain much from serverless. If your work can be split into smaller units of execution that can be invoked on the fly independently, you will likely benefit from serverless pricing.
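To make that trade-off concrete, here's a back-of-envelope break-even sketch. The $6/mo always-on server price and the per-invocation profile (256 MB, 100 ms) are assumptions for illustration; only the $0.0000185/GB-second rate comes from the figures quoted earlier in the thread.

```python
# Assumed always-on alternative and per-invocation profile (illustrative).
DROPLET_MONTHLY = 6.00    # $/month for a small always-on instance (assumption)
RATE = 0.0000185          # $/GB-second, the DO rate quoted above
MEM_GB = 0.25             # 256 MB per invocation (assumption)
DURATION_S = 0.1          # 100 ms per invocation (assumption)

cost_per_invocation = RATE * MEM_GB * DURATION_S   # 4.625e-07 dollars
break_even = DROPLET_MONTHLY / cost_per_invocation

print(f"Break-even at ~{break_even:,.0f} invocations/month")  # roughly 13M
```

Below that volume the pay-per-use model wins; above it, the always-on server does (ignoring the free tier and the ops time saved).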
We use AWS Lambda for running slower background tasks triggered by a user action on a website (e.g. generating a report, clearing and rebuilding a cache, etc.).
We deploy our full monolithic app to Lambda (as a Docker image) and then just have a wrapper entrypoint script that dispatches the request to the appropriate module and function.
There are benefits to keeping each Lambda function small, but we like being able to deploy one Lambda and call any function within the monolith.
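A minimal sketch of that wrapper-entrypoint pattern. The task names and functions here are hypothetical stand-ins for the monolith's modules, not taken from the comment:

```python
# Sketch of the "one lambda, dispatch inside" pattern described above.
# TASKS maps a task name from the event payload to a callable; in a real
# setup these would be imports from the monolith. Names are illustrative.

def rebuild_cache(payload):
    return {"rebuilt": True, "keys": payload.get("keys", [])}

def generate_report(payload):
    return {"report_id": payload.get("id"), "status": "queued"}

TASKS = {
    "rebuild_cache": rebuild_cache,
    "generate_report": generate_report,
}

def handler(event, context=None):
    """Single Lambda entrypoint: route to the right function in the monolith."""
    task = TASKS.get(event.get("task"))
    if task is None:
        return {"statusCode": 400, "error": f"unknown task {event.get('task')!r}"}
    return {"statusCode": 200, "result": task(event.get("payload", {}))}
```

For example, `handler({"task": "rebuild_cache", "payload": {"keys": ["a"]}})` dispatches to `rebuild_cache`, while an unknown task name returns a 400.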
To a certain extent, serverless actually delivers on the "No sysadmins required" promise that EC2 doesn't really enable (someone still has to manage your EC2 herd's configuration, even if you call that person a "devops engineer").
The code for the functions is vendor agnostic. Vendor lock-in comes from the integrations the code that runs in the cloud ends up consuming, and the developer experience one acquires. The nature of cloud development is that one invariably becomes an expert in a cloud or stack, and that's the real lock in / why it's expensive to move in practice.
Digital Ocean is great for spammers because Digital Ocean does not care to be a good internet citizen. Being a good internet citizen would be too expensive, so DO just throws up its hands and does nothing about its spamming customers.
Uhm, you just sounded like an ad for DO here: "DO won't randomly shut down your account because they guess you might be doing something fishy like spamming"
...and don't tell me you don't get "spam" from AWS users. Only GCP is doing more policing, but as a dev I'd be wary of using them for fear of being labeled "suspicious" because of some weirder scraping usage patterns or whatever.
Cloudflare's free tier is 100,000 requests per day.
Here's the pricing page: https://workers.cloudflare.com/#plans
5 second max function duration? Cloudflare is unlimited and AWS is 15 minutes.
https://docs.digitalocean.com/products/functions/details/lim...
Nothing to build serious architectures with.
I hope DO can consider increasing this limit.
How long does it take to deploy an update? Are updates rolling/zero-downtime?
Will you be transparent about how it works, such as when a function is frozen? This is something I miss with many Aws services
2. It locks you in to a certain vendor
3. It costs more
Code tied to a single infrastructure dies with it, and the author usually can't save it.
Ask a Heroku fan how they're feeling right now before moving forward with any single-host approach.
Just fine? Fly.io has a fairly seamless automated migration at https://fly.io/launch/heroku ; my Heroku apps are quite portable.
https://www.digitalocean.com/community/conceptual_articles/h... shows a pretty similar approach to Lambda - they just invoke a function with a handler. You could run the same handler on AWS Lambda or Cloudflare Workers, probably without any changes.
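That handler shape can be sketched as follows. This is a minimal, hypothetical example (the greeting logic and parameter names are illustrative): a provider-agnostic `main(args)` core in the dict-in/dict-out style the linked article describes, plus a thin adapter for AWS Lambda's `(event, context)` signature.

```python
# Provider-agnostic core: a function that takes a dict of args and
# returns a dict, the shape the linked DO article describes.
def main(args):
    name = args.get("name", "stranger")
    return {"body": f"Hello, {name}!"}

# Thin adapter so the same core runs under AWS Lambda's signature.
def lambda_handler(event, context):
    return main(event or {})
```

The core logic stays identical; only the thin adapter (and whatever cloud integrations the code calls) changes per provider, which is the lock-in point made elsewhere in this thread.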
Having all your code platform agnostic isn't really a viable option unless you've got budget to spare.