This is great news. I'm really rooting for a successful trend of Serverless runtimes, mainly as a weapon against rising cloud deployment costs.
While the general trend today is to back serverless environments with JavaScript runtimes (Cloudflare runs its edge on top of V8, Netlify uses Deno, most other serverless runtimes use Node.js), I'm optimistic that WebAssembly will eventually take over this space, for a bunch of reasons like:
1. Running a WASM engine on the cloud means running user code with all the security controls, but with a fraction of the overhead of a container or Node.js environment. Even the existing JavaScript runtimes come with WebAssembly execution support out of the box, which means these companies can launch support for WASM with minimal infra changes.
2. It unlocks the possibility of running a wide range of languages. So there's no lock-in with the language that the Serverless provider mandates.
3. Web pages that are as ancient as the early 90s are perfectly rendered even today in the most modern browsers because the group behind the web standards strives for backward compatibility. WebAssembly's specifications are driven by those same folks, which means WASM is the ultimate format for any form of code to exist. Basically, it means a WASM binary is future proof by default.
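To make point 1 concrete: the sketch below is a hand-assembled WebAssembly module (written out byte-for-byte in the binary format) exporting a single `add` function, instantiated directly by a JS engine's built-in WebAssembly support, with no extra infrastructure.

```typescript
// A hand-assembled WebAssembly module exporting add(a, b) -> a + b.
// The comments map each group of bytes to its section in the binary format.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: func 0 has type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

// Any engine with WebAssembly support (V8, Deno, Node.js) can run this as-is:
const instance = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes));
const add = instance.exports.add as (a: number, b: number) => number;
// add(2, 3) === 5
```

The module runs inside the same engine sandbox as JS, which is what makes the "security controls at a fraction of the overhead" argument plausible for multi-tenant hosts.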
I fully agree with your take. I think JS-directed computation is well suited to short-term adoption (since running JS on the Edge is probably the most popular use case), but eventually the need to run other programming languages at the Edge will likely eclipse the JS use case.
At Wasmer [1] we have been working actively towards this future. Lately more companies have also been doing awesome work on these fronts: Lunatic, Suborbital, Cosmonic (WasmCloud) and Fermyon (Spin). However, each of us has a different take/vision on how to approach the future of Computation at the Edge. I'm very excited to see what each approach will bring to the table.
I mean, only in theory or when looking at it from the right angle, right? Or are you only comparing against JavaScript (unclear)? WASM is still much slower than native code. Containers spend most of their time executing native code; the "overhead" of containers is at the boundaries and is minor compared to the slowdown from moving native code to WASM. In the future WASM may approach native performance, but it's not there now. I'm 100% certain that transitioning my native-code-in-containers workloads to WASM would be slower, not faster.
Also, a common spec for serverless is badly needed. Serverless code should be portable between different cloud providers; otherwise there's vendor lock-in and a much greater opportunity to price gouge.
Anybody know if a common API for serverless components is being worked on?
Really, really dumb question. I've seen a lot of node/python/etc serverless offerings. Is there something where you just provide a binary and it's executed each time?
For example, I write a simple single responsibility piece of code in Go `add_to_cart.go` and build it, deploy it, and somehow map it to some network request. dot slash pass args, and return the result?
I agree about WASM. I am sort of worried that Deno may be too late tbh. Why would I bother with an interpreted language at all when I can code in any language I want and run it anywhere with WASM?
As far as I can tell from the outside, that's still "WASM-called-by-Javascript", and many of their JS optimizations don't work the same way. E.g. if a Worker calls JS `fetch` and returns that `Response`, they recognize that and remove the JS from the data path; same is not true for WASM at this time.
Hmmm, I had a look into WASM runtimes, and the idea of deploying something on a server as a lightweight execution environment seems interesting (I think of Firecracker from AWS for VMs).
To be honest, on the server side of things containers are so nice because 99% of the time they include all the dependencies you need to run the app.
> I'm really rooting for a successful trend of Serverless runtimes, mainly as a weapon against rising cloud deployment costs.
How would that work? Don't these tend to facilitate cloud lock-in or at least be cloud-only in the sense that they make it hard to operate your own metal infrastructure?
> mainly as a weapon against rising cloud deployment costs.
Cloud Functions is literally code you're running in the cloud. And the moment you approach their limit(ation)s, you will see the same "rising cloud deployment costs"
> Running a WASM engine on the cloud means ... a fraction of the overhead of a container or Node.js environment
You do realise that there are other languages than JavaScript in Node.js? That there are other environments than cloud functions? And that you can skip that overhead entirely by running a different language in a different environment? Or even run Rust in AWS Lambda if you so wish?
> so there’s no lock-in with the language that the Serverless provider mandates.
And at the same time you're advertising for a runtime lock-in. This doesn't compute.
> Web pages that are as ancient as the early 90s are perfectly rendered even today... Basically, it means a WASM binary is future proof by default.
It's not future proof.
Web pages from the 90s are not actually rendered perfectly today, because browsers didn't agree on standard rendering until the late 2000s, and many web pages from the 90s and 2000s were targeting a specific browser's feature set and rendering quirks. Web pages from the 90s are rendered well enough (and they had few things to render to begin with).
As the web's standards approach runaway asymptotic complexity, their "future-proofness" is also questionable. Chrome broke audio [1], browsers are planning to remove alert/confirm/prompt [2], some specs are deprecated after barely seeing the light of day [3], some specs are just shitty and require backtracking or multiple additional specs on top to fix the most glaring holes, etc.
> I've published my (ranty) notes on why Serverless will eventually replace Kubernetes as the dominant software deployment technique
"Let's replace somewhat unlimited code with severely limited, resource constrained code running in a slow VM in a shared instance" is not a good take.
Netlify for me is a prime example of a great company gone wrong by raising too much VC money.
The basic product of Netlify is a great one: build and host static sites without the need to mess with any of the tech stack. For us developer folk, this should be easy: run the build command of any static site generator and stick the results into an S3 bucket. And yet, something as simple as this became so popular, even with developer companies (see Hashicorp's quotes on Netlify).
This could have been a great story, but then tons and tons of VC money came in, and now you have to think of ways to make the valuation worth it and make the product sticky. So now we have edge Deno-powered functions, Lambda-esque applications, form-embedded HTML and so many other features that are used by the long tail of their customer base, while they changed their pricing to charge by git committers and have had daily short downtimes of 1 to 5 minutes for the past month (monitored by external services, as they wouldn't reflect that in their status page).
Soon, they’ll sell the company to some corp like Akamai or similar “enterprise” outfit leaving us high and dry.
There is a lot of money in building businesses that do boring stuff that just makes people's lives easier. But when you take VC money, you need to build a moat to fend off cloud providers from the bottom, capture value from developers at the top, and everything in between.
I’d be interested in building the bootstrapped “git push and we build and publish”, aka “heroku for static site compilers”
Chime in if you’d like to be one of the first few customers. If there’s enough interest here’s how I’d play it:
1. I won’t raise VC money. I know how to build a SaaS business without it—I bootstrapped Poll Everywhere from $0 to $10m+.
2. My motivations these days are to build low complexity products. Ideally they're “evergreen”, meaning I can ship a core feature set that I know will be the same in 10 years. The feature I'm selling here is stability.
3. I like to price things in a way that makes them accessible to as many people as possible while being sustainable for the business so it can operate for a long time with the support it needs for customers.
I think this is a natural and fine extension of the Netlify platform. They've had various "serverless functions" for a few years that's mostly been out of the way if you don't need it.
It fits within their goal of a 'heroku for frontend websites', for easily deploying sites.
I guess Netlify still offers the basic static site hosting, which can be anything from drag-drop to easy to set up automated github deployments. I mean, it's not like Netlify offers a worse static hosting service post-funding, right? With VC funding they've just built out more features. Not to mention I think they've always aimed to build out the "JAM" stack and support as many frameworks as possible.
Netlify pricing has always been confusing to me, but I'm not entirely sure why. I guess I'm more accustomed to pay-as-you-go in this space (CFW) than tiered plans (Netlify bundles their features into starter/pro/business).
It seems that the free plan is 3M invocations/mo, starter is 15M/mo, and business is 150M/mo, but there aren't any ways to increase those limits (business says to contact them for higher limits).
Personally I'd prefer true pay-as-you-go without hard limits, even if it's a bit more expensive. To me the point is to sign-up-and-forget-it without having to worry if I'm within those limitations.
Netlify's creative pricing is what lost me as a customer. They decided to start reading our git commits to decide how much to charge us. Instead of charging for usage of bandwidth and build minutes they decided to charge based on how many different authors we had--even though those people never interacted with Netlify or even knew how we were deployed. If we didn't hurry up and migrate to Render.com this would have taken our bill from $1.5k/year to over $25k/year.
> Personally I'd prefer true pay-as-you-go without hard limits, even if it's a bit more expensive.
Please, a hard no to that. That's the worst aspect of AWS, Azure and all those new huge hosting centers - hard to calculate the real cost and set a budget.
I don't know about Netlify, but the old Linode (before it got acquired) was flexible with the "hard" limits in a plan - for example, if your site got slashdotted / Digged (or was that dug?) and suddenly saw a spike in its resource usage, exceeding the limits, they were quite accommodating in not charging their users for the unexpected extra usage. Linode even wouldn't mind an occasional surge in resources a few times a year. But if it happened more frequently, they would recommend that you upgrade to a more suitable plan. They earned a lot of goodwill that way from their clients, who really appreciated that their server / site wasn't unexpectedly taken offline because of a resource crunch they hadn't paid for and / or anticipated.
I have no special insight into Netlify, so this is (educated) speculation: there's an important difference between pay-as-you-go compute providers, like AWS, and Netlify: Netlify is a platform, their value is not derived from the workloads they process, so charging (or not charging) based on compute doesn't align with their value proposition. The value of Netlify is that it's an end to end platform, taking a business from having some code to having a live website, where compute is just one component of the entire value proposition.
The marginal cost of a request is probably negligible, hence the tens of millions of requests included, but there is a cost associated with each user making use of their platform because it includes a lot more than just compute, and that's the value they're charging for.
I think if you're looking for a compute provider that offers pay as you go billing in order to minimise your costs, then Netlify probably isn't the platform for you, and you'd be better off using their service provider directly (in this case, Deno, but many Netlify alternatives use Lambda, Cloudflare Workers etc.).
This has been one of the big knocks on AWS, that a poor little old lady can set up a "free" AWS account, then when her website (and accompanying Lambda function) goes viral she gets hit with a $100k bill from uncle Jeff.
> Personally I'd prefer true pay-as-you-go without hard limits, even if it's a bit more expensive. To me the point is to sign-up-and-forget-it without having to worry if I'm within those limitations.
Sure, if you can set a max budget. Otherwise, you'd constantly have to worry about the unbounded cost.
I would love to jump over to something like Vercel or Netlify Edge, but maddeningly none of these platforms give you control over the cache key. I have pages that are server-side rendered with Cache-Control headers, but because our visitors come with unique tracking params on the end of their URL (e.g. from Mailchimp or Branch), we would essentially have no cache hits.
It seems the only way to have control over this is to write your own Cloudflare Workers. There must be a better way? I can't imagine this is an infrequent problem for people at scale.
So far Netlify Edge Functions run before the cache layer, so you can actually use a minimal function to rewrite the URL to remove all unique params, etc., and then let it pass through our system to a Netlify Function which runs behind our caching layer.
For anything you can do at build time as static HTML pages, we already strip query parameters from cache keys.
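For the cases that can't be handled at build time, the rewrite approach described above might look roughly like this. This is a hedged sketch: the tracking-parameter list is illustrative, and the `context.rewrite` call mirrors the API mentioned elsewhere in this thread rather than a verified signature.

```typescript
// Parameters that vary per visitor but don't affect the rendered page.
// This list is illustrative, not exhaustive.
const TRACKING_PARAMS = ["utm_source", "utm_medium", "utm_campaign", "mc_cid", "mc_eid"];

// Normalize a URL so all visitors share one cache key.
function stripTrackingParams(rawUrl: string): string {
  const url = new URL(rawUrl);
  for (const p of TRACKING_PARAMS) url.searchParams.delete(p);
  return url.toString();
}

// Hypothetical edge function shape wrapping the helper:
// export default (request: Request, context: any) =>
//   context.rewrite(new URL(stripTrackingParams(request.url)));
```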
I'm biased, but there is a better way: give developers a high-performance method of programmatically manipulating the cache key from JavaScript. That's what we created with EdgeJS: https://docs.layer0.co/guides/caching It's less work to write and higher performance than dealing with edge functions or workers for routine tasks like this.
You're experiencing friction from trying to use something in a way it's not supposed to be used (i.e., click-tracking by junking up URLs). You could look for an answer, or you could take a step back, evaluate your expectations, and then decide not to do what you're trying to do.
I'm surprised that the function is async but context.rewrite() doesn't use an await. Is that because the rewrite is handed back off to another level of the Netlify stack to process?
How many Deno instances might an edge server run? Does each tenant have an instance or is there multi-tenancy? What interesting tweaks have you made making a cloudified offering of Deno tailored for http serving?
We’ve been Netlify paying customers for 2 years now. While I appreciate the new features, the core platform has been becoming unreliable in the past 6 months. We’ve had a decent amount of downtime.
I do not recommend them anymore. We will move somewhere else.
Almost every few days we get a report that some customers can’t access our site from where they are. Our US east engineers can confirm that their POP is down.
Netlify’s status page says everything is working, but in reality it’s not.
Netlify as a CDN has failed for us on its core promise.
Does anyone know how those compare to regular Netlify Functions, other than running on the edge nodes? The main difference I’ve found is that they have much stricter CPU time budgets, but it seems to me that the use cases overlap quite a bit.
You have a hosted static web app but want to dynamically change the <meta> tags in your index.html to provide a unique URL preview for each route (/about, /careers, etc.)
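A sketch of that pattern: intercept the HTML response and inject a route-specific tag before serving. The route table and tag below are invented for illustration; a real implementation would cover descriptions, images, etc.

```typescript
// Hypothetical per-route titles for link previews.
const ROUTE_TITLES: Record<string, string> = {
  "/about": "About Us",
  "/careers": "Careers",
};

// Inject an og:title tag for known routes; unknown routes pass through.
function injectOgTitle(html: string, path: string): string {
  const title = ROUTE_TITLES[path];
  if (!title) return html;
  return html.replace(
    "</head>",
    `<meta property="og:title" content="${title}"></head>`,
  );
}
```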
For one (minor) thing, they're a great way to add certain HTTP headers which can't be handled through other means. I use a Cloudflare Worker to give my site the necessary headers for its Content Security Policy (some parts of which aren't to be added via a <meta> tag[0]), as well as the nonces[1] for that CSP. This only scratches the surface, of course.
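The nonce pattern boils down to generating a fresh value per request and weaving it into the header value. A minimal sketch (the directives here are illustrative, not a complete policy):

```typescript
// Build a CSP value embedding a per-request nonce.
function buildCsp(nonce: string): string {
  return [
    "default-src 'self'",
    `script-src 'self' 'nonce-${nonce}'`,
    "object-src 'none'",
  ].join("; ");
}

// In a worker you would generate the nonce per request, set the header,
// and inject the same nonce into inline <script> tags, e.g.:
// const nonce = crypto.randomUUID();
// headers.set("Content-Security-Policy", buildCsp(nonce));
```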
Serverless API functions, like if you were going to use AWS Lambda functions to add interactivity or simple APIs to a site without having to manage and run a full server.
I was told to use Firebase Cloud Functions literally yesterday.
You can pre-parse and pre-process JSON responses to minimize the payload size and customize it for your frontend needs. It makes dealing with client secrets and configuration easier too, I believe. I didn't want to rewrite a bunch of backend code, so this was one of the simplest solutions.
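The pre-processing idea can be sketched as a generic field-picker run at the edge; `pickFields` is a hypothetical helper, not any platform's API:

```typescript
// Keep only the fields the frontend needs, shrinking the payload
// before it crosses the network to the client.
function pickFields<T extends Record<string, unknown>>(
  items: T[],
  fields: (keyof T)[],
): Partial<T>[] {
  return items.map((item) => {
    const slim: Partial<T> = {};
    for (const f of fields) slim[f] = item[f];
    return slim;
  });
}
```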
Deno Deploy (https://deno.com/deploy) uses the same optimizations as CFW to achieve effectively 0ms cold starts.
Netlify Edge Functions are still in beta and don't have all of the same optimizations yet, but we're going to be working with Netlify over the next few months to enable these optimizations to Netlify Edge Functions too.
I don't want edge functions, I want edge appliances. An edge function means I still have to run my own janky devops for that specific appliance. Edge IPv6 Appliances or Bust.
It sounds like Netlify is essentially reselling a third-party service here. Isn't operating infrastructure Netlify's job? Why outsource this? Can requests end up taking circuitous paths where Netlify and Deno's infra don't line up?
lewisjoe|3 years ago
I've published my (ranty) notes on why Serverless will eventually replace Kubernetes as the dominant software deployment technique, here - https://writer.zohopublic.com/writer/published/nqy9o87cf7aa7...
syrusakbary|3 years ago
[1] https://wasmer.io/
me_me_mu_mu|3 years ago
No need to have containers or runtime?
dmitriid|3 years ago
[1] https://www.usgamer.net/articles/google-chromes-latest-updat...
[2] https://dev.to/richharris/stay-alert-d
[3] https://chromestatus.com/feature/4642138092470272 and https://www.w3.org/TR/html-imports/
brycewray|3 years ago
[0]: https://content-security-policy.com/examples/meta/
[1]: https://content-security-policy.com/nonce/
valbaca|3 years ago
How to use them: drop JavaScript or TypeScript functions inside an edge-functions directory in your project.
Use cases: custom authentication, personalize ads, localize content, intercept and transform requests, perform split tests, and more.
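One of the listed use cases, intercepting requests to localize content, might look roughly like this. The header name, country codes, and path prefixes are invented for the example:

```typescript
// Decide whether to redirect a visitor to a localized path.
// Returns the redirect target, or null to serve the page as-is.
function localizeRedirect(path: string, country: string | null): string | null {
  const localized: Record<string, string> = { DE: "/de", FR: "/fr" };
  const prefix = country ? localized[country] : undefined;
  return prefix && !path.startsWith(prefix) ? prefix + path : null;
}

// Hypothetical edge function wrapper:
// export default (req: Request) => {
//   const target = localizeRedirect(new URL(req.url).pathname, req.headers.get("x-country"));
//   return target ? Response.redirect(new URL(target, req.url).toString(), 302) : undefined;
// };
```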
slimebart|3 years ago
CF always seems so cheap compared to alternatives, if you ever expect to scale beyond the developer plans.