top | item 15364896

Cloudflare Workers: Run JavaScript Service Workers at the Edge

327 points | thomseddon | 8 years ago | blog.cloudflare.com

132 comments

[+] js2|8 years ago|reply
This is probably the best annoucement of a new feature I have ever read. It makes an analogy to an existing technology. It provides a clear description of the new feature. It provides clear examples of how to use the new feature with a link to a sandbox so you can run and modify the examples. And it explains the thought process behind the implementation. In additon, I didn't notice a single typo, spelling or grammar error.

Also, this feature is pretty cool!

[+] kentonv|8 years ago|reply
Daww, thanks!

I wish I'd included more images and diagrams in the post, but I'm generally terrible at coming up with those.

I also wish we'd been able to enable it in production for everyone immediately... but with a change this big we need to be cautious.

[+] konradzikusek|8 years ago|reply
> I didn't notice a single typo

JavaScript is spelled incorrectly all over the place.

Awesome feature and post!

[+] lima|8 years ago|reply
You misspelled announcement though. And addition.

Sorry, had to :)

[+] kentonv|8 years ago|reply
Hey all! This is my project at Cloudflare.

(You may remember me as the tech lead of Sandstorm.io and Cap'n Proto.)

Happy to answer questions!

[+] jayrox|8 years ago|reply
This is really interesting. I have an idea where this could be helpful to the Plex user community. Recently Plex added a header that blocks the page from being iframed (X-Frame-Options).

Would doing something like this, obviously replacing example.com with their own domain.com, replace the offending header?

  addEventListener('fetch', event => {
    event.respondWith(handle(event.request));
  });

  async function handle(request) {
    let response = await fetch(request);
    // X-Frame-Options arrives on the response, not the request.
    if (response.headers.has('X-Frame-Options')) {
      let newHeaders = new Headers(response.headers);
      newHeaders.set('X-Frame-Options', 'ALLOW-FROM https://example.com/');
      return new Response(response.body, {
        status: response.status,
        statusText: response.statusText,
        headers: newHeaders
      });
    }
    // Use default behavior otherwise.
    return response;
  }
[+] predakanga|8 years ago|reply
Congratulations on the announcement - it seems like a game changer of a feature, and one I look forward to testing.

Will Cloudflare be curating a list of useful worker scripts? I imagine there will be certain use cases that get a lot of attention (e.g. ESI).

Do requests made from the API go through Cloudflare's usual pipeline, or do they go straight to the backend? In short, will we need to manage the cache ourselves?

And finally, does this change Cloudflare's role as a "dumb" intermediary?

[+] aboodman|8 years ago|reply
Hey Kenton, should've guessed you were behind this. Lovely design, great work.

I'd be curious to learn more about the implementation (did you lift the existing SW implementation from blink somehow, or reimplement it)?

[+] vvanders|8 years ago|reply
Pretty awesome stuff, will echo that this is one of the cleanest feature pages I've seen.

I'd love to hear more about your evaluation of Lua. LuaJIT is so blazingly fast(and small!) that I'm sure it'd be some pretty significant compute savings.

What sandbox solutions did you look into? Separate lua states, just overriding ENV/setfenv() or something completely different?

[+] aclelland|8 years ago|reply
Are you able to talk about the pricing structure? Is this going to be per request, total cpu time, etc?

Edit: In another reply you said that pricing hasn't been finalised, which is understandable. We've got a few use cases which CF Workers would be ideal for, but we'd be looking at 10-15k requests per minute, which could get expensive if pricing is per request.

[+] gok|8 years ago|reply
The blog post mentions WebAssembly support in V8, but I can't get it to work in the playground. Coming later?
[+] richdougherty|8 years ago|reply
Hi Kenton - awesome new feature!

In the blog post you talk about the trade-offs of using JavaScript vs other options like containers. I thought you might be interested in this comparison of using JavaScript vs other sandboxing options.

https://blog.acolyer.org/2017/08/22/javascript-for-extending...

> Hosts of V8 can multiplex applications by switching between contexts, just as conventional protection is implemented in the OS as process context switches… A V8 context switch is just 8.7% of the cost of a conventional process context switch… V8 will allow more tenants, and it will allow more of them to be active at a time at a lower cost.

[+] jamesrwhite|8 years ago|reply
Will there be any limits on memory/execution time etc like with AWS Lambda?
[+] kentonv|8 years ago|reply
Oh yeah, BTW: If you're an experienced systems engineer interested in working on a young codebase written in ultra-modern C++, let me know (kenton at cloudflare).
[+] lming|8 years ago|reply
Would Cloudflare endorse VPN services built on Service Workers? Sounds like a great option to help people in censored countries.
[+] abritishguy|8 years ago|reply
Is there any way to store state?
[+] niftich|8 years ago|reply
Kudos for re-using an existing API when one was already available in the same language for a very similar use case.

Any time you commit to someone else's API -- whether it's an actual industry standard, or simply some de facto widely used paradigm -- you incur risks; conversely, now that you're a vested participant, consider being involved in the future of the spec so it can evolve where it needs to in order to meet emerging needs around its new uses.

[+] kentonv|8 years ago|reply
Absolutely! One reason we wanted to get this out in public before it's ready is so we can properly engage with the spec writers (and the V8 team).

That said, I am amazed by how well the spec fits as-is. I don't usually like other people's API designs but in this case I think they did a really good job, and I've been pleased not to have to think about API design myself.

[+] j_s|8 years ago|reply
Paging HN user johansch from his comment on the Cloudflare Apps mitm JavaScript injection discussion 3 months ago:

johansch: All I want is my code running on your nodes all around the world with an end-to-end ping that is less than 10 ms to the average client

dsl: Akamai Edge Compute is what they are asking for

https://news.ycombinator.com/item?id=14650025

[+] manigandham|8 years ago|reply
This is fantastic. Fastly is great but hard to use with Varnish VCL, and no other CDN had any real scripting capabilities. The service worker API is also a lot better than serverless cloud functions or lambda@edge.
[+] narsil|8 years ago|reply
Great product announcement post! I especially liked the Q&A section's reasons for not choosing alternatives.

Is the lack of maturity also the reason for not choosing something like vm2 for NodeJS? https://github.com/patriksimek/vm2

[+] kentonv|8 years ago|reply
I actually didn't know about that one.

Looking briefly, it looks like it's based on creating separate contexts, but not separate isolates. Contexts within the same isolate can be reasonably secure (it's how Chrome sandboxes iframes from their parent frames, after all), but they still share a single heap and must run on the same thread. Isolates can run on separate threads. We prefer to give each script its own isolate, so that one script using a lot of CPU does not block other scripts. We also want to be able to kill a script that does, say, "while(true) {}".

So yeah, it looks like a neat library but it probably wouldn't suit our needs.

[+] TheAceOfHearts|8 years ago|reply
This sounds like a really powerful feature. I love how it uses JavaScript, which makes it much more approachable for web developers.

My experience with Service Worker APIs hasn't been very positive, although I don't have any suggestions for ways it could be improved, so I apologize for the non-constructive feedback. Maybe after using it more I'll change my mind. I recognize that everyone involved is likely working hard to provide an API that's capable of handling a wide range of problems, many of which I likely haven't even considered.

Here's a more actionable complaint: fetch doesn't support timeouts or cancellation. I have a hard time understanding how this isn't a problem for more people. Say what you will about XMLHttpRequest, at least it supports these basic features. As an end-user, I always find it absolutely infuriating when things hang forever because a developer forgot to handle failure cases.
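(The usual workaround, absent first-class support, is to race the request against a timer -- a sketch, not part of any Workers API; later standards added AbortController for true cancellation:)

```javascript
// Sketch: reject if the underlying promise doesn't settle in time.
// Note this only abandons the result; the request itself keeps running.
function withTimeout(promise, ms) {
  const timer = new Promise((resolve, reject) =>
    setTimeout(() => reject(new Error('timed out after ' + ms + ' ms')), ms));
  return Promise.race([promise, timer]);
}

// Usage: withTimeout(fetch('https://example.com/'), 5000)
```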

I'd love it if you published a locally runnable version. Aside from making it easier to configure and experiment, it would give me peace of mind to know that I could continue to use the same configuration if Cloudflare decided to terminate my service.

[+] wcdolphin|8 years ago|reply
Could this be used for HTTP Push across domains? I would love an easy way to take advantage of HTTP Push for assets hosted on S3 without routing requests to my origin server.
[+] kentonv|8 years ago|reply
Yes, your worker can make subrequests to other domains, and serving some assets out of S3 is a use case we specifically want to enable.

I haven't looked into how specifically to expose HTTP Push in the API but that certainly seems like something we should support.

[+] Gys|8 years ago|reply
So in a way this is similar to, for example, AWS Lambda? It can process incoming HTTP requests in many ways? Fascinating idea.

Is there any indication on price level? And what about runtime duration?

[+] kentonv|8 years ago|reply
It's actually somewhat different. AWS Lambda is intended to act as your origin server. Generally your Lambda functions run in a small number of locations, not necessarily close to users.

Cloudflare Workers will run in all of Cloudflare's 117 (and growing) locations. The idea is that you'd put code in a worker if you need the code to run close to the user. You might want that to improve page load speed, or to reduce bandwidth costs (don't have to pay for the long haul), or because you want to augment Cloudflare's feature set, or a number of other reasons. But, generally, you would not host your whole service on this. (Well, you could, but it's not the intent.)

We haven't nailed down pricing yet, but we've worked hard to create the most efficient possible design so that we can make pricing attractive.

[+] throwaway84736|8 years ago|reply
Have you guys considered side channels?

Seems like whenever there's co-execution (VMs, JavaScript, etc) there seem to be side channel leakages.

[+] kentonv|8 years ago|reply
We're certainly aware of them, but haven't spent a lot of time focused on this issue yet. Of course, the issue exists on all forms of shared compute. So if you're going to do crypto, you'd better make it constant-time. Which is... not easy in JavaScript. (But we will provide the WebCrypto API, which might help.)

There is a theoretical solution that we might be able to explore at some point: If compute is deterministic -- that is, always guaranteed to produce the same result given the same input -- then it can't possibly pick up side channels. It's possible to imagine a JavaScript engine that is deterministic. The fact that JavaScript is single-threaded helps here. In concrete terms, this would mean hooking Date.now() so that it stays constant during continuous execution, only progressing between events.

That said, this is just a theory and there would be lots of details to work out.
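The Date.now() idea can be sketched in plain JavaScript (purely illustrative, not an actual implementation):

```javascript
// Sketch: freeze Date.now() during continuous execution so script code
// can't observe fine-grained timing; advance the clock only between events.
const realNow = Date.now.bind(Date);
let frozen = realNow();

Date.now = () => frozen;                        // constant during a task
function advanceClock() { frozen = realNow(); } // called between events
```

Any two reads of Date.now() within one event handler then return the same value, denying the script a timer with which to measure side channels.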

[+] polskibus|8 years ago|reply
I read the article, but I'm not sure what this technology really enables. Can it be thought of as a new player in the lambda/serverless category? Does it have any advantages over other serverless stacks like AWS Lambda or Azure?
[+] poorman|8 years ago|reply
Seems like this plus the Cloudflare Apps could yield some interesting projects.
[+] dknecht|8 years ago|reply
Definitely. We plan on exposing the workers to app developers later this year.
[+] skrebbel|8 years ago|reply
Sounds like this could be a pretty trivial way to load balance React prerendering. As long as the React code fetches all data in one call, it should be at least as efficient as doing it all in Node.js on your server.
[+] peterwwillis|8 years ago|reply

  - Is it "Cloudflare Workers" or "Cloudflare Service Workers"?
      A "Cloudflare Worker" is JavaScript you write that runs on Cloudflare's edge.
      A "Cloudflare Service Worker" is specifically a worker which handles
      HTTP traffic and is written against the Service Worker API.
Confusing naming convention. Now you have to say 'worker worker' or 'non-service worker' so nobody has to wonder if you meant 'service worker' when you only said 'worker'.
[+] kentonv|8 years ago|reply
Not really, because once there are workers other than Service Workers, they'll have their own names. To be clear, a Service Worker is one kind of Worker. At the moment it's the only kind, but we could introduce others in the future. For instance, maybe we'd introduce a "DNS Worker" that responds to DNS requests.

Also note we didn't invent these terms. "Workers" and "Service Workers" are W3C standards.

[+] forcer|8 years ago|reply
Could this be used to make Cloudflare respond to HTTP POST requests?
[+] kentonv|8 years ago|reply
Yep.

    if (request.method === "POST") { ... }
[+] drdaeman|8 years ago|reply
Oh my. Is there any form of persistence, present or planned?

If there is - this means that there (eventually) will be a way to have logs from the edge servers. I'm just thinking about a worker that would collect the requests and responses data in some circular buffer, and try to push it to the origin server. Eventually, the data will get through, so no CDN-returned 52x ("web server is not responding" etc) errors would go unnoticed.
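The buffer half of that idea works even without edge persistence -- a sketch of an in-memory ring buffer (capacity and record shape are made up):

```javascript
// Sketch: fixed-capacity buffer of recent request/response records.
// Once full, the oldest entry is overwritten by each new push.
class RingBuffer {
  constructor(capacity) {
    this.capacity = capacity;
    this.items = [];
    this.start = 0; // index of the oldest entry
  }
  push(item) {
    if (this.items.length < this.capacity) {
      this.items.push(item);
    } else {
      this.items[this.start] = item;            // overwrite oldest
      this.start = (this.start + 1) % this.capacity;
    }
  }
  toArray() { // oldest to newest
    return this.items.slice(this.start).concat(this.items.slice(0, this.start));
  }
}
```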

[+] kentonv|8 years ago|reply
We won't offer writable storage at the edge in v1, but you could always make a background request to push these events back to some log service you maintain (on separate infrastructure from your main service to make simultaneous outages unlikely). Note that you can make background requests that happen after the main response has already returned (but still subject to the per-request CPU limits, etc.).

We're thinking about how to do writable storage, but it's tricky, since one thing we don't want is for developers to have to think about the fact that their code is running in hundreds of locations.

[+] renke1|8 years ago|reply
So, can I use this to do this:

Render your SPA (a different index.html) when a login cookie is set, and otherwise render your landing page (yet another index.html)? Such that my http://example.com can always be cached (unless it needs to hit the server, where the same logic is implemented).

And in general, how do you manage your landing page vs. your SPA?

[+] scottmf|8 years ago|reply
Maybe I’m not quite understanding what you’re asking, but I think you can do this with client side service workers already.
[+] kentonv|8 years ago|reply
Yep. At the origin you might serve the SPA as /spa.html and the worker rewrites the URL for people hitting /.
[+] mxuribe|8 years ago|reply
This is really interesting; kudos to Cloudflare for launching what seems like a cool thing!

Also, agree with other commentators here; nicely-written blog post!

[+] tyingq|8 years ago|reply
Very nice. Are there plans to expand on it? For example, some way to allow state to be kept at the edge as well.
[+] jgrahamc|8 years ago|reply
Yes. We plan to. We are releasing this to find out precisely what people need.