[+] [-] dx034|8 years ago|reply
That's a bold claim. I'd expect Akamai, Google, and Amazon to be peered at as many locations, maybe even more. Or has Cloudflare become that big?
[+] [-] jgrahamc|8 years ago|reply
https://bgp.he.net/report/exchanges#_participants
[+] [-] cavisne|8 years ago|reply
Trade-offs I see:
* Cloudflare is probably slightly closer to customers, but CloudFront is still very close
* Lambda@Edge should be cheaper with Amazon's scale and ecosystem
* CloudFront itself is very expensive compared to Cloudflare; almost every project can integrate Cloudflare and use this, whereas small projects don't really make sense for CloudFront
[+] [-] kentonv|8 years ago|reply
> Lambda@Edge should be cheaper with Amazon's scale and ecosystem

I suspect price will have more to do with the tech's underlying ability to scale to lots of (separately-sandboxed) customers at lots of edge locations. Our tech is pretty different from Amazon's, so it will be interesting to see how that shakes out.
[+] [-] Matheus28|8 years ago|reply
It would be amazing if those workers supported WebSockets. Running game servers in them would be tempting, as long as they don't charge exorbitant prices for bandwidth.
[+] [-] kentonv|8 years ago|reply
WebSocket support is planned (not sure if it will be in v1).
The issue with game servers is that you probably need to make sure all the players in the same game instance hit the same worker. There won't be any way to do that with workers in v1. But this is definitely something we've thought about, and as a big gamer myself I would like to see it happen someday. I have some particular ideas for a different kind of worker (not a Service Worker) that serves this use case. But it's probably a year out.
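A minimal sketch of one way to get that affinity, assuming each player request carries a game-instance ID: rendezvous (highest-random-weight) hashing picks the same backend for a given instance at every edge location. Everything here (`pickBackend`, the backend names) is invented for illustration and is not Cloudflare's API:

```javascript
// Simple deterministic string hash (FNV-1a, 32-bit).
function fnv1a(str) {
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h >>> 0;
}

// Rendezvous hashing: score every backend against the instance ID and
// take the highest score. Every edge location computes the same winner,
// so all players in one game instance land on one backend.
function pickBackend(gameInstanceId, backends) {
  let best = null;
  let bestScore = -1;
  for (const b of backends) {
    const score = fnv1a(gameInstanceId + "|" + b);
    if (score > bestScore) {
      bestScore = score;
      best = b;
    }
  }
  return best;
}
```

One nice property of rendezvous hashing over a plain modulo scheme is that adding or removing a backend only remaps the instances that hashed to it, rather than reshuffling everything.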
[+] [-] Clear12f|8 years ago|reply
[+] [-] kennethh|8 years ago|reply
This is pretty interesting, especially if one gets some kind of local storage where one can cache the data from the users and also keep some extra data one needs to handle the requests.
One can also imagine some kind of MQ solution that distributes to the edges and then updates the clients (for games and such scenarios).
[+] [-] jgrahamc|8 years ago|reply
[+] [-] remline|8 years ago|reply
As an example, I would assume a worker is making requests with the internal view of the site, but cannot have an internal view of other sites, or security problems would ensue. So what happens when two of my sites have service workers fetching something from each other on each request?
[+] [-] kentonv|8 years ago|reply
As you guessed, when your worker makes a subrequest to your own zone, it goes directly to your origin server, but when you subrequest some other domain, it goes in "the front door", and that other domain's scripts apply.
If a request bounces back such that the same worker script would need to run twice as a result of a single original request, then it fails with an error. There's nothing else we can do here: we can't let the request loop, but we also can't let it skip your script after it's bounced through a third-party script.
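The loop guard described above could be sketched roughly like this: tag each request with a trace of the worker scripts it has already passed through, and refuse to run the same script twice. The header name and script IDs are invented for illustration; this is not Cloudflare's actual mechanism:

```javascript
// Hypothetical header listing which worker scripts have already run
// for this request, e.g. "site-a,site-b".
const LOOP_HEADER = "x-worker-trace";

// Returns the updated trace value to forward with the subrequest, or
// null if this script already ran for the request (i.e. the request
// has bounced back and would loop), in which case we fail with an error.
function checkAndMarkLoop(traceValue, scriptId) {
  const seen = traceValue ? traceValue.split(",") : [];
  if (seen.includes(scriptId)) {
    return null; // same script would run twice: reject
  }
  seen.push(scriptId);
  return seen.join(",");
}
```

With two sites fetching from each other, the second site's script appends its ID on the way through, so when the request arrives back at the first script the trace already contains it and the request is rejected instead of looping.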
[+] [-] unknown|8 years ago|reply
[deleted]
[+] [-] Navarr|8 years ago|reply
Very interesting, especially since it allows for a Cloudflare ESI system.
I wonder if many will take advantage of it.
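An ESI-style system on workers might look like this sketch: scan the page body for include tags and splice in fragments fetched per `src`. The tag syntax follows the general ESI `<esi:include>` convention, but the `stitch` function and resolver are invented for illustration:

```javascript
// Match ESI-style include tags such as <esi:include src="/nav"/>.
const ESI_TAG = /<esi:include\s+src="([^"]+)"\s*\/>/g;

// Replace each include tag with the fragment returned by the resolver.
// In a real worker the resolver would be a subrequest; here it's a
// plain function so the sketch stays self-contained.
function stitch(body, resolveFragment) {
  return body.replace(ESI_TAG, (_, src) => resolveFragment(src));
}
```

The appeal of doing this at the edge is that the origin can serve one cacheable shell while per-user or frequently-changing fragments are assembled close to the client.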
[+] [-] _asummers|8 years ago|reply
[+] [-] unknown|8 years ago|reply
[deleted]