Just the other day I was revisiting the notion that using SRV records in DNS for web traffic would be absolutely ideal for small to medium sites. Unfortunately, the browsers (which have long-standing bug reports filed against them for exactly this) refuse to implement it. Instead we have these CDNs giving us the ever-valued low latency, rather than giving the next layer of infrastructure down (you and me on our VPSes) the ability to do it without needing BGP AS agreements, fixed IPs, and the like.
Now, the reason the browsers don't implement them is that the RFC which introduced SRV records suggested that pre-existing protocols shouldn't adopt SRV...
Now the most damning thing about this is that Chrome [1] won't do it because Firefox didn't do it [0] (see comment 2 on the Chrome/WebKit bug). Both bug threads are excellent reads, with lots of coherent thought... and yet they were simply rejected.
And finally, there's this page: https://jdebp.eu/FGA/dns-srv-record-use-by-clients.html, which has a section:
> The SRV Lookup Laggards' Hall of Shame
in which Mozilla sits at #1.
So, yes, I'm concerned. Yes, there are alternate ways so that not everyone needs to use a CDN. And yes, it's the browsers themselves that have been stopping us... for 19 years.
[0] https://bugzilla.mozilla.org/show_bug.cgi?id=14328 [RESOLVED WONTFIX]
[1] https://bugs.chromium.org/p/chromium/issues/detail?id=22423 [RESOLVED INVALID]
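For context, RFC 2782 (the SRV spec) tells clients to pick the lowest-priority group of targets and then choose within that group by weighted random selection — exactly the client-side load balancing the comment wants browsers to do. A minimal sketch of that selection logic in Python; the records shown are hypothetical:

```python
import random

# Hypothetical SRV records for _http._tcp.example.com, as
# (priority, weight, port, target) tuples.
RECORDS = [
    (10, 60, 443, "cdn-a.example.net."),
    (10, 40, 443, "cdn-b.example.net."),
    (20, 0, 443, "backup.example.net."),  # only used if priority 10 is unavailable
]

def pick_srv_target(records):
    """Pick one record per RFC 2782: lowest priority group, then weighted random."""
    if not records:
        raise ValueError("no SRV records")
    lowest = min(r[0] for r in records)
    group = [r for r in records if r[0] == lowest]
    total = sum(r[1] for r in group)
    if total == 0:  # all weights zero: pick uniformly at random
        return random.choice(group)
    roll = random.uniform(0, total)
    running = 0
    for rec in group:
        running += rec[1]
        if roll <= running:
            return rec
    return group[-1]  # guard against floating-point edge cases

priority, weight, port, target = pick_srv_target(RECORDS)
```

With records like these, a client would split traffic roughly 60/40 between the two CDN nodes and only fall back to `backup.example.net.` if the priority-10 group disappeared.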
That worries me the same way the consolidation of email services does, even though I don't find CloudFlare themselves as scary as I find some of the email providers. We've seen monocultures before. They can be really unhealthy for the ecosystem.
Like, I understand the argument, I'm just not sure why we get angsty about one type of market dominance and not another.
If stuff like Face- and TouchID are really as secure as they claim, biometrics (and being able to afford an iPhone) could prove you are human and compete with elaborate DDoS protection.
CDN and WAF are really competitive businesses though.
OT, but my clients are small players (tens of GB of traffic a month) who value availability and low latency for their dynamic content. Fastly (and Varnish) are ideal for this, but every time I talk to Fastly about services beyond the PAYG CDN it's "min $2k/month", "$5k/month + fees".
I'd rather use them than Cloudflare (I've been accused of being a walking advert for Fastly), but they are pushing me toward other providers and proving actively hostile to small clients.
Good for them if they are raking it in from enterprise customers. But AWS manages to cater for both ends of the market, and it's a shame to see good services price themselves into "enterprise-only" territory.
In general, what you can expect to see when visiting a Cloudflare site via Tor is a CAPTCHA challenge to show you're not a bot.
As to whether we can enable the `Onion Routing` setting, which would cause your browser to connect to Cloudflare via Tor directly rather than through an exit node: I don't know yet. There might be compliance issues that prevent us from enabling it.
I have, however, created an issue for further investigation into whether that might be possible: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues...
What are the privacy implications of this? Will Cloudflare be able to see GitLab traffic? Was/is Fastly able to before? And is there any reason to trust one over the other?
I trust Cloudflare a whole lot more than Gitlab. CF makes all the right noises around privacy for their 1.1.1.1 service, for example.
Gitlab is still the team that didn't notice that three of their three independent backup mechanisms had not been working for weeks to months. And while wholly cloning Github is legal, I started having serious doubts about their ethical instincts when they repeatedly tried to incite people against Github for cloning minor features.
Beyond that: it depends. I tend to believe large vendors are worse for the privacy of generic consumers, because they have the scale at which it pays off to sell data or do "big data" analysis. They are better for individual high-value targets, because they have more to lose and less to gain relative to whatever they could expect from screwing you over. Protection against rogue employees is also something that happens in larger teams, if at all.
> As a result of the complexity and requirements, we realized we would like to have a solution for CDN, WAF, and DDOS protection with one vendor. [emphasis mine]
Cloudflare will see All The Things™, since TLS termination at Cloudflare is required for it to analyze web traffic and block common attacks. Just based on that one sentence, it seems like a value play more than anything.
That they care enough about defense-in-depth is probably a good sign, but it also shows a bias in favor of security over privacy. Depending on your personal or corporate privacy preferences/politics, that may or may not be desirable, but it's also for you to decide whether any resulting risks are acceptable.
My hope is that Gitlab pressed Cloudflare on the process changes that took place after the WAF outage last year.
---
(this comment has been heavily edited since it was originally published)
Here are my answers to those questions: (Note: I am a GitLab employee, so they are probably a bit biased)
- What are the privacy implications of this?
Theoretically, any third party in the line of traffic (in this case, Cloudflare) has the ability to monitor and alter that traffic. This is especially true when said third party inspects TLS traffic. The privacy implication is that they can see and alter any traffic between the client and the origin. In the case of Cloudflare and GitLab, though, that is desired: we want to take advantage of Cloudflare's Web Application Firewall to further protect us from 0-days and other threats, where mitigation using traditional methods might involve a delay that could lead to exploitation and greater harm.
My personal opinion (it does not necessarily reflect the view of GitLab) is that Cloudflare is a vendor I trust to adhere to what they say regarding privacy. While they, like any other vendor, could be breached, I consider the benefit of their security solution greater than the breach risk.
- Will Cloudflare be able to see GitLab traffic?
Yes, partially. SSH is not inspected by CF, despite being proxied by them. HTTP can be inspected by anybody on the net, and thus by CF too. HTTPS will be terminated by Cloudflare (and re-encrypted when talking to our origin) for a few reasons:
- CDN: They need to know which resource you request in order to serve it from a cache
- WAF: Once enabled (not from the start) it will scan requests for malicious content and either block or challenge the request
- Workers: a highly integrated FaaS platform we can use to dynamically authenticate requests to cached resources, and to run other logic at the edge of Cloudflare's network.
Despite all that, Cloudflare does not log raw requests. On our end, we only see information about where the request came from, user agents, etc. You can find out more about what Cloudflare logs here: https://blog.cloudflare.com/what-cloudflare-logs/
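To make the Workers point above concrete: one common edge-authentication pattern is an HMAC-signed URL, where the origin signs a path plus expiry and the edge function verifies the signature before serving the cached resource. Workers themselves are written in JavaScript; the Python below is only a language-neutral sketch of the check, and the secret and helper names are hypothetical:

```python
import hashlib
import hmac

SECRET = b"hypothetical-shared-secret"  # known only to the origin and the edge

def sign_path(path: str, expires: int) -> str:
    """Origin side: attach an expiry timestamp and an HMAC tag to a path."""
    tag = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={tag}"

def verify_request(path: str, expires: int, sig: str, now: int) -> bool:
    """Edge side: serve from cache only if the link is fresh and untampered."""
    if now > expires:
        return False
    expected = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)  # constant-time comparison
```

The edge never needs to call back to the origin per request, which is the point of doing the check in a Worker rather than behind the cache.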
- Was/is Fastly able to before?
Fastly is in the same position as Cloudflare will be in the future. Right now https://about.gitlab.com is served via Fastly, pointing to an origin we control. If Fastly were breached, the attacker could control the contents of about.gitlab.com. Static assets are also hosted via Fastly and could likewise be altered by an attacker.
- And is there any reason to trust one over the other?
The decision within GitLab was made purely on technical grounds. Cloudflare was the only solution meeting our criteria to keep serving traffic via SSH on the same host as we do right now.
Speaking for myself again, I trust Cloudflare and have personally been a customer of theirs for a long time (2012-ish). While I do not have that kind of relationship with Fastly, I believe both vendors are trustworthy in doing their best to protect the privacy of their users.
You can find more information about how we are going to deploy Cloudflare here: https://gitlab.com/gitlab-com/gl-infra/readiness/tree/master...
> 1. Once traffic is ensured to flow through Cloudflare, we initiate decommission of Route53.
> We would disable the transfer lock and generate an auth code.
> immediately after, we move the domain over to the Cloudflare registry
I love the overall plan but this part would worry me... The most obvious contingency plan when you're putting so many eggs into one basket is to keep a kill-switch somewhere like your domain registrar, where you can abandon ship entirely by switching nameservers in the worst of scenarios, right?
Correct me if I'm wrong but when the Cloudflare dashboard went down a few weeks (months?) ago, no sort of DNS-level changes would have been possible. (Either way, I don't think you can even set external nameservers for domains on Cloudflare Registrar yet?)
Just curious about the thinking behind this particular move and why the pros outweigh the cons of leaving the domain where it is.
Oh - we're back to talking about trust & Cloudflare again.
Cloudflare have always hosted [*] booter / stresser / DDoS-for-hire sites. Right now I can find 8 sites on their network when I search for "booter" (it's been higher). So Cloudflare are responsible for keeping criminal activity online, and for shielding it from the view of its downstream hosts, who might otherwise cut it off.
Cloudflare also sell DDoS protection from the same criminals. Because they host them, they can see where attacks will happen next - an advantage other big CDNs won't tolerate.
That adds up to a protection racket - in the 90s this type of shady business wouldn't have been able to source connectivity. But now they're the good guys :( I don't get it.
[*] Cloudflare have tended to cry "we don't host anything!" But if they stopped providing service, these sites probably couldn't exist. I call that mission-critical hosting.
> As a result of the complexity and requirements, we realized we would like to have a solution for CDN, WAF, and DDOS protection with one vendor.
Depending on your definition of DDoS protection, Fastly has those things too, along with granular, semi-scriptable configurability via VCL. Granted, it's been 2-3 years since I last compared Fastly and Cloudflare, but I can't understand why a company with a presumed need for configurability at the edge would make this choice. Cloudflare's entry tiers are unbeatable on price, but beyond that it's extremely bulky.
One of the key reasons we decided to utilize Cloudflare as our vendor for WAF and CDN was that they uniquely offer running SSH (or any TCP application, really) and HTTP(S) on the same hostname.
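This works because the hostname resolves to the edge network for every port: 443 terminates at the HTTP(S) reverse proxy, while port 22 can be forwarded as an opaque byte stream to the origin's SSH daemon — which is also why proxied SSH isn't inspected. A toy sketch of such an opaque TCP forwarder (this is the general technique, not Cloudflare's actual implementation, and all hosts/ports are hypothetical):

```python
import asyncio

async def pump(reader, writer):
    """Copy bytes one way until EOF. The payload is never inspected,
    which is exactly why an opaque TCP proxy can't see inside SSH."""
    try:
        while data := await reader.read(4096):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(client_r, client_w, origin_host, origin_port):
    # One upstream connection per client; shuttle bytes in both directions.
    origin_r, origin_w = await asyncio.open_connection(origin_host, origin_port)
    await asyncio.gather(pump(client_r, origin_w), pump(origin_r, client_w))

async def start_forwarder(listen_port, origin_host, origin_port):
    """Listen locally and forward every connection, unmodified, to the origin."""
    return await asyncio.start_server(
        lambda r, w: handle_client(r, w, origin_host, origin_port),
        "127.0.0.1", listen_port)
```

The HTTPS side, by contrast, must terminate TLS to do caching and WAF inspection, which is the trade-off discussed throughout this thread.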