Fuck this. It increases the complexity of browsers and servers and erects yet another burden for makers. And for what? So orgs with broken infrastructure (and budgets for running it) and device manufacturers (with budgets of their own) can keep up their garbage practices instead of fixing them? And so they can push their costs onto a deep-pocketed company like Google that's willing to subsidize them? Here's hoping that WebKit and Mozilla say "no."
Huh? I think you may have a very incorrect idea about who is being burdened here.
A major effect of these changes is to make it simpler for anyone ("makers" included) to securely implement devices which expose web services, by making web browsers refuse to allow external web sites to send arbitrary requests to those devices.
Most network devices have no need to receive such requests. No action is necessary on their part; these changes will make those devices stop receiving those requests.
The few network devices that do need to receive those requests can opt in to receiving them by responding to a CORS probe (an OPTIONS HTTP request) with a specific HTTP header. This is not complicated to implement.
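As a rough sketch of that opt-in, here is what a device-side preflight handler might look like. This assumes the header names from Chrome's Private Network Access draft (`Access-Control-Request-Private-Network` on the probe, `Access-Control-Allow-Private-Network` on the response), and the allowed origin `https://controller.example` is purely hypothetical:

```python
# Minimal sketch (not a full server) of the decision a device makes when it
# receives a CORS preflight (OPTIONS) probe. Header names follow Chrome's
# Private Network Access draft; the allowed origin is a made-up example.

def preflight_response(request_headers):
    """Return the response headers for a preflight request, or {} to refuse."""
    headers = {}
    origin = request_headers.get("Origin")
    # Only opt in for origins the device actually wants to allow.
    if origin == "https://controller.example":  # hypothetical allowed origin
        headers["Access-Control-Allow-Origin"] = origin
        headers["Access-Control-Allow-Methods"] = "GET, POST, OPTIONS"
        if request_headers.get("Access-Control-Request-Private-Network") == "true":
            # The explicit opt-in: without this header, the browser blocks
            # the cross-network request.
            headers["Access-Control-Allow-Private-Network"] = "true"
    return headers
```

A device that never sets the opt-in header simply stops receiving cross-network requests, which is the safe default the parent comment describes.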
Web browsers already implement complex CORS policies. Adding this is not a huge burden upon them, and may actually obviate the need for other more complex defenses.
It's probably wrong to consider any 'private' network to be secure anymore. With zero-trust principles in mind, you don't assume every device or system on your network is trustworthy; instead, you ensure that anything able to send and receive traffic on your network has other mechanisms in place for trust and authenticity.
Requiring HTTPS for services on the private network seems rather extreme. How do you even do HTTPS on a private network not attached to some publicly-resolvable external domain without installing the root certificate on all devices?
If you are running a private intranet type thing, installing your own root certs seems fairly reasonable to me.
You could also just have the DNS be public. There's no reason why private networks can't be in public DNS. If you're really paranoid, use wildcard certs and only put the top-level domain in public DNS.
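For illustration, a public zone can carry records that resolve to RFC 1918 addresses; the domain, hostnames, and addresses below are all made up. The wildcard-cert suggestion helps because a single `*.internal.example.com` certificate keeps individual internal hostnames out of public Certificate Transparency logs:

```zone
; Hypothetical public zone: only the apex points at a public address,
; while internal hosts resolve to private (RFC 1918) addresses.
example.com.                 IN A  203.0.113.10
router.internal.example.com. IN A  192.168.1.1
nas.internal.example.com.    IN A  192.168.1.2
```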
The proposal does not attempt to force private network resources to use TLS. That would be an excellent outcome, but it's difficult to do in the status quo, and is a separate problem to address separately.
The proposal _does_ require pages that wish to request resources across a network boundary to be delivered securely, which therefore requires resources that wish to be accessible across network boundaries to be served securely (as they'd otherwise be blocked as mixed content). This places the burden upon those resources which wish to be included externally, which seems like the right place for it to land.
I just read the proposal. I do not see where it says that internal services will require https.
My reading is that public websites making an ajax request to http://10.0.0.12 will need to be https. I'm not sure how that protects against anything, but it also doesn't affect the internal services themselves.
Just because your domain is public doesn't mean your DNS is public as well. You can use a cert signed by a public CA in a private network just fine so long as you're using the right DNS setup.
Zero-trust architecture would note that there's not really anything 'private', once the traffic is 'inside'. The old way of thinking with firewalls and DMZs falls apart now, so you have to treat all traffic, even what you think of as "inside" as potentially hostile or disruptive. Thus, TLS everywhere.
How to do it? Run your own CA: self-sign a root certificate, then distribute and install it across the devices that are authorized to be on your network.
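A minimal sketch of that with plain `openssl`; the filenames and the hostname `router.internal` are illustrative, and a real deployment would add subjectAltName extensions and tighter key handling:

```shell
set -e

# 1. Create the CA key and self-signed root certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
    -days 3650 -subj "/CN=Example Internal CA"

# 2. Create a key and certificate signing request for the internal device.
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
    -subj "/CN=router.internal"

# 3. Sign the device certificate with the CA.
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out server.crt -days 825

# 4. Verify the chain. ca.crt is what you'd install on client devices.
openssl verify -CAfile ca.crt server.crt
```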
I can't fathom why anyone would be up in arms against preventing random websites from accessing servers running on localhost or inside home networks. This is not about "web bloat," as some people are suggesting, but rather closing a gaping security hole that should never have existed from the start.
I don't get it. What's wrong with normal CORS? It does work for private networks. Also, I don't think browsers are the right place to do this kind of protection.
> status quo CORS protections don’t protect against the kinds of attacks discussed here as they rely only on CORS-safelisted methods and CORS-safelisted request-headers. No preflight is triggered, and the attacker doesn’t actually care about reading the response, as the request itself is the CSRF attack.
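To make the quoted point concrete, here is a simplified sketch of the browser-side rule: a cross-origin request is "simple" and is sent with no preflight when its method and headers are all CORS-safelisted. This is condensed from the Fetch spec; the real rules also restrict header values and lengths:

```python
# Simplified model of the Fetch spec's "simple request" rule. A request that
# passes these checks is sent immediately, with no preflight -- which is why
# a form-encoded POST to a router's admin endpoint can be a CSRF attack even
# though the attacker never gets to read the response.
SAFELISTED_METHODS = {"GET", "HEAD", "POST"}
SAFELISTED_HEADERS = {"accept", "accept-language", "content-language", "content-type"}
SAFELISTED_CONTENT_TYPES = {
    "application/x-www-form-urlencoded", "multipart/form-data", "text/plain",
}

def needs_preflight(method, headers):
    if method not in SAFELISTED_METHODS:
        return True
    for name, value in headers.items():
        if name.lower() not in SAFELISTED_HEADERS:
            return True
        if name.lower() == "content-type" and value.split(";")[0] not in SAFELISTED_CONTENT_TYPES:
            return True
    return False
```

So `needs_preflight("POST", {"Content-Type": "application/x-www-form-urlencoded"})` is false: the request reaches the device before CORS ever gets a say.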
The example feedback goes a bit in that direction, but how would that interact with mixed content?
Say I have a server running on 192.168.1.1. The box is only accessible through its IP address, so I can't get a public certificate for it and therefore can't enable HTTPS.
I can't access the box from an HTTP site due to the new restriction.
I can't access the box from an HTTPS site due to mixed content.
So I can't access the box at all anymore?
My understanding is that you can access it directly. But you can't embed e.g. images or JavaScript from that server within a website running on a public IP address. I consider that a good thing.
Correct. In the status quo, you will be best-served by looking at solutions similar to what Plex is shipping (https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...). ACME's DNS-based challenges might even make this easier today than it was when that mechanism was designed.
Longer-term, it seems clear that it would be valuable to come up with ways in which we can teach browsers how to trust devices on a local network. Thus far, this hasn't been a heavy area of investment. I can imagine it becoming more important if we're able to ship restrictions like the ones described here, as they do make the capability to authenticate and encrypt communication channels to local devices more important.
This change can't come soon enough. My only objection is that the rollout is way too slow, and the first step is only a baby step. What we have now is essentially a 0day situation.
Unfortunately a lot of commenters completely misunderstand the proposal, as well as the attacks it is designed to mitigate. Here's a good paper describing the issue, "Attacking the internal network from the public Internet using a browser as a proxy":
https://www.forcepoint.com/sites/default/files/resources/fil...
The "HTTPS" part of Google's proposal is a bit of a red herring, as I said, only a baby step. It does not mean that your local services have to be https. It just means that public http sites can't make requests to local IP addresses.
> The "HTTPS" part of Google's proposal is a bit of a red herring, as I said, only a baby step.
It's in there for the same reason that risky APIs like geolocation require HTTPS -- so that injecting content into an insecure HTTP transaction cannot allow an attacker to hijack a trusted HTTP site's permissions.
I am thinking of use cases, and I think one would be folding@home. Their web client is hosted on their public website and makes HTTP calls to a background service listening on localhost:port. Is that correct or have I misunderstood?
Also, I don't understand how this works with the IPv6 initiative that wants every device to have a public IP address. The same goes for IPv4 routers - on many of them you could use the public IP to do the drive-by attack.
IPv6 does indeed complicate things. I suspect we'll end up trying a few things before finding the right answer, starting with a) allowing network admins to configure IP ranges that correspond to the network they control, and b) examining the local network to infer a private range.
Happily(?), IPv4 networks are still pervasive, and this proposal seems clearly valuable in those environments.
Assuming I'm running malware.com, I would make 192-168-0-1.router.malware.com resolve to 192.168.0.1 so the origin matches and I can prod the router as much as I'd like without crossing the origin.
The proposal talks about sites resolving to private/local addresses, so presumably, the browser would still apply the checks to all requests to that domain.
The only thing that would not trigger CORS is if you somehow loaded a top-level document from that domain. (The address is in the browser's address bar) - however, a malicious website can't do that as this server is not under their control.
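The check described above can be sketched as follows. This is a rough model, not the proposal's exact algorithm (which defines address spaces in more detail, e.g. link-local ranges): the browser classifies the *resolved address*, so a public hostname pointing at a private IP is still treated as private.

```python
# Sketch: classify a destination by its resolved address, the way the
# proposal does, rather than by hostname. A name like
# 192-168-0-1.router.malware.com still resolves to 192.168.0.1 and is
# therefore still classified as "private".
import ipaddress
import socket

def address_space(host):
    ip = ipaddress.ip_address(socket.gethostbyname(host))  # resolve, then classify
    if ip.is_loopback:
        return "local"
    if ip.is_private:
        return "private"
    return "public"
```

Under this model, `address_space("192.168.0.1")` is `"private"` no matter which hostname an attacker hides it behind, so the restriction still applies.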
nickphx | 5 years ago:
My understanding is that the site making the call to the resource on the 'private' network must be served via HTTPS.
AntonyGarand | 5 years ago:
If you need passive mixed content, I believe it is and will remain supported by browsers for some time.
If you need active mixed content, though, you'll probably need a workaround via passive content instead.
xg15 | 5 years ago:
Does the proposal also apply to private-to-private?
E.g., if a page at 192.168.1.1 fetches a resource from 192.168.1.2, will that also trigger the new rules?
bawolff | 5 years ago:
I guess this still wouldn't protect against attacks on non-HTTP(S) services. So things like https://samy.pl/slipstream/ would still work?