I filed this bug (along with http://crbug.com/458362), and there's a bit of context that may be missed. First, this is to protect weak devices and services on private networks or localhost, that really can't be safely exposed to attacks pivoting from the web. It will block only sub-resource loads and navigation from the web to private networks. Navigating directly (via bookmarks or omnibox) or in the other direction (from private to web) is still fine. Essentially, this is applying similar rules to what's already in place for file URLs.
We also appreciate the concerns for legacy apps, local servers, etc. That's why we plan on supporting blocking UI overrides, opt-in by the page on localhost/private (maybe via CORS headers), and permanent exceptions via the command-line and enterprise policy. So, there might need to be some small tweaks, but all the use cases I've seen described in this thread should still work in a portable manner after these changes are made.
Also, to my knowledge no one is actively working on these changes yet, which is why the details in the bugs are still a bit nebulous. That's likely to change pretty soon, though.
They're not blocking all access, just sub-resources from Internet pages (you can still develop/use local servers). And they're adding a command line switch and enterprise policy to allow localhost.
Also, this is the preferred way to communicate with localhost: https://developer.chrome.com/extensions/messaging and https://developer.chrome.com/extensions/nativeMessaging
Sub-resources include websocket connections. The link you point to is not a standard and will not work cross-browser. You will also need to package your app using Chrome's format, AFAIK. Basically, it's not really an option at this point in time.
Yep, this is a good thing. Apart from dev work, I like the pattern of local web UIs for installed apps; I wish they were more commonplace, without risking the user's security.
Sorry, but I didn't find "sub-resources" in my Webster's. Could we please have a definition or a link to a definition?
In general in documentation for computing, or other technical documentation, or any serious writing on any subject, we should avoid undefined terminology.

Sorry to be critical, but computer security is a serious subject; since all it takes is one little gap to have a terrible computer virus infection, we need to be quite clear right down to the level of each little issue, quite clear and explicit.

So far this year, I've spent over half my time fighting viruses. Bummer.
We'd be blocking sub-resources and navigation from web to private. Of course, direct navigation via a bookmark or the omnibox would still work. And navigation or resource inclusion from private to the web would still work.
As I explained in another comment, we're also working on exception mechanisms (including per-page opt-in, possibly via CORS) to cover the valid use cases.
This is a good thing. I recently found an exploit in a widely distributed bit of software that listened on localhost but had minimal protection against malicious inputs. Anyone who could trigger your browser to make a crafted request to localhost (which is very easy) could make this bit of software download and execute any remote executable.
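The attack pattern described is worth spelling out: a daemon that trusts any connection arriving on the loopback interface is reachable by every web page, because the user's browser itself runs on 127.0.0.1. A minimal sketch of the broken check versus a hardened one (the token and function names are invented for illustration):

```python
# Sketch: why "it only listens on localhost" is not a security boundary.
# A browser-originated request arrives from a loopback address just like
# a request from a legitimate local client does.

SHARED_SECRET = "example-token-not-real"  # provisioned out-of-band by the installer

def naive_is_trusted(source_ip: str) -> bool:
    # Broken check: requests triggered by a malicious web page
    # also come from 127.0.0.1.
    return source_ip == "127.0.0.1"

def hardened_is_trusted(source_ip: str, token: str) -> bool:
    # Loopback alone is not enough; require a secret a web page cannot know.
    return source_ip == "127.0.0.1" and token == SHARED_SECRET

# A crafted cross-site request: loopback source, but no secret.
assert naive_is_trusted("127.0.0.1")               # attack succeeds
assert not hardened_is_trusted("127.0.0.1", "")    # attack blocked
```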
Yes it is, but if you read more closely, you will see that they are also planning to block access to secure websocket servers with public DNS resolving to 127.0.0.1 and valid SSL certificates, set up precisely for the purpose of being accessed from the public web. This is completely unnecessary.
I absolutely agree that an exception for legitimate use cases is necessary. (And that a proprietary Chrome API - which in addition is only available to approved extensions - is not an option.)
However, I don't see what security benefits TLS or DNS records would bring in this case and why they should be requirements for the exception.
If your goal is to protect local daemons from malicious web sites, then I don't see why the usual CORS restrictions aren't enough. Maybe they aren't (You could tighten them a bit, e.g. not allowing "Allow: *" responses, treating all requests as CORS non-simple or even only allowing WebSocket connections if you must), but forcing TLS or a DNS entry doesn't seem to increase the security: If I were an attacker, there is nothing that keeps me from trying wss://localhost or wss://www.dropboxlocalhost.com in addition to plain localhost. So what security benefit would that bring?
If you want to protect a legitimate web page from malicious daemons, those measures won't help you either: I can simply download your legitimate daemon, extract the TLS private key and generate a valid certificate for www.dropboxlocalhost.com myself - or I just install a custom root CA certificate and use my own private key.
But I don't think defending against the latter scenario is very useful anyway: Either the "malicious" daemon has been purposefully installed by the user, in which case blocking it would be against user interests, or it is part of a malware. In that case, the malware will have dozens of other ways to compromise your web page and you will generally have bigger things to worry about.
So, why can't we simply use CORS to protect connections to localhost like we do for everything else?
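The CORS-based protection suggested here can be sketched as an origin allow-list enforced by the daemon itself; the port is irrelevant to the logic, and the allowed origin below is just an example:

```python
# Sketch: a localhost daemon deciding its CORS response from an origin
# allow-list, instead of the browser blocking the request outright.
ALLOWED_ORIGINS = {"https://www.dropbox.com"}  # hypothetical allow-list

def cors_response_headers(origin: str) -> dict:
    """Return CORS headers for an allowed origin, or {} to deny.

    Per the discussion above, never answer with "Allow: *" - a wildcard
    would let any site on the web talk to the daemon.
    """
    if origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": origin,
            "Vary": "Origin",
        }
    return {}  # no CORS headers: the browser withholds the response from the page

assert cors_response_headers("https://evil.example") == {}
assert (cors_response_headers("https://www.dropbox.com")
        ["Access-Control-Allow-Origin"] == "https://www.dropbox.com")
```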
The reason that web apps such as Dropbox currently go to all the effort of a public DNS record (www.dropboxlocalhost.com) resolving to 127.0.0.1, plus an SSL certificate for that domain, plus mutual authentication after the websocket is established, is that it's currently the only pragmatic way to set up a non-"mixed content" connection to a localhost websocket server. It doesn't increase security, as you point out. It's just the only way to do it.
Any other connection would be blocked by browsers (e.g. a connection from https web app to a non-tls ws localhost server). The bug ticket referred to would seek to block this last remaining way of connecting, forcing localhost daemons to connect to the web app via server proxy.
This includes blocking access by web apps such as Dropbox to their local daemon, which currently rely on a public DNS record resolving to 127.0.0.1 and a secure localhost websocket server (using a certificate for the public DNS record's FQDN) to make the connection (with additional mutual authentication after the websocket handshake).
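The mixed-content constraint driving that workaround can be stated mechanically: a secure (https) page may only open secure (wss) websockets, so a plain ws://localhost socket is off-limits to it. A rough sketch of the rule as it applied at the time:

```python
# Sketch of the browser's mixed-content rule as it applies to websockets:
# an https page may not open an insecure ws:// connection.
def blocked_as_mixed_content(page_scheme: str, socket_scheme: str) -> bool:
    return page_scheme == "https" and socket_scheme == "ws"

# An https web app cannot use a plain ws:// localhost server...
assert blocked_as_mixed_content("https", "ws")
# ...which is why Dropbox-style daemons serve wss with a certificate for a
# public FQDN whose DNS A record resolves to 127.0.0.1.
assert not blocked_as_mixed_content("https", "wss")
# Plain http pages were never subject to the restriction.
assert not blocked_as_mixed_content("http", "ws")
```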
After this change, there will be no way for a web app to communicate directly with a locally installed daemon.
I mentioned this above, but we plan to let the localhost/private page support an explicit opt-in, maybe via CORS headers. Those details may not be captured in the bugs yet because currently no one is actively working on this.
As a normal user, i.e. someone who doesn't really keep up to speed with whatever is the latest news in cross-platform-XRGS-injection-attacks, is this good or bad?
To add an example of that: Easel, the hipster computer-aided manufacturing software, is hosted on the web, and it assumes that your CNC router is on the local network to receive the instructions in an HTTP POST request.
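To make that pattern concrete, the hosted web app only has to build an ordinary HTTP POST aimed at a LAN address; the host, endpoint, and payload format below are invented for the sketch:

```python
# Sketch: a hosted CAM tool handing a toolpath to a CNC controller on the
# local network. Host, port, endpoint, and payload shape are hypothetical.
import json

def build_cnc_request(host: str, gcode: list) -> tuple:
    """Return (url, body) for a POST carrying the G-code toolpath."""
    url = "http://{}/job".format(host)  # machine on the local network
    body = json.dumps({"gcode": gcode}).encode("utf-8")
    return url, body

url, body = build_cnc_request("192.168.1.50:8080", ["G21", "G0 X0 Y0"])
assert url == "http://192.168.1.50:8080/job"
assert json.loads(body)["gcode"][0] == "G21"
```

Under the proposed change, a request like this from a public web origin to a private address would be blocked unless one of the planned exception mechanisms applies.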
I can see not allowing connections to be made to any host on the local network. That's fine. But blocking access to the loopback interface is IMHO going a bit too far, as this was until now a very portable and secure way for websites to talk to locally installed helper applications.
Dropbox does this, Github for Mac does this (and incidentally, my own home-grown solution for reading barcode scanners does this too).
I really don't want to end up in a world where we have to write browser-specific solutions (the method recommended in the issue is Chrome-specific) for these kinds of things.
FWIW, the title is beyond sensationalist, and outright wrong. The title on the link is "Block sub-resource loads from the web to private networks and localhost".
You want _your_ browser to connect to _your_ local resources?
BITCH PLEASE! What's next? Maybe you want non-gimped SD card access for your Android apps? HAHA.
We have ze Cloud specifically so you are forced to use us as the middle man, and we can see all of your data. How else are we going to make any money off of you?
PS: Oh, and it's about security in journa^^ of your data!
From the description, it only means that external pages won't be able to access resources on localhost, but you could still develop on localhost as usual.
"In conjunction with the change, we will need a command line switch and enterprise policy to revert to the older behavior for testing and legacy applications."
If you rely on connecting your web app with a locally installed daemon (properly secured using wss and additional mutual authentication) then that's not really an option either, as you can't expect your users to know how to use command line switches. Cutting off access to wss is a big deal.
pfraze | 11 years ago:
By that do you mean an in-browser prompt to allow the connection?
_urga | 11 years ago:
But it's essential that access to secure websockets on localhost not be blocked in the process (which appears to be the case as it stands). That's the reason for posting here in the first place.
diminoten | 11 years ago:
Well, you could just not use Chrome.
andor | 11 years ago:
http://code.metager.de/source/xref/chromium/net/base/net_uti...