I posted about another macOS app called “Dropshare.app” a few days ago [1].
I didn’t bother to write a blog post about it because my English is not good enough.
Basically, anyone who uses this app is vulnerable to cross-infection and data leaks. Assume user John installed this app, and hacker Alice tricked John into visiting her malicious website. The site includes code that sends requests to “http://localhost:34344/upload” to upload malicious files to any of the services John’s computer is connected to via Dropshare; this includes private servers via SSH, Amazon S3, Rackspace Cloud Files, Google Drive, Backblaze B2 Cloud Storage, Microsoft Azure Blob Storage, Dropbox, WeTransfer, and custom Mac network connections. The port number is also static, saving the hacker the need to run a port scanner.
I already contacted Dropshare’s developers to fix the issue, but got no response.
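A minimal sketch of what such a drive-by request could look like from the attacker's page. The endpoint and fixed port come from the report above; the payload shape is an assumption for illustration:

```javascript
// Hypothetical sketch of the drive-by request a malicious page could build.
// Only the fixed port and /upload path are from the report; the JSON payload
// shape is an assumption.
function buildExploitRequest(fileName, contents) {
  return {
    url: "http://localhost:34344/upload", // static port: no scanning needed
    method: "POST",
    body: JSON.stringify({ name: fileName, data: contents }),
  };
}

// A page would fire it with fetch(req.url, { method: req.method, body: req.body }).
// CORS keeps the page from *reading* the response, but the side effect
// (the upload) has already happened by then.
const req = buildExploitRequest("payload.bin", "...");
```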
I found malware in a JavaScript error-notification extension for Chrome in 2015 that affected 86k users, and the developer was mad because apparently he made some money out of it (I was called paranoid).
The malware basically logged the URLs you visited and sold them to a statistics service called Fairshare.
Agreed with the other comments: this comment = blog. It looks like you know what you're talking about; short and to the point.
Good find, and thanks for sharing
off topic comment:
For all: what are people blogging on these days? I'm not too fond of Medium, but I just got into Ghost and am really liking it. It seems like a great way to write short posts / resources. I have it installed on a cheap DO server. Plain text, like lapcat?
> The major browsers I've tested — Safari, Chrome, Firefox — all allow web pages to send requests not only to localhost but also to any IP address on your Local Area Network! Can you believe that? I'm both astonished and horrified.
I'm sorry, but WTF, who is this guy? Every web developer knows this, and every developer should expect it, since it's just an extension of basic networking knowledge applied to web browsers. It's not horrifying, it's basics; a great many things have depended on this fact to function for a very long time.
Yes, functionality can work against you when abused; no, this is not a special case.
This is Hacker News. We all know this. That's not the point. The point is: is it reasonable in 2019 that websites you visit can make requests to devices on your local network?
To be honest I'm not sure. But I sure think it's a relevant discussion to have.
> a great many things have depended on this fact to function for a very long time
Such as? Breaking "website makes a call to local (localhost or RFC1918) web server" would be a feature; any use of this is an abuse. It'd take a transition period and some careful opt-outs, but any kind of call to a local web server should require the same kind of special privileges a browser extension needs, at the very least.
It's a bug. Calling it out is the right thing to do. If you're complaining that the author should have known about it already, then you're just mocking someone for not already knowing a particular fact; that shouldn't stop them from writing up a report on it and trying to get it fixed.
EDIT: please note that I'm talking about calls from Internet origins to localhost/RFC1918 origins here, not calls from one Internet origin to another or one localhost/RFC1918 origin to another.
This complaint is real cute, but the trite answer is that this is how things have worked for a long time. Awareness of it spreads for a while whenever high-profile events receive media and blog coverage, and perhaps the exploitability of this has increased compared to several years ago, when products that opened up various HTTP-accessible servers were less common (or secured by obscurity).
This isn't necessarily an excuse not to explore mitigations through consensus in future browser behavior -- after all, that process of loose but eventual consensus on incremental UX and airquote "security" improvements is how SOP and CORS and CSP came about [1] and how the cookie saga evolves [2][3].
But consider that legitimate uses of cross-domain requests to localhost exist (e.g. an OAuth callback endpoint), while also keeping in mind that users from all walks of life are, perhaps unbeknownst to them, managing LANs of computing devices running dozens of servers, often with modern encryption such that communications between the program and the remote server are becoming harder to intercept and oversee, and that those users lack a comprehensive capability to monitor, analyze, blacklist, whitelist, or snipe traffic in a way that's not cumbersome or borderline user-hostile. Such is the world where we've arrived. Etching away at one or two widely deployed corners of it won't fix the overall landscape, even if it may significantly reduce the chance of "drive-by" exploitation through websites accessed through commonly used browsers.
First, I think this is right and that websites shouldn't be able to hit any localhost or private address spaces.
But this leads to a bigger question: what makes private address space special? Not really all that much. Running an internal network using public addresses isn't super common these days, but it isn't rare either. Does it make any sense that any website on the internet is allowed to hit any other site accessible by your machine that uses a public address? There is definitely a security boundary being crossed here.
Say, for example, I run a web service that's private to my work's office. So I spin up a machine on my VPS account, give it a public address, and lock down the firewall to my office's address range. Someone running Spotify in a browser shouldn't have to worry about a malicious page hitting a potentially sensitive internal service.
Does it make any sense for me to have to establish a VPN connection to my VPS for the sole purpose of giving it a private address so browsers will block it? Ew. I could also configure a CORS policy, but we're talking about a service that used a trick to bypass this protection -- and besides, nobody knows how to set that up right anyway.
By default, browsers do block sites from doing dangerous things to other sites, like sending authenticated API requests; they only let by stuff that is supposed to be harmless, like hotlinking images. And then they have a mechanism called CORS that lets those services say "this particular site can make API requests and such".
The problem is that Zoom, since they didn't understand CORS, and yet did want to allow their site to make API requests, turned what should have been a harmless action (GET an image) into a dangerous one.
Browsers could block everything, but all I think would happen is that Zoom would just find some other silly (and potentially more dangerous) way of doing the same thing, because they want the site to be able to talk to the service.
If you're writing your own service to be used on an internal network, you don't need a VPN or anything. Just don't accept unauthenticated requests that make changes, and ignore CORS.
It's always been the case, back to NCSA Mosaic in 1993, that web pages could hit URLs of local web servers. Before JavaScript, you had to use an embedded image, like:
<img src="http://192.168.1.1/cgi-bin/reboot-router">
And it didn't have to be port 80 -- you could try fuzzing someone's X server with
<img src="http://127.0.0.1:6000/lsjdfjlk23jlrkj">
Fortunately, most protocols bail out on the first 4 bytes "GET ". One of the reasons that Gopher support was phased out was that you could make a gopher request contain more or less arbitrary bytes and attack many local servers.
Servers have always had the burden of defending against this.
I believe some networking equipment lets you go to "www.routercompany.com", which loads up the router's config webpage without your having to remember its LAN IP.
How do you differentiate between a valid and invalid request to localhost / the LAN?
Lots of websites will link to something like `http://localhost:9200` (e.g. Elasticsearch) in the documentation.
So you decide to make it impossible to load that page in the context of a page loaded from a public IP address. Great.
What is stopping them from tricking you into clicking it (or filling out a fake form), which is basically the same thing?
You haven't really solved the problem. You've just made it slightly more difficult.
The solution is:
a) fix your applications so that they do not expose unsafe endpoints that can cause unintended side-effects merely by navigating to them
b) stop using session cookies (at least stop using them alone) to authenticate actions. Use token-based authentication (like CSRF tokens)
Edit: and before you say "check the Referer header!", no, that will not solve the problem. The bad web page can simply omit the referrer with something like `rel="noreferrer"`.
> How do you differentiate between a valid and invalid request to localhost / the LAN?
With the same origin policy? I think the post advocates for something like allowing localhost and private IP addresses only from those very same addresses or from the URL bar. Any other page shouldn't be able to access them.
This will probably break something, but what's the case for a web app to legitimately access localhost? Maybe access to some local service installed by the user and managed "from the cloud".
There are legitimate reasons to open a web server locally. However, the benefits of these restrictions are too great not to consider some sort of protection. Perhaps there could be an authorization request the user could allow (similar to how we got rid of pop-ups), done in the most natural way possible (we don't want to break intranets, for example).
Another security-related bad pattern that annoys me is how some of this authorization stuff steals your focus, making it impossible to ignore (e.g., you cannot move to another tab before deciding whether to allow something).
Another issue is that sometimes it is not completely clear whether something is an element of the website, your browser, or your system. For example, imagine you have to type your user password for a random update to complete while you are browsing some website. Suddenly you see a prompt with your username and a password field matching your system's... You can only know for sure this isn't phishing if you cmd+tab and it is still there. The system should detect that you are on a window showing unsigned/unsafe content and paint something outside of the frame (like coming from the top address bar) so you can easily tell the prompt is legit (because a website shouldn't be able to draw on a portion of your screen outside of its window).
> In my opinion, web pages should not be allowed to make requests to LAN addresses unless the user has specifically and intentionally configured the browser to allow this.
Is there a way to know definitively if an address is "local" rather than "wide"? Should that be more granular, e.g. host, LAN, WAN? How does that work with bridged networking and such?
If I'm already browsing something on the LAN, it seems reasonable to be able to browse other sites on the LAN. But then an overly broad definition of LAN would seem to allow privilege escalation.
If I saw a private IP (192.168, 10, etc) or a .local domain, I'd assume that was a LAN address, but that's a convention and depends very much on routing being set up properly.
> If I saw a private IP (192.168, 10, etc) or a .local domain, I'd assume that was a LAN address, but that's a convention and depends very much on routing being set up properly.
This convention on the address is actually backed by RFCs, e.g. RFC 1918. There is a similar convention for IPv6.
However, blocking traffic to private IPs without careful consideration seems like it could block some legitimate use. So one does have to tread carefully when special-casing those.
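The RFC 1918 check itself is mechanical. A sketch of what a classifier might look like (IPv4 only; the ranges are straight from the RFC, with loopback and link-local added for completeness):

```javascript
// Classify a dotted-quad IPv4 address as RFC 1918 private, loopback,
// link-local, or public.
function classifyIPv4(addr) {
  const parts = addr.split(".").map(Number);
  if (parts.length !== 4 || parts.some(o => !Number.isInteger(o) || o < 0 || o > 255)) {
    throw new Error("not a dotted-quad IPv4 address");
  }
  const [a, b] = parts;
  if (a === 127) return "loopback";                      // 127.0.0.0/8
  if (a === 10) return "private";                        // 10.0.0.0/8
  if (a === 172 && b >= 16 && b <= 31) return "private"; // 172.16.0.0/12
  if (a === 192 && b === 168) return "private";          // 192.168.0.0/16
  if (a === 169 && b === 254) return "link-local";       // 169.254.0.0/16
  return "public";
}
```

As the comment above notes, though, the address alone is only a convention; it says nothing about how routing is actually set up.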
I knew that localhost could be accessed, but the fact that local IP addresses on the LAN can be accessed is actually quite surprising to me. I suppose it makes sense, but it definitely makes me much more concerned with the security of local devices on my home and office networks now.
Are there any best-practices for keeping things locally safe (i.e. LAN devices like printers, testing boxes, tvs etc.) , beyond just treating them the same way you would an external facing machine?
Yes, there are quite a few attacks on default local credentials for home routers because of this. Now if only home routers were better about actually following through on changing credentials..
Yeah, forget about home, this is a nightmare. Who knows how many devices are in a corporate network. Internal networks are usually not as well protected as the perimeter.
Jonathan Leitschuh shared the same complaint in his original writeup of the Zoom zero-day, but also mentioned CORS-RFC1918, a proposal to obtain permission from the user before allowing a public website to access a resource that DNS lookup reveals to be hosted on the private or local address space as defined by RFC1918: https://wicg.github.io/cors-rfc1918/
Wait wait WAIT! It is much more complicated than that.
You can make XHR (aka AJAX) requests only if the CORS policy allows it (concretely, the local web server you are trying to access answers with a specific HTTP header saying "I authorize the website xyz.com to send XHR requests to me via the web browser of xyz.com's client").
Now, for everything outside of XHR (AJAX), you can send different types of requests:
<script src="..."></script>, but this lets you only load JS files.
<img src="..." />, but this lets you only load images; you can't really do much other than try to load images with that.
So if you get into the detail of each "web API" (XHR, <img/>, <script/>, etc.), you will see that you are actually very limited.
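On the server side, the opt-in described above boils down to echoing an allowlisted Origin back. A sketch of what a local server's decision might look like; the allowed origin is made up for illustration:

```javascript
// Decide which CORS headers (if any) a response should carry, based on
// the request's Origin header and a fixed allowlist.
const ALLOWED_ORIGINS = new Set(["https://app.example.com"]); // illustrative

function corsHeadersFor(originHeader) {
  // No Origin header -> same-origin or non-browser client; nothing to add.
  // Unlisted origin -> no header; the browser withholds the response body.
  if (!originHeader || !ALLOWED_ORIGINS.has(originHeader)) return {};
  return {
    "Access-Control-Allow-Origin": originHeader,
    "Vary": "Origin", // caches must not reuse the header across origins
  };
}
```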
I worked in a company that used custom DNS names to identify the environments:
- www.mydomain.com
- stage.mydomain.com
- local.mydomain.com
The last one referred to the version of the app that developers ran on their own machines. So they had a DNS-level entry that sent local.mydomain.com to 127.0.0.1.
This isn't a browser issue at all. I think the security issue is "applications can install local web servers" and "some local web servers are insecure".
We already have same-origin controls in place to prevent a domain from accessing the contents of another browser window or an iframe.
It's not a browser issue. There are plenty of legitimate reasons for wanting a browser to access a local web server. It might not be common, but it's not illegitimate nor a security issue.
Yep, I have a subdomain under a personal domain pointing to a few specific 192.* addresses and localhost. It makes it easy to test HTTPS stuff without having to jump through hoops (and with Let's Encrypt it's free).
> In general, there's no reason why a page on the internet should be allowed to access devices on your local area network. Of course, if the user enters a LAN IP into the browser location bar, this should be allowed, but that's not a cross-origin request.
What's a local area network? 10.x.x.x? That's going to break VPNs and enterprise integrations in a variety of ways. With IPv6 it's even less predictable.
The solution to this problem is CORS — accessing LAN servers, or any cross-origin destination, requires affirmative consent from the LAN server in the form of the Access-Control-Allow-Origin header.
I was too wondering about how this could work in IPv6.
There is no widely used equivalent of RFC1918 for IPv6, and filtering link-local addresses won't do much, as every host on the LAN is still addressable by its publicly routable address. Those are probably too hard to predict, though.
I don't see any problems here. Even though they are on the same LAN, they are still on different hosts, and thus subject to CORS restrictions.
That is, as long as your devices on the LAN do not send an Access-Control-Allow-Origin header, web pages are not capable of reading the actual response. Also, without a preflight, the only requests available to them are "simple" ones (e.g. GET), which are almost always side-effect free and only return some value -- which the script cannot even read due to CORS restrictions.
I do agree in principle that web browsers probably should not allow non-local web sites to make requests to local IP addresses.
However, I don't really see that as the fundamental problem with the Zoom web server. They just happened to use local web requests to externally trigger the Zoom application, because it's probably the most convenient to implement. But couldn't they have, at least in theory, had the Zoom application snoop on the display output until it finds an image of a QR code and open a conference call based on the data in that QR code?
Obviously that's a more intensive listening mechanism, but my point is that the fundamental problem seems to be that their application installs a backdoor that is designed to expose the webcam without confirmation based on user actions that do not necessarily imply intent (like clicking on a web link). The local web request thing is really just an implementation detail: one that probably should be fixed by browsers, but far from the only way Zoom could have implemented this feature.
After all, the Zoom client could just keep a socket connection to Zoom's servers and start a conference call whenever someone requests one. That's how most native apps for conferencing/messaging work. They just usually require confirmation from the user, and they usually (I hope) uninstall that process when I uninstall the app, so people tend to be less upset.
DNS rebinding attacks leverage this very behaviour, and they have existed for years. Nothing new. And I think fixing the approach is complex and error-prone. I can still make the browser connect to myhost.mydomain.com and have it resolve to 127.0.0.1 -- what then?
Of course, if your local web servers have a really open CORS header, that could be a problem. But that's a matter for the local web server, mostly. And DNS rebinding still applies.
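One mitigation available to the local server itself: after a DNS rebind, the browser still sends the attacker's hostname in the Host header, so a localhost-only service can reject anything it doesn't expect. A sketch (the port is reused from the Dropshare example earlier in the thread, purely for illustration):

```javascript
// Reject requests whose Host header doesn't name this service directly.
// After DNS rebinding, Host carries the attacker's domain, not localhost.
const EXPECTED_HOSTS = new Set(["localhost:34344", "127.0.0.1:34344"]);

function isExpectedHost(hostHeader) {
  return EXPECTED_HOSTS.has((hostHeader || "").toLowerCase());
}
```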
To mitigate this, I configure my LAN’s DNS server to drop records which specify local or private addresses. Of course, this doesn’t help outside my LAN. In Unbound:
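The snippet itself is elided above. For reference, Unbound's `private-address` option is the usual way to do this: it strips answers that point into private space from upstream responses, which is Unbound's standard DNS-rebinding defence. A sketch; the whitelisted `private-domain` name is a placeholder to adjust to your own zones:

```
server:
    # Strip A/AAAA answers that point into private space: an external
    # name can then no longer resolve to a host on this LAN.
    private-address: 10.0.0.0/8
    private-address: 172.16.0.0/12
    private-address: 192.168.0.0/16
    private-address: 169.254.0.0/16
    private-address: fd00::/8
    private-address: fe80::/10
    # Names that legitimately resolve to private addresses must be
    # whitelisted explicitly, e.g.:
    # private-domain: "home.example.org"
```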
I recently posted a blog post that exposed a similar issue involving a Chrome extension. The issue in particular is how LinkedIn makes local web requests to try and identify which extensions you have installed: https://prophitt.me/articles/nefarious-linkedin
This approach was used for years by Spotify[1], to allow websites embedding their player to load content directly into a running instance of the desktop app.
The browser doesn't know by default which websites should be allowed access. It doesn't know that l337haxor.dev doesn't have access to api.bank.com but bank.com does.
Instead, it's the responsibility of the server at api.bank.com to tell the browser it only accepts requests from bank.com.
It is absolutely the job of the server to determine the source and validity of a request. Web browsers, for better or for worse, fundamentally allow websites to make requests to other sites. "Rogue" websites can "trick" users into requesting images, videos, music, JavaScript, or stylesheets, or into making POST requests to your site. This is why tech like CSRF tokens exists.
[1] https://news.ycombinator.com/item?id=20399551
icelancer | 6 years ago:
Basically everyone who says this has better English than most American technical developers. We appreciate your modesty, but please, write it up :)
wolf4earth | 6 years ago:
Maybe I can get you in touch?
https://news.ycombinator.com/item?id=10309432
dang | 6 years ago:
Can you please make your substantive points without making things personal?
https://news.ycombinator.com/newsguidelines.html
[1] https://news.ycombinator.com/item?id=12408328#12408680 [2] https://news.ycombinator.com/item?id=13689697#13691022 [3] https://news.ycombinator.com/item?id=19853090#19855518
saagarjha | 6 years ago:
Safari whitelists a number of URL schemes used by its first-party and internal apps:
bpfrh | 6 years ago:
For example, display a notification:
"This website wants to interact with devices on your network; if you are at home, this includes all of your smart things, smart phones, and other devices."
I mean, we have a Windows firewall and sandboxing at every level, and then a web browser suddenly acts as a VPN for every website?
That can't be good security design.
NikkiA | 6 years ago:
If the routing-table entry for the address is a single-interface entry, then it's 'local'; if it has a gateway hop, it's probably 'wide'.
[+] [-] lasryaric|6 years ago|reply
You can read the response of an XHR (a.k.a. Ajax) request only if the CORS policy allows it (concretely, the local web server you are trying to access answers with a specific HTTP header saying "I authorize the website xyz.com to send XHR requests to me via the browsers of xyz.com's visitors").
Outside of XHR, you can still send a few other types of requests: <script src="..."></script>, but that only lets you load JS files; <img src="..." />, but that only lets you load images. You can't really do much other than try to load images with that.
So if you get into the details of each "web api" (XHR, <img/>, <script/>, etc.), you will see that you are actually very limited.
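The server side of that handshake can be sketched with Python's standard library. This is a minimal illustration, not any particular app's implementation; the trusted origin `https://xyz.com` is hypothetical, and the demo queries itself to show the header being set.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGIN = "https://xyz.com"  # hypothetical trusted site

class CORSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        # Opt in only for the one origin we trust; browsers refuse
        # to expose the response body to pages from any other origin.
        if self.headers.get("Origin") == ALLOWED_ORIGIN:
            self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), CORSHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate a browser request carrying the trusted Origin header.
req = urllib.request.Request(
    "http://127.0.0.1:%d/" % server.server_address[1],
    headers={"Origin": ALLOWED_ORIGIN},
)
resp = urllib.request.urlopen(req)
print(resp.headers.get("Access-Control-Allow-Origin"))  # https://xyz.com
server.shutdown()
```

Note that the request still reaches the server either way; the CORS header only controls whether a browser lets the page read the answer.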
unreal37 | 6 years ago
- www.mydomain.com
- stage.mydomain.com
- local.mydomain.com
The last one referred to the version of the app that developers ran on their own machines. So they had a DNS-level entry that sent local.mydomain.com to 127.0.0.1.
This isn't a browser issue at all. I think the security issue is "applications can install local web servers" and "some local web servers are insecure".
We already have same-origin controls in place to prevent a domain from accessing the contents of another browser window or an iframe.
It's not a browser issue. There are plenty of legitimate reasons for wanting a browser to access a local web server. It might not be common, but it's not illegitimate nor a security issue.
banana_giraffe | 6 years ago
schappim | 6 years ago
Yup, that's exactly it!
iameli | 6 years ago
What's a local area network? 10.x.x.x? That's going to break VPNs and enterprise integrations in a variety of ways. With IPv6 it's even less predictable.
The solution to this problem is CORS — accessing LAN servers, or any cross-origin destination, requires affirmative consent from the LAN server in the form of the Access-Control-Allow-Origin header.
felipelemos | 6 years ago
rnhmjoj | 6 years ago
The closest IPv6 equivalent of RFC 1918 is RFC 4193 (unique local addresses), and filtering link-local or unique local addresses won't do much anyway, as every host on the LAN is usually still addressable by its publicly routable global address. Those are probably too hard to predict, though.
ianhowson | 6 years ago
SCLeo | 6 years ago
That is, as long as your devices on the LAN do not send an Access-Control-Allow-Origin header, web pages cannot read the actual response. Also, without a preflight, only "simple" requests (GET, HEAD, and form-style POSTs) are sent at all; anything else first triggers an OPTIONS preflight, which the server must explicitly approve. Simple GETs are almost always side-effect free and only return a value, which the script cannot even read due to CORS restrictions.
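That simple-vs-preflighted distinction can be sketched roughly as follows. This is simplified from the Fetch spec's "simple request" rules; real browsers also inspect individual header names and values.

```python
SIMPLE_METHODS = {"GET", "HEAD", "POST"}
SIMPLE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def needs_preflight(method, content_type=None, custom_headers=()):
    """Rough sketch: does this cross-origin request trigger an
    OPTIONS preflight before the real request is sent?"""
    if method.upper() not in SIMPLE_METHODS:
        return True  # e.g. PUT, DELETE, PATCH
    if custom_headers:
        return True  # e.g. Authorization, X-Requested-With
    if content_type is not None and content_type not in SIMPLE_CONTENT_TYPES:
        return True  # e.g. application/json
    return False

print(needs_preflight("GET"))                       # False: sent directly
print(needs_preflight("DELETE"))                    # True: preflighted
print(needs_preflight("POST", "application/json"))  # True: preflighted
```

The important corollary for LAN devices is the False case: a plain GET (or form POST) is fired at the target with no preflight at all, so any side effects it has happen regardless of CORS.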
mehrdadn | 6 years ago
baddox | 6 years ago
However, I don't really see that as the fundamental problem with the Zoom web server. They just happened to use local web requests to externally trigger the Zoom application, because it's probably the most convenient to implement. But couldn't they have, at least in theory, had the Zoom application snoop on the display output until it finds an image of a QR code and open a conference call based on the data in that QR code?
Obviously that's a more intensive listening mechanism, but my point is that the fundamental problem seems to be that their application installs a backdoor that is designed to expose the webcam without confirmation based on user actions that do not necessarily imply intent (like clicking on a web link). The local web request thing is really just an implementation detail: one that probably should be fixed by browsers, but far from the only way Zoom could have implemented this feature.
After all, the Zoom client could just keep a socket connection to Zoom's servers and start a conference call whenever someone requests one. That's how all native conferencing/messaging apps work. They just usually require confirmation from the user, and they usually (I hope) uninstall that process when I uninstall the app, so people tend to be less upset.
alanfranz | 6 years ago
Of course, if your local web servers send a really permissive CORS header, that could be a problem. But that's mostly a matter for the local web server. And DNS rebinding still applies.
amarshall | 6 years ago
asaasinator | 6 years ago
ficklepickle | 6 years ago
jscholes | 6 years ago
[1] http://cgbystrom.com/articles/deconstructing-spotifys-builti...
vorticalbox | 6 years ago
penagwin | 6 years ago
Instead, it's the responsibility of the server at api.bank.com to tell the browser that it only accepts requests from bank.com.
daxterspeed | 6 years ago