I like this at first glance. The idea of a random website probing arbitrary local IPs (or any IPs, for that matter) with HTTP requests is insane. I wouldn't care if it breaks some enterprise apps or integrations - enterprises could re-enable this "feature" via management tools, and normal users could configure it themselves; just show a popup: "this website wants to control local devices - allow/deny".
This is a misunderstanding. Local network devices are protected from random websites by CORS, and have been for many years. It's not perfect, but it's generally quite effective.
The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.
This proposal aims to tighten that, so that even if the website and the network device both actively want to communicate, the user's permission is also explicitly requested. Historically we assumed server & website agreement was sufficient, but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.
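To make the current model concrete, here is a minimal sketch (in Python; the port handling and partner origin are invented for illustration) of a local device's HTTP server that "consents" to cross-origin requests via CORS - under today's rules, this response header is the only permission that is ever checked:

```python
# Sketch of a local device server opting in to cross-origin requests.
# The Access-Control-Allow-Origin header alone is the device's "consent";
# the user is never asked. ALLOWED_ORIGIN is a hypothetical partner site.
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGIN = "https://example-site.test"  # hypothetical website

class DeviceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # This single header is all that CORS requires today.
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"device status: ok")

    def log_message(self, *args):  # silence per-request logging
        pass

# Port 0 picks an ephemeral port; server.handle_request() would serve one hit.
server = HTTPServer(("127.0.0.1", 0), DeviceHandler)
```

The proposal discussed here would add a user-permission gate on top of this server-side opt-in.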
This sounds crazy to me. Why should websites ever have access to the local network? That presents an entirely new threat model for which we don’t have a solution. Is there even a use case for this for which there isn’t already a better solution?
> normal users could configure it themselves, just show a popup "this website wants to control local devices - allow/deny".
macOS currently does this (per app, not per site) and most users just click yes without a second thought. Doing it per site might create a little more apprehension, but I imagine not much.
This Web Security lecture by Feross Aboukhadijeh has a great example of Zoom's zero-day from 2019 that allowed anyone to force you to join a Zoom meeting (and even cause arbitrary code execution), using a local server: https://www.youtube.com/watch?v=wLgcb4jZwGM&list=PL1y1iaEtjS...
It's not clear to me from Google's proposal if it also restricts access to localhost, or just your local network - it'd be great if it were both, as we clearly can't rely on third parties to lock down their local servers sufficiently!
edit: localhost won't be restricted:
"Note that local -> local is not a local network request, as well as loopback -> anything. (See "cross-origin requests" below for a discussion on potentially expanding this definition in the future.)"
I wish they (Apple/Microsoft/Google/...) would do similar things for USB and Bluetooth.
Lately, every app I install wants Bluetooth access to scan all my Bluetooth devices. I don't want that. At most, I want the app to have to declare in its manifest some specific device IDs (a short list) that it is allowed to connect to, and have the OS limit its connections to only those devices. For example, the Bose app should only be able to see Bose devices, nothing else. The CVS (pharmacy) app should only be able to connect to CVS devices, whatever those are. All I know is the app asked for permission. I denied it.
I might even prefer it if the app had to register the device IDs and then the user were prompted, the same way camera/GPS access is prompted. Via the OS, it might see a device that the CVS app registered for in its manifest. The OS would pop up "CVS app would like to connect to device ABC - just this once / only while the app is running / always" (similar to the way iOS handles location).
By ID, I mean some prefix that a company registers for its devices: bose.xxx. The app's manifest says it wants to connect to "bose.*" and the OS filters.
Similarly for USB and maybe local network devices. Come up with an ID scheme, and have the OS prevent apps from connecting to anything without a matching ID. Effectively, don't let apps browse the network, USB, or Bluetooth.
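A sketch of how such manifest-declared ID filtering could work (the app IDs, device IDs, and manifest shape here are all invented for illustration):

```python
# Sketch: the OS holds each app's declared device-ID patterns and refuses
# any connection attempt that doesn't match one of them. Glob-style
# patterns ("bose.*") stand in for whatever registry scheme would be used.
from fnmatch import fnmatch

# Hypothetical per-app manifests as the OS would see them.
APP_MANIFESTS = {
    "com.bose.app": ["bose.*"],
    "com.cvs.pharmacy": ["cvs.kiosk.*"],
}

def connection_allowed(app_id: str, device_id: str) -> bool:
    """True only if the app's manifest lists a pattern matching the device."""
    patterns = APP_MANIFESTS.get(app_id, [])
    return any(fnmatch(device_id, pat) for pat in patterns)
```

With this in place an app never gets to enumerate devices at all; it can only ask for things it pre-declared.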
I am still holding out hope that eventually at least Apple will offer fake permission grants to applications. Oh, app XYZ "needs" to see my contact list to proceed? Well it gets a randomized fake list, indistinguishable from the real one. Similar with GPS.
I have been told that WhatsApp does not let you name contacts without sharing your address book back to Facebook.
> Lately, every app I install, wants bluetooth access to scan all my bluetooth devices.
Blame Apple and Google and their horrid BLE APIs.
An app generally has to request "ALL THE PERMISSIONS!" to get RSSI which most apps are using as a (really stupid, bug prone, broken) proxy for distance.
What everybody wants is "time of flight"--but for some reason that continues to be mostly unsupported.
It's crazy to me that this has always been the default behavior for web browsers. A public website being able to silently access your entire filesystem would be an absurd security hole. Yet all local network services are considered fair game for XHR, and security is left to the server itself. If you are a developer and run your company's webapp on your dev machine for testing (with loose or non-existent security defaults), facebook.com or google.com or literally anyone else could be accessing it right now. Heck, think of everything people deploy unauthed on their home network because they trust their router's firewall. Does every one of them have the correct CORS configuration?
I majored in CS and I had no idea that was possible: public websites you access have access to your local network. I have to take time to process this. Besides what is suggested in the post, are there any ways to limit this abusive access?
Honestly I just assumed a modern equivalent existed. That it doesn’t is ridiculous. Local network should be a special permission like the camera or microphone.
Although those were typically used to give ActiveX controls on the intranet unfettered access to your machine because IT put it in the group policy. Fun days.
I guess this would help stop Meta's sneaky identification-code sharing, where native apps and websites with Meta's SDK on them communicate surreptitiously through localhost, particularly on Android. [0]

[0] https://www.theregister.com/2025/06/03/meta_pauses_android_t...
While this will help to block many websites that have no business making local connections at all, it's still very coarse-grained.
Most websites that need this permission only need to access one local server. Granting them access to everything violates the principle of least privilege. Most users don't know what's running on localhost or on their local network, so they won't understand the risk.
I worry that there are problems with IPv6. Can anyone explain to me whether there actually is a way to determine if an IPv6 address is site-local? If not, the proposal is going to have problems on IPv6-only networks.
I have struggled with this issue in the past. I have an IoT application whose web server wants to reject any requests from a non-local address. After failing to find a way to distinguish IPv6 local addresses, I ended up redirecting IPv6 requests to the local IPv4 address. And that was the end of that.
I feel like I would be in a better position to raise concerns if I could confirm that my understanding is correct: that there is no practical way for an application to determine whether an IPv6 address is link- or site-local.
I did experiment with IPv6 "link local" addresses, but these seem to be something else altogether (for use by routers rather than general applications), and don't seem to work for regular application use.
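For what it's worth, the distinctions above are mechanically visible in e.g. Python's standard ipaddress module: fe80::/10 link-local is detectable, and unique local addresses (fc00::/7, the successor to the deprecated site-local fec0::/10) register as "private" - but nothing tells you whether an address belongs to *your* site. A quick sketch (addresses are from reserved/documentation ranges):

```python
# What can and can't be classified about an IPv6 address using only the
# address itself. Nothing here answers "is this on my site?" - that
# requires comparing against the local interface's own prefix.
import ipaddress

link_local = ipaddress.ip_address("fe80::1")       # link-local, fe80::/10
ula        = ipaddress.ip_address("fd12:3456::1")  # unique local, fc00::/7
doc_addr   = ipaddress.ip_address("2001:db8::1")   # documentation range

print(link_local.is_link_local)  # True: single link only, not routable
print(ula.is_private)            # True: but says nothing about *which* site
print(doc_addr.is_link_local)    # False
```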
There is some wiggle room provided by including .local addresses as local servers. But implementation of .local domains seems to be inconsistent across various OSs at present. Raspberry Pi OS, for example, will do mDNS resolution of "some_address" but not of "someaddress.local"; Ubuntu 24.04 will resolve "someaddress.local", but not "someaddress". And neither will resolve "someaddress.local." (which I think was recommended at one point, but is now deprecated and non-functional). Which does seem like an issue worth raising.
And it frustrates the HECK out of me that nobody will allow use of privately issued certs for local network addresses. The "no https for local addresses" thing needs to be fixed.
HTTPS doesn't care about IP addresses. It's all based on domain names. You can get a certificate for any domain you own. You can also set said domain to resolve to any address you like, including a "local" one.
NAT has rotted people's brains unfortunately. RFC 1918 is not really the way to tell if something is "local" or not. 25 years ago I had 4 publicly routable IPv4 addresses. All 4 of these were "local" to me despite also being publicly routable.
An IP address is local if you can resolve it and don't have to communicate via a router.
It seems too far gone, though. People seem unable to separate RFC 1918 from the concept of "local network".
IPv6 still has the concept of "routable". You just have to decide what site-local means in terms of the routing table.
In old school IPv4 you would normally assign octet two to a site and octet three to a VLAN. Oh and you start with 10.
With IPv6 you have a lot more options.
All IPv6 devices have link local addresses - that's the LAN or local VLAN - a bit like APIPA.
Then you start on .local - that's Apple and DNS and the like and nothing to do with IP addresses. That's name to address.
You can do Let's Encrypt (ACME) for "local network addresses" (I assume you mean RFC 1918 addresses: 10/8, 172.16/12, 192.168/16) - you need to look into DNS-01 and perhaps a DNS CNAME. It does require quite some effort.
There is a very good set of reasons why TLS certs are a bit of a bugger to get working effectively these days. There are solutions freely available but they are also quite hard to implement. At least they are free. I remember the days when even packet capture required opening your wallet.
You might look into acme.sh if Certbot fails to work for you. You also might need to bolt down IP addressing in general, IPv4 vs IPv6 and DNS and mDNS (and Bonjour) as concepts - you seem a little hazy on that lot.
> Can anyone explain to me if there is any way to determine whether an inbound IPv6 address is "local"?
No, because it's the antithesis of IPv6 which is supposed to be globally routable. The concept isn't supposed to exist.
Not to mention Google can't even agree on the meaning of "local" - the article states they completely changed the meaning of "local" to be a redefinition of "private" halfway through brainstorming this garbage.
Creating a nonstandard, arbitrary security boundary based on CIDR subnets as an HTTP extension is completely bonkers.
As for your application, you're going about it all wrong. Just assume your application is public-facing and design your security with that in mind. Too many applications make this mistake and design saloon-door security into their "local only" application which results in overreaction such as the insanity that is the topic of discussion here.
".local" is reserved for mDNS and is in the RFC, though this is frequently and widely ignored.
CORS doesn't stop a POST request, nor a fetch with 'no-cors' supplied in JavaScript; it's just that you can't read the response. That doesn't mean the request is not sent by the browser.

Then again, a local app can run a server with a proxy that adds CORS headers to the proxied request, and you can access any site via the JS fetch/XMLHttpRequest interface; even an extension is able to modify headers to bypass CORS.

Bypassing CORS is just a matter of editing headers; what's really hard or impossible to bypass is CSP rules.

The Facebook app itself is running such a CORS proxy server, and even without one a normal HTTP or WebSocket server is enough to send metrics.

Chrome already has a flag to prevent localhost access; still, as said, WebSockets can be used.

Completely banning localhost is detrimental: many users rely on self-hosted bookmarking, note-taking, and password-manager solutions that run a local server.
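To illustrate the first point above: the request still reaches the server even when the page can't read the reply. A sketch of the receiving side (the "metrics" endpoint is invented, and the browser's fire-and-forget POST is simulated with a plain HTTP client):

```python
# A localhost "metrics" sink. A cross-origin page's
# fetch(url, {mode: "no-cors", method: "POST", body: ...}) cannot read
# this server's response - but the body still arrives and gets recorded.
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # everything the page managed to deliver

class MetricsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        received.append(self.rfile.read(length))
        self.send_response(204)  # note: no CORS headers at all
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass
```

The absence of CORS headers only blinds the sending page; it does nothing to protect the server.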
Do note that since the removal of NPAPI plugins years ago, locally-installed software that intends to be used by one or more public websites has to run an HTTP server on localhost.
It would be really annoying if this use case was made into an unreasonable hassle or killed entirely. (Alternatively, browser developers could've offered a real alternative, but it's a bit late for that now.)
Doesn't most software just register a protocol handler with the OS? Then a website can hand the browser a zoommtg:// link, which the browser opens with Zoom.
Things like Jupyter Notebooks will presumably be unaffected by this, as they're not doing any cross-origin requests.
And likewise, when a command-line tool wants you to log in with OAuth2 and returns you to a localhost URL, it's a simple redirect, not a cross-origin request, so it should likewise be allowed?
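Presumably, since that loopback redirect is a top-level navigation rather than a fetch. The CLI side of such a flow is roughly this (simplified sketch; real tools also verify the OAuth state parameter and exchange the code for a token):

```python
# Minimal sketch of the loopback half of a CLI OAuth flow: listen on an
# ephemeral 127.0.0.1 port, catch the provider's redirect to
# http://127.0.0.1:<port>/?code=..., and hand the code back to the tool.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class RedirectHandler(BaseHTTPRequestHandler):
    code = None  # filled in by the single redirect request

    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        RedirectHandler.code = params.get("code", [None])[0]
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"You may close this tab.")

    def log_message(self, *args):  # silence per-request logging
        pass

def wait_for_code(port: int = 0) -> str:
    """Serve exactly one request and return the captured ?code= value."""
    server = HTTPServer(("127.0.0.1", port), RedirectHandler)
    print(f"redirect URI: http://127.0.0.1:{server.server_address[1]}/")
    server.handle_request()  # blocks until the browser is redirected here
    server.server_close()
    return RedirectHandler.code
```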
It would be amazing if that method of communicating with a local app was killed entirely, because it's been a very very common source of security vulnerabilities.
It appears to not have been enabled by default on my instance of uBlock; it seems a specific filter list is used to implement this [0]; that filter was un-checked; I have no idea why. The contents of that filter list are here [1]; notice that there are exceptions for certain services, so be sure to read through the exceptions before enabling it.
[0] Filter Lists -> Privacy -> Block Outsider Intrusion into Lan

[1] <https://github.com/uBlockOrigin/uAssets/blob/master/filters/...>
This has the potential to break rclone's OAuth mechanism, as it relies on setting the redirect URL to localhost so that when the OAuth flow is done, rclone (which is running on your computer) gets called.
I guess if the permissions dialog is sensibly worded then the user will allow it.
I think this is probably a sensible proposal but I'm sure it will break stuff people are relying on.
IIUC this should not break redirects. This only affects: (1) fetch/xmlhttprequests (2) resources linked to AND loaded on a page (e.g. images, js, css, etc.)
As noted in another comment, this doesn't work unless the responding server provides proper CORS headers allowing the content to be loaded by the browser in that context: so for any request to work, the server is either wide open (cors: *) or cooperating with the requesting code (cors: website.co). The changes prevent communication without user authorization.
I do not understand. Doesn't same-origin prevent all of these issues? Why on earth would you extend some protection to resources based on IP address ranges? It seems like the most dubious criteria of all.
I don’t see this mentioned anywhere but Safari on iOS already does this. If you try to access a local network endpoint you’ll be asked to allow it by Safari, and the permission is per-site.
One of the very few security-inspired restrictions I can wholeheartedly agree with. I don't want random websites to be able to read my localhost. I hope it gets accepted and implemented sooner rather than later.
OTOH it would be cool if random websites were able to open up and use ports on my computer's network, or even on my LAN, when granted permission of course. Browser-based file- and media sharing between my devices, or games if multi-person.
The alternative proposal sounds much nicer, but unfortunately was paused due to concerns about devices not being able to support it.
I guess once this is added maybe the proposed device opt in mechanism could be used for applications to cooperatively support access without a permission prompt?
Is the so-called "modern" web browser too large and complex
I never asked for stuff like "websockets"; I have to disable it, why
I still prefer a text-only browser for reading HTML; it does not run Javascript, it does not do websockets, CSS, images or a gazillion other things; it does not even autoload resources
It is relatively small, fast and reliable; very useful
It can read larger HTML files that make so-called "modern" web browsers choke
It does not support online ad services
The companies like Google that force ads on www users are known for creating problems for www users and then proposing solutions to them; why not just stop creating the problems
Assuming that RFC1918 addresses mean "local" network is wrong. It means "private". Many large enterprises use RFC1918 for private, internal web sites.
One internal site I spend hours a day using has a 10.x.x.x IP address. The servers for that site are on the other side of the country and are many network hops away. It's a big company, our corporate network is very very large.
A better definition of "local IP" would be whether the IP is in the same subnet as the client, i.e. look up the client's own IP and subnet mask and determine if a packet to a given IP would need to be routed through the default gateway.
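That check is straightforward to compute from the client's own address and prefix length (the addresses below are illustrative):

```python
# Sketch of the "same subnet as the client" test: treat a target as local
# only if reaching it would not traverse the default gateway.
import ipaddress

def is_on_link(target: str, client_ip: str, prefix_len: int) -> bool:
    """True if `target` falls inside the client's own subnet."""
    subnet = ipaddress.ip_network(f"{client_ip}/{prefix_len}", strict=False)
    return ipaddress.ip_address(target) in subnet

# e.g. a client at 192.168.1.23/24:
print(is_on_link("192.168.1.50", "192.168.1.23", 24))  # True: same /24
print(is_on_link("10.4.5.6", "192.168.1.23", 24))      # False: via router
```

This correctly treats a far-away corporate 10.x.x.x server as non-local, and (per the earlier comment) would also treat publicly routable on-link neighbors as local.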
Is it possible to do this today with browser extensions? I ran NoScript 10 years ago and it was really tough - kinda felt like being gaslit constantly. I could go back, only enabling sites selectively, but it's not going to work for family. Wondering if just blocking cross-origin requests would be more feasible.
I often see sites like Paypal trying to probe 127.0.0.1. For my "security", I'm sure...
> it would be cool if random websites were able to open up and use ports on my computer's network, or even on my LAN

That's what WebRTC does. There's no requirement that WebRTC be used to send video and audio as in a Zoom/Meet call.
That's how WebTorrent works.
https://webtorrent.io/faq