That's an awesome feature, although it doesn't seem to be complete.
I did some research into whether fingerprinting can be stopped by a browser extension that patches the JS environment before the page loads, and it turns out to be difficult or impossible. First of all, there is no API for patching the JS environment from an extension.
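To make the difficulty concrete, here is a minimal sketch of why extension-side patching is fragile even when you can inject a script. Plain objects stand in for real DOM prototypes; no actual extension API is used, and the scenario is illustrative:

```javascript
// A plain object stands in for CanvasRenderingContext2D.prototype.
const proto = {
  getImageData() { return "real pixels"; },
};

// If any page script runs first, it can save the original function:
const saved = proto.getImageData;

// The extension's patch arrives afterwards:
proto.getImageData = function () { return "blank pixels"; };

console.log(proto.getImageData()); // "blank pixels"
console.log(saved.call(proto));    // "real pixels", patch bypassed

// The override is also detectable: a native method stringifies to
// "function getImageData() { [native code] }", a JS replacement doesn't.
```

So unless the extension is guaranteed to run strictly before any page script, and the patch is indistinguishable from the native function, the protection can be bypassed or itself used as a signal.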
I think many of the new web standards are poorly designed with regard to privacy. For example, WebGL reports the video card that you use. Who needs that? Well, maybe a tiny percentage of sites use this information to work around bugs, but the main use of the feature is as a reliable, unforgeable signal for fingerprinting a device. Most sites do not need WebGL at all.
It seems that browser vendors rush to push out as many features as possible without thinking much about users' privacy.
So today we basically have lots of standards that provide fingerprinting signals (Web Audio, WebGL, the Canvas API, WebRTC, port scanning via fetch or WebSocket, probing the extension list) and zero APIs or settings that let the user control or block them (unless you are ready to patch the browser).
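As a sketch of how one of those signals gets used: a canvas fingerprinter draws text and gradients, reads the pixels back with getImageData(), and hashes them; tiny cross-device rendering differences (anti-aliasing, font hinting, GPU) change the hash. The hashing step might look like the following, with a Uint8Array standing in for the pixel buffer and FNV-1a standing in for whatever hash a real tracking script uses:

```javascript
// FNV-1a over a byte array; the real input would come from
// ctx.getImageData(...).data in a browser.
function fnv1a(bytes) {
  let h = 0x811c9dc5;
  for (const b of bytes) {
    h ^= b;
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16);
}

// Two hypothetical devices whose renderers differ by one channel value:
const deviceA = new Uint8Array([255, 0, 0, 255, 10, 20, 30, 255]);
const deviceB = new Uint8Array([255, 0, 0, 255, 10, 20, 31, 255]);

console.log(fnv1a(deviceA) === fnv1a(deviceB)); // false: a new device ID
```

One pixel's worth of rendering difference is enough to produce a distinct, stable identifier, which is why these APIs are attractive to trackers.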
> For example, WebGL reports the video card that you use. Who needs that? Well, maybe a tiny percentage of sites use this information to work around bugs, but the main use of the feature is as a reliable, unforgeable signal for fingerprinting a device. Most sites do not need WebGL at all.
WEBGL_debug_renderer_info is an optional extension to WebGL specifically so that it can be denied to the page at the browser's discretion. And Firefox's privacy.resistFingerprinting (the subject of this article) disables it: https://developer.mozilla.org/en-US/docs/Web/API/WEBGL_debug...
> It seems that browser vendors rush to push out as many features as possible without thinking much about users' privacy.
"Vendors", in the plural, makes it sound like the market isn't a Google monopoly at this point. Google naturally loves being able to fingerprint people on as many metrics as possible, for its ads.
At this point, what's really needed is an additional layer of abstraction. The site needs to run in a virtual computer within the browser, identical to all others, and every interaction where the site needs information from the user should go through a filter that randomizes and sanitizes identifying inputs enough to make fingerprinting much, much harder. That will probably break some content, like games, but the reality is that as much as everyone loves their web apps, they really are a tiny fraction of web browsing.
Another thing I don't like is that enabling this feature disables page zoom. That is inconvenient, because you have to choose between using zoom and fingerprint protection.
Actually, I think that when zoom is below 100% (i.e. the page is made smaller), it would be possible to report the original window size to the page rather than the scaled-up size. Many sites today use gigantic font sizes, and they are difficult to read without zoom or a high-DPI display.
> WebGL reports the video card that you use. Who needs that?
Basically, anybody trying to build a high-performance WebGL game. Even today, the quirks of individual cards are significant enough that a game relying on particular feature capabilities is going to need a quirks list of cards for which to substitute implementations. There's nothing to be done about it if a card simply lies about its capabilities in the gestalt API, or has an honest-to-God bug that you have to work around in the shader.
But if an API like that is on by default instead of being behind a permission dialog, that's a major privacy mistake.
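For illustration, a quirks list like the one described above might be keyed off the reported renderer string; every entry below is invented:

```javascript
// Hypothetical quirks table: renderer substring -> shader variant.
const QUIRKS = {
  "Mali-400": "no-highp-fragment", // pretend this GPU lacks highp support
  "Adreno 3": "avoid-discard",     // pretend discard is broken here
};

function pickShaderVariant(rendererString) {
  for (const [pattern, variant] of Object.entries(QUIRKS)) {
    if (rendererString.includes(pattern)) return variant;
  }
  // Unknown (or deliberately masked) renderer: take the safest path.
  return "default";
}

console.log(pickShaderVariant("ARM Mali-400 MP")); // "no-highp-fragment"
console.log(pickShaderVariant("hidden by RFP"));   // "default"
```

Note the fallback: when resistFingerprinting hides the renderer string, the game can still run, it just has to assume the most conservative implementation.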
I really, really wish they would make it site-specific, i.e. give me the option to disable it for certain domains. Currently I have it enabled, but this causes sites (like Gmail) to not know the current time, so my Gmail shows email timestamps in GMT.
I mean, I'm already logged into the site; I have already given them my name, password, etc. So what am I trying to hide from them??
Does disabling tracking protection for the website not work? When I click the little shield in the address bar and toggle tracking protection, most sites relying on tracking scripts start working again.
I suppose there's no field to enter these domains beforehand but I don't really run into any trouble because of this feature.
I'm all for anti-fingerprinting, but I'm also for interactive graphics on the web, and getImageData() is essentially your way of accessing a pixel buffer. It would be better if the protection were more conditional.
E.g. instead gate the call that ultimately attempts to send any derivative of that data over the network, although I understand that may entail significant complexity in the JS engine. Alternatively, gate getImageData() only if fingerprintable context methods have previously been called, i.e. those with antialiasing, compositing, or blending differences, or any other rendering method whose output can differ depending on the underlying algorithm. That way, someone just using the pixel buffer as an output doesn't get punished by needlessly throwing modals in front of the user.
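A rough sketch of that taint-tracking idea, with a plain object standing in for a real canvas 2D context. The wrapper and its policy are hypothetical; only the method names are real canvas APIs:

```javascript
// Methods whose output depends on the rendering implementation:
const FINGERPRINTABLE = new Set(["fillText", "strokeText", "drawImage"]);

function wrapContext(ctx) {
  let tainted = false;
  return new Proxy(ctx, {
    get(target, prop) {
      // Once tainted, pixel reads require explicit user approval.
      if (prop === "getImageData" && tainted) {
        return () => { throw new Error("blocked: prompt the user first"); };
      }
      const value = target[prop];
      if (typeof value !== "function") return value;
      return (...args) => {
        if (FINGERPRINTABLE.has(prop)) tainted = true;
        return value.apply(target, args);
      };
    },
  });
}

// Stand-in context (Node has no canvas):
const ctx = wrapContext({
  fillRect() {},
  fillText() {},
  getImageData() { return "pixels"; },
});

ctx.fillRect();                  // plain drawing: no taint
console.log(ctx.getImageData()); // "pixels": untainted reads pass through
ctx.fillText();                  // text rendering differs per machine
try { ctx.getImageData(); } catch (e) { console.log(e.message); }
```

Under this policy, a sprite blitter that never calls text or compositing methods would keep full getImageData() access with no modal at all.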
> interactive graphics on the web, and getImageData() is essentially your way of accessing a pixel buffer
I'm sure it's a great feature when you need it, but most websites have no legitimate need for it. Having this sort of feature off/blocked by default and whitelisted on a case-by-case basis makes a lot of sense to me. Are you trying to use a webapp image editor? Makes sense to whitelist it. Are you trying to read a local newspaper article online? Keep that shit off by default.
In fact, that's how I treat even first-party JavaScript, because most websites are made worse by turning JavaScript on. It seems to follow a 90-9-1 rule: 90% of websites need no JavaScript, 9% need first-party JavaScript whitelisted, and 1% require some third-party JavaScript.
I wonder whether introducing slight per-pixel, per-draw noise into the values could be used here to mask this sort of detail? Or would you be able to average it out somehow?
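A quick numerical sketch of the averaging concern: if the noise is re-rolled on every read, its mean is zero, so repeated reads recover the true value. That is why a fixed perturbation (kept stable for a session) is generally the stronger variant:

```javascript
// Toy model: each read of a pixel gets fresh +/-1 noise.
function noisyRead(truePixel) {
  return truePixel + (Math.random() < 0.5 ? -1 : 1);
}

const TRUE_VALUE = 128;
const N = 10000;
let sum = 0;
for (let i = 0; i < N; i++) sum += noisyRead(TRUE_VALUE);
const mean = sum / N;

// The symmetric noise averages out; the tracker recovers 128.
console.log(Math.round(mean)); // 128
```

With 10,000 reads, the standard error of the mean is about 0.01, so re-rolled noise hides essentially nothing from a patient script.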
I've also encountered problems with getImageData(). I just want to be able to blit a sprite sheet, for God's sake. It would be tolerable if there were a couple of pre-defined functions that could handle blitting, compositing, and other operations, and return an HTMLImageElement with the same permissions as the source.
The problem I foresee with getImageData() and gl.readPixels() is that despite all this security, they may still leak data through the OpenGL implementation, as in a Spectre/Meltdown-type attack. Imagine, say, some cached data on the GPU that lingers between draw calls.
If you want even better protection from tracking and fingerprinting, I recommend arkenfox user.js [1]. It's a configuration file for Firefox.
I have created tmpfox [2], a simple program that creates a temporary Firefox profile in /tmp and installs the arkenfox user.js along with some plugins I find useful.
Are these settings available individually? For example, I would be happy with most of these, except that I can't live with the UTC time zone and no site-specific zoom. I would probably also keep the Performance API available.
Maybe this will become available when they roll out more broadly with a configuration UI.
If the site notices your canvas data is returning garbage, but your timezone looks legit (i.e. not UTC), then it can conclude you have canvas protection enabled but not timezone spoofing. That can be used to build a fingerprint of your browser based on which fingerprinting-protection settings you enabled.
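A toy illustration of how a la carte protections become their own signal; the settings and values here are invented:

```javascript
// Each independently-toggled protection is itself observable:
const protections = {
  canvasNoise: true,   // canvas returns garbage -> detectable
  timezoneUTC: false,  // timezone left real -> also detectable
  audioBlocked: true,
};

// A tracker can read the combination off as a bitstring:
const signature = Object.values(protections).map(Number).join("");
console.log(signature); // "101", one of 2^3 possible buckets

// Uniform defaults put everyone in the same bucket; per-user
// selections spread users thinly across all of them.
```

With n independent toggles there are 2^n buckets, which is exactly why the Tor/Firefox approach forces one uniform configuration rather than offering individual switches.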
The goal is to provide a testbed of Tor features so that Tor is not broken when they update their Firefox ESR version. It has the secondary effect of making Firefox look roughly like a Tor user, thus slightly increasing the population of possible Tor users. Allowing people to select individual RFP features would make you look extremely unique.
I tried a before-and-after on the EFF Cover Your Tracks tool [0].
Before: One in 69970.67 browsers have the same fingerprint as yours. Currently, we estimate that your browser has a fingerprint that conveys 16.09 bits of identifying information.
After: One in 104957.5 browsers have the same fingerprint as yours. Currently, we estimate that your browser has a fingerprint that conveys 16.68 bits of identifying information.
So according to that, my browser is more fingerprintable after enabling the setting!
I didn't expect this experimental feature to be a silver bullet, but I certainly didn't expect it to make me more unique. I'm not sure what to think of that.
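For what it's worth, the EFF figures are internally consistent: "one in N browsers" conveys log2(N) bits, and both reported numbers check out:

```javascript
// A fingerprint shared by 1 in N browsers conveys log2(N) bits.
const bits = (oneInN) => Math.log2(oneInN);

console.log(bits(69970.67).toFixed(2)); // "16.09" (before)
console.log(bits(104957.5).toFixed(2)); // "16.68" (after)
```

So the ~0.6-bit increase just reflects that the tested population contains few resistFingerprinting users, as the reply below notes.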
The pref makes you look like a Tor user, mostly. So yes that population of users is pretty small. The Firefox/Tor privacy protections favor uniformity over randomizing everything.
Sometimes resistFingerprinting can break a site, though rarely. When that happens, the addon "Toggle Resist Fingerprinting" [1] can be helpful: it temporarily deactivates the protection with a single click on a button, instead of you having to go to about:config and flip "privacy.resistFingerprinting" to "false" manually.
What bothers me is that RFP breaks many addons as well. For example, the reduced timer precision breaks Surfingkeys on Windows (vim key combinations behave erratically, scrolling is jerky, and so on). As another example, the Alt key is completely disabled by RFP because some national keyboard layouts can be used for fingerprinting [1]. As a result, hotkeys using Alt become inaccessible to addons.
The problem with a lot of these attempts at fingerprinting prevention is that they generate additional data points that can be used to fingerprint users more accurately.
getImageData() is blocked - datapoint
Any detectable difference from what a “regular” browser would return is another point of entropy.
However, as far as I can tell, the accuracy of device fingerprinting with `getImageData()` is a lot higher than the accuracy of fingerprinting people based on whether that call returns blank data.
If turning off a feature reveals a new 3 bits of information, but leaving it on would have revealed 5 bits, then it's still probably a good idea to turn it off.
Again, not to say that people shouldn't care about those 3 bits, they should. But it's not necessarily a waste of time even if a site tries to use anti-fingerprinting as its own metric. It only becomes a waste of time if the anti-fingerprinting is more unique than leaving the holes open.
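The arithmetic behind that tradeoff, assuming (purely for illustration) a population of one million users and independent signals:

```javascript
// Revealing b bits splits a population into 2^b buckets; your
// anonymity set is the size of your bucket.
const users = 1_000_000;
const anonymitySet = (b) => Math.round(users / 2 ** b);

console.log(anonymitySet(5)); // 31250: feature left on, smaller crowd
console.log(anonymitySet(3)); // 125000: feature blocked, 4x more cover
```

Blocking the feature still "costs" 3 bits, but you hide in a crowd four times larger than if you had left it on.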
Probably not online banking, but a lot of games or media-intensive applications in the browser are going to get real squirrely if they cannot detect the user's configuration.
Even things like the GPU make and model can end up being necessary, because the WebGL mechanisms for determining those things are allowed to lie (i.e. I've seen cards whose gestalt data reports support for various features that are in reality implemented in software, and therefore basically unusable).
Stuff like this is why I hope Firefox remains viable as a browser long-term. Chrome is now at what, 80% of browser market share? They have no incentive to protect user privacy.
I'm really worried that we're going to head down the road of chrome becoming the only browser anyone tests their site against and we're going to go back to the bad old days of IE 6 compatible sites that are completely broken in other browsers.
I would agree with other posters that it's already like that: reddit.com, one of the biggest websites on the internet, will ask whether you want to continue in Chrome or the app when you visit from a mobile phone, essentially calling any mobile browser "Chrome".
It's already like that TBH, especially with really important sites like banking or government services.
That in particular makes me nervous sometimes. If the implementation can't even run properly on other browsers, I don't want to know what other corners they're cutting behind the scenes.
Yet for most people, the inconvenience of having to port over all of their profiles, history, and cookies will be enough to keep them on Chrome. Most can't be bothered to install another browser, or aren't even aware that they can import Chrome profiles into Firefox.
Even for me, some stuff stays on Chrome, since it's too connected to all the business/SaaS/hosting logins and so on.
Those days never went away. All browser rendering and JS engines have their own quirks and bugs. Even a very simple website with only basic CSS will look different in all three desktop browsers, often different enough to be broken.
For every step in the right direction on privacy, Mozilla has taken two steps back. They're a bit like Apple, in that they market themselves as the one who cares about privacy when in reality they're just slightly better than Google.
I don't believe you can care about privacy when half your org is hyper-political lefty activists, and Mozilla seems to be infested with them. Having watched the Firefox subreddit for two years, I've seen FF devs and leadership often at odds with FF users, who are people who want privacy above all.
This is not sustainable, and fingerprinting is just one side of this whole fragmented mess we're in. Browsers should present very few fingerprintable attributes by default. By now I'm convinced that the user's preferred languages are the only really defensible header. Everything else? Ask for permission.
The way we're doing capability permissions on the web (to the extent browsers do it at all) is just broken. A barrage of piecemeal modal dialog boxes is not the way forward. It needs to be drastically simplified. A website should be treated exactly like any other kind of app: if it needs to use extended features, it should put that into a manifest so the browser can provide a specific list of items for the user to approve or reject.
If none of these permissions are given, sites should be extremely restricted in what they can do, including cookies and localStorage.
Let's get rid of the User-Agent and codec-compatibility headers. The User-Agent in particular is already useless; both should be replaced entirely by an improved feature-detection system.
There are only 3 major browser vendors left. They could fix this within months. This is not a technology problem, it's a question of will and ad revenue.
I'd like to have this capability but not if it makes typical web browsing annoying with too many alerts. I'd prefer to have the feature work by default with a blacklist of sites known or likely to do fingerprinting (ie larger social and media sites). Of course it can also have an optional strict mode for those who want a higher degree of anonymity in exchange for more disruption of their browsing.
My personal concern with fingerprinting isn't so much any individual low-traffic site recognizing my browser. It's the higher-traffic sites working together to aggregate profiles. I don't need "zero tolerance" anti-fingerprinting. I just want to make it harder for big data aggregators to compile highly accurate, large population databases. Hopefully, a sweet spot can be found in testing which is minimally disruptive for typical users but frustrates data aggregators' ability to compile highly lucrative data products across sites. I'd imagine just applying anti-fingerprinting to the 1,000 highest-traffic websites might be enough to cut the profitability of cross-site aggregation significantly.
I think virtualized rendering might be the future for both privacy and security. MS has (had?) Application Guard for Edge, for example, although it wasn't aimed at privacy. Hardware-enforced boundaries.
Ultimately, though, my opinion is that this is a legal problem. The entities that are a threat to my privacy and that also use fingerprinting operate within the boundaries of criminal law.
Somewhat tangential: now that WebRender has landed, are there any recent benchmarks comparing Firefox against Chrome and Safari on real-world web pages?
I’d also love to see interoperability with history, keychain, and bookmarks with other browsers. Would make it far easier to switch between.
> The browser window prefers to be set to a specific size
Any idea how this works? Can you still set your browser window to any size you want in your window manager, does it misreport it? Could that not cause rendering issues too?
1. By default, any non-maximized window gets a 1000x1000 viewport. This is consistent with how the Tor Browser works. Of course, this does nothing when your window is maximized; the Tor Browser warns you not to maximize the window for exactly this reason. The idea is that if everybody's window is 1000x1000, you can't be fingerprinted via monitor size, OS decoration sizes, or personal window-size preferences.
2. You can optionally enable a feature called "letterboxing", which rounds the viewport size down to multiples of 100px. This works even if you maximize or resize your browser window.
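A sketch of that rounding, assuming the 100px step described above (the real implementation's step sizes and minimums may differ):

```javascript
// Round the inner window down to the nearest step; the browser paints
// grey margins around the page to fill the leftover space.
function letterbox(width, height, step = 100) {
  return {
    w: Math.max(step, Math.floor(width / step) * step),
    h: Math.max(step, Math.floor(height / step) * step),
  };
}

// A 1366x768 window reports a 1300x700 viewport to the page:
console.log(letterbox(1366, 768)); // { w: 1300, h: 700 }
```

Everyone with a roughly similar window thus reports the identical viewport, collapsing a high-entropy signal into a small set of common values.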
I can't seem to find this option in firefox, perhaps it refers to the feature in Tor browser that restricts the window size to a few most common resolutions.
Given that not many people use Firefox, wouldn't the usage of fingerprinting protection within a given geography (can be inferred from IP address) be considered a fingerprint itself?
[1] https://github.com/arkenfox/user.js
[2] https://github.com/cmitsakis/tmpfox
[0] https://coveryourtracks.eff.org/
[1] https://addons.mozilla.org/en-US/firefox/addon/toggle-resist...
[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1598862#c3
"Look at me: I'm the Chrome now."
If those numbers are right, Safari has about 5x the number of users. Realistically, Safari is our only hope.
[1]: https://gs.statcounter.com/browser-market-share