Google's reCAPTCHA makes it impossible to use large portions of the web once you take reasonable measures to protect your privacy. The challenge will continuously fail, despite you spending time to carefully solve it. This cruel behavior is described in a patent [1] by Kyle Adams of Juniper Networks.
Remember that reCAPTCHA v1 used to be noble: reading books and converting them to text.
Now you're just training Google's many machine-learning algorithms by classifying data for them, which makes them more useful to the consumer and thus more powerful.
The audio CAPTCHA always works first try for me. The image CAPTCHA can go eff itself, it would always take me five tries while the images loaded super slowly.
I dislike Google reCAPTCHA; however, it brought contact-form and comment spam down to almost zero. (At the price of an unknown number of false positives and some frustrated users.)
“A lot of the scripts that are run to enable tracking delay webpage load times while all these tracking scripts fire and run in the background,” said Peter Dolanjski, Firefox Product Lead.
Over at Google they are trying to sell fingerprinting as a feature.
The revolt against surveillance capitalism seems to just be starting in earnest. I think a lot of smart people are finally realizing they have wasted their considerable talents on advertising tech while pretending it was something else and are now very angry.
I worked in the ad industry. Every web browser, including Brave, Tor, and Safari, is uniquely identifiable, even on the same hardware.
All the public computer researchers and browser vendors are years behind the industry's device-fingerprinting techniques (probably 5+).
Canvas, WebGL, etc. are techniques of the past. There are much more advanced ones that can identify devices completely uniquely (on both desktop and mobile).
We also know when users fake their fingerprints; the algorithm respects that decision, even though we still know who the user is despite all state-of-the-art faking methods.
The latest methods don't even use JavaScript. CSS alone is enough to identify every device uniquely, though you'd need JS to send the data back.
Every public researcher I've seen is fed honeypot techniques that they consider state of the art, even though the industry is way ahead of the researchers.
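The CSS-only claim above is at least plausible, because media queries can conditionally trigger network fetches. Here is a minimal sketch (all endpoint names and trait lists are hypothetical, not from any real tracker) of how probe rules could be generated: each matching `@media` query makes the browser fetch a unique URL, and the server reads the set of URLs that fired as a bit vector.

```python
# Sketch of CSS-only feature probing: no JS is needed to *collect* the
# signals, since each matching media query fetches a unique URL.
# Trait names and the /probe/ endpoint are hypothetical.

PROBES = {
    "dark-mode": "(prefers-color-scheme: dark)",
    "hidpi": "(min-resolution: 2dppx)",
    "wide": "(min-width: 1920px)",
    "coarse-pointer": "(pointer: coarse)",
}

def probe_css(session_id: str) -> str:
    """Emit one @media rule per trait; a hit on /probe/<trait> reveals it."""
    rules = []
    for trait, query in PROBES.items():
        rules.append(
            "@media %s { body::after { content: url('/probe/%s/%s'); } }"
            % (query, session_id, trait)
        )
    return "\n".join(rules)

def fingerprint(hits: set) -> str:
    """Server side: the subset of probes that fired is itself a bit vector."""
    return "".join("1" if t in hits else "0" for t in sorted(PROBES))

print(probe_css("abc123"))
print(fingerprint({"hidpi", "wide"}))  # -> "0011"
```

Four traits give only 16 buckets, but real stylesheets can emit hundreds of such rules, which is consistent with the commenter's point that JS is only needed to exfiltrate extra data, not to observe the device.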
Not to be picky, because to be honest I completely believe that what you say is plausible, but that's a lot of outrageous claims with very little in the way of examples or evidence.
In order to believe what you're claiming here, we have to believe that
1. There is magic css/js that can not only tell different browsers and devices apart, but can tell two phones from the same manufacturing run with the same software apart.
2. Despite the fact that this magic code would have to run in the client browser where its content, execution, and the data it sends back are all plainly visible to anyone who can hit ctrl-shift-j, no "public researcher or browser vendor" knows anything about it.
3. This technology is not used to combat ad fraud because of some weird conspiracy at Google.
It could be true, I suppose, but I don't see why anyone would believe this based on the evidence so far.
Just today I decided to switch to FF and try the NoScript experience. It works well enough so far. Funny that the crippled experience is even better in some weird ways. I used to scroll reddit forums; now I can read just the first few posts, and that is good. I used to expand a lot of comments; now I can't expand them, but it saves time. Sure, self-control would be better, but this works too :) It's good to know that without JavaScript I'll send less data to that anti-human industry.
Firefox is playing its trump card, privacy, very well lately. This is very smart, as the competition has no good answer. Equalizing on privacy level would go against their business model, so they won't ever do it wholeheartedly.
> Equalizing on privacy level would go against their business model...
More importantly it would go against the mission statement.
Mozilla isn't around to make money, it's around to make progress toward a mission.
(Search revenue helps fund that, but revenue is not the end goal for Mozilla).
It seems that almost weekly, I am reminded why I love Firefox because of some new thing Mozilla is doing. A lot of good decisions have been coming from them lately.
I finally made the switch from Brave today and I'm never going back. Firefox is just as privacy-conscious, supports built-in tracker and fingerprint blocking, and has full sync, which Brave hasn't implemented yet.
FF is my primary browser, yet people I know who work in security laugh at me, as they claim FF is always the first browser to fall in the hacking contests. I don't know enough about why, but I'd love for that to not be a thing. Taking into account my threat profile (types of sites I visit, JS blocking, etc.), I feel the hacking risk is still a worthwhile trade-off for the lack of tracking.
Can someone paste their results (or at least bits of fingerprinting entropy) from https://panopticlick.eff.org with the latest Firefox?
With the fancy new anti-fingerprinting Safari on macOS Mojave I get just over 14.5 bits of entropy with the most entropic source being my canvas fingerprint (1 in 600).
With Safari on iOS I get 11.71 bits of entropy, with the most entropic value being my screen size and color depth.
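Panopticlick's "bits of identifying information" is just the surprisal of each observed value: a trait shared by 1 in N browsers carries log2(N) bits. A quick sanity check of the numbers above:

```python
import math

def surprisal_bits(one_in_n: float) -> float:
    """Identifying information carried by a trait shared by 1 in N browsers."""
    return math.log2(one_in_n)

# The canvas fingerprint above is shared by roughly 1 in 600 browsers:
print(round(surprisal_bits(600), 2))  # -> 9.23 of the ~14.5 total bits
```

So on that Safari setup the canvas alone contributes about 9.2 of the 14.5 bits, with the remainder coming from the other attributes combined.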
I think it's funny that panopticlick gives me a little red X for not allowing trackers from companies that have "promised" not to track me. I have no incentive to do so, as I do not get any sort of compensation if they are found to be in violation of those terms.
17.62 bits on Firefox, 11.0 on Tor, 17.63 on Chrome.
On Firefox, the big contributors are HTTP headers (my native language is announced), the hash of the WebGL fingerprint, and the time zone.
On Tor, the big contributors are the hash of the WebGL fingerprint and the screen size.
On Chrome, they are system fonts, the hash of the canvas fingerprint, the user agent, and the time zone.
I am not too concerned about fingerprinting in Firefox since I have strict blocking on, uBlock Origin, and separate containers for Facebook and Google. Based on the small amount of data Facebook has on me, all the blocking is working pretty well.
I wonder if these fingerprint checks look for the more stealthy and sinister approaches, like localhost port scanning [1] and specific CSS selector behavior...?
Please note: the fingerprinting protection in this blog post is different from the resistFingerprinting about:config pref which would affect your entropy bits on panopticlick.
Interestingly enough, uBlock Origin stops that site from working; it seems to break the fingerprinting step. If I disable uBlock, I get 16.63 bits of identifying information. Here too the canvas fingerprint is the biggest contributor, in my case 1 in 101154.
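Those two figures are self-consistent: N bits of identifying information means an anonymity set of 1 in 2**N browsers, and 2**16.63 is roughly 101,000, essentially the 1-in-101154 canvas figure. In other words, the canvas alone accounts for nearly this whole fingerprint:

```python
# Inverse sanity check: N bits of identifying information corresponds to
# an anonymity set of 1 in 2**N browsers.

def anonymity_set(bits: float) -> float:
    return 2.0 ** bits

# 16.63 bits -> about 1 in 101,000 browsers, matching the
# 1-in-101154 canvas figure reported above.
print(round(anonymity_set(16.63)))
```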
I've been really impressed with Firefox Quantum for the steps they've taken towards privacy and transparency.
This definitely seems like the edge that Mozilla will have when trying to stand out against Chromium-based browsers going forward (especially now that everyone else seems to base their browser off of Chromium).
As much as I hate more legislation, I think the only way to solve this is to make it very onerous to own and compile this data, and to levy heavy fines (as in criminal charges and/or forcing the company into bankruptcy via 90%-of-revenue fines) in cases where the database is breached.
Everything else will just turn into an arms race between those who don't wanna be tracked and those who wanna track.
If the US did something like the GDPR but in our constitution I wouldn't be surprised if a cottage industry opened up overnight in secure data-warehousing. I get that it would complicate things for small companies but we brought this on ourselves.
> Keep in mind that blocking fingerprinting may cause some sites to break.
This represents the sad state of the Internet we are all living through. I have noticed that when I turn on privacy settings in Firefox, some major websites break and become unusable. The Internet is rampant with tracking and privacy violations, and we consumers are, by and large, passively accepting it.
> I have noticed that when I turn on privacy settings on Firefox, some major websites are broken
In some cases it is because Firefox's tracking protection is based on a curated list of websites [1]. This breaks a site I built called reVddit [2].
In my uneducated opinion, this list is weird. I had some discussion about this with Mozilla devs [3]. In that thread, the devs acknowledged that reVddit is not doing anything wrong; rather, it is reddit that could infringe users' privacy. Yet it is the non-infringing site that is rendered broken.
Further, the devs' suggested remedy is not workable. They propose moving requests to the server, so that reVddit.com makes the requests to reddit.com. There are multiple problems with this:
* It would hide more code from users
* Reddit rate-limits requests coming from a single source
* Infrastructure becomes expensive for what is supposed to be a low-cost website
My conversation with the devs was good, but it needs more. I don't understand their point and they do not seem to understand mine.
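The rate-limit bullet can be made concrete with a toy model. If reddit allows some fixed number of API requests per window per client IP (the quota of 60 below is purely illustrative, not reddit's real limit), browser-side requests let every visitor draw on their own quota, while a server-side proxy makes all visitors share one:

```python
class TokenBucket:
    """Toy rate limiter: `capacity` requests allowed per time window."""

    def __init__(self, capacity: int):
        self.tokens = capacity

    def allow(self) -> bool:
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

QUOTA = 60  # assumed per-IP requests-per-window quota (illustrative only)

# Browser-side: each of 10 visitors spends their own quota (30 requests each).
visitors = [TokenBucket(QUOTA) for _ in range(10)]
served_direct = sum(b.allow() for b in visitors for _ in range(30))

# Proxied: the same 300 requests all drain the proxy's single bucket.
proxy = TokenBucket(QUOTA)
served_proxied = sum(proxy.allow() for _ in range(300))

print(served_direct, served_proxied)  # -> 300 60
```

Under this model the proxy's throughput is capped regardless of how many visitors it serves, which is exactly why "just move the requests server-side" doesn't scale for a low-cost site.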
I haven't had an issue, but I avoid "major websites" like the plague, as they are the modern equivalent (though measles is making a comeback). If a site breaks with good privacy settings, it's a decent indicator you're better off not visiting. If a breaking site shows up on my radar too much, I add the domain to an add-on I made to hide links to it on any page. My HN/reddit/search results/etc views usually have a few blank lines, they're links to domains I have determined I never want to visit ever again. My RSS reader gets a variation of the filter, so they don't show up there either. It feels really good to have the power to remove an entire site from my personal internet.
Serious question. Suppose we have this. I suppose my expectation is that instead of seeing the same ad over and over again, now I'm seeing effectively a random one.
Why is this necessarily better? I guess, personally, I've always thought we should regulate the content of the ads rather than the usage of sufficiently anonymized data for ad targeting.
I've had some issues maintaining reVddit.com while keeping Firefox's tracking protection in mind. I'd love some help if there is anyone who can provide insight.
Basically, you can't load reVddit pages in Firefox, because reVddit accesses reddit's API and reddit is on Firefox's list of websites that are considered trackers [1].
In my uneducated opinion, this list is weird. I had some discussion about this with Mozilla devs [2]. In that thread, the devs acknowledged that reVddit is not doing anything wrong; rather, it is reddit that could infringe users' privacy. Yet it is the non-infringing site that breaks.
Further, the devs' suggested remedy is not workable. They propose moving requests to the server, so that reVddit.com makes the requests to reddit.com. There are multiple problems with this:
* It would hide more code from users
* Reddit rate-limits requests coming from a single source
* Infrastructure becomes expensive for what is supposed to be a low-cost website
My conversation with the devs was good, but it needs more. Is there any solution here, or do we just go our separate ways?
All of this is fantastic. I just hope the day comes, when Google is no longer the default search engine in Firefox. Safari is my default browser, but I use Firefox heavily for “social” media accounts. I love the extensions.
I am posting this again because I still didn't get opinions about it, and I think it is important.
How much of Firefox success depends on donations?
I have seen successful crowd-funding projects where the budget is always transparent and communicated to the public. I am certain this motivates the masses to donate.
Wouldn't it be better for Mozilla to make their funding fully transparent to attract the masses?
[1] https://patents.google.com/patent/US9407661
If everyone blocked it, website owners would have no choice but to move to a different CAPTCHA system.
Great to read something like this on mozilla.org.
https://browserleaks.com/css#explanation
Trying out Firefox now...
[1] https://twitter.com/davywtf/status/1132026581038190592
It just blocks a couple of known scripts based on the Disconnect list.
Of course they don't tell us that in their marketing posts.
Anyone know of a list of the most common values for these, so we could lower our uniqueness by setting our browser values to them?
[1] https://github.com/disconnectme/disconnect-tracking-protecti...
[2] https://revddit.com
[3] https://groups.google.com/d/msg/mozilla.dev.privacy/XO84Ezrw...
[1] https://github.com/disconnectme/disconnect-tracking-protecti...
[2] https://groups.google.com/d/msg/mozilla.dev.privacy/XO84Ezrw...
...did anyone figure out what the hell that is supposed to be, or look like? Why wouldn't they just include a screenshot?
Apparently they just mean the security settings, and selecting Custom there. Except it's on the right. And it's stripes, not a circle. *shrug*
> Clearly, you don't want to throw your computer out of the window and never use the internet again, just to get rid of ads.
https://blog.mozilla.org/firefox/de/loesche-deinen-digitalen...
You lose your timezone, and ALL dates appear in UTC. This is definitely not desirable.
Fingerprinting protection can't be enabled for some sites; things like Android Messages stop working because the site cannot accurately identify the browser.
Hopefully they'll do something about all this.