> Google might agree to let a random online shopping company scan what I’m typing into Gmail, but I did not agree.
Google might, in the sense that they could start, but Google doesn't do (and never has done) what is described.
First of all, Google has never let companies scan what you type. It did let companies target based on content of messages, but that involves advertisers sharing targeting information with Google, not Google sharing email content with advertisers.
Second of all, even that stopped last year. From the Google announcement:
> G Suite’s Gmail is already not used as input for ads personalization, and Google has decided to follow suit later this year in our free consumer Gmail service. Consumer Gmail content will not be used or scanned for any ads personalization after this change. This decision brings Gmail ads in line with how we personalize ads for other Google products.

( https://blog.google/products/gmail/g-suite-gains-traction-in... )
Can we really continue to claim that we're unaware Google, Facebook, and other web companies are monitoring everything they can and sharing the information they collect, sometimes for profit, sometimes accidentally, and sometimes compelled by legal orders?
This isn't merely a legal technicality hidden in the terms of service. We know they're doing it, and by continuing to use the service we are consenting, however unhappily.
If I count the number of times recently I've read and heard people say "Well, switch to Instagram or WhatsApp" in reaction to hearing about Facebook's privacy track record, then the answer to your question is no: the public is largely clueless.
(In case you weren't aware, Facebook owns Instagram and WhatsApp)
Everything about every person (living or dead) is being tracked in near real-time.
I've commented here many times about Seisint (bought by LexisNexis). In the mid-2000s they had uniquely identified everyone in North America and the Caribbean, and were making inroads into Central and South America. Just from public records. One of their customers was the NSA, which also tied in money and phone records.
(My employer was prototyping using Seisint to help uniquely identify patients across heterogeneous orgs. It worked great, but cost too much for our use case.)
Seisint was (is) just one of many "independent" third parties. Today, FAANG and all the little dragons are doing the same.
--
The ridiculous part of the controversy is pretending we don't know who everyone is. Our voter registration database could trivially be near perfect in real time. Every immigrant. Every homeless person. Every missing person (not yet dead).
Etc.
"Dropping off the grid" is not possible.
> Can we really continue to claim that we're unaware Google, Facebook, and other web companies are monitoring everything they can and sharing the information they collect, sometimes for profit, sometimes accidentally, and sometimes compelled by legal orders?
Yes, we can.
1. Some people (mostly technologists) are aware, but many people aren't.
2. Much of the awareness that does exist was not the result of those companies being transparent about their practices. It's the result of inferences based on scraps of information and speculation.
> Can we really continue to claim that we're unaware
Yes, I can. Why?
Because I only have a vague abstract notion that they collect data and do something with it before turning that into cash.
How am I meant to explain this to my friends and family, beyond "they are bad"?
I've never seen a fully worked through example of exactly what they are doing with all this data. Please point me at an article or video or something that everyone else has already seen a hundred times, because I still haven't found that one thing that will convince other people.
We can continue, in the sense that when something benefits us and is provided for "free", we claim ignorance about what makes it possible. But deep down I think people know that something's gotta give, even if the technical/legal/marketing copy says it is all free.
It's the same way people here consider those half-million salaries in the Valley normal without honestly thinking about how they're made possible in the first place. For them, "it's cutting-edge machine learning work" is a good enough answer.
I'd argue that the risk of such vulnerabilities with smaller companies is way higher but they are just not disclosed.
Many people are also fine with the data for free services trade.
I'd like to know the extent and methods of the tracking, and whether I can avoid it by not using their services, cleaning out my browsers, and not using Gmail.
Countless companies every year hire security auditors, and get back a 100-page report in 8 point font filled with vulnerabilities, many of them marked "severe" or "critical." Forcing companies to then publicize those reports will be burdensome and counterproductive.
Forcing them to publicize would also force them to spend the resources to patch them. I'm sure many companies go through those 100 pages and throw out 50 as "wontfix."
Though, if you force them to publicize, you'd probably have to force them to conduct the audits, so they can't avoid the publicity/patching spend by keeping themselves in the dark.
I'm pretty sure you're wrong here. Publicly traded companies have to declare their risks. Cue shareholder lawsuits if an identified risk was ignored and caused material damage.
When some medical contractor misconfigures an AWS bucket and exposes 15,000 medical records we all lose our minds. It doesn't matter if the first bloke to find it was the researcher who disclosed it... We still go nuts. We make fun of the companies who come back and say "There was no evidence that the data was accessed by unauthorized parties." We know full well there's no evidence that the data WASN'T accessed by unauthorized parties.
Please stop pretending this isn't a big deal just because of a hard-on for Google. If you'll put a 10-man company out of business for their complacency and ignorance, you should be lining up at Google HQ with pitchforks over this. They're supposed to be above this. They are hailed as a gold standard.
According to the announcement, Google can only say that no one abused the vulnerability in the two weeks prior to discovery.
> We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug. However, we ran a detailed analysis over the two weeks prior to patching the bug, and from that analysis, the Profiles of up to 500,000 Google+ accounts were potentially affected. Our analysis showed that up to 438 applications may have used this API.
But..."no evidence of abuse" can mean a lot of things.
The original WSJ article shared this version of "no evidence of abuse", and it's not very reassuring.
"Because the company kept a limited set of activity logs, it was unable to determine which users were affected and what types of data may potentially have been improperly collected, the two people briefed on the matter said. The bug existed since 2015, and it is unclear whether a larger number of users may have been affected over that time."
It's also not clear that the activity logs would even have the context to distinguish normal access from unauthorized access.
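The scale of that caveat is easy to quantify with a back-of-the-envelope sketch. The exact dates below are assumptions; the reporting says only that the bug existed since 2015, that it was patched in spring 2018, and that logs were kept for two weeks.

```python
# Rough estimate: what fraction of the bug's lifetime did the
# two-week log window actually cover?  Dates are assumptions
# ("existed since 2015", patched March 2018 per press reports).
from datetime import date

bug_introduced = date(2015, 1, 1)   # exact day unknown
bug_patched = date(2018, 3, 31)     # assumed patch date
log_window_days = 14

exposure_days = (bug_patched - bug_introduced).days
coverage = log_window_days / exposure_days
# Roughly 1,200 days of exposure, of which only ~1% was auditable.
print(f"{exposure_days} days of exposure, {coverage:.1%} auditable")
```

Under these assumptions, the "detailed analysis" could only ever have seen about one percent of the period in which abuse was possible.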
It would probably be useful to explore the distinction between how vulnerabilities are handled in Open Source software, where disclosure is commonplace, versus non-source-available proprietary software, where the opposite is true.
We need disclosure in Open Source because if attention is not drawn to the vuln downstream users won't upgrade, but highly motivated attackers will have what they need. With non-source-available software, there are fewer opportunities for attackers to learn of any vulns.
'No evidence of abuse' is not acceptable grounds for treating it as if there were no abuse. Personally, I consider all three points irrelevant.
I wonder if “Strategic Lack of Log Retention” would make a good conference topic.
The problem with what Google is doing is that they are insinuating that a lack of evidence is evidence of lack. This is not unlike when companies like Equifax claim “we have no evidence...”
We should not be rewarding companies for strategically avoiding culpability.
Google deleted most of the logs that could have contained any evidence. Would you feel the same way if Google employees manually deleted the logs after discovering the breach?
400 third-party apps had access to this info. It's not that "Google knows that there wasn't any abuse" but that "Google doesn't know whether there was any abuse, because it didn't have the proper systems in place to check for that anyway."
It's kind of like you saw no crime happening because you didn't look. But that says nothing about whether or not the crime actually happened.
Let's just assume there wasn't any abuse. Say a bug compromised my bank account but no money was stolen (someone may have looked at my balance and decided I was too poor to be robbed). Do I expect to be made aware? Yes, of course, and I feel data privacy deserves the same level of diligence, because this is still a breach of trust. So ethically, they could and should have at least made a statement and apologized.
We should not equate "no evidence of abuse" with "evidence of zero abuse"; that type of plausible deniability is not going to push improvement in protecting user privacy. Especially in this case, "no evidence" really was a lack of evidence (probably worse), because logs were only kept for a short period of time.
In the legal sense, or based on "industry practice", they might not be _required_ to disclose to the public. But can they, and should they? We have all witnessed Google go above and beyond and do amazing things over the years. I'm a Google fan, and I'm very disappointed by how this was handled.
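The asymmetry these comments describe can be made concrete with a toy Bayesian update. Every probability here is invented for illustration: the point is that if abuse was unlikely to be detected even when it happened, "no evidence" barely moves the needle.

```python
# Toy Bayes update: how much should "no evidence of abuse" reassure us
# when detection was unlikely in the first place?  All numbers invented.
prior_abuse = 0.30           # assumed prior: some app misused the data
p_detect_given_abuse = 0.05  # low: logs covered a sliver of the bug's lifetime
# Assume the logs never falsely show abuse, so
# P(no evidence | no abuse) = 1.0 and P(no evidence | abuse) = 0.95.
p_no_evidence = (1 - p_detect_given_abuse) * prior_abuse + 1.0 * (1 - prior_abuse)
posterior_abuse = (1 - p_detect_given_abuse) * prior_abuse / p_no_evidence

# The posterior barely drops: roughly 29% against a 30% prior.
print(f"prior {prior_abuse:.0%} -> posterior {posterior_abuse:.1%}")
```

With better log retention, p_detect_given_abuse would be high, and "no evidence" really would be reassuring; that is exactly what short retention windows destroy.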
> 1) This data privacy glitch is just like Facebook’s Cambridge Analytica scandal, except it isn’t.
> Well, if it's not, then why even bring it up? That part smells like sensationalism to me.
It's the same type of glitch, except there's no evidence that it was exploited (which is a different statement than it wasn't exploited; it may very well have been).
I think people are thinking too small. Imagine if you could own your data profile and "invest" it into websites or services. Everyone builds their services to accept the same profile format, and the user takes it wherever they please.
This would mean small upstarts could compete with Google and Facebook (who right now have a huge head start on having all this data) by having a better UX.
Right now, everything is trapped in all these different walled gardens. I see it like your cellphone only being able to call cellphones of the exact make and model as your own.
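A minimal sketch of what such a user-owned profile might look like, assuming a hypothetical common format with per-service grants. The schema, field names, and service names are all invented for illustration; no such standard exists.

```python
# Hypothetical user-owned, portable profile with per-service access grants.
# Everything here (schema, fields, service names) is invented for illustration.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class PortableProfile:
    profile_id: str
    interests: list[str] = field(default_factory=list)
    contacts: list[str] = field(default_factory=list)
    # Per-service grants: the user decides which fields each site may read.
    grants: dict[str, list[str]] = field(default_factory=dict)

    def view_for(self, service: str) -> dict:
        """Return only the fields the user has granted to `service`."""
        allowed = set(self.grants.get(service, []))
        return {k: v for k, v in asdict(self).items() if k in allowed}

profile = PortableProfile(
    profile_id="alice",
    interests=["cycling", "privacy"],
    contacts=["bob", "carol"],
    grants={"smallstartup.example": ["profile_id", "interests"]},
)

# The upstart gets interests (to compete on UX), but not the contact graph.
print(json.dumps(profile.view_for("smallstartup.example")))
```

The design choice worth noting is that the grant list lives with the user's copy of the data, not with the platform, which is what would let the profile move between walled gardens.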
How would ownership extend to metadata derived from your profile? For example, one of the claims made during the Cambridge Analytica hearings was that the data that they had in their possession could be used to derive political leanings, sexual orientation, purchasing habits etc.
I'm certain that this is where the value of the data is. No platform genuinely cares about cat pictures and birthday wishes - they care about how likely you are to purchase an advertiser's product. Or, cynically, they care about how many degrees of separation you are from a person under investigation.
This is not data that you've created directly or intentionally.
I've been thinking about this lately. What if the web were fundamentally a collection of data about what a person is doing, with everything else hanging off of that? Right now it's a collection of information you can 'find'. Your photos, web history, and Fitbit would generate your profile, which would be the fundamental part of the web, not the 'addressable places to get JavaScript apps' we have now.
Anyway that's as far as I got. Good luck and let me know how it goes.
I suspect that the government will feel compelled to get involved here, and I'm guessing the default ask of the public is that they do. But is a class action an option? Given that there's no evidence of a breach, does that mean there are no actual damages to claim?
Google is the quintessential evil tech corporation, and the federal government should prevent them from retaining the power they currently hold over the economy and society as a private autocratic monopoly.
I wish Google would let me pay to just have 0 ads and maximum privacy. I would pay a lot for that and I would be a happier user since all my pet peeves seem to come from them dumbing down products so they can fit ads.
YouTube Red is a good start, hopefully this spreads.
fartcannon | 7 years ago
Also Amazon and Apple, too.
ForHackernews | 7 years ago
Put another way, "no evidence of abuse" is not the same thing as "evidence of no abuse".
chubot | 7 years ago
A company can claim EVERY bug was never exploited, and nobody can disprove them.
There is an inherent conflict of interest there.
throwaway5250 | 7 years ago
https://en.wikipedia.org/wiki/Moral_luck#The_problem_of_mora...
ArchTypical | 7 years ago
Who discovers it, or whether you can prove it was abused, is not relevant to the security issue.
endymi0n | 7 years ago
I'd consider that at least a disingenuous reading of "we have no evidence that bad things happened".
amarant | 7 years ago
> 1) This data privacy glitch is just like Facebook’s Cambridge Analytica scandal, except it isn’t.
Well, if it's not, then why even bring it up? That part smells like sensationalism to me.
Things don't get better when we realize there are no indications of any actual leaking of anyone's anything.
The bug this article refers to was pretty bad, and Google's handling of it was indeed poor. But this is just bad journalism.
pleasecalllater | 7 years ago
a. oh no!!!
b. nobody will go to prison
c. a programmer will be fired
d. managers will get bonuses
e. nobody will change the way they write programs, process data, etc.
f. go to point a
jhabdas | 7 years ago
Also, the "this has got to stop" mentality is too soft. That time passed when Uber pulled the wool over everyone's eyes while the CEO stepped down.
We need more Captains, less crew.