So, when is the FTC going to actually bring down the hammer on FB for violating the consent agreement? There's no way this was "unintentional."
At $40,000 per user per day [1], even at just one day of violation, that's a $60 billion fine FB should be liable for. "Under the settlement, Facebook agreed to get consent from users before sharing their data with third parties," so this seems to be EXACTLY in violation of that agreement.
[1] https://www.cnet.com/news/facebooks-ftc-consent-decree-deal-...
*Edit: on second thought, it should be even higher, as each of the 1.5M users had multiple contacts uploaded. So, for example, let's say 1 user had 150 contacts who were not part of the other 1.5M users who had contacts uploaded. That alone should be a violation of the consent rights of those 150 people, so $6 million per day. If every one of the 1.5 million people had, on average, 150 contacts exclusive of the other 1.5 million people who had contact info uploaded, that's a $9 trillion liability for one day of violation.
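For what it's worth, the arithmetic checks out under those assumptions (the $40,000-per-violation-per-day figure from [1] and a purely hypothetical average of 150 unique contacts per user):

    # Back-of-the-envelope only; the per-violation figure and the
    # 150-contacts average are assumptions, not established facts.
    PENALTY_PER_VIOLATION_PER_DAY = 40_000
    AFFECTED_USERS = 1_500_000
    CONTACTS_PER_USER = 150  # hypothetical average of unique contacts

    print(PENALTY_PER_VIOLATION_PER_DAY * AFFECTED_USERS)
    # 60000000000 -> $60 billion for one day, one violation per user

    print(PENALTY_PER_VIOLATION_PER_DAY * AFFECTED_USERS * CONTACTS_PER_USER)
    # 9000000000000 -> $9 trillion for one day, one violation per uploaded contact

Whether a court would actually count each uploaded contact as a separate violation is, of course, the speculative part.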
The FTC has been toothless on this for quite some time now, so I'm expecting no significant action as FB lawyers will defend that no one had data shared with "third parties," technically. Well, shouldn't my contact info shared by a friend with FB be a consent violation as FB is a "third party" from my perspective?
Maybe I'm just ignorant, but I do not really see how this violates the FTC agreement, because it covers Facebook sharing user data (stored/tracked/gathered by Facebook) with third parties.
However, what Facebook did is far worse than violating that agreement. Facebook gained access to user data on third-party systems, to which they should never have had access. They gained this (unauthorized) access (at best without clear consent) on a false pretense (disguising it as a security-related requirement). Then they imported user data, with no relationship to their stated goal/requirement, into their platform.
Associative contact information is a highly valuable commodity to any company involved in marketing and social media. I've seen a lot of people argue that this could have been the result of a lapse of oversight, but that sounds like arguing that a gemstone trader might have "accidentally" stolen a large quantity of rough gemstones while claiming not to have known their value. Even if theoretically possible, it's extremely unlikely that nobody within Facebook knew/realized the value of this data.
Either way, Facebook gained access to highly valuable assets. Even in the unlikely event of a sincere lack of oversight, it would demonstrate a level of incompetence that still warrants holding them criminally liable.
Moreover, Facebook might actually have outright violated the Computer Fraud and Abuse Act (CFAA), in particular the "access in excess of authorization" part, but I'm not sure.
Also, let's see a list of the various FTC settlements with FB. And a list of FTC employees who worked on those settlements now working for big tech.
I know one FTC employee who worked on the 2011 FTC/FB settlement (which required FB to obtain independent 3rd party audits certifying their privacy program for 20 years...never mind the subsequent violations and settlements) is now “head of privacy” for a certain social networking company.
FB's public comments about these remind me a lot of the "5 Standard Excuses" scene in the '80s BBC sitcom Yes Minister, where a civil servant lists the best CYA mea culpas for politicians to use when something goes wrong.
1. It occurred before certain important facts were known, and couldn’t happen again.
2. It was an unfortunate lapse by an individual, which has now been dealt with under internal disciplinary procedures.
3. There is a perfectly satisfactory explanation for everything, but security forbids its disclosure.
4. It has only gone wrong because of heavy cuts in staff and budget which have stretched supervisory resources beyond their limits.
5. It was a worthwhile experiment, now abandoned, but not before it had provided much valuable data and considerable employment.
For those who haven't seen the clip, [1]. Yes Minister is a brilliant piece of satire (though it does have a somewhat unfortunate Thatcher-esque streak when it comes to discussion of unions -- though it would've been difficult to avoid ridiculing unions in satire from the 1980s).
[1]: https://www.youtube.com/watch?v=6Y4PEqvk0Jg
(4) knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value
https://www.law.cornell.edu/uscode/text/18/1030
A criminal investigation into whether or not this was really accidental would be entirely warranted here. If there was intent to access this information without authorized access that is criminal.
> A criminal investigation into whether or not this was really accidental would be entirely warranted here. If there was intent to access this information without authorized access that is criminal.
I don't understand this. Claiming that something is an accident and not intentional usually isn't much of an excuse when it comes to criminal acts.
"obtaining anything of value" could be satisfied by getting personal data which today is akin to profit, but the "intent to defraud" would be hard to prove in court, save for some very broad and dangerous intepretation of "intent" which could equal sloppiness to malice, a precedent that might ruin the lives of honest people who just happen to be clueless sysadmins or developers.
Totally agree though on investigating whether this was really accidental or not; if it was done on purpopse I would expect FB to be hit really hard.
Not a lawyer, but at least in my jurisdiction, fraud requires a monetary loss by the victim.
Generally, civil law is better suited for this sort of thing, no matter how good a pitchfork feels in your hand. As but one of the reasons, the required standard of proof is much lower.
Simply asking for email passwords indicates an intent to gain unauthorized access, and disguising the request as being part of a security-enhancing action eliminates all doubt.
It takes extra work to upload those contacts, which means several managers and developers decided to do it and then spent time implementing it.
For the FB employees reading this: what is your tipping point? Would you say no to that assignment?
- developer A is tasked to create the prompt to ask for the username and password of the email account
- developer B is tasked to call some API to upload contacts from the email account
- developer C is tasked to bind the two functionalities together.
Now replace developers with teams and you see how easy it is for the average developer to underestimate the scope and the ethical bounds of a given task.
>what is your tipping point? Would you say no to that assignment?
When FB stops giving them a check.
At least that has been my experience watching programmers at other companies. Unless ethically bound by regulation and law, few people seem to have ethics.
From the article it sounds like there was a prompt for permission that got removed:
> Facebook told Gizmodo via email that in May 2016 it made a revision to the registration process, which originally asked the affected users for permission to upload contact lists. That change removed the opt-in prompt, though the company did not realize the underlying functionality was still operating in some cases.
It doesn't take a conspiracy to understand how a bug like that could happen.
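To make that concrete, here's a minimal, purely hypothetical sketch (invented function names, not Facebook's actual code) of how deleting an opt-in prompt while leaving the upload call in place produces exactly this behaviour:

    # Hypothetical sketch of the failure mode; every name here is invented.
    def verify_password_with_provider(creds):
        print("verifying", creds["email"])       # the stated "security" step

    def show_contact_upload_prompt(user):
        return False                              # pretend the user declined

    def upload_contacts_from_inbox(user, creds):
        print("uploading contacts for", user)     # the step that needed consent

    # Before May 2016: the upload was gated behind an explicit opt-in.
    def register_old(user, creds):
        verify_password_with_provider(creds)
        if show_contact_upload_prompt(user):
            upload_contacts_from_inbox(user, creds)

    # After the revision: only the prompt was removed; the upload call
    # survived the edit and now runs unconditionally on this path.
    def register_new(user, creds):
        verify_password_with_provider(creds)
        upload_contacts_from_inbox(user, creds)

    register_new("alice", {"email": "alice@example.com", "password": "hunter2"})

Negligent, sure, but it's the kind of diff that sails through review if nobody asks what the downstream call still does.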
“For the FB employees reading this: what is your tipping point? Would you say no to that assignment?”
There is a good chance that they didn’t know how their work would eventually be used. That’s the problem with big companies. Most people are far away from seeing the consequences of their work.
I have an open-ended question aimed mainly towards founders. Would you have any issues in hiring a candidate with Facebook on their resume?
In The Fine Article, it says that the feature was built on purpose, and previously asked for permission. The accident is that it wasn't completely removed.
They did have the upload-your-address-book functionality before they instituted this check. I’m very much hoping to see Facebook suffer for this, but I could conceivably see a scenario where they reused code that did more than they wanted.
It also takes extra work to ask consent. You build it. You don't notice that your confirmation screen fails to trigger. You've just unintentionally uploaded a bunch of data without consent, when your intention was to do it with consent.
It's still pretty darn negligent, but it's easy to see how it could be done unintentionally.
> It takes extra work to upload those contacts, which means several managers and developers decided to do it and then spent time implementing it.
Not really. Facebook is a bunch of autonomous services (registration, access, tracking, activities, etc.) accessing shared databases (chat logs, activities, media uploads, etc.) with some kind of automatic implicit and explicit ACL in place. The suggestion/contact service got access to data provided through the email-not-working-with-oauth-so-let-us-use-automatic-token-delivery-and-confirmation-by-accessing-user-emails flow because it was told a new source of contacts was available for those users. So, not a straight path.
Accident/Blunder > Evil.
Now, GDPR? GDPR. And because of GDPR, those things aren't supposed to happen in Europe.
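A rough sketch of that shape of system, with invented names: once contacts land in a shared store, a downstream suggestion service has no way of knowing whether consent was ever collected upstream.

    # Hypothetical illustration of the shared-store pattern described above;
    # the services and fields are invented, not Facebook's architecture.
    shared_contact_store = []                     # written to by many services

    def ingest_contacts(source, user_id, contacts, consent_recorded):
        # Ingestion service: dumps whatever it was handed into the shared store.
        for c in contacts:
            shared_contact_store.append({
                "user_id": user_id,
                "contact": c,
                "source": source,
                "consent": consent_recorded,      # easy to default wrongly upstream
            })

    def suggest_friends(user_id):
        # Suggestion service: only cares that contacts exist, not how they arrived.
        return [row["contact"] for row in shared_contact_store
                if row["user_id"] == user_id]

    ingest_contacts("email_verification_flow", "user42",
                    ["alice@example.com", "bob@example.com"],
                    consent_recorded=False)
    print(suggest_friends("user42"))              # built from unconsented data

Under GDPR you'd be expected to record a lawful basis for each record like this, which is presumably part of why it isn't supposed to happen in Europe.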
Considering the vast crowds of folks happily working for amoral places like investment banks (see the 2008 crisis and its consequences) or wealth management (rich folks trying to keep as much money as possible from being taxed and used for public spending), the moral bar for the usual smart person is actually pretty low. Optimizing some ads seems pretty harmless compared to that.
As long as you don't see the evil literally being done, i.e. in the form of a row of inmates being sent to gas chambers, there are almost endless ways to persuade yourself that all is actually OK and fine.
First they ask for email passwords. Then the new users assume Facebook won't comprehensively mine their emails. Then Facebook awkwardly gets caught uploading 1.5 million users' email contacts.
It doesn't make sense for people to trust the service at all unless you assume one of two things:
1 - Despite all the outrage on Hacker News, and the NYT stories, our neighbours down the street and family members still don't know how Facebook works or what is done with their data.
2 - They don't care about their data privacy. I've heard this claim many times, but the people saying it often change their minds when they read more news stories. I really do think people have trouble assuming the worst about the intentions of others and are inclined to be trusting.
Group #2 somehow lacks the imagination to see what could go wrong. They will learn when a cause-and-effect consequence of Facebook usage is put in their face. I guess the recent news does not push it in their face enough.
It's like that with skimming, lock picking, server security, infrastructure security, basically everything security related.
>They don't care about their data privacy. I've heard this claim many times, but the people saying it often change their minds when they read more news stories.
"People don't care about a problem initially, then when it becomes graver they start to care"
I try to be an advocate for privacy. I really do. But everyone just calls me paranoid, asks why I need to be worried about my government like I have something to hide, or just stares blankly at me because they can't be bothered to actually think about the words climbing through their ears.
I'm going mental over the explosion of televisions in the last half decade which identify and report any content you watch on the TV by default, in exchange for $100-150 off the television (which was fluff to begin with... it's not a direct trade of $100 for your data).
I've set up about a dozen of these now for people and they just stare blankly while I try to explain what "Auto Content Recognition" means... Hello 1984.
I think the backlash is mostly just delayed. At some point revenue will take a hit because engineers might refuse to implement these "unintentional" and "accidental" features on time.
There is no doubt that the public image of FB is significantly changing - a year from now things will not look better for Facebook than they are today, most likely worse I'd say. This is not something they can turn around anymore - the leadership is not learning anything and repeats the same mistakes over and over again.
> "I really do think people have trouble assuming the worst about the intentions of others and are inclined to be trusting."
I think you hit the nail on the head. Even on HN, it's not uncommon to see a few comments on each negative story about Facebook accusing the media of a conspiracy against Facebook, claiming that the media is wrongly maligning Facebook, which is merely the unfortunate victim of a series of coincidental accidents.
They have trouble accepting that a tech corporation like facebook actually might be rotten.
FB has said they'll be notifying the people whose contacts they "unintentionally" uploaded. How about notifying those contacts whose private details they illicitly obtained that their privacy has been compromised by Facebook? The innocents who signed up for FB and had their contact lists stolen (let's call it what it is) may or may not feel any moral obligation (more likely, they don't even see the issue) to notify their friends/family/plumber whose details they "lost" to a thief.
This seems like a case similar to the Google WiFi data collection. Code written for one reason was reused in a different project without understanding what it would do.
Here’s an example page from 2011 talking about facebook’s old feature to import contacts by providing them your email username and password. This was at a point when many web mail services didn’t offer an OAuth API to do this, so it did make some sense at the time. It was still safer to do a CSV export and then import, but much easier for users to provide the password directly.
https://www.techwalla.com/articles/how-to-import-contacts-to-...
> Type your email address and password for the Web-based email or instant-messaging service that you want to import into the dialog boxes and click "Find Friends."
LinkedIn pulled something similar a few years back. At the time, I was using the same password for both my email and LinkedIn account, and found that people from my email address book were showing up as suggested connections. I can only assume "consent" for this was buried in the T&Cs.
Since FB has gone out of their way to weaponize "friendship", my suggestion to everyone who actually likes to have some standards in their life and doesn't like to be manipulated like that is simple. Just do it back to them. "Unfriend" (IRL) everyone you know who works at Facebook and tell them you will "friend" them back once they leave the company.
This may be an unpopular opinion, but things like this happen. Someone gets the task to implement a login and either doesn't realize they should be using OAuth or is simply too lazy to do so. Next, someone has the idea to suggest friends, so let's grab some email contacts for that purpose.
That stuff happens all the time at small companies. While it's certainly bad practice, it's often not evil intent, but just a lack of technical skills (for the former issue) and a missing sense of potential privacy issues (for the latter).
In the case of a large company like Facebook, one could expect they'd have processes and education in place to prevent such incidents, but I guess this happened a while back when FB was much smaller than it is now.
Not for one second do I believe this was unintentional. Not after all the data scandals where Facebook didn't actively care, or even made the problem worse by not acting in favor of privacy.
I think this company is inherently bad from the top and everyone working there is enabling them. Sure, it pays well.
Problem is, most bigger companies do bad things. See VW and the emissions scandal; I hope Winterkorn and other top managers go to jail for that. Also, I'm biased: for me Facebook and Instagram are pretty useless, and the only useful product they have is WhatsApp...
Can't someone file a class action lawsuit against Facebook?
I mean, it's nice that they are deleting the information now, but they clearly did something wrong, and by basic standards, they should be punished. Deleting the stolen information isn't punishment, and since they probably won't delete any new ad-targeting information they derived from the contacts, they are still profiting from it, so the punishment should be more than just a small fine (which I hope they get).
I'm just sick of them (and other companies) "accidentally" doing something wrong and barely getting a slap on the wrist.
>Facebook says that it didn't mean to upload these contacts
How can you not mean to? It's one thing to say that, were it something tangible, like paper, "Sorry, mate. These pages snuck in with the others. Sorry about that. We'll pull it out. No worries."
Pulling contacts and uploading them is not something that happens passively; it takes active, deliberate steps.
>and is now in the process of deleting them.
So, the question must then be asked: How do they differentiate the sources of contacts associated with an account, unless they're logging that, as well? If they're not logging that, then how are they, presumably, deleting those contacts?
Are we taking bets on Facebook being in the news again, in a month or so's time, for being found not to have deleted them? :)
I don't recall ever hearing that Facebook made a mistake which decreased the amount of data they collected or their usage thereof. Can anyone provide an example?
I'm sure Facebook has had bugs that broke various forms of data collection, or missed data they could have collected. We wouldn't hear about it, but it would be surprising if it hadn't happened.
At some point, some government is going to have to step in and stop Facebook. Five years ago, I would not have believed that I would have supported government action. Now, I’m afraid for the future if there is no intervention.
Phones need better features to entirely prevent these things - so apps can't trick the user. I want no application to have access, something like an Incognito mode for all apps, basically. The permission dialogues are typically not very helpful for making a meaningful decision, and apps often don't function at all without certain permissions. So why not allow users to "fake" contacts, storage, location, etc.?
The majority of apps are just spyware anyway.
How is LinkedIn not under more scrutiny right now? They used to ask for my email password all the time along with re-asking for access to contacts at EVERY LOGIN.
I know this isn’t a contest, but I always felt LinkedIn was twice as scummy as fb.
Why are companies even asking users to provide passwords for unrelated services? For example, when I added an external account on Etrade, they gave me the option of same day verification of that account if I provided them my online banking account credentials.
This practice opens up a significant potential for abuse and should be illegal.