> The researchers (w/ the help of others) are responsible for a browser plug-in called Ad Observer, which allows FB users to voluntarily share very limited and anonymous data about the political ads that FB shows them. You can read about Ad Observer here:
Absolutely preposterous takedown demand. Facebook doesn't get to dictate what software I run on my own client devices, including browser plugins, or even what browser I use. If I want to install a plugin that sends a screenshot or data about every advertisement I receive to a third party of my choice, that's up to me. Or maybe I want to install uBlock Origin and see no ads.
It sounds like they're complaining because they have no way of detecting or preventing this on the user client end - thankfully, because of the way browsers are architected to prevent a website from screwing with the software on your computer. The only way fb could detect or block this would be to force users to install their own fb-written browser plugin, with extensive permissions required.
Obviously fb has a high level of motivation to get every user to use their officially app-store-published Android or iOS app, where the whole experience is centrally controlled and such a plugin is impossible to use, rather than having the user browse Facebook in Firefox or Chrome or Edge.
If I can display something on my own computer screen it's my right to choose to share it however I damn well please.
A lot of people felt Facebook should do more to proactively stop people from scraping and aggregating data from their site after the CA debacle. That is exactly what they are doing here, and you are labeling it “absolutely preposterous”.
Which way is it? Should they let people do whatever they want with their accounts as you suggest, and risk a repeat of the CA fiasco? Or try to proactively stop it like they are now?
“Meanwhile: the NYU app has access to friend data in your feed and friend data is also in the ads it scrapes. And it replaces an actual security model with our trust that NYU are nice people and won't abuse this access. That is exactly how Cambridge Analytica happened.”
Comparing Cambridge Analytica, which harvested data through means that were not transparent to users (and for malicious purposes), to NYU, which has explained what data it collects and why, AND has the consent of its users, seems disingenuous at best.
Cambridge Analytica happened with an app hosted on Facebook. This is hosted on your browser. So it’s not exactly how Cambridge Analytica happened because the trust model is completely different.
Well, what exactly should NYU do instead? There is no API with fine-grained permissions that they can use: To get the data they are interested in (ads), they have to resort to scraping - and a scraper will always have access to all data on the page.
So there is no way for NYU to not have access to friend data if they want access to ad data.
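As a rough sketch of that point (hypothetical field names and helper, not the actual Ad Observer code): a scraper necessarily sees every post in the rendered feed, and any privacy protection comes only from what the extension chooses to keep before uploading.

```javascript
// Hypothetical sketch: the content script unavoidably receives every post
// in the feed, but it can discard non-ad content before anything is sent.
function selectAdsOnly(posts) {
  return posts
    .filter((post) => post.isSponsored)        // keep ads, drop friends' posts
    .map((post) => ({
      advertiser: post.advertiser,             // who paid for the ad
      body: post.body,                         // the ad creative text
      targeting: post.targetingInfo ?? null,   // disclosed "Why am I seeing this?" data
    }));
}
```

The filtering is a policy of the extension, not something the page can enforce - which is the crux of the disagreement.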
And, unlike Facebook which sucks up an ever increasing amount of data on you, this project takes only basic demographic information (age group, gender, ethnicity) and what ads that you're shown. No personal data is retained by NYU.
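Taking that stated policy at face value, the retained record is easy to picture. A minimal sketch of the data-minimization step, with invented field names rather than NYU's actual schema:

```javascript
// Hypothetical sketch of data minimization: keep only coarse demographics
// plus the ad itself; identifying fields are simply never copied over.
function minimizeSubmission(raw) {
  return {
    ageGroup: raw.ageGroup,    // e.g. "25-34", not a birth date
    gender: raw.gender,
    ethnicity: raw.ethnicity,
    ad: raw.ad,                // the ad creative and its disclosed targeting
    // deliberately omitted: name, user ID, friend list, feed contents
  };
}
```

Anything identifying (name, user ID, friend list) never makes it into the object that gets sent.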
I mean, I trust NYU researchers a lot more than I trust Facebook execs.
All these big data-harvesting companies (FB, Google, etc.) start with the false premise that well-informed users have affirmatively chosen to trust that company with their private data.
>"The supposed scandal around the data analytics supplied to campaign groups by Cambridge Analytica was manufactured by people with a political agenda.
>...UK Information Commissioner’s Office has published the findings of its three-year investigation (predating the scandal) into the matter, which concluded there was no illegal electoral interference whatsoever...In other words, the data was commercially available and concerned US voters. The only ‘special sauce’ in CA’s model was the hyperbole of its sales people..." [1]
The left has pushed false narratives and misinformation, making Cambridge Analytica, like Russia, the convenient scapegoat for all the things. The same tricks are in play now with the Hunter Biden laptop coverage, which is non-existent in the MSM.
If I'm understanding Alex right, he's saying that Facebook's 2019 FTC consent decree requires them to limit the personal information collected by apps on the platform.
FB claims automated data collection, but the data is apparently input to the app by a user. I think the hairs will be split between copy-paste mechanics and ten-finger entry in response to a prompt, thus massaging the definition of automated entry.
(1) FB has consistently refused to publish anything about how the ads are targeted.
(2) The NYU researchers have tried to fill that gap, offering the Ad Observer plug-in to users who want to voluntarily donate the ads they see — along with the limited targeting data FB displays to users.
(3) Here’s where things get troubling: Facebook is now trying to shut down the Ad Observer plug-in, saying that it violates Facebook’s terms of service by automating the collection of data that Facebook shows to its users.
I don't understand what the researchers violated - it looks like they didn't sign FB's EULA, nor did they do anything resembling a CFAA violation.
It is the FB users who signed the EULA and use "unapproved" user-agent to access FB services and to voluntarily share the data (isn't FB a sharing platform btw?) in "unapproved" way. Thus FB should go after the real violators - their users. I wonder why FB didn't do it ...
I mean, I can write any stupid EULA, yet until you agree to it, a C&D based on that EULA is just my personal hallucination - and even if you agree to it, your communication/business/etc. partners don't magically become bound by it too.
Facebook says it's a privacy issue. So it doesn't make sense to go after individual users - users can violate their own privacy if they really want - but it might make sense to go after the researchers if Facebook thinks they're tricking people into giving up more privacy than they expect. (I hate to sound like a broken record, because this comes up truly constantly, but the Cambridge Analytica scandal was caused by an academic researcher collecting voluntarily shared data.)
Facebook's complaint about user privacy is really a red herring. I am sure that someone is worried about negative stories written based on the data collected and tasked another person to shut it down in a way that makes the researchers look like a villain and Facebook looks like a hero.
It's well within their rights as a private company, right? We need to have a serious discussion about the power these FAANGs are exerting and we need to stop making excuses for them.
Is it within the rights of a company to dictate what software I have installed on my personal machine? If Facebook wants to ban every account using the software that is within their rights, but they don't get a say in what I have installed.
It's interesting. What would be an "acceptable" mix of demographics for political ads recipients? That is to say, what is the answer to that question where there is no PR story, where Facebook doesn't look bad? If the answer is, none at all... I'm sure we can see where they are coming from.
In my opinion you don't really need to do the NYU study. Intellectually honestly, many political ads will disproportionately appear in front of users with different demographics than their census tracts, regardless of their targeting parameters. In my experience, the demographics of users in many software products are arbitrary, telling you nothing about the content and much more about acquisition channels and technology usage patterns at a particular point in time.
As far as I know, Facebook allows some targeting parameters for political ads. So they should publish how often those targeting parameters are selected. Great, advocate for that.
Intellectually honestly, that will conclusively show that ad buyers have a wide diversity of targeting parameters that, in aggregate, represent a complex mix of objectives oftentimes only adjacent to a specific election. Almost certainly Facebook already looked at this and found that geography, gender, age and proxies for user's race (like "multicultural affinity") are among the top choices, and that looks bad, even though it may be an important part of all ads targeted anywhere.
Is NYU's study going to have enough power to measure targeting in an intellectually honest way? They can certainly write something descriptive.
That descriptive result - "Well, here are some ads we looked at, and some of them disproportionately appeared in front of users with, e.g., this ethnicity more often than others, which we editorially chose" - I can see how that is a lose-lose for Facebook.
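The "disproportionately appeared" comparison being described is, mechanically, just an ad's audience share for a group divided by that group's baseline population share. A toy sketch with invented numbers, not study data:

```javascript
// Representation ratio: how over- or under-represented a demographic group
// is among an ad's viewers, relative to a baseline population share.
// Ratio > 1 means over-represented; < 1 means under-represented.
function representationRatio(adViewsByGroup, baselineShareByGroup, group) {
  const totalViews = Object.values(adViewsByGroup).reduce((sum, n) => sum + n, 0);
  const adShare = adViewsByGroup[group] / totalViews;
  return adShare / baselineShareByGroup[group];
}
```

With 80 of 100 impressions going to a group that is 50% of the baseline, the ratio is 1.6 - but, as the comment argues, that number alone cannot distinguish deliberate targeting from acquisition-channel skew.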
Did they even agree to the ToS? It seems like they wrote their own plugin to harvest the data, so what agreement did they make with Facebook that they are in violation of?
WSJ: "In a letter sent Oct. 16 to the researchers behind the NYU Ad Observatory, Facebook said the project violates provisions in its terms of service that prohibit bulk data collection from its site."
That's the key point here. The researchers are not a party to Facebook's terms of service. The user installing the add-on may be, but that does not bind the add-on developer. (This is called "privity" in law; contract constraints do not obligate third parties who didn't agree to the contract.)
Facebook could disconnect Facebook users using the add-on, if they can detect them. That would be a bad PR move.
I legitimately don't understand how Facebook has any grounds to C&D this application.
It's a Terms and Conditions violation. OK, I get that. But at what point did the developers of this application ever agree to any terms and conditions? It's not like it's accessing data via the API, or requires some kind of privileged access level.
Morals, ethics, security, whatever aside; I just don't understand the legal angle Facebook is using here.
Facebook, Twitter, and Google were built by particular libertarian types who were young and idealistic. They erred on the side of openness and really disliked regulating content. I don't mind them.
The people who are taking over these companies, those people I'm afraid of because they are political and their first instinct is to push ideology and censorship.
Monitoring Facebook needs to work like the black hole photo project (the Event Horizon Telescope).
Large alliances spanning countries and multiple institutions, hundreds of researchers working in tandem. That's the best way to tame a beast this large.
Same goes for regulators/legal strategies/journalism etc. Associations and alliances are key.
Mixed feelings on this. On the one hand, I'm against Facebook censorship, especially of academics. On the other hand, the researchers (and journalists) are going to use this research to browbeat Facebook into more censorship (and specifically censorship of the right, though some non-mainstream progressives will be caught up in the dragnet as well) as they have already done with a ton of hit pieces [1].
What to do, what to do.
[1] Brings to mind the hit piece "The Making of a YouTube Radical" (The New York Times). But there is an ongoing effort to continue pushing Facebook, Twitter, and YouTube into more and more censorship of anyone deemed on the right, and the Overton window keeps getting smaller and smaller. The situation with the Hunter Biden laptop story and the NYPost is absolutely bonkers. Not only do journalists at mainstream center/center-left news outfits not care that Twitter and Facebook outright decided that the story is false and therefore shouldn't be shared by anyone, and banned the account of their colleagues at the NYPost... but worse, they actually applaud it and justify it.
https://adobservatory.org
https://adobserver.org/privacy-policy/
[1] https://telecoms.com/506834/uk-information-commissioner-conf...
Who governs information and speech online? Washington DC or Silicon Valley?
To do what? What does that mean?
This seems like a pretty big stretch.
https://news.ycombinator.com/item?id=24874602 on the WSJ (linked in tweet here) story https://www.wsj.com/articles/facebook-seeks-shutdown-of-nyu-... (paywalled)
Politico: https://www.politico.com/news/2020/10/23/facebook-block-tran...