>But it’s clear now that we didn’t do enough to prevent these tools from being used for harm as well. That goes for [...] data privacy. We didn’t take a broad enough view of our responsibility, and that was a big mistake. It was my mistake, and I’m sorry.
I was going through some old news archives about Facebook and their privacy policies. I came across the dire EFF warning in December 2009 [1]:
>"The issue of privacy when it comes to Facebook apps such as those innocent-seeming quizzes has been well-publicized by our friends at the ACLU and was a major concern for the Canadian Privacy Commissioner, which concluded that app developers had far too much freedom to suck up users' personal data, including the data of Facebook users who don't use apps at all. Facebook previously offered a solution to users who didn't want their info being shared with app developers over the Facebook Platform every time one of their friends added an app: users could select a privacy option telling Facebook to "not share any information about me through the Facebook API.""
Well, it turns out EFF was correct and accurately predicted the unethical scenario of Cambridge Analytica siphoning data from Facebook users who didn't even take their quiz.
The bullet points of "fixes" that MZ outlined don't really address the fundamental problem. Facebook's "data privacy" problem is not fixable if they have to ultimately run valuable ads against that data.
Zeynep Tufekci does a good takedown of Facebook's "14-Year Apology Tour" [0].
It's a long-term, calculated, deliberate strategy to methodically abuse privacy with disastrous consequences, then when caught dead-to-rights, say "oops, sorry, made a mistake, we'll fix it".
This game worked amazingly well for 14 years; will people fall for it again this time?
>The bullet points of "fixes" that MZ outlined don't really address the fundamental problem. Facebook's "data privacy" problem is not fixable if they have to ultimately run valuable ads against that data.
I wholly disagree. They don't have to disclose the data to anyone in order to use it to target ads. Their targeting system works by allowing advertisers to specify targeting criteria, and then using logic on Facebook's own servers to match users to targeted ads. They aren't selling or disclosing the data to anyone. People keep conflating the CA situation with the business of targeted ads. One has nothing to do with the other.
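The distinction this comment draws can be made concrete. Below is a minimal, purely illustrative sketch (all names and data are made up, not Facebook's actual system) of server-side ad targeting: the advertiser submits criteria and gets back only an aggregate count, while the user attributes used for matching never leave the platform's servers.

```python
# Hypothetical sketch of server-side ad targeting. The advertiser never
# sees user records; matching happens entirely on the platform's side.

users = [
    {"id": 1, "age": 34, "interests": {"cycling", "cooking"}},
    {"id": 2, "age": 51, "interests": {"politics", "gardening"}},
    {"id": 3, "age": 29, "interests": {"cycling", "politics"}},
]

def match_audience(criteria):
    """Evaluate the advertiser's criteria against private user data in-house."""
    return [
        u["id"]
        for u in users
        if u["age"] >= criteria["min_age"] and criteria["interest"] in u["interests"]
    ]

# The advertiser only ever receives an audience size, not the user data.
audience = match_audience({"min_age": 30, "interest": "cycling"})
print(len(audience))
```

The point of the sketch: disclosing data to a third party (the Cambridge Analytica case) is a separate operation from using data internally to match ads, which is all the second function needs.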
Your conclusion doesn't really make sense. The EFF was talking about this data being shared with app developers; they were not talking about Facebook collecting this data. Facebook only needs to do the latter to run ads against it.
> That goes for [...] data privacy. We didn’t take a broad enough view of our responsibility, and that was a big mistake.
Judging from [1], this wasn't any "mistake":
>They came to office in the days following election recruiting & were very candid that they allowed us to do things they wouldn’t have allowed someone else to do because they were on our side.
This was a deliberate policy of allowing people "on their side" to access and use these data. Now, when it turned out people on the "other side" can do it too, it became a "big mistake" suddenly.
"What We Are Doing" under the "Cambridge Analytica" section is crap, i.e. lightweight or hypothetical (e.g. "we’re in the process of investigating every app that had access to a large amount of information before we locked down our platform in 2014").
By contrast, the same section under "Russian Election Interference" is well thought out. There's some hand-wavy stuff (e.g. "in the U.S. Senate Alabama special election last year, we deployed new AI tools that proactively detected and removed fake accounts from Macedonia trying to spread misinformation"). But requiring "every advertiser who wants to run political or issue ads...confirm their identity and location" and mandating the ads "show...who paid for them" is meaningful. That they're "starting this in the U.S. and expanding to the rest of the world in the coming months" is more encouraging. I'm also genuinely optimistic about their "tool that lets anyone see all of the ads a page is running" and "searchable archive of past political ads."
With Cambridge Analytica, a core component of Facebook's advertising business model is threatened. Hence the inaction. With Russia, Facebook and political advertisers' interests are aligned. Hence, action.
I think the result of this may be regulation, or even breaking Facebook up. A few months ago I couldn’t have imagined feeling justified saying that was likely, but my god this story has legs. On CBS news (which is mediocre at best) they had someone from Mozilla explaining that this whole mess isn’t a breach or mistake, but their business model. The piece played at least three times in an hour.
This feels like a real change in public awareness to me. I’ve never seen this kind of real talk about privacy in the non-technical press before, and it just keeps going and going. How can Facebook thrive as people become aware of just how crooked they are?
Validation of genuine political identity is a logical next step. One that will require an army of human curators (not an AI problem).
However, I wonder about the reliability of "PO Box" as proxy to political identity.
Or how FB is going to stop Mom-and-Pop retail pages from advertising false political messages ("2-for-1 sale! All proceeds go towards stopping the Trump-ordered seal beatings in Antarctica!")
> With Cambridge Analytica, a core component of Facebook's advertising business model is threatened. Hence the inaction. With Russia, Facebook and political advertisers' interests are aligned. Hence, action.
I think more conspiratorial thinking is necessary. With [fake] political ads, the ability of FB to usurp the ruling class in the US is apparent. In this case, it was Russia but in another hypothetical case it could be FB themselves (picking and choosing the propaganda err messages to show). Since the powers that be won't take kindly to that, action.
The powers that be don't actually care about privacy, in fact they actively don't want people to have privacy. Hence, inaction for CA.
Just storing it is only a tiny fraction of the cost. It requires at least a datacenter full of systems + supporting staff to actually collect it and make use of it.
This line of thinking is extremely reductionist to the point of uselessness. Anyone who's built products at scale can tell you this. There are thousands of things you aren't thinking of.
>"From now on, every advertiser who wants to run political or issue ads will need to be authorized. To get authorized, advertisers will need to confirm their identity and location. Any advertiser who doesn’t pass will be prohibited from running political or issue ads. We will also label them and advertisers will have to show you who paid for them. We’re starting this in the U.S. and expanding to the rest of the world in the coming months."
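The policy quoted above amounts to a simple gate on ad review. Here is a hedged, hypothetical sketch of that rule (the function and field names are invented for illustration, not Facebook's actual implementation): political or issue ads run only for advertisers with verified identity and location, and every approved political ad carries a "paid for by" label.

```python
# Hypothetical sketch of the authorization gate described in the quote.

def review_ad(ad, advertiser):
    """Approve or reject an ad per the stated political-ads policy."""
    if ad["is_political"]:
        if not (advertiser["identity_verified"] and advertiser["location_verified"]):
            # Unverified advertisers are prohibited from political/issue ads.
            return {"status": "rejected", "reason": "advertiser not authorized"}
        # Verified political ads must disclose who paid for them.
        return {"status": "approved", "label": f"Paid for by {advertiser['name']}"}
    # Non-political ads are unaffected by this policy.
    return {"status": "approved", "label": None}

verified = {"name": "Example PAC", "identity_verified": True, "location_verified": True}
unverified = {"name": "Unknown Buyer", "identity_verified": False, "location_verified": True}

print(review_ad({"is_political": True}, verified))
print(review_ad({"is_political": True}, unverified))
```

Note that the entire enforcement burden falls on the `is_political` classification, which is exactly the definitional question raised further down in the thread.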
There are so many other ways to "influence" people's opinions by distorting what they see on their news feed, front page Google results, etc.
For instance (from Wikipedia):
> In April 2016, Correct the Record announced that it would be spending $1 million to find and confront social media users who post unflattering messages about Clinton.[1][4] The organization's president, Brad Woodhouse, said they had "about a dozen people engaged in [producing] nothing but positive content on Hillary Clinton" and had a team distributing information "particularly of interest to women".
This crafty term "political or issue ads". Is that a legal / enforceable term, or is this something that Facebook gets to decide?
Whatever it is, we're going to see that "ads which influence the public for political aims" are going to bleed juuuust on the other edge of that definition.
Also interesting that Facebook's response to this issue is to collect more data -- this time about "advertisers with specific political agendas", which seems like an interesting database to mine (although maybe not that hard to collect, idk).
Also, extremely scary. This means that if you want to take any political action, you need to entrust your most detailed personal identification to Facebook. And if Facebook is later served with a subpoena to disclose these data, they will be only too happy to oblige. It may not be too much of a concern in a country with a functioning democracy and strong judicial protections, but in countries where you could disappear for criticizing the government it makes Facebook completely unfit for use by anyone but government propaganda outlets. It will also have chilling effects on speech even in the US: there are numerous examples of people being subjected to bullying and personal-destruction campaigns for publishing, or even just sponsoring, messages that some influential groups did not like. Of course, Facebook has no legal responsibility to support this, or any kind of, speech, but the elimination of this kind of speech from any public place hurts the democratic process much more than the minuscule influence of some bad actors.
Anonymous or pseudonymous political speech is very valuable for a robust democratic debate, and it is extremely sad - though not exactly surprising - that it is being eradicated under the guise of "transparency" and "protecting the elections from foreign influence" and all kinds of bullshit like that.
Was the problem political ads though? I think a lot of the content shared across Facebook was meme-like images that received shares and likes organically after an initial push by bots and Russian facebook users?
"For even greater political ads transparency, we have also built a tool that lets anyone see all of the ads a page is running [and are] creating a searchable archive of past political ads."
"In order to require verification for all of these pages and advertisers, we will hire thousands of more people."
I don't see (yet?) how this directly solves the problem.
Let's say I'm a foreign intelligence agency and want to influence people through ads. Feels like I just need to set up a shell corporation based in the states. Once I'm authorized, I can load the creative offshore.
I actually feel for anyone who buys ad space for politics on Facebook, because Facebook obviously doesn't handle it particularly well. Tons of times in 2016 I saw ads for someone running for the state legislature of another state entirely (one I had never even been to or driven through). What value is an impression if the receiver couldn't vote for you if they wanted to? I did actually comment on a couple of them, wishing them good luck from a guy out of state and pointing out that the ad was showing in my state, in the hope they could recoup some of their advertising costs.
I wonder how this will go together with GDPR. To show paid ads, Facebook does not need to know the identity of the advertiser, so advertisers should probably have a right to refuse to provide it under GDPR, right?
Edit: I mistook this from being a government action, not a private company action. I totally agree a private company is allowed to restrict things in this way.
>Any advertiser who doesn’t pass will be prohibited from running political or issue ads.
How does this interact with the First Amendment? If I'm paying a local TV company to run an ad, wouldn't restricting my ability be a violation? What if I buy the local TV company and choose which ads are run?
Edit: What about non-political or non-issue ads that still have a political component? For example, if I were a billionaire wanting to stoke certain political divides, I could definitely create ads that still cause great controversy: spend some money developing a bulletproof school outfit, say, and then advertise it heavily. It's a bunch of extra work and expenditure I wouldn't undertake if I could just run gun-control ads, but if those were banned, are you going to ban any advertisement for merchandise related to political issues?
>We also learned about a disinformation campaign run by the Internet Research Agency (IRA) — a Russian agency that has repeatedly acted deceptively and tried to manipulate people in the US, Europe, and Russia. We found about 470 accounts and pages linked to the IRA, which generated around 80,000 Facebook posts over about a two-year period.
>Our best estimate is that approximately 126 million people may have been served content from a Facebook Page associated with the IRA at some point during that period. On Instagram, where our data on reach is not as complete, we found about 120,000 pieces of content, and estimate that an additional 20 million people were likely served it.
This part seems rather interesting. The number of people who viewed the content created by the IRA is appalling.
Is this appalling? What's appalling to me is the number of people who can't tell the difference between fantasy and reality.
I'm sure governments are going to come down hard on Zuck for "allowing disinformation to be spread", but won't give a second thought about cutting education budgets.
It's hard for politicians to make it illegal to lie or to run a platform where people might lie. People are always going to lie and try to deceive others. It would be more effective for these politicians to actually educate their constituents. This is just one of many benefits of having educated citizenry...
> In 2007, we launched the Facebook Platform with the vision that more apps should be social. Your calendar should be able to show your friends’ birthdays, your maps should show where your friends live, and your address book should show their pictures. To do this, we enabled people to log into apps and share who their friends were and some information about them.
Anyone who was in a FB sales meeting when OpenGraph was launched knows this is a very calculated understatement. I've heard them explicitly sell the information of an entire user's friends list to anyone willing to pay.
> "It’s not enough to just connect people, we have to make sure those connections are positive. It’s not enough to just give people a voice, we have to make sure people aren’t using it to hurt people or spread misinformation."
And be sure to report your fellow citizens for reeducation when they spread "negativity" and "misinformed opinions".
Before anyone says Facebook's testimony is just fluff, we're a business that relies on Facebook's API to monitor activity, and we've been severely impacted. We no longer can get data as before - it's having a large effect on our business. So we definitely feel that action is being taken.
I am not greatly concerned with Facebook's privacy policy. People voluntarily give up their personal information in order to use this free service.
I am more concerned about his identification of "fake news" and "hate speech" as issues without any clarification. These are both subjective descriptions with the potential for damaging abuse, which we are already seeing with the banning of Diamond and Silk from Facebook.
In my opinion, Facebook, Google, and other social media platforms should be required to be content neutral and their users given First Amendment protection (with its concomitant limits). This would require federal legislation.
Seems weak, I hope some of the representatives have read Tim Wu.
For example when Zuck says:
>My top priority has always been our social mission of connecting people, building community and bringing the world closer together. Advertisers and developers will never take priority over that as long as I’m running Facebook.
This is either misleading or a lie. Facebook is a corporation; what Zuckerberg wants to do as an individual isn't even relevant. What's relevant is what the corporation "wants" to do to increase profit. So far that has been connecting people and building communities, then taking that information and selling it to developers and advertisers. The phrasing places community and monetization in competition, but Facebook's model isn't competition between the two; its model is the monetization of community. So the testimony never speaks to the fundamental problem of having advertisers as any level of priority here.
Also to say that Cambridge Analytica abused the system sidesteps the issue of whether it was in Facebook's interest to allow this data to be collected and then misused. Cambridge Analytica purchased data which made Facebook the most attractive advertising platform for them because of the targeting they could do. That is good for Facebook's bottom line. They also don't address the total amount of information that may be out there, floating through advertisers outside of Facebook's control right now. Despite new restrictions how much about me is already out there? That's what I want to know.
Overall, it's basically what I expected but I really hope there are a few representatives who can set aside the specifics of this one incident and attack the wider notion of Facebook's purpose.
Interesting. He really puts Facebook into the role of policing its users (and advertisers). He describes taking down thousands of fake accounts, investigating whether certain pages have connections with the IRA, and taking them down if so.
This will probably sound reassuring to legislators, but it pretty much permanently accepts the burden of responsibility for misinformation on the site. It sounds likely to force Facebook into a costly permanent arms race against every malicious political actor in the world. I wonder how regular users will get caught in the crossfire.
Not that I feel bad for FB here -- they make tons of money because they have the most captive attention of any entity in the world. Attention is valuable because it allows platforms to influence people's behavior toward certain actions. Advertisers are not the only ones who realize this, and FB has to take responsibility for all forms of influence on its site, not just commercial ads.
As shitty as Facebook is as a service, I don't think they should have any legal obligation to 'protect' users from their own neglect.
I don't use facebook, because I think a free-software based & decentralized social network would be the right way to do social networking, but if the rest of the people want to give away their info, fine by me.
If someone I know wants to give my info to fb, it's fine too, but he/she loses part of my trust.
I value the freedom to do whatever you want with the data you have more than the convenience of the government protecting whatever data I foolishly gave away to someone.
I stopped reading at that paragraph. Zuckerberg is seriously out of his league.
>> It’s not enough to just connect people, we have to make sure those connections are positive.
Does he really think he can define what "positive" means on a platform hosting hundreds of millions of communications? There is no positive; there's human nature. Regulation by FB itself won't work, and worse, it will be a tyranny. The walled garden of FB will mean gated psychology, sociology, etc. FB could be great if it were run by people who think about people, not shareholders...
All this says is they don't have control over their network and they're just bleeding out data everywhere. The primary takeaway especially with regard to the election tampering is that this (Facebook) is a huge, free, open tool to abuse and control elections.
Your credit data has been stolen, your health data has been stolen, your government records every word you say online, your "smart" fridge is DDoSing Wikipedia, and yet you worry about being unreachable by silly spam that may or may not affect your vote for one of two equally horrible politicians.
"This is me telling you what you want to hear, but I don't mean a word of it and I will continue to get away with whatever I can until I get caught at which time I will tell you what you want to hear again."
Surprised at Zuckerberg/Facebook's response to all this. The cynical part of me says it's only due to the negative PR all this has generated but I hope this works out for better privacy for all those who continue to use Facebook.
I wonder if other tech companies will be called on to testify: Dorsey, Page, et al.
[1] https://www.eff.org/deeplinks/2009/12/facebooks-new-privacy-...
[0] https://www.wired.com/story/why-zuckerberg-15-year-apology-t...
iooi | 8 years ago:
You can donate here: https://supporters.eff.org/donate
[1] https://twitter.com/cld276/status/975568208886484997
polarix | 8 years ago:
Do you have a citation for this? I've been trying to clarify the argument recently and am looking for other perspectives.
s2g | 8 years ago:
Read in the voice of the South Park version of Tony Hayward.
oconnor663 | 8 years ago:
If they already disabled the API that CA was using, is "inaction" really the right word?
VikingCoder | 8 years ago:
1) Go to pcpartpicker, pick Storage, sort by Price/GB:
https://pcpartpicker.com/products/internal-hard-drive/#sort=...
3TB (for $57.50)
2) Go to Google, look up number of people in the United States:
https://www.google.com/search?q=us+population
325.7 million
3) Divide
You can store 9.2 kilobytes for every single person in the United States, for just $57.50.
9.2 kilobytes is actually a pretty decent amount of data. For comparison, this is Chapter 40 of Pride and Prejudice, at 9kb:
http://www.kellynch.com/e-texts/Pride%20and%20Prejudice/Prid...
The complete works of Shakespeare fit into 5 MB.
To store the equivalent of the complete works of Shakespeare for every person in the US would cost: $31,222.50.
Heck, I know small businesses that could afford that, let alone the mega-corp media conglomerates.
How much data does Verizon store about me? Comcast? Target? Visa?
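The back-of-envelope arithmetic above can be checked in a few lines. This is just a reproduction of the commenter's own numbers (3 TB drive at $57.50, US population of 325.7 million, ~5 MB for the complete works of Shakespeare), not a claim about what any company actually stores.

```python
# Verify the storage-cost estimates from the comment above.

DRIVE_BYTES = 3 * 10**12        # one 3 TB drive (decimal TB)
DRIVE_PRICE = 57.50             # USD per drive
US_POPULATION = 325_700_000
SHAKESPEARE = 5 * 10**6         # ~5 MB for the complete works

# Bytes available per person from a single drive.
bytes_per_person = DRIVE_BYTES / US_POPULATION
print(round(bytes_per_person / 1000, 1))   # kilobytes per person

# Drives needed (rounded up) to store 5 MB per person, and total cost.
drives_needed = -(-US_POPULATION * SHAKESPEARE // DRIVE_BYTES)
print(drives_needed, drives_needed * DRIVE_PRICE)
```

The numbers come out as stated: roughly 9.2 kB per person for one $57.50 drive, and 543 drives ($31,222.50) for Shakespeare-per-person. As the replies note, storage is only a small fraction of the real cost of collecting and using such data.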
corrigible | 8 years ago:
Interesting!
quantumwoke | 8 years ago:
Sounds like Facebook has a lot more work to do.
pjc50 | 8 years ago:
It doesn't apply to online ads, which has become a serious problem with the amount of money flowing into Brexit from illegally-anonymous sources.