Couldn't fit the whole tweet, so just to be clear, she's talking about manual, non-legal requests:
> One of the surprising things about working on the slander series is how few people in the field, even experts, know that Google voluntarily removes some search results. (No court order needed!) You have to visit this generic url: https://support.google.com/websearch/troubleshooter/3111061?...
Down the thread, she adds this:
> Because so few people know about it, "reputation managers" are charging people like $500 a pop to "remove damaging information from Google results." And ALL THEY DO is fill out that form for free. Someone tried to hawk this service to my husband after he came under attack.
I didn't know about this service URL, and had just assumed "reputation managers", if they did anything at all, were limited to SEO spamming.
edit: Searched HN for mentions of this link and found exactly 5 results, 3 of which look unique, the earliest from Dec. 23, 2015: https://imgur.com/EEUJn9M
> Because so few people know about it, "reputation managers" are charging people like $500 a pop to "remove damaging information from Google results." And ALL THEY DO is fill out that form for free. Someone tried to hawk this service to my husband after he came under attack.
While that sounds like a high price, and you usually don't want to hire the first person you see advertising a service, the concept of specialization, and of some people simply not wanting to devote the time to becoming experts in everything, seems perfectly fine and normal.
People often pay electricians even for tasks that are just "flip a switch and turn some screws," after all.
> are charging people like $500 a pop to "remove damaging information from Google results." And ALL THEY DO is fill out that form for free. Someone tried to hawk this service to my husband after he came under attack.
'ALL THEY DO' is what anyone offering a similar service does. You are paying for what they know that you don't. There is nothing wrong or deceptive about making money this way (the phrase 'all they do' seems to imply there is). Plenty of people are busy and willing to pay to have someone else handle the details. Google could easily publicize this, but they choose not to.
Here is the thing: someone charging $500 (or any amount) has already filtered the people who feel they have a real need from everyone else attempting something similar. You could argue the fee should be lower, but the friction caused by the higher fee (for those who can afford it) is worth it. Those who can't afford it can, as with anything, put in the research and effort to achieve the same result.
People often expect everything to be free and nobody to make money off of knowledge. By the way, if reputation managers are charging $500, there is nothing to prevent someone from doing the same for less and changing the market price. I'd also imagine the reputation managers massage some things to get the job done, something many people wouldn't know how, or want, to do.

It's like I tell my boss: "You don't pay me to push buttons. You pay me to know which buttons to push." (I think I stole it from the plumbing industry.)
They are fairly narrow categories though, and don't handle things like mugshot sites, even for something you were arrested for but not convicted of. Still useful to know, for sure.
I'd like to take this opportunity to share some relevant excerpts from In The Plex[^1] about Eric Schmidt, then-CEO of Google.
> One day Denise Griffin got a call from Eric Schmidt’s assistant. “There’s this information about Eric in the indexes,” she told Griffin. “And we want it out.” In Griffin’s recollection, it dealt with donor information from a political campaign, exactly the type of public information that Google dedicated itself to making accessible. Griffin explained that it wasn’t Google policy to take things like that out of the index just because people didn’t want it there. “Principles always make sense until it’s personal,” she says.
> Then in July 2005, a CNET reporter used Schmidt as an example of how much personal information Google search could expose. Though she used only information that anyone would see if they typed Schmidt’s name into his company’s search box, Schmidt was so furious that he blackballed the news organization for a year.
> “My personal view is that private information that is really private, you should be able to delete from history,” Schmidt once said. But that wasn’t Google’s policy...
I guess they've since changed the policy a bit?
[^1]: https://www.amazon.com/Plex-Google-Thinks-Works-Shapes/dp/14...
To Google's credit, it's literally the first search result for "how do I remove myself from Google". It is a bit ironic that people looking to remove something for being too easy to find via search are stymied by a simple search.
Still, good to provide some visibility about it. I certainly never knew this was a thing.
The problem is they almost never actually honor these requests. I know several victims of stalking (including myself) who have had no luck with these forms, even when we've met the narrow and extreme criteria required, including doxxing, death threats, etc. And if the forms don't work, there is no option B: courts will reject any challenge to remove content due to Section 230, and if the courts won't take it down, then Google won't either, and you're stuck dealing with it indefinitely.

It's been a special kind of hell that I wouldn't wish upon my worst enemy. It impacts your reputation, your career, your friends, etc. It makes you suspicious of everyone, because you don't know when it's just another stalker digging for more dirt. Section 230 really needs a carve-out for cyberstalking and extortion.
This is an example of a large central authority censoring information.
How does the notion of a purely distributed, unregulated, uncensorable, blockchain-backed internet handle "revenge porn" or other genuinely harmful content?
An argument I hear from the crypto community is that blockchain is good because it enables freedom of speech that can't be banned by governments or other central authorities.
The crypto community needs to address the other side of that coin too. Are there circumstances when something should be banned, and how does that work on a blockchain?
https://en.wikipedia.org/wiki/Nth_room_case
Everyone, please DO NOT underestimate people's malicious will to abuse vulnerable minority groups using unmoderated platforms as their primary tool. Due to the lack of basic moderation on Telegram, those victims were severely abused, to an unrecoverable degree, for more than two years. This could have been at least mitigated if Telegram had simply closed the room based on user reporting, which they refused to do.
> Are there circumstances when something should be banned, and how does that work on a blockchain?
My answer is No.
The "genuinely harmful content" is minimally harmful compared to the damage censorship can do.
The problem is always with people being dicks, not with the information. E.g. a jilted lover posting porn videos of an ex is only a problem because the lover will deliberately attempt to show them to the ex and the ex's social contacts (and even that is only harmful because some people are judgemental bastards. Yes, some people like scat, get over it), so target the lover with harassment charges rather than making the information distribution illegal.
I'm a bit of a freedom-of-information purist; I think a world with zero censorship would be better than the current one, as censors will always be tempted to censor more and more things to advance their interests. The maximum badness from zero censorship is much, much lower than the maximum badness with censorship.
Since none of us are the operator in the matrix, we're not looking at the raw bytes of the blockchain, we're always using some view layer over the data. So the view layer can choose to not see the content of some block.
This could be done by having community operated blocklists to the effect of "you don't want to know what's at these locations" - you can't take back what's been said on chain, but you can cover your ears.
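As a purely hypothetical sketch of that view-layer idea (the names and structure here are illustrative, not any real chain's API): each client keeps a community-maintained blocklist of content hashes and simply skips those blocks when rendering, leaving the underlying chain untouched.

```python
import hashlib

# Community-operated blocklist: SHA-256 hashes of content
# "you don't want to know what's at these locations".
BLOCKLIST = {
    hashlib.sha256(b"some harmful payload").hexdigest(),
}

def visible_blocks(chain, blocklist=BLOCKLIST):
    """Yield only blocks whose content hash is not blocklisted.

    The chain itself is immutable and unmodified -- the view layer
    just 'covers its ears' for blocklisted content.
    """
    for block in chain:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in blocklist:
            yield block

chain = [b"ordinary post", b"some harmful payload", b"another post"]
print(list(visible_blocks(chain)))  # the blocklisted payload is filtered out
```

Note that this pushes the moderation question down to "whose blocklist do you subscribe to?" rather than eliminating it.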
The honest answer here is that I don't trust any authority to be the "authority on information" and to be able to decide what I may or may not be able to see. I expect the search engine to be a search engine and not an "allowed material" repository.
History shows that an authority on anything will be abused sooner or later, and if you live in a totalitarian country, the technical ability to censor information is the last thing you want anyone to have.
I seem to lean quite far toward the freedom side of most arguments, but I do acknowledge there are times when action may need to be taken in the interest of the public. My objection is that I want neither for it to be impossible to take action, nor for a foreign company to unilaterally decide what action to take.
What we need in situations like this is a legal process. If one doesn't exist, it's not for companies to start deciding what information the public should have access to and what is "harmful", but for democratic countries to pass laws with the consent of their local electorates to decide what legal protections and processes need to be put in place.
The problem we have today is that there are too many foreign companies deciding what we can and can't say or do. Crypto has the exact opposite problem, but unless our governments step up and regulate these companies in the interests of the public, our only option (if you don't agree with the censorship) is to create something uncensorable.
...what? I've known of this for at least a decade. Do people expect Google to be some free-for-all of unmoderated knowledge that can be used to defame and degrade others?
https://news.ycombinator.com/item?id=25972121
https://www.nytimes.com/2021/01/30/technology/change-my-goog...
The story focuses on a software engineer who discovered he was the slander target of someone his father had fired 30 years earlier. He found her identity and took her to court in 2018, but the libelous Google results didn't change. The NYT story even ends with this:
> Yet even that hasn’t solved the problem. See for yourself: Do a Google search for “Guy Babcock.”
A day after the NYT story, those results disappeared. Here's what they looked like: https://news.ycombinator.com/item?id=25973045
If a software engineer with the resources to find and take an anonymous libeler to court didn't know that Google could manually intervene and remove results listing him as a pedophile, I assume the vast majority of people are equally unaware.
Cool, the section "Remove content about me on sites with exploitative removal practices from Google" seems custom-crafted to handle sites like RipoffReport. https://support.google.com/websearch/answer/9172218
Which makes one wonder why Google doesn't simply make it a policy to delist sites like RipoffReport and the mugshot harvesting sites (and public-record-PII-publishing sites) as spammy dictionary bullshit in the first place and be proactive about this.
They run the index; they get to decide what's in it or not.

Can someone be a complete dirtbag and request that legitimate criticism be removed, simply because they don't want folks to know they're a dirtbag? Can convicts request that results be removed? How about sex offenders? How about people convicted of domestic violence assaults or similar?
What I was wondering is how they verify that the person requesting removal is actually the person concerned. Could you technically have all results about someone you don't like removed? I could see that harming businesses, if your name is often associated with the websites around your business.

Disclaimer: Work at Google in this area.

You can still go to individual services or your local municipality and ask.
https://www.bpi.co.uk/news-analysis/bpi-sends-500-millionth-...
https://transparencyreport.google.com/copyright/reporters/18...
It raises the question of how they find that many hits to remove; they must be spamming the hell out of the index, and Google must be allowing it or providing an API.
I'm a dox victim and live in Europe. I've tried to use my right to be forgotten and used Google's removal request, to which the answer was:
"It is Google’s understanding that the information about you on this URL - with regard to all the circumstances of the case we are aware of - is still relevant in relation to the purposes of data processing, and therefore the reference to this document in our search results is justified by the public interest.
Based on the information available to us at this time, Google LLC has decided not to take action on this URL."
I've now tried the other URL, but I doubt this will help.
What's also surprising is how bad Google is at processing these requests. It's almost like a PR stunt. I've had to use their EU Privacy Removal form in the past, and a response can take anywhere from a few days to several weeks, or never come at all. Half the time it seems like you're emailing a bot, as I've received the same canned reply to simple inquiries. In the end I just gave up.
https://help.archive.org/hc/en-us/articles/360004716091-Wayb...
> The Internet Archive may, in appropriate circumstances and at its discretion, remove certain content or disable access to content that appears to infringe the copyright or other intellectual property rights of others.
In practice, IA will unpublish content on request to [email protected], AFAIU.
On a slightly unrelated but nevertheless interesting issue: a while back I wanted to find an “incel” board to see how discourse thereon actually was, and Google returned no direct results to any of them but DuckDuckGo immediately returned the results one might expect when searching “incel forum”.
That's particularly sad. It is proof that the incel community really does enough "damage" that they have to be suppressed. I am curious as to when this was removed, and why it was removed.
It really wouldn't be so much of a problem if Google didn't command 90%+ market share in many countries.
Anecdotally, I remember a fine example from a few years ago when Matt Cutts proclaimed they had algorithmically solved spam for queries such as payday loans. A day later, a result appeared with Mr Cutts himself selling payday loans. It was removed within 24 hours.
[0] https://www.seroundtable.com/google-payday-loan-cutts-16940....
If people don't know about this, they haven't been paying attention. I find the idea that 'even experts' (haha) don't know about this rather hilarious.
Few people know that Google Search also voluntarily censors search results for political reasons, probably at the request of the CIA/State Department. Such blacklists have appeared on WikiLeaks. This was especially ironic given that Google employees fought the very same feature in "Dragonfly", the search product for the Chinese market (which blacklisted Tiananmen and such).