top | item 23952084

In a GPT-3 World, Anonymity Prevents Free Speech

86 points| riverlong | 5 years ago |jayriverlong.github.io | reply

79 comments

[+] brokenkebab|5 years ago|reply
It's one of those discoveries of "the end of X as we know it" — except the thing in question has actually existed for quite a long time already.

The author somehow misses the fact that governments have been engaging a small number of opinion-making professionals (journalists, advertisers, writers, celebrities, etc.) to create the feeling of majority support ever since majority opinion became important. State-controlled TV and radio have existed for decades, and they always speak on behalf of the "majority" which supports the government.

And in more democratic countries, the small number of people who produce the news or control social platforms can do the same out of solidarity, shared economic interests, or just because.

To say that thousands of generated Twitter accounts are more powerful than Twitter itself, or even more powerful than TV channels were just 30 years ago, is an obvious exaggeration.

We have been living with this for quite a long time already, and while it often works, it can't be kept up forever, because there will inevitably be a cultural response: people stop believing, or just stop paying attention. Examples are all over recent history.

[+] joe_the_user|5 years ago|reply
Definitely.

Moreover, the claim that GPT-3 is a good tool for astroturfing is unproven so far, and I am extremely skeptical. I don't think the supply of trollish garbage has ever been the bottleneck that lets troll-speech shut down constructive speech. I don't think even the limit on individual, "real" people's ability to spout trollish garbage has been the constraint.

In the GPT-3 text I've read and the GPT-2 systems I've played with, what I see is a system that "weaves" a group of sentences that seem "well written and plausible" on the surface (perhaps better written and more plausible than a lot of trolls). But any detailed look seems to show each reasoning step as implausible, "weird", even senseless. It's impressive, but it's got the "talking dog effect": people are going to be impressed by a talking dog even if the dog's statements aren't impressive otherwise.

Moreover, I think most would-be astroturfers have very specific aims: "Because of value X shared by group Y, group Y should hate group Z and like group Q", "Politician X violates strong social taboo Y", and similar formulas. Fifty people in a boiler room in a third-world country can be hired to make variations on the basic "payload" the astroturfer specifies for less than one Silicon Valley data scientist customizing GPT-3, and they have less chance of producing GPT-3 weirdness, IMO.

Some even larger-scale hybrid system might automate the process further. But for the purposes of the astroturfer, it's hardly necessary.

[+] djsumdog|5 years ago|reply
It's been happening since the 70s. Go back and watch the Church Committee congressional hearings. The CIA admitted to having key players in the magazine industry all around the US. When asked if they were involved in TV, they refused to answer outside of closed session.

It is absolutely reasonable to assume that much of the major TV and radio narrative we consume in the US today is directly influenced by the government. It's also influenced heavily by Big Tech and industry players, to the point where we're not truly getting any unfiltered narrative, compared to just following random actual people on the ground on Twitter.

[+] IAmEveryone|5 years ago|reply
This is the classic "Manufacturing Consent" narrative/conspiracy theory, at least when "engaging" implies (sometimes) "pays".

You may not mean it that way, but rather "talks to/adjusts its messaging". In that sense, it's quite obviously true, but just... not quite as sinister?

Some people have more influence because people have come to enjoy, trust, or otherwise seek it. Sometimes this dynamic is formalized by putting them into positions with built-in distribution advantages, such as TV personality or editorial writer.

There is nothing wrong with that. "Opinion-having" is a skill like any other, and it would be strange to make it the one category where quality does not lead to more widespread consumption.

The US is also divided in a way not seen for generations. I don't quite see how one can look at the state of politics today and complain about a lack of choices, or imply that "they are all the same". If there is some cabal of mighty thought-leaders, they certainly failed quite spectacularly in 2016, when the Republican establishment opposed the current president at least as much as the Democratic party did.

Indeed, you acknowledge the possibility that opinion-makers aren't on-board for rather mundane, individual reasons with the reference to "more democratic countries". I posit that even in the US, a majority does, in principle, support "the system". The left-wing critics of capitalism I know would be perfectly happy with something like the "Nordic Model" of social democracy, which is a set of policies entirely consistent with the current political structure of the US.

[+] bmc7505|5 years ago|reply
Curiously, the author "Jay Riverlong" doesn't appear to have existed prior to a month ago, right around the time GPT-3 came out. For a "professional poker player" who "travelled extensively throughout South America and Europe", that's an interestingly thin footprint. Could be a pseudonym, but it's hard to tell whether the author is a real person.

https://news.ycombinator.com/threads?id=riverlong

https://github.com/jayriverlong

https://twitter.com/jayriverlong

https://jayriverlong.github.io/about/

edit: I have pinged him on Twitter. Let's see what happens. https://twitter.com/breandan/status/1287148374077186059

[+] BbzzbB|5 years ago|reply
Interesting. FWIW, the profile picture could very well be from thispersondoesnotexist.com: a cropped face smiling at the camera with the typical blur behind it, and there's only a single picture of the author available. Picked carefully, the telltale signs of the source are non-obvious (unintended swirls and warps, peculiarities in the hair and around the mouth). Not knowing if it is a fake person makes it interesting to dissect, but also very awkward if it turns out to be a real Jay, so I won't dwell on it. I will however mention the suspicious blur/pixelation at the chin and below the right ear, which makes me inclined to agree with the thesis (sorry if I'm wrong, M. Riverlong). Not sure what the link with GPT-3 would be, though.
[+] andreyk|5 years ago|reply
Based on the variety of tweets and blog posts, this seems to be done by a real person, but I'm guessing this "Jay" is a fake persona, made to demonstrate that it's easy to create a fake person. A digital presence this thin and a profile pic so much like a GAN creation are fairly strong evidence of this. If true, though, their point is somewhat undercut by how easy the suspicious signs are to spot.
[+] pcstl|5 years ago|reply
If they really aren't a real person, they might be trying to make their point through that. :P
[+] Barrin92|5 years ago|reply
This isn't new, and it's not even a technological issue. This is how everyone in a minority position feels, be that an expert, a religious minority, a sexual minority, or anyone else who faces a crowd. In 1931 it was 100 Authors Against Einstein.

I seriously wonder why it took some people Twitter and GPT-3 to figure out that "more speech is the solution to bad speech" is actually complete nonsense, repeated by people who work at Facebook because it increases their stock value.

The author isn't even really looking for free speech; they're just reusing the term because it's apparently the only acceptable adjective in front of "speech". What the author is figuring out is that free expression doesn't necessarily produce truth or justice.

[+] gridlockd|5 years ago|reply
Being heard is about quality, not quantity. If a hundred nobodies argue on the internet, nobody is listening anyway. It does not matter if they are bots.

If a hundred nobodies leave a reply to some statement by a person that matters, it does not cancel out that statement.

Free speech is necessary, but not sufficient, to find truth and pursue justice.

[+] cosmojg|5 years ago|reply
Zero-knowledge proofs solve this problem. Hell, even simple private key cryptography solves this problem. Distribute private keys to anonymous individuals who somehow prove their eligibility and only allow key holders to participate in the community. Easy peasy.
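A minimal sketch of the "only key holders may participate" idea, using HMAC with per-member secret keys as a stand-in for real public-key signatures or zero-knowledge credentials. The `Community` class and its method names are illustrative, not any real API:

```python
# Gate participation on possession of a distributed secret key.
# HMAC here stands in for proper signature schemes; a production
# system would use public-key signatures or ZK proofs instead.
import hmac
import hashlib
import secrets

class Community:
    def __init__(self):
        self._keys = {}  # member_id -> secret key

    def issue_key(self, member_id):
        """Hand a fresh secret key to an eligible (possibly anonymous) member."""
        key = secrets.token_bytes(32)
        self._keys[member_id] = key
        return key

    def accept_post(self, member_id, message, tag):
        """Accept a post only if its tag proves possession of an issued key."""
        key = self._keys.get(member_id)
        if key is None:
            return False
        expected = hmac.new(key, message.encode(), hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

def sign_post(key, message):
    return hmac.new(key, message.encode(), hashlib.sha256).digest()

community = Community()
alice_key = community.issue_key("alice")

msg = "hello"
tag = sign_post(alice_key, msg)
assert community.accept_post("alice", msg, tag)            # valid key holder
assert not community.accept_post("mallory", msg, tag)      # no key issued
assert not community.accept_post("alice", "tampered", tag) # forged content
```

The hard part, as the reply below notes, is the "somehow prove their eligibility" step that happens before `issue_key` is called.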
[+] acituan|5 years ago|reply
That solves the anonymity-requirement part, and I agree. But even if we made sure all account holders are humans, we still don't have a way to test whether the content was truly written by the account holder or generated by a bot. The need for manual human labor creates a rate limit, and automated content creation removes that limit while still posting in the human's name.

Worse yet, as long as the bot content drives engagement and bot action on ads can be prevented, platforms would have little objection to this. Reddit would benefit from us thinking we are talking to humans while interacting with bot content if it keeps us on the site.

Even worse yet, being controversial or scandalous tends to generate more engagement than being agreeable. So far this has only been exploited at the recommendation layer; now it will poison the content layer itself.

I wonder if we will look back and see that we have witnessed the creation of the nuclear weapons of information warfare.

[+] taftster|5 years ago|reply
Just to challenge the easy part (or is this a challenge to peasy), can you please expand on "somehow"?

What is your strategy here to ensure that only "real" people can get access to the private keys (and not real people proxying for an automaton)? How do you maintain this assurance for a long running period of time?

[+] amelius|5 years ago|reply
One problem is that you want anonymity and not pseudonymity. I.e., when you post A and later B, you don't want these posts to be traceable to the same origin, even if the origin remains unknown.
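A toy sketch of the distinction: any stable identifier attached to posts (a pseudonym, a reused key) lets an observer group them by author even without knowing who the author is, while one-time tokens leave nothing to link. The function names and feed structure are made up for illustration:

```python
# Pseudonymity vs. anonymity: linkability of posts to a common origin.
import secrets
from collections import defaultdict

def post_pseudonymous(pseudonym, text):
    # A stable handle travels with every post.
    return {"origin": pseudonym, "text": text}

def post_anonymous(text):
    # A fresh one-time token per post: no two posts share an origin marker.
    return {"origin": secrets.token_hex(8), "text": text}

feed = [
    post_pseudonymous("riverlong", "post A"),
    post_pseudonymous("riverlong", "post B"),
    post_anonymous("post C"),
    post_anonymous("post D"),
]

# An observer clustering posts by origin marker:
clusters = defaultdict(list)
for p in feed:
    clusters[p["origin"]].append(p["text"])

# Both pseudonymous posts collapse into one traceable bucket...
assert clusters["riverlong"] == ["post A", "post B"]
# ...while each anonymous post stands alone.
assert all(len(v) == 1 for k, v in clusters.items() if k != "riverlong")
```

Making posts unlinkable while still proving eligibility is exactly where schemes like blind signatures or zero-knowledge credentials come in.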
[+] ncmncm|5 years ago|reply
First, you have to persuade people it is needed. But 100,000 fake people will crow that it's not. Now what?
[+] zucker42|5 years ago|reply
> the marginal cost of distributing astroturfed propaganda online has firmly hit zero.

This is just not true. GPT-3 costs money to train, use, customize, and maintain. I would guess that paying people is still the more cost effective way to astroturf.

Also, if the argument is that we have to ban anonymity, I strongly disagree. Obviously people shouldn't carelessly trust anonymous sources, but that's just common sense. Banning anonymity would prevent people from anonymously sharing insight into companies, whistleblowing, talking freely without threatening their employment (e.g. SSC), etc. It would increase the chances of physical threats resulting from online activity, and it would give the people in power far too much sway over speech.

[+] nkurz|5 years ago|reply
> This is just not true.

Although you quoted it, I think you missed the word "marginal": https://en.wikipedia.org/wiki/Marginal_cost. Yes, training, customizing, and maintaining GPT-3 for your use is going to be expensive, initially much more expensive than using humans. But the incremental cost for the second million usages is going to be very inexpensive compared to using humans, and as technology progresses, can probably be approximated as "free".
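The fixed-vs-marginal distinction can be made concrete with a toy cost model. All numbers here are invented purely for illustration; the point is the crossover, not the specific figures:

```python
# Hypothetical costs: a model has a high fixed cost but near-zero
# marginal cost per post; paid humans have no fixed cost but a
# non-trivial cost per post. Figures are made up for illustration.
def model_cost(n_posts, fixed=100_000.0, per_post=0.001):
    return fixed + per_post * n_posts

def human_cost(n_posts, per_post=0.05):
    return per_post * n_posts

# Humans are cheaper at small scale...
assert human_cost(10_000) < model_cost(10_000)
# ...but near-zero marginal cost wins once volume is large enough.
assert model_cost(10_000_000) < human_cost(10_000_000)
```

This is the sense in which the second million usages can be "approximated as free" even though the first usage is very expensive.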

> Also, if the argument is we have to ban anonymity, I strongly disagree.

The author isn't saying that we have to, but that if we do not, the effectiveness of any one individual's free speech is going to greatly diminish. Do you disagree with this as well? If so, it would be good to outline your counterargument. If not, then you need to explain why the preservation of anonymity is worth more than the lost utility of free speech.

[+] SpicyLemonZest|5 years ago|reply
The concern is that low-quality anonymous sources will become so common that, even if anonymity isn't formally banned, most people will have to proactively filter anonymous writing the same way they filter anonymous phone calls today.
[+] dragonwriter|5 years ago|reply
> > the marginal cost of distributing astroturfed propaganda online has firmly hit zero.

> This is just not true. GPT-3 costs money to train, use, customize, and maintain.

Training and customization are fixed rather than marginal costs for a propaganda operation; maintenance is dubiously necessary; usage is a strictly nonzero marginal cost, but not a particularly significant one.

[+] phkahler|5 years ago|reply
I've been advocating for traceability by default. Then if a site wants to strip identity from comments they can. But then we have a clear distinction between places where you're not anonymous and those where you are. People can then choose who they want to interact with and listen to.
[+] superkuh|5 years ago|reply
More speech doesn't prevent anyone from speaking. You're mistaking a negative liberty, the idea of free speech, for a positive liberty, the idea that everyone should have an equal amount of attention.
[+] globuous|5 years ago|reply
I think it's in The Internet's Own Boy (or maybe some other interview) that Aaron Swartz says that unlike before, when the difficulty was being able to broadcast an idea to the masses, now it's about being listened to. We can all spin up a website or a YouTube channel, but who's gonna visit it or subscribe to it? I found that contemplation fascinating ^^
[+] pixl97|5 years ago|reply
Well, if you can generate enough crap speech, you could possibly create a world that is filter-by-default, which I think would tend toward the status quo.
[+] catalogia|5 years ago|reply
I never expected to see "On the internet, nobody knows you're a dog" become a new moral panic.

Here's the pragmatic solution that doesn't involve entrusting so much power to governments: communicate with people you know. If you choose to communicate with strangers, be aware that they may be lying about anything they say.

[+] draugadrotten|5 years ago|reply
Anonymous - in what context? For most purposes, a pseudonymous account is anonymous. Not that many people can doxx my account here at HN, but I'm not even trying to be anonymous here.

Anonymity towards a state actor or similar is a very different problem, and as Tor shows, it's a rather hard problem to solve even without considering GPT-3.

[+] skybrian|5 years ago|reply
In journalism, anonymous reports are verified to some extent by the journalists. You're basically relying on the newspaper's reputation. This might work in other cases, where you have some known reputable third party that can vouch for the anonymous account, at least to verify that they're a real person and not a sock puppet.

Wikipedia does the opposite, where most editors are anonymous and untrusted worker bees, but they are supposed to cite reliable sources. That can work too. Teach a GPT-3 successor to cite things correctly and you might have something useful, sort of like a search engine that can combine data from multiple places.

But if all you have is an anonymous account, it's basically a rumor and you shouldn't trust it. Isn't it rather weird to expect strangers on the Internet to believe what we say without knowing anything about us?

[+] avivo|5 years ago|reply
Generalization: when "more speech" is too easy, "more speech" prevents "free speech".
[+] ThefinalResult|5 years ago|reply
Surprisingly confused. Some kernel of truth in here.

It's not really free speech that's at issue here, is it? It's discourse populated by other human beings, or the "right" to be clearly heard. That was already contested before AI, given the lack of transparent attribution.

[+] cityroasted|5 years ago|reply
This idea is explored in the first part of Neal Stephenson's novel from last year, Fall; or, Dodge in Hell. I'm still reading, so no spoilers please.
[+] tectonic|5 years ago|reply
I was just thinking how GPT-3 could be that novel's method of generating so much noise as to inoculate everyone against astroturfing.
[+] mattlondon|5 years ago|reply
You can say what you want, but you've never had the right to force people to listen to what you say.

I don't see what has changed in that regard.

Perhaps no one ever read your banner supporting X in the first place...

[+] renewiltord|5 years ago|reply
I know you got your profile photo off thispersondoesnotexist.com. There is a better adversary network out there and it can detect you, FYI.

2A224B042AC1EA9BF37F0FD43C36AF022C2ADEAB

[+] coderintherye|5 years ago|reply
Real-world identification doesn't solve this problem. Forcing it would just grow the market for buying (and stealing) IDs.
[+] pcstl|5 years ago|reply
I think this might have been written by GPT-3.