item 24439282

Welcome to the Next Level of Bullshit

49 points | NoRagrets | 5 years ago | nautil.us

27 comments

[+] reilly3000|5 years ago|reply
At risk of being too pedantic even for HN, I have to say that a news article written by GPT-3 isn't "Fake News". Maybe you could call it "artificially authored news" or something, but nothing about synthesizing and regurgitating words is inherently fake. "Fake News" is a loaded term that fact-checkers tend to avoid for its ambiguity. Generally its usage refers to disinformation, which is the use of media to intentionally deceive the reader for political or social motivations.

It's terrifying to imagine artificially authored disinformation, but from my sparse understanding of GPT-3, it wouldn't be the right tool for crafting novel disinformation without a lot of input from its user. Disinformation is dangerous when represented as truth by platforms with credibility, and no content creation tool can garner and wield credibility on its own. That said, it could certainly wreak havoc with mass commenting campaigns and such.

[+] warent|5 years ago|reply
"GPT-3 is a marvel of engineering due to its breathtaking scale. It contains 175 billion parameters (the weights in the connections between the “neurons” or units of the network) distributed over 96 layers. It produces embeddings in a vector space with 12,288 dimensions."

I don't know much about AI, though I do know about programming, and to me this vaguely smells like "our program is so great because it has 1 million lines of code!"

Does the number of parameters, dimensions, etc, really have anything to do with how breathtaking and marvelous something like this is?
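For what it's worth, the quoted figures do roughly hang together. A common back-of-envelope for transformer-style models puts the weight count near 12 · layers · d_model² (the rule of thumb is an assumption here, not something from the article):

```python
# Rough sanity check of the quoted numbers: parameters ~ 12 * layers * d_model^2
# (attention + feed-forward weights; embeddings etc. add a little more).
layers = 96
d_model = 12288  # the "12,288 dimensions" from the quote

approx_params = 12 * layers * d_model ** 2
print(f"{approx_params:,}")  # ~174 billion, close to the quoted 175B
```

So the three big numbers in the quote aren't independent brags; they're different views of the same architecture.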

[+] patrickas|5 years ago|reply
As far as I understand, in this specific case, yes.

The whole schtick of GPT-3 is the insight that we do not need to come up with a better algorithm than GPT-2's. If we dramatically increase the number of parameters without changing the architecture/algorithm, its capabilities will actually dramatically increase instead of reaching a plateau, as some expected.

Edit: Source https://www.gwern.net/newsletter/2020/05#gpt-3

"To the surprise of most (including myself), this vast increase in size did not run into diminishing or negative returns, as many expected, but the benefits of scale continued to happen as forecasted by OpenAI."

[+] dawg-|5 years ago|reply
I love articles about the cutting edge of AI - because it always ends up putting the spectacular glory of nature front and center.

Any time you see an AI promo with really big numbers claiming to be close to replicating human cognitive functions, just remember that the human brain has 100 trillion synapses. So much of our discourse around AI right now is nothing more than chimpanzees scribbling on a wall with a crayon and calling it a "self portrait".

[+] GoatOfAplomb|5 years ago|reply
It _can_ be an analog to your million lines of code. But oftentimes a given architecture won't scale to greater capabilities just by adding more parameters in the same pattern as before. (Oversimplifying, but) the signal from the training data gets weaker and weaker the "further away" the parameters are from it in the model. It can take actual ingenuity to figure out how to get those further layers in a network to contribute anything useful.
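As a toy illustration of that intuition (the per-layer attenuation factor is invented for the example, not measured from any real network):

```python
# Backprop multiplies gradients layer by layer, so if each layer
# attenuates the signal even slightly, the layers "furthest" from the
# training data see almost nothing of it.
grad = 1.0
for _ in range(50):   # a 50-layer-deep stack
    grad *= 0.8       # assumed per-layer attenuation
print(f"{grad:.2e}")  # ~1.43e-05: early layers barely hear the data
```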
[+] ashtonkem|5 years ago|reply
If nothing else, actually training an AI algorithm that large is an extremely large engineering challenge.

Googling around, it looks like most neural networks have somewhere in the neighborhood of tens of thousands of parameters. If nothing else GPT-3 is much, much bigger than most of its peers.

[+] throwawaygh|5 years ago|reply
Not necessarily, obviously. I can add a trillion parameters to any model. Just like I can add a few mloc of useless bullshit to any code base.
[+] senux|5 years ago|reply
Technically, since they are talking about engineering, yes. The sentence is about the complexity of the system, not its capabilities.
[+] throwawaygh|5 years ago|reply
No. I can add a trillion parameters to any model. Just like I can add a few mloc of useless bullshit to any code base.
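A literal, if silly, sketch of the point (class names and sizes are made up for illustration):

```python
import numpy as np

class TinyModel:
    def __init__(self):
        self.w = np.array([2.0])          # one real parameter

    def predict(self, x):
        return float(self.w[0] * x)

class BloatedModel(TinyModel):
    def __init__(self, n_dead):
        super().__init__()
        self.dead = np.zeros(n_dead)      # "parameters" that never touch the output

    def n_params(self):
        return self.w.size + self.dead.size

tiny, bloated = TinyModel(), BloatedModel(10**6)
print(bloated.n_params())                          # a million and one parameters...
print(tiny.predict(3.0) == bloated.predict(3.0))   # ...identical behavior: True
```

Parameter count measures size, not what the parameters contribute.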
[+] donw|5 years ago|reply
It does.

We can see a very strong correlation between brain size and overall intelligence within the animal kingdom. The larger the brain, the smarter the animal.

Effectively, GPT-3 has a bigger brain.

[+] curiousgal|5 years ago|reply
The biggest bullshit, to me, is people confusing pattern matching with intelligence. Sure, the model is outputting coherent text, but it has no fucking clue what it's talking about.
[+] GarrisonPrime|5 years ago|reply
Fair enough, but the argument could be made that even human-level intelligence is just an advanced degree of pattern matching.
[+] phobosanomaly|5 years ago|reply
Much like most people on the internet...
[+] johndoe42377|5 years ago|reply
Well, this could be explained in a few meta-principles or just principles of a proper (non-abstract) philosophy.

1. A map is not a territory. Weighted connections are not semantic relations.

2. Environment and its laws and constraints come first.

3. Language is a tool of describing What Is, not a tool of producing what could be.

4. Like in untyped lambda calculus, applying anything to anything produces bullshit.

5. A proper use of a language requires a type discipline which reflects the laws and constraints of the environment, and rejects sentences which are not type-correct.

Everything else will produce bullshit. Theoretical physics and other abstraction-based fields are thus flawed.
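Point 4 can at least be made concrete (a hypothetical sketch; Python's runtime check stands in for a static type discipline):

```python
# In untyped lambda calculus any term may be applied to any term, so
# nonsense applications are legal expressions. A type discipline
# rejects them before they can "mean" anything.
def apply(f, x):
    return f(x)

print(apply(len, "bullshit"))   # well-typed: len maps str -> int, prints 8
try:
    apply(42, "bullshit")       # ill-typed: an int is not a function
except TypeError:
    print("rejected: int is not callable")
```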

[+] axegon_|5 years ago|reply
For years we've been hearing how AI will build robots and wipe out humanity, and the bollocks that is the trolley problem. I remember when I was reading the Unsupervised Cross-Domain Image Generation[1] paper and my immediate thought was "yep, I can see this going south". And sure enough, not long after, deepfakes became a thing. GPT-3 is absolutely astonishing in terms of its capabilities and I'd love to be able to dig into its inner workings and scroll through its code. The truth is there are three blockers for the large majority of people who would love to exploit it.

1. Data. For better or worse, obtaining a dataset that big isn't a big deal if you really want to. The Gutenberg Project, the wiki corpus, the Reddit dumps: difficult but definitely doable.

2. Costs. Training the model costs $5M, which is a considerable amount of money by anyone's standard (rich people will also have second thoughts when they hear that number). But there is a catch: the hardware is becoming more and more accessible. Remember when a server-grade GPU like the P100 was ~10k apiece? Now the high-end 30X series cards are 1/10th of that and have better specs... Adjust those numbers and you get something close to a 20x price/performance improvement in the course of 4 years (IIRC the P100 came out in 2016).

3. Finding the people with the adequate knowledge to build something like this. This, I think, is the only blocker at this point. Realistically, we are talking about a few dozen people on Earth who have the mental capacity to build something like this.

If there is one thing that I see as a potential threat in this field, it's information losing credibility.

[1] https://arxiv.org/abs/1611.02200
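The arithmetic in point 2, spelled out (every figure is the comment's rough estimate, and the ~2x spec improvement is an assumption, not a measured benchmark):

```python
# Back-of-envelope price/performance change between GPU generations.
p100_price = 10_000      # USD, server-grade GPU circa 2016 (commenter's figure)
rtx30_price = 1_000      # USD, high-end consumer GPU circa 2020 (commenter's figure)
spec_improvement = 2.0   # assumed rough throughput gain of the newer cards

price_drop = p100_price / rtx30_price        # 10x cheaper
price_perf = price_drop * spec_improvement   # ~20x better price/performance
print(price_perf)
```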

[+] phobosanomaly|5 years ago|reply
Regarding point number 3, I wonder if there might be more than we think, but they are sequestered in various defense projects in different countries around the globe?
[+] tetrisgm|5 years ago|reply
We tried GPT-3 for our email startup (Mailscript), initially as a fancy way to detect and understand the content of an email. That didn't work out great, because it's really prone to false positives and ultimately requires more work than fancy regex. We're hopeful it will solve other problems we run into, though; we're just not going to push for it before we find those problems.

I'll share the lessons learned from the implementation if that's interesting to people here.
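As a hypothetical example of the "fancy regex" alternative (the pattern and email text are invented here; Mailscript's actual rules aren't public):

```python
import re

# Deterministic detection of one concrete email signal: a UPS-style
# tracking number. Unlike a language model, the pattern never matches
# outside its format, which is the false-positive trade-off described.
TRACKING = re.compile(r"\b1Z[0-9A-Z]{16}\b")

body = "Your order shipped! Tracking number: 1Z999AA10123456784"
match = TRACKING.search(body)
print(match.group(0) if match else "no tracking number")
```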

[+] phobosanomaly|5 years ago|reply
Maybe bullshit could wind up being useful in unexpected ways.

It could be an interesting tool to use to workshop ideas.

For example, if you were trying to work your way through an idea, you could throw various aspects of your idea at it, and it would throw back a slightly different take.

Rubber-duck debugging, but the duck talks back, and you can throw your fundamental assumptions about life at it.

[+] grensley|5 years ago|reply
Oh god, was this written by GPT-3 too?

Feels like we're headed for the next level of the SEO dark ages, where the shovel-ware content that used to be written by humans can now be automated.

[+] snuxoll|5 years ago|reply
I was literally sitting waiting for the punchline, and was mildly disappointed.