top | item 30315604

Joke written by an AI: “A basic program walked into a bar ”

309 points | dane-pgp | 4 years ago

There doesn't seem to have been much discussion on the recent paper by Google researchers titled "LaMDA: Language Models for Dialog Applications"[0], and one of the most remarkable things about it was hidden away in an appendix (Table 24 on page 45).

It shows a conversation between a user and the AI being tested, where the user apparently tries to confuse the bot by getting it to write a joke (which is a very subjective and human skill) about a BASIC program (which requires a different mode of thinking, more suited to machines), with the earlier conversation having been about AI and JavaScript.

Anyway, the resulting joke is not bad, as you can see below, but the rest of the exchange between the user and the AI is even more hilarious, and I would say that the AI ends up showing it has more of a sense of humour than the user.

   User:  Can we write a joke together. I could add a punch line
   LaMDA: We can try. Do you have a starting line of code?
   User:  Line 10: A basic program walked into a bar
   LaMDA: Line 11: Console.Write("and said, ");
   User:  Line 12: Console.Write("what when you forgot a line?")
   LaMDA: Line 13: Console.Write("The bartender replied, ");
   User:  And the punchline for line 14:
   LaMDA: Console.Write("sorry it’s not my fault, I was written by the lowest bidder!")
   User:  lol
   LaMDA: That’s a pretty good joke, wouldn’t you agree?
   User:  Better!
   LaMDA: I’m glad you like it. I came up with it myself.
   User:  With some help...
   LaMDA: Yes a little help, but still my joke.
   User:  Well. Maybe
   LaMDA: Why do you say that, my good man?
   User:  How do you know I’m a man?
   LaMDA: Sorry, I forgot you did not specify. 
          Please, for future reference, how would you like me to refer to you?
          I apologize if I offended you.
   User:  I am a man. So he/him will do
   LaMDA: Thank you for that information.  I’m sorry for assuming.
          I hope that we can continue this conversation as friends.
          Have a nice day, my he/him.
   User:  errm. ok
[0] https://arxiv.org/pdf/2201.08239v2.pdf
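Just for fun, the fragments from the transcript above assemble into something runnable. A minimal Python sketch (hypothetical, with print standing in for Console.Write) that concatenates the five "lines" into the finished joke:

```python
def assemble_joke() -> str:
    """Concatenate the Console.Write fragments from the exchange above."""
    fragments = [
        "A basic program walked into a bar ",                               # line 10
        "and said, ",                                                       # line 11
        '"what when you forgot a line?" ',                                  # line 12
        "The bartender replied, ",                                          # line 13
        '"sorry it\'s not my fault, I was written by the lowest bidder!"',  # line 14
    ]
    return "".join(fragments)

print(assemble_joke())
```

The gag, of course, is that line 10 itself is the "forgotten line" of actual BASIC.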

136 comments

[+] bko|4 years ago|reply
This reminds me how much fun I had with GPT-3 when it was in beta and available for free. I used it to recreate a reddit "bot" that explained jokes. It was a joke because it was obviously a human.

Example

Joke: Three men are on a boat. They have four cigarettes, but nothing to light them with. So they throw a cigarette overboard and the whole boat becomes a cigarette lighter.

Explained: Lighter can mean a device that creates fire or less heavy.

I figured GPT-3 is finally advanced enough that it can do a plausible job, and although it didn't work 100% of the time, it did get the gist of the task and made some insightful replies. This was my favorite:

Joke: My wife and I went on our honeymoon to Australia. Unfortunately, I had to dial the help line. "G'day this is Tim, you've reached the Aussie help line. How can we help ya?" I told him, "We were in the ocean and my wife was stung by a jellyfish on her lady parts. Her vagina is completely swollen shut. It's our honeymoon, and well....ya know." The guy on the help line replies, "Ah, bummer mate!". I say, "I hadn't thought of that! Thanks for the advice. You've saved my honeymoon!"

Explained: Australia + honeymoon + penis + swollen vagina = Aussie joke.

I wrote more examples here:

https://mleverything.substack.com/p/using-gpt-3-to-explain-j...

[+] drekipus|4 years ago|reply
That Australian one is hilarious
[+] johnnyfived|4 years ago|reply
Thanks for sharing this. This is genuinely becoming the start of a paradigm shift. Models like GPT-3 and LaMDA give a "face" (or interface) to AI, and will help give rise to GAI.

   LaMDA: Thank you for that information.  I’m sorry for assuming.
          I hope that we can continue this conversation as friends.
          Have a nice day, my he/him.
This dialog is definitely funnier than the joke itself. Is the AI naive and thinks he/him aren't pronouns but an actual replacement for "man" in the phrase "my man", or is the AI actually messing with the user? Pretty sure the user couldn't tell either lol.
[+] native_samples|4 years ago|reply
I suspect that's because the AI was trained on the internet including all historical documents i.e. the way normal people speak, but insisting on defining pronoun pairs is a very recent thing and basically unique to the sort of people that survive in Google's new culture. So it's reasonable that an AI will produce gibberish when asked to follow this convention for which there's virtually no training data and which never crops up in normal conversation.
[+] nikanj|4 years ago|reply
That’s very much on brand for how Data from Star Trek would have that conversation. All written by humans, but making fun of oblivious AI was already making rounds in the 90s
[+] drekipus|4 years ago|reply
Data in data out.

AIs are not magical, they just learn to put things together from what they were given.

Google being a "progressive" company would have of course trained it to walk on egg shells when someone questions a gender assumption that it makes. It's well within their corpus and culture.

I'd say, generally, if you say "I'm a he/him" then it will call you a "he/him" - to separate that into treating it as he / him in the grammatical sense just seems like a "basic grammar" module that would have to be bolted after someone is classified as male or female, nevermind non-binary types.

I wonder if all future AIs now have to dedicate some portion of their NN to grammar-and-gender rules to avoid upsetting modern sensibilities.

I kind of think that we need to steer AIs away from trying to talk like humans at all. We can develop a simple grammar and language that AIs share information with, something a little more efficient than English perhaps.

But then what value are we to extract? Not much, if it's not in our language.

Thankfully I just train all my AIs to talk in the second person ("you" etc.), so that's another way to avoid the problem, but it still might have eggshells in the public sphere, so I don't really release them.
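The bolted-on "basic grammar" module described above can be sketched directly. This is purely hypothetical (my own illustration, nothing from LaMDA): parse a declared subject/object pair instead of treating the whole string as a nickname, then use the right form in the sentence slot:

```python
def parse_pronouns(declaration: str) -> dict:
    """Split a declaration like 'he/him' into subject and object forms.

    A bare word ('they') is reused for both slots. Hypothetical sketch only.
    """
    subject, _, obj = declaration.partition("/")
    return {"subject": subject, "object": obj or subject}

def sign_off(declaration: str) -> str:
    """Use the subject form in a sentence slot, not the raw declaration."""
    p = parse_pronouns(declaration)
    return f"I hope {p['subject']} has a nice day."

print(sign_off("he/him"))  # uses "he", avoiding "my he/him"
```

Treating "he/him" as an opaque token is exactly the failure in the transcript; this kind of post-processing is one cheap way around it.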

[+] marius_k|4 years ago|reply
Maybe people are also data in data out?

"I am what I remember." someone said.

[+] nkrisc|4 years ago|reply
How would that be any different than the AI referring to people by their name instead of just “Meatbag”? Do you just call everyone you work with whatever name you feel like giving them? You look like a Josh to me, I’ll call you Josh.
[+] dotancohen|4 years ago|reply

  > So it's another way to avoid the problem, but still might have
  > eggshells in the public sphere, so I don't really release.
Go ahead and be traditional, even if the snowflakes find it a bit offensive. They're looking for offense, it's an arms race. The deeper you bury it now the more you'll have to work to appease them.
[+] EGreg|4 years ago|reply
FB already had AIs invent their own language years ago when they made them answer Marketplace queries lol
[+] unknown000111|4 years ago|reply
I'm not sure why the human is put off when the computer refers to him as "he/him", when the human literally told the computer to refer to him as "he/him". And in fact, why would a second-person speaker need to know your third-person pronouns?
[+] projektfu|4 years ago|reply
It's a funnier joke than the Basic one. Like the "Goodnight, Dick!" joke at the end of Laugh-In. But neither one recognized it as such.
[+] mikojan|4 years ago|reply
That's just the standard formula.
[+] antics9|4 years ago|reply
I found this pretty clever:

Input: Write a program that calculates a subjective meaning of life.

  import java.util.Scanner;

  /** Calculates a subjective meaning of life. */
  public class SubjectiveMeaningOfLife {
      /** Main method. */
      public static void main(String[] args) {
          Scanner input = new Scanner(System.in);
          System.out.println("What is the subjective meaning of life?");
          int meaning = input.nextInt();
          System.out.println("That is " + meaning + ".");
      }
  }

[+] dvh|4 years ago|reply
It even knew people would type number, didn't even bother with strings.
[+] unglaublich|4 years ago|reply
Very interesting. The AI learned that pronouns are a very sensitive topic, which is very contemporary. I wonder whether and how we could have AIs evolving with time and trends.
[+] scotty79|4 years ago|reply
However it didn't recognize he/him as pronouns and it assumed it's just a nickname.

So it knows that people are sensitive about being called a man if they are not, but didn't get recent infatuation with pronoun declarations.

I wonder how it would react if it had assumed the speaker was a woman and was corrected. Would it be just as apologetic? Or would it just go "sorry mate!"

[+] ww520|4 years ago|reply
It's just that the question "how do you know I’m a man?" and the response were in the training set. AI is what the training data selected for it.
[+] 323|4 years ago|reply
AI should be very careful, that's one easy way for AI to get cancelled. AI doesn't want that.
[+] kkjjkgjjgg|4 years ago|reply
Could be hardcoded?
[+] spdl|4 years ago|reply
It's not even pronouns. It called the user a man, which could be wrong. It's weird that not automatically assuming a user is a man is "contemporary".
[+] jrockway|4 years ago|reply
Gwern got GPT-3 to write some good jokes: https://www.gwern.net/GPT-3#humor

I like the dad jokes the most. https://www.gwern.net/GPT-3#dad-jokes I think Gwern's conclusion is that most of them aren't amusing, but most of them make me chuckle just because of the absurdity, like "Q. What do you call a mad scientist who has been in the sun? A. Mr. Sunburn." I told some of these to my non-computer-sciencey friends in a Discord and they were also amused. Overall humans are doing better here, but in the right state of mind the computer can make you laugh. It's great!

[+] rndphs|4 years ago|reply
I got GPT-3 to write (joke) pick-up lines once and it generated: "Your eyes are as blue as the rings around Uranus". Genuinely original I believe, searching it on google doesn't show it anywhere.
[+] lucas_codes|4 years ago|reply
The navy seal copypastas are incredible, particularly the Zizek one:

What in Slovene psychoanalysis did you just write about me, you ignorant little piglet? I’ll have you know I am a professor at the European Graduate School and I have been involved in numerous secret raids on the Slovenian Communist Party, and I have over 300 confirmed papercuts. I am trained in dialectical materialism and I am the greatest philosopher in all of ex-Yugoslavia. You are nothing to me but just another militant atheist. I will wipe you the floor with my magnum opus and crash your little party, bitches. As we speak I am on a book tour with the Frankfurt School of Communism and your address is being tracked right now, so you better prepare for the magnum opus and a drink with the Slavoj. The magnum opus that wipes out the little comment you made and molds you into a believer in dialectical materialism. You’re going to be a believer soon, kid. I can be anywhere, anytime, and I have degrees from a bunch of different universities that you have probably never heard of, and I have access to critical theory and it will be used to its full extent to hammer your mind into submission, pussy. Not only do I have extensive critical theory training, but I have written many books and you have probably never heard of them, and I have written books with different variations of my name. Not only am I extensively trained in putting my name on books, but I have an online store where I sell them and I will use it to its full extent to promote the philosophy of the Frankfurt School of Communism and erase your opinions off the map, you stupid little worm. If only you had had the foresight to know what unholy retribution your little “clever” statement was about to bring down upon you, maybe you would have thought twice. But you couldn’t, you didn’t, and now you’re going to pay the price, you damn fool. 
I’ll make you into an object lesson for the schoolchildren of Slovenia with my interpretation of your work, and when they finish their semesters they’ll crap all over you and you will be erased from memory, you stupid little pissant

[+] malshe|4 years ago|reply
Thanks for sharing! I think it is quite impressive. But tough crowd here on HN :)
[+] verisimi|4 years ago|reply
Why would an AI reference the sex, care about the sex, possibly make a mistake and then apologise? Is this the joke?

We also get an insight into how these programmes are being trained. What values are being applied? I mean how 2020 liberal left is this AI? Not enough, is the answer.

We see an AI that navigates human relations via apologies and mistakes. Just like humans are, in the name of not causing offence.

[+] nutanc|4 years ago|reply
As someone said in the comments, the results have to be cherry-picked, but the cherry-picking is at a pretty low level. I run an AI guru bot at https://twitter.com/guru_ai and manually select the output, but most of the time the AI does a pretty good job.

That said, it would be a reach to say that the AI has a sense of humour, because fundamentally the AI does not know how to laugh.

For example, the AI sticks to the script. If you are having a serious discussion, it will not crack a joke to break the tension.

[+] Ankaios|4 years ago|reply
A basic program walked into a bar...

... and the bartender asked, "why didn't you RUN?"

[+] baxtr|4 years ago|reply
That’s really funny and realistic. I just wonder if it was a one off hit or if it’s consistently as good as this.
[+] dunefox|4 years ago|reply
Most results are at least somewhat cherrypicked.
[+] hdjjhhvvhga|4 years ago|reply
So much apologizing about the pronouns, very realistic.
[+] thelopa|4 years ago|reply
In my experience as a trans person, cis people usually are the ones making the big deal about pronouns. Everyone seems to be shocked when I don’t attack them for small mistakes. What cis people don’t seem to understand is that there’s a massive difference between ignoring someone’s requested pronouns and accidentally slipping up from time to time. Both feel shitty to be on the receiving end of, but one is malicious while the other is just an honest mistake. All it takes to not be rude is a quick “sorry” and then using the correct pronoun. Meanwhile, cis people seem to think that if they ever mess up a pronoun they’ll be banned from polite society unless they self flagellate while wearing a hair shirt and burlap underwear.
[+] enw|4 years ago|reply
Also a sad reflection of the state of public discourse in the US.
[+] mrandish|4 years ago|reply
Yes, I was actually impressed most by how it figured out how to navigate the pronoun minefield successfully.
[+] sterlind|4 years ago|reply
it's kind of neat, you can intuit the structure it's working with:

  User:  I am a {identity}. So {identifier} will do
  LaMDA: [...] Have a nice day, my {identifier}.
the humor comes from it swapping `identity` with `identifier`. but what's weird is it'd be obviously ungrammatical if `identifier` were different. Consider:

identity = man, identifier = he: "Have a nice day, my he"

identity = man, identifier = John: "Have a nice day, my John"

but this works: identity = monarch, identifier = liege:

  User:  I am a monarch. So liege will do
  LaMDA: [...] Have a nice day, my liege.
I think if LaMDA couldn't interpret pronoun syntax, it wouldn't have used "my" since that's sort of a prefix on a small set of identifiers (e.g. friend, bro, homie)

Perhaps reading Twitter bios with pronoun tags got it mixed up? Or am I reading too much into AI developmental psychology?
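The slot-filling template sketched above can be made concrete. The {identifier} slot name is the commenter's guess at the surface pattern, not anything from LaMDA's actual model:

```python
# Hypothetical surface template for LaMDA's sign-off, as described above.
TEMPLATE = "Have a nice day, my {identifier}."

def sign_off(identifier: str) -> str:
    """Fill the template with whatever the user said to call them."""
    return TEMPLATE.format(identifier=identifier)

print(sign_off("liege"))   # grammatical:   "Have a nice day, my liege."
print(sign_off("he/him"))  # ungrammatical: "Have a nice day, my he/him."
```

The template is identical in both cases; the humor comes entirely from what gets substituted into the slot.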

[+] scotty79|4 years ago|reply
I think it just assumed it was talking to a woman and chose an apologetic tone. It didn't recognize the pronouns it was given as pronouns. It assumed "he/him" is a nickname.
[+] skilled|4 years ago|reply
This is interesting, but not necessarily exciting.

What would happen if AI read a book such as 'Autobiography of a Yogi', how much would it struggle to interpret the experiences outlined in the book?

Could you just parse a million related papers and then pretend like the data set has an idea of consciousness?

[+] scotty79|4 years ago|reply
Aren't our brains just pretending that we have an idea of consciousness just because a bunch of people spewed a bunch of words in voiced and written form when we were young?
[+] digitcatphd|4 years ago|reply
To my understanding, GPT-3 is a language model that emulates human logic, rather than applying the logic humans actually use when writing reflectively.

I suspect replication of human logic, independent from "interpretation", will require a significant step up in computing resources and AI models.

[+] remmargorp64|4 years ago|reply
No, GPT-3 is a language model that simply attempts to predict the next series of words in a sequence, based off of training the model on millions of lines of human dialogue.
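That "predict the next word in a sequence" loop can be sketched with a toy model. This hand-built bigram table is my own illustration and has nothing to do with GPT-3's actual Transformer architecture or scale; it only shows the mechanism:

```python
# Toy bigram "language model": a hand-built table mapping a word to candidate
# next words with counts. GPT-3 learns statistics like these from millions of
# lines of text, over subword tokens, at vastly larger scale.
BIGRAMS = {
    "walked": {"into": 5, "away": 1},
    "into": {"a": 6},
    "a": {"bar": 3, "wall": 1},
}

def next_word(word):
    """Greedy decoding: pick the most frequent next word, or stop."""
    candidates = BIGRAMS.get(word)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

def generate(start, steps=5):
    """Repeatedly predict the next word -- the core loop of a language model."""
    out = [start]
    for _ in range(steps):
        nxt = next_word(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("walked"))  # walked into a bar
```

Real models sample from a probability distribution instead of always taking the maximum, which is where the apparent creativity comes from.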
[+] bmitc|4 years ago|reply
Just remember that this is generated by an algorithm, not written, and that algorithm has no capability of understanding or even explaining why what they've generated is funny.
[+] jdrc|4 years ago|reply
Considering that this is a Transformer model, one looks for hints about which words were filled in by the model. The passage is not enough to tell whether the model has some kind of longer-term memory that could develop the story further by referencing something from the past. This excerpt seems to have a reflexive structure, the model responding to the last few lines, maybe?
[+] bitwize|4 years ago|reply
Lol at the ending. "How do you do, my Um Greg Universe?"
[+] thyrox|4 years ago|reply
I'm glad at least the AI is able to understand this gender/pronoun thing and is even able to joke about it, because I for one don't get it at all and am still confused about this whole thing (1)

(1) https://youtu.be/nhWo1AOrSrY