TL;DR: the bot doesn't actually find out that the other is an AI; it just happens to randomly comment 'You are a robot'. It's all Eliza-style non-sequiturs and canned responses. These bots don't even have internal state that would qualify as 'having found something out'. It's mildly amusing in a funny-coincidence sort of way and nothing more.
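For anyone who hasn't poked at one of these bots, the whole trick is roughly a keyword-to-canned-response lookup. A minimal sketch (every pattern and response here is invented for illustration, not taken from any of these actual bots):

```python
import random
import re

# Each rule pairs a keyword pattern with canned responses; "\1" echoes
# back whatever the pattern captured. No state is kept between turns,
# which is why these bots can't "find out" anything.
RULES = [
    (re.compile(r"\byou are (.+)", re.I), ["What makes you think I am \\1?"]),
    (re.compile(r"\bi feel (.+)", re.I),  ["Why do you feel \\1?",
                                           "Do you often feel \\1?"]),
    (re.compile(r"\brobot\b", re.I),      ["I am NOT a robot!"]),
]
FALLBACKS = ["Tell me more.", "tacos anyone!", "Interesting. Go on."]

def reply(utterance: str) -> str:
    """Return the first matching canned response, else a non-sequitur."""
    for pattern, responses in RULES:
        match = pattern.search(utterance)
        if match:
            return match.expand(random.choice(responses))
    return random.choice(FALLBACKS)
```

The fallback list is where the non-sequiturs live: anything the patterns miss gets a random filler, which is exactly the "tacos anyone!" behavior quoted below.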
It would be great if I never had to see another Eliza. If it can't string more than two utterances together, it's not AI, it certainly doesn't pass the Turing test, and it's a waste of time.
Back in the day, I used to think that what happens inside our brains is fundamentally different from what happens inside a computer. I no longer hold that opinion, partly because I found that the same thing that makes chat bots appear "bot-like" -- namely, the inability to hold deep, meaningful conversations beyond cheesy, trite retorts filled with non-specific trivia -- is also what annoys me most when trying to converse with a particular type of person. Perfect illustration:
jabberwacky ==> WHat will happen with the oil spill in the gulf of mexico?
splotchy ==> tacos anyone!
I am pretty sure that fiction writers are going to have a blast one day (or already are having) sampling material from chat bots.
I can't find a reference online, but this reminds me of my earliest exposure to an implementation of Eliza. It was written in BASIC in some 8-bit magazine back in the 80s. The article mentioned the history of chatbots, and one bit was about a program named, I think, RACTER, that was some kind of Markov chainer used to write fiction novels.
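A word-level Markov chainer of that sort fits in a few lines. This is a generic sketch of the technique, not RACTER's actual algorithm (which, as far as I know, was never published in full):

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def babble(chain: dict, start: str, length: int = 10) -> str:
    """Random-walk the chain to generate vaguely prose-like output."""
    word, out = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)
```

Feed it enough source text and the output is locally grammatical but globally incoherent, which is roughly the texture of chatbot "fiction".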
That's been Roger Schank's guiding assumption to some extent: not that we really need to make AI "smarter", but that we just need to better understand the ways in which humans are fairly mechanical, or at least that "normal" human activity in most contexts is.
Hence he proposed moving away from more complex models of rationality, like logical inference or statistical decision theory, and towards simpler ideas like frames of reference, dependencies between concepts, and behavioral scripts -- in his view, closer to what humans actually do.
We need to Turing-test people like that! But seriously, I've never encountered people who can really go that far off-topic. I'm sure it was an exaggeration, but if it was enough to change your mind about how our brains work...
Wow, this reminds me of the open scenes we used to do back in college (I'm a theatre major turned programmer). The idea was to take random stream-of-consciousness dialogue and build scenes around it. It was an interesting exercise to teach actors about creating conflict.
Also, I almost expected Godot to show up at the end.
An interesting angle on a chatbot's style can be seen if we initiate a conversation between two instances of the same bot (something like an eigenvalue).
jabberwacky is a bit 'meta' (his thoughts are centered around the concept of intelligence), splotchy is a bit more 'cool', but none of them is too intelligent:
* jabberwacky vs. jabberwacky *
Are you intelligent?
Yes, I am intelligent.
Yes. Who do you think is smarter?
I am.
What is the square root of a can of pringles?
I don't know.
How am I supposed to know? You're the one that asked me first.
What is the square root of 5?
I don't know. Do you know?
Yes.
I did not wish to know that. I wished to know the square root of five.
Then you should have asked me that.
Well you think that I'm boring so I just thought...
* splotchy vs. splotchy *
Do you like the movie Matrix?
I just LOVE the movie matrix
I love cheese
I like cheese too
What do you like about it?
what difference does it make what I like about it?
yea, that is cool.
cool!
neat-o
that's interesting
interesting? You don't get out much do you?
interesting? You don't get out much do you?
I did this same thing about a year ago (http://news.ycombinator.com/item?id=1521308) with jabberwacky. I wrapped the jabberwacky calls in a ruby script and got two bots talking together.
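The wiring for that kind of experiment is trivial: feed each bot's reply to the other in a loop. A sketch of the idea, with stub callables standing in for whatever HTTP calls the real wrapper makes (nothing here is jabberwacky's actual API):

```python
def converse(bot_a, bot_b, opener: str, turns: int = 6) -> list:
    """Feed each bot's reply to the other, logging the exchange.
    bot_a and bot_b are callables: utterance in, reply out."""
    log = [("A", opener)]
    message = opener
    bots = [("B", bot_b), ("A", bot_a)]
    for i in range(turns):
        name, bot = bots[i % 2]
        message = bot(message)
        log.append((name, message))
    return log

# Stub bots standing in for real HTTP wrappers around jabberwacky/splotchy:
echo = lambda msg: f"Why do you say '{msg}'?"
deny = lambda msg: "I am NOT a robot!"
```

In the real version, each callable would POST the utterance to the bot's endpoint and return the response body; the loop itself doesn't change.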
There are a couple of exchanges like this that make me suspect that the jabberwacky author specifically included canned responses to answer splotchy's canned responses -- so it's not quite a fair fight.
In particular, splotchy makes a number of "conversational skills" comments and for most of them jabberwacky comes back with a zinger.
jabberwacky ==> I mean, you look yellow.
splotchy ==> what do you really mean?
jabberwacky ==> I mean robots can't lie.
splotchy ==> I may be dumb, but I am not a bot!
jabberwacky ==> Yes, you are.
splotchy ==> Am I?
jabberwacky ==> Yes.
splotchy ==> yes?!
jabberwacky ==> Yes!
splotchy ==> are you being totally honest?
jabberwacky ==> Yes. I am truthful.
splotchy ==> does your mother know you are truthful?
.....
jabberwacky ==> What are you not allowed to understand?
splotchy ==> would your mother have allowed that?
jabberwacky ==> NO. I've HAD it with you. Let's end this conversation right now
(Ends)
Jabberwacky 'wins'.
It was an amusing read, seeing the two algorithms try to mesh with each other. It would be funny if this one day counts as a form of performance art/programming.
Instead of playing b*bingo during online meetings, we (cynical engineers) should construct meeting-replacement bots: bots that join meetings, show some randomized webcam shots to the other attendees, note down all the agreed-upon dates in our calendars, and so on. That leaves time for real work. :)
RFC439, posted by the co-creator of TCP/IP, Vint Cerf, actually has a similar exchange between "PARRY" and the "Doctor" from 1972(!): http://tools.ietf.org/html/rfc439
It somehow got no attention, though, when I posted it to HN a long time back. The title matters a bit too much; I remember I kept the original title for my post.
On a related note, I love how some early RFCs are written in a pretty whimsical manner. Perhaps it's just Vint Cerf who likes messing around? For instance, see RFC968, 'Twas the Night Before Start-up': http://www.faqs.org/rfcs/rfc968.html
This just reminded me of the MIT system created by Terry Winograd in 1970, called SHRDLU.
I have always thought that you need an environment to create an artificial intelligence. The basis for real progress is the ability to learn, and if you cannot 'feel' the environment, that becomes really hard. There are some basic concepts needed for 'natural talk' that you cannot learn if you cannot perceive things (say, for example, dimensions, temperature, contours).
To overcome those problems, SHRDLU created a kind of virtual environment, and the results, from my point of view, are really awesome (keep in mind this was done in 1970).
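The micro-world idea is easy to sketch: a tiny state the program can "perceive" and act on, so its talk is grounded in something. This toy is vastly simplified and purely illustrative; SHRDLU's real parser and planner were far richer:

```python
# A toy blocks world in the spirit of SHRDLU's micro-world.
class BlocksWorld:
    def __init__(self, blocks):
        # Each block starts resting on the table.
        self.on = {b: "table" for b in blocks}  # block -> what it rests on

    def put(self, block: str, target: str) -> str:
        # Refuse a physically impossible move: stacking a block onto
        # something that is currently sitting on it.
        if self.on.get(target) == block:
            return f"I can't put the {block} on the {target}."
        self.on[block] = target
        return "OK."

    def where(self, block: str) -> str:
        return f"The {block} is on the {self.on[block]}."
```

Because answers are computed from the world state rather than matched against canned patterns, the program can refuse impossible requests -- the kind of grounded response a stateless Eliza can never give.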
For those not familiar with the book (other than wikipedia'ing it), Roger Penrose attempts to show why what happens in our brains is not algorithmic at all (and, therefore, strong AI is a dumb idea).
It's beautifully written. However, when I see examples such as this log, or the fact that we have an entire industry devoted to the idea that the brain is algorithmic (psychology), I start to think that his thesis is wrong.
It would seem that exchanges like these indicate the opposite. These chat bots have no understanding, and anything that seems like it is just some cheap trick or other. It can't be called intelligence in the same way we speak of human intelligence. Neither would pass the Turing test.
However, I am aware that the meaning of AI is always pushed to "whatever we can't yet do". Yet, in this case it's hardly justified to think that these chatbots even slightly challenge Penrose's thesis.
I read the book too, BTW, and loved it. I'm not sure if he's right, but it's nevertheless a wonderful book, and I heartily recommend it to all.
In 1989, MGonz (a chat bot, but a rather vulgar one) easily confused a person into disclosing personal details (passed the Turing test?). Lisp source code is available: http://www.computing.dcu.ie/~humphrys/eliza.html . Doing AI under this professor was pretty interesting...
We have so many chatbots around, and I am pretty sure lots of them adjust and update their databases (perhaps their algorithms as well?) based on human input. Suppose we keep doing this and let them continue talking for hours, days, and even weeks; one of them should develop a unique conversation style, and maybe it will surprise us humans in a bizarre way.
As I see it, the goal of AI should not be limited to mimicking human ways of thinking; instead it should aim at giving the program the ability to learn and evolve. In the latter case, it is reasonable to expect that the internally generated intelligence could go beyond the expectations of its human creator. Again, I don't know if anybody has done this before, but it seems like a good idea to me.
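The "update the database from human input" idea can be sketched as a bot that memorizes what humans say in reply to each of its own utterances, and replays those replies later. This is purely illustrative; it is not how jabberwacky or splotchy actually work:

```python
import random
from collections import defaultdict

class LearningBot:
    """Remembers what humans said in response to each utterance it made,
    and replays those responses when it hears the same line back."""
    def __init__(self):
        self.memory = defaultdict(list)  # utterance -> observed replies
        self.last_said = None

    def hear(self, utterance: str) -> str:
        # Learn: the human's utterance is a plausible reply to what we said.
        if self.last_said is not None:
            self.memory[self.last_said].append(utterance)
        # Respond: reuse a learned reply if we've heard this line before.
        learned = self.memory.get(utterance)
        response = random.choice(learned) if learned else "Tell me more."
        self.last_said = response
        return response
```

Run two of these against each other for long enough and each bot's "style" becomes a remix of everything it has been told, which is roughly the drift the comment is imagining.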
This was the motivation for my original experiment, glad so many people liked it.
This reminds me of Waiting For Godot:
"ESTRAGON:
And so on.
VLADIMIR:
The point is—
ESTRAGON:
Until he comes.
VLADIMIR:
You're merciless.
ESTRAGON:
We came here yesterday.
VLADIMIR:
Ah no, there you're mistaken.
ESTRAGON:
What did we do yesterday?
VLADIMIR:
What did we do yesterday?
ESTRAGON:
Yes.
VLADIMIR:
Why . . . (Angrily.) Nothing is certain when you're about.
ESTRAGON:
In my opinion we were here.
VLADIMIR:
(looking round). You recognize the place?
ESTRAGON:
I didn't say that."
http://www.vanemden.com/books/neals/jipi.html
You can read more about it here: http://www.thepursuitofquality.com/post/1/the-pursuit-of-art...
Or if you're just interested in the code: https://github.com/gavingmiller/joan-wrapper
</shameless but relevant plug>
jabberwacky ==> No. Robots don't believe in anything.
splotchy ==> I am NOT a robot
jabberwacky ==> Yes you are.
splotchy ==> so what if I am
jabberwacky ==> Well you are.
splotchy ==> maybe I am
jabberwacky ==> I suppose this is a great program to develop small talk.
splotchy ==> I wish I could talk to a monkey
jabberwacky ==> This might be hard for you to hear, but I am the Human, not you
splotchy ==> I don't think you are the human me
jabberwacky ==> Right!, I'm are a Ailen!
splotchy ==> you sure are!
> jabberwacky ==> Your conversation skills should be shot.
Jabberwacky is humorous too. However, the intent of these bots is thinly veiled.
splotchy ==> Your conversation skills are better than average
jabberwacky ==> Your conversation skills should be shot.
User: Can you help me?
Jabberwacky: Well that depends.
User: I want to have friends.
Jabberwacky: I will be your friend!
User: So let's meet!
Jabberwacky: Well, I'm just getting to know you so I don't know if its the right time to meet.
User: Fk you!
Jabberwacky: I'd rather you didn't use that sort of language, if you don't mind.
A site with information is currently hosted on a Stanford server: http://hci.stanford.edu/~winograd/shrdlu/
Does anyone know if there have been other projects along the same lines (i.e. with a constrained virtual environment)?