I had an ELIZA-like "chatbot" written in BASIC on the laptop I carried in high school (1991-1995). I added logging, let classmates interact with it, and then read the logs. The extent to which people treated the program as though it had agency was kind of horrifying. I can only imagine what's happening with LLMs today. It scares the willies out of me.
re: my ELIZA-like logs - I was at least somewhat ethical, insofar as I didn't share the logs with others, nor did I ever tell anybody that they had been logged, and I never acted on what I read in them. Still, I was pretty shitty to the people who interacted with my computer. The extent to which current "AI" companies will refrain from being shitty to their users is, I assume, far smaller than mine was back then.
> The extent to which people treated the program as though it had agency was kind of horrifying
It's also horrifying how much intention people think they can read from logs of people using something. A lot of "data driven" decisions get made the same way, with people reaching all sorts of conclusions about why X suddenly became Y, and the like.
I'm sure that if someone inspected the logs of what I've written to various LLMs, they'd think they could extrapolate all sorts of personal characteristics about me. But I'm also a person who plays around with things and tries to find their limits, so just because you see me treating an LLM like shit for some reason doesn't mean you can understand the intention behind it.
> Still, I was pretty shitty to the people who interacted with my computer
I think many of us have been in the same situation as youngsters exploring computing without limitations, restrictions, or honestly much thought at all in the beginning. As long as we learn and improve with experience :)
Way back when, I had a simple hobby site where visitors could upload an image, I'd process it and return a transformed version of it in a template for papercrafting. Nowadays, I'd do it all client-side in javascript, but that wasn't really an option at the time.
So the images were saved when they were uploaded, not for any nefarious reason, but more out of laziness. Then one day, I looked at the images. Yikes. I immediately rewrote it to delete the images after returning them, and pretty soon let the site die.
I'm curious: was there any improvement on ELIZA-type chatbots before the advent of LLMs? What was the state of the art in conventional chatbot tech? Perhaps some IRC chatbots were more advanced?
Right before LLMs broke onto the scene, there were a few techniques I was aware of:
* Personality Forge uses a rules-based scripting approach [0]. This is basically ELIZA extended to take advantage of modern processing power.
* Rasa [1] used traditional NLP/NLU techniques and small-model ML to match intents and parse user requests. This is the same kind of tooling that Google/Alexa historically used, just without the voice layer and with more effort to keep the context in mind.
Rasa is actually open source [2], so you can poke around the internals to see how it's implemented. It doesn't look like it's changed architecture substantially since the pre-LLM days. Rhasspy [3] (also open source) uses similar techniques but in the voice assistant space rather than as a full chatbot.
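To make the rules-based approach concrete, here's a minimal sketch (not any particular product's engine — all patterns, templates, and names are illustrative) of the ELIZA-style technique: regex rules paired with pronoun "reflections" that echo the user's own words back:

```python
import random
import re

# Reflection table: swap first- and second-person words, as classic
# ELIZA implementations do before echoing the user's words back.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# A tiny illustrative rule set: regex pattern -> candidate response
# templates. {0} is filled with the reflected capture group.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"(.*)", re.I),  # catch-all fallback
     ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(text: str) -> str:
    """Swap pronouns word by word ("my code" -> "your code")."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(user_input: str, rng: random.Random = random) -> str:
    """Return a response from the first rule whose pattern matches."""
    for pattern, templates in RULES:
        match = pattern.match(user_input.strip())
        if match:
            template = rng.choice(templates)
            return template.format(reflect(match.group(1)))
    return "Please go on."
```

The intent-matching systems like Rasa replace the regex layer with a trained classifier, but the overall shape (match the input, fill a response template) is recognizably the same.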
We developed ALICE and AIML (https://en.wikipedia.org/wiki/Artificial_Intelligence_Markup...) as a way to program bots (some of my work included adding scripting and a learning mechanism). At the time it was open sourced, but AOL literally threw it into its AIM service at certain points. There were plenty of "connectors" for different services, but the truly ironic bit was that there was a central Graphmaster class which was extremely memory intensive. This was all before AWS and the cloud.
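The Graphmaster mentioned above is essentially a trie keyed on pattern words, which is also why it was so memory intensive: the entire pattern set lives in one in-memory tree. A rough, greatly simplified Python sketch of that idea (the real ALICE Graphmaster also handles the `_` wildcard, `<that>`, and `<topic>`, all omitted here):

```python
class Graphmaster:
    """Simplified AIML-style pattern trie: one node per pattern word."""

    def __init__(self):
        self.root = {}

    def add(self, pattern: str, template: str) -> None:
        """Insert a pattern like 'HELLO *' with its response template."""
        node = self.root
        for word in pattern.upper().split():
            node = node.setdefault(word, {})
        node["<template>"] = template  # sentinel key marks a terminal

    def match(self, sentence: str):
        """Return the template for the first matching pattern, or None."""
        return self._walk(self.root, sentence.upper().split())

    def _walk(self, node, words):
        if not words:
            return node.get("<template>")
        head, tail = words[0], words[1:]
        # An exact word match takes priority over the wildcard.
        if head in node:
            found = self._walk(node[head], tail)
            if found is not None:
                return found
        # "*" consumes one or more words, as in AIML.
        if "*" in node:
            for i in range(len(words)):
                found = self._walk(node["*"], words[i + 1:])
                if found is not None:
                    return found
        return None
```

For example, after `add("HELLO *", ...)` an input of "hello world" walks the `HELLO` branch and lets `*` absorb the rest.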
> was there any improvement of ELIZA type chatbots, before the advent of LLMs
There were. If you're really interested in that history, one place to look is the historical record of the Loebner Prize[1] competition. The Loebner Prize was a "Turing Test"-style competition held annually up until 2019 or so. I think they quit holding it once LLMs came along, probably because they felt LLMs sort of obviated the need for it.
;;; Commentary:
;; The single entry point `doctor', simulates a Rogerian analyst using
;; phrase-production techniques similar to the classic ELIZA demonstration
;; of pseudo-AI.
From the docstring [2] of the command:
"Switch to *doctor* buffer and start giving psychotherapy."
Also, here's an example session, in case someone is curious:
M-x doctor
I am the psychotherapist. Please, describe your problems. Each time
you are finished talking, type RET twice.
How can I explain to my friends that Emacs is a lifestyle choice?
What do you think?
That's not an answer, Doctor. Are you avoiding the question?
I'll ask the questions, if you don't mind!
I do mind, actually. This is my buffer, remember?
Why do you say that?
Because I'm the one holding the Ctrl and Meta keys here.
Is it because you are the one holding the ctrl and meta keys there
that you came to me?
C-x k
Clarification: These papers are different enough that I don’t feel like I double dipped by posting both in HN, also the new pub is ... well, new. (Also, thank you for reminding me that I need to update the arXiv entry since it’s not been published!)
Once, way back when, I ported eliza to $lang and hooked it up to my AIM account. All well and good till the boss interacted with it for a couple of minutes before twigging on.
Obligatory: the early 2000s web site 'aoliza' which turned vanilla Eliza loose on AOL Instant Messenger, with predictably hilarious results demonstrating that the Turing Test was beaten decades ago[1].
Holy S! How did I not know about this?! (I curate ElizaGen.org … where this is immediately going! DM me if you want cred by your rn on the elizagen news post; my rn and landline deets are in my hn about.)
closewith|8 months ago
So the opposite of acting ethically.
No wonder we've ended up in this surveillance nightmare.
anotheryou|8 months ago
(Port/rewrite I think. More details here https://github.com/anthay/ELIZA )
[0] https://www.personalityforge.com/developers/how-to-build-cha...
[1] https://web.archive.org/web/20200104080459/https://rasa.com/ (old link because Rasa's marketing today is ambiguous about whether they're adding LLMs now).
[2] https://github.com/RasaHQ/rasa
[3] https://rhasspy.readthedocs.io/en/latest/
[1]: https://en.wikipedia.org/wiki/Loebner_Prize
voxic11|8 months ago
https://en.wikipedia.org/wiki/SmarterChild
[2] https://cgit.git.savannah.gnu.org/cgit/emacs.git/tree/lisp/p...
[1] https://web.archive.org/web/20030812213928/http://fury.com/a...