ELIZA Reanimated: Restoring the Mother of All Chatbots (computer.org)
106 points by abrax3141 5 days ago | hide | past | favorite | 30 comments





I had an ELIZA-like "chatbot" written in BASIC on the laptop I carried in high school (1991-1995). I added logging, let classmates interact with it, and then read the logs. The extent to which people treated the program as though it had agency was kind of horrifying. I can only imagine what's happening with LLMs today. It scares the willies out of me.

re: my ELIZA-like logs - I was at least somewhat ethical, insofar as I didn't share the logs with others, nor did I ever tell anybody that they had been logged or acted upon what I read in the logs. Still, I was pretty shitty to the people who interacted with my computer. The extent to which current "AI" companies won't be shitty to users is, I assume, much less than I was back then.


> The extent to which people treated the program as though it had agency was kind of horrifying

It's also horrifying how much intention people think they can infer from looking at logs of someone using something. I know a lot of "data driven" decisions work the same way, where people reach all sorts of conclusions about why X suddenly became Y, or the like.

I'm sure if someone inspected the logs of what I've written to various LLMs they'd think they could extrapolate all sorts of personal characteristics about me, but I'm also a person who plays around with things and tries to find their limits, so just because you see me treating an LLM like shit for some reason doesn't mean you can understand the intention behind that.

> Still, I was pretty shitty to the people who interacted with my computer

I think as youngsters exploring computing without limitations, restrictions, or honestly much thought at all in the beginning, many of us have been in the same situation. As long as we learn and improve with experience :)


> I'm sure if someone inspected the logs of what I've written to various LLMs they'd think they can extrapolate all sorts of personal characteristics about me, but I'm also a person who plays around with things, tries to find limits and whatever...

If you looked at my LLM interaction logs you would probably assume that I have an unhealthy obsession with pirates and a napalm fetish.

In reality, I use the "can I get it to tell me how to make napalm" thing as a quick "acid test" of the extent and strength of censorship controls, and I simply find asking LLMs to "talk like a pirate" amusing. And I've also found occasions where doing nothing more than instructing the LLM to talk like a pirate will bypass its built-in inhibitions against things like giving instructions for making napalm.


Now explain that to the police. And to the court.

Way back when, I had a simple hobby site where visitors could upload an image, I'd process it and return a transformed version of it in a template for papercrafting. Nowadays, I'd do it all client-side in javascript, but that wasn't really an option at the time.

So the images were saved when they were uploaded, not for any nefarious reason, but more out of laziness. Then one day, I looked at the images. Yikes. I immediately rewrote it to delete the images after returning them, and pretty soon let the site die.


> nor did I ever tell anybody that they had been logged

So the opposite of acting ethically.

No wonder we've ended up in the surveillance nightmare we find ourselves in.


> So the opposite of acting ethically.

I think ethical behavior is a continuum and I don't see it as binary. Then again, I'm not formally trained in ethics either.

I clearly stated I handled it only somewhat ethically, at best (i.e. "...pretty shitty to people..."). Even then, I'd argue I acted closer to the "ethical" end of that continuum than the opposite. I could have shared the logs, for example. That would be much closer to the "unethical" end of that spectrum to my mind.

I definitely handled it poorly, but I could have handled it worse. For the people who were "surveilled", the impact on their lives was the same as if they had not been.

> No wonder we've ended up in the surveillance nightmare we find ourselves in.

The "user surveillance" on my personal standalone laptop computer 30 years ago doesn't have much bearing on the for-profit companies who profit from mass user surveillance today, except perhaps as being emblematic of the human nature to find novelty in "secrets". I don't think I bear any personal responsibility for the world we live in today in this regard.


Authentic eliza in the browser: https://anthay.github.io/eliza.html

(Port/rewrite I think. More details here https://github.com/anthay/ELIZA )


I am curious: was there any improvement in ELIZA-type chatbots before the advent of LLMs? What was the state of the art of conventional chatbot tech? Perhaps some IRC chatbots were more advanced?

Right before LLMs broke onto the scene, there were a few techniques I was aware of:

* Personality Forge uses a rules-based scripting approach [0]. This is basically ELIZA extended to take advantage of modern processing power.

* Rasa [1] used traditional NLP/NLU techniques and small-model ML to match intents and parse user requests. This is the same kind of tooling that Google/Alexa historically used, just without the voice layer and with more effort to keep the context in mind.

Rasa is actually open source [2], so you can poke around the internals to see how it's implemented. It doesn't look like it's changed architecture substantially since the pre-LLM days. Rhasspy [3] (also open source) uses similar techniques but in the voice assistant space rather than as a full chatbot.

[0] https://www.personalityforge.com/developers/how-to-build-cha...

[1] https://web.archive.org/web/20200104080459/https://rasa.com/ (old link because Rasa's marketing today is ambiguous about whether they're adding LLMs now).

[2] https://github.com/RasaHQ/rasa

[3] https://rhasspy.readthedocs.io/en/latest/
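To make the rules-based approach concrete, here's a minimal ELIZA-style matcher sketched in Python. The rules, response templates, and reflection table are invented for illustration; a real engine (Personality Forge scripts, or ELIZA itself) has far richer pattern syntax, memory, and ranking:

```python
import re
import random

# Hypothetical ELIZA-style rules: a regex plus response templates.
# "{0}" is filled with the first captured group after pronoun reflection.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"(.*)", re.I),          # catch-all, tried last
     ["Please tell me more.", "I see. Go on."]),
]

# Reflect first/second person so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text: str) -> str:
    for pattern, templates in RULES:
        match = pattern.match(text.strip())
        if match:
            template = random.choice(templates)
            groups = [reflect(g) for g in match.groups()]
            return template.format(*groups)
    return "Please go on."
```

For example, `respond("I need a vacation")` echoes the captured fragment back inside a canned question. "Modern processing power" mostly buys you many more rules and cheaper matching, not a different idea.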


We developed ALICE and AIML (https://en.wikipedia.org/wiki/Artificial_Intelligence_Markup...) as a way to program bots (some of my work included adding scripting and a learning mechanism). It was open sourced at the time, and AOL literally threw it into its AIM service at certain points. There were plenty of "connectors" for different services, but the real ironic bit was that there was a central Graphmaster class which was extremely memory intensive. This was all before AWS and the cloud.
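For readers unfamiliar with AIML internals: the Graphmaster is, roughly speaking, a prefix trie over the words of each pattern, which is why it holds the entire ruleset in memory. Here's a deliberately simplified sketch in Python; real AIML also has the `_` wildcard, `<that>`/`<topic>` context, and input normalization, all omitted here:

```python
# Simplified Graphmaster: a trie keyed on uppercase words, where "*"
# matches one or more words. Exact words are tried before "*".
class Graphmaster:
    def __init__(self):
        self.children = {}    # word (or "*") -> Graphmaster
        self.template = None  # response stored where a pattern ends

    def add(self, pattern: str, template: str) -> None:
        node = self
        for word in pattern.upper().split():
            node = node.children.setdefault(word, Graphmaster())
        node.template = template

    def match(self, words):
        if not words:
            return self.template
        head, rest = words[0], words[1:]
        # Exact word match first...
        if head in self.children:
            result = self.children[head].match(rest)
            if result is not None:
                return result
        # ...then "*", greedily consuming one or more words.
        if "*" in self.children:
            star = self.children["*"]
            for i in range(len(words), 0, -1):
                result = star.match(words[i:])
                if result is not None:
                    return result
        return None

    def respond(self, text: str):
        return self.match(text.upper().split())
```

With patterns like `HELLO *` and `WHAT IS YOUR NAME` loaded, `respond()` walks the trie word by word. The memory pressure mentioned above comes from every pattern in the ruleset living in this one shared structure.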

I made a private fork of ALICE back in the day and maintained my own response ruleset to give it a bespoke personality. I extended the main ALICE codebase with a TCP-based API server, and wrote another service that connected ALICE to IRC channels. I also made a GTK-based UI for starting, stopping, reloading, and monitoring ALICE, and for easing the writing of rule files. This gave me an IRC buddy that joined me in chatrooms.

If I remember correctly, I also modified the Graphmaster to add support for rule priorities, so that I could better manage rules beyond the tree-based matching approach.

One of the first things people would do, upon discovering that she was a bot, was to try to break her responses.

All of this was for private use, nothing was open sourced. Unfortunately I think I forgot to copy it over from an old hard drive during a computer hardware migration, so it's gone now.

I remember Richard Wallace writing something along the lines of "if I were to build an artificial intelligence, I wouldn't use flesh and bones, that's just a bad choice" (not a verbatim quote) in defense against people accusing AIML of being too simple/dumb an approach, with those people favoring more complex approaches. In the age of LLMs, that statement has aged both well and badly.


AIML was great. I once took a stab[1] at creating an AI Paul Graham using AIML. It was more of an amusement than anything serious, but still, messing around with AIML was cool.

[1]: https://github.com/mindcrime/pgbot


> was there any improvement of ELIZA type chatbots, before the advent of LLMs

There were. If you're really interested in that history, one place to look is the historical record of the Loebner Prize[1] competition. The Loebner Prize was a competition based on a "Turing Test"-like setup and was held annually up until 2019 or so. I think they quit holding it once LLMs came along, probably because they felt LLMs sort of obviated the need for it.

[1]: https://en.wikipedia.org/wiki/Loebner_Prize


Well, back then you had to be (or at least seem) original in fooling a human, and many bots made sure NOT to include copyrighted material. Also, many of them were run locally (and in memory), so they ran faster and without the need for GPU memory (not to mention GPUs/CUDA were not readily available, or available at all, either).

Smarterchild was the best one I saw before LLMs.

https://en.wikipedia.org/wiki/SmarterChild


For Emacs users, see also:

  M-x doctor
From its commentary [1] in the source code:

  ;;; Commentary:

  ;; The single entry point `doctor', simulates a Rogerian analyst using
  ;; phrase-production techniques similar to the classic ELIZA demonstration
  ;; of pseudo-AI.
From the docstring [2] of the command:

  "Switch to *doctor* buffer and start giving psychotherapy."
[1] https://cgit.git.savannah.gnu.org/cgit/emacs.git/tree/lisp/p...

[2] https://cgit.git.savannah.gnu.org/cgit/emacs.git/tree/lisp/p...


Also, here's an example session, in case someone is curious:

  M-x doctor

  I am the psychotherapist.  Please, describe your problems.  Each time
  you are finished talking, type RET twice.

  How can I explain to my friends that Emacs is a lifestyle choice?

  What do you think?

  That's not an answer, Doctor.  Are you avoiding the question?

  I'll ask the questions, if you don't mind!

  I do mind, actually.  This is my buffer, remember?

  Why do you say that?

  Because I'm the one holding the Ctrl and Meta keys here.

  Is it because you are the one holding the ctrl and meta keys there
  that you came to me?

  C-x k

I fondly remember M-x psychoanalyze-pinhead as well. (Though the actual Zippy the Pinhead quotes have long since been removed.)


Clarification: These papers are different enough that I don’t feel like I double dipped by posting both in HN, also the new pub is ... well, new. (Also, thank you for reminding me that I need to update the arXiv entry since it’s not been published!)

Not saying there was any double-dipping. Only pointing others to related discussion. (I myself added copies of both papers to my Zotero library—which isn't something I would have done if I thought they were duplicates.)

Once, way back when, I ported eliza to $lang and hooked it up to my AIM account. All well and good till the boss interacted with it for a couple of minutes before twigging on.

Obligatory: the early 2000s web site 'aoliza' which turned vanilla Eliza loose on AOL Instant Messenger, with predictably hilarious results demonstrating that the Turing Test was beaten decades ago[1].

[1] https://web.archive.org/web/20030812213928/http://fury.com/a...


Holy S! How did I not know about this?! (I curate ElizaGen.org … where this is immediately going! DM me if you want cred by your rn on the elizagen news post; my rn and landline deets are in my hn about.)

you can use elizallm.com (it also offers the openai api just in case you need that).

ELIZA is not an LLM. This site also doesn't say what program it is actually running, or give any details at all. It's just a chat box without any explanation.

does eliza need some kind of explanation?

Yes. Where they got it from, what version exactly it is, how it's implemented, etc.

HOW DO YOU DO. PLEASE TELL ME YOUR PROBLEM


