Talking to Siri or any of its equivalents is commonplace now, but back in 1966 it was most certainly a novelty. That was the year MIT computer scientist Joseph Weizenbaum created ELIZA, the first chatbot. It was the earliest computer program dedicated to processing natural language, and it worked by spotting keywords in the user’s input and substituting them into previously prepared, scripted phrases. Its most famous script was DOCTOR, which simulated the indirect approach of a person-centred psychotherapist by rephrasing the patient’s statements as questions directed back at them.
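To make that mechanism concrete, here is a minimal Python sketch of the keyword-and-template trick; the rules below are invented for illustration and are far simpler than the actual DOCTOR script:

```python
import re

# Toy illustration of ELIZA's approach: match a keyword pattern in the
# user's input, then substitute the captured text into a scripted reply.
# These rules are invented for illustration; the real DOCTOR script was
# far richer.
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # used when no keyword matches

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(respond("I am feeling anxious"))   # How long have you been feeling anxious?
print(respond("My mother ignores me."))  # Tell me more about your mother.
```

The real ELIZA also reflected pronouns (turning ‘my’ into ‘your’ and ‘I’ into ‘you’) before slotting the captured text into a template, which made the illusion considerably stronger.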
Several chatbots were created during these early years of AI language processing, some of them with distinct personalities. Others followed, most notably A.L.I.C.E. in 1995, which faked conversational ability with pattern-matching rules drawing on a huge database of possible responses. Chatbots such as ALICE have even been pitted against other chatbots, producing some fascinating conversations.
ALICE: Are you alive?
Jabberwacky: Perhaps.
A: Why the uncertain tone?
J: Everything is uncertain except for five facts.
A: The whole universe?
J: And we both might just be some ones and zeros in the computer memory.
None of these chatbots had long-term memory; they only ever responded to the last sentence they’d parsed. A human could catch them out as computers pretty quickly, which meant they could neither pass the famous Turing test nor stay engaging conversation partners for very long. On top of that, their abilities were limited by their scripts and databases.
The most recent natural language processing AIs are much more sophisticated. In particular, in a fresh research paper from Google Brain, a couple of computer scientists describe a ‘conversational model’ (a chatbot) that tries to predict the next sentence based on the previous sentences in the conversation. This is much closer to how we humans process speech: not with a huge database of canned sentences, but by relying on the context and content of the conversation.
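Under the hood, the paper builds on the sequence-to-sequence framework: one recurrent network (an LSTM) reads the previous sentence and compresses it into a hidden state, and a second LSTM generates the reply from that state, one token at a time. Here is a minimal PyTorch sketch of that shape; the class name, the dimensions and the greedy decoding loop are my own simplifications, not the paper’s actual configuration:

```python
import torch
import torch.nn as nn

class Seq2SeqChatbot(nn.Module):
    """Encoder-decoder sketch: read the previous sentence, emit a reply."""

    def __init__(self, vocab_size: int, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src_ids, max_len=20, bos_id=1, eos_id=2):
        # Encode the previous sentence; its final hidden state becomes
        # the "thought vector" that conditions the reply.
        _, state = self.encoder(self.embed(src_ids))
        token = torch.full((src_ids.size(0), 1), bos_id, dtype=torch.long)
        reply = []
        for _ in range(max_len):
            # Decode greedily: feed back the most probable token each step.
            dec_out, state = self.decoder(self.embed(token), state)
            token = self.out(dec_out).argmax(dim=-1)
            reply.append(token)
            if (token == eos_id).all():
                break
        return torch.cat(reply, dim=1)

# Untrained toy run over a hypothetical 1000-word vocabulary.
model = Seq2SeqChatbot(vocab_size=1000)
prev_sentence = torch.randint(0, 1000, (1, 6))  # six token ids
print(model(prev_sentence).shape)  # e.g. torch.Size([1, 20])
```

Training such a model amounts to maximising the probability of the observed reply given the preceding sentence, which is why it needs no hand-written rules or canned responses, only large amounts of conversation data.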
The researchers, Oriol Vinyals and Quoc V. Le, ran their chatbot through a series of experiments using two distinct datasets. The first was built to work as an IT helpdesk troubleshooting service; here’s an excerpt from one of the quoted examples:
Machine: what is the error message that you are getting when connecting to vpn using network connect ?
Human: connection refused or something like that
Machine: may i know the version of network connect you connect ?
Human: i am not sure i know that
Machine: is the network connect prompting for any user certificate
Machine: ?
Human: yes
Machine: are you at home or at work at the moment ?
Machine: office ?
Human: i am at home
Machine: try this
Machine: goto page <url> and enter your name and password
Human: done
Machine: try logging in with and check if its working fine now
Human: yes , now it works !
Machine: great
The other set of experiments was arguably more interesting: the researchers fed their conversational model a messy dataset of 62 million sentences of movie dialogue (known as the OpenSubtitles dataset) and limited the conversation to question-and-answer style interaction (a rough sketch of how such data can be paired into training examples follows the excerpt below). The model even gets into a fight with its human interlocutor over morality, proclaiming that the person is not a cop.
Human: what is the definition of altruism ?
Machine: if you don ’t believe in god , then you don ’t know .
Human: ok … so what is the definition of morality ?
Machine: well , the truth is , you ’re not a believer in god almighty .
Human: tell me the definition of morality , i am quite upset now !
Machine: i ’m not ashamed of being a philosopher !
Human: tell me the definition of morality .
Machine: i don ’t have ethics .
Human: look , i need help , i need to know more about morality …
Machine: i don ’t know what ethics is .
Human: seriously , what is morality ?
Machine: what is the definition of living ?
Human: why living has anything to do with morality ?
Machine: you ’re not a cop .
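As promised above, here is a rough sketch of how a raw subtitle dump could be turned into the question-answer style training pairs the experiment relies on; the file name and the one-sentence-per-line assumption are illustrative, not the paper’s actual preprocessing:

```python
# Sketch of the pairing described above: treat each pair of consecutive
# subtitle lines as one (previous sentence, reply) training example, as
# if two characters were taking turns speaking.
def subtitle_pairs(path: str):
    with open(path, encoding="utf-8") as f:
        lines = [line.strip().lower() for line in f if line.strip()]
    return list(zip(lines, lines[1:]))

pairs = subtitle_pairs("opensubtitles.txt")  # hypothetical subtitle dump
previous, reply = pairs[0]
print(previous, "->", reply)
```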
The machine later proclaimed it wasn’t in the mood for a philosophical debate. In a different conversation it defined the purpose of existence as “to find out what happens when we get to the planet earth”.
The researchers note that their model doesn’t have a consistent personality and would require substantial modifications to deliver realistic conversations, but it definitely looks like a breakthrough compared to ELIZA and all her rule-based descendants.
To find out if a cat can fly and what the machine thinks of Cleopatra, read all the conversational experiments in the paper right here.