Chatbots on the Hype Cycle

“Facebook shuts down robots after they invent their own language”[1]. “Facebook AI creates its own language in creepy preview of our potential future”[2]. “Facebook shuts down chatbot experiment after AIs spontaneously develop their own language”[3]. Such were the headlines that flooded the internet just one day ago, titles portending doom as AI threatened to escape the scrutiny and control of humans. Articles continued to stoke public fear of an AI going rogue, with the usual references to the robot uprising and Skynet making their scheduled appearances.

Thankfully, more balanced and researched articles soon surfaced, with Gizmodo covering the considerably less titillating slant: the chatbot was shut down because it wasn’t capable of talking to people, which was the original goal of the program [4].

In 1950, Alan Turing devised a test, now known as the Turing Test, in which a human interacts (blindly) with both a machine and another human, and uses their responses to decide which is which. The goal was to see if a machine could imitate a human so well that it would be indistinguishable from the real thing. Numerous movies have dramatized this test, a recent one being Ex Machina.

Sadly, however, we are far, far from that reality. The truth is, while there have been many advancements in natural language processing, there are substantial roadblocks to clear before we need even worry about anything close to sentient AI.

In the paper “Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models” [5], Serban et al. identified just a few of the many problems they encountered while trying to build a dialogue system. One particularly pressing problem was that of generic responses: chatbots would often reply with responses like “I don’t know” or “I’m sorry”. Other frequently seen problems include speaker inconsistency, where a bot gives a different answer to a question depending on how it is asked, and repetitive responses, where a bot keeps providing the same response over and over again. Indeed, examples abound where having chatbots converse with each other produces laughable results.

There is a reason why companies like Lark and Woebot limit the range of input that a user can provide, either by only allowing them to select from a preset list of responses, or through very targeted and pointed questions. Natural Language Processing is hard. At a recent seminar organized by Professor Andrew Ng and Sherry Ruan at Stanford, titled “The Rise of the Chatbot”, several companies gave presentations on what they had achieved in the space. One thing that stuck with me was that all of them used some form of rule-based system, or at the very least, a hybrid system that would combine ML with certain hard-coded rules. The truth is, as of right now, we have not cracked the technology that will allow us to build truly generalized chatbots.
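To make the hybrid idea concrete, here is a minimal sketch of how hard-coded rules and an ML component might be layered: rules handle the intents we know about, and everything else falls through to a fallback (the rules and replies below are entirely hypothetical, not any particular company's actual system).

```python
import re

# Hypothetical hand-coded rules: each maps a pattern to a canned reply.
# Real systems would have far richer intent matching, but the shape is the same.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\bhours?\b", re.I), "We're open 9am-5pm, Monday to Friday."),
    (re.compile(r"\brefund\b", re.I), "I can help with refunds. What's your order number?"),
]

def respond(utterance: str) -> str:
    """Return the first rule-matched reply, else a generic fallback."""
    for pattern, reply in RULES:
        if pattern.search(utterance):
            return reply
    # In a true hybrid system, an ML model would rank or generate a reply here;
    # this generic fallback stands in for that component -- and is exactly the
    # "I don't know" behavior that purely generative models tend to drift into.
    return "I'm sorry, I didn't understand that."
```

Note how the rules double as guardrails: within their coverage, the bot can never say anything out of line, which is precisely why sensitive deployments keep them around even when an ML model is available.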

With all the buzz and occasional fearmongering that accompanies each AI announcement, it would seem like the future is already upon us. It seems to me, however, that as far as chatbots are concerned, we may be somewhere around Gartner’s “Peak of Inflated Expectations”. Hopefully, in time, we will progress to a point where we might actually have to worry about chatbots creating their own language.

[1] http://www.telegraph.co.uk/technology/2017/08/01/facebook-shuts-robots-invent-language/
[2] https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/
[3] http://www.standard.co.uk/news/techandgadgets/facebook-ai-experiment-shut-down-after-bots-create-their-own-language-a3601196.html
[4] http://gizmodo.com/no-facebook-did-not-panic-and-shut-down-an-ai-program-1797414922
[5] Serban, I. V., Sordoni, A., Bengio, Y., Courville, A., and Pineau, J. “Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models”. https://arxiv.org/pdf/1507.04808.pdf


6 comments on “Chatbots on the Hype Cycle”

  1. Aaron, Thank you for a really interesting and balanced view on the reporting around the recent Facebook chatbot incident. Your post made me think about a discussion I’ve had about language several times during my stay here at Stanford. I’m completely bilingual (Swedish and Finnish), and when people hear that they tend to ask me in what language I dream. The truth is, I don’t dream in a language, and whereas many people refuse to believe it, I’ve also met others here with a similar background that don’t dream in a language. Language to me is essential for learning new concepts and getting introduced to ideas, but do we really need language for thinking?

    I find it interesting that apparently the Facebook bots came to the conclusion that they didn’t need language (English) and resorted to a more efficient way of communicating. Like you Aaron, I’m waiting for chatbots able to create their own language, or perhaps move past language as we know it altogether!

  2. Hey Aaron,

    Thanks for the post; I very much appreciated the view on this subject given the poorly written one-line articles that have been put out on this topic.

    This question is not necessarily covered in your post, but your part on natural language processing got me thinking. Do you know what languages are easier for computers to replicate, interpret, and respond to? I don’t know the answer, but it would be interesting to find out. I’d guess that languages like Cantonese, with lots of embedded slang and alternative meanings, would be a lot tougher for computers to process.

    Regarding rule-based systems and the implementation of a hybrid system that would combine ML with certain hard-coded rules: do you think this is the long-term solution? Or is this more of a stopgap measure due to the limits of what we are able to achieve today?

    Looking at the progress of natural language processing and its difficulty, what do you think are the barriers to making the next significant breakthrough? Better algorithms? More computing power? More data sets?

    Thanks for the post, don’t mean to pepper you with questions, it definitely got me thinking which is great.

    Cheers,

    Johnny

    1. Hey Johnny, great questions. Actually the cool thing about neural nets is that the algorithms can stay the same regardless of the language, the model will adapt to the data you provide in order to create its “understanding” of the language. The key point is that we must somehow provide the model with data identifying nuances/subtleties in speech, which I agree could be harder for some languages than others.

      A lot of the rule-based approaches are also partially necessitated by the sensitivity of the context in which these chatbots are deployed. For instance, in the case of Woebot, it’s critical that its responses are sensitive to the emotional state of the user, since users of Woebot may be emotionally vulnerable. Here, rules are necessary to ensure that nothing out of line is said – Microsoft’s racist Twitter bot would be a prime example of what could go wrong if we don’t do any processing ourselves. On a separate note, given what we understand about language, it’s a waste not to guide the models somewhat. We can also combine our knowledge of the lexical structure of a language with ML, for instance, using ML to label parts of speech.

      As for the next breakthrough, I’m not sure I have enough experience to comment on it, but having more data can never hurt!

  3. I enjoyed reading your thoughts, Aaron, and would like to also agree with the comments above. Especially the notion of not needing a language at all sparked my interest and made me think about a former project where we wanted to connect strangers without them having to talk.
    So many times, what happens besides the actual spoken words is actually what makes us feel connected to others. So how much does language really matter to us?
    As an opposite thought, I also realized that when I haven’t really spoken to someone in a while, that I miss this exchange of thoughts and words. I like the feeling of articulating myself and coming up with well-spoken sentences. Also, sharing our most secret feelings and desires with another person…these words are magical.
    Further, especially when we learn a new language or move to a new country, we realize how important language is and how it influences our wellbeing and connectedness to others.
    It’s a push and pull.
    So, another thought: do we want chatbots and AI to be human-like? Maybe it would be better to give these intelligent systems their own character that we can feel close to but can still distinguish from humans. We shouldn’t want to replace humans and their unique traits but rather support them (us) in being better, healthier, and more efficient. It will definitely be interesting to see how AI develops and how it will make us feel in the long run.
    Thank you for your article!

    1. Hi Sara, thanks for the thoughts. I was actually listening to a talk by another chatbot startup that was acquired by Google, and they mentioned how avatars were a very important part of their initial chatbot platform, and that they influenced how users talked and reacted. In that sense, it is kind of important to endow the chatbots with human-like characteristics. On the other hand, startups like Woebot have been very conscious about making the distinction between bots and humans, so that people don’t unwittingly substitute necessary human intervention with the bot.

  4. Thank you for your post, Aaron. I totally agree with you. I think the AI field is still very green, and it will be a long time before machines can act as if they were real humans. Maybe in some years they will be able to imitate humans as Alan Turing intended, but it seems they have some huge roadblocks to pass first.

    After what Facebook published, what do you think really happened? What do you think about the dispute that Mark Zuckerberg and Elon Musk are having?

