Can Consumers Trust Chatbots?

If you asked somebody just two years ago about chatbots, the response would have been extremely optimistic. The retention rates of mobile apps were on the decline, while the popularity of messaging apps was on the rise. Back then, it was not uncommon to see articles with headlines like: "This is how chatbots will kill 99 percent of apps."

Current trends seem to substantiate such predictions. Facebook Messenger crossed the 1 billion user mark in 2016. That was also the year it introduced its Bots API, which now boasts more than 100,000 bots. Western developers are enthusiastically looking to apps such as China's WeChat, a much more immersive messaging platform whose bots are capable of a myriad of tasks, like calling a taxi or making appointments. Analysts predict that by 2020, more than 85 percent of customer interactions will be managed without a human in the loop.

But being excited about chatbots and actually implementing a working chatbot are two very different things. For users to engage with it consistently, a chatbot needs to be easy to use, efficient, and able to get the job done. Businesses are experimenting with chatbots, with mixed results. In an independent survey of 3,000 U.S. and U.K. consumers conducted by Chatbots.org in late 2017, consumers expressed their dissatisfaction with chatbots. Among the findings were the following:

  • 59 percent of respondents said they disliked having to repeat information and context to a human agent after a chatbot handed them off.
  • 32 percent disliked chatbots getting stuck and not knowing what to do next.

Perhaps it's time to ask an important question: Are some chatbots not yet ready for duty?

Back to Basics: Chatbots 101

A chatbot is any program that mimics real conversation. It can be embedded in a website or run through a third-party messaging platform like Facebook Messenger or Slack. Chatbots rely on natural language processing (NLP), the same technology behind virtual assistants like Google Now or Apple's Siri.

Chatbots follow three simple steps: (1) understand, (2) act, (3) respond. First, the chatbot processes what the user sends; next, it acts according to a series of algorithms that interpret what the user said; finally, it picks from a set of appropriate responses. You can program a chatbot to respond the same way each time or to respond differently to messages containing certain keywords.
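
To make those three steps concrete, here is a minimal sketch in Python of a keyword-driven bot. The keywords, replies, and function names are invented for illustration; a real chatbot would replace the keyword lookup with an NLP model.

    import random

    # Canned responses keyed by keyword; the keywords and replies here are
    # purely illustrative.
    RESPONSES = {
        "refund": ["I can help with refunds. What's your order number?"],
        "hours": ["We're open 9 a.m. to 5 p.m., Monday through Friday."],
    }
    FALLBACK = ["Sorry, I didn't catch that. Could you rephrase?"]

    def understand(message):
        # Step 1: reduce the raw message to the keywords we recognize.
        words = message.lower().split()
        return [keyword for keyword in RESPONSES if keyword in words]

    def act(keywords):
        # Step 2: gather the responses that apply, or fall back.
        return [reply for k in keywords for reply in RESPONSES[k]] or FALLBACK

    def respond(options):
        # Step 3: pick one of the appropriate responses.
        return random.choice(options)

    print(respond(act(understand("What are your hours on Friday?"))))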

If you had to pick between an app and a chatbot, you'd pick the app for its user interface, but you'd pick the chatbot for its simplicity. A chatbot should be easy to use, or not used at all. As Stefan Kojouharov, founder of Chatbots Life, explains, chatbots are useful in two cases:

  1. Conversational: When an app can't handle the task because multiple, variable inputs are needed to solve the problem.
  2. Simplicity: When a bot offers the most immediate and direct solution to a problem.

That sounds simple enough. So, what's the problem? Because chatbots are so dependent on NLP, they are also constrained by its limitations. Even the best NLP rarely matches the capabilities of a human. That might be acceptable for many applications that leverage NLP, but not for a chatbot. A user will only tolerate a chatbot if it speeds up communication rather than hindering it. At NLP's current stage, it is entirely possible for a chatbot to recognize words without grasping their meaning. NLP will need major improvements before users rely on chatbots regularly.

The other issue is human speech itself. Chatbots try to mimic conversations, but most conversations aren't linear. Discussions restart, veer into tangents, or cover multiple topics at once. This is very tough to follow algorithmically.

Humans in the Loop

So how do we get over this technological hurdle? The answer: data, data, and more data. One of the best ways to improve a chatbot is to train it constantly on different human interactions. The more data you feed a chatbot, the better it adapts to human speech and all its idiosyncrasies, and the closer it gets to that human-like level of conversation.

Rajaih Nuseibeh of boutique.ai states that successful human-like chatbots rely on two things:

  1. Contextual understanding - the ability to remember and track different aspects of a conversation (location, time, people's preferences) and combine all of those inputs to paint a picture of the conversation. Just as humans use surrounding context to inform their interactions, chatbots need information that keeps the conversation going.
  2. Intent recognition - the ability to extract the relevant information from each sentence, word, and verb and understand the intention and meaning behind it. This lets users write long, complex sentences, because the chatbot can understand and extract multiple intents. (A rough sketch of both ideas follows this list.)
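
As a rough illustration of how these two capabilities might fit together, the toy Python below keeps a running context of conversation slots and matches keywords to intents. The intent names, keywords, and data structures are assumptions made for this sketch, not a description of any particular product.

    from dataclasses import dataclass, field

    @dataclass
    class ConversationContext:
        # Remembers slots (location, time, preferences) across turns.
        slots: dict = field(default_factory=dict)

        def update(self, **found):
            self.slots.update({k: v for k, v in found.items() if v})

    # Naive keyword-based intent recognizer; a production bot would use a
    # trained model instead. Intent names and keywords are invented.
    INTENT_KEYWORDS = {
        "book_table": {"book", "table", "reservation"},
        "get_weather": {"weather", "forecast", "rain"},
    }

    def recognize_intents(message):
        words = set(message.lower().split())
        return [intent for intent, kws in INTENT_KEYWORDS.items() if words & kws]

    ctx = ConversationContext()
    ctx.update(location="Boston")  # remembered from an earlier turn
    intents = recognize_intents("Book a table for tonight and check the weather")
    print(intents, ctx.slots)
    # ['book_table', 'get_weather'] {'location': 'Boston'}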

These complex features require a robust NLP foundation that can only be improved and sustained by high-quality training data. That's where AI training data providers are useful. They can provide the training tools you need, including potential questions, answers, and conversation templates. Since text structure is key to understanding and answering requests properly, a very large number of example sentences is needed to show the chatbot the terms on which to focus and what the user wants to achieve.
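
A slice of such training data might look like the following hypothetical example, where each intent is paired with several differently phrased sentences:

    from collections import Counter

    # Hypothetical intent training set: each intent gets several differently
    # phrased example sentences so the chatbot learns which terms to focus on
    # and what the user wants to achieve.
    TRAINING_DATA = [
        ("track_order", "Where is my package?"),
        ("track_order", "Has my order shipped yet?"),
        ("track_order", "I want to check my delivery status"),
        ("cancel_order", "Please cancel my last order"),
        ("cancel_order", "I changed my mind, stop the shipment"),
    ]

    # In practice, each intent would need hundreds or thousands of examples.
    print(Counter(intent for intent, _ in TRAINING_DATA))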

There are two critical aspects of chatbot training:

  1. Entity annotation - extracting units of information from sentences or other unstructured data and giving them structure. These units can include the names of people, organizations, and locations, as well as numeric expressions such as times, dates, money, and percentages. This helps train a chatbot to develop context in a conversation (see the sketch after this list).
  2. Linguistic annotation - assessing the subject of any given sentence. It's a broad category, but essentially it covers anything to do with the analysis of text, whether that is sentiment analysis or using NLP to answer questions. Through sentiment analysis, a chatbot can understand the tone and mood of the user and adapt accordingly.
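
As a concrete, hypothetical example of entity annotation, the snippet below marks character spans in a raw sentence with entity labels; the sentence, spans, and labels are invented for illustration.

    # Hypothetical entity-annotated sample: character spans mark the structured
    # units (locations, dates, monetary amounts) inside unstructured text.
    sample = {
        "text": "Book a flight from Chicago to Denver on March 3 for under $400",
        "entities": [
            {"start": 19, "end": 26, "label": "LOCATION"},  # "Chicago"
            {"start": 30, "end": 36, "label": "LOCATION"},  # "Denver"
            {"start": 40, "end": 47, "label": "DATE"},      # "March 3"
            {"start": 58, "end": 62, "label": "MONEY"},     # "$400"
        ],
    }

    for entity in sample["entities"]:
        print(sample["text"][entity["start"]:entity["end"]], "->", entity["label"])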

Ultimately, the goal of any chatbot is to maintain natural conversation fluidity and provide consistent engagement. The more data given to a chatbot, the smoother its NLP engine runs and the better the AI experience is. Through rigorous training by expert humans, chatbots can live up to their hype.


Charly Walther is vice president of product and growth at Gengo.ai, providers of a translation platform optimized for developers of multilingual machine learning and artificial intelligence applications. He joined Gengo from Uber, where he was a product manager in the Advanced Technologies Group.

