To Maintain Customer Trust and Loyalty, Be Honest About Your Chatbots

Consumers are having trust issues. Between high-profile data breaches and unsavory revelations about how companies use and share personal information, customers are more wary than ever about placing their trust in companies, particularly technology companies. According to recent research, nearly 60 percent of consumers are uncomfortable with how companies use their information, and less than half of Americans reported trusting businesses in 2018, a 10 percent drop from the previous year. This trust gap is disconcerting, especially given the crucial importance of customer data in creating the personalized consumer experience for which every company strives. Delivering that experience will only become more critical, considering that 83 percent of millennials and Gen Zers judge companies as much by the experiences they provide as by the quality of their products and services.

Meanwhile, technology is becoming further ingrained in everyday interactions and is rapidly evolving in intelligence and function. Amid this trust crisis, businesses must ensure they use their technology transparently; that transparency will be a critical foundation for maintaining and growing their customer bases.

Specifically, for businesses seeking to modernize customer service by deploying chatbots, transparency is imperative for building and maintaining trust with customers. Chatbots that use machine learning and natural language processing can help companies respond to customers more quickly, automate simple requests, save agents time by collecting basic details, and escalate requests efficiently to the right person. But these benefits pay off only when trust takes center stage.
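To make the idea concrete, here is a minimal, hypothetical sketch in Python of what a transparent support bot might look like: it introduces itself as automated, handles a small set of simple requests, and escalates everything else to a human agent along with the details it has already collected. The class names and the keyword-based routing are illustrative assumptions, not any particular vendor's API or any company's actual system.

```python
# Illustrative sketch only: a support bot that is up front about being a bot,
# automates simple requests, and escalates the rest to a human agent.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Ticket:
    customer_name: str
    issue: str
    needs_human: bool = False
    transcript: List[str] = field(default_factory=list)


class TransparentSupportBot:
    # Requests the bot is confident it can resolve on its own (hypothetical list).
    AUTOMATABLE = {"reset password", "track order", "update email"}

    def greet(self, ticket: Ticket) -> str:
        # Disclose up front that the customer is talking to a bot.
        message = (
            f"Hi {ticket.customer_name}, I'm an automated assistant. "
            "I can handle simple requests and will hand you off to a human "
            "agent for anything I can't resolve."
        )
        ticket.transcript.append(message)
        return message

    def handle(self, ticket: Ticket) -> str:
        if ticket.issue.lower() in self.AUTOMATABLE:
            reply = f"I can take care of '{ticket.issue}' right away."
        else:
            # Escalation rule: anything outside the bot's scope goes to a person,
            # with the basic details already collected so the agent can pick up quickly.
            ticket.needs_human = True
            reply = (
                "I can't resolve this myself, so I'm passing your details "
                "to a human agent now."
            )
        ticket.transcript.append(reply)
        return reply


if __name__ == "__main__":
    ticket = Ticket(customer_name="Jordan", issue="missing refund")
    bot = TransparentSupportBot()
    print(bot.greet(ticket))
    print(bot.handle(ticket))
    print("Escalated to a human:", ticket.needs_human)
```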

Why you should be honest about chatbots

Consumers usually know when they're interacting with technology (it's fairly easy to tell a bank teller from an ATM), but that's not always the case when it comes to bots. Thanks to innovations in machine learning, chatbots are becoming increasingly advanced and adept at providing human-like responses, which makes it easy for consumers to believe they're interacting with a person rather than a bot. When consumers discover they've been fooled, that feeling of deception can lead to mistrust and negative feelings toward a company.

Furthermore, bots have proven themselves to be unpredictable. Because the technology is imperfect, in some cases bots have offered statements that are illogical or even offensive. If a customer is unaware that he or she is communicating with a bot and the bot says something inappropriate, it can be difficult for the company to retract and rectify the statement. Being up front about the fact that customers are talking to a bot, however, gives the company the leeway it needs to test and improve the bot during the implementation phase.

Let's be clear. Bots themselves aren't the problem. It's how they're being deployed. Being transparent about how you will use bots to enhance the customer experience helps build trust and sets realistic customer expectations.

How to implement chatbots successfully

One example of a company that is introducing its customers to chatbots in a candid and sincere way is DoorDash. The on-demand food delivery service is looking to bots to help address the roughly 30,000 support tickets its agents handle each day and to increase the overall quality of service for its end customers. As it implements bots, DoorDash considers it of utmost importance to do so in a transparent and structured manner, and it is establishing standardized guidelines and escalation rules to ensure that is the case. By thinking about transparency in advance, DoorDash will be able to roll out chatbots in a way that builds its reputation as a trustworthy brand and strengthens customer loyalty.

Being transparent about chatbots sounds simple, but putting the concept into practice can be challenging. All too often, companies have good intentions but end up misrepresenting their bots to customers.

Preparing for tomorrow's consumer

People are becoming increasingly comfortable interacting with digital assistants, connected cars, and voice-activated tools. This will only continue, as millennials and Gen Zers are nearly twice as likely as older generations to use voice-activated personal assistants such as Siri and Alexa, and are far more likely to value authenticity in their interactions with companies. The difference between these technologies and chatbots is that users always know they are not talking to a human. Bots can lead to more efficient support, but they shouldn't leave a consumer feeling deceived and distrustful. The more upfront and transparent a company can be about its chatbots, the more value the technology can add to the business and the consumer experience.


Clement Tussiot is senior director of product management for Salesforce.com's Service Cloud.