Five Pitfalls of Generative AI Customer Service Bots and How to Prevent Them

It seems you can't go a week without a new major milestone in the use of generative artificial intelligence within customer service. We've seen the launch of the first generative AI chatbots in major industries, 40-minute support phone calls handled entirely by AI, new major use cases within customer experience, major competition to ChatGPT, and so much more.

With the amount of (justified) hype in the space, it's understandable that many customer service leaders want to dive headfirst into AI. In fact, a recent McKinsey report identified AI as one of the top priorities for customer service leaders. However, there are also very real, and very scary, pitfalls to using these new technologies without the right security protocols and oversight in place. Below are five major ways uninhibited generative AI can damage your brand, and what needs to be in place so that customer service leaders can harness the full power of AI while bearing none of the risks.

  1. Pose data liabilities: After a few well-publicized mishaps, nearly every major corporation has taken steps in recent months to prevent employees from entering corporate data into interfaces like ChatGPT. Without the right data and privacy protocols in place, that data can end up in the public realm, for public consumption. Companies using generative AI in customer-facing situations face the same risk when customers type sensitive information into their chat interfaces.
  2. Be manipulated: Providers like OpenAI have gone to great lengths to ensure that bad actors can't manipulate their bots into providing harmful or sensitive information. But those safeguards aren't perfect. While you can't ask ChatGPT outright how to make napalm, people have still found ways to trick the model into revealing the information anyway. Anyone interacting with a ChatGPT-powered bot on a customer service site could potentially do the same.
  3. Provide incorrect, nonsensical, and off-brand information: Hallucinations are perhaps the most famous pitfall, given the many viral examples we've seen so far. In customer service situations, however, a bot that confidently provides incorrect information or engages in off-brand discussion can do serious harm to a company's reputation.
  4. Fail to resolve even seemingly simple inquiries: Let's not forget that ChatGPT was not developed as a customer service tool. In fact, the uninhibited versions of generative AI technologies are not developed for any specific use case at all! Even basic aspects of customer service chat interactions, such as collecting information or following a series of steps to the right resolution, aren't inherent in ChatGPT, making it (on its own) quite ineffective as a customer service agent.
  5. Fail to effectively escalate to a human agent: And when these bots do go wrong, there's no built-in process for handing the inquiry off to a human agent. The result is a worse experience for the customer than having no bot at all.

Putting the right safety protocols in place

There are three key components to mitigating the pitfalls of generative AI within customer service interactions: security, content, and guardrailing.

From a security standpoint, organizations need iron-clad data and security protocols in place when leveraging ChatGPT. To truly protect both customer and company data from being compromised, organizations need to ensure that they and their chatbot providers are leveraging the latest data security tools from major providers, such as Microsoft Azure and Amazon Web Services, alongside personally identifiable information (PII) scrubbing tools like Presidio. Security and data privacy are the first prerequisites to have in place, even before deciding to venture into the world of generative AI customer service chatbots.
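To make the idea of PII scrubbing concrete, here is a minimal sketch of the concept: sensitive values are detected and replaced with typed placeholders before any text reaches a third-party model. The regex patterns and function name are illustrative only; a production system would use a dedicated tool like Presidio rather than hand-rolled patterns.

```python
import re

# Illustrative patterns only -- real deployments should rely on a dedicated
# PII-detection library, not hand-written regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace detected PII with typed placeholders before the text
    is ever sent to an external model API."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

scrubbed = scrub_pii("Reach me at jane@example.com or 555-867-5309.")
# scrubbed == "Reach me at <EMAIL> or <PHONE>."
```

The key design point is that scrubbing happens on the way out of your system, so sensitive customer data never leaves your infrastructure in the first place.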

Second, organizations need to vastly limit the type of content their generative AI tool can access. ChatGPT, for example, draws on information from essentially the entire internet (good, bad, and ugly) up until its 2021 training cutoff. This opens the door to hallucinations, incorrect responses, and more. Instead, IT and customer service leaders first need to identify the right data sets for ChatGPT to leverage and give it access only to that data. By directing its attention toward only relevant, targeted data sets, ChatGPT can generate the right answer to customer inquiries more quickly.
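One common way to enforce this restriction is a retrieval step: the bot is only allowed to answer from a curated knowledge base, and anything it cannot ground in that data is refused. The sketch below illustrates the pattern with a toy keyword-overlap scorer; the knowledge-base entries, scoring scheme, and threshold are all illustrative placeholders, not a real retrieval system.

```python
# A curated knowledge base -- contents are hypothetical examples.
KNOWLEDGE_BASE = {
    "returns": "Items can be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve_context(question: str):
    """Pick the knowledge-base entry sharing the most words with the
    question; return None when nothing is relevant enough."""
    q_words = set(question.lower().split())
    best_key, best_score = None, 0
    for key, text in KNOWLEDGE_BASE.items():
        score = len(q_words & set(text.lower().split()))
        if key in q_words:
            score += 3  # a direct topic-keyword match counts extra
        if score > best_score:
            best_key, best_score = key, score
    return KNOWLEDGE_BASE[best_key] if best_score >= 3 else None

context = retrieve_context("How do I start a returns request?")
# Only `context` (never the open internet) would be placed in the prompt,
# and a None result would trigger a refusal or a human hand-off.
```

Real systems typically use vector embeddings rather than word overlap, but the contract is the same: the model only ever sees vetted, company-approved content.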

Finally, as previously mentioned, ChatGPT is not made for customer service interactions. It is incumbent on customer service organizations and their technology vendors to construct the right guardrails and training parameters to ensure bots can act like the best customer service agents possible. That means they need to be coached on what conversations are brand appropriate, what your company’s policy guidelines are, where to find information pertaining to each customer service inquiry, and how to demonstrate (if not truly feel) empathy and rapport with each customer.
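In practice, guardrails often take the form of a checking layer that inspects each draft reply before it is sent, and escalates to a human agent when the bot strays off-brand or signals uncertainty. The sketch below shows that shape; the topic lists and trigger phrases are illustrative placeholders, not a production rule set.

```python
# Illustrative guardrail layer -- topics and triggers are placeholders.
OFF_BRAND_TOPICS = {"politics", "religion", "competitor pricing"}
ESCALATION_TRIGGERS = {"i don't know", "i am not sure", "as an ai"}

def guardrail(draft_reply: str):
    """Return (reply, escalate). Off-brand or uncertain drafts are
    replaced with a hand-off message and flagged for a human agent."""
    lowered = draft_reply.lower()
    off_brand = any(topic in lowered for topic in OFF_BRAND_TOPICS)
    uncertain = any(phrase in lowered for phrase in ESCALATION_TRIGGERS)
    if off_brand or uncertain:
        return ("Let me connect you with an agent who can help.", True)
    return (draft_reply, False)

reply, escalate = guardrail("As an AI, I am not sure about your warranty.")
# escalate == True: the draft is suppressed and a human takes over.
```

Checking the bot's output, rather than only its input, is what closes the loop on the escalation pitfall above: the customer is handed to a person the moment the bot is out of its depth.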

I have never seen a technology with so much potential to upend everything we do from a business standpoint for the better. But for generative AI to reach its full potential, both technology providers and their customers need to invest in proper upfront planning to mitigate major customer service and PR disasters. By taking the time to put the right security, content, and guardrailing protocols in place, we will see a future in which 90 percent of customer service inquiries are effectively automated sooner rather than later.

Daniel Rodriguez is chief marketing officer of Simplr.