The word "schadenfreude" describes the emotional experience of pleasure in response to another's misfortune. In the world of automated customer care, I propose we use the term "botenfreude" to capture the pure joy people feel when they encounter or read of the inevitable failure of a chatbot or voicebot.
Who among us can't recite the story of the bot that sold a 2024 Chevy Tahoe for $1? Or the New York City automated agent that told tenants they don't have to pay rent? How about Google's AI-generated search results that proposed non-toxic glue to thicken pizza sauce? Talk about an endless source of fodder for the writers at "Saturday Night Live" or "The Daily Show."
Standard operating procedure among customers is to amplify their bad experiences on popular social media platforms, accompanied by an obligatory "Ha-ha!" GIF featuring Nelson Muntz from "The Simpsons" as botenfreude personified.
There is no cause for joy when bots fail; instead, there is a lesson that all of us should take very seriously. Failure is not an option; it's inevitable. Customers have long expected customer service bots to fail, and their low expectations were well-founded. To this day, many of the chatbots and voicebots on websites or embedded in mobile apps are tightly scripted to support a short list of functions, such as tracking an order, locating a store or ATM, or retrieving a balance. They seem to have been designed to frustrate customers, taking an interminable amount of time to authenticate a customer and trying to elicit the caller's intent before giving up and transferring to a human.
The new generation of generative artificial intelligence bots, built on large language models, is much better at recognizing each customer's intent and amazingly quick to produce results. These bots are also supremely confident in their work. As bots take on the roles of assistant, advisor, or personal shopper, experienced users are becoming conditioned to question their outputs. The more a person knows about a given topic, the more likely they are to challenge the results and ask the bot to refine its response. At that point in the conversation, bots are prompted to respond with a cheerful interjection like "Of course, you're right! I'll just try again now." Pleasant banter while iterating on results is the new source of frustration.
Botenfreude has a purpose. It reflects a healthy skepticism and highlights that, by design, bots arrive at an acceptable response only after meaningful iteration and refinement. This explains why people with deeper knowledge of, or familiarity with, the topic a bot is addressing are much more likely to get useful results. As AI-infused copilots and coaches take on increasingly important roles in contact centers, it is important to develop training programs, corporate policies, procedures, and workflows that anticipate failure and condition employees (as well as customers) to interact with bots accordingly.
The most successful CX and customer care organizations will effectively fuse human talent with AI capabilities. This requires deliberate efforts to educate customers, train CX team members, redefine workflows, and cultivate a culture of collaboration between humans and the AI-infused tools on which we increasingly rely.
Dan Miller is founder and analyst emeritus of Opus Research.