A year ago, AI seemed, finally, to emerge as the answer to customer service prayers. AI agents would learn product catalogs, customer service records, and company policies to interface directly with customers, increase call and case deflection, accelerate resolutions, and improve customer experience. We all saw the demos and heard the product pitches as vendors raced to deliver AI agents.
But then many of those initial efforts failed to produce the outcomes people were waiting for. These failures, from egregious hallucinations to the simple mistakes a human agent might make to an outright inability to resolve cases, have lowered the bar to the point where merely delivering a correct answer now counts as acceptable performance for an AI agent.
Nonetheless, the pressure is on in the customer service world to deliver AI agents with tech that isn't necessarily ready for prime time. It's time instead to return to our roots to ensure that we're delivering not just automated but quality customer experiences. It's also time to take a stand on what is acceptable performance for an AI agent.
As you do this, here are a few key things to remember:
- When customers interact with AI agents, they are interacting with companies. No matter how often or explicitly you tell them your agent is a piece of AI technology, they're engaging with a representative of your organization, and the quality of that interaction is a reflection on you. If you're not getting your colleagues in marketing involved to provide input on your AI agents' voice and tone, you should be.
- When it comes to quality, it is not just the accuracy of the response that matters but the quality of the experience. If your best human agents can escalate a request from a highly valued customer or make exceptions to policies based on certain factors, your AI agents should be able to do the same, and to recognize when a request is beyond their capacity and hand it off to a human agent.
- There is no replacement for human input. AI might be great at testing AI responses at scale for accuracy, but when it comes to testing AI agents for prime time, user testing with actual customers is critical for determining whether the tone and style of interaction meets the mark and aligns with your brand.
- As AI agents evolve to seem more human, customers will have more emotional responses—both positive and negative—to their experiences with them. The ability to monitor agents on an ongoing basis for how your human customers feel about them and course correct in real time will be critical.
- As the technology continues to evolve, so will customer expectations and acceptance of it. Keeping up on your vendors' roadmaps will help you to plan how and when to adopt more AI. This is not a set-it-and-forget-it environment; you'll need to keep up on customer expectations and experiences on an ongoing basis as well.
The pressure is on to deliver AI agents for customer service, but it shouldn't come at the expense of the customer experience. If you're considering or already deploying AI agents, think of them as overenthusiastic interns: they need not just product and customer information but also cultural and social cues about how to represent your brand. Once unleashed, they can dramatically accelerate experiences, both good and bad. Building and training them on interaction quality, not accuracy alone, will ensure they drive both internal efficiencies and positive customer experiences.
Rebecca Wettemann is founder and CEO of Valoir.