Turning the Worst Contact Center Metric into Something Sublime

Completion rates are a logical metric for customer service. Whether it's voice or digital, knowing how many people get answers to their questions in self-service is critical. It is easy to measure in voice: Did the customer escalate to an agent, or did the customer hang up? If they hang up, that counts as a completion, and you can declare victory and move on.

With digital, things get a bit messier since there isn't necessarily a hang-up event, but the process is similar. For the sake of simplicity, I'm going to talk about voice interactions, but the issues are similar for digital self-service.

This blunt instrument of a metric serves no one well, least of all your customers. In many cases customers will complete their self-service interactions in the interactive voice response and happily hang up. The call was complete; congratulations. In other cases, the call is a disaster and, after yelling, "Agent! Agent! Agent!" into the phone, the customer throws their phone across the room, breaking it and hanging up on the application. The call is complete, but congratulations are not in order.

Obviously, we need to get a layer deeper into the customer experience to understand whether a call was a success. Did the caller get an answer to their question? Did the call end in the middle of a process? Did the caller yell "Agent!" or hit 0 repeatedly? None of these are automatically measured, and they are not necessarily straightforward to measure at the application level. Difficult to measure or not, the question of who is responsible for measuring is likely the bigger problem.
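To make the distinction concrete, the signals above could be combined into a simple rule-based classifier. This is a minimal illustrative sketch; the event names, rules, and thresholds are my own assumptions, not any vendor's actual telemetry.

```python
# Hypothetical sketch: judging a "completed" (non-escalated) call as a
# success or failure from interaction events. All event names and rules
# here are illustrative assumptions.

def classify_call(events, answered_question, ended_mid_process):
    """Return 'success' or 'failure' for a call that ended in a hang-up."""
    # Repeated demands for a human are a strong failure signal.
    agent_requests = sum(1 for e in events if e in ("SAID_AGENT", "PRESSED_0"))
    if agent_requests >= 2:
        return "failure"
    # Hanging up mid-transaction is a failure even with no agent requests.
    if ended_mid_process:
        return "failure"
    # Hanging up after hearing an answer is the happy path.
    if answered_question:
        return "success"
    return "failure"

# Both calls below "completed" (no escalation), but only one went well.
good = classify_call(["MENU", "ANSWER_PLAYED"], True, False)   # 'success'
bad = classify_call(["MENU", "SAID_AGENT", "PRESSED_0",
                     "SAID_AGENT"], False, True)               # 'failure'
```

The point of the sketch is that both calls would score identically on a raw completion-rate metric, while even a crude rule set tells them apart.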

Generally, responsibility for building out completion rate measurements falls to the IVR team, whose bonus is paid based on completion rates.

Because of this inherent conflict of interest, I have never been a fan of completion rates as a metric. There is just too much potential to measure bad outcomes and claim that they are good. With AI, however, this could really change.

AI brings a new level of insight into what is happening in customer interactions. This can include the ability to differentiate between a good outcome for the customer and a bad one. This can lead to a more powerful and more nuanced understanding of what happens in self-service. This is already being used by several vendors for outcome-based pricing.

Outcome-based pricing is being adopted by several conversational AI vendors that are trying to get their prospects to put a raft of new AI capabilities directly in front of their customers. With outcome-based pricing, the vendor agrees to build out the application ahead of time, and the customer and the vendor agree on the definition of a successful completion. The customer then pays the vendor only for successful completions.
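The billing mechanics are straightforward once success is defined. A minimal sketch, assuming a made-up per-completion rate and pre-labeled call outcomes:

```python
# Hypothetical sketch of outcome-based billing: vendor and customer agree
# on a per-completion rate up front, and the invoice counts only the
# interactions that met the agreed definition of success. The rate and
# the outcome labels are invented for illustration.

RATE_PER_SUCCESS = 0.75  # illustrative agreed price per successful completion

def monthly_invoice(call_outcomes):
    """call_outcomes: list of 'success' / 'failure' labels, one per call."""
    successes = call_outcomes.count("success")
    return successes * RATE_PER_SUCCESS

# 10,000 completed calls, of which 7,000 were judged successful:
# the vendor bills for the 7,000 successes only.
outcomes = ["success"] * 7000 + ["failure"] * 3000
invoice = monthly_invoice(outcomes)  # 5250.0
```

Everything interesting lives in how `call_outcomes` gets labeled, which is exactly the outcome-measurement problem discussed above.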

I have concerns about outcome-based pricing. On one hand, it aligns the vendor and its customer. On the other, the vendor has taken on the risk of building the application and proving its success, and will rightly expect to be paid well for that effort. This is a pricing model that resonates with prospects who are nervous about taking on generative AI for customer-facing use cases, but it gets messy in the long run, with customers paying a premium for work that was done in the past.

For vendors to measure outcomes for the sake of billing, they must have a sophisticated understanding of what happens in the interaction. Knowing only whether the customer hung up is not sufficient. This is exciting to me. It would be entirely practical for vendors to create a set of successful-completion metrics that are inherent in their products: things like the customer not yelling for an agent, or hanging up after the question was answered rather than in the middle of a transaction. Once defined, AI can measure whether a metric was met in much the same way that this is being done for outcome-based pricing.

This would not be perfect, but it would be practical, and it could take the worst metric in the contact center and transform it into something useful and insightful.


Max Ball is a principal analyst at Forrester Research.