AI and Trust: Language Matters

When it comes to artificial intelligence, language matters. So does how we explain, train, and sell its use to contact center leadership, staff, and customers.

Since the announcement of the first version of ChatGPT more than a year ago, customer service and customer experience vendors have been rushing AI-related product announcements to the market. Those efforts have been met with varying levels of skepticism. Concerns about risk, bias, ethics, and safety have put broad adoption of many AI tools and applications on hold at all levels of organizations.

As vendors continue to ramp up their AI expertise, they will also need to alter how they communicate the relative strengths of their solutions to the marketplace and, ultimately, communicate why users should trust and adopt them.

To better understand the perceptions of today's business users about AI and its potential value and risk in the workplace, Valoir surveyed more than 300 workers in North America from a variety of industries and job roles and validated the survey responses with in-depth interviews with a smaller sample of workers. We asked them about their experiences with AI to date, how and if AI would be helpful to them in their current job roles, and what vendors needed to do and say to drive adoption and effective use of their AI solutions.

In our report, "Language Matters: AI User Perceptions," here's what we found and what it means for the contact center:

  • Nearly everyone has kicked the tires on generative AI. Eighty-four percent of workers have experimented with some form of generative AI, with varying degrees of success. That means many of your agents have already tried it, either at work or elsewhere, and their personal experiences are already coloring their expectations of what they should accept at work. If they used an early version of one of the many free applications, or didn't get guidance on prompts, they likely encountered hallucinations or incorrect information.
  • Many workers question AI's potential value and risk. Although that sounds obvious, 17 percent of workers believe AI can't help them at work, meaning vendors and managers still need to make a compelling case for what's in it for individual users. Workers also need to be shown data lineage and how their data will be used. When AI is applied in a customer service setting, vendors will need to make the case at both the organizational and individual levels (for buyers, managers, and agents) that the benefits outweigh the risks.
  • Concerns about risks are high. Workers were most concerned that AI would violate their privacy (51 percent), followed by fears that it would act on its own without human intervention (45 percent). Thirty-eight percent are very concerned that AI could replace them. That means putting in place policies around AI use, offering training and reskilling programs that help agents take advantage of the technology, and thinking through how we explain AI to users at all levels in a way that increases trust and reduces fears.
  • The vendor AI battle is about trust, and it is just beginning. Although many workers could name companies they wouldn't trust with AI, there was no consensus around the most trusted AI vendor or even what that vendor profile looked like. However, clear data lineage and understandable privacy and ethics policies are the top two factors workers said would increase their trust levels. This will be important both for agents and customers who are being asked to share their information with AI-enabled chatbots and other self-service applications.
  • Despite the current industry momentum around the term copilot, it's not a term that endears AI to users or drives them to adopt it. Workers said they would be most likely to use AI when it's presented as a virtual assistant, with nearly 50 percent choosing that term over other options, including copilot.

There are significant opportunities for benefits from AI in the contact center, from knowledge article generation to assisted search to case summarization to chatbot automation. Like any technology, AI has the potential to amplify good business practices, but it also has the potential to amplify bad ones. So we need to do a better job of explaining, training, and selling its use to contact center leadership, staff, and customers.

Rebecca Wettemann is CEO and principal at Valoir, a technology industry analyst firm focused on the connection between people and technology in a modern digital workplace.