Deepfakes are an ever-growing threat to companies and their customers as artificial intelligence is becoming more sophisticated and readily available to scammers and cybercriminals. By maliciously using generative AI technology, bad actors can impersonate virtually anyone, making it increasingly difficult to determine whether customer communications are authentic or phishing attacks.
Customer service personnel could soon find themselves the first line of defense against deepfake phishing attacks, making it important that they know how to identify and stop them. Customers, too, could soon be fending off deepfake phishing attacks purportedly coming from customer service organizations. In one sense, there is nothing new about using impersonation to perpetrate fraud. What is new is the AI technology that gives cyberscammers the means to impersonate customer service reps or customers far more realistically and extract sensitive data.
Case in point: deepfake chatbots are being used both to capture credit card and bank account information from unsuspecting consumers and to fool customer service personnel into shipping goods or releasing refunds to scammers. Simply put, generative AI makes it easier to create realistic, more credible phishing attacks through deepfakes.
Since the fourth quarter of 2022, phishing emails have risen by 1,265 percent. On average, 31,000 phishing attacks are sent daily, 77 percent of them business email compromise (BEC) attacks. The FBI's Internet Crime Report shows that $2.7 billion was lost to BEC attacks in 2022 and $52 million to other phishing attacks.
Security professionals are increasingly concerned about the potential of AI to create sophisticated email phishing attacks. More than half of the cybersecurity leaders polled by Abnormal Security said they had detected AI-generated attacks in their email systems, and 98 percent are concerned about the threats posed by generative AI tools like ChatGPT, Google Bard, and WorkGPT.
The team at IBM X-Force tested how effectively generative AI could create credible phishing email messages. It generated convincing messages in five minutes, a task that typically takes up to 16 hours.
According to a survey by Regula, 37 percent of global businesses reported experiencing deepfake voice fraud, and 29 percent were victims of deepfake videos. Nearly half (46 percent) of companies surveyed said they had experienced some form of synthetic identity, or "Frankenstein," fraud, in which cybercriminals combine real and fake information to create new identities.
Deepfakes are already being used for legitimate purposes, such as avatars for business presentations and conference calls. They offer an easy solution to bad hair days. AI-generated personas are also used for chatbots to offer customers more personalized interactive service. Some vendors provide customers with deepfake tools for applications such as trying on clothes digitally without going to the store.
The challenges arise when the same AI tools are used to impersonate corporate managers or customer service representatives or to falsify customer identities.
The Potential Business Impact of Deepfakes
Generative AI can make phishing attacks convincing enough that nearly anyone can fall victim and give up sensitive information. Phony emails or telephone calls from scammers posing as trusted contacts can convince customers to surrender account or credit card numbers, or even convince employees to refund purchases or transfer cash.
A recent deepfake scam involved AI-generated audio used to impersonate the CEO of a German energy company. The computerized impersonation convinced the CEO of a UK-based affiliate to wire $243,000 to a Hungarian supplier. By the time the deepfake was uncovered, the funds had been moved on to Mexico and other locations, making it nearly impossible to track the perpetrators.
Deepfakes for cyberscams can take various forms:
- Falsified emails, audio files, or videos can be used to impersonate company executives. Unsuspecting employees could be fooled into surrendering sensitive financial or customer data or making financial transactions.
- AI can be used to impersonate customers or even create fictitious customer accounts to place phony orders or demand refunds for goods never sold.
- Customers can be fooled using AI-controlled chatbots that impersonate tech support personnel to extract account and financial information. Ultimately, these attacks could lead to identity theft.
- Deepfakes also can be used to damage corporate reputations. Phony online news reports, reviews, or impersonators can erode brand value with false claims that directly affect sales or stock prices.
Hard as it is to believe, this might be only the tip of the iceberg, making it urgent for businesses to rethink how they interact with customers.
As AI technology makes deepfakes more convincing, customer service personnel need more sophisticated tools and techniques to detect fraudsters. There are several steps that service teams can take to protect themselves from phishing and deepfake attacks:
- Look for suspicious signs or activity. Less sophisticated deepfakes can often be identified by anomalies such as out-of-sync audio and video, unnatural eye or body movements, or inconsistent lighting and shadows.
- Use multi-factor authentication. Voice, video call, text, and 2FA authenticator confirmations make spoofing far harder, especially when sharing sensitive information.
- Create additional authentication protocols. Use additional identifiers, such as PINs, to authenticate users before completing transactions or sharing account information, and rely less on factors such as voice recognition, which can now be spoofed (see the verification sketch after this list).
- Take a pause. Perhaps the best defense is learning to respond rather than react to requests. Does the request seem legitimate? Is there a potential risk? If something doesn't feel right, verify the request before proceeding.
- Deploy deepfake detection solutions. The most reliable way to detect AI-generated deepfakes is with AI-powered fraud authentication technology. Deploy a solution that uses AI authentication and analysis to validate digital media and identify potential deepfakes (a sketch of this pattern follows the list).
- Provide consistent training and education. Train customer service personnel so they know what to look for to identify and avoid deepfake spoofing. Also, educate customers with similar best practices on how to keep their transactions secure.
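To make the multi-factor and PIN steps above concrete, here is a minimal sketch in Python of step-up verification before a sensitive action such as a refund. It assumes the pyotp library for one-time codes; the Customer record and approve_sensitive_request workflow are hypothetical placeholders, not a reference to any specific product.

```python
# Hypothetical step-up verification sketch: require a PIN AND a
# time-based one-time code before completing a sensitive request.
import hmac
from dataclasses import dataclass

import pyotp  # third-party library: pip install pyotp


@dataclass
class Customer:
    pin: str          # store only a salted hash in a real system
    totp_secret: str  # base32 secret enrolled in the customer's authenticator app


def verify_pin(stored_pin: str, offered_pin: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(stored_pin, offered_pin)


def verify_totp(totp_secret: str, offered_code: str) -> bool:
    # Accept the current 30-second code, with one window of clock drift.
    return pyotp.TOTP(totp_secret).verify(offered_code, valid_window=1)


def approve_sensitive_request(customer: Customer, offered_pin: str, offered_code: str) -> bool:
    # Both factors must pass; a cloned voice alone satisfies neither.
    return verify_pin(customer.pin, offered_pin) and verify_totp(
        customer.totp_secret, offered_code
    )
```

The point of the design is that neither factor depends on how the caller sounds or looks, so even a flawless voice clone fails the check.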
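For the detection step, many vendors expose analysis as a web API that scores uploaded media. The sketch below shows the general pattern using Python's requests library; the endpoint URL, authorization header, and fake_probability response field are hypothetical stand-ins for whatever your chosen vendor actually provides.

```python
# Hypothetical deepfake-screening sketch: send inbound media to a
# detection API and escalate anything scored as likely synthetic.
import requests

DETECTION_URL = "https://api.example-detector.com/v1/analyze"  # hypothetical endpoint
SUSPICION_THRESHOLD = 0.8  # route scores above this to human review


def media_looks_authentic(path: str, api_key: str) -> bool:
    """Return True if the file passes screening, False if it should be escalated."""
    with open(path, "rb") as media:
        response = requests.post(
            DETECTION_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"file": media},
            timeout=30,
        )
    response.raise_for_status()
    score = response.json()["fake_probability"]  # hypothetical response field
    return score < SUSPICION_THRESHOLD
```

Treat the result as one signal among several: a low score should not override the other red flags described above.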
Outsmarting cybercriminals starts with knowing the problem and learning to recognize potential fraud. The best defense begins with educating customer service reps, support personnel, customers, and partners, and with putting secure protocols and AI detection technology in place to stop deepfakes in their tracks.
Nicos Vekiarides is founder and CEO of Attestiv, which develops solutions to authenticate digital media using AI.