A new study reveals only 13% of companies have comprehensive deepfake attack protocols.
Since the term “deepfake” was coined in 2017, deepfake scams have cost companies worldwide millions of dollars. Deepfakes, AI-manipulated photos, videos or audio recordings, are rapidly emerging as a threat to businesses. Still, new research reveals that many American executives are unprepared for the risks deepfakes pose to their employees and profits.
According to business.com’s recent study of C-suite executives, over 10 percent of companies have faced successful or attempted deepfake fraud, and many more are vulnerable due to outdated cybersecurity measures. Here’s what else the study revealed:
Today, the artificial intelligence tools used to create deepfakes have become so widely available and simple that even a teenager with a sufficiently powerful device can make them. In 2024, fraudsters used highly sophisticated deepfakes to obtain $25 million from a Hong Kong finance worker. According to CNN, the worker attended a video meeting with what he believed were his colleagues, but the videos were faked. The manipulated videos of his coworkers were so convincing that he wired millions of dollars to the scammers.
Despite the financial risks, about one in four company leaders is barely or not at all familiar with deepfake technology, which can be used to create deceptive endorsements from executives. These fake endorsements mislead consumers and damage brands’ reputations. Deepfakes undermine consumer confidence and misuse established reputations for illicit profit. They erode online trust, slow user engagement and spread internal misinformation, leading to poor decisions, disrupted communications and financial losses for businesses and shareholders.
Outside of the business realm, deepfakes can also emotionally harm individuals. Recently, pornographic AI-generated images of celebrities Taylor Swift and Gal Gadot have been created and circulated online, and high school students have used easily accessible smartphone apps to generate nude photos of their classmates.
Manipulated audio has already been used to influence U.S. elections: an AI-generated robocall using Joe Biden’s voice was circulated in New Hampshire, falsely telling voters to skip the primary. The call, which Biden did not record, aimed to suppress votes. The New Hampshire Attorney General is investigating this as an illegal attempt to disrupt the election.
Based on our research, approximately 10 percent of executives acknowledged that their companies had been the target of deepfake threats or attempts. Furthermore, an additional 10 percent expressed uncertainty regarding whether their businesses had been subjected to deepfake attacks by cybercriminals. These findings suggest that many leaders may lack the ability to recognize these types of attacks or that their teams lack established channels for reporting them.
Businesses may be more vulnerable to deepfake attacks during IPOs, mergers and corporate reorganizations.
Despite the incredible harm they can cause, 37 percent of business leaders believe deepfakes don’t pose a severe risk to their businesses. Their lack of concern may stem from beliefs that their company isn’t large enough to be targeted, that their current cybersecurity policies are strong enough to prevent harm, or that their businesses simply aren’t vulnerable to fraud.
Our research revealed a concerning level of unpreparedness among American businesses: only one out of 10 company leaders said they have a comprehensive understanding of deepfakes and how to mitigate and handle threats to their business. [Read related article: Types of Cyber Risks Businesses Should Be Aware Of]
For instance, in the absence of deepfake training, the customer relations department of a prominent bank or financial institution could be at a loss when confronted with a wire fraud incident. Armed with voice cloning technology, the fraudster could impersonate a valued customer by contacting the bank’s call center and authorizing fraudulent transactions.
The evolving threat landscape and the increasing sophistication of cyberattacks have raised questions about the adequacy of traditional cybersecurity measures. Despite technological advancements, many organizations still rely on outdated or insufficient security solutions, leaving themselves vulnerable to various threats.
Four out of five companies don’t fully utilize existing technology to fend off deepfake attacks. Meanwhile, one out of every five business leaders admits that their cybersecurity measures are not up-to-date or strong. This admission highlights the need for organizations to invest in robust cybersecurity solutions and prioritize implementing effective security practices. [Learn how to conduct a cybersecurity risk assessment.]
Moreover, 80 percent of companies lack protocols for handling deepfake attacks. Without a plan, these companies are vulnerable, as they won’t be prepared to address and mitigate such incidents to protect their business.
By adopting a proactive and comprehensive approach to cybersecurity, organizations can better protect themselves from traditional and emerging threats, such as deepfake attacks, and ensure their critical data and systems’ confidentiality, integrity and availability.
Business fraud happens frequently: Phishing, data breaches, and payment fraud are still the most common types, but the tactics are ever-changing. These common fraud schemes can be combined with deepfake media to damage targets further. [Read related article: Scams That Prey on Small Businesses]
For example, AI-generated deepfakes can enhance phishing attempts by creating convincing false identities to deceive targets. They can facilitate data breaches by tricking individuals into revealing sensitive information, such as passwords, that later grants attackers access to data. Additionally, deepfakes can enable payment fraud by persuading senders to transfer funds under false pretenses or to incorrect accounts.
Three percent of business leaders said their companies had been targeted by deepfake attacks in the past year alone. However, more than a third of business leaders believe that deepfakes have not ultimately increased their fraud risk.
| In the last year, which of the following types of fraud has your company been targeted by? (Select all that apply.) | Percent of executives |
| --- | --- |
| Phishing scams | 49% |
| Data breach or theft | 21% |
| Invoice and payment fraud | 17% |
| | 15% |
| Identity theft | 14% |
| Payroll fraud | 9% |
| Business email compromise (BEC) | 7% |
| Intellectual property theft | 6% |
| Insider threats | 5% |
| Financial statement fraud | 5% |
| Supply chain fraud | 4% |
| Tax fraud (e.g., false refund claims) | 4% |
| Deepfake attacks | 3% |
| Other | 2% |
| None of the above | 28% |
While deepfake attacks are not the most common type of business fraud, they can cause significant financial damage. Victims in our study shared that damages from deepfake attacks were as high as 10 percent of their companies’ annual profits.
Only 5 percent of the business leaders in our research said their companies had fully implemented deepfake attack prevention steps across their staff, operating procedures, stakeholders and communication channels. Without proper prevention strategies, the odds of an incident are higher.
Meanwhile, more company protocols and employee training are needed: Two out of three company leaders have little to no confidence that their employees could recognize a deepfake attack. Prevention requires all employees to be aware of deepfake technology and empowered to identify and respond to deepfake fraud. Here are a few steps experts suggest for reducing risk.
“An executive team can agree on a common code word or code-phrase to use in any remote interaction,” Siu said. For example, the challenge question could be, “What color is your new car?” and the code-phrase answer might be something like, “Fruit juicy red.”
“Incorrect responses, or non-answers (evasion), would be your signal that this interaction is not with the person you think he or she may be,” Siu explained.
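The challenge/code-phrase check described above is essentially a shared-secret verification step. Below is a minimal, hypothetical Python sketch of how a team utility might record an agreed pair and check responses; the phrases, function names and data structure are all illustrative assumptions, not part of the study or any real tool.

```python
import hmac

# Hypothetical pre-agreed challenge/response pairs, set in advance by the
# executive team. The example phrases mirror those quoted in the article.
AGREED_PAIRS = {
    "What color is your new car?": "Fruit juicy red",
}

def verify_response(challenge: str, response: str) -> bool:
    """Return True only if the response exactly matches the agreed phrase.

    hmac.compare_digest performs a constant-time comparison, a common habit
    when checking shared secrets. An unknown challenge or a wrong/evasive
    answer fails the check, signaling the interaction may be a deepfake.
    """
    expected = AGREED_PAIRS.get(challenge)
    if expected is None:
        return False  # unrecognized challenge: treat as a failed check
    return hmac.compare_digest(expected.encode(), response.strip().encode())

# A correct answer passes; a wrong or evasive answer should end the call.
print(verify_response("What color is your new car?", "Fruit juicy red"))  # True
print(verify_response("What color is your new car?", "Red"))              # False
```

In practice, the agreed phrase should be shared out of band (never over the channel being verified) and rotated periodically, since a leaked phrase defeats the check.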
In conclusion, deepfakes are a real business risk in the 2020s, one that requires executive leadership to develop comprehensive plans outlining preventive measures and incident response policies. Yet many companies remain vulnerable to financial losses and reputation damage because they operate with outdated or weak cybersecurity measures, and too many executives admit their employees have not been trained to identify deepfake media.
AI programs can create manipulated videos, photos or even audio with speed and sophistication, so it is easier than ever for scammers to mislead customers or defraud employees. Given the rapidly advancing nature of AI and deepfakes, companies must ensure their cybersecurity policies and employee training programs are up to date and consistently employed across their organizations.
In May 2024, business.com conducted a web-based survey of 244 business leaders currently working as CEOs, C-suite executives, presidents or vice presidents. We did not include individuals who were the only team members in their companies. Thirty-nine percent were female and 61 percent were male. Sixty-nine percent were white, 15 percent were Black, 11 percent were multiracial or another ethnicity, 5 percent were Asian and 1 percent declined to report. The median age of respondents was 44. The median annual profit of the companies employing respondents was $450,000.