
Updated Jun 28, 2024

1 in 10 Executives Say Their Companies Have Already Faced Deepfake Threats

A new study reveals only 13% of companies have comprehensive deepfake attack protocols.

Written by: Chad Brooks, Managing Editor & Expert on Business Ownership
Editor verified by: Gretchen Grunburg, Senior Editor & Expert on Business Strategy

Deepfakes, AI-manipulated photos, videos or audio recordings, are rapidly emerging as a threat to businesses. Since the term “deepfake” was coined in 2017, deepfake scams have already resulted in millions of dollars in losses for companies worldwide. Still, new research reveals that many American executives are unprepared for the risks deepfakes pose to their employees and profits.

According to business.com’s recent study of C-suite executives, over 10 percent of companies have faced successful or attempted deepfake fraud, and many more are vulnerable due to outdated cybersecurity measures. Here’s what else the study revealed: 

  • Despite AI’s growing prominence, about one in four company leaders have little to no familiarity with deepfake technology.
  • 31 percent of business leaders believe deepfakes have not increased their fraud risk.
  • 32 percent of leaders have no confidence their employees would be able to recognize deepfake fraud attempts on their businesses.
  • More than half of leaders say their employees haven’t had any training on identifying or addressing deepfake attacks.
  • 61 percent of executives say they haven’t established any protocols for addressing deepfake risks within their companies.

Deepfake threats are largely underestimated by business leaders

Today, creating deepfakes with artificial intelligence has become so simple and widely accessible that even a teenager with a reasonably powerful device can make them. In 2024, fraudsters used highly sophisticated deepfakes to obtain $25 million from a Hong Kong finance worker. According to CNN, the worker attended a video meeting with what he believed were his colleagues, but the videos were faked. The manipulated videos of his coworkers were so convincing that he wired millions of dollars to the scammers.


Despite the financial risks, about one in four company leaders are barely or not at all familiar with deepfake technology, which can be used to create deceptive endorsements from executives. These fake endorsements mislead consumers, damage brands’ reputations and exploit established reputations for illicit profit. Deepfakes also erode online trust and user engagement, and they can spread internal misinformation, leading to poor decisions, disrupted communications and financial losses for businesses and shareholders.

Outside of the business realm, deepfakes can also emotionally harm individuals. Recently, pornographic AI-generated images of celebrities Taylor Swift and Gal Gadot have been created and circulated online, and high school students have used easily accessible smartphone apps to generate nude photos of their classmates. 

Manipulated audio has already been used in an attempt to influence U.S. elections: an AI-generated robocall mimicking Joe Biden’s voice circulated in New Hampshire, falsely telling voters to skip the primary. The call, which Biden did not record, aimed to suppress votes, and the New Hampshire Attorney General is investigating it as an illegal attempt to disrupt the election.

Deepfakes have already threatened 1 in 10 companies

Based on our research, approximately 10 percent of executives acknowledged that their companies had been the target of deepfake threats or attempts. Furthermore, an additional 10 percent expressed uncertainty regarding whether their businesses had been subjected to deepfake attacks by cybercriminals. These findings suggest that many leaders may lack the ability to recognize these types of attacks or that their teams lack established channels for reporting them.


Businesses may be more vulnerable to deepfake attacks during IPOs, mergers and corporate reorganizations.   

Despite the incredible harm they can cause, 37 percent of business leaders believe deepfakes don’t pose a severe risk to their businesses. Some reasons for their lack of concern could be that they don’t think their company is large enough to be targeted, their current cybersecurity policies are sufficiently strong to prevent harm or their businesses aren’t vulnerable to fraud.

Few companies have taken steps to mitigate deepfake risks

Our research revealed a concerning level of unpreparedness among American businesses: only one out of 10 company leaders said they have a comprehensive understanding of deepfakes and how to mitigate and handle threats to their business. [Read related article: Types of Cyber Risks Businesses Should Be Aware Of]

For instance, in the absence of deepfake training, the customer relations department of a prominent bank or financial institution could be at a loss when confronted with a wire fraud incident. Armed with voice cloning technology, the fraudster could impersonate a valued customer by contacting the bank’s call center and authorizing fraudulent transactions.

The evolving threat landscape and the increasing sophistication of cyberattacks have raised questions about the adequacy of traditional cybersecurity measures. Despite technological advancements, many organizations still rely on outdated or insufficient security solutions, leaving themselves vulnerable to various threats.


Four out of five companies don’t fully utilize existing technology to fend off deepfake attacks. Meanwhile, one out of every five business leaders admits that their cybersecurity measures are not up to date or sufficiently strong. This admission highlights the need for organizations to invest in robust cybersecurity solutions and prioritize implementing effective security practices. [Learn how to conduct a cybersecurity risk assessment.]

Moreover, 80 percent of companies lack protocols for handling deepfake attacks. Without a plan, these companies are vulnerable, as they won’t be prepared to address and mitigate such incidents to protect their business.

By adopting a proactive and comprehensive approach to cybersecurity, organizations can better protect themselves from traditional and emerging threats, such as deepfake attacks, and ensure the confidentiality, integrity and availability of their critical data and systems.

72 percent of companies have been the target of some type of fraud in the last 12 months

Business fraud happens frequently: Phishing, data breaches, and payment fraud are still the most common types, but the tactics are ever-changing. These common fraud schemes can be combined with deepfake media to damage targets further. [Read related article: Scams That Prey on Small Businesses]

For example, AI-generated deepfakes can enhance phishing attempts by creating convincing false identities to deceive targets. They can facilitate data breaches by tricking individuals into revealing secure information, such as passwords, or into granting access that attackers exploit later. Additionally, deepfakes can enable payment fraud by persuading senders to transfer funds under false pretenses or to incorrect accounts.

Three percent of business leaders said their companies had been targeted by deepfake attacks in the past year alone. However, more than a third of business leaders believe that deepfakes have not ultimately increased their fraud risk.

In the last year, which of the following types of fraud has your company been targeted by? (Select all that apply; figures show the percent of executives who selected each option.)

  • Phishing scams: 49%
  • Data breach or theft: 21%
  • Invoice and payment fraud: 17%
  • Ransomware attacks: 15%
  • Identity theft: 14%
  • Payroll fraud: 9%
  • Business email compromise (BEC): 7%
  • Intellectual property theft: 6%
  • Insider threats: 5%
  • Financial statement fraud: 5%
  • Supply chain fraud: 4%
  • Tax fraud (e.g., false refund claims): 4%
  • Deepfake attacks: 3%
  • Other: 2%
  • None of the above: 28%

While deepfake attacks are not the most common type of business fraud, they can cause significant financial damage. Victims in our study shared that damages from deepfake attacks were as high as 10 percent of their companies’ annual profits. 

How can businesses prevent deepfake fraud?

Only 5 percent of the business leaders in our research said their companies had fully implemented deepfake attack prevention steps across their staff, operating procedures, stakeholders and communication channels. Without the proper prevention strategies, the odds of an incident are higher.

Meanwhile, more company protocols and employee training are needed: Two out of three company leaders have little to no confidence that their employees could recognize a deepfake attack. Prevention requires all employees to be aware of deepfake technology and empowered to identify and respond to deepfake fraud. Here are a few steps experts suggest for reducing risk.

  1. Conduct regular risk assessments: As technology evolves, so do security threats. Continuously assess the organization’s cybersecurity risks and vulnerabilities to identify areas that require improvement. This can be done by in-house IT professionals or by a trusted consultant.
  2. Train all employees to recognize deepfake media: According to Griffith Harris, a certified insurance counselor who manages commercial policies, training employees to recognize the signs of deepfake content is paramount. “Regularly scheduled workshops that update staff on the latest developments in AI-generated fraud can be immensely valuable,” Harris explained. Empowering teams with knowledge and encouraging personal responsibility for fraud enables them to respond quickly to threats. [Learn how cyber insurance can help your business.]
  3. Implement challenge-response authentication and two-factor authentication (2FA): Deepfake attacks often exploit visual and audio channels to deceive victims, but 2FA adds an additional layer of security, making it harder for deepfakes to succeed. Multifactor authentication should be required for all high-value transactions to prevent fraud. Tom Siu, chief information security officer at cybersecurity firm Inversion6, also advises authenticating remote interactions; a minimal code sketch of this idea follows the list below.

“An executive team can agree on a common code word or code-phrase to use in any remote interaction,” Siu said. For example, the challenge question could be, “What color is your new car?” and the code-phrase answer might be something like, “Fruit juicy red.”

“Incorrect responses, or non-answers (evasion), would be your signal that this interaction is not with the person you think he or she may be,” Siu explained. 

  4. Invest in a multi-pronged cybersecurity defense strategy: Investing in cybersecurity tools is essential for heading off fraud of all kinds, including deepfakes. “Advanced detection tools show promise in identifying manipulated media, though more work remains,” said Taimur Ijlal, an information security leader at Netify. Primarily, though, Ijlal said advanced planning is key to cyber defense. “A well-tested incident response roadmap ensures leaders will navigate any crisis with care, honesty and concern for all impacted,” Ijlal added.
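To make the challenge-response idea concrete, here is a minimal, purely illustrative Python sketch of how a pre-shared code phrase could be verified in an automated approval workflow. It is not drawn from the study or from Inversion6’s tooling; the function names, the choice to store only a hash of the phrase and the use of a constant-time comparison are assumptions added for illustration.

    # Illustrative sketch only: verifying a pre-shared code phrase before a
    # high-value request proceeds. Names are hypothetical, not from the study.
    import hashlib
    import hmac

    def hash_phrase(phrase: str) -> bytes:
        """Normalize the phrase and store only its hash, never the plain text."""
        return hashlib.sha256(phrase.strip().lower().encode("utf-8")).digest()

    # Agreed offline by the executive team, e.g. the "fruit juicy red" example above.
    STORED_PHRASE_HASH = hash_phrase("fruit juicy red")

    def verify_remote_request(response_phrase: str) -> bool:
        """Return True only if the responder supplies the agreed code phrase.

        hmac.compare_digest runs in constant time, so response latency does not
        reveal how much of the phrase matched.
        """
        return hmac.compare_digest(hash_phrase(response_phrase), STORED_PHRASE_HASH)

    if __name__ == "__main__":
        print(verify_remote_request("Fruit juicy red"))             # True: correct phrase
        print(verify_remote_request("I don't have time for this"))  # False: evasion or wrong answer

In practice, the agreed phrase would live in a secrets manager and be rotated regularly; the point is simply that an out-of-band, pre-agreed secret defeats a deepfake that only mimics someone’s face or voice.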

Conclusion

Deepfakes are a real business risk in the 2020s, one that requires executive leadership to develop comprehensive plans outlining preventive measures and incident response policies. However, many companies remain vulnerable to financial losses and reputation damage because they operate with outdated or weak cybersecurity measures, and too many executives admit their employees have not been trained to identify deepfake media.

AI programs can create manipulated videos, photos or even audio with speed and sophistication, so it is easier than ever for scammers to mislead customers or defraud employees. Given the rapidly advancing nature of AI and deepfakes, companies must ensure their cybersecurity policies and employee training programs are up to date and consistently employed across their organizations.  

Our data

In May 2024, business.com conducted a web-based survey of 244 business leaders currently working as CEOs, C-suite executives, presidents or vice presidents. We did not include individuals who were the only team members in their companies. Thirty-nine percent of respondents were female, and 61 percent were male; 69 percent were white, 15 percent were Black, 11 percent were multiracial or another ethnicity, 5 percent were Asian and 1 percent declined to report. The median age of respondents was 44, and the median annual profit of the companies employing respondents was $450,000.
