AI Deepfakes: How to Protect Your Business from the New Cyber Deception

In early 2023, a Graphika report highlighted how threat actors were promoting AI-generated video footage, or “deepfakes.”[1] These AI deepfakes of fictitious, English-speaking news anchors demonstrated how commercially available AI tools can be used to create deceptive content.

Recently, cases of AI deepfakes have emerged with increasing regularity. Some of them are funny or intriguing, but many more are outright frightening. These deepfakes demonstrate how the technology can wreak havoc on individuals and, potentially, entire societies.

Companies can also suffer devastating damage from deepfakes. To manage these risks, it’s vital to understand the potential effects of deepfakes. This article highlights those risks and effects and explores how organizations can protect themselves from this new cyber deception technique.

 

What Is an AI Deepfake?

Deepfakes are highly convincing imitations of real people. They can take the form of images, videos, audio, or text messages. Creators use deepfake AI, specifically deep learning networks and facial recognition algorithms, to manipulate the images of real people and produce imitations that look or sound completely authentic. The word deepfake combines “deep learning” and “fake media.”

Although most deepfakes are videos, audio deepfakes have also begun to appear. Many of these are parodies of popular songs that showcase the potential of AI-generated voice synthesis. Recent examples include a “cover” of Billy Joel’s classic “We Didn’t Start the Fire” (purportedly by Jay-Z) and a so-called “collaboration” between two long-dead musical legends, Frank Sinatra and Ella Fitzgerald, singing “City of Stars” from the 2016 movie La La Land.

Audio deepfakes are also emerging as tools of crime: enterprising criminals in Europe have used AI to scam people out of large sums of money by cloning the voices of victims’ family members.

 

How Threat Actors Use AI Deepfakes to Compromise and Damage Organizations

Malicious AI deepfake campaigns can be used to spread disinformation and promote the interests of certain groups. They can also be used to sway the outcomes of elections, perpetrate frauds or hoaxes, manipulate public opinion, and even discredit well-known persons. 

Specific to organizations, deepfakes can be used to cause reputational damage or steal money or data. 

Make false claims about a company

Armed with sophisticated AI tools, threat actors can create AI deepfakes that make false claims about an organization. If these claims show the company in a bad light, its brand value, share price, and reputation may be adversely affected.

Implicate the firm in a hoax

Some AI deepfakes are hoax or scam videos that appear to come from a real firm. Those who fall for the hoax realize too late that they were fooled, but by then, the firm is implicated in the scam, damaging its stability, reputation, or revenues.

Steal information or identities

Increasingly, cybercriminals use deepfake AI technology in social engineering scams that manipulate victims into parting with information or funds. Deepfake attacks are also used in conjunction with phishing and business email compromise (BEC) attacks. These attacks often imitate real people in an organization, such as C-suite leaders, and then use those identities to dupe employees into handing over the company’s money or data.

 

What Does a Real-World AI Deepfake Campaign Look Like?

In February 2024, a multinational firm discovered how costly a well-planned deepfake campaign can be. The company lost roughly US $25 million after one of its Hong Kong-based finance workers was tricked into joining a video conference call with multiple “colleagues,” including the “CFO.” He didn’t know that every video feed on the call was fake and that he had unwittingly become the target of an advanced deepfake scam. Believing the request was genuine, he remitted the money as the “CFO” instructed. By the time he realized his mistake, the multimillion-dollar damage to the firm had already been done.

In a similar incident, thieves used voice-cloning software to impersonate a senior leader of a British energy company. The target, one of his subordinates, was fooled into transferring $240,000 to an account controlled by the thieves.

 

How Can You Protect Your Business from AI Deepfakes?

The security risks posed by deepfakes have prompted security agencies, such as the FBI, to issue public service announcements about the use of AI deepfake content in various types of criminal schemes, including fake job interviews designed to solicit sensitive information from “applicants.”

Others, such as the National Security Agency (NSA), have also issued advisories for organizations. The most recent advisory recommends that organizations implement controls to recognize and avoid deepfakes, such as real-time verification, passive detection, and incident response planning.

These security measures can also help to minimize the risks and impact of deepfake attacks (a short code sketch of one of them, multi-factor authentication, follows the list):

  •  Regular data backups
  •  Data encryption
  •  Multi-factor authentication
  •  Software patching
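
Of these controls, multi-factor authentication is the easiest to illustrate in code. Below is a minimal sketch of time-based one-time-password (TOTP) verification using the open-source pyotp library; the account name, issuer, and enrollment flow shown here are illustrative assumptions, not a production design.

    # Minimal TOTP sketch using the pyotp library (pip install pyotp).
    # Account name, issuer, and secret handling are illustrative assumptions.
    import pyotp

    # Enrollment: generate a per-user secret once and store it securely.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # The user loads the secret into an authenticator app, e.g. via this URI.
    print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

    # Login: verify the six-digit code the user types in.
    def is_code_valid(user_code: str) -> bool:
        # valid_window=1 tolerates small clock drift between devices.
        return totp.verify(user_code, valid_window=1)

Even a simple second factor like this means a deepfaked voice or video alone is not enough to authorize access.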

Cybersecurity awareness training for company personnel is also critical to minimizing the risk of deepfake attacks. Employees should be trained to verify the authenticity of video or audio content by watching for the following signs (a simple automated check for one of these cues appears after the list):

  •  Blurry details
  •  Irregular lighting or odd shadows
  •  Unnatural or jerky eye, facial, or body movements
  •  Audio that does not match the speaker’s lip movements
  •  Absence of emotion
  •  Unnatural individual characteristics, appearance, expressions, postures, or features
  •  Image distortions
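
Some of these cues can also be screened for automatically. The following is a minimal sketch, not a vetted deepfake detector, that uses the open-source OpenCV library to flag unusually blurry frames via the variance of the Laplacian, a common sharpness heuristic; the file name and threshold are assumptions you would tune for your own footage.

    # Minimal blur-screening sketch using OpenCV (pip install opencv-python).
    # The input path and threshold are illustrative assumptions.
    import cv2

    BLUR_THRESHOLD = 100.0  # Laplacian variance below this suggests a blurry frame.

    cap = cv2.VideoCapture("suspect_clip.mp4")
    blurry = total = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        total += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Variance of the Laplacian: low values mean few sharp edges.
        if cv2.Laplacian(gray, cv2.CV_64F).var() < BLUR_THRESHOLD:
            blurry += 1
    cap.release()

    print(f"{blurry} of {total} frames flagged as unusually blurry")

A high share of flagged frames doesn’t prove a video is fake, but it is a reasonable signal to escalate the clip for human review.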

It’s also important to encourage users to always listen to their intuition, especially if it says that something isn’t quite normal about a particular call, assignment, or request. Finally, companies must ensure that identity verification is part of the security culture. All users must verify that someone is who they claim to be before initiating transactions, sharing sensitive information, or transferring funds.

 

Strengthen Your Business’ Security with GCS Technologies

Deepfakes are quickly evolving and getting more realistic by the day. Fortunately, detection tools are also improving, allowing organizations to detect, filter, and quarantine deepfakes, protecting their people and assets from harm.

Even so, the best way to avoid AI deepfake attacks is to strengthen the weakest link in your business’ cybersecurity: people. Mitigate risk through cybersecurity awareness training. GCS Technologies provides customized security awareness training to help businesses minimize the human causes of cyberattacks. 

In addition, our managed security offering, Secure Cloud, combines Microsoft Cloud security services with a dedicated team of security experts, enabling organizations to minimize security risks, prevent breaches, and even potentially lower their cybersecurity insurance premiums.

Discover how you can strengthen your business’ security with our security training and managed security solutions. Contact us to get started.

___________________________________________________

[1] Graphika, “Deepfake It Till You Make It,” https://public-assets.graphika.com/reports/graphika-report-deepfake-it-till-you-make-it.pdf

 
