
Deepfakes Pose Business Risks: Here's What to Know

Abuse of GenAI

Generative AI (GenAI), which has immense positive potential, is being abused to create deepfakes, often through generative adversarial networks (GANs). GenAI refers to the ability of machines to create new content, such as text, code, images, and music, that resembles what humans can create. In parallel with the deepfake problem, there's a growing risk that large language models (LLMs) will be used to craft highly convincing, native-language text for phishing schemes, false backstories, and information manipulation and interference operations. What's more, threat actors are combining this language with deepfakes to manufacture potent lies at scale.

It's easy for adversaries to find useful material to inform impersonations. Thanks to the multitude of social media sites and personal content readily available online, a skilled threat actor can quickly research a target, develop a deepfake, and deploy it for malicious purposes. Executives, senior IT staff, and call center management are particularly attractive targets for such schemes because of the high potential to monetize the impersonation.

Emerging Defenses

There is no technological silver bullet to counter the risks posed by deepfakes. Deepfake detection is still an active research challenge and will continue to increase in complexity as the quality of media generation rapidly advances. Leading techniques typically take one of a handful of approaches:

Deep Learning

Using deep learning techniques to develop a model to distinguish between real and fake content by recognizing underlying patterns in the data.
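
To make this concrete, the sketch below shows what such a model might look like in PyTorch: a deliberately small CNN that outputs a single "fake" logit per frame. The architecture, input resolution, and the randomly generated training batch are illustrative assumptions, not a production detector.

```python
# A minimal sketch of a deep-learning deepfake detector: a small CNN
# trained to separate real frames from generated ones. Model size,
# input resolution, and the data below are assumptions.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit: P(fake)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = FrameClassifier()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a hypothetical batch of labeled frames:
frames = torch.randn(8, 3, 224, 224)          # stand-in for real data
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = fake, 0 = real
loss = criterion(model(frames), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```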

Artifact Detection

Using specific "tells" or artifacts within the content to identify key differences (e.g., close examination of subjects' eyes and mouths, or monitoring blood flow or pulse by detecting small movements and color changes in the video content).
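
As a rough illustration of the pulse-based idea, the Python sketch below estimates how much of a face crop's frame-to-frame color variation falls in the human pulse band. The frame rate, band limits, and random stand-in frames are assumptions; a real system would use tracked face crops and far more robust signal processing.

```python
# A simplified sketch of one artifact-based check: estimating a pulse
# signal from frame-to-frame color changes in a face region (an
# rPPG-style heuristic). Thresholds and inputs are illustrative only.
import numpy as np

def pulse_band_energy(face_frames: np.ndarray, fps: float = 30.0) -> float:
    """face_frames: (T, H, W, 3) uint8 crops of the same face over time.
    Returns the fraction of spectral energy in the human pulse band
    (~0.7-4 Hz, i.e., 42-240 bpm). Real faces tend to show a peak
    here; many generated faces do not."""
    green = face_frames[..., 1].mean(axis=(1, 2))  # mean green per frame
    green = green - green.mean()                   # remove DC offset
    spectrum = np.abs(np.fft.rfft(green)) ** 2
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return spectrum[band].sum() / (spectrum.sum() + 1e-9)

# Usage with stand-in data; a real pipeline would supply tracked crops.
frames = np.random.randint(0, 255, (300, 64, 64, 3), dtype=np.uint8)
score = pulse_band_energy(frames)
print(f"pulse-band energy ratio: {score:.2f}")  # low ratio -> suspicious
```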

Pairwise Analysis

Taking a pairwise view that directly compares two pieces of content to determine which is more likely to be fake, the idea being that a ranked rating may be more reliable than a non-contextual prediction.
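
A minimal sketch of that pairwise idea, using PyTorch's margin ranking loss, follows. The 128-dimension embeddings and the scorer network are placeholders for whatever feature extractor a real system would use.

```python
# A minimal sketch of pairwise analysis: instead of scoring one clip in
# isolation, train a scorer so that, given a (real, fake) pair, the fake
# receives the higher "fakeness" score.
import torch
import torch.nn as nn

scorer = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.MarginRankingLoss(margin=1.0)
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-4)

# Hypothetical 128-dim embeddings of paired content (same scene,
# one authentic version and one manipulated version).
real_emb = torch.randn(16, 128)
fake_emb = torch.randn(16, 128)

s_fake = scorer(fake_emb).squeeze(1)
s_real = scorer(real_emb).squeeze(1)
target = torch.ones(16)  # "first input should rank higher"
loss = loss_fn(s_fake, s_real, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At inference, compare two candidate clips directly: the one with the
# higher score is judged more likely to be fake.
```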

Within these areas, several studies report high deepfake detection accuracy, but there are also limitations to consider.

Technological Hurdles

A major limitation of deepfake detection techniques is their generalizability. While training a model to perform well on a closed subset of media generation techniques is approachable, training a model that will perform well on previously unseen (or not-yet-invented) generation techniques is much more challenging. AI-based approaches look for patterns or tease out small differences that let them model a clear separation between classes. However, performance quickly degrades when the model struggles to find where to look for these differences or when the differences are spread across several areas.
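
One common way to measure this is leave-one-generator-out evaluation: train on media from some generation techniques and test on one the model has never seen. The sketch below illustrates the protocol with synthetic stand-in features and hypothetical generator names.

```python
# A hedged sketch of leave-one-generator-out evaluation. The features,
# labels, and generator names are synthetic stand-ins; on real data,
# accuracy on the held-out generator typically drops.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
generators = ["gan_a", "gan_b", "diffusion_c"]  # hypothetical sources
X = {g: rng.normal(size=(200, 32)) for g in generators}
y = {g: rng.integers(0, 2, size=200) for g in generators}

for held_out in generators:
    train = [g for g in generators if g != held_out]
    X_tr = np.vstack([X[g] for g in train])
    y_tr = np.concatenate([y[g] for g in train])
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    acc = accuracy_score(y[held_out], clf.predict(X[held_out]))
    print(f"held out {held_out}: accuracy {acc:.2f}")
```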

Another challenge is creating techniques that withstand reverse engineering attacks. If threat actors can identify the specific features that lead a model to label an image, voice sample, or video as fake, they may be able to manipulate those features in future deepfakes to trick detection models into misclassifying them, bypassing detection systems. A successful model must also work even with large variations in sample quality.
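
The sketch below illustrates this concern in its simplest white-box form: with access to a detector's gradients, a small FGSM-style pixel perturbation can nudge a fake frame toward a "real" score. The untrained placeholder detector and the epsilon value are assumptions for illustration.

```python
# A simplified sketch of the evasion risk: if an attacker can query a
# detector's gradients (a white-box assumption), a small FGSM-style
# perturbation can push a fake frame toward a "real" classification.
import torch
import torch.nn as nn

detector = nn.Sequential(  # untrained placeholder detector
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)

fake_frame = torch.rand(1, 3, 224, 224, requires_grad=True)
target_real = torch.zeros(1, 1)  # attacker wants the "real" label (0)

logit = detector(fake_frame)
loss = nn.functional.binary_cross_entropy_with_logits(logit, target_real)
loss.backward()

# Step the pixels against the gradient of the "real" loss; epsilon caps
# the perturbation so the change stays visually negligible.
epsilon = 2.0 / 255.0
adversarial = (fake_frame - epsilon * fake_frame.grad.sign()).clamp(0, 1)
print("score before:", torch.sigmoid(logit).item(),
      "after:", torch.sigmoid(detector(adversarial)).item())
```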

As the field continues to advance, new promising approaches must be weighed against the current technological landscape as well as the use case in question. There may be techniques that sufficiently address a given requirement. However, care must be taken to evaluate the changing threat landscape and the overall risk continuously.

Countering Deepfakes Today

To fight AI with AI, detections need to become targeted and refined. While there is no streamlined AI-based defense against deepfake threats, organizations can mitigate the risks by building a robust, security-centered culture:

  1. Educate staff about the risk of deepfakes, the potential for damage, and tips for spotting deepfakes. Personnel can use this understanding to identify where an image or video may be distorted or appear fake; for instance, hollow eyes, odd shadows, oddly rendered hands, distorted words on signs in the background, or other blurred features can stand out to a trained eye. Also, track emerging guidance on countering voice-cloning risks.
  2. Increase protection against deepfake threats with robust authentication and verification, fraud detection, highly tuned phishing detection tools, and a defense-in-depth posture with multiple layers of defense that can withstand the compromise of a single control. Prioritize shoring up existing cybersecurity controls and tools, ensuring they are well-tuned and detecting threats as needed. Also, apply frameworks like DISARM to characterize, discuss, and hunt disinformation threats.
  3. Review recent U.S. cybersecurity guidance on deepfake threats. It discusses using technologies to detect deepfakes and show media provenance as well as applying authentication techniques and/or certain standards to protect the public data of key individuals. The latter includes planning and rehearsing, reporting, and sharing experiences, training personnel, using cross-industry partnerships, and understanding what companies are doing to preserve the provenance of online content (a simplified provenance-verification sketch follows this list).
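
To illustrate the provenance idea in item 3, the sketch below shows a heavily simplified sign-and-verify flow using the Python cryptography library. It is loosely inspired by content-credential standards such as C2PA but is not the C2PA API; the keys and payload are stand-ins.

```python
# A simplified provenance sketch (NOT the C2PA API): a publisher signs
# a hash of the media at creation time, and anyone holding the public
# key can later verify the bytes are unmodified and came from that key.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: sign the media digest at publication time.
signing_key = Ed25519PrivateKey.generate()
media_bytes = b"...raw video or image bytes..."  # stand-in payload
digest = hashlib.sha256(media_bytes).digest()
signature = signing_key.sign(digest)
public_key = signing_key.public_key()

# Consumer side: recompute the digest and verify the signature.
def is_provenance_intact(data: bytes, sig: bytes, pub) -> bool:
    try:
        pub.verify(sig, hashlib.sha256(data).digest())
        return True
    except InvalidSignature:
        return False

print(is_provenance_intact(media_bytes, signature, public_key))         # True
print(is_provenance_intact(media_bytes + b"x", signature, public_key))  # False
```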

Contact Us

Learn more about countering deepfake threats and applying technical AI solutions to strengthen cybersecurity and advance strategic business priorities.


