Deepfakes are synthetic media — usually videos, images, or audio — where a person's likeness or voice is convincingly replaced or generated using AI. The term combines "deep learning" and "fake," reflecting the technology behind them. Deepfakes have legitimate uses in entertainment and accessibility but raise serious concerns about misinformation and fraud.

How deepfakes are created:

Face swapping: The most common type. AI models (typically autoencoders or GANs) learn the facial features of two people from photos or video, then swap one face onto the other's body in real time or in post-production. Modern tools need as few as 10-20 clear photos of a target face to generate convincing results.
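To make the autoencoder variant concrete, here is a minimal sketch (assuming PyTorch): a shared encoder learns a pose/expression code, one decoder per identity learns to render that code as that person's face, and the swap is simply decoding person A's code with person B's decoder. The layer sizes, training loop, and random stand-in images are illustrative assumptions, not any specific tool's implementation.

```python
import torch
import torch.nn as nn

# Shared-encoder autoencoder behind classic face swaps (toy scale).
def encoder():
    return nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 256), nn.ReLU())

def decoder():
    return nn.Sequential(nn.Linear(256, 64 * 64 * 3), nn.Sigmoid())

enc = encoder()                    # shared between both identities
dec_a, dec_b = decoder(), decoder()  # one decoder per person

# Stand-ins for aligned face crops; a real pipeline would load
# detected, aligned faces of each person here.
faces_a = torch.rand(16, 3, 64, 64)
faces_b = torch.rand(16, 3, 64, 64)

opt = torch.optim.Adam(
    [*enc.parameters(), *dec_a.parameters(), *dec_b.parameters()], lr=1e-3
)
loss_fn = nn.MSELoss()

for _ in range(100):
    # Each decoder learns to reconstruct its own person from the shared code.
    loss = loss_fn(dec_a(enc(faces_a)), faces_a.flatten(1)) \
         + loss_fn(dec_b(enc(faces_b)), faces_b.flatten(1))
    opt.zero_grad(); loss.backward(); opt.step()

# The swap: person A's pose/expression rendered as person B's face.
swapped = dec_b(enc(faces_a)).reshape(-1, 3, 64, 64)
```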

Face reenactment: Controls a target person's facial expressions using another person's movements. The target appears to make expressions and mouth movements they never actually made. This powers many of the "fake speech" deepfakes.
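As a rough illustration, the sketch below (Python with OpenCV and NumPy, both assumed) drives a target frame using landmarks from a driver frame via a single similarity warp. Real reenactment systems use learned motion models and dense warping; the landmark coordinates here are synthetic stand-ins for a face-landmark detector's output.

```python
import numpy as np
import cv2

# Toy "reenactment": warp the target frame so its landmarks follow
# the driver's pose. A flat gray image stands in for a video frame.
target_frame = np.full((256, 256, 3), 128, dtype=np.uint8)

# Hypothetical (x, y) landmarks — in practice these come from a
# face-landmark detector run on each frame of both videos.
target_pts = np.float32([[80, 100], [176, 100], [128, 180]])  # eyes, mouth
driver_pts = np.float32([[85, 110], [170, 105], [130, 190]])  # driver's pose

# Similarity transform moving the target's landmarks toward the driver's.
M, _ = cv2.estimateAffinePartial2D(target_pts, driver_pts)
reenacted = cv2.warpAffine(target_frame, M, (256, 256))
```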

Voice cloning: AI synthesizes a person's voice from audio samples. Modern systems can clone a voice from as little as 3-15 seconds of audio. The clone can then speak any text in the target's voice with matching intonation, accent, and emotion.
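The core trick in many cloners is a fixed-size "speaker embedding" computed from the reference clip, which then conditions a text-to-speech decoder. The NumPy sketch below fakes that encoder with simple spectral statistics on synthetic audio; the frame sizes, the embedding itself, and the random "recording" are all illustrative assumptions, not a real system's design.

```python
import numpy as np

sr = 16000
reference = np.random.randn(sr * 5).astype(np.float32)  # ~5 s stand-in "audio"

def speaker_embedding(audio, frame=400, hop=160, dims=64):
    # Frame the signal, take a magnitude spectrum per frame, and
    # average into a crude fixed-size voice "fingerprint". Real
    # systems use a trained neural speaker encoder instead.
    frames = np.lib.stride_tricks.sliding_window_view(audio, frame)[::hop]
    avg = np.abs(np.fft.rfft(frames, axis=1))[:, :dims].mean(axis=0)
    return avg / (np.linalg.norm(avg) + 1e-8)

emb = speaker_embedding(reference)
# A TTS decoder conditioned on `emb` would then speak arbitrary text
# in the reference speaker's voice.
```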

Full body synthesis: Newer models can generate entirely fictional people, both appearance and voice, who have never existed. Such models are already used in marketing and to create virtual influencers.

The technology stack: Most deepfakes use generative adversarial networks (GANs) or diffusion models. GANs pit two neural networks against each other: one generates fakes, the other tries to detect them. Through this adversarial process, the generator becomes extremely good at producing realistic output. Diffusion models take a different route, learning to reverse a gradual noising process so they can generate images by iterative denoising.
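The adversarial loop is easy to see in code. This minimal PyTorch sketch trains a generator to mimic a 1-D Gaussian while a discriminator learns to flag its output; deepfake GANs run the same loop on images with far larger networks. The architectures, learning rates, and toy data are all assumptions for illustration.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0  # "real" data: a shifted Gaussian
    fake = G(torch.randn(64, 8))           # generator's attempt

    # Discriminator step: push real toward 1, fake toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) \
           + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator (fake toward 1).
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```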

Legitimate uses:

  • Film production (de-aging actors, visual effects)
  • Accessibility (translating sign language, synthesizing speech for those who've lost their voice)
  • Education (bringing historical figures "to life" for learning)
  • Gaming and virtual reality
  • Privacy protection (replacing real faces in footage)

Harmful uses:

  • Political disinformation: Fake videos of politicians saying things they never said
  • Financial fraud: Voice cloning for CEO fraud schemes (one case resulted in a $243,000 theft using an AI-cloned voice)
  • Non-consensual intimate content: The most prevalent harmful use, disproportionately targeting women
  • Identity theft: Creating fake video for KYC (Know Your Customer) verification
  • Market manipulation: Fake CEO statements or product announcements

How to protect yourself:

  • Be skeptical of sensational video/audio, especially from unverified sources
  • Look for subtle artifacts: unnatural blinking, inconsistent lighting, blurry edges around the face, mismatched lip sync
  • Verify through multiple sources before believing or sharing dramatic video content
  • Use detection tools for critical verification, but don't rely on them alone (see the toy artifact sketch after this list)
  • Support organizations developing authentication standards, such as the Coalition for Content Provenance and Authenticity (C2PA)
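
As a toy example of what automated detectors look at, the NumPy sketch below scores how much of an image's spectral energy sits at high frequencies, where some GAN pipelines leave periodic traces. This is a single hand-rolled statistic, assumed purely for illustration; production detectors are trained classifiers, and no one heuristic should be relied on.

```python
import numpy as np

def highfreq_energy_ratio(gray_image):
    # Fraction of spectral energy outside a central low-frequency disc.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    outer = (yy - cy) ** 2 + (xx - cx) ** 2 > (min(h, w) // 4) ** 2
    return spectrum[outer].sum() / spectrum.sum()

frame = np.random.rand(256, 256)      # stand-in for a grayscale video frame
score = highfreq_energy_ratio(frame)  # compare against known-real footage
```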

The legal landscape: Several US states have enacted deepfake laws. The EU AI Act imposes transparency obligations on deepfakes, requiring that AI-generated or manipulated content be clearly disclosed. US federal legislation is progressing but is not yet comprehensive.