
The Synthetic World: Deepfakes, AI-Generated Content, and Raising Children Who Can See Clearly

Layla Mansour | March 5, 2026 | 7 min read

In early 2023, a family in Arizona received a phone call that every parent fears. The voice on the line belonged to their teenage daughter — her exact cadence, her exact way of saying "Mom" when she was frightened. She was screaming. She had been in an accident. She needed money immediately.

The daughter was at home. She had never made the call. The voice had been cloned from a few seconds of audio extracted from her social media account — the kind of casual, smiling video that millions of teenagers post without thinking about what they are giving away. The mother who answered the call had to be told several times before she could accept what had happened. "I still believe it was her," she said afterward. "That's the thing that scares me."

This is the shape of what AI has done to deception. It has made the most intimate evidence — a loved one's voice, a familiar face — synthetic and cheap to produce. And it has done so at a moment when children are the most actively documented generation in human history, their images and voices distributed across public platforms, available to anyone who wants to construct a replica.

When Seeing Is No Longer Believing

For most of human history, the evidence of a person's presence — their face, their voice, their likeness — required their actual presence. Photography changed this partially, and imperfectly: photographs could be faked, but faking them was labor-intensive, detectable by trained eyes, and accessible to very few people.

Generative AI has changed this completely. The tools required to produce a convincing video of a person saying something they never said, or a photograph of a body that does not belong to them, are now commercially available, inexpensive, and require no technical expertise. A realistic voice clone can be generated from three to five seconds of audio. A realistic face replacement can be applied to existing video footage in minutes. A photorealistic image of a person in any context, any scenario, any state of dress, can be generated from publicly available photographs.

Only about 40 percent of teenagers, Common Sense Media found, can reliably identify AI-generated content when they encounter it. In one CISPA study, even expert linguists misclassified 62 percent of AI-generated text as human-written. The European Parliament's 2025 research brief concluded that "children are particularly vulnerable to synthetic content such as deepfakes, and because of their still-developing cognitive abilities, can be manipulated more easily than adults."

Voices From the People We Love

The voice cloning scam — known in its earlier form as the "grandparent scam," in which callers impersonated grandchildren claiming to be in danger — has been transformed by AI from a fraud that the vigilant could resist into one that defeats even the skeptical. When the voice is indistinguishable from the real person's voice, the rational defenses that protect against conventional fraud are not activated. The call triggers emotion, not analysis.

Global losses from deepfake-enabled voice fraud exceeded $200 million in the first quarter of 2025 alone. Vishing — voice phishing using AI-generated voices — surged by 442 percent in 2025. One in four people surveyed have experienced an AI voice cloning scam or know someone who has. The average victim loses approximately $6,000.

For families with teenagers who maintain active social media presences — the majority of families in most countries — the raw material for a voice clone is already publicly available. Every video posted, every reel uploaded, every spoken message sent to a wider audience provides the audio samples from which a clone can be constructed.

The Fastest-Growing Crime

The Internet Watch Foundation reported 3,440 AI-generated child sexual abuse videos in 2025 — up from 13 in 2024. That is a 26,362 percent increase in a single year. Ninety percent of AI-generated images assessed by the Foundation's analysts were deemed realistic enough to be treated under the same law as real child sexual abuse material.

The National Center for Missing and Exploited Children received 440,419 reports related to AI-generated child exploitation material in the first half of 2025 alone — a 6,345 percent increase over the same period in 2024.
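
The percentages in those two paragraphs are extreme enough to read like typos. They are not. A quick back-of-the-envelope check (a minimal Python sketch using only the figures quoted above; the implied 2024 NCMEC baseline is an inference, not a reported number) reproduces them:

```python
# Back-of-the-envelope checks on the growth figures quoted above.

# Internet Watch Foundation: AI-generated abuse videos, 2024 vs. 2025.
videos_2024 = 13
videos_2025 = 3_440
iwf_increase = (videos_2025 - videos_2024) / videos_2024 * 100
print(f"IWF year-over-year increase: {iwf_increase:,.0f}%")  # -> 26,362%

# NCMEC: a 6,345% increase is roughly a 64-fold jump, which implies
# a first-half-2024 baseline of about 6,800 reports (an inference,
# not a figure quoted in the report above).
reports_h1_2025 = 440_419
growth_factor = 1 + 6_345 / 100  # 64.45x
implied_h1_2024 = reports_h1_2025 / growth_factor
print(f"Implied H1 2024 baseline: {implied_h1_2024:,.0f}")  # -> ~6,833
```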

These numbers are almost too large to hold. They describe a technology that has been weaponized against children with a velocity that no institution — legal, regulatory, technological, or educational — has been able to match.

The specific mechanism that is most directly threatening to families is sextortion using AI-generated images. A predator takes ordinary, fully clothed photographs from a child's public social media account or school website, processes them through commercially available AI tools, and produces explicit imagery. That imagery becomes the lever of extortion: pay money, or the images will be distributed. The images do not have to be real. The threat of distribution — and the permanent existence of the synthetic material — is sufficient.

A 2024 survey found that one in ten minors reports that peers have used AI to generate nude images of other children. One in eight victims of sexual extortion reports being threatened with a deepfake specifically.

The TAKE IT DOWN Act, signed in the United States in May 2025, was the first federal law to criminalize the nonconsensual publication of intimate images including deepfakes, requiring covered platforms to remove reported content within 48 hours. Penalties include up to three years imprisonment. It was a meaningful step. It came years after the harm had already reached industrial scale.

Teaching Children to See

The response to a world saturated with synthetic content cannot be purely defensive — a list of things to block, tools to install, behaviors to prohibit. It must also be developmental: the cultivation of habits of mind that function across all the forms the problem will take, including the ones that have not yet been invented.

The research on what actually works points toward a set of dispositions rather than a curriculum. Children who are most resistant to synthetic media manipulation are those who have been taught to ask, habitually: Who made this, and why? Does the same claim appear from multiple independent sources? What would the world look like if this were true? Does something feel slightly wrong about this — the eyes, the voice, the cadence — even if I can't name what?

These questions will not make every deepfake visible. The technology is advancing faster than any recognition technique. But children who have practiced asking them are more likely to pause before sharing, more likely to verify before believing, and more likely to bring a suspicious piece of content to a parent rather than act on it alone.

The MIT Media Lab has found that seven-year-olds tend to attribute real feelings and personality to AI agents — treating them as credible human sources. The window for building skepticism, it turns out, is earlier than most parents think.

In the next piece, we move from synthetic content to the most direct use of AI against children: how predators are using artificial intelligence to identify, approach, and exploit minors in ways that previous generations of parents had no reason to anticipate.


This is Part 5 of "Raising Children in the Age of Intelligent Machines," a 10-part series from PeopleSafetyLab on the intersection of AI and family safety.


Layla Mansour

Science and policy writer covering artificial intelligence, digital rights, and child safety in the Arab world. Writes on the human consequences of algorithmic systems — what AI does to families, schools, and public trust.
