Based in the U.S. | Writing about my research journey
Photo: Cantor Arts Center, Stanford University / Richard Serra, Sequence (2006)
“If we want to win big with teens, we must bring them in as tweens,” according to internal Meta (Instagram) documents from 2018 disclosed in court today in Los Angeles.
Reuters
Today, Mark Zuckerberg took the stand in what could become the most consequential trial yet over social media’s impact on young people’s mental health.
During questioning, an internal Instagram document from 2018 was read aloud in court:
“If we want to win big with teens, we must bring them in as tweens.”
In other words: if you want teenage users, you reach them at 10, 11, 12. The quote was presented as evidence that, internally, Meta viewed pre-teens as a strategic entry point — despite its public position that children under 13 are not allowed on the platform. Under oath, Zuckerberg said enforcing age limits is “very difficult” and acknowledged that many users lie about their age. He denied that Meta deliberately targets children.
But jurors were also shown internal documents indicating:
– significant numbers of 10–12-year-olds were already using Instagram
– company discussions about increasing time spent on the app
– and strategic prioritization of young users
Plaintiff’s lawyers argued that Meta’s strategy was top-down: get users on the platform as early as possible — and then keep them there through design features such as beauty filters, infinite scroll and autoplay. Meta maintains the lawsuit oversimplifies complex mental health issues and says user well-being has always been part of its considerations.
That admission, also reported by Bloomberg and other US media outlets, sat uneasily alongside the internal documents presented in court, which suggested the company was well aware that younger users were already on the platform and, in some cases, discussed how to grow engagement among them.
The legal question now facing the jury is not simply whether teenagers spend too much time online. It is whether Meta designed its platforms in ways that deliberately maximized engagement among young users while publicly maintaining that children under 13 were not permitted.
For years, tech companies have relied on Section 230 as a legal shield. This case takes a different approach. It treats Instagram less as a neutral platform and more as a product — one whose design choices may carry consequences.
Zuckerberg insisted that user well-being has always been a priority. But today's testimony once again exposed the tension between Meta's public statements about age restrictions and safety and what internal documents reveal about growth, engagement and competition.
AI Is Moving Fast. Child Protection Isn’t. Not a Future Risk—but a Rapidly Escalating Crisis
“AI child sexual abuse imagery is not a future risk – it is a current and accelerating crisis”
— Internet Watch Foundation CEO Kerry Smith
Photo: Richard Serra, Sequence (2006), Cantor Arts Center, Stanford University
It is a new year—one that, more than ever, begins under the sign of rapidly accelerating AI.
At the end of 2025, I visited Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI). I had been invited by Riana Pfefferkorn, who researches the law and policy implications of emerging technologies, including AI.
My research focuses on children and young people, and I am particularly concerned with understanding how their rights can best be protected in the face of these developments. One of the Institute's core efforts at Stanford HAI is to track, map, and synthesize developments across the rapidly evolving field of artificial intelligence. A key outcome of this work is the AI Index Report 2025—a comprehensive account of how artificial intelligence is accelerating across society, the economy, and governance.
The report shows just how fast things are moving. New, more demanding benchmarks such as MMMU, GPQA, and SWE-bench saw dramatic year-over-year improvements, with performance gains of up to 67 percentage points in a single year. At the same time, AI systems are becoming cheaper, more accessible, and increasingly realistic.
This is especially visible in video generation. Models like Sora and Veo 2 represent a clear leap over 2023 systems, producing highly realistic, cinematic content at a speed and scale that was unthinkable just a year ago.
AI has supercharged the already endless flow of visual information we scroll through every day on social media and other visual, networked technologies. Images and videos are no longer just shared; they are generated, scaled, and optimized by machines.
Over five billion people now collectively produce an estimated 2.5 quintillion bytes of data every day, and the volume of data has increased by 90% in just two years (U.S. Chamber of Commerce Foundation, 2023). This acceleration of visual production raises urgent questions about how children and young people are encountered, represented, and protected within digital culture.
As Riana Pfefferkorn has warned, “computer-generated child sex abuse imagery poses significant challenges to law enforcement, including constitutional limits on criminal prosecutions” (Pfefferkorn, 2024). Advances in generative AI are making it increasingly feasible to create highly realistic child sexual abuse imagery, exposing serious gaps in existing legal and regulatory frameworks.
This is a regulatory blind spot. The Internet Watch Foundation (IWF) has warned that AI-generated child sexual abuse material is “not a future risk—it is a current and accelerating crisis.” In 2024, the IWF recorded 245 reports containing actionable AI-generated child sexual abuse imagery, compared to just 51 reports in 2023—a 380% increase in a single year. These reports alone included 7,644 images and a growing number of videos.
So where can we find hope as we enter a new year?
Perhaps in the growing recognition that this is no longer a debate about abstract trade-offs between innovation, privacy, and safety. As the Internet Watch Foundation has argued, the real choice—particularly in Europe—is not between privacy and protection, but between indifference and compassion.
Hope lies in the fact that the harms are now documented, the grey zones exposed, and the technical and legal arguments more precise. We know far more than we did just a few years ago. We know that children’s rights are violated not only when abuse occurs, but when it is recorded, replicated, and endlessly circulated. And we know that inaction is itself a political choice.
Protecting children is not in opposition to freedom or innovation. It is a precondition for both.
Key takeaways from Pfefferkorn's policy brief:
Schools are largely unprepared to address the risks of AI-generated child sexual abuse material (CSAM), including the use of so-called “nudify” apps and the circulation of deepfake nudes among students. Few schools educate students about these risks or train educators to respond effectively when incidents occur.
Recent criminalization of AI-generated CSAM is insufficient on its own. While many states have updated criminal law, most have failed to provide clear guidance for how schools should handle cases where minors themselves create or share such material.
Schools need clearer legal and policy frameworks. States should update mandated reporting and school discipline policies to clarify when educators are required to report deepfake nude incidents and to explicitly recognize such behavior as a form of cyberbullying or sexual harm.
Punitive approaches are often inappropriate for minors. Responses to student-on-student AI-generated CSAM should prioritize behavioral and educational interventions over criminal punishment, grounded in child development, trauma-informed practices, and principles of educational equity.
References:
Internet Watch Foundation (IWF).
Pfefferkorn, R. (2024). Addressing computer-generated child sex abuse imagery: Legal framework and policy implications. Lawfare Institute, in cooperation with Brookings. February 5, 2024.
Pfefferkorn, R., Grossman, S., & Liu, S. (2025). AI-generated child sexual abuse material: Insights from educators, platforms, law enforcement, legislators, and victims. Stanford Digital Repository. https://purl.stanford.edu/mn692xc5736
Stanford Institute for Human-Centered Artificial Intelligence (HAI). (2025). AI Index Report 2025. Stanford University.