Industry Insights

Apr 23, 2024

Putting an End to Deepfake-Enabled Cyberbullying

As generative AI becomes easily accessible to the youngest and most impressionable members of society, deepfakes are quickly beginning to fuel a new era of cyberbullying. The case of deepfake explicit photos of students generated and distributed by their classmates at Beverly Vista Middle School is only the latest example of a learning environment made hostile and dangerous for children — especially teenage girls. Faceswap and “undressing” apps enable bullies to create materials to harass, exploit, and blackmail their classmates using nothing more than a single photograph and no technological know-how. Students targeted by this vile form of bullying have found that there is little legal recourse available to them, as both state and federal lawmakers have largely failed to regulate this evolving technology and establish consequences for bullies who weaponize deepfakes.

Over the past two decades, social media has become a hotbed for new forms of bullying. According to CDC studies, 14.9 percent of adolescent children have been cyberbullied, and 13.6 percent have made a serious suicide attempt. With the possibilities enabled by generative AI, social media cyberbullying can become even more targeted, specific, and irreversible, given how easily bullies can create extremely convincing deepfakes depicting their targets in compromising or sensitive situations, or portraying them as doing or saying things that never happened. As with most cases of bullying among young demographics, this trend is a convergence of powerful technological tools placed in immature hands and children’s inability to fully grasp the scope of the damage those tools can cause until it is too late.

Responding to the recent surge of AI-enabled bullying, specifically the widely reported case involving several high school-aged girls targeted by their classmates in New Jersey, federal lawmakers have introduced a bipartisan bill that would enable victims to collect damages from abusers. As with many attempts at legislation in the U.S., the bill is currently stalled in Congress. While we welcome these early efforts from lawmakers, even as they come late and move too slowly, collecting financial damages after the fact is hardly sufficient reparation for the deep and possibly lifelong psychological and developmental damage bullying can cause.

Why Cyberbullying Matters to Me

As a father, I am deeply concerned about the many ways in which deepfakes can be misused against children, and I feel we must do far more, and more quickly, to counter these trends. All generative AI platforms that enable the easy creation of deepfakes should have a minimum age requirement. Since users can lie about how old they are, such policies should also come with reliable age verification. This policy should be non-negotiable and would go a long way toward deterring many young users. Still, some would find ways to use these tools despite age limits.

Since lawmakers move slowly to enact legislation that punishes those who leverage generative AI tools to harm others, the burden falls on parents, schools, and social platforms to stay informed about how deepfakes are being perceived and used by children, and to devise methods to minimize the damage. Because social media platforms are where the majority of online bullying takes place, we hope to see these platforms bolster their moderation efforts — or to see our elected officials mandate that they do so — to ensure that deepfakes fueling cyberbullying are swiftly detected and taken down. Human moderator teams, no matter how large and skilled, cannot keep up with the onslaught of increasingly sophisticated deepfakes that will continue to flood their sites. Only technologies that leverage AI to detect AI can reliably catch malicious deepfakes in real time and help keep online spaces safe for their youngest users.

Ultimately, the problem of bullying will not be solved through technological tools alone, but through measures that disincentivize bullies from adopting AI-assisted “scorched earth” tactics and spreading online misogyny and hate. We unfortunately cannot afford to wait for such changes to simply happen on a societal level, and, in the near term, must do what we can to focus on quick wins that prevent the misuse of generative AI by and against children. This means regulation with teeth: laws that have immediate and sustained effects and can turn the tide against the tangible devastation deepfakes inflict on younger generations. It is our belief that such legislation serves all parents and children, regardless of political ideology or locale, and will accelerate the demise of deepfake-enabled cyberbullying.

By working together, we can create a safer environment for children — both online and off — where they can grow, learn, and thrive without the fear of being targeted by malicious deepfakes. It is our responsibility as parents, educators, policymakers, and technology leaders to take immediate action against this growing threat and ensure that our youngest citizens are protected from its harmful effects.
