2023 has been a rollercoaster ride for technology enthusiasts and futurists alike. Each passing day brings us stories of artificial intelligence painting vivid pictures in realms previously dominated by human creativity, solving complex equations that would stump many, or even setting up new ventures in the digital marketplace. It’s been a time of heady optimism, with tech prophets suggesting no problem is insurmountable with the right code and algorithms. But amid this technological euphoria, there’s a pressing conversation we need to have — a hard look at the underbelly of this digital revolution, especially its impact on our most vulnerable: children and young people.
In the shadows of these developments lies a chilling reality: 1 in 5 women and 1 in 13 men carry the scars of childhood sexual abuse, a crisis magnified and morphed by the internet’s boundless expanse. Child sexual exploitation and abuse, or CSEA, carries deep downstream effects, from lifelong psychophysiological trauma to disrupted relationships and diminished economic prospects. The statistics are sobering: across 13 countries alone, up to 20% of 12 to 17-year-olds, amounting to over 5 million children in 2021, have experienced online CSEA. And these are only the cases we know of.
The digital playground has turned into a battleground. Cases of financial sexual extortion are exploding across the world, while immersive gaming platforms have become breeding grounds for sexual grooming and manipulation. Moreover, a surge in end-to-end encryption, or E2EE, on social media messaging services threatens to shroud millions of abuse reports in darkness.
Navigating the risks of digital technology including AI
The adoption of E2EE by Meta for its messaging services on Facebook and Instagram is a significant and contentious change. While E2EE enhances privacy by ensuring only the communicating users can read the messages, extending it to a full social network substantially raises safety concerns, especially for children and young people. The inherent features of the world's largest social media platform, such as suggesting friends, hosting large groups, offering global user search, and grouping by localities and institutions, can when combined with E2EE conceal dark and dangerous connections, including those between victims and abusers, and the networks of pedophiles and content sellers already operating on the platform. This presents a high-risk scenario.
The crux of the issue lies in the platform’s ongoing struggle to find the right equilibrium between algorithms that drive user engagement and the safeguarding mechanisms needed to detect and eliminate harmful content. The transition to E2EE messaging cannot responsibly proceed without robust, proven safety measures in place. Until these protections are established and verified, enabling E2EE could lead to a situation where the content of messages becomes invisible to any oversight, potentially escalating the risks of abuse and exploitation on the platform.
AI has exploded in recent years and is rapidly becoming embedded in our lives, likely for good. Despite its widely publicized potential to perform and even outperform humans in digital tasks, the technology comes with real risks, from threats to democratic values and the spread of mis- and disinformation to harms to mental health. These risks span all levels of society, particularly children and young people.
Additionally, AI has made the online CSEA crisis far harder to solve. Among the varieties of text and images AI can generate is child sexual abuse material, or CSAM. The people making and distributing CSAM have already integrated AI into their methods, often offline and undetectable, and the resulting content has reached a level of sophistication indistinguishable from real CSAM featuring real children. As AI progresses, so will its speed and inventiveness in generating CSAM, which presents an enormous challenge to legal systems worldwide. Locally, AI-generated CSAM is stretching law enforcement's already overextended resources, complicating the identification of actual victims, and increasing the risks of revictimizing existing victims. While we struggle to mitigate these risks, newer forms of digital and commercial abuse are emerging.
Progress made, but more needed
Despite this, 2023 has seen some substantial progress. The United Kingdom’s Online Safety Act has ushered in new accountability criteria across social media platforms, while the European Union has proposed a regulation to combat child sexual abuse. The United States executive order on AI has established new standards for AI safety and security to prevent misuse of emerging technologies, and the Child Online Protection Lab of France is making headway in multistakeholder action. Japan has made “agile governance” a fundamental policy to adapt to AI’s rate of advancement while pushing for more sweeping social media standards and survivor-led advocacy in CSEA. Earlier this month, the African Union approved the world’s first comprehensive policy framework for the implementation of children’s rights in the digital environment, making the continent a pioneer in digitalization that prioritizes children’s safety and empowerment by design and default. Individually these changes may be inadequate, but as more of them enter international discourse, their effects are bound to be felt across scales.
We need urgent action to combat digital harms
The progress, though significant, is nowhere near enough to tackle the scale of this rapidly evolving crisis. As we grapple with the question of how to tackle online harms to children, it's evident that the current biggest issues — expanding social media platforms, AI, E2EE, and new gaming platforms — demand pre-emptive and robust measures. To create a resilient digital ecosystem for our youth, we need a three-pronged strategy:
- First, we must recognize the borderless nature of these cyberthreats and strive to level the playing field through increased investment. The digital risks faced by children across the globe don’t respect national boundaries, thus requiring a united, international financial commitment. Investment must be equitable, empowering all nations to defend their children with equal vigor and resources. Since 2016, Safe Online — the only global fund dedicated to child online protection to date — has invested nearly $100 million into over 100 global projects aimed at fighting online CSEA. But we need much more.
- Second, the legislation and standards we enact must be future-proof and tech-neutral, designed to anticipate and counteract emerging dangers. These laws should mandate safety by design and encourage proactive measures to safeguard users before harm occurs. Viewing technology through a lens of vulnerability allows us to predict and protect against potential abuses, ensuring that our defenses evolve in tandem with technological advancements.
- Third, the advancement of safety technology is paramount. Our Safe Online tech portfolio is a testament to the power of innovation used for the protection of our young. We have already seen the impact of pioneering tools in over 100 countries, from classifiers that detect child abuse material and age verification tools to investigative tools piercing the veil of the dark web. Leveraging technology for good is not just an ideal — it’s a proven strategy that has identified perpetrators, rescued victims, and prevented untold instances of abuse.
The world stands at a crossroads today. Technology can and should be advanced, but its advancement can’t carry on unbridled. 2023 has shown us a glimmer of what future generations will face. We need increased investment, stronger regulations, deeper collaboration, and international agreement to pave the way for a safe digital world where technology acts for the good of current and future generations, and never for their harm.
Marija Manojlovic is the executive director of Safe Online, the only global fund dedicated to keeping children safe in the digital world. Since 2016, Safe Online has invested nearly $100 million in 100 projects to strengthen capacities, research, and evidence and develop innovative tech tools to combat digital threats to children across over 85 countries.