Who's Responsible for Keeping Kids Safe Online? Everyone.
- Sarah DeGue
- Apr 14
- 7 min read
Part 3 of a 3-Part Series on Protecting Kids in Digital Spaces
In Part 1, I laid out why we’re failing kids online. In Part 2, I made the case for what prevention can accomplish — reducing the number of people who want to harm children, and reducing children’s vulnerability to being harmed. That work is essential. It’s also not sufficient on its own.

Even the best prevention programs can’t reach every potential perpetrator. They can’t reach someone operating from another country. They can’t undo the design of a platform that makes exploitation easy. They can’t enforce a law that doesn’t exist.
That’s not a failure of prevention — it’s a limit of scale. Keeping kids safe also requires action at the structural level: the platforms, policies, and regulatory frameworks that determine whether harm is easy or difficult to carry out at scale.
The environment is part of prevention too.
Teaching children to recognize grooming is important. Designing platforms so that adults cannot easily locate and contact children they don’t know is more powerful — because it works for every child, including those who never got the lesson.
Right now, our approach to online child safety is weighted heavily toward the individual end of the social-ecological spectrum. We educate kids. We counsel parents. We investigate cases after harm has occurred. What we haven’t done — not nearly enough — is hold the environments where harm happens to the same standard we hold other places where children spend time.
We regulate playground equipment. We require seatbelts. We inspect school cafeterias. Children spend hours every day in digital spaces that operate with essentially no equivalent safety standards. That’s not inevitable. It’s a choice — and it’s one we can change.
What technology companies need to do.
Platforms are not passive infrastructure. Their design choices (who can contact whom, what content gets amplified, what defaults are set for younger users, and what gets detected and removed) directly shape whether harm is easy or hard to carry out.
Detection and removal of child sexual abuse material (CSAM) should be an industry standard, not voluntary. Tools like PhotoDNA (developed by Microsoft and donated to the National Center for Missing & Exploited Children) and Thorn's Safer have demonstrated that proactive detection at scale is possible. Reports to NCMEC reached 29.3 million in 2021, a 35% increase over the previous year (1). The technology exists, and the largest platforms use it. But in 2022, fewer than 50 companies voluntarily accessed NCMEC's hash-matching database, and over 90% of CSAM reports came from just five platforms (2). Smaller platforms, where organized criminal networks migrate precisely because oversight is weaker, are largely not scanning for this content at all. That's not a technology problem. It's a compliance problem, and one that voluntary frameworks have consistently failed to solve. The choice being made, every day, is to leave children less protected on platforms that could act but aren't required to.
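To make the mechanics concrete, here is a minimal sketch of how hash-based detection works in principle. This is not PhotoDNA itself, which is proprietary and uses perceptual hashes that survive resizing and re-encoding; this simplified version uses an exact cryptographic hash, and the hash set and function names are hypothetical.

```python
import hashlib

# Hypothetical stand-in for a vetted hash list such as NCMEC's database.
# Real systems use perceptual hashes (PhotoDNA-style) so that resized or
# re-encoded copies of a known image still match; SHA-256 only matches
# byte-identical files, which keeps this illustration simple.
KNOWN_HASHES: set[str] = set()  # populated from the vetted list in practice

def scan_upload(file_bytes: bytes) -> bool:
    """Return True if an upload matches a known hash and should be
    blocked and reported (in the U.S., to NCMEC's CyberTipline)."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES
```

Notice how little machinery is involved. The hard parts are access to a vetted hash list and the decision to scan at all, and that decision is exactly where smaller platforms are opting out.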
Age-appropriate design should be the default, not an opt-in. Most platforms ship the same default settings for a 14-year-old and a 40-year-old. That's a design decision, and it can be changed. In practice, age-appropriate design means the following (a brief sketch of what these defaults could look like in code follows the list):
- Privacy-protective settings from day one for users under 18, without requiring families to find and activate them
- Contact controls that limit the ability of unknown adults to find, follow, or message children
- Recommendation algorithms that don't push progressively more extreme or exploitative content to young users
- Transparent safety reporting so parents, researchers, and regulators can actually evaluate whether platforms are doing what they claim
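As a sketch of the first two items: everything below (the setting names, the age cutoff, the AccountDefaults type) is hypothetical, but it shows that protective defaults are a small, deliberate branch in account-creation logic rather than a technical challenge.

```python
from dataclasses import dataclass

@dataclass
class AccountDefaults:
    private_profile: bool
    searchable_by_strangers: bool
    dms_from_unknown_adults: bool
    personalized_recommendations: bool

def defaults_for_age(age: int) -> AccountDefaults:
    """Protective settings are the starting point for minors,
    not options a family has to find and switch on."""
    if age < 18:
        return AccountDefaults(
            private_profile=True,
            searchable_by_strangers=False,
            dms_from_unknown_adults=False,
            personalized_recommendations=False,
        )
    return AccountDefaults(
        private_profile=False,
        searchable_by_strangers=True,
        dms_from_unknown_adults=True,
        personalized_recommendations=True,
    )
```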
The UK’s Age-Appropriate Design Code (Children’s Code) has already shown that regulatory standards shift platform behavior — Instagram, TikTok, and others made concrete changes after it took effect. California’s Age-Appropriate Design Code Act is a domestic model worth expanding nationally (3, 4).
Recommendation algorithms are a structural driver of radicalization. Research shows that exposure to radical content online is among the strongest predictors of behavioral radicalization (5). Algorithms that feed young people increasingly extreme content — because engagement metrics reward intensity — are doing active harm. Redesigning them is technically feasible. It requires will, not invention.
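What would redesigning them look like? One option, sketched below under entirely hypothetical assumptions (the field names, the extremity classifier, and the 0.5 threshold are all invented for illustration), is to stop ranking purely on predicted engagement for minors and demote content flagged as escalating:

```python
def rank_feed(items: list[dict], is_minor: bool) -> list[dict]:
    """Rank candidate feed items.

    Pure engagement ranking sorts on predicted engagement alone, which is
    what rewards escalating intensity. For minors, this version demotes
    items that a (hypothetical) classifier scores as extreme.
    """
    def score(item: dict) -> float:
        s = item["predicted_engagement"]
        if is_minor and item["extremity_score"] > 0.5:
            s -= 100.0  # hard demotion, not a gentle nudge
        return s
    return sorted(items, key=score, reverse=True)
```

The design choice worth noticing: the safety constraint lives in the ranking objective itself, so it protects every minor's feed by default, which is the same structural logic this whole series argues for.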
Transparency and accountability are baseline expectations, not competitive differentiators. Platforms should publicly report on CSAM detection rates, content moderation outcomes, and child safety metrics — with independent auditing, not self-certification.
What governments and policymakers need to do.
Mandate what is currently voluntary. Under current U.S. law, technology companies are not required to actively scan for CSAM; they must report it if they happen to find it, but nothing obligates them to look. That's not a safety standard. Legal analysts have made a credible case for updating Section 230 of the Communications Decency Act to require active detection as a condition of platform immunity (6). European frameworks have gone further and are producing results worth studying.
The Kids Online Safety Act (KOSA), co-sponsored by Senators Blumenthal and Blackburn and reintroduced in May 2025, would require platforms to provide a safe environment for minors by default, allow young people to opt out of addictive design features and personalized algorithmic recommendations, and establish a duty of care to prevent specific harms, including sexual exploitation (8). It passed the Senate 91-3 in 2024 before stalling in the House, despite ostensibly bipartisan support.
Fund primary prevention seriously. Violence Against Women Act (VAWA) reauthorization, STOP grants, and CDC violence prevention funding are all mechanisms for investing in the upstream prevention work described in Part 2. Programs that prevent sexual violence, dating violence, and youth aggression are online safety investments, whether they happen online or not.
Invest in evaluation. Many widely deployed online safety programs have never been rigorously evaluated. Consider i-SAFE, which was used in tens of thousands of U.S. schools before an NIJ-funded evaluation found that it reliably increased knowledge but did not change online behavior. Similarly, programs developed by Common Sense Media reach over 70% of U.S. schools and have evidence supporting gains in knowledge and engagement, but have not yet been tested in a rigorous trial measuring whether they actually change online behavior or risk (though more research is underway). We're making major decisions, in schools, in policy, and in resource allocation, with very little evidence about what actually works.
Take the loneliness epidemic seriously as a prevention investment. The U.S. Surgeon General’s 2023 Advisory on Social Connection documented an epidemic of social disconnection among American youth and called for urgent action (7). Community programs that build youth belonging — such as mentorship, arts, sports leagues, and civic institutions — are also child safety investments.
What parents and communities can do.
Parents have a real role — but they can’t be the last line of defense.
Your relationship with your child is still the most powerful safety tool you have. Kids who believe they can approach a trusted adult without punishment are more likely to disclose harm early, before it escalates. Start conversations earlier than feels necessary. Teach what manipulation and grooming actually look like — not just “don’t talk to strangers.” And if your child is increasingly isolated and turning to online spaces to meet social needs, address the loneliness, not just the screen time.
KidSafeHQ, launched by The Rowan Center, is a good place to start for families looking for accessible resources. Stop It Now! offers a confidential helpline for adults concerned about a child’s safety or their own thoughts toward children — one of the only U.S. resources targeting potential perpetrators directly. NetSmartz and the CDC's HeaRT Parent Training are practical, free tools for families.
Communities matter too. Youth programs that build belonging, bystander cultures that normalize speaking up, community norms that reject exploitation — these create a kind of social fabric that no platform policy or legal framework can replicate.
The bottom line.
Keeping kids safe online requires prevention at every level — individual, family, community, and structural. Individual and community-level strategies reduce the number of people who want to harm children and reduce children’s vulnerability. But they can’t reach every perpetrator in the world, and they can’t undo the structural features of platforms designed without child safety in mind.
Structural strategies — platform accountability, regulatory standards, and industry-embedded safety tools — create safer digital spaces for every child, including those who have never received a single lesson in online safety. But they can't build the healthy relationships, social-emotional skills, and sense of belonging that protect children from the inside out.
We need all of it. And right now, we are dramatically underinvested at every level.
The question isn’t which approach we choose. It’s why — with everything we know — we’re still choosing to do so little of any of it.
This is the work EVOLVE was built for — helping organizations, coalitions, and agencies move from understanding the evidence to building strategies that produce results. If you’re trying to figure out where to start or how to strengthen what you’re already doing, reach out at evolveprevention.com.
Sarah DeGue, PhD is the founder of EVOLVE Violence Prevention. EVOLVE helps nonprofits, government agencies, and coalitions translate violence prevention evidence into effective real-world strategies.
References
1. National Center for Missing & Exploited Children. CyberTipline 2021 Report. NCMEC, 2022.
2. Human Trafficking Front. The Role and Responsibilities of Internet Companies to Implement Effective Prevention Measures. 2023.
3. Information Commissioner’s Office (UK). Age Appropriate Design: A Code of Practice for Online Services (Children’s Code). ICO, 2020.
4. California Legislature. AB-2273: The California Age-Appropriate Design Code Act. Signed September 15, 2022.
5. Wolfowicz, M., Hasisi, B., & Weisburd, D. What are the effects of different elements of media on radicalization outcomes? A systematic review. Campbell Systematic Reviews, 2022. https://doi.org/10.1002/cl2.1244
6. Gurriell, M. Born into Porn But Rescued by Thorn: The Demand for Tech Companies to Scan and Search For Child Sexual Abuse Images. Family Court Review, 59(4), 840–854, 2021.
7. Office of the U.S. Surgeon General. Our Epidemic of Loneliness and Isolation. HHS, 2023.
8. U.S. Senator Richard Blumenthal. Kids Online Safety Act (KOSA). Reintroduced May 2025, 119th Congress.