Tim mentioned a lot of work is already being done here, especially on the detection / trust and safety side at Meta, Google, etc. Include links and info on solutions at all levels [research, policy, law enforcement, NGOs].
- PhotoDNA (Microsoft) / Project Arachnid (Canadian Centre for Child Protection) – robust perceptual hashing tech that turns known CSAM images into compact signatures that survive resizing and re-encoding; platforms compare uploads against shared hash lists to detect and remove previously identified material, and Arachnid crawls the open web for matches and issues removal notices (see the hash-matching sketch after this list)
- Thorn Safer – commercial safety tool that platforms integrate to detect, remove, and report known and newly produced CSAM, plus risky text conversations, at scale
- Big-tech Trust and Safety teams – in-house teams at Meta, Google, Microsoft, etc. that build and run detection pipelines, review flagged content, and file the resulting reports with NCMEC and other hotlines
- CyberTipline (run by NCMEC) – central US reporting hub where the public and platforms send reports of suspected CSAM, grooming, sextortion, etc.; it receives over 20M reports a year and routes them to law enforcement
- Europol EC3 (European Cybercrime Centre) – EU-level hub for cybercrime investigations, including online child sexual exploitation. Within EC3, “Focal Point Twins” is the dedicated team focused on crimes against children and international CSA networks
- International operations and domestic enforcement – Homeland Security Investigations (HSI) and the Internet Crimes Against Children (ICAC) Task Force program run investigations and enforcement at the federal, state, and local levels to catch offenders
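The detection tools in the list above largely rest on hash-list matching. Below is a minimal, illustrative sketch of that idea using the open-source `imagehash` library (pHash) as a stand-in, since PhotoDNA itself is proprietary and access-restricted; `KNOWN_HASHES`, `MATCH_THRESHOLD`, and the file path are hypothetical placeholders, not real data or any vendor's API.

```python
# Conceptual sketch of hash-list matching with an open-source perceptual hash
# (pHash via `imagehash`), standing in for proprietary systems like PhotoDNA.
from PIL import Image
import imagehash

# In production this list would come from vetted sources (e.g. NCMEC / IWF
# hash lists) under strict access controls; here it is an empty placeholder
# set of hex-encoded perceptual hashes.
KNOWN_HASHES = set()

MATCH_THRESHOLD = 8  # max Hamming distance to count as a match (illustrative)


def is_known_match(image_path: str) -> bool:
    """Return True if the image's perceptual hash is near any known hash."""
    candidate = imagehash.phash(Image.open(image_path))
    for hex_hash in KNOWN_HASHES:
        known = imagehash.hex_to_hash(hex_hash)
        # Perceptual hashes tolerate resizing/re-encoding, so compare by
        # Hamming distance rather than exact equality.
        if candidate - known <= MATCH_THRESHOLD:
            return True
    return False


if __name__ == "__main__":
    print(is_known_match("upload.jpg"))  # hypothetical uploaded file
```

In real deployments the hash list is distributed through controlled channels, matching runs against indexed stores rather than a linear scan, and any hit goes through human review before removal and reporting.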