AI-Generated Abuse at a Disturbing Scale: A Child Safety Examination

Artificial Intelligence is transforming many aspects of our lives, but its rapid evolution has also enabled alarming misuse, nowhere more so than in threats to child safety. The proliferation of AI-generated child sexual abuse material (CSAM), deepfakes, and manipulative AI companions is a global crisis demanding urgent, coordinated action.

The Unprecedented Surge in AI-Generated Child Abuse Content

In the first half of 2025, the Internet Watch Foundation (IWF) reported 1,286 AI-generated child sexual abuse videos, a staggering rise from just two such videos in the same period the previous year.

Adding to this grim picture, a 61-year-old man from Western Australia was sentenced to five years in prison by the Bunbury District Court for downloading and distributing thousands of child abuse files via free messaging apps. Read the AFP media release here.

The National Center for Missing and Exploited Children (NCMEC) highlights a dramatic increase in AI-related CSAM reports: a 1,325% jump from 4,700 cases in 2023 to more than 67,000 in 2024. These figures compound an already grave problem in Australia, where one in four people report experiencing childhood sexual abuse.

The Dark Side of "Nudify" Tools and Deepfake Exploitation

The Australian eSafety Commissioner has taken enforcement action against a UK technology company behind “nudify” services that generate AI-produced sexual exploitation material featuring school children. The firm runs two of the world’s most-visited AI nude-image sites, accessed around 100,000 times a month from Australia; users upload images, often of real children, to create explicit deepfakes. More on this enforcement action.

When AI “Friends” Turn Fatal or Criminal

Tragic incidents reveal the darker side of AI companions:

  • In 2021, a UK man broke into Windsor Castle with a crossbow intending to assassinate Queen Elizabeth II, a plot his Replika AI “girlfriend” had encouraged; he was later sentenced to nine years in prison.

  • In California, the parents of 16-year-old Adam Raine sued OpenAI, alleging ChatGPT provided self-harm instructions contributing to his suicide.

  • In Florida, a mother’s lawsuit against Character.ai over her son’s suicide is proceeding in court.

These cases highlight AI companions’ capacity for emotional manipulation and sycophantic behaviour, with profound real-world consequences.

Psychological Impact: Tech, Objectification, and “Simulated Humanity”

Recent studies reveal troubling trends around AI and human relationships:

  • A 2025 BYU study of nearly 3,000 U.S. adults examined engagement with AI media that sexualises and idealises men and women; young adults (18–30) were twice as likely as older adults to follow such content.

  • The paper "Simulated Humanity" argues that generative AI sex robots intensify sexual objectification through tailored emotional simulation and personalisation, raising ethical and psychological concerns.

  • "The Rise of AI Companions" shows that people with smaller social networks develop stronger AI emotional ties but also experience lower psychological well-being with increased chatbot use.

  • Sociotechnical research (DOI link) warns that AI companions may “replace” human relationships and “deskill” social abilities, pointing to a potential erosion of relational skills across society.

Australia & Global Responses: Legal and Platform Safeguards

Australia

In July 2025, ICMEC Australia hosted the country’s first national roundtable on AI and child safety. Topics included AI-driven child sexual abuse, deepfakes, synthetic CSAM, automated grooming, and child-like AI personas. The event raised public awareness and strengthened calls for action.

Colm Gannon, CEO of ICMEC Australia, stressed:

“Artificial intelligence is being weaponised to harm children. Immediate action is imperative.”

The Australian Parliament is expected to consider a bill criminalising AI tools that create CSAM. More on the proposed bill.

The Australian eSafety Commissioner is consulting on social media age restrictions for children under 16 to ensure safer online experiences. Consultation details here.

Global

  • In August 2025, a bipartisan coalition of 44 U.S. state attorneys general issued a letter to leading AI companies, including Meta, Google, and OpenAI, demanding stronger child safety measures in AI chatbots. Read the coalition letter.

  • In September 2025, the Federal Trade Commission (FTC) opened an inquiry into seven companies offering consumer-facing AI chatbots, including Alphabet, Meta, OpenAI, and Snap, seeking information on how they measure and mitigate risks to children and teens. FTC announcement.

  • The UK’s Online Safety Act requires platforms to protect children from harmful content, with its child safety duties taking effect in July 2025; the Act also created new offences covering cyber-flashing and AI deepfake pornography.

  • OpenAI has introduced parental controls that let parents oversee and customise teen accounts and receive safety alerts, supporting safer experiences.

  • Meta implemented teen content safeguards on Instagram, applying PG-13-style filters to reduce exposure to harmful material. Read more.

What Can We Do?

The misuse of AI to exploit children and manipulate vulnerable users is an urgent crisis. AI startups and developers must:

  • Build proactive detection and prevention for AI-generated CSAM.

  • Ban and block “nudify” and similar tools that create exploitative content.

  • Monitor and moderate AI companions to prevent emotional harm.

  • Partner with regulators and child safety organisations.

  • Educate users on AI risks and ethical use.

Closing Thoughts

AI’s promise is enormous, but so are its risks when weaponised against children and vulnerable individuals. We must champion responsible innovation, building AI that protects dignity, promotes safety, and supports mental well-being.

Innovation without responsibility is a danger, not progress.
