The Metropolitan Police is exploring the use of artificial intelligence to speed up the identification of child sexual abuse victims and reduce the time officers spend reviewing harrowing imagery.
Met outlines plan to use AI
The force has confirmed it is looking at deploying AI tools to help sift, grade and prioritise vast volumes of online child sexual abuse material seized in investigations. The technology would be used to categorise images by severity and flag potential new victims so that specialist officers can intervene more quickly.
In an official statement, the Met said it is “exploring the use of artificial intelligence to support the rapid grading and triage of child sexual abuse imagery,” with the goal of allowing investigators to “identify and safeguard victims more quickly, while significantly reducing the need for officers and staff to manually review deeply distressing material.” According to the force, over the past year it has investigated more than 5,400 child sexual abuse offences, with over 1,300 children needing safeguarding in connection with online child sexual abuse and exploitation crimes.
The Met believes AI could “significantly shorten the time between detection and intervention” by rapidly analysing large volumes of material to highlight content that may relate to previously unknown victims, enabling officers to prioritise cases and “focus human expertise where it is needed most.”
How the technology would work
Traditionally, officers can spend hours manually going through images and videos seized from suspects to work out whether they relate to known cases or point to new victims in danger. Each file must then be graded into one of three categories, A, B or C, with category A representing the most severe forms of abuse, before investigators can decide how urgently to act.
Under the plans being considered, AI systems would be trained to recognise patterns, contexts and features in this material, then help classify it and surface the most urgent cases first. In practice, the technology would act as a triage layer, flagging the highest-priority content before human officers review the most critical evidence.
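The triage logic the force describes, ranking seized material so that flagged potential new victims and the most severe grades reach reviewers first, can be illustrated with a minimal sketch. This is purely hypothetical: the field names, grades-as-input and ordering rule are assumptions for illustration, not details of any system the Met is actually testing.

```python
# Hypothetical triage ordering: a classifier has already assigned each
# seized file a severity grade (A = most severe, per the article) and a
# flag for whether it may relate to a previously unknown victim.
SEVERITY_RANK = {"A": 0, "B": 1, "C": 2}

def triage(files):
    """Order files for human review: possible new victims first,
    then by grade severity (A before B before C)."""
    return sorted(
        files,
        key=lambda f: (not f["new_victim"], SEVERITY_RANK[f["grade"]]),
    )

queue = triage([
    {"id": "f1", "grade": "C", "new_victim": False},
    {"id": "f2", "grade": "A", "new_victim": False},
    {"id": "f3", "grade": "B", "new_victim": True},
])
# f3 (possible new victim) surfaces first, then f2 (category A), then f1
```

The ordering here is a design assumption: it prioritises safeguarding (unknown victims) above evidential severity, matching the article's emphasis on shortening "the time between detection and intervention".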
The Met is already testing how such tools could work across the force and has confirmed it is in talks “with multiple companies about the tech.” The work would sit alongside another new system that allows officers to review and risk‑assess 641,000 messages in just 35 minutes, showing how automation is increasingly being used to handle the scale of online abuse cases.
Legal safeguards and human oversight
Senior officers are keen to stress that any use of AI in this area will operate within strict boundaries. The force says that “any use of artificial intelligence would operate within strict legal, ethical and safeguarding frameworks, with specialist officers retaining decision‑making responsibility and human oversight central at every stage.”
That means AI tools would not make final judgments about guilt, innocence or risk on their own. Instead, they would function as decision‑support systems, helping experienced investigators to find crucial evidence faster while still leaving ultimate decisions in human hands.
The move comes against the backdrop of a broader UK‑wide push to clamp down on the misuse of AI in sexual offences involving children. The government has already announced plans to make it illegal to possess, create or distribute AI tools designed to generate child sexual abuse material, and to criminalise so‑called “AI paedophile manuals” that instruct people how to use artificial intelligence to abuse children.
Rising online abuse and pressure on police
Online child sexual abuse is one of the fastest‑growing types of crime in the UK, and the Met now manages more than 12% of all such cases nationally. Official figures show online child sexual abuse and exploitation offences handled by the force have risen by around 25% year on year, piling pressure on specialist teams and increasing the emotional toll on officers who must review graphic material as part of their work.
Experts and charities have warned that the spread of generative AI has created new risks, including tools that can fabricate abuse images by "nudifying" real photos of children or overlaying their faces onto existing sexual content. This has prompted urgent calls for both tougher laws and more advanced detection tools to prevent images circulating online and to find victims sooner.
In this context, the Met’s interest in AI is being framed as a response to scale and speed. The force argues that without automation, it will become increasingly difficult to keep up with the volume of material and to ensure that children are identified and protected as early as possible.
Protecting officers from trauma
As well as improving victim identification, the Met emphasises that AI could help protect its own staff from repeated exposure to disturbing material. Under the current system, investigators often have to watch, listen to and categorise large numbers of abusive images and videos, a process that can have a serious psychological impact over time.
By allowing AI systems to carry out the first pass over this content, the force hopes to reduce the amount of material that individual officers must personally view, reserving their time and attention for the most critical evidence. The Met says AI tools could therefore play a dual role: “accelerating safeguarding action” while “reducing the repeated exposure of officers and staff to traumatic content.”
£10m investment in child‑focused facilities
The AI initiative is part of a wider £10 million investment programme aimed at improving outcomes for child victims of abuse, both online and offline. Alongside technology, the Met is funding the rollout of new, victim‑dedicated Visual Recorded Interview (VRI) suites across London, designed to make it easier and less traumatic for children to give evidence during criminal investigations.
Six sites are already complete, with Plumstead Police Station chosen as the pilot, and more are planned. The new suites include adjustable furniture for younger children, larger spaces for drawing and communication aids, improved educational and age‑appropriate resources, and calmer, more welcoming environments. The design reflects detailed feedback from child victims, families and frontline officers and is intended to support children of all ages, including those who are disabled or neurodiverse.
These changes form part of the Met’s wider Children’s Strategy, which aims to embed a “child‑first” approach in policing. As part of that strategy, the force has already trained 23,000 officers and staff in trauma‑informed communication with children, expanded specialist child exploitation teams by 72 officers and rolled out Local Missing Hubs across London.
National and global context
The Met’s plans sit within a broader national and international effort to harness AI against child sexual abuse while limiting its misuse. Organisations such as child‑protection charity Thorn have developed AI systems that can detect both known and new child sexual abuse material online, with one tool, Safer Predict, having classified millions of files as potential abuse content. Europol has also highlighted how AI‑generated abuse material is complicating investigations, reporting multi‑country operations targeting networks that use generative tools to create exploitative images of children.
In the UK, new laws are being drafted to close gaps in existing offences and give police clearer powers to target those who use AI to create or share child abuse images. Ministers have described the rise in AI‑generated child sexual abuse material as “deeply disturbing” and have promised that Britain will “lead the way” in protecting children from online predators by criminalising both the tools and the manuals that explain how to misuse them.
Balancing safety, privacy and ethics
Civil liberties groups and technology experts are likely to scrutinise the Met’s AI plans closely, raising questions about data protection, potential bias in algorithms and the risk of over‑reliance on automated systems in highly sensitive cases. The force’s insistence on “strict legal, ethical and safeguarding frameworks” and “human oversight at every stage” is seen as an attempt to address some of those concerns from the outset.
For now, the Met is clear that AI will not replace specialist officers, but will be used to support them in handling an ever‑growing caseload. With online child sexual abuse offences increasing, and with generative AI adding a new layer of complexity to those crimes, the force argues that advanced tools are necessary if it is to safeguard children effectively and shield its own staff from harm.
As testing continues and discussions with technology companies progress, any deployment of AI in child abuse investigations is likely to become a key test of how far law enforcement in the UK can use artificial intelligence to prevent harm, without compromising the rights and safety of the very children it is trying to protect.