TECHNOLOGY

Watchdog sounds alarm over ‘astoundingly realistic’ AI images of child sex abuse

Industry is urged to tackle emerging threat at global safety summit
Susie Hargreaves, the head of the Internet Watch Foundation, warned of the potential for criminals to produce unprecedented quantities of life-like child sexual abuse imagery

“Astoundingly realistic” AI images of child sex abuse have been discovered on the internet, prompting a watchdog to raise the alarm about the technology’s “devastating” potential.

The Internet Watch Foundation (IWF), which removes online images and videos of child abuse, said that tackling the emerging threat must be a priority for the global AI safety summit being hosted in Britain in the autumn.

The foundation conducted a five-month review after The Times and other media outlets highlighted the problem.

The images have previously been found on Midjourney, a popular AI programme that creates pictures from simple text prompts. Midjourney was recently used to generate fake images of Donald Trump and the Pope.

Some Midjourney users had been using real pictures to generate the child abuse images.