As Mark Twain said…
In 1996 Ronald Riva was arrested in California for child molestation. This was not the result of any high-tech sleuthing. It was down to a smart and sensitive mother, who picked up on her 10-year-old daughter’s distress shortly after the girl returned from a “slumber party” at a friend’s house, the Riva family home. Mom eventually managed to get her daughter to tell her what had happened.
When the cops went to Riva’s place they found a computer and all the equipment necessary to do “live photo-shoots”. The police had stumbled on a paedophile network known as “The Orchid Club”. It turned out Ian Baldock, a Brit, was a member. This intelligence led the UK police to mount Operation Cathedral, which smashed another, much larger paedophile ring, known to its members as “The Wonderland Club”. Cathedral was the first-ever large-scale (and highly synchronised) international police operation against online child sex offenders.
In the UK, 750,000 still images were seized, along with around 1,800 videos. At the time these were thought to be huge numbers. Today…
Among other things, as with the Orchid Club, members of Wonderland often filmed child sexual abuse in which they themselves were the active perpetrator, creating digital records of the abuse to swap with each other. The more graphic and extreme the acts shown, combined with the frequency of your postings, the greater your status within the group. Such was the depravity involved that, when it all came to light, many people dismissed the (comparatively) small number of arrests as proof of something that could not possibly be very widespread, nor was ever likely to be.
Even now I can hear employees of, or investors in, the UK’s then still-emerging internet industry dismissively saying to me
“Where’s your evidence this whole Orchid-Wonderland thing is anything other than a freakish one-off carried out by a tiny group of seriously disturbed individuals?”
Or, as I prefer to put it, what they were really saying was
“It’s hard for me to see a problem when my wages, share options, bonuses and other rewards depend on me not seeing it.”
(hardly an original thought or observation but the words are mine - I think)
Skipping swiftly past the peer-to-peer era: there was a similar reaction at the beginning of the “modern era” (post-Skype and so on), when the commercial sexual abuse of children via livestreaming started happening in the early 2000s, and in particular from 2010 onwards.
I am reminded of all this having today read a report from the Internet Watch Foundation (IWF).
Here’s the headline:
AI-generated child sexual abuse videos surging online
It seems that in the first six months of this year the IWF verified 1,286 AI-made videos of child sexual abuse. Measured by the URLs involved, this represented an increase of over 400% on the equivalent period last year. I’m not sure how meaningful percentages are when we’re dealing with small numbers in such a new area, but as harbingers… we should worry.
I get that history is not always bound to repeat itself but, to quote Mark Twain,
“very often it rhymes”
Hyper-realistic
Obviously, I have not seen any AI-generated child sexual abuse material. But I have seen a growing number of AI-created videos. Many are utterly indistinguishable from real filmed footage of real people doing real things. Technically—and again, obviously—these are “pseudo-images.” For very good reasons, they are illegal in the UK and in most jurisdictions.
Hanging in the balance?
While I’m firmly of the view that AI holds huge potential to support children’s health, education, safety and fun, I am just as certain that, without strong legal compulsion (and yes, that should probably be in oversized capital letters), the AI developers will give little or no thought to how their models may do one or both of the following:
Absorb and regurgitate child sexual abuse material—either directly or through indirect, seemingly unrelated prompts;
Facilitate the generation of new child sexual abuse material, based on what their models have “learned” from ingesting existing child sexual abuse material and from ostensibly legal adult porn.
That last point, incidentally, is not speculative. It’s pretty much exactly what the Internet Watch Foundation report is implicitly warning us about.
Remember the refrain in the early days of the internet?
“Trust us. Look—we wear jeans and T-shirts. We are highly educated, young, super-smart, we live in California or somewhere like it. We’re the good guys, just trying to make the world a better place. Don’t hem us in with regulation. Don’t fence off innovation with the barbed wire of laws and bureaucracy.”
We all know how that panned out.
We all also probably strongly suspect it didn’t have to be that way, even if doing it differently might have meant the internet developing a tad more slowly.