John Henry Skillern was arrested last Thursday for the possession of child pornography. The 41-year-old restaurant worker was allegedly sending indecent images of children to a friend, but while it was Houston police that obtained a search warrant for Skillern’s tablet and computer and placed him in custody, it was Google that tipped them off to his illegal activities. Skillern was using Gmail to send images of child sexual abuse, each of which had been given a unique digital fingerprint. When those images were sent via Google’s email service, they were identified by the company’s automated systems, allowing it to pass Skillern’s details on to the police via the National Center for Missing and Exploited Children (NCMEC).
David Drummond, Google’s chief lawyer, outlined his company’s automated tagging system last year in The Daily Telegraph. Drummond explained that Google has used the technology since 2008, building up a database that notifies the company when known child porn images are found through its search engine or in the inboxes of its 400 million Gmail users. When Google does identify that such images are being shared, as they allegedly were by Skillern, it informs law enforcement officials.
Other companies have access to similar photo tagging technology, including Microsoft, whose PhotoDNA software can also detect flagged images of abuse. PhotoDNA can calculate a mathematical hash for an image of child sexual abuse that allows it to recognize photos automatically even if they have been altered. The tech is now used by both Twitter and Facebook, after Microsoft donated it to the NCMEC in 2009. Videos, too, have become the focus of such digital fingerprinting programs. Google has its own Video ID software for detecting footage of child sexual abuse, and British company Friend MTS donated its Expose F1 detection program to the International Centre for Missing & Exploited Children (ICMEC) earlier this year.
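The core idea behind systems like PhotoDNA is a "perceptual" hash: unlike a cryptographic hash, which changes completely if a single pixel changes, a perceptual hash is designed so that visually similar images produce similar fingerprints, which can then be compared against a database of known material. PhotoDNA's actual algorithm is proprietary; the sketch below instead uses a simple public technique (an average hash compared by Hamming distance) purely to illustrate the principle that a slightly altered copy of an image can still match its original fingerprint.

```python
# Illustrative sketch of perceptual hashing (average hash) -- NOT
# PhotoDNA's actual algorithm, which is proprietary and far more robust.
# Idea: reduce an image to a bit string (1 = pixel brighter than the
# image's mean), then compare fingerprints by Hamming distance, so a
# lightly altered copy still matches the stored hash of the original.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming_distance(h1, h2):
    """Number of positions where two equal-length hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny synthetic 4x4 "image" and a uniformly brightened copy of it.
original = [[10, 200, 30, 220],
            [15, 210, 25, 215],
            [12, 205, 28, 225],
            [11, 198, 33, 218]]
altered = [[value + 5 for value in row] for row in original]

h_orig = average_hash(original)
h_alt = average_hash(altered)

# A small Hamming distance (below some threshold) flags a likely match
# against the database of known fingerprints.
print(hamming_distance(h_orig, h_alt))  # prints 0: brightening every
# pixel shifts the mean by the same amount, so the bit pattern survives
```

In a real deployment the hash would be computed over a normalized, resized image and checked against millions of stored fingerprints with a tuned distance threshold; the point here is only that fingerprint matching tolerates modest edits in a way that exact byte-for-byte comparison cannot.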
While the technology has helped to halt the activities of people such as John Henry Skillern, the automated image detection systems used by Google and others have some flaws. For one, new pictures won’t be caught by software such as PhotoDNA: only images whose fingerprints are already recorded in the database can be spotted. The systems also raise some privacy questions.
Google says it won’t give out precise technical information on specific searches or cases, but it has been quick to make it clear that its automated detection systems were only designed to trawl for child porn. In a statement to the AFP news agency, the company said “it is important to remember that we only use this technology to identify child sexual abuse imagery.” While Google admits it does scan your personal Gmail inbox for advertising purposes, only specifically tagged pictures will trip Gmail’s systems and get you reported to the police, “not other email content that could be associated with criminal activity (for example using email to plot a burglary).”
Source: The Verge