
Apple to Scan iPhones for Child Abuse Content

Apple will be scanning users’ phones for child abuse content, according to a recent announcement from the company.

Stopping the proliferation of child sexual abuse content is a noble goal, no doubt. But this is an obvious nightmare for Apple users living in one of the Five Eyes countries or any other police state.

Matthew Green, a highly respected cryptography professor, covered the news on Twitter. “These tools will allow Apple to scan your iPhone photos for photos that match a specific perceptual hash, and report them to Apple servers if too many appear,” he wrote in one Tweet. “Initially I understand this will be used to perform client-side scanning for cloud-stored photos. Eventually, it could be a key ingredient in adding surveillance to encrypted messaging systems.”

Green's Twitter thread touches on some of the concerns with this system.

“The way Apple is doing this launch, they’re going to start with non-E2E photos that people have already shared with the cloud. So it doesn’t ‘hurt’ anyone’s privacy. But you have to ask why anyone would develop a system like this if scanning E2E photos weren’t the goal.”

Initially, I thought Green’s source had misled him about the scope of Apple’s new initiative. However, someone in the Twitter thread posted a link to Apple’s announcement. It is real. Here is the relevant portion of the announcement:

Another important concern is the spread of Child Sexual Abuse Material (CSAM) online. CSAM refers to content that depicts sexually explicit activities involving a child.

To help address this, new technology in iOS and iPadOS will allow Apple to detect known CSAM images stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC). NCMEC acts as a comprehensive reporting center for CSAM and works in collaboration with law enforcement agencies across the United States.

Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.

Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.

Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.

Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated.

This innovative new technology allows Apple to provide valuable and actionable information to NCMEC and law enforcement regarding the proliferation of known CSAM. And it does so while providing significant privacy benefits over existing techniques since Apple only learns about users’ photos if they have a collection of known CSAM in their iCloud Photos account. Even in these cases, Apple only learns about images that match known CSAM.
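Apple's announcement leans on two cryptographic building blocks: private set intersection and threshold secret sharing. The sketch below illustrates only the threshold idea, using textbook Shamir secret sharing over a prime field: a secret (think of the key material needed to open the safety vouchers) is split so that any `threshold` shares recover it, while fewer reveal essentially nothing. This is a generic illustration of the technique, not Apple's actual construction.

```python
# Toy Shamir secret sharing: split a secret into shares so that any
# `threshold` of them recover it. Illustrative only -- not Apple's protocol.
import random

PRIME = 2**127 - 1  # prime field large enough for a toy secret


def split_secret(secret: int, threshold: int, num_shares: int):
    """Evaluate a random degree-(threshold-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    return [
        (x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
        for x in range(1, num_shares + 1)
    ]


def recover_secret(shares):
    """Lagrange interpolation at x = 0; needs at least `threshold` distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


key = 123456789  # stand-in for the voucher decryption key
shares = split_secret(key, threshold=3, num_shares=5)  # e.g. one share per matching image
assert recover_secret(shares[:3]) == key   # any three shares recover the key
assert recover_secret(shares[2:]) == key   # ...any three at all
assert recover_secret(shares[:2]) != key   # two shares give an (almost surely) useless value
```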

Apple will use machine learning to analyze image attachments and determine if a photo is sexually explicit.

Overall, the hash-matching approach itself seems fairly typical; many social media platforms and cloud storage services match the hashes of user content against hashes of known child sexual abuse content provided by NCMEC and other sources. As far as I know, though, Apple is the first company to build this type of scanning into an operating system, and the difference between Dropbox scanning content after users upload it and Apple scanning images on customers' devices seems significant.
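To make that difference concrete, the client-side flow reduces to something like the sketch below, assuming the device only ever receives an opaque set of hashes. The function name and the threshold value are mine and purely illustrative; Apple's real pipeline layers NeuralHash, database blinding, and encrypted safety vouchers on top of this.

```python
# Bare-bones sketch of client-side matching before upload. The names and the
# threshold are illustrative placeholders, not Apple's published values.

def scan_before_upload(photo_hashes, database_hashes, threshold=10):
    """Return the hashes that matched and whether the reporting threshold was crossed.

    photo_hashes    -- perceptual hashes of the photos queued for iCloud upload
    database_hashes -- opaque set of hashes supplied by the vendor / NCMEC
    threshold       -- number of matches before an account becomes reviewable
    """
    matches = [h for h in photo_hashes if h in database_hashes]
    return matches, len(matches) >= threshold

# Note what the device cannot do here: it has no way to verify what any of the
# supplied hashes actually depict. A hash added to the database for political
# reasons is matched exactly as readily as a hash of known CSAM.
```

That last comment is the property the rest of this post hinges on.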

My concern is roughly the same as the one voiced by Green, although his example involves hash collisions and mine involves government intervention:

“Imagine someone sends you a perfectly harmless political media file that you share with a friend. But that file shares a hash with some known child porn file?”

Hash collisions seem like a valid concern. But I think authoritarian governments are much more likely to arbitrarily force hashes into the database that have nothing to do with child abuse content. One likely example is content shared by political dissidents. Similarly, the feds could try to match the hashes of pictures of drugs posted on the internet against content on iPhones in an attempt to identify drug dealers or users. And Apple is not the defender of user data that it wants you to think it is.
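To see why hash collisions are at least plausible, here is a toy perceptual hash (a difference hash, or dHash). It is not NeuralHash; it is just the simplest example of a hash designed so that visually similar images produce similar values, which also means the output space is small and matching has to tolerate near-misses.

```python
# Toy perceptual hash (dHash): unlike a cryptographic hash, visually similar
# images are *supposed* to hash to similar values, so matching survives
# resizing and recompression. Requires Pillow (pip install Pillow).
from PIL import Image


def dhash(path: str, hash_size: int = 8) -> int:
    """Compute a 64-bit difference hash of an image file."""
    # Shrink to (hash_size + 1) x hash_size grayscale pixels.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())

    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; small distances are treated as 'the same image'."""
    return bin(a ^ b).count("1")

# A re-encoded or lightly edited copy of a photo lands within a few bits of the
# original -- which is the point, but it is also why two unrelated images can
# end up close enough to be treated as a match.
# print(hamming_distance(dhash("original.jpg"), dhash("resized_copy.jpg")))
```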

Apple has shown its willingness to bend for the FBI. This might come as a surprise to some readers due to the reputation Apple has cultivated after publicly denying the FBI access to a locked iPhone used by a suspect in the 2015 San Bernardino terror attack. The FBI admitted they did not need Apple to help them access the iPhone in question and criticized Apple’s position as a decision based on “concern for its business model and brand marketing strategy.”

Apple worked with law enforcement across the globe even before this high-profile event, though it is possible the FBI's 2015 request for GovOS was the first of its kind.

Apple cooperated with law enforcement to the extent required by law even before the 2015 incident.

Apple also dropped its plan for end-to-end encrypted iCloud backups after the FBI complained.

Apple rejected the FBI's demands in 2015 and made good points in its court filings and public letters. Its counter-arguments included essentially everything that I, and likely many readers, agree with, including the dangers of setting such a precedent:

The implications of the government’s demands are chilling. If the government can use the All Writs Act to make it easier to unlock your iPhone, it would have the power to reach into anyone’s device to capture their data. The government could extend this breach of privacy and demand that Apple build surveillance software to intercept your messages, access your health records or financial data, track your location, or even access your phone’s microphone or camera without your knowledge.

It would seem as if the same argument used by Tim Cook in response to the FBI in 2015 applies to Apple and law enforcement in 2021.

I do not want to fearmonger. But I think governments and law enforcement agencies take advantage of anything remotely exploitable, and this system seems prone to exploitation. Of course, it is possible that Apple somehow implements these features in a way that prevents the government from eventually using the software to hunt other types of criminals.


If you have nothing to hide...

12 Comments
It's Called We Engage In A Mild Amount of Tomfoolery
5f64aa09
c74e3ac0 Sat, Aug 7, 2021

Gotta pluck those rotten apples off the tree, legally this has nothing to do with hacking or slanging… so what Apple and FBI, it’s trivial

ca1edec3
afc4dd20 Sat, Aug 7, 2021

I think i’m going to join the amish.

23d94be0
de0cb1c0 Sun, Aug 8, 2021

Ya man, the cart was an ingenious invention, I’m not knocking that and the intrinsic beauty in that is that you can push that cart around all day…

-Silent

a733f4ce
01c6f970 Sun, Aug 8, 2021

Ironically, Mr. Bertino was arrested later that day for indecent exposure and will now spend the rest of his life as a registered sex offender. When asked why he thought it might be a good idea to walk around in public completely nude, he answered…

e7883817
818ff3e0 Mon, Aug 9, 2021

If you haven’t ever run around the block naked once, have you really lived?

-Silent

d3912738
db3ee3a0 Tue, Aug 10, 2021

this will be so heavily abused, another foot in the door

506b713b
9902c620 Sat, Aug 14, 2021

worst timeline

0d5ab45a
a0912730 Sat, Aug 14, 2021

The ability to perform this function is the real goal. A simple policy change about what they are scanning for is the danger. They use noble causes as justification since no one would disagree with the argument.

3e94b78d
41c56630 Mon, Aug 16, 2021

next thing you know youll need to ask permission from the fbi or government if you can even have sex . the worlds gone to shit. so am i going to get in trouble if i have a photo of my own penis on my phone lol or my friends dick ? i am confused .

9e3f503d
1baaeb80 Fri, Aug 20, 2021

Well, darn, time for me to get an android phone and modify it to make it anonymous/private. I don’t mess with child porn but I like thc and steroids and remaining anonymous/private. I think it’s good that they’re trying to reduce child abuse content… those children are victims of a violent crime. I just have an issue, as suggested in the article, with those methods then being used for targeting nonviolent “crime”. I imagine its a slippery slope

4673780a
ee38d700 Fri, Sep 17, 2021

“those children are victims of a violent crime.”
No, most likely in most cases this is not true.

New comments are disabled after ten days in an attempt to limit spam.