The Apple logo is seen at an Apple Store, as Apple's new 5G iPhone 12 went on sale in Brooklyn, New York, U.S. October 23, 2020.  REUTERS/Brendan McDermid

Apple Inc. said on Thursday that it will introduce a system that checks photos on iPhones in the United States against known images of child sexual abuse before they are uploaded to its iCloud storage service.

Apple said that if enough matching uploads are detected, a human reviewer will assess them and, if warranted, report the user to law enforcement. According to Apple, the system is designed to keep the false-positive rate to about one in a trillion.

With the new system, Apple is trying to satisfy two imperatives: law enforcement's requests for help in preventing child sexual abuse, and the privacy and security assurances that the company has made a core part of its brand.

Apple now joins most other large technology companies, including Alphabet Inc's Google, Facebook Inc and Microsoft Corp, in screening photos against a database of known child sexual abuse material.

“With so many people using Apple products, these new safety measures have lifesaving potential for children who are being enticed online and whose horrific images are being circulated in child sexual exploitation material,” said John Clark, CEO of the National Center for Missing and Exploited Children. “The reality is that privacy and child protection can co-exist.”

Here is how the system works. Officials maintain a database of known images of child sexual abuse and translate them into “hashes,” numerical codes that positively identify an image but cannot be used to reconstruct it.
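The "image to hash" step described above can be sketched in a few lines of Python. This is only an illustration: the real databases use proprietary perceptual hashes rather than a cryptographic hash like SHA-256, and the image bytes here are placeholders.

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    """Map image bytes to a fixed-size digest that identifies the image
    but cannot be reversed to reconstruct it (one-way property)."""
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical known-image corpus: only the digests are retained,
# never the images themselves.
known_images = [b"known-image-1-bytes", b"known-image-2-bytes"]
hash_database = {image_hash(img) for img in known_images}

print(len(hash_database))  # one digest per known image
```

The database of digests can be distributed and matched against without ever exposing the underlying images.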

Apple has built its own version of the database using a technology called “NeuralHash,” which is designed to also catch edited images that are similar to the originals. That database will be stored on users' iPhones.

When a user uploads an image to Apple’s iCloud storage service, the iPhone generates a hash of the image and compares it against the on-device database.
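The on-device check at upload time can be sketched as a simple set lookup. This is a deliberately simplified assumption-laden version: the actual protocol is more elaborate (Apple describes cryptographic techniques that prevent the phone from learning the result of the match), and the function and image names here are hypothetical.

```python
import hashlib

# Hypothetical pre-distributed database of known digests (sketch only).
hash_database = {hashlib.sha256(b"known-image-bytes").hexdigest()}

def check_before_upload(image_bytes: bytes) -> bool:
    """Hash the outgoing photo and report whether it matches
    a known entry in the on-device database."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in hash_database

print(check_before_upload(b"known-image-bytes"))     # True
print(check_before_upload(b"vacation-photo-bytes"))  # False
```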

Apple says the check applies only to photos being uploaded to iCloud, and that a human reviewer verifies that any matches are legitimate before an account is suspended and reported to law enforcement.

Users who believe their account was suspended in error can appeal to have it reinstated, Apple said.

A key feature that sets Apple apart from other technology companies is that it checks images on the phone before they are uploaded, rather than after they arrive on the company’s servers.

Some privacy and security experts expressed concern on Twitter that the system could eventually be expanded to scan phones more broadly for prohibited content or political speech.

“Regardless of what Apple’s long-term plans are, they’ve sent a very clear message. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content,” Matthew Green, a security researcher at Johns Hopkins University, wrote on Twitter.

“Whether they turn out to be right or wrong on that point hardly matters. This will break the dam: governments will demand it from everyone.”
