Apple Is Working on a Technology That Allows Devices to Scan Photos for Signs of Abuse

iPhone photo gallery (Photo: Pexels/Pixabay)

Apple will launch a technology later this year that scans photos uploaded to iCloud for signs of child sexual abuse. Reports of matching material will ultimately be passed to law enforcement.

Apple to Tackle Child Abuse in Latest Technology

According to TechCrunch, detecting child sexual abuse material on devices is just one of several features the company is working on to protect children who use its products.

Apple is also said to be working on filters that will block sexually explicit pictures sent and received by minors through their iMessage accounts. Another feature is designed to intervene when a user tries to search for child sexual abuse material online.

Apple is the latest tech company working on protecting children online after Google, Twitter, and Facebook teamed up to eradicate child abuse imagery from the internet in 2015.

The company's detection technology, called NeuralHash, will run on the user's device and identify when imagery showing signs of child abuse is uploaded to an iCloud account.

Matthew Green, a cryptography professor at Johns Hopkins University, posted about NeuralHash on his Twitter account. Not everyone was impressed, however, as security experts pointed out that it could violate users' privacy.

The tech giant stated that it would not compromise user privacy because the detection feature uses multiple layers of encryption designed to require several steps before a match reaches final review, according to Reuters.

When Will NeuralHash Launch?

NeuralHash will be included in macOS Monterey and iOS 15, which are scheduled for release in September or October. It works by converting the photos on an Apple device into a string of letters and numbers called a hash.

With an ordinary hash, modifying an image changes that string of letters and numbers and prevents matching. NeuralHash is designed so that identical and visually similar images, such as cropped or edited copies, still match, so no one can lightly edit a photo to undermine the feature.
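Conceptually, that makes NeuralHash a perceptual hash rather than an ordinary cryptographic one. Apple's actual algorithm is built on a neural network and its details are Apple's own, but a minimal "average hash" sketch in Python illustrates the general idea; the file names in the usage comment are hypothetical:

```python
# A minimal illustration of a perceptual "average hash" -- NOT Apple's
# NeuralHash, which uses a neural network -- showing the same idea:
# visually similar images map to similar hash strings, while a
# cryptographic hash would change completely after any edit.
from PIL import Image  # pip install Pillow


def average_hash(path: str) -> str:
    """Shrink an image to an 8x8 grayscale grid and encode each pixel as
    1 if it is brighter than the mean, else 0, giving a 64-bit hex string."""
    pixels = list(Image.open(path).convert("L").resize((8, 8)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return f"{int(bits, 2):016x}"


def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Count differing bits; a small distance means visually similar images."""
    return bin(int(hash_a, 16) ^ int(hash_b, 16)).count("1")


# Example (hypothetical files): a photo and its cropped or re-compressed
# copy should give a small Hamming distance, unrelated photos a large one.
# print(hamming_distance(average_hash("photo.jpg"), average_hash("photo_cropped.jpg")))
```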

Before a picture is uploaded to iCloud, its hash is checked against a database of known child abuse imagery, and matches can ultimately be reported to law enforcement, according to The Verge.

The child abuse imagery catalogued in the database comes from child protection organizations such as the National Center for Missing & Exploited Children (NCMEC).

NeuralHash uses a cryptographic technique to detect when an image's string of numbers and letters matches one stored in NCMEC's database. The user is not alerted while images are being scanned for matches.
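As a rough illustration of that matching step, the logic amounts to checking a hash against a set of known entries. In Apple's described system the comparison happens inside the cryptographic technique mentioned above, so neither side sees non-matching results; the plain set lookup and hash values below are invented placeholders used only to show the idea:

```python
# Simplified sketch of hash matching before upload. The hash values are
# made-up placeholders, and the real comparison is done cryptographically
# so the device cannot read the database and Apple learns nothing about
# photos that do not match.
known_hashes = {"3fa8c1d2e4b59706", "77d0ab1290ffee34", "c5d61f0a8b2e4937"}


def matches_known_database(image_hash: str) -> bool:
    """Return True if this image's hash appears in the known-imagery database,
    in which case the match result would be uploaded to Apple with the photo."""
    return image_hash in known_hashes


print(matches_known_database("3fa8c1d2e4b59706"))  # True
print(matches_known_database("0000000000000000"))  # False
```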

The results will be uploaded to Apple, which uses a technique called threshold secret sharing: it can only decrypt the matching images once the number of matches against the list of known child abuse imagery crosses a set threshold.
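Threshold secret sharing can be pictured with a Shamir-style scheme: a secret (standing in here for a decryption key) is split into shares, and only when enough shares, that is, enough matches, come together can it be reconstructed. The sketch below is an illustration of the concept, not Apple's actual protocol or parameters:

```python
# Shamir-style threshold secret sharing: split a secret into `count`
# shares such that any `threshold` of them reconstruct it, while fewer
# reveal nothing useful. Illustrative only; not Apple's implementation.
import random

PRIME = 2**127 - 1  # a prime large enough for a demo-sized secret


def make_shares(secret: int, threshold: int, count: int) -> list[tuple[int, int]]:
    """Split `secret` into `count` shares; any `threshold` of them suffice."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]

    def poly(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, poly(x)) for x in range(1, count + 1)]


def recover_secret(shares: list[tuple[int, int]]) -> int:
    """Reconstruct the secret by Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


# With a threshold of 3, two shares say nothing about the secret,
# but any three shares reconstruct it exactly.
shares = make_shares(secret=123456789, threshold=3, count=5)
assert recover_secret(shares[:3]) == 123456789
```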

Once Apple can decrypt the matching images and verify the contents, it will disable the user's account and report the user and the photos to NCMEC, which then passes the report to the police.

Users who keep child sexual abuse imagery on their devices could face 15 to 30 years in prison if convicted.

The decision to add on-device detection came after Apple intercepted emails related to a sexual abuse case in 2020.

In this way, the device can automatically detect signs of abuse, and offenders can be caught faster.

Apple added that the process protects user privacy better than scanning files in iCloud would, and that NeuralHash only scans for known child abuse imagery, not new material.

The company said there is a one in one trillion chance per account of NeuralHash flagging it incorrectly, and an appeals process is available if an account is reported by mistake.
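To put that figure in perspective, the expected number of wrongly flagged accounts is simply the stated rate multiplied by the number of accounts; the account count below is a hypothetical round number used purely for illustration, not an Apple statistic:

```python
# Back-of-the-envelope reading of the "one in one trillion" figure.
false_positive_rate = 1e-12            # 1 in 1,000,000,000,000 per account per year (Apple's stated rate)
hypothetical_accounts = 1_000_000_000  # assumed 1 billion iCloud accounts, for illustration only

expected_false_flags_per_year = false_positive_rate * hypothetical_accounts
print(expected_false_flags_per_year)   # 0.001 -> roughly one mistaken flag per 1,000 years at that rate
```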

Related Article: Apple to Detect Sensitive Content on iPhone Photo Libraries-But Security Expert Has a Warning

This article is owned by Tech Times

Written by Sophie Webster

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.