Australia's eSafety Commissioner Demands Tech Companies Share Strategies to Combat Child Abuse Materials on Their Platforms

Australia's eSafety commissioner wants tech companies to reveal how they identify and report child abuse materials.

On Tuesday, Aug. 30, Australia's eSafety Commissioner Julie Inman Grant asked tech giants such as Apple, Microsoft, and Meta to share their strategies for identifying, blocking, and reporting child exploitation images and videos posted on their platforms.

Australia's Fight Against Child Exploitation

According to The Sydney Morning Herald, Inman Grant has sent letters to the tech giants asking about their procedures for protecting children online.

Aside from the major companies, Inman Grant has also asked Omegle about its strategy for preventing child exploitation, especially since pedophiles use the platform to contact minors.

The tech giants must disclose the measures they take to detect and remove child exploitation materials within 28 days. Companies that fail to comply face fines of up to $383,000 per day.

According to Bloomberg, this marks the first time the Australian government has used its powers under the Online Safety Act to compel this kind of reporting from social media platforms.

Also Read: Apple Puts a Hold on the Roll Out of CSAM After Receiving Backlash Due to User Privacy Concerns

Filling in the Gap

Inman Grant pointed out in her letters that the tech giants are not doing enough to protect children from online predators.

According to The Sydney Morning Herald, Apple has roughly 1.8 billion devices worldwide yet reported only 160 cases, while Facebook, with 2.9 billion users, made 22 million reports.

The tech giants have also held back from improving detection features on their platforms, citing privacy concerns.

Apple and Meta have introduced end-to-end encryption on iMessage, Messenger, and WhatsApp, which prevents the companies from detecting or retrieving the contents of messages.

Failed Efforts to Fight Illegal Materials on Platforms

Although the tech companies appear to be doing the bare minimum when it comes to protecting minors online, there have been efforts to detect the predators who exploit children and bring them to justice.

In 2021, Apple introduced a feature to detect child sexual abuse material, or CSAM, by scanning images saved to iCloud against a database of known abuse imagery. If enough matching photos were detected, the account would be flagged and reported so that its owner could be investigated by the authorities.
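
For readers curious how that kind of scanning works in broad strokes, the sketch below is a deliberately simplified, hypothetical illustration of matching image hashes against a list of known abuse imagery and flagging an account only past a threshold. It is not Apple's implementation: Apple's design used a perceptual "NeuralHash" and cryptographic safeguards, and every name and value here is an assumption for illustration only.

```python
import hashlib

# Hypothetical illustration only: a toy hash-matching pipeline, not Apple's
# actual CSAM detection system.

# Assumed inputs: hashes of known abuse imagery supplied by a clearinghouse,
# and the user's uploaded photos as raw bytes. Both are placeholders.
KNOWN_ABUSE_HASHES = {"<hash-1>", "<hash-2>"}
MATCH_THRESHOLD = 30  # flag an account only after this many matches


def image_hash(image_bytes: bytes) -> str:
    """Toy stand-in for a perceptual hash. Real systems do not use SHA-256,
    which cannot match visually similar but non-identical images."""
    return hashlib.sha256(image_bytes).hexdigest()


def count_matches(uploaded_images: list[bytes]) -> int:
    """Count how many uploaded images match the known-hash set."""
    return sum(1 for img in uploaded_images
               if image_hash(img) in KNOWN_ABUSE_HASHES)


def should_flag_account(uploaded_images: list[bytes]) -> bool:
    """Flag for human review and reporting only once the threshold is met."""
    return count_matches(uploaded_images) >= MATCH_THRESHOLD
```

The threshold exists so that a single false match does not trigger a report, one of the safeguards Apple also emphasized in its own, far more elaborate design.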

The feature sounded good on paper, but it immediately drew backlash from the general public and from security experts, who cited privacy concerns. Due to the heavy criticism, Apple was forced to put the feature's rollout on hold.

In the same year, Facebook added new tools to help prevent child exploitation on the platform, following reports that it hosted more child sexual abuse material than any other site in 2019.

According to Forbes, Facebook introduced a pop-up that is shown to users searching for terms connected to child exploitation.

The pop-up message also includes links to offender diversion organizations and information about the legal consequences of viewing illegal materials online.

A second tool informs those who have shared exploitative child content about the harm it can cause and warns them about the legal consequences.

However, despite Meta's efforts to remove child abuse materials from the platform, a former Facebook employee told the US Securities and Exchange Commission (SEC) that the company's strategies are inadequate and under-resourced.

According to the BBC, the whistleblower also said that Facebook moderators are not properly trained and are ill-prepared to handle the removal of child abuse materials.

Related Article: Apple's CSAM Catches San Francisco Doctor With Child Exploitative Images on His iCloud

This article is owned by Tech Times

Written by Sophie Webster
