Using AI to Scan Encrypted Messages Is the Wrong Approach to Child Protection, Cambridge Expert Argues

More governments are resorting to "magical thinking" rather than addressing the root of the child safety issue.

A Cambridge academic argues that governments should not rely on "magical" software solutions in the form of artificial intelligence, especially when it comes to child safety.

According to the expert, child protection should instead work through the authorities and the people who actually know the case, such as police officers, social workers, and teachers.

Governments Should Not Resort to 'Magical Thinking'


Ross Anderson of the University of Cambridge has challenged an earlier discussion paper, "Thoughts on child safety on commodity platforms," by Ian Levy and Crispin Robinson. According to him, governments should look at child safety from the children's point of view rather than relying on the organizations that sell the software.

The two senior GCHQ directors argued in the paper that children's encrypted messaging apps need to be scanned to detect potential threats and unwanted activity that endanger their security and privacy.

Additionally, Levy and Robinson argued that society should not be forced to choose between secure communications and the creation of "safe spaces for child abusers."

Anderson, who also teaches at Edinburgh University, wrote a 19-page rebuttal to the paper. He argued that relying on AI to detect terrorism, child abuse, and other illegal activity would not solve the problem.

For Anderson, "client-side scanning" poses privacy risks for everyone in society, and implementing it would also create problems on the law enforcement side.

"The idea of using artificial intelligence to replace police officers, social workers, and teachers is just the sort of magical thinking that leads to bad policy," he wrote in his paper entitled "Chat control or client protection?"

Furthermore, the Cambridge professor highlighted that scanning messages in encrypted apps would most likely have ripple effects across societal groups and industries.

Language Models Are Flawed

According to Computer Weekly, Levy and Robinson proposed in their paper that language models should run entirely locally on a smartphone or PC to identify clues to grooming and other activities. This concept has made its way into European Union and UK legislation.

However, Anderson countered that natural language processing models are highly prone to errors. The expert stated that the models have error rates in the range of 5 to 10%.

In short, if governments rely on AI for child safety scanning, billions of false alarms would have to be processed worldwide every single day.
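To see how those numbers stack up, here is a rough back-of-the-envelope sketch. Only the 5 to 10% error range comes from the article; the daily message volume is an assumed, hypothetical figure used purely for illustration.

```python
# Back-of-the-envelope estimate of false alarms from client-side scanning.
# The 5-10% error range is taken from the article; the message volume below
# is a hypothetical assumption, not a figure from Anderson's paper.

def daily_false_alarms(messages_per_day: float, false_positive_rate: float) -> float:
    """Innocent messages flagged per day if the scanner misclassifies
    harmless messages at the given false-positive rate."""
    return messages_per_day * false_positive_rate

# Hypothetical: roughly 100 billion messages sent per day across major platforms.
messages_per_day = 100e9

for rate in (0.05, 0.10):  # the 5-10% error range cited in the article
    alarms = daily_false_alarms(messages_per_day, rate)
    print(f"{rate:.0%} error rate -> {alarms / 1e9:.0f} billion false alarms per day")
```

Even at the lower end of that assumed range, the sketch yields several billion flagged messages a day, each of which would need some form of human review.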

Anderson also noted in his paper that tech firms frequently fail to act on abuse complaints because employing human moderators is expensive.

He wrote in the same paper that companies should not ignore reports of abuse from ordinary citizens: if they can respond quickly to the police, they can do the same for members of the public.

This article is owned by Tech Times

Written by Joseph Henry

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.