Clearview AI Ordered to Remove All Facial Templates Belonging to People Living in Australia

Facial recognition (Photo: Unsplash / Christian Wiediger)

Clearview AI, a facial recognition company, has been found to have breached Australia's national privacy law by covertly collecting the facial biometrics of Australians and incorporating them into its identity matching service.

Clearview AI sells its identity matching service to law enforcement agencies and other organizations.

Clearview AI Ordered to Remove Data of Australians

Angelene Falk, Australia's Information Commissioner and Privacy Commissioner, said that the facial recognition company's tool breached the country's Privacy Act 1988.

Clearview AI collected the sensitive information of Australians without consent. It was also found to have collected the personal information of Australians by unfair means, according to TechCrunch.

The company also failed to take reasonable steps to notify people that their personal information was being collected, and it did not take reasonable steps to ensure that the personal information it disclosed was accurate.

Clearview AI also did not implement practices, systems, or procedures to ensure that it complied with the Australian Privacy Principles.

Due to these violations, the Australian regulator has ordered the company to stop collecting facial images and biometric templates from people in Australia and to destroy all of the existing images and templates it holds.

The Office of the Australian Information Commissioner (OAIC) investigated Clearview AI's practices together with the United Kingdom's data protection agency, the Information Commissioner's Office (ICO).

However, the ICO has not yet announced any conclusions from the investigation. The agency said that it is considering its next steps and any regulatory action it may need to take under the United Kingdom's data protection laws.

People in the UK can expect the regulator to find that Clearview AI violated the country's privacy law as well, given that the ICO has been scrutinizing the lawfulness of adtech for months.

Other European regulators have already sanctioned users of Clearview AI, according to the BBC.

OAIC vs. Clearview AI

The OAIC posted its comments regarding the company, and the agency did not mince its words, according to NBC News.

Falk stated that the collection of such sensitive information is intrusive and unfair. It carries a risk of harm, especially to vulnerable groups like victims of crime and children, whose images can easily be searched in Clearview AI's database.

Falk added that another risk it poses is that biometric identity information cannot be cancelled, and it can be replicated and used for identity theft.

People featured in the database may also be at risk of misidentification, and these practices fall well short of people's expectations for the protection of their information.

The OAIC also found that the privacy impacts of the biometric system were neither necessary, legitimate, nor proportionate.

Falk said that when people go online and use social media, they do not expect their facial images to be taken and collected without their knowledge.

They do not expect a commercial entity to collect those images in order to create biometric templates for unrelated identification purposes.

Falk added that the scraping of facial images may affect the personal freedoms of Australians who perceive themselves to be under surveillance.

This article is owned by Tech Times

Written by Sophie Webster

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.