Microsoft has reportedly banned U.S. police departments from using Azure OpenAI Service for artificial intelligence-powered facial recognition.
The terms of service for Azure OpenAI Service were amended on Wednesday to more clearly forbid integrations with the service from being used "by or for" U.S. police departments for facial recognition.
This includes integrations with OpenAI's existing and possibly future image-analyzing models.
A new bullet point specifically prohibits the use of real-time facial recognition technology on mobile cameras, such as dashcams and body cameras, to identify a person in uncontrolled, "in the wild" environments.
The policy changes come one week after Axon, a maker of technology and weapons for law enforcement and the military, unveiled a new product that uses OpenAI's GPT-4 generative text model to summarize audio from body cameras.
Critics were quick to point out the potential hazards, such as hallucinations and the racial biases baked into the training data. This is especially troubling because people of color are far more likely than their white counterparts to be stopped by police.
According to TechCrunch, even today's top generative AI models fabricate facts.
It is unclear whether Axon was using Azure OpenAI Service to run GPT-4 and, if so, whether the policy was updated in response to Axon's product launch.
OpenAI had previously restricted the use of its models for facial recognition through its own APIs.
Microsoft's Partial Ban
Microsoft has some wiggle room under the new terms, as police in the United States are the only authorities subject to the total prohibition on using Azure OpenAI Service.
Furthermore, the ban does not cover facial recognition performed with stationary cameras in controlled settings, such as a back office.
In February, Microsoft brought Azure OpenAI Service to its Azure Government platform, adding more compliance and management tools aimed at government organizations such as law enforcement agencies.
AI-Powered Cameras in Other Countries
Some countries have already begun deploying AI-capable cameras. As recently as March, French police trialed AI-powered surveillance equipment to prepare for the cameras' planned deployment during the 2024 Paris Olympics.
Paris police installed six AI-equipped cameras across the Accor Arena to monitor crowd movements and identify odd or risky activity.
The primary objective of the experiment is to prepare for the upcoming Paris Olympics, which is expected to pose a significant security challenge for law enforcement in the coming months. More than 30,000 officers are expected to be on duty to protect the opening ceremony.
AI-powered cameras are just one of several new security measures being implemented for the 2024 Olympics. Locals who live near Olympic sites would reportedly also need to apply for a QR code to pass through security checkpoints.
Residents of the restricted areas would also need to register visitors who want to watch the action from their rooftop, houseboat, balcony, or window.