A Washington State judge has officially barred artificial intelligence-enhanced videos from being used as evidence, deeming the technology's methods "opaque." The ruling is said to be the first of its kind in a United States criminal court.
The technology is novel and relies on opaque methods to convey what the AI model "thinks" should be shown, according to the order signed Friday by King County Superior Court Judge Leroy McCullough. NBC News was the first outlet to report on the ruling.
According to reports, the court determined that admitting AI-enhanced evidence would confuse the issues, muddle eyewitness testimony, and could lead to a time-consuming trial-within-a-trial over the AI model's non-peer-reviewable process.
The ruling was made in the case of Joshua Puloka, a man accused of killing three people after opening fire outside a Seattle-area bar in 2021. His attorneys reportedly sought to introduce machine learning-enhanced cellphone footage as evidence.
The fatal shooting was captured on smartphone video, which Puloka's attorneys had enhanced with AI by hiring a man with a background in creative video production. The King County Prosecutor's Office said forensic specialists who examined the AI-enhanced version found it contained visual information absent from the original footage.
In a February filing in King County Superior Court, prosecutors stated that they could find no prior legal precedent in the United States for the use of such technology in a criminal court. Jonathan Hak, a Canadian lawyer who specializes in image-based evidence domestically and internationally, said this is the only instance he is aware of in which a criminal court has weighed in on the question.
AI Laws in the US
The decision comes at a time when artificial intelligence is developing rapidly, its applications are spreading across social media and political campaigns, and state and federal lawmakers are debating the technology's potential risks.
A recent investigation by the independent voting rights monitor Voting Rights Lab found that several states have erected safeguards against AI ahead of the next round of elections, which the technology's rapid advancement is expected to influence heavily.
According to the Voting Rights Lab, 39 state legislatures were considering more than 100 bills containing provisions intended to curb the potential for AI to generate election disinformation.
The legislative push comes in the wake of multiple prominent instances of computer-generated avatars, voices, and "deepfake" videos being used in political advertisements and campaigns.
AI Safety Testing
AI safety testing, meanwhile, reached a milestone as the United States and the United Kingdom formally agreed to collaborate on testing AI models.
According to sources, Michelle Donelan, the U.K. Secretary of State for Science, Innovation and Technology, and U.S. Secretary of Commerce Gina Raimondo signed the agreement, laying the basis for the two nations' cooperation.
The two countries' AI safety testing bodies will develop a common methodology for AI safety testing, using shared approaches and supporting infrastructure, according to a press release.