AI-Powered Deception Detectors Still Premature for Practical Use: Experts

Can AI really identify lies?

AI-powered deception detectors have been touted as potential tools to identify lies, but experts caution against premature adoption.

A research team from the Universities of Marburg and Würzburg emphasizes that while AI holds promise for understanding deception, it is still not ready for real-world applications.

An AI (artificial intelligence) logo is pictured at the Mobile World Congress (MWC), the telecom industry's biggest annual gathering, in Barcelona on February 27, 2024. JOSEP LAGO/AFP via Getty Images

Can AI Really Identify Lies?

The experts underscore that identifying lies is complex and challenging. Scientists have long sought reliable methods for detecting deception, and AI has emerged as a new frontier in this pursuit.

There is considerable optimism about using artificial intelligence for deception detection, for example to identify travelers with suspicious intentions at EU borders in Hungary, Greece, and Lithuania.

The cautionary stance on AI's current readiness for lie detection comes from researchers at the Universities of Marburg and Würzburg. They view AI as a valuable tool for foundational research into the psychological mechanisms behind deception but urge skepticism about its application in practical settings.

Kristina Suchotzki and Matthias Gamer, professors at the respective universities, led the study published in Trends in Cognitive Sciences. Suchotzki specializes in lie detection research, while Gamer focuses on credibility diagnostics.

Their study identifies critical issues with AI-based deception detection. First, the algorithms lack transparency: it is unclear how the AI arrives at its decisions.

The researchers note that this lack of transparency limits the ability to critically evaluate results and understand why certain classifications are made.

Another issue is biased results stemming from the selection of input variables and from skewed training data. AI had been hoped to mitigate human biases, but the reality often falls short because the data used for training carries biases of its own, according to the researchers.

The third problem they cited arises from the technology's fundamental nature.

AI-based deception detection assumes the existence of unique cues for deception, but the experts note that decades of research have not conclusively identified such cues or a predictive theory.

AI Research in Deception

Despite their reservations, Suchotzki and Gamer do not discourage AI research in deception detection. However, they stress the need for rigorous conditions before considering real-world applications.

They recommend verifying that AI algorithms meet quality standards, including controlled experiments, diverse and unbiased datasets, and validation on independent datasets to avoid false positives.
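The validation step the researchers call for can be illustrated with a toy sketch. All data and numbers below are synthetic and purely illustrative (not from the study): a simple detector tuned on one dataset is then checked against an independent dataset where the supposed "deception cue" is noisier, showing how the false-positive rate can degrade when a model leaves its training conditions.

```python
import random

random.seed(0)

def make_dataset(n, noise):
    """Synthetic 'deception' data: each sample is (cue_score, is_lie).
    Larger noise means the cue separates lies from truths less cleanly."""
    data = []
    for _ in range(n):
        is_lie = random.random() < 0.5
        base = 0.6 if is_lie else 0.4  # lies score slightly higher on average
        score = min(1.0, max(0.0, random.gauss(base, noise)))
        data.append((score, is_lie))
    return data

def false_positive_rate(data, threshold):
    """Fraction of truthful samples incorrectly flagged as lies."""
    truths = [(s, y) for s, y in data if not y]
    flagged = sum(1 for s, _ in truths if s > threshold)
    return flagged / len(truths) if truths else 0.0

# "Train": pick the decision threshold that maximizes accuracy on dataset A.
train = make_dataset(2000, noise=0.15)
best_threshold = max(
    (t / 100 for t in range(101)),
    key=lambda t: sum((s > t) == y for s, y in train),
)

# Validate on an independent dataset B where the cue is weaker:
# performance that looked good in training may not transfer.
independent = make_dataset(2000, noise=0.30)
fpr_train = false_positive_rate(train, best_threshold)
fpr_independent = false_positive_rate(independent, best_threshold)
print(f"FPR on training data:    {fpr_train:.2%}")
print(f"FPR on independent data: {fpr_independent:.2%}")
```

In this toy setup the false-positive rate rises noticeably on the independent data, which is exactly the failure mode independent validation is meant to catch before a detector is used on real people.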

Suchotzki and Gamer also advise limiting AI-based deception detection to highly controlled situations, where consistent differences in behavior or statements could plausibly signal deception.

Their recommendations also include a caution to policymakers, drawing from historical lessons on the implications of deploying deception detection methods prematurely.

"History teaches us what happens if we do not adhere to strict research standards before methods for detecting deception are introduced in real life," the researchers said in a statement.

The findings of the study were published in the journal Trends in Cognitive Sciences.

Tech Times
ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.