St. Louis residents are reportedly being warned by the FBI that AI voice mimicry technology is being used by criminals to fool victims into thinking they're hearing a relative's voice during a phone call.
According to Jay Greenberg, FBI St. Louis Special Agent in Charge, agreeing on a family "safe" word is an easy fix. Even the most sophisticated AI schemes can be defeated by that basic family safe word, he claimed, and it most likely would have worked in the most recent St. Louis case.
(Photo: CLEMENT MAHOUDEAU/AFP via Getty Images) The shadow of Uruguayan developer Tammara Leites poses in front of text generated by (digital Simon) using artificial intelligence, ahead of the (dSimon) performance at the Avignon fringe festival on July 14, 2022.
In one of the cases, a medical professor received a call at work reporting that her 16-year-old daughter had been abducted. The mother heard her daughter's cries for help during the call. The suspect claimed he had abducted the girl after the two were involved in a car accident.
He demanded that the mother stay on the phone until she arrived at Pete's Shur Save Market in U-City and transferred him $3,000 for her daughter's release, threatening to harm the girl if she failed to comply. The ordeal lasted over two hours. In reality, the daughter was safe at home the entire time: there had been no car accident, and she had never cried out for help.
OpenAI Recognizes Voice Cloning Risks
OpenAI, likewise, understands the risks of AI voice cloning, prompting the AI giant to delay the release of its voice cloning tool.
The new OpenAI tool can create a lifelike clone of anybody's voice from just 15 seconds of recorded audio. The technology, called Voice Engine and developed in 2022, powers the text-to-speech functionality of ChatGPT, the company's flagship AI tool.
The AI-generated voice can read text instructions aloud in a variety of languages or the speaker's native tongue upon request. Nevertheless, the tool has never been made public, partly because of OpenAI's "cautious and informed" approach to its wider dissemination.
Alluding to the tool's delayed release, OpenAI urged that laws safeguarding the use of people's voices in AI be explored, and that the public be made aware of the possibility of deceptive AI content as well as the capabilities and limitations of AI technology.
FTC Against Impersonation Scams
Impersonation scams continue to spread throughout the country, and the FTC has already introduced safeguards against them. The agency's new rule, which took effect immediately, forbids impersonating companies, government agencies, or their agents in interstate commerce.
Additionally, the rule gives the FTC the power to bring direct complaints before federal courts, requiring con artists to repay money they have gained through corporate or government impersonation schemes.
Scams involving impersonation take many different forms. False impersonations can target specific people with phony Calendly meeting links to steal money, for example, or use fake podcast invitations that lead to Facebook page takeovers via misleading "datasets" URLs.