AI has ushered in a new era of convenience and efficiency, but it is not without drawbacks. As society becomes more reliant on this technology, so too are criminals looking to exploit it.
Now, a top Social Security official has warned that beneficiaries are susceptible to AI-backed fraud schemes, generating anxiety among citizens and authorities alike.
The Rise of AI Fraud
AI, intelligence exhibited by machines or software, comes in various forms, including chatbots like ChatGPT, deepfakes, and generative AI. The potential impact of AI on society is only just beginning to be understood, as highlighted by Gail Ennis, inspector general at the Social Security Administration Office of the Inspector General.
As CNBC reported, Ennis stresses the importance of recognizing the technology's risks, as criminals are leveraging AI to execute fraud schemes more easily and rapidly.
Ennis said in a letter that the deceptions are becoming increasingly credible and realistic, making fraud even more profitable for criminals. In response, the Social Security Administration has established an internal task force to study AI and determine the resources needed to prevent AI-related fraud effectively.
Haywood Talcove, CEO of the government business of LexisNexis Risk Solutions, warns that action is needed urgently. Criminals have already set their sights on government payments, and the consequences of inaction could be severe.
With government agencies often perceived as having virtually unlimited funds, Talcove notes that AI-backed fraud targeting Social Security beneficiaries has become a modern version of the classic crime of stealing checks from mailboxes.
Fighting Fraud in the UK
Across the pond, the BBC reported in July that the UK government is also grappling with the challenges posed by AI. The Department for Work and Pensions (DWP) aims to combat fraud using AI, but campaigners raise concerns about potential biases in the system's referrals for benefit investigations.
To tackle the issue, the DWP plans to share more information with MPs and implement safeguards.
Amid mounting concerns over potential biases in AI systems, privacy and campaign groups are demanding greater transparency and oversight. The UK's National Audit Office and the campaign group Privacy International have called for more substantive information about the tools used by the DWP.
Protect Yourself from Scams
As technology evolves, so do the tactics employed by scammers. A Florida government agency says that recognizing the general hallmarks of scams, such as pressure to act immediately, requests for sensitive information, and demands for hard-to-recover payment forms, can help individuals protect themselves from falling victim to fraud.
Additionally, voice-cloning scams have become a concerning issue. Establishing family passwords and security questions, and encouraging relatives to set their social media profiles to private, can help prevent scammers from collecting personal information.
Kathy Stokes, director of fraud prevention at AARP's Fraud Watch Network, cautions that such unexpected communications can put individuals in a heightened emotional state, making them more susceptible to falling for these elaborate schemes.
Staying calm, verifying information with known phone numbers or family members, and not fully trusting caller ID can further safeguard against voice-cloning scams.
Stay posted here at Tech Times.
Related Article: Can AI Avoid Having Hallucinations? Here's What Researchers Say