Massachusetts securities regulators have launched an investigation into how investment firms in the state use artificial intelligence (AI), a move aimed at safeguarding investors from the potential risks of the technology's unchecked growth.
Reuters reports that the investigation, spearheaded by Massachusetts Secretary of State Bill Galvin, addresses concerns about how AI is utilized in investor interactions.
Some prominent investment firms under scrutiny include JPMorgan Chase, Morgan Stanley, Tradier Brokerage, US Tiger Securities, E*Trade, Savvy Advisors, and Hearsay Systems.
AI Used in Criminal Activities
In recent years, AI's rapid development has given companies new tools to streamline operations, enhance customer interactions, and boost revenues.
However, as this technology spreads across industries, regulators are becoming more concerned about its possible implications, notably in the financial sector.
There is growing fear that AI may inadvertently expose investors to greater risk or even facilitate fraudulent activity.
Regulatory Inquiry on AI
The Massachusetts regulators' inquiry involves sending letters to a number of firms that either currently employ AI or are developing AI-based systems for their businesses.
The letters of inquiry seek detailed information about how these firms employ AI, the algorithms and data sources used, and the measures in place to ensure transparency, accountability, and investor protection.
AI-Enabled Fraud
In parallel with this investigation, another pressing concern regarding AI's impact on the financial industry has emerged.
Earlier this week, we reported that the Social Security Administration, responsible for safeguarding the interests of beneficiaries, is grappling with the escalating threat of AI-backed fraud schemes.
Gail Ennis, the Social Security Administration's inspector general, has raised the alarm about criminals exploiting AI to carry out fraud more effectively and profitably.
According to Ennis, these schemes are becoming increasingly convincing and realistic, making them difficult for beneficiaries and authorities to detect and prevent.
To counter this growing threat, the Social Security Administration has set up an internal task force focused on studying AI and devising strategies to prevent AI-related fraud. The task force aims to stay ahead of the curve by understanding the evolving tactics used by criminals and deploying advanced AI-based solutions to safeguard beneficiaries proactively.
AI in Banks
While the Massachusetts investigation and the Social Security Administration's response are essential steps toward addressing AI's potential risks, another concern has been raised by the Consumer Financial Protection Bureau (CFPB).
The CFPB has warned about banks' use of generative AI chatbots, citing numerous customer complaints about the difficulties in obtaining timely and straightforward answers to their inquiries.
More banks are leveraging AI-driven virtual assistants to enhance customer service. Examples include Bank of America's Erica, USAA's Eva, and US Bank's Smart Assistant, which offer support for various banking tasks and facilitate easy access to account information.
Stay posted here at Tech Times.