AI Governance Programs Need to Adopt a Risk-Based Approach, Says Oculeus


AI Governance is a comprehensive framework that guides organizations in implementing, managing and monitoring AI applications in their operations. Its primary aim is to ensure adherence to regulatory standards while also incorporating evolving ethical considerations and risk management principles. As AI technology advances rapidly, the requirements for AI Governance continue to evolve, requiring IT Governance experts to identify and adapt to new challenges proactively.

According to Oculeus, a provider of telecom fraud management solutions, AI Governance is crucial because while AI offers exciting opportunities for organizations, it also carries significant risks, potentially harming citizens through unintended consequences. The right AI Governance enables organizations to harness AI-generated insights for informed decision-making while minimizing negative impacts.

"Society generally prizes ideals of fairness and transparency in how such processes operate, especially where important life events are concerned. So the challenge relates to how fairness and transparency can be demonstrated in the complex world of 'black box' AI processing," said Gavin Stewart, Vice President for Sales at Oculeus.

Effective and compliant AI investments for organizational success

Organizations seeking to gain from AI-related applications and systems need to ensure two things, according to Stewart. First, that the AI-enabled tools and processes are effective and useful. Second, that they comply not just with current regulations and best practices but are also likely to comply with the next wave of regulations expected to emerge in the coming years. Neglecting these aspects could lead to wasted investments in non-compliant systems that require replacement, coupled with severe financial and reputational repercussions arising from regulatory breaches, Stewart added.

The implications of an AI Governance program are organization-specific, depending on each organization's activities and risk profile. Analysts emphasize two key areas: first, the importance of human oversight in decision-making to prevent AI bias and potential flaws; and second, the need for record-keeping to ensure that the decision-making process remains transparent.

AI Governance and emerging regulations

In the future, fully automated AI workflows may face challenges from citizens who believe AI-based decision-making has caused them harm. Emerging regulations might empower citizens to challenge such decisions, obliging organizations to respond appropriately or face consequences. Additionally, upcoming regulations may classify AI use cases by severity, permitting different levels of AI automation for each class. For instance, 'low-risk' AI activities such as advertising may be permitted to use more AI automation, whereas 'critical-risk' use cases such as police monitoring or arrest decisions may be barred from AI automation entirely. AI Governance programs will therefore have to adopt a risk-based approach to address these potential developments, along the lines of the sketch below.
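To make the risk-based idea concrete, the short Python sketch below maps hypothetical risk tiers to the level of automation a governance program might permit. The tier names, example use cases and policy values are illustrative assumptions drawn from the scenarios above, not categories defined by any actual regulation.

    # Illustrative sketch of a risk-based AI Governance check.
    # Tier names, example use cases and permitted automation levels are
    # assumptions for illustration; they do not reproduce any regulation.
    from enum import Enum

    class RiskTier(Enum):
        LOW = "low"            # e.g. advertising optimization
        HIGH = "high"          # e.g. credit scoring
        CRITICAL = "critical"  # e.g. law-enforcement decisions

    # Hypothetical policy: how much AI automation each tier may use.
    AUTOMATION_POLICY = {
        RiskTier.LOW: "fully_automated",
        RiskTier.HIGH: "human_review_required",
        RiskTier.CRITICAL: "ai_automation_forbidden",
    }

    def allowed_automation(tier: RiskTier) -> str:
        # Return the automation level a governance program would permit.
        return AUTOMATION_POLICY[tier]

    print(allowed_automation(RiskTier.LOW))       # fully_automated
    print(allowed_automation(RiskTier.CRITICAL))  # ai_automation_forbidden

A real program would attach a policy check of this kind to every AI use case it inventories, so that new deployments are classified before they go live.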

In light of these considerations, leading countries are actively working on AI regulations. The EU's upcoming 'AI Act' is expected to become law by 2024-25, affecting the EU27 countries and potentially influencing AI regulation globally. The EU regulation will empower citizens to file complaints and receive explanations for AI-based decisions affecting their rights, requiring organizations to provide detailed audit trails in response. Other countries, such as Singapore, are working on similar regulations.

AI Governance for the telecom industry

The telecoms industry has historically had a heavy need for profiling, forecasting and predictive decision-making. Accordingly, telcos have pioneered the use of AI technologies and, before that, 'big data' processes. Telcos must therefore proactively prepare for upcoming regulatory changes while thoroughly evaluating their customers' sentiments and expectations around AI, which has been the subject of widespread public concern over the past year, Stewart added.

As a provider of telecom risk management solutions, Oculeus ensures that these government regulations are incorporated into the functionality and use of the company's products.

"In our anti-fraud solutions, we adopt a balanced approach to AI, leveraging its capacity to process vast amounts of complex data rapidly to derive valuable insights. This enables us to detect and block deliberate frauds or abuses before they harm customers," Stewart said while explaining the implications of these regulations on providers like Oculeus. He further added that the decision-making process resulting from AI insights incorporates human oversight and the ability to override recommendations without compromising operational efficiency. With this, Oculeus can reduce the risk of mistakenly blocking services for innocent parties flagged as suspected fraudsters. Additionally, Oculeus maintains a comprehensive audit trail of system activities, readily available for regulatory inquiries upon request.

Oculeus firmly believes that every action taken by an AI-enabled system is ultimately "owned" by a specific organization, whether it is a public body or a commercial entity such as a telecoms operator. To ensure effective AI development, organizations must take ownership of AI decisions that impact citizens. While organizations focus on preparing for AI Governance, concerned citizens should direct their attention towards lawmakers to ensure that the democratic process results in favorable regulations.
