In July, many businesses and government bodies running Microsoft systems faced outages caused by a faulty software update from CrowdStrike. Banks and other financial institutions from Australia to India and South Africa experienced service disruptions.
In the UK, the London Stock Exchange (LSE) encountered a technical glitch that impacted its news service and delayed the display of opening trades. Australia's largest bank, Commonwealth Bank (CBA.AX), reported that some customers were unable to transfer money due to the outage.
Although some employees at major U.S. banks, including Bank of America and JPMorgan Chase, were unable to log into their systems, the firms stated they had not observed any significant impact on their operations.
CrowdStrike CEO George Kurtz explained that the problem was caused by a defect in a content update for Windows and was not the result of a cyberattack. Still, the outage highlighted the significant impact technology, and the adoption of AI, can have on the functionality of the financial ecosystem.
I spoke with Siddharth Damle, a pioneer in AI risk management frameworks and the author of Using Layers of Protection Analysis, which explores risk management techniques for past catastrophic events like the 2008 mortgage market crisis. We discussed the key lessons from the recent tech outage.
Damle's unique blend of expertise in quantitative analysis and financial risk enables him to effectively lead and implement large-scale risk management programs within the banking sector.
He has played a crucial role in assisting major banks with the development and validation of their risk and financial models to ensure compliance with relevant regulatory requirements.
Addressing technology risks ahead of time is crucial.
Even under the stringent regulations governing the banking sector, many firms rely on AI technology and models provided by vendors. This reliance, however, does not eliminate the inherent risks of AI, which extend far beyond technology and span a wide array of functions.
For example, some office applications now feature embedded AI, which introduces potential vulnerabilities and exposure to cyber threats.
"One major concern is the security of AI solutions, particularly when handling confidential data," Damle said. "A breach in AI technology can result in significant reputational damage and loss of sensitive information."
Another risk is known as "hallucinations" in generative AI (GenAI), where AI systems produce plausible-sounding but incorrect or irrelevant responses, generating patterns their developers did not anticipate.
For routine tasks managed by automated chatbots, the risks are relatively low as long as these systems are limited in scope and are restricted from accessing sensitive content. However, in more complex areas, such as credit decisioning, AI introduces significant challenges.
For instance, biases in AI algorithms can lead to unfair lending practices if the data used to train these systems are not properly monitored.
"Lending decisions should be based on borrower creditworthiness factors rather than demographic factors like ethnicity, gender, age, or other unrelated criteria," Damle said.
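One common way to surface the kind of bias Damle describes is a disparate-impact check, comparing approval rates across demographic groups. The sketch below is purely illustrative: the groups, data, and the 80% ("four-fifths rule") threshold are assumptions for the example, not any bank's actual controls.

```python
# Illustrative sketch: a simple disparate-impact check on lending decisions.
# Group labels, toy data, and the 0.8 threshold are assumptions.

def approval_rate(decisions):
    """Fraction of applications approved (decisions are 0/1 flags)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one.
    Values below 0.8 (the 'four-fifths rule') flag potential bias."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Toy data: 1 = approved, 0 = denied
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 40% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag: approval rates differ enough to warrant review")
```

A check like this does not prove discrimination by itself, but a low ratio is a signal that the training data or model should be reviewed against the creditworthiness-only standard Damle describes.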
Banks can adopt a multi-layered approach to manage their risks.
Damle emphasizes that an effective strategy for managing model risk must encompass at least three layers of defense.
The first layer involves the development of risk models by banking professionals, including adequate model testing, monitoring, and mitigating controls for known weaknesses.
The second layer consists of validating these models to ensure they perform as expected—a task Damle has been deeply involved in. This validation process critically challenges model development decisions and is essential for verifying that they can accurately produce intended outcomes.
"The final layer of defense is an independent internal audit function that ensures compliance for both the model development and validation procedures," Damle said.
This independent review ensures that all components of the risk management value chain are functioning correctly and complying with regulatory expectations.
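The three-layer structure can be sketched as a simple pipeline in which each line of defense records its outcome and the audit layer verifies that the earlier layers actually ran. The functions below are hypothetical placeholders for illustration, not real bank controls.

```python
# Hypothetical sketch of the three lines of defense described above.
# Each check is an illustrative placeholder.

def first_line_develop_and_test(model, train_data):
    """Line 1: developers build the model and run their own tests."""
    model["tested"] = len(train_data) > 0 and all(x is not None for x in train_data)
    return model

def second_line_validate(model, holdout_data):
    """Line 2: independent validation challenges the model on data
    the developers did not use."""
    model["validated"] = model.get("tested", False) and len(holdout_data) > 0
    return model

def third_line_audit(model):
    """Line 3: internal audit verifies both prior layers ran
    and were documented."""
    return model.get("tested", False) and model.get("validated", False)

model = {"name": "toy_credit_model"}
model = first_line_develop_and_test(model, train_data=[0.1, 0.4, 0.7])
model = second_line_validate(model, holdout_data=[0.2, 0.9])
print("audit passed:", third_line_audit(model))  # prints "audit passed: True"
```

The point of the structure is that each layer's result depends on the one before it: validation cannot pass without development testing, and the audit fails unless both earlier layers completed.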
The entire industry can be integrated into proactive risk management strategies.
Damle's proposed strategy involves integrating various industry elements, such as policymakers, regulators, rating agencies, and insurance providers, into the risk management framework.
"Making these entities aware of their roles and responsibilities is crucial for strengthening the overall defense against financial crises," he said.
Damle highlights the role of regulators such as the Federal Reserve, whose forward-looking Comprehensive Capital Analysis and Review (CCAR) exercise requires banks to simulate losses under various stress scenarios and hold sufficient capital to cover them.
These scenarios span different risk categories, such as credit, market, operational, and liquidity risk. Damle emphasizes that risk models must also be tailored to address a bank's specific idiosyncratic factors.
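The core arithmetic of a CCAR-style capital check can be sketched as follows: project losses under each stress scenario across the risk categories, take the worst case, and compare it with available capital. All figures and scenario names here are invented for illustration; real CCAR projections involve far more granular models.

```python
# Hypothetical sketch of a CCAR-style capital adequacy check.
# Loss figures ($bn) and the capital level are invented for illustration.

STRESS_SCENARIOS = {
    "baseline":         {"credit": 2.0, "market": 1.0, "operational": 0.5, "liquidity": 0.3},
    "severely_adverse": {"credit": 8.5, "market": 4.2, "operational": 1.1, "liquidity": 2.0},
}

def scenario_loss(losses_by_category):
    """Total projected loss ($bn) for one scenario."""
    return sum(losses_by_category.values())

def capital_shortfall(capital, scenarios):
    """Worst-case loss across scenarios minus available capital;
    a positive result means the bank would need more capital."""
    worst = max(scenario_loss(s) for s in scenarios.values())
    return worst - capital

capital = 20.0  # assumed $bn of loss-absorbing capital
shortfall = capital_shortfall(capital, STRESS_SCENARIOS)
print(f"shortfall: {shortfall:+.1f} $bn")  # negative => capital covers the worst case
```

In this toy example the severely adverse scenario totals $15.8bn of losses, so $20bn of capital leaves a comfortable buffer; the exercise becomes binding when the worst-case loss approaches or exceeds capital.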
"For instance, during the mortgage market crisis, the focus was on credit risk and subprime lending, which informed further regulatory actions and led to advancements in risk modeling," Damle said.
The July tech outage exposed vulnerabilities and areas for improvement across many sectors. By continuously refining their risk models, banks can strengthen their resilience and reduce the likelihood of future financial crises. Rigorous first- and second-line-of-defense activities, supported by independent auditing and a tailored approach to managing AI technology, will help close those gaps.