AI was supposed to be the pinnacle of technological achievement — a chance to sit back and let the robots do the work. While it's true AI completes complex tasks and calculations faster and more accurately than any human could, it's shaping up to need some supervision.
As it evolved to compute long, highly complex chains of if-then decisions, a little bit of humanity started to rub off on it. But machine learning that uses humans as teachers was flawed from the beginning: human bias seeps into AI systems, particularly in the way automated processes review loan applications.
How Does Bias Happen?
Theoretically, robots are supposed to be objective, far above prejudices that control the flesh-and-blood world. So why are machines exhibiting distinctly human behaviors? To recognize the significance of AI bias, it's important to understand how humans introduce bias to these systems.
Machine learning relies on data, and a huge amount of it, so AI can make accurate predictions. Numbers are supposed to be black and white, but no data is apolitical when it's collected by people.
Typically, there's a group of people who "feed" a machine data sets for it to learn from. Let's look at Google's image recognition tool as an example. It relies on an image classification algorithm to correctly label pictures of cats.
The people behind this algorithm choose to teach the machine the visual characteristics of a cat using features such as pointy ears, large almond-shaped eyes, and sharp canines. The machine can then apply these learned features to new images, successfully identifying those that share them.
In this and every other AI application, the size and variety of the data are essential. The more data underpinning the machine's base knowledge, the more likely it is to perform well in the future. When there isn't enough data, or, more worryingly, when the data is unrepresentative, the AI won't be able to make accurate predictions.
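To make that concrete, here's a minimal sketch of feature-based learning in Python, with invented, hand-coded features standing in for a real vision pipeline (Google's actual tool learns its features from pixels, not from a checklist like this):

```python
# Toy sketch of feature-based learning, not a real vision pipeline.
# Each "image" is reduced to three invented features:
# [pointy_ears, almond_eyes, visible_canines], each scored 0 or 1.
from sklearn.tree import DecisionTreeClassifier

# Labeled examples the human curators "feed" the machine.
X_train = [
    [1, 1, 1],  # cat: pointy ears, almond eyes, canines showing
    [1, 1, 0],  # cat: mouth closed, canines hidden
    [0, 0, 0],  # not a cat
    [0, 1, 0],  # not a cat
]
y_train = ["cat", "cat", "not_cat", "not_cat"]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# The model applies what it learned to an unseen example.
print(model.predict([[1, 0, 1]]))  # -> ['cat']
```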
Algorithmic Bias Is a Human-Made Problem
Algorithmic bias comes into play when engineers and scientists don't account for incomplete data sets. If Google's imaging tool learned only from cats photographed with their mouths open and canines on full display, it would struggle to recognize cats with their mouths closed.
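Extending the toy sketch above (same invented features), here's how that incomplete curation bakes in a blind spot: every cat in the training set has its mouth open, so the model latches onto visible canines as the defining trait.

```python
from sklearn.tree import DecisionTreeClassifier

# Incomplete curation: every cat example happens to have its mouth open.
# Features (invented): [pointy_ears, almond_eyes, visible_canines].
X_train = [
    [1, 1, 1],  # cat, mouth open
    [1, 1, 1],  # cat, mouth open
    [1, 0, 0],  # not a cat
    [0, 1, 0],  # not a cat
]
y_train = ["cat", "cat", "not_cat", "not_cat"]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# An ordinary cat with its mouth closed, canines hidden:
print(model.predict([[1, 1, 0]]))  # -> ['not_cat']
```

The model isn't malicious; it simply learned exactly what the incomplete data taught it.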
There are several examples of this algorithmic bias in the tech world. After Microsoft's AI chatbot Tay used Twitter as a learning tool, it became a Nazi sexbot. Google's image recognition tool tagged black people as gorillas, while its translation service ascribed male pronouns to specialized professions when translating sentences from languages with gender-neutral pronouns.
Most recently, Amazon had to nix its AI recruiting tool for a similar flaw. Its computer models trained themselves on résumé data from the previous 10 years, a period during which most applicants were male. Left to its own devices, the AI presented a pool of applicants made up mostly of men.
Amazon's AI read the predominance of male résumés as a preference for male candidates, rather than as a reflection of the tech industry's inherent prejudice against women and minorities, and it adjusted its selection process to reproduce that favoritism.
The Financial World Isn't Safe
Traditionally tech-focused companies aren't the only ones struggling with these issues. The financial industry is one of many turning to technology (and AI) to streamline services, including online loan applications.
AI platforms taking over the lending space could harm minorities historically underserved by the traditional banking model, because structural racism has produced data sets that disenfranchise people of color.
The banks' practice of redlining certain neighborhoods (in other words, denying those neighborhoods financial services because of their racial demographics) started in the 1930s, but it continues to affect these communities today.
Census data shows black and Hispanic Americans are more likely to be underbanked, meaning deprived of conventional banking services, than white or Asian Americans. Racial gaps in mortgage lending show black and Hispanic borrowers are more likely to have their applications denied than white borrowers.
There's a high chance machines will pattern-match new black and Hispanic applicants to these data points and deny them services, even when they're eligible for an online loan. Without human intervention, AI falls into a feedback loop: predicting more and more loan rejections for people of color.
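Here's a deliberately oversimplified simulation of that loop. The cutoff rule and the synthetic approval history are assumptions made for illustration, not a description of any real lender's model:

```python
import random

random.seed(0)

# Synthetic seed history: equally sized groups, but group "B" (a stand-in
# for a historically redlined community) starts with far more denials.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

def approval_rate(records, group):
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

# A naive model approves applicants only when their group's historical
# approval rate clears a fixed cutoff, then writes its own decisions back
# into the history it will learn from next round.
CUTOFF = 0.5
for round_num in range(1, 6):
    for _ in range(100):
        group = random.choice(["A", "B"])
        history.append((group, approval_rate(history, group) >= CUTOFF))
    print(f"round {round_num}: group B approval rate "
          f"{approval_rate(history, 'B'):.2f}")
# Group B starts below the cutoff, so it's denied every time, and each
# denial drags its historical rate lower: the loop feeds itself.
```

Human review breaks the loop precisely because it can inject decisions the historical data would never predict.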
It's important for these vulnerable populations to learn about modern borrowing options that balance the possibilities of AI with a much more human approach. While these lenders offer increasingly online platforms for submitting a loan application and servicing an existing loan, they follow a procedure in which actual humans review and approve installment loan applications.
But perhaps more critically, it's important for banking systems to recognize the potential for bias to affect their machine learning and deep learning models. AI isn't infallible. Although it's faster than any human before it, it's only as valuable as the information it uses. If it's allowed to learn, without interference, from data sets ingrained with societal prejudice, it will produce results that reflect that prejudice.
Cathy O'Neil, author of "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy," says there's a danger in collecting data without addressing possible biases.
"When we blithely train algorithms on historical data, to a large extent we are setting ourselves up to merely repeat the past. ... We'll need to do more, which means examining the bias embedded in the data."
Behind every algorithm, there are people. They may not intend to create a biased AI, but they will if they don't apply cultural intelligence to the scope and formulation of their data. Without adjusting for historical and cultural prejudices, their algorithms will have a serious impact on people's ability to get a personal loan in an emergency.