When AI Gets It Wrong: Bias in AI Decisions

5 minute read

Can Artificial Intelligence Make Bad Decisions?

Even if you’ve never asked Siri to set a reminder or told Alexa to play a movie, artificial intelligence (AI) is still part of your world. That’s because AI and machine learning systems that can understand us and make decisions are playing an ever-larger role in our lives.

Today, AI appears in everything from medicine and education to retail and entertainment to hiring, banking and criminal justice – just to name a few. Some AI applications are undoubtedly helpful, like AI that detects wildfires faster or AI that helps astronomers sift through terabytes of space imagery.

But along with AI’s ability to help comes a danger. The more we trust this relatively new technology to make important decisions, the more room there is for large-scale errors. For example, things get complicated when an algorithm decides whether you qualify for a loan or land your dream job. In this article, we’ll examine how bias in AI can inadvertently impact a wide range of individuals and industries.

AI Bias in Finance

Did you know that AI helps prevent credit card fraud and exposes money laundering schemes? Some investment firms are calling on AI to help them make smarter trades, and many financial executives think AI will help with everything from improved customer service to an increase in profits.

But what happens when a machine-learning algorithm decides whether you qualify for a loan? Because AI can process far larger amounts of data, companies can now rely on alternative data to predict whether you’re a good credit risk.

Let’s say people who pay back their loans tend not to shop at a particular website, or they tend to have a certain number of LinkedIn connections. But you like that website, and you just don’t use LinkedIn very often. If a lending company fed your data into its AI algorithm, it might flag you as a bad risk based on those choices.

History itself can create bias too. The latest US Census data shows that Black and Hispanic populations have been historically underbanked. For AI to learn, it must be fed data, and if that data shows certain segments of the population being denied loans more often, the system may falsely “learn” that those segments are greater credit risks, perpetuating a negative cycle.
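To see how that feedback loop works, consider a toy sketch. The data, feature names and labels below are entirely hypothetical; the point is only that a model trained on historically biased decisions will reproduce them.

```python
# A minimal sketch (hypothetical data) of how a model trained on past
# lending decisions inherits the bias baked into those decisions.
from sklearn.linear_model import LogisticRegression

# Each row: [income_score, group]; group 1 stands in for a historically
# underbanked population. Labels are PAST decisions: 1 = approved, 0 = denied.
X = [[0.9, 0], [0.8, 0], [0.7, 0], [0.9, 1], [0.8, 1], [0.7, 1]]
y = [1, 1, 1, 0, 0, 1]  # group 1 was denied more often at the same incomes

model = LogisticRegression().fit(X, y)

# Two applicants with identical incomes, differing only in group membership:
print(model.predict_proba([[0.8, 0]])[0][1])  # approval probability, group 0
print(model.predict_proba([[0.8, 1]])[0][1])  # lower for group 1: the model
                                              # "learned" the historical bias
```

Nothing in the code is malicious; the disparity comes entirely from the labels the model was trained on.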

AI Bias in Hiring

More and more organizations are turning to AI-powered recruiting agencies to help them fill positions. On the one hand, these agencies can eliminate thousands of hours of interviewing time by using AI to analyze automated video interviews or data from tests and resumes to predict employability. On the other hand, some AI researchers worry that using AI to determine a person’s fitness for a job could be unscientific or, worse, biased.

The recruiting firm HireVue has developed an AI-powered hiring system designed specifically for interviews. The technology uses a candidate’s computer or cellphone camera to analyze everything from facial expressions to diction to speaking voice – among other factors – before assigning an automatically generated “employability” score.

The problem with this approach? Some experts argue that such a system cannot truly assess the worth, value or employability of an individual and that “analyzing a human being like this…could end up penalizing nonnative speakers, visibly nervous interviewees or anyone else who doesn’t fit the model for look and speech.” Furthermore, they say the technology is “an unfounded blend of superficial measurements and arbitrary number-crunching that is not rooted in scientific fact.”

In another example, Amazon abandoned a hiring algorithm in 2018 because it passed over female applicants in favor of male applicants for tech roles. The reason was simple: the learning program had been fed data from past applicants and employees, the majority of whom were male. While tech has long been a male-dominated industry, it’s shifting. If AI only considers past data, the future will never change.

AI in Insurance, Criminal Justice and Other Industries

It’s no secret that insurers look at countless factors to determine rates and coverage. Thanks to AI, they can now analyze countless more. And while the data-crunching abilities of AI are helping insurers deliver faster quotes and claims settlements, some argue that it could lead to “personalized pricing,” where machine learning ultimately decides who gets coverage and what it costs.

Flaws and bias within AI get even more serious in law enforcement and criminal justice, where the dangers range from being misidentified by police to receiving a stiffer sentence simply because of race.

So what can be done to make AI less biased in the first place? For starters, we need to understand how and why AI reaches certain decisions. Two terms that often come up when people talk about improving AI are explainable AI and auditable AI.

Explainable and Auditable AI

Explainable AI means asking an AI application why it made the decision it did. The Defense Advanced Research Projects Agency (DARPA), an agency within the Department of Defense, is working on an Explainable AI (XAI) program to develop techniques that let systems not only explain their decision-making but also offer insight into the strengths and weaknesses of their reasoning. Explainable AI helps us know how much to rely on results and how to help AI improve.
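As a rough illustration of what an explanation might look like, here is a hand-rolled sketch (the feature names and weights are made up) that breaks a score into per-feature contributions, similar in spirit to the attribution methods used in explainable AI.

```python
# A toy sketch of one explainability idea: decompose a decision score into
# per-feature contributions. Feature names and weights are hypothetical.
weights = {"payment_history": 0.62, "income": 0.31, "zip_code_group": -0.45}
applicant = {"payment_history": 0.9, "income": 0.5, "zip_code_group": 1.0}

# Contribution of each feature to the final score, largest first:
contributions = {f: weights[f] * applicant[f] for f in weights}
for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {value:+.3f}")
# A large negative contribution from a proxy feature like zip_code_group
# is exactly the kind of red flag an explanation should surface.
```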

Auditable AI invites third parties to test a system’s reasoning by giving it a wide range of queries and measuring the results for unintended bias or other flaws.
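A simple way to picture such an audit: probe a black-box scoring function with matched inputs that differ in only one attribute, then compare the outcomes. Everything below is hypothetical; model_score is a stand-in for the real system under test.

```python
# A minimal sketch of an outside audit: feed a black-box scorer matched
# pairs that differ only in group membership, then compare approval rates.
def model_score(income: float, group: int) -> float:
    """Hypothetical stand-in for the system under audit."""
    return 0.7 * income - 0.2 * group  # toy model with a built-in disparity

def audit(scorer, incomes, threshold=0.5):
    approved = {0: 0, 1: 0}
    for income in incomes:
        for group in (0, 1):  # identical income, different group
            if scorer(income, group) >= threshold:
                approved[group] += 1
    return approved

incomes = [i / 100 for i in range(50, 100)]
results = audit(model_score, incomes)
print(f"group 0 approved: {results[0]}, group 1 approved: {results[1]}")
# A wide gap on otherwise identical inputs is evidence of unintended bias.
```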

Fei-Fei Li, an AI pioneer, former Google executive and co-director of Stanford University’s Human-Centered AI Institute, argues that another way to help eliminate bias, especially gender and race discrimination, is to get more women and people of color involved in developing AI systems. That’s not to say programmers are at fault for building bias into AI; rather, having a broader range of people involved can help stamp out unconscious leanings and bring overlooked concerns to light.

Legal Responses to AI Bias

Ultimately, what may bring these anti-bias tactics into effect is the law. The Equal Credit Opportunity Act already requires any lender denying a loan to offer “specific reasons” why, which means companies using AI to make loan decisions are already required to produce a form of explainable AI. And in Illinois, a new law requires any company intending to use AI in its hiring process to notify all applicants beforehand. At the very least, if AI is going to start making the big decisions in our lives, we deserve to know about it.

A Few Questions for All of Us to Consider

There’s no question that AI is already having a significant impact on our lives – many times without us even realizing it. What questions or concerns do you have about how it might be impacting you, those you know or those you serve? If your organization is using some form of AI in its decision-making processes, what steps are you taking to ensure that bias doesn’t accidentally creep into the picture? Feel free to share your thoughts in the comments section below.

Take the Next Step

We can help you decide quickly whether automation would be a good fit for your organization. With 20+ years of experience in automation, we just need about five minutes of Q&A.
