A large regional bank uses a newly developed fraud detection artificial intelligence (AI) algorithm to identify potential cases of bank fraud, including anomalous patterns in financial transactions, loan applications, and new account applications. The algorithm is trained on an initial dataset meant to establish what normal versus fraudulent transactions look like. However, the training data becomes biased by oversampling applicants over 45 years of age for examples of fraudulent behavior. This oversampling continues over a period of months, with the bias growing and remaining undetected. The model becomes more likely to flag an older person as committing fraud than reality warrants. Customers are increasingly turned down for loans. Some begin to feel alienated, while regulators start to ask questions. Trust is lost, the brand’s reputation suffers, and the bank faces significant consequences to its bottom line.
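To make the mechanism concrete, the toy simulation below shows how skewing the fraud examples toward one age group teaches even a simple classifier to over-flag that group. This is a hypothetical sketch, not the bank’s actual system: the 2% fraud rate, the sampling scheme, and all other numbers are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_population(n):
    """True world: fraud occurs at the same 2% rate regardless of age."""
    age_over_45 = rng.integers(0, 2, n)            # 1 = applicant over 45
    is_fraud = (rng.random(n) < 0.02).astype(int)  # same base rate for everyone
    return age_over_45, is_fraud

# Build a biased training set: keep every fraud example involving an
# older applicant but drop half of those involving younger applicants,
# which overrepresents over-45s among the fraud labels.
age, fraud = sample_population(200_000)
keep = (fraud == 0) | (age == 1) | (rng.random(age.size) < 0.5)
X_train, y_train = age[keep].reshape(-1, 1), fraud[keep]

model = LogisticRegression().fit(X_train, y_train)

# Score a fresh, unbiased sample of each group.
probs = model.predict_proba([[0], [1]])[:, 1]
print(f"predicted fraud risk, 45 and under: {probs[0]:.4f}")
print(f"predicted fraud risk, over 45:      {probs[1]:.4f}")
```

On unbiased data, the model now scores older applicants at roughly twice the fraud risk of younger ones, purely as an artifact of the training mix: exactly the dynamic in the scenario above.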
We know model bias is potentially a problem, but do we really know how pervasive it is? Certainly, media outlets write stories that capture the public imagination, such as the AI hiring model that is unfairly biased against women1 or the AI health insurance risk algorithm that unfairly assigns higher risk scores based on racial identity.2 But as bad as such examples may be, the AI model bias story hardly ends with what we read in the popular press.
Our research indicates that model bias could be more prevalent than many organizations are aware and that it can do much more damage than we may assume, eroding the trust of employees, customers, and the public. The costs can be high: expensive tech fixes, lower revenue and productivity, lost reputation, and staff shortages, to say nothing of lost investments.
In fact, 68% of executives surveyed in Deloitte's recent State of AI in the enterprise, 4th Edition report indicated that their functional group invested US$10 million or more in AI projects in the past fiscal year alone.3 Even internal-facing models can do significant harm and potentially put those millions of dollars of investment at risk.
To solve this problem, we need to go beyond empathy and good intentions. Understanding, anticipating, and, as much as possible, avoiding the occurrence of model bias can be critical to advancing the use of AI models across the organization in a way that preserves stakeholder trust. The good news is that there are approaches that organizations can adopt, including technology-based solutions, that can help.
The term “bias” carries many meanings. For the purposes of this study, we use Merriam-Webster’s definition of bias: “systematic error introduced into sampling or testing by selecting or encouraging one outcome or answer over others.”4 Generally speaking, AI model bias happens when the training data on which an AI algorithm or model relies is not reflective of the reality in which the AI is meant to operate. In other words, despite the term “model bias,” a model is not biased in and of itself; rather, it’s the training data that renders a model biased. Stuart Battersby, CTO of AI enterprise software company Chatterbox Labs, concurs: “Regardless of context, often, [model bias risk] comes down to the training data” used to inform the model, and any training data is vulnerable to bias.5 (See the sidebar, “Organizing the ‘wild west’ of model bias,” for a discussion of the various ways model bias typically presents itself.)
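Because bias so often originates in the training data, one practical implication is that representativeness can be checked before a model is ever trained. The sketch below, with invented age bands and counts, compares a training set’s demographic mix against the population the model is meant to serve:

```python
from scipy.stats import chisquare

# Share of each age band in the population the model will serve
# (hypothetical figures, for illustration only).
population_share = {"18-30": 0.30, "31-45": 0.35, "46-60": 0.25, "60+": 0.10}

# Observed counts of each age band in the training data.
training_counts = {"18-30": 24_000, "31-45": 27_000, "46-60": 34_000, "60+": 15_000}

total = sum(training_counts.values())
observed = [training_counts[g] for g in population_share]
expected = [population_share[g] * total for g in population_share]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
if p_value < 0.01:
    print(f"Training mix differs from the serving population (chi-square = {stat:.1f})")
    # In this invented example, the 46-60 and 60+ bands exceed their expected
    # share, so patterns learned from the data will skew toward older applicants.
```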
Model bias is particularly troubling in part because it’s not always anticipated by organizations or those who are working with the AI models in question. These “weapons of math destruction,” as Cathy O’Neil calls them in her book of the same name, are secret and scalable, which can magnify their danger to an organization and its stakeholders.6
Evidence suggests that some users of AI models may be oblivious to this danger. Consider Deloitte’s State of AI report, in which some three-quarters of respondents said they were “confident” or “very confident” that their deployed models would exhibit qualities of fairness and impartiality. A similar share said they were “confident” or “very confident” that their deployed models would exhibit qualities of robustness and reliability.7 These data points matter because characteristics such as fairness and robustness are the hallmarks of models that operate as they should, without bias.
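Confidence in fairness is more defensible when it rests on measurement. As one illustration, the sketch below computes the disparate impact ratio, a commonly used fairness measure; the “four-fifths rule” used in US employment contexts treats ratios below 0.8 as a red flag. The approval figures are invented:

```python
import numpy as np

def disparate_impact_ratio(favorable, group):
    """Ratio of favorable-outcome rates between two groups.

    favorable: 1 where the decision favors the person (e.g., loan approved)
    group:     1 for the group being checked for disadvantage, 0 otherwise
    """
    favorable, group = np.asarray(favorable), np.asarray(group)
    return favorable[group == 1].mean() / favorable[group == 0].mean()

# Hypothetical outcomes: 60% approval for under-45s, 45% for over-45s.
approved = np.array([1] * 60 + [0] * 40 + [1] * 45 + [0] * 55)
over_45 = np.array([0] * 100 + [1] * 100)

print(f"disparate impact ratio: {disparate_impact_ratio(approved, over_45):.2f}")
# 0.75 -- below the 0.8 rule of thumb, so the model warrants scrutiny.
```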
Stories of bias found in AI models that speak to societal discrimination and prejudice span multiple contexts, including college acceptance decisions,8 criminal sentencing and parole decisions,9 and hiring decisions,10 among many others. Most examples of model bias mentioned publicly involve models that serve customer-facing functions. Our research indicates, however, that bias risks are prevalent whether models affect customers or operate within the internal, operational side of an organization. Model risks in the “back office” often go undetected until long after deployment, once the damage has been done. Indeed, the risk of model bias within an internal operating domain such as cybersecurity or compliance may be especially insidious: internal models may not receive the degree of public scrutiny given to more outwardly facing deployments, which delays detection. Jayant Narayan, World Economic Forum Artificial Intelligence and Machine Learning Technology Policy lead, says: “Most AI model bias discussion is still on the external facing functions and the use cases of industries that are more customer facing. Companies should reassess bias and risk classification for their internal functions and use cases.”11
Put another way, AI model bias is domain agnostic. In all of its forms, it can occur anywhere an AI model is deployed, regardless of context. Where context does matter, as we’ll discuss, is in the impact of model bias on trust.
Several classes or archetypes of model bias emerged during our research. We identify two main groups, based on the type of action that introduces the bias: “passive” bias, which is not the result of a deliberate act, and “active” bias, which arises from human action, whether intentional or not (and, even when intentional, often without negative intent). Both types can manifest in different ways, and both should be considered when developing strategies to mitigate model bias risk. In the classification that follows, we use our own terms as well as terms commonly observed in the social science and technology literature.12
Examples of passive bias may include:
Examples of active bias may include:
The above grouping is far from exhaustive or definitive; other characterizations of bias exist. This speaks to the evolving, still-nascent understanding of what model bias is and how it occurs.
The impact of AI model bias can cascade across an organization, undermining its decision-making and its stakeholders’ trust. Decision-making and trust are two separate but interrelated concepts. Trust is the foundation of a meaningful relationship between an organization and its stakeholders at both the individual and organizational levels. It is built through actions that demonstrate a high degree of competence and the right intent, exhibited as capability, reliability, transparency, and humanity. Competence is foundational to trust and refers to the ability to execute, to follow through on your brand promise. Intent refers to the reason behind your actions, including fairness, transparency, and impact. One without the other doesn’t build or rebuild trust; both are needed.
When a poor decision is made based on faulty analysis from biased data, an organization risks losing trust with stakeholders who may be relying on a model’s advice. This could manifest, for example, in board members who lose trust in an executive team that recommends an unprofitable project or employees who question the hiring of a less qualified candidate.
Once a decision error occurs and trust breaks down with a given stakeholder, that stakeholder’s behavior can change. For an employee, this could mean less engagement at work; for a customer, lower brand loyalty; for a supply chain partner, less willingness to recommend the business to others. These behavioral changes can have a meaningful impact on organizational performance, possibly limiting sales, productivity, and profitability. Ultimately, the lack of trust can prevent a company from fulfilling its goals and purpose with stakeholders.
Consider the bank described at this paper’s outset. In that example, AI model bias distorts decision-making by leading the bank to make unfair assumptions about older credit applicants and, as a result, to avoid selling products to that older, now underserved market. The reverse could also be true, with bias leading the bank to grant loans to younger applicants who are actually engaging in fraud. And once this bias is known, even if the bank makes efforts to correct it, bank professionals may lose confidence in the output of the algorithm; indeed, they may lose confidence in AI models more generally. As a result, they may shy away from important business decisions such as pursuing actual cases of fraud.
Multiple stakeholders are affected by the model bias in this example. If the bias leads the bank to underserve older banking customers, it may alienate that constituency, putting their trust and patronage at stake. It may also jeopardize the trust and business of other customers who become aware of and are offended by the bias, even if they are not directly affected. And because the bias may run afoul of regulatory and statutory requirements such as those in the Equal Credit Opportunity Act, it may damage the trust of regulatory authorities in ways that could result in civil penalties.14 Ultimately, the consequences of this model bias could harm the bank’s reputation and bottom-line performance.
This is just one of many examples of the consequences for decision-making and trust when AI models are unfairly biased (figure 1). The impact of AI model bias is typically not limited to one stakeholder group. On the contrary, the faulty decisions that result most often affect multiple stakeholder groups and can negatively influence their willingness to trust an organization. The context within which the bias takes place (the set of decisions, stakeholders, and behavioral changes that result) can define the stakes and the cost to the organization.
To illustrate the individual character of model bias, we depict a few scenarios showing how model bias might manifest and how decision-making and trust could be affected as a result (figure 1).15
Once an incident of model bias is found, the organization should “get under the hood” to assess the nature of the bias (including its causes), the ways it has already affected decision-making and, ultimately, stakeholder trust, and how to prevent its recurrence. As Chatterbox Labs’ Battersby says, “You want to really get to the root cause as to why you have that bias and what that means within your organization in order to prevent it from occurring again.”16 That said, reacting to a bias already in place is far less desirable than anticipating and preventing the bias from originating at all, or at least catching it before deployment. Ted Kwartler, vice president of Trusted AI at DataRobot, puts it this way: “Finding bias in models is fine, as long as it's before production. By the time you’re in production, you’re in trouble.”17
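In practice, “before production” can be enforced as a gate in the deployment pipeline. The sketch below reuses the disparate_impact_ratio function from the earlier sketch; the threshold and the assumption that model.predict returns 1 for a favorable decision are both illustrative, not a standard API:

```python
FAIRNESS_FLOOR = 0.8  # four-fifths rule of thumb; the threshold is an assumption

def release_gate(model, X_val, group_val):
    """Block promotion to production if the bias check fails."""
    decisions = model.predict(X_val)  # assumed: 1 = favorable outcome
    ratio = disparate_impact_ratio(decisions, group_val)  # from the earlier sketch
    if ratio < FAIRNESS_FLOOR:
        raise RuntimeError(
            f"Bias check failed: disparate impact {ratio:.2f} is below "
            f"{FAIRNESS_FLOOR}. Do not promote this model."
        )
    return True
```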
The following guideposts can help organizations anticipate AI model bias across contexts and deploy AI models in ways that are fair and transparent.
In other words, any solution to the challenge of AI model bias should be holistic, integrating people, process, and technology. No one leg of this three-legged stool is necessarily more important than another. Human judgment is important, as we mentioned. Process provides order and discipline to AI model governance; it includes monitoring for and correcting model bias as part of the sequential steps of operationalizing machine learning models, sometimes referred to as “MLOps.”23 Technology is the third leg of the stool. Without it, the model (and any model bias) would not exist, but technology is also part of the solution: software platforms are now being developed that can help organizations uncover bias and other vulnerabilities and help ensure that a model operates fairly.24
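As one illustration of what the monitoring leg of such a pipeline might look like, the sketch below tracks group outcome rates over a rolling window of live decisions and flags drift in the disparate impact ratio. The window size, threshold, and print-based alert are illustrative assumptions, not a standard MLOps API:

```python
from collections import deque

class BiasMonitor:
    """Rolling fairness check over live decisions."""

    def __init__(self, window=5_000, floor=0.8):
        self.decisions = deque(maxlen=window)  # (favorable, in_group) pairs
        self.floor = floor

    def record(self, favorable, in_group):
        """Log one production decision as it happens."""
        self.decisions.append((favorable, in_group))

    def check(self):
        """Return the current disparate impact ratio, alerting on drift."""
        in_g = [f for f, g in self.decisions if g == 1]
        out_g = [f for f, g in self.decisions if g == 0]
        if not in_g or not out_g or sum(out_g) == 0:
            return None  # not enough signal yet
        ratio = (sum(in_g) / len(in_g)) / (sum(out_g) / len(out_g))
        if ratio < self.floor:
            print(f"ALERT: disparate impact drifted to {ratio:.2f}")  # wire to real alerting
        return ratio
```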
Building trust with stakeholders is a multifaceted, complex challenge. We are all connected. When trust breaks down with one stakeholder, others become aware and may change their behaviors as well.
AI and trust share an inseparable relationship. Trust cannot flourish in an environment that relies on flawed AI, and even the most unbiased AI model can produce decisions that matter very little if they serve an untrusting environment. The primary reason organizations should think about AI model bias is that, more than many issues, bias has the potential to undermine this relationship.
Organizations should meet the challenge of AI model bias with the sense of urgency that such a consequential issue deserves. To some, model bias may seem like an emerging, far-off abstraction. But it is real. And the damage it can cause to stakeholder trust is real, whether organizations focus on it or not.
But there is a path forward. Organizations have at their disposal the tools and resources to address the challenge of AI model bias before it manifests, through a holistic approach that includes education, a common language, and unrelenting awareness. The organization that takes a proactive approach now will likely have a leg up on the one forced to take a reactive approach later.
Trust is the basis for connection. It is built moment by moment, decision by decision, action by action. In an organization, trust is an ongoing relationship between an entity and its varying stakeholders. When performed with a high degree of competence and the right intent, an organization’s actions earn trust with these groups. Trust distinguishes and elevates your business, connecting you with the common good. Put trust at the forefront of your planning, strategy, and purpose, and your customers will put their trust in you. At Deloitte, we’ve made trust tangible, helping our clients measure, manage, and maximize it at every opportunity.