Algorithmic bias

Artificial intelligence (AI) is often touted as a bias-free tool for decision making, used widely by corporations and government agencies to inform decisions about people’s everyday lives, from their consumer options and job prospects to their medical treatments and jail terms. Yet increasing evidence indicates that these algorithms often produce decisions that disadvantage racial and ethnic minorities (e.g., Caliskan et al., 2017; Obermeyer et al., 2019; Zou & Schiebinger, 2018).

Algorithmic bias is, to some, an oxymoron: if an algorithm merely reflects the data, how could it possibly be biased? Despite the perception that algorithms are veridical expressions of reality, nearly every aspect of their creation, implementation, and consumption is guided by human decisions and thus vulnerable to human social and cognitive biases. Furthermore, because these human decision points are often obscured, bias in algorithmic outputs may be used unwittingly and without correction, a process that propagates existing societal prejudices and inequities.

Several projects in the Amodio Lab investigate how algorithmic biases form (e.g., in model training) and how they produce prejudiced decision making in human users. We are broadly interested in how societal-level biases are recapitulated in algorithms and transmitted to individual users, and how their effects on individual-level behavior then reinforce and propagate these existing biases in a self-perpetuating cycle.

Gender bias in Google image search

Lab research by Vlasceanu and Amodio (2022) examines how gender bias at the societal level is reflected in internet search algorithms, and how this bias, in turn, produces gender-biased cognition and decision making in users. In a series of studies, we demonstrate that gender bias in a widely used internet search algorithm reflects the degree of gender inequality existing within a society. We then find that exposure to the gender bias patterns in algorithmic outputs can lead people to think and act in ways that reinforce societal inequality, suggesting a cycle of bias propagation between society, AI, and users. These findings call for an integrative model of ethical AI that combines human psychology with computational and sociological approaches to illuminate the formation, operation, and mitigation of algorithmic bias.
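As a rough illustration of how such a society-to-algorithm relationship might be quantified (not the actual method or data of Vlasceanu & Amodio, 2022), the sketch below computes a simple gender-bias score from hand-coded image search results collected in several countries and correlates it with a national gender-inequality index. The country names, label counts, and index values are placeholders invented for illustration.

```python
# Hypothetical sketch: relating gender skew in image search results to a
# societal gender-inequality index. All values are illustrative placeholders.
import numpy as np

def gender_bias_score(gender_labels):
    """Proportion of male-labeled images among retrieved results (0.5 = parity)."""
    labels = np.asarray(gender_labels)
    return np.mean(labels == "male")

# Placeholder: hand-coded gender labels for the top image results of a
# gender-neutral query, collected separately in three (fictional) countries.
results_by_country = {
    "CountryA": ["male"] * 68 + ["female"] * 32,
    "CountryB": ["male"] * 55 + ["female"] * 45,
    "CountryC": ["male"] * 50 + ["female"] * 50,
}

# Placeholder national gender-inequality scores (higher = more unequal).
inequality_index = {"CountryA": 0.42, "CountryB": 0.21, "CountryC": 0.05}

countries = list(results_by_country)
bias = [gender_bias_score(results_by_country[c]) for c in countries]
inequality = [inequality_index[c] for c in countries]

for c, b in zip(countries, bias):
    print(f"{c}: male share of search results = {b:.2f}")

# Correlate search-result bias with societal inequality across countries.
r = np.corrcoef(bias, inequality)[0, 1]
print(f"Correlation with inequality index: r = {r:.2f}")
```

In practice such an analysis would use many more countries and queries, and the bias score and inequality index would come from real data rather than the toy values above.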

Racial bias in face classification algorithms

Face classification algorithms, in particular, are notoriously inaccurate at classifying racial minority faces, leading to shocking incidents such as the labeling of Black people as “gorillas” and to predictive policing systems whose misidentifications have resulted in the arrest of innocent Black men. Although AI aspires to improve social outcomes by removing bias from decision processes, it appears that these algorithms, often trained on human data, can recapitulate and propagate the existing biases of individuals and social systems.

To date, research on face classification bias has focused primarily on dataset quality, showing that classification bias emerges from a lack of diversity (i.e., >90% White) in the face sets used to train them (Kärkkäinen & Joo, 2019). Research in our lab finds that models trained on low-diversity face sets produce a human-like bias in race classification, such that Black-White biracial faces are more likely to be classified as Black. We also find that this pattern is eliminated when models are trained on racially balanced datasets (Berg & Amodio, in prep). In our current work, we investigate the degree to which this human-like pattern of bias reflects the influence of human prejudices or merely the computational consequences of an imbalanced dataset.
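For illustration only, the sketch below shows one way this kind of classification bias could be audited: comparing how often a model trained on a low-diversity face set versus a model trained on a racially balanced set classifies the same Black-White biracial faces as Black. The prediction arrays are placeholder values, not results from Berg & Amodio (in prep) or any real model.

```python
# Hypothetical sketch: auditing two face-classification models for the
# biracial-classification bias described above. The prediction lists below
# are illustrative placeholders standing in for real model outputs on a
# common set of Black-White biracial face images.
import numpy as np

def black_classification_rate(predicted_labels):
    """Proportion of biracial test faces the model classifies as 'Black'."""
    preds = np.asarray(predicted_labels)
    return np.mean(preds == "Black")

# Placeholder predictions on the same 100 biracial faces from:
# (a) a model trained on a low-diversity (>90% White) face set, and
# (b) a model trained on a racially balanced face set.
preds_low_diversity_model = ["Black"] * 72 + ["White"] * 28
preds_balanced_model      = ["Black"] * 51 + ["White"] * 49

for name, preds in [("low-diversity training", preds_low_diversity_model),
                    ("balanced training", preds_balanced_model)]:
    rate = black_classification_rate(preds)
    print(f"{name}: {rate:.0%} of biracial faces classified as Black")
```

A rate well above 50% under low-diversity training, shrinking toward parity under balanced training, would be the signature of the human-like bias described above; disentangling whether that signature reflects human prejudice in the data or purely computational effects of imbalance is the focus of our current work.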