Unmasking AI by Joy Buolamwini explores how AI can create and reinforce societal unfairness. Moving through chapters such as "Invisible Inequalities", "The Faces Behind the Data", and "Blueprints for Accountability", Buolamwini examines bias in algorithms, the flaws in the data used to train AI, and the ways these systems affect people's lives, and she offers solutions for building fairer, more inclusive technologies.
Idealistic Immigrant
Joy Buolamwini's journey began with a childhood steeped in art and science, shaped by her artist mother and scientist father. This foundation ignited her curiosity and led her to explore robotics, programming, and social robots, with inspiration from Cynthia Breazeal's robotics work at MIT. Despite facing challenges like biased facial recognition technology during her studies at Georgia Tech, Oxford, and MIT, Joy remained steadfast in her passion for cutting-edge innovation.
At MIT's Media Lab, she joined the Civic Media group, focusing on the societal impact of technology. Navigating bias as an outsider, both in her environment and in the technology she studied, Joy emphasized the need to address AI's inherent flaws and their impact on marginalized communities. The rising use of flawed facial recognition systems by law enforcement deepened her commitment to exposing algorithmic bias and shifted her focus toward the social inequalities embedded in technology.
Joy coined the term "coded gaze" to describe bias in AI systems and began presenting her findings through academic rigor and creative platforms. Collaborations on an art show, the documentary The Coded Gaze, and a TEDx talk amplified her mission to unmask algorithmic bias. These pivotal moments solidified her resolve to create inclusive AI and confront the broader societal issues tied to technology.
Curious Critic
Joy Buolamwini continues her mission to expose and challenge biases in AI systems, reflecting on the societal resistance she faced after her viral TED Talk. She connects this backlash to historical discriminatory defaults, like light-skinned standards in photography, showing how AI systems inherit biases from their training data. Through examples such as the "white mask" test, Joy underscores how these biases manifest in critical areas like healthcare and policing, emphasizing the urgent need to expose the "coded gaze"—a reflection of systemic discrimination embedded in technology.
Her advocacy intensified as she received countless stories of harm caused by biased facial recognition systems, from wrongful arrests to inequities in e-proctoring tools. Joy categorized these failures, highlighting how flawed training data perpetuate discrimination, particularly for marginalized groups. She critiques the misconception that AI systems function flawlessly and calls for ethical oversight to prevent their misuse and mitigate their harmful impact.
Guided by mentors like Timnit Gebru and supported by her thesis committee, which she dubbed the "Guardians of the Algorithmic Justice League", Joy delved deeper into algorithmic bias. Together, they investigated how AI systems classify gender differently based on skin type, laying the foundation for the Algorithmic Justice League to promote more equitable AI systems and prevent harm.
Joy's work further critiqued the demographic imbalances in benchmark datasets, which she labeled "pale male datasets" due to their skewed representation. Using Kimberlé Crenshaw's concept of intersectionality, she exposed how these datasets disproportionately excluded women of color, reinforcing systemic power imbalances she called "power shadows". To dismantle these biases, Joy advocates for creating inclusive datasets and establishing equitable standards for evaluating AI systems, paving the way for transformative change in AI development.
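Her intersectional critique suggests a concrete check that is easy to sketch in code: given a face benchmark annotated with gender and skin type, tabulate how many images fall into each intersectional subgroup. The example below is purely illustrative, with invented records and proportions rather than figures from the book or any real benchmark; it simply shows how a "pale male" skew becomes visible once composition is disaggregated instead of reported in aggregate.

```python
from collections import Counter

# Hypothetical benchmark records: (perceived gender, binned skin type).
# The proportions are invented to illustrate a skewed composition and
# are not statistics from any real dataset.
records = (
    [("male", "lighter")] * 460
    + [("female", "lighter")] * 290
    + [("male", "darker")] * 160
    + [("female", "darker")] * 90
)

counts = Counter(records)
total = len(records)

# An aggregate figure can look tolerable on its own...
female_share = sum(n for (g, _), n in counts.items() if g == "female") / total
print(f"female share overall: {female_share:.0%}")  # 38%

# ...while disaggregating by gender x skin type exposes the imbalance:
# darker-skinned women make up only 9% of this hypothetical benchmark.
for (gender, skin), n in sorted(counts.items()):
    print(f"{gender:>6} / {skin:<7}: {n / total:.0%}")
```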
Rising Researcher
In 2017, Joy Buolamwini centered her MIT research on uncovering biases in AI gender classification systems, particularly with respect to skin type. Frustrated by the lack of diverse datasets, she created her own, sourcing images of parliamentarians from three African and three European countries and categorizing them using the Fitzpatrick skin type scale. While the scale offered a more objective measure than racial or ethnic labels, the approach raised ethical concerns about data usage and privacy. Joy's work highlighted the subjectivity inherent in data labeling and the need for more inclusive datasets, aiming to challenge existing benchmarks and promote ethical practices in AI development.
Her exploration deepened as she studied the subjective nature of labeling in machine learning, emphasizing how cultural and social biases shape what counts as "truth" in AI systems. Working with her dataset, she encountered challenges in assigning binary gender categories and addressing colorism, revealing how historical and personal contexts complicate classification. Drawing on feminist scholarship, she argued for a critical examination of the "arbiters of truth" in AI to address systemic biases and achieve greater algorithmic justice.
The findings of her landmark study, Gender Shades, further solidified her contributions to AI ethics. Using her Pilot Parliaments Benchmark dataset, she tested gender classification systems from IBM, Microsoft, and Face++, uncovering significant biases. All three systems performed best on lighter-skinned male faces and worst on darker-skinned female faces, with IBM's system showing a 34.4 percentage-point accuracy gap between the two groups. Joy highlighted the ethical implications of such disparities, as misclassification can lead to serious consequences like wrongful arrests. Her study prompted IBM to acknowledge the problem and improve its models, showcasing the value of transparency, accountability, and ongoing evaluation in AI development.
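The core methodological move in Gender Shades is disaggregated evaluation: scoring a classifier separately for each intersectional subgroup rather than reporting a single overall accuracy. Below is a minimal sketch of that idea in Python; the prediction records are invented for illustration, and the actual study's protocol, with the Pilot Parliaments Benchmark and three commercial APIs, is far more involved.

```python
from collections import defaultdict

# Each record: (true gender, predicted gender, binned skin type).
# Values are invented for illustration; they are not Gender Shades results.
results = [
    ("male", "male", "lighter"), ("male", "male", "lighter"),
    ("female", "female", "lighter"), ("female", "male", "lighter"),
    ("male", "male", "darker"), ("male", "female", "darker"),
    ("female", "male", "darker"), ("female", "male", "darker"),
]

# Tally correct and total predictions per subgroup (true gender x skin type).
tally = defaultdict(lambda: [0, 0])  # subgroup -> [correct, total]
for true, pred, skin in results:
    stats = tally[(true, skin)]
    stats[0] += int(pred == true)
    stats[1] += 1

# Per-subgroup accuracy, sorted from best- to worst-served group.
accuracy = {group: correct / total for group, (correct, total) in tally.items()}
for group, acc in sorted(accuracy.items(), key=lambda kv: -kv[1]):
    print(f"{group}: {acc:.0%}")

# The headline metric: the gap between the best- and worst-served subgroups.
gap = max(accuracy.values()) - min(accuracy.values())
print(f"accuracy gap: {gap:.1%}")
```

Reporting the gap alongside subgroup accuracies, rather than one blended number, is what made the study's disparities impossible to average away.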
Intrepid Poet
Joy Buolamwini sought to bring AI biases to the public's attention by creating impactful projects like the Black Panther Face Scorecard and the poem AI, Ain't I a Woman?, paired with a video illustrating how AI misclassifies prominent Black women. Drawing inspiration from historical figures like Sojourner Truth, Joy connected these present-day harms to broader systemic injustices, urging widespread engagement in the fight for algorithmic justice.
Her advocacy reached global stages, including a presentation in Brussels, where she highlighted the exclusionary nature of elite tech circles and the exploitation of vulnerable populations. These encounters strengthened her resolve to advocate for equitable AI systems and to ensure that diverse voices, especially from the Global South, are included in shaping technology's future. At the World Economic Forum in Davos, Joy presented her findings to influential figures, later expanding her Gender Shades study with Deborah Raji to include companies like Amazon. Their work exposed significant biases in Amazon's systems, sparking industry backlash and illustrating both the difficulty of confronting powerful tech companies and the necessity of collective advocacy for ethical AI development.
Joy also took her fight directly to affected communities, supporting Brooklyn tenants opposing a facial recognition system in their predominantly Black and Brown apartment complex. By crafting an amicus letter and engaging with tenants, she made AI harms tangible and actionable, empowering marginalized groups to resist harmful technologies. This experience reinforced her commitment to bridging academia and grassroots activism. In May 2019, Joy testified before Congress, advocating for third-party testing of facial recognition systems and a moratorium on government use. Her bipartisan approach resonated with lawmakers, influencing legislative actions and emphasizing the urgency of addressing AI's civil rights risks. Despite the personal toll, this engagement demonstrated the power of working with policymakers to push for systemic change.
Joy's story gained global visibility through Coded Bias, a documentary by filmmaker Shalini Kantayya. Premiering at Sundance in 2020 and later released on Netflix, the film highlighted algorithmic harms and the need for diverse representation in tech. Garnering critical acclaim and an Emmy nomination, it amplified Joy's advocacy and underscored the transformative power of storytelling in driving awareness and action for algorithmic justice.
Just Human
Joy Buolamwini's journey through the final stages of her PhD mirrored the resilience of figures like Simone Biles, prioritizing mental health amid immense pressure. Taking a break to recalibrate, Joy drew inspiration from the sacrifices of Black women who fought for education, ultimately finding a renewed purpose to complete her dissertation and champion algorithmic justice. Supported by mentors and family, her graduation marked a triumphant culmination of her work, symbolized by the presentation of a Wonder Woman sword from her committee.
Leveraging her platform to advocate for visibility and inclusion, Joy collaborated with Olay on the #DecodeTheBias campaign, which merged beauty and technology to spotlight AI biases and inspire women in STEM. While the campaign showcased her advocacy, setbacks like a poorly handled 60 Minutes feature highlighted the ongoing challenges of fair representation in media. Joy's partnership with Olay included pushing for ethical practices, such as conducting an algorithmic audit of their AI tools, demonstrating the power of transparency and accountability in addressing systemic biases.
Her advocacy extended to the national stage when she participated in the White House launch of the Blueprint for an AI Bill of Rights, a framework aimed at addressing AI harms like discrimination and data misuse while emphasizing inclusive governance and global collaboration. Joy's engagement underscored the importance of diverse voices and the collective power of advocacy, reaffirming her belief that anyone, regardless of background, can contribute to a just AI future. This period of her life highlighted her resilience, strategic alliances, and unwavering commitment to fairness and equity in AI systems.