Atlas of AI by Kate Crawford examines AI's hidden costs and its impacts on society, politics, and the environment. The book opens with the story of Clever Hans, a horse believed to solve arithmetic problems but actually responding to subtle human cues, to question how we perceive intelligence and the biases behind that perception. It ties this to the history of AI, examining whether machines can truly replicate human thought. AI, the book argues, is neither purely "artificial" nor "intelligent": it relies on natural resources, human labor, and societal structures. It reflects existing power dynamics and shapes global politics, economies, and cultures. Using the atlas as a metaphor, the book presents AI as a multifaceted system shaped by economic, political, and environmental forces.
The book explores AI's impact across areas like resource extraction, labor exploitation, data privacy, military applications, and classification systems that enforce inequalities. Each chapter unpacks how AI operates as both infrastructure and a form of power, with real-world consequences for society and the planet. Ultimately, the book challenges dominant AI narratives and advocates for justice movements addressing labor rights, racial equity, climate change, and data protection. By critically examining AI's role in shaping power and politics, it envisions a more equitable and sustainable future.
Earth
This chapter explores the hidden material and environmental costs behind AI, drawing parallels between historical mining practices and modern technological production. It begins with a journey from Silicon Valley to Silver Peak, Nevada, a key site for lithium mining, highlighting the physical and environmental realities of AI's supply chain. The narrative emphasizes the exploitation of natural resources, labor, and ecosystems involved in producing technologies like AI, electric vehicles, and cloud computing.
It delves into the global impact of mineral extraction, from lithium in Nevada to cobalt in the Congo and rare earth elements in Inner Mongolia, showcasing the environmental devastation, labor exploitation, and geopolitical tensions tied to these processes. The chapter also examines the significant energy consumption and carbon emissions of AI systems, dispelling the myth of "clean tech" while underscoring the logistical systems that sustain the global movement of AI-related materials.
By framing AI as a "megamachine", reliant on vast industrial infrastructures and opaque supply chains, the chapter connects the environmental and social consequences of extraction to the broader ethical challenges posed by AI. It calls for greater awareness of these hidden costs and the urgent need to address the inequalities and environmental harm perpetuated by the tech industry.
Labor
This chapter examines the impact of automation, AI, and surveillance on modern workplaces, with Amazon as its central example. It highlights how technology increasingly monitors and controls workers, prioritizing efficiency and profits over employee well-being. Drawing parallels to past industrial systems like Ford's assembly lines and Taylor's scientific management, it emphasizes the dehumanization and exploitation of labor in today's tech-driven industries.
Amazon's fulfillment centers exemplify this trend, with strict productivity metrics, surveillance, and algorithmic management causing physical and psychological strain on workers. The concept of "ghost work" is also explored, revealing how underpaid human labor sustains the illusion of AI automation. Beyond physical workplaces, the chapter also critiques the privatization and control of time, both historically and in modern systems like Google's TrueTime protocol.
Despite these challenges, the chapter highlights growing labor resistance, from Amazon workers protesting oppressive conditions to broader solidarity among workers across sectors. It underscores the importance of addressing exploitation in all forms of labor, advocating for fairer working conditions and a collective effort to shape the future of work.
Data
This chapter explores the practices and implications of data collection and usage in AI development, emphasizing the shift from consent-driven methods to mass data extraction. It highlights how mug shots, internet-scraped images, and public datasets are used to train AI systems, often without regard for privacy, ethics, or the social context of the data. These practices perpetuate power imbalances, treating individuals' data as neutral resources while erasing their personal histories and rights.
Key examples include NIST's mug shot datasets and projects like ImageNet, which crowdsourced its labeling while inheriting biases and ethical issues. The text critiques the notion that "more data is better", likening data to commodities like oil and emphasizing how this mindset justifies ongoing extraction and surveillance. It warns of the privatization of public knowledge and the ethical detachment in AI research, which often prioritizes technical efficiency over accountability.
The narrative underscores the socio-environmental costs of data collection, drawing attention to the exploitation of individuals and communities, the lack of transparency in AI development, and the commodification of data within neoliberal frameworks.
Classification
This chapter highlights the dangers of biased classification systems in AI, drawing parallels with historical practices like Samuel Morton's use of skull measurements to justify white supremacy. It shows how AI systems, like Amazon's hiring algorithms or datasets such as ImageNet, perpetuate inequalities by embedding societal biases into their classifications. These systems often treat complex social constructs like race and gender as fixed categories, ignoring their fluid and socially constructed nature.
Attempts to fix bias, such as IBM's Diversity in Faces dataset, often miss deeper issues, focusing on technical adjustments while reinforcing harmful ideologies like biological determinism. The use of training datasets, often created without consent or acknowledgment of historical injustices, reflects broader power imbalances, commodifying human identities and experiences.
The chapter argues that addressing these biases requires more than technical fixes. It calls for recognizing the historical and social contexts behind classification systems, questioning the authority and intent of those designing them, and understanding the societal impacts of their use. Ultimately, combating bias in AI demands critical reflection, ethical accountability, and collective action to challenge systems that reduce human complexity to rigid, often harmful, categories.
Affect
This chapter explores the history and development of affect recognition systems, tracing their roots to psychologist Paul Ekman's controversial theory that facial expressions universally convey emotions. Despite criticisms of his methodology and assumptions, Ekman's work heavily influenced the billion-dollar emotion detection industry, which now uses AI to interpret emotions from facial expressions in areas like hiring, security, and law enforcement.
The origins of affect recognition are linked to outdated ideas like physiognomy, which falsely claimed that a person's character could be inferred from their appearance. Ekman's Facial Action Coding System (FACS) standardized facial expression analysis, paving the way for machine learning applications. However, critics argue that emotions are too complex and culturally nuanced to be accurately captured by facial expressions alone. Research shows these systems often reflect racial and cultural biases, misinterpreting emotions and perpetuating harmful stereotypes.
Despite doubts about their scientific validity, emotion recognition technologies remain widespread, driven by corporate profit and institutional interests. The text warns that oversimplifying emotions into fixed categories risks reducing the richness of human expression and amplifying biases in critical contexts like job interviews, law enforcement, and education. It calls for more nuanced approaches and resistance to the unchecked automation of emotion detection.
State
This chapter reveals how AI development has been deeply influenced by military and intelligence priorities, as shown in documents from the Snowden archive. Programs like "Treasuremap" and "Foxacid" highlight the use of AI for surveillance and control, reflecting a longstanding relationship between AI research and state power. Military objectives have shaped key technologies like computer vision and autonomous systems, which now extend into commercial and municipal applications, raising privacy and governance concerns.
Initiatives like Project Maven, which integrated AI into military operations, sparked ethical debates when companies like Google faced internal protests for their involvement. However, tech companies continue to collaborate with governments, blurring the lines between corporate and state governance. Tools developed for military use, such as Palantir's data analysis systems and Vigilant Solutions' surveillance technologies, are now widely used by local police and immigration agencies, contributing to a growing surveillance network.
The chapter also critiques the use of AI in welfare systems and refugee monitoring, where algorithmic errors have caused harm. It highlights concerns about privacy, discrimination, and the growing influence of corporations in governance, warning of a shift toward algorithmic systems that reinforce existing power dynamics. This convergence of state and corporate surveillance reflects a new era of governance shaped by AI and raises questions about accountability and equitable practices.