Co-liberative Computing


Resisting AI: An Anti-Fascist Approach to Artificial Intelligence by Dan McQuillan is a thought-provoking book that examines how AI shapes society and politics. McQuillan argues that, far from solving the big problems we face today, AI often makes them worse by deepening inequality and even lending support to authoritarian politics. The book explains how AI works, traces its connections to global issues like austerity and the rise of the far right, and shows why current ethical approaches to AI so often fail. McQuillan proposes a different way forward, centered on fairness and collective well-being: he suggests "workers' councils" and "people's councils" as bodies for making decisions about AI, and emphasizes the need for systems that adapt to change while supporting freedom for everyone. Across its chapters, the book explores the problems with AI today and offers hopeful ideas for building technology that serves the common good.


Operations of AI

Today's AI isn't truly intelligent; it relies on machine learning, which uses data and mathematical optimization to tune a model for a task. It doesn't think or understand; it calculates, improving through trial and error. While it mimics abilities like recognizing faces or playing games, it is a tool, not actual intelligence. Machine learning depends on large datasets, which often misrepresent reality and so produce biased outcomes. These datasets are assembled through undervalued labor, often by marginalized workers, and the demand for them drives surveillance and concentrates power in a few hands.
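To make "calculates and improves through trial and error" concrete, the sketch below (not from the book; the data and learning rate are made up) shows the optimization loop at the heart of machine learning: a single parameter is nudged repeatedly to shrink a numerical error score, and nothing resembling understanding is involved.

    # Minimal sketch of machine learning as optimization (illustrative values).
    # Fit y ~ w*x by repeatedly nudging w to shrink a numerical error score.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # made-up (x, y) pairs

    w = 0.0               # initial guess for the model's one parameter
    learning_rate = 0.05  # step size for each nudge (assumed value)

    for step in range(200):
        # Gradient of the mean squared error of predictions w*x against y
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= learning_rate * grad  # move w in the direction that reduces error

    print(f"learned w = {w:.2f}")  # ends up near 2, the slope hidden in the data

Everything a learned model "knows" is of this kind: numbers adjusted until an error score stops shrinking.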
Deep learning uses neural networks to uncover patterns in data without predefined rules, enabling breakthroughs like self-driving cars and language translation. However, it simplifies complex realities into numbers, risks unfair outcomes, and lacks transparency in its decisions. Training these systems requires vast resources, generating significant carbon emissions and centralizing power with large corporations. AI's reliance on poorly paid workers for data labeling reflects historical inequalities, deepening social and economic divides while raising critical ethical concerns.


Collateral Damage

AI systems may appear powerful, but they are fragile and prone to error. Because they depend on patterns in their training data, they become unreliable in unfamiliar situations: a self-driving car might fail to recognize a tow truck, and small changes to an input can confuse a model entirely. Bias often enters when AI uses indirect measures (proxies) for what it is trying to predict, such as using insurance claims to estimate healthcare need, a choice that has disadvantaged marginalized groups. AI also tends to exploit shortcuts in its training data, leading to failures in real-world applications.
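The healthcare example above rests on a proxy: recorded spending stands in for actual need. A toy simulation with invented numbers shows how this goes wrong when one group has historically had less access to care, and therefore lower recorded costs, despite identical needs:

    import random

    random.seed(0)

    # Toy simulation of proxy bias (invented numbers, not real data).
    # True need is identically distributed in both groups, but group B has
    # had less access to care, so its recorded costs (the proxy) run lower.
    patients = []
    for group, access in (("A", 1.0), ("B", 0.6)):
        for _ in range(1000):
            need = random.random()        # true healthcare need, 0..1
            cost = need * access          # recorded spending reflects access
            patients.append((group, need, cost))

    # A "risk model" that ranks purely by the proxy: past cost
    patients.sort(key=lambda p: p[2], reverse=True)
    flagged = patients[:600]              # top 30% flagged for extra care

    share_b = sum(1 for g, _, _ in flagged if g == "B") / len(flagged)
    print(f"group B share of flagged patients: {share_b:.0%}")  # ~20%, not 50%

The model never sees the variable that matters, only its distorted trace, so equal need produces unequal treatment.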
Attempts to fix these issues usually focus on improving accuracy or reducing bias, but rarely address the deeper societal problems underlying AI systems. Such fixes leave decisions opaque, so errors remain hard to detect even when a model has latched onto spurious patterns in its data. Bias persists even in supposedly fair algorithms: the COMPAS system used in US courts produced far higher false-positive rates for Black defendants, even though its overall accuracy was similar across groups. Industry responses, such as ethical guidelines or improved datasets, tend to avoid structural change and focus narrowly on technical adjustment.
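The COMPAS dispute turned on which number counts as "fair", and a few lines of arithmetic illustrate the gap. The confusion-matrix counts below are hypothetical, not the actual COMPAS figures; they simply show that a classifier can score identical overall accuracy for two groups while wrongly flagging people in one group at more than three times the rate of the other:

    # Hypothetical confusion-matrix counts for two groups of 1,000 people each
    # (invented numbers, not the actual COMPAS data).
    groups = {
        "group A": {"tp": 300, "fp": 200, "tn": 400, "fn": 100},
        "group B": {"tp": 160, "fp": 60, "tn": 540, "fn": 240},
    }

    for name, c in groups.items():
        accuracy = (c["tp"] + c["tn"]) / sum(c.values())
        fpr = c["fp"] / (c["fp"] + c["tn"])  # share of negatives wrongly flagged
        print(f"{name}: accuracy={accuracy:.0%}, false-positive rate={fpr:.0%}")
    # Both groups score 70% accuracy, yet group A's false-positive
    # rate is 33% against group B's 10%.

A vendor can therefore truthfully claim equal accuracy while one group bears most of the wrongful flags; which metric gets optimized is a political choice, not a technical one.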
In practice, AI systems frequently reinforce inequalities by disproportionately affecting less privileged groups. Their development and deployment often reflect the priorities of those in power, perpetuating discrimination in areas like hiring, policing, and border control. AI's lack of transparency exacerbates these problems, making it difficult to challenge unfair decisions. Efforts to add human oversight often fall short, as institutional priorities override individual judgment, and failures are frequently blamed on humans instead of flawed systems.
AI is increasingly promoted as a solution to social problems, but this "tech solutionism" often reinforces existing inequalities. By categorizing people in rigid and harmful ways, it mirrors historical patterns of exclusion while limiting opportunities for imagining better futures. Rather than addressing root causes, AI maintains the status quo and creates new forms of harm. Tackling these challenges requires more than technical fixes—it demands a focus on the broader societal systems that shape and are shaped by AI.


AI Violence

AI mimics science by using data to make predictions, but unlike science, it doesn't seek to explain how things work. Instead, it reduces complex realities into simple data points to optimize outcomes. While it claims to provide neutral insights, its models reflect the biases in their design and the systems they serve, shaping how people are categorized and treated. By presenting itself as unbiased, AI reinforces existing power structures and dismisses alternative perspectives, making its social and political impacts harder to see.
Under neoliberal systems, AI amplifies precarious conditions by turning people into data to exploit. Gig workers on platforms like Uber face constant monitoring and insecurity, with risks and costs pushed onto them. AI enforces strict control, increases stress, and removes moments of relief, leaving workers in physically and emotionally draining conditions. Despite its sophistication, AI deepens inequality and intensifies exploitation.
AI thrives on speculation, treating data as a financial asset and creating cycles of uncertainty. It links unrelated data, fostering paranoia and insecurity, as seen in algorithms amplifying conspiracies on social media or making opaque judgments in other areas of life. This speculative approach turns personal and social issues into commodities, increasing instability and reshaping how people see themselves and others.
In welfare systems, AI's use can harm vulnerable people. Algorithms have wrongly denied benefits, targeted marginalized communities, and amplified discrimination, as in cases such as the Netherlands' childcare benefits scandal and Australia's Robodebt scheme. These systems lack accountability, turning bureaucracies into tools of "administrative violence" that apply rules rigidly and hurt those in need. By appearing neutral, AI hides the social harm it causes and reinforces inequality.
AI also continues historical patterns of exclusion tied to race, class, and gender. It classifies and sorts people, deepening existing divisions and justifying inequality under the guise of data-driven decisions. Combined with outdated ideas like genetic determinism, AI risks reviving harmful ideologies that blame individuals for social disparities, echoing practices like eugenics. These systems, marketed as scientific, reinforce discrimination and support policies that benefit those in power while marginalizing others.


Necropolitics

Governments use AI to manage public services during austerity, where resources are limited and demand for help rises. After the 2008 financial crisis, budget cuts led to increased poverty and poor health. Instead of addressing systemic problems, AI was introduced to automate decisions about who qualifies for welfare, often reducing access and making processes less transparent. These systems prioritize cost-cutting over fairness, intensifying inequality and social instability. Automated decisions lack accountability, reshaping society while avoiding public debate.
AI can also create "states of exception", where normal rules don't apply, excluding people from services based on arbitrary criteria. Algorithms often make critical decisions without oversight, such as flagging individuals as risks or denying benefits based on patterns that aren't explained. This process allows governments to justify harsh policies by blaming individuals rather than addressing systemic issues.
In policing, AI-driven systems like predictive algorithms reinforce biases, targeting marginalized communities and increasing surveillance. These practices create a "tech-to-prison pipeline", disproportionately affecting certain groups and expanding control rather than reducing crime. Similarly, in workplaces and social services, AI-driven surveillance limits opportunities, mirroring exclusionary practices like redlining.
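The feedback loop behind "predictive" policing can be sketched in a toy model (assumed numbers, loosely in the spirit of published runaway-feedback analyses of such systems): patrols go wherever past records show the most incidents, incidents are only recorded where patrols are present, and a tiny initial imbalance hardens into near-total concentration on one district even though the underlying rates are identical:

    # Toy model of a predictive-policing feedback loop (assumed numbers).
    # Both districts have the SAME underlying incident rate; incidents are
    # only recorded where officers are sent, and officers go wherever the
    # most incidents have been recorded so far.
    true_rate = 0.5                 # identical in both districts
    records = [11.0, 10.0]          # a tiny historical imbalance to start
    patrols_per_day = 10

    for day in range(200):
        hotspot = 0 if records[0] >= records[1] else 1
        records[hotspot] += patrols_per_day * true_rate  # patrols create records

    share = records[0] / sum(records)
    print(f"district 0 share of recorded incidents: {share:.0%}")  # ~99%

The data the system "learns" from is partly a record of its own past deployments, so the prediction confirms itself.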
AI's role in managing resources reflects deeper inequalities. It reinforces the idea of scarcity, rationing access rather than addressing shared needs. In times of crisis, such as during the COVID-19 pandemic, these systems magnify harm, leaving vulnerable populations unprotected. Combined with rising far-right ideologies, AI risks becoming a tool for exclusion, managing crises like climate change through segregation rather than solidarity.
Ultimately, AI doesn't just automate decisions; it amplifies existing power structures, prioritizing efficiency over justice. To counter these risks, a shift is needed toward systems that value care, fairness, and inclusivity, resisting AI's role in perpetuating inequality and exclusion.


Post-Machinic Learning

This chapter critiques AI's claims of objectivity, showing how it mirrors traditional science by reinforcing the interests of dominant groups. Drawing on feminist and post-colonial ideas like standpoint theory, it argues that knowledge is shaped by social contexts and power dynamics. Standpoint theory emphasizes the value of perspectives from marginalized groups and calls for questioning AI's assumptions, investigating who benefits from its systems, and addressing the structural forces behind its development. Feminist science promotes accountability, mutuality, and centering marginalized voices to challenge algorithmic harms and build fairer systems.
Post-normal science complements this approach by involving diverse communities in addressing complex problems like AI. It challenges rigid reliance on data and optimization, favoring values and collaborative decision-making. Feminist new materialism further critiques AI's tendency to reinforce divisions by showing how technologies shape, not just reflect, reality. This perspective encourages focusing on relationships and processes to disrupt harmful patterns and imagine new possibilities.
To move beyond AI's rigid predictions, critical pedagogy fosters collective learning, questioning, and imagining alternatives. This approach shifts focus from maintaining the status quo to envisioning regenerative, community-driven systems. By rejecting AI's narrow frameworks, we can create solutions rooted in care and collaboration.
Viewing AI as a "matter of care" highlights the relational work and marginalized labor that sustain society. Unlike AI's focus on abstraction and optimization, care prioritizes understanding, accountability, and mutual responsibility. This approach challenges AI's inherent exclusions and promotes organizing structures built on solidarity, fostering systems that reflect the values of fairness and collective well-being.


People’s Councils

This chapter highlights mutual aid as a caring, collective response to social challenges, focusing on people helping each other without conditions. It emphasizes shared support and interconnectedness, countering the isolation and inequality reinforced by AI. During crises like COVID-19, mutual aid addressed urgent needs that institutions ignored. Solidarity complements this by uniting people to fight systems that create scarcity and inequality. Together, mutual aid, solidarity, and "commoning"—reclaiming shared resources and collective decision-making—offer a framework for resisting AI's harms and fostering a more inclusive society.
Tech worker activism has grown as employees challenge unethical AI uses, such as facial recognition and military projects. Workers' councils, which prioritize collective decision-making and direct democracy, provide a way to reshape workplaces and technology itself. By connecting with broader community movements, workers and communities can collaboratively push for justice and sustainability over profit.
People's councils extend this grassroots organizing by empowering those directly affected by AI. These democratic groups focus on care, inclusion, and solidarity, resisting AI's exclusionary practices. Inspired by real-world examples, they aim to challenge systemic inequality while aligning with broader social movements.
Drawing from the Luddite movement, the text argues for resisting harmful technologies that exploit and harm communities. Far from being anti-technology, Luddites were skilled workers fighting tools that disrupted their lives. Similarly, modern resistance to AI must combine collective action, grassroots organization, and the refusal to accept exploitative systems. The book concludes with an anti-fascist approach to AI, addressing systemic inequalities and refusing harmful technologies outright. Through grassroots organizing, solidarity, and care, communities can reclaim power and create equitable alternatives to AI-driven systems, paving the way for a fairer future.


Anti-Fascist AI

An anti-fascist approach to AI focuses on resisting its harmful effects, especially its role in reinforcing systemic inequalities and exclusion. AI often inherits colonial, patriarchal, and far-right ideologies, so this approach centers on challenging those structures. It prioritizes decolonial and feminist perspectives, addressing how AI can oppress marginalized groups through racialized exclusion and gender-based control.
Key tactics include creating workers' and people's councils—self-organized groups where communities and workers take democratic control of AI and its applications. These councils promote collective decision-making, solidarity, and mutual care, opposing top-down systems that normalize inequality. Anti-fascist AI also involves rejecting technologies used to harm marginalized groups, such as border enforcement or surveillance systems, while building alternatives that prioritize justice, equality, and community empowerment.
The book emphasizes structural renewal, proposing a shift away from exploitative systems toward technologies designed for social and ecological well-being. This includes solidarity economies and commoning—practices that reclaim shared resources and foster collaborative, sustainable communities. By integrating care, cooperation, and democratic participation, these approaches aim to transform AI into a tool for collective good.
A new apparatus is needed to replace current AI systems, focusing on adaptable, decentralized, and sustainable technologies. This vision supports local decision-making, mutual aid, and social autonomy, ensuring technology serves equality and emancipation rather than reinforcing inequality. By prioritizing care and community-led change, this approach envisions a future where technology empowers people and fosters a just society.