Co-liberative Computing


The Eye of the Master by Matteo Pasquinelli delves into the history and impact of AI, revealing how it mirrors and extends systems of labor and power from the industrial age. The book argues that AI is not simply about replicating intelligence but about automating human knowledge and behavior rooted in historical practices like the division of labor. It highlights how AI development depends on the often invisible work of marginalized groups and reflects social inequalities rather than being a neutral or purely technical innovation. By tracing the evolution of AI through the industrial and information ages, Pasquinelli challenges myths of "superhuman" intelligence and emphasizes that AI encodes social hierarchies and collective human labor, urging a critical examination of its historical and social roots.


The Material Tools of Algorithmic Thinking

This chapter explores the historical and cultural roots of algorithmic thinking, showing how it has evolved from ancient practices to modern technology. It highlights the Agnicayana ritual, an ancient Hindu practice that symbolically reconstructs the god Prajapati's body using precise, step-by-step methods akin to algorithms. Donald Knuth's work connects such practices to the broader history of algorithms, tracing their use in problem-solving across ancient civilizations and everyday tasks like trade, cooking, and law.
The chapter emphasizes that numbers and algorithms were not abstract from the start but emerged from practical activities like counting, trading, and organizing resources. The term "algorithm" itself originates from the Persian scholar Al-Khwarizmi, whose work introduced Hindu-Arabic numerals to Europe, revolutionizing calculations. Later, mechanical devices like Pascal's calculator and Babbage's Difference Engine automated calculations, eventually leading to modern computing with binary systems and machine learning algorithms.
The evolution of algorithms reflects two key shifts: the adoption of Hindu-Arabic numerals for efficient trade and the transition to binary systems for automated computation. These changes illustrate how algorithmic thinking, rooted in practical problem-solving, has continually adapted to meet human needs.
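To make the idea of a step-by-step procedure concrete, here is a minimal sketch (in Python, my own illustration rather than anything from the book) of the repeated-division method that rewrites a Hindu-Arabic decimal numeral in binary, the kind of fixed recipe that mechanical and later electronic computers were built to automate.

    # Illustrative sketch, not from the book: converting a decimal numeral to
    # binary by repeated division, a fixed sequence of elementary steps.
    def to_binary(n: int) -> str:
        """Return the binary representation of a non-negative integer."""
        if n == 0:
            return "0"
        bits = []
        while n > 0:
            bits.append(str(n % 2))  # the remainder is the next binary digit
            n //= 2                  # integer division shifts to the next place value
        return "".join(reversed(bits))

    print(to_binary(42))  # prints "101010"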


Babbage and the Mechanisation of Mental Labor

This chapter explores the evolution of computation in early 19th-century England, where the term "computer" referred to people, often women, who performed manual calculations for organizations like the Navy. Charles Babbage sought to automate this work with his Difference Engine, designed to produce the accurate logarithmic tables critical for navigation and British maritime power. While the engine wasn't programmable and was limited to specific calculations, it represented a step toward modern computing.
Babbage's work reflected industrial priorities rather than academic theory. Influenced by the Jacquard loom's use of punch cards for patterns, he linked automation to labor organization, viewing machines as extensions of industrial knowledge. His ideas about dividing labor into smaller tasks to increase efficiency influenced both the design of his machines and broader economic strategies.
Ada Lovelace contributed significantly to Babbage's vision, writing the first documented algorithm for his Analytical Engine and imagining machines capable of symbolic manipulation beyond numbers. She recognized their limits, noting they could only follow instructions, not create new knowledge. Despite her forward-thinking insights, her contributions were initially overshadowed.
Babbage's efforts emphasized the connection between computation and labor management, laying the groundwork for modern computing and its role in optimizing productivity. His work reflected the industrial mindset of his era, prioritizing efficiency and control, themes that continue to resonate in discussions of technology and automation today.


The Machinery Question

This chapter explores the historical debate on the relationship between technology, labor, and knowledge, starting with the Industrial Revolution. While machines symbolized progress, they also caused job losses and deepened social divisions, separating skilled and unskilled work and mental from manual labor. Workers resisted these changes, leading to the Machinery Question, a public debate on how machines affected society, labor, and wages. Thinkers like Marx argued that it wasn't the technology itself but the division of labor and exploitation of workers' knowledge that drove capitalism.
Educational movements like the Mechanics' Institutes emerged to teach workers about machinery, blending practical skills with applied science. Socialists like William Thompson and Thomas Hodgskin emphasized that knowledge was fundamental to labor productivity and economic growth. They critiqued how machines often deskilled workers, turning them into tools rather than empowering them.
The chapter highlights the intertwined evolution of machines and science, showing how tools like steam engines shaped scientific understanding, such as thermodynamics. This dynamic continues today, with AI acting as a modern extension of the collective knowledge of workers. The historical debate remains relevant as AI transforms labor and social structures, echoing earlier concerns about automation and its impact on society.


The Origins of Marx's General Intellect

This chapter explores how knowledge, technology, and labor have shaped each other, drawing on historical debates like the "March of Intellect" and Marx's ideas on machinery and collective labor. The "March of Intellect" began as a movement promoting public education and technology for societal progress but faced backlash, including satire like William Heath's 1828 cartoon mocking the democratization of knowledge. Marx later connected these ideas to his concept of "general intellect", arguing that collective labor and knowledge, rather than individual creativity, drive technological and economic change.
Marx's "Fragment on Machines", a passage from the Grundrisse, predicted how integrating knowledge into machinery would reduce labor's role in capitalism, destabilizing its foundations. This idea was revived in the 20th century to critique automation and the rise of the knowledge economy, with thinkers highlighting how machines embody collective human knowledge. Marx, influenced by Babbage, argued that machines evolved from the division of labor, replacing tasks through mechanization but alienating workers by transferring their skills to the machine, turning them into operators rather than creators.
Marx viewed machinery as a crystallization of collective labor controlled by capital to extract surplus value while maintaining worker dependency. This shift reduced labor's value while increasing capital's power through accumulated knowledge embedded in machines. Marx emphasized the role of the "collective worker", whose shared efforts created technology, even as capitalism used machines to control and exploit.
The chapter highlights how these dynamics persist today, with AI serving as a modern form of collective labor. Marx's ideas remain relevant, showing how technology reflects social relations, not just individual invention, and how knowledge, when integrated into machines, continues to shape labor, power, and economic systems.


The Abstraction of Labor

This chapter explores how the Industrial Age marked the separation of energy and information, transforming labor and production. Gilbert Simondon argued that machines became intermediaries, with humans providing instructions while machines controlled energy. This split disrupted the traditional unity of hand and mind in labor. Andreas Malm added that coal's reliability enabled factories to move to cities, driving industrial capitalism. Innovations like James Watt's steam governor and the Jacquard loom's punched cards introduced early systems for managing energy and information, laying the groundwork for modern industrial organization.
Hegel and Marx expanded on these changes, seeing labor as increasingly abstract under capitalism. Hegel viewed machines as embodiments of human activity, while Marx argued that labor became a commodity measured in time and controlled by capital, shaping social structures and power dynamics.
James Beniger described the rise of information technologies as a "control revolution" developed to manage industrial growth. Romano Alquati challenged this apparent neutrality, showing how workers' decisions and adjustments on the factory floor generate valuable information that is absorbed by machines, turning labor into knowledge and economic data. He argued that information technologies reflect power struggles, using data to manage and discipline labor. This process paved the way for AI, where information continues to measure and regulate labor, embodying the ongoing tensions between workers and capital.


The Self-Organisation of the Cybernetic Mind

This chapter explores the history of artificial neural networks and their connection to ideas of self-organization, autonomy, and societal structures. The success of AlexNet in 2012, which revived interest in deep learning, built on ideas from the 1940s by Warren McCulloch and Walter Pitts, who proposed that artificial neurons could mimic the brain's logical processes. Inspired by earlier analogies between the nervous system and telegraph networks, their work laid the foundation for modern AI.
Self-organization became a key concept in fields like cybernetics, biology, and economics, emphasizing how systems adapt and regulate themselves. Innovations like Donald Hebb's theory on neural connections and Frank Rosenblatt's perceptron demonstrated how machines could learn from input rather than following fixed rules, marking a shift from rigid, rule-based computing to adaptive systems.
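As a rough illustration of learning from input rather than from fixed rules, here is a toy Hebbian-style update (a sketch of my own, assuming a simple linear unit the chapter does not describe): a connection is strengthened whenever its input and the unit's output are active together, so repeated patterns gradually reinforce themselves.

    # Toy sketch of a Hebbian-style update (illustrative, not from the book):
    # weights grow in proportion to the co-activity of input and output.
    def hebbian_step(weights, inputs, learning_rate=0.1):
        output = sum(w * x for w, x in zip(weights, inputs))  # linear unit activity
        return [w + learning_rate * x * output for w, x in zip(weights, inputs)]

    weights = [0.1, 0.1, 0.1]  # small nonzero start so activity can bootstrap
    for pattern in ([1, 0, 1], [1, 0, 1], [0, 1, 0]):  # the first pattern repeats
        weights = hebbian_step(weights, pattern)
    print(weights)  # the weights of the repeated pattern have grown the most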
The rise of neural networks reflected societal structures, mirroring how labor and cooperation were organized. Cybernetics didn't discover new science but used analogies between machines and life, often projecting societal norms onto biology. This led to ethical debates about autonomy in both machines and social systems. While counterculture movements championed autonomy as freedom, cybernetics saw it as self-regulation for control, a tension that continues in AI discussions.
Ultimately, the development of AI and neural networks is shaped more by social relations than purely by biological inspiration. From industrial-era labor organizations to modern information technologies, the evolution of AI reflects broader societal struggles for autonomy and control.


The Automation of Pattern Recognition

This chapter explores the debate between Gestalt theory and cybernetics over whether machines can replicate human perception. Gestalt theorists argued that humans perceive the whole of an image before its parts, emphasizing holistic, self-organizing processes that machines could not replicate. In contrast, cyberneticians like McCulloch and Pitts believed perception could be reduced to binary logic and computed using artificial neural networks. Their work laid the foundation for pattern recognition in AI, marking a shift from rule-based reasoning to connectionism.
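As a small sketch of what reducing perception to binary logic meant in practice, the following toy example (mine, not the book's) implements a McCulloch-Pitts-style unit: binary inputs are weighted, summed, and compared against a threshold, which is already enough to realize logical operations such as AND and OR.

    # Toy McCulloch-Pitts-style threshold unit (illustrative sketch, not from
    # the book): binary inputs plus a threshold yield elementary logic gates.
    def mp_unit(inputs, weights, threshold):
        """Fire (return 1) if the weighted sum of binary inputs reaches the threshold."""
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

    AND = lambda a, b: mp_unit([a, b], weights=[1, 1], threshold=2)
    OR = lambda a, b: mp_unit([a, b], weights=[1, 1], threshold=1)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, AND(a, b), OR(a, b))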
The 1940s and 1950s saw these ideas evolve further, with studies like "What the Frog's Eye Tells the Frog's Brain" showing that basic cognitive tasks could occur in the eye itself, not just the brain. This mechanistic approach contrasted with Gestalt views, which stressed the brain's complex, continuous processes. John von Neumann, a key figure in cybernetics, acknowledged the limitations of reducing the brain's functions to mathematical logic, emphasizing the brain's probabilistic and error-tolerant nature.
While Gestalt ideas highlighted the mind's holistic nature, cybernetic models proved effective in developing AI, focusing on processing information rather than imitating human perception. This debate continues to influence modern AI, showing how neural networks prioritize computational efficiency over replicating the complexity of human cognition.


Hayek and the Epistemology of Connectionism

This chapter explores how Friedrich Hayek's theories on the mind as a classification system influenced both economics and the development of artificial neural networks. In his 1952 book The Sensory Order, Hayek described the brain as a self-organizing network that classifies information based on connections, similar to how neural networks process patterns in modern AI. He linked this process to free-market principles, arguing that just as the brain organizes decentralized knowledge, markets operate through spontaneous self-organization without central control.
Hayek emphasized that knowledge is dynamic and reorganizes itself, paralleling the way markets and neural networks adapt to changing inputs. His connectionist view combined ideas from neuroscience, Gestalt psychology, and cybernetics, presenting the brain as a flexible, adaptive classifier rather than a rigid computational machine. This perspective influenced the shift in AI from symbolic logic to connectionism, focusing on pattern recognition and classification over strict rule-based reasoning.
In economics, Hayek compared market price signals to a communication system, where decentralized decisions create order without any single entity having complete knowledge. While critics like Oskar Lange believed computers could automate market calculations, Hayek argued that machines could not fully capture markets' complexity. His theories mirrored Marxist ideas about abstraction but differed in their political interpretation. Where Hayek saw abstraction as neutral and aligned with individual freedom, Marx argued that it was shaped by capitalism's social relations and used to normalize control.
Ultimately, Hayek's ideas linked the brain's classification processes to market dynamics, framing both as decentralized systems of adaptation and problem-solving. This perspective helped shape early thinking in neural networks and AI, highlighting how social and economic theories influence technological development.


The Invention of the Perceptron

This chapter explores the development and impact of the perceptron, an early artificial neural network created by Frank Rosenblatt in 1958. Initially overhyped in the media, the perceptron was a groundbreaking system designed to learn and recognize patterns by adjusting its connections through trial and error, much like how the brain learns. Although its early versions had significant limitations, it laid the foundation for modern neural networks and tasks like image and speech recognition.
Rosenblatt's perceptron worked by mapping inputs into a multidimensional space and separating them into categories using statistical methods. This approach shifted AI from symbolic logic to a focus on pattern recognition, introducing concepts like vector spaces that remain central in machine learning today. Despite its promise, a critique by Marvin Minsky and Seymour Papert in 1969 highlighted the single-layer perceptron's inability to handle problems that are not linearly separable, and the resulting loss of funding contributed to the first "AI winter" in neural network research. However, Rosenblatt had already proposed multi-layer perceptrons, which would later overcome these limitations once training algorithms such as backpropagation were developed.
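A minimal sketch of that learning procedure is given below (a generic textbook single-layer perceptron in Python, not Rosenblatt's original Mark I implementation): inputs are treated as points in a vector space, and the weights define a separating boundary that is nudged after every misclassification.

    # Generic single-layer perceptron sketch (illustrative, not Rosenblatt's
    # original system): the weights define a linear decision boundary that is
    # adjusted by trial and error whenever an example is misclassified.
    def train_perceptron(samples, labels, epochs=20, learning_rate=1.0):
        weights, bias = [0.0] * len(samples[0]), 0.0
        for _ in range(epochs):
            for x, target in zip(samples, labels):
                prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
                error = target - prediction  # -1, 0, or +1
                weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
                bias += learning_rate * error
        return weights, bias

    # Linearly separable toy data (logical OR); XOR would fail, which is the
    # single-layer limitation Minsky and Papert pointed out.
    samples = [[0, 0], [0, 1], [1, 0], [1, 1]]
    labels = [0, 1, 1, 1]
    print(train_perceptron(samples, labels))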
The perceptron also pioneered the automation of perceptual tasks traditionally performed by humans, such as supervision and categorization. This marked a shift in AI toward automating not just physical but also cognitive labor. Today, machine learning systems inspired by the perceptron analyze vast amounts of social and cultural data, shaping fields like surveillance, big data, and online platforms. While the perceptron's ability to classify data reflects human-designed social conventions, its legacy highlights how AI has evolved into a tool for managing knowledge and exercising control in the modern world.