Co-liberative Computing

Amir H. Payberah - 2025-04-17

In the digital economy, users' everyday actions, such as clicks, searches, and posts, generate massive amounts of data, which big tech companies use to build their AI models. Some argue that AI users, whose data is used to train these systems, are effectively contributing hidden, unpaid input to the development of these models. On the surface, the relationship appears mutually beneficial: you feed the system with your data, and the system provides services in return. However, this relationship is far from symmetric. The vast majority of the profits generated by these models flow to corporations, while users remain unpaid, unaware, and largely powerless.

If we look closer, this dynamic resembles what Karl Marx described in his Theories of Surplus Value: the extraction of value from labor without fair compensation. In this context, "labor" is the digital activity that generates data, and the resulting value is captured almost entirely by tech companies. From this perspective, user data is not merely a technical input; it is an exploited resource that reflects and reinforces broader power imbalances in the digital economy.

Today, most discussions in AI ethics focus on issues such as bias, fairness, and transparency, yet they tend to overlook the deeper problem of economic exploitation embedded in the very structure of AI development. Addressing this requires moving beyond a narrow, reactive ethics framework and embracing the broader principles of data justice. This starts by challenging power structures and asking: Who controls the data? Who benefits from it? How can individuals and communities secure collective rights over the data they generate? And many questions beyond these.

One approach to addressing this issue comes from platform cooperativism, which draws on cooperative models such as the Mondragon Corporation, where employees are not just workers but co-owners of the enterprise. They participate in decision-making and share in the profits they help create. Inspired by this model, we can imagine AI systems built as collective enterprises, where data contributors are recognized as stakeholders with ownership, governance rights, and a share in the value they help generate. Rooted in these cooperative traditions, such an approach offers a path toward AI that is not only ethical but also economically just and democratically governed.