
The Feel of Algorithms, by Minna Ruckenstein, examines how algorithms are experienced in everyday life, not primarily as technical systems, but as felt social and emotional realities. Rather than asking only what algorithms do or how powerful they are, the book asks how algorithms feel to ordinary people, and how emotions such as pleasure, fear, irritation, trust, and fatigue shape everyday relations with data-driven systems. Based on qualitative research in Finland, the book shows how people make sense of algorithms through stories, habits, and embodied experiences, often in the absence of technical knowledge or transparency.
The book is structured around thematic chapters, each with a distinct focus. Chapter 1, Structures of Feeling in Algorithmic Culture, shows how people understand algorithms through shared everyday feelings rather than through technical explanations. Chapter 2, Coevolving with Algorithms, examines how people live alongside algorithmic systems in daily routines, highlighting how habits, convenience, and optimism normalize algorithmic presence. Chapter 3, The Digital Geography of Fear, focuses on insecurity, anxiety, and unease, showing how surveillance, opacity, and unequal power relations produce shared forms of digital fear. Chapter 4, Friction in Algorithmic Relations, turns to irritation and fatigue, analyzing how stereotyping, misclassification, commercial targeting, and data extraction create everyday tensions and expose the limits of algorithmic systems. The final chapter, Care for Algorithmic Futures, brings these insights together by proposing a shift from individual choice to a logic of care, arguing that algorithmic futures are not inevitable and that attending to everyday feelings is essential for imagining more humane, caring, and democratic ways of living with algorithms.
Introduction
People usually understand algorithms through everyday interactions, not technical knowledge. Because algorithms are hidden and constantly changing, even experts do not fully understand them. What people do understand comes from how systems respond to their actions. When someone clicks, scrolls, likes, searches, or buys something, the system reacts. From these reactions, people build stories, feelings, and expectations about how algorithms work. These interpretations may not be technically precise, but they are grounded in real experiences and should be taken seriously.
These interactions matter because they produce data, and data is a source of power. Every small action leaves a digital trace that companies collect because it has economic value. Through this process, called datafication, everyday life is turned into data that can be tracked, analyzed, predicted, and sold. This creates power imbalances and can lead to surveillance, bias, and loss of control. Some researchers describe this as data colonialism, where ordinary human activity becomes a resource for extraction. At the same time, people feel conflicted: they enjoy the convenience that comes from interacting with algorithms, but they are uneasy about how much data those interactions generate. Although algorithms are designed to feel smooth and invisible, moments of friction, such as mistargeted ads, wrong recommendations, or unwanted tracking, make the data relationship visible.
Algorithms then feed this data back into people's lives through metrics and feedback such as likes, step counts, scores, and rankings. These numbers are based on past interactions, but they also shape future ones by nudging behavior and attention. Over time, habits, decisions, and even self-image begin to align with what the metrics reward. People start to "live the metrics". Instead of focusing only on dramatic harms, the book argues that these everyday interaction–data–feedback loops are where algorithmic power is most strongly felt. The feelings they generate, such as pleasure, fear, and irritation, form shared emotional patterns that shape how society lives with algorithms. In this sense, algorithms are not just technical systems but ethical and political ones, created and sustained through everyday interactions, data extraction, and collective choices.
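To make the interaction–data–feedback loop concrete, here is a minimal, hypothetical sketch, not drawn from the book: a toy recommender that weights future suggestions by past clicks. The topic names, items, and weighting rule are invented purely for illustration.

```python
from collections import Counter
import random

# Toy catalog: each item is tagged with one topic (names invented for illustration).
CATALOG = {
    "news": ["n1", "n2", "n3"],
    "fitness": ["f1", "f2", "f3"],
    "shopping": ["s1", "s2", "s3"],
}

def recommend(click_history, k=3):
    """Weight topics by past clicks, so earlier behavior steers what is shown next."""
    counts = Counter(click_history)
    topics = list(CATALOG)
    # Every past click adds weight to its topic; unseen topics keep a small base weight.
    weights = [1 + counts[t] for t in topics]
    chosen = random.choices(topics, weights=weights, k=k)
    return [random.choice(CATALOG[t]) for t in chosen]

# A few fitness clicks tilt the feed toward fitness content, which in turn
# invites more fitness clicks: a simple interaction-data-feedback loop.
history = ["fitness", "fitness", "news"]
print(recommend(history))
```

Even this toy loop shows how a brief burst of activity keeps echoing in later suggestions, one small way in which people come to "live the metrics".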
Structures of Feeling in Algorithmic Culture
Understanding how algorithms feel helps bring ordinary people, not just experts, into the shaping of technology. People experience algorithms with comfort, curiosity, fear, confusion, and frustration. When they talk about these feelings, they show what matters to them and what feels wrong in everyday digital life. Algorithms shape how people see themselves through social media, dating apps, fitness trackers, and other tools. Numbers like likes, matches, and scores can cause stress or disappointment. These reactions are shared by many people, showing common struggles with data-driven life.
Feelings about algorithms are not random. Excitement, fear, and irritation appear again and again and guide how people trust, use, or resist technology. Algorithms also work like invisible infrastructure, quietly sorting people into data profiles. Because they operate in the background, they feel neutral even though they strongly shape daily life. Companies use algorithms to profit from personal lives. Services can feel helpful, but they also influence behavior through nudges, rankings, and rewards. People enjoy convenience but worry about control and autonomy.
Targeted ads often annoy people because they stereotype or expose sensitive information. Through such moments, people learn about algorithms by experience, not technical knowledge, and develop stories about how systems work. These stories may be inaccurate, but they still shape behavior. Algorithms often feel convenient and caring, which hides power and data extraction. Between optimism and fear lies irritation, a growing sense that something is off. This irritation matters: it shows people are not passive and points to where more fair and caring ways of living with technology might emerge.
Coevolving with Algorithms
Algorithms are part of everyday life and often make things easier. Apps help people navigate, communicate, get services, shop, and connect with others. Many people see algorithms as helpful assistants that support work, health, and social life, not as replacements for humans. People do not need deep technical knowledge to use them, just enough to get by. Some people feel very positive about technology and believe AI represents progress and hope. They trust that algorithms can help solve big problems and improve the future. When technologies feel useful and supportive, they become normal and deeply embedded in daily routines.
Over time, digital tools become habits. People scroll, click, and upload without thinking much about it. Algorithms start to feel neutral, even though they are built from users’ own data and shape what people see and do. Because data is invisible, it’s hard to notice what is being lost or taken. Some technologies feel caring, like fitness trackers or recommendations that seem to understand the user. This can feel good, but it also depends on constant tracking. People enjoy being understood, yet feel uneasy about being watched. So pleasure and discomfort exist at the same time.
However, people are not passive. Many try to shape their digital spaces by clicking carefully, blocking content, or training algorithms to show what they prefer. This gives a sense of control, even if it doesn't change the larger system. Overall, people live alongside algorithms, adapting to them while also judging them. Convenience, curiosity, and optimism help algorithms spread, even as people remain unsure about how much control they are giving up and where the limits should be.
The Digital Geography of Fear
People often feel fear or discomfort about algorithms even when they don’t know how they work. Seeing ads follow them after searches or conversations can feel creepy and make them feel watched. Because privacy policies are hard to understand, people are unsure what data is collected and who controls it. This creates anxiety and a sense of lost control. These fears appear in everyday moments and are shared by many people, though they affect some more than others, especially those who feel less confident with technology. Digital spaces can feel unsafe in the same way dark public places do: not because harm always happens, but because power is unequal and hidden.
Over time, repeated tracking and targeted ads make people tired and resigned. Even when data use is legal and based on consent, it still feels invasive. People click “agree” without real choice and feel they cannot escape surveillance. Many try to protect themselves by using ad blockers, VPNs, or fake accounts. These tools help people feel safer, but they put responsibility on individuals and don’t fix the larger problem. Fear and insecurity remain. Some people, especially tech professionals, choose optimism instead of fear, focusing on progress and control. But even they feel uneasy at times. Fear, trust, and hope exist together.
These worries are often dismissed as paranoia, but they reflect real risks like fraud, surveillance, manipulation, and loss of power. Younger people in Finland describe this feeling as “heating up”: a mix of stress, unease, and powerlessness about where digital life is heading. Much of this fear is really about who controls technology. Algorithms are run by powerful companies, and people worry about influence, inequality, and loss of shared understanding. Dystopian stories about social media, politics, and misinformation give people a way to talk about these fears. Overall, digital fear is not just personal. It is collective and points to deeper problems with transparency, power, and control.
Friction in Algorithmic Relations
People often feel annoyed when algorithms, especially ads, get their lives wrong. Ads based on age or gender can feel insulting or intrusive, especially when they repeat stereotypes. People are not against technology itself; they are frustrated that algorithms don’t really understand them. Algorithms promise personalization but often use simple categories. This makes mistakes common and hard to fix. A single search or short-term habit can lead to weeks of irrelevant ads or recommendations. Because systems are slow and rigid, people feel judged, misunderstood, and tired of trying to correct them. This leads to "algorithm fatigue".
Ads and recommendations often reduce people to stereotypes and repeat old social ideas, which feels like a step backward. In smaller countries, targeting is even less accurate, making ads feel random and badly aimed. Social media adds to the frustration by pushing self-promotion and commercial content, making communication feel less genuine. Some people respond by sharing less or leaving platforms to protect their well-being. Many people are also upset that their data mostly benefits companies rather than them. Some want less data collection; others want more control so data works in their favor. This reflects a power imbalance between users and platforms.
These problems are part of what researchers call data colonialism: companies turn everyday life into data and profit from it. People still use these systems because they are convenient, but they push back when things feel unfair or invasive. Everyday irritation is not trivial. It shows that algorithms simplify complex lives too much and leave little room for choice. These frustrations are a signal that people want more control, more care, and more "breathing space" in how technology shapes daily life.
Care for Algorithmic Futures
This chapter argues that we should ask not only how powerful algorithms are, but also how they feel in everyday life. People feel pleasure, fear, and irritation, and these feelings shape how technology affects them. Stories, rumors, and experiences matter because they guide how people live with algorithms. Today, technology is mostly handled through a logic of choice. People are expected to adapt, learn skills, and make the right choices. If something goes wrong, the blame is put on individuals. But this does not match reality, because digital systems are everywhere and hard to avoid. Real choice is disappearing.
The book suggests another way: a logic of care. This means shared responsibility, attention to differences between people, and designing technology to support well-being, not just profit and efficiency. Care takes people’s worries and frustrations seriously and sees them as signals of what needs to change. Many people feel stressed, tired, or left behind by digital systems and blame themselves. These shared feelings show the limits of putting all responsibility on individuals. People want more say, clearer explanations, and systems that respond to real life, not just data.
Algorithms can sometimes help people, but they can also reduce autonomy by pushing, tracking, or manipulating behavior. Autonomy is not all-or-nothing; it depends on context. People need breathing space to reflect, resist pressure, and set their own goals. Better futures with algorithms require humans and machines to work together. Algorithms should listen better, allow correction, and respect human judgment. They should support people, not replace or control them. Finally, frustration with algorithms is important. It shows that current systems are too narrow and ignore care, context, and sustainability. By listening to these feelings and focusing on care, empathy, and shared responsibility, we can imagine and build more humane and livable digital futures.
Ways Forward
The main message of the book is simple: to understand algorithms, we must look at how people actually live with them and feel about them. The author recognizes that abstract criticism does not help people who are just trying to cope with digital life. People need ideas that connect to their everyday worries, needs, and emotions. Stories of pleasure, fear, and especially irritation show how algorithms affect daily life. These feelings are shared by many people and point to real problems. Irritation is important because it signals that something is wrong. The book argues that algorithms often weaken human values like autonomy, trust, care, and equality. These problems cannot be solved by individuals alone. Collective solutions are needed.
People feel constantly pushed and tracked by algorithms and need breathing space to think and act freely. This discomfort is a form of critique, even if people cannot fully explain it. The book shows that algorithmic futures are not fixed. By listening to everyday experiences and emotions, we can imagine fairer and more humane ways of designing technology. The message is clear: we are not machines, and technology should be built to care for humans, not control them.