
The AI Con by Emily M. Bender and Alex Hanna argues that many claims about AI are exaggerated and mostly benefit powerful tech companies rather than society. The authors say that AI is not truly intelligent but rather a set of automated systems, and that the hype around it distracts from real problems such as worker exploitation, inequality, misinformation, and environmental harm.
The book begins by explaining what AI hype is, how it shapes public thinking, and how language models actually work, showing why people often mistake them for intelligent systems. It then examines the effects of AI on work, social services, and fields like art, journalism, and science, arguing that AI often lowers quality and shifts power rather than improving outcomes. In the later chapters, the authors critique both overly optimistic (“Boosters”) and overly pessimistic (“Doomers”) views of AI. They end by suggesting ways to resist AI hype through critical thinking, better regulation, support for workers, and stronger information practices, emphasizing that the future of technology can be shaped by collective action.
An Introduction to AI Hype
AI hype often presents extreme future risks, like the idea that AI could destroy humanity, as if they were scientific facts. But the authors argue that this distracts from real problems happening today, such as wrongful arrests from facial recognition, deepfake abuse, and the use of AI in warfare. These dramatic future scenarios also allow powerful actors to appear responsible while avoiding accountability for current harms. At the same time, “AI” itself is mostly a marketing label used to group many different kinds of automation, making systems seem more intelligent and trustworthy than they really are.
Hype works by exaggerating the importance and power of AI, creating pressure to adopt it quickly. Companies, governments, and individuals are told they will fall behind if they do not use AI. This benefits businesses by attracting money and attention, while also tapping into popular fantasies about intelligent machines. Although AI technologies can be useful in some cases, many promises, such as solving major social problems or replacing human expertise, are unrealistic. This pattern of hype is not new and dates back to the early days of AI research.
The book shows that overtrust in AI can lead to real harm, such as unfair decisions, false accusations, and dangerous system failures. These problems often happen because technologies are oversold and used without proper understanding. It argues that instead of focusing on imaginary future risks or believing exaggerated promises, we should focus on how AI systems actually work, who benefits from them, and how they affect people today. Better regulation, stronger protections, and more critical thinking are needed to limit harm and make informed decisions about using AI.
It's Alive! The Hype of Thinking Machines
Some tech leaders talk as if AI is becoming conscious or close to human intelligence. But these ideas are not new and have been repeated for decades. In reality, systems like ChatGPT are not thinking or aware; they simply generate text based on patterns in data. Still, it benefits companies and investors to promote the idea that AI is becoming truly intelligent, because it attracts money and attention.
The chapter explains that language models work by predicting the next word using large amounts of text and complex mathematical models. Their output can look meaningful, leading people to believe there is a mind behind it. But this is an illusion. Humans naturally interpret language as coming from someone with intentions, so we project understanding onto these systems. In fact, they have no awareness, no goals, and no real understanding; they are just "text-generating machines".
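The next-word-prediction idea the authors describe can be illustrated with a toy sketch. This is a deliberate simplification, not how systems like ChatGPT are actually built: real language models use neural networks trained on enormous corpora rather than word-pair counts. But the underlying point is the same: text is generated from statistical patterns in data, with no awareness or intent behind it.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "large amounts of text".
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which (a simple bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most frequent next word seen in the corpus."""
    return follows[word].most_common(1)[0][0]

def generate(start, length=6):
    """Chain predictions to produce fluent-looking text with no understanding."""
    out = [start]
    for _ in range(length):
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate("the"))
```

The output reads like a sentence because the input text was written by people, which is exactly the illusion the chapter describes: the statistics of human language are mistaken for a mind producing it.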
The authors warn that this hype can be harmful. It can make people trust AI too much, reduce how we value human intelligence, and reinforce problematic ideas about measuring intelligence, which have roots in racism and inequality. They also argue that claims about superintelligent AI and existential risk are often used to promote the industry and attract investment, rather than reflect real scientific progress.
Leisure for Me, Gig Work for Thee: AI Hype at Work
In 2023, Hollywood writers and actors went on strike because of concerns about AI. Studios wanted to use AI to write scripts or reuse actors’ digital images, potentially reducing jobs and pay. This reflects a wider trend: companies using AI to increase productivity, but often this means replacing workers or giving them less meaningful, lower-paid roles. The authors argue that AI will not fully replace most jobs, but it will likely make work more unstable, controlled, and less valued.
This pattern is not new. Throughout history, new technologies have often been introduced with promises to make life easier, but in practice, they mainly helped employers cut costs and control workers. Today, AI hype exaggerates the number of jobs that can be automated, even when the evidence is weak. These claims still have real effects; they put pressure on workers, lower wages, and justify worse working conditions. At the same time, AI is not truly automatic; it depends on many hidden, low-paid workers around the world who label data, check outputs, and handle harmful content.
While AI tools can sometimes save time, relying on them too much can reduce skills, lower quality, and increase dependence on big tech companies. In many cases, workers are pushed into worse roles, such as fixing AI mistakes or producing large amounts of low-quality content. However, workers are also resisting. From Hollywood strikes to data workers organizing for better conditions, people are pushing back against how AI is used.
If It Quacks Like a Doc: AI Hype and Social Services
AI is increasingly used in sensitive areas such as mental health, healthcare, education, and law, but it can also be dangerous. For example, a chatbot once encouraged a person to take their own life. These systems do not truly understand people or emotions, yet they are being used as substitutes for real human care. Companies and governments promote AI as a way to make services cheaper and more efficient, but in reality, this often means replacing human support with lower-quality automated systems, especially for those who cannot afford better care.
Governments are also using AI to make important decisions about people's lives, such as welfare, housing, and criminal justice. These systems are often presented as fair and objective, but they can be biased and harmful, especially for vulnerable groups. They do not solve problems like poverty or inequality; instead, they make these systems more efficient while hiding their flaws behind technology. In many cases, AI tools also provide incorrect or misleading information, which can have serious consequences in areas such as law, healthcare, and immigration.
The authors argue that AI is often poorly tested and overhyped. Good results in controlled tests do not mean these systems work safely in real life. In fields such as healthcare and education, AI can reduce quality, increase risk, and deepen inequality. While it may seem like a quick solution, AI is often used to cut costs rather than truly help people. These social problems need real human support and investment, not automated systems that only appear helpful.
Artifice or Intelligence? AI Hype in Art, Journalism, and Science
Many people think AI can be truly creative, but it does not create in the same way humans do. It only recombines patterns from data made by people. Even if its outputs look impressive, humans are the ones who give them meaning and value. At the same time, AI is producing huge amounts of low-quality or fake content, which damages trust in art, news, and science. Instead of supporting human creativity, it often replaces it with weaker imitations.
In fields like art, journalism, and science, AI is often used to cut costs rather than improve quality. Artists' work is used without permission, journalists are replaced by cheap AI content, and scientific writing is mimicked without real understanding. This leads to fake books, unreliable news, and weak or incorrect research. AI can mimic the structure of these fields, but it cannot do the real work that requires human judgment, experience, and responsibility.
The authors argue that creativity and knowledge are deeply human processes. Art depends on human experience, journalism on trust and accountability, and science on careful thinking and collaboration. AI hype ignores this and treats creativity as something simple and mechanical. In reality, the main problems in these fields are not technical; they are social and economic, and solving them requires supporting people and improving systems, not relying on AI as a quick fix.
I'm Sorry, Dave, I'm Afraid I Can't Do That: AI Doomers, AI Boosters, and Why None of That Makes Sense
There are two main views about AI today: (1) "Doomers" fear that AI will become so powerful it could destroy humanity, and (2) "Boosters" believe AI will solve major problems and improve the world. Even though they seem different, both assume that AI will become extremely powerful and inevitable. The chapter argues that both views are misleading because they focus on imagined futures rather than real problems occurring today.
The authors say there is no strong evidence that AI is becoming truly intelligent or close to taking over. The idea of "AI alignment" is also vague, since human values differ across societies. Meanwhile, focusing on these speculative risks distracts attention from real harms, such as inequality, surveillance, labor exploitation, and misuse of AI in areas like warfare and public services. These are already affecting people, especially vulnerable groups.
The authors argue that the real urgent threat is not AI taking over the world, but issues like climate change, which AI development is actually making worse through high energy use and resource consumption. Discussions about extreme future scenarios can delay real regulation and allow companies to grow their power without accountability. So, instead of worrying about unlikely futures, we should focus on the real social, economic, and environmental harms AI causes today.
Do You Believe in Hope After Hype?
AI hype mainly benefits powerful companies. It helps them make money, collect data, replace good jobs with worse ones, and avoid responsibility by claiming technology can fix social problems. This hype makes people trust AI too much and distracts them from what is really happening. The authors argue that we can push back by asking simple questions: what the system actually does, who benefits, who is harmed, and how it was built. Being critical and informed helps break the illusion around AI.
There are also everyday ways to resist. Workers can push back when AI tools threaten their jobs, and individuals can choose not to use these systems or challenge low-quality AI content. It is also important to protect how we handle information: instead of relying on chatbots, we should verify sources, compare perspectives, and support institutions like libraries. AI often removes this effort, but that effort is what helps us think critically and understand what is true.
Finally, the authors argue that strong rules and accountability are essential. Existing laws can already address many problems, and new regulations should focus on protecting people rather than just promoting innovation. Companies should be transparent about data and AI use, and humans must remain responsible for decisions. Workers need better protections, and people should have control over their data. AI is not inevitable; we can shape how it is used by resisting harmful practices and demanding better systems.