Smart businesses, institutions and nations will leverage AI to cut costs and turbocharge productivity. Workers and students must therefore (re)train for a transformed job market. Savvy investors will make boatloads of money.

Somehow this will lead to greater prosperity for all, but only if we’re clever enough to understand the vision and hold its promise in mind while the powers that be redeploy capital and repurpose workers as AI system attendants.

This banal hype targets investors, managers and elected officials, aiming to nudge adoption. If it also justifies adoption in the minds of everyone else, easing general acceptance of unending competition and the technological change and precarity that come with it, so much the better.

Enter The AI Con by University of Washington linguist Dr. Emily Bender and sociologist Dr. Alex Hanna, director of research at the Distributed AI Research Institute (DAIR). Bender is well-known as one of the authors of the influential AI-critical “Stochastic Parrots” paper from 2021. 

Published in 2025, The AI Con is a book-length dismantling of AI hype. That Bender and Hanna are not anti-technology is clear from the outset (and from the work of colleagues: DAIR founder, computer scientist Dr. Timnit Gebru, was an AI researcher at Google prior to being forced out). Nor do they paint all machine-learning systems with the same brush, even when lumped together under a common “AI” banner. Bender and Hanna help us tease the issues apart.

Intelligence, thought, mind, understanding and creativity are hardly sharp and fully understood categories, despite their liberal sprinkling throughout AI-industry marketing copy. As the authors recount, the term “AI” originated not as a descriptor of a well-defined set of technologies but as a branding move. Indeed, the term has been applied variously, both today and over time.

Consider, for example, deductive, logical model-based “expert systems” — the “AI” of decades past — versus the stochastic parrots promoted today. Bender and Hanna especially target generative systems that extrude synthetic text and other media. Think chatbots like ChatGPT and text-to-image systems like Stable Diffusion. These are trained, using vast amounts of computing power (not to mention human labour, energy, material and water), over internet-scale datasets to create corpus models that capture things like word relationships and can thus reproduce plausible approximations of already-existing documents, images, etc. The words in the synthetic response to a chatbot prompt appear to hang together coherently because they tend to hang together in similar ways in prior human works.
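The "words hang together because they hung together before" point can be illustrated with a deliberately crude sketch: a toy bigram model that samples each next word from the words that followed it in a tiny corpus. This is not how modern LLMs are built (they use neural networks trained on vastly more data), but it makes the underlying idea of a corpus model concrete. All names here are illustrative.

```python
import random
from collections import defaultdict

# A toy corpus standing in for internet-scale training data.
corpus = ("the cat sat on the mat and the cat saw the dog "
          "and the dog sat on the mat").split()

# Record which words follow which: a crude stand-in for the word
# relationships a corpus model captures during training.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start, length, seed=0):
    """Produce a plausible-looking word sequence by repeatedly sampling
    a next word from those that followed the current word in the corpus."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the", 8))
```

The output reads as loosely coherent English only because every adjacent word pair already occurred in the corpus; nothing in the model understands cats, dogs or mats.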

Drawing on linguistics, the authors explain why readers are predisposed to impute the existence of another mind behind the synthetic responses extruded by a large language model, even though there isn’t one. This predisposition, Bender and Hanna argue, is the stock-in-trade of hype-mongers, making it easier to convince others that generative systems are powerful enough to replace human workers.

For those following the AI adoption push from a critical perspective, the book is probably well-trodden ground. Nevertheless, at about 200 pages, it's a concise debunking of AI hype and a helpful summary of many key issues. The authors provide copious examples of systems falling short of promises, sometimes by a wide margin, sometimes with dire consequences for real people.

Public institutions and their administrators have been especially vulnerable to the AI push. Bender and Hanna illustrate repeatedly that conditions are ripe when austerity combines with high demand (for medical care, post-secondary education, peer review and so on) to produce persistent strains, and when accumulated, minable data lie close at hand. The pitch: AI systems trained on institutional or sectoral data can mitigate the strains, cutting costs and freeing the time of those who remain, albeit under more precarious employment conditions.

The catch? As educators know, the teaching-learning relationship, like the doctor-patient relationship, is a complex, deeply human one that cannot meaningfully be reduced to prompt and synthetic-text response. There is simply far more to it than that. As the authors persuasively argue, such activities are distinctly human, and you need people for social services.

If The AI Con has a shortcoming, it’s an insufficient political-economic analysis of the current AI-adoption drive and the collective political power required to set it on a better path. Discourse is a terrain of struggle, so inoculating against hype has its place, but it’s not enough. 

The authors conclude with a strategy chapter containing a comprehensive list of regulations and policy measures, including support for libraries, which governments ought to enact. But they put too much faith in calling out hype as a means of realizing these goals, without addressing the elephant in the room: governments, including the Carney government and the Trump administration, are all-in on the capital-friendly AI adoption that Bender and Hanna convincingly argue hollows out science, journalism, education, health care and so on. The 2023 U.S. actors' and writers' strikes are named as successful struggles, but otherwise unions receive only glancing attention.

A larger and stronger labour movement — strong both locally and through wider relations of cross-sector solidarity — is the only real hope we have for protecting the dignity of work in our institutions and building the collective power without which governments will be disinclined to enact better regulations, siding instead with moneyed interests.

So yes, take a critical approach as a scholar and learn to recognize and dismantle AI hype. Bender and Hanna provide a useful resource. But understand that calling out hype won’t stop so-called AI from being forced down our throats anyway. Work with your colleagues every day, even in small ways, to steadily build the collective power needed to ensure technology, whenever we choose to develop and adopt it, truly advances the public interest and the human-centered nature of our work.