I just finished reading Emily M. Bender and Alex Hanna's book The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, and I come away with mixed feelings. They're mixed because the authors provide a comprehensive takedown of technology that I use (and that I'm asked to use) on a daily basis, yet I'm not sure how much of their argument I buy at the moment.

The part I have the most trouble buying at the moment is the claim that the current iteration of LLMs sold as "AI" is part of a bubble that will eventually burst. I spend enough time keeping up with tech news and reading takes online to see that AI usage has definitely grown over the last several years, and even some skeptics are coming around on the use of coding agents to help with software development.

Overall, I like the book because several of its ideas resonate with me. I believe in giving workers more power and more agency overall, and the current wave of AI threatens that. I also see that building ever-larger models and marketing them to companies and consumers will just drive up energy demand at a time when we should be looking for alternatives to the traditional ways of producing electricity.

However, the authors come across as overly dismissive and give little ground to uses of the technology that could be legitimately helpful. They spend a lot of pages explaining that current LLMs are not "AI", but in doing so they discount the lived experience of many people in tech right now (including many I respect) who see this technology as worth investing their time in.

There was one place where they gave credit to current models, writing:

Prosocial applications of automated pattern matching are possible, provided we follow some principled guidelines.

Here, if you squint, you can see the authors grudgingly acknowledging the utility of the technology. I would have liked to hear more about positive uses of the technology, or which currently advertised uses they consider beneficial and useful. Toward the end of the book, starting on page 185 of 196, the section "Building Socially Situated Technology" discusses some of these applications over the course of two pages. I would love to read a full-book treatment of this, because I think that to get people off the AI hype train, they will need to see the ethical & positive cases amplified as examples to emulate.

Some things I learned from the book that I wouldn't have known otherwise:

  • the media sites Defector Media & 404 Media are good examples of employee-owned news outlets that have rejected the premise of using AI for news
  • Joseph Weizenbaum created the Eliza chatbot as a counterexample, a demonstration of a computer doing something only a human should do: act as a therapist. However, this backfired, as people held it up as an example of how powerful computers could be. There are several great Weizenbaum quotes, but my favorite was "we ought not now to give computers tasks that demand wisdom", which resonated because the division of intelligence vs wisdom could be a good marker for what you'd ask a computer vs a human to do

Despite what I've said, I would recommend reading the book. I admire Dr. Bender's work and think she has an overall positive influence on the tech space. The book has also rekindled a few thoughts & inclinations in me about resisting the technology, or at least resisting its use for anything and everything.

But their rallying cry to "Just say no" is hard to follow right now. Everywhere in tech, people are urging you to use more AI somehow, whether that's in your product or in your workflows. Often that even comes in the form of marching orders, as industry leaders hold workers to the standard of spending X% of their time using this technology.

At this point, resistance is a privilege, and it likely requires collective organizing to do properly. I don't have the option of dropping AI tools at the moment, but I'll push back in cases where I think their use is unwarranted or could produce harm.

For now, I'm trying to find a middle path and follow my curiosity. Lately, that curiosity has been fairly broad, but it's narrowing in on understanding how agents work, mechanistic interpretability, and model evaluation. I could see taking the book's lessons deeper into that domain, to help highlight or amplify several of its overall messages, like model error rates in high-stakes situations, as well as showcasing and explaining model biases. Maybe that's part of what the book is missing: more concrete examples of how to distinguish between AI applications that genuinely help people and those that just serve hype.
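To make the error-rate point concrete, here's a minimal sketch (all numbers hypothetical, not from the book) of the kind of evaluation habit I mean: reporting a confidence interval around an observed error rate instead of a single headline number, using the standard Wilson score interval.

```python
import math

def wilson_interval(errors: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for an observed error rate.

    A point estimate alone can flatter a model when n is small, which is
    exactly the situation in many high-stakes evaluations.
    """
    if n <= 0:
        raise ValueError("need at least one trial")
    p = errors / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return max(0.0, center - margin), min(1.0, center + margin)

# Hypothetical numbers: a model missed 12 of 200 high-stakes cases.
low, high = wilson_interval(errors=12, n=200)
print(f"observed error rate: {12 / 200:.1%}, 95% CI: [{low:.1%}, {high:.1%}]")
```

The statistics here are routine; the point is the framing. For these made-up numbers, the honest claim in a high-stakes setting is the interval (roughly 3.5% to 10.2%), not the flattering end of it.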