The 404 Media Podcast (Premium Feed)

from 404 Media

The Marketing Tricks of "Artificial Intelligence"


Episode Notes


Transcript

This week, Sam talks to Emily Bender and Alex Hanna about the marketing ploys of “artificial intelligence,” why ridicule works to keep big tech’s claims in check, and what makes them hopeful for the future. They’re the authors of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want.

Dr. Alex Hanna is a writer and sociologist of technology, labor, and politics. She’s the Director of Research at the Distributed AI Research Institute (DAIR) and a Lecturer in the School of Information at the University of California Berkeley. Dr. Emily M. Bender is a Professor of Linguistics at the University of Washington where she is also the Faculty Director of the Computational Linguistics Master of Science program and affiliate faculty in the School of Computer Science and Engineering and the Information School.

They also host The Mystery AI Hype Theater 3000 podcast, which “deflates AI hype and draws attention to the real harms of the automation technologies we call ‘artificial intelligence’.”


YouTube Version: https://youtu.be/cHiL4ZuLNX0
Sam:

Hello, and welcome to the 404 Media Podcast, where we bring you unparalleled access to hidden worlds both online and IRL. 404 Media is a journalist-founded company and needs your support. To subscribe, go to 404media.co. As well as bonus content every single week, subscribers also get access to additional episodes where we respond to the best comments, and they get early access to our interview series, like this one. Gain access to that content at 404media.co.

Sam:

Today, I'm here with Alex Hanna and Emily Bender, authors of The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, which came out last spring. Unbelievable that it was last spring. It feels like yesterday. Dr. Alex Hanna is a writer and sociologist of technology, labor, and politics. She's the Director of Research at the Distributed AI Research Institute, also known as DAIR, and a lecturer in the School of Information at the University of California, Berkeley. Dr.

Sam:

Emily M. Bender is a Professor of Linguistics at the University of Washington, where she is also the Faculty Director of the Computational Linguistics Master of Science program and affiliate faculty in the School of Computer Science and Engineering and the Information School. Welcome. It's so good to have you both. I'm so excited that we're here today.

Emily:

So fun to be here. It's so fun. Like, I listen to your podcast, so I hear that intro a lot. And it's really fun to, like, hear it as it happens.

Sam:

Well, Joseph does a better job of it. He's got the accent going for him. So

Alex:

I'd say I'm not charmed by that accent, but great

Sam:

Thank you so much for being here. I will also note that you two host the Mystery AI Hype Theater 3000 podcast, which you're gonna go record one of right after this. I'll just quote from the description: it "deflates AI hype and draws attention to the real harms of the automation technologies we call 'artificial intelligence,'" quotes for the people who are not watching on YouTube. I enjoy hearing how this podcast started, so I'm gonna ask you how it started. But I was also a huge Mystery Science Theater 3000 fan as a kid. I would say it influenced my sense of humor, maybe for the worse, maybe for the better.

Sam:

Hard to say. I will leave that to other people to say. But, yeah, before we kinda dive into that and the rest of your work, can you just give us a little bit of background? Just the quick how-you-got-here pitch: how you got into your respective fields, how did we all get here, and how did you eventually meet each other?

Emily:

Yeah. Oh, this is a fun story. Maybe I'll start by saying my background and then Alex, then we can get into how we met. But, I'm in linguistics, and linguistics, broadly speaking, is the study of how language works and how we work with language. So it is incredibly relevant to the current moment.

Emily:

And I work specifically in computational linguistics, which is about getting computers to deal with language. And starting in about 2016, I started paying attention to what I now refer to as societal impacts of language technology, sometimes called ethics in natural language processing. And that started in large part because a former PhD student in my department who's on the advisory board for our program, Leslie Carmichael, said, hey, you should have an ethics class in that program. I was like, oh, good idea.

Emily:

And then I couldn't find anybody to teach it. It's like, alright, I guess I'm doing it. And I got organized first basically by learning from people on Twitter. This is 2016, so Twitter was still solidly Twitter, including Alex. And so, you know, sort of went from there.

Emily:

I taught that class for the first time in January 2017. Got very concerned with language technology and the ways that language variation, the fact that different people speak differently, is not being handled by language technology, and sort of all the possible downstream impacts of that. And also, I noticed in about 2019 that folks in my field were super excited about language models and were making outlandish claims about them understanding. So I started pushing back pretty hard on that, and that led to eventually connecting with Alex. But I'll let Alex tell her background first before we talk about that part.

Alex:

Yeah. I got into this because since the start of my PhD, which I started in 2009, I was really interested in the interaction of tech and society. I did an undergrad degree in computer science as well as sociology, and so I basically found a way to be interested in both of them. And so I was initially interested in how social movements use social media. And so I had done some early work around Egyptian social movements, especially around the Arab Spring, but even prior to that.

Alex:

And wrote my master's thesis on a movement that used Facebook, called the April 6 movement, in 2008. And so then I kept on doing a lot of work around social media and politics, and became more interested in computational social science as a way of studying these movements: analysis of social media data, trying to understand how movements engage in discourses online. And then I did my dissertation building a relatively small language model, which did basic classification of features of protests as mentioned in newspapers. And the more I was doing that work, the more concerned I became about surveillance and the ways that tools like that could effectively be used to identify social movement actors and social movement organizations in a systematic way.

Alex:

So I got very interested in investigating the ethical dimensions of machine learning and the things that get looped into AI now. And so that got me into this fairness, accountability, and transparency space. Went to the first conference in 2018, but also went to a workshop in 2017 organized by my friend Anna Lauren Hoffman and a few other folks. And then, yeah, I also became really interested in the status of data with regards to AI, quote unquote. And then maybe I'll kick it to Emily to talk about how the podcast got started and how the book got written.

Emily:

Yeah. So in 2020, on the strength of having been interacting on Twitter, Alex and some of her colleagues looped me into a group that was working on some papers, which eventually included AI and the Everything in the Whole Wide World Benchmark, which was inspired somewhat by the children's book Grover and the Everything in the Whole Wide World Museum. So that was really fun to work on. This is 2020, so we're doing this work remotely over Zoom.

Emily:

And those papers, I think we ended up writing three papers together, and then that group was basically done with the academic work. And so the Zoom meetings stopped, and we had a group chat going, and we had a couple other group chats with other colleagues. And many people among those groups were doing this work of debunking AI hype, where we would see hype artifacts and then write, like, tweet threads sort of making fun of them, basically saying, this doesn't make sense because. And then we came across one that was a video of someone giving a talk. And so in one of our group chats, I'm like, how do you do the debunking when it's a video?

Emily:

And our colleague Meg Mitchell says, oh, well, you have to give it the Mystery Science Theater 3000 treatment. So that's where the idea comes from. And then a little bit later, we came across actually a textual artifact, a long blog post that Medium estimated as a sixty minute read, by Google VP Blaise Agüera y Arcas, entitled Can Machines Learn How to Behave? And I'm like, ugh, this would take so long to do bit by bit, you know, as a text thread. We gotta give this the MST3k treatment.

Emily:

Who's in? And I have to say, to that point I had actually not watched the show. I knew the concept, but I hadn't watched it. But Alex, being a big fan of it, she was like, I can do it. So you wanna pick it up from there, Alex?

Alex:

Yeah. I mean, so we decided to, well, I was like, well, why don't we do, like, a Twitch stream? That's something people do. Right? And so I literally downloaded, I forget what the software was.

Alex:

It was Streamlabs or something, and I downloaded it, and then we tried to do it. And it immediately ran into sound concerns, and then I taught myself OBS on the fly, which I don't really suggest doing. And then

Sam:

Definitely not.

Alex:

No. It was, I don't know.

Sam:

That's the most impressive thing I've heard yet, honestly, degrees and positions aside.

Alex:

Learning OBS in fifteen minutes to do a livestream, probably one of my greatest accomplishments. Yeah. And then I learned how to do it, did the broadcast, and we did it. We only got through about ten minutes of it, and then we did a second and then a third one. And then I kept on doing it.

Alex:

We got to episode eight, then hired a producer, and then we ported all those things over to podcast form, and now we do a stream and a podcast. And then Emily was approached by an agent to consider a book after, was it New York Magazine or the New Yorker? I always confuse the two.

Emily:

New York Magazine.

Alex:

Yeah. They did a profile on her, and then Emily looped me in. And then we managed to write a book, all remotely. And the kicker to all this is Emily was in town in the Bay Area to do an event, the Great Chatbot Debate, which was held at the Computer History Museum.

Alex:

And it was in March 2025. And after all that, we finally met in person. So, basically, closest collaborators. We're recording podcast episode 74 today, three papers, a book, all of that, and we finally met in person in March.

Alex:

And now we've gotten to hang out a lot, because we're doing fun book events and talk a lot online for things like this. Exactly.

Sam:

Love it. Love it so much. Thank you for the quick and dirty rundown. Something from the book, this is a phrase that you use in the book: ridicule as praxis. I think that's the ethos of the podcast and the show.

Sam:

Right? That's kind of the idea here. I find that so refreshing, because so much of everything now is really, really depressing, and we need that kind of refreshing perspective, that everything is shitty and getting shittier, to paraphrase another great thinker of our time. But I would say the ridicule as praxis concept is really important. I'll just quote straight from the book, if it's not too cringe to quote your book in front of you. No.

Sam:

Resisting hype can also be empowering, grounding, and even joyful. It's empowering to reaffirm the value of our skills and expertise. It is grounding to lean into the value of human to human connection, of being human together. And it can be flat out fun to find the silliest excesses of the hype machine and deflate them with ridicule as praxis.

Sam:

I love that so much, and I think that's something that keeps me doing the work that we do at 404. Kind of the subtext of all of the stories that we're writing is, you know, people are pushing back against this stuff, and people are saying it's still important to be human. We're not just gonna roll over and say, sure, the machines can take over. So I would love to kinda hear more about that. What does that mean to you, as people who coined that term but also are doing that work every day?

Alex:

Yeah. So this is great. And also, just to give a shout out to y'all: I did a search through the book, in our extensive endnotes. I think there's 55 pages, and I believe y'all are cited 12 times, if I counted correctly.

Alex:

I mean, we really, really rely on your reporting, which is fucking phenomenal, because it's so important. And y'all, I think, are really punching above your weight for, just, like, the size of your team and the amount of stuff that you cover. So just, like, kudos to y'all. And then, so, ridicule as praxis: I, like, came up with that term, and I'm very proud of it. I was speaking on a panel with an author

Alex:

I love, Carmen Maria Machado, and I said, ridicule as praxis, and she cackled. And I was like, oh, I can die now. Like, someone whose poetics I really appreciate liked a phrase of mine; we're done. Yeah. But I think the kind of ethos of it is, like, so much of what we encounter, and I'm sure y'all get this as well, is really depressing.

Alex:

It's just, you know, like, in my lowest moments, I'm like, these are the worst people in the world, with the most money that has ever been had. And, like, how does that feel? You know? Like, what can we do right now? And we get this question a lot too, which is, like, how do you actually slog through this?

Alex:

This stuff is, like, terrible. And you gotta make fun of it. You have to really engage with it and really engage in, like, creating joy with other people and really sorting through it, because otherwise it's just infuriating, and it is really maddening. And, I mean, like we do on the podcast, people who are in these fields just say the stupidest shit. Like, they are making the most ridiculous claims.

Alex:

We had Adam Becker on the show, and hopefully the episode with him will be out before this one is. I don't know the timelines. But, basically, Adam is a physicist. He's got a physics PhD, and he wrote a great companion book, More Everything Forever. And one of the things that we were talking about is data centers in space.

Alex:

And the kinds of claims made on the SpaceX site were like, space is a great place to put data centers because it's cold. And it's like, okay, it's cold, but it's also a vacuum, and vacuum is a fantastic insulator. You're not going to have this natural dissipation of heat just because of the coldness of space. And the thing is like, yeah.

Alex:

Like, Elon Musk makes these claims because people think he's smart, but no, it's ridiculous. And it ought to be made fun of, and we need to punch up and use humor to deflate these claims. So, I mean, ridicule as praxis as an ethos has worked pretty well, and it keeps us sane as well.

Emily:

For sure. For sure. We periodically do these episodes that we call Fresh AI Hell episodes, where we go through, like, 25 to 30 things rapid fire. And on the one hand, it's a lot of terrible. On the other hand, it's really cathartic to sort of go through all of that and laugh at it.

Emily:

What's been really important to me about the podcast is the way it has sort of catalyzed a community. We found that the people who listen, especially our livestream viewers, often tell us they thought they were the only ones. Everyone around them seemed to be taken in by this, and they felt very isolated. And so by basically planting a flag, you know, and doing our ridicule as praxis, we've sort of created some ground for people to come together and meet, and that has been awesome. And I'm reminded, I gave a talk at UW a couple of weeks ago to a large audience, including students and members of the public and faculty.

Emily:

And this one student said, how do I refuse and resist without being a stick in the mud? I'm like, no, be a stick in the mud. Right? Because if you're a stick in the mud, you're sort of creating space, solidifying solid ground, that other people can come stand on with you.

Emily:

And I think that's, like, the serious back end of ridicule as praxis: we are saying, no, we're standing firmly on our understanding of truth here, so firmly that we can make fun of them, and you can come join us here.

Sam:

Love that so much. Yeah. Get a little ecoculture, restorative marshland in this thing. Like, be a stick. Be a stick.

Sam:

Exactly. I love that so much. Yeah. So I guess just to back up a little bit, because maybe there are people, I feel like 404's audience is familiar with a lot of these topics because of what we write about. But for people who are listening and are like, what are they talking about?

Sam:

Like, what are these terrible things? Why do they keep saying air quotes around AI? I guess maybe the first question that we need to establish is, like, why do we keep saying AI in air quotes? Why is the term AI itself part of AI hype, as a marketing ploy?

Emily:

Yeah. So it is a marketing term. One of the things I've learned from Alex as a sociologist is always be historicizing. So if we go back to the origins of the term, it's part of a 1956 grant proposal by John McCarthy and colleagues, written in 1955, where they were trying to get some money to basically hang out with some friends for the summer and do some things they wanted to work on.

Emily:

And so they needed a word that they could apply to sort of loop it all together, and so they called it artificial intelligence. And we actually have a fun sort of throwback episode of our podcast where we go through that document and apply ridicule as praxis. But it basically, from the start, was a way to say, give us money. And it's doing the same thing now. And in fact, there are sort of two ways that the term itself is doing the hype.

Emily:

One is that it draws on notions of intelligence as something that is supposedly shared between humans and machines, where you can rank people and then rank the machines, and there's a whole horrible history that we can get into there. But also, when you lump together chatbots and image generators and license plate readers and protein folding algorithms and chess playing engines and so on as one thing, then it sounds like it is one thing that is, in quotes, smart and, in quotes, getting smarter. And the sort of illusion of cognition or intelligence that we get from the synthetic text coming out of the chatbots is then also papered over everything else, and it starts to be like, maybe it would be a good idea to use this, in quotes, artificial intelligence to make consequential decisions.

Alex:

Yeah. And I think that's very helpful, and we're following a few other folks here. I mean, there's a great essay from Emily Tucker of the Center on Privacy and Technology at Georgetown. The essay is called Artifice and Intelligence. And it's effectively them saying, you know, they're not gonna use the term AI or even machine learning, basically because they wanna be very exact about the technology that they use.

Alex:

This is a place that has produced some really helpful things like The Perpetual Line-Up, which focuses on facial recognition. And I think there's a way that it's pretty helpful to distinguish between these things. I mean, you know, if we talk about something like Flock and say Flock is AI enabled, well, what is Flock actually doing? Well, Flock is an automated license plate reader. Right?

Alex:

And then they have a partnership with ShotSpotter. I don't know what ShotSpotter's new name is. You know, what is that doing? It's, you know, audio classification, trying to distinguish between gunshots and fireworks or whatever. And so it's helpful to understand what those are, because it gives you a little bit more insight about what the technology is and what it can and can't do.

Alex:

And I think the companies have done a lot of work to paper over those differences and just say, you can feed any kind of modality into ChatGPT, and it's gonna tell you what this is. And it really obscures what's happening under the hood. For some things, we know what's happening under the hood, like large language models; we're assuming that multimodal things are basically doing this kind of pattern matching and then doing some kind of transformation. But it makes sense to distinguish, because it helps us be more precise about where to critique and how to critique, and also to interrogate what kinds of automation could be desirable if constrained in a certain way and what kinds of automation should not be used in any way, shape, or form.

Sam:

Yeah. For sure. And I think a lot of the insidiousness of AI comes when it's used by these really big companies, such as Flock, that are trying to gain footholds wherever they can, especially in the US. It makes it all sound very inevitable, just like Flock is everywhere.

Sam:

AI is everywhere. What is AI? We don't know. But, you know, it's in Flock, and Flock is in your town. And I think Flock especially is a good example of that not being true necessarily, because people have fought back against it and kicked it out of their towns Yeah.

Sam:

Kicked it out of their counties, or not allowed it to be partnered with local police in their counties to begin with. Because they feel empowered by knowing what it does, knowing that they don't want that, knowing that it's an overreach and an overstep that they don't want in their communities, and saying, no, we're not gonna fund this; it's not allowed here. The marketing of AI as an inevitable, all-encompassing thing is probably one of the scarier things to me. Just for people who are still kind of like, well, why is it bad?

Sam:

There's no way anybody's left who's thinking that, but I was listening to, both of you were on the AI Inside podcast with Jason Howell and Jeff Jarvis recently. I think it was a couple months ago, maybe. It wasn't that recent.

Emily:

A bit. Yeah.

Sam:

It was right after the book came out. Yeah. But I was just like, I'm gonna do some, like, more historical research and catch up on what they've been up to before we talk today.

Alex:

Appreciate it. Some people don't even read the book.

Sam:

Right. I mean, they should, for many reasons, but especially if they're gonna talk to you. But, yeah, I was pleasantly surprised to hear that one of the things talked about on that show was a story that I wrote in 2024 about Bards and Sages. Mhmm. And I think that's an interesting micro example of AI being talked about and then being pushed back against, and being used in a way that just kinda runs over everything else, and why you might say, oh, we don't want this.

Sam:

It's destroying things that we actually like and value. So in that story, if people aren't familiar, Bards and Sages was, it's now defunct, which we'll get into, but it was a small indie press. And the quote from the founder that they pull in AI Inside is: the problem with AI is the people who use AI; they don't respect the written word. This is the founder of Bards and Sages, who was talking about shutting down their company, their beloved indie press. They're people who think that their ideas are more important than the actual craft of writing, so they churn out all these ideas and enter their idea prompts and think the output is the story, but they never bothered to learn the craft of writing. Which, out of context, is kind of pretentious in itself. If it's, like, someone talking about the craft of blank, I'm kinda like, okay.

Sam:

What are we actually talking about? But you correctly note during this conversation that this is out of context, and it's not just some, like, bitter AI hater. This is someone who, coincidentally, published speculative fiction and role playing games, and shuttered their publisher after twenty two years. And the final straw was this influx of AI generated submissions. So I would love to just kinda touch on this idea, because it's something that we still hear all the time.

Sam:

This idea that AI will democratize art and writing. I know. Yeah. That people who can't paint will suddenly be given the gift of art by using Midjourney. And people who can't write, it's like everyone should be given the ability to write at the same level as everyone else, and you should let your ideas be democratized by AI.

Sam:

So let's unpack that a little bit. Yeah.

Emily:

I have a couple of stories there. First of all, I am now allergic to this word democratize, because democracy means shared governance. Mhmm. Right? Sharing power.

Emily:

And that's not what's happening here at all. But also, if you really wanted to make art broadly accessible as an activity, then you would take action in society so that people had leisure time to develop artistic skill and to follow their own artistic passions, which is also not what's happening here. And the very recent story that I wanna tell is that my art form of choice is photography. I am not very good at creating images with my own hands. I did take a cartooning class in high school, which is now decades ago.

Emily:

But I had an idea for a cartoon a couple nights ago, so I sketched it out and posted it on Bluesky and Mastodon today. And I can give you the link for your show notes if you want, but the art is awful, and it has the benefit of being clearly done by a person, because that wouldn't have come out of a machine. But somebody on Mastodon said, oh, well, you should feed it into one of these systems so you get a better version.

Sam:

That's, like, the average Mastodon user.

Alex:

Yeah. Average Mastodon experience. Yeah.

Emily:

Yeah. But I think that sort of speaks to this larger thing of: we can't be bad at art or writing. That's not good enough. That's not okay. I think we see this in education too, where students feel like they have to turn in something polished.

Emily:

So they turn to ChatGPT or whatever, and the universities are, like, creating subscriptions, which is ridiculous, so that what they turn in looks more polished. And they are missing out on the chance to actually hone their own voice and learn the craft of writing: how to take your ideas and turn them into something that is persuasive or enjoyable, whatever kind of writing you're doing, and doing it in a way that is not aiming towards the sort of LinkedIn corporate mean, but actually respects the voice and perspective and experience of the writer.

Alex:

Yeah. And I wanna add on to that. I mean, again, I'm gonna bring back being on this panel with Carmen and also Umair Kazi from the Authors Guild and Vauhini Vara, who is another author and technology reporter. And Carmen, you know, was talking about the experience of teaching writing; she teaches at a very prestigious writing workshop, kind of the craft of writing. Sorry,

Alex:

I'm gonna use craft. I actually like the word craft. I both hate

Sam:

That's okay. I am the problem with the word craft.

Alex:

Right. I both like the word craft because I think it describes certain things, but I also love an essay from a Palestinian writer that's called, like, Craft Is a Lie. So, anyways, love hate relationship. But, so, yeah. I mean, what she was talking about is basically like, yeah.

Alex:

I mean, it's not democratizing this. If you're learning how to write, you have to go through the pain of writing. You have a sense of taste, and you cannot match that sense of taste until you practice quite a lot. And I'm looking at Ellie's cat. That's right.

Alex:

That's in the frame. Yeah. So, basically, it takes some time. It takes practice. And the thing that is very upsetting to me is when a lot of these folks use disability as an example.

Alex:

It's like, oh, well, disabled people now can do this and now can create. I'm like, disabled people have been creating art for millennia. I am autistic. I have ADHD. Like, there are strategies, for sure, and they're gonna be different for every different person.

Alex:

But it's like, that democratizing is not gonna get you there. I mean, cultivating a craft, being serious about it, you know, people have been doing that for so long, and it is about developing those things that are unique and understanding what about you is going to make your voice and your style and your expression really individual to you. The second part of it is, like, don't you want to connect to somebody through this really unique experience? Isn't that the idea of art? It's not to, like, go viral on whatever. You are now moving from art to the views and whatever those views mean. Content.

Alex:

It's a move to content. The word content just melts my brain every time I hear it.

Sam:

No, I'm with you. Yeah. And those are all such great points, and they really get to the heart of why the so-called democratization of it is such a weird argument to me. The point of being bad at things is that you're working through, and you wrote about this in The AI Con also, the critical thinking required to get to the point where you wanna be.

Sam:

And it might not even be where you thought you were gonna go. And I think Yeah. As writers, we understand that, because that's a huge part of the process for me: sitting down. I don't even wanna really read or be influenced by a lot of other similar writing if I'm gonna sit down and seriously write something that I want to be, like, original. It's like, I need to sit down and have this come out of my brain raw first and then go in and draw other inspiration.

Sam:

But, yeah, it misses the point of art, of craft, to bring it back, to have this kind of, like, perfect, polished. It's not even perfect. It's shitty. This mid output Mhmm. that will get an A on a test or that will get you a bunch of views on Twitter or YouTube or whatever, that just ends up being slop.

Sam:

I think it's also part of the experience. I mean, in my most conspiratorial moments, when I go somewhere really dark about AI in general, I feel like the point is to be very isolating and to kick the legs out from under the critical thinking aspect of everything. They, meaning the people who are in power, authoritarians and fascists, do not want us thinking critically about art, about anything.

Alex:

Yeah.

Sam:

And they want us to be very isolated and alone in that experience. So like you said, it's like they want you to not be the light for someone else. They want you over here talking to ChatGPT endlessly about your own thoughts and delusions. So

Alex:

Yeah. I think one could go very Foucauldian with it. And Yeah. And really think about, you know, why do schools resemble prisons? And

Sam:

Oh, yeah. I mean, that's a different podcast. Yeah. Yeah. I mean,

Alex:

we could, but I'm not going to go down that rabbit hole. But, I mean, it is very isolating, and I think their answer to isolation is more engagement with their product. And there's been, like, two quotes that come to mind. The one that I used to use in the talk about this book, and I think I still use it, is a quote from Mira Murati, who used to be CTO of OpenAI, when she is literally on stage at the Dartmouth School of Engineering, and she talks about creative jobs being lost. And she says, well, but maybe those jobs shouldn't have existed in the first place. And so it's very indicative of the worldview.

Sam:

Thanks, Mira.

Alex:

And then yeah. And then the other one is Zuckerberg, which I think was last year, where he said something like, well, the average human has three friends, and Mhmm. we actually have the capacity for something like 15, and I have no idea where he's coming up with these numbers.

Emily:

Not capacity but demand.

Alex:

Right? Yeah. It's just yeah. And so, like, AI can fill that void or whatever, and it's just the worldview is so bizarre. It is text as content, or relationships as transactional. It really is showing this particular view both as, I think, a particular sort of technologist and just a really terrible human being.

Alex:

Like, shaped either by I don't know if they were born fried like that or it was, like, through continual interaction with the capitalist tech machine. I mean, it's probably a little of A, a little of B, but it's just really upsetting to hear them say that.

Emily:

In that context, MJ Crockett says some really lovely things about the idea of automating empathy, because it's always sold as basically, it's too much work to do empathy for other people, so let's automate that to relieve ourselves. And, you know, Crockett's point is what a horrible way to look at people. Right? And what a horrible way to think that we don't benefit from being the empathizer and building those relationships.

Emily:

And, yeah, it can be difficult, but it's valuable. And it's not just altruistic that we're doing something for somebody else, but actually being in that relationship and connection is super important.

Sam:

Yeah. Yeah. For sure. And that's such a I mean, empathy as weakness, as unnecessary, that is a far-right talking point.

Alex:

Yeah. Yeah.

Sam:

And I just hear that more and more, and every time I hear it, it's like alarm bells are ringing, that empathy is not necessary. And I think it also plays into the idea that the AI hype is part of a dehumanization engine, where the role of dehumanization is very important in the project of authoritarianism and fascism. Yeah. So it's being pushed in a way that, like I've mentioned this so many times in the last, like, two weeks ever since he said it, but when Sam Altman was, like, talking about how training AI was, like, training babies or something.

Alex:

Yeah.

Sam:

And he was like, oh, well, humans need the totality of billions of people and, you know, millions of years of humanity, and all of that goes into making a human. It's like, what the fuck do you think goes into making AI? Like, your LLMs are based on millions of years.

Emily:

Yeah. But on top of that, and I think this is where you're going by bringing this up, like, the idea that we should look at people in terms of what it costs to raise them, in terms of energy, up to age 20. And he says, you're useless until you're 20, which, like, I feel bad for his kid. Right? Who's

Sam:

Right. Yeah.

Emily:

But also, people aren't meant to be useful. People are people. And the thing about, you know, human rights and human dignity is that we are valuable because we exist, period, and we don't have to justify our existence.

Alex:

Yeah. Paris Marx had this piece on his blog called Sam Altman's anti-human worldview, and the subhead was, the OpenAI CEO downgrades humanity in pursuit of his goal to merge with computers. There's lots of problems with the concept of humanism, you know, and many folks have and we talk about this in the book. Basically, like, humanism has left out quite a lot of people. Like, you know, people in the majority world, you know, that have been seen as not human.

Alex:

And at the same time, the far right has just completely jettisoned any kind of notion of the human, because they've been very fine with saying, like, well, it's okay if, you know, we have machines. But it's definitely a thought of, like, the only humans that really matter are the people that create value, and that's, you know, the richest 500 people in the world or however many. It's really dark.

Emily:

Yeah. And just to add quickly, Altman's been at this for a while. So I coined the phrase stochastic parrots in the process of writing that paper with coauthors in late twenty twenty. And sometime after ChatGPT was released, so sometime after 11/30/2022, Altman tweets, I am a stochastic parrot and so are you. And he's basically saying, I need to raise up the synthetic text extruding machine to something that I can call equivalent to humans.

Emily:

And the only way to do that is to basically view everyone around me and, you know, ostensibly also myself as nothing more than a synthetic text extruding machine. And sometimes people will describe that as I have Twitter beef with Sam Altman. I have no reason to believe that Sam Altman knows who I am, but I do have quite a bit of beef with how he uses my phrase.

Sam:

Fair enough. Yeah. And that idea that humans are equivalent to machines and machines are equivalent to humans, that kind of two-sided coin that is being promoted and attempted to be normalized. I think I see it more and more from people who are attempting to normalize it.

Sam:

And that's another one that kind of freaks me out, because I think people think that they're being smarter, edgy, when they're like, oh, people are just machines. It's like, you're doing the work of the people who want you to say this kind of shit. So Yeah. Just Yeah. Think a little harder about what you're saying.

Sam:

You know, humans aren't machines, and the people who are invested in you being a machine want you to say that you are just a machine. It's something that, I think, people need more defenses up about, perhaps. Yeah. Yeah. So is there anything that you've I'm curious, and this is maybe not to put you on the spot too much.

Sam:

Is there anything that you've changed your minds about over the years? Like, is there anything that you look back and you say, oh, my thinking has evolved, or is there anything that you look back and say, oh, I was exactly fucking right about that?

Alex:

I mean, there's a lot of that. Yeah. Yeah. But I mean, it's not whatever. It's not exciting to say, like, well, I was Cassandra, and now, like, this whole whatever.

Alex:

That's not exciting because, like, that's not staking out, like, a positive vision. I will say probably and this might be some place where Emily and I might have some daylight. But if there's a highly constrained problem, like, if you're using language models, or large language models, as a pretraining device for a very well evaluated problem, that might be some place where there is a use. But it does not absolve basically any of the training data theft, any of the energy consumption, all these other problems that are in data center or large language model construction. And so here, I'm, like, a little bit more convinced by some uses of large language models. There's an organization I sit on the board of, which is called the Human Rights Data Analysis Group. They have a very constrained problem where they're taking, like, unstructured police reports, and they are basically doing slot filling exercises with them, and they are evaluating the accuracy of that.

Alex:

But I will say, like, I still don't quite sit well with that, just because of everything else that had to happen to get to that place. But, like, if there's anything that is that well evaluated, possibly. But otherwise, yeah, a lot of the things are confirming what we basically thought would happen. And I am probably Emily doesn't like to take the prediction path, but I'm more fine with it. One of them is, like, I think in chapter three of the book, we're like, are there gonna be ads in ChatGPT? Because that's the only thing that's been able to be monetized on the web.

Alex:

And lo and behold, you know, we see the advertising strategies for everybody. But I'm curious what Emily has to say on this one.

Emily:

I mean, I think in terms of your use case, Alex, certainly and speaking as a language technologist here, the underlying technology of representing words in terms of vector space, you know, which words co-occur with which other words, that's very powerful and can be very useful. It is sad to me that the way this has become usable technology by many people across society is mediated by these large companies who have, you know, all of the liabilities that you're talking about, plus no transparency about the training data, plus the fact that the models could, like, change. Like, you may have evaluated it as carefully as possible, and then the next time you connect to the API, you're using a different model. So, like, those concerns.

Alex:

I will say that in their use case, they are basically using open weights models, so they have, like, full control over all that stuff.

Emily:

So that last bit about the model changing is better. In terms of, like, where I've changed my position, I'd like to go back to the Stochastic Parrots paper actually, which was written in late twenty twenty, and we were sort of pulling together the survey paper, right, talking about the various problems that we saw with the push to make language models ever larger. And there are two things that I think we really missed in that paper. The first is that we did not connect to what was already known even then about the data work that goes into creating these systems, and that has been an enormous issue. It's only gotten bigger.

Emily:

So that's something that's missing from that paper. And the other thing is that in the section where we actually introduced the phrase stochastic parrots and we talk about synthetic text, we thought we were a little bit on thin ice there, because it seemed unlikely in September, October 2020 that many people would get excited about the idea of synthetic text. And I continue to think it's a terrible idea, and the various, you know, downstream harms, including the ones we talked about in that paper, are very much happening. But I was very wrong about how much people would get excited about it.

Sam:

That's wild. That's actually really wild to think back on, to think at the time you thought people would not be that hyped about something like ChatGPT, which then came out, what, like, three, two years later?

Emily:

Two years later. Two years later.

Sam:

Wow. It's been a long six years. Yeah. Okay. I will give you one last question, and then I will cut you loose.

Sam:

I'm sorry. I told you it would be a light jaunt, and I feel like it's been a hard sprint, but it's been super fun.

Emily:

Used to it.

Alex:

It's totally fine. And when you like the conversation, like, this is fine. Like, it goes fast. Yeah.

Sam:

So, I mean, my last question is a fun one. It's like, what makes you hopeful? And we touched on this a little bit earlier. What makes you hopeful about this space as, again, people who are in it and immersed in it every day?

Emily:

Yeah. To me, it's what I'm seeing coming from young people. There's a Medium post, I think in Emily Tucker's Medium, written by a high school student who basically is talking about how important it is to denormalize all of the surveillance. I was just listening to a wonderful episode of Paris Marx's podcast about the Luddite Club, so a bunch of students who were intentionally disconnecting from tech to spend time together.

Emily:

And I think that every time we see pushback of any kind, like, that enables more pushback, just as you were describing earlier, Sam. People feel empowered to push Flock out of their communities the more they know about what it is and the more they see other communities have done this.

Alex:

Yeah. I mean, in the same register, I think the resistance I mean, individual resistance, I think, is huge. I mean, this panel I keep on coming back to was packed. I mean, it was 250 people, with people out the door, at this writing and teaching conference. And we're also seeing, you know, young people turn to analog, to write by hand, or write not connected to any of these tools, or calling out slop.

Alex:

I mean, I am nominally on Instagram and TikTok. I'm more of an anomaly, but I'm on those platforms enough to see people be like, if it is clear that it's slop, I don't wanna engage with this. There's that. There's the collective action.

Alex:

There's groups that are forming. The data center resistance is kinda wild. The data center resistance that's happened has been just across the political spectrum. You know, my mom lives in rural Ohio, and there's, like, people in rural Ohio that are going to meetings and saying, why are we letting this into our community? I'm worried about, you know, my water, my electricity, and worried about pollution, everything, and, like, in deep red places.

Alex:

You know? So that's very heartening. And I think the thing that's heartening about it all is that there are now connections that are being made. So there's the environmental connection, the labor connection, the consumer protection. We're talking about all these kind of vagaries of capitalism, but tech is really the wedge that you get in there.

Alex:

And you're making different connections about who the power brokers are, what they want, how they view humans, how they view the natural environment, how they view the workforce, and people are making those connections. And it becomes the basis for broad organizing coalitions, and that's fantastic.

Sam:

Yeah. Awesome. I completely agree, and that's also what's giving me hope: seeing, like you said, the ways that people, even in unexpected places, are coming together and saying, you know, hey. This is not cool.

Emily:

At risk of being cringe, I think I wanna add in a shout out for 404 Media. I think that

Sam:

Oh, that's my friend.

Emily:

Seriously, the success that you all have had I mean, you took a great leap starting that. And the fact that the kind of reporting you're doing is sustainable really shows that there's a community that cares, and I think that's important.

Alex:

Yeah. 100%.

Sam:

Thank you for saying that. Yeah. And the work that we do, it's a two-way street. So we can't do this kind of work without people like you doing this work. So it goes both ways.

Sam:

Thank you. Okay. Before we get into too much of a love fest, I will cut us off there. Thank you both so much for being here. I so appreciate your time, and I will play us out.

Sam:

As a reminder, 404 Media is a journalist-founded, subscriber-supported company. If you wish to subscribe to 404 Media and directly support our work, please go to 44media.co. You'll get unlimited access to our articles and an ad-free version of this podcast. You'll also get to listen to the subscribers-only section where we talk about a bonus story each week. This podcast is made in partnership with Kaleidoscope and Alyssa. Another way to help us out is by leaving a five star rating and review for the podcast.

Sam:

This has been 404 Media. We'll see you again next time.