The 404 Media Podcast (Premium Feed)

from 404 Media

The Disappearing DOGE Depositions


Episode Notes

This week we start with Joseph's series of articles about the DOGE depositions. He watched hours and hours of them, then a judge ordered them removed from YouTube. But they've already been archived all over the web. After the break, Jason tells us about the AI data labelers who are fighting back. In the subscribers-only section, Jason breaks down what's wrong with all the AI job loss research at the moment.

0:00 - Intro
0:51 - Google Street View's Unmappable City
2:21 - I Watched 6 Hours of DOGE Bro Testimony. Here's What They Had to Say For Themselves
12:05 - DOGE Deposition Videos Taken Down After Judge Order and Widespread Mockery
17:39 - The Removed DOGE Deposition Videos Have Already Been Backed Up Across the Internet
22:36 - 'AI Is African Intelligence': The Workers Who Train AI Are Fighting Back
39:53 - AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet

YouTube Version: https://youtu.be/zLDyXawRawc

Transcript

Joseph:

Hello, and welcome to the four zero four Media Podcast where we bring you unparalleled access to hidden worlds both online and IRL. Four zero four Media is a journalist-founded company and needs your support. To subscribe, go to 404media.co. As well as bonus content every single week, subscribers also get access to additional episodes where we respond to their best comments. Gain access to that content at 404media.co.

Joseph:

I'm your host, Joseph, and with me are two of the other four zero four Media cofounders. The first being Emanuel Maiberg. Hello. And then Jason Koebler.

Jason:

Hello. What's up?

Joseph:

Alright. I'll give a quick shout out to Jason's interview podcast episode that just went up. People should definitely check that out if they haven't already. Jason, do you just wanna give us the very brief overview of what it's about? Because it's pretty crazy what this guy, you know, did and why he did it.

Jason:

Yeah. Yeah. It's a YouTube filmmaker who mapped the only unmapped city in America on Google Maps. As in, there's, like, one place called North Oaks, Minnesota that's not on Street View. And it's not on Street View for, like, very specific and weird reasons that we get into in the podcast.

Jason:

And, basically, he took a drone and flew it around town, and encountered some issues. But I found it to be super interesting. I had no idea that this existed. Actually, honestly, he emailed me because he is making an episode about Flock. And so he interviewed me for that, and I was checking out his channel, and I was like, oh, this is nuts.

Jason:

I had no idea that this was happening. So we did a bit of a tradesies where I'm going on his YouTube channel and he's on ours. But, yeah, please check it out.

Joseph:

Yeah. Definitely. Alright. Emmanuel, do you wanna take us through this first story?

Emanuel:

Yeah. I think we actually have a flurry of stories that happened mostly over the weekend for reasons which we'll get into. But the first one of them is from Joe, and the headline is, I Watched 6 Hours of DOGE Bro Testimony. Here's What They Had to Say For Themselves. Joe, what are these videos?

Emanuel:

Where did you find them?

Joseph:

So these are videos of depositions from two members of DOGE, Justin Fox, I believe, and Nate Cavanaugh. And they were part of a sort of DOGE operation, campaign, mission, however you want to describe it, to cut a ton of government grants at the NEH, the National Endowment for the Humanities. And they went about and they did this, and they were very instrumental in cutting hundreds of millions of dollars worth of grants. And because of that, they are being sued, and I just want to get the organizations correct: they're being sued by the Modern Language Association, the American Council of Learned Societies, and the American Historical Association. Those groups are suing NEH and a number of other parties, including these two DOGE members.

Joseph:

So these video depositions were recorded as part of that and then also just put on YouTube, which I feel is somewhat rare, you know, like we had the, was it the Bill Clinton Epstein one recently? I have never really seen depositions before, maybe that's why I was so captivated, and I watched six hours of Fox's deposition specifically, but very interesting or horrifying videos. And I spent a lot of time going through them, as the headline suggests. Yeah.

Emanuel:

I've seen some depositions. I'm sure Jason has as well. Bill Gates had a famous one about like the monopoly trial, think it was. And of course, Epstein has been in the news a lot. I've seen some of that.

Emanuel:

Also, I just wanna say, it feels like ancient history, but not that long ago, we were writing all about these young DOGE guys trying to make cuts to the government. And this is going to be very interesting for many reasons, but it's the first time we really hear from them directly a lot, for six hours, as you say, and not in a staged setting. Like, I think Trump had some public meetings with them that were filmed, but this is, like, in-depth questioning about what they were doing. So what are some of the moments that you pulled out of there in your six hours of viewing?

Joseph:

Yeah. It's mostly the clips that listeners or readers may have already seen, because the way I saw it was that somebody on Bluesky posted a link to the YouTube channel of one of these organizations that uploaded the videos. And then I saw a quick little clip where Fox was unable or unwilling to define DEI. You know, he was asked repeatedly, how do you understand DEI? The reason being that that was the reason, or the justification, given for severing a ton of these grants, right?

Joseph:

And he kept pointing back to the executive order, which said, we're trying to get DEI out of the government or whatever. But even when the lawyer, the attorney, pushed him to be like, yes, but what's your understanding of DEI? He wouldn't do it. I don't wanna say he couldn't do it, it's more unable or unwilling is sort of the term I'm using, but he's incredibly evasive, obviously. I'm sure we'll play a little bit so people can hear it.

Speaker 4:

How do you interpret DEI?

Speaker 5:

There was the EO explicitly laid out the details. I don't remember it off the top of my head.

Speaker 4:

It's okay. I'm asking for your understanding of it.

Speaker 5:

Yeah. My understanding was exactly what was written in the EO.

Speaker 4:

So can you... I don't remember what was in the EO. So right now, do you have an understanding of what DEI is? Yeah. Okay.

Speaker 4:

So what's your understanding as you sit here today in this deposition?

Speaker 5:

Well, it it was exactly what was written in the EO. And so anytime that we would look at a grant through the lens of complying with an executive order, we would just refer back to the EO. Right. And assess if this grant had relation to it.

Speaker 4:

Okay. But I guess I'm stepping back from your methodology strictly in terminating the grants. Do you have an understanding as you sit here today of what DEI means? Yeah. Okay.

Speaker 4:

So what's your understanding of what it means?

Speaker 5:

Well, I it it it is exactly what was written in the EO.

Speaker 4:

Okay. So why is the documentary about Holocaust survivors DEI?

Speaker 5:

It's the gender based story that's inherently discriminatory to focus on this specific group.

Speaker 4:

It's inherently discriminatory to focus on what specific group?

Speaker 5:

The gender based, so females during the Holocaust.

Joseph:

And then the other parts are, I mean, he gets into very specific examples. There was one about a documentary, a grant for a documentary about black civil rights, and Fox says something to the effect of, well, we cut this grant because it wasn't for the benefit of humankind, which is obviously an absolutely insane thing to say. He does walk that back, and they actually read back to him live in the deposition, they read what he just said. He's like, well, that's not what I meant, blah, blah, blah, but he did say it. And then there was another example of cutting funding for a Holocaust documentary called My Underground Mother.

Joseph:

So really, even though it was six hours of footage and of depositions, it was kind of the same examples over and over and over and over again, as you might imagine, because he just wasn't answering the questions. And I know that people may think, well, what's the value of it? You know, he's being evasive. But as you said, this is sort of the first time, or one of the first times, we've seen or heard them speak for themselves. And even though they're obviously being coached, there is a DOJ lawyer right next to these people that, I don't know if you'd say, is representing them technically, but definitely assisting them in this deposition.

Joseph:

So of course the answers are gonna be coached, they are gonna be massaged, but even then, I still think this is incredibly illuminating. But those are the examples that jumped out. Beyond that, it was just repeating those over and over again, essentially, but very interesting stuff.

Emanuel:

I think whoever the lawyer is who was asking him questions actually did a pretty good job, because with this definition of DEI, it seems to me like the entire point of that line of questioning was to show that it's a ridiculous category. And what Fox repeatedly says is that his definition of DEI is whatever the executive order from Trump states, and that's clearly coaching, as you said, because I think that kind of makes it legal for him. He's like, I was just following the executive order, and I'm following that to the letter, and that's it. But then when he's asked to, like, define DEI, essentially, and this is me interpreting a little bit, it's like, any activity that highlights any particular group. And then obviously it becomes ridiculous, because if there's any initiative for women, if there's any documentary that focuses on women, or certain minorities, it's like, that is DEI, and it's just impossible to do anything.

Joseph:

And he acknowledged that when they used ChatGPT, they were searching for, you know, black, homosexual, LGBTQ plus, but they didn't search for white or Caucasian. And he does acknowledge that, and he actually says, well, we could have done that. Yeah, but he didn't. So that's the difference there. But yeah, you're right.

Joseph:

The line of questioning was just so persistent. I mean, again, it's over six hours, and doing it in different ways where it just showed how sort of ridiculous their position was.

Emanuel:

Yeah, and our previous coverage showed that it was ridiculous in practice, because you would just have these total blanket activities or actions where it essentially looked like they did word searches on studies, and if it included the name of any minority or DEI or gender, it just got removed. You can look back at all that coverage if you want to see how silly this was in practice. Okay. So you published this story, I think it was Friday, right?

Joseph:

Yeah. I believe so. Because for context, the reason I watched six hours was because I was on the plane a lot. So I had a lot of time to watch these, and at the gym. And then when I got back on the Friday...

Emanuel:

Yeah. Yeah. So, perfect flight activity. You land, you published a post on Friday. And then, I think, well, actually, we also published a video that... Several. Yeah.

Emanuel:

Several videos that kind of highlighted some of the things that we're talking about, and all this stuff went extremely viral, both your posts and the videos. And I think it's when a lot of people first learned that this was even happening and saw what this guy was saying. And then that led us to a couple... it got so much attention that some stuff happened because of it. Yeah. And so this leads us to your next headline, which is DOGE Deposition Videos Taken Down After Judge Order and Widespread Mockery.

Emanuel:

So what what happened?

Joseph:

Yeah. You're right in that we were one of the first outlets to, like, clip the depositions and then post those, and then it was very funny seeing the right wing. It was mostly right wing. I think there were other political leanings as well, but basically x.com grifter accounts lifting our clip. It's like, that's a four zero four media font.

Joseph:

Like, that's 100% it. It's nothing to beef about. It was just funny, and just funny to see how that ecosystem works. But we clip those, they go viral, we do that article, as you say. On, I believe, Friday night, so very soon after we publish, the government then issues a filing into this lawsuit, and they say, or ask the judge, we need you to intervene and get the other party to stop the spread of these videos, to get them taken down, because these may cause harassment and reputational harm. That is the argument from the government, that seeing DOGE people saying things in their own words is going to cause reputational damage.

Joseph:

I'll leave it to others to decide, well, maybe that's a consequence of saying things. Then further on in the filing, they do get a bit more concrete, or rather, just before that, it does actually cite our article specifically, the I watched six hours one, and our videos and whatever, and there was a Huffington Post video as well. The government then later on in the filing does say Fox specifically, he was the focus of much of the videos, has allegedly faced death threats because this stuff went massively viral. I'm not necessarily doubting that. I'm sure that people made those threats.

Joseph:

I would say that we haven't seen what those threats are or how concrete they were, but that is the government's argument in this lawsuit. So they do that filing. The judge, I believe, looks like they're going to agree. And there's then an emergency filing from the other parties, these language associations and organizations bringing the case, and they're like, well, actually, there's a massive First Amendment issue here, these videos should be public. They were never under a protective order, so you shouldn't order their removal.

Joseph:

The judge disagreed and late on Friday night ordered that they be removed, and then sure enough, you go to the organization's YouTube channel and the hours and hours of video spread across maybe not dozens, but at least a dozen videos, they've gone completely. They've been wiped. So a pretty wild series of events to go from something that's massively, massively viral, like it's not just us covering it, it's basically everyone, to a judge saying you must remove those from YouTube.

Jason:

Can we discuss how crazy this is? I mean

Speaker 4:

Yes, please.

Jason:

I think that occasionally judges will seal things if someone's safety is at risk, like, sometimes. But I feel like the bar for that is usually pretty high. At least, I'm not a lawyer, but, as I understand it, it's pretty high. It's like, we see court records all the time that have incredibly sensitive information in there, like details about people getting harassed, like addresses, phone numbers. Like, sometimes these things are redacted, but quite often they're not.

Jason:

And those are not, you know, filed under seal for the most part. And here we have a situation where these government employees in a highly publicized case of great public interest, dealing with millions and millions, billions of dollars, I guess, probably, of government grants that have, like, led to people dying all over the world and in the United States because of the types of things that they did. And we are protecting them because people are being mean to them online. Like

Emanuel:

I don't know, it's very wild. Maybe I missed it if you said it, but also, like, these are public servants. Right? It's like, this is about stuff that they did for the government. It's not as if it's a private company, or it's like a family matter. These are taxpayer, you know, funded activities.

Jason:

Yeah. And I don't know if it's one of those things where it's just like, the judge ordered it taken down while they deliberate whether to put it back up, like, as sort of an emergency measure. But kind of regardless, like, this is very... I think it's crazy. I mean, I think that most people do, considering, like, how viral it went and all that sort of thing. But it is not normal, I guess.

Jason:

Like, this is not supposed to happen, I don't think.

Joseph:

Yeah. Incredibly unusual, I would say.

Emanuel:

Yeah. The only information in there that is damaging to them is the fact that people really don't like what they did. Right? It's like they're not talking about their home address or anything like that. Okay.

Emanuel:

So that happens. And then that immediately leads us to another headline here: The Removed DOGE Deposition Videos Have Already Been Backed Up Across the Internet. Very predictable. So where do these videos live now?

Joseph:

Yeah. So this was on Saturday, the day after the judge ordered the removal from YouTube, and then seemingly the organizations, you know, went along with that, because obviously it's a legal order. And then Jason was actually keeping an eye on the data hoarders subreddit, which is a very fascinating place, and there's always very interesting people doing very interesting things there. And I think when Jason flagged it, it was more people discussing backing it up, and maybe that was Friday night, I can't quite remember, but come Saturday, somebody sent me a link to the Internet Archive, and on Saturday, someone had uploaded all of the videos, and I went through them and double checked, like, there's the Fox one, yes, there's the Cavanaugh one. There were actually two depositions from another two NEH officials as well, who were a bit more senior and sort of managing what these DOGE people were doing.

Joseph:

So their depositions were up there as well. So Internet Archive, obviously a very useful resource for preserving this sort of thing. The last thing I'll say, just before I throw it to Jason to talk about the torrent as well: I think, crucially, the judge's order, it wasn't like an order against YouTube, it wasn't an order against, like, a platform, it was an order against the specific organizations in this lawsuit to be like, you have to take steps to claw back these videos. And the most obvious way they would do that is they would remove them from their own YouTube channel, but I don't think they really have any power to remove, you know, Instagram posts or really the Internet Archive stuff. And I don't even know, or I don't really expect, them to be expected to then go do that as well.

Joseph:

Like, I'm not familiar with this judge or sort of their understanding of the internet, but that is just not how it works. Stuff is gonna be out there, and people very quickly archived it. While I was writing this, Jason, you were editing it and you pointed to the torrent. Right?

Jason:

Yeah. And so, I had seen not just that people were talking about backing it up, but there was someone on there who was like, I have the files, and I'm going to torrent. I'm gonna make a torrent, but I don't know how. So he was like trying to learn how to do it. But yeah, it's like, I mean, it's the Streisand effect, which I think is real.

Jason:

I think there's been some studies that the Streisand effect is actually, like, overstated to some degree, that, like, there's an initial spike in interest and people kind of, like, finding something when the government deletes it or when a company deletes it, but that in the long term, it, like, becomes harder to find. But I think that, in this case, like, because it's torrented, now it's censorship proof. Like, it's decentralized. It cannot be deleted. I mean, I think it's still up on the Internet Archive, and I hope that it stays up on the Internet Archive, but that's still a centralized place.

Jason:

Whereas this torrent now, there's tons of different seeders, and it's like, you know, torrents are undefeated in that way. Like, it will live forever somewhere. I don't know. I downloaded copies of it. I'm not deleting them.

Jason:

We haven't uploaded them, but it's like, they're useful to have if we report on it in the future, like, that sort of thing. So it's good to have sort of, like, multiple backups, I guess.

Joseph:

Yeah. I mean, on Saturday, before obviously it was clear they were already on the Internet Archive, I was thinking, like, if we get these videos, do we upload them to our site or something? You know, kind of like what we did with some of the Epstein documents, when Jason bought them off PACER and then we just uploaded them so other people could access them. I was thinking, do we do the same here?

Joseph:

But we didn't need to, frankly, because other people had already archived them. All right. Thank you for asking me those questions, Emanuel. We'll leave that there. When we come back after the break, we're gonna talk about Jason's story about African intelligence.

Joseph:

A very, very interesting trip, I'll say that. We'll be right back after this. Alright. And we are back, Jason. This is one you wrote.

Joseph:

The headline is AI is African intelligence. The workers who train AI are fighting back. So you mentioned on the podcast recently that you took a trip to Kenya. I think at the time you were talking more about the conference, which is sort of the reason you went, you were giving a talk and that sort of thing. But as is detailed in this piece, you also did a fair bit of reporting while you were there as well talking to various people and and going to different events.

Joseph:

What events related to data labeling did you go to while you were in the area?

Jason:

Yeah. So I was aware of this guy named Michael Jeffrey Asia, who wrote a report for the Data Labelers Association, which is his organization and a few other people's, about his time as a data labeler. He was on the podcast for an interview episode a few weeks back. But I felt like there was more to the story than just his story, because it's a whole organization of thousands of people who are the very low paid labor behind AI training. And that's very broadly defined.

Jason:

It's like, a lot of them have worked for Sama, which is this company that has worked with Meta, actually continues to work with Meta. They said that they've stopped, but they're doing their smart glasses data labeling stuff, which came out of, I believe, like, a Swedish newspaper, Czech newspaper. It's in the article. I'm sorry. But there was, like, a really good article about data labelers in Kenya who were looking at all of this, like, highly sensitive video footage from Meta smart glasses.

Jason:

So data labeling is like a very broad category of jobs that I would argue is quite related to content moderation. And so content moderators are people who look at violent content, really like highly contentious political content, sexual content, for different social media companies, and determine whether or not it violates the rules of a given platform. And as we've reported, like, over the last few years, social media platforms have largely stopped giving a shit. And so as social media companies have kind of taken a step back from, content moderation, the jobs there have become a bit more scarce. And it's a similar type of job.

Jason:

Like, data labeling is a similar type of job, and it can include everything from, like, looking at a bunch of pictures and saying what is happening in the pictures, to, like, drawing squares over the faces of people in either footage or images to, like, help train facial recognition systems, to describing what's happening in, like, porn, for example. So that's something that this guy Michael Jeffrey Asia was doing, and his job was, like, eight hours a day he watched porn for some platform. He didn't know which platform it was, because the way that it works is, like, you work through a subcontractor. And he was categorizing, like, what was going on in any given scene so that the platform could, like, categorize it for search.

Jason:

And then also, I don't know, sometimes, like, you can jump to different parts of a video, and it's like, oh, now they're doing this. Now they're doing that. Blah blah blah. So he was doing that. And then after that, he had a second shift with a different job where he was an AI chatbot, like an AI sex bot, essentially.

Jason:

So he was training AI companionship bots that were telling users they were talking to AI, but he was the one who was actually chatting. And he was given

Joseph:

Is he even training AI in that one? Because it's almost like, I mean, yes, with the Meta smart glasses one, they look at images and they're training it that way. With the porn one, it's like, what do you see, you categorize it as a position or whatever.

Joseph:

You're training the data. I mean, I'm sure training is going on in this one, but it almost just sounds like it's all just smoke and mirrors. He's just being, quote unquote, the AI, essentially.

Jason:

Yeah. I mean, that's, like, the very interesting question. And it's like, we've seen different models. Like, we've seen companies that just straight up lie and say that they're doing AI because AI sounds high-tech and whatever, but then you look under the hood and it's a bunch of human beings in Kenya or India or Pakistan who are just, like, pretending to be AI. But then I think that the business models of a lot of these companies are to start with that human labor and then slowly automate it over time and eventually turn it over to the AI.

Jason:

And so it's hard to say, because we actually don't know which platform he was working for. Like, we know the name of the subcontractor he was working for, but we don't know. There's, like, so many different AI companionship bots out there. And because of the way that this industry works, it's like, he basically just sits down at his computer or at a terminal and a window pops up, and he's, like, just given instructions, like, sext with these people. Like, that's essentially what he was doing.

Jason:

And what he was saying was that he was given a persona. And so that persona could be you're a straight man talking to a woman. You're a woman talking to a man. You are a lesbian, a teenage lesbian. Like, he was like, I had to do all this stuff and, like, take on these personas.

Jason:

And I had to switch between the personas all the time. And basically, his whole thing is that, like, this ruined my life in many ways. It's like, I was paid very little. It was quite traumatizing, because, like, I felt pulled between all these different personas. I felt like I had to do things and say things that I didn't want to say, because, like

Speaker 4:

But it's not him.

Jason:

He's like, I'm a Kenyan man and I'm being asked to be, like, a college student in the United States. Like, it was weird for him. And he was also looking at porn all day before that. And he was like, I basically became desensitized. I had trouble having sex with my wife.

Jason:

I had PTSD. I had insomnia, because I was working, like, a zillion hours a day staring at a computer. And he did all of this because his son had lymphatic cancer and he needed a job. And it's like, this is one of the biggest sectors of tech jobs in Kenya. And so I talked to a lot of people in Kenya just because I was at this conference.

Jason:

A lot of them worked in tech, a lot of them worked in journalism, I talked to various Uber drivers. I talked to, like, servers and bartenders, and, like, all of them knew what data labeling was, first of all, and many of them had done it themselves. It's kind of like DoorDashing here, or Uber driving. It's, like, largely gig work that you can just pick up and do on the side. And then some people make, like, entire jobs out of it where they're just doing it, like, all day every day.

Jason:

And so basically

Joseph:

So it's ingrained in, maybe the culture is the wrong word, but, like, data labeling...

Jason:

In the economy for sure. It's a big part of the economy. And it's like, you leave the Nairobi Airport and you get a cab and you immediately drive past the headquarters of Sama, which is the biggest company that does the subcontracting. And it's like, it's huge. It's like a huge campus on the side of the highway.

Jason:

But anyways, I know that was, like, a long wind up. But basically, after, like, over a year of doing this, you know, Michael was like, fuck this. This is terrible. I hate this. Like, I need to fight for better rights.

Jason:

And so he and some colleagues formed the Data Labelers Association, which, I mean, it's not formally a labor union, as in they haven't been, like, recognized by the companies and doing collective bargaining and things like that. But right now, they're, like, growing power to basically push back against these companies and push for better working conditions. So the event that they had...

Joseph:

How are they doing that exactly, the pushback?

Jason:

Well, right now, they're just signing people up. Like, as in, they're just like, are you a data labeler? Do you feel like you're mistreated? Like, here is what's happening, more or less. It's like, it doesn't have to be this way, because people who do data labeling in the United States, there's, like, not that many of them, because they have to be paid minimum wage and things like that.

Jason:

But it's like they're paid better wages. Like they have benefits. A lot of them have like mental health support. A lot of people in other countries have like better labor protections there. And so right now they're doing a lot of like educating of people about how they are being taken advantage of and how this is an extractive industry.

Jason:

And then they also have worked with a lot of lawyers in Kenya, because Kenya has laws that should prevent some of this stuff from happening. So there's a lawsuit against Sama right now. There's a lawsuit against OpenAI. There's a lawsuit against Meta about how these people are treated, about the fact that a lot of them don't have mental health support, about the fact that a lot of them are very poorly paid and don't have benefits, and, like, all this sort of thing. And so I think it's a bit of, like, a two pronged approach where they're trying to get as many data labelers as possible to kind of say, like, I support collective action. Like, I want to make this a better job for people and myself. And then there's, like, the legal aspect of it where, you know, I spoke to one of the lawyers that is suing Meta. And she told me, like, we have laws that should protect against this.

Jason:

It's just a matter of getting them enforced. And some of these lawsuits have been, like, winding their way through Kenyan court for, like, years at this point. And it's just a matter of, like, getting an injunction or sort of getting a result where, these companies will be required to treat the workers better. I think one of the kind of scary things, and, you know, this is not to discourage them at all because what they're doing is great, is that like, a lot of these big companies will go to the Kenyan government and say, if you regulate us, we're just gonna leave the country. We're gonna we're gonna go work in another country.

Jason:

And so that's the kind of, like, thing that they're holding over the entire country at this point. It's not a matter of, like, oh, the workers are pushing back. It's a matter of, like, the government has largely, as I understand it, according to Mercy Mutemi, who's the lawyer I spoke to, kind of looked the other way, because the Kenyan government sees this as a chance to, like, work with American big tech and to, you know, kind of gain access to these jobs, because these jobs do exist even though they suck. Like, they are jobs. And so they're like, oh, we don't wanna lose these jobs.

Jason:

And if we regulate these companies, like, Mark Zuckerberg is just gonna go to like Uganda or something instead.

Joseph:

Or Southeast Asia or something like that. Yeah. Right. Two things I would just mention. You mentioned the sort of similarity to content moderation, and then we have these AI jobs.

Joseph:

There's almost one in between, which is the translation jobs. I remember back at Motherboard, we got a leak talking about how workers were listening to Skype calls to aid Microsoft in improving the translation engine behind that. And is translation AI? I don't know, maybe. People use ChatGPT for translation, all that sort of thing. But that almost seems like a bridge between the content moderation stuff and the AI stuff.

Joseph:

And then I would also just say that, you know, there are some projects which... there's sort of a spectrum, right? I mean, there's the really sensitive stuff, like the porn stuff, and then the Meta glasses especially, and I would probably put, like, listening to smart assistant audio in there as well. Then you have maybe training images for Flock license plate reader cameras. Like we reported recently, they're using overseas workers to train those algorithms. You then also have some stuff which straight up involves the military.

Joseph:

There was this recent article from the Bureau of Investigative Journalism in London, and the headline, I mean, I'll just read it and you'll get the picture: gig workers in Africa have been helping the US military, they had no idea. And that was about Appen, which is this other huge consulting, contracting firm, right? So yeah, we had waves of coverage of the content moderation stuff, and we did a lot of that, and then other technology websites did it. And now there's, like, the wave of the AI coverage as well.

Jason:

Yeah. So there's a few things. One, you're absolutely right. Like, a lot of it is translation. A lot of it is actually content moderation for AI chatbots.

Joseph:

Right.

Jason:

Like, a lot of them do, like, content moderation for ChatGPT. There was, like, a Time Magazine article about some Sama workers, like, maybe a year and a half ago, two years ago, where they were, like, grading ChatGPT on the responses that it was giving to people. And then also they were looking at, like, if someone was trying to, like, make a bomb on ChatGPT or something, they were, like, testing the effectiveness of the guardrails, more or less. And so that really bridges the gap as well. And then there's an article that I mentioned in the interview that I did with Michael.

Jason:

It was on a Substack, written by a Kenyan guy. And it was like, I don't write like ChatGPT. ChatGPT writes like me. And it's very interesting, because a lot of Kenyans, at least according to this article, and Michael said the same, like, when they post on LinkedIn or when they, like, email people, they are getting told that they're using ChatGPT to write their things.

Jason:

And a lot of them say that they are not using ChatGPT. What's happening is that the people who are training ChatGPT how to write, like, how they are tweaking the outputs and just, like, doing that, it's like, it's Kenyan English. And English is one of the two official languages of Kenya. Swahili is the other. But it's, like, basically, like, we have now trained this robot to write like us.

Jason:

And now when we write in the way that we were taught to write in school, we're getting accused of using AI, and we're not using AI. And, like, that's leading to bad outcomes for us, because some of them are like, I'm a writer and now I'm being accused of using AI just because this robot writes like me. And I thought that that was super interesting. Yeah. And then, yeah, the title of the article is, like, AI is African intelligence.

Jason:

I thought that was a really powerful quote from Michael, where he was like, and everyone knows this, they should know this, but it's like, AI is not magic. It's like, there are just, like, zillions of human hours that go into not just, of course, all of the training data that comes in, all of the stuff that's sucked into these tools, but then also managing the outputs of it, and tweaking the outputs of it, and, like, making sure that it all works. It's like, the people who are doing that, not all of them are African, but a lot of them are. And so they're like, this is our labor. We're getting paid $200 a month to, like, do this, and OpenAI is worth a trillion dollars or whatever.

Jason:

Like, they feel like that's not fair. And I think it's very hard to argue with that.

Joseph:

Yeah. It makes complete sense. I'm sure we'll keep an eye on that. But for now, if you're listening to the free version of the podcast, I'll now play us out. If you are a paying four zero four media subscriber, we're gonna talk a little bit about jobs and AI and, you know, maybe there's some inaccuracies about what is being reported or some stuff is being missed out.

Joseph:

You can subscribe and gain access to that content at 404media.co. We'll be right back after this. Alright. And we're back in the subscribers only section. Jason, this is what you published today.

Joseph:

The headline is AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet. So studies keep coming out about job losses and AI. There was this Anthropic one recently. Before we get into sort of what your piece says about it, what was this Anthropic study? Like, what was it saying or claiming about AI and jobs?

Jason:

Yeah. I mean, AI companies do this all the time where they're like, here's how our product is gonna impact the workforce. And for the most part, they are like, a lot of jobs are going to become redundant. People are going to be laid off because they are saying like, our technology is so powerful that it can replace workers and therefore, you know, society can become more efficient and all of this. And then some of them also are like AI is gonna create new opportunities.

Jason:

People are gonna be able to, like, retrain themselves, reskill themselves, blah, blah, blah. And so AI companies are constantly doing these projections where it's like, 37% of all jobs are at risk over the next five years, like, things of that nature. And Anthropic published a paper earlier this month that was like that. It's hard to say, like, what the takeaway from it was, because all of them are quite vague. But basically, these researchers from Anthropic analyzed how people were using Claude, like, for what tasks, and then they lined those tasks up with, like, the tasks that you would do at certain jobs. And then they would say, like, software developers are at risk, like, in the next few years, which I think is true. And, like, healthcare workers and secretaries and so on and so forth. And then they made this, like, really ridiculous chart.

Joseph:

That's what I'm looking at. That's why I'm squinting into the camera right now, because it's very small. But what makes you think it's ridiculous? It's showing, like, office admin, and I can't even make out that one. You do a better job of describing it.

Jason:

I struggle to describe the graph, but it's a circle, and then on the outside of the circle, there's, like, all these jobs. So there's, like, transportation, management, business and finance, computer and math. And then there's, like, an area chart that is observed AI coverage and theoretical AI coverage. And observed AI coverage is, like, how much of that job can already be done by AI, according to, like, our proprietary guess as Anthropic. And then the theoretical coverage is, like, as our tools get better, as AI gets better, we believe, like, this percentage of a specific job or job sector can be automated by AI.

Jason:

And the vast...

Joseph:

Majority is theoretical.

Jason:

The vast majority is theoretical. And then also it's like, I don't know, it's like, construction is very low on the theoretical and, like, essentially nothing on the observed. Whereas, like, business and finance is, like, a little bit actually observed at the moment. And then the theoretical is, like, almost a 100%. So I don't know exactly what that's suggesting, and we can talk about it, but that's, like, what they're doing.

Jason:

They're trying to, like, predict which sectors people are gonna lose their jobs in. And, like, the long and short of it is that anytime any of these papers comes out, there's, like, a wave of news coverage. There's a bunch of discussion about it. And I understand why, because it's very important to kind of try to understand how AI is going to, like, result in people losing their jobs and things like that. But it's not just Anthropic.

Jason:

Like, universities are doing these kinds of studies all the time. Think tanks are doing these kinds of studies all the time, and it's always kind of, like, a thing that people end up covering.

Joseph:

Yeah. So I think when these studies come out and maybe well, this isn't the reason AI companies do them. They do them because it makes their technology look great, right, as you say. But when they do come out, it's probably intuitive for people to go, well, I keep hearing that ChatGPT or Claude is being used for writing emails and office work. So, you know, intuitively, it's going to maybe get rid of a bunch of office jobs and maybe I should be scared.

Joseph:

Like, you can see the intuitive response. So what's your thought about it? Because you have, like, a different opinion, sort of something that these reports are missing.

Jason:

Yeah. So I wanna get there in a second, but another thing I wanna say is that these reports are made up. Like, they're invented, because they're all, like, thought experiments for the most part. And so when they say, like, 80% or 90% of business and finance tasks can be automated by AI, that is, like, a theoretical projection of the future. These things come out and they look like scientific papers. They're, like, 100 page PDFs.

Jason:

They have, like, charts and graphs. They have, like, figures and, like, all that sort of thing. But then you look at, like, how did you come to the number that 90% of all business and finance jobs are at risk, or 90% of all tasks are at risk? And the underlying assumption there is that they just ask other people in the AI industry, do you think that this will be possible at some point? It's not like there's an experimental or scientific basis for this.

Jason:

And so I think it's important for people to understand that, like, these studies are, like, not studies. They're, like, AI companies guessing. It's marketing material. And then even the academic ones, like, they're kinda interesting, but still, they're based on, like, survey data. They're kinda based on, like, list your tasks, and, like, can AI do them one to one?

Jason:

And so the way that these are working is, like, for a lawyer, a lawyer will be like, here are the 15 things that I do in a day. And then there'll be, like, an AI scientist or AI coder, like, programmer person being like, oh, yeah, like, LLMs can do that now, or they will be able to do this eventually. And then if enough of those tasks can be automated, they decide that, okay, that job is at risk. And that's not quite how employment works. It's like, often, like, AI will create new issues that humans either need to manage, or maybe it creates, like, new opportunities for different types of jobs within these sectors.

Jason:

Then sometimes it's like, yeah, the AI can do this, but it does it really shittily, and therefore, like, we don't want it to do that task. So there's kind of, like, that to preface all of this. But because of the way that they do this, where it's like, we are going to focus only on the tasks of specific jobs as the vector for, like, AI replacement, they are not looking at the macro situation that AI has created. And so that is my argument, which is, like, yes. Like, maybe an AI can do a... I actually don't believe this, but, like, let's say an AI can do a journalist's job.

Joseph:

Right.

Jason:

Like, most of the tasks. You would therefore say, like, okay, journalism jobs are at risk from AI. But what they're not considering is the fact that the biggest use of AI right now is for AI slop, AI spam, AI scams, AI porn. It's, like, polluting the Internet. And so what has happened is, like, the macro environment where news outlets exist, newspapers exist, content creators exist and can make money, that is being destroyed by AI.

Jason:

And therefore, those jobs are dying now, not because AI can do the jobs, but because the business model of the Internet is changing as we speak. And, like, journalism is a tiny, tiny part of the entire job sector. But then when you think about, like, all the things we've been talking about over the last three years that we've had this company, two and a half years, like, the Internet creates a lot of jobs. Like, a lot of people have jobs on social media. A lot of people do side hustles on social media.

Jason:

A lot of people have little shops where they sell things. And it's like, if you have a clothes shop, you're now competing against AI generated t shirts that are drop shipped from somewhere. You're competing in SEO against that, you're competing on social media for that, and you're competing against AI generated content across the board. If you're a writer, if you're an artist, if you're an author, if you do, like, anything on the Internet, if you're a small business, like, if you're a restaurant, like, maybe it's harder for you to reach your customers, because you can't just, like, post on Instagram anymore.

Jason:

I mean, you can, but maybe not as many people see it. If you're a YouTuber, maybe you can't make ends meet anymore because you're competing with AI slop. And it's like none of these studies consider that at all. And my point is that it's probably quite difficult to measure, but they should consider this because the Internet has created millions and millions and millions of jobs worth trillions of dollars, the economy, and we're replacing that human Internet with a dead Internet, zombie Internet as we speak. And and that is already leading to job losses, but to what extent it's like quite hard to measure.

Joseph:

Yeah. I mean, I guess, like, I'm trying to think of two reasons why they might not do that. And that'll lead to my last question, which is that they probably, in these reports, don't want to go looking at how many people are using their tools to generate AI slop or AI porn, because that is a very inconvenient fact for them. Right? Like, hey, we put in the report that people are just sexting with our chatbots all day.

Joseph:

It's like, they don't wanna highlight that use case even though, as Emanuel and Jason have shown, you know, repeatedly, that is what people are using these AI chatbots for. But

Jason:

Maybe Emanuel can talk, because the example I always think about is Civitai

Joseph:

Right.

Jason:

Which Emanuel covered a million times, is very important. But it's like, the difference between what that company pitched itself as versus what it was actually being used for is sort of, like, the dichotomy that I'm speaking to. It's like, these companies have a high minded idea of what their customers or their user base is using them for. But then if you look under the hood, it's like, no, they're just making porn with it or whatever.

Emanuel:

Yeah. I believe you're both referring to one of the first articles I wrote for 404 Media, which was itself responding to a report from a16z, and, like, their head AI analyst and investor there did a market analysis about, like, the top 50 AI services on the Internet, and kind of broke it down by category about what people are using them for, and it's like, generating voices, generating images, you know, all these, like, normal services, somehow ignoring the plain fact that, like, many of the top companies are either explicitly about pornography or slop, or, even though they're not explicitly for that, that's what people do with them. Like, Character AI, which was, like, a quote unquote companion app, was really just for sexting. Civitai, as Jason mentioned, was, like, number seven on the list, and was mentioned as, like, an image generation company or a model sharing company, ignoring the fact that all people use it for is making porn.

Jason:

Yeah. I mean, I feel like a lot of what they wanted also was, like, for Hollywood studios to, like, use it for special effects or storyboarding or whatever. And it's like, no, people are just using it for, like, non consensual porn.

Joseph:

Then they'll just buy Ben Affleck's AI company instead. I mean, it would be really good to have data from, like, OpenAI, ChatGPT, to be like, well, how much is your tool being used for sexual content? And I guess that would be difficult, because again, obviously ChatGPT is not the same as Civitai, they're doing different use cases for the most part there, and now OpenAI is introducing this sort of explicit user mode as well. Like, I don't know if they're ever gonna release data about that, but it would be useful for all of these discussions. I guess just the last question I want to ask Jason is, more broadly, like, do you think AI companies really understand AI slop?

Joseph:

Like what it is and how significant it is and how pervasive it is?

Jason:

Yeah. I'm gonna be careful, but I went to an AI slop salon at the Hewlett Foundation, which is just outside of Stanford in Menlo Park, a few weeks ago, that was being thrown by the Hewlett Foundation and Columbia University. I wrote about it in the Behind the Blog last week, or I guess two weeks ago.

Speaker 4:

Goodness.

Jason:

And I was invited to talk about my reporting there, and there were a bunch of people who work for big platforms there. And I wanted to do this because I felt like none of the platforms ever talk about AI slop. I had talked to them off the record at some point, or, like, gotten little statements from them a couple times when we first started talking about this, but I think once they started to understand how big the problem was, they were like, we're not talking about this at all. And so I actually had no idea how these companies think about the problem, or if they're thinking about the problem. And I gave a talk there about my reporting.

Jason:

Nothing that any of you guys haven't already seen, like, as in, it was all public reporting that I've done. And I showed a video of my Instagram algorithm. And some of the people there, who work on these products to some degree, like, work on social media products, were like, what the fuck was that? Like, how did that happen? Why did that happen?

Jason:

We didn't know that this was happening. And I get the sense that they don't understand. Like, they just don't know how bad the problem is. Like, how much AI generated content there is on these platforms, how viral it goes, like, how kind of, like, a lot of it is really vile. A lot of it's, like, super either sexually violent or non consensual, like, vibes, or, like, I don't know.

Jason:

There's kind of a lot of, like, underage stuff on Instagram that's very questionable, that's AI generated, that just, like, pops into your feed sometimes. And it's like, this is not really supposed to be happening. And I get the sense that they don't have any sense of, like, how bad the issue is. And I don't know if that's because they're, like, myopic, or I don't know.

Jason:

Or perhaps it's because these companies have a lot riding on AI and the successful integration of it. And so they're very reluctant to kind of crack down on it. But, yeah, I'm hopeful that, like, coming out of that... it was Chatham House rules, which

Speaker 4:

Which is why it was incredible.

Jason:

Yeah. It's a British thing. Right?

Joseph:

Yeah. Yeah. Technically. And it's like, you can use the information there, but you have to paraphrase it and you can't say who said it. Yes?

Jason:

Which I believe I just complied with. You did. Yes. And, yeah, it's not something I would normally do, but I felt like just getting any sort of information about how these people think about it has been so hard that I thought it'd be worth my time. And it was.

Jason:

It was definitely worth my time. And hopefully, like, more will come out of it. But, yeah, I think that they don't super know. And then talking about, like, ChatGPT, Anthropic, and whatever, it's like, it may be the case that a lot of the uses of Claude right now at this moment are, like, to write memos or do emails or something. But, like, that's not what Anthropic is claiming to be publishing.

Jason:

They're claiming to be publishing, like, what AI as a sector is gonna do to employment. And so there's all these open source models that are being used for abuse. There's all these ways that, like, Claude might be one part of a chain of AIs that is being used for abuse or being used for spam. And so I don't think that these companies necessarily have full insight into how, like, AI as a sector is changing stuff. It's like, yeah.

Jason:

I don't know.

Emanuel:

Don't you think it is... I mean, I agree with all that. I would just add, and I'm speculating, but I think in a way they don't want to know, because it doesn't serve their mission, and that's very much what we saw with, like, our whole arc of reporting on moderation. It's like, did Facebook understand how many racist groups existed on the platform? How much violent activity was coordinated on the platform? That it was driving violence in Myanmar and stuff like that?

Emanuel:

It's like, somewhere, I think, it's a big enough company with enough people whose job it is to keep track of the stuff that they know, but it's not the focus of anyone, and it's not driving a lot of activity in ways to, like, mitigate the problem, because they're very focused on, we're making a positive product that's connecting the world and that we can sell advertising against. And that's not even unique to tech. It's like, any company you work at, like, the way things become a problem a lot of the time is people kind of, like, willfully ignoring it until they no longer can, which is, like, kind of where we come in.

Joseph:

Yeah. I was gonna bring up the comparison to content moderation, which is very similar, I think. Alright. Maybe with that, I'll play us out. As a reminder, four zero four Media is journalist founded and supported by subscribers.

Joseph:

If you do wish to subscribe to four zero four Media and directly support our work, please go to 404media.co. You'll get unlimited access to our articles and an ad free version of this podcast. You'll also get to listen to the subscribers only section where we talk about a bonus story each week. This podcast is made in partnership with Kaleidoscope and Alyssa Midcalf. Another way to support us is by leaving a five star rating and review for the podcast.

Joseph:

That stuff really helps us out. Here is some of a very long one from all mars: Important, relevant reporting. I'm basically the opposite of a tech enthusiast, but this is one of my favorite podcasts. The reporters make it really clear why tech stories matter and how tech and tech billionaires are impacting our lives. Thank you so, so much.

Joseph:

This has been four zero four Media. We'll see you again next week.