The 404 Media Podcast (Premium Feed)

from 404 Media

AI Slop Is Drowning Out Human YouTubers

You last listened September 11, 2025

Episode Notes

/

Transcript

This week, we talk about how 'Boring History' AI slop is taking over YouTube and making it harder to discover content that humans spend months researching, filming, and editing. Then we talk about how Meta has totally given up on content moderation. In the bonus segment, we discuss the 'AI Darwin Awards,' which is, uhh, celebrating the dumbest uses of AI.

YouTube Version: https://youtu.be/QN9IotBJmyQ
Jason:

Hello, and welcome to the 404 Media Podcast. Joseph and Sam are out this week. So you get me, Jason Koebler, and I'm here with Emanuel. What's up, Emanuel? Hey.

Jason:

How are we feeling about a full on takeover of this week's show?

Emanuel:

Finally, the time has arrived for us to do a four hour podcast of us just rambling about whatever not related to technology or our reporting at all.

Jason:

That's what I wanted to do, but you seemed scared. You seemed scared to do it. You didn't wanna upset the listeners, so we're gonna talk about our stories. Speaking of, Joseph and Sam are both out reporting, so they're not gallivanting. They're doing stuff that you'll hear about quite soon, I'm sure.

Emanuel:

People might have seen Sam was in the courtroom for the sentencing of Michael Pratt, who was the ringleader behind the GirlsDoPorn case, which she has covered for years, and I think she was the first to post about the sentencing and some shocking statements from the Jane Does involved in the case, so people could check that out. But, yeah, that's why Sam's not here, and Joe is on a secret mission in unnamed parts of the world.

Jason:

Unnamed parts of the world. We can't even contact him, so hopefully he's okay. But he'll be back, I think, next week. In any case, yeah, check out Sam's GirlsDoPorn reporting. We are not gonna talk about it on the podcast because we think it will still be relevant when Sam is back.

Jason:

So more to come there. So for the first story, we're gonna talk about one that I wrote, which is called "AI Generated Boring History Videos Are Flooding YouTube and Drowning Out Real History."

Emanuel:

Okay.

Jason:

So should we do a confession?

Emanuel:

Yeah. I mean, there's a lot to get into here, but I wouldn't mind if we spent most of this podcast talking about how you stumbled upon these videos, because we can't go into this without going into, like, your YouTube consumption habits. Can you please explain to our audience when you watched slash listened to the videos we talk about in the story? Yeah.

Jason:

So all of the videos that we're about to talk about, I discovered at 3AM or later because most nights, not all nights, but most nights, I will put my AirPods in, and I will turn on a YouTube video whether that's well, I go through phases of what I listen to. And my most recent phase is listening to history videos. So for a while, I was listening to history videos about indigenous cultures in the Western Hemisphere run by a guy called Ancient Americas, who I actually talked to for this story. He talks about, like, you know, the Mayans, the Aztecs, but then also, like, the Toltecs and other indigenous societies that we don't hear about very much, incredibly interesting. But also, he speaks at a cadence that puts me to sleep very quickly.

Jason:

And so for a few months, I would be listening to his hour long videos, and I would make it, like, five minutes and fall asleep. And then I would just leave it on, and eventually, you know, the video would end and YouTube would autoplay something else. And, like, one night, I I guess a couple of weeks ago, I, like, woke up, like, I jolted awake listening to this, and maybe we'll play the clip here.

Clip:

In the end, Anne Boleyn won a kind of immortality, not through her survival, but through her indelible impact on history.

Emanuel:

Before we move on, let's just, like, dwell on the fact that I mean, this is probably a night where you're having trouble falling asleep, so you put in your AirPods, put on one of these videos, and try to lull yourself to sleep. And I have done that before, but I feel like it's probably one of the least healthy things that I do to myself. Like, that can't feel good to fall asleep that way.

Emanuel:

That can't be like a restful good night's sleep.

Jason:

I mean, I really don't know. I really don't know. I tried to do research on this topic once. So back at Vice, we had theme weeks occasionally, and I think it was 2017, we did something called sleep week. And so everyone on staff would write an article about some aspect of sleep, you know, the techification of sleep, the science of sleep, whatever.

Jason:

And mine was, I was trying to figure out if my falling asleep with a TV show on habit was bad for me, because we hear quite a lot about, like, blue light, which is emitted from your cell phone, from, like, you know, a laptop, a game console, whatever. It, like, disrupts your sleep. It messes with your, like, melatonin and circadian rhythm, etcetera. But, like, what was unclear to me was that I was watching these TV shows with, like, my laptop facing away from me at first, because I was doing it on a laptop for a while. And then that eventually morphed into listening to YouTube videos.

Jason:

So I would, like, start the video on YouTube and then put my phone face down because I didn't pay for YouTube premium.

Emanuel:

It's so funny that it's a premium feature, but, yeah, go ahead.

Jason:

Yeah. I mean, I've now started paying for YouTube Premium because of this. Like, I've been listening to these videos now for years, and literally within the last, like, month, I wanna say, you pretty much can't listen to and I always listen to things on YouTube, because I so rarely, like, actually watch it. I just am listening to it. But, like, every five minutes, you're getting an ad, and the ad is really loud, and that would be really jarring.

Jason:

So I would be falling asleep listening to this stuff, and then an ad would come on, and I'd be like, oh, that's probably not healthy. And then sometimes you get a YouTube ad that was, like, thirty five minutes long. So I would, like, wake up, and I would be in, like, minute twenty of some ad. Anyways, I talked to sleep scientists, and it's not studied that well. It's not really studied that well what, like, audio does to your brain, because your brain is nominally consuming these voices, but I feel like I'm not processing really any of it once I actually fall asleep, and I find it to be, like, quite calming and soothing.

Jason:

And I think a lot of people like fall asleep to a podcast. There's like various boring history podcasts. There's like soothing sleepy time stories podcasts. Trust me, I've tried them all, like ASMR stuff.

Emanuel:

There's a thing that's happening on YouTube now also where it's like whatever YouTube algorithm you're in, it will suggest that, but in like a you can sleep to this format. So I listen to like comedy podcasts, and I get offered via the recommendation algorithm, like, four hour compilation videos of, like, this comedy podcast to fall asleep to. So it's like it's doing that for everything. Sorry, like two more things quickly, because Joe is not here to stop us, and then we can go back into the journalism. One, I do this sometimes, but like when I do it, I definitely feel like shit.

Emanuel:

Like when I wake up in the morning, I'm like, that was bad. That was like a bad choice, and like the healthy thing to do is not watch anything and like read a book, and whenever I do that, I'll read like half a page and fall asleep like a baby and have a really good night's sleep, but I don't know, for whatever reason that you understand, I'm sure it's like hard to do that. And then the second thing, there's a really good episode of, you know, Dexter's Laboratory, the cartoon

Jason:

Yeah. Yeah.

Emanuel:

Where he falls asleep trying to learn French in his sleep, so he puts on, like, a learn-French record when he goes to sleep, but the record skips, and the only phrase he hears is "omelette du fromage," and then the next

Jason:

I remember that one. Yeah.

Emanuel:

Yeah. Yeah.

Jason:

It's the only thing he

Emanuel:

can say, but somehow it's the best day of his life because it's like always appropriate for the situation. Great episode. Okay.

Jason:

Wait. Wait. But so, not to the journalism yet, though, because it's interesting, though, because I agree with you entirely, and I actually rarely use YouTube to fall asleep initially. Like, I will often fall asleep, but then, like, one hour later, I will wake up, and it will be the middle of the night. And I don't wanna turn a light on. Like, maybe I could get a Kindle or something, but I don't wanna, like, turn something on to read from, and I also don't wanna have my eyes open staring at my phone.

Jason:

And so this is why I do it. It's like, I'm like, I'll look at my phone for thirty seconds to put on one of these videos and then try to fall back asleep, and it usually works. And at some point in the night, I usually, like, toss my AirPods away, like, unconsciously. And so, I mean, I agree it's probably not the best, but there's a huge market for this. Like, there's a huge market for this because of what we're about to talk about.

Jason:

It's like yeah. So, anyways, like, I was listening to these human creators who spend a lot of time.

Emanuel:

You're watching these history videos. Yeah. Let's talk about, like because it sounds like, in the story, you, in the morning, realized that you were listening to something that was AI generated, but, like, in order to illustrate that, can you describe, like, what a good one of these videos is like and then what this AI generated video sounded like?

Jason:

Yeah. So, I mean, interestingly, like, a lot of the ones that I listen to are not designed to make people fall asleep. They're just, like, history podcasts, for lack of a better term, usually with visuals. And sometimes they're more than that. Like this one guy we're gonna talk about, History Time, this man named Peter runs it.

Jason:

A lot of Peters in this story actually. You know, he will read like four or five books on a topic, and then he will consult like academic journals, and then he often does site visits. And so, like, if he's talking about an ancient war or something like that, he will, like, go to the battlefield and will film there. And so he's making documentaries that are often two, three hours long. And, like, any given one of these documentaries can take months.

Jason:

They can take years. He told me some of them he worked on for over five years, just in terms of, like he was doing other things also, like, other videos came out, but, like, he will be working on multiple of these at a time, and they're really popular. Like, he became quite a big YouTuber. He has, I think, like, 1,200,000 followers. And, you know, he puts out, like, a video every month or two.

Jason:

And that is probably, like, the upper limit of what a human being who's trying to make really high quality, really well researched videos can do. If you're reading multiple books for each one, if you're reading multiple academic papers about, like, different perspectives on some ancient war, some ancient civilization, or the types of food that they ate in medieval times. Like, there's all sorts of different videos that I've become interested in. I've become interested in this channel called Let's Talk Religion, and they will talk about, like, different perspectives on, like, drinking alcohol in Islam, for example. And, like, where did this come from?

Jason:

Like, where did the idea that Muslims shouldn't drink alcohol come from, like, through a historical perspective. Like, that's an example of one of the videos that I've watched. There's another one that's, like, an entire history of Zoroastrianism, which is a religion I did a project on in eighth grade, and I'm like, oh, now I've like, it's a predecessor to Christianity and Islam and Judaism, and it's still practiced by some people. But I was like, oh, what's the deal with this? And it's, like, a very academic look at these sorts of things.

Jason:

It's a human being who's doing, like, a ton of research. And then, like, not only are they doing a ton of research, they're then also finding imagery to go with it, which I don't watch, but, like, a lot of people do. And this imagery is often, like, Creative Commons licensed stuff. It's stuff where they're getting permission from universities to use it. Like, a lot of the people I spoke to for this article said that they are doing a ton of legwork just to script the video and then, of course, like, edit the video, you know, add music to the video, record it with their own voices, like, all of this stuff.

Jason:

And they've made careers out of it. Like, a lot of them made careers out of it because history stuff does really well on YouTube. You know, a lot of the videos, like, the most popular videos have millions of views, and so they're collecting money. A lot of them have Patreon, stuff like that. But, anyways, like, along comes these new AI slop channels that I started getting auto fed.

Jason:

And, you know, as, like, a journalist and someone who writes about AI slop, I did recognize them as slop pretty much immediately when I was coherent in the morning and was scrolling through my, like, YouTube watch history. And it's funny because I had, like, quote unquote, watched, like, dozens of them Right. According to my watch history, but I didn't even realize it. I was consuming them in my sleep.

Emanuel:

Yeah. The good videos remind me of, like, a very popular early history podcast, Hardcore History, that maybe people have heard of, and I would describe them as meticulous, and then also something that, I think it's Dan Carlin, the guy who makes it, often jokes about, is that I think he'll go, like, two years without putting out a podcast because of, like, all the research, and then, like you said, the editing. Right? It's, like, very carefully put together. So that's, like, a good version. Describe one of these AI generated videos, because in theory, I don't know, you could go to ChatGPT, generate some text about some period in history, feed that to an AI tool that has AI voice generation, and you basically have a video. Like, what does that actually look like in practice?

Jason:

Yeah. So I would say that the accounts are called, like, Boring History for Sleep, and then there's a bunch of different variations of it. They have, like, clearly AI slop thumbnails, first of all, and all of the visuals are pretty clearly AI slop. Although it's pretty hard to tell in some of them, because a lot of the visuals are, like, medieval paintings, and AI is pretty good at doing that. And so if you're not, like, up to date on, like, historic oil paintings, you might not notice them as slop.

Jason:

But here are some of the, like, video titles: "Why It Sucked to Be an Aztec Sacrifice Victim" and more, "Totally Wrong Medieval Facts Everyone Still Believes," "Medieval Irish Food," "Peasant to King," "The Queen Who Slept with Her Stepsons," "Your Life as a Eunuch," "Crazy Facts About Queen Cleopatra," etcetera. So it's a lot of stuff like that. And then, you know, I watched some of these, and there are facts in here. You know, I think whoever is making them, I'm not exactly sure what tools they're using.

Jason:

I was trying to reach out to them. I would imagine they're probably using some combination of, like, ChatGPT for scripting, maybe, like, ElevenLabs to record the voice. It's always the same narrator. The narrator has a British accent. It's not really, like, whispered.

Jason:

It's, like, very soft spoken. And one thing I noticed is that it uses a ton of adjectives. Like, it just is like, imagine you are, you know, a peasant in fourteen hundreds England. It was a cold and dreary night. The willowy wisps, like and it's, like, really a lot of filler stuff.

Jason:

Mhmm. And then it's, like, kind of a list of very basic facts. So I basically, like, I spoke to the guy who runs Ancient Americas, which is a channel I really like. I talked to History Time, which is a channel I really like, and then another one called The French Whisperer, which is an ASMR artist from France who does a lot of research as well for his videos. He does, like, science and history videos.

Jason:

And pretty much all of them were like, yes, we are aware of these things. The guy who does History Time told me, like, I think they may be modeled on my videos, which could be possible, because a lot of the topics are the exact same, and then the voice sounds really similar to his, which is concerning. And then the videos are usually, like, two to three hours long, which is how long he's doing them. But he's basically like, it's really, really, really, really surface level.

Jason:

It's, like, glossing over. Like, a lot of them are maybe not totally wrong, but it's like reading a Wikipedia article.

Emanuel:

You know? It's like reading

Jason:

a Wikipedia article, but not clicking, like, any of the citations or anything like that.

Emanuel:

I've read a lot of AI generated books for some of our reporting, and I would say there are two things that the AI generated content almost always has in common. One, yes, it reads like a Wikipedia article, it's very surface level. And then, two, and I wonder if you agree, if you've listened to enough of these to say, but, like, something that is good about why I liked history as a subject in school and what I like about history podcasts and stuff is usually the person who is making it is not just listing facts about the time or the figure. It's like there's some sort of point or narrative that they're trying to get to, some sort of thing that they're trying to say about this period of time, and I imagine that these videos lack perspective, basically.

Jason:

They definitely lack that. And then, you know, I am not a historian. I've watched a lot of these, but something I really appreciate is that history feels like it's a conversation with different perspectives. And so a lot of these the best human made channels will be like, well, this, like, academic study or this historian says this, but, like, this other person who is also a well renowned expert in the field says that, and it's not quite exactly the same. So you need to, like, consider it from these different perspectives and then sort of decide, you know, what you believe to be the truth.

Jason:

And I think that there's absolutely none of that in here. And it definitely doesn't cite any of its sources. Like, it's so funny. I'll be listening to one of these Ancient Americas videos, and it will be like, oh, like, these Harvard anthropologists wrote a study, you know, dating these ruins to a specific time, but, like, these other historians say that's impossible, because the migration patterns that were known during this time period don't line up, or whatever. And, like, that's obviously very vague, but it's a lot of that.

Jason:

And I find that to be extremely interesting. Well, these AI ones market themselves as boring history. None of the other ones say boring. I don't think they're boring, but they put me to sleep nonetheless, because I'm tired, not because I think it's boring.

Emanuel:

So you talked to this guy, Pete Kelly, who runs History Time, one of the more popular human YouTube history channels. You talked a little bit about how everyone thinks these videos are very surface level, but I thought he had some very interesting things to say about, like, what does this mean for the audience, and what does this mean for how we are recording history or explaining history now on the Internet?

Jason:

Yeah. So I think that this one was a really tough one for me, because, obviously, I've been writing tons and tons about AI slop, and I think that this is some of the most insidious stuff that I've seen. And that is because these real historians, again, spend months making a single video, and these AI slop factories make videos that look and feel the same in form, if not the actual information. Like, every day they're publishing one, and there's dozens and dozens and dozens of these channels. And so there's tons of them out there.

Jason:

And there's no way that even all of the world's historians, if they were making videos, like, they're going to be drowned out by this very quickly. And so I think, like, the discoverability aspect is a massive problem, because if you search for, I don't know, what was life like in medieval times, like, you're gonna get a bunch of these. Maybe you'll get a really high quality human made video still, like, at the top right now, but, like, that's been changing really rapidly. And so I think it's gonna be, like, really, really hard to find these things. Every human creator that I talked to said that they've seen their YouTube views go down this year, and they think it's because they're now competing with this slop.

Jason:

So it makes it, like, less tenable for them. It makes it a lot harder for new people to break into this space, because, you know, if you have no following whatsoever, like, how are you going to build one if you're competing with this stuff? And then, also, it just, like, again, glosses over the intricacies of history and what are often really complicated, often very contested things. So you're getting, like, a really surface level sort of thing. And then, you know, that's to say nothing of the fact that a lot of the companies that make large language models are trying to make them, like, more centrist and, like, less woke and whatever.

Jason:

And so you can, like, see Grok, like, changing its answers about historical events in real time according to what Elon Musk wants, and things like that. And so I think it is really, really insidious, and I think it's something very difficult for humans to fight back against, because there's just not enough time in the day to be competing with the AI on this, I think.

Emanuel:

I'll do a slight tangent again because no one is here to stop us, but it's like, I agree that, like, across the board, I think sometimes people wonder if we're, like, over focused on what we call AI slop, and this is why I don't think we are, because this is the most insidious and, like, terrifying outcome, in my opinion, where AI companies hoover up all the human made research, content, art, whatever it is, and then we reach some sort of tipping point where we can't even find that stuff because we're flooded with all this fake bullshit, and we're already seeing it. What you just said is true for news websites, for books, for academic research, for YouTube, for, you know, adult entertainment content creators. It's like everything is being drowned out by AI generated slop, and it's easy to imagine a situation where we're not even able to find the human stuff, and that's terrifying. And I just had one of those weird weeks where it's like, I was looking at Bluesky and Facebook, and I feel like every other post was someone complaining about some AI generated slop they found in a completely different category of life.

Emanuel:

It's like, oh, I'm watching TV, and, like, clearly this art in this ad is AI generated, and it looks like shit. I'm scrolling Instagram, and this influencer is AI generated. I'm buying a book from Amazon, and all these books are AI generated, and people are, like, very fed up, and, like, that is why we focused on it and why we continue to focus on it, and it's not clear how it's going to shake out, and it's a very big problem.

Jason:

Yeah. I mean, I think, like, as we've said, there's very, very little incentive for anyone enabling this to fix it. It's like, Google is pushing the tools to make this stuff, and so YouTube has said that it's going to, like, demonetize low effort content, but it's not. It's not demonetizing this. It's like, you know, Google has a lot riding on Gemini and Veo and, like, all these other AI tools that they make.

Jason:

And so I don't think that it's going away, and I think it still feels really early days. Even though things have changed so much since we started reporting on this, there's still probably only a handful of people actually making these videos right now.

Emanuel:

Mhmm.

Jason:

And they're so easy to make. It's like, well, there's gonna be more and more people doing the same thing very soon, and I don't know what the Internet looks like after that happens. I think it's, like, a very bleak situation. And then the last thing I'll just mention is that a lot of these accounts are using, like, spam and, like they're basically, like, all commenting on each other's YouTube videos in, like, a concerted way. So they're, like, manipulating the algorithm in addition to flooding the platform.

Jason:

It's like, I found dozens and dozens of these accounts. I think they're probably all run by the same person, because they all, like, link to each other and they all comment on each other, and you can tell that they're doing, like, bot type activity. And so, you know, add that on top of all of the other problems we've just said, and it just gets really difficult. And then the only other thing I'll mention is that History Time told me, like, he is making a lot of his videos by going to, like, special academic libraries where you, like, check out, like, the old English books, like, the old manuscripts and stuff, and is, like, going through that sort of thing. And, you know, these AI companies have tried to train their LLMs on, like, as much stuff as they can get a hold of, but they're still largely just scraping the Internet and sometimes scraping, like, commercially available books.

Jason:

Whereas, like, a lot of these history YouTubers and historians are like I don't know. They're going into, like, the archives in a basement somewhere and, like, reading the old scrolls and,

Emanuel:

like Yeah.

Jason:

So he's like, I'm giving you, like, a really diverse, interesting perspective, whereas this AI is just, like, taking what's on Wikipedia and, like, regurgitating it somehow. Should we leave that there? That was a real indulgent segment.

Emanuel:

That's good. Only two and a half more hours to go.

Jason:

Only two and a half more hours to go. Alright. We'll leave that there. And when we come back, we are going to talk about Instagram and a story that Emanuel did. Okay.

Jason:

We are back, and the story is "Instagram Account Promotes Holocaust Denial T-Shirts to 400,000 Followers." I feel like this is something where you woke up one day and were like, I'm going to write about this account that I've known about for a long time, even though I have become so, like, inert to this type of content on Instagram and the Internet. I guess, what is the account, and, like, how did you first find out about it?

Emanuel:

So I don't know when was the first time I pulled on this thread, but I got fed into an algorithm on Instagram that I think you're familiar with as well, where I don't even know how I would define it other than, like, racist. Like, it's not even MAGA, it's not even right wing or Republican or conservative on issues that are in the mainstream, even if you strongly disagree with them or think that they are racist, like ICE raids or whatever it is. It's not even that, it's just, like, straight up, this race is bad, you know, Jews control the world, Black people are a burden on our society, just, like, a lot of mocking of Anne Frank and George Floyd, and just, like, very, very inflammatory images and language and stereotypes of the most vile kind you can imagine. And Jason, you're familiar, Jason is nodding. It's this algorithm that you get into that is awful, and I've been looking at that stuff for, I don't know, about a year now, and I didn't really feel like it was worth covering, because people know that that stuff exists on Instagram, because other people get fed into this algorithm as well.

Emanuel:

And there's no effective way for us to, like, encourage Instagram to ban thousands of accounts that are doing this or hundreds of accounts that are doing this. But as I'm watching all these racist memes, as I'm scrolling through Reels, I saw one that, like, has the exact same aesthetic, saying the exact same type of stuff, but it was then also showing off t shirts and hats and hoodies that had some of that same imagery on them and selling them, and it looked like, not high end, but not

Jason:

Like, not drop shipping?

Emanuel:

Not drop shipping. I would say now that I've clicked around Shein for an article that Sam was writing last week, it's, like, above Shein level of quality, and I was like, okay, what's going on here? And I'm not going to name the channel, because, unfortunately, Meta decided that this channel didn't violate their terms to the point where it would ban it. But it's a huge channel with a huge following, 400,000 followers, real people interacting with it, and it's selling merch with all this, like, vile stuff on it, and then other people are sharing selfies of themselves wearing the merch.

Emanuel:

So it is, like, a very real account that is just promoting and monetizing hate speech on Instagram, and, I mean, we can talk about how we got there, but despite sending Instagram clear evidence of all this happening, they were like, you know, they removed a couple of videos that I flagged, they've kept other videos that I flagged, and generally, they just let it slide.

Jason:

Yeah. I think I mean, you sort of touched on it, but I guess a big thing for me is, like, I could log on to Instagram right now and scroll, like, through Reels, maybe, like, one thumb swipe, and be like, that violates Instagram's rules. Like, that is horrendous. Like, I see such horrendous stuff on Instagram, and, you know, we have reported extensively about the fact that, you know, Meta has rolled back a lot of its restrictions on hate speech, a lot of its restrictions on, like, racism and misogyny and antisemitism and all sorts of just, like, awful things, transphobia. But even given that it has rolled back a lot of these things, like, even if you sort of agree with the idea, which I don't really, that social media should be more like anything goes spaces, a lot of the accounts that I see are in blatant violation of the stated rules of Meta.

Jason:

And so I guess, like, I don't really know what to take from that other than, like, they aren't doing content moderation like they used to. They have, you know, really gutted those teams, and they just simply, like, don't care. And I'm curious sort of what you think, because I know you can log on. I know if I said, like, hey, right now, go find, like, the worst Instagram account, you probably will find something, like, really, really bad in, like, two seconds. Without even looking, it will be fed to you.

Emanuel:

Yeah. I mean, I think that's exactly the thing and why you have to write an article like this once in a while. We all see this stuff, we're desensitized to it, and then I saw this channel. And then, honestly, like, another reason I wanted to write it is The Verge wrote an article about this antisemitic shirt that was advertised on a TikTok shop, where it's like, you can buy stuff directly from a TikTok account, and I was like, that's totally fair, and it's, like, good to call TikTok out on this shirt, and they reported it, and TikTok took it down. But that sort of snapped me out of it, where I'm like, if this is a story, I'm swimming in, like, a totally much more terrible level of garbage every day, and there's more of it, and they're also selling shirts, you know what I mean? So I was like, okay, let me look at the facts of the story and see if it's worth covering.

Emanuel:

And it's like, as I'm, like, you know, writing notes about what this story would be, I'm like, this is crazy. We're in such a crazy place with Instagram that you just, like, have to wonder what's going on there. You know, I'm not even gonna, like... I don't even know how I feel about, like, what is the correct way to do this? I'm not sure, you know what I mean? But the fact is that Instagram has decided that it's not an everything-goes space.

Emanuel:

It doesn't allow pornography, it doesn't allow various forms of hate speech, and that is how it decided that it manages this huge influential platform. And I go into this example in the story where a few years ago, in response to Trump's election and people being very, very upset about what was going on on Facebook and how extreme it was and how, as you and Joe exposed when you wrote for Motherboard, how ridiculous their moderation efforts were, they formed this thing called the Facebook Oversight Board, which they framed as, like, the Supreme Court of Facebook moderation. And essentially, the way that it's supposed to work is that it's, like, an independently run body with independent members who are like judges, and when there's, like, a moderation dispute on Facebook, it gets elevated to the Supreme Court, the Facebook Oversight Board, and they then, like, you know, very carefully look at the facts and make a recommendation to Meta about what to do, which, as far as I could see, they usually adopt. And there's this case that started in 2020 where somebody posted, like, an antisemitic Squidward meme where it's like, you know, it suggests that Jews control the world or something like that.

Emanuel:

And the history of it is that people see the post, they see it's antisemitic, they flag it to Instagram, where it was posted, and Instagram repeatedly decides to keep it up. So it keeps getting reported. Instagram keeps saying it doesn't violate the rules. It keeps going like that until it's, like, elevated to the oversight board, they deliberate it for a year, and then they put out this ruling and they're like, we decided after much consideration that this should be removed, and from now on, all types of content that suggest that Jews control the world or something should be removed. And this whole process took four years.

Emanuel:

Like, this meme was posted in 2020, and the ruling came down late last year. So Meta has spent so much time and so much effort to have this charade about rules and make decisions about what is and isn't allowed on the platform, and then you scroll Instagram, and it's all of that stuff nonstop. Right? Like, you get fed into that algorithm, that is all you see, and it's upsetting on the level where it's like, okay, well, what is the point of having a moderation team, and what is the point of having an oversight board, if you don't even attempt to act on your own policies?

Emanuel:

Right? And again, it's not like I wrote the story and then published it, and I was like, and I caught Meta missing something. I sent them the post. I sent them the fucking post. I showed them the account, and I was like, here's every single thing that it says, and, you know, it's a violation of your policies, and they're like, shrug, we don't care.

Emanuel:

And I'm not some random user, right? It's like, I have a relationship with these people to a degree. We're not friends, you know what I mean? We're not

Jason:

They do respond to your emails, though,

Emanuel:

and they know who I am, and they're just like, whatever, man. We don't care about this. And it just, yeah, I mean, I don't wanna talk in circles, but it's like

Jason:

Well, they also know that you have a platform. Like, they know you're writing an article about it. It's not like, oh, this is gonna be reported by, like, random user number 4273, and, you know, if we don't take down their report, they're going to, like, write a Yelp review about Facebook or something. It's like they know you're writing an article and that people are probably gonna, like, read the article and see what has been left up. And, like, what has been left up is heinous.

Jason:

It's, like, heinous stuff.

Emanuel:

Yeah. It's really bad, and, like, I'm sympathetic to the moderation challenge, as we always are. Right? Like, it's impossible to report on this stuff and not come away with a conclusion where it's like, wow, it's really, really hard, if not impossible, to moderate all the bad content on these giant platforms. And right now, to be honest, I think it's particularly challenging, because, like, with this account and other accounts, there's this range of stuff you can say where, you know, there's a bunch of memes and shirts and posts from other accounts that are very critical of Israel and are anti-Israel, and I think by any definition of free speech and hate speech, like, that's fine.

Emanuel:

It's like you can totally criticize the government. You can say America is bad. You can say that Donald Trump is bad. You can accuse people of genocide, and that's totally fine. And then, you know, it slides into, like, you know, Jews control the media and stuff like that.

Jason:

Caricatures, like, historically, like, depictions of Jewish people that we've, like, long decided are antisemitic, like, classical antisemitism.

Emanuel:

And, again, to the point of the piece, like, Holocaust denial, which is very clearly laid out in Facebook's policies and very clearly something that this account is doing. So it's like, I recognize that there is a challenge there, but if you step back for a minute, any human being who looks at this stuff can tell that this account is bad and has bad intentions and is racist and antisemitic and all these things. So it's like, I can recognize that they have, like, a really big challenge in front of them, and at the same time be like, anyone with a modicum of humanity can look at this stuff and be like, fuck this, get it off my platform, and it's just, like, clearly not something they're interested in doing.

Jason:

Yeah. I mean, I think we've maybe talked about this. I definitely thought about it quite a lot regarding the Meta Ray-Bans at ICE raids story that I did a few weeks ago. And it's like, a lot of the PR people at Meta, you know, I don't know exactly what's going on behind the scenes there, but our experience is when we wanna talk to someone at Meta, we send a request for comment. And it has now been years since we were able to talk to anyone in, like, a kinda, like, let's-interview-you sort of way on the record at Meta.

Jason:

Like, there was a period where I would be like, oh, I'm curious, like, how your hate speech policy works or, like, why you're allowing this sort of thing. And so I would send a request for comment, and then I would get sent to, like, a policy expert there. And I would do, like, an on the record interview with them. Meaning, I wasn't talking to, like, Meta's public facing people. I was talking to people who were, like, actually making the policies, actually making decisions, whatever.

Jason:

And I found a lot of those people to be quite, like like, I didn't always agree with them, but it seemed like they were trying hard. Like, they were trying hard to, like, solve a difficult problem. And now and it's been this way for years at this point, like, last couple years. But now it's like you talk to a PR person, and then the PR person sends you a statement, or they say, like, we're not commenting. And it's like you have no idea whether that PR person, like, talked to anyone internally at Meta, whether they, like, it was actually escalated in any way.

Jason:

Like, you have no insight into that process whatsoever. And that's fine. Or it's not fine, it's not ideal, but it's just like, that's the way it is now. But it's weird because the PR people that you deal with will act as though everything is the same as it always has been in terms of, like, oh, we're trying hard to operate in good faith and sort of, like, make decisions that are good for society, and so on and so forth.

Jason:

And it's like, the actual reality is that Mark Zuckerberg is, like, getting dinner with Trump, like, regularly. He's, like, going to Mar A Lago. He's, like, sitting in the Oval Office at these meetings. He's, like you know, I don't know if you saw this video the other day from when Zuckerberg was sitting next to Trump in the Oval Office, but Trump asked him, like, how much money are you investing in The United States? And he, like, makes up a number.

Jason:

And, like, he was like, oh, like, $9,000,000,000, sir, or something like that. And then or no. I think it's, like, $600,000,000,000. I think that's what he said. Don't remember.

Jason:

It doesn't matter. But then, like, a minute later, on a hot mic, Zuckerberg goes to Trump and is like, oh, you, like, put me on the spot. Did I, like, say the right number, sir? And it's like... this is what is happening, in my opinion: they're not taking this stuff down because if it catches the attention of Donald Trump, it's gonna cause problems for the company.

Jason:

Like, when they were moderating more, like, better, when they were actually doing, like, moderation, conservatives got mad. Trump got mad. Elon Musk got mad. Everyone got mad at them and said that they were, like, being too woke, and there were, like, repercussions for them. They used to also fear companies would stop advertising with them.

Jason:

And it's like, there's not that fear anymore, because there's not that many other places for companies to put their money, and so they don't care about brand safety that much anymore. Which is, like, you know, your Procter and Gamble ads showing up next to, like, a post on how to drink bleach, like Procter and Gamble bleach, to kill yourself or something. Like, that is just not a concern anymore in the way that it used to be.

Emanuel:

I think that when they were doing moderation very seriously and also making a show of it, they thought what a lot of people naively but understandably thought in 2016, where it's like, Trump was elected, that was kind of a fluke, and it's going to swing back and we're going to return to some, like, not even progressive, just, like, a liberal order of things, and they were preparing for that. And when it became apparent that that is not the case, they were like, fuck it, and that's where we're at. And the last thing I'll say to this, I'm saying it again, but I just want to stress it: whatever the reason is, even if we're totally wrong about why this is happening, it doesn't change the fact that it is, I think, impossible to overstate how bad it is for society for us to be awash in this imagery. It's not normal, it's not good. I think a lot of us, because there's so much of it and we're used to it, and we grew up on the Internet and we saw a lot of this stuff, we're like, you know, okay, whatever, but it makes me kind of sick to think about how much of this is out there and how it's being shoved in our faces constantly.

Jason:

Dude, I also hate to say it, but it's like, you could always find this stuff on the Internet, but, like, I don't know. When I found it on the Internet, I found it in the, like, context of reporting, or on, like, how deep does the Internet go? Like, what is on, like, rotten.com, and then, like, linked from rotten.com, or, like, some weird forum to some, like, other weird forum or whatever. And so it was, like, difficult to find, first of all. And second of all, I came to it with the context of, like, I know I'm in a deep, dark, bad part of the Internet, and I've, like, clicked too many times.

Jason:

Yeah. Whereas now, it's like you have the entire social-media-using populace of the entire world shown this stuff when they log on. And so many people consider, like, Instagram to be the main thing that they do on the Internet, or Facebook to be the main thing they do on the Internet.

Emanuel:

The moderate one. Like the, like, the fairly normal-ish one. But, yeah, that's a great point. It's like, that used to be part of the sales pitch for our type of reporting. Right?

Emanuel:

It's like, we're going to the dark corners of the Internet. And now the dark corners of the Internet are the front page of Instagram. It's, like, algorithmically being spoon-fed to you.

Jason:

Yeah. It kinda makes me feel bad as a reporter because I'm like, oh, like, I need to find something to write about today. Like, I don't need to go spelunking anymore. I just need to, like, click the Instagram app and, like, scroll for five seconds. Yeah.

Jason:

But, anyways, anything else on this? Should we leave it there?

Emanuel:

Let's run. Let's leave it there.

Jason:

Alright. So we are gonna leave that there. If you are a four zero four Media subscriber, we're gonna have a bonus segment for you about the AI Darwin Awards. If you're listening to the free version of this podcast, you can get access to our conversation about the AI Darwin Awards by subscribing to four zero four media at 404media.co. You will get a bonus segment every week on our podcast as well as our behind the blog segments on Fridays.

Jason:

You get other bonus podcasts. You get other stuff as well. It's a it's a good deal. So consider subscribing. But if you're listening to the free feed, we'll play you out now.

Jason:

Okay. We are in the bonus segment, and it's just me and Emanuel. So no rules at all, I guess. Even fewer rules than there were on the previous segments.

Jason:

But we are gonna talk about the AI Darwin Awards, which, really, it's just a website that a dude made, but I thought it was funny. Matt Gault wrote about it. The article is AI Darwin Awards Show AI's Biggest Problem Is Human. Emanuel, are you familiar with just, like, the regular Darwin Awards? Surely you are.

Emanuel:

I've definitely heard the name. I've never really looked into it. I'm gonna guess what it is, and you tell me. I think it's, like, an award for, like, the dumbest ways people died. Is that what it is?

Emanuel:

Okay.

Jason:

That is what it is, and Matt wrote about it a little bit in the article, and I think they're still going on. But, yes, it basically came from... no, where did it come from? Well, it came from the eighties. Like, the early Internet. It came from Usenet.

Jason:

And I believe every year they did it, they had a competition on Usenet or wherever for the dumbest ways that people died in any given year. And I think it's still going on. I'm not sure. The article is not about the real Darwin Awards, so I'm not super up to date on it. But Matt Gault has one of the examples from 1993, where a Canadian lawyer would jump into the glass in his high-rise office building every single day to show that the glass was indestructible, for example.

Jason:

And one day, it shattered, and he fell from the 24th floor, and he died. And so he won the Darwin Award for that year. I don't know... the one that always sticks in my mind is, there was, like, a missionary a few years ago who went to an island where there's an uncontacted tribe, and he was killed, like, immediately.

Emanuel:

Was that a Darwin award, or was that just

Jason:

I don't know if it was a Darwin award, but that's one where I would give the Darwin award to this person, because everyone told him, like, if you go to this island, they're gonna kill you. And he went to the island and he died immediately, like, upon arrival.

Emanuel:

So the AI Darwin Awards are not about people dying. Correct?

Jason:

No. They're about people who are deploying AI in really stupid ways, or people who are deploying AI really, really recklessly and, you know, kind of immediately are facing consequences. So, like, immediate consequences for deploying AI. Interestingly, a lot of the nominees for the AI Darwin Award are things that we have reported on. So there is the guy who gave himself a nineteenth-century psychiatric illness after he asked ChatGPT how he could remove salt from his diet, and he replaced it with sodium bromide and gave himself temporary psychosis.

Jason:

Not that it was AI-induced psychosis, but it was, like, caused by what he was eating. There was the Chicago Sun-Times AI-generated reading list full of books that didn't exist, which is something I wrote about. The Tea dating app, the whole saga of them sort of being hacked and using, like, AI to, quote, unquote, verify whether someone was a woman, was nominated, but was disqualified because it wasn't really AI, I guess.

Emanuel:

It uses AI. The app uses AI in various ways, recommendation and, I think, facial recognition and things like this, but it wasn't central to the idiocy of the Tea app.

Jason:

Yeah. So they actually disqualified it on the website. It says, quote, the app may use AI for matching and verification, but the breach was caused by an unprotected cloud storage bucket, a mistake so fundamental it predates the AI era. So I think we all agree with that. It's not really

Emanuel:

I feel like one reason it might have been nominated at all is that people were convinced that it was vibe coded. Have you seen this? Like, we got a lot of tips about it being vibe coded, and it's just, like, not the case. Vibe coding being, you know, using, like, AI-generated code to build something.

Jason:

Yeah. I actually missed this one, which I think just happened, and I'm really sad we didn't write about it. But I guess Taco Bell launched an AI drive-through, and we have written about AI drive-through windows, but there's a glitch where someone, like, used a sleeper-cell wake word and ordered 18,000 cups of water. Yeah. And the write-up on the Darwin Awards is, quote, Taco Bell achieved the perfect AI Darwin Award trifecta: spectacular overconfidence in AI capabilities, deployment at massive scale without adequate testing, and a public admission that their cutting-edge technology was defeated by the simple human desire to customize taco orders.

Jason:

I thought that one was pretty good.

Emanuel:

We sent Matt Gault to test... was it Taco Bell or Chick-fil-A? I think he tested

Jason:

No. It was a third-rate one. It was neither of those. It was, like... it was a chicken place, I believe.

Emanuel:

Not Cranes? Or

Jason:

I I don't know what Cranes is. Cranes?

Emanuel:

Cranes? Is that what it's called?

Jason:

No. Cane's. Raising Cane's? Cane's.

Emanuel:

Canes. Yeah.

Jason:

I don't even think it was that one. Whatever.

Emanuel:

We sent

Jason:

him Bojangles. Bojangles. Bojangles. Bojangles.

Emanuel:

So we sent him, and he was like, oh, it was fine, but we didn't have the motivation, I guess, or the idea to have him try and fuck with it. That would have been interesting.

Jason:

Yeah. It's interesting because I very rarely say yes to PR pitches that come my way, but I got a PR pitch that was under embargo, meaning you have to agree not to write about it until the company, like, formally announces something. Like, basically, you get information about it so you can, like, report the article and prewrite the article and so on and so forth. And I agreed in this case because I was curious. It said something like, AI drive-through technology.

Jason:

And the, like, thing that they were announcing was that one AI-powered drive-through company acquired, like, another AI-powered drive-through company, and it was, like, the world's most boring story. So I agreed to the embargo and was never gonna write about it, but then they delayed it, like, 14 times. I kept getting updates. Like, we pushed it. We pushed this back.

Jason:

We pushed back this acquisition.

Emanuel:

You're yelling down to the printers?

Jason:

Yeah. I'm just like, okay. Sorry, we can't do that article.

Emanuel:

Scrap the front page.

Jason:

Yeah. So there is that one. But, yeah, I just really like the idea of the AI Darwin Awards in general because if you're listening to this, you will certainly know that, you know, we care a lot about when AI goes wrong. It's usually either, like, pretty impactful or, like, really stupid and often pretty funny. And so the person who made this agreed, and I thought this was just, like, a fun little story to talk about.

Emanuel:

Can you think of anything that would be in the running in our history of reporting on AI?

Jason:

Oh, man. So this is kind of the opposite, but I feel like there are many, many, many examples of AI that is actually just like a bunch of humans doing stuff.

Emanuel:

What I was gonna say. Yeah.

Jason:

Yeah. So, I mean, right after we launched four zero four Media, there was a 3D modeling company that was like, we will make 3D models for your video game with, like, you know, a $300 monthly subscription, and the AI will make it for you. And, actually, it was just a call center of people who were doing, like, really quick, usually pretty shoddy 3D models in, like, sweatshop conditions. I felt like that was pretty

Emanuel:

I was gonna mention, like, a very similar case, but on a much larger scale: Builder AI. Have you heard about this one?

Jason:

I completely forgot what that is, to be honest.

Emanuel:

Builder AI, we kind of missed it. We would have covered it if we had a way in, but Builder AI was, I think, an Indian AI company, and they were doing, like, AI code generation. And it was literally just 700 poorly paid software engineers in a building doing the coding, and it raised some ridiculous amount of money, like billions of dollars, and it is now in bankruptcy because people found out that there was no AI. It was just people.

Jason:

Yeah. That's a good one. I think there's also a couple, like, AI finance ones. And, I mean, he did this as a stunt, so it's totally fine, and I actually thought that this was quite clever, but, like, Casey Newton, when ChatGPT launched its AI agents, he was like, order me some milk or something. It ordered him, like, $400 worth of milk from, like, you know, a Safeway that was, like, 700 miles from his house or something like that.

Jason:

So I feel like there's probably some AI-agents-gone-wrong ones. There's also some of the, like, AI interviewing software. Like, I think McDonald's was doing AI interviews, and they left some of the stuff exposed. They were, like, leaking all sorts of stuff, which is maybe not the AI's fault, but definitely, like, AI was disqualifying people slash perhaps offering them jobs when they shouldn't have been. So that's a good one.

Emanuel:

These are obviously all the lighthearted examples. We're not even getting into, like, you know, this person is in jail because the facial recognition was wrong.

Jason:

Yeah. I believe that's part of the criteria for the AI Darwin Awards, because it is supposed to be somewhat lighthearted, and it is not supposed to be, like, AI facial recognition misidentified someone and now they're in prison, or an AI insurance algorithm turned down someone's health insurance claims so they couldn't get coverage and then they died, which, like, happens all the time, to be clear. Also, like, a lot of AI surveillance, which we talk about all the time. But this is as described on the website.

Jason:

It's like, an example is, quote, seen a tech executive confidently deploy an untested AI system because, quote, machine learning fixes everything? Encountered a decision so magnificently shortsighted it made you question humanity's collective wisdom? We wanna hear all about it. And they describe it as spectacular artificial intelligence misadventures. So maybe misadventure is the correct term. Yeah.

Jason:

And, yeah, if you're listening to this and you know of any that are, like, unreported, please let us know, because we would love to do more reporting and give these awards more things to consider.

Emanuel:

We did it, Jason.

Jason:

We did the pod without running it totally off the rails.

Emanuel:

I think it's our best pod yet. No?

Jason:

It's easily the best pod that four zero four Media has ever recorded, bar none, I would say. I would say. Okay. Well, I guess I think we're all back next week, which is bad news for the audience because you'll miss out on this dynamic. And Joseph will probably make us stick to time or something like that.

Emanuel:

Yeah. Yeah.

Jason:

And he'll read the scripts properly, which we don't like. Just kidding. Anyways, thank you for listening to four zero four Media. This is the four zero four Media Podcast, a production of four zero four Media LLC. As always, we will be back with a new episode next week.

Jason:

This episode was mixed, mastered, and produced by Kaleidoscope, and you can subscribe to us and find more at 404media.co. I am this week's host, Jason Koebler, and we'll see you soon.