The 404 Media Podcast (Premium Feed)

from 404 Media

DOGE's Website, Hacked


Episode Notes


Transcript

This week we start with Jason's story about anyone being able to push updates to the DOGE.gov website. Then we talk about other stories about the DEI.gov and Waste.gov sites. After the break, Sam tells us all about some lawyers who got caught using AI in a case. In the subscribers-only section, we chat about a true crime documentary YouTube channel where the murders were all AI-generated.

YouTube version: https://youtu.be/AArZaKFP6m8
Joseph:

Hello, and welcome to the 404 Media Podcast, where we bring you unparalleled access to hidden worlds, both online and IRL. 404 Media is a journalist-founded company and needs your support. To subscribe, go to 404media.co, where you'll also get bonus content every single week. Subscribers also get access to additional episodes where we respond to their best comments. Gain access to that content at 404media.co.

Joseph:

I'm your host, Joseph. And with me are the 404 Media cofounders, Sam Cole — Hello. Emanuel Maiberg — Hello. And Jason Koebler.

Jason:

Hey. What's up? Definitely not a robot. Yeah. Definitely not a robot.

Joseph:

We had some audio issues. I guess we'll see if you turn into a robot halfway through, and we'll deal with it as it comes. But right now, let's talk about the first story, and it is one that Jason wrote: Anyone Can Push Updates to the doge.gov Website. There is some context to lay out here, but I think the funniest place to start is just that some people, you know, defaced the DOGE website.

Joseph:

What did they write on it, Jason? And then we'll get into the how and the why, etcetera. But I think what they did first is probably interesting.

Jason:

Yeah. So the doge.com website — dot gov. Oh, yes. Doge.gov.

Joseph:

Yes.

Jason:

It's a website that didn't exist, like, at the beginning of last week. And Elon Musk did that interview with media in the Oval Office, behind the Resolute Desk, with Donald Trump and his son, and he got asked a question about transparency, about what DOGE is doing. And he said, we're the most transparent organization in the history of mankind, something like that. And then he was like, just go to doge.gov to see what we're doing. And if you went to doge.gov, there was nothing there.

Jason:

It was just a blank website. And then the next day, it was updated to have just a stream of X posts from DOGE, and then it was updated again to have this database of supposed cuts and structure of the government and things like that.

Joseph:

Yeah. You could click through and it would be like, oh, here's the army or whatever, and it has this number of employees, and this is the average salary they get and the average age of an employee. Right?

Jason:

Yeah. And so some of these pages were defaced to read, quote, these "experts" — and "experts" was in their own quote marks — left their database open. And then another one said, this is a joke of a .gov website. And then I've seen a third one that said, this .gov site is hosted on insecure Cloudflare Pages, which happened over the weekend.

Joseph:

So that's pretty funny. That happens. It gets defaced like a lot of websites do. You know? There's always people trying to deface government websites or corporate websites or whatever it might be.

Joseph:

This one's, like, a little bit different because of the way they did it. And, I mean, I don't think we have to get too technical, but, like, what was the issue here? Like, was it, you know, a fancy vulnerability, or was there something exposed? Like, how was this being defaced?

Jason:

Yeah. I found it to be pretty interesting. I'm not a web developer, so some of the technical details might be a little bit off here. But I spoke to two different web developers who independently and separately found this vulnerability and then messaged me about it within about an hour of each other. And then some other folks sort of verified it after.

Jason:

But, basically, like, doge.gov is not hosted on doge.gov. It's not hosted on a government server. It's hosted on Cloudflare, which is an Internet infrastructure company that does work with the federal government — it has, like, a special program for the federal government. But it was hosted on a Cloudflare Pages website. And so, essentially, the page itself was not doge.gov.

Jason:

It was some Cloudflare URL. It was, like, a long string of a URL, and we shouldn't say what it was. But, basically, like, these web developers inspected the source code, found out where the page was actually being hosted, saw that it was just, like, a random Cloudflare page where code had been deployed from, like, a GitHub or, like, some sort of code repository. The database that was, like, powering this website was deploying to this Cloudflare page, and then that Cloudflare page was pointing to doge.gov. Now that's, like, a little bit complicated, but, basically —

Joseph:

It's where it's pulling the data from, basically. Yeah. Yeah. From the Cloudflare Pages site. Yeah.

Jason:

And it was pulling the data from the specific databases that were on the Cloudflare page, and they were able to essentially find the API endpoints for these databases, which were left exposed. Meaning, they were able to find out where the page was pulling from, and they were able to push their own database records to the database, which were then reflected on the live page. So, like, the TLDR is that they were able to edit the database that was powering the live page by being able to push new entries to it. Yeah, I asked if they were able to edit, like, existing entries, meaning, could they fuck with the data that's actually on the website? And neither one said that they had tried, and we are not allowed to ask people to go poking around government websites.

Jason:

And so it's sort of unknown whether they could have done more damage than this sort of defacement that they did.
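
To make that class of mistake a bit more concrete: below is a minimal, hypothetical sketch of a Cloudflare Pages Function that writes to a D1 database, showing the kind of server-side authorization check whose absence would let anyone who finds the endpoint push records that the public site then renders. The route, bindings, and table names are invented for illustration; the episode does not disclose the real endpoint or code, and this is not a claim about how DOGE's site was actually built.

```ts
// functions/api/records.ts — hypothetical Cloudflare Pages Function (TypeScript).
// Types like PagesFunction and D1Database come from @cloudflare/workers-types.
// Every name here (route, bindings, table, fields) is invented for illustration.

interface Env {
  DB: D1Database;       // D1 database binding configured on the Pages project
  WRITE_TOKEN: string;  // secret that only trusted deploy tooling should hold
}

export const onRequestPost: PagesFunction<Env> = async ({ request, env }) => {
  // This is the kind of server-side check whose absence lets any visitor who
  // finds the endpoint write records that the live page then displays.
  const auth = request.headers.get("Authorization");
  if (auth !== `Bearer ${env.WRITE_TOKEN}`) {
    return new Response("Unauthorized", { status: 401 });
  }

  const body = (await request.json()) as { agency: string; employees: number };

  // Parameterized insert into the (hypothetical) table the site reads from.
  await env.DB.prepare("INSERT INTO agency_stats (agency, employees) VALUES (?1, ?2)")
    .bind(body.agency, body.employees)
    .run();

  return new Response("created", { status: 201 });
};
```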

Joseph:

Yeah. To be clear, for legal reasons, we never ask people to go do this. But if two people independently find it and tell us about it — and I think only one of them did the defacement, right? — if one of them decides of their own volition to go and do that, well, thank you.

Joseph:

That helped verify, but we're never gonna ask you to do that, obviously. So

Jason:

I do think I can talk a little bit about, like, how this came to be, because it's not super sensitive. But, basically, I, like, got a message with a link to doge.gov that went directly to the page that had already been fucked with. And so that was them proving, like, hello, I've already done this.

Jason:

And then I talked to them and was like, well, how did you do it? And they explained. And then that was the same vulnerability that a different person had discovered. And I asked that other person, like, have you modified anything? And they said, no.

Jason:

Because that's probably a crime, could be a crime, and I don't want to do that. But if I wanted to, I certainly could because they found, like, the same thing that the other person did.

Joseph:

Yeah. It's pretty interesting verification. It reminds me of some other cases I've had where somebody broke into, I think, a stalkerware company — you know, this malware that abusive partners put on people's phones and that sort of thing. And they'd actually managed to get a ton of data from the company, but one way they wanted to prove their access was by also doing a defacement. And if I recall correctly, what they did was they defaced it.

Joseph:

They put my name on it, which — I don't know. Okay. Thanks. And then they pushed it to the Wayback Machine, so it was archived. So I could go back and be like, oh, yesterday, somebody, presumably these hackers, put my name onto this website.
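
As background on that verification trick: the Wayback Machine exposes a public availability API, so a reporter can later confirm that a snapshot of a page existed at or near a given time. Here is a minimal sketch, assuming Node 18+ for the built-in fetch; the target URL and timestamp are placeholders, not anything from the case described above.

```ts
// Check whether the Wayback Machine holds a snapshot of a page near a given
// time, using the public availability API. Node 18+ (built-in fetch).
// The URL and timestamp below are placeholders for illustration.

interface WaybackResponse {
  archived_snapshots?: {
    closest?: { url: string; timestamp: string; status: string; available: boolean };
  };
}

async function closestSnapshot(url: string, timestamp: string): Promise<void> {
  const api =
    `https://archive.org/wayback/available?url=${encodeURIComponent(url)}&timestamp=${timestamp}`;
  const res = await fetch(api);
  const data = (await res.json()) as WaybackResponse;

  const snap = data.archived_snapshots?.closest;
  if (snap?.available) {
    console.log(`Closest snapshot: ${snap.url} (captured ${snap.timestamp})`);
  } else {
    console.log("No snapshot found near that time.");
  }
}

closestSnapshot("https://example.com", "20250218").catch(console.error);
```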

Joseph:

So it is useful for purely journalistic purposes when somebody sends you a link like that, for sure. I would say, maybe not ordinarily, but a lot of the time, a defacement just, like, wouldn't be a story. There's a hacker who pings me every so often. He's like, hey, look.

Joseph:

I defaced Biden's website. And it's like, I don't care. Like, this doesn't matter. That was obviously, you know, last year or something like that. This one is different, probably for multiple reasons.

Joseph:

I mean, why do you think it's important? And maybe what does it show us that the doge.gov website, which is supposed to be targeting fraud and waste and abuse in government and making it much more efficient, what does it really tell us that, like, even their website was apparently held together by, you know, digital string, basically?

Jason:

Yeah. I think there's a few reasons that it's interesting. I think, one, the way that they did the defacement was kind of interesting to me, where they were able to push to this specific database, meaning, you know, potentially, other sorts of information on the website could be changed in some way. But I think more importantly, it shows that this group of coders, who is going into every government agency and asking to examine source code, to get access to really sensitive systems, to get access to systems that they probably don't understand because they run on COBOL or they run on, like, old mainframes, things like that, seemingly was unable to push, like, a very simple website without, you know, having these very basic vulnerabilities included in it. One thing I also thought was very interesting: usually, when a website is defaced and then there is an article about it, it's fixed very quickly, like, within minutes often.

Jason:

This was up for something like eighteen hours, these defacements. So that suggests to me that they had trouble finding them or they just weren't paying attention. Like, I'm not sure, because it got a lot of attention online. And then the other one, that happened over the weekend, suggests that they didn't close whatever vulnerability was allowing this to happen. And that one was still up as of the time I checked, you know, before this podcast.

Jason:

And so they still haven't fixed it, which is pretty concerning.

Joseph:

I mean, maybe they don't care. Maybe they, ironically, don't have the resources to do it. I don't know. It's impossible to know, really.

Jason:

I think also, like, right after we published our article, the Huffington Post published an article about how DOGE had published classified information about the staff makeup — like, the number of employees at a specific government agency that, you know, is so secretive that the number of people who work there is classified. And so that just suggests that they are not taking care when developing something like this.

Joseph:

Yeah. Sam, you had a story that was somewhat related, and it was Researcher Captures the Contents of DEI.gov Before It Was Hidden Behind a Password. We're gonna talk about another Jason story in this segment as well. But just briefly, what's the deal there? They put a password on dei.gov, but before that, what was exposed or available, or what's going on?

Sam:

Yeah. So it was left un-password-protected for, like — let me see — a maximum of thirty minutes is what this researcher who found this told me, based on his scraping and, you know, archiving of the site. So it was up for thirty minutes without a password, and he had been running an app that was capturing government websites, like, automatically. So it grabbed it in those thirty minutes, which is so crazy.

Sam:

And while it was up and, you know, exposed to the web like that, it had this long list of — I mean, again, you know, it's, like, quote, unquote, waste. It's, like, what they're trying to track or whatever through DOGE, or what Elon says that he's trying to do. And it was like — I couldn't even include all of it in the story, but what's in the story is really long, so people should go check it out. But it's, like, just a laundry list of random shit that, like, they are claiming is wasteful use of federal funds. So it's, like, things like, it's, like, $3,400,000 for a Malaysian drug-fueled gay sex app.

Sam:

No source on that. No reference to where they got that from. $15,000 to queer Muslim writers in India. It's like, you know, it's just this random stuff that they're claiming. You know, it's, like, $1,300,000 to Arab and Jewish photographers.

Sam:

Are they American Arab and Jewish photographers? We don't know.

Joseph:

And more broadly, they keep making mistakes, basically. Like, I saw there was a bit of confusion between $8,000,000,000 and $8,000,000, so they missed a decimal place or something.

Sam:

Kind of important. That's a big difference. So, yeah. It was up. And then they immediately gave it a WordPress template, and that kind of hid all of that information.

Sam:

And then that's kinda where the story comes in that Jason's talking about, where people wrote about it being defaced. We wrote about it being a WordPress site — a WordPress template that looked, you know, random and generic — and then they put it behind a password.

Jason:

Yeah. All three of these stories are very closely related, because they're all new websites that have been spun up to track, yeah, like, quote, unquote, government waste and also DOGE's efforts to cut things. And so the types of things that were captured on dei.gov that Sam wrote about are some of the things that have now shown up on the DOGE website as part of, like, the Twitter stream and things like that. So I guess I'll just jump into the third story very quickly, which is Elon Musk's Waste.gov Is Just a WordPress Theme Placeholder Page. And so the three websites are doge.gov, dei.gov, and waste.gov.

Jason:

And dei.gov and waste.gov were both registered about a week after Donald Trump was inaugurated. And there was never anything on waste.gov to my knowledge, but Sam spoke to this researcher, and he captured that information on dei.gov. And then I went to waste.gov one day, and all of the information there was about an imaginary architecture firm that was pulling directly from just, like, a WordPress template.

Joseph:

Yeah. It was clear that it was, like, some sort of default landing page, essentially, and I don't think that's really what you're going to expect when you go to an alleged official government website. So what do you see? Is it just, like, pictures of this made-up architecture firm or something?

Jason:

I mean, it's like, if you register any website ever, you can usually click through different themes. And then the person who makes the theme, which is just, like, the layout of how the website is gonna look, will try to demonstrate the features of that theme, and they do it with placeholder language. So in this case, waste.gov said, a commitment to innovation and sustainability. Etude — which, like, E-T-U-D-E, French word — is a pioneering firm that seamlessly merges creativity and functionality to redefine architectural excellence. It's funny because the placeholder language for this imaginary architecture firm violates Trump's executive order against DEI, because it talks about how this imaginary architecture firm cares about diversity and cares about sustainability.

Joseph:

Right.

Jason:

And it was live on a government website, which is in violation of the executive order, which is pretty funny.

Joseph:

Right. It was too inclusive. And then, I mean, did they realize their mistake, and now the website is dead or locked? Or, like, what happened after, if anything?

Jason:

Yeah. I mean, immediately after we published that article, they put it behind a password wall, and that's when they put dei.gov behind a password wall as well. So, like, sometime in between when I wrote this article and published this article, they briefly exposed what was supposed to be on dei.gov. Then that researcher scraped it, and then it went behind a password wall. And both of those websites are still behind a password wall as we're recording this.

Jason:

So it's unclear whether they're gonna, like, use them in any way. But if you go to those websites right now, it just says this content is password protected. To view it, please enter your password below.

Joseph:

That is barely transparent from the most transparent entity, agency, organization, or whatever it was that you said earlier. Yeah. And I know this segment turned into the web development of DOGE or whatever, but I think it just shows how haphazardly it's being rolled out. I mean, with all of the chaos across the US federal government — with the actual actions they're taking of, you know, dramatically downsizing workforces and then getting rid of essential employees and having to ask them to come back and all of that sort of thing — they can't even, like, run a website properly.

Joseph:

You know, it's not a great look. Let's say that. Alright. Let's leave that there. When we come back, we're gonna be talking about AI and lawyers, and a particular set of lawyers who basically got caught using AI to hallucinate a bunch of different cases.

Joseph:

We'll be right back after this. And we are back. This is one that Sam wrote. Lawyers caught citing AI hallucinated cases call it a, quote, cautionary tale. Not entirely sure where to start with this one, Sam.

Joseph:

Maybe we just do it with what did the lawyers admit to doing? And then I guess we're getting some implications for all of that as well.

Sam:

Yeah. I mean, that's a good place to start, because I also kinda worked backwards from there. This is an article that we did in collaboration with CourtWatch, which is Seamus Hughes's independent newsletter slash outlet where he digs up court records. And sometimes he sends us, like, the interesting ones and says, do you wanna write this up? So, yeah, the pitch for this one was basically just, like, these lawyers got called out for using AI in a filing.

Sam:

And now they're like, oh, this is a cautionary tale, which is the headline. But, basically, they had used — they don't say what LLM, what chatbot they used, whether it was ChatGPT or some other one. And there are a bunch of, like, you know, legal-specific LLMs at this point — like, tools that are rolled out for lawyers to do research and to use AI to draft, you know, responses. Ideally, not straight copy-pasting, but that's what they did. So, yeah, they said in a filing that — let's see.

Sam:

I'm gonna quote it directly. Our internal artificial intelligence platform, quote, unquote, hallucinated the cases in question while assisting our attorney in drafting the motion in limine. I probably said that wrong, but it's a legal phrase. This matter comes as a great embarrassment and has prompted discussion and action regarding the training, implementation, and future use of artificial intelligence within our firm. This serves as a cautionary tale for our firm and all firms as we enter this new age of artificial intelligence.

Sam:

So at that point,

Joseph:

I'm actually warning everybody because there's a cautionary tale for all

Sam:

folks. For everybody. You know? They're like

Joseph:

We yeah.

Sam:

They're like, oh, this — I mean, the implication is, like, this could happen to anyone, and we got caught doing it. And it's like, oops. We're so sorry. So they were —

Jason:

— just testing everyone. They were like —

Sam:

oh, yeah. Yeah. Yeah. Yeah.

Emanuel:

Yeah. Whenever I slip on a banana peel, I get up, and I'm like, let this be a cautionary tale for everyone.

Jason:

Yeah. Don't slip on the banana peel, everyone.

Sam:

Right. Exactly. It's like, this is — somebody fucked up majorly. So, obviously, at that point, I'm paying attention. I'm like, you don't really see lawyers immediately apologizing profusely very often.

Sam:

What had happened was — they say in their groveling apology that they had cited, I think it was, like, eight out of nine cases in this document — as, like, you know, if you read court documents and complaints and case filings, a lot of the times they'll cite other cases that are similar and be like, this is the precedent that we're standing on legally. Like, these are similar cases where, you know, cases like ours won in the past, so you should, you know, grant us what we're looking for in our case. But they had made up eight of the nine that they cited. They just didn't exist. Like, they don't exist anywhere.

Sam:

And, obviously, the judge looked them up. They were like — because, I mean —

Joseph:

So the judge is who caught them.

Sam:

Yeah. The judge caught them. Right. I mean, actually, I think it might have been opposing counsel who caught them and then told the judge, like, hello, these are fake, they don't exist.

Sam:

And then the judge was concerned, and he was like, well, what are you gonna say for yourself? And the reason that they are apologizing immediately is that there actually is precedent for this happening in the past. Yeah. It's not the first time.

Joseph:

Yeah. And maybe people listening are already aware of that — we'll talk about that in a sec. I think the reason — and you correct me if I'm wrong — but the reason you covered it here is, well, first of all, it's very funny. Second of all, lawyers are admitting they're wrong, let alone groveling.

Joseph:

That's probably newsworthy in and of itself. And third, yeah, it's one thing for lawyers to use AI, with all the problems it has. It's another thing to get caught so publicly. I think, actually, just before we talk about the previous instances you'll bring up, what was this case about exactly? Because this was, like, a pretty ordinary case involving Walmart or something.

Sam:

So once I kind of worked backwards, like I said, and kinda dug back to the original complaint, the original lawsuit — it was filed in 2023 against Walmart and this company called Jetson Electronic Bikes. And Jetson makes hoverboards for Walmart, or for sale at Walmart. And what had happened was the plaintiffs — so the people who were being represented by these lawyers who made this huge mistake with the fake cases — they had bought a hoverboard, one of the Jetson hoverboards, from Walmart, and they're claiming that the lithium-ion battery in the hoverboard caught fire while they were asleep and burned their house down. So it's a really horrifying case, and, like, it's pretty serious. Like, we've talked about this.

Sam:

We've written about this in the past with, like, scooters. It's like, scooters have this problem a lot. It's like, people's houses burn down because these batteries malfunction sometimes. So they're claiming, you know, injury and, you know, severe, like, burns. Obviously, it's like their house.

Sam:

Like, it's just massive loss that they have from this, that they're claiming from this hoverboard company. So that's what they're bringing. And this case has been going on since 2023. So it's been, you know, a drag-out thing. The docket is long.

Sam:

The thing that gets, like, attention is that the lawyers made this huge fuck-up, which is kinda sad. It's, like, a look-at-my-lawyers-bro situation. Like —

Joseph:

Yeah.

Sam:

This is a mess.

Joseph:

Your house burnt down and your lawyers are, like, fucking around with ChatGPT or something.

Emanuel:

I was gonna say we unfortunately have a lot of experience with lawyers, both hearing from lawyers representing other people who are mad at us, and we have our own lawyers who are very good and represent us. And I feel like 90% of the time, what you need a lawyer to do is to, like, know the law and know the case law and reference specific cases and write long explanations for why what you did is perfectly legal or for why what someone else did is not legal. And the idea that you would pay a lawyer who is not cheap ever to just, like, have a chatbot do it and then not even double check the output is so crazy. Like, I feel like you would have to dump them immediately. Like, imagine if we had some sort of libel letter coming at us and our lawyers just like ChatGPT generated a response, we would be livid.

Emanuel:

We would probably try to sue them ourselves. Yeah.

Joseph:

Yeah. With a second set of lawyers who then also use ChatGPT, unfortunately, or something. So, I mean, that's a horrible case, and it's even more horrible, as we all say, that the lawyers were using ChatGPT or — sorry, it's not specifically OpenAI. We don't know what it is, but some sort of LLM.

Joseph:

But what are some of the other instances of lawyers doing this then, Sam?

Sam:

Yeah. So the most recent one was in 2024, when it involved Michael Cohen, who — I mean, we're not gonna get into Michael Cohen's resume right now, but — very famous lawyer. He and his own lawyer — so we're talking about, like, again, like, a stacking of Russian doll lawyers —

Joseph:

Nesting dolls. Yeah.

Sam:

Exactly. They had generated fake cases with Google Bard, and they weren't fined or anything. The judge let them off, but he called the situation embarrassing for them. I think it was probably just, like, shaking my damn head at Michael Cohen's situation. But then in 2022 — so this is why I think they probably took it so seriously and apologized immediately.

Sam:

It's embarrassing, and it's also — you can get, like, seriously sanctioned and big fines if you are presenting fake stuff to the court. So in 2022, this man had filed an action against Avianca Airlines — he was saying that he was injured by a serving cart during a flight. And his lawyer was citing nonexistent cases. But instead of that legal team apologizing immediately and being like, please don't sanction us, they doubled down. And they were like, we can defend the reason that these cases are in the filing.

Sam:

They thought they could get out of it, I guess. And they were fined $5,000 for that error, and the judge — right — and that's where I was kind of like, okay, maybe that's — I don't know. Maybe there were other repercussions that I don't know about, but it's also just, like, highly humiliating.

Sam:

It's like the judge is, like, reaming you in official court documents. Like, the judge was, like, they abandoned their responsibilities. You know, they stood by these fake opinions after judicial orders called their existence into question. It's like, I don't know. I'm, like, secondhand humiliated just reading a judge be super mad at lawyers.

Sam:

So, yeah. I mean, it's something that I think is gonna come up more and more, honestly. I'm surprised it doesn't come up more often, because, again, this work that lawyers are doing a lot of the time to present their cases is really tedious. But, yeah, I mean, maybe it does happen a lot and we just aren't hearing about it most of the time.

Sam:

But,

Joseph:

I think you're right. It's gonna happen more, because I literally just typed in, like, legal ChatGPT —

Jason:

There's so many.

Joseph:

— and all of these companies are offering these tools, which — right. I mean, I don't know how many customers they have, but, presumably, they see a market there. And, obviously, some people are using it. Jason, you wanted to bring up the statistics.

Jason:

I'm gonna do a whirlwind roundup of, like, stuff that Motherboard reported on about robo-lawyers. And then, also, I wrote one article for The Atlantic in my life, and it was in 2017.

Joseph:

I didn't know that.

Jason:

It was a freelance piece that, for some reason, Motherboard let me do, and it was called Rise of the Robolawyers. And it was about all of these startups that were trying to automate law. And, like every other industry, the legal industry has been like, we can AI-ify that. And that was all the way back in 2017 when things like this were happening. So, like, LexisNexis, which is a database program — it's massive — has something, or at least did at the time, called Lex Machina that allowed you to try to predict whether you were gonna win a case. It helped with, like, venue shopping, I believe, where it was basically, like, a defamation lawsuit in this jurisdiction in Texas is more likely to succeed than if it's in North Carolina or something.

Jason:

So it was being used by law firms to venue shop. There were also a lot of really dystopian startups at the time. I don't know if any of them are still around, but one was called Premonition that was basically, like, you enter your legal documents and the ones that have been filed against you, and then it predicted whether you would win or lose the case. And then, I guess, it was supposed to let you know if you were better off, like, pushing for a settlement versus, like, going to trial. There was a company called Legalist that allowed you to bet on the outcomes of lawsuits.

Jason:

So, basically, it would use AI to determine whether a specific case was likely to succeed or not, and then you could do commercial litigation financing, meaning you could, like, pay the lawyer's fees to, like, you know, essentially help someone sue someone else, and then you would get some of the winnings of that. And I think that one is gone. I need to go check, but that one was, like, some weird idea. There have been a lot of, like, chatbot-type lawyers as well. There's this company called DoNotPay that initially started off, like, helping people fight parking tickets and was very successful at that, because many times you can just, like, write any sort of, like, form letter to contest a parking ticket, and you'll win just by attempting to contest it.

Jason:

And then DoNotPay got into, like, more complicated legal situations and eventually was fined by the FTC, or was at least threatened by the FTC, for representing itself as —

Joseph:

Eight days ago: FTC finalizes order with DoNotPay that prohibits deceptive AI lawyer claims, imposes monetary relief, and requires notice to past subscribers. That was earlier this month. Yeah?

Jason:

Yeah. And then, as part of this article, which is pretty good now that I'm reading it again, I talked to some law professors and people like that. And one thing they did say is that the legal process is incredibly, incredibly expensive, as Emanuel already pointed out, and a lot of cases are quite straightforward, theoretically. Things like divorces, for example, are, like, really, really expensive, or can be really expensive, but a lot of them are really simple in terms of the legal filings are not that complicated. And so there, like, may be some sort of role for, like, here's how you fill out the forms properly if you don't have, like, a contentious divorce going on.

Jason:

So you don't have to pay tens of thousands of dollars to a lawyer to do, like, a pretty straightforward sort of thing. But, like, what Sam's reporting on here goes far beyond that, because it's a situation where the cases are complicated and the people have hired real lawyers. And then the real lawyers are, like, outsourcing that work to ChatGPT, and that's, like, fucked up. Yeah. So I know that was, like, a weird tangent.

Jason:

I'm good for at least one of those per episode. I know. But this has been, like, a dream of big law legal professionals for a long time: how do we, like, outsource all of this work to a robot and still collect our, like, really, you know, extreme fees? Yeah.

Joseph:

They're not lowering their prices. No.

Jason:

No. No. No.

Joseph:

It's gonna be the same crazy per hour fee, and then it's just being done on some LLM back end or whatever. I know. It's crazy. It's outrageous. Alright.

Joseph:

We will leave that there. If you're listening to the free version of the podcast, I'll now play us out. But if you are a paying 404 Media subscriber, we're gonna talk about a true crime YouTube channel that ran true crime documentaries, except all of the murders were AI-generated. You can subscribe and gain access to that content at 404media.co. Alright. And we are back for the subscribers-only section.

Joseph:

Here's an article written by an outside contributor called Henry Larson. Really appreciate him sending this in. The headline is: A True Crime Documentary Series Has Millions of Views. The Murders Are All AI-Generated. I guess I'll just give a quick summary of the piece and why it sort of jumped out to me, and then, maybe — Jason also edited this —

Joseph:

I'll ask what you make of it as well. But you go to this channel. And what's it called specifically? Because there are a few of them. I think it's called True Crime Case Files.

Joseph:

That's it. And you see all these thumbnails, and they're very, you know, inflammatory. They have all of these sensational titles like, oh, estranged wife then murdered this person or whatever. I think there's some transphobic stuff in there as well, like trans kids did x, y, z or whatever. And you then watch this alleged documentary.

Joseph:

There's a lot of, like, grainy footage of police cars, and then people's portraits and that sort of thing. And then there's a voice, a narrator, laying out the story. And these can go on for, I think, you know, half an hour or something like that. Some of them are pretty long. The one we've embedded in the piece is twenty five minutes and forty six seconds.

Joseph:

And then you're going through it, and, you know, I would say that some people will be fooled by it, and some were, as we'll get to. But all of these stories are produced with the assistance of AI, essentially, meaning, you know, the murders didn't actually happen. It's all fiction. It's not true crime at all. And you will go through the comments, and some of them are people calling it out as AI.

Joseph:

Others are people earnestly debating and reacting to the content of the story itself. And Henry managed to speak to the creator and the owner of this channel, and I guess we'll get a little bit into, you know, why they made it and that sort of thing. But to me, as someone who consumes a lot of true crime content, even though I don't really want to, this jumped out as particularly interesting, because true crime is already a very messy space of entertainment, predatory behavior, sensationalism, and really plays with what is the truth, essentially. Jason, what do you —

Jason:

Is that what you were thinking about when you wrote Dark Wire, a true crime book?

Joseph:

That's true true crime. That's, like, double that's, like, double true. Because it's definitely true. Yeah. Yeah.

Joseph:

I mean, I know. I know. Totally. And I hope that I don't get bundled into that, but that is absolutely a case.

Jason:

Yours is not — yours is true, real crime —

Joseph:

Yeah. Real. That's good.

Jason:

Told soberly and ethically and accurately.

Sam:

I wanna hear the distinction in Joe's mind after the podcast. Not right now. Let's get back to the story, because I wanna hear that later.

Joseph:

Sure. So I did the first edit on this, Jason, and then you came in and did an edit as well. What did you think of this YouTube channel? You know? Because you put it more in the context of AI slop, whereas I was looking at it more in the context of true crime.

Joseph:

You know what I mean?

Jason:

Yeah. This is actually — there was a little bit of beef between the writer and myself in the editing process, because I was like, this strikes me as, like, classic AI slop that we've been writing about. Like, it's pretty much the same thing. It's just they found this quite interesting niche, which is fake murders and, like, really grisly stuff and also really long videos. And it is interesting, because both he and you were looking at it more through a lens of — not necessarily a critique of the true crime genre, but, like, looking at it through that lens, where I was just like, oh, this is, like, a YouTuber making, like, a bunch of money by making, like, really long AI-generated slop videos about nothing.

Jason:

Yeah.

Joseph:

Well, because you've done so much stuff about AI slop and the people flooding Facebook with AI-generated crap and all of that. So, like, is that where you're coming from?

Jason:

I'm coming from, where's the PDF on how to do this? Right. And, like, you know, there must be tons of people in the AI slop world getting into it. I thought the story was interesting regardless and worth doing regardless, but it turns out that the person who was doing this was more of, like, a true crime obsessive than they were, like, an AI hustler. Right. Obsessive.

Jason:

And I would classify this as AI slop. Like, I think that it is AI slop, but the way that they were coming at it was like, I'm really obsessed with true crime. I know a lot about the genre, and, therefore, I can make really long, somewhat immersive videos using artificial intelligence where the murders didn't happen. Whereas a lot of the AI slop that I have written about is, like, how do I reverse-engineer getting a gazillion views by typing one thing into a prompt?

Joseph:

Yeah. And I think the person who made this YouTube channel — we call them Paul in the article — as you say, they're familiar with true crime, and they came to it more from — I think it was during the pandemic. Everybody was locked inside, and they were watching Dateline with their family. Right?

Joseph:

Which is that, you know, very, very long-running — I would almost call it procedural — documentary or whatever. But then they get into that. And then they start playing around with AI, and they're making, like, these fake Christmas movies at first and telling people they're AI. They don't take off. Nobody's watching them.

Joseph:

And then Paul pivots and is like, well, maybe I can make true crime documentaries, because Dateline is so formulaic at the end of the day that maybe I can do something with that. And he does. And, crucially, he decides not to tell people it's AI-generated, because he thinks people won't watch it then. And he's probably right in that. Even going — I think this is in the piece, or maybe it was just in the pitch — but, like, even going in and deleting the comments of people who were calling it out as AI.

Joseph:

Yeah. I definitely think — well, I was gonna say, Paul doesn't come from, like, an AI grifter world in the way that the people you follow do, Jason. But he is making enough, or was making enough, money to apparently live off in at least some capacity. And then he does that. And during the reporting process, YouTube then actually takes down the channel, as well as several others.

Joseph:

There are some copycats now, like, you know, with similar names and that sort of thing. Jason, do you think, like, they're almost closer to the grifter mindset, because they just saw this channel getting really popular or something and thought, I'll copy that?

Jason:

Oh, for sure. I mean, I think that the people who are doing it now — I think that the first person, like Paul, who we profile, tried to, like, retcon this in an interview to make it seem like he was doing a commentary on — right — the true crime genre, and I think that's kinda bullshit as well. And Henry says as much in the article. It's like, he was doing this to make money, as you said. And he was like, well, it's actually a commentary on whether true crime is ethical or not and also whether people believe it or not.

Jason:

I don't think that super matters, and I think that the result is the same regardless. But the people who are making things now, they're just stealing the videos that Paul made that went viral, putting other, like, visuals behind them, and reposting them, sometimes with the exact same title. And so it's, like, layers of different grift occurring here.

Joseph:

And

Jason:

Then, I guess — I would really like to hear Emanuel's take on this, because he consumes a lot of YouTube. I feel like there's, like, a lot of AI stuff on YouTube in general, and, weirdly, a lot of it is quite long. Whereas I'm really used to just seeing AI images, AI articles, and AI ten-second reels, but these are, like, 30-minute videos.

Emanuel:

Yeah. I mean, I don't know if we're there yet, but one thought about this story is that all of our articles about AI slop now are like, oh, look at all this bullshit that is flooding Facebook and how bad it is. And, yes, there are interactions with it. But anyone who is reading our story can kind of identify it as garbage, and they don't like it. And I feel like this is getting closer to — it's AI slop, but maybe it's good enough for people to actually consume and enjoy, whether they know it's AI or not.

Emanuel:

And I could definitely see it taking off on YouTube, where, for example, like, I was on a kick of — I'll be playing a video game on the second monitor, and I'd be listening to YouTube videos that are, like, 40K lore videos. And, essentially, it's like the people who make them care a lot and they're well produced, but you can imagine someone using AI to digest a bunch of 40K wikis and then auto-generate videos that are, like, good enough. And that's, like, a lot of the content on YouTube. So I don't know.

Emanuel:

It's like — what I worry about is this stuff getting good enough to actually replace what people are doing. And I could see it happening in true crime. Right? Like, anywhere you can identify a very set formula that people like, there's the potential for AI to step in and kind of replace a lot of the human labor there.

Jason:

For the last, like, two months, I've been falling asleep with the screen off, listening to YouTube videos of an archaeologist and historian who talks about the ancient Americas' indigenous cultures. So, like, he does, like, hour-long videos about Mayan pottery and Incan, like, pyramids and things like this, and I listen to it just to fall asleep. I find it pretty interesting, but, to be totally honest with you, I don't know anything about this stuff. Like, he could be saying anything, and I would be like, oh, that's crazy. I didn't know that they had cave paintings in, like, 4,000 BC in this part of Mexico, and I didn't know that these tribes interacted in this way.

Jason:

And, like, I am pretty certain that this is a very good researcher who knows his stuff. But I think if it was complete nonsense, it would serve the same purpose for me, which is just falling asleep to something that's, like, somewhat monotonous, and that's kind of scary.

Emanuel:

And surely people are consuming true crime content on YouTube in the same way that you're consuming these archaeology videos. Actually, I know that you listen to ASMR videos. I imagine the ASMR YouTube space must be getting flooded with AI-generated content, even though I haven't seen it.

Jason:

It must be. I think that, like, I have specific ASMR artists that I follow, and they're very much humans. I bet that they are writing a lot of these scripts with AI, because it'll be, like, an hour-and-a-half-long barbershop session or something like that. And that's, like, a long script to write, and it doesn't super matter what it says, necessarily. It's, like, more about the props and the voice and the cadence and all that sort of thing.

Jason:

So I bet a lot of people are using AI to help them write the scripts. As far as, like, AI-voice ASMR, I'm sure it exists. I don't know. I haven't gone looking for it, but I bet it exists.

Joseph:

I think what you said, Jason, about the archaeologist you listen to — you said even if the facts weren't accurate or whatever, I think you said it would serve the same purpose. Yeah. That is exactly what is happening here with the true crime stuff, because I agree with you that this Paul character is sort of retroactively justifying it. But in that lens, they have the line, quote, true crime. It's entertainment masquerading as news.

Joseph:

That's all there is to it, end quote. Again, I don't think he's doing the sophisticated social commentary — that's more after the fact. But I think we can almost do that social commentary, because we're a step — I mean, we're not making the YouTube channel. You know? We're not trying to grift off it or anything like that.

Joseph:

We're just writing an article about it. But I think that's exactly the case here, that for a lot of people who consume true crime, it is, ironically, not actually about whether it's true or not. Part of it is entertainment. Another part of it is, potentially, being scared and then playing into that. And then, you know, all of these adverts in true crime podcasts for a Ring camera or whatever it is, or another security camera.

Joseph:

There's a whole, basically, true crime industrial complex around a lot of this. And, yeah, I just found it interesting, and maybe I put too much emphasis on it, because maybe everybody who consumes this knows it's entertainment. Maybe everybody who consumes true crime knows it's, like, a bit flippant with the truth. But, you know, it doesn't present itself as such, so I found that particularly interesting. You know?

Jason:

It is — I mean, I hadn't thought about it until we started talking about it in this section. But the YouTube channel that I like, it's called Ancient Americas, and they don't post that many videos. Like, they post a video every few months, and they're usually forty-five minutes to an hour long, and they're deeply, deeply researched. It's, like, clear that this person cares a lot about it, knows a lot about it, so on and so forth. The titles of the videos are, like, The Mesoamerican Calendar; Potatoes: South America's Gift to the World; The Decipherment of Maya Script, which is, like, how historians learned to read, like, Mayan writing; Tiwanaku Part One: The City; Tiwanaku Part Two: The Empire.

Jason:

And, like, to be totally honest with you, this is not stuff that I learned about in high school, really. Like, I know a fair bit about the Incans and the Mayas, but, like, the smaller ancient indigenous cultures, I don't know very much about them, and learning about them takes a lot of painstaking human archaeological work, anthropological work, and then history — like, he's reading history books. He's like, in this decipherment, this is what it means, and then in another one, it means something else. And I really do worry, because, like, I'm falling asleep to this stuff. I'm not that interested.

Jason:

I'm interested in it because it's helping me fall asleep. And the idea that this guy's work could be ripped off and replicated, probably to, like, 80% as good as he's doing, but with totally made-up facts or, like, hallucinated facts or just random stuff, is super depressing. It's, like, really upsetting, I think, because you could emulate this guy's videos and make hundreds of them in five minutes if you wanted to, and they would just be, like, nonsense. But, you know, if you're just looking for background noise, maybe that's good enough, and that's not good for us as a society, I don't think.

Joseph:

Yeah. For sure. Alright. There's a lot more I could say about true crime, but I'm gonna save it for after the podcast. And I need to formulate my thoughts a little bit, but we will leave that there, and I will play us out.

Joseph:

As a reminder, 404 Media is journalist-founded and supported by subscribers. If you do wish to subscribe to 404 Media and directly support our work, please go to 404media.co. You'll get unlimited access to our articles and an ad-free version of this podcast. You'll also get to listen to the subscribers-only section, where we talk about a bonus story each week. This podcast is made in partnership with Kaleidoscope.

Joseph:

Another way to support us is by leaving a five-star rating and a review for the podcast. Here is one of those reviews, from someone who used the username leftist tech bro: Good show. Blunt, to the point. I like it.

Joseph:

This has been 404 Media. We will see you again next week.