The 404 Media Podcast (Premium Feed)

from 404 Media

We're Not Ready for Chinese AI Video Generators


Episode Notes


We start this week with Emanuel's great investigation into Chinese AI video models, and how they have far fewer safeguards than their American counterparts. A content warning for that section due to what the users are making. After the break, Joseph explains how police are using AI to summarize evidence seized from mobile phones. In the subscribers-only section, we chat about an AI-developed game that is making a ton of money. But your AI-generated game probably won't.

YouTube version: https://youtu.be/EmVLOI8So0A
Joseph:

Hello, and welcome to the 404 Media Podcast, where we bring you unparalleled access to hidden worlds both online and IRL. 404 Media is a journalist-founded company and needs your support. To subscribe and get bonus content every single week, go to 404media.co. Subscribers also get access to additional episodes where we respond to their best comments. Gain access to that content at 404media.co.

Joseph:

I'm your host, Joseph. And with me are two of the other 404 Media co-founders. The first being Sam Cole.

Sam:

Hello.

Emanuel:

And the

Joseph:

other being Emanuel Maiberg. Hello. I think Jason's on a plane right now.

Emanuel:

I think

Sam:

Jason's on a plane right now. I left Jason behind in Austin. We had a busy night. Last night was our big takeover and after party. Flipboard kindly hosted us for a shindig downtown for South By.

Sam:

So, yeah, it was a long night. My voice is a little hoarse, so I apologize. It's got a little more fry than usual, and I'm a little hungover, but we're here. Yeah. I just got in this morning.

Sam:

So it was an amazing time. It was really good for everybody.

Emanuel:

Just hoarse from speaking loudly at a party. The hungover part, that might be from the alcohol.

Sam:

Yes, it's a whole thing. And, yeah, I'm an inside cat. I do not go to networking events much unless we're throwing them. So, you know, it was amazing to see everybody and meet everybody, and I met a lot of podcast listeners, which was very cool. Yeah.

Sam:

It was a good time.

Joseph:

You heard that some people only knew about the event through this podcast. Right?

Sam:

Yeah. Yeah. A few people came up and said, oh, yeah. I heard about this happening on the pod. I was like, that's amazing.

Sam:

You know, we plug it on the podcast, because we know a lot of people listen. But, yeah, it was cool to see how many people actually are tuning in. So yeah.

Joseph:

Yeah. And it brings up one thing I'll just say super briefly, which is that clearly some people only listen to the podcast rather than reading the website. And at first, when we sort of realized that, I found it quite strange. And then I realized, wait, I do that myself. I listen to the Verge podcast every single week, twice a week if they've uploaded, without fail.

Joseph:

But then I generally don't read the website just because I prefer to digest it that way. So if you are one of those people who only listens to the pod, thank you very much. We really, really do appreciate it, and the pod is growing. With that said, let's get to our stories because we have a bunch of complicated and interesting stuff to get through. The first couple of stories, which we're kind of putting together, were written by Emanuel.

Joseph:

And the headline of this first one: Chinese AI video generators unleash a flood of new nonconsensual porn. Emanuel, this is something you've been working on for a long time. What is the top line of your investigation? Is it about the guardrails of these Chinese-developed AI models? Like, what's the top line?

Emanuel:

The top line of the investigation is that there are a bunch of AI video generators that are available via apps that you can get via your web browser or the app stores. And I don't think many people know about them because they come from smaller, lesser-known AI companies. And to explain for a bit why I have been working on this for so long, I think we need to travel back in time to, I think it was February of last year, when OpenAI revealed Sora, which is their AI video generator. And I think that's the first time that people saw kind of high-quality AI-generated video, and that really blew people away. It blew me away.

Emanuel:

They kinda came out with those video samples out of nowhere. And that tool wasn't available at the time, and it wasn't available for a long time. It is available now if you pay.

Joseph:

They were more just showing it off. Right?

Emanuel:

Yeah. They were just showing, like, look at how powerful AI videos can be, and they definitely delivered that message. What I did that day is immediately go to the chat rooms where I monitor the communities that create and share nonconsensual AI-generated pornography. And I wanted to see how they reacted. And, obviously, they were kind of salivating over having access to those kinds of tools, but it wasn't really a concern, because OpenAI didn't give them access, and they rolled it out very slowly and very safely.

Emanuel:

OpenAI is notorious for having pretty strong, some would say overbearing and unnecessary, guardrails around all their AI tools, and that is true for Sora as well. But what happened in between the time that Sora was announced and today is that a bunch of other companies in the market rushed to launch competitors. And at first, these competitors seemed not nearly as good as what we saw from Sora. But as everyone who's familiar with this beat by now knows, AI develops at a very fast pace, and now there are a bunch of AI tools that produce pretty damn convincing video. And for reasons that we can speculate on here in a minute, they just have really, really, really bad prompt guardrails.

Emanuel:

Right? So people might remember one big story we did. God, it was 2023, I think? No.

Emanuel:

It was 2024. But that was around people using Microsoft Designer and kind of writing prompts that tricked it into generating nonconsensual images of Taylor Swift. And that loophole we've seen replicated in many AI tools since then. But, overall, the big AI companies realized that people were using this loophole and abusing it, and have gotten a lot better at having guardrails against that sort of abuse. These newer AI companies, not so much.

Emanuel:

And over the months, I've seen people find these AI tools, find the loopholes, and just generate, at this point, mountains of nonconsensual videos of celebrities.

Joseph:

So, yeah, it sounds like it almost started with the Taylor Swift stuff that we reported on, obviously. You, and I think Sam as well, worked on that piece, got more information, and then everybody sort of piled on because it went viral on Twitter or whatever. Now you're saying these other, predominantly Chinese-developed AI models are basically being used for the same thing, but to a much, much larger degree. It's not just some Taylor Swift stuff. It's all of these different celebrities. When it comes to the video generators themselves, what are they doing exactly?

Joseph:

Do you give it a video and it autocompletes it? Do you give it a photo and it animates that? Do you give it a text prompt? Like, what does the user put in to then make all of this stuff?

Emanuel:

Yeah. So it's both. You can generate something out of what appears to be nothing, but it's not really nothing, because it's pulling from huge datasets that the AI model was trained on. But you can basically write a text prompt and generate a video that way. Or, and I think this is important, you can do image-to-video, where you give it a still image.

Emanuel:

And then in the text prompt, you write how you want the AI tool to animate that image. And the latter is harder to moderate. With a text prompt, you can fairly easily filter out terms that you don't want people to use. And those could be names of celebrities, nicknames of celebrities, and a bunch of sexual terms. Right? That's a fairly easy way to filter out a bunch of bad content.

Emanuel:

You could do that with images, but that is much more complicated, because then you need to train other AI models to recognize a person in the image or recognize nudity in an image, and that just takes a lot more effort to filter out those kinds of visual prompts. It just appears that the AI tools that I've found, most of which are developed by Chinese companies, are not doing that very hard work of visually detecting images that are used to animate pornography.
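The word-based prompt filtering Emanuel describes can be sketched in a few lines. This is a hypothetical illustration only: the blocklist entries and the `is_prompt_allowed` function are made up for the example, and no company's actual moderation code is public. Real filters would use far larger, regularly updated lists plus ML classifiers rather than a simple substring check.

```python
import re

# Hypothetical blocklist: celebrity names, nicknames, and sexual terms.
BLOCKED_TERMS = {"taylor swift", "t-swift", "nude", "topless"}

def is_prompt_allowed(prompt: str) -> bool:
    """Reject a text prompt that contains any blocked term."""
    # Lowercase and strip punctuation so "Taylor.Swift!" still matches.
    normalized = re.sub(r"[^a-z0-9\s-]", " ", prompt.lower())
    return not any(term in normalized for term in BLOCKED_TERMS)

print(is_prompt_allowed("a cat riding a skateboard"))        # True
print(is_prompt_allowed("Taylor Swift topless on a beach"))  # False
```

The image-to-video case is harder precisely because no such string check applies: the person's likeness arrives as pixels, not text, so catching it requires a separate computer-vision model rather than a blocklist.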

Joseph:

Yeah. Because it could be a red carpet photo of a celebrity or something like that. It's like, what? You're gonna ban all red carpet photos? Well, then they'll just find another photo or something like that.

Joseph:

Right? So what are people making exactly? Is it, like, sort of lewd images where maybe they get a celebrity flashing their breast or something like that? Or is it, you know, more full on pornography? Obviously, there's degrees here.

Joseph:

I mean, it's all nonconsensual, and it all sucks. But, like, what are people making broadly?

Emanuel:

So I think if I went into these chat rooms and put all these videos into a spreadsheet and counted what is the most popular type of video, I would say it is probably videos of female celebrities taking their tops off. The reason for that, I think, is that there's a very popular tool called Pixverse, which, to be fair, I think is used for non-harmful reasons by a lot of users, but it's just an easily accessible tool. You can get it on the web. You can get it via the Apple App Store. So it's very easy to access.

Emanuel:

And this community has figured out how to abuse it in this specific way. They found the specific written prompt that you can use to create that kind of video, and it's really easy. So that is the most common one. But what I saw is that people move from tool to tool depending on what kind of video they wanna generate and what kind of vulnerabilities they're finding in each tool. So Pixverse is good for that.

Emanuel:

Then there's this other tool I talk about later, which is actually from an American company, called Pika, and that one can, I mean, produce straight up, like, videos of oral sex fairly easily. And it looks a bit janky, definitely looks weird, but I think it would also be fairly horrific to find someone doing this to your likeness.

Joseph:

Yeah. And, again, it just takes an image or something like that, depending on the tool, but it's very low effort if they have the prompt workaround. What specific Chinese tools are we talking about? Are they, like, little upstarts? Are they, like, open source projects?

Joseph:

Are they well-funded operations? Like, who are these, or what are these tools, and sort of where do they come from?

Emanuel:

It's this new crop of AI tools that are doing this exact thing I think I talked about in the beginning, which is: OpenAI presents Sora, and everybody sees the potential in that. And rather than be careful and release it very slowly and safely, their tactical move is to just get to the market first with something that is honestly probably not as powerful as Sora, and definitely not as safe, but is really easy to access. And there is a demand for this kind of tool, and just getting there first and giving people access, that's just a better business strategy. Or, look, the only competitive business strategy that they have. They are well funded.

Emanuel:

They all have millions of dollars in venture capital. For example, this app that I talked about, Pixverse, it has some notable people. Like, the person at the head of that company used to be the head of machine vision at ByteDance, which is the company that owns TikTok. So they're new companies. It's like a new generation of AI companies.

Emanuel:

We're seeing similar types and scales of company in the US, but in this case, they happen to be almost exclusively Chinese.

Joseph:

Yeah. And there's a line in the piece where you say 404 Media is not sharing the exact prompts that produce these videos. Can you explain why? I mean, I think it's obvious, but I think it's useful for people to hear why you don't include these exact prompts. But then also, can you just mention, so you won't mention the prompts, but you'll mention the companies.

Joseph:

Like, is that because, well, they're massive multimillion dollar companies. Like, why wouldn't we name them? Like, what's the thinking there?

Emanuel:

Yeah. I mean, as Sam knows very well, when covering this kind of thing, you're always trying to walk this very complicated line where you want to report on an important issue. You wanna name and shame, basically, these companies and hopefully apply pressure not just on them to build better protections, but also build pressure on Apple and Google, who are making these apps accessible via the App Store. But at the same time, we obviously don't wanna teach people how to create very harmful content. So we're kind of sharing the responsible parties here, but we're not sharing the communities where people will teach you how to do this, or the specific prompts that will generate those harmful videos.

Joseph:

Yeah. Sam, what do you think of that balance: highlighting something because there's a public interest, while not amplifying the bad stuff? Like, how do you figure that out in your head?

Sam:

I mean, it's just so hard to write about something, and especially to illustrate what's going on, without saying what's going on, like, saying it plainly. I think when we kind of beat around the bush, so to speak, and try to, you know, use euphemisms or descriptors that aren't exactly the prompts, but, you know, like, naming what these companies are. Saying, you know, for example, like, last week we talked about Instagram gore. It's like, saying exactly what people are seeing without being cute about it, I think, is really important as journalists. So it's a hard calculus.

Sam:

I mean, it's definitely something that I struggled with a lot early on, just kinda figuring out when to be very blunt about these things and name the companies, and when not to. And I think a lot of it comes down to this: a lot of people's reaction when they see, like, oh, you're talking about a Telegram group or an AI model or a tool or whatever it is, they're like, oh, I haven't heard of that, so you're amplifying it. It's like, no. Actually, tens of thousands of people have heard of this.

Sam:

You just haven't. You know, lots of people are using this, and this company is making a lot of money on those people. And it's doing, in a lot of cases, real harm to a lot of people. Just because you haven't heard about it doesn't mean it's not already a huge thing. Maybe it'll become more of a mainstream thing and more pressure will be put on it to, you know, like Emanuel said, be taken off the App Store and things like that, but that's also out of our hands.

Sam:

It's not really part of our job to do that. So, yeah, it's a tricky thing for sure. It's something we think about, I think, every time one of these stories comes up.

Joseph:

Yeah. I mean, I think the stakes of this example, I'll say, are lower, but almost the quintessential one I always remember is when Gawker first covered the Silk Road website. You know, people were debating me like, oh, you shouldn't do that because now people will know to go do it. It's like, I don't know, man. The revolution in marrying Bitcoin and the Tor anonymity network to allow the borderless online exchange of narcotics is probably something that's worth putting in an article, and it applies here when it comes to, like, the unbridled use of this AI technology.

Joseph:

And on that, just to get back to the models we're talking about a little bit, Emanuel. So it's mostly Chinese in this article. There are some US ones, or one, that you did mention. Is it more that the new generation of ones without guardrails just happen to be Chinese, and that's sort of why Chinese is in the headline? Or is that sort of a commonality with Chinese companies, that they just don't have these guardrails?

Joseph:

Like, how exactly does the China element play into it? Because, of course, you're always careful, and we all are here. But recently we had the DeepSeek stuff, and people lost their fucking minds over the Chinese element. That's a little bit different because it's like, well, you're giving data to a Chinese company, blah blah blah. But, like, is it more they just happen to be Chinese here, or what's that?

Emanuel:

So I think there's two things that are happening. There are American companies that are doing this. There is, like, similar competition, but I think there is less of it. And we've written about some of them. I wrote a story about one app called Dream Machine.

Emanuel:

Sam wrote a huge scoop about I forget the name. The AI video generator that we got the training data on. Runway. Runway.

Sam:

Runway. Yes.

Emanuel:

Thank you. Yeah. Runway. We're forgetting all our stories.

Sam:

That's a long time ago. A lot has happened.

Emanuel:

Yeah. So they exist. But I think one thing that is definitely happening is that the Chinese companies saw an opening. We're, like, in this great competition in the AI industry between the US, or the West, and China. And it was just a place where they could get ahead a little bit, and they did.

Emanuel:

And I think that's one thing that is happening. The other thing that I think is happening, and I haven't been able to prove this, and I didn't put this in the article, but I feel comfortable saying it here. And I invite people who are listening who might be interested in safety or red teaming and might be able to teach me about this, honestly. But I do suspect, or I wonder, if there's a language barrier problem here, where the American companies are just better at building what we call, like, the semantic filtering, right, like, the word-based filtering of prompts. Whereas the Chinese companies, since their tools were initially built for Chinese markets and are prompted in Chinese, maybe have fairly good filtering in Chinese, but the English-language filtering is not as good. I was wondering if that's one issue here, but I don't know that for a fact.

Joseph:

That's super interesting. Yeah. I think there's one more thing you wanted to mention on this story, Emanuel, before we move to Alibaba. There was the apps and the models.

Joseph:

Right?

Emanuel:

Right. So just to transition here to the next story: so far we've been talking about apps. These are user-friendly, consumer-grade, anyone-can-use-them, advertised-to-the-average-user types of tools. The other thing that is happening at the same time is kind of a rerun of what we've seen with this website called Civitai and these more open models, as opposed to apps. So there's two Chinese companies, Tencent and Alibaba, which are, like, two of the biggest tech companies in the world.

Emanuel:

And they have released essentially the video version of Stable Diffusion. Stable Diffusion also does video, but Stable Diffusion is basically an open-weights model. It's an AI image-generation model that you can tinker with to customize it and make it better at producing specific types of images. And Tencent released this tool called Hunyuan. I hope that's how you pronounce it. And Alibaba produced this other model called Wan.

Emanuel:

And they're just the exact same thing. They released all the documentation. There is a GitHub where you can go and download the code and tinker with it. And as soon as this happened, very rapidly, the same thing we saw with AI images happened. The models were adopted by the Civitai community.

Emanuel:

They were modified to create videos of highly specific sexual acts and fetishes, and then also videos of very specific small-time YouTubers and Twitch streamers, Instagram influencers. And while Civitai at this point is pretty good at preventing you from posting nonconsensual content to its website, it also makes it incredibly easy to, like, I'm gonna take this AI video model that has been designed to create videos of blowjobs, and I'm gonna take this other AI video model that's been designed to recreate the likeness of this Twitch streamer that I like. And you kind of put them together and make nonconsensual videos, which are also of much higher quality than what these apps that I talked about produce. But at this point, they are more difficult to make. You need to navigate Civitai, know how to run these models, either do it locally on a fairly powerful GPU or rent that GPU time in the cloud, and set up the workflow for that.

Emanuel:

And it's not impossible. Like, I could figure out how to do it. Anyone can figure out how to do it, but it is several degrees more difficult than just downloading an app and clicking generate.

Joseph:

Yeah. And I think just the last thing on that before we take a break: when Alibaba released this open video model, which then, obviously, got used for porn as you reported, what was the actual intention with releasing this? Like, why did they wanna release it? What were they hoping it was gonna be used for?

Emanuel:

That's a complicated question to answer because it gets into this greater debate of, like, why is Mark Zuckerberg releasing Llama as an open model. Right? So the theory is that it becomes widely adopted across the world, and then question mark question mark question mark, monetize it somehow. How that works out is kind of, like, above my pay grade, but the plan is to make it open so as many people as possible adopt it. So the technology is developed by a community along with the company and has a lot of investment from that community, and then you sell them something. I don't know how that last stage works out, but it's the open model of AI.

Joseph:

Step four, profit. You don't need the in-between steps. That's just how it works. You know?

Joseph:

Yeah. Alright. We'll leave that there. Really, really amazing stuff. When we come back, we're gonna be talking more about AI.

Joseph:

This entire episode is about AI. I mean, we're going back to, like, almost our 2023 roots, you know? But it's gonna be about how police are using, AI when it comes to analyzing seized evidence. We'll be right back after this.

Emanuel:

Okay. So this next story is from Joe. The headline is, Cellebrite is using AI to summarize chat logs and audio from seized mobile phones. I think let's start with: what is Cellebrite? A notorious company in our little world, but for people who don't know, what is Cellebrite?

Joseph:

Yeah. So Cellebrite is an Israeli company. I mean, it has US subsidiaries and stuff, I'm sure, but it's predominantly an Israeli company. And it's basically ubiquitous in the world of law enforcement. So when a police officer seizes a mobile phone, maybe that's at the border with CBP, maybe that's ICE when they arrest somebody, maybe it's a cop at a traffic stop.

Joseph:

What they'll often use is a piece of technology from this company called Cellebrite. And, you know, it comes in lots of different forms, but generally the tool is called UFED. I pronounce it "you-fed." I'm not sure if that's entirely correct, but I like that it has Fed in there, U-F-E-D. And you plug the phone in. If it needs to, it will crack or bypass the password or the passcode requirement, and then it will download all of the data on the phone.

Joseph:

You know, in some cases, it's able to get deleted chats, or chats that you thought were deleted but weren't forensically deleted necessarily. And then it takes all of that information, and the police officer can, you know, safely store it for later. You know, if there's, I don't know, a murder investigation and you have the phone of the victim, you want a forensically sound image of that phone. Or if someone's crossing a border, you just wanna download everything on their phone and rummage through it without a warrant, because that's what authorities are able to do. There's a funny sidebar about how Cellebrite started, which is that when you would go into the Apple Store and you were an Android user but wanted to change to Apple iOS, Apple would have Cellebrite devices in there because they could help transfer your data.

Joseph:

I don't think that's the case anymore, because I don't think that's necessary. But it's just funny, especially around the San Bernardino case in 2016, when there was all of the FBI versus Apple stuff. There was a lot of coverage about Cellebrite because people were speculating that Cellebrite was the company that unlocked the phone for the FBI. But I think it was Kim Zetter at The Intercept who actually did a really good profile of Cellebrite and how, yeah, you go to an Apple Store, there's actually a Cellebrite in the back, probably.

Joseph:

But, yeah, it's a very, very common tool and company across law enforcement, its main competitor being Grayshift, which has probably taken a sizable chunk of its customer base, I imagine.

Emanuel:

Every once in a while, someone on Reddit posts a picture of the Cellebrite device at the Apple Store, and it's like the Leo DiCaprio meme pointing at the... Oh, there it is. So Cellebrite is doing what every other company in the world is basically doing right now, and they slapped AI on it, which is what your story is about. What does that mean for them? What does it mean in this context for them to use AI?

Joseph:

Yeah. So they've slapped AI into their product called Guardian. Again, that's not the one that's actually getting the data from the phones. But after the cop has extracted the data, they upload it to the system, like an evidence-sharing system. This is a bad analogy, but it's almost like Google Docs for cops, where you can collaborate across the cloud live with each other on a piece of evidence.

Joseph:

It's like that, but for stuff that's been taken from mobile phones. So you'll have all of the chat logs in there, the voice memos, the photos, all of that sort of thing. And what Guardian is now capable of doing, with this slapped-on AI, is summarizing all of that material. So rather than a police officer having to read through every single text message or listen to every single voice memo, Guardian can use AI to potentially summarize it. Now, I haven't seen a Guardian-produced AI report, and I would absolutely love to see one if anyone gets hold of one.

Joseph:

But that's the way they frame it, in that it can really speed up investigations. It can save cops time. It can deliver all of the promises of generative AI, essentially. But this isn't a school kid trying to generate an essay. This is a cop generating a summary of evidence seized from a mobile phone.

Joseph:

And that is very, very different, in my opinion. You know, the stakes here are a lot, lot higher when it comes to this sort of use of AI.

Emanuel:

Do we know what cops think about this new feature?

Joseph:

Yeah. So there's a couple of testimonials included with Cellebrite's announcement. And I should say, like, this actually came from a press release from Cellebrite in February. I'm not really in the habit of reporting on press releases, but it seemed like nobody picked this up. You know?

Joseph:

I was searching around for something, probably to do with Cellebrite, because of all these leaked documents we get from GrayKey and Cellebrite, that sort of thing. And then I came across this February press release saying, yeah, we're putting AI into this product. So included in there are testimonials from police officers. Now, this is very, very common.

Joseph:

We saw it with Ring when we did a ton of reporting back at Vice, where the cops would provide, you know, pretty positive reviews. And then, I don't know, they get cameras or whatever in exchange. I've seen that with other surveillance companies and with this mobile forensics firm as well. So there is a testimonial in there, and it comes from a pretty small police department who says they were piloting the AI capabilities. And they said, quote, it is impossible to calculate the hours it would have taken to link a series of porch package thefts to an international organized crime ring.

Joseph:

The GenAI capabilities within Guardian helped us translate and summarize the chats between suspects, which gave us immediate insights into the large criminal network we were dealing with, end quote. So there's a couple of things there. The first is the translating. I mean, that's not that revolutionary, obviously. And I don't have a super problem with cops using an automatic translation tool, as long as it's verified later on.

Joseph:

But then: summarize the chats between suspects. Now we're getting clearly into gen AI territory. But, also, they say they've linked these different cases together by summarizing the evidence. Now, of course, maybe, if not most likely, they would have linked those together manually as well. You know?

Joseph:

I don't know. They're reading the chats or something, and the same money mule comes up or the same safe house comes up across all of these different chats, whatever. So maybe they would have found out about it anyway. But they're saying explicitly here that they linked all of these together with this AI tool, basically.

Emanuel:

Yeah. I think one of the reasons it is so easy for us to write articles about AI and make fun of AI is that one of the most common uses for it is summarizing large pools of information, and it so often gets it wrong. And it just increases in severity, or, like, becomes less funny, depending on the context. Right? So if it's a Google search pulling some random comment from Reddit that tells people to put glue on their pizza, that's not great, but mostly it's like, AI is stupid.

Emanuel:

And then, increasingly, we've been seeing more and more stories about lawyers who are using AI in court, and then you get the AI hallucinating citations of cases that never took place. That seems pretty bad. And I sort of believe the cop quotes here about it being much faster to summarize just, like, massive amounts of chat logs and other material. It's like, sure, it's easier, but also, yikes. Yeah.

Emanuel:

And you got some pretty alarming quotes from civil liberties experts, like, speaking to this exact issue. What do they say?

Joseph:

Yeah. This is from Jennifer Granick from the ACLU. And, you know, she brought up Fourth Amendment issues, which is, like, when you get a warrant, it's only supposed to be for a particular device or maybe even a subset of that device, that sort of thing. And that's obviously a long-standing Fourth Amendment warrant issue. But I think she brought up some other really, really interesting points. You know, she said there could be a tendency to believe that an AI tool will successfully identify patterns which reveal criminal behavior more so, or better, than the human reviewer.

Joseph:

So you could end up trusting the AI more, because, oh, well, this is an AI tool sold by a law enforcement contractor. Like, why would I not trust it? You know? That said, and I think we'll get a little bit into this at the end, but Cellebrite says there is always a human in the loop, an HITL. I think they made that acronym up, because I've never heard that.

Joseph:

And when Jason was editing it, he highlighted it like, oh my god. So there's always a human in the loop, they say. Okay, but I don't know. This is still crazy to me, the idea that it could introduce errors, which then the police officer has to catch. And maybe they will catch them, but wouldn't it be better if they just did it from their own experience?

Joseph:

I'm not entirely sure. But you brought up how it's easy to dunk on chatbots, and here the stakes are higher. I mean, we're seeing more and more of this. There was this BBC study when was this from? From February.

Joseph:

And they tested, it seems quite methodically, ChatGPT, Copilot, Gemini, and Perplexity to summarize news stories. So basically, it's kind of doing the same thing as here, summarizing some sort of corpus, some sort of body of work. And it found that 51% of all AI answers to questions about the news had significant issues of some form. Half of them are getting the shit wrong. Like, that is really, really crazy.

Joseph:

When you're talking about news, obviously, that's very, very bad. But then in this context as well, I don't know. It's just alarming and concerning for sure. And again, I haven't seen any errors in it, but I think the potential is absolutely

Emanuel:

there. This is not gonna be the end of AI and police work. There are several stories that we're working on that will dig into this more in the future. But what do you think about investigators? I don't know.

Emanuel:

The FBI, any law enforcement agency increasingly adopting AI tools into their investigations, their their reporting of crimes?

Joseph:

Yeah. So there's a couple of examples. One is, you know, the thing I go on about most in the entire world, which is how cops are increasingly compromising entire encrypted chat platforms like Sky, EncroChat, and Anom. And then all of these other smaller ones Exclu, Ghost, Matrix, blah blah blah. The Dutch authorities in particular, they did create AI tools to surface content in those massive, massive datasets.

Joseph:

So if a criminal was talking to another one about cocaine, for example, the AI would surface that chat, tell an analyst, hey, here is a conversation about cocaine. And in my reporting, speaking to the law enforcement officials involved in those sorts of investigations, it seems that that sort of AI was especially useful. I guess the good thing there is that it's very much limited to that dataset. But then, you know, it still brings up questions of, well, eventually, those people are gonna be prosecuted. So it would still involve a human going and verifying the actual evidence against them.

Joseph:

That's one way. The other one, which I think is probably a lot more relevant for more people, is Axon, the law enforcement contracting giant. You know, it makes what? Tasers, body cameras, basically everything for everybody. Right?

Joseph:

And they have this relatively new capability called Draft One. Yeah. Draft One. And it uses, basically, ChatGPT or OpenAI's model to summarize the audio of body cam footage. So it takes that audio, it listens to it, and basically summarizes what happens.

Joseph:

And examples are, oh, a man came over to the police officer, and he said x y z. He described the suspect as blah blah blah. And the idea is that police officers can be sort of more engaged in the moment. They can be talking to the witness or the victim or the suspect, and they can be really engaged in that conversation without having to, like, sort of remember bits and bobs and that sort of thing. And when I was going through a lot of the Axon material, they were saying or maybe a police officer was saying that with the rise of body cameras, officers now speak more to the body camera than to the person in front of them because they know they're being recorded.

Joseph:

So they say they almost repeat everything. So it gets recorded on the body camera. In a way, I don't know. That sounds like a good thing. There's there's more evidence.

Joseph:

Right? But the idea of that AI is that the cop can just go and sort of do their job, and then Axon's tool, Draft One, will summarize it. I mean, that made quite a splash in good and bad ways when it was announced, several months ago at this point. And it brought up basically the same concerns as the Cellebrite one here, which is you're asking an AI to summarize stuff which is really, really important. This is not trivial.

Joseph:

This is not some dumbass lawyer's summary and to be clear, the stakes are pretty high there as well, but the judge is always gonna catch them. Here, it's a lot more asymmetrical because, well, it's a cop generating the evidence and summarizing it with their AI. You don't know if the victim or the witness or the suspect necessarily gets a chance to challenge that in the moment. Right?

Joseph:

It's a black box happening over there. So I don't know. AI is gonna continue to become a more and more relevant part of policing. And I think we're gonna start to see the side effects of that in the same way that we saw side effects for facial recognition when more cops were using it. And, yes, it's an exceptionally powerful tool for them.

Joseph:

But, you know, the wrong people have been arrested because they happened to be Black or something like that. And these systems make mistakes. Does that sound right, Emmanuel?

Emanuel:

That sounds horrible. So, yes, it sounds right. Yeah. It sounds extremely frightening and bad. So, yeah, that sounds correct.

Joseph:

That is what we do on this podcast.

Emanuel:

If you're listening to

Joseph:

the free version of the podcast, I'll now play us out. But if you are a paying 404 Media subscriber, we're gonna talk about the rise of vibe coding and a game someone made with AI that is now allegedly making them $50,000 a month, which is crazy. You can subscribe and gain access to that content at 404media.co. We'll be right back after this. Alright.

Joseph:

We are back. Sam, do you wanna take the lead on this one? So I can maybe go get a glass of water while

Sam:

you're just Yeah. Yeah. You've been yapping for a minute, Joe, so I'll take over. I mean, I edited this one, so I was into this one. It was a fun blog, I thought.

Sam:

So this is by Emmanuel. The headline is this game created by AI, quote, unquote, vibe coding makes $50,000 a month. Yours probably won't. First of all, such a good headline. I feel like it made some people mad.

Sam:

I think it broke out into, like, AI coder Twitter, which is funny. Always a funny time to be in that sphere. So yeah. I mean, you'll tell me about this game. I played it a little bit.

Sam:

It's actually kinda fun. What's what's going on with the the vibe coding game that we're talking about here?

Emanuel:

I think "it's kinda fun" really hinges on how much time you spent on it, which for me was, like, probably five minutes, and I imagine is the same for you. Right? Like, five minutes of tinkering around with this game, which is this three d, simple looking polygonal game where you fly a plane. It runs in your browser. It loads immediately.

Emanuel:

You can play it on your phone even. You just load into this world, fly your little plane. You can, shoot balloons that are in the environment. It is, massively multiplayer, meaning that all the other players, who are playing it, you can see them in the game and you can shoot them as well. Though when I was testing it, the netcode wasn't really up to the task and the other planes were lagging too badly to actually shoot them.

Emanuel:

But in theory, they are there. And that's about it.

Sam:

There's more little characters now. I don't know if you looked at the site today. There's more little things. There's like a tank.

Emanuel:

Oh, yeah. Yeah. He did. You can do the same thing with a tank and, like, shoot things on the ground. Yeah.

Emanuel:

There are buildings that are just, like, squares. What I call it in the story is a gray box environment, which is what they call it in game development: the three d environment early on in development, before you apply the two d textures that you slap on the three d models to make them look realistic and cool. That's kind of what it looks like. It looks like very early three d. That's the vibe.

Sam:

Yeah. I think maybe that's why I like it. It reminds me of, like, the little Java games. Yeah. It's, like, back in the browser game days.

Sam:

So let's talk about vibe coding, I guess. Do you wanna, first of all, just define vibe coding? What is vibe coding as a phrase? This is the first time I'd heard of it, by reading the story. So how was it made through, quote, unquote, vibe coding?

Emanuel:

So, historically, coding is a very methodical practice. Some people are better at it, some people are kinda messy, but a lot of people really take pride in how clean and minimalist and efficient their code is. And that becomes especially useful with video games, where the code base is really large and you're having to do all these tasks. So the more efficient it is, the easier it is to run and modify and fix if something goes wrong.

Emanuel:

Vibe coding kind of takes that meticulous philosophy, throws it out the window, and says, instead, now we have these AI tools. You can basically speak to an AI tool that will do the coding for you, and you just kinda tell it what you want. It will make something. Maybe you don't even look at how it coded it. You just check that it works.

Emanuel:

And if it kinda works like you wanted, you keep going and do some additional vibe coding. And that's that's basically the idea. You're just like, I want a app that does this. Make this for me AI tool, and it makes it for you. You're like, oh, cool.

Emanuel:

This kinda works. Now I want you to modify it like this. And eventually, you get a working product, but maybe the guts of the thing are, like, really messy. But who cares? Right?

Emanuel:

Because you made it instantly with no effort.

Sam:

Yeah. I mean, it's a fun game, but it's definitely, like, a someone's-first-try-at-a-game type vibe. It's like the game that you would submit in college to, like, a development class.

Emanuel:

Yes. And I'll say, like, the reason this caught my eye and the reason that I wanted to write about it is because it's made by this guy called Pieter Levels. He's known on Twitter as levelsio, and he's just, like, a real pioneer of this whole vibe coding thing. And his whole philosophy is, we have all these powerful AI tools.

Emanuel:

I can use them to make not one start up, not one app, but several. And I'm just gonna spin up all these companies. I'm gonna spin up all these apps. And if it catches on, great. And if it doesn't, who cares?

Emanuel:

I didn't spend much time doing it, and I'll just move on to the next thing. And he just celebrates this moving-really-fast-with-AI-tools-to-make-things philosophy, and he even goes as far as to like, if you look at his bio, it's his name, and then it's the name of each of his apps and how much money it's making a month. And the whole idea is to be like, hey, look, I just vibe coded these things really quickly, and now it's making me this many dollars a month, which is making me rich.

Joseph:

What are some of the other ones?

Emanuel:

The one that really made him a lot of money is just, like, another AI image generator. It does what every other AI image generator does, but it's sort of specifically tooled to help you make your social media presence. So it's like, you feed it your image, you tell it what you want, and then it creates a bunch of, like, here's an Instagram profile picture. Here's one for LinkedIn. Here's one for Facebook.

Emanuel:

Here's one for you know, you need a headshot for something. That's just kind of what it does. It's an AI image generator. It's as simple as that.

Sam:

Yeah. So the headline spoils it, obviously. But let's talk about the money. It's a lot of money for what it is, you know, for just kinda, like, shitting out a game using AI. He's claiming that he's making $50,000 a month.

Sam:

What do you make of that? Like, I don't know. It's like, you know more about, like, the game the the game industry than any of us here, I think, for sure. But, like, is that something that is totally off the chain crazy, or what's going on there?

Emanuel:

So it's definitely a really good number for any game to hit. But let's break it down. Let's first talk about the money that he's making, and then let's get to why you're not gonna make this money yourself. So the money comes from two sources but really one source. But the two sources are: there are in game purchases. This is how every free to play game makes its money.

Emanuel:

You give people the game for free. They like it. They invest time in it, and then you sell them in game items because they're in love with the game and they want, I don't know. If you're playing Warzone and you want a cool looking operator, then you'll pay $5 to, you know, play as Nicki Minaj in Warzone. So he's doing the same thing.

Emanuel:

You can fly the little plane that you get when you run the game, but you can also buy a jet. I have the exact numbers in the story, but I think he sold, forgive me, 12 jets for $30 each. Obviously, that doesn't add up to $50,000. So where is that money coming from?

Emanuel:

That money is coming from the real source of income, which is in game advertising. So you're in this three d environment, you're flying your little plane, and there are blimps in the environment. And then later on, he also added planets, and an advertiser can come to Mister Levels and be like, hey, I wanna advertise my company. Put my name on this blimp, and he sells it for x amount of dollars.

Emanuel:

Some numbers that we heard are, like, the bigger deals are $5,000. Right? So you wanna put your name on a blimp for that month, you pay him $5,000, and that's where the $50,000 is actually coming from. I think that's, like, a great model. Right?

Emanuel:

It's like, if you make a cool little game, you sell it on the Apple App Store, you sell it via Steam, and somebody wants to pay you $5,000 to put, like, an in game ad in it, great. But he's not the first person to think of it. Like, the idea of in game ads has existed for a long time. They don't work because people don't trust that they'll get the eyeballs and the type of eyeballs they need in order to convert them to sales. And the reason that he sold all this ad space is, one, he, on his own, is a notable figure.

Emanuel:

Like, levelsio is, like, a big name in the AI community. He has over 600,000 followers on Twitter, and they're heavily leaning into, like, the AI business, and a lot of the advertisers are AI companies. They know AI people are following him. They know AI people are gonna check out his game. Those are the kind of customers they want, so they are happy to give him $5,000 in order to get in front of that specific audience.

Emanuel:

Then the other thing, he tweeted out his game. People thought it was really cool. I think it's a cool proof of concept, and we could talk about, like, AI's place in game development later. But Elon Musk saw it, and Elon Musk retweeted his game. And that is kind of like, you know I can think of few things that would do more to increase your visibility in the world today than Elon Musk tweeting your product.

Emanuel:

So that obviously, like, hugely helped him in the advertising aspect of this.

Sam:

Yeah. For sure.

Emanuel:

And that's not gonna happen to you. Right? It's like, Elon Musk is not gonna tweet your shitty game, no. PC Gamer, a publication that I love, has really done a good job of following the Steam ecosystem, which is where a huge portion of the video game business takes place these days. And one thing they've tracked over the years is how many games are coming out.

Emanuel:

Like, every year, more and more games are coming out, and they counted that in 2024, 19,000 games hit Steam. And it's just incredibly hard. This is the problem for free to play games and games that you pay for upfront. You can make a very good game, but it's incredibly hard to get noticed and find an audience because there's so much shit out there.

Emanuel:

And one way to solve that problem is getting retweeted by Elon Musk.

Sam:

Yeah. I mean, I feel like that's a blessing and a curse, to get the Elon touch. But you just reminded me of a story that you wrote a couple months ago about Netflix. Your headline was Netflix is bullish on gen AI for games after laying off human game developers. And I definitely see more and more kind of noise being made about generative AI games, vibe coding or not. Do you feel like people getting more into vibe coding for games is pushing that needle forward?

Sam:

Like, are we actually gonna have more and more just like Netflix is just gonna flood the zone with generative AI games as they do with trash TV shows and just see what sticks? Like, what's kind of the is there, like, a bigger industry, you know, like, takeaway here, I guess? Like, is this something that people should be looking at more, or is it just kinda like, oh, this guy had a hobby, whatever?

Emanuel:

Yeah. So I think one thing that is definitely happening in video games, which is happening across the tech industry, is just a thing that is happening in software: no matter what you think about AI, it is becoming increasingly apparent that it's going to take fewer people to produce more code, and you're seeing tech companies making cuts based on that. And there's all these crazy quotes. I believe it was the CEO of Anthropic, the company behind Claude, saying today he thinks in, like, the next three years, half of the code is gonna be AI generated. And I don't know if those numbers are right.

Emanuel:

I wouldn't dare speculate, but it does seem to be trending towards, like, a lot of that labor can be offloaded, and that's gonna be happening and is happening in games as well. That's not exactly what Levels is doing and talking about here, with this whole idea of vibe coding and, like, very quickly getting projects off the ground with AI. To that, I would say, and Joe made a note of this here, Microsoft a few weeks ago announced that they have, like, this great model for prototyping gameplay that they've been using internally, and they have some game developers quoted in that paper promoting it and saying what a useful tool it is. I highly recommend listening to our friends at Remap, always, because they have a great podcast, but they also had a great discussion about this very subject. The thing I would say is that, from what I understand about game development, there is this prototyping phase where maybe you're making, like, these type of gray box environments that I talked about, where you're just like, is the idea I have for a game an actually fun game?

Emanuel:

And, what we know about the industry is, like, there's a lot of these prototypes that people try, and then they find out that it's not really, worth the effort, and they move on to the next one. And one of those seems sticky, and that becomes like a fully fledged game. I can imagine that, if you're an independent game developer, like a one person team or like a two person team using AI tools to be like, I I have an idea for a thing that might be fun to do in a game, and it might be easier to get to that prototype phase with AI. And maybe if you find that it's really sticky, then you kinda dig in and do the actual technical and creative work to develop that into a fully fledged game. I can imagine that, but that is not at all, again, what Levels is doing.

Emanuel:

Like, the actual story here and the actual reason that it's making money is Levels made a video game. And it almost doesn't matter. Like, if you see the discussion around his project and what people who are playing the game and putting ads in are actually saying, there's, like, zero discussion about whether the game is fun or good. It's just the fact that he was able to get it off the ground and running that has caught people's imagination. So could other people use it to make something good?

Emanuel:

Yeah. Maybe. But that is not, like, the point of his project. Like, the quality of the game is beside the point.

Joseph:

Yeah. And there'll be marketing stuff as well. Right? There was a, an example recently where there was, like, a photo or an alleged screenshot of a new Guitar Hero game going around on social media, then a Call of Duty sniper one, I think. And people are like, oh my god.

Joseph:

They're making these games. And then you click through, and it's actually a survey for, would you play this game if it was real? And then, of course, you look at those images, and they're quite clearly AI generated. So they're just using AI to make the fake games and the real games now. You know?

Joseph:

Yeah.

Emanuel:

Yeah. That's prototyping marketing, which is insane. That is such a crazy thing. I think it was Activision that did it I love it. Yes. Activision.

Emanuel:

Yeah. Yeah. It just seems like such a bad idea, technically.

Sam:

Makes me so upset. Yeah. Yeah.

Emanuel:

Just, like, imagine it's I don't know. It's like, here's an AI generated Half-Life 3. Would that be cool? And it's like, I guess. I don't know.

Emanuel:

You have to make the game.

Joseph:

It's cruel. Yeah.

Sam:

Yeah. The bait and switch. Yeah. Yeah. That sucks.

Joseph:

For sure. Alright. Let's leave that there. Thank you for taking the lead, Sam, so I could sit. And I wrote the show notes while you were doing that.

Joseph:

Efficiency. Hyper efficiency, which I love.

Sam:

Multitask. Love it.

Joseph:

Yeah. Alright. I will play us out. As a reminder, four zero four Media is journalist founded and supported by subscribers. If you do wish to subscribe to four zero four Media and directly support our work, please go to 404media.co.

Joseph:

You'll get unlimited access to our articles and an ad free version of this podcast. You'll also get to listen to the subscribers only section where we talk about a bonus story each week. This podcast is made in partnership with Kaleidoscope. Another way to support us is by leaving a five star rating and review for the podcast. That stuff really, really does help out.

Joseph:

This has been four zero four Media. We'll see you again next week.