from 404 Media
Hello, and welcome to the four zero four Media Podcast where we bring you unparalleled access to hidden worlds both online and IRL. Four zero four Media is a journalist-founded company and needs your support. To subscribe, go to 404media.co. As well as bonus content every single week, subscribers also get access to additional episodes where we respond to their best comments. Gain access to that content at 404media.co.
Joseph:I'm your host, Joseph, and with me are two of the other four zero four media cofounders, the first being Sam Cole.
Sam:Hello.
Joseph:The other being Emanuel Maiberg.
Emanuel:Hello.
Joseph:Alright. If you're a new listener to the podcast, you would have just heard some fun, interesting, funky, cybernetic music. I'm running out of adjectives at this point. If you're a regular listener, you will of course also have heard the same music, but you would be thinking, woah, wait, that's not the ordinary introduction to four zero four media. That's because we've replaced the very cheap stock music I bought very quickly when we were launching the company in 2023 with new music written by our audio producer, Alyssa Midcalf.
Joseph:So really, really appreciate it. Love it. You're gonna hear more music for the ads, I think, and definitely the outro as well. The outro is my favorite. Sam and Emmanuel, what do you what do you think?
Emanuel:We're pros now. We're legit. It really elevates the podcast. It makes us sound really, really professional.
Sam:Yeah. We had a bunch of, like, comparisons to what we thought it sounded like, and several of them were like, we need an eight bit version of this. I had some good DOS games in there. I think I was talking about DOS with Harlow, who's on an interview episode with us that'll come out in a couple weeks. And we were talking about, like, MS DOS and Linux and stuff like that.
Sam:And I think it was on my brain thinking about that and the way that this new intro music sounds. It's like it's very, like what did you say, Emmanuel? Was like sounds like we're, like, in an underground, like, water world dungeon or something in, like
Emanuel:It's like yeah. We're getting into the dungeon level, cave level. That's the that's the vibe.
Sam:Cave level. Yeah. It's good.
Joseph:Yeah. I love it. Another piece of housekeeping, Sam, there is a new podcast which came out today. We're recording this Tuesday. It comes out the day after for nonpaying subscribers on Wednesday.
Joseph:What is this podcast?
Sam:Yeah. So I've been working with the CBC on a podcast about deepfakes. The podcast is called Deepfake Porn Empire, and it's in their Understood series. So if you just go to your podcast app and search Understood CBC, it'll probably pop up. But, yeah, we talk about, the history of deepfakes, which goes actually way deeper than I even realized.
Sam:I learned a lot just making this podcast with them. It goes back to my favorite places on the Internet, which are, like, Usenet forums back in the nineties. People were making, you know, like, Photoshop and collage level, quote, unquote, deepfakes, but it was, like, you know, porn face swap stuff. Yeah. And we also talk about the investigation that the CBC did into Mr. Deepfakes, which is one of the biggest deepfake porn sites out there, or used to be, because it's gone now due to their investigation.
Sam:But I got to talk to the several journalists who worked on that, who confronted the guy, who found the guy. He, like, lived in a suburb of Toronto, and they, like, found him backing out of his driveway in his Tesla through, like, an overnight stakeout situation. So, yeah, it was a really crazy investigation on their part. I got to talk to some of those investigators from Bellingcat, from CBC, from Tjekdet, a couple others.
Sam:But, yeah, it's a good time even though it's a really dark topic. I think just bringing in some of the targets of deepfakes especially, and hearing from them, has been really important and is a huge part of this podcast as well. And also the rest of the show, the rest of the Understood feed, is probably really interesting for listeners. They had Cory Doctorow on for a season to host about enshittification. They had Jacob Silverman on about Elon Musk.
Sam:They had, like, a Celine series, which is so funny to me. It's kind of left field, but, like, I haven't listened to it yet, but I'm absolutely going to. I guess because she's from Quebec.
Joseph:Right.
Sam:Yeah. It's a good series in general. But, yeah, the one that I hosted, its first episode is out today. Trailer's out. Episode two will be out next week.
Joseph:Sure. We'll try to put a link in the show notes. If it's out, we should be able to grab that and point people to it. But, yeah, definitely check it out every week. Add it to your feed if that sounds like something you'd be interested in.
Joseph:As for this podcast, as ever, we have a bunch of stuff to get through. The first one is one that Emanuel has been working on for a while. It's complicated, but absolutely crazy, frankly. I remember when Jason read it today and posted it on Bluesky, he said, like, every bit of the story is insane, and it does get crazier as it goes along. The headline is, quote, students are being treated like guinea pigs, end quote, inside an AI powered private school.
Joseph:So let's set just a little bit of context, Emmanuel. This is about something called AlphaSchool. Some people may have heard of that. I imagine a lot of people won't have, but it is a big deal. And AlphaSchool has got a lot of attention recently, and of course, that's one of the major reasons why we're covering it beyond obviously you doing all the reporting as well.
Joseph:To lay the groundwork, what is AlphaSchool and why is it a big deal?
Emanuel:AlphaSchool is, as you said, an AI powered private school. What that means in practice is there is like an AlphaSchool curriculum or program. They have physical locations across the country. They also have some sort of homeschooling product that people can use if that's how they educate their kids, and kids go to these locations. They study for two hours a day, working on kind of like the core curriculum.
Emanuel:This is the stuff that the government said that kids have to know, the stuff that will eventually lead them to the SATs or the Advanced Placement, AP, classes that kids eventually take in high school, the standardized tests. All of that is condensed into two hours, and then once that is done, the kids can focus on more enriching activities, stuff that they work on with their teachers and other students, whether they're starting a business or they're working on public speaking or whatever, just enriching, more social, more involved, innovative ways of learning. And the idea of condensing that core curriculum into two hours has existed for a while, but the Alpha School method is essentially to help facilitate that with AI, with generative AI. So broadly speaking, that means that they are AI generating classes or courses that kids can take depending on their current level, and part of the pitch is that it's personalized. That is a big concern for parents, you know, kind of the one-size-fits-all system in public schools. And this is an alternative to that, that is allegedly made better with AI.
Emanuel:And AI has been hitting education really hard in all forms, college level, public schools, private schools. Jason wrote a really great piece, I think it was last year, that was based on interviews with teachers kind of talking about how AI is already affecting them, and Alpha School has emerged as the leading example of how this goes well, how AI is incorporated in a smart, useful way. It's promoted by a lot of people in tech. It has, like, Bill Ackman as a backer. Linda McMahon, who is the Trump-appointed Secretary of Education, visited their campus in Texas and talked about how wonderful it is and how it's the future of education. And I think, finally, a really good way to set this up, to think about Alpha School as a company, is by talking about two of the people who are leading it.
Emanuel:One is Mackenzie Price, who is the person promoting this two hour learning philosophy, and she's been doing this since 2014. And then there is this other person who is the principal of the school, his name is Joe Liemandt. He is the person at the head of a quietly gigantic software company called Trilogy that does a lot of the development work for Alpha School, but he also created something called Crossover, I think. And that is a remote work platform where, like, companies can connect with remote workers, and the key to that company is what we call bossware, just like very closely surveilling the workers to make sure that they're doing their work on time, that they're being efficient. And that philosophy has kind of made its way into Alpha School as well.
Joseph:Yeah. So it's like that combination of this idea of two hour learning where you condense the learning into there, and then you combine that with that bossware slash spyware, and we'll get into more specifics in that in a minute, but combining all of that with AI, with the promise that you send your kids to this school, which costs more than $60,000 a year potentially, right?
Emanuel:Prices vary, but it's most often $40,000 and can go up as high as $60,000 a year in California.
Joseph:Yeah. So it costs a lot of money. That's the promise. It has received media coverage. It's not like we're the first outlet to cover this, but the coverage has been of a different vibe.
Joseph:Right? They've been on the Hard Fork podcast by the New York Times. Just before we move into our investigation, how would you characterize the coverage and the perception Alpha School had up until this point? You said it's popular in Silicon Valley. Is it popular in the tech press?
Joseph:Like how do people perceive it? Do they think, oh, this is actually an interesting idea and like this could be fruitful? Like what's the vibe really?
Emanuel:The vibe is that it's swept up in AI hype, and that is the main sentiment of the coverage around it. People talking to mostly Mackenzie Price and discussing how wonderful AI is and how it's helping students, without many specifics. Local news covering it, Fox News covering it, kind of all the tech influencer people on Twitter tweeting about it. I would also mention that our friends at Wired did a very good investigation about Alpha School, but that was more about how the close monitoring and kind of productivity mindset of the school was really burning out some students and leading parents to pull them out. But the focus of my investigation is the AI product and some privacy concerns, and that is based on a lot of internal documents that show how Alpha School itself thinks about these issues.
Joseph:Yeah. So we'll talk about the specific findings in a second, and it's a really detailed investigation, so I do encourage people to actually go read it themselves as well. But I pulled out sort of three main areas, and we'll get into those in a second. But you mentioned this material you received. What were these leaked documents and who did you speak to to inform this investigation?
Emanuel:I talked to three former employees, but I think most critically is I got a snapshot of something called Workflowy. Workflowy is another organizational productivity tool. It's basically a note taking app that you can share with as many people as you want, and at Alpha School, the philosophy was that everybody documents their work and shares it with everyone else. So it's kind of like a really, really, really long document that shows what everyone was working on at that time, what they were thinking about it, the problems that they had. They use this method where they call it their second brain.
Emanuel:So it's like everyone is essentially saying in public, like, here's what I'm thinking about the thing that I am currently dealing with. Here are the problems. Here are the proposed solutions. Here's my strategy. And that also references a lot of documents where people are sharing, like, feedback from students and examples of things going wrong, and I had some access to that stuff as well.
Joseph:Yeah. So it's like a series of documents, but almost like a really helpful overarching document as well, in this Workflowy, I think it's the name. And yeah, it gives you really piercing insight into how people inside AlphaSchool are actually thinking about the products and the AI that they're making and deploying, and the impact of that as well. So again, there's a lot in there, but here are sort of the three things I pulled out, or rather the first one. The faulty AI generated lesson plans, obviously that is going to be probably the biggest concern to parents or to students who are actually at AlphaSchool.
Joseph:What is the deal with these faulty AI generated lesson plans? What's happening there?
Emanuel:Yeah, so before I get into the specifics, I would just say that it's hard to overstate how bought in Alpha School as a company is on AI. They use AI for everything. This thing that I just talked about, the second brain, like, everybody feeds that into ChatGPT and then talks to ChatGPT about their projects to get ideas and find solutions. But it also is how they generate a lot of the courses. They will digest a bunch of material, feed it to an LLM, and then essentially say, like, give me a class about this subject and generate multiple choice questions about this subject.
Emanuel:The best example of this, I think, is something that the company calls AlphaRead. This is an app that students access to work on their reading comprehension skills, all the way from grade school up until SAT level reading comprehension questions. And I think the most damning example in the story is a multiple choice question about an article. The AI generated this question, and the answer is illogical. Like, none of the four choices that are given to students make sense. They're asking them to complete a sentence to show that they understand what they just read, but none of the answers make logical sense given the context of the article, or grammatical sense. Like, literally, you can't make it work. And in the Workflowy, you can see employees talking about the fact that this is something that happens. It doesn't happen with every question. There are indications that these types of problems happen at least ten percent of the time, like hallucinations happen about ten percent of the time with all LLMs, and that appears to be at least as common here.
Emanuel:And they talk about how this does, quote, more harm than good, because you're trying to teach someone and you're telling them that this is the thing that they should learn, but the thing that they're learning is incorrect. And that creates a problem where students, A, get frustrated, and then don't trust the teaching. And I don't know, just think about it: I don't know how you all were as students, but school was pretty difficult for me as is, and the idea of being presented with a problem that I literally can't solve, because it's been faultily generated by AI, just sounds really, really frustrating.
Joseph:Is the teacher gaslighting me, or playing a trick, or something? And as a child, I mean, depending on your age, obviously, you may not know that, oh no, it's because it was AI generated and it hallucinated. And why would you assume that? Also, you would probably just assume that, well, this is somebody in a position of authority, you know, my tutor or my teacher or the school or whatever, so surely this question is going to be correct. And you try to read it, I think we actually included a screenshot in the article, and yeah, you can't parse it at all.
Joseph:It just doesn't make sense. You know, you said the 10% figure, and some people may hear that and be like, well, it's only wrong 10% of the time. That sounds like quite a lot for a school, you know, and enough so that in these internal documents that you obtained, employees do really think it's an issue, clearly.
Emanuel:Yeah. I mean, it's 10% of the time, which is a minimum. Right? And that is just about the worst level of error, where it's generating a patently impossible question.
Emanuel:We have no idea how common it is, but the Workflowy shows that it is a problem where the AI generates things that the students can answer, but they're not actually preparing them for the SATs. Right? So they're taking a bunch of questions and reading a bunch of articles thinking that they're preparing for the SATs, but the comments on those materials from Alpha School employees show that they're actually not teaching them the reading comprehension skills that they need. They don't require critical thinking. They don't require a deep understanding of the articles that they're reading.
Emanuel:Like, an example that I saw is that you can answer one of those questions just contextually, given the paragraph that they're asking you about. It doesn't actually require you to have understood, like, the essence of the article that you just read, which is what you're supposed to learn in order to, allegedly, I don't know, succeed in life and do well on your SAT.
Joseph:Yeah. And it seems like some students are getting frustrated and they then have to sort of go back and do further education because they haven't fully understood what they're actually supposed to do, all of that sort of thing. The last thing on these lessons before we move on to the other couple is that AlphaSchool also uses AI to review and critique those AI generated questions. So it's using the AI to fact check and sort of probe and try to improve the output of the AI, which we've already said can be faulty. So there's like some sort of cycle going on there, right?
Emanuel:Yeah. So the company, again, very bought in on AI, and the future that they're striving towards is to generate the entire lesson plan with, again, quote, no human in the loop. A completely autonomous AI system that is educating. That's their goal. That's what they want.
Emanuel:That's where they wanna get to. And in order to get there, what I heard from employees is that they're removing as many humans from the process as they can, and that means that students are presented with questions that are not vetted by humans, and they're relying on AI to vet the quality of materials. And this is sort of like a classic problem with AI that everyone understands is not the correct solution. It's like you have an AI that's generating some sort of material. You know that 10% of the time, there are going to be hallucinations in it, and people before have tried to solve this problem by just saying, like, oh, let's take another AI, or take the same AI and have it do another pass on the generated materials in order to fix the problem.
Emanuel:Obviously, that's not gonna work.
Joseph:I just don't understand the thinking. I really, really don't. Like, maybe I'm missing something. I just don't get it. Like, why would you use AI to check the output of AI?
Joseph:I just fundamentally don't understand.
Emanuel:Yeah. I mean, anyone who thinks critically about AI knows that this doesn't work, and there's been many examples in the past of people trying this and failing, but I mean, yeah.
Joseph:That's what they're doing.
Emanuel:That's what they want to do. They think they can get there. Yeah.
Joseph:Yeah. Okay. Then you found that Alpha School is scraping the content of other online sort of education platforms and resources to then incorporate into their own lessons. But it looks like, based on the material, they're doing this without permission. Now, it's almost like a story as old as time at this point. Sam did a bunch of reporting about the Nvidia stuff as well.
Joseph:We've done lots of reporting about building AI tools based on scraped or repurposed data, that sort of thing. So is AlphaSchool basically doing the same thing? Like what are they doing there?
Emanuel:Yeah, very much. This part of the story very much reminded me of Sam's reporting. You see, again, people in internal documentation instructing other employees to use their personal email accounts to sign up for various online learning sources and scrape them. And the reason that they want people to use their personal emails and not the company email is so that they don't get banned. Right? It's like if, let's say, Khan Academy, which is an online learning platform.
Joseph:Pretty big, pretty popular.
Emanuel:Yeah. And which it seems Alpha School scraped extensively. If Khan Academy notices that they're being scraped a bunch by the same company email domain, then they could, in theory, ban them. I've seen people being told to buy materials if they need to and then expense it. I've seen internal documents that just show a ranking of like, what are the most valuable textbooks that we can scrape?
Emanuel:And those rankings showing, well, here's what we can get from this textbook, and here's why we should scrape it, here's how we could scrape it, including instructions on how to not get rate limited when scraping a particular service, and stuff like this. There's also, again, Wired, this article last year mentions that IXL, one of these learning platforms, terminated its relationship with Alpha School for violating its terms of service. It wasn't clear why; the article doesn't say. My piece found that albert.io, which is another company that offers a similar service, terminated them also for violating their terms of service. So there's a pattern of them kind of going to an online platform, scraping it as much as they can, the company finding out about it and shutting them down and sending them a cease and desist or something like this.
Emanuel:And they're doing it to online materials, and they're also just, like, doing it to textbooks, just whatever they can get their hands on.
Joseph:Yeah, gotcha. So they're very much training their own systems rather than just sort of shoving it into ChatGPT, even though they seemingly do some of that as well. And then the last part is sort of the surveillance of the children. You mentioned that Wired did an investigation touching on that a fair bit, and this looks at it as well. What did you find about the surveillance of these children?
Joseph:Like what is being monitored and how is that data being stored?
Emanuel:Everything is being monitored. Again, Joe Liemandt, the principal of the school, has this company called Crossover. It tracks mouse movements, it tracks what websites you're going to, it tracks what apps you're using, how much time you're spending on each task. All of that is also true at Alpha School. They do this, they say, in order to improve their classes, but they also do it in order to, like, I want to use a word that's not so harsh, but, like, you know, compliance, right?
Emanuel:It's like to enforce their learning, to make sure that students are actually engaging with the materials. And one of the ways they do that is they record videos of the students, so they can see if the student is looking at something else. Are they looking at their phone? Are they getting up from their desk to do an entirely different thing? And one of the first things I saw when I started reporting this story, which I thought was really shocking, was just a spreadsheet on Google Docs that had the name of each student, what grade they were in, what classes they were taking, and each of those names linked out to a video recording of a tutoring session they did on Zoom.
Emanuel:And that was also stored on a completely open Google Drive. Like, when you share files on Google Drive, you can have them completely locked down. You can share them with specific emails. You can share them only with people within your organization, or you can share them with anyone who has a link, right? That's the most permissive permission: anyone who has the link can view this.
Emanuel:And that's how these files were shared, which is just like really irresponsible and possibly dangerous.
Joseph:Yeah. Maybe there are some legal ramifications there, which I didn't really think about while reading, but now just speaking out loud, maybe there's something there, but I obviously have to look into it. So you do all of that. And the piece is very nuanced, it's very fair, it's very balanced, but it does highlight all of that really important stuff you just said. What was your main takeaway from reading all of this material and speaking to these employees?
Joseph:Is it something like AlphaSchool's approach is completely, fundamentally flawed, or is it more like the marketing doesn't match the reality? Like, what's your main takeaway after digesting all of that material?
Emanuel:So I've thought about this a lot, and I think the findings in the article are pretty damning. Like, I think presenting students with faulty AI generated questions is really bad. I'm just thinking of myself as a student in high school having to go through that. That sounds awful. The constant monitoring sounds extremely stressful.
Emanuel:The scraping obviously is not fair to all these other organizations. So all of that is really bad. At the same time, I think sometimes we'll publish an article, and I as a reporter at the end of that process, and I think readers at the end of the article, kind of walk away and say, this is completely irredeemable, right? Like, this whole thing from the ground up is really messed up and should not exist, right? I'm thinking about, like, websites dedicated to nonconsensual content or something like that.
Emanuel:Right? This is not that. I have a two year old, and, like, the state of education in this country is such that even when he's this young, we're already thinking about high school and, like, what school is he going to go to? Is it going to be public? Is it going to be private?
Emanuel:Is it going to be some charter school, a magnet school? It's like parents are very freaked out about education for their kids. All the research shows that it's like such a huge determining factor for the rest of your life. So I understand why parents would gravitate towards this school, which a, does produce good results. Like, no one I talked to disputes the claim that Alpha School students are in the top 2% nationally in terms of their, you know, test scores.
Emanuel:And if you look at that as a parent, I mean, that's very appealing. If you can afford that, sure. That's great. I think the other core philosophy, this two hour learning philosophy, also sounds extremely appealing. Like, if you can condense all the boxes you have to check for your education, for the SAT, for the standardized tests, for all this stuff, into two hours, and then free up the rest of the day for students to, like, pursue their passions and find themselves and motivate themselves.
Emanuel:Right? Like, such a huge part of education is motivation and just finding something that you actually want to do. Again, very appealing idea. I say this with the caveat that, like, trends come and go in education, and people get really bought into an idea, and then we find out that it's bullshit. And also with the caveat that, it's like, I'm not there yet, right?
Emanuel:It's like, I don't know what it's like to have a kid at that age and deal with these questions. But these are all valid concerns, and this is, like, a valid thing to pursue. And the employees that I talked to think of this as well and are very passionate about education. That's why they wanted to talk about this with me. It's just that at some point, like many other companies, they pivoted to this idea that AI is a magic bullet for all of these problems, and that, at least so far, is patently not true.
Emanuel:And they know it, because they're talking about it internally at the company, knowing that it's not working yet. And they think they can get there, but right now, students are actively being presented with faulty AI generated questions.
Joseph:Yeah. I think that's a very, very good way to put it. Alright. We'll leave that there. Please go read the full article linked in the show notes.
Joseph:When we come back after the break, we're gonna talk about one of Sam's stories about, well, I don't really know how to summarize it except with the headline. We're just going to wait until we come back after this. All right. And we are back. Another headline with a quote in it, actually.
Joseph:The headline of this one is, quote, the most dejected I've ever felt, end quote. Harassers made nude AI images of her, then started an OnlyFans. There is a ton going on in the headline, Sam. So take us to the beginning, sort of chronologically. In January, Kylie Brewer, she is a content creator, started to receive a bunch of very strange, ominous messages.
Joseph:What did they say?
Sam:Yeah. So she started getting DMs on TikTok and I think on Instagram, and just, like, various places, where people were messaging her and asking if she has an OnlyFans account, asking if she had set one up and didn't realize it, or, like, are there two accounts and one is pretending to be her, you know, saying, I think someone has made an OnlyFans in your name. Here's a link. Some of them were men messaging her asking to get access to an OnlyFans account in her name and asking, you know, why they couldn't subscribe or, like, if she could send them content or whatever. And then a close friend of hers messaged her and said, I hate to tell you this, but there's pictures of you going around and they don't look real.
Sam:They might be deepfakes or AI, you know, either way. I'm sorry. So she started getting these messages. Obviously, very weird, very concerning. And she wasn't really sure what was going on at first, but obviously someone had been making AI images of her.
Joseph:Yeah. Yeah. Very confusing messages all around. She then figures out sort of what has actually happened. Can you lay out for the listeners what had actually happened, once she figured that out?
Joseph:What were these people doing and sort of what was the connection to OnlyFans?
Sam:So this all started in December, early January. And if you remember what was going on at the time, Grok was in the middle of making all these images of women in clear bikinis, you know, doing all these, like, sexual poses that people were asking it to generate of women without their consent on X, and, from elsewhere, just posting those images and saying, hey, can you take her clothes off in this picture? And then Grok would return an image.
Sam:There was something like 3,000,000 of these sorts of images going around at that point. There were, I think, let's see, 23,000 that appeared to depict children, according to the Center for Countering Digital Hate. So it was a huge problem at the time. This was just kind of in the thick of it, with the Grok issue, with the nudifying images. So it wasn't that hard to kind of trace back, like, okay, that's what's going on here.
Sam:People are making images of her using Grok and posting them online, and then also taking it a step further and creating OnlyFans accounts with that content. So using those images, putting them behind a paywall, and then trying to, like, scam people off of it, and also harass her in the process. So, yeah, I mean, she's a very, like, public person. She's a content creator herself. So she's used to harassment, especially, like, sexualized and gendered harassment online, for many years.
Sam:Definitely draws the ire of the manosphere, which would encompass, like, incels, red pill, right wing influencers, and just people who are in that sort of cesspool. So this wasn't, like, shocking to her, that she was being targeted, but it was definitely, like, highly triggering, and also the OnlyFans thing was just another step in a direction that she had never seen before in her time online.
Joseph:Yeah. I think it's really rare to see this whole, sorry to put it so coldly, but, like, this supply chain basically, or this sort of pipeline. Yes, we all know, especially because it's all been done publicly, as you say, on Grok, that people are using AI nudify apps. We've reported a ton about how they're being used in schools and elsewhere. And we also know, definitely through Jason's reporting, that some people try to scam by running OnlyFans or other sort of almost fake influencer accounts as well, where this person doesn't really exist, but they're presenting it as if they do. This is a weird combination of the two that I feel like must be happening, but you don't really get the chance to see it, maybe because we just don't, or the people don't want to talk about it, which is totally valid.
Joseph:Here, it happened to someone who is already public facing as you say, they're already a content creator producing the sort of content that may attract harassment and trolls and that sort of thing. And then they're able to actually lay it out as well. So Kylie then does go to TikTok to talk about what is happening, and we'll just play a section of that now so people can hear it for themselves.
Speaker 4:This is kind of embarrassing to talk about, but I recently found out someone has been using AI to remove my clothing, and they've been posting these nude photos of me online and on an OnlyFans account. I don't have OnlyFans. I've never consensually posted any nude photographs. I don't have any nude photos out there. I've been very careful.
Speaker 4:I have no tapes. Nothing. So to realize that someone is not only creating these images without my consent, but also profiting off of them and posting them on a website that I have nothing to do with, is sickening. I truly believe this is an attempt to intimidate and silence me because of how critical I've been of men, specifically white men, as well as racists. And they have already attempted multiple doxxing campaigns.
Speaker 4:They found my phone number. I had to change it. Although it's not a surprise, it is still distressing. And I wanted to let you guys know this isn't real, but for any female content creator, for anyone who has their Instagram public, this could happen to you. And it probably will happen to you.
Speaker 4:You're already seeing the profoundly negative impact of AI, and that is why we absolutely need regulation. I'm especially unsettled and terrified that this software is going to be used on children. If you could spread the word and interact, I'd really appreciate it, especially because this video keeps getting taken down.
Joseph:So she posts that TikTok. That's when you learn about it. Right? And then you message her, Sam. What was that conversation like?
Joseph:What does she tell you?
Sam:I mean, she explained to me a lot of the ways that it feels to be a target of this sort of harassment. No two people are the same when they deal with this sort of targeted abuse imagery, but there are so many common themes that come up every single time this happens. It's like, first of all, you're getting messaged by people who you don't know in a lot of cases, or, like, acquaintances or people you haven't talked to in a really long time, past schoolmates, whatever it is. People messaging you saying, like, hey, heads up, just wanna let you know something weird is going on, which is just unsettling in so many ways.
Sam:Like, there are layers to that. First of all, people have seen these images, so they must be spreading pretty widely. So lots of people have seen them, and they've seen them before you even had a chance to see them, in a lot of cases. Often, this is how people find out that they're even in abuse imagery, deepfakes or not, AI or not. This is how they learn that they are posted on porn sites without their knowledge or consent, things like that.
Sam:So as people message them and say, hey, heads up, and it's not that those people are doing anything wrong by saying that, but you're already kind of behind the ball on something you didn't even know existed yet. You're already on the back foot as far as getting this stuff under control, because it's spread so widely that people you don't even know have found it. So she talked about that, and she talks about that quite a bit in the piece. But she also mentioned, and this is something that I've heard from lots of different people who've been targeted by deepfakes especially, that before it happens to you, you don't really realize how it feels. And also the feeling that people assume that because it's not real, it's not damaging, or it's not something that you should be that stressed about.
Joseph:Which is the sort of opinion someone is only gonna have if they haven't experienced it themselves.
Sam:Right. Yeah. Exactly. And so many people who have then experienced it say, oh, I didn't even realize. You know, before, I was like, that's really horrible, but experiencing it just kinda drives home, like, oh, this is actually really traumatizing. And she made the point that, and I don't know if worse is the right word.
Sam:It's just, like, different and weirder, and it kind of adds to the uncanny valley effect of it, that you don't have control over these images because they're AI. So anyone can be making any images of you at any time. It's not just that there's existing content that was shot and it's being posted without your consent. It's that you don't really have any control over anything that happens anymore as far as your imagery. So, yeah, I mean, that's definitely, I think, a valid point for sure.
Sam:And she also mentioned, and this is something that QTCinderella, the Twitch streamer, talked to me about years ago when she was targeted with deepfakes, that as a sexual violence survivor, it just compounds all of that a hundredfold. It makes all these things resurface, or it can, again, everybody's different, but it can make a lot of these old traumas resurface in a way that you don't even realize they were still there. Katie Sondrell talked about resurfacing, like, disordered eating and body image issues and things like that after she had seen AI images of herself in, like, sexual scenarios. And Kylie talked about something similar. She's like, you know, this lack of control feeling is something that really brings up a lot of traumatic things from my past.
Sam:So I think, again, everyone's different, everyone experiences something different, but it's just such a telling thing that these points hit people in the same way, or in a similar way, every single time. The damage is very definable in all of these cases.
Joseph:Yeah. And we've only really been able to figure that out recently, because unfortunately it takes a minute for there to be a lot of victims, and then for journalists like yourself to speak to a bunch of victims across time and see, oh, wait, there are similarities in the responses here from people, even while the technology is barreling forward, getting more and more powerful, accurate, sophisticated, however you want to characterize it, easier to use, all of that sort of thing. And then of course you get more and more of these reactions as well. On that lack of control: yes, lack of control of being able to make the images, and crucially the distribution, because somebody has made them and now they've made an OnlyFans, and it's not her OnlyFans, she didn't create it, she can't immediately shut it down, so she doesn't control that. What did OnlyFans do here exactly?
Sam:So by the time she checked these DMs and clicked on the links that people sent her, the account was down. OnlyFans doesn't allow impersonation accounts or deepfakes. It's in their terms. So I'm sure her fans or followers or people who knew her probably reported the account, and it came down pretty quickly, it seems. But it's still a huge problem that it was up in the first place.
Sam:It's not good that it existed ever. I think this is something that OnlyFans needs to figure out quickly, how people can open accounts and then change the names or profile pictures or the content of the account without oversight, apparently. I don't know what's going on there. I ask OnlyFans so many questions for every single story I do, and they never ever reply to me, or they haven't in years. So I don't know what happened in this case.
Joseph:They just don't respond now.
Sam:They haven't replied in a really long time to anything I sent them, so it just kinda goes into a void. They used to they used to have, like, a PR team, and they used to reply, but, I don't know. Not anymore. So, yeah, we don't really know in this specific case, and they probably wouldn't comment on a specific case anyway, but it's it's not good that it was ever online in the first place, obviously.
Joseph:Well, and there are other platforms as well. I won't name them because I don't know their exact policies off the top of my head, and there's no evidence they were being used in this case. But there are other platforms like OnlyFans where somebody might just go and make an account as well, if OnlyFans is shutting it down.
Sam:Yeah. And this is something that we see people doing all the time with AI-generated sexual imagery, spinning it out and trying to profit off of it. I mean, we wrote that piece about the guy with the Ray-Bans walking into massage parlors and then trying to sell, or, like, you know, who knows what he was actually selling, but selling that content on subscription sites. And people were doing it on Patreon as well, as far as creating AI-generated sexualized images and putting them behind a paywall, which sucks. At that point you're monetizing the abuse of someone else under their name.
Sam:It's just such a fucked up ecosystem. I expect better from OnlyFans as far as preventing this stuff, but there are other channels that this happens in as well.
Joseph:Yeah. As you say, at least they closed it down, seemingly before she could access it and that sort of thing, but it still existed in the first place. To wrap up, what are the legal ramifications here, if any, for somebody who took her likeness, generated those images, then created a fraudulent OnlyFans? Are there potential legal ramifications? I'm not expecting anything to happen. I'm just curious about the legality.
Sam:Yeah. I mean, a lot of it is the way that it stood for a really long time. Lots of states now have deepfake laws on the books. Like, if you're in a deepfake, you have legal recourse. But in a lot of states, quote unquote revenge porn statutes have existed for a long time, and they're just hard to enforce.
Sam:You're dealing with the Internet. You're dealing with anonymous users a lot of the time. So it's hard to find the person to sue them in the first place, but it's possible, and that is something that victims have tried. It's also hard to go to the police about this stuff, because police don't really know what's going on with AI still, even though it's, you know, 2026 at this point. And going to the police about gendered violence and sexual violence in general is really hard and often retraumatizing.
Sam:We have the Take It Down Act, which is the first federal-level deepfakes law, but it has a lot of its own problems. It created a forty-eight-hour turnaround time for platforms to get deepfakes taken down when they're reported. But, you know, Trump has said, yeah, I'm gonna use this law to take down the stuff that is mean to me. And Melania Trump and Ted Cruz have been the champions of that law. So we don't really know what's gonna play out with that yet, but it's relatively new.
Sam:And AOC and Paris Hilton are pushing another one. They did a press conference a couple weeks ago about that law. It's called the Defiance Act. I love the acronyms that they think of for these things.
Joseph:Somebody in the congressional office is tasked with coming up with those, and I love it every time. Yeah.
Sam:And you know that they know that they ate with this one. It's the Disrupt Explicit Forged Images and Nonconsensual Edits Act. Like, okay. That just passed the Senate. It's gonna go to the House, but it would open up an avenue for people who've been the target of deepfakes to sue the people making the content, which is different from some of the other laws, which allow targets to sue or go after platforms.
Sam:So I think that's a decent step. All of these are, like, decent steps, but none of them encompass the entire issue yet without creating more censorship and more problems for people producing consensual adult content in the process. So that's where that's at. We'll be following that and see where it goes. But, yeah, it's hard legally. It's still incredibly hard.
Joseph:Yeah. We'll be following it. And depending on the case, you can even fall back to, like, an old cyberstalking charge. I don't think that would apply here, but you sometimes see that, right? You can fall back on those sorts of things. We'll leave that there.
Joseph:If you are listening to the free version of the podcast, I'll now play us out. But if you are a paying four zero four media subscriber, we're gonna talk about how cops are buying Geospy, an AI that can, you know, allegedly geolocate photos in seconds. You can subscribe and gain access to that content at 404media.co. We'll be right back after this.
Emanuel:Okay. Our next story is from Joe. The headline is: Cops are buying Geospy, an AI that geolocates photos in seconds. Joe, we talked about Geospy on the podcast at least once, I think, but refresh our memory. What is Geospy?
Joseph:Yeah. So it's this tool that came out, you know, a couple of years, a year and a half ago at this point, and it's run by a company called Graylark Technologies out of Boston. And what it aims to do, rather than focusing on text or LLMs, is much more focused on geography, on photos, on indicators in photos as well. And the idea is that a law enforcement officer will upload a photo, say, of a car at the side of the road or something, and the AI will then look at clues in the soil, in the hill in the background, maybe some other sort of characteristic as well, like a building with distinctive architecture, something like that, and it will geolocate that photo, at least to the city or the area of the world, but in some cases allegedly a bit more specific than that. And we actually tested that out a while ago at this point on a couple of photos, I think in San Francisco, and it did geolocate those to the correct cross streets, the correct part of the city.
Joseph:We could do that because back then Geospy was a publicly available tool. They were planning to sell to law enforcement if they weren't already, but anybody could just go and use Geospy. When we contacted the company for that story, they closed off public access like a day later. So then it very much became a law enforcement only tool, but it's based on millions of images scraped from the internet. I presume it's actually much more than that.
Joseph:It just says millions in some marketing material I found online, but it allows any random cop to basically try to win at GeoGuessr, which is that game where it'll show you a photo and you have to, as quickly as possible, say, oh, that's from South Korea or wherever. Have you ever played GeoGuessr, Emmanuel?
Emanuel:I have. I was really into it. I also, whenever we talk about Geospy, immediately think about GeoGuessr, because, like, my one-line pitch for it would be: it's like a Rainbolt AI, Rainbolt being the GeoGuessr savant who immediately, with pinpoint accuracy, finds every image he sees on GeoGuessr.
Joseph:I don't think it's as good as him. I would say I totally agree with you, but I feel like that's something that is my fault; it actually gets kind of lost in the coverage, because I'm quite amazed by this tool. I'm obviously not into AI hype, but when I tested out the tool and saw it, I was like, this is nuts that cops could be able to do this. But it's not as good as, like, a Rainbolt. Yeah.
Emanuel:Yeah. And he's made content with it. He has tested it in his videos and stuff, and has shown that he's better. Yeah, and I'm also really impressed. It's like one of those oh-shit moments with AI, when you see something actually work.
Emanuel:Right. So we covered Geospy a while ago. We see it, we immediately wonder, okay, who's the first organization that is actually going to use it, or that we can prove uses it? And I think that is the reason we jumped back into the story, right? Like, what agencies did you see are using it or have used Geospy?
Joseph:Yeah. This is the first confirmed set of purchases of Geospy. Obviously, we, you know, assumed that some cop somewhere has probably bought it, just based on the amount of marketing that Geospy does, the amount of posting online, all of that sort of thing. But we found out the Miami-Dade Sheriff's Office, MDSO, they bought it, and the LAPD, obviously the Los Angeles Police Department, they have a license as well. We did know the LAPD was interested before, because we got some emails and they were talking about the tool.
Joseph:I think they said, oh, this is interesting, blah, blah, blah. They've never responded to my requests for comment on it, which is kind of weird, because it's not even, like, spyware. It's not a stingray. It's not sophisticated surveillance technology. It's an OSINT tool, a powerful one it seems, but I don't know why they wouldn't acknowledge it, you know?
Joseph:Anyway, whatever, they never responded. But those are the two agencies that now we know have bought it.
Emanuel:And how did we know that? What did you find that actually shows this?
Joseph:Yeah, so it was two gigabytes of emails from MDSO, and we got those through public records requests. It cost around $700, I think. So again, thank you to you as a paying subscriber for giving us money, so we don't have to have a horrible, agonizing discussion over whether we buy these very important emails or not. We can just go and do it, and we've done that with court transcripts, we're doing it with emails like this, etcetera.
Joseph:But we got those. And what happens every so often, every few months, is I'll message either of you two, because you're both PC users, and be like, can you open this file format which only works with Microsoft Outlook? And every single time, I forget that you can't do it in bulk or whatever, or it's a massive pain in the ass, but I still ask the question. And then I finally realized, or Emmanuel just told me, I think: well, just download Outlook on your Mac, and then it actually works. Oh, well, that solved that problem.
Sam:But I'm in the Mac mindset, man. I can't. The Mac mind cannot comprehend.
Joseph:It can't. Honestly, I'm googling, like, convert file format to work on Mac, and you get these sketchy websites with all the ads and stuff. And it's like, just download Outlook. It's fine.
Joseph:I'm still in that mindset where, wait, you can get Microsoft Word on a Mac? Woah, that's crazy. They do that now? Nuts. Anyway, download that, I'm going through.
Joseph:And initially the emails are just kind of interesting. It's like, Oh yeah, they bought it. Whatever. That's pretty obvious. There's a purchase order there, a quote.
Joseph:I then go into a couple of subfolders in the two gigabytes of emails, because I'm like, there's got to be more than this. I just read, like, 20 emails; that can't be two gigabytes. I then go into the deleted items folder, and there's way, way more in the deleted emails, which obviously are still preserved. And then that's where I find the interesting stuff, like the mentions of the LAPD in a presentation, I think, and other material as well. But yeah, it was a pretty big, interesting cache of emails we learned this from.
Emanuel:What are the most interesting things that you found in there? Like, what did we learn about the company?
Joseph:I mean, the price is pretty interesting. It's something like 15, no, it's like 5k for a license, and that's, like, an annual license; it's software as a service. They got two of those, that's 10k, and that gives you about 350 searches. You can then buy many more searches as well, and I think they actually bought a ton.
Joseph:In all, they spent about $85,000 on Geospy. But I think the two interesting things are, one, that Graylark, the company behind Geospy, offered, and it seems they probably did, or rather they sold this capability, they offered the MDSO a custom model built for their neighborhood. So rather than just the global Geospy that all the cops can use, it's: we will train a special model specifically on the MDSO's jurisdiction, so then you'll be able to even better geolocate photos in that area. And they say it's accurate to one meter, and elsewhere they say accurate to one to five meters.
Joseph:I don't know about that, because we can't really test that reliably, but that is their claim. I just find it very interesting that it seems the marketing strategy is to have this very big, glitzy product, Geospy, with all of these tweets that occasionally go viral, and the founder often posting videos of, like, look how powerful this is. Then they get a customer, or go to a customer, and it seems like that's when the actual, really hard work, potentially, of making a custom model kicks in. And maybe that's when the tool gets good. I don't know, again, but I just found that an interesting business strategy.
Joseph:And then the other one is that it was a cybercrime department inside MDSO that bought access to Geospy. And when I contacted them for comment, they said, yeah, we're using it for child exploitation, that sort of stuff. But there's a form in there, just a Word document, that cops can fill out to request that the cybercrime division or bureau does a lookup for them. And you go through the emails, and there's interest from the robbery department, there's interest from other parts of the police department as well. So it's not just like, oh, well, the cybercrime people bought it and only they're going to use it.
Joseph:No. They're actually offering it to the rest of the organization as well. I thought that was pretty interesting.
Emanuel:Yeah. Just an observation about the custom model offering, as someone who is mired in the world of AI business people: if they don't offer that, they're kind of dead on arrival as a company, because surely what they did is scrape some public database, probably Google Street View or something, in order to build this model, and that's something anyone can do and probably has already done. And there's probably no problem in, like, offering this product for free, if it's not already out there. If you know of a free version, please let us know. You have to offer something exclusive that only you can do, and maybe that's the custom model, right?
Emanuel:It's like, if you go to each municipality and say, we can make something more accurate for you if you want, then maybe you have a business. But if you're just offering this thing, I don't think it's going to last. Yeah. So I guess final question: as with all tech, and particularly AI, that various law enforcement agencies have bought, surely you saw that the Miami-Dade Sheriff's Office got this, and then they drove the crime rate to zero, and they arrested everybody.
Joseph:Yeah. That's exactly what happened. Crime has been solved in Miami-Dade County because of an AI-powered OSINT tool. No. As I said, they sent a statement, which is interesting because the LAPD is just continuing to ignore it, but they acknowledged the purchase.
Joseph:Again, they said we're using it for child abuse investigations, that sort of thing, even though the emails clearly show, it's not ambiguous at all, that they are trying to share it with other parts of the agency. But they said some pretty interesting stuff, such as that it hasn't led to any significant investigative breakthroughs, which is not something that Geospy would really like to highlight. When you go to the Geospy website, they have a blog post where it's like, look, this is how cops using our tool managed to find where a stolen vehicle was, in minutes or seconds or something like that. It doesn't mention the police department, it doesn't name it. My hunch is Las Vegas. I have another FOIA out with them.
Joseph:That's just a guess, though, just a hunch. But when you get a statement from the police who have bought the tool, and it's not marketing, they say no, it hasn't led to any significant investigative breakthroughs. And the other one is it hasn't led to any arrests. Now, of course, on one side, maybe that's good, that largely untested AI has not led to any arrests, but I think that's pretty telling on at least the utility of the technology right now.
Joseph:Of course, there will absolutely be use cases where, I don't know, there's a child abuse investigation and they upload a photo and it does manage to locate where somebody is. I can 100% see that happening, and maybe it has, and it just wasn't MDSO or the LAPD or whatever. I should say that Geospy is also offering, like, a tool for interiors, which I'm really fascinated by, where you upload a photo and it will tell you, that's a house probably from this city or this part of the city, or this hotel, or something like that. And I know through reporting years ago that, like, Europol are doing that all the time to find child abusers. They are identifying the hotels in the background, or rather the interiors. So I can definitely see a use case for that. And Geospy is now, it seems, also offering it to the insurance industry, where you go on the website and the button doesn't actually work, but under industries, it says insurance.
Joseph:So it's a police tool now, but who knows in a few months.
Emanuel:Alright. Shall we leave that there?
Joseph:Yeah. We'll leave that there. And with that, I will widen my Google Doc so I can see it, and I'll play us out. As a reminder, four zero four Media is journalist founded and supported by subscribers. If you do wish to subscribe to four zero four Media and directly support our work, please go to 404media.co.
Joseph:You'll get unlimited access to our articles and an ad-free version of this podcast. You'll also get to listen to the subscribers-only section, where we talk about a bonus story each week. This podcast is made in partnership with Kaleidoscope and Alyssa Midcalf. Another way to support us is by leaving a five star rating and review for the podcast. That stuff really does help us out.
Joseph:This has been four zero four Media. We'll see you again next week.