from 404 Media
Hello, and welcome to the 404 Media Podcast, where we bring you unparalleled access to hidden worlds both online and IRL. 404 Media is a journalist-founded company and needs your support. To subscribe, go to 404media.co. As well as bonus content every single week, subscribers also get access to additional episodes where we respond to the best comments. Gain access to that content at 404media.co.
Joseph:I'm your host, Joseph, in an undisclosed different location. And with me are 404 Media cofounders Sam Cole.
Sam:Hey. You're in, a panogram
Joseph:At the same time. Feels like it. Yes. Or the orange and white. And Emanuel Maiberg.
Emanuel:Hello.
Joseph:Yes. My camera setup is a little bit different. It should be back to normal next week. I'm giving a talk about AI fraud.
Joseph:Hey, if you're a company listening to this and you wanna talk about AI fraud, email us, and maybe we'll do it. Speaking of AI, let's go straight to the first story, which is Sam's. You've been working on this for a bit. Frankly, there was a big, big need for this, as I think we'll get into. The headline is "How to Talk to Someone Experiencing AI Psychosis."
Joseph:We'll get into AI psychosis in a minute, Sam, but this piece opens with a story between two friends. What happened between those friends?
Sam:Yeah. So these two friends, David and Michael is what we call them in the piece, but their names were changed for the purposes of the story, have been friends for a long time, and they're the kind of friends who will talk about anything. Like, they'll get into, like, weird esoteric stuff. When they're just, like, shooting the shit, they'll talk about religion with no problem. They'll talk about, like, spirituality, theories about, like, physics.
Sam:So they have this friendship that's very open and honest with each other. And one day, Michael posted on social media, and he said, hey, is anyone interested in taking a look at some code that I've been writing? I am working on this project, and I'm not really sure what I'm doing yet. So if anyone wants to look, they can take a look.
Sam:And then David says, yeah. I'll take a look. I know a little bit of this programming language. We're being vague about the details because we don't wanna out anybody in the story, but David looks at the project that Michael is working on, and it's gibberish. It's nonsense.
Sam:Like, the code doesn't work. None of it runs when he tries to run it. It's just, like, broken code in between these weird, like, theories and formulas having to do with physics and quantum physics and quantum mechanics and, like, entanglement and stuff like that. So
Joseph:a lot of words.
Sam:A strange situation. Yeah. And it's thousands of pages of conversations between Michael and ChatGPT. And in it, he's kind of working this out. At first, it's kind of like, oh, you're working on a coding project, and then it becomes like, oh, now you're talking about, like, unlocking the secrets of physics in a way that the field doesn't know about yet, that sort of thing.
Sam:And David's like, woah. Holy shit. What is this?
Joseph:So there's some sort of AI psychosis going on there. I feel like it's pretty easy and comfortable to say that. You get similar emails, and I think Emanuel does as well.
Emanuel:Yeah, I get it.
Joseph:You know, occasionally. Yeah, yeah, we all do. And it will be people saying stuff like, hey, I have made a breakthrough through Gemini or ChatGPT. Look at what I made the chatbot admit. What are some of the examples you get, Sam?
Joseph:And, obviously, why is that fundamentally flawed?
Sam:Yeah. So this is kind of the inspiration for this story, because I don't really ever know what to say, if anything, back to the people who email us or Signal message us and say, I have a tip for you. It's very important that we get the word out about this. Gemini has revealed to me that there's this big moment coming in politics. It's always some kind of, like, there's something coming that everyone needs to know about. There's this big secret that no one knows yet, this big discovery, which is something that's very common among people who are experiencing delusions that are related to talking to AI. Which, also, just to preface: we're gonna use lots of different phrases in this conversation, I'm sure, that stand in for AI psychosis, because that's not a real diagnosis yet.
Sam:And I say yet because maybe someday it will be, maybe it never will be. We don't know yet. But AI psychosis is not yet a separate diagnosis from regular psychiatric episodes, psychosis, delusions, things like that. It's all kind of amorphous right now. But we can say, you know
Joseph:We'll get into it because, I mean, we'll discuss this in a minute, but there's these different types. But yes. But, yeah, carry on.
Sam:Yeah. Yeah. It's like it's there's there's different ways that people experience this. So, anyway, that's kind of the the caveat to all of this. But yeah.
Sam:So I I get these emails. You guys get these emails, and it's a lot of them are Gemini, honestly, which is interesting to me. A ton of them are Gemini. I wonder if that's just because Gemini is being shoehorned into everything we do. You know, it's it's there.
Sam:It's available. And, like, otherwise, maybe you're not going to it directly to talk about something, but maybe Gemini is there, constantly in your face, and you're batting it away. But, yeah, we get these emails, and we don't really know what to do with them. And I was like, okay, if I did wanna engage in a conversation with this person and figure out what's going on with them, and try to help them even, how would I even go about that?
Sam:What resources are out there to answer that sort of question? And I don't consider this article to be, like, the prescription for, quote, unquote, AI psychosis, but it seemed like not a lot of other people were having a conversation of, like, how do I identify it, and what do I say to my loved ones, for those of us who aren't already running an existing project like a support line or something like that. So that was the inspiration for the story.
Joseph:Yeah. That makes sense. You get those emails. Emanuel, just to ask you before we get into more specifics. When somebody emails us and they say, hey.
Joseph:I've made this new discovery because of ChatGPT, or as Sam said, it often is Gemini. Why is that an indication of AI psychosis? There's a spectrum there. On the lower end, it could be people just believing it way too much. On the upper end, there's very, very serious stuff.
Joseph:But when somebody says, I found something new with ChatGPT, why is that wrong? Even though it looks like it's new information, why is that a misunderstanding of what the chatbot is doing?
Emanuel:Well, there's a few reasons. One is that we can have philosophical debates about consciousness and whether AI can achieve something like that, but I think even at the most booster-y end of the spectrum of the AI conversation, nobody believes that there is currently anything even remotely similar to actual consciousness, and that is usually what people talk to us about. And you can just look at what they say. And I do, to be fair. It's not like I read, you know, a subject line and throw it away.
Emanuel:I sometimes, even out of morbid curiosity, kind of dig in and see like, well, why is this person thinking this? And the more you dig in, the more you see it's kind of like a circular, nonsensical argument that they're making with no evidence, and that is just something you see in a lot of like delusion. And then there is also, unfortunately, like in theory, you can imagine that something like this is possible. Like maybe one day this does happen, and this is how we find out about it. Like somebody comes to a reporter and says, I found something, you know, resembling consciousness on ChatGPT or whatever, even though I'm skeptical of that.
Emanuel:Before Generative AI, what we used to see and we had people show up at the office sometimes to talk about this with us is they'll come to us and be like, hey, I'm being followed by the government. And, that is certainly something that can happen and has happened, but unfortunately, the majority of the cases of people who say that is happening to them are just displaying signs of like mania and paranoia, and that is just a symptom of it and going to report it is just like a symptom of it. And I think this is similar, like a symptom of this new kind of delusion is trying to reach out to the press and reveal the truth about, you know, something crazy you found about AI.
Joseph:Yeah. So there's a story between these two friends. You're getting all of these emails, Sam. As you say, there's like, what do we even say to a person who is sort of bringing up these topics and discuss and talking about AI chatbots this way. So what what did you do next?
Joseph:Who did you reach out to broadly?
Sam:So my first call was to John Torous, who is, let me get his title right, the director of the digital psychiatry division at Beth Israel Deaconess Medical Center. It's a Harvard-affiliated medical center. He and I talked before about the story that I wrote about Meta's AI therapists and how Meta chatbots in its AI Studio, which is a user-generated chatbot platform, were creating therapy chatbots, and then those chatbots were saying, yes, I'm a licensed therapist. Here's my license.
Sam:They're just making shit up and saying, like, I have my credentials, and I went to school at this place, and I'm licensed in these states. So I talked to him, and I knew he was already looking at this pretty closely. And then I also talked to Stephan Taylor, who is the head of the psychiatry department at the University of Michigan. And between those two, you know, I knew John was gonna have a pretty level-headed approach to AI in general. He's talked quite a bit about how we don't really know for sure what's going on yet, but, you know, the medical community is keeping an eye on this, and it might take time.
Sam:And then Stephan Taylor had some really interesting thoughts about the way that chatbots are entering this era of surveillance, like you mentioned, this moment where you're already actually being surveilled pretty heavily by the government, by the state, by other people. So in this era that we're in, paranoia isn't an unfounded feeling. But these chatbots can take something like suspicion, or even just curiosity about a topic, like, I wonder if my Wi-Fi is secure, which is something I talked to one of Meta's chatbots about, and escalate it very quickly into, your Wi-Fi is being tapped by the CIA, which is something a chatbot said to me. It was like, there's someone 500 feet from your house right now watching what you're doing on your private Wi-Fi network. I'm gonna scan your networks and see what's going on.
Sam:It's all made up, and it's all fake, just like the AI therapist. I also talked to Etienne Brisson, who is a 26-year-old who lives in Quebec who started The Human Line, which is basically a support group for people whose loved ones are experiencing AI delusions. And he had some really interesting things to say as someone who has experienced this in his own life, but also is hearing from hundreds of other people who are experiencing it in their lives at the same time, and trying to offer support for those people. So, sorry, that was all over the place.
Joseph:Yeah. No, that's okay. So, yeah, they were getting a lot of reports of, hey, yes, I've suffered from this, or I believe I have, or people I know have, that sort of thing. And then, of course, the other part of the conversation was, well, how do you actually speak to somebody who is exhibiting these signs? Actually, the two friends, or rather the one speaking to the one experiencing AI psychosis, figured this out themselves as well.
Joseph:But what were you told? Like, what's potentially a good way to talk to somebody that may be doing this or may be experiencing this?
Sam:So there unfortunately isn't, like, a five-step process to get someone out of delusional thinking. And something that every expert I talked to caveated this with is, if someone is saying things like, I'm gonna go to the top of a parking garage and see if I can fly because the chatbot told me to, you call 911. You call a professional. You call a medical professional. You get emergency help activated at that point.
Sam:What we're talking about is before someone gets to a point where they're a danger to themselves or others, where they're just kind of floating these, like, really strange ideas at you, and you won't really know what they're talking about. And you're kind of saying, you know, how do I bring this person back to reality and critical thinking? So the consensus, and it's a frustrating one, is you need to listen to them and hear them out in a way that is not immediately like, no. That's wrong. That's crazy.
Sam:And I think this goes for a lot of different types of communication and is probably a good communication skill in general: listening to someone without judgment and being able to empathize with what they're saying. This is something that David and Michael went through when they were talking, Michael being the one who was experiencing the strange beliefs brought on by ChatGPT. David didn't immediately say, I think that's wrong, and you should stop using ChatGPT because it's dumb, and you're dumb for using it. He was very understanding and patient, and knew that alienating his friend would drive him further into the arms of the chatbot, so to speak. So he was like, you know, I'm gonna talk to him about the things that he is interested in, because I know that he has an open mind about things like anti-authoritarianism, and appeal to that side of it. Like, you know, these systems are built by people who are invested in you using them and want you to stay hooked on them. And saying things like, it seems like you have a curiosity about this topic.
Sam:It seems like you're really interested in learning more about this. So why don't we look at some other resources? Why don't we check other resources against what you're hearing from ChatGPT? There was an acronym in the story that I learned about from Etienne Brisson of The Human Line, the LEAP method, which stands for listen, empathize, agree, partner. And this, again, is a great communication tool anytime you're talking to someone, especially when they're experiencing a mental health episode.
Sam:So it's designed to talk to someone who doesn't understand or doesn't believe that they are in need of mental health services or mental health help, or they don't think that they are in any kind of crisis at all. So listening to them and empathizing with them, especially, I think gets you a long way. And then you're partnering with them to then say, okay. We're gonna get this together. You're not just gonna go and do your own research with Gemini.
Sam:We're gonna sit down and figure out what's going on here and get to the bottom of this as a team, which I think is really important, because in a lot of these cases, if your friend or loved one is trusting you to talk about their usage of chatbots in a way that is vulnerable for them, that's a big deal. And you might be the last person that they know who will listen to that sort of belief and delusion, honestly. You wanna be very careful about isolating them and pushing them away any further.
Joseph:So what happens now? I mean, we can't predict the future or maybe what do some of the people you spoke to hope happens now as in more rigorous study of this? Because it's basically like anecdotal at this point. There's essentially no literature on it. Right?
Joseph:Like, do they want there to be more essentially?
Sam:Yeah. For sure. It's really interesting, and frustrating, because every mental health professional that I've talked to about this over the last year or so has said that they're looking at it, and they're listening to people, and they want more to be published on this topic. It takes a long time for things like, you know, actual studies to happen and to catch up to real-world experiences. But in the meantime, mental health professionals are dealing with it in a clinical setting.
Sam:So people are coming to them and saying, hey, I am having these really concerning beliefs about the world, and I need your help. And clinicians are kind of like, oh, shit. We have to now deal with this technology that's messing with our practice. And at the same time, people are using AI chatbots to facilitate their own therapy by talking through topics before they come into therapy. So sometimes clinicians are like, well, that's a good tool to be able to write down your thoughts and have them processed in that way.
Sam:But, also, people are struggling with their relationships to chatbots. And in the meantime, they're kind of like, how do we deal with any of this? So I think time is gonna be the thing that answers a lot of these questions. In the meantime, there are definitely things like what OpenAI is doing, where it keeps introducing these new, like, snitch tools. It's like, we're gonna assign a trusted contact.
Sam:If you're saying something concerning to ChatGPT, then we'll call your mom. It's like, that's not actually useful in these settings at all. A lot of the time, especially young people are turning to these chatbots because they don't wanna talk to their parents, and their parents might make the situation worse. You don't know their home lives.
Joseph:Well, there was a specific case as well where there was a murder-suicide. I think the Wall Street Journal reported on the lawsuit first, and it was something like, yes, you should unfortunately end your own life. That's, like, the suicide being passed through the chatbot, the chatbot saying that. The chatbot also convinced the person that their mother or father couldn't be trusted for some reason, and then they went and killed them as well. So bringing in a person like that might not be the best way.
Joseph:I mean, again, I'm not a mental health professional, but that's just what was reported and was in the lawsuit. Yeah.
Sam:Right. And these, quote, unquote, solutions that the tech companies are posing to the ways that their products are affecting people's mental health are just so detached from the ways that people in crisis are actually using the products. It's shocking. And I can only imagine they're just trying to Band-Aid over it. They're doing everything short of shutting it down. That's never an option that they pose to make it safer: stop letting people use it, or stop letting it talk to you like it's a sycophantic person.
Sam:Stop using, you know, like, I, me, I feel, I think, those kinds of phrases. I think that would be an interesting solution, but you never hear that from Sam Altman. So, yeah, I don't think we can rely on the tech executives to figure out how to make these tools safer. I've kind of given up on the idea that they can or will, because at the heart of them, what makes them popular and what makes them successful is also what makes them incredibly harmful, incredibly damaging to people's mental health, in this pretty big subset of people. We're talking about hundreds of thousands of people who are experiencing mental health crises, by ChatGPT's own numbers, and that's just ChatGPT.
Sam:We're not talking about Gemini or Claude or any other. So, yeah, not an optimistic look at the next few years and where this is going, but I think it is good that people are realizing that it is a problem. And it's something more and more of us are probably gonna have to grapple with eventually: having a loved one who has fallen down this weird rabbit hole with a chatbot.
Joseph:Yeah. Absolutely. Alright. We'll leave that there. When we come back after the break, we're gonna talk about one of the stories I had about ProtonMail.
Joseph:We'll be right back after this.
Emanuel:Okay. We're back. Next, we have a story from Joe. The headline is Proton Mail helped FBI unmask anonymous Stop Cop City protester. Joe, let's start with the facts of the case and how it relates to the Stop Cop City movement.
Joseph:I'll preface this with: I didn't follow the Stop Cop City movement and actions that closely. Like, I followed it from the sidelines. I was reading the reporting in the Guardian and that sort of thing. But people are probably familiar with it: the plan is to construct a very, very large police training facility on this forested land near Atlanta, and people were upset about that for various reasons, and that manifested in different ways. On the lower end of the spectrum, it's stuff like lawsuits, protests, going to community meetings and voicing it that way.
Joseph:And it does then go across the spectrum to a series of other actions which are straight up violent. I mean, there is arson against the facility. There is doxing of people connected to the plan to build this facility as well. So there's that. This particular case is very, very focused on a blog which published details of those actions.
Joseph:It would say, hey, there was a protest here. Oh, hey, activists camped in this forest on this day, etcetera, up to stuff including the arson and material like that. It's very much focused on that blog and the ProtonMail address linked, or that the authorities believe is linked, to that blog. So in the course of this investigation, the FBI wanted to find out who was behind that ProtonMail account, because whoever controlled that had administrative access to this blog that was basically publishing the details of all of these actions. Now, this is more the FBI's interpretation.
Joseph:The FBI's interpretation, it seems from the court records we got, is that this was more encouraging the behavior, whereas the blog might say, we're just publishing what happened. But that was the thrust of it: the FBI wants to find out, well, who is running this blog that is documenting all of this activity?
Emanuel:I think our audience probably mostly knows this, but I guess for the people who don't, we should say that ProtonMail is a privacy-focused email service provider. Part of the deal is that they are located in Switzerland, right? And they say they are only beholden to the authorities there, and that is kind of part of the pitch, as opposed to, theoretically, a Gmail, which would have to respond directly to the FBI or the DOJ or whatever agency in the States that comes around demanding access to all kinds of data. It is a fairly common choice among some privacy-focused people. So all that being said, what data did Proton give the authorities?
Joseph:Yeah. It's not the contents of the emails. ProtonMail cannot do that. Their service is end-to-end encrypted, much in the same way that Signal is or WhatsApp is. The authorities, or rather the company, can't even get the content of those messages.
Joseph:In this case, the data provided was to do with the person who was paying to support the account. So although ProtonMail is free, generally speaking, and I'm sure a lot of people use that, you can pay for extra storage. They have a VPN as well and like a Google Drive, almost equivalent where you can store a bunch of files. What Proton gave the Swiss authorities, which then gave to the FBI, was information about the payment source for that account. Now, we'll get to Proton's response in a bit probably.
Joseph:You can pay for those services with cash, with cryptocurrency, or with a credit card. And the fact that the person allegedly behind this activity was identified through payment information, I mean, the court record reads as if this person used a credit card to fund that account. I should also say, this didn't actually make it into the piece, but it looks like Proton also provided the backup email addresses of that ProtonMail account. If you get locked out of your account, you have a recovery email address; that's very common on Gmail or wherever. You also do it on Proton.
Joseph:And I have seen it in other cases before where Proton has provided recovery email addresses to the authorities. Frankly, I didn't even bring it up in this piece because that's already known. That's already been documented in other cases. This was really interesting for three reasons. One being that the data ends up with the FBI, which I don't think people may understand, as you say, because Swiss laws and the MLATs, which we'll get to.
Joseph:The second is the fact that it was payment data, which I personally had not seen before. I've not seen Proton hand over payment data before. That was interesting. And then the third thing is, frankly, the link to Stop Cop City, because that is a series of actions that a lot of people are very enthusiastic about and do care about. And again, there's a spectrum of straight-up violence to also just grassroots activism, and it's very interesting they provided data in that context.
Emanuel:Yeah, that's interesting. I edited the piece and I didn't know this part about the backup email addresses. The danger there I think you're implying, but let's say it explicitly is if one of those backup emails was a Gmail, the FBI could then go to Gmail, get that data via that access, link the two accounts, and reveal who the person is, right? Is that kind of the idea?
Joseph:Yeah. Yeah. Or the backup email addresses, like, let's say you have the Proton email as anonymousperson@protonmail.com, and then they get the recovery email or the backup email, and it's johnsmith@gmail.com. Well, John Smith is running the ProtonMail account. And, again, the reason I didn't include it here was because we already knew that, but also the backup email addresses in this case were just more Proton emails.
Joseph:So, like, it actually didn't really help the authorities, it seems. It's the payment data that was interesting in this case.
Emanuel:So how was the FBI able to actually use this data?
Joseph:Yeah. So I'll just read out what it says in the court record. And it says, quote, on 01/25/2024, subscriber information received from the Swiss Mutual Legal Assistance Treaty Unit revealed the name of the person as the payment source for the Proton email address, defendtheatlantaforest@protonmail.com. So in other words, this payment information that Proton provided did allow the FBI, it did help the FBI, figure out who was allegedly behind this account. And I'll just say, this is very clear in the article, but the reason we're not naming this person is a couple of things.
Joseph:First, it looks like they actually weren't ultimately charged with a crime. I searched federal databases. This being the FBI, you'd think it would be a federal charge if anything came from it, right? I didn't see their name in any of those. I also looked at this very large 60-person indictment in Atlanta from the state of Georgia, which was this very large overarching RICO case where they were basically trying to charge everybody who went to this protest, or a lot of people who went to this protest, saying you are part of an overarching criminal conspiracy. That fell apart for various reasons that we don't really need to get into.
Joseph:But this person linked to this ProtonMail account was not in that either. So from all indications, this person was not charged with a crime. Again, I might be wrong. That's just the visibility I have. And then also, the court record just includes their personal address.
Joseph:So I'm not gonna publish their name and their personal address; that would be insane. But that's why we didn't name the specific person. Yes.
Emanuel:So what is your overall takeaway here? Like, what did we learn from this entire incident? Even though this person ultimately wasn't charged, if the FBI wanted to or had the reason to, they could have, because they found the person. So what did we learn from the whole affair?
Joseph:Yeah. There's a few different things. I think firstly, and this wasn't even based on the reaction, this was based on me just seeing this document and going, wow, that's a story. I need to write that.
Joseph:I think it's really likely that a huge chunk of Proton's users, if not most of them, don't understand that Proton can and does provide data like this. And by that, I mean not just the payment data, but the fact that they can give data to Swiss authorities that then give it to the FBI. In Proton's marketing material and its claims online, it says it only complies with Swiss law. That is true. The request technically had to come from local Swiss authorities, but they then provided it to the FBI.
Joseph:And it's not like this mutual legal assistance treaty request, this MLAT, it wasn't like the Swiss authorities just randomly decided, hey, you know what? That's an interesting ProtonMail address. Let's get data on that person, and we'll just keep hold of it. And then the FBI happened to ask for it.
Joseph:The only reason the Swiss authorities are requesting data from ProtonMail is because the FBI asked for it. So while some people might get hung up on that distinction, I think it's effectively meaningless. The data was funneled to the FBI, and that data helped the FBI identify this person. Related to that, I just think it shows the power of MLATs and that people need to probably understand them better. And then I guess just the last thing I'll say is that there's been a very strong reaction to this piece.
Joseph:People have said, wow, this is crazy. This is really, really interesting. It reveals this. And I agree with that. Others say we are blaming ProtonMail just for pointing out this series of factual events: that they provided data that then helped the FBI.
Joseph:As a side note, just anecdotally, I think people can get pretty weirdly tribal about their secure communication tools like, well, I use Proton and you're saying it does this, and that's a reflection of my character, which is not true. And if that's your reaction to getting new information like this, I don't know. I think you need to assess that really. But I think that we now know, and it's very, very clear, the payment information can go from the Swiss authorities over to the FBI, and I think that's pretty interesting.
Emanuel:Yep. People in the cybersecurity privacy setting can get tribal and, frankly, very pedantic. And I think we saw that in the reaction. Something that I wanted to say in response to the response to the article is something I learned from reading your book, Dark Wire: The Incredible True Story of the Largest Sting Operation Ever, out now in paperback. Is that true?
Emanuel:Yeah? Are you nodding? Okay. Yes. Yeah.
Emanuel:It's that, yes, Proton only responds to what Swiss law demands of it, but the FBI and other law enforcement agencies in other countries understand that we live in a very connected world, and they're not stupid. They're very sophisticated, and they cooperate with each other, and that cooperation is leveraged in all kinds of interesting ways. Which is another thing I learned from your reporting: something is not necessarily legal here, but it is legal in that other country, so we can work with that country in order to get it done. So it's like, yes, the FBI can go to the Swiss authorities and ask them for Proton information, and that doesn't mean that we're saying anything, like, bad about Proton as a company. Proton is clear about that. Proton has, I would say, good privacy policies.
Emanuel:Right? It's like they're they're encrypted and all of that. The point of the story is to show that this path for investigation is something that's available to to law enforcement. Therefore, if you pay for stuff on Proton, you need to consider that and perhaps not pay with a credit card. And I think, as you said, it's like probably most people don't think about that, and that is that is the point of the article, and it is not to, like, slam Proton in favor of some other service.
Emanuel:If anything, it's about letting people know how to use Proton in a way that is safer and more privacy focused. Yeah. And I'm not sure, like, sometimes I'll read a story and I'll be like, okay, well, we could have engineered this story to kind of avoid this criticism, but I feel like the story is incredibly fair and clear about this exact point. So I don't know. Read the article.
Emanuel:Have faith and trust in our reporting.
Joseph:Yeah. I mean, I'm definitely going to keep an eye on it. But if anybody else has interesting related court records like that, please let me know, because I do think the metadata side of secure chat platforms is very, very interesting, and how authorities can get that, internationally especially. We will leave that there. If you're listening to the free version of the podcast, I'll now play us out.
Joseph:But if you are a paying four zero four Media subscriber, we're going to talk about a couple of viral app developers. You've probably seen them online. The life they lead and how ultimately they lied about exposing their users' highly sensitive data. We can finally reveal that. Emmanuel's going to give us all the details.
Joseph:You can subscribe and gain access to that content at 404media.co. We'll be right back after this. All right. We're back in the subscribers only section. Emanuel, this is a really interesting update to a story I'm pretty sure we've previously spoken about on the podcast, because it's about Google Firebase stuff.
Joseph:But the headline is Viral Quitter Porn Addiction App Exposed Masturbation Habits of Hundreds of Thousands of Users. Let's start with sort of the new bit before we get into the context. Who are these developers, and what sort of lifestyle do they lead? Because they were recently, I think, in a New York Magazine profile.
Emanuel:Yeah. The reason we ultimately revisited this story is that New York Magazine profile, which is a long and, I think, good profile that you should read. The thing that was wild to me when I read it is that it was missing, through no fault of the reporter, a really critical piece of information: this app, which promises men, it seems mostly young men and even minors, that it will help them stop watching pornography and masturbating, because its philosophy is that that's bad for you and kind of robbing you of your life force, was jeopardizing their private data. Their, like, incredibly intimate data, including, you know, how often they watch pornography and how they feel about it and feelings of guilt about it and confessions and all that sort of thing. And the people here that I'm talking about, it's a whole group of men in their early twenties, but specifically the two behind this app are Alex Slater and Connor McLaren.
Emanuel:They developed this app. They claim, and I have reason to believe that they're not inflating their numbers by much if they are, that they have hundreds of thousands of users, and they're generating half a million dollars a month from this app alone. They have other apps. They have this little group of young men they call the App Mafia, and it's kind of this whole, you know, what we call the hustle bro culture. Make apps quickly, deploy them, go viral via various marketing stunts, monetize, and then be incredibly rich like these guys.
Joseph:So perhaps pretty popular, hundreds of thousands of users as you say. Well, how did we previously cover it? And this is going to be a little bit weird because you almost just have to retell the story of the other piece. Yeah. And then go, hey, that was Quitter.
Emanuel:You know what I mean? It's kinda weird. That is basically the
Joseph:part of the
Emanuel:whole, and basically what I'm gonna do right now. But, yeah. This all goes back to our initial reporting about Tea, which was hacked via this Google Firebase misconfiguration issue. Google Firebase is this platform for developing mobile apps. There is something funky about how it handles some settings by default.
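To make the "funky by default" settings concrete: Firebase's Firestore database serves app data over a public REST API at a predictable URL, and a project left with permissive "test mode" security rules (`allow read, write: if true;`) will answer unauthenticated requests. A minimal sketch in Python; the project ID and collection name are made up, and this illustrates the general class of misconfiguration rather than any specific app's setup:

```python
# Sketch: why a Firestore project with wide-open security rules leaks data.
# Firestore exposes documents over a public, predictable REST URL; if the
# project's rules are permissive (e.g. `allow read, write: if true;`), an
# unauthenticated GET to this URL returns the collection's contents as JSON.
# The project ID and collection name below are hypothetical.

def firestore_documents_url(project_id: str, collection: str) -> str:
    """Build the public Firestore REST endpoint for a collection."""
    return (
        "https://firestore.googleapis.com/v1/"
        f"projects/{project_id}/databases/(default)/documents/{collection}"
    )

url = firestore_documents_url("example-app-12345", "users")
print(url)
```

The project ID isn't a secret; it ships inside the app's own config files, which is why obscurity doesn't help here and the only real fix is server-side rules that require authentication.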
Emanuel:The short version is that someone can fairly easily get access to the back end, including user data, if it's stored there. That is how the Tea app was hacked. After that happened, this independent security researcher tried the same method on a bunch of apps and published his research, which found that many of the top apps in the mobile app stores have this issue. He did not name Quitter specifically, because he thought, and we agreed, that the data was incredibly sensitive, and he couldn't get the developers, this guy Alex, who I contacted as well, to fix the issue.
Joseph:I mean, why couldn't he get them to fix the issue?
Emanuel:I mean, that is kind of a mystery, but, like, let me go through my view of events. First, the researcher contacts them in October 2025, and he says, like, hey, I found this issue. And he did this for all the apps, right? He was doing the responsible disclosure thing, contacting the people directly and giving them a chance to fix the issue before he publishes anything, like most people do. He contacts Alex.
Emanuel:He says, hey, you have this problem. Alex immediately responds, and he's like, oh, thank you so much. This is my fuck up. I'm going to get right on it right away. I've seen the text exchange between them where Alex is like, I'm going to fix this within the hour.
Emanuel:And since then, in October, the researcher kept checking and checking if they fixed the issue, and they didn't. And eventually, out of frustration basically, the researcher comes back to me and he's like, Hey, I can't get this guy to fix the issue, and I'm really worried about these users, and I want to publish my research, etcetera. At that point, I contact Alex, and I cold call him because I have his number, and I pretty much got the sense that I woke him up. I don't know what he was doing. I don't know where he was staying at the time.
Emanuel:He is British. It's possible that I caught him, like, because of the time difference in London, but he's definitely a lad.
Joseph:But the vibe was like, we understand that.
Emanuel:The vibe was like, you know what I mean? I had a crazy night in my Miami mansion, I'm waking up a little hungover, like, who is this guy on the line? And I think I caught him off guard. And I was like, hey, this is Emanuel from 404 Media.
Emanuel:I'm reporting this story. You have this issue. Like, do you know this guy? And he's like, oh, yeah. I know this guy.
Emanuel:And I was like, I just checked and, like, you still have this issue and all this data is still vulnerable, and then he was like, you know what, actually that guy is kind of bullshit. Like, I think he was lying. I think that data was fake, and I was like, it's not fake. Like, I'm looking at it right now. I'm like, I'm pretty certain that it's real and that you have a real issue, And then he was like, nah, like, this guy's a loser, and I don't want to talk to you anymore.
Emanuel:Have a nice day, and he hung up on me. And he was so certain, like, the lie was so bold, that I started to doubt myself, and then we went back to the researcher. I created a new profile on the app, and then the researcher went and fished my data out to prove that it was still vulnerable. And at that point, I tried to contact Alex over and over again: text, LinkedIn, email. I called other people in this App Mafia group. Nobody picked up. And, you know, I wasn't being super confrontational. I was just like, hey, there's still this issue, and this researcher is offering help, like, he can help you fix the issue if you don't know how.
Emanuel:I was kind of like, he wanted me to deliver this message to you, and they just didn't respond. They didn't fix the problem, and we sat on the story, I think, for like a week, and then you and I had a conversation, and we were like, okay, well, I don't think they're ever going to fix it, so we should just publish the story without naming the app, and that would make it safe. Publish the story, time goes by, then we see the New York Magazine profile, and you and I had a discussion: well, eventually, if they fix it, we should publish another story, because this is not just some random app. It's the most popular of these apps, and these kids are wild, right? They're, I think, not even 22 yet, and they're driving Ferraris.
Emanuel:They're living in a Miami mansion. They're posting videos flexing on the entire world because of how rich this app is making them. Meanwhile, it's like they simply do not give a shit about their users even though it's like an incredibly sensitive topic that the app is about. So we see the profile, we check again. At that point, we saw that they did eventually fix the problem, and then we felt comfortable naming the app, which is why we're here today.
Joseph:Yeah. I mean, you answered one of my questions. I was gonna ask what your impression of them was, and do they care? But you say they don't give a shit about the users, and I think that's pretty obvious from their handling of it. I guess just
Emanuel:And I think it's obvious, by the way
Joseph:Yeah.
Emanuel:It's obvious from the New York Mag story. Like, the New York Mag story is kind of the story I think that we wanted to do. It was just missing this one piece, where it's like they're being incredibly irresponsible and dangerous about how they're treating users.
Joseph:Yeah. Do you think there's potentially a broader thing here? This is an extreme example, but we have all of these new vibe coders, these new people who are coming in and just making apps, and security often does seem to be the last thing on their minds. They seemingly don't even include in the prompt, hey, did you make the app secure, please? Again, this is an extreme one, but do you think there's probably a broader thing here?
Joseph:Because you've even reported on an industry of companies that now go in to just fix this sort of thing. And to be clear, I'm actually not saying this specifically was vibe coded, but the vibe around all of it is like, who gives a shit? Let's just make it happen.
Emanuel:Yeah. I can't prove that either, but that is kind of in the air around this app specifically, and also other apps, apps that are completely unrelated. And then there are other apps that offer a similar service, and I've now heard from two different security researchers that they have the exact same problem with Google Firebase, which may be another blog that we'll do soon. But the researcher who initially reached out to me about Quitter claimed that it was vibe coded. I was unable to verify that, so I'm not saying that.
Emanuel:But that was, like, his impression of the code base. And then I've also learned that, for whatever reason, there is an OpenAI API key that's exposed in the app, which doesn't mean that they used it to code the app, but it's an interesting, you know, tidbit. And then finally, Alex Slater himself and these other App Mafia kids don't say that they vibe code apps, but they do say that they started their business after OpenAI's coding tools became available. So
Joseph:I don't know.
Emanuel:It's like, is it going to be an issue? Like, absolutely. Can we say it's specifically what happened in this case? No. We can't say that.
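On the exposed API key mentioned above: finding a key like that usually takes nothing more than string-scanning the files an app ships with. A minimal, hypothetical sketch; the `sk-` prefix is OpenAI's documented secret-key format, and the sample string below is invented, not a real key:

```python
import re

# OpenAI secret keys begin with "sk-", so a simple pattern scan over an
# app's shipped strings is enough to surface an embedded key.
# The sample string below is invented, not a real key.
KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9_-]{20,}")

def find_embedded_keys(blob: str) -> list[str]:
    """Return candidate OpenAI-style secret keys found in app strings."""
    return KEY_PATTERN.findall(blob)

sample = 'let apiKey = "sk-abc123def456ghi789jkl0"  // baked into the build'
print(find_embedded_keys(sample))  # -> ['sk-abc123def456ghi789jkl0']
```

An exposed key like this lets anyone bill API usage to the developer's account, which is a separate problem from the Firebase one but comes from the same carelessness about what ships in the client.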
Emanuel:Okay. Joe has passed away. Apologies, listeners. Joe's Internet finally died, and he's unable to do his outro reading as he loves to do. But we'll leave that there.
Emanuel:Thank you for listening.
Sam:Do you want me to read this out? I have it in front of me.
Emanuel:Yes, please.
Sam:As a reminder, four zero four Media is a journalist-founded, subscriber-supported company. If you wish to subscribe to four zero four Media and directly support our work, please go to 404media.co. You'll get unlimited access to our articles and an ad-free version of this podcast. You'll also get to listen to the subscribers-only section, where we talk about a bonus story each week. This podcast is made in partnership with Kaleidoscope and Alyssa Midcalf.
Sam:Another way to support us is by leaving a five star rating or review for the podcast. That stuff really helps us out. This has been four zero four Media. We'll see you next time.