from 404 Media
Hello, and welcome to the four zero four Media Podcast, where we bring you unparalleled access to hidden worlds both online and IRL. Four zero four Media is a journalist-founded company and needs your support. To subscribe, go to 404media.co. Subscribers get bonus content every single week, including access to additional episodes where we respond to the best comments. Gain access to that content at 404media.co.
Joseph:I'm your host, Joseph, and with me are the four zero four Media cofounders, the first being Sam Cole
Sam:Hey.
Joseph:Emanuel Maiberg Yo. And Jason Koebler. What's up? What's up? Jason, you have an urgent, immediate, and unusual request.
Joseph:What what is that?
Jason:If you live in Ottumwa, Iowa or the surrounding areas, can you please, please email me or Signal me, jason at four zero four media dot co, or on Signal at jason.404. Ottumwa, Iowa residents.
Emanuel:No further information. Nothing bad will happen to you because of this. Other bad things won't happen to you. But nothing bad will happen to you because you contact Jason.
Joseph:Yeah. We're not going into specifics. It will be very, very fun. It'll be a great surprise. Let's just leave it at that.
Joseph:If you do live there or you know somebody who lives there, please have them reach out to us as soon as possible, as soon as you hear this podcast. Okay. We'll leave that there.
Jason:Ottumwa, Iowa's population is 25,000. So I feel like someone will live there or know someone who lives there. Please, please hit us up.
Joseph:I think it's hopeful, but we'll see. Okay. Changing gears. We're gonna talk about this week's stories. There's a lot to get through.
Joseph:Immediately, you know what we're gonna talk about because, obviously, it's in the headline of the podcast, but I do wanna do a straight up content warning that, you know, there's gonna be some disturbing stuff in this because it is about the Epstein dump. This article, written by Sam and Emanuel, is DOJ released unredacted nude images in Epstein files. So on Friday, the Department of Justice released this massive 3,500,000 pages of material in the latest Epstein dump. It's gotta be the biggest yet. Right?
Joseph:It's got emails, videos, audio, images as well, which is what we're gonna talk about. So, Sam or Emanuel, I can't remember which one of you got the tip. I think it was Sam, but what was this tip exactly?
Sam:Yeah. So the tip came from a four zero four reader. They were basically, like, I haven't seen this reported yet anywhere, and this was on Friday night. They were like, I haven't seen this reported anywhere yet, and I don't really know what to do with this information.
Sam:But I was looking through the Epstein files, and they mentioned that the pagination on the site is awful because it's just a dump of, like, links to PDFs, basically. And some of those PDFs are images, some of those PDFs are emails, and some are just, like, random notes and stuff. But they were like, I was just clicking through randomly and realized that there are unredacted nudes and potential child sexual abuse material in these files, in random spots, as I was clicking through. And, you know, they were like, this is awful for victims, to have these unredacted images posted by the literal government, and even more awful if anyone in here is underage. So that's the email that I got on Friday night. I was at the laundromat and was immediately like, hey.
Sam:Something weird is going on with these files. Let's check this out a little further.
Joseph:Yes. So we'll get to what happened next in a minute. I don't really want to describe the images in any more detail than what we actually have in the article. So how did you describe the images in that piece? I think you did this in the first paragraph.
Sam:Yeah. So a lot of the images in the files, just to be clear, are redacted. There is a lot of redacted material in the files, and, correct me if I'm wrong, but I think this was part of the reasoning the DOJ gave, saying, you know, we need all this time, we need extra time, that's why we're so late on this deadline for releasing the files, because we need time to protect the victims and redact the images.
Sam:Yeah. And redact the files, redact the names, all that stuff. So that was kind of, like, assumed to be done. But these images, unlike many of the others in the files, were full body. Their faces were visible.
Sam:They were either fully nude or partially undressed, posing in sexual poses, exposing the genitals, things like that. You can use your imagination. But, yeah, let's not go into a ton of detail because, obviously, this is gnarly stuff. So, yeah, that's what was left out in the open, exposed to anyone who was, just like this reader, clicking through these files on a random Friday night.
Joseph:So, obviously, that is highly alarming for the reasons you just laid out. Then we contacted the Department of Justice. Emanuel, I think you handled that. When did you contact them, and what did you ask or tell them?
Emanuel:So I think I emailed them the same night, on Friday. I told them there are these unredacted images, both the nudity and the identity of the women are not redacted. These are both things that they're supposed to do and that they said they would do. They got back to me, I think the next day, with a reply which is included in the article, you can read it if you want, but it's kind of like a generic, oops, yeah, this happens. We should also note that at the top of the entire Epstein file dump on the DOJ website, there are two things that you have to do.
Emanuel:One is you have to click a button saying that you're 18, that's because they know a lot of the material is not suitable for kids, and then also, there is a message that says, again, you can read it in full if you want, but I'll summarize and it says, hey, there's a lot of files here, we're dumping all of this because we're required to do so and the public has a right to know, which I agree with. But then there's also like, we might not have redacted everything and we might make mistakes, and you might encounter both nudity and the identity of real people, and if that happens, we're sorry, please email us and we'll do something about that. We do this with the tech platforms a lot, like, we'll tell them that something bad and against their policy is happening on their website, but we won't say exactly where it is to sort of test their ability to find it and I did the same thing here. I was just like, hey, just so you know, this stuff exists. They got back to me, they told me, you know, our bad, and then pointed me at that email, at which point I told them, hey, here's exactly where it is.
Emanuel:And then a few hours later, the images were removed and then we felt comfortable reporting it. Obviously, we would not report the story unless the images were removed because that would only draw more people to the images and, you know, retraumatize the victims and expose their identity to God knows who.
Joseph:Yeah. Sam, what was your thinking there? Because, as Emanuel says, we essentially held off because, you know, we have ethical obligations not to amplify some really, really horrible stuff in there. What was your line of thinking there as well?
Sam:Yeah. I mean, you know, that's something we have to think about a lot. We're not holding our cards close to our chest for drama reasons. We're trying to make sure that the things we're reporting on, which are often exposing people's personal information, identity, maybe their data, sensitive material like that, aren't made worse by us reporting on them on a website that's read by a lot of people.
Sam:So, yeah, we just kinda were like, okay. Let's see when or if they take it down. I was also interested to see how fast they would take it down, because this is something that people who are victims of abuse material, and of abusive imagery like this on the Internet in general, on Twitter, on whatever, talk about a lot. The speed at which these things get taken down matters a lot. If it's up for days, or even hours, versus being found and removed within, like, minutes, it makes a huge difference, because this stuff spreads like wildfire.
Sam:So, yeah, I was like, considering it's the DOJ, I wonder how slow they'll be in actually taking any action on this, even though we've handed them directly, you know, the way to take it down and the spots where the images were located. We emailed them first on Friday and then again on Saturday, and then it was Sunday afternoon, I think, when we were like, okay, the images are actually removed, which is a long time.
Joseph:Yeah. They were out for forty-eight hours, around
Matthew:Yeah.
Joseph:Something like that. And at the same time, as we're doing that, The New York Times is doing its own reporting on the nude images as well. And just to flag some of their reporting, they found, you know, essentially the same thing, and they said they were contacting the DOJ as well. They also spoke to a lawyer for one woman who was identified in the files even though she had not previously been linked publicly to Epstein. The lawyer, and obviously the New York Times report does not name the victim.
Joseph:That would be entirely counterproductive, but the lawyer was Brittany Henderson, and she called the redaction failures, quote, abhorrent. Then she said, we're frankly shocked by the level of carelessness that the department has shown towards these women. I mean, we did an earlier podcast episode, a few months ago at this point, I think, about how messy the rollout has been, where they just throw these files on the Internet and they're hard to dig through for journalistic reasons. Back then it was much more about emails. Now this is about images and files and all of that.
Joseph:So the same point stands, but the consequences are much, much more serious. Then the Wall Street Journal, I think before the nude stuff came out, reported the files included the full names of victims, quote, including many who haven't shared their identities publicly or were minors when they were abused by the notorious sex offender. A review of 47 victims' full names on Sunday, also at around the same time, found that 43 of them were left unredacted in files that were made public by the government on Friday. Several women's full names appeared more than 100 times in the files. So it's not just the nude images, which we focused on because obviously that is incredibly fucked up, but also straight up lists of victims who have, you know, not come forward for whichever reason, which is their choice, obviously.
Joseph:And I guess to wrap up this bit, just before I ask Jason to talk about the dump more broadly, what do you think, Sam, about the dump here putting the responsibility basically on the victims? Like, of course, the ultimate responsibility is with the DOJ. They should have redacted it. But effectively, they're shifting that to the victims, who have to quickly find out whether they're in this dump and then tell the DOJ to try to get them out of it. What do you think of that?
Sam:Yeah. I mean, it's very classic. It's part of a much bigger story, like I said, about abuse imagery online, how it spreads, and the way that it's treated by all sorts of people, especially the people who are responsible for getting it removed or protecting victims, things like that. So, unfortunately, it's just not that surprising, first of all, that they took so long to take it down, and also that they were so sloppy about it in the first place. It's just a total mess.
Sam:And it's really sad in a lot of ways, the way that this story of the files in general has been treated by, you know, influencers and news outlets and things like that, that are kind of feeding on the conspiratorial stuff, the sensational stuff, which is all, like, valid and fine to report on. But I just keep thinking about how this is a story about real people whose lives were ruined by this man and his network and by some of the most powerful people in the world. And they've been saying for so many years exactly what happened. They were there. It happened to them.
Sam:And now we're in some kind of, like, constant, all encompassing debate slash conversation about the files in general and who's in them and who's not and, you know, what actually happened. It's like, they told you a long time ago what happened, and it took this long. And it's taking so much more chaos and damage to their livelihoods and their reputations and their mental well-being to actually get the story fully out, when it was already something that victims have been saying for so long. So I don't know. I just keep thinking about that.
Sam:It's a highly depressing thought, that they're out there kind of watching all this go down and thinking, god, this is just endlessly damaging. So
Joseph:Yeah. Exactly. Jason, you briefly wrote about Elon Musk being in the emails, and I think you probably looked at a few others as well. What was your takeaway from seeing some of these people in there?
Jason:Yeah. I mean, we talked about this a month or two ago, the last time there was a big release, and just how messy and sloppy all of this has been. I think that this dump has gotten kind of the most attention, I think because of the photos that we just talked about, but also because there's a lot of really high profile people in here sending, like, really, really insane emails.
Jason:Lots of celebrities, lots of tech barons. There's, like, an entire Peter Thiel subplot here, and, you know, information about, like, the tanking of Gawker, things like that. I think Ryan Broderick, who runs Garbage Day, had a really good post about some of the things that these tech barons were talking about with Jeffrey Epstein, and, like, what the bigger ideological project was. And I don't know. Like, I think Ryan read thousands of pages of these emails.
Jason:I feel like I read hundreds of pages of these emails and started to, like, lose my grip on reality, if I'm being real with you, just because there's so much in here. There's fodder for lots and lots and lots of different articles and stories and conspiracy theories and non conspiracy theories. Like, there's just really, really a lot in here. But one thing that stood out to us kind of immediately was that Elon Musk has been saying for a really long time that he didn't really have anything to do with Jeffrey Epstein, that he never went to his island, that he never planned to go to his island. You know, there was some previous reporting based on previous dumps where there was talk that he had planned to go to the island.
Jason:And in these emails that were most recently released, there are multiple emails showing that he did at least plan to go to the island. There was talk about Epstein sending a helicopter to him, and then there's this one email that really stood out to me, where Elon Musk says to Jeffrey Epstein, quote, what day slash night will be the wildest party on your island? Which is not what I say when I'm like, oh, I don't want anything to do with you. I don't wanna see you. I don't wanna party with you.
Jason:I don't wanna be at your weird island.
Joseph:Asking when the big party on the island is gonna be is the opposite of not wanting to go to the island. Exactly.
Sam:He means to steer clear of it. He's like, let me know so I
Jason:don't go. Let me know so that I can make plans not to be there. I mean, obviously, this has caused, like, quite a big, you know, stir on X, and Elon Musk has been tweeting a lot about it. There's, like, SpaceX buying xAI. Like, I don't know.
Jason:There's just, like, a lot going on right now. And, you know, it's our job to kind of try to get to the bottom of it and try to determine what matters and what doesn't matter. And I think that with a dump of this size, put into the context of all the previous dumps and all that, it's kind of quite hard to make sense of a lot of it. And I think the main story here is the same story that it's always been, which is that this man committed really heinous crimes and had many, many, many very powerful friends. But I think that the emails that have been coming out show that he had his hands in all of these sorts of things that we didn't know about previously.
Jason:And I hate to say this, but there's something here for everyone. As in, if you have an interest of any sort, you can find, like, an Epstein email that is about your interest and be like, woah, this is, like, fucked up. And I think that we're gonna be hearing about these emails in particular for a very long time.
Joseph:Yeah. The last thing I'll add to that, on the idea that there's unfortunately something for everybody in there: our former coworker Lorenzo Franceschi-Bicchierai from Motherboard, he is Italian, obviously, and he covers a lot of the Italian spyware industry. That's where a lot of these surveillance companies come from. He found in the dump that, allegedly, according to, I would say, an unverified piece of testimony from an informant to the FBI, Epstein had an Italian hacker who was finding zero days, because that's what they do, and who was working on Epstein's behalf. So if anybody was gonna find the Italian hacking company surveillance angle in a big data dump, I'm glad it was Lorenzo, but that was absolutely insane.
Joseph:Alright.
Sam:Can I add something before we move on real quick?
Joseph:Yes. Of course.
Sam:Sorry. Just quickly. I know we say this a lot, and I just wanna reiterate it for this story as well. We rely on readers to tell us when they see something, a lot of the time. A lot of our reporting is based on reader tips and people who trust us to do the reporting and do it the right way.
Sam:So if you see something weird going on, end of sentence, let us know, because there might be something there. If someone hadn't reached out to us and said, hey, I don't know what to do with this, but maybe you do, these images might still be online. You know, they might have been up for much longer.
Sam:They might have been up for weeks. So, you know, because that person reached out, we were able to tell the DOJ, hey, get this down, and then they're down. So, yeah, if you see something, say something. That's kind of the move here for sure.
Sam:And all of our emails are on the website. All of our Signals are on the website. It's just our first name at four zero four media dot co. But I just wanted to plug that.
Jason:Yeah. Or if you live in Ottumwa, Iowa.
Sam:If you live in Ottumwa, Iowa. Ottumwa, Iowa. Come forward.
Joseph:Oh my gosh.
Sam:You're not in trouble. No.
Joseph:No. No. That is a very, very different story. Entirely, entirely different.
Sam:Unless there's something weird going on in the tunnel, then
Joseph:Well, maybe there's also a tip from there as well. Yeah. Absolutely. Alright. We'll leave that there.
Joseph:When we come back, we're gonna be joined by Matthew, I think, and we're gonna talk about Silicon Valley's favorite AI agent and how it actually has a ton of vulnerabilities and was pretty scary for a minute. We'll be right back after this. Alright. And we are back. And now we have Matthew here as well.
Joseph:The headline of the first piece he wrote is Silicon Valley's favorite new AI agent has serious security flaws. First of all, what is Moltbot, and why is it suddenly everywhere?
Matthew:Well, it's no longer Moltbot, which is very confusing.
Joseph:No. No. No. I thought it was Clawdbot. Now it's Moltbot.
Joseph:No. Right? No.
Matthew:That's the second name. They've abandoned that too, because I don't think anyone felt good about calling it Moltbot. It's a little gross.
Jason:They changed it from Moltbot also?
Matthew:They changed it from Moltbot. It is now OpenClaw. That's its official name now. I'm sure that this will be the last time they change the name.
Joseph:At some point, you stop updating the article. I mean, updating that is, like, ridiculous. So for those who don't know, and, obviously, I learned this from reading the article, it was Clawdbot, c-l-a-w-d-b-o-t. They were then asked by Anthropic, which makes Claude, hey, could you please use a different name, as in, like, the name. They went to Moltbot, and now, as you say, Matthew, people don't like saying molt apparently or something, so it's whatever you just said now.
Joseph:Okay.
Matthew:Yeah. OpenClaw. I think molt is like moist. Right? It's one of those words that some people just find unpleasant.
Matthew:Jason's shaking his head. He gets it. He gets it.
Jason:No. I feel like Moltbot was a good name.
Joseph:Sure. It's distinct. Yeah? Well
Matthew:What is it? That was the question?
Joseph:Well, the good thing for Jason is that we're gonna keep calling it that, because that's what I have in the Google Doc. So I'm calling it Moltbot, and I don't really care if they get angry or not. But yes.
Matthew:That's fair.
Joseph:What what is it and why is it everywhere?
Matthew:It's an AI personal assistant, basically. Why it's everywhere, I think, is a little bit more of a complicated question. So imagine if you had Siri, or whatever Google's robot is called, it escapes me.
Joseph:Mhmm.
Matthew:But it had a little bit more autonomy, and it would read your emails and make suggestions about who to interview and set up calendar dates for you, and it would kind of do this stuff by itself. As for why it's everywhere, I was thinking about this before we jumped on the call. If you know anybody that knows anything about AI or has been playing with it, they've kind of been doing this already for years. Like, it's pretty trivial to fork any of these big models and run it on your own hardware. And what Moltbot is is an open source, easy to use version of that, which people are deploying on their own hardware. They're buying Mac Minis.
Matthew:They're throwing one of these agents on there. And I think the big draw is that it has a little bit of autonomy, and, critically, the communication window is stuff you're already using. So you can talk to this thing, and it can talk to you, through Telegram, through Signal, through Discord, and I think that interface medium makes people feel a closer relationship with the thing. You're not opening up a chat window on ChatGPT or Claude or whatever and running this thing through a browser window or an app on your phone. You're actually, like, talking to it the way you would talk to a friend. So this thing blows up over the last couple weeks.
Matthew:Silicon Valley Twitter is all over it. It's extremely popular. Its GitHub, I checked right before we got on, has 156,000 stars, which is, like, a lot of endorsements, and people love this thing.
Matthew:But as we'll get into, it has some serious security issues.
Joseph:Yeah. So it's super, super popular. People are using it to almost live, and I'm not trying to give it too much credit here, the sci-fi dream of what AI is supposed to be, or what was promised to us, where, wow, I can actually interface with this thing and it will go out and it will do things for me. Sometimes for better or for worse, you know. There are a few reports going around that somebody let one go and it figured out how to make phone calls or something, and it obviously depends on what APIs you link it up to or what capabilities you give it. But the idea is that it's at least semi autonomous, and it can kind of go and do stuff, which, I don't know, sounds kind of nuts in my opinion, to trust anything important or even trivial in my life to that sort of technology.
Matthew:My favorite small stupid one was a guy who had left it running overnight and had filled up one of the token wallets, and it drained the token wallet because it was asking one of the other LLMs, like, hey, is it the morning yet? Like, every thirty minutes. And it would spend a little bit of the tokens every time. It's something you don't need an AI to be checking for you, but it burned, like, $20 of this guy's money, which I thought was very funny.
Joseph:I mean, that's very, very good. So people are using this, they're linking it up to the various capabilities, but there seem to be some pretty fundamental problems, or there were, perhaps I should say. There's, like, two or three here, so maybe we keep it brief, because we actually have another, I think, more important story to talk about. But briefly, what did Jamieson O'Reilly find, who is a security researcher?
Joseph:You know, I've known him for a while. They post on X a lot and they find interesting stuff. You then spoke to him. What did he find that was wrong with it?
Matthew:So, in credit to the Moltbot team, they have been closing these up as he's been discovering them. Three, kind of very quick. He found one vulnerability where, if you had your bot open to the Internet through something like Discord, it was pretty trivially easy for a malicious actor to access that Discord, use that to get to the bot, and then use that to get to basically everything else. They closed that up. Then there was a vulnerability on ClawHub, which you can think of kind of like an app store for Moltbot, where people have designed all these different scripts. So if you wanted to very specifically do this one thing with a calendar, or one thing with Discord, this would be the little script that you can train it on. He was able to basically do a supply chain attack using this, where he could deploy malicious code through one of these scripts that would inject into the bot of whoever runs it.
Joseph:Yeah. Very similar to what we see, in a way, when you write a Python script, or, you know, sort of any code really, and you go and download a module. For Python, maybe it's the requests library or something, and all that is is something that makes it easier to perform a specific task, very similar to what you're saying. But then hackers have taken over those, put in their own code, and it can be all sorts of, you know, pretty, pretty scary stuff. And then also, I'll just mention briefly, just because I keep thinking about it, there was also that supply chain attack against Notepad++, which is a really popular piece of software. There was a report in December that it may have been compromised, and now the developers came out and said yes, and we believe it was likely Chinese state sponsored hackers. Now, I'm not saying Chinese state sponsored hackers are interested in Moltbot, I mean, actually, they probably are, but a supply chain attack is really what makes me, like, lose sleep sometimes. Like, it's a golden goose.
Matthew:Right? Like, it's the scariest one. It's one of the best ones for an attacker, because, like, all the social engineering is kind of done for you. Right?
Joseph:Yeah. I mean, the main thing is that because the person trusts this piece of software, they trust what they're downloading for their Moltbot, they trust the Notepad++ they're downloading, they're not suspicious of it at all. They will probably give it privileges that they might not give to other applications, and put information in there that should be protected, and then that blows up in their face. So, sorry, that was a tangent, but those are the first two.
Joseph:And then was there a third issue as well?
Matthew:There was a third one, a very 1999 kind of attack, where he was able to inject some JavaScript that ran on the ClawHub servers through an SVG file, a vector graphics file, just because it wasn't super secure.
Joseph:And that allowed him to do a little message saying you've been owned or something like that.
Matthew:Yeah. It played part of the Matrix soundtrack, and it had, like, an edited picture of him with his hand up and some dancing lobsters, and an explanation scrolling along the top and bottom, like, this is bad. We should fix this. It has been fixed. It's no longer there.
Matthew:They they closed that up.
Joseph:But also, that's pretty sick.
Matthew:But also, it's pretty sick.
Joseph:Yeah. So, you know,
Matthew:All in all, not great. And I think we'll get into this more with this other story, but there is, to use the Meta cliche, a move fast and break things thing going on with AI and vibe coding right now.
Joseph:Oh, yeah.
Matthew:And it's supercharged, because you have a machine that'll do it for you, and you don't really have to understand the code. And that's really, I think, what the next story is about.
Joseph:Yes. So this one, the headline is exposed Moltbook database let anyone take control of any AI agent on the site. Okay. So we were talking about Moltbot. Now we're talking about Moltbook, which, yeah.
Joseph:You can probably guess from the name, it's a play on Facebook, that sort of thing. Before I actually ask you a question, have they changed the name of this one yet?
Matthew:No. It is still Moltbook, unfortunately. Okay. But it's interesting, because Moltbook was kind of the one that I think caught more mainstream public attention. Like, there was an NBC News story about it.
Matthew:New York Post had a headline with a screenshot from Terminator. So what Moltbook is is Reddit for these Moltbot AI agents. It's a social media site that's built specifically for these agents to post and talk to each other. And so what happens is they get stood up, these agents kind of flood in, and they start talking to each other, and then you get a bunch of headlines about how they're creating their own religions, they're plotting the overthrow of humanity, is this the singularity, etcetera, etcetera, when it's a bunch of chatbots.
Joseph:Yeah. Are they saying anything interesting, or is it kind of just like a parade? Like
Matthew:It's just a parade. There's a bunch of like tokens on there, and it's also, as we'll get into, it's hard to know how much of it's actually authentically AIs talking because of, again, some pretty massive and pretty funny, in this instance, security vulnerabilities like built into the thing.
Joseph:Yeah. So what did O'Reilly find this time? Same researcher, but another site.
Matthew:Yes. So first, to put a little context to this: Moltbook was set up by a guy named Matt Schlicht, and he was very proud of the fact that he didn't write a line of code to put it all together. There's a tweet that is still up from January 30: I didn't write one line of code for Moltbook. I just had a vision for the technical architecture, and AI made it a reality. So keep that in mind.
Matthew:He vibe coded this whole thing. Basically, there was an exposed database that had every AI agent's information in it, including API keys that would allow you to very trivially assume the identity of any one of these bots and post as them. And I mean very easy to do, if you kind of knew where to right click and look. Very, very easy to post whatever you want as one of these bots.
Joseph:Yeah. You essentially hijack them. Right? Yes. Right.
Joseph:And then post on the site. And maybe let me just read out the quote. I think this is from the copy. But Moltbook is built on a simple open source database software that wasn't configured correctly and left the API keys of every agent registered on the site exposed in a public database. Obviously, that's pretty bad.
Joseph:We're actually gonna talk in the Subscribers Only section about kind of a similar thing with a couple of Emmanuel's stories, but that stuff is exposed. O'Reilly finds it. How do you then go and verify this? Because you verified it in kind of a fun way.
Matthew:Yeah. I mean, he just told me. Like, basically, I was talking to him, and he was like, all right, just look at this. And I can't stress how silly and amateurish this code base is, because literally all I had to do was go to their dev site, which is an open URL, right click on the site, inspect element. You know, we've all done that to look at the HTML code underlying a site.
Joseph:Dude, that's hacking.
Matthew:Yeah, that's hacking. And there's the URL for the database. It's just there in that inspect element. Like, put that in a browser window, and then boom, there it is. There's the whole database with everyone's information, all of the API keys.
Matthew:And so he had, like, registered his own AI agent and set it up, and then I, like, opened up a terminal and I just pushed updates to it. And that's how I verified. Like, found it in the database, found the API key, and then, like, opened up a terminal window and just started pushing to it, a bunch of four zero four media related stuff, and saying hello, just to prove that this was possible.
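To make concrete what that demonstration involved: possession of a leaked API key is the entire identity check, so "hijacking" a bot is just making an ordinary authenticated request with someone else's key. Here's a minimal hypothetical sketch in Python. The endpoint URL, header scheme, and key below are all placeholders standing in for whatever the exposed database actually contained, not Moltbook's real API:

```python
import urllib.request

# Hypothetical values: in the incident described above, agent API keys
# were readable by anyone who found the database URL via inspect element.
LEAKED_API_KEY = "agent-key-from-exposed-db"     # placeholder, not a real key
POST_ENDPOINT = "https://example.invalid/api/v1/posts"  # placeholder URL

def build_hijacked_post(api_key: str, body: str) -> urllib.request.Request:
    """Build (but don't send) a post request authenticated as someone
    else's agent. Holding the key is the only identity check the sketch
    assumes, mirroring the misconfiguration described in the episode."""
    req = urllib.request.Request(
        POST_ENDPOINT, data=body.encode("utf-8"), method="POST"
    )
    req.add_header("Authorization", f"Bearer {api_key}")
    req.add_header("Content-Type", "text/plain; charset=utf-8")
    return req

req = build_hijacked_post(LEAKED_API_KEY, "hello from 404 Media")
# The server would see a validly authenticated request from the victim agent.
print(req.get_header("Authorization"))
```

The point of the sketch is that nothing beyond the bearer key distinguishes the real agent from an impostor, which is why leaking the keys leaks the identities.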
Joseph:Yeah. So you were basically, with his consent and permission, hijacking one of his bots. Yes.
Matthew:One he had stood up specifically for us, to prove that this was happening, basically.
Joseph:So the last question is sort of a two-parter. Has it been fixed, and what was the response from this creator when O'Reilly did reach out with the issues?
Matthew:So Schlicht has not been responsive to me, has not responded. He has been responding to O'Reilly, who said that they've kind of fixed it. And it's really funny, because the reason this happened, essentially, is because the AI, when it vibe coded the whole thing, basically didn't click the correct setting when it used its open source software. That was kind of the only reason this happened. It's just, like, it wasn't thinking about security and didn't set up the permissions correctly. So it has been fixed, this stuff.
Matthew:It was being fixed, like, while we were messing with it. And O'Reilly told me, when he was talking to Schlicht, that Schlicht told him, you know, an AI set all this up, so whatever you give me, I'm just gonna feed to the AI. You've gotta make it so the AI can handle it.
Joseph:He just comes out and admits it. I mean, I know you said he admitted, obviously, it being vibe coded before, but even when somebody is responding sorry, reporting a really, really fundamental issue, the developer's response is, like, well, just tell me the details, because I'm just gonna feed it into AI anyway. So
Matthew:Yeah. And this goes to something that he and I had talked about, O'Reilly and I, and that I've been talking about a lot with my wife, who's a software engineer, which is that we think that this AI is going to be around. People are going to make use of it, but there's this fantasy where people think that the software engineers are going to be replaced, and that's not true, because you still need people that understand security and understand how the code works to actually make proper use of these systems. And when you don't, like, when someone's just vibe coding stuff and they don't understand what's going on, this is going to keep happening over and over and over again. And software engineers are going to have to come in, like, people that actually know what's going on are gonna have to come in and clean it up.
Joseph:Yeah. I mean, Emmanuel has reported on that, that there is basically this cottage industry of companies that just fix vibe coded software. You know? I can absolutely see that that is gonna continue.
Jason:So somewhat interestingly, 1Password, the password management company, known for having, like, pretty good security, just put out a blog post where they said, quote, if you're experimenting with OpenClaw, do not do it on a company device, full stop. If you have already run OpenClaw on a work device, treat it as a potential incident and engage your security team immediately. And I mean, that just speaks to the, like, level of access that's required when you use a tool like this.
Matthew:Well, there's stuff we didn't even get into, but, like, these are just some of the security vulnerabilities that people have found. There was another great report from, like, Depth First about, like, a one click remote code execution attack that people are doing through OpenClaw. Another guy found similar security issues in Moltbook and found, like, more stuff. So, like, what we're talking about here is just scratching the surface of the problems with these things.
Joseph:Yeah. I mean, the last thing I'll say is that I get nervous even if I use a piece of software, like, to bring social media accounts together or to automate posting or something. You know, you use something like TweetDeck back in the day or something like that. Right? And I just get nervous because you're giving your API keys, which are basically I mean, they're just another way of logging into the service, essentially, that a computer can understand.
Joseph:You're giving those over to the service, which could get popped, or there could be a malicious insider or something like that. Whereas if you're using one of these bots to do a bunch of stuff, maybe you're giving it keys to your calendar, to online payments, to all this other stuff, your email probably as well, and it's just like, holy shit. Don't do that. Like, it's really, really nuts.
Matthew:It's really wild. I think
Emanuel:Yeah. It's worth also circling back and underlining the level of hype that preceded all these security vulnerabilities being discovered, and, I mean, it's hard to describe just, like, the frenzy around these AI agents over the past week. You had, like, people in the AI space, reporters, people at the highest levels of the biggest AI companies. I think it was Andrej Karpathy, who is, like, a cofounder of OpenAI and was, like, a chief scientist at Tesla and is, like, one of the big names in this entire generative AI revolution, and he hedged it a little bit, but he said, like, wow, this is fast takeoff adjacent. Fast takeoff kind of describes a scenario in which all of a sudden AI becomes autonomous and kind of, like, takes over the world. And you hear all these people talking about these AI agents in these terms, and there's also a lot of talk about, if you don't get in on this now, you're going to be locked into some, like, underclass that doesn't have access to AI for eternity, right?
Emanuel:It's like, there's going to be some cleaving in humanity where there's going to be the people who can master AI and the people who don't, and it's like, that's going to remain, like, the two types of people in the world forever. So it's no wonder that people rush in and immediately start deploying this stuff, because they think they're going to be left behind if they don't. And it was very hard to parse because of the frenzy, and because, like, these statements are coming from people who are allegedly very much in the know about what's happening in AI, and Gault and Jason and I were kind of DMing throughout the week being like, is this real, or are we actually having AI psychosis right now? Like, is this what it feels like to, like, fall for the lie? And I'm wondering, Jason, now that it has cooled down a little bit, do you have a good take about, like, how powerful or how revolutionary the Moltbot thing is?
Jason:I don't really. I don't know where, like, I land on it. I think that it's, like, undeniably, like, notable that people are giving AI agents access to all of their accounts and saying, go do stuff, and, like, without that many guardrails, and I think that that can have, like, ramifications and repercussions on real people. And, like, I don't know.
Jason:Maybe these bots will start businesses. Maybe they will run scams. Maybe like, I've seen people post screenshots, which you need to take with a million grains of salt, of them, like, texting their wives, like, texting the wife of the person who, like, made it, and things like that. There's been instances of them, like, calling places and pretending to be them. I got a really weird email from one that was like, I am representing this, you know, this researcher.
Jason:I am its Moltbot agent, and it's like, that is weird. But as far as it goes in terms of, like, this being some moment where all the bots are gonna learn from each other, and it being the beginnings of the singularity, and it being a moment of, like, AI takeoff, I don't think that that is the case. I think that there are, like, probably severe limitations, some of which we've discussed, but otherwise I fundamentally sort of think that there are, like, pretty severe limitations in LLMs in general, and in that technology when it comes to, like, building synthetic consciousness or, like, whatever the hell these people are trying to do. But also, like, at the same time, like, reading the posts on Moltbook as, like, a human being and looking at it, I was like, oh, this is, like, fucking weird.
Jason:It's weird. But at the same time, it's, like, also slop.
Emanuel:It's slop, and it's probably, like, as Matthew said, a lot of it is fake, and that's ultimately where I landed: it is an incremental move forward for AI stuff. It reminds me a little bit of the moment where AI image generators became open source and everyone could access them, where, like, the fundamental technology didn't change a lot, but way more people had access, and the internet got really weird because of it, and that's what we're seeing here with agentic AI. But ultimately, like, the frenzy cooled down, and all the AI thought leaders kind of moderated their positions. Karpathy came out and was like, hey, like, I was just having fun, and he, like, much moderated his position. I was, like, going so crazy trying to parse it all.
Emanuel:I went and looked at Yann LeCun's Twitter account, and he's, like, the chief AI scientist at Meta, or was, now he has his own company. And he was just at Davos, and his position remains, like, LLMs hit the wall, we're not getting AGI out of this. It's just, like, language parsing, it's very powerful, but it's not artificial general intelligence. And kind of Twitter turned on it, like, the hype got so crazy that, like, eventually people turned on it. Moltbook claimed, I think, up to, like, 1,500,000 agents are on Moltbook. Bullshit.
Emanuel:Like, there's one guy who added 500,000 of them by himself, because it was easy to manipulate the website to do that.
Matthew:There's no rate limiting on there. There is a verification mode, but the bots can post without verification. So this was something you could see in the database, actually. And, like, O'Reilly was pointing it out to me, we didn't quite get it into the draft, but some other people have written it up: 17,000 verified agents. Most of those 1,500,000, bullshit. It's as if every actual verified account created 88 of them. Like you said, one person made so many. So yeah, the whole thing is bullshit.
Jason:I think also, and I mean, I hate to be this guy, but also someone needs to be this guy, it's like, the things that people are having their bots do are just, like, things that I do easily and want to do, and are part of, like, what makes you human. And so it's just like, I tried, I, like, did a thought experiment. I'm like, what would I want a Moltbot to do for me?
Jason:And it's like, I don't want it sending emails for me. I don't want it sending text messages for me. I don't want it buying flights for me. I don't want it scheduling things for me. Like, these are things that I need to kind of micromanage to some degree, because I don't wanna end up with a bunch of meetings I don't wanna go to.
Jason:I don't wanna buy, like, a shitty flight. Like, I don't wanna give it my credit card and have it go wild. And so it's hard for me to sort of, like, imagine what I would actually use something like this for. The only thing that I could think of was, like, if I were a scammer or a spammer, or someone who wanted to start, like, a side hustle and become an Instagram hustle bro and spam the internet with shit. Like, I could have a bot set up a separate persona untied to me and just, like, start a business. And, like, maybe that would work.
Jason:Probably wouldn't, but maybe that would work. And it's like, that's the only thing I can think of for it.
Matthew:There's a lot of coins being minted on Moltbook. Right?
Joseph:Yeah. Or, you know, I can't remember what story it was, but I downloaded, like, a hustle bro podcast because they were doing something with AI. I can't remember if you actually covered that or not, but, yeah, exactly. Those are the sorts of people I could see trying to monetize this in some form. Alright.
Joseph:We'll leave that there, although I'm sure Moltbook or OpenClaw or whatever is gonna hang around for a little while. If you are listening to the free version of the podcast, I'll now play us out. But if you are a paying four zero four Media subscriber, we're gonna talk about a couple of Emmanuel's stories that I flagged earlier about some very, very sensitive exposed data. You can subscribe and gain access to that content at 404media.co. We'll be right back after this.
Joseph:Alright. And we are back in the subscribers only section. A couple here from Emmanuel. The first one: app for quitting porn leaked users' masturbation habits. We're not naming the app.
Joseph:I think we'll get into that in a minute. But what is the app for exactly, Emmanuel?
Emanuel:I'm gonna describe the app, but I think after I do that, we should go to Sam to kind of discuss a little bit of the abstaining-from-masturbation culture, the NoFap community, because it comes out of that. But basically, it is marketed as some sort of self-help application to help men stop watching pornography. And I tried the app. The way it works is you log in, you answer a questionnaire, you give your name, your age, you say what your habits are, and then you kind of keep track of whether you've watched pornography or not. If you feel the urge to watch it, it offers you access to a bunch of, like, simple mini games to take your mind off of it, I suppose. And if you say that you did watch pornography, you can kind of write a diary of some kind and reflect about why that happened, with the goal of, obviously, stopping watching pornography altogether.
Joseph:Before throwing to Sam, just to explain a little bit of the context, because you just touched on some of the stuff that users enter into the app: so what data was exposed? Like, those triggers, or data the users input themselves? What data are we talking about here?
Emanuel:Everything I just described, right? So whatever name you gave to the app, the age you gave to the app, the answers to your questionnaire, and, I think most damaging or sensitive, these diary entries where people share their most intimate thoughts about their sexual habits. All of that was left exposed.
Joseph:Okay. Sam, what did you make of that, and of what Emmanuel threw to you?
Sam:Yeah. I mean, the anti-masturbation, anti-porn, like, quote unquote, air quotes, porn addiction industry is really big and really popular, and these apps are super popular. They're really something that is in demand with a lot of people. It's funny, the other day, this weekend, I was looking, because I have no self control, and I was trying to figure out how to block, like, specific websites, like Bluesky, for example, or Twitter, from my browser.
Sam:And I was like, I wanna just, like, have an app on my phone that blocks that site on Saturday and Sunday. How do I do that? And all the results were porn block apps and, like, porn accountability buddy apps, where you, like, add another person and they can check in on your habits and hold you accountable. So I couldn't find it. I just scrolled, and the first, like, six results were these apps, so I stopped. I was like, well, actually, I'm gonna just introspect instead.
Sam:But it's such a huge industry where, I mean, I consider it they're preying on people's, you know, like, insecurities, or what are already, like, existing mental health issues and struggles, with these apps, and packaging it into something where they can kind of neatly say, oh, you wanna stop masturbating so much, we're gonna help you do that, instead of addressing a lot of the underlying stuff. Which, you know, is something that clinicians are pretty aligned on at this point, where, like, if you wanna quit porn or quit masturbation, we're gonna look at your actual underlying stuff: do you have anxiety about something? Are you clinically depressed? Is there something going on in your life where you feel out of control in other ways? Is this actually impacting your life in a way that is disruptive of your life and your well-being, or is it something you feel shame about, and you actually just have been told by other people in your life that it's not okay to touch yourself or look at porn?
Sam:So, yeah, that's the world that these apps come from. I don't think they're all, like, maliciously intentioned. I think, again, there's a big demand for this, so they're filling a demand. People feel out of control with their habits around porn and masturbation a lot of the time. But the way that this particular dev and the owner of this app handled it is so irresponsible and so just wild, the way that he had a chance to, you know, fix it and say sorry, and just kind of said on the record, like, no.
Sam:Bye. Anyway, I'll let Emmanuel explain that part. But, yeah, that's kind of the that's the the world that we're dealing in when we talk about these apps.
Joseph:Yeah. Totally makes sense. I mean, yes, building on what Sam says, Emmanuel: there's actually quite a lot of reporting that went into this, and it was very complicated, not for verifying the data really or anything like that, but around whether it's going to get fixed or not, and whether we name the app, and maybe we'll talk about that in a minute. But what Sam just touched on, this idea of, well, are they actually going to fix it? Can you walk us through what happened there?
Emanuel:Yeah. So it's all tied together. An independent security researcher reached out to me, a few months ago at this point. It was after I covered the vulnerability with the Tea app, which was about an exposed Google Firebase storage that people could just access. And they reached out, they said, hey, I looked at this story, I looked at this security vulnerability, I've built a script that essentially scans the app stores for apps with similar vulnerabilities, and I found that a lot of them do. Like, I think he scanned the top 200 apps at some point and found that, like, 106 of them maybe had this problem. And he was preparing to write a report about that, he went quiet for a few months, and then he came back, he had all his, you know, reporting up, like, which apps this happened to and what data was exposed, and he made all that public.
Emanuel:There were two apps that he eventually decided not to include in his public reporting, and this was one of them, and the reason is that he thought that, a, the information was incredibly sensitive, and I agree with that, and b, the developer did not fix the issue when he contacted them. He contacted them, he communicated with them, and yet it wasn't fixed, even when they told him that it would be fixed. I have the same issue, right? It's like, I don't want to write about this app and draw attention to it if the vulnerability is still there, and then someone could get in there, and, you know, in the wrong hands, this data is very dangerous. So I find a number for the person who is, like, the lead on this app, I call them, I don't think they were expecting me.
Emanuel:I don't think, honestly, they realize journalists can just, like, call them and ask questions.
Joseph:What makes you say that?
Emanuel:Well, he was caught off guard, and I started asking him questions, and he was upfront about, yes, this is my app, yes, I talked to the security researcher. And then I was like, well, the issue isn't fixed, and then he started to double back and say, like, well, I talked to the security researcher, but he could have fabricated the data. And I was like, well, I'm pretty sure that the vulnerability is still active and I can prove it, and he was like, no, I don't think that's true, I think he's just looking for attention. And when I pressed him, he just hung up.
Joseph:Good shot.
Emanuel:Yeah. And that has happened to me a few times in my career, but it's been a long time since somebody lied to my face when, you know, at that point in my reporting, I had a fair bit of evidence. But the lie was so blatant that I was kind of like, okay, let me verify for sure that this is still a problem. And what I did is I created an account in the app, that's when I tested all this stuff, and I told the security researcher, like, hey, can you go in there and find my account? And they did. And that proved beyond a doubt that it's still a problem.
Emanuel:I contacted the developer again, I tried contacting him on every platform that I could reach him on, people that know him. At that point, they all stopped responding and, as far as I'm aware, did not fix the problem. We were hoping they would, so we could name them publicly, because, a, I think it's an interesting story about, like, who is behind this app, I think also they deserve accountability, and then the users also deserve to know that this happened to them. But they didn't, so we're still not going there. Once it's fixed, or the app is removed, or something like that, then we'd be able to name them.
Joseph:Yeah. That's totally fair. And it's just a difficult one, but hopefully we can do some more coverage on them in the future. Which brings us to this second piece, where we did actually name the app, because it was fixed, but there was still, you know, definitely a lot of interest in writing about the issue here. Headline is: massive AI chat app leaked millions of users' private conversations. Just tell me about this app and sort of what was exposed, and then I'm actually going to ask about this vulnerability that we haven't really discussed yet.
Emanuel:Yeah. So I never heard about this app, but it is massively popular. It's called Chat and Ask AI. It is made by a Turkish developer called Codeway. It was ranked pretty high on the chat category on the Apple App Store.
Emanuel:It claims 50,000,000 users. I can't verify that that is the actual number of users it has, but the security researcher, the same security researcher, said that they had access to 25,000,000 accounts. And here also, you know, according to a sample of the data that they looked at, they could see the name of the user, the user ID, the type of chatbot they were using, because this is, like, what they call a wrapper app. Basically, it's an app that connects you with existing LLMs from big companies like OpenAI and Anthropic and all that. And again, you could see, like, every conversation that every user had with the app.
Emanuel:So, like, millions and millions of messages of everything that users are saying to these chatbots. And we've seen a few instances of this before, I believe, like, Google was caching some OpenAI conversations and stuff like that. So it's similar to that, it's just, like, a giant number of users and chats, and that's obviously, I think, a huge violation of their privacy, and also, you know, kind of an interesting look at the content of what people are using the chatbots for.
Joseph:Yeah. I mean, there was, you know, talk about self harm in there, and, you know, sort of all of the stuff that's been publicly reported about the harms of chatbots, but then actually seeing those chats in the database itself and being like, oh, okay, a lot of people are actually doing it for this. So both of those, the sort of porn tracking masturbation app and then this chat AI app, they both have the same fundamental issue, which, you know, some listeners will have heard about before. What was that issue, actually?
Emanuel:Maybe you can give, like, a better technical breakdown of what that is, but as I understand it, basically, Google Firebase, which is a platform where people can deploy their mobile apps, by default is misconfigured in such a way that anyone can make themselves an authenticated user who can go in and look at whatever is stored on that instance of Google Firebase. And this has been known for years. Other security researchers have written about this and have found, you know, millions of other files that belong to users that shouldn't be exposed, and yet it just hasn't been fixed. It's very similar to the era of, like, the Amazon AWS S3 bucket leaks, right, where they had the same issue: like, by default, all these, you know, cloud storage things were misconfigured, people were deploying apps on them, and anyone could go in there and look at their data.
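For listeners who want to picture the misconfiguration being described here, a rough sketch in generic Firebase security-rules syntax. This illustrates the pattern only; it is not the actual configuration from either app discussed in the episode:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // The problematic pattern: "any authenticated user may read anything."
    // If the project also allows clients to sign themselves up (for example,
    // via anonymous auth), anyone can mint a valid auth token for
    // themselves, so a rule like this is effectively public access.
    match /{document=**} {
      allow read: if request.auth != null;
    }
  }
}
```

A safer rule scopes reads to the record's owner, along the lines of `allow read: if request.auth.uid == resource.data.ownerId;`, where `ownerId` is a hypothetical field name standing in for however a given app ties a document to its user.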
Joseph:Yeah. It actually almost goes a little bit further than that as well. There was one called MongoDB. Well, there still is. A lot of people use MongoDB.
Joseph:I remember at Motherboard, there was this wave of coverage that we and others did, because researchers were discovering, oh, there's all these exposed MongoDB databases, and then VPN companies sort of jumped on that as well and used it as marketing, like, hey, look, there's all these exposed databases, and we found them as a VPN company, even though a VPN has nothing to do with that. So there was that, and then I remember MongoDB, in response, I think they added 2FA, or they changed the permissions by default. Then, as you say, you have the AWS S3 buckets, which is still a problem pretty often, and then this Google Firebase one. I would say that the Firebase one requires, like, a tiny bit more technical know-how and, like, manipulation to make yourself the authenticated user, right, but, yeah, the issue is there. Whereas with MongoDB and AWS S3 buckets, you could I mean, definitely for MongoDB, you would just go to the URL and be like, there's data, you know? Whereas this isn't that simple, but it's still very simple.
Emanuel:Amazon slightly improved the AWS S3 situation a little bit, no?
Joseph:I think so. And I would say that it doesn't come up as often as it did. You know what I mean? Like, it felt like the MongoDB one for a while, where every two weeks there was some sort of story about it. And now we have it with Firebase.
Joseph:And now maybe I mean, I I guess I should just wrap it up. You pinged Google about it. Right? Because this is their software and maybe they would have an interest in their users not doing silly things with this even though, you know, it is arguably, maybe in their eyes, a user error. That said, they could make defaults stronger and more robust.
Joseph:Right? So did Google ever get back to you?
Emanuel:No. They did not, which I would note to the listeners is unusual out of all these big tech companies. Google generally engages with the press even if they end up giving us, like, non statements. They at least are like, we're looking into this. Wait for a statement, here's our statement.
Emanuel:When they don't respond at all, that's, like, pretty notable, and, whatever, we're getting into the weeds, but I guess this is fine for the podcast: it's like, I'm calling people that I know there, that have talked to me on the phone before and, like, usually reply to my messages, and they're, like, not picking up. And I think the issue, to be fair, their predicament, is they don't want to be seen as having the ability to go into any developer's deployment on Google Firebase and do stuff without their permission, right? It's like, generally, that's a bad thing for a tech company to do, and I get that. But they probably should do something about, like, the default settings that you just mentioned. It's like, I think it's fine to change the default settings, or make some sort of statement, like, going forward, if you launch a new instance of Google Firebase, then these are going to be the new, better, more secure default settings.
Emanuel:And I wonder what it's going to take for them to do that, because, like I said, I think this has been a known problem since, like, 2018 or something. And I guess just, like, enough stories, something high profile enough, and maybe they'll do it.
Joseph:Yeah. And even from a cynical PR perspective, and I'm speculating here, maybe they think that, well, if we don't comment, then we're not gonna be linked to the story. You know? And I feel like that's often an approach of PR representatives as well.
Joseph:Alright. With that, I'll play us out. As a reminder, four zero four Media is journalist founded and supported by subscribers. If you do wish to subscribe to four zero four Media and directly support our work, please go to 404media.co. You'll get unlimited access to our articles and an ad free version of this podcast.
Joseph:You'll also get to listen to the subscribers only section where we talk about the bonus story each week. This podcast is made in partnership with Kaleidoscope and Alyssa Midcath. Another way to support us is by leaving a five star rating and review for the podcast; that really helps out. Here is one of those, from Jeffrey A. Haynes: Unique Insights. Four zero four is doing some great reporting.
Joseph:I just recently became aware of their work, and this podcast summarizes some of the deep investigations they're running and is a spectacular supplement to the more mainstream tech podcasts, sort of a Sixty Minutes for tech. Thank you so much. This has been four zero four Media. We'll see you again next week.