The 404 Media Podcast (Premium Feed)

from 404 Media

How This Trippy Image Started A Massive Conspiracy Theory

Episode Notes / Transcript

This week, Jason explains the conspiracy theory swirling around a trippy stock image that went viral after the White House Correspondents’ Dinner—was it sent here by a time traveler? (Spoiler: It was not.) Then Sam unpacks what’s happening at Arizona State University with a messy rollout of a new AI-powered tool that generates lessons by scraping professors’ lectures without their knowledge. In the second half, for subscribers at the Supporter level, Emanuel gets philosophical with a discussion about the question of machine consciousness and how it relates to a new paper from a Google-affiliated scientist.

00:55 Joseph's Signal Update

Story 1
04:24 “Time-Traveling AI” Conspiracy Breakdown
13:23 Debunking the Image & “Time Machine” Org
19:23 How Fake Prediction Accounts Work
21:50 Public Reaction to Shooting

Story 2
27:06 ASU AI “Atomic” Controversy
31:12 Professors’ Content Used Without Consent
35:00 AI Errors & “Slop” Examples
37:24 Faculty Backlash & Ethical Issues
42:15 Platform Status Update

Story 3
43:58 DeepMind Paper: AI & Consciousness
50:04 Why AI May Never Be Conscious
52:33 Expert Reactions & Criticism
57:57 AGI vs Consciousness Debate


YouTube Version: https://youtu.be/dT8fMfzzfso
Sam:

Hello, and welcome to the 404 Media Podcast, where we bring you unparalleled access to hidden worlds both online and IRL. 404 Media is a journalist-founded company and needs your support. To subscribe, go to 404media.co. As well as bonus content every single week, subscribers also get access to additional episodes where we respond to the best comments. Gain access to that content at 404media.co.

Sam:

I'm your host, Sam. And with me are 404 Media cofounders Jason Koebler. Hello. And Emanuel Maiberg. Hello. Joe is gonna come in.

Sam:

Joe's gonna appear as a ghost in this next clip that we're about to play. He has an announcement.

Jason:

He has a special special message. He couldn't show up, but he has a special message.

Sam:

Joe got some really good impact on one of his stories, so he's gonna tell you about that for the next one and a half minutes. I've not heard this yet, so we'll hear it for the first time right now.

Joseph:

Hi. I can't be there today on the podcast because I'm traveling for some work-related stuff, but I wanted to give you a quick update on a story we covered a couple of weeks ago at this point. The headline of that one was FBI Extracts Suspect's Deleted Signal Messages Saved in iPhone Notification Database. And the issue was that even when people seemingly set Signal messages to delete, or in the case of this specific story, deleted the Signal app, copies of incoming Signal messages were still stored inside the iPhone notification database. So even though the FBI was not able to forensically extract messages from Signal itself, they were captured in this relay point in the notification database.
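
To make the failure mode Joseph describes concrete: the app controls its own message store, but the operating system keeps an independent copy of notification content in a store the app never touches, so deleting the message, or the app, leaves the relay copy behind. A toy model in Python, with invented table names and contents; the episode doesn't describe the real iOS database's schema:

```python
import sqlite3

# Toy model of the relay-point problem. Table names and data are invented.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE app_messages (id INTEGER, body TEXT)")
db.execute("CREATE TABLE os_notifications (id INTEGER, app TEXT, body TEXT)")

# A message arrives: the app stores it, and the OS logs the notification.
db.execute("INSERT INTO app_messages VALUES (1, 'meet at 9')")
db.execute("INSERT INTO os_notifications VALUES (1, 'Signal', 'meet at 9')")

# The user deletes the message (or the whole app): only the app's copy goes.
db.execute("DELETE FROM app_messages")

# A forensic pass over the OS-level store still recovers the content.
print(db.execute("SELECT app, body FROM os_notifications").fetchall())
# [('Signal', 'meet at 9')]
```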

Joseph:

Obviously, we thought this was very, very important. We covered it. After that, actually, a few people sent me different cases where this has come up as well. So it's clearly a tactic that third parties and authorities are turning to, to obtain Signal messages in and of themselves, but also ones that are supposed to be deleted. Well, then just last week, or around about that time, Apple said it has now fixed this issue.

Joseph:

This follows Signal asking Apple to look into it following our reporting. I'll just read a little bit of Signal's statement here. We are very happy that today Apple issued a patch and a security advisory. This comes following 404 Media reporting that the FBI accessed Signal message notification content via iOS despite the app being deleted. There's then a link to the security advisory on Apple's website.

Joseph:

Signal then adds, note that no action is needed for this fix to protect Signal users on iOS. Once you install the patch, all inadvertently preserved notifications will be deleted, and no forthcoming notifications will be preserved for deleted applications. And that's the key part, right? Because there was this very important question of, okay, well, I could turn off notifications for Signal, or I could change it so only the sender appears and not the message content or something like that. But we really had no idea for how long these notifications had been stored, how long they were going to be stored for in the future, and all of that sort of thing.

Joseph:

And in an email to me, Apple also added much the same. It said, this is actually going to purge all of the notifications that have been collected inadvertently, the ones which should be marked for deletion. Now, Apple describes it as a bug, as in this was not intended behavior. They say it's always their policy to delete notifications that are supposed to be removed, but clearly, obviously, that was not happening here. So I just wanted to give you that quick update that this is the sort of reporting that 404 Media subscribers are helping bring about, you know, and now Signal users, and to be clear, probably users of other apps as well that have deleted messages but still had notifications saved.

Joseph:

They don't have to worry about that now. Anyway, I'll give it back to the rest of 404 Media. Thank you so much.

Sam:

Okay. Thank you. Thank you, Joe. Thank you to our foreign correspondent. Okay.

Sam:

So the first story we're gonna talk about is one from Jason. The headline is, Did a Time-Traveling Superintelligent AI Try to Warn About the White House Correspondents' Dinner Shooting? An Investigation. You always know something is gonna be up when we end the headline with an investigation. So, yeah, this is about a very weird meme that's been going around. Meme or conspiracy theory?

Sam:

I don't know which. But, Jason, you wanna just, like, start with describing the image in question here? What are we looking at?

Jason:

Well, first of all, someone, like, mentioned me yesterday and said, this is Betteridge's law of headlines here. And it's like, oh, yeah, no fucking shit. That's the point. That's the point.

Jason:

If you don't know what that is, there's this theory in journalism where anytime there's a question mark in a headline, the answer is no. And it's like, yeah. The answer is no. A time-traveling superintelligent AI did not warn about the White House Correspondents' Dinner shooting.

Sam:

Anytime there's an "an investigation," we're up to some silly shit.

Jason:

We're up to no good for sure. So, basically, in the aftermath of the White House Correspondents' Dinner shooting, which I hesitate to even call a shooting. Like, was it a shooting?

Sam:

Like an attempt.

Jason:

I guess it was like an attempt.

Emanuel:

I believe shots were fired. Right?

Jason:

Okay.

Sam:

But no one was shot, were they?

Jason:

Allegedly, maybe a Secret Service person was shot, but it was, like, wildly difficult to get any information about this despite there being hundreds of journalists there, which we don't need to talk about, but it was hard to find information about what was going on. Anyways, in the aftermath of this, someone, I don't know who, but the community of X found this account called Henry Martinez that had exactly one tweet, from 12/22/2023, so, like, two and a half years ago. And the tweet just said Cole Allen, and Cole Allen is the name of the suspected shooter. And the Henry Martinez account has a Pepe as its avatar, Pepe holding a wine glass. And then it has this header image that I would describe as, like, generic 3D art, psychedelic rainbow art, of, like, a falling stalactite situation, like the interior of a cave, but it's rainbow, make it Technicolor.

Jason:

And this is just, like, a genre of, not AI computer art, but 3D rendering art that is very, very common, and it's very common on stock image websites. And so this was the Henry Martinez cover photo, and it started this conspiracy theory. Should I explain what the conspiracy theory is?

Sam:

Yeah. Please get into it. I guess, like, yeah. So this is all very weird. The Cole Allen thing is weird because it was posted years ago. So, I guess, first of all, what are we supposed to be seeing, and then what is the theory behind what we're seeing?

Sam:

Because it's like a magic eye thing, people think. Right? Like, you're supposed to be able to

Jason:

Yeah. So, I mean, one version of the conspiracy theory is that it's a magic eye. So you can, like, just stare at this, you know, Technicolor image for a while and you'll start to see something. And the thing that some people are saying is that it is, like, a digital representation, like an AI representation, of the iconic photo of Trump after he got shot in Butler, Pennsylvania, with his fist in the air. And that is being used alongside some other information to suggest, in a conspiracy theory that's been viewed by millions and millions and millions and millions of people, to be clear, that a time-traveling artificial intelligence created this Twitter account in 2023 to warn of the White House Correspondents' Dinner shooting and possibly the Butler shooting, like, unclear whether that is part of it or not.

Jason:

But that is sort of the unhinged conspiracy theory that has occurred. And we can talk about, like, why, but that's kind of the long and short of it. There's also, like, a bunch of images where someone does a digital slider and you can see the Technicolor image turn into the Butler image, like, back and forth, back and forth. I gotta say, the way that they manipulated that one is pretty good, but I think you can do that with any image, because that's how these sliders work. Like, they're designed to make one image look like another image.

Jason:

But I don't know. What do you guys think? Like, have you tried the magic eye? Are you able to do magic eyes in general? I feel like I don't have that skill.

Sam:

I can. This reminded me of, like, this little game program that you would project onto a TV when I was a kid, and it was all magic eye stuff. So I would be constantly going up to my TV as a child, getting this close to it, crossing my eyes, and then backing up. And I was like, that's probably why my eyesight is really, really bad today. But I tried it with this and I couldn't. I was like, this is nothing.

Sam:

I don't see anything, but maybe I don't have the specific unlocked alien awareness necessary.

Emanuel:

I can do magic eye very easily, but I'm one of those people who sees the image recessed instead of popped out. Do you know that's a thing? Because it's supposed to pop out, but I can only see it recessed. So everything looks like a mold or something like that. The reason Sam can't magic eye this is because it's not a magic eye.

Emanuel:

Right? Like, the way that you do a magic eye is you take an image and you put it into, like, a pattern that is then offset and, you know, obscured by an abstract image. And this isn't that.
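
What Emanuel is describing is the autostereogram technique: a pattern strip repeats across the image, and each copied pixel is shifted by an amount that encodes depth, so that when your eyes fuse adjacent strips, a 3D shape appears. A minimal text-based sketch with an invented depth map:

```python
import random

WIDTH, HEIGHT = 80, 24  # character grid; real magic eyes do this with pixels
STRIP = 8               # width of the repeating pattern

def depth_at(x: int, y: int) -> int:
    # Invented depth map: a raised rectangle in the middle of the frame.
    return 2 if 20 <= x < 60 and 8 <= y < 16 else 0

rows = []
for y in range(HEIGHT):
    row = [random.choice(".:*#") for _ in range(STRIP)]  # seed strip
    for x in range(STRIP, WIDTH):
        # Copy the cell one strip back, nudged by the local depth. A bigger
        # nudge shrinks the repeat period, which reads as "closer" when you
        # diverge your eyes; viewing cross-eyed inverts the depth, which is
        # the recessed effect Emanuel mentions.
        row.append(row[x - STRIP + depth_at(x, y)])
    rows.append("".join(row))

print("\n".join(rows))
```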

Jason:

It's not because she doesn't have the correct magazine. Yeah.

Emanuel:

But, I mean, there is something about the texture of the image that kind of matches the composition of the photograph. Like, I think that's fair. It's not one to one, but if you look at it side by side, it does kind of maybe look like somebody with his hand up.

Sam:

Similar to a cloud or, like, a constellation. Exactly. I guess we could imagine. It is a weird image. It kinda looks like it moves to me.

Sam:

It's a weird one. If I was on enough mushrooms, it would probably hurt my brain.

Jason:

I can't do magic eye because I got LASIK. And once you do that, you lose your you lose the part of your retina that is able to see magic eyes.

Emanuel:

Nuh-uh. You're making it up. Is that real?

Jason:

I mean, I don't know. But it

Emanuel:

sounds... it sounds like it's... okay.

Sam:

They lasered your ability to see this away.

Jason:

Yeah. Exactly. It was removed. So, okay, back to talking about this.

Jason:

And I swear that there's a reason and a point to doing this story and all of this, other than, you know, it's ridiculous and we're just sort of trying to explain a little bit how the Internet works these days. But basically, the time travel aspect of this story came from, I guess, one, the idea that, well, this guy's name was tweeted two and a half years ago. But the other thing is that, as the conspiracy theory goes, there was an institute in Europe that used this image on a blog post, and that institute is called Time Machine. And so therefore, people are like, well, how much clearer could you get? Like, this image comes from this organization called Time Machine, and, you know, all this is happening.

Jason:

And they do, like, AI research to some degree, which we'll talk about. But basically, it was like, okay. Well, a superintelligent AI, therefore, built a time machine, came back in time, used its powers to make a super vague X account, and tweeted the name of this person and nothing else. But that's not really what happened, of course. What happened is that, again, this image in question is a stock image.

Jason:

It's not an AI image at all. It's a stock graphic that was made by someone called Distinct Mind, and it was uploaded to Unsplash in 2021. Unsplash is, like, a royalty-free stock image library that we actually use all the time. And it is notable in that it's fully free. You do need to make an account, I believe.

Jason:

I'm not sure if you even need to make an account to download anything, but you have access to more things if you do make an account. And so this was uploaded in 2021. And like all stock images, it's been used for all sorts of things all over the Internet. So I did a reverse image search on TinEye, and then I also used Google's reverse image search, which shows you, like, where else the image has been used and when. And it's largely been used on blog posts about psychedelics and psychology.

Jason:

So there was, like, a Medium post by a doctor who went to a ketamine psychotherapy retreat and wrote about that experience and used this image. Someone is selling it on Etsy as a poster. It's used as an ADHD treatment website's image. Someone wrote about the Bible and used it, and then, like, a finance firm used it for a blog post about pricing integrity. And so basically, it's been used, like, over and over and over again.

Jason:

And one of the places it was used, again, this is not the institute that created the image, it's just that they downloaded it and used it for a blog post, is this research organization called Time Machine. And they are funded by the European Union. And I will say that their website is kind of batshit, but kind of in the way that, like, a lot of nonprofits' websites are batshit, especially in this space. And so what Time Machine is, is a European research organization that is working to digitize historic documents and artifacts from the European Union, using technology specifically. So they're using a lot of AI to, like, parse handwritten notes and categorize them.

Jason:

They are doing a lot of 3D scanning, which, like, Emanuel's written a fair bit about, 3D scanning libraries and things like this. But basically, like, using technology to scan the inside and outside of buildings so that you can then upload the 3D models onto the Internet so that they are, like, saved forever. And the maximalist version of this is, like, you can do VR tours of buildings that have been saved in this way. And then the other thing is, I mean, this happened in a quite interesting way where, for Assassin's Creed, there was, like, a very high-res scan of Notre Dame in France. And then after Notre Dame lit on fire, they consulted the 3D renderings from Assassin's Creed to help rebuild.

Jason:

Is that is that true? I feel like that's true.

Emanuel:

The scan is definitely real. I forget, and I would have to look it up, but I do think that they referred to the actual Ubisoft scan of it. I'm not sure for what purpose.

Jason:

Right. Well, I mean, there's basically all sorts of organizations and museums that do stuff like this. And this is, like, the hot thing in archiving now: using technology to categorize all of these old letters from wars, and save 3D models of old clothes before moths eat them, and things like this. And so that's what Time Machine is. They're not building a time machine literally.

Jason:

They are, like, building these interactive experiences about castles and things in European history that you can then go through and look at. And they do talk about using artificial intelligence in some of their research, because everyone in every field is talking about how they can use artificial intelligence now. And we've done a few articles about this. Like, in history right now, it's quite controversial. Like anything else, some are AI maximalists and then others are like, no, don't do this. And so, like, I did an article about the US National Archives wanting to use AI to scan, like, Civil War letters and then transcribe them and categorize them and all of that.

Jason:

And then people pushing back because the AI doesn't always transcribe it right or it gets things wrong. And there's also, like, a bunch of people who are trying to bring dead people back to life by analyzing their letters and then making a chatbot of them or whatever. And so that's the type of thing that Time Machine has researched over the years. And this has been, you know, turned by conspiracy theorists into, oh, they're building a literal time machine, and they used this image, and it was on X, and therefore an AI is warning us about the shooting. It's very stupid.

Sam:

I mean, I think it's interesting, and you note this in the story, but part of why some of these things go viral, right, because this has happened before, where, like, an account will tweet a name years ago, and then that name will become newsworthy. And then everyone's like, oh my god, they predicted this. I didn't actually realize that this is, like, a game for some accounts. They actually just are blasting random names all the time.

Sam:

And then one tweet will hit, you know, years later.

Jason:

Yeah. I mean, to the extent that anyone cares about this at all, that's, like, the most interesting question here: what is this account? Why did it tweet the name Cole Allen in December 2023, and what was the mechanism by which it happened? And so, I mean, we don't know. We don't know exactly how this happened.

Jason:

But in the past, there have been X accounts that tweet, like, yeah, random names or, like, specific outcomes, and they will be a private account, and so people can't see what they're tweeting. And then something happens in the real world and they delete all of the tweets except for the one that makes it seem like they have psychic powers and have predicted something. And so, like, a very high-profile example of this was during the 2022 World Cup. There was an X account that seemed to have predicted the winners and correct scores of a bunch of games. And that's exactly what he did: he had, like, a private account.

Jason:

He tweeted, like, hundreds, thousands of different potential outcomes of games and then deleted the wrong ones. And then by the time people had discovered it, they were like, whoa, this person is a freak. And I guess, as an interesting side note, one of the reasons why it's like this is because Elon Musk made it a lot harder to archive tweets. He killed, like, quite a lot of the archiving tools because he made the API access to Twitter really, really expensive. And so a lot of the, like, firehose of raw data of tweets, people don't have it anymore, because to have that costs, like, hundreds of thousands of dollars a month.

Jason:

And so all these, like, free archiving tools and things are gone. And so it's pretty much impossible to know what happened here, because all of the tools that we would use to figure out what happened are no longer in existence.
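
The scheme Jason describes is mechanical enough to simulate. A rough sketch with invented teams and scores; the point is just how cheap it is to look psychic when nobody can see your deleted posts:

```python
import itertools

# The "psychic account" trick: privately post every plausible outcome before
# the event, then delete everything except the guess that came true.
teams = ["ARG", "FRA"]  # invented example
scores = range(5)

# Step 1: before the match, queue up every plausible result.
posted = [f"{winner} beats {loser} {s1}-{s2}"
          for winner, loser in itertools.permutations(teams, 2)
          for s1 in scores for s2 in scores if s1 > s2]

# Step 2: the match happens.
actual = "ARG beats FRA 4-2"

# Step 3: delete every wrong guess; the lone survivor looks like prophecy.
timeline = [t for t in posted if t == actual]
print(f"posted {len(posted)} guesses, kept {len(timeline)}: {timeline}")
```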

Sam:

Awesome. Love that. Yeah. Thank you, Elon, once again. Very interesting.

Sam:

Okay. Well, I guess we solved the conspiracy here despite not having the gene that can see it. So good for us.

Jason:

There's a lot of other conspiracies about this, and, like, a lot of people think it was just, like, a fake shooting and things like that. And it's like, we don't need to get into it. But I think, very broadly speaking, it says something about the level of trust in this administration and then also just, like, what's going on on the Internet. And I think also people's general exhaustion with shootings and with this administration and all of that, where it's like, yes, this happened on a weekend.

Jason:

I feel like no one cares. Alright.

Emanuel:

I was gonna ask how you rate the level of interest in this. I would say, and this is not to trivialize what could have been really horrible violence, but it seems like low interest in an attempted assassination at a public event filled with media.

Jason:

Pretty strange. Yeah. It's not just low interest, like, on the Internet. Like, I see people talking about how they don't care, and I see, as predicted, like, tons and tons of takes from people who were in the room being like, here's what happened, and here's how I reacted, and all of that. And it's like, I don't know.

Jason:

We knew that was coming. Like, that's just sort of how these things work. It's like, if something happens at an event that a lot of journalists are at, the journalists are going to write about it. But I have talked to people in my life, being like, oh, did you see the shooting or hear about it? And the instant reaction from people who I believe are, like, smart is like, oh, that was designed to make Trump's approval rating go up, slash, like, I don't care.

Jason:

I'm moving on with my life.

Sam:

The popularity of the idea of it being fake is really interesting to me, because I was at the gas station near my house, and this guy was talking very loudly about how it was fake, and who cares. He was reacting to a New York Post that was on the newsstand, but he was just like, this is all fake. And it's like, me and my partner were getting coffee, and I had read about it that morning. And I was like, did you hear about the shooting at the White House Correspondents' Dinner? And he was like, and then we just got coffee.

Sam:

And it was like, it just doesn't register anymore with anyone, which I think is a really sad indictment of the state of things. But

Jason:

I was watching the NBA playoffs with a friend at his house, and ABC, I guess it was ABC, switched from the playoff game to

Sam:

They cut in.

Jason:

They cut in, but they also put a little, like, thing, like, if you wanna keep watching the game, switch to ESPN2. And we watched the, like, news conference for, like, thirty seconds, and we're like, oh, he's talking about the fucking ballroom, like, put the game back on. And I've seen a lot of people who are like, yeah, we just kept watching the basketball game, because, like, this was great. Like, this is ridiculous.

Sam:

I mean, that's part of it too. Right? It's like, Trump, another attempt on his life, not the first, and he's just like, yeah, I think we should remodel. It's like, okay.

Sam:

So I guess we can go back to not giving a shit about this.

Jason:

The ballroom is, like, the thing that the administration has been talking about coming out of this. And, also, like, we don't need to get into it, but the White House Correspondents' Dinner is not gonna be at the White House ballroom. Like, that's not where it is supposed to be. Who knows where it's supposed to be at this point? But it's, like, theoretically an independent-ish thing that is not, like, the president's event.

Jason:

And so I don't know.

Sam:

Anyways And he's like, I guess if this place was less of a dump, people wouldn't try to shoot it up. It's like, what are you talking about? Anyway okay. Well, shall we move on to our next story after these messages?

Jason:

University professors disturbed to find their lectures chopped up and turned into AI slop. Samantha, this is your article. It's quite hot. Like, you really hustled on it. You got it out quickly.

Jason:

There's a lot going on here. What happened at Arizona State University?

Sam:

So I think we're all at this point familiar with the idea, and you wrote about this very recently, the idea and the phenomenon that's happening where, like, so many of people's workplaces are getting AI shoved on them. It's like, you must use AI. We're gonna be implementing AI whether you like it or not. And usually that comes with a bunch of messaging. You know, leadership or administration will be like, we want you to use this tool.

Sam:

We're excited about it. You know, it's in beta. Here you go. Try it out. Let us know what you think.

Sam:

That's usually kind of the way it goes. Not always, but I think typically in the cases that we've seen where an employer is like, AI time. This is very different. So I'm pretty sure a lot of people found out about AI Atomic, which is the name of ASU's, Arizona State University's, new platform for AI learning, through a post from Chris Hanlon, who is a professor of US literature at ASU. And he posted on Friday, April 21, I think it was, last week, where he says it's a subscription platform that claims to offer customized learning modules for fee-paying users.

Sam:

None of the ASU faculty whose course materials were harvested for the module which I generated were aware that their image, lectures, lessons, or other teaching materials are being used. So at this point, Chris is, like, kinda raising the alarm about, hey, there's this new platform, no one knows about it or what it is, and it's using all of our, meaning, like, faculty, materials. So Atomic is, and we can kinda just go through what the actual platform is.

Sam:

But I signed up immediately for an account. I wasn't really sure if it was a public beta, but I signed up with my personal email and it worked. And it is a chatbot at first. It's a prompt, and you put in there, like, I put in, I wanna learn about AI ethics, because I was like, I'm gonna find some good sources this way.

Sam:

And then it takes you through basically a quiz. So the chatbot's like, how long do you have per week to devote to learning about this topic? You can say thirty minutes, an hour a week, just ludicrous, stupid. But you can say, like, several hours a week, unlimited time. How fast do you wanna learn about this?

Sam:

And it'll say, like, as fast as possible, which is what I chose, because I was optimizing for, like, the most ridiculous content. And it kinda asks, what part of AI ethics do you wanna learn about? Asks you a bunch of questions about the topic. And then eventually it says, okay, we're generating you a module, and it'll take a few minutes. And what it spit out for me was a series of sections.

Sam:

So I think it was seven or eight sections. And each of the sections includes, like, a subhead, and it was like, AI's impact on ethics, bias, and responsibility. Ethics and responsibility in AI. But then several of the subheads in the sections are repeated. So ethics and AI is section one, section two, and three.

Sam:

So, anyway, it's just auto-generating these sections. And then in the sections is just AI-generated text, like, walls of text, breaking down concepts based on lectures pulled from ASU professors' content. And we can get into how that stuff actually got there, but that's basically what it is. And then between the walls of text in the sections are really short, like, forty-second to two-minute, I think is the longest one I saw, clips from professors' lectures, but they're totally out of context. They're video, like, seminars, and then no names.

Sam:

It's like, who is in these videos? No direction back to, like, what the course is. No references. No credit whatsoever. So that's ASU's new tool.

Jason:

So, I guess a few things here. One, nominally, the purpose of this is, like, hey, you're an ASU student, you're in this class, and you need to study for the class.

Jason:

And we are going to be, like, your AI guide to doing this. As in, we're gonna take the lectures that you're supposed to watch off of Canvas, which is the, you know, online learning portal thing that is used across high schools and colleges all over the country. We're gonna take, like, the video lectures that your professor has recorded, and we're going to make them shorter for you. We're gonna give you, like, the TLDR of them, more or less. Right?

Sam:

I think, I mean, I think that is one way it could be used. As far as I can tell from the way that it's marketed, it's like, you can't get course credit from doing this. It says it in the FAQs that it's not for credit. I think it says yet. It kind of implies that maybe someday, kinda leaves that open.

Sam:

But I think it's mostly for a public-facing non-ASU student to use to take advantage of ASU's, the stuff that it already owns. So it has this

Jason:

expertise and all that.

Sam:

Yeah. And it charges a subscription fee. So they took down the sign-up page after I reached out, basically, and I think other journalists were also reaching out about this. But within a couple hours, the sign-up page was gone. I had a twelve-day trial.

Sam:

I put my credit card information in there, which was probably not a good idea, but I'm on a trial, and I think after the trial is up, it's $5 a month, something like that. I remember thinking it was really cheap, and this is not, you know, commensurate to the quality that it should be giving you. I started doing it, and I was like, oh, yeah.

Sam:

This is not worth more than 50¢ a month. So I think it's for people outside of ASU to experience ASU. Like, to kinda audit a class almost. And it's very customized. So it's like, if I just really wanna learn about, like, because some of the topics were like, I wanna learn about how to manage my finances.

Sam:

It was like very normal stuff that you might just wanna learn about from the Internet.

Jason:

Got it. Okay. Yeah. I guess that's semi-interesting, because, like, for years, I feel like Ivy League schools at first, and now probably more universities, have been doing MOOCs, which are massively online, actually, what do they stand for? Massively online college?

Sam:

Oh, massive open online courses.

Emanuel:

It was huge when I was, like, in college. MOOCs were a big thing.

Jason:

Yeah. It was basically like, oh, watch the lectures from Computer Science 101 at Princeton or whatever. Stanford did a bunch of them. So this sounds like something similar, except, like, Masterclass slash we're gonna make it shorter, and you'll be able to learn all this stuff very quickly. Which is why I asked that question, because then it's great that, you know, it will create these modules for you and then just throw in a bunch of random shit. Like, basically, you talked to, you know, a professor who had some of her course material pulled into this, like, AI ethics module that you had created.

Jason:

And it had pulled her from, like, a class from years ago that was, like, totally unrelated.

Sam:

Yeah. It was a class from 2020. She's a film studies professor. And she told me a student had asked in the class, hey, can we talk about AI?

Sam:

And just kinda, like, can we actually add this to the course and talk about it a little bit? Because this was in 2020, and it wasn't really a big deal yet, like, the way it is now. Generative AI was not what it is today. It was a big deal, but it was like, oh, this thing is coming. So she kinda just threw in a couple slides about, like, defining AI and machine learning.

Sam:

And she told me for this story that she would not define it that way today. It's not really an up-to-date definition. And also, it wasn't part of a class about AI at all. It's not her expertise, but they threw in this, like, one-minute clip from her class. I assume because they were probably scraping Canvas and looking for just things that mentioned the words AI and ethics or, like, a definition of AI.

Sam:

They were kinda slotting it in that way. And hers just cropped up to the top as far as, like, a succinct version of this. It kinda fit into that minute that it was probably trying to hit, but this is all assumption. I don't really know exactly how it works, other than it's definitely scraping from Canvas, like you said, which is their learning management system. And then, as for what ASU owns, one of the professors told me that Canvas is very clear about, like, if you're putting stuff on Canvas, we have usage rights.

Sam:

If you're working at ASU, they're very clear about, like, intellectual property rights. It's like, ASU doesn't solely own the rights to that content, but it does own the ability to do what it wants with the stuff that you're teaching on Canvas.
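
Sam is explicit that this is guesswork, but the guessed-at pipeline, transcribe everything on Canvas and then keyword-match short clips against the learner's topic, is easy to sketch. Everything below is hypothetical, including the names and data; it is just one way a system could surface exactly the kind of decontextualized film-studies clip she describes:

```python
from dataclasses import dataclass

# Hypothetical sketch of a clip-slotting pipeline; nothing here is from
# ASU's actual system.
@dataclass
class Clip:
    course: str
    start_s: float
    end_s: float
    text: str  # ASR transcript; errors like "x-riscus" ride along untouched

def keyword_overlap(clip: Clip, keywords: set[str]) -> int:
    words = {w.strip(".,").lower() for w in clip.text.split()}
    return len(words & keywords)

def pick_clips(clips: list[Clip], topic: str, k: int = 3) -> list[Clip]:
    keywords = {w.lower() for w in topic.split()}
    # Highest keyword overlap wins, regardless of course context, recency,
    # or whether the transcript is even accurate.
    return sorted(clips, key=lambda c: keyword_overlap(c, keywords),
                  reverse=True)[:k]

clips = [
    Clip("Film studies, 2020", 301.0, 355.0,
         "a quick definition of AI and machine learning"),
    Clip("Intro lecture", 120.0, 180.0,
         "the x-riscus community worries about existential risk"),
]
for c in pick_clips(clips, "AI ethics"):
    print(c.course, c.start_s, c.text)
```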

Jason:

I mean, that's true. And I guess the professors that you spoke to didn't raise that as, like, the main thing that they were upset about. But a lot of, like, newspapers and media companies have been training these AIs on their reporters' writing. And then, like, there was a story about McClatchy, for example, which is this big newspaper conglomerate. And McClatchy was, like, basically generating stories, or proposed to generate stories, using AI and then putting writers' bylines on them even if they didn't, like, agree to that.

Jason:

And so, I mean, I think that what's happening now at ASU, from what I can tell from your story, is that it is, as you described, chopping their lectures up and decontextualizing them, and that's super annoying. But it's not taking their lectures and generating new AI using, like, a deepfake of the professors and, like, their stolen voice and things like that. Hopefully, it doesn't get to that point, but it would be easy to imagine them doing that. Like, that is totally within the realm of possibility for, you know, either ASU or for, like, another tool like this.

Jason:

As it stands, it seems pretty useless at this point from what I can tell.

Sam:

It's really bad. It's worse than useless. I mean, the first section of the first module that I created was pulling from a lecture, and the professor had a pretty heavy accent. And he was saying x-riskers to describe a certain type of people who feel a certain way about the risk of AI. So, like, again, I don't even know what the context of the lecture was, because it was so short.

Sam:

But x-riskers was what he was saying, and he even wrote it on the board and spelled it that way. And then Atomic went in and transcribed that as x-riscus. So it was r-i-s-c-u-s, and then it kept referring over and over to x-riscus, the x-riscus community, again and again throughout the module. And then it did a quiz about the x-riscus community. And it's like, if I'm learning about AI ethics for the first time going into this, I have no idea what this topic involves.

Sam:

And I'm reading this, and I'm like, oh, there's a community of people called the x-riscus community. And I go tell my friend, like, have you heard of the x-riscus community? It's like, what? It's just making shit up based on really bad transcription.

Sam:

It also did that with Chris Hanlon, the professor who tweeted, or Bluesky posted, about it, where he was teaching about Cleanth Brooks, who's a literary critic, and it transcribed it as Client Brooks and then referred to Client throughout the module. Like, not only did ASU not alert anyone to this, didn't send that email that I referred to earlier, the one that's like, hey, you know, we're doing AI now, get on board, it didn't even tell them.

Sam:

It didn't ask for permission to use the lectures, because it owns them. Technically, legally, it doesn't have to. But it also didn't take the time to look through the modules and what was being created, because they're highly customized. Everyone's getting a different one. If I went and generated a different module about ethics and AI, it would give me something totally different. So there's no fact-checking and no references out to anything else.

Sam:

So you're completely in this closed environment, learning about things that may or may not even be factually accurate or spelled correctly, and it's not telling you, go here to learn more, or here's a professor's full lecture, or here's their name, even. There's no names on anything. So a lot of the professors I talked to weren't saying, I want to be compensated for my work, because this is an ownership issue in that way. They were saying, this is an ownership issue because I took the time and effort to create and build that curriculum and that module. And I care a lot about my students understanding these topics, especially when we're talking about ethics and things like that.

Sam:

I want them to grasp these topics in a way that I've crafted, because I took the time to learn about it, and it's insulting and dehumanizing to just turn that into this dog food slop, which is what ASU has done with Atomic. And that was what really bothered most professors. And it was honestly very sweet hearing them talk about their students and how much they cared about what their students were learning, and how frustrated they were that this was done completely in the dark, without their knowledge, and also butchered. Because for a professor, and for someone in academia, you're not getting paid amazingly.

Sam:

So your reputation is what matters, and your ability to speak on something as an expert is what matters. And this AI is just ripping clips out and turning them into fodder for bad information, which I think really is what frustrated most of them. But, yeah, I think ASU knows that it's in hot water at this point, because they took down the sign-ups. But

Jason:

Yeah. That's what I was gonna say. I guess, like, let's end it with whatever the status of this thing is. I mean, it's, again, a beta test. It seems like you can't sign up anymore.

Sam:

Wait, I can now, today. I guess I'll play with this more today. The sign-ups were a 404 page yesterday, and today they're back.

Sam:

So I'll make another account and see if I can keep messing with it, but they haven't replied to any of my emails. I emailed, you know, the people that are involved, president's office, communications office, no one has replied. I think they're probably doing some serious crisis management at this point.

Jason:

If something similar is happening at your college, hit up Sam.

Sam:

Yeah. Definitely. I think this is happening elsewhere pretty rampantly. So, yeah. And you might not even know it.

Sam:

So go digging around, I guess.

Jason:

Should we leave that there?

Sam:

Yeah. When we come back, we're talking to Emanuel about some philosophy. Speaking of ethics: consciousness.

Jason:

That's in our subscribers-only section, which you can gain access to by going to 404media.co and clicking subscribe. Okay. We are back in the subs-only section. Emanuel, Google DeepMind paper argues LLMs will never be conscious. Can you first just tell us what DeepMind is real quick?

Emanuel:

DeepMind used to be a more independent entity. It was folded into Google, but the short version is that they are one of the leading AI labs, what people call the frontier labs. It's Google's leading AI effort. They make the models that, you know, regularly beat benchmarks and are one of the serious competitors to OpenAI.

Jason:

And I feel like over the years, Google has both benefited from DeepMind's existence, because it's a very well-respected AI research laboratory, but then also, when DeepMind publishes things that Google, the entity, doesn't like, Google, like, gets mad and is like, that's independent, that's not what we think. Which I say just because, I don't know, this paper is sort of arguing that LLMs perhaps are not a path towards superintelligence and consciousness. So what's the paper arguing, basically, like, why LLMs will never be conscious? What is the argument being made here?

Emanuel:

Yeah. I mean, sorry, just to your point real quick about the frequent divergence between things that people at Google DeepMind publish and Google's general narrative on AI or other topics: it has come up. While I was researching this article, I was referring back to a lot of stories we did at Motherboard where someone notable at DeepMind would publish a paper, we would reach out to DeepMind for comment, and they would say, we have nothing to do with this.

Emanuel:

This does not represent us. But either the single author or the two authors of the paper, their only association is working at DeepMind, and sometimes the paper is published on something with the Google DeepMind letterhead, and that's always a funny, annoying situation. I'm not sure why it happens. I think there are two things. One is, it is a research lab, and I think it's good and healthy to allow people to pursue research that does not fit the narrative that the company is trying to push.

Emanuel:

But also, if you recall, I don't know if this is still the case at the company, but Google used to have a thing where, like, 20% of your time, you were required to spend it on something that was, like, your own personal passion project, and that's how Gmail was born, and all these other, like, huge products. So possibly it is some vestige of that practice, but I'm not sure. Anyway, to the paper itself. The title of the paper is The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness. And I think it's a philosophy paper.

Emanuel:

It uses a lot of jargon. It's very complicated. But the argument boils down to what I think is kind of a common sense argument, and that is that LLMs will never be conscious because LLMs at some point in the process always have to rely on a human being that is organizing the data and labeling it in a way that makes it functional for the computational system. And the titular abstraction fallacy here refers to the fact that we have a bunch of computer scientists, they're building software, they're building neural networks, they're building these large language models, and these systems are very good at manipulating symbols. You end up with systems where you can have something that looks like a conversation, or you can ask it a question and it will provide you an answer that is correct and legible most of the time.

Emanuel:

And that gives us the illusion that that is something resembling intelligence, but it reflects the bias of the people who are making the system, where we reduce the idea of consciousness to simply the manipulation of symbols. But again, the only reason that the system is able to do that is because somewhere in the production of that system, there is a human being that is like, this image means that, this word means that, or, like, these words go together. And it's like, as we've reported many times, that is the purpose of training data, that is the purpose of data labelers, right? Jason was just talking about the people, the low-paid workers in Africa, who do all this labeling, and we have yet to see a system that doesn't rely on a human being in that sense. The argument is further explained, and this is where I'm gonna get into trouble.

Emanuel:

But you might wonder, like, well, why is that? Why can't a computational system do that? And I guess I don't want to get too far into it, because it's very complicated and I'm probably going to do a poor job of explaining it, and people might be familiar, it's not an entirely new argument, which we can get into. But the philosophers that I talked to explain that, basically, in order to have consciousness, in order to have this ability to make meaning, you need to have the intelligence embodied in some way, and have, like, body connected to mind. Your genes build your body, and your genes give you this inherent will to survive. And it's this inherent will to survive that makes your consciousness do things in order to survive and requires you to make meaning of your environment. And it's the interlinking of this physical system and this symbolic system, like the logic or whatever, that consciousness relies on.

Emanuel:

And so far, we haven't seen any sign of a computational system that could do that, or, like, an embodied computational system that can do that. I spent, like, hours talking to people about this for the story, and we went on this whole tangent about, like, well, what if you put an LLM in a robot's body and kind of give it the will to survive? And that doesn't work either, for reasons that people who are interested can read the article and follow the links to see, but I think it's beyond the scope of this conversation.

Jason:

One of the interesting things is you talked to Johannes Jaeger, Mark Bishop, and Emily Bender, who are philosophers slash AI researchers, or a mix of all of the above. And pretty much to a tee, they all said, like, yeah, we know, this is old, slash, like, this person didn't necessarily cite previous work on this subject, etcetera. Beyond what you've already said, what was their vibe in terms of, like, the novelty of this paper?

Emanuel:

Yeah. So the way I found the story is, the paper started making the rounds on the generally kind of AI-boostery subreddits, like AGI and Singularity and things like that, and people were saying, like, wow, this is the best argument I've read for why AI will never be conscious. Those people are not usually convinced by arguments like that. So I was like, okay, what is this? And then I saw that it's coming from Google, which is interesting, because that's not generally the company's line on this question. And then, when I was just googling around who this is and who else was talking about the paper, I ended up on a LinkedIn thread that was originally posted by the author, Alexander Lerchner, I believe it is.

Emanuel:

And it was a pretty polite and lively comment section under his post, by a lot of PhDs and philosophers and evolutionary biologists and things like that. But the tone was very much like, good job, but it would be great if you, you know, read some of our work that explains all of this and builds on basic arguments that people have been making since the 1920s. That was the general tone of the conversation, and I reached out to those people and talked to them. And like I said, I spent most of the time begging them to explain the argument to me in plain English, and then going off on various philosophical tangents.

Emanuel:

But, you know, they said, like, hey, it's good. It's a solid paper. It's a solid argument. We're glad that this argument is coming from one of the leading AI labs, because it's important that people be aware that this is kind of the consensus in other disciplines about consciousness. But it would be better if this person cited all the work that we did already.

Jason:

Which, I mean, I think it is relevant and important, and, like, good that you wrote about this and good that DeepMind is publishing something like this. But I do feel like similar arguments have been made many times before: that LLMs may become very capable, and they may become capable of mimicking something that appears to be intelligence, which is, like, you know, we get so many emails from people who think that they found sentience in ChatGPT or whatever. But that is different from achieving true consciousness and true sentience, because of the way that they were trained, because of all the things that you mentioned and that this paper gets into. So I guess the last thing is sort of, like, Google's response to this, slash, why did Google let him publish this paper? It seems like the paper is still online, but Google has distanced itself from it to some degree.

Emanuel:

Yeah. So, like I said, in the past we reached out to Google about stuff like this, and they implored us to not associate the company with the publication of the paper, which is ridiculous and not something that we really entertained. In this case, I didn't get a response, but I did see that the actual PDF that was uploaded to PhilPapers, which is a repository for philosophy papers, was changed. It used to have Google letterhead on it, and that's no longer there. There was also a disclaimer at the bottom of the paper that says, you know, the findings in this paper, the conclusions in this paper, do not represent the employer of the author.

Emanuel:

It's completely independent. That disclaimer is still at the bottom. It was also moved up to the top of the paper, to be, like, right under the headline. But other than that, we haven't heard anything from Google, and we really don't know why Google would let someone do this, other than it being good and chill about letting one of its senior scientists publish something that he thought was important, and not interfering and, like, censoring him. To be really explicit about it, the reason that Google wouldn't theoretically want this kind of argument out there being made by somebody at DeepMind is that the CEO of DeepMind, as recently as last week, was talking about how AGI is definitely coming.

Emanuel:

It's going to have an impact that is 10 times the size of the impact of the Industrial Revolution, and it's going to happen 10 times faster, right? And if Google is one of the key players in this, you know, giant thing that is going to happen in the economy, that obviously is very good for Google, it's very good for the shareholders, it's very good for the value of the company. And there's a contradiction between the ability to build AGI and the idea that a computational system can never achieve consciousness. I will say that at the very end of the paper, he makes the argument that, yeah, there will never be consciousness, but in all other relevant ways, AGI is still possible. And after having these long, very abstract, heady conversations with philosophers, I asked all of them: when he says that, when we talk about a computational system not having consciousness, is that, for the average person, a distinction without a difference, right?

Emanuel:

Is this, like, a technical philosophical argument? Like, is this system really able to do anything a person can do, cognition-wise, and you're just making some sort of ethical or moral or philosophical argument for why it's not truly conscious? And they were all very adamant that it's like, absolutely not. This is a very pragmatic difference that we're talking about, where, if you don't have consciousness, you are not able to make meaning, and therefore the system will not be as capable. And it was actually Mark Bishop who said that, you know, Jason, you and I recently spent some time in Waymos, and we were very impressed with their ability to drive, but there's a lot of things that they still can't do, right?

Emanuel:

And as far as I know, no self-driving car is allowed on the highway or in certain road situations and things like that. And I think Elon says that in order to do that, in order to have what they call level five autonomy, you need AGI. And so the people that I talked to say we're basically never going to get there, right? At the moment, there is no path to get to that level of autonomy and that level of AGI that's going to have this imagined gigantic impact on the economy. The paper doesn't necessarily agree with that, but the researchers I talked to disagree with that part of it.

Jason:

Mhmm. Mhmm. I feel like this is the most philosophical we've been on the pod. No? Perhaps not.

Emanuel:

Dude, I was talking to you guys as I was doing this, and it was very challenging, because it's like

Jason:

because you're like, these guys are so dumb, they don't understand what I'm saying.

Emanuel:

No. I'm dumb. I'm dumb. I was talking to those guys, and I was like, can you please explain it to me, like, in plain English? And then he would talk for fifteen minutes, and I'm like, dumber than that.

Emanuel:

Like, can you please make it dumber than that? And it's just hard. Like, it's hard to delve into this stuff. Which leads me to the final point that I'll make: one of the important insights of the paper is that this fallacy that he talks about is a result of a bias that comes from seeing the world as software, seeing everything as software, right? Like, they're software engineers. They think that everything can be turned into software.

Emanuel:

Software is eating the world, right, and all of that. And everybody said, like, hey, they would benefit, and humanity would benefit, if there were more disciplines involved in the building of these tools. And it was actually Emily Bender, who Sam recently had on the podcast, who said, well, some of those people used to be there. She's a linguist. And when there were linguists at Google and they wrote about how LLMs are really limited and problematic, they were the first people who were, like, cleansed from the project.

Emanuel:

So it's not a coincidence that we ended up with this tunnel vision on software. It's very much by design.

Jason:

Yeah. Yeah. I guess the last thing I'll say on this, like, why did Google allow them to publish this: there is a bit of a history of big tech companies having these research arms and then there being research that comes out that the companies don't like. And, you know, squashing that research has led to bad outcomes. Allowing that research to be published has, you know, maybe been embarrassing, but I think they're probably doing a calculation like, well, the drama associated with us squashing this research is probably worse than it just coming out and us pretending it doesn't exist.

Jason:

Maybe.

Emanuel:

I mean, that's definitely what I would do. Like, if my job was to minimize the fallout from something like this, the worst thing you can do is try to remove it. That would really give the argument more weight than it already has.

Jason:

Should we end it there? Yes.

Sam:

Yes.

Jason:

It's ended.

Emanuel:

End it. End it.

Sam:

As a reminder, 404 Media is a journalist-founded, subscriber-supported company. If you wish to subscribe to 404 Media and directly support our work, please go to 404media.co. You'll get unlimited access to our articles and an ad-free version of this podcast. You'll also get to listen to the subscribers-only section, where we talk about a bonus story each week. This podcast is produced by Alyssa Midcalf.

Sam:

Another way to support us is by leaving a five-star rating or review for the podcast. That stuff really helps us out. This has been 404 Media. We'll see you again next week.