The 404 Media Podcast (Premium Feed)

from 404 Media

AI Companies Are Opting You In By Default


Episode Notes


Transcript

Over the past two weeks we've had a ton of stories where AI and other companies have opted users into data collection and processing by default. What the hell is going on??? They're all doing it at once! Jason starts us off with how Udemy created an 'opt-out window'. If you missed it, you're out of luck until next year. Then after the break, Sam and Joseph discuss similar stories with PayPal and LinkedIn. In the subscribers-only section, Sam talks about how a woman was essentially trapped in a driverless Waymo.

YouTube version: https://www.youtube.com/watch?v=Mdk5QhkJrCs
Joseph:

Hello, and welcome to the 404 Media Podcast, where we bring you unparalleled access to hidden worlds, both online and IRL. 404 Media is a journalist-founded company and needs your support. To subscribe, go to 404media.co, where you'll get unlimited access to our articles as well as bonus content every single week. Subscribers also get access to additional episodes where we respond to their best comments. Gain access to that content at 404media.co.

Joseph:

I am your host, Joseph. And with me, it's a miracle, are all of the 404 Media cofounders. The first being Sam Cole.

Sam:

Hello.

Joseph:

Emanuel Maiberg. Present. Jason Koebler.

Jason:

Dude, I can't believe it. When is the last time we were all on a pod together?

Sam:

It's been, like, a month at least.

Jason:

It's been at least a month, easily.

Sam:

Pre-anniversary.

Joseph:

Yeah. And speaking of anniversary, we are going to record our one-year anniversary podcast special for paying subscribers only. We will get to that. It's just a lot of stuff. A lot of stuff came up, but we always appreciate your continued support, and we'll get to that soon.

Joseph:

Breaking down what it's been like running a new independent tech investigation site for the past year, what we've learned, what's coming. I'm saying all this, but we haven't actually planned the podcast yet, so maybe it's gonna be completely different. But I'm sure it'll all be good.

Jason:

We'll also talk about why it's so late, and how the snake bite's been, health-wise.

Joseph:

Not literally, but nearly.

Jason:

Yeah. I was bitten by a snake.

Joseph:

Oh, there you go. Okay. Alright. But for this episode, we've got, like, a few stories about opting in, opting out, the lack of the opt-out. I don't know.

Joseph:

A lot of opting in various different directions. And the first being one written by Jason: massive e-learning platform Udemy gave teachers a gen AI opt-out window, and it's already over. Very interesting words, "opt-out window."

Joseph:

Haven't really heard of that before. I guess, first of all, Jason, just what is Udemy for those who don't know?

Jason:

Udemy is this online class platform. Basically, they call themselves an e-learning platform, but essentially anyone can become an instructor there and upload classes, and then you pay for the classes, more or less, and there's, like, a rev split. It's one of the biggest in the world. Like, they have something like 70,000 instructors, and they have, like, hundreds of thousands of classes. I think it's very popular for its, like, coding stuff, but there's also classes on, like, anything that you can possibly think of.

Jason:

There's a lot of, like, web design classes. There's a lot of, like, how to do digital art classes, how to use Photoshop, so on and so forth. And I don't know if it's the biggest in the world, but it's definitely, like, among the biggest. It's worth more than a billion dollars. It has, like, hundreds of employees, and it's a big deal in this space.

Joseph:

Yeah. I've definitely seen it come up, especially sort of in infosec and cybersecurity, that sort of thing. So there's this big platform, all of these, crucially, human teachers and human instructors are making their own course content and teaching people online. Well, Udemy has this plan around, generative AI. What were those plans exactly?

Joseph:

Or as much as we know about them anyway.

Jason:

Yeah. So back in July, they, like, seemingly every other company, announced that they were gonna, like, have a generative AI program. It's kind of unclear. Like, Udemy has been very transparent about this and yet very vague at the same time. I feel like a lot of companies have just launched generative AI stuff and have opted people into it, and there's not been a big lead up to it.

Jason:

Whereas Udemy had, like, a webinar for instructors where they could learn about this. They also published, like, an ethics statement, more or less. They published, like, an FAQ. Like, they did publish a lot of stuff, but almost all of this happened on its own community platform. But, basically, they told instructors that they are going to train an AI model on the instructors' classes.

Jason:

It is not clear, like, what they're going to do with this AI model. Like, they say that they're going to offer personalization to users, but they have said that they do not intend to offer AI-generated classes. As in, like, Udemy does not intend to create new classes that are entirely generated by AI, but they also reserve the right to do that. So it's like

Joseph:

Kinda meaningless, then. Yeah. Like, we're not doing it right now.

Jason:

So, I mean, I think that, like, most of the instructors are very worried that they're going to train on their classes, and then they're going to launch AI-generated classes that compete with the human-taught classes, more or less. Right.

Joseph:

Yeah. That would be

Jason:

The big fear. Yeah.

Joseph:

It would be very, very concerning. So, well, you said they made all these announcements, but it sounds like it was more, as you say, sort of on the community platform or Udemy's forum, something like that. Maybe not everybody is seeing those. Were people opted in by default? We'll get to the window part, but was it opt-in by default?

Jason:

It was opt-in by default, as will probably be the theme of this episode. But like many other platforms, they say, okay, you know, we're putting you in this program, here's how it works. They also, crucially, said, like, if you opt out of this program, you will not benefit from whatever AI things they end up launching.

Jason:

And it's still unclear what this is gonna be or how people are gonna benefit, but they make it very clear. I read it as a veiled threat. Like, I basically read it as, if you do not opt in, or if you opt out, people will have a hard time finding your classes. That is more or less what they told people. Like, they really heavily incentivize people to not opt out by saying, like, you're probably gonna make less money from our platform if you opt out of this.

Joseph:

Right. Which is obviously very, very concerning to people who, even if they're not making their entire living off it, which they might be, they're at least generating income from this, and the platform is just potentially undermining that, or at least making some sort of veiled threat, as you said. So people, I won't say opted in. I'll say they were enrolled in the program by default. Udemy then gives this opt-out window.

Joseph:

Like, what exactly is that?

Jason:

I mean, it's exactly what it sounds like. So, like, on August 21st, they said, okay. The opt out window is open. Meaning, if you don't wanna be part of this program, go into your settings and click opt out. So that opened August 21st, and then on September 12th, that window closed.

Jason:

And, basically, if you did not opt out by September 12th, which was three weeks ago, you cannot opt out until sometime next year. And even though Udemy has done a lot of posts about this, like, on its forum, people that I spoke to said that they didn't get emails about this opt-out window. It was kind of like, well, if you were very active on the platform and were, like, logging in, you probably would have seen something. But a lot of people have created classes, and then they leave them up there, and people can take the classes at any point. So it's not something where you meet at 7 PM every Tuesday. They're self-guided classes.

Jason:

And so people make the classes, they put them on this platform, and then they leave them there, and they passively collect income from it. And if you're not, like, part of the Udemy community, if you're not actively keeping up, it was very easy to miss out on this. And so, basically, once that date passed, you had a grayed-out option. Like, it was totally gray. You could not change the option.

Jason:

You can't opt out. And I also saw an email where someone was mad about this. Like, an instructor was mad about it and said, hey. I wanna opt out now. And they said, well, you missed the opt out window.

Jason:

You cannot do it.

Joseph:

I have never heard of this, and maybe that's because it is specifically about generative AI and AI models and that sort of thing. I mean, that's what we're talking about here. I'm just wondering if, like, maybe this is a new thing we have to deal with. But I think it's absolutely wild that there's a window at all. But Udemy did give some sort of actual explanation for why this is the case, albeit an incredibly unusual one.

Joseph:

Like, what was the reason they gave for "we're only going to do opt-outs at certain periods"?

Jason:

Yeah. I mean, the only thing that this reminds me of is, like, health care enrollment, like, open enrollment, where you have a two- or three-week period where you can change health care plans. And if you miss that, you kind of, like, have to forever hold your peace till the next opt-out or open enrollment period, and that's kind of the model that this looks like. So Udemy says that, basically, it is too time-consuming and expensive for them to be retraining their model constantly. And I find this to be very interesting.

Jason:

One, it's kind of absurd, because they're taking these professors', these instructors' classes and training on them. And you should be able to decide at any point, like, hey, I don't want my stuff in there. But what they're basically saying is, it is really hard for us to go in and delete this data, and it's easier for us to do this all at once, one time per year, than it is to go in and delete that every time someone says, hey, stop training on my stuff. And, I mean, they've taken some flak for this since I published the article, and I think that's right.

Jason:

But at the same time, this raises a lot of questions for me, like, about other companies because a lot of other companies say you can opt out of AI training, but what does that mean? Like, are they going in and are they deleting things that they have already trained on? And if so, how often are they doing that?

Joseph:

Or they're just not doing it, like but Udemy is or something. Right.

Jason:

Yeah. Exactly. And in this case, like, Udemy says that they are going to go in and delete the data that's already been trained on once per year. But I would imagine that a lot of other companies are saying, like, once we've scraped your data, you can opt out of future training, where, like, new content is being scraped, but we are not gonna go back and delete what we already have. And, you know, I'm not aware of any company that really promises to delete data.

Jason:

The closest thing that I can think of is LAION-5B, which Sam reported on, where there was a lot of child sexual abuse material in this massive image training dataset. And once Sam reported that, they took the dataset down and they, like, quote-unquote cleaned it and re-released it, and it took, like, nine months or something like that. Right? Right, Sam? Like, what were the specifics there?

Sam:

Yeah. It took them a long time just because, you know, it's not a simple thing to do, to just say, I'm gonna delete the abusive material. But, yeah, you're right. That's the only similar thing I can think of.

Jason:

And then there's, like, a couple academic papers that I looked at, and I think Emanuel has maybe written about a couple of these, or at least looked at them before. But I found it very interesting that there's a few academic papers where researchers are trying to do large language model unlearning, where, basically, they train the model to not output stuff that is, like, undesirable, more or less. And this is usually talked about in the context of, like, racism, sexism, you know, terms of service violations, things like that. But unlearning is different from deletion, and different from retraining altogether. Like, as I understand it, unlearning is where you teach the language model to not output this stuff, but that doesn't mean that it's not still in there. And from what I can understand, unlearning is easier, like, less expensive and less time-consuming, than fully retraining a model from scratch, but I don't know for sure.

Jason:

Like, I think this is one of the big problems with generative AI where, it's pretty hard to get stuff out of there once it's already in.

Emanuel:

The difference is that with unlearning, the reason people are researching this, I think, and the examples that they often show in research, is for copyright reasons. Right? So if you're using Stable Diffusion and you wanted it to not produce an image of Harry Potter or something, it would be great for whoever makes the model if you could cheaply unlearn Harry Potter from the model. But the reason it's easy is that you're not going back and changing the data that you put into the model. The model is already trained, and in the same way that we've reported many times that you can train a model to do specific things, you're essentially trying to train it to not do something.

Emanuel:

But from what I've seen, and I looked at a bunch of these papers, like, none of the methods are perfect, and people always find ways to pull that stuff out of it, because it's in the data set. And in order to really have it not be in there, it has to be removed from the data set, which, as we know because of the LAION-5B example, is much harder, especially with a large data set like LAION-5B: to go back to it, remove specific things, and then retrain the model. Like, with LAION-5B, they took the dataset down, and then it took, I don't know, six months to put it back up. A much more expensive, time-consuming process.
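To make the unlearning-versus-retraining distinction concrete, here is a minimal sketch of one approach from the kind of academic papers Jason and Emanuel mention: fine-tuning with gradient ascent on a "forget set" while anchoring on a "retain set." This is purely illustrative, not any company's actual pipeline; the stand-in model and the "forget_texts"/"retain_texts" inputs are hypothetical.

```python
# Minimal machine-unlearning sketch: gradient *ascent* on data to forget,
# plus a normal descent step on data to retain. Illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in model
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def unlearn_step(forget_texts, retain_texts, alpha=0.5):
    """One update pushing the model away from the forget set while keeping
    it anchored to the retain set. Note this only changes behavior: the
    forget data still shaped the original training run and is untouched."""
    forget = tokenizer(forget_texts, return_tensors="pt", padding=True)
    retain = tokenizer(retain_texts, return_tensors="pt", padding=True)
    forget_loss = model(**forget, labels=forget["input_ids"]).loss
    retain_loss = model(**retain, labels=retain["input_ids"]).loss
    # Minus sign: ascend on the forget loss (unlearn), descend on retain.
    loss = -alpha * forget_loss + retain_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Which is exactly the weakness discussed above: the update suppresses outputs, but nothing is deleted from the training data, so the "forgotten" content can often be coaxed back out.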

Joseph:

Did they ever detail what that process actually involved? Because they were removing CSAM. Right? Was it ever said publicly, like, we went in and found all the CSAM one by one and removed it? Like, was that ever disclosed? Or

Sam:

They did. They wrote a really long, detailed, like, company blog about it. It was tricky because if you just removed the offending stuff, you would end up with a road map to the offending stuff in the datasets people already have. So it's like you could just kinda match the difference.

Joseph:

Right.

Sam:

So it's a much more complicated process than just, like, deleting and republishing. But, yeah, you know, it's, like, 5,000 words on their site. It's a complicated process involving a lot of math that I'm not even gonna attempt to summarize.
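Sam's "road map" point is easy to see concretely. LAION-5B is distributed as, roughly speaking, big lists of image URLs plus metadata, and copies were already in the wild; if a cleaned release simply dropped the offending rows, anyone holding an old copy could diff the two and recover exactly what was removed. Here is a toy sketch of that diff, with hypothetical file names and a one-URL-per-line format:

```python
# Toy illustration of why naive deletion creates a "road map": diffing an
# old public copy of a URL list against the cleaned release reveals
# exactly which entries were removed. File names are hypothetical.
def removed_entries(old_path: str, cleaned_path: str) -> set[str]:
    with open(old_path) as f:
        old = {line.strip() for line in f if line.strip()}
    with open(cleaned_path) as f:
        cleaned = {line.strip() for line in f if line.strip()}
    return old - cleaned  # everything the cleaned copy quietly dropped

# e.g. removed_entries("laion_urls_old.txt", "laion_urls_cleaned.txt")
```

Which is why, as Sam says, the actual cleanup had to be more involved than delete-and-republish.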

Emanuel:

I think it's also tricky because it's an open-source thing. They have to teach other people how to remove it, because the dataset has been copied. Yeah. Right? So very complicated.

Emanuel:

Very, very difficult. Once you're scraped and you're in a large dataset, it's very hard to remove you.

Jason:

Yeah. So, I mean, this problem wouldn't necessarily exist with Udemy, which is, you know, presumably training a closed-source model that it controls. But that said, we don't know that much about how often, you know, like, OpenAI trains new models. You know, obviously, they have, like, GPT-4 and GPT-4o, etcetera. And those are, like, big steps forward in between these individual releases.

Jason:

But I don't really know, like, how often OpenAI, for example, is updating these large language models. Like, I don't know if it only does it when it has a big new release, or if it's making, like, incremental changes much more often, and what those look like, and if things are being removed from it. And I think that's, like, one of the big questions about large language models in general: how often are they updated? Like, we don't really know, or at least I don't. Maybe there's, like, technical papers and things that do speak to this, but it's not something that is common knowledge, I don't think.

Joseph:

So when it comes to this idea of an opt-out window, where you have the three weeks or whatever, or some point in the year or whatever it is, do you think we might see more of that just as more companies are developing generative AI in general?

Jason:

I think we need to see how this goes, because after I wrote this article, a bunch of people raised the question of whether this is compliant with the right to be forgotten in the European Union, for example, where, you know, you are basically allowed to ask a company to delete information about you at any point. Yep. And whether this is compliant with, you know, that law, I have no idea. And Udemy does operate internationally. So I think it's something we could see more of, but people seem pretty upset about this, and understandably so, in my opinion.

Jason:

So I guess we'll see, kind of, like, if Udemy sticks with this approach or if it changes its policy and says, you know, like, oh, we heard you, we'll retrain more often, something like that. But, yeah, I really don't know, because I simply don't know whether Udemy is just saying, hey, we only wanna do this once a year because it's annoying for us, or whether it is actually very expensive, actually very time-consuming. It's not super clear to me right now.

Joseph:

Yeah. I mean, you bring up the right to be forgotten. And when I read the article, I was just, like, how is this possibly legal to have an opt out window? And I know that's me as a European with that perspective, but, yeah. It really, really stood out to me.

Joseph:

We'll leave that there. But when we come back, we're gonna run through a couple of other stories, also about opting in or opting out, and sharing data without knowledge, and all of that sort of thing, from PayPal and LinkedIn as well. And that's not even all of the stories we did about this. We're just gonna do those. We'll be right back after this.

Joseph:

Alright. We are back. Sam wrote this one: PayPal opted you into sharing data without your knowledge. So, Sam, what was the change here, exactly, and how did you find it?

Sam:

Yeah. So PayPal introduced this feature-slash-setting called personalized shopping, which I had never heard of until I saw someone flagging it on Twitter. And it's kind of deep inside of your settings on PayPal, where it says that it will let PayPal share products, offers, and rewards you might like with participating stores. And you can toggle that feature on and off. So it's basically toggling the permission to share. It says it's sharing products and offers, but, really, what they're sharing is your personal data.

Sam:

And they launched this without any kind of fanfare. I don't have any communications from PayPal about it. You know, people online were saying they hadn't heard of it before, so it's a little bit sneaky on their part, in my opinion.

Joseph:

Yeah. It's weirdly vague, while also trying to be transparent, with having the little setting and saying, here's what we're doing. It's not actually really telling you what it's doing. But basically, it's sharing data about your purchases, potentially with other retailers. And as a company that sits in a very, very privileged position as, basically, a payment processor, that's pretty concerning and pretty weird, to be honest.

Joseph:

We verified it was already opted in by default. Right? Like, how did you do that? Did you check your account? Or

Sam:

I checked my account. I also made a new account and checked it. You know, it was definitely already turned on. Lots of people were talking about it and posting screenshots on Twitter about how it was turned on for them. So it's just very strange to me.

Sam:

And, I mean, I guess it shouldn't be that strange, considering the world that we're living in and all the things that we just talked about. But it's strange to me to have it opted in for you, to have that feature at all, to have a little toggle and then say, we're giving you the option, but we're not gonna tell you you have the option, and we're gonna opt you in automatically. Something else interesting that was happening with this, since we were just talking about windows, opt-in windows: there was a line when I first checked my settings that said, what did it say? It was like, we'll use the info collected about you after November 27, 2024. And then I was like, well, that's a weird thing to say.

Sam:

We'll use it after November 27th. We're gonna collect the data, but we're only gonna use it after that point. It's like, what does that mean? So I emailed PayPal, and I was like, hey.

Sam:

What does this mean? If you opt out after November 27th, what happens to your data? Obviously, they're not gonna go in and remove it. You know, they have a clause in their policy that says they can't control what these other merchants do with that data. And they didn't reply, and that was more than 24 hours ago.

Sam:

But they did remove that line. So, you know, it's like, what was the purpose of that line exactly?

Joseph:

I love it when companies do this.

Sam:

Oh, I love it. It's so funny.

Joseph:

When you reach out for comment and they never get back to you, but they change the exact thing you were asking about. Yeah.

Sam:

They're like, you know that they're forwarding this. They've, like, created a bunch of tickets. They're like, get this line off the page. But, I mean, that was weird. And then the privacy statement link on that settings page is a different URL than the privacy statement that you can get to from anywhere else on PayPal.

Sam:

And that URL said, like, the slug was, like, preview November 27, 2024, or something. It's like, why? What is going on here?

Joseph:

So they, like, publish their drafts or something?

Sam:

Yeah, I guess. It's so strange. But that link is still what you see if you're on that settings page. So, anyway, mysteries abound.

Sam:

PayPal still has not replied. So, I don't know. People were pissed off. It's something that no one likes being, you know, chosen for. You should at least give people the heads-up, or the option, if you're gonna give them the choice.

Sam:

And a lot of people in the EU and other countries were saying that they don't have the setting at all. So

Joseph:

Right. As in, it's not that they can't opt out. It's just like, this isn't even a thing.

Sam:

No. Yeah. Exactly.

Joseph:

And I mean, just to briefly bounce off something you said about, I can't remember exactly, but, like, is this actually weird or not? That's what I remember. But, like, what is going on? With all of these companies, we've covered, like, three or four doing really, really weird automatic opt-in stuff. And it's just like, did everybody decide to do it in the past two weeks?

Joseph:

I don't know.

Sam:

Yeah. People were really pissed off about LinkedIn, which we can get into. But

Joseph:

Yeah. Exactly.

Sam:

No. Like, for good reason. People were mad about this. Do you wanna kinda go into the LinkedIn story? Because that was a hot one.

Joseph:

Yeah. So this was actually a little while ago, I think, before we recorded the last episode. It was almost like a little throwaway story, but I'm bringing it in here because, as I said, all of these companies are doing this. But, basically, the long and short of it is that LinkedIn was using its users' data for improving the site's generative AI, but we found it hadn't actually updated its terms of service. Yeah.

Joseph:

Like, its privacy policy, that sort of thing. This was, again, found by various users of the social network, and they were posting screenshots of the settings and where you could turn it off and all of that sort of thing. I just reported that with some context and got a statement from LinkedIn and that sort of thing. Before we get into that: a LinkedIn AI? I don't want anything to do with that. Just, like, gen AI built on, like, thinkfluencer lists and, like, B2B marketing strategy.

Joseph:

I think people talk about AI being, like, a danger to humanity. I think that's the one we need to be, like, scared of: the LinkedIn AI. You don't know what's gonna happen with that. But the quote from LinkedIn on its site was, well, this is, like, the dialogue: Can LinkedIn and its affiliates use your personal data and content you created on LinkedIn to train generative AI models that create content?

Joseph:

It asks. And then there's another line that said, use my data for training content creation AI models, and you could flick it off and on. Crucially, it was enabled by default, as I'm sure we've probably established by now. I, as I said, reached out to LinkedIn to ask, what's going on here? You know?

Joseph:

Why are people opted in by default? And why have you not updated your privacy policy or terms of service to reflect this at all? And they basically gave me the corporate equivalent of "I'm sorry, I'm sorry, I'm trying to delete it," which is, they said, we're doing it shortly.

Joseph:

And I don't know. Maybe you should do that first, before you start training on people's data. I don't know. Similar to the PayPal stuff, they say they're not doing it on users in Europe generally either, so there's obviously something going on there. But, yeah, I don't know what is going on with these companies. Just a series of really, really strange and, frankly, some of them shitty decisions on automatically enrolling their users into various practices.

Joseph:

And I don't know. I guess we're probably gonna see more of this as well as more companies get into, generative AI and all of that as well.

Jason:

One thing that I have not told y'all, or maybe I did long ago: back when we were at Motherboard, LinkedIn kept adding all of these AI features where it would write your posts for you, etcetera. And so I was doing a stunt, because I didn't use LinkedIn at the time. I only use it now because we have our own company, and I'm like, well, we need to promote on all platforms, but I didn't use LinkedIn then. So I was asking ChatGPT to generate LinkedIn-lunatic-style posts about articles. Yeah. And so for, like, a few weeks, I would just dump links into ChatGPT and say, write me a LinkedIn post for this article. And the posts were really bad, but I got quite a lot of engagement from it. And because we quit Vice and started a new company, I never ended up writing that article.

Jason:

But my first, like, 30 LinkedIn posts are all written by ChatGPT. It's the only thing, like, I've ever really used it for, and it was for this stunt.

Joseph:

Two things on that. Does it include, like, the emojis that people love to use on LinkedIn? Because you're

Jason:

not using it. Yeah. Every one, it was, like, five paragraphs, each of them with, like, five bullet points, and they all had different emojis. I have some pulled up now, and, like, for example, you wrote an article about hackers that could remotely open smart garage doors. So the post that I wrote was the, like, Drudge siren, like, police siren emoji: Security alert, smart garage doors vulnerable to hackers.

Jason:

I just came across this eye-opening article from Vice highlighting a major security flaw in smart garage doors from Nexx, a popular garage door manufacturer, blah blah blah. And then it's like, here are the key takeaways, and then it has the little one, two, three emojis.

Joseph:

Uh-huh.

Jason:

And then at the end, the takeaway, which is also part of the LinkedIn-lunatic-type post, is: as we increasingly rely on IoT and smart devices, let's not forget the importance of robust security measures. Share this article with your network to raise awareness, and let's keep our homes and communities safe. Hashtag smart home, hashtag IoT security, hashtag cybersecurity, hashtag garage door security.

Joseph:

I love that. No likes, though. Well, I thought you said you got a ton of engagement, but

Jason:

Well, that one, they didn't like. Some of the others, yeah. And then Katie Drummond, who was our boss at the time and is now at Wired, said: thank you for sharing this important article on smart garage door vulnerabilities. As your boss, I appreciate your commitment to staying informed about cybersecurity risks and keeping our team and organization informed.

Jason:

Let's continue to prioritize strong security measures to protect companies and clients.

Joseph:

Was that with AI as well? It's kind of

Sam:

That was her strategy? Okay.

Jason:

Yeah. So she was commenting on all of mine with ChatGPT responses, and that was part of what the article was gonna be, but then we didn't stay there. I bring this up just because LinkedIn is pushing AI super hard, and that's why I thought that these, like, LinkedIn-lunatic AI-generated posts would maybe do well there, as a stunt, as a joke, to see what was going on. And, Joe, I know you use LinkedIn a fair bit, or, like, you post our

Joseph:

Kinda, 'cause I have to, because we're trying to run 404 Media.

Jason:

Yeah. Yeah. But I don't know if you've noticed, I think that's one of the only social media platforms that has, like, a built-in "we will write your posts for you with AI" right in the post editor. It's pretty wild how, like, integrated it is.

Joseph:

Yeah. They actively encourage you to do it. And, I mean, the new Apple Intelligence is, you know, slightly different, but that's gonna have that baked in. Right? Where you can generate images, like, directly in iMessage or whatever it is now.

Joseph:

So, like, it's getting closer and closer to you as a user experience all the time. I find it funny that you've made LinkedIn posts with ChatGPT. Now those are gonna be ingested into LinkedIn's own AI, and then it's gonna be used to make even more. So it's, like, a very weird cycle, where we could just poison the LinkedIn dataset by going full-on LinkedIn lunatic. You know?

Joseph:

We could really push it to the edge. The last thing I'll say: LinkedIn, very interesting social network. The engagement I get on a lot of my posts is people asking questions where they're clearly trying to bait me into responding, which you don't get on any other social network. It's just like, what do you think this means for security? I don't know, mate.

Joseph:

Like, what? Please stop asking me. Just read the article or don't. I don't know. This is a weird place.

Joseph:

But, yeah, definitely a strange social network, and we'll see how it is when it's completely overrun with, AI. That's gonna be great. Alright. We will leave that there. If you are listening to the free version of the podcast, I will now play us out.

Joseph:

But if you are a paying 404 Media subscriber, we're gonna talk about a pretty scary, I guess, side effect of the autonomous Waymo vehicles, or a weird attack vector. I mean, Sam is gonna explain it to us. You can subscribe and gain access to that content at 404media.co. We'll be right back after this. Alright.

Joseph:

And we are back in the subscribers-only section. So here's one that Sam wrote. The headline is: men harassed a woman in a driverless Waymo, trapping her in traffic. Maybe let's just play some audio first for listeners, so they can get the context of what we're gonna talk about, and then we'll chat about it after this.

Speaker 5:

Get out of the way. Move. Oh my god. Get out of the way.

Speaker 5:

I have to go. Please stop. You're holding traffic. Stop. No.

Speaker 5:

Go. Go. Go. No. Get out.

Speaker 5:

Get out. Move. Stop.

Joseph:

So, Sam, what the hell is going on?

Sam:

Yeah. So I'm gonna try to talk about the story without getting super fucking mad. I'm already mad just saying that.

Joseph:

You can get mad.

Sam:

Yeah. So what you just heard was a woman in a Waymo. Waymo, just to give a little bit of context, is a ride-hailing autonomous vehicle service. So a car with no driver pulls up, picks you up like an Uber, and takes you where you wanna go, via an app. So, yeah, that audio is from someone in the car doing a ride. She's sitting in the passenger seat.

Sam:

There's nobody driving the car. It's autonomous. And then there are guys standing in front of the car, blocking the car. I guess I could describe how these guys look. One of them is wearing, like, a crossbody fanny pack thing, and he's literally wearing a fedora and has a big beard.

Sam:

The other guy is wearing a beanie. They look like SF tech bros. I don't wanna, like, you know, stereotype these guys too much, but that's how they look. And she's yelling at them to move, and they're doing, like, a "call me" thing with their hands. The context here is they're, like, refusing to move until she gives them her number. And she's like, move.

Sam:

Like, I'm waiting. I'm trying. You're blocking traffic. No. She's kind of laughing, but, like, if you've ever been in a situation where you're just, like, hoping a man leaves you alone, you recognize that, like, that's a defusing tactic. It's not fun, what's happening.

Sam:

Yeah. And the car can't move because they're in the way, because it's autonomous, because, you know, it's trained to stop for pedestrians, which is good. You know, it's great that it didn't just run these guys over. But it is kind of this weird, like you said, this weird attack vector for Waymos in particular.

Joseph:

Yeah. So, I mean, just to stress it: the autonomous vehicles, obviously, they're driving around, whatever, and if they detect what they think is a pedestrian crossing a road, or maybe it's on a sidewalk or something, hopefully they're not mounting the sidewalk, they are obviously designed to avoid it or stop or, you know, not hurt the person, obviously. It's now almost like these, I'm just gonna call them tech bros, whatever.

Joseph:

These people, they know that. Yeah. So they're standing in front of it to deliberately and, like, maliciously stop the vehicle. Which, yeah, I mean, I'm not gonna lie, I did not see this coming. You know?

Joseph:

I did not anticipate this. Did you see, like, that this could be a thing that would happen? Or

Sam:

I mean, I've seen lots of stories that have come up in the last few months. You know, Waymo has been testing for a couple years now, and it's now going public. So more people are riding in Waymos, so there's more coming out about them. But people have been, like, trolling the Waymos, basically. Like, you can put, like, traffic cones on them, and it'll make them stop.

Sam:

Like, if you block them in any way, they'll stop. I think we all saw the story where, like, they were all parked in a parking complex behind an apartment and, like, losing their shit and beeping at each other because they were all in each other's way. So they're trained, like you said, to avoid obstacles, which is good, but they don't really even do that that well sometimes. Like, there's been a lot of close calls or actual collisions with self-driving cars when you want them to avoid a pedestrian. Sometimes it's like, is it going to?

Sam:

Like, is it gonna stop? I was reading a story from NBC when I was reporting this story about how, like, one in four crossing guards in San Francisco that reporters talked to had had close calls with Waymos, where they were, like, not sure if the car was actually gonna stop in time to not hit people in the crosswalk. So, yeah, it should be obvious that this is something that people are gonna do. They're gonna harass people inside the car, and the people inside the car are gonna be stuck. But, yeah, I wouldn't have seen this coming.

Sam:

I'm hoping that this doesn't, like, inspire more people to be total dickheads about self-driving cars, or, like, to harass people inside of them. But I don't know. I don't know what the solve is for this. They do have alarms on the outside. Like, they can make a noise that, like, deters people.

Joseph:

What's that gonna do?

Sam:

I don't know. Yeah. That's the thing. I'm interested to hear what you guys think, because I don't really know what the answer is.

Sam:

Other than, like, not running these cars anymore. I don't know.

Jason:

I mean, I think this is one of the big problems with them. I hadn't seen this, but we've seen people put cones on the hood of Waymos, and they just stop moving because someone put a cone on them. And, surely, some of these things will be fixed, but a human driver has situational awareness, the ability to reason, and the ability to, like, examine a situation and get out of that situation if needed. And, first of all, they wouldn't have done this if there was a human driver in the car, but let's say that they did. It's like, you know, you can blare the horn.

Jason:

You can try to drive around them. You can back up. Like, there's all sorts of different things that people can do. And in this case, it's just like, well, the Waymo sees that there's a pedestrian in front, so it stops and it sits there indefinitely, putting the passengers in a very uncomfortable and potentially dangerous situation. And we've seen this with things like delivery robots as well, where a lot of times a human being needs to remote into the car or into the delivery robot, see what's going on, and then figure out, like, how to manage the situation.

Jason:

And I think that's not something that an autonomous car or an autonomous delivery robot is ever gonna be able to figure out, because it's never gonna have the logical reasoning skills of a human being.

Joseph:

Just super briefly, because I don't think we mentioned this on the pod before: you did a story where a delivery robot hit somebody and knocked them over, and then it, like, started going towards them again? Can you just, very briefly, I mean, we won't even get into it, but just, like, what happened there? Just because, you know, it's somewhat related.

Jason:

Yeah. I mean, this was at Arizona State University. It happened about a year ago, but the story I wrote was from a few weeks ago. And, basically, it was a Starship robot, which is on college campuses.

Jason:

And in a parking garage, according to a police report, basically, it abruptly changed directions and it hit a woman, and the woman fell over and she hurt herself. And the delivery robot starts driving away, and then it suddenly reverses direction and starts driving at her again. And it doesn't hit her a second time, but, I mean, this is the sort of thing that can happen. I think that, with autonomous cars, there's a give and take, because human drivers are quite bad also, and they're often distracted and things like that. And so I think that everyone needs to kind of think, like, what do they think about autonomous cars?

Jason:

Driving around in Los Angeles with human drivers, it can be very terrifying depending on the time of day, depending on traffic, things like that. You see some wild stuff that human drivers do. And autonomous cars theoretically won't do some of the things that humans do, but they have other problems. And so I think that one of the big things is you can't see where the car is looking. Like, a big part of driving is silent communication with other drivers, where you're kind of, like, looking at them and seeing, like, is this person's head facing me?

Jason:

Like, can they see that I am here? Especially if you're biking or walking, it's like, if I'm on a bike, I look into the car and try to see, like, does that person see me? If not, I'm gonna be very scared. And, like, with an autonomous car, there's none of that. So I don't know.

Jason:

There's lots of, like, kind of give and take here, I would say.

Sam:

Yeah. I find that very disorienting about driverless cars. I was staying in Venice for a bit in LA, and they were testing one in this, like, radius that I was walking in a lot. I don't know if they were testing it or if it was actually public yet, but I saw it do the same route all the time, and there was rarely anybody actually in it. And, like, crossing the street and also biking, like Jason said, there's communication between you and the driver, whether you're speaking or not.

Sam:

Like, if someone's on their phone, I'm not crossing the street. If someone's not looking my way, I'm not gonna, like, bike by them while they're turning. You know? But, like, with this Waymo, I was like, I can't tell what it's gonna do at any moment, and they move kinda fast. Like, they make moves very deliberately.

Sam:

So I don't know. There were definitely a few times where I was crossing the street, and I was like, is this thing gonna stop? Like, is it actually gonna see me, or is it gonna keep blowing through? So it's very weird. I find it odd.

Joseph:

You spoke to the victim of this weird Waymo incident, Sam, the person who was basically trapped in the vehicle for a period of time. What did she have to say about it?

Sam:

She told me what happened, which was, you know, pretty much what we just described. And she said that Waymo support got on the phone pretty much immediately and was like, are you okay? Do you want us to call the police? Do you want police help? And I don't know.

Sam:

I was like, I don't really know what the police would do in that situation. And, also, they had moved on at that point. So

Joseph:

So the incident's already happened?

Sam:

Yeah. It's like it's already happened. But they called a couple times before she got to the destination, and then they called a couple hours later and were like, is everything still okay? I don't know. It sounds like she was happy with, like, the actual support team at Waymo, which is, like, good, that they checked in.

Sam:

And she also said something interesting, which I wasn't really expecting. I was expecting her to be like, I'm never taking another Waymo again in my life. But she had posted that she was open to taking another one eventually. And she told me that she actually loves autonomous vehicles, because she thinks that if there is widespread adoption, the technology will be safer than human drivers. And she was in a car accident as a kid and has anxiety with driving, and has been looking forward to, like, the influx of autonomous vehicles.

Sam:

And I mean, I kind of agree. It's like, the problem right now is, like, the infrastructure and the public interaction with these cars isn't widespread enough. It's still so new that people can fuck with them in this way. And, also, there's drivers and autonomous vehicles on the road at the same time, which is chaotic and unpredictable. And with anything, it's like this technology is gonna be, you know, tested in real time by people in ways that the developers don't understand yet.

Sam:

The back seats are tinted, so she did say to me that, like, she probably would sit in the back. If she had to do it over again, she would sit somewhere that they can't see her. But I don't know. It's like tint the whole thing. I don't know.

Sam:

It's just interesting that, like, people have to have these really unpleasant, often kind of dangerous, potentially dangerous situations in real time while these technologies are getting tested on us. You know? They're getting tested on the people using them and on the general public who's not using them, just the people out there trying to exist alongside them. So it's a complicated thing.

Joseph:

We've all been opted in to a beta program by default. Yeah. You know? I can't opt out of the Waymo. Well, I can. Don't go to San Francisco.

Joseph:

Yeah.

Sam:

I was gonna say, SF has been opted in aggressively to a lot of shit it didn't ask for. But, yeah, that's a good point. Opting in is going outside at this point. So

Joseph:

I know for the article Waymo hadn't responded by the time of publication. Has the company responded yet?

Sam:

Let me refresh my inbox. No. They have not. So, if they do, we'll update it.

Sam:

Maybe they will by the time this comes out.

Joseph:

Yeah. Probably. Emanuel, I just wanna end with you, just briefly, because you brought this up in our chat when Sam was writing it. Just, you know, there is some context about, you know, people lashing out against these vehicles, you know, burning them, attacking them. Sam mentioned putting cones on them, but, you know, some people have done other stuff.

Joseph:

Right? What's that exactly?

Emanuel:

Yeah. I mean, at this point, we're looking at, like, maybe more than two decades of people in San Francisco lashing out against the big tech companies. Back when I lived there, there was a lot of anger about all the companies shuttling people who live in the city but work in these campuses outside the city, in Menlo Park and other places. Because living in San Francisco is cool, but the campuses that pay the salary for you to afford those apartments are not in the city. So what Google and Facebook and all these other companies did as a perk is just have free giant buses, like these plush buses with Wi-Fi and, you know, nice seats, and just drive you for free to work.

Emanuel:

And that caused traffic, and it made people mad, and there were protests of the buses and throwing rocks at the buses and stuff like that. And that has, I think, evolved in the same way against the self-driving cars, where I think people have legitimate concerns about road safety, but I think a lot of it is just an opportunity to take your anger out at the tech companies, with this very clear symbol of, you know, self-driving AI products on the road. And we've seen people, like, tag them and break windshields. And I think one of them was set on fire. I think it was in Chinatown. This was, like, a few months ago.

Emanuel:

And I don't think anyone got hurt in those instances, but you are kind of drawing attention being in one of those self-driving cars. A different kind of attention than in Sam's story, but you're somewhat putting a target on your back when you're in one of those, and I think that's an element here as well.

Joseph:

Do you ever feel the inclination to go and do any of those things to the ice cream truck that drives around in the background? You've heard it this time?

Emanuel:

No. I'm afraid of that guy. I wouldn't fuck with him.

Joseph:

Wait till it's autonomous. I mean, maybe we'll be good. We'll see. Alright. Let's leave that there if I can open the interface correctly, and I will play us out.

Joseph:

As a reminder, 404 Media is journalist-founded and supported by subscribers. If you wish to subscribe to 404 Media and directly support our work, please go to 404media.co. You'll get unlimited access to our articles and an ad-free version of this podcast. You'll also get to listen to the subscribers-only section, where we talk about a bonus story each week.

Joseph:

This podcast is made in partnership with Kaleidoscope. Another way to support us is by leaving a five-star rating and review for the podcast. That stuff really helps us out. This has been 404 Media. We will see you again next week.