Lever Time - Premium

from The Lever

LEVER TIME: Will Artificial Intelligence Destroy Us or Set Us Free?


Episode Notes


Transcript

David Sirota sits down with Dr. Max Tegmark, president of the Future of Life Institute and one of the world’s leading experts in artificial intelligence, to dive into the growing debate around AI. In this deep-dive conversation, Dr. Tegmark breaks down recent breakthroughs in generative AI technology like ChatGPT and explains how it could replace most forms of human labor — for better or worse. They unpack why private interests are barreling forward with commercial AI development and why the Future of Life Institute has publicly called for an immediate pause on all AI experimentation. Dr. Tegmark also explains how AI could be the most transformative technology in human history — and the (small!) probability it becomes the Terminator.

A transcript of this episode is available here.

Links:
BONUS: Next Monday's bonus episode of Lever Time Premium, exclusively for The Lever’s supporting subscribers, will include David’s extended conversation with Dr. Tegmark. In this special bonus segment, Dr. Tegmark describes the safety measures that could prevent the emergence of the most dangerous forms of AI. If those precautions are put in place, he argues, AI could cure disease, stop climate change, end poverty, and just maybe set us free. Keep an eye out for an email announcing the drop.

Thank you for being a paid subscriber! If you're having issues subscribing or listening to Lever Time Premium, email us at support@levernews.com.

If you’d like to leave a tip for The Lever, click the following link. It helps us do this kind of independent journalism. levernews.com/tipjar

00:01:41:05 - 00:02:10:10
David Sirota
Hey there and welcome to another episode of Lever Time, the flagship podcast from The Lever, an independent investigative news outlet. I'm your host, David Sirota. On today's show, we're going to be discussing one of the most important and terrifying issues of the 21st century: artificial intelligence. With the recent advancement of generative A.I. technology like ChatGPT, a huge debate is already underway about whether A.I. technology will set us free or wipe us off the face of the earth.

00:02:10:20 - 00:02:34:18
David Sirota
So today I'm going to be talking to Dr. Max Tegmark, who's one of the world's leading experts on A.I. and one of its most thoughtful critics. We're going to get into everything you need to know about A.I. technology and why some scientists are saying we need to pump the brakes on its development ASAP. For our paid subscribers, we're also going to be dropping exclusive bonus episodes into our Lever Premium podcast feed.

00:02:34:19 - 00:03:05:20
David Sirota
Last week, we published our interview with Abraham Josephine Riesman, the author of the new biography Ringmaster: Vince McMahon and the Unmaking of America, which looks at McMahon's influence and pro wrestling's influence on American politics. And coming up next week is the extended interview with Dr. Tegmark that you're about to hear. It's the part of the discussion where he goes really deep on the specific kinds of safety measures that can be taken to prevent the most dangerous forms of A.I. from threatening society.

00:03:06:03 - 00:03:27:11
David Sirota
He also discusses how, if those safety measures are put in place, A.I. could do everything from curing disease to stopping climate change to ending poverty. So stay tuned for that in the premium podcast feed. If you want to access our premium content, head over to levernews.com and click the subscribe button in the top right to become a supporting subscriber.

00:03:27:18 - 00:03:47:04
David Sirota
That'll give you access to the Lever Premium podcast feed, exclusive live events and all of the in-depth reporting and investigative journalism that we do here at The Lever. The only way independent media grows and thrives is because of passionate supporters and by word of mouth. So we need all the help we can get to combat the inane bullshit that is corporate media.

00:03:47:04 - 00:03:53:20
David Sirota
So go subscribe. It directly funds the work that we do. I'm here today with Lever Time's producer, Producer Frank. What's up, Frank?

00:03:53:21 - 00:04:20:22
Frank Cappello
Not much, David. Very exciting week here at The Lever. I'm really looking forward to your conversation with Dr. Tegmark about A.I. I know it's something that I'm constantly thinking about and sometimes worrying about, so I'm excited to get the deep dive from you two. Also, we had a big episode of Movies vs. Capitalism this week. Historian Harvey Kaye joined us to talk about the Disney live-action musical Newsies.

00:04:20:22 - 00:04:30:17
Frank Cappello
And it's a really, really good episode. Very fun. And Harvey provided a lot of historical context for us, so people should go and check it out.

00:04:30:21 - 00:05:01:08
David Sirota
You can find it at MVC Podcast. It's part of The Lever's podcast network. Newsies, I believe, has a scene of police punching kids in the face. Yep. A kind of out-of-control depiction, although maybe a realistic depiction, I guess. And a good topic for a show about the meta messages being sent by movies when it comes to politics, when it comes to the economy, when it comes to everything like that.

00:05:01:18 - 00:05:15:17
David Sirota
And yes, I am super psyched about this week's interview about artificial intelligence. I've been thinking about and worried about artificial intelligence since I first saw the Terminator movie. So I guess that's like 30, maybe 40 years of worrying about artificial intelligence.

00:05:15:17 - 00:05:20:07
Frank Cappello
You started worrying very early. You were like, I've got to start worrying about this immediately.

00:05:20:08 - 00:05:49:02
David Sirota
Absolutely. I mean, you see that movie. The first scene is like an artificial intelligence-powered tank driving over a pile of skulls, right? I mean, it's like the robots are winning the war. And that kind of burned into my mind as, I guess, a child who probably saw it at an inappropriately young age, which is probably why I've been worried about artificial intelligence, because I was essentially scarred by James Cameron and Arnold Schwarzenegger at too young an age.

00:05:49:02 - 00:06:08:02
David Sirota
Now, before we get into that, let's take a little time here to touch on some of the stuff that's also been happening at The Lever, the reporting that we do. And I hope everyone who's listening is a subscriber. We've done a lot of reporting in the last few days on the Supreme Court. We've talked about it on this show a lot as well.

00:06:08:11 - 00:06:37:15
David Sirota
ProPublica, of course, recently reported that Clarence Thomas had failed to disclose two decades' worth of luxury trips and a real estate purchase provided by a conservative billionaire. That has been in the news all over the place. And we have contributed to that reporting with two big stories over the last week. The first was about how Clarence Thomas helped kill an eviction ban that was directly threatening the same billionaire's business.

00:06:37:20 - 00:06:59:01
David Sirota
So there's been this whole idea out there that Clarence Thomas got these gifts from this billionaire, but the billionaire supposedly never had any real business before the Supreme Court. So in theory: oh, well, it's no big deal. Clarence Thomas is just, you know, living the high life off of this billionaire's gifts. But since the billionaire supposedly didn't have business before the Supreme Court, it's all cool.

00:06:59:09 - 00:07:27:06
David Sirota
But our story showed that actually the billionaire's company effectively admitted that it had interests in decisions before the court. Thomas voted to end federal tenant protections that Harlan Crow (that's the name of the billionaire) and his company literally said threatened its real estate profit margins. That's in the documents of Harlan Crow's company. So put the two situations together.

00:07:27:23 - 00:07:52:02
David Sirota
Thomas is getting gifts from Harlan Crow. Harlan Crow's company is saying that eviction moratoria are hurting, or potentially hurting, its business profits. And then Thomas votes twice to end the eviction moratorium. Now, Producer Frank, I've heard it thrown out there, you know: oh, well, listen, maybe that's true. Fine, we'll stipulate that that's true.

00:07:52:02 - 00:08:12:14
David Sirota
But Clarence Thomas is already super conservative and would have voted to end the eviction ban anyway. So now the new argument for why supposedly this corruption doesn't matter is because, well, okay, fine, the guy had some business interests in front of the court. But since Clarence Thomas is already a right wing extremist, he was already going to vote that way.

00:08:12:18 - 00:08:16:00
David Sirota
So the money doesn't matter. I mean, what do you make of that nonsense?

00:08:16:00 - 00:08:39:14
Frank Cappello
I wouldn't call it nonsense. I would call it total bullshit, is what I would call it. Yeah, I don't care whether or not he would have voted this way anyway. I don't want my unelected council of elders, who serve lifetime appointments on the highest court in the land, to be receiving any fucking gifts from anybody for any reason, especially ones that they're not disclosing.

00:08:39:21 - 00:09:00:02
Frank Cappello
The impropriety is out of control, and anyone who defends this in any way is just doing mental backflips to find a justification for it. It's like, people, man, we put out stories all the time about all of the conservative dark money in our political system. And we'll get replies from people that are like: well, what about George Soros?

00:09:00:02 - 00:09:10:00
Frank Cappello
It's like, they're both bad. It's all bad. Why can't we just be like: no, we don't want any of this money, we don't want any of this influence? Sorry, I just got a little heated.

00:09:10:01 - 00:09:31:00
David Sirota
I mean, listen, money is either corrupting or it's not. The Supreme Court, of course, has tried to insist that it's actually not corrupting, which we've continued to report on, from the Citizens United decision and many other decisions that try to insist that the money spent in politics is just not corrupting, not influential, which is also a whole lot of horseshit.

00:09:31:20 - 00:09:50:21
David Sirota
But in this case, I mean, the argument that, well, Clarence Thomas is a freakish extremist, so the money or the gifts that he got didn't change him or influence him, because he was always a freakish extremist. I don't even know how to respond to that other than to say it's a lot of garbage. It's a lot of bullshit.

00:09:50:21 - 00:10:18:09
David Sirota
As you said. Now, there has been talk once again, as there has been in the past, of imposing an ethics code on the Supreme Court. And we broke a story about this as well. Some Democrats are finally starting to talk about using Congress's power to force the Supreme Court to follow a basic set of ethics and anti-corruption rules.

00:10:18:15 - 00:11:11:16
David Sirota
The story that we broke was about Chief Justice John Roberts, who, by the way, declined the Senate Judiciary Committee's request to testify at a hearing about all this. Documents also show John Roberts threatened Congress with legally challenging, or simply not following, any ethics rules that Congress tried to impose on the Supreme Court. I mean, you cannot make this up: back in 2011, John Roberts suggested that if the co-equal branch of government, the Congress that's supposed to make the laws, passed a law putting in place basic ethics rules (rules, by the way, that other government agencies have to follow), the Supreme Court might ignore it,

00:11:12:08 - 00:11:41:10
David Sirota
might legally challenge it on constitutional grounds. Now, Roberts did this amid the pressure for a review of the Clarence Thomas situation, plus new revelations about Neil Gorsuch and a land deal, plus new revelations about John Roberts' wife making millions of dollars placing lawyers at law firms that have business before the Supreme Court, all of which has added to the pressure on the Supreme Court to conduct at least an internal review.

00:11:41:22 - 00:12:21:08
David Sirota
John Roberts moved that review to a secret panel of lower court judges and will not disclose the names of the people on the panel. So under pressure, he said: I'm not testifying before Congress. I've already told Congress that the Supreme Court may not even listen to any laws that you pass about ethics at the court. On top of that, I've taken the allegations of all of this massive corruption and put them before a secret panel, and the public is not allowed to even know who's on the panel.

00:12:22:02 - 00:12:24:03
David Sirota
I mean, this is insane.

00:12:24:04 - 00:12:44:06
Frank Cappello
It's completely insane. It's always amazing to see. You know, we see it usually more from the conservative side, but just the willingness to just, like, flip everyone the bird and be like: yeah, I'm not doing what you want me to do. You want me to testify at an ethics hearing? Nah, I'm good on that.

00:12:44:06 - 00:12:50:20
Frank Cappello
You want us to investigate ourselves? Nah, I'm not going to do that. The audacity is really astonishing.

00:12:50:23 - 00:13:19:19
David Sirota
But I think it reflects the normalization of a lack of accountability. I mean, we now live in a world where the expectation is that the most powerful people in this country simply do not have to respond to political pressure, simply do not have to follow the basic rules, the basic etiquette, the basic ethical conduct that we're supposed to expect of public officials.

00:13:19:19 - 00:13:47:07
David Sirota
I mean, it's just completely normalized that there are no consequences for the most powerful people in this country when they do things like this. And these corruption scandals are not the Democrats' fault, I want to be clear about that. But when I see Dick Durbin of the Senate Judiciary Committee saying, essentially, well, the Supreme Court has to conduct an investigation, I'm like: dude, do your fucking job.

00:13:47:21 - 00:14:17:07
David Sirota
You're on the Judiciary Committee. You've been a senator for, like, a billion years in a safe state, right? And your response to flagrant, in-your-face corruption is, like: well, they'd better do something about that. It's like: dude, why are you there? What is your point as a senator on the Judiciary Committee? Your point should be: I am now going to make you do things, or at least try to build the political coalition in Congress to make you do certain things.

00:14:17:13 - 00:14:52:07
David Sirota
And it's worth mentioning that in the past, some of the proposals for an ethics code at the Supreme Court came from Republicans. So in theory, there could actually be some Republican support for a basic code of ethics at the Supreme Court. But when you have Democrats like Dick Durbin who insist, listen, we can't do anything, we're not even going to try to do anything, then you start to wonder whether the entire game, the whole shebang, is rigged.

00:14:52:09 - 00:14:56:14
Frank Cappello
That should be the slogan of the Democratic Party: Someone should do something about this.

00:14:57:07 - 00:15:17:04
David Sirota
Yeah, somebody. Somebody call somebody. Not us. Exactly. That's how it feels every day. All right. It's time to get to our big interview about artificial intelligence, which, by the way, is something I think somebody should probably do something about. But before we get to that, let's take a quick break. Welcome back to Lever Time. It's time for our main interview today.

00:15:17:11 - 00:15:42:14
David Sirota
We're going to be talking about artificial intelligence, otherwise known as A.I., otherwise known as something I've been terrified by since I saw the Terminator movies four decades ago. In the last year, advancements in generative A.I. have exploded across the internet. There have been A.I. image generators like DALL-E that can create photorealistic images from a simple suggestion.

00:15:43:00 - 00:16:08:08
David Sirota
And of course there's ChatGPT, which was released last November, which has access to almost all of the world's online knowledge and has the capacity to write anything from college papers to legal contracts. Advocates for A.I. are triumphalist. They say it could become the most transformative technology in human history. But there are critics who say it has the potential to do the opposite.

00:16:08:14 - 00:16:34:12
David Sirota
And it's not just James Cameron and the Terminator. Depending on who you're speaking to, concerns range from "A.I. is going to take all of our jobs" to "A.I. could wipe humans off the face of the planet." This may seem hyperbolic; some folks say the next step is not going to be Skynet. But there is a very important aspect of this story to consider.

00:16:35:02 - 00:17:05:00
David Sirota
Most A.I. research and development is being conducted by private companies, meaning companies that are more likely to be concerned with cornering the market and raking in massive profits than with prioritizing the responsible implementation of this technology. And if there's one thing we know here at The Lever about private companies, it's that they rarely do the right thing on their own, out of the goodness of their own hearts.

00:17:05:08 - 00:17:33:21
David Sirota
They rarely do the right thing unless there's public pressure and proper regulation. They rarely prioritize the public good over private profit unless there is accountability and oversight. That may be why, this past March, the Future of Life Institute, a nonprofit organization whose goal is to steer transformative technology towards benefiting life and away from extreme large-scale risks,

00:17:34:06 - 00:18:08:14
David Sirota
published an open letter calling for A.I. laboratories to immediately pause experiments on the training of A.I. systems more powerful than GPT-4 for at least six months. That's jargon for saying: slow it down, because this may go in a really dangerous direction. This letter was signed by Elon Musk and by Apple co-founder Steve Wozniak, amongst many others who are concerned about the rapid development of A.I. technology.

00:18:08:20 - 00:18:37:12
David Sirota
So today I'm going to be speaking with Dr. Max Tegmark, who's the president of the Future of Life Institute, an A.I. researcher at MIT, and one of the world's leading experts on A.I. technology. He's also one of the technology's most prominent critics. He has likened the uncontrolled development of artificial intelligence to the premise of the movie Don't Look Up, which I co-created with Adam McKay.

00:18:37:20 - 00:19:01:10
David Sirota
He has likened the technology to the asteroid headed towards Earth. What follows is an interview with Dr. Tegmark in which we explore the potential upsides and the potentially terrifying downsides of A.I.: what we should be afraid of, and what the benefits could be. Hey, Max. How you doing?

00:19:01:11 - 00:19:03:03
Dr. Max Tegmark
Good. It's an honor to meet you.

00:19:03:04 - 00:19:29:01
David Sirota
Well, I really appreciate you taking the time to be on our show. As I've said, artificial intelligence is something that I've been terrified by since I first saw the Terminator many, many years ago. So I want to start with what artificial intelligence actually is. I mean, I think people have maybe a vague concept of it from movies, whether it's Terminator or whether it's Star Trek,

00:19:29:01 - 00:19:55:15
David Sirota
the guys talking to the computers and the like. So for those who don't really know what we're talking about in actual reality, as opposed to a movie: what is artificial intelligence? And specifically, tell us some of the recent developments in generative A.I. technology and where we are in the, I guess, pursuit of artificial intelligence.

00:19:55:17 - 00:20:26:07
Dr. Max Tegmark
Yeah, I think people often overcomplicate and twist themselves into knots trying to define A.I. Artificial intelligence is non-biological intelligence. It's really that simple. So then, of course, you should ask me: what's intelligence? And there, the most relevant and interesting definition is simply to define intelligence as the ability to accomplish goals. If your goal is to win chess games, then you're more intelligent

00:20:26:07 - 00:20:52:20
Dr. Max Tegmark
if you beat the other chess computer. And if your goal is more complex than that, you know, more broad, then you're getting closer to what the ultimate holy grail has always been for A.I. research, which is to build machines that can accomplish all goals as well as humans or better. And if you look at history, whenever we humans have figured out something about how our bodies work, we've built machines that do it better.

00:20:53:02 - 00:21:15:10
Dr. Max Tegmark
So during the Industrial Revolution, we figured out how to make machines that were stronger than us and faster than us. And more recently, people started to realize: hey, this intelligence thing, you know, it's all about information processing, and we can build that in silicon rather than just having it in our squishy brains. It went much slower than people thought initially.

00:21:15:10 - 00:21:43:02
Dr. Max Tegmark
It turned out to be extremely hard. The phrase "artificial intelligence" was actually coined here around MIT, back in the sixties. Slow progress: we got our butts kicked in chess, you know, decades ago, but then the intelligence was still programmed in by humans who knew how to play chess. More recently, things have gotten a lot faster, once we realized that we could let the machines learn on their own, like children, from data and from playing against each other.

00:21:43:13 - 00:22:13:07
Dr. Max Tegmark
And last year we saw some really spectacular stuff on the consumer market. The famous Turing test, for example, which very vaguely speaking is about doing language as well as humans: we now have GPT-4, which writes better than most Americans and which scores higher than 90% of all Americans on the bar exam. This has taken a lot of folks by surprise who thought we had many, many decades to figure out what we were going to do.

00:22:13:07 - 00:22:22:05
Dr. Max Tegmark
We thought we had ages before we were overtaken by machines across the board, when it might actually be the case that it's going to happen soon. It has almost happened already.

00:22:22:13 - 00:22:50:12
David Sirota
When I think of the real-world idea of artificial intelligence, I think of, what is it, Deep Blue, right? The chess player. Yeah. You mentioned that, the machine that beat the best chess players in the world. I mean, that was a while ago. And so I think one of the things I have a question about is the last year. You just mentioned that in the last year there have been these advancements, and it seemed like before that, you didn't hear much about it.

00:22:50:12 - 00:23:17:11
David Sirota
And then in the last year, actually the last six months, three months, you've heard a ton about it. I guess, what were the big developments that suddenly happened? Was there some moment recently, over the last year or two, where we figured something out, or the artificial intelligence itself figured something out, that is allowing it to advance much more quickly than it had been advancing in the past ten or 20 years?

00:23:17:14 - 00:23:40:03
Dr. Max Tegmark
It's a great question. I'll say two things. First of all, there wasn't any great development that happened all of a sudden just now, except the development that the media suddenly started talking about it a lot. It's been very quiet, steady progress in the industry, and a lot of people therefore were very much in denial about it until suddenly, you know, they could try it for themselves.

00:23:40:10 - 00:24:01:08
Dr. Max Tegmark
Second, there is a very important technical thing. The reason that progress was so slow, even after humans got defeated in chess, was because all the intelligence had to be put in by humans, and that takes time; humans always had to figure out how to make the machines do stuff. That's not how a child learns. Children can get much better than their parents because they can figure things out for themselves.

00:24:02:03 - 00:24:29:17
Dr. Max Tegmark
And the machine learning revolution has replicated that in silicon. And the same thing is happening now across so many other areas that humans used to be better at. So, for example, take language. We still can't figure out how to write a program ourselves, in some human programming language, that writes perfect English grammar, understands everything perfectly, or translates it into other languages. But it turned out we didn't need to.

00:24:30:03 - 00:24:39:17
Dr. Max Tegmark
We just have this machine learning tool which goes and reads everything on the internet, and it somehow figures it out in a way that we don't fully understand.

00:24:40:04 - 00:25:07:13
David Sirota
Can I stop you there? Because I think that's a really important point, and actually a really scary point. I mean, is it fair to say that we have built the seeds of a technology that can then evolve into something, and that we ourselves don't understand how this thing that we seeded, this thing that we created, evolved?

00:25:07:13 - 00:25:28:13
David Sirota
I mean, that's the classic, you know, we've-created-a-monster situation. It's one thing to say: I created a machine that did something bad, or could do something bad, and I understand how it was created and what it did. But that point about how we don't understand how the thing we created did what it did,

00:25:28:20 - 00:25:32:07
David Sirota
I mean, that to me seems like the central, scariest part of this, right?

00:25:32:09 - 00:25:54:18
Dr. Max Tegmark
That's unfortunately spot on. You know, when GPT-J apparently convinced this Belgian man to commit suicide, or when ChatGPT tried to persuade this journalist in New York to leave his wife, it wasn't because some ill-spirited dude at one of the companies was like: hey, I'm going to program this in to mess with these people. No, they had no idea that was going to happen.

00:25:55:00 - 00:26:16:22
Dr. Max Tegmark
They really didn't understand how it worked. As to the monster, though, we have to be clear on what the monster is that we're creating right now. A language model like GPT-4 is not itself the complete monster. It's just a so-called oracle, which can only answer questions. It's not itself able to directly do things in the world.

00:26:17:10 - 00:26:26:22
Dr. Max Tegmark
So the monster actually is the greater ecosystem that it's part of. The real monster actually has a name. Have you heard of Moloch?

00:26:27:02 - 00:26:28:21
David Sirota
Yes. That rings a bell.

00:26:28:21 - 00:26:56:23
Dr. Max Tegmark
Yeah. Moloch is the monster that so often makes us humans fall really short of our potential by putting us in these weird races to the bottom. If you have a bunch of countries that end up overfishing part of an ocean so that everybody loses out: everybody saw what was happening, but nobody could do anything about it, because if they stopped overfishing, the others would keep going, so everybody had an incentive to keep going.

00:26:57:16 - 00:27:17:01
Dr. Max Tegmark
And arms races are like that. Today, when people try to slow things down, the most common response is: but China, you know. And that's, of course, what Chinese researchers probably get told too, if they want to slow down. So the interesting thing is, now we have all these companies led by people whom I've had the honor to speak with myself,

00:27:17:01 - 00:27:47:07
Dr. Max Tegmark
good people, who generally realize that this is really scary, but who also realize they can't stop alone, because some other companies are going to crush them. So these commercial forces just keep making us build ever more powerful things that we understand less and less. So the monster itself right now is actually a human-machine hybrid: it includes corporations under these weird commercial pressures and this super powerful A.I.

00:27:48:03 - 00:28:12:21
Dr. Max Tegmark
And that's actually kind of interesting: a match made in heaven, or somewhere else. So now we're getting this artificial intelligence, which is partly made of humans, like an old-fashioned corporation, married with the artificial kind, to just make ever more profit. And as a democracy, we are losing control over this monster in a big way.

00:28:13:02 - 00:28:32:15
Dr. Max Tegmark
And the thing that keeps me awake at night is I see a future where soon this human-machine hybrid monster is going to have less and less need for humans. It can start to replace more and more of the human roles with automated roles, and we might very soon be in a situation where the machines are so smart you don't need any humans at all.

00:28:32:23 - 00:28:37:18
Dr. Max Tegmark
And by that point, of course, it's going to be too late for us to do anything about it.

00:28:38:01 - 00:29:03:01
David Sirota
And before we get to that, I just want to go back to something technical that I think is important for people to understand, as I've been learning about this: the difference between levels of artificial intelligence. There's AGI, artificial general intelligence, and there is superintelligence. I want you to tell us the difference between those two things.

00:29:03:08 - 00:29:18:13
David Sirota
And when you tell us the difference between those two things, tell us what we know, or what we think we know, about how long it may take to go from artificial general intelligence to superintelligence.

00:29:18:17 - 00:29:36:19
Dr. Max Tegmark
Let's start with general intelligence, without the word artificial in front of it. That's the easiest one to explain, and that's what you have; that's what a human child has. A human child can get quite good at anything if they put in enough effort to learn it, right? And artificial general intelligence just means non-biological general intelligence.

00:29:37:10 - 00:30:04:21
Dr. Max Tegmark
The artificial intelligence we have today is not quite that; it's always narrower. Yeah, it can multiply numbers better than us, it can memorize massive databases much better than we can, but there's still a lot of things it cannot do. The list of things that machines cannot do better than us keeps shrinking, as you've probably noticed. But that's narrow intelligence, stuff that falls short of general intelligence.

00:30:05:09 - 00:30:33:03
Dr. Max Tegmark
What about superintelligence? That's intelligence which is just way, way above general intelligence, much like most people would agree that our intelligence is just way above that of a cockroach, for example. Why might that happen really soon? It seems really unlikely at first, right, if it took so long and we still haven't quite gotten to the general kind. Well, the whole reason it's taken so long so far is because we humans have had to develop it.

00:30:33:11 - 00:31:07:18
Dr. Max Tegmark
We had to invent it. We had to build it. And we're slow. But remember, by definition, general intelligence means you can do all jobs. That includes the job of A.I. development; it includes your job, and it includes my job. So as soon as we get there, that means Google can replace its thousands and thousands of A.I. developers with machines that don't need any salary except some electricity, can work 24/7, and run dramatically faster than humans. And you can make millions or billions of these virtual employees if you need to.

00:31:08:11 - 00:31:35:22
Dr. Max Tegmark
So it's very likely that the timescale for further improvement will shrink a lot. It's not going to be the human timescale of years anymore. Maybe it'll be months, weeks, hours, minutes. You'll get to the point where you have artificial intelligence which is further beyond us than we are beyond squirrels, for example. And at that point it's pretty obvious that there's a real possibility we're going to lose control over it.
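The shrinking-timescale argument Tegmark sketches here can be put in rough quantitative terms. Below is a toy back-of-envelope model in Python; the numbers and the function are invented for illustration only, not anything from the conversation. The point it demonstrates: if each AI generation designs its successor some constant factor faster than the last, the total time for an unbounded number of further generations is a convergent geometric series.

```python
# Toy model of recursive self-improvement (illustrative only).
# Assumption: each generation is designed `speedup` times faster
# than the generation before it took to design.

def total_design_time(first_gen_years: float, speedup: float, generations: int) -> float:
    """Sum the design time over successive generations, where each
    generation takes 1/speedup as long as the previous one."""
    total = 0.0
    step = first_gen_years
    for _ in range(generations):
        total += step
        step /= speedup
    return total

# If the first self-improvement takes a year and each one is 2x faster,
# thirty generations still finish in under two years total.
print(total_design_time(1.0, 2.0, 30))
```

With any speedup factor greater than 1, the total stays bounded no matter how many generations you add, which is the formal version of "years collapse into months, weeks, hours, minutes."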

00:31:36:04 - 00:32:03:16
David Sirota
Right. There's geologic time, there's human time, and then there's computer processing time for AI. It's obvious those are different speeds. And the speed of the processing time, when you think about the computers essentially teaching themselves things, and they never have to sleep, they never have to eat, and there can be thousands and thousands, millions and millions of them doing this.

00:32:03:16 - 00:32:16:03
David Sirota
I mean, that's a speed that I think is hard for the human mind to get a handle on. So let's talk about the warning that you and your organization put out there.

00:32:16:04 - 00:32:41:03
Dr. Max Tegmark
Yeah, and I'm so glad you're bringing up these fundamental, basic facts here. You know, I can think about maybe one thing per second. My laptop that I'm using right now to talk with you does about 1 billion things per second. Also, the amount of stuff I can put in my head: you know, I have maybe 100 terabytes of information here.

00:32:41:03 - 00:32:59:05
Dr. Max Tegmark
That sounds like a lot, right? But it's nothing compared to what you can put in a big server farm somewhere. Once you get beyond the human scale and start going toward superintelligence, the laws of physics do put a limit on how smart it can get, but that limit is about a million million million million times above where we are.
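The orders of magnitude Tegmark is gesturing at are easy to check with a quick calculation. The figures below are the rough, order-of-magnitude estimates from the conversation itself, not measurements:

```python
# Ballpark figures quoted in the conversation (order of magnitude only).
human_thoughts_per_second = 1     # "one thing per second"
laptop_ops_per_second = 10**9     # "about 1 billion things per second"
human_memory_terabytes = 100      # "100 terabytes of information"

# Raw serial-speed gap between deliberate human thought and a laptop.
speed_gap = laptop_ops_per_second // human_thoughts_per_second
print(f"Speed gap: {speed_gap:,}x")  # Speed gap: 1,000,000,000x

# A single server rack today already stores petabytes, i.e. tens of
# times the ~100 TB head estimate; a server farm holds vastly more.
```

The comparison is crude by design; the point is only that even consumer hardware is about nine orders of magnitude faster at raw serial operations than conscious human thought.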

00:32:59:17 - 00:33:25:08
Dr. Max Tegmark
I've started to think of AGI as standing informally not for artificial general intelligence, but for artificial godlike intelligence. To come back to your question, then: why did we warn people? Because people at the top of the field of AI have warned about this for close to 100 years now. Alan Turing warned about this. Norbert Wiener warned about this in the 1960s.

00:33:25:16 - 00:33:47:07
Dr. Max Tegmark
Irving J. Good wrote this fantastic paper where he pointed out the simple fact that once you get to AGI, you can likely get this recursive self-improvement going, and we humans will soon be left behind in the dust. And he pointed out that that will be the last invention we humans ever need to make, because after that, machines can invent everything for us.

00:33:47:17 - 00:34:17:09
Dr. Max Tegmark
Provided, he says, that the machine is docile enough to let us control it. So what all of these folks warned about for all these decades was that when we start getting close to AGI, we need to make sure we don't proceed any further until we figure out how we're going to control this. One way is making the machines always do what's good for humanity because they obey some humans.

00:34:17:09 - 00:34:41:12
Dr. Max Tegmark
Another is that, even if we can't control them, we've made sure that their goals are aligned with ours. Sadly, the progress in AI has gone much faster than our progress in figuring out how to align it with what's good for us. That's why I was involved in launching this open letter saying: hey, let's take a six-month pause and set in place safety standards that future systems need to meet.

00:34:41:17 - 00:35:12:19
David Sirota
And I want to talk about those standards. But, you know, I'm thinking so much about the speed of this stuff. I think about it this way: it took thousands and thousands of years of natural, carbon-based evolution, survival of the fittest, natural selection and the like. And you had this great term, "carbon chauvinists," for people who seem to think that intelligence can only come from organic life.

00:35:13:00 - 00:35:52:09
David Sirota
Natural selection ultimately created the human brain. But a computer-powered intelligence can essentially speed up that process, developing intelligence through computer processing power at a much faster pace than natural selection and natural evolution. So those are the things I think we have to think about. And I think you're right in pointing out that just because we're an organic species, a carbon-based being, doesn't mean we have a monopoly on intelligence.

00:35:53:02 - 00:36:19:00
David Sirota
And I think that's hard for people to understand, because a lot of people think: well, there are people, there are animals, and there are robots and computers. These things certainly are different. But it doesn't mean that the robots and the computers can't have intelligence too. And obviously there are philosophical questions that come with this, like: what does being alive mean?

00:36:19:00 - 00:36:37:06
David Sirota
I mean, if something is superintelligent but it's not carbon-based, it's a machine, is that alive? Does it have feelings? Those are kind of philosophical questions, but I want to bring it back to the point that you made about putting standards in place. So let's dig down into that, the six-month pause.

00:36:37:11 - 00:36:51:01
David Sirota
Okay. Private companies don't want to do that. As you mentioned, they're in a kind of technological arms race, because if one of them doesn't do it, they think another one will do it and make money off it, and the like. And that's a bad kind of dynamic for there to be.

00:36:51:01 - 00:37:07:20
David Sirota
But let's say you could wave a wand and put some rules in place, right? I mean, at a policy level, where you could pass a law, or you could, well, I don't know, write a specific set of code into the artificial intelligence. What specifically are we talking about there?

00:37:08:00 - 00:37:32:13
Dr. Max Tegmark
Yeah. So coming back to your question, what sort of safety standards do we want? Well, let's start by just becoming like biology. You know, in biotech, if you want to launch a new vaccine, for example, you can't just start selling it in supermarkets before you've first demonstrated that it's safe. There is a regulatory authority in every country you have to go through, and you provide evidence.

00:37:32:17 - 00:37:55:13
Dr. Max Tegmark
Evidence that it's actually going to do more good than harm; then you can sell it. AI researchers, in part because of lobbying, I think would have a very hard time getting sympathy from their biotech friends, because they would basically be saying: no, no, no, biotech should be free, anyone who wants to should be able to sell medicine openly on the Internet, and if people want to buy it, they buy it.

00:37:56:02 - 00:38:20:07
Dr. Max Tegmark
No. If it's something that has the potential of creating harm, and AI obviously has the potential to create way more harm than biotech ever has, the onus should be on the companies to demonstrate that their products are safe. If you come and say, well, I have no idea really how this works, but I'm pretty sure it's safe, you know, that wouldn't fly in biology.

00:38:20:07 - 00:38:35:07
Dr. Max Tegmark
And when it's AI systems that actually impact people's lives, it shouldn't fly either. So with that basic principle, I think it shouldn't be the responsibility of politicians to have to figure out what the safety standards should be. It should be the responsibility of the companies to demonstrate the safety.

00:38:35:14 - 00:38:59:11
David Sirota
Yeah, I mean, I think about it in terms of pharmaceuticals as an example. There are scandals in pharmaceuticals: there are pricing scandals, obviously, but there are also scandals when the pharmaceutical companies have too much control over something like the FDA. But the FDA exists to make sure that when drugs come to market, the drugs have been tested and are safe.

00:38:59:11 - 00:39:29:17
David Sirota
And if a drug company can't get FDA approval for its drug, then the drug doesn't go to market. That's how the system is supposed to work, and there doesn't seem to be any kind of system like that when it comes to artificial intelligence. Is part of the reason it doesn't exist that it's hard to define exactly what AI is? In other words, we know what a drug is, okay? It's a vaccine or a pill or whatever.

00:39:29:17 - 00:39:55:02
David Sirota
It's a chemical compound. Artificial intelligence seems to be a kind of broad term for potentially many different applications of the technology. And then there's the question of saying something is safe: if artificial intelligence can learn on its own, something may be safe now, but how do we know it won't learn to be unsafe later?

00:39:55:05 - 00:40:19:10
Dr. Max Tegmark
No, those are excellent points. I don't think the main reason we don't have effective regulation of AI is because it's hard to define. In fact, that's just one of the favorite excuses lobbyists like to give: oh, it's just too hard to define. I think there are two real reasons. One is lobbying. Of course, no company wants to be regulated.

00:40:19:10 - 00:40:45:23
Dr. Max Tegmark
And there was a lot of resistance from tobacco companies to being regulated too, as I recall. So it's just natural; you can't blame them for defending their interests. The second reason is it's just gone so fast. The technology has progressed much quicker than policymakers are used to responding to. And I think even for a lot of the researchers, this happened so quickly that they're still used to AI not having any real impact on society, being more like a philosophy, which there's no need to regulate.

00:40:46:00 - 00:40:46:13
Dr. Max Tegmark
In my opinion.

00:40:46:23 - 00:41:27:11
David Sirota
I want to get into the arguments that are being made by the industry to prevent regulation. You had a piece in Time magazine likening much of this situation to Don't Look Up, the movie that I helped co-create. You've likened artificial intelligence to the asteroid in the movie, and you've likened the response to the asteroid in the movie to the response we've seen, whether from private companies, lobbyists, and the like, to the concerns and the push for regulation.

00:41:27:11 - 00:41:40:11
David Sirota
So why don't you give us a sense of some of the most prominent arguments that basically say either there's nothing to be afraid of, or that even if there are things to be afraid of, we shouldn't really do anything about it.

00:41:40:11 - 00:42:19:14
Dr. Max Tegmark
Sure. So to start answering your question about what the arguments are, let's be clear what the asteroid is. The asteroid that I wrote about in that Time article is superintelligence. It's not losing our jobs, etc.; plenty of people are talking about that. The asteroid is the idea that we are on the verge of building something that we could completely lose control over. First of all, our democracy might get killed by some tech company effectively becoming our government, by getting a monopoly that nobody else can compete against, by having billions of these basically free digital employees that nobody could compete with.

00:42:20:09 - 00:42:36:22
Dr. Max Tegmark
And then I think it's very likely that at some point someone in a tech company will lose control over the tech when it gets too smart, and the machines themselves will be in charge. And, you know, this idea that we could get hit by this kind of asteroid isn't new. This was something Irving J.

00:42:36:22 - 00:42:57:23
Dr. Max Tegmark
Good warned about in the sixties, as we discussed. It's the default outcome unless we can figure out a way of making it safe. I don't want to put a damper on your afternoon, but I think the most likely outcome, if that happens, is that humanity goes extinct. It's not going to be like in Terminator, which freaked people out, where robots come after you and are directly trying to kill you.

00:42:58:11 - 00:43:21:03
Dr. Max Tegmark
It's more like: yeah, you're trying to build a hydroelectric dam to get some green energy, and there are some animals living there. You know, tough luck for the animals, right? It's not that you hated the animals; you just had another goal that wasn't really aligned with theirs. And you're smarter, so you're going to get your way. If superintelligences take over Earth, they're probably going to want to compute all sorts of cool stuff that we don't understand.

00:43:21:03 - 00:43:41:22
Dr. Max Tegmark
So they'll probably want to maximize the amount of computing they can do here. Maybe they'll want to cover Earth with computer facilities, and they'll just chop down the rainforest, the way we do to less intelligent species. They might want us out of the cities too, so they can build their own. Maybe they'll feel that the oxygen in the atmosphere makes the machines rust too much.

00:43:41:23 - 00:44:10:04
Dr. Max Tegmark
So they'll get rid of the oxygen, whatever. We are like a sideshow. They will, by default, pay about as much attention to us as we pay to ants and other life forms, if they're superintelligent. And it's just very inconvenient to have to share the planet with more intelligent entities that don't really care about us. That's why it's so important that we don't build this until after we've figured out how to align it: how to make it actually do what's good for us and, ideally, do what we want it to do.

00:44:10:10 - 00:44:28:15
Dr. Max Tegmark
So that's the asteroid: it's superintelligence. And I find it so stunning how almost no one wants to talk about it. I've been on a gazillion interviews since we did the open letter, and I can tell even the journalists don't want to talk about the asteroid. They're like: yeah, let's talk about jobs, let's talk about disinformation.

00:44:29:06 - 00:44:48:20
Dr. Max Tegmark
And as soon as I'm like, what about the asteroid? They go: oh, no, but that's long-term stuff, that's too speculative. And I'm like, what's so long-term about two years from now? You know, I've been warning about this and working very actively to educate people about this for about nine years now.

00:44:49:02 - 00:45:09:16
Dr. Max Tegmark
I've organized conferences bringing together the worried experts, and I wrote this book, Life 3.0, warning about this, and so on. And I was wrong about something, which I probably would not have been wrong about if I had watched your movie earlier: I couldn't, in my wildest dreams, imagine that we humans would have such a lackadaisical response to it.

00:45:09:16 - 00:45:30:14
Dr. Max Tegmark
And then look what happens: a bunch of companies openly declare, we are going to build AGI, and then they spend all their time doing it. Even when they themselves admit that they're starting to get close, and the CEO of OpenAI, for example, openly talked about how the worst-case scenario is lights out for all of us.

00:45:30:21 - 00:45:51:09
Dr. Max Tegmark
Even then, there's still a lot more attention paid to reality TV shows and other things like that than to this. So when I watched the movie, it was like: oh my God, how could I be so dumb and not realize how dysfunctional our public conversation has become?

00:45:53:18 - 00:46:14:21
David Sirota
That's it for today's show. As a reminder, our supporting subscribers who get Lever Time Premium are going to get to hear next week's bonus episode, which is the extended interview with Dr. Tegmark, where he goes deep on the specific kinds of safety measures that could be taken to prevent the most dangerous forms of AI from threatening society. So listen to Lever Time Premium.

00:46:14:21 - 00:46:35:06
David Sirota
Just head over to LeverNews.com to become a supporting subscriber. When you do, you get access to all of The Lever's premium content, including our weekly newsletters and our live events. And that's all for just eight bucks a month or seventy bucks for the year. One last favor: please be sure to like, subscribe, and write a review for Lever Time on your favorite podcast app.

00:46:35:11 - 00:46:54:16
David Sirota
The app you are listening to right now: take 10 seconds and give us a positive review in that app, and make sure to check out all of the incredible reporting our team has been doing over at LeverNews.com. Until next time, I'm David Sirota. Rock the boat. Lever Time Podcast is a production of The Lever and The Lever Podcast Network.

00:46:54:20 - 00:47:01:10
David Sirota
It's hosted by me, David Sirota. Our producer is Frank Cappello, with help from The Lever's lead producer, Jared Jacang Maher.