Lever Time - Premium

from The Lever

LEVER TIME PREMIUM: The Race To Artificial Super Intelligence

Episode Notes


On this week’s bonus episode of Lever Time Premium, exclusively for The Lever’s supporting subscribers, David Sirota continues his conversation with leading artificial intelligence expert Max Tegmark about the growing debate around AI. In this special extended interview, Tegmark describes the safety measures that could prevent the emergence of the most dangerous forms of AI. If those precautions are put in place, he argues, AI could cure disease, stop climate change, end poverty, and just maybe set us free.

Thank you for being a paid subscriber! If you're having issues subscribing or listening to Lever Time Premium, email us at support@levernews.com.

If you’d like to leave a tip for The Lever, click the following link. It helps us do this kind of independent journalism. levernews.com/tipjar

Transcript

00:01:57:00 - 00:02:20:03
David Sirota
Hey there and welcome to this week's bonus episode for paid subscribers. You probably heard the first part of our interview about artificial intelligence with Dr. Max Tegmark. Today, we're going to be sharing the extended interview, where Dr. Tegmark goes deep on the specific kinds of safety measures that can be taken to prevent the most dangerous forms of AI from threatening society.

00:02:20:13 - 00:02:43:18
David Sirota
He also discusses how, if those safety measures are put in place, AI could do everything from curing disease to stopping climate change to ending poverty. Thanks again for being a supporting subscriber and for funding the work we do here at The Lever. Now here's that bonus interview. Well, I thought a lot about it, obviously, since we did the movie and the reaction to the movie.

00:02:43:18 - 00:03:08:01
David Sirota
I think some of it, well, there are a lot of things. There's not knowing what you're hearing: so many things to be afraid of, whether it's climate change, superintelligence, COVID, etc., that it's hard to know what to actually be afraid of or what to focus on, especially when there are things that seem even more germane

00:03:08:01 - 00:03:39:19
David Sirota
in your daily life, like struggling to survive economically, things that you really do have to worry about in the absolute here and now. But I also think that some of these things are so terrifying, so scary, that there's an impulse to just look away, an impulse to not focus in on it. And I think obviously that's part of the problem, because putting your head in the sand, you know, the dinosaurs that put their heads in the sand when the actual asteroid hit the Earth, that didn't save them.

00:03:39:19 - 00:04:04:11
David Sirota
Right. I mean, that's not the way to actually deal with the problem. And look, there are multiple problems that superintelligence may raise, everything from "it will take our jobs" to "it can do homework assignments so that kids won't actually have to learn." There's a whole spectrum of things to be concerned about, all the way up to:

00:04:04:11 - 00:04:06:22
David Sirota
it's going to wipe out all of humanity.

00:04:06:22 - 00:04:29:23
Dr. Max Tegmark
I can give you a compliment, because in the film you have this beautiful scene in the Oval Office where the chief of staff of the president, and I hope nobody's going to be pissed at me for being a movie spoiler, says: yeah, asteroid, whatever, but we get told every day the world is going to end because of this, because of that, because of that.

00:04:29:23 - 00:05:01:10
Dr. Max Tegmark
And you even had rogue AI on the list that you scornfully rattled off there. That's right. That's right. What's so crucial about this is that for both the AI threat and the asteroid threat, we actually know what we need to do about it. It's very preventable. I know enough about astrophysics that if you put me in charge of the asteroid deflection mission, and we had some time, I would know exactly who to hire next.

00:05:01:12 - 00:05:18:12
Dr. Max Tegmark
We can actually deflect it; we could have a whole separate podcast on how to do it. In the movie, they don't, because they do this combination of ignoring it and then freaking out about it. And it's the same with AI. Look, we humans have been here for over 100,000 years. What's the freaking rush? Why can't we take another ten years to solve the safety

00:05:19:01 - 00:05:37:10
Dr. Max Tegmark
problem first and then continue, and then enjoy a much higher standard of living and help life flourish? Not for the next election cycle, but for billions of years. And if you want to be really bold, not just on Earth, but in so much of this amazing cosmos. We can totally do all that, right?

00:05:37:16 - 00:06:05:10
David Sirota
Yeah. And I think that's such an important point. So I think it raises the other question: the potential benefits of AI. The companies pushing AI now obviously see a profit motive in this, but baked into the idea of profit is that AI can produce things that people will want, things that will better society and that people will therefore pay for.

00:06:05:18 - 00:06:29:10
David Sirota
So I just want to turn to the potential upsides of this for a second. I mean, what are the potential upsides of AI for people in their regular lives? I feel like it's potentially everything from mundane things all the way up to AI and superintelligence developing cures for diseases, cures that we never would have invented ourselves.

00:06:29:10 - 00:06:40:03
David Sirota
So just give us a sense of the gamut of the potential upsides of AI that these companies are pursuing and are excited about.

00:06:40:07 - 00:07:04:02
Dr. Max Tegmark
There are huge upsides, and I promise to talk about them. I just want to do my homework first and answer all your questions fully about some of the main dismissals that are used, because I think it's so important for people listening to be prepared when they get told those things. So, number one: it's inevitable, so just stop fighting it.

00:07:04:18 - 00:07:25:01
Dr. Max Tegmark
People, in other words, conflate "is" with "ought." They say it is inevitable, therefore you should want it. And that's equivalent to saying Moloch is going to defeat us and we're all going to go extinct, so let's just surrender to Moloch. You know, you could just as well say the Ukrainians could be like, okay, it's inevitable that we're going to lose this.

00:07:25:01 - 00:07:47:07
Dr. Max Tegmark
Let's just surrender to Putin. You know, why is that a better argument with Moloch? Why should we surrender to Moloch, especially when it's actually not true that it's inevitable, any more than it's inevitable that we're going to get destroyed by an asteroid? There are incredibly talented AI researchers working on figuring out how to keep all this safe and beneficial so that we get the upside. Give them a little more time.

00:07:48:01 - 00:08:12:04
Dr. Max Tegmark
Another argument I hear all the time is just a two-word argument: "But China." As if it somehow matters, if humanity gets killed off by these incredibly alien minds, whose minds they were. You know, why should we be less scared about getting controlled by alien minds than by Chinese minds, or by this tech company or that tech company?

00:08:13:11 - 00:08:35:07
Dr. Max Tegmark
The sad truth is, nobody's going to care whose AI it was that got out of control and wiped us out. We'd think of it as just the dumbest thing we humans ever did, and that's all that really matters. And then we're not going to think anymore, because we won't be around, right? In short, many people are still under this delusion that we're just going to build AGI and it's going to stop there.

00:08:35:07 - 00:08:51:14
Dr. Max Tegmark
And then whoever has it first is going to be, like, really muscular and conquer the other half of the planet and rule happily ever after. Whereas, of course, what's actually going to happen is that very shortly after that, we're going to lose control. If we build it, we're going to lose control of our democracy, and then we're going to lose control entirely, and we're probably going to be gone.

00:08:52:09 - 00:09:15:14
Dr. Max Tegmark
The China thing is a red herring, and I really think it's quite cynically used now and again, especially as a way for Democrats and the tech lobbyists and the tech companies to persuade Republicans not to limit their power, because this China argument works so excellently on Republicans. A third argument I get a lot is that it's impossible, that an asteroid can never strike us.

00:09:16:02 - 00:09:39:09
Dr. Max Tegmark
I don't want to shame any particular people, but there is a famous MIT professor who told me not so many years ago that he thinks it's not going to happen for 300 years. So I asked him how certain he was, and he said 100 percent. I think, hopefully, he's a little less confident now, but I still get that all the time.

00:09:39:09 - 00:10:09:07
Dr. Max Tegmark
And I think that's carbon chauvinism, plain and simple. And the fourth one, which you also have so nicely in the movie, is where they actually launch a deflection mission and then call it off, because they realize there are precious metals in the asteroid. People can see the value in it, and so it's like, let's just make it come here, and let's make sure the asteroid hits the US first so we can claim it, or they have some super risky plan for how they're going to get the stuff out.

00:10:09:20 - 00:10:45:11
Dr. Max Tegmark
I think all of these arguments would ultimately go away if we can just make people take seriously that there is an asteroid, it's heading towards us, and it's not in anybody's interests, Chinese or European or American, for it to hit us. And if we can just pause a little bit of the riskiest stuff and steer it in a more beneficial direction, we'll have literally billions of years. We could have three billion years on Earth, if we use the superintelligence to help move us a bit farther from the aging sun, and then we can have billions more years of awesomeness.

00:10:45:20 - 00:10:55:07
Dr. Max Tegmark
So rushing into this and squandering all the upside would be incredibly pathetic. That's not the way to go. And that's a segue to your last question about what's the upside.

00:10:55:12 - 00:11:00:06
David Sirota
Yeah, the upside. I mean, what do the triumphalists say, and what do you think is realistic?

00:11:00:06 - 00:11:24:04
Dr. Max Tegmark
I think all the upside people brag about is real. I mean, let's be honest, everything I love about civilization is a product of intelligence, right? Do I want to go back to a 30-year life expectancy for humans? You know, I just had a baby. Do I want him to have a 30-year life expectancy and die of some stupid pneumonia that you could cure with antibiotics?

00:11:24:04 - 00:11:56:20
Dr. Max Tegmark
Of course not. Do I want to not be able to talk with you on Zoom? Of course not. This is all the product of human intelligence, so it just shows how much good intelligence can bring. It has basically transformed us from being very disempowered, mostly just running around on this planet for over a hundred thousand years trying not to starve to death, to becoming the captains of our own destiny, which I find incredibly inspiring, that we can start controlling the world around us with our intelligence.

00:11:56:20 - 00:12:17:02
Dr. Max Tegmark
And if we can amplify our intelligence with artificial intelligence, clearly we can use this to solve so many of these problems that have stumped us. You know, I was in the hospital not that long ago with a friend of mine who was told that she had an incurable cancer. And I thought to myself, this is not incurable.

00:12:17:07 - 00:12:38:06
Dr. Max Tegmark
It's just that we haven't been smart enough to figure out how to cure it yet. Of course, with AI, we can. And we don't have to wait thousands of years. If we get to AGI, or a little bit beyond, we can dramatically accelerate progress and probably cure all the main diseases we have. We can eliminate poverty. We can create a sustainable climate on the planet.

00:12:39:11 - 00:13:06:12
Dr. Max Tegmark
If we solve this problem, how to actually make sure that the AI works for us, not against us, then we get the master key to unlock the power to make our universe really awesome and help life flourish like never before. And when I wrote that book, I think, maybe in hindsight, I made the mistake of emphasizing the upsides so much that people failed to notice the warning I had in there.

00:13:07:10 - 00:13:32:12
Dr. Max Tegmark
But maybe it's also because I tend to be a pretty optimistic person who gravitates towards the upside. We are falling so short of our potential right now as a species, partly because we haven't been smart enough to solve problems, but also very much because of Moloch, which is pitting us against each other in these pointless geopolitical pissing contests.

00:13:32:12 - 00:13:32:21
Dr. Max Tegmark
Yeah.

00:13:33:06 - 00:14:10:08
David Sirota
The dynamics are wrong. The incentives are wrong. So I want to ask one final question here about all of this. I just reread I, Robot, the old classic Isaac Asimov book, and it's actually a book of philosophy wrapped in a story about robots and rules. And a lot of it focuses on, you know, the idea that these are superintelligent robots, basically, but there are rules baked into their programming.

00:14:11:12 - 00:14:57:04
David Sirota
And I wonder, I presume it's not as simple as that, but let's look ahead to a world where we have taken seriously the risks of superintelligence. Knowing that a superintelligence can continue learning and continue evading whatever kinds of strictures we put in it, do you think this is even a controllable kind of technology, not just with, like, you know, the three laws of I, Robot? I guess the fundamental question is: can superintelligence, as an entity, as a thing...

00:14:57:17 - 00:15:07:14
David Sirota
Can it ever actually be safe if it can always learn and always evolve? Is it even possible to make this thing safe?

00:15:07:14 - 00:15:30:17
Dr. Max Tegmark
I would say yes, I believe it can be made safe. But no, it will not be safe if it's built in the next five years, before we solve the technical challenges of how to make it safe. So we have to find that solution, because the default, you know, if you just put some other much smarter species on the planet, well, just look how it went for the woolly mammoths when we showed up, right?

00:15:30:17 - 00:15:52:01
Dr. Max Tegmark
That's not what we want. Let me share why I really think it's possible. I have a son who is going to be five months old tomorrow, right? No shade on him, you know, but his mommy is much smarter than he is at the moment. He doesn't have to worry about that, though, even though she has actually developed her intelligence a lot throughout her life; she got smarter and smarter and smarter.

00:15:52:01 - 00:16:20:14
Dr. Max Tegmark
But this basic instinct to take good care of babies, and particularly her own, stayed. In the same way, there's a lot of very creative thought that has gone into this question: how can you make a self-learning entity ensure that when it keeps learning more and improving itself, it retains the goals that it has? It makes sense from its own point of view, too. Suppose I tell you, David, I can give you an intelligence upgrade.

00:16:21:14 - 00:16:40:00
Dr. Max Tegmark
Suddenly you're going to have perfect recall, memory for up to 100 terabytes, and you can do this and you can do that. And if you want to speak a new language, just snap your fingers. But, you know, if I told you that this will also make you want to murder your best friends, are you going to accept the upgrade?

00:16:40:16 - 00:16:42:11
David Sirota
No, I'm not going to accept the upgrade.

00:16:42:12 - 00:17:07:16
Dr. Max Tegmark
And I wouldn't either, right? So that gives you an incentive to make sure you understand the upgrade well enough to be convinced that it's not going to change your fundamental goals in any way when it happens. It's hard to solve this, and there are even more basic problems you have to solve first. This one is kind of the third; people are working on the other two.

00:17:07:21 - 00:17:29:23
Dr. Max Tegmark
You go through these with kids too, you know. First, you have to make sure machines understand our goals. So Leo, even when he turns five months tomorrow, is still not going to understand, or be able to understand, enough of the goals I have and the responsibilities I feel, right? And then when he's a teenager, he'll understand what I want,

00:17:29:23 - 00:17:50:07
Dr. Max Tegmark
but might not want to adopt my goals. That gets to the second of the three challenges, where you have to make sure that they will adopt our goals. Humans go through this phase where you have a few years as a parent when the kids are smart enough to get what our values are, and malleable enough that, if we're good parents, we can instill those values in them.

00:17:51:11 - 00:18:27:18
Dr. Max Tegmark
All of these are still unsolved with machines, but I'm quite hopeful that if we can have a few more years with a greatly amplified effort to study these questions, all of them can be solved. We can figure out how to make machines understand the goals that we want them to have, to help humanity flourish; adopt those goals as their own; and then retain them, so that we have a guarantee they will always keep those goals as they keep learning more and becoming smarter.

00:18:28:08 - 00:18:44:10
Dr. Max Tegmark
And if we can solve those three, we're in really, really good shape as a species, and our future will be way better than otherwise. So this is the upside I hope we do not squander by being so greedy that we rush into it prematurely and flop.

00:18:44:16 - 00:19:24:18
David Sirota
Dr. Max Tegmark. He's the president of the Future of Life Institute. He's also the author, most recently, of the book Life 3.0: Being Human in the Age of Artificial Intelligence. Max, thank you for taking the time with us today. I don't feel like my fears from watching Terminator have subsided, but I do feel much more intelligent after this discussion about what artificial intelligence is, what superintelligence is, and what can be done to make it safe and not such a huge risk, as you articulated.

00:19:24:18 - 00:19:26:00
David Sirota
Thanks so much. I really appreciate it.

00:19:26:00 - 00:19:28:09
Dr. Max Tegmark
Thank you so much, David.