National Security Commission on Artificial Intelligence Conference – Conversation with Dr. Henry Kissinger: AI for Humanity


Transcript

Good afternoon. My name is Rebekah Kennel, and I am a Director of Research and Analysis at the Commission. It is my distinct pleasure today to introduce the next two speakers of our fireside chat, on a topic focused on AI for humanity. So please welcome the two speakers, who need no introduction: Doctor Henry Kissinger and Doctor Nadia Schadlow.

Thanks so much, Rebekah. Let’s see if we’re okay, can everyone hear me? Okay, good. So first of all, we’ll have to use our imagination a little bit, because there is no fireplace here. But we are thinking of it as a fireside chat. And I’m so grateful to have this opportunity today. Doctor Kissinger and I met several years ago now, I think, right, Doctor Kissinger? He has been a key person you can go to for advice, about both professional and career things as well as geopolitical events. Very few friends can do both. But Doctor Kissinger really needs no introduction. As all of you know, he’s one of the world’s most renowned geopolitical practitioners as well as thinkers. And he did all of that well before AI came into being. He also has that rare combination of a true intellect, and I really admire him for taking on something relatively new like AI after the height of his career. AI is pretty daunting, as someone also relatively new to it myself. Doctor Kissinger decided he wanted to do a deep dive into the technology and into the implications of artificial intelligence for our political systems and for geopolitics at large. As many of you know, he’s written two articles, both published in The Atlantic, in 2018 and 2019. I’d encourage you all to read both of them. Preceding those, he also wrote a book in 2014 called “World Order”. And one of the last chapters of that book… Sorry, it’s sort of going in and out. One of the last chapters of the book talks about the implications of technology. It has a really interesting insight. He talks about the ordering systems, sort of, for the world. So during the Age of Enlightenment, it was reason. In the medieval period it was religion. In this era it’s technology and science that help us sort out events, and I think that that’s a useful way to think about what we’re going to talk about. He called them, actually, the governing concepts for an age. (microphone interference) Articles that are relevant to the Commission as well, and I’ll draw out some of these and use them as questions to start out with. (microphone interference) Question in advance. First, he described AI as inherently unstable: AI systems are constantly in flux as they acquire and analyze new data. Now, to those of you in the audience who are national security professionals, stability is a key concept that we actually like to have in these systems. So there’s an inherent contradiction between the instability of AI and national security concepts. And that’s something I’d like Doctor Kissinger to talk about a little bit. But even preceding that, we’re here really, ultimately, as we talk about this competition and about the tension that the interim report also talks about, because ultimately this is a contest between two political systems, right? And we shouldn’t forget that, essentially. It’s fundamentally between two political systems and the impact that artificial intelligence will have on those systems. It’s about whether artificial intelligence will advantage open and democratic countries like ours, or authoritarian states. And that’s something I’d like to start off with, by asking Doctor Kissinger to talk a little bit about his views on that, and then we’ll move on to a couple of other questions. Thanks, Doctor Kissinger.

Thank you very much, Nadia. I’ve had the pleasure of working with Nadia on several projects, and I’ve seen her in the Advisor to the President job, and also when it ended. (laughs) (audience laughs) And we were on the Advisory Board, the Defense Advisory Board, together. So it’s a great pleasure to be here. So that you can calibrate what I’m saying, let me give you a few words about how I got into this field. I became a great friend of Eric Schmidt, who is today one of my best friends. He invited me to give a speech at Google. And before that, they showed me some of their extraordinary achievements, and I had barely met Eric before then, and I began my speech by saying, “I’m tremendously impressed by what I’ve seen, but I want you all to understand that I consider Google a threat to civilization, as I understand it.” (audience laughs) This was the beginning of our friendship. (audience laughs) And the next step in my being here was a conference in Europe, which had on its schedule a session on artificial intelligence. And I thought this was a great opportunity for me to catch up on my jet lag (audience laughs) and I was heading out the door when Eric, who was standing there, said, “This might interest you, and you really ought to hear it.” Except for that, I might’ve been spared (audience laughs) this occasion. So I went there, and somebody from DeepMind was explaining that he was designing a computer that would be able to play the game of Go, and he was confident that he could design it so that it would beat the champions of China and of Korea. And as you know, Go has 180 pieces for each side, beginning on an open board. And the strategic goal of the game is to constrict the ability of the opponent until they can’t move at all. But when you put your first piece down, it’s not like chess, where you have it lined up; you put your first piece down, and you don’t know how this is gonna develop. And it takes a long time to develop. So the idea that you could design a computer that could master this as a creative game seemed extraordinary to me, and I went up to this speaker afterwards and said, “How long will it be until we become Incas to the Spanish computers? That they will achieve intellectual dominance?” And he said he was working on that. (laughs) (audience laughs) And he is. So, over the years, Eric was kind enough to introduce me to a lot of artificial intelligence researchers, and I look at it not as a technical person. And I don’t challenge or debate the technical side of it. I am concerned with the historical, philosophical, strategic aspect of it. And I’ve become convinced that artificial intelligence and its surrounding disciplines are going to bring a change in human consciousness exceeding that of the Enlightenment, because of the inherent scope of the investigations it imposes. So, that’s why I’m here. And I gave a speech at Stanford a few weeks ago, at the opening of the Artificial Intelligence Center, and I said, it’s sort of absurd that I’m here. You people who sit in the audience, I said to them, you’ve written thousands of articles; I’ve written two, and one was a joint authorship, with Eric and one other person. And I said, “The only significance of my presence is in what I do: you people work on the applications, I work on the implications.” And I don’t challenge the applications. I think they’re important, they’re crucial, but frankly I don’t think you do enough.
You don’t go the next step, those of you who know something about the field, of what it means if mankind is surrounded by automatic actions that it sometimes cannot explain; it can explain what happens, but as I understand it, not always why it happens. So this is why I’m here, and it’s in that context that you are to assess what I’m saying. But I have put aside some other work for the last three years to work on this and to educate myself, because I think in the conceptual field that is the next big step for mankind.

[Nadia] Hopefully they listen to you, Doctor Kissinger. Did the Stanford audience listen to you?

I think the technicians are too modest, in the sense that they’re doing spectacular things, but they don’t ask enough about what it means. I would say the same of strategists. This is bound to change the nature of strategy and of warfare, because, and some of you can judge better than I how much it’s taken aboard here, I don’t think on the global field it is yet understood what this will do. It’s still handled as a new technical departure. It’s not yet understood that it must bring a change in the philosophical perception of the world. Much of human effort has been to explain the reality around it. And the Enlightenment brought a way of looking at it on a mathematical basis and on a rational basis. That was a huge departure already, that changed history fundamentally. But the idea that you can explore reality in partnership with what is out there, and that you explore it by means of algorithms, where you know what they will produce but you do not yet know why: that, when people start thinking about it, and perhaps they will, will fundamentally affect human perceptions. And this way of thinking, up till now, has historically been largely Western thinking. Other regions have adapted it from the West; I mean, the rationalistic way. As that spreads around the world, unpredictable consequences are going to follow.

In the end, are you optimistic in terms of AI and its interaction with democracy, and AI changing human cognition, as you’ve pointed out, and humans having explanatory powers where AI does not necessarily? There’s an interesting point that you made in some of your articles about how AI by its very nature is going to change human cognition and reasoning, because we will not have the experiences that AI will get to; AI will get there first, before us…

The point I made is that AI has consequences that we elicit, but we don’t always know why. And so, now, am I optimistic? First I would have to say, honestly, that the future of democracy itself, putting AI aside, is something that should concern us, because for a society to be great it has to have a vision of the future. That is to say, it has to go from where it is to where it has never been, and have enough confidence in itself to do it. When you look at too many democracies, the political contest is so bitter, and the rivalries are so great, that to get an objective view of their future is getting more and more difficult. Who would’ve thought the House of Commons could break down into a collection of pressure groups operating like the House of Representatives? But the House of Representatives is part of a system of checks and balances, while Britain is based on a unitary system that requires consensus for its operation. So what AI does is to inject a new level of reality, a new level of perceiving reality. Most people don’t understand that yet. Most people don’t know what it is. But I think that those of you who work on it are pioneers in an inevitable future. And when we think, in the Defense Department, about the future, this is a huge problem, because increasingly AI will help shape the approach to problems. For example, I was in office in the period that started with massive retaliation and then developed into various applications. But the key problem we faced, in the actual crises, as security advisor, was how do you threaten with nuclear weapons without provoking a preemptive strike on the other side? And the weapons themselves became more esoteric; even in terms of the ’70s, when we moved to fixed land-based missiles, they had a high potential for retaliation but next to no potential for being used diplomatically. Often, when histories of that period are written, they talk about the trigger-happiness of an administration that went on alert. When we went on alert, we went from level four to level three, which isn’t a high level of alert, but nobody, no newspaper, even knows that. But one reason we went on alert was because we could generate a lot of traffic, and you could see things that were being done: planes were being put in the air, and troops were recalled. But in themselves they were not yet threatening.

[Nadia] With AI you can’t see a lot of…

Well, even with mobile missiles you had trouble. And much of what goes on in AI: we believed that arms control was an important aspect, and what you know of AI makes it infinitely more important. But much of what you can do in AI you don’t want to put on the table as a capability to be restricted, because its secrecy is itself part of its strength. So, in the field of strategy, we are moving into an area where you can imagine capabilities of extraordinary power, even permitting tremendous discrimination. And one of your problems is that the enemy may not, if you so choose, know where the threat came from for a while. So you have to rethink what the elements of arms control are. You have to rethink even how the concept of arms control, if at all, applies to that world.

You have a nice line in one of the articles about how AI essentially upends all of the strategic verities that we’ve had as our way of thinking over the past 30 years, including arms control, including deterrence, including, as we talked about in the beginning, stability. But I wanted to ask you one sort of specific question and then I’ll open it up. So, are there situations in which, you know, going backwards, if you were at the White House again making decisions, are there situations in which today you would trust an AI algorithm to make a decision at that level, in the national security space, if you were faced with a tough decision? Are there areas where you could see AI algorithms helping national security decision-makers?

I think it will become standard that AI algorithms will be part of the decision-making process. But before that happens, or as that happens, the decision-makers have to think through the limits of it, and what might be wrong with it, and they have to test themselves in war games, and even in some actual situations, to make sure what degree of reliability they can give to the algorithms. And they also have to think through the consequences. When I talk about these things: I have studied a lot about the outbreak of World War I, because the disparity between the intentions of the leaders and what they produced is so shocking. Not one of the leaders who started the war in 1914 would’ve undertaken it if they had had any conception of what the world would look like in 1918, or even in 1917. None wanted an act of such scope. They thought they were dealing with a local problem, and they were facing each other down, but they didn’t know how to turn it off. Then, once the mobilization process started, it had to go to its end, in which a crisis over Serbia ended with a German attack on Belgium, neither of which had anything to do with the original crisis. But the attack on Belgium was an absolutely logical consequence of a system that had been set up, and that required a quick victory, and a quick victory could only be achieved in Northern France; so never mind that there’s a crisis in the Balkans, and that Germany and France are not directly involved. And the outcome: the only way to get an advantage in time over the possible mobilization of Russia was to defeat France, no matter how the war started. And it was a masterpiece of planning. One of the really juicy things is that the Germans had to knock out France within six to eight weeks. And the man who designed this plan allegedly said on his deathbed, “Make sure my right flank is strong.” So when the attack developed, and then Russia began to move in the east, the Germans lost their nerve and took two army corps out of their right flank, which is exactly where they were stopped; these two army corps were extracted while the important battles on both sides were taking place. I mention that only because if you don’t see through the implications of the technologies to which you’ve wedded yourself, including your emotional capacity to handle the predictable consequences, then you’re gonna fail. That’s on the strategic side. And how do you conduct diplomacy when even the testing of new weapons can be concealed, so that you really don’t know what the other side is thinking? And it’s not even clear how you could reassure somebody, if you wanted to. That’s a topic very important to think about. And so, as you develop weapons of great capacity, and even great discrimination, how do you talk about them? And how do you build a restraint on their use? And how do you convince the other side? I mean, the weapons in a way become your partner, and if they’re assigned certain tasks, how can you modify that under bad conditions? These are the key questions which have to be answered. And they will be, I’m sure, answered in some way. And so, that’s why I think you’re only in the foothills of the real issues that you’ll be facing if you go down that road. And you must not let the answer be that AI will exist and will save us.

Before I open it up to the audience, just a quick comment, because you are a geopolitical thinker, and you’ve talked about diplomacy and restraint: can you comment a little bit on how you see the evolution of the U.S., China, and Russia relationship, just in brief? And then I will open it up to the audience, but I think it would be a missed opportunity to have Doctor Kissinger here and not ask a question that’s a little bit broader.

Asking me for brief answers… (laughs) (audience laughs) A sign of great faith.

You’re getting set to go to China, and we’ve talked a little bit about some of your goals for that trip.

I look at this primarily as a strategic issue, that is, the impact of the societies on each other over an extended period of time, when they have such huge capabilities. Now, the conventional way, the historic way it’s been handled, is that some military conflict settles the relative position of the sides, sometimes at huge cost, but historically at a survivable cost. So the key question is: do we define our enemy and then conduct our policy from a confrontational point of view, and with confrontational language, at every stage? That is against my preference of looking at it as a strategic issue, in which at every moment you try to shape the environment to get, on the one hand, a relative advantage, but on the other hand give your opponent an opportunity to move towards a less threatening position. If your basic strategy is confrontation, then the other side loses nothing by being confrontational, because it’s there anyway. And therefore I believe one should put an element of potential cooperation into these strategic relationships. I studied this at one point: I was in office in the ’73 war, and there’s a little booklet by somebody who served in the Politburo as a note-taker. And if you go through that book, you’ll see that, on the one hand, they have arguments leaning towards involvement and counterplay, but on the other, there’s always somebody arguing about what we called détente, so that they didn’t ever go all out. And so we could outmatch them when we confronted them. So I favor a strategy of complexity. And I would like containment to evolve out of a diplomacy that doesn’t put it into a confrontational style. What that means is that we on our side have to know what our limits are, and we have to understand what we’re trying to avoid in addition to what we want to achieve. So we have to have strategists in high office. Which is not the way we elect people, but we’ve got to come to it; I’m talking about what we have to come to. When you look at the strategic designs of the 19th century, the Europeans had one of direct lines of alliance on both sides. The British, on the road to India, had a lot of alliances and friendships, but not such a precise system, so that when you got on the road to India, before you got very far, you’d meet a lot of resistance organized by the British, even though it was not proclaimed; and nobody ever quite made it, after the 19th century. So that’s what we have to develop, at least in some parts of the world. Now, I don’t put Russia into quite the same category, because Russia is a weak country; it’s a weak country with nuclear weapons. And one of its utilities is its existence, because by sitting there in the middle of Eurasia it guarantees, by its existence, the absence of Yugoslavian-type conflicts in the middle of Central Asia, which would draw in the Greek, the Turkish, the Persian, and all the other empires as well. So what I think we need is a way of thinking about the world in that category. The basic principle has to be: we cannot tolerate the hegemony of anybody over parts of the world we consider essential for our survival. So we cannot tolerate the hegemony of any country over Eurasia. But how to get there would require flexible thinking, and flexible technology. And we’ve never been faced with such a situation. And also, if you go to most universities, you will find many, the huge majority, that will contradict this approach. So, maybe I’m wrong.

So I’ll open it up now.

Almost unthinkable.

At least some of your ideas about that strategic design can find their way into the AI Commission report, so I’ll talk to Ilya about that. But I’ll open it up now to questions. There is a glare, so it’s hard to see. I do see someone in the back there.

Thank you, Doctor Kissinger, thank you so much for talking to us today. My name is Elise Labott; I’m a practitioner in residence at the Georgetown University School of Foreign Service. I was wondering if you could expand on your thoughts about the emotional intelligence question, and how you take into account relying on AI for issues of emotional intelligence, like empathy. When the internet was expanding, a lot of critics of the new technology said it would make humans less personal and mentally lazy, and the champions and the post-modernists said that it would free up the mind for bigger thoughts and more profound thinking. And that’s true in some sense, but it’s also being used by smaller-minded people to kind of spread their negativity and thinking. So I’m wondering how you square intentions with the new avenues of AI? Thank you so much.

I don’t know. (audience laughs) I don’t know the answer to this question, because you have defined what the problem is that we must deal with. When the Enlightenment came along, there were a lot of philosophers, because, growing out of a religious period, there was a lot of reflection about the nature of the universe, and if you study the 16th and 17th centuries you find a lot of philosophers with very profound insights into the nature of the universe, and whether the universe was an objective reality or whether it reflected the structure of your own mind, or whether you could express it in mathematical equations. But in our present period, philosophy and reflection don’t play as major a role. We put our talents into the technological field, and this is why it has happened that, for the first time, world-changing events are occurring which have no philosophic explanation, or attempt at explanation. But sooner or later it will come. I’m sort of obsessed with the AlphaZero phenomenon: teaching chess to a computer, which then learns a form of chess that no human being in all of history has ever developed or has ever worked out, against which we, with our traditional chess methods, even in the most advanced computers based on previous intelligence, are in a way defenseless. So, what does that mean? That you were teaching something to somebody who did not learn what you set out to teach, but learned something entirely different. And, within that world, something decisive. I don’t know the answer to this. But it sort of obsesses me.

[Nadia] Does anybody else have the answer?

I mean, what else are we gonna learn? No, I don’t know. There are two levels of this. One: if I knew of an answer, that’d be terrific; I’d become very rich. (audience laughs) But I’m nearly 97. So that’s the only one. But the other answer, the other concern, is that we have to get our minds open to studying this problem, and we have to find people in the key jobs who are capable of strategy in relation to an ever-changing world, a world which has been changed by our own efforts; that has never happened before, in that way. And we are not conscious of that yet as a society.

[Nadia] We have time for one final question before we wrap. Someone would like to ask? Yes, sir?

So, there’s a story about the moon coming up over the horizon and this country going on alert, a strategic alert, against Russia, but there were cooler heads who decided that it wasn’t an attack, it was something else. So, what you’re trying to say is, we need very elegant AI before we put it in control of the button?

What is he saying?

Do you want to? Do you want to repeat the question? I mean, essentially: do we need more elegant AI before we put it in control of the button? That was his question.

In one way or another, AI will be the philosophical challenge of the future, because on the one hand you are in partnership with objects, as you go to general intelligence, in a way that’s never even been conceived before. And in a deeper way, the implications of some of the things I’ve sketched are so vast that one must reflect about them beforehand. I’m told that self-driving cars, when they come to a stoplight, stop because they’re engineered that way. But then, when the cars next to them start inching forward to get a jump on the others, they do it also. Why? Where did they learn it? And what else have they learned that they’re not telling us?

On that note, I think time is up now.

How do they talk to each other?

Well thank you so much.

Next time I come here, I’ll give you answers to that.

So we’re gonna take a 10-minute break now, and then we will be meeting back here with Commissioner Mignon Clyburn, who will look at AI in the workforce. Thanks very much.
