Defining the Future World Order: AI and Global Cooperation

National Security Commission on Artificial Intelligence: Panel IV – Defining the Future World Order: AI and Global Cooperation

They produced 120 questions for me to interrogate this panel with. So this is actually a 10-hour session, not a one-hour session. I’m thrilled to be here with this panel. Some old friends and new friends are with me on stage. To my immediate left is Andrea Thompson, known to many of you. She was the former Under Secretary of State for Arms Control. She was the National Security Advisor to the Vice President and, potentially most importantly, spent 25 years in service, with time in combat zones, and certainly served her country nobly. So, Andrea, welcome. I have the great pleasure of having Anders Fogh Rasmussen with us, the former Danish Prime Minister, Chairman of the Danish Liberal Party, and formerly Secretary General of NATO. Most recently though, he is the CEO of Rasmussen Global as well as the founder of the Alliance of Democracies, which we will get to a little bit later. Thrilled to see my old friend Avril Haines here. As many of you know, she was the former Deputy National Security Advisor. More importantly than that though, she was the Deputy Director at CIA. And she is currently at Columbia University. Also with us today is Michele Flournoy. Michele was the Under Secretary of Defense for Policy. She was the co-founder and CEO at CNAS. And right now she is the co-founder and the Managing Partner at WestExec Advisors, who are helping a lot of small companies. So thank you very much for that. I’m told I’m to make opening remarks. I will keep them very, very brief, because we’ve had a long day, and we’ve covered an awful lot of ground. I think we’ve talked about research. We’ve talked about applications. We’ve talked about talent. This notion of the global positioning or the geostrategic positioning of the AI conversation is one that’s really, I think, founded on the notion of norms and values and the way democratic, free societies are going to embrace these norms and values.
And I think that we’re gonna talk a little bit more about that as we go through it. That said, the developments in AI, the advantages that will be attained through AI, can’t be separated from the emerging strategic competition that we have, and have talked a lot about, with China and Russia. Some of these challenges are just never gonna go away, at least not in our lifetimes. But there’s a broader geopolitical landscape, I think, and geostrategic landscape that we need to talk about, and that is who are the friends and allies that we need to cooperate with, and what does that conversation need to look like in order to assure American positioning. Our particular group within the commission was really looking at the United States’ need to develop a holistic strategy to ensure long-term competitiveness in this emerging environment. And I’m told that I should reinforce the five initial judgments that were made by this particular group. The first one is the need to foster cooperation amongst US allies and partners. Doing so will be essential to retaining a long-term competitive advantage. The second is really this notion that the United States and our allies should seek to preserve existing advantages in AI-related hardware. We haven’t talked about hardware an awful lot today. We’ve talked a lot about software, but that software’s gotta run on something. Thirdly, AI presents significant challenges for military interoperability. And, when we look at this, if the United States and its allies do not coordinate early and often on AI-enabled capabilities, the effectiveness of this combined military coalition will definitely suffer. Fourth, we should also be open to possible cooperation with Russia and China on issues of mutual strategic interest, such as promoting AI safety, which we will talk about a little bit more, and managing AI’s impact on strategic stability.
I think that within a group like this we often think about the military applications, but I think those in the private sector would agree that we’re looking at AI for things like health, climate, and a number of other enduring problems that mankind faces. And finally, the United States should lead in establishing a positive agenda for cooperation with all nations on AI advances that promise to benefit humanity. So with those judgments being so read, I think I’d like to open it up to the panel to give their thoughts on what Dr. Kissinger was really teasing at. And this is this notion that AI is the philosophical challenge of our generation. And when it comes to negotiating treaties and engaging in agreements around AI, how do you do that with such a complex, nuanced technology? So, Andrea, let me tee it up for you. Why don’t you start?

Well, thanks, Chris, and thanks to the commission, selfishly, and thanks to Chris McGuire, who was detailed from the State Department, a little shout-out to my former team, part of the T family, and is now with the commission. And I would like to echo Chris’ comments about the work of the staff and the work of the commission. If you haven’t read the report, read the report. We’ve had a lot of amazing panels this afternoon. And to get back to Chris, to your comment, when Dr. Kissinger asks what do we need to do, what’s the next thing, what are we missing: we need to implement the things that we’ve already raised. Read through the report. If we could make half of those come to fruition, we will defeat China. We will be the first out of the block. But three things foundationally, and this is what I’ve seen over the last two years at the State Department, traveling, meeting with partners and allies, talking about AI and cyber and other emerging technologies. There are some commonalities there. And it comes back to people and processes and partners. We’ve talked about software (mumbles), the people, the panel earlier today about talent management. We’ve talked a bit about processes, and this afternoon, the last panel will also talk about partners. So we work together. We’re facing these common concerns. Whether I’m with a NATO partner, whether I’m in Indo-Pak, whether I’m in Africa, our partners and allies are raising these concerns. This is not unique to us. So let’s implement what we’re seeing. AI is new, but the challenges of AI are not new. The principles behind it are not new. We saw it with cyber. As the SecDef just mentioned, when standing up CYBERCOM and the services, we learned through some of those applications. So the foundational elements are the same. We just need to integrate what we’ve already been talking about.

[Chris] Anders.

Thank you very much, and also thank you to the commission. I think it’s essential work. I’m definitely not an engineer, but when President Putin stated that artificial intelligence is the future, and whoever becomes the leader in this sphere will become the ruler of the world, he got my attention. And it demonstrates why it’s so essential that America is the leader. But American national security is strongly linked to strong partnerships and alliances. So I would say what we need is leadership of the whole free world. And against that backdrop, I would like to make three points. Firstly, what we need is what I would call a technological alliance of democracies. Democracies must be in the lead to be sure that we set the right norms and standards, living up to the principles upon which we have built our free societies. And we must realize that artificial intelligence is an integrated part of our national security. So we need strong cooperation between government, industry, and academia. And I do not share the view of the skeptics who are reluctant to cooperate with the government. I think they miss the point. If we do not have this strong cooperation between the private sector and government, the Chinese will be the winners. It is as easy as that. So if the employees in those big tech companies want to make sure that it’s their ideals that will be the winners, they also need to cooperate with the government. I would call it a patriotic duty to cooperate in this field. And in this respect, by the way, Chris, I would like to thank you very much for your work at In-Q-Tel. I think it’s a prime example of getting artificial intelligence right. We need much more of that. And you also need much more of that in Europe. That leads me to my second point, namely, we need stronger Transatlantic cooperation. We should stop the fight between Europe and America. There is too much at stake. What we need is cooperation to counter the challenge from the advancing autocracies.
That’s what it is about. Europe must do much more constructively, increase its own investments. The European Union should increase funds in its own European Defence Fund that was established a couple of years ago. Only around $600 million a year is devoted to that fund. It’s tiny compared to what the Chinese invest in this area. Gradually, NATO allies are investing more in defense. In 2014, we decided that within the next decade all NATO allies will invest at least 2% of GDP in defense. At that time only three fulfilled that criterion. By the end of this year, eight countries will do it. And I think, to his credit, President Trump has really done a lot to raise awareness of the 2%. But there is another goal that is equally important, namely 20%. According to NATO standards, NATO allies should devote at least 20% of their defense budgets to investments in equipment and research and development. I think that needs to be raised further. The US currently spends an amount equivalent to around 27%, I think, of its defense budget on investments in equipment and research and development. Why not raise that figure to 30% for all allies, including the US? So I would encourage more presidential tweets on the 30%. I think that would help. My third and final point is that we need to strengthen NATO. In that respect, you also need an awareness in the United States to take leadership in sharing data and intelligence. Sometimes, and I always speak based on my experience as NATO Secretary General, sometimes the United States is too reluctant to share data and share intelligence with other allies. But that creates a lack of confidence. It creates some mistrust. And we should avoid mistrust. We should strengthen our alliance, and we should make it natural to trade across the Atlantic without suspicion.
And to that end, you also need to be more open, primarily because if the United States does not share data and intelligence and technological progress with its allies, then at the end of the day we will have growing interoperability problems, because if the US is here and the rest of the crowd is here, then we cannot cooperate. It will weaken the alliance. We should strengthen the alliance. You should do what you can through American leadership to get other allies to increase their investments in artificial intelligence. We also need bigger NATO funds to invest in new technology, including artificial intelligence. Today NATO as such only devotes around 600 million US dollars a year to investment in equipment. The rest of it is national responsibility. And of course, it will remain national responsibility. But I do believe that we would make a leap forward if we devoted more resources to NATO common funding of artificial intelligence and other high-tech investments. And we should speed up decision-making processes in NATO. When NATO took responsibility for the operation in Kosovo in the ’90s, NATO spent six months to take that decision. When we, in 2011, took responsibility for the operation in Libya, we spent six days. In the future I think we will have a maximum of six minutes. So ambassadors cannot discuss this at length in Brussels. We have to speed up the decision-making processes. And my final remark will be: I also think the US should take leadership in preparing international conventions to regulate the use and production of artificial intelligence, because otherwise we risk that the autocracies would misuse it in a way that we cannot accept. I know this will be a challenging task, but I think we should explore areas where we could cooperate. So in short, what we need is strong and determined American global leadership.

Avril, you wanna weigh in on this?

Sure. Thanks so much, and thanks, by the way, for staying for the 3:45 panel. You sort of set us, Chris, I think, on this idea of reacting in a way to Kissinger’s comments, of implication versus application. And from my perspective, I think it’s this interesting question of, right now we see autocratic governments and others using artificial intelligence in ways that are helpful to them, to essentially achieve their own agenda. And in some contexts that’s weaponization of information or bolstering their surveillance or doing a variety of things that are concerning to us. We know artificial intelligence can actually be used in other ways to bolster our own agenda, and we’re not investing enough. And we’re not keeping up in the competition, essentially, to do that and to push back in this context. But overall I’d say that the purpose of our strategy should be one that is really focused on the prosperity of our own systems, right, on our security and on promoting our values and pushing back where we need to in that context. And if that’s what we’re trying to achieve in terms of the overall implication, and the interim report certainly does an excellent job of identifying the steps that are needed to really push in on these issues, then I think the question that is maybe really worth digging into for this panel is how do you achieve those things most effectively with an international strategy, right. How do we actually promote the kind of international landscape that helps us to do that? And I think that’s certainly about cooperation and coordination, but I think it’s more than that. I think it’s about shaping the development and deployment of artificial intelligence more generally. And I believe that, as the Secretary General noted, there are a whole series of ways we can do that through the context of working with allies and partners in Europe.
And there’s no question that we’re stronger when we’re working with our allies and partners, certainly to push back against Russia and China and a variety of others, but really to begin to shape that environment. But I think it comes in a whole series of different areas. And I think the commission can do a lot of good by mapping those out in some respects. I think there’s the question of building up norms and standards, which are things that are talked about in the interim report a bit. There are a whole series of different ways in which you can do that. For example, I would not recommend going out and trying to negotiate a treaty at this moment. I don’t think that’s the most effective way at this point, but I do think having discussions internationally with other states and developing ideas for what are the things that are acceptable, what are the things that are not, what’s in the gray area, how should we be thinking about that. I think the question of developing standards and thinking about them through the lens of safety and trust, trying to develop those types of things, but really doing it at a body that the United States trusts, in other words an international body where we think they can have an actually productive conversation about this and establish standards; thinking about whether or not you want a third-party mechanism to be evaluating whether or not people are living up to the standards that you’re setting; thinking about how it is that you provide some accountability for not complying with those standards. All of those things are things that you might wanna do. You obviously wanna set up international structures in terms of organizations and sort of bellybuttons in nation states that are dealing with these issues. You don’t have to reinvent the wheel. You don’t have to set up new institutions. But you do have to identify who is going to be doing the collaborating. How are you gonna establish those collaborating relationships?
Should it be done on a bilateral basis, on a multilateral basis? What are the guidelines that you think are useful? Is that a whole area that you wanna work into? How do you wanna be thinking about the defense pieces, the interoperability pieces, all those things? This is not gonna be in one place. It’s gonna be in a whole series of different places, and I think those are the kinds of things that can be usefully thought of in the context of the work that you’re doing: really trying to promote, across all of these different areas that are so integrated, the work that needs to be done. So, I’ll leave it at that.


So, I think you’ve been hearing all day, we’ve been hearing all day, riffing off Secretary Kissinger’s remarks about the strategic and profound implications of the AI competition and how it comes out economically, politically, and militarily in terms of the relative balance of power. And I guess what I would focus my three points on is the importance of marrying whatever we’re doing in the development of AI applications within a much larger frame of American leadership and leveraging our allies as a truly strategic and unique source of advantage. I think one of the mistakes we’ve made so far in the competition with China is framing it as a bilateral competition, as opposed to a competition between an authoritarian state that’s trying to spread that model and a coalition of like-minded democracies, which include the richest countries of the world; and we together as democracies across North America, Europe, and Asia, if we really go at this together, could be much more competitive vis-a-vis China. So let me just give three particular ideas. Number one, I think the first rule for all of us is that the best way to shore up our competitiveness is to invest in the drivers of that competitiveness at home: research and development, science and technology, access to higher education, 21st-century infrastructure, smart immigration policy that attracts the best talent from around the world and then does everything possible to actually keep it. This is a moonshot moment for all of the democracies. We’re not acting like it. We’re still sort of asleep. I think the national security apparatus is awake, and they’re thinking in these terms, but our societies are not; they have not been led, have not been inspired to the kinds of public-private collaboration that we’re gonna need to be successful. Number two, it should be us, the United States and its democratic allies, that lead the development of norms in this domain.
I think the DIB’s principles on AI that were offered to the department are a great place to start. There’s a lot of good work being done out in industry as companies try to figure out what norms are going to guide their work. I think that we’re in a great position to lead an international dialogue, not with the expectation that Russia and China will necessarily sign onto that consensus, but to the extent you can build that international consensus and create buy-in, then you have the basis for pushing back on behaviors that violate those norms and imposing consequences for those violations. And then third, I do think it’s very important even now to be reaching out to China and Russia to have a dialogue about this. I wouldn’t construe it narrowly as on AI. What we really need is a new dialogue about strategic stability. That used to be the realm of the nuclear priesthood. Now, in a world of the potential for early cyber attacks that have strategic import, or a world in which AI can inadvertently escalate very quickly, speed us up the escalation ladder, we need to be having conversations with countries like China and Russia about strategic stability in an era where there’s potential conflict in space, in cyber, and using tools enabled by AI. So those are three specific ideas of where I would start if it were up to me.

So, Andrea, you own the nuclear priestesshood. How is this a different conversation? It is a dual-use technology. It is a more nuanced technology. How do you engage that dialogue internationally? What are the different (audio drops out) that you would engage, and how would that conversation go?

Absolutely, and I agree with the points that were made. And if I could give some recency to it, on dialogue discussions that we have had. So to answer the question, it does have to happen bilaterally and multilaterally. And we’re having those discussions. (mumbles) know if Robert’s still here. Raise your hand. Giving a shout-out to the State Department family again. Every time we traveled, every time I had engagements with my counterparts at the Under Secretary level, we talked about cyber norms and responsible behavior. And we talked about AI, because you’re exactly right. Partners and allies are looking to the United States for that leadership. They want to know what we’re doing. They want to know what our private sector’s doing. They want to know what our strategy is. And we learn from each other. So we’ve had those discussions (audio is choppy). But we’ve also had those discussions multilaterally. We’ve had discussions, again, with counterparts at NATO. We’ve had those discussions in Indo-Pak. We’ve had those discussions up at the UN. So those discussions are happening, but I would say kind of on the periphery. They’re happening because I do lead, did lead, up until 12 days ago, whatever it is, arms control and international security, to include nuclear nonproliferation. So by nature most of my counterparts are those same people. And for most of our partners, the people that are leading the emerging technology piece, many are within that same sector. So we’re having those discussions. And I agree you have to have increased information sharing, as the Secretary General mentioned. We did that; an example, and it may come off as a partisan take, is the INF. We did incredible work then. The IC, the intelligence community, did incredible work to get intel downgraded so that we could share it beyond the Five Eyes. So when we went to NATO we could show: this is the SSC-8, this is when it was fired, where it was fired.
That’s an example of what we need to do with AI. We need to share information. We need to get best practices. We need to share with our partners. And yes, we need to have dialogue with our competitors. The most recent example was in July, don’t quote me on the date, it was July, when we went to Geneva and met with my Russian counterpart for the strategic stability, strategic security talks. We can have these talks. We can have these discussions. The door is open. It has not been as open with China yet. The President’s been clear that he wants to multilateralize and to have those discussions. We need to have the talks with Russia and China as well. They’re going to be part of the solution if we’re gonna accomplish this together.

Anders, you brought up sharing data. I’m gonna change the context on you a little bit. Data is really an underpinning of AI. And GDPR and the privacy conversations that have been going on in Europe are going to impact Europe’s ability to lead in AI in many ways. The Chinese are collecting way more data today than anybody else. How is Europe thinking about China from a strategic competitor’s perspective around AI, and where are they going to draw the line between the rights of privacy and the needs of the defense community and the military communities?

Well, for a long time I would say Europe has not been aware of the strategic risks. Europe has been a bit naive, maybe. But recently Europe has realized that it is necessary to focus more on what have been called the strategically important sectors. This is why the European Commission has introduced the so-called screening mechanism to investigate whether a potential Chinese investment, or any other foreign investment, might be done with the intention of gaining strategic influence in Europe. Let me mention a concrete example. The Chinese invested in a Greek port. They are considering investing in a Portuguese energy plant. They have created a so-called 16+1 format in which they gather the Eastern European countries. So what the Chinese are doing is to focus on European countries in economic need. And they exploit that. They offer their money. And we have seen how the European Union has been faced with increasing problems in criticizing violations of human rights in China, because all decisions in that sphere must be taken by unanimity. And there is always at least one country, dependent on China, that is opposed to such criticism. So Europe is, I would say, more awake now, but there is still a lot to do. For instance, on 5G and Huawei, some European countries have refused to cooperate with Huawei. Others are more reluctant to prohibit cooperation with Huawei. I would prefer a common European approach to that. I share the concerns regarding cooperation with Huawei. So it’s a mixed picture.

So, Avril, I’ll tackle you with this one. How do we have a dialogue around these sorts of things? You’ve got the military-civilian fusion in China. I think it was General Shanahan who referred to Maven as potentially being a canary in a coal mine and being of concern. How do we have a conversation, both within the United States and outside of the United States, on this concept of norms and values and what that would mean to the long-term way of life that we’ve become so accustomed to in the free world? How does that conversation go on? And who leads it?

I mean, I think, at least from my perspective, and Andrea I think said it as well, people do look to the United States for leadership on these issues. And I think it is to our advantage to have this conversation, and it’s to our advantage to lead this conversation. And so I think the first step of it in the norms and values space is, first of all, to recognize that that’s gotta be a part of every aspect, frankly, of what we’re looking at in terms of artificial intelligence, right, because it cuts across every area in which artificial intelligence is going to be used, whether for military or economic purposes or other aspects of it. So I think that’s something that’s gotta be built in. I also think that it’s not only a part of the sort of arms control conversation, in a sense, that you’re having, but it’s also built into things like the technical standards that are applied across AI in a whole series of ranges, right. So one way in which you have this conversation, it seems to me, is when you’re talking about what are the standards that you wanna apply to AI systems in their development and deployment. You also wanna think about, well, if they’re using algorithms that may in fact be legitimizing and reinforcing biases, how do we actually address that in the context of the development and deployment of artificial intelligence; or if we are dealing with systems that have privacy information, how do we think about (mumbles) encryption or other ways to manage those types of issues in the context of what might be a broader system that also involves AI. But there’s a whole series of ways in which it’s got to be a part of the discussion. And I think you can’t do it in isolation around AI. I think it’s your norms and values in a whole series of areas that you’re injecting into the technology piece. But I think it does. There was a prior panel, and I think Sue Gordon was talking about this a bit.
I think it behooves us to recognize that we have to very actively address this in a way that promotes a conversation on it that goes across the technical, the civilian, the military, all of these different sectors that talk about these things in different ways. And it’s not just about making policymakers smarter about technology. It’s also about helping technologists get smarter about policy. And I think it’s an issue that, if we don’t start injecting it into the conversation early, we’re gonna lose track of.

Michele, drawing on your experience in DOD policy, how do you think about export control within the context of AI? How should we be looking at export controls as one of the tools in the toolbox for AI, or should we at all?

I think it’s a really hard question, and this is where the department could benefit from people doing some serious analytic work, looking at alternatives. I mean, I’m not a technologist, but I have a hard time understanding how we would try to control algorithms. So for me the long poles in the tent are the data, particularly when it comes to the military or security sphere, and the platforms that are going to be enabled by AI or become autonomous or semi-autonomous. At first blush, those seem to be the primary levers, and we already do the second. But that first one, figuring out how to actually do that, is important. I think it also ties to the broader question of how we approach Chinese or competitors’ money, people, et cetera in our innovation ecosystem. And right now my worry is, I mean, there’s definitely cause for some concern, but there are also some substantial benefits that we reap from certain international collaborations. So right now my concern is that we’re taking a sledgehammer to this when we need a scalpel. And I’ll give you an example. Right now, if you put a slide in front of a potential DOD buyer that shows there was some Chinese seed money in a company that they wanna invest in or buy from, usually that’s the immediate freak-out. Forget it, no way. Whereas I think you need to draw a distinction between passive investment, where it’s just another investor getting ROI and there’s no access to non-public IP, there’s no board seat, there’s no controlling interest, there’s nothing; it’s just blood in this bloodstream of Silicon Valley. We should not give a hoot about that. That’s us using their money to our advantage, right. Now it’s very different if it’s a controlling interest, if they get a board seat, if they get access to IP. So those are the kinds of distinctions we need to be making.
Similarly, yes, we need to worry about talent coming into DOD-funded labs and that kind of thing. But does that mean we need to treat every Chinese student as a spy? No. We need to have a very sophisticated process for doing due diligence, for vetting people, money, and so forth. (audio almost completely drops out) On the one hand, a much more clear-eyed perspective from academia and Silicon Valley about the Chinese. Let’s not be naive. Their civil-military fusion is real. There is a real, concerted effort to get inside our ecosystem. But on the other hand, also realizing that without the benefit of foreign talent we don’t win this race. Our strength as an immigrant nation has always been attracting the best talent in the world and keeping it. And that’s so many founders’ stories in Silicon Valley, so many stories of the space race, how we won that, and so forth. So I just think we have to have a very nuanced, clear-eyed approach to this. And I don’t think we’re there yet.

So with that, I’m mindful that I only have 9 1/2 more hours to go with this panel. And I could keep them up here for that whole amount of time, but I would like to open it up to questions from the audience. I’ve got a slew of them: one at the very back, one in the middle. First one with a mic wins. All right, gentleman in the back.

Thank you. My name is Ben Bain. I’m on the staff of the Defense Innovation Board. So for all the panelists: you’ve all served in very senior national security positions. One of the kind of common refrains that we’ve received from interviews across the department is that senior leaders don’t always fully understand or appreciate exactly what new technology can and cannot do. And so if you were to go back into service in a senior national security position, what would you do differently to give tech a bigger seat at your decision-making table, if anything?

So one example from the State Department, just to talk about the evolution. I haven’t seen it through; I departed before it came to fruition, but many in the room received a phone call from me to get you on the board. We have an in-house International Security Advisory Board. Traditionally, it has been used for arms control and non-proliferation experts. I approached the Secretary roughly a year ago and said I want to carve out at least a quarter, possibly a third, of that board for the tech industry. I want advisory services on emerging tech, whether it’s AI, cyber, quantum, hypersonics, CRISPR, et cetera. And I said I will never have all the experts that I need within the ranks at the State Department, but we do have them within our borders. Most of the experts will come from the outside. So that is moving along. If I could do it over again, I would’ve started much sooner. I hope one of the areas that continues beyond my tenure is getting the ISAB stood up to address emerging technologies. And I hope that the highly paid folks out in the private sector who want to contribute, maybe not leave their company but want to contribute, will say, I will come quarterly. So that when you get a phone call from the State Department saying we have a diplomatic challenge with AI, we’re talking about norms, we’re setting up an international body that we’re going to lead with partners, how would you do it, you say, yes, how can I help.

Michele, what would you do differently?

I would say first and foremost keep the great external advisory bodies that have been formed, like the Defense Innovation Board. It’s a treasure: some of the best, highest-priced talent in the nation working for free to help the department. Keep that; don’t try to disestablish it. Second, I think we need to build in more tech advisors internal to the system, senior technologists who actually get a seat at some of the decision-making tables at DOD, who get a seat in the Situation Room when the NSC is debating an issue with huge technological elements or implications. Boot camps for staff. I would certainly try to bring in more technologists more broadly, but even for the non-tech staff that has to deal with some of these issues, getting them at least fluent in what they’re dealing with. I think about what I have learned in just the two years I’ve been doing the work at WestExec with small, cutting-edge commercial technology companies that want to play in the national security space. I’m only now aware of how much I didn’t know and still have to learn about technology, because it’s a whole other world out there. We need to be looking for opportunities to provide people on the policy side with those kinds of exposures and experiences. Granted, they’re never gonna become technologists, but at least build fluency, and vice versa.

Avril, you’re DCIA next Monday. What do you do differently?

Well, as you know Chris, we made a number of changes in the last few years, and I think many of them are continuing and getting better over time. One of them was creating a whole new directorate on digital innovation that was working across the whole system. But it is also, and we had somebody who did this, having a senior leader in your senior leadership team who is the technology person, constantly injecting things into the conversation. We also had, during my time, sprint teams that were capable of going across agencies and departments to work on different issues, and I found that to be one of the most effective things, but it is not easy to do. A lot of the challenges we kept running into, in the context of developing different structural ways to improve the opportunity for senior leaders, but frankly for every part of your agencies and departments, to leverage technology effectively, had to do with talent management: actually having the ability to bring people in easily and quickly to address the issues that you thought needed to be addressed. And honestly, I’m on the National Commission on Military, National, and Public Service. We’re looking at this issue, and we have a lot of recommendations that go to it. Some of them I suspect are gonna overlap with where you are on these issues too. It’s a really challenging space.

Anders, might not be a fair question, but if you were to be Secretary General again, are there things you would do differently?

Yeah. As Secretary General of NATO, I established a new division called the Emerging Security Challenges Division to deal with cyber security, energy security, and so on. So far so good. Today I would focus much more on artificial intelligence. I would create an office for artificial intelligence, and I would try to provide it with a really substantial budget. I would encourage NATO allies, within a NATO framework, to create a commission like the national commission you have created. I think NATO should do exactly the same, to take NATO allies onboard in this very important discussion, and that could also serve as a forum for the exchange of data. And finally, I would take steps to improve the decision-making process, with a particular view to speeding it up. One suggestion could be to give the military leaders, SACEUR, authority to take decisions. Of course he would afterwards have to be accountable to the NATO Council. But in the future you cannot discuss at length in Brussels whether you’re going to counter an artificial intelligence attack from an adversary. You have to make immediate decisions, and we should provide authority to our military leaders to take them.

This is why it’s been good for the commissioners to be on a listening tour for the last nine months, gathering ideas like maybe NATO should have a commission, so that we can make recommendations, perhaps. I think I’ve got a question at the back, and then there’s one over here.

Thank you. I think we’ve touched briefly on GDPR, and we’ve had some conversations about Europe maybe creating a third way on AI beyond the US and China. So the first question is, if you have that third way, could that be a way to attract AI practitioners to work on, for instance, NATO security issues? And then my second question is, does there even need to be a third way? Why are we pitting our companies against, say, EU regulations? Why not work together to create a better AI framework? Thank you.

Anders, I think that’s all yours.

Yeah, I think so, and I fully agree. I think Europe is wasting a lot of resources and attention by attacking big American tech companies for dominating the European market. Europe should do much more itself in a positive and constructive way, because the risk is that if Europe focuses on attacking big American tech companies without having any alternative itself, and Europe doesn’t have one, then we are weakening the whole democratic tech alliance to counter the autocracies. So I think that’s the overall strategic mistake. We should cooperate instead of confronting each other. On the data privacy issue, my advice to American tech companies would be to be at the forefront when it comes to data protection. You should realize that for historical reasons the protection of personal data is an essential issue in Europe, in Germany for obvious reasons, in Eastern Europe for similarly obvious reasons. People are very much concerned about government control and government supervision. So I think you should realize how important data privacy is in a European context. My advice would be, on the one hand, to be at the forefront when it comes to the protection of personal data, and at the same time at the forefront when it comes to the protection of free speech. This strategy of two legs, so to speak, I think could make it easier for American tech companies to improve their image in Europe.

[Chris] I think we had a question over here.

Thank you, hi. Yusuf Azizullah, CEO of GBAC, Global Board Advisors. I also write for the World Economic Forum on corporate governance; that’s an introduction for those who weren’t here earlier in the day. I’m also a professor at the University of Maryland Smith School of Business and Germany’s Mannheim Business School. Anyway, I’m really glad to hear the cooperation across this panel, that we don’t have to be just looking internally within the United States, but actually at cooperation at the world level. There’s something I’m writing for the World Economic Forum, and this would be a good time to run it by you as well. To keep America’s lead, we have four, five, six companies, Google, Apple, Amazon, Microsoft, that are AI leaders, with Facebook and others coming, and from China Baidu and Tencent and so forth. We need a global AI consortium based, in a nutshell, on OECD principles of corporate governance, one that shares American values of democracy and high values of fairness and transparency. If we take the position to lead the consortium, to create it and also fund it, not only from the US but from NATO and other allies, we have to understand this is a two-part solution. One part is that whoever leads is not only winning the race but actually setting the example others follow. A case study that comes to mind is Estonia, a small eastern European Union country that has implemented a lot of AI in its society: autonomous cars, digital health, and so forth, already implemented for its citizens. So when you look at it, there are countries and models out there, and the next world order is definitely going to change with AI. If the US wants to keep its lead, we have to advance it. So that’s what I wanted to run by you, what your thoughts are:
the US creating a global AI consortium, then leading it and keeping up the advantage, so there’s fairness not only at the code level, the logic level, and the dataset level, not only Japanese or Chinese or German or American coders, but something cooperative where we share what we need to share and keep our advantage where we need to keep it. Anybody on the panel? Thank you, and sorry for the long-winded comment.

Avril, do you wanna take this one? From a standards perspective, it sounds like it might fit that seam.

Sure. It’s hard to react without really understanding exactly what this kind of consortium would do. And I don’t think it’s a binary choice between either we’re all in and cooperating with everybody, or we’re sticking in our stovepipes, right? It’s something in between. But I do think the OECD could play a role on some aspects of AI. I don’t think it’s the right place for everything across the board. It would just depend on what the particular issues are and then getting the right people at the table on that.

Anyone else wanna weigh in on that one? No. I think we’ve got time for one more question, if there is one more question. If not, I will get to ask it. I lost. Got one over here.

I’m Jeff Starr, a Presidential Innovation Fellow supporting the Air Force. Thank you for your question earlier and your answer on export control, one of my favorite topics when it comes to AI, since it conjures up a 1970s image of preventing the Soviets from getting American microelectronics, which didn’t work out so well. From the peer, competitor, or adversarial perspective, the period from 2025 to, say, 2030 or 2035 will be really important. For their own reasons, independently, both the Chinese and the Russians look at that time period as a critical inflection point for the matching of their own military doctrinal innovations with military hardware, and therefore the capability to execute missions that they are building toward but haven’t quite yet reached full capability to do. The Russians are a little bit more explicit about this than the Chinese are, but of course everyone’s aware of China’s longer-term strategy to achieve scientific, technical, financial, and, read, military dominance in a number of high technology areas, including artificial intelligence, in that 2025 to 2030 timeframe. Therefore from the American perspective, we have to look at the challenge through their eyes. When are they going to feel the most emboldened to act based on their doctrine, their military dominance, and their dominance of high tech areas? That’s six years from now to 16 years from now. Export control’s not gonna do it in that timeframe, if it could ever do it anyway. One thing we’ve mentioned a couple times today but really haven’t focused on is American demographics. 10-year-old kids will be out of college in 10 years, in 2030. What are we doing to make sure that a higher percentage of those kids go into science, technology, engineering, math, AI? We worry about the number of Chinese students that private American universities are training in the United States, suggesting somehow we can’t let them leave the US.
And some very vanilla comments are to induce people to stay, or I’ve also heard comments from senior DOD officials, people in the government now: let’s not train Chinese students in AI; let’s train them in history and archeology and English literature, but not AI and computer science. So my question is, sorry about the long windup, but it’s the last comment: we need to address this demographic issue in a serious way. Other countries do. The IDF puts a lot of money into Israeli high schools and grade schools. So what are we gonna do?

Let’s stick with the tipping point question. Go ahead.

I think the timeframe issue you raise is very, very important, because in our own planning and thinking and acting we have to think in two timeframes. One is the next five to 10 years, where the risk of miscalculation by China or Russia, but particularly China, in terms of underestimating our resolve and believing that they might be able to act before we fully realize the capabilities we aspire to have, could be a real challenge for deterrence. So we need to think about what are the things we can do now to shore up deterrence in the near-to-mid-term, including added urgency for the AI applications that could be fielded very quickly, using mostly current capabilities in new ways and with new concepts. Then there’s the longer-term frame of thinking, the big bets we’re making for the 20- to 25-year time horizon. But on the human capital piece it’s the same thing: what are the things we should be doing now so that the kid who’s in middle school or secondary school ends up giving us the kind of STEM talent we need for the future? But I also think, again pulling it back to that near-term interim period, we should be thinking not just about higher education but upskilling. We have a workforce, and I know we’re starting to do this. Can we test people for aptitude and start upskilling the workforce that we have? Can we take the soda straw of talent that flows between places like Silicon Valley and DOD and the broader national security ecosystem and turn it into a superhighway? What are all of the different incentives, programs, and efforts that we can make to do that in the near term, even as we make the necessary investments in growing that talent longer term? So we really do have to challenge ourselves to think in two different time dimensions, because the near-term quick fixes are very different from the long-term investments. And in some cases they may actually compete for resources and bandwidth as well.

Unfortunately, our time is up. I have to say it’s been an absolute honor for me to be onstage with this esteemed panel. So thank you all very, very much. Please join me in thanking this panel. (audience and panelists applaud) Don’t get up. Do I got that right? Yeah, I got that right. It’s again my honor to introduce a friend and colleague, Jason Matheny, who will be coming up to close out the afternoon and the day. So stay with us.
