The Defense of a Nation: AI Leadership for the Future


National Security Commission on Artificial Intelligence Conference Panel 2 – The Defense of a Nation: AI Leadership for the Future


Transcript

It is truly a privilege to be up here with such distinguished panelists. Every day I have to pinch myself to really believe that I am part of this incredibly important effort. On our panel today is Andrew Hallman, Principal Executive of the Office of the Director of National Intelligence. The Honorable Christine Fox, Senior Advisor and Assistant Director for Policy and Analysis at Johns Hopkins University's Applied Physics Lab. Bob Work, who is a commissioner with me, but is also the former Deputy Secretary of Defense. And Steve Chien, Senior Research Scientist at the Caltech Jet Propulsion Laboratory, and also a commissioner with me. I want you to really join us today in the conversation. Our plan is that we'll make a couple of opening remarks first, then a few questions, and then we're gonna open it up to all of you to ask the questions that you feel are most important.

I'm gonna start by commenting and really underlining some of the things that have already been said today, but with a very close focus on our national security and the application of AI, because this is a very critical time. We are talking about maintaining our global leadership and the leadership of our important philosophies, which are about freedom and all of the democratic values that we have focused on for decades. This period in particular is the convergence of technological change at the same time as the emergence of a true strategic competitor who does not share many of the values that we have. And as you know, technological leadership has always played a critical role in allowing us to maintain our US advantage. It has been particularly critical to the single most important role of a government, which is to keep its citizens safe and to keep America strong. AI is one of the critical technologies where we must maintain US advantage, because our strategic competitors are investing immensely in undermining that advantage. And they are using different rules than the ones we abide by. We have to understand that they have unifying principles that allow them to do things that are different from how we operate. For us, maintaining the capability and the security advantage that comes with AI allows us to use it in so many national security applications, whether it is making sense of the vast amount of data that our people have to get through to understand their situation, whether it's having the analysis available at the critical times, or whether it is protecting our service men and women through the use of AI in all sorts of autonomous capabilities. All of these things really come down to making sure we are all working together. We have spent the past few months, since the creation of the commission, meeting with hundreds and hundreds of stakeholders who are leading the fight here. There are amazing pockets of capability throughout the government, all over, and yet it is still very difficult to unify and truly leverage this on the widest scale possible. Because ultimately these organizations are extremely large and have been breathtakingly successful in the past. And every new technology has a period where you have to bring it to true adoption, and that involves one thing that is often even harder than innovation. Because our country is a leader in innovation. We have so many of the important structures already in place, and freedom, by the way, often leads to extreme innovation.
The big issues involve implementing change. And these changes, you know, people always say they love change. Change is so exciting. They actually love it when you change, okay? (audience laughs) The truth is, changing old processes and old ways of doing things is hard, and often it comes down to leadership, constant communication and opening up the possibilities of how to be successful. And these things ultimately require coming together, sometimes getting outside of our operational silos, and really working in collaboration. It also means that you have to have the right tools and capabilities underlying it all to implement some of these important technologies. Okay, so with that for my opening remarks, I'm gonna now give an opportunity for our two guests to make some opening remarks first. So Andrew, if you don't mind, make some opening remarks commenting on how the advances in AI really impact our current national security environment and obviously our threat landscape. And how can the US government use AI applications to enhance our national security?

Sure, thank you Safra. First I wanna commend the commissioners for the real impressive work done on the first report. I got a peek at it. It's quite impressive, thorough, thoughtful, and I wanna thank you for the opportunity to be here today. It's a subject near and dear to my heart. It was when I was at CIA most recently, leading a large organizational change there that included leadership of artificial intelligence and machine learning for CIA. And it's one that I also bring to the ODNI, building on the strong foundation that Sue Gordon built with six of our most important initiatives there, one of which being our Augmenting Intelligence with Machines initiative, which directly relates to this subject. And the strategy that we have with AIM, as we call it, is really based on the premise that the insights that we need to defend the nation are increasingly found in that growing velocity, volume and variety of data that are the expressions of the threats that face the nation. And the democratization of technology is producing this highly sensitive environment and a very competitive environment, not just for the intelligence community but for the national security community. And with the democratization of technology comes the emphasis and the priority that we have to place on speed, 'cause it really comes back to the speed factor. So our objective with artificial intelligence is really about enabling a higher order intelligence tradecraft, and I'll talk about that more in a minute.

So basically our IC strategy consists of four big ideas. The first is building out our multi-year investment in cloud infrastructure, investing in the digital foundation to implement AI. And that's the hardware, software, algorithm development, data science capabilities and data architectures that are fundamental and foundational for artificial intelligence. Big idea two, with the vast majority of innovation occurring in the commercial sector, is learning how to be fast adopters and fast followers and rapid adopters of that commercial technology. And that takes a cultural shift, really, in the intelligence community, because we've long historically been of the view that we have to develop it internally and adopt it internally. And we found that is largely a losing strategy because we can't keep up. So commercial-first approaches are fundamental to our artificial intelligence strategy. And then equipping our officers, upskilling them to be able to apply AI and enable their human cognition, 'cause that's really what it's about for intelligence. Third is investing in the gaps the private sector's not addressing, for example AI assurance, which is fundamental to our business. A lot of the technological needs we have closely parallel commercial industry, and the commercial analog's very similar. But the risk profile is different for us. Commercial image recognition may misidentify a cat; we could be misidentifying a terrorist or a weapons system. And we also have adversaries who are actively trying to deceive us. So we not only have to be on the offense, but we have to be guarding against a very active offense against us. And fourth is investing in the basic R and D for systems that help us with context and understanding. So sense making, largely, and that's not only about changing how we analyze and synthesize the data, but turning that data into useful insight in a timely manner.
And given the diversity of data, the diffusion of power globally, and an increasingly highly sensitive environment, that's really about multimodal artificial intelligence applications for intelligence.

Thank you. Maybe, Christine, you could make some opening comments along those same lines.

Surely, thank you. And thank you very much for the opportunity to be with you this morning. It's a great privilege. I truly believe that the adoption of AI is vital to the future of our national security. I know you've been talking all morning about the importance of getting ahead of our potential adversaries, or maybe trying to catch up, depending on your perspective. For me, a key question is how can the Department of Defense adopt these new capabilities quickly enough to deal with those adversaries now and in the future. The Department of Defense is an extremely effective organization. And that effectiveness is the result of deeply ingrained values and processes. The values, that's a huge plus, and I'm gonna come back to that point in a minute. The processes, hmm, when it comes to AI, maybe less so. So the adoption of AI across the department, whether it's in a weapons system or support to decision making, or in the IC, in my view is going to require not one, not two, but many, many cultural shifts. And that's a lot of really hard work. At the moment, and perhaps this is a little unfair, but at least in places I see that the department looks at AI more like a little magic that you just buy and sprinkle on the problem du jour, and it's going to help you. And of course successful adoption of AI at the level that this commission's first report suggests is going to require much, much more than that.

Okay, so what are the keys to success? I certainly don't know them all, but based on my five years in the Office of the Secretary of Defense, here are four that I can offer. The first, and in my mind without question the most important, is leadership. Leadership is the key to breaking the bureaucracy. Real change comes from leadership attention and support. I mean, anybody that has touched the Pentagon knows the antibodies that live there are fierce. And anything new draws those antibodies like crazy. And the only thing that can help that new thing fend off those antibodies is the direct and personal support of the department's leadership, starting with the secretary and going through all of the service leadership. And the placement of the AI organization within the department, in my view, must directly and visibly reflect that priority. Another reason that leadership in my mind is so important is the second key to success, and that is data. The department, yes, they have access to lots of data, but they don't routinely collect it. There are no standards for collecting it, and that has to be pushed and set by the leadership. Once it's collected, then it needs to be shared, another process ingrained in the Pentagon that's perhaps not the best for AI, sharing across all the services and the other organizational boundaries. And finally, leadership is gonna be required with regard to data to tackle some of the security barriers to putting all the data together. Does classification apply to data the same way that it applies to humans? Is there a way to be more creative here? I don't know, but we're gonna have to start thinking about this. Which leads me to my third point, which is the importance of computing power. Data has no value without the computing power to process it. At APL we have a saying: if data is the new oil, then computing power is the new combustion engine. Computing power also has workforce implications.
I know that the report talks about, and we certainly agree, that the department can't do this without access to tremendously talented data scientists. Even if the department can attract those data scientists, it won't retain them if they can't do their work. And they can't do their work without access to computing power. Access to computing power will require budget priority, and that goes back again to leadership. My fourth point is the importance of education. To lead, guide and support all of the change that we're talking about, the leadership needs to better understand AI and data science. Now, that doesn't mean they need to be able to do it. But they need to understand it deeply enough that they can enable it effectively. And that means maybe a little time going to school. It's incumbent, in my view, on organizations like the Johns Hopkins Applied Physics Lab and other UARCs and FFRDCs and academia to help them with that challenge. We need to find a way to help them learn that is consistent with the world they live in, a world where time is incredibly scarce and there are many, many competing priorities. Finally, I'd just like to return to that DoD core competence of values. I do think that the ethical adoption of AI capabilities is necessary for so many reasons, and the report of course talks about this. I wanna touch here on its importance with regard to global leadership. Our adversaries are, as we all know, actively pursuing the integration of AI into their militaries and national security structures. How will they unleash it when they do? I think it's a real concern. But when you look at the Department of Defense, in my view the DoD has taken the high road here. DoD's policy for the use of lethal autonomous weapons has been in place for several years now. It's been scrutinized by pretty much every organization in the world that wants to find fault with it, yet it's standing that test of time and is often held up as the model. The Secretary of Defense has the Defense Innovation Board. Many members are here today. That board reports directly to the secretary and just released their report last week, AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense. And the Joint AI Center has an entire line of effort focused on ethics. If DoD were to get out in front of this conversation, they could set the norms for the use of AI in military operations globally. Today I feel that they have been reticent to do so, and I don't really understand why, but I truly believe that the DoD has the values and the thinking to really do this. So clearly I think AI is critically important to our future. I wanna thank the commission for the important work that they're doing in this area and for this fantastic report. And again, I really appreciate the opportunity to be with you this morning, so thank you.

Thank you, Christine. I think what I'll do now is start with a few questions for each of you. So I'm gonna come back to you, Andrew, and ask: what AI applications could be disruptive or game changing, and where could they be applied, to maintain a competitive advantage in the intelligence community?

So there's a number of mission use cases that we look at, and I'll categorize them this way. The first is kind of speed of operations, and that's about synthesizing data to be able to react at machine speed, with models that are of course informed by human development and human judgment. That can be a case officer operating on the street, having situational awareness, being able to track targets, being able to defend against targets tracking that case officer. It could also be automating how we tip and cue our sensors to complement each other and, again, bringing the full comparative advantage of those different sensors to bear on intelligence problems. So speed of operations, as I talked about, especially in the great power competition, that emphasis on speed advantage has to be emphasized. Second, identity intelligence, and that's about identifying the patterns and relationships of people and organizations, of those that would do us harm. That could be terrorism, transnational crime, other illicit networks. Thirdly, detection and defense against influence operations, a growing area for us obviously, one that both keeps me up at night and gets me up in the morning. As our adversaries get much more sophisticated at developing and injecting artificial content into our daily discourse, or compromising the integrity of electoral systems, it's the ability to detect and defend against those, to counter those in real time. Similarly, cybersecurity. As cyber intrusions and attacks become more automated and we approach this world of automated bot-on-bot attacks, we have to have AI to detect, counter, prioritize and respond to those cyber intrusions in real time, at machine speed, faster than our adversaries. And then longer range strategic intelligence, where we have developed this concept of anticipatory intelligence. That's taking the decades of proven social science on the correlates of instability and what generates instability globally, at the societal level and at the local level, applying data to that, and building increasingly sophisticated models so you detect where the weak spots or the fragile points are globally, so that we can focus our attention on them and better anticipate where unstable events will lead to true instability, whether that's conflict, coups or humanitarian disasters. So those are some use cases.

Wow, fascinating. Christine, what keeps you up at night when you think of our adversaries having these kinds of capabilities, and what do you think we should be doing to mitigate that?

So I think the thing that keeps me up at night is what I call the failure of imagination. I think that we get so busy, we fail to imagine that future world of what could be when AI is fully integrated into our processes, our systems, our capabilities. That's the good. The real nightmare that I have is that failure of imagination on the ill, when we don't keep up, when our adversaries advance, how our adversaries will use it. And even the ungoverned nature of the introduction of AI has some nightmare qualities to me sometimes. And what do we do about it? I don't know. When I talk to developers at the Applied Physics Lab, and we have many wonderful developers, or at outside commercial companies, I kinda encourage them to take a few minutes each week and do what I call walking on the dark side. Ask yourself, okay, this cool technology you're developing, what would happen if an adversary got ahold of it and meant us harm? How could they use it against us? Or what might happen that you haven't even thought of if it just ran out of control? So that's what keeps me up at night: our failure to imagine both the good and the ill potential effects of AI.

That's fascinating also; I feel like I'm learning so much on this panel alone. Steve, we've been talking also about how culture is important, how change is hard. How do you feel the US military can transition its culture and systems to this modern threat environment? And what types of risks will this require us to take as an organization in order to effectively adopt AI and AI-enabled applications and prepare for future threats?

Well, in some sense we're in the midst of a software revolution right now. Software is increasingly the center point of complexity in all of these systems that we're building and the key leveraging technology. It's no longer a hardware world, and AI is really at the tip of that spear. So AI is really the key that enables us to take these different tools that we've been making and make them less like tools, less like a fork or a knife, and more like a partner that can represent what we want to do in the world, whether that's for national security or within the DoD. And so the tremendous challenge is how do we embrace that change and understand this change all up and down the organization? We really have to look at all of these problems fundamentally differently when we think of this as an algorithm, as an approach, as a tactic. In the past we might have gone and trained our finest officers in the doctrine, in the tactics, and instead we're going to see a different type of conflict, a type of conflict in which there are no boundaries, a blurring of lines between kinetic conflict, disinformation and attacks on infrastructure, and a constant evolution of the algorithms of punch and counterpunch. So all elements of the institutions that engage in this need to understand that this is really a change, and we have to have that change up and down the organizations.

Thank you, Steve. And that leaves you, Bob, to talk a little bit about the change we were hinting at. Talk to us about the processes, the governance, the organizational structures, everything that needs to change for us to really field innovative AI for our national security.

Well, following up on what Christine said keeps her up at night, I can assure you I sleep like a baby. I wake up crying every two hours. (audience laughs) Look, the way I wanna answer this, Safra, is that when you read our interim report, you will notice that there aren't specific recommendations. And there is a reason for that. We just formed as a commission in March, and knowing that we had over a year to work, we concluded as a commission that we would go through an assessment phase and just listen and learn. So we started in March. Since that time we've had four plenary sessions with all of the commissioners. We've had 17 working groups, and the working groups correspond to the panels that we're having today. We've had over 100 classified and unclassified briefings from across the federal government, the IC and the Department of Defense. And the staff itself has had nearly 200 interactions. So what you'll find are initial judgments, consensus judgments from all of the commissioners. And I think it would be helpful if I just go through them real quick for this panel, panel two, which is really focused on DoD applications.

We're absolutely confident that AI can help the US execute its core national defense and national security missions, if we let it. The implementation of our government security strategies on AI, as Christine alluded to, is threatened by bureaucratic impediments and inertia. We often use the term antibodies, but really they're super viruses. You know, these are viruses that we've tried to apply antibiotics to, and they've gotten stronger, and they're really, really tough to overcome. So the only thing that we've concluded at this point is that defense and intelligence agencies have to urgently accelerate their efforts. We see pockets of successful bottom-up innovation across DoD and the intelligence community. But these are relatively isolated programs, and they're not gonna translate to the strategic change that we as commissioners believe has to happen without top-down leadership, which builds right off what Christine said, to overcome these barriers. The other key thing we've concluded, and this goes to Steve's point, is that AI adoption and deployment requires a different approach to acquisition, and the department hasn't figured that out yet. They're trying, but unless they do figure it out, we'll be in big trouble. The other one is AI is only as good as the infrastructure behind it, and within DoD in particular that infrastructure is severely underdeveloped. And this is just another theme that came from the first panel, as well as several of the speakers. The US government is not adequately leveraging basic commercial AI to improve its business practices, the back office type structures in the department, which we are certain would save us lots of money, but we haven't seen any widespread approach on what is commonly referred to as robotic process automation. And it gets to exactly what Safra said, where this is about reimagining the processes and then having AI help you come up with entirely new processes. Andrew Moore describes AI as a set of interrelated blocks. He calls them the AI stack, and that includes talent, data, hardware, algorithms, applications and integration. What we will do in our next phase, and what you should consider, is that these interim judgments are kind of leading indicators of where we will go, and the final report will have specific recommendations.
We intend to have specific recommendations on how DoD and the IC can approach the AI stack in ways that will give us better competitive advantage. And it happens that the timing is great. The FY 2021 budget is being developed in the Department of Defense and across the federal government right now. It is starting to flatten. The buildup is over. We'll find out how serious DoD is about AI because we will be able to see what happens in the '21 budget. If it is a high priority, even as the budget flattens or perhaps goes down, it will be protected. But if it is just treated as another R and D priority among many, many R and D priorities, that is going to be very, very concerning to us. So like I said, this is a great time. This is an awesome commission, and I'm very, very happy that you all took a day out of your busy schedules to come and listen to this very important subject.

Thank you, Bob. Bob is the Vice Chairman of our commission and has been incredibly valuable in helping civilians like myself understand a lot of this. We're gonna open this up now and talk about what many of you wanna talk about. So we've got some mics. I've got one right here. So raise your hand and we will take questions. There's one over here. I don't see any other hands, so one, two, and there's one, three. (muffled speaking) Yes, shortest path and work your way over. (muffled speaking) Oh, whatever you, you're in charge. You got the mic. (muffled speaking)

Hi, Aaron Mattis from Homeland Security. I just wanna say that a lot of what you are saying is music to my ears (muffled speaking) implementation that (muffled speaking) AI, is that on? That (mumbles) AI could go a long way to improving our strategic position. I come from the social science world, (muffled speaking) reaching out to sociology, psychology, anthropology to think about these issues of implementation. That seems like a really important resource, thinking not just about the people on the organizational side, but that we need an agenda for research on implementation, because it's such a non-trivial problem. Also, just curious, were you thinking at all about the fact that a lot of these systems are really brittle, and if our adversaries field really brittle systems because, for example, they're not worried about false positives the way I know we are at Homeland Security, we can take advantage of that. Do we have an agenda to look at that problem? Thanks.

Well, I'm gonna just get it started and then ask some of you to add on. Second question, you bet. Just so that you know, we've published just this interim report. It's unclassified. There's actually gonna be a two-part report for the second half; it's gonna have both an unclassified part and a classified addendum. And we have spent quite a lot of time thinking about our enemies and trying to figure out how to turn all of their disadvantages into our advantage, and that involves a lot of things, including the social science of our adversaries. And you're gonna see quite a lot about that in the final report. As you can tell, many of our stakeholders and all of our meetings have involved different organizations inside and outside the national security architecture, simply because sociology is all about people. And some of the biggest issues in advancing change are figuring out how to motivate that kind of change. Anyone else? Steve?

Well, I just wanted to hop in. One of the comments was about how do we enact this change. And I can say from my personal experience within NASA, one of the primary movers of the infusion of AI into operational processes is the rotation of AI experts to operations, to flight software. It's only when you've been there, and I've worked on operations for multiple missions, it's only when you've been there as the operator that you really start to get a completely different perspective on what it is that the end user wants. So I can't emphasize too strongly that we have to have this cross fertilization, this interaction between these disparate communities.

Yeah, I, oh I’m (mumbles).

No, no go ahead Bob, and then Andrew.

The department and the commissioners are extremely aware of the brittleness of the current state of AI. But as the commissioners have learned, the department is going after this in narrow task applications that we can test and actually compare against human behavior and human performance. In 2015, when computer vision started to be as effective as human analysis in picking out an object in a picture, that's when the IC said okay, now it's time to go, because we can expect human-level performance from the machine. I liken this to what's happening with our automobiles. We're inserting narrow AI applications as they become tested and trusted. So we have an AI application for cruise control. We have an AI application for lane departure. We have an AI application for taking over the braking system and stopping the human operator who's texting on their phone before they run into somebody. So from our perspective, and I can't speak for all of the commissioners because we still have to address this, but just listening to what the department is going after, they're going after things that are within the current reach of technology and consistent with the effectiveness of the application.

[Safra] Did you have something to add?

Yeah, I love your question because it validates why this liberal arts major was tasked at CIA with setting up a large digital organization. The experience I had there over four years was that it wasn't really a technological problem. Now, there was certainly the case of slow acquisition, but really the adoption piece was about helping well-meaning officers of the intelligence community understand that their mission effectiveness was largely dependent on how they embraced technology and how they applied it. And so we found that our ability to drive adoption, at the pace that we thought was necessary, was really about, to Bob's point, appealing to their sense of the mission, to whatever our mission outcome is, and relating to them to show how they can be more mission effective. They can generate those mission outcomes to defend the country, and we appeal to that intrinsic value to make them want to apply this and not treat it as a technological thing that they're not really involved with. And this isn't about me, but the reason I was asked to lead this with the skill set I brought was that it was really about the leadership and social engineering of a large organization, and how you motivate people to adopt something they're not used to and not familiar with, that frankly changes their historical way of operating. The other thing I think is interesting, and I'd be interested to know if the commission explored this or will take it on, is that when we think about AI as bridging that human-machine interface in the intelligence community, it's about how we optimize our mission performance with a machine. But has any research been done on the cognitive science of how any person, regardless of their background, can leverage machine power and artificial intelligence to make that interface much more effective? I think that's probably a less developed field.

If I could quickly just pile on. I'd like to echo what Steve said about the importance of what we call at APL domain expertise. And it gets to this point of whether or not you have AI services, if you will, a separate organization that services everybody else, or whether you try to make it integral. I think we find at APL that the people that understand the mission are critically important as you think about how to apply AI, and that's kind of the same point I think that Steve was making from the NASA perspective. So I just wanted to pile on there.

Okay, thank you. Another question, lots here, there’s one here, here, here, you pick.

[Brett] Brett Fambry from Data (mumbles) Labs.

Where are you? Oh, oh, oh we can’t see you.

Thanks for speaking with us. I guess this is a question for Secretary Work and Mr. Hallman. You were mentioning, Secretary Work, that (mumbles) AI applications are here. The technology exists to help some of your analysts that are buried in Arabic and Pashto data, for example, or to help some of the operators and analysts that are buried in overhead video and imagery. A lot of that technology is found in startups and non-traditional defense companies in Silicon Valley. I know this is something that you guys think about a lot. If you had a team, if you could wave a wand and you had a team of deep learning engineers that could work on some of these problems, where would you tell them to go?

Go to the Department of Defense. (audience laughs and applauds) Look, I think this was covered in the first panel, or at least touched on. The department knows that it relies upon the innovation in the commercial sphere. And this is one of the things that the commission has been mulling over quite a lot: how do we build the bridges that Steve Walker talked about between the department and the innovation ecosphere that is so vibrant in the commercial sector? I know Jake has been trying to do this. General Shanahan has spent a lot of time reaching out. But if your question is what would we go after, I would say all of the above: computer vision, natural language processing, control systems for autonomous systems, decision support tools. Of all of the things, this reminds me of the interwar period when aviation capabilities were starting up. They were extremely brittle. The airplanes didn't have a lot of payload. They didn't have a lot of range. They fell out of the sky quite frequently. But the US Navy tolerated an insurgency that said, we are going to inject aviation capabilities into fleet operations. We don't know where it will end up. And if somebody came up and said, oh my goodness, the airplanes can only carry a 100-pound bomb 100 miles, the aviators said, that's true. But as Christine said, they were able to envision a future in which they might carry a 500-pound bomb 300 nautical miles. And they tested, tested, tested, tested, tested, and they failed and failed and failed and failed. Injecting AI-enabled systems into in-place processes is very, very similar to that problem. And so that's why I am so optimistic. Look, the United States military is better at this than any other military in history. So all we gotta do is let the operators go, give people like General Shanahan the support they need, and away we go. So as Eric said, we're extremely bullish on America. We're absolutely certain we can win this competition. All we have to do is say, hey, let's get after it.

Okay, next question. I don’t know where it is. Pick one. Her systems.

So thank you again, all, for being here. One thing that seems to be a consistent message is the need to be faster, but also the need to accommodate or circumvent the antibodies that come out of the Pentagon. So one thing I would ask your thoughts on: you just mentioned aircraft and the interwar period, and there were a lot of mistakes and problems. The same thing will happen with AI. But AI is ultimately software. It's software that serves a specific function. So if the department is not adept at doing software quickly, we won't be able to do AI quickly. When it comes to the bureaucracy and the processes, right now there's no concept of opportunity cost. If you're the person responsible for authorizing systems in DoD, your default is to mitigate as much risk as possible. And you don't necessarily incur any cost for being slow. How can we better align incentives so that people who are authorizing systems in DoD and the IC conceptualize the opportunity cost, or the cost of delay, for the systems they're considering?

Who’d like to take that?

All right, I’ll start.

Okay.

It's a great question, or a complicated question. I think that the first step has been taken, which is a recognition that we need to get going. Now the next step is much harder. And to me, a lot of it revolves around the whole idea of how we authorize that systems are ready, which gets to testing, which is where I think a lot of this lies. So another one of the cultural shifts that I mentioned is in this notion of testing. I think you wanna think now of testing to build trust rather than testing to meet a predetermined criterion or standard. And that is a paradigm shift in the way that DoD acquires and fields its capabilities. But operators, to pick up on Bob's point, they're gonna get that, no question. They are constantly changing their tactics and their ways of operating to meet an emerging and changing threat. They just don't have it yet; we gotta get it to them. And that means a change in the way that we think about testing, the purpose of testing, and how much testing is enough. Those are hard questions. I think there's a lot of work to be done there. But the department is starting to do that work right now, and I think it will be key.

So what I found in my experience at CIA leading large-scale change driven by technology adoption was two things. One was that a lot of the antibodies were historically treated as outsiders and antibodies, not as partners. And so one of the things we did was undertake a governance mechanism where we involved the antibodies up front to help us devise solutions. Part of it was raising awareness of what they were really encountering, 'cause a lot of it was based on ill-informed assumptions that they had made. So by involving them in driving to decisions that they were partners in and had investment in, we had pretty good success. The other one was helping them, really the collective, understand and articulate: what's the mission risk of not adapting? And that's something I don't think we typically do well, certainly in the intelligence community. When the cost of saying no is so low and we don't articulate well the cost of not moving faster or adapting faster, we tend to err on the side of letting the no's have it. So we have to better articulate the mission risk of standing still, of being slow to adapt. And so developing those more holistic risk decisions, I think, is key.

I just wanted to drop in one more point to underline something that Christine brought up. I think that AI is also slightly different from these other technologies in that it's more intimately involved with the decision-making process. And so one of the things that's key, that's been highlighted by multiple of the panelists, is that there is this constant evolution of both the human user and how they use the software, as well as the software itself. So one of the important questions to look at is to think of this not as a release of a piece of hardware, where it's a very high bar to get another release a decade later, but more as a co-evolution of the mission, the personnel and the software, and we need to make that loop much tighter with a much lower barrier. And that's something that we've definitely experienced within NASA, where we have dev ops teams where the same people who are developing the software are using the software in operations.

I'd like to just make one last comment about this, because we've talked about how important leadership is in this sort of situation. What I've seen in implementing change, really for decades, is that there's a small group in the front that sees the opportunity and knows this is the right thing. And there's a sizeable group in the back who don't wanna change. They're afraid. It's not that they're not patriots. They're actually concerned that you're going in the wrong direction. And there is a giant group in the middle who go with whoever's winning. And one of the things that has to be done, and continue to be done, is to make the early successes as transparent as possible. When you make these early successes widely known, the actual risk calculation changes. Because people think that change is risky, and in fact they think that fast change is riskier. And oftentimes the actual opposite is true: the faster you change, the less the risk. And by the way, it doesn't mean you're not gonna make mistakes. Mistakes will be made. However, if you find them quickly, you have time to adjust. If you've waited too long, you are out of time. Decisions are made for you. And this is true in all organizations. So that's the kind of profile that I think of when I'm talking about implementing an important change. And can you think of a more important mission in our country than keeping our people safe? I can't. Okay, there's some back here. There's one over here.

Hello, I used to work on the Dragon 2 capsule at SpaceX, which is designed to be much more autonomous than all the previous spacecraft that bring people to the space station. That project has been delayed for years, and it seems really difficult to get past the really important but also really difficult testing processes required. So I'm wondering, it seems logical to advocate that changing faster is better and, you know, has less risk. And the same thing is true of the planes, you know, with the 100 and 500 miles; there was a human toll to that, which I guess people got over during the Cold War. So I'm wondering, how do you balance an increasing risk aversion with systems like that against the necessity of building these autonomous systems to replace the last generation?

Bob, do you wanna field that a little bit, since you've had experience? Or who'd like to?

Well, since it has to do with space and NASA.

Yeah, probably Steve, sure.

Oddly enough, I'm gonna answer that question generically. I think you have to study the problem and understand what the risks are. You know, everybody here manages their own retirement, or some aspect of it. And there's this notion of inflationary risk. You can take all of your money and put it in the safest possible thing, but then you're doomed, because inflation will wipe out all of your savings. We're in a global competition here. And if we do not move ahead, the risk is clear. We understand what will happen if we do not move ahead. So we need to balance that. First of all, we need to understand that there are quite a few national security applications where we can introduce artificial intelligence without having huge risks to human life, in the back office areas that Bob Work talked about, in terms of logistics, in terms of decision support. And in those key areas where there is significant risk, risk of catastrophic loss of life, then of course we will proceed, we must proceed, taking that into account.

[Audience Member] Thank you. So as a panel you've touched on a lot of issues regarding cultural change and the need for moving faster and adopting commercial technology. If you look at the trajectory of research in academia and the private sector, a lot of it revolves around collaborating, open sourcing code and sharing benchmark data. And that's not traditionally how government agencies tend to operate when developing new technologies. It seems like secrecy might no longer be a competitive advantage for AI. As an example, back in February OpenAI declined to release a model that they thought would be too dangerous, GPT-2. And within four months, two recent graduates developed a model that could replicate the results almost exactly. So it seems like there may no longer be a competitive advantage in hiding things, and maybe there are reasons to open source certain pieces of government work. I'm wondering how you view collaboration and the need for secrecy, and what frameworks government agencies should use to balance those interests.

So, great question. One, I would agree with your premise that there's more we can do with open source and unclassified work. Even in the intelligence community, more and more of our work is actually in that space. Because at the end of the day, the risk of undertaking that work is somewhat offset: if it's the application of our tradecraft to unclassified software development or other kinds of work, the attack surface is different, 'cause it's not just about the technology. It's about how we apply it. And if we actually adapt faster and become more of a moving target, embracing a much more quickly changing technological market, I gotta think that we're gonna be better off than simply standing still with the illusion that a classified system may or may not be penetrated, or may or may not be secure. That illusion has to go away. It doesn't mean that we don't still have a fundamental role for classified work. Of course we do. But a growing proportion of the work that we're doing is in the unclassified space, and it's about how we adapt that to the intelligence tradecraft, which I think will give us that great advantage. And when you look at the democratization of technology and data, that has to be increasingly on the open source side.

[Safra] Christine?

Just to add on very quickly, I think collaboration is essential, and I tried to allude to this point in my remarks, that maybe we need to think a little bit differently about the classification of a piece of data versus the classification of a human understanding what that data means. They could be different. I don't know that they are. But I think we oughta think about it so that we can get the most out of all of the data that we have, and that does include collaboration. This point about secrecy, though, I think is very important, and I would just like to take the opportunity to plant the thought that we ought to also think strategically about what we choose to keep secret in what we are doing in AI and what we choose to reveal. Because understanding where the United States really is, from a national security perspective, in acquiring and pushing forward important new technologies like AI can have a very real effect on deterrence. But on the other hand, you might be giving something away. It's a hard set of decisions, but this whole reveal-conceal notion is a part of this secrecy conversation that I think we need to be having.

I just wanted to jump in and add one more point to that. There's been a very active discussion in the artificial intelligence community on exactly what the stance should be towards some of these key technologies. And it's the view of some people that, well, deep learning at its heart is a set of matrix multiplications, so you can't really protect that. I would disagree very strongly with that. I'll make an analogy to nuclear weapons. It's been very well understood what the principles are to build nuclear weapons for quite some time, and it's been in the public domain. Any graduate student in X, Y and Z knows that. But that doesn't mean that we don't choose particular key lines and key choke points at which we want to restrict technical know-how, and that's the kind of nuanced decision-making that we have to have the right people making decisions on in order to protect our national security.

And where this plays out inside the commission, in a way, is that again we're moving towards trying to get to consensus recommendations. And we have these things called hard problems where we haven't come to a consensus yet. There's a school of thought that in AI research we should try to disentangle ourselves from China, our primary competitor in this field, and another that we should couple more closely with them. People have come and talked to the commission on both sides of that equation. And the commission is gonna try, over the course of the next year, to thread the needle. But for example, all of the competitors want explainable AI. All of the competitors have high incentives for good AI safety. All of the competitors have high incentives for good verification and validation regimes. We would never want to disentangle ourselves from the broader community, to include our competitors, if we can share that data, because we may use the AI differently, but it would be very, very good if all of us could say we can trust the AI, that it's gone through this type of validation. So the commission, as I said, hasn't come to a consensus recommendation yet, but we're studying this problem very, very carefully.

Okay, I wish we could stay to take more questions, but I'm gonna have to thank our panelists. This has been, for me, a very, very fascinating discussion. I want to let the audience know that we are gonna take a 20-minute break, but don't go too far, because lunch is offered just out here near the registration. And we're gonna want you to bring your plate back in here, because we're not taking any time off. We've got a panel at 11:50 that I think you're gonna wanna hear. It's going to be Lieutenant General Jack Shanahan from the JAIC, the Joint AI Center, and Kent Walker, SVP of Global Affairs and Chief Legal Officer at Google.
