AI National Security and the Public-Private Partnership

National Security Commission on Artificial Intelligence Conference – Lunch Keynote: AI National Security and the Public-Private Partnership



Mr. Walker and General Shanahan.

I hope everyone had a good lunch or is busy finishing up an excellent lunch. I'm joined by two close friends of mine. And I'm probably the only person in the entire world who can say this: I work with and for both of them. (laughing) So I wanna make sure I disclose my conflict of interest to start with. General Shanahan went to Michigan, Go Blue, ROTC. He entered the service of our country in 1984. He's been promoted a gazillion times. He was in charge of a whole bunch of intelligence activities, a whole bunch of operational activities. And eventually we needed somebody to operationally implement AI across the entire DOD, and he was the perfect choice. So I worked with him in my role as chairman of the DIB. Kent Walker was a federal prosecutor, a law-and-order federal prosecutor, who then chose to come to Silicon Valley and I think actually worked at eBay for a while. And then we snagged him, we being Google, maybe 15 years ago.

[Mr Walker] Yes.

15 years, every day together. And during that time, not only did he set up our legal function, but he is now in charge of all of global policy, PR, all those sorts of things. So very, very significant players. And what I thought we should do, since you all have heard from me plenty, is simply start, and perhaps Kent, we should just have you make some comments about the world as you see it today.

Sure, thank you very much, Eric, General Shanahan, it's a pleasure to be with you today and with all of you. The topic of today's panel, public-private partnerships, is extraordinarily important to me. I grew up in this community. My father was in the service for 24 years. I was born and spent the first years of my life on US military bases. My father finished his career at Lockheed. So I feel a profound commitment to getting this right, to making sure that the private sector, the defense sector, and the universities can work together in the best possible way. Before we jump into thoughts on how we can accomplish that, I want to take on two issues upfront. It's been frustrating to hear concerns around our commitment to national security and defense, and so I want to set the record straight on two issues. First, on China. In 2010, you may remember, Google went public about an attack on our infrastructure that originated in China, a sophisticated cybersecurity attack. We learned a lot from that experience. And while a number of our peer companies have significant commercial and AI operations in China, we have chosen to scope our operations there very carefully. Our focus is on advertising and on work supporting an open-source platform. Second, with regard to the more general question of national security and our engagement in the Maven project. It is an area where, it's right, we decided to press the reset button until we had an opportunity to develop our own set of AI principles, our own work with regard to internal standards and review processes. But that was a decision focused on a discrete contract, not a broader statement about our willingness or our history of working with the Department of Defense and the national security community. We continue to do that, we are committed to doing that. And that work builds on a long tradition of work throughout the Valley on national security in general.
It's important to remember that the history of the Valley, in large measure, builds on government technologies, from radar to the internet, to GPS, to some of the work on autonomous vehicles and personal assistants that you're seeing now. Just in the last couple of weeks, we had an extraordinary accomplishment with regard to quantum supremacy, which moved forward the frontiers of science and technology. But that was not an achievement by Google alone. It built on research that had been done at the University of California, Santa Barbara. It benefited from extensive consultation with research scientists at NASA. It was carried out in many ways on supercomputers from the Department of Energy. So those kinds of exchanges and collaborations are really key to what has made American technological innovation as successful as it's been. And just as we feel we're contributing to the defense community, the national security community, a lot of that community is a part of Google. We have lots of vets who work at Google, and we go above and beyond to make sure reservists who work at Google can complete their military service while having thriving careers. And even with our tools, we have tried to take steps to make sure that those transitioning to civilian life can make the best use of their military skills in the private sector. As we do that, we also are fully engaged in a wide variety of work with different agencies. With the JAIC, we are working on a number of national mission initiatives, from cybersecurity to health care to business automation. With DARPA, we are working on a number of fundamental projects to ensure the robustness of AI, to identify deepfakes, and to progress work on the end of Moore's Law and how to improve the operation of hardware and use software-hardware interfaces in better ways. So as we take on those kinds of things, we're eager to do more.
We are actively pursuing additional certifications that will allow us to engage across a range of these different topic areas. And we think that's extremely important. At the same time, we think there's a great partnership to be had on the work that the DIB has announced in the last week, their AI principles, which I thought were very well done. It's a lengthy document but a thoughtful document. And it continues the groundwork that was laid by the Department of Defense back in, I think it was 2012, with directive 3000.09, which talked about the use of human judgment in the application of advanced technologies; the charter of the JAIC; the work the DOD has done with its own AI principles. And in the private sector, we too have been trying to drive forward on this. We have put out principles in very common and overlapping areas. There's a lot in common in these questions. Safety, human judgment, accountability, explainability, fairness are all critical areas where different actors in the space each have different things to contribute. And I think that's critically important. This is a shared responsibility to get this right. As the DIB report notes, we need a global framework. We need a global approach to these issues. Endorsing the OECD framework around these issues is extremely important and something that we wanna support. And we are working together to figure out where there are complementarities. Because at the end of the day, we're a proud American company. We are committed to the defense of the United States, our allies, and the safety and security of the world. We are eager to continue this work and think about places we can work together to build on each other's strengths.

Well thank you, Kent. General, take us through what you're up to at the JAIC.

Well, so first of all let me say thanks. It is great to be here, and I thank both Eric and Kent for the opportunity to do this. Admittedly, though, I will say I'm a poor substitute for the chairman of the Joint Chiefs of Staff, General Milley. Although I'd say there's a lower probability of any headline-grabbing soundbites, so you get that with it. And I also confess this is undoubtedly the first and only time I will serve as a warm-up act for Dr. Henry Kissinger. But hang on for the main event, as I say. I not only welcome but relish the opportunity to have this broader conversation about public-private partnerships. When you ask me to reflect back on my two years as the director of Project Maven and just about a year in the seat as the director of the JAIC, there's one overarching theme that continues to resonate strongly with me. It's the importance, I would say the necessity, of strengthening bonds between government, industry, and academia. This was said this morning; you brought it up and others also mentioned it. It's this idea that this relationship should be depicted as a triangle, and it actually should be in the form of an equilateral triangle: government, academia, and industry. I would suggest that that's largely the form it did take beginning in the 1950s and largely lasting until the early part of this decade. Walter Isaacson writes about this very eloquently and powerfully in his book, The Innovators. It is what really gave us the Silicon Valley of today. It's not the case today. At best, the sides of the triangle are no longer equidistant. You might even say they are distorted or a little frayed, in addition to being of different lengths. The reasons for that are complex and they're manifold. Snowden, Apple encryption, mismatched operating tempo and agility, different business models, general mistrust between the government and industry. We started talking past each other instead of with each other.
The task is made much more difficult today by the fact that industry is moving so much faster than the Department of Defense, in fact the rest of government, when it comes to the adoption and integration of AI. We're playing perpetual catch-up. And some employees in the tech industry see no compelling reason to work with the Department of Defense. And even for those who wanna work with DOD, which I should say is far more than is sometimes portrayed, and I'd put everybody in this room in that category, we don't make it easy for them. So I would just reinforce some of the themes that are in the Security Commission's interim report. And that is this idea of a shared sense of responsibility about our AI future, a shared vision about the importance of trust and transparency. Our national security depends on it. And even for those who for various reasons still view DOD with suspicion, or who are reluctant to accept that we are in a strategic competition with China, I would hope they would still agree with us that AI is a critical component of our nation's prosperity, vitality, and self-sufficiency. So in other words, no matter where you stand with respect to the government's future use of AI-enabled technologies, I submit that we can never attain the vision outlined in the Commission's interim report without industry and academia with us together in an equal partnership. There's too much at stake to do otherwise. We are in this together. Public-private partnerships are the very essence of America's success as a nation, not only in the Department of Defense but across the entire United States government. So the message we wanna send today is we have to bring this triangle back to what it used to be.

Well thank you, General. I think I’m gonna ask a couple of questions to both of you. And let’s start with the same question to both. Kent, talk about Maven some more.

Sure. (laughing) I think it's no secret that we came up as a consumer company. We are quickly evolving and also becoming an enterprise company, and putting a lot of resources into that. But there are different protocols and different ways of engaging, and as we go along that journey, I'd be lying to tell you that all of our employees have an identical view on a lot of hard issues; they don't. But in some ways that debate and discussion is a positive as well as a negative. In many ways it's in our DNA, but it's also in the DNA of America. You could argue that that kind of constructive debate is America's first innovation. You look at great research scientists like Richard Feynman, who was one of the leading thinkers in quantum mechanics and also a notoriously iconoclastic, freethinking guy. We think out of that comes incredible strength. If we work together well, we can actually have a more robust, more resilient framework, a framework that helps build social trust as well as a framework that works for the world. So we put forward our AI principles and our governing processes, because an important thing to note is that the principles in a sense are easy. As the DIB report notes (the report devotes a couple of pages to the principles and a long section to the implementation), you quickly discover that a lot of the hard problems come when the principles conflict and are challenging. We've had debates about whether or not we should publish a paper on lip reading.

[Eric] Say that again.

We have had debates about whether to publish a paper on lip reading. Lip reading is a great benefit to people who are hard of hearing around the world, et cetera. But you can imagine it could be misused for surveillance and other kinds of purposes. After reviewing the particular technology, we determined that it was appropriate to publish, because that particular technology worked really only in one-to-one settings, not for surveillance at a distance. But it's an example of the kinds of discussions we have around issues like lip reading or facial recognition or other challenging questions where we have to come to terms with the reality of the trade-offs that we're making. That's very much the case in a lot of these issues as well. But we think there's an awful lot of room for collaboration and coordination on cybersecurity, on logistics, on transportation, on healthcare, and many more topics where we're already engaged with the military.

[Eric] General, same question, tell us more about Maven.

Okay, so when we started Project Maven, our intent was to go after commercial industry. Eric and the DIB had told us this is where the solutions already exist; do not reinvent the wheel, it happens out there. And our approach was a simple one. We wanted everybody in the market, from a small startup of 15 people, which is one of the companies we got on contract, to the biggest internet, data, cyber, and cloud companies in the world. And one of those happened to be Google. Why did we go to Google with Project Maven? Because we wanted to take the best AI talent in the world and put it against our most wicked problem set. Wide-area motion imagery is an extraordinarily difficult problem to go after, and we had a very successful collaboration with the Google team on this. What was happening internal to the company and how that played out is a little bit of a different story. But we got all the way to the end of the contract, and we got products that we were very pleased with. Now, it was unfortunate, I think, even for some of the software engineers on that project. They got to the point where they almost felt a little bit ostracized because others criticized them for working with the Department of Defense. But day to day, from the senior-most leader down to the people working on the Maven team, we had tremendous support for Project Maven from Google. What we found, though, and this is really the critique on both sides, is we lost the narrative very quickly. And part of this was that the company made a strategic decision really not to be public about what they wanted to do. Our approach in the Department of Defense was to be willing to talk as much as the company wanted us to talk, to do whatever the market would bear, in very general terms; we didn't want to get into operational specifics. This was an intelligence, surveillance, and reconnaissance project on a drone, a remotely piloted aircraft, that had no weapons on it.
It was not a weapons project; it is not a weapons project. But what happened is we started hearing these wild stories and assumptions about what Project Maven was and was not, to the point where if you google it today, pun intended, the adjective controversial has now been inserted permanently in front of Project Maven. It was not controversial to me, it was not controversial to the team. I'd say it is not controversial to anybody right now beyond some people who just don't like what we're doing. So I guess when I bring it all the way full circle, this is an interesting point that I've thought a lot about, and I'm not sure everybody fully appreciates or agrees with me: I view what happened with Google and Maven as a little bit of a canary in a coal mine. The fact that it happened when it did, as opposed to on the verge of a conflict or a crisis when we're asking for their help, means we've gotten some of that out of the way. You've heard Kent talking about a little bit of a reset here and how much the company, and all the other companies that we deal with, wanna work with the Department of Defense. I think that narrative is an important narrative. It happened; it would have happened to somebody else at some point. But this idea of transparency and a willingness to talk about what each side is trying to achieve may be the biggest lesson of all that I took from it.

It's a real tragedy that we don't wear hats anymore, because I could borrow three hats and figure out which hat I'm wearing. With my DIB hat on, I can tell you when I met General Shanahan, the real problem inside the military is that we take these exquisitely trained soldiers, airmen, and so forth, and we put them in front of mind-numbing observational tasks. They literally watch screens all day. And it's a terrible waste of the human asset that the military produces. And so there's a huge opportunity to try to get them to work at a higher-level position, and that's why the DIB recommended Maven and indeed the creation of a joint center for AI, which you've stood up and now head. Let's take another question for both of you, which has to do with ethics. Now, in the middle of the kerfuffle that went on inside of Google, Kent had the good idea of having a formal AI ethics proposal, and he drove inside of Google an ethics process which produced a remarkable public document, and now I have my Google hat on, which I think is really quite definitive. And I think maybe you could talk about that. Then similarly, the DIB produced a proposal to the military, and I believe you are the customer for the proposal that we wrote on military AI ethics. I assume both of you are in favor, since Kent wrote the first one, and all the other industry companies have now largely copied variants of your approach in one form or another. What are the consequences of these ethics things? Does it really work? Does Google, for example, prevent things, turn off things, or stop doing things, like in the last little while? How does it actually work? And same question for you, General. There are people who claim that the military won't operate under ethics principles. In our report, we cite the many rules that the military is required to operate under. Maybe you could talk about that. So, Kent.

Sure. So I think, as the General noted, having frameworks in place early on, both the set of principles but then also the review processes and escalation opportunities, is a critical part of internal as well as external transparency. Because it's right that, among our principles, we've talked about surveillance being a concern. We wanna make sure that some of the recognition tools and the image-tracking software that we're developing are deployed in appropriate ways. We wanna be a good partner, we don't wanna pull away support, but we wanna make sure we know the scope of the project that we're developing, and when we're licensing that for commercial uses, have a sense of the direction of travel there. I think that's a valuable thing for both sides, in terms of making sure expectations are clear and in terms of building not only trust internally but trust across society. Another example would be when it comes to general-purpose APIs for facial recognition, where you don't necessarily know what use is gonna be made of them. We've said that until we develop more policy and more technological safeguards, we're gonna be very cautious about pursuing that area. Another example is when it comes to weapons. We have said this is a nascent technology and we wanna be very careful about the application of AI in this area, so that's not an area that we're pursuing. Given our background, we recognize the limits of our experience in that area. Obviously the military is gonna be deeper and have more understanding of safety implications and the like. So we're gonna continue to work through these different areas. I think there's a remarkable degree of convergence we see between the OECD, the DOD, the DIB. Now internationally, we're starting to see the European Commission say they're coming up with regulations for artificial intelligence in the next 100 days.
I think this will be a very interesting exercise as we all pursue kind of a common mission of how we build acceptance for this next generation of technologies.

And looking at it through the DOD lens, this may be the best starting point. Kent mentioned these areas of convergence between commercial industry, academia, and the government. The AI ethics principles are probably as good as anything else to drive a stake in the ground: do we agree on all of these, some of these, and if we disagree, let's get the conversation going. So it's a good starting point. The other point is, I need to state the obvious: I can tell you with certainty that China and Russia did not embark on a 15-month process involving public hearings and discussion about the ethical, safe, and lawful use of artificial intelligence. They're not doing it, and I don't expect they ever will. So people may question what the department is doing and why we're doing it. I tell you, we've just embarked on this long process precisely to make sure we took into account all of the different voices on the ethical use of artificial intelligence. And I would say the product that's being delivered is an excellent product, shaped by a lot of people who have spent time and attention on this. I've said this in other settings: in over 35, pretty much 35 and a half, years in uniform, I have never spent as much time as on this question of the ethical use of technology. The Department of Defense actually has a long and, I would say, commendable history, despite flaws along the way, of looking at the ethical use of emerging technologies, whatever they are. There are differences with artificial intelligence, and what the DIB report does very well is start with: here is what's similar to every other technology that's ever been fielded in the department; here are some areas that may be different, we're not quite sure yet; and here are some substantive differences, like systems that learn on their own. That's a pretty good framework for going after this.
We have a way of looking at this, and no matter if it's artificial intelligence or any other technology, our history, our processes, our approach, and our training are in place to look at any emerging technology and how we bring it in from a pilot and a prototype into production. So now that this report has been presented to the Secretary of Defense, I get two questions. One: what do you think about the report? It's an excellent report that provides the best possible starting point. And two: what are you going to do about it? This is where it gets really complicated. We have to come up with an implementation plan. It will not be a JAIC implementation plan; it will be a department-wide implementation plan, taking these recommendations, putting something together through my boss, the chief information officer of the department, and making some recommendations on how we implement this across the entire Department of Defense. That is not an overnight task. This is gonna take us a while to get right. But we now have an outstanding starting point.

So, that's a wonderful framing for where we are. I'd like to push a little bit on where this will go. Kent, let me give you an example. OpenAI had developed a technology which would allow arbitrary rewriting of text that was sufficiently good that they became concerned, and they didn't release it; instead they only released it in limited ways to certain researchers. That's an example of this, and I asked them, did anyone put any pressure on you, and they said no, we just thought it was our good judgment. You famously said very early, on this facial recognition thing, we're going to avoid that as a general-purpose API because of the dangers. Where will the industry end up in this sort of self-restraint? Is it going to be a common set of principles? Is the industry gonna have to have a common AI ethics with respect to being careful? How will this play out in your view?

I think you already see some efforts to work across the industry, with the Partnership on AI, to exchange information on some of the work that's being done. It's gonna be an evolving question as we develop more infrastructure, more of these frameworks about the appropriate limits of use of artificial intelligence, the appropriate safeguards and checks and balances throughout a whole variety of different areas. But I'm hopeful that with the common groundwork we've started to lay already, we're on the path to doing that. This is true of any new technology, any communications platform, from radio to television to the internet. You've needed new regulatory infrastructures, new social conventions about how you use these different tools. This is an extraordinarily powerful technology, and we're at the early days. So I think it's understandable that you're seeing a variety of views come together, but it's also notable the degree of convergence that you're seeing.

So General, you have talked inside the Pentagon about this notion of a new kind of warfare, and I think the term that you all use is algorithmic warfare. Take us through, in the same sense that Kent talked about how this new emergent thing is new and powerful, what's new and powerful about this technology in a military context. With your long experience and understanding, how does the military frame this, what's the language, what's the positioning?

I go back to when we were formed. Then Deputy Secretary of Defense Bob Work was in the room, I'll never forget it, it was like yesterday, saying to us, okay, you are now formed as the team that's gonna figure out how you actually field AI. Get away from the research piece of it, which was all happening wonderfully behind the scenes; now we needed a team that was focusing on fielding to the war fighter. And the name that he gave us was the Algorithmic Warfare Cross-Functional Team. It's not an accidental name. It became Project Maven because that's much easier to say than

Your acronyms are gonna kill me.

Okay let’s just focus on algorithmic warfare.

Why don’t you tell us what algorithmic warfare is?

So it's the idea that we're going to face a fight in the future. We're used to fighting for 20 years in a certain type of fight: counterterrorism, insurgencies. We are going to be shocked by the speed, the chaos, the bloodiness, and the friction of a future fight, and this will maybe be playing out in microseconds at a time. How do we imagine that fight happening? It has to be algorithm against algorithm. It is, as you described earlier when we were talking about this, an OODA loop: how fast can we get inside somebody's decision cycle.

[Eric] Remind people what the OODA loop is.

Colonel John Boyd, an air force colonel, was the author of the OODA loop: observe, orient, decide, act. It's how you get through the cycle of decision making, which was really never about the decide or the act; it was more about the observe and orient, I think really about the orient phase. But in this future fight we're looking at, this will be happening so fast. If we're trying to do this with humans against machines, and the other side has the machines and the algorithms that we don't, we're at an unacceptably high risk of losing that conflict. Now, this is a challenging one, because I think part of what you're getting at in that future scenario is how people are going to be assured that our algorithms are gonna work as intended and not take on a life of their own, so to speak. What we will fall back on, and I think this is the starting point for what the DIB principles gave us, is test, evaluation, validation, and verification. We have to do a lot more work on the front end, so that by the time we field it, we know what's being fielded. But I think we're really going to be at a disadvantage if we think we're gonna be in a pure human-against-machine fight. It will be human and machine on one side, human and machine on the other. But the temporal dimension, this fleeting superiority that you may be facing, where decisions will be made that fast, means it might be algorithm against algorithm.

Yeah, that was to me the key question as a military matter: what happens when the whole scenario is faster than human decision making? Because as I understand the way the military works, when there's a threat, in general people check with their superior, there are rules of engagement, there's human judgment; it's all built around some number of minutes, right, not some number of nanoseconds. How will the military adjust its procedures to deal with this real possible threat?

It won't be driven from above; the innovation will happen at the lowest possible level. What we have to be able to do in places like the JAIC or in Maven is give people the policies, the authorities, and the framework to do what they need to do. The innovation comes from the people who will say, I have a solution to this. I'm going to write code, I'm gonna develop an algorithm and apply it to this problem set in the field, if you give me the data, the tools, the frameworks, the libraries, the sensors, all those other things. We can do that. So it's the idea that, in that fight, it will be more decentralized than a lot of people are comfortable with today. And that brings risk. We're talking about higher risk, higher consequence, but it's either that or risk losing the fight. So it's this idea of decentralized development, decentralized experimentation, decentralized innovation. The innovation, as was described in one of the panels this morning, happens at the bottom. We've gotta give them the push from above to make it succeed.

[Kent] And also to add to the temporal component: there are new fronts in cybersecurity and cyber defense. We're already seeing efforts to destabilize with disinformation campaigns and the like. So the more we can work together to recognize those patterns across a wider battlefield, if you will, the better for everybody.

Kent, do you have a model for how the industry should engage? One of the themes of our whole conference is that the industry and the government need to work together broadly. And obviously we have a senior general here, but I'm really referring to the government as a whole; there's a lot more than just the DOD that needs AI. Do you have a model for how the industry should work with the federal government, the state governments, the DOD, and so forth?

Well, I've already talked about two important elements that the DIB report touches on as well. The first is this notion of trying to build broad trust in the application of new technologies. The second is the need for a global framework, which helps with that process. The third, as General Shanahan alluded to, is a more operational and administrative question of how we make it as easy as possible for new companies to enter into these kinds of partnerships. A lot of the innovation, a lot of the cutting-edge research being done in Silicon Valley, is not being done by large companies; it's being done by small companies. It's a rich ecosystem of innovation. And it's challenging even for a company of Google's size to get more involved in that environment; it's doubly difficult for some of these smaller companies. So we should look at modernizing procurement on the military side, working with Congress as well, to make it as quick, as nimble, and as flexible as possible and responsive to new needs. Look at increasing R&D funding across the board, because that's traditionally been really fertile ground for a lot of these collaborative enterprises to move forward. Look at human resources exchanges: there are a lot of authorities out there which allow private sector people to come into the government, but in practice it's harder than you would think. So a lot of that hard work on the ground, I think, is important to making this a success.

For both of you, because we're going to be making recommendations that ideally will end up in legislation, you know, a year from now plus or minus: are there specific things that we could do that would promote private-public partnerships? For example, as you know, the DOD has DIUx, the SCO, and a number of other groups that work very closely with industry; you have In-Q-Tel, and obviously the extraordinary contribution that DARPA has made to our industry, to technology, and to me personally, and so forth. So given all of that, do you have a model, and I'll ask Kent the same question, are there specific things that would be helpful, that would decrease the friction and increase the cohesion between small companies, large companies, the federal government, procurement, and the DOD?

A couple of different thoughts on that. First of all, so much has started to happen over the last couple of years, with places like the Defense Digital Service, DIU, Kessel Run, Compile to Combat, all of what I call these beginning insurgencies to get things moving.

And we should pause and say that these are each small teams of software people inside the DOD that have had an outsized impact in changing procedures and important aspects in the Air Force, for example in some of the AOCs and things like this.

So that all got it started, but what we have to do now is figure out how to institutionalize it, how to make that a systemic change across the Department of Defense, which is the next hard part. And when you ask about the ways we do it, I'll tell you one of the biggest ones is just talent: bringing in talent from the outside, from academia, from industry. Our Chief Scientist, Jill Crisman, who's here today, has time in the government, in DARPA, and was working for a startup in her last job. Nand Mulchandani, who's our Chief Technology Officer, spent 25 years in the Valley. He comes in and within 24 hours takes a different view of what we're trying to do. We need a lot more of that. We need sabbaticals, people coming in from academia for a year or two and then going back out to the outside, and us putting people in education with industry, like the Secretary of Defense Corporate Fellowship. That's all beginning to happen; it needs to scale to the next level to really start to understand what we're each talking about. Me going out to the Valley and talking to the C-suite only gets us so far. It's the peer-to-peer relationships and discussions, I think, that are going to be more important than anything else.

I very much agree with that. I think you've seen examples of it with the JAIC and with DARPA: we are priming the pump in a variety of really important areas, whether that's training, models and simulation, or recruiting. Another important component of this is IT modernization, because in many ways the AI kernel is critical, but it comes embedded within a larger environment of software that's oftentimes very difficult, because you have to get security clearances and appropriate certification for all the elements of that piece. So there's that combination of successful individual experiments and trial runs to build familiarity at the peer-to-peer level, but also this extended change to make it easier to have wider adoption of the technology more broadly.

It's time for us to finish up. My objective in this panel was to put to bed this notion that somehow Silicon Valley wouldn't work with the military, and I think we've clearly seen examples from small companies, from larger companies, and a strong statement from Kent on this. And we can move forward and build this collective effort between the private and public partners. Kent, can you summarize the key takeaway that you want to offer us, the key message? Why are you here, and why did you make a special trip just to make this point?

I want to be clear, and I'll restate what I said at the beginning: we are a proud American company, and we are committed to the cause of national defense for the United States of America, for our allies, and for peace, safety, and security in the world. We approach that task thoughtfully, as we do when approaching a variety of advanced technologies. We want to be thoughtful and make sure we have clear frameworks, transparency, and understanding as we move forward. I think that's a mission that the military and the US government share, and I'm looking forward, we are looking forward, to working more closely together in the future.

So General, you never like these things, but you're at the top, you're the tip of the spear, you're the fellow who's going to make this change happen across 3.2 million people and a 660-billion-dollar budget, an enormous bureaucracy. How are you going to pull this off?

One person at a time.

It has to be a combination of top down and bottom up. As was said on the previous panel, you must have the full support of leadership from the very top to show that this is a priority for the department. That's critical but also insufficient. You have to have the bottom-up innovation, the people pushing from below, and it's there today. There's no question; some of them are represented in this room. They already know what that future needs to look like. How do we meet that in the middle and give them the resources and tools to succeed? The last thing I'll say is that this is intimidating. This is a daunting task, and there's no way around it. It is a multi-generational problem, and it's going to require a multi-generational solution. I'm not going to wake up tomorrow and suddenly realize we've got this all right. We're going to have some fits and starts, some successes, some setbacks, but we'll just keep plowing ahead. And with the resources and the commitment of the department behind us, I know we'll get there.

Well, thank you. I think it's worth saying: I've worked with Kent for 15 years, and with my Google hat on I'll tell you I could not be more proud of the impact that he's had on our society and on the scale and reach of our corporation. I think you can see that today. And General, I don't think Bob Work could have chosen a better person to lead this. In our partnership with you over the last three years, you really have moved the resources, gotten the money, gotten the attention, and delivered, and there was no one before you. You are that person. So thank you both very, very much. Thank you all.
