Joint AI Center Director Speaks at AI Public Conference

Air Force Lt. Gen. Jack Shanahan, director of the Joint Artificial Intelligence Center, speaks at the National Security Commission on Artificial Intelligence public conference, at the Liaison Washington Capitol Hill hotel, in Washington, Nov. 5, 2019.



I hope everyone had a good lunch or is busy finishing up an excellent lunch. I’m joined by two close friends of mine, and I’m probably the only person in the entire world who can say this: I work with and for both of them. So I want to make sure I disclose my conflict of interest to start with. So General Shanahan went to Michigan.

Go Blue

Go Blue, ROTC, entered the service of our country in 1984. He’s been promoted a gazillion times, he was in charge of a whole bunch of intelligence activities, a whole bunch of operational activities, and eventually we needed somebody to operationally implement AI in the entire DoD and he was the perfect choice. So I work with him in my role as Chairman of the DIB. Kent Walker was a federal prosecutor, a law-and-order federal prosecutor, who then chose to come to Silicon Valley; actually I think he worked at eBay for a while. And then we snagged him, we being Google, maybe 15 years ago,

Yeah, I guess

15 years every day together, and during that time not only did he set up our legal function, but he is now in charge of all global policy, PR, all those sorts of things, so very, very significant players. And what I thought we should do, since you all have heard from me plenty, is simply start, and perhaps, Kent, we should just have you make some comments about the world as you see it today.

Sure, so thank you very much Eric, General Shanahan, it’s a pleasure to be with you today and with all of you. The topic of today’s panel, public-private partnerships, is extraordinarily important to me. I grew up in this community; my father was in the service for 24 years, and I was born and spent the first years of my life on U.S. military bases. My father finished his career at Lockheed, so I feel a profound commitment to getting this right, to making sure that the private sector, the defense sector, and universities can work together in the best possible way. Before we jump into thoughts on how we can accomplish that, I wanted to take on two issues up front. It’s been frustrating to hear concerns around our commitment to national security and defense, and so I wanted to set the record straight on two issues. First, on China. In 2010 you may remember that Google was public about an attack on our infrastructure that originated in China, a sophisticated cybersecurity attack. We learned a lot from that experience, and while a number of our peer companies have significant commercial and AI operations in China, we have chosen to scope our operations there very carefully. Our focus is on advertising and on work supporting an open source platform. Second, with regard to the more general question of national security and our engagement in the Maven Project, it is an area where it’s right that we decided to press the reset button until we had an opportunity to develop our own set of AI principles and our own work on internal standards and review processes. But that was a decision focused on a discrete contract, not a broader statement about our willingness or history of working with the Department of Defense and the national security community. We continue to do that work, we are committed to doing it, and it builds on a long tradition of work throughout the Valley on national security generally.
It’s important to remember that the history of the Valley in large measure builds on government technologies, from radar to the internet to GPS to some of the work on autonomous vehicles and personal assistants you’re seeing now. Just in the last couple of weeks we had an extraordinary accomplishment with regard to quantum supremacy, which moved forward the frontiers of science and technology, but that was not an achievement by Google alone. It built on research that had been done at the University of California, Santa Barbara; it benefited from extensive consultation with research scientists at NASA; and it was carried out in many ways on supercomputers from the Department of Energy. So those kinds of exchanges and collaborations are really key to what has made American technological innovation as successful as it’s been. And just as we feel we’re contributing to the defense and national security community, a lot of that community is a part of Google. We have lots of vets who work at Google. We go above and beyond to make sure that reservists working at Google can complete their military service while having thriving careers, and even in our tools we have tried to take steps to make sure that vets transitioning to civilian life can make the best use of their military skills in the private sector. As we do that, we also are fully engaged in a wide variety of work with different agencies. With the JAIC we are working on a number of national mission initiatives, from cyber security to health care to business automation. With DARPA we are working on a number of fundamental projects to ensure the robustness of AI, to identify deep fakes, and to address the end of Moore’s Law and how to improve the operation of hardware and use software/hardware interfaces in better ways.
So as we take on those kinds of things we’re eager to do more; we are actively pursuing additional certifications that will allow us to more fully engage across a range of these different topic areas. And we think that’s extremely important. At the same time, we think there’s a great partnership to be had on the work that the DIB has announced in the last week, its AI principles, which I thought were very well done; it’s a lengthy document but a thoughtful document. And it really continues the groundwork that was laid by the Department of Defense back in, I think it was 2012, with Directive 3000.09, which talked about the use of human judgment in the application of advanced technologies, the charter of the JAIC, and the work the DoD has done with its own AI principles. In the private sector, we too have been trying to drive forward on this. We have put out principles in very common and overlapping areas; there’s a lot in common in these questions. Safety, human judgment, accountability, explainability, and fairness are all critical areas where the different actors in the space each have different things to contribute, and I think that’s critically important. This is a shared responsibility to get this right. As the DIB report notes, we need a global framework, a global approach to these issues; endorsing the OECD framework around these issues is extremely important and something that we want to support, and we are working together to figure out where the complementarities are. Because at the end of the day we are a proud American company, we are committed to the defense of the United States, our allies, and the safety and security of the world, and we are eager to continue this work and think about places we can work together to build on each other’s strengths.

Well thank you Kent. General, take us through what you’re up to at the JAIC.

Well, so first of all let me say thanks; it is great to be here and I thank both Eric and Kent for the opportunity to do this. Admittedly, though, I will say I’m a poor substitute for the Chairman of the Joint Chiefs of Staff, General Milley, although I’d say there’s a lower probability of any headline-grabbing soundbites, so you get that with it. And I also confess this is undoubtedly the first and last time I will serve as a warmup act for Dr. Henry Kissinger (audience laughing) but hang on for the main event, as I say. I not only welcome but relish the opportunity to have this broader conversation about public-private partnerships. When you ask me to reflect back on my two years as the Director of Project Maven, and just about a year in the seat as the director of the JAIC, there is one over-arching thing that continues to resonate strongly with me. It’s the importance, and I would say the necessity, of strengthening bonds between government, industry and academia. This was said this morning, you brought it up and others also mentioned it: the idea that this relationship should be depicted as a triangle. And it actually should be in the form of an equilateral triangle: government, academia, industry. I would suggest that that’s largely the form it did take beginning in the 1950s and largely lasting until the early part of this decade. Walter Isaacson writes about this very eloquently and powerfully in his book, The Innovators. It is what really drove Silicon Valley to what it is today. It’s not the case today. At best the sides of the triangle are no longer equidistant; you might even say they are distorted or a little frayed in addition to being of different lengths. The reasons for that are complex and manifold: Snowden, Apple encryption, mismatched operating tempo and agility, different business models, general mistrust between the government and industry. We started talking past each other instead of with each other.
The task is made much more difficult today by the fact that industry is moving so much faster than the Department of Defense, in fact the rest of government, when it comes to the adoption and integration of AI. We’re playing perpetual catch-up. And some employees in the tech industry see no compelling reason to work with the Department of Defense, and even for those who want to work with DoD, which I’d say is far more than is sometimes portrayed, I’d put everybody in this room in that category, we don’t make it easy for them. So I would just reinforce some of the themes that are in the Commission’s interim report, and that is this idea of a shared sense of responsibility about our AI future, a shared vision about the importance of trust and transparency. Our national security depends on it. And even for those who for various reasons still view DoD with suspicion, or who are reluctant to accept that we are in a strategic competition with China, I would hope they would still agree with us that AI is a critical component of our nation’s prosperity, vitality and self-sufficiency. So in other words, no matter where you stand with respect to the government’s future use of AI-enabled technologies, I submit that we can never attain the vision outlined in the Commission’s interim report without industry and academia with us together in an equal partnership. There’s too much at stake to do otherwise; we are in this together. Public-private partnerships are the very essence of America’s success as a nation, not only in the Department of Defense but across the entire United States government. So the message we want to send today is that we have to bring this triangle back to what it used to be.

Well thank you General. I think I’m gonna ask a couple of questions to both of you and let’s start with the same question to both. Kent, talk about Maven some more.

Sure. (audience laughing) Well, I think it’s no secret that we came up as a consumer company. We are quickly evolving into also becoming an enterprise company and putting a lot of resources into that, but there are different protocols and different ways of engaging, and as we go along that journey, I’d be lying if I told you that all our employees have an identical view on a lot of hard issues; they don’t. But in some ways that debate and discussion is the positive as well as the negative. In many ways it’s in our DNA, but it’s also in the DNA of America. You could argue that that kind of constructive debate is America’s first innovation. You look at great research scientists like Richard Feynman, who was one of the leading thinkers in quantum mechanics and also a notoriously iconoclastic, free-thinking guy. We think out of that comes incredible strength. If we work together well we can actually have a more robust, more resilient framework, a framework that helps build social trust as well as a framework that works for the world. So we put forward our AI principles and our governing processes, because an important thing to note is that the principles in a sense are easy. As the DIB report notes, the report devotes a couple of pages to the principles and a long section to the implementation, because you quickly discover that a lot of the hard problems arise when the principles conflict. We’ve had debates about whether or not to publish a paper on lip reading which has…

[Eric] Say that again.

We have had debates about whether to publish a paper on lip reading. Lip reading is a great benefit to people who are hard of hearing around the world, et cetera, but you could imagine it could be misused for surveillance and other kinds of purposes. After reviewing the particular technology, we determined that it was appropriate to publish, because that particular technology was useful really only in one-to-one settings, not for surveillance at a distance. But it’s an example of the kinds of discussions we have around issues like lip reading or facial recognition or other challenging questions, where we have to come to terms with the reality of the trade-offs that we’re making. That’s very much the case in a lot of these issues as well, but we think there’s an awful lot of room for collaboration and coordination on cyber security, on logistics, on transportation, on health care, and many more topics where we’re already engaged with the military.

[Eric] General, same question. Tell us more about Maven.

Okay, so when we started Project Maven, our intent was to go after commercial industry. Eric and the DIB had told us this is where the solutions already exist, do not reinvent the wheel; it happens out there. And our approach was a simple one. We wanted everybody in the market, from a small startup of 15 people, which is one of the companies we got on contract, to the biggest internet data cyber cloud companies in the world. And one of those happened to be Google. Why did we go to Google with Project Maven? Because we wanted to take the best AI talent in the world and put it against our most wicked problem set, wide-area motion imagery. It’s an extraordinarily difficult problem to go after, and we had a very successful collaboration with the Google team on this. What was happening internal to the company, how that played out, is a bit of a different story, but we got all the way to the end of the contract and we got products that we were very pleased with. Now, it was unfortunate, I think, even for some of the software engineers on that project; they got to the point where they almost felt a little bit ostracized because others criticized them for working with the Department of Defense. But day to day, from the senior-most leader down to the people working on the Project Maven team, we got tremendous support from Google on Maven. What we found though, and this is really the critique on both sides, is we lost the narrative very quickly. And part of this was that the company made a strategic decision really not to be public about what they wanted to do. Our approach in the Department of Defense was that we were willing to talk as much as the company wanted us to talk, whatever the market would bear, in very general terms; we didn’t want to get into operational specifics.
This was a project for intelligence, surveillance, reconnaissance on a drone, a remotely piloted aircraft. It had no weapons on it, it was not a weapons project, it is not a weapons project. But what happened is we started hearing these wild stories and assumptions about what Project Maven was and was not, to the point where if you google today, no pun intended, the adjective controversial has now been inserted permanently in front of Project Maven. It was not controversial to me, it was not controversial to the team, and I would say it’s not controversial to anybody right now beyond some people who just don’t like what we’re doing. So I guess where I bring it all the way full circle, and this is an interesting point I’ve thought a lot about, and I’m not sure everybody fully appreciates or agrees with me: I view what happened with Google and Maven as a little bit of a canary in a coal mine. The fact that it happened when it did, as opposed to on the verge of a conflict or a crisis where we were asking for their help, means we’ve gotten some of that out of the way. You’ve heard Kent talking a little bit about a reset here and how much the company, and all the other companies that we deal with, want to work with the Department of Defense. I think that narrative is an important narrative. It happened, and it would have happened to somebody else at some point, but this idea of transparency and a willingness to talk about what each side is trying to achieve may be the biggest lesson of all that I took from it.

It’s a real tragedy that we don’t wear hats anymore, because I could borrow three hats and figure out which hat I’m wearing. With my DIB hat on, I can tell you that when I met General Shanahan, the real problem inside the military is that we take these exquisitely trained soldiers, airmen, so forth and so on, and we put them in front of mind-numbing observational tasks. They literally watch screens all day. And it’s a terrible waste of the human asset that the military produces. And so there’s a huge opportunity to try to get them to work at a higher-level position, and that’s why the DIB recommended, and indeed drove the creation of, the Joint AI Center, which, General, you stood up and now head. Let’s talk about another question for both of you, which has to do with ethics. Now, in the middle of the kerfuffle that went on inside of Google, Kent had the good idea of having a formal AI ethics proposal, and he drove inside of Google an ethics process which produced a remarkable public document, now I have my Google hat on, which I think is really quite definitive, and I think maybe you could talk about that. And then similarly, the DIB produced a proposal to the military, and I believe, General, you are the customer for the proposal that we wrote on military AI ethics. I assume both of you are in favor, since Kent wrote the first one, and all the other industry companies have now largely copied variants of your approach in one form or another. What are the consequences of these ethics things? Does it really work; does Google, for example, turn off things or stop doing things, like in the last little while; I mean, how does it actually work? And the same question for you, General: there are people who claim that the military won’t operate under ethics principles. In our report we cite the many rules the military is required to operate under. Maybe you could talk about that. So Kent?

Sure. So I think, as the General noted, having frameworks in place early on, both the set of principles but then also the review processes and escalation opportunities, is a critical part of internal as well as external transparency. It’s right that among our principles we’ve talked about surveillance being a concern, so we want to make sure that some of the recognition tools and the image tracking software that we’re developing are deployed in appropriate ways. We want to be a good partner, we don’t want to pull away support, but we want to make sure we know the scope of the project that we’re developing, and when we’re licensing that for commercial uses, have a sense of the direction of travel there. I think that’s a valuable thing for both sides, in terms of making sure that expectations are clear, and in terms of building not only trust internally but trust across society. So another example would be when it comes to general purpose APIs for facial recognition, where you don’t know necessarily what use is gonna be made of them; we said until we develop more policy and more technological safeguards we’re gonna be very cautious about proceeding in that area. Another example is when it comes to weapons: we have said this is a nascent technology, we want to be very careful about the application of AI in this area, so that’s not an area that we’re pursuing; given our background, we recognize the limits of our experience in that area. Obviously the military is gonna be deeper and have more understanding of safety implications and the like. So we’re gonna continue to work through these different areas. I think there’s a remarkable degree of convergence we see between the OECD, the DoD, the DIB, and now internationally we’re starting to see the European Commission saying it’s coming up with regulations for artificial intelligence in the next hundred days.
I think this will be a very interesting exercise as we all pursue the question of how we build acceptance for this next generation of technologies.

And so looking at it through the DoD lens, this may be the best starting point. When Kent mentions this area of convergence between commercial industry, academia and the government, probably the AI ethics principles are as good as anything else to drive a stake in the ground: do we agree on all of these, some of these, and if we don’t, let’s get the conversation going. So it’s a good starting point. Another part is, and I need to state the obvious, that I can tell you with certainty that China and Russia did not embark on a 15-month process involving public hearings and discussion about the ethical, safe and lawful use of artificial intelligence. They’re not doing it, and I don’t expect they ever will. So people may question what the department is doing and why we’re doing it, but I tell you what, we just embarked on this long process to make sure we took into account all of the different voices on the ethical use of artificial intelligence, and I would say the product that’s been delivered is an excellent product, shaped by a lot of people who spent time and attention on this. I’ve said this in other settings: in over 35, pretty much 35 and a half, years in uniform, I have never spent as much time on this question of the ethical use of a given technology. The Department of Defense actually has a long and, I would say, commendable history, despite flaws along the way, of looking at the ethical use of emerging technologies, whatever they are. There are differences with artificial intelligence, and what the DIB report does very well is start with here is what’s similar to every other technology that has ever been fielded in the department, here are some areas that may be different, we’re not quite sure yet, and here are some substantive differences, like systems that learn on their own. That’s a pretty good framework for going after this.
We have a way of looking at this, no matter if it’s artificial intelligence or any other technology; our history, our processes, our approach and our training are in place to look at any emerging technology and how we bring it from a pilot and a prototype into production. So now that this report has been presented to the Secretary of Defense, I get two questions. One, what do you think about the report? It’s an excellent report; it provides the best possible starting point. And number two, what are you going to do about it? This is where it gets really complicated. We have to come up with an implementation plan. It will not be a JAIC implementation plan; it will be a department-wide implementation plan, taking these recommendations, putting something together through my boss, Mr. Dana Deasy, the Chief Information Officer of the department, and making recommendations on how we implement this across the entire Department of Defense. That is not an overnight task; this is gonna take us a while to get right, but we now have an outstanding starting point.

So, that’s a wonderful framing for where we are; I’d like to push a little bit on where this will go. Kent, let me give you an example. OpenAI developed a technology which allows arbitrary rewriting of text that was sufficiently good that they became concerned and didn’t release it, and said they only released it in certain ways to certain researchers. That’s an example, and I asked them, did anyone put any pressure on you, and they said no, we just thought it was our good judgment. You famously, very early, said on the face recognition thing, we’re going to avoid that as a general purpose offering because of the dangers. Where will the industry end up in this sort of self-restraint? Is it going to be a common set of principles? Is the industry going to have to have a common AI ethic with respect to, you know, being careful? How will this play out in your view?

I think you already see some of that starting to work across the industry with the Partnership on Artificial Intelligence, to exchange information on some of the work that’s being done. It’s going to be an evolving question as we develop more infrastructure, more of these frameworks about the appropriate limits of the use of artificial intelligence, the appropriate safeguards and checks and balances for a whole variety of different areas. But I think, I’m hopeful, that with the common groundwork we’ve started to lay already, we’re on the path to doing that. But this is true of any new technology. For any communications platform, from the radio to television to the internet, you’ve needed new regulatory infrastructures, new social conventions about how you use these different tools. This is an extraordinarily powerful technology and we’re at the early days, so I think it’s understandable that you’re seeing a variety of views, but also notable that you’re seeing the degree of convergence that you’re seeing.

So General, you have talked inside the Pentagon about this notion of a new kind of warfare, and I think the term that you all use is algorithmic warfare. Take us through, in the same sense that Kent talked about how this new emergent technology is powerful, what’s new and powerful about this technology in a military context? With your long experience and understanding, how does the military frame it? What’s the language, what’s the positioning?

I go back to when we were formed. Then-Deputy Secretary of Defense Bob Work was in the room, I’ll never forget it, it’s like yesterday, designating us: okay, you are now formed as the team that’s gonna figure out how you actually field AI, get away from the research piece of it, which was all happening wonderfully behind the scenes; now we needed a team that was focused on fielding to the warfighter. And the name that he gave us was the Algorithmic Warfare Cross-Functional Team. It’s not an accidental name. It’s become known as Project Maven because that’s much easier to say than AWCFT,

And, your acronyms are gonna kill me. Okay, let’s

So let’s just focus on algorithmic warfare.

Why don’t you tell us what algorithmic warfare is?

So it’s the idea that we’re going to face a fight in the future; we’re used to fighting for 20 years a certain type of fight, counter-terrorism, insurgencies, and we are going to be shocked by the speed, the chaos, the bloodiness and the friction of the future fight, which may be playing out in microseconds at a time. How do we envision that fight happening? It has to be algorithm against algorithm. As you described earlier, as we were talking about this, it’s a Boydian OODA loop: how fast can we get inside somebody’s decision cycle?

Remind people what the OODA is.

Colonel John Boyd, the Air Force colonel who was sort of the author of the observe, orient, decide, act loop, which is how you get through the cycle of decision making. It was really never about the decide or the act; it was more about the observe and orient, I think really about the orient phase. But in this future fight we’re looking at, this would be happening so fast that if we’re trying to do this with humans against machines, and the other side has the machines and the algorithms that we don’t, we’re at an unacceptably high risk of losing that conflict. Now this is a challenging one, because I think part of what you’re getting at in that future scenario is how people are going to be assured that our algorithms are going to work as intended and don’t take on a life of their own, so to speak. What we will fall back on, and I think this is a starting point for what the DIB principles gave us, is test, evaluation, validation and verification. We have to do a lot more work on the front end, so that by the time we field it, we know what’s being fielded. But I think we’re really going to be at a disadvantage if we think we’re gonna be in a pure human-against-machine fight. It’ll be human and machine on one side, human and machine on the other, but in the temporal dimension, this fleeting superiority that you may be facing, where decisions will be made that fast, it might be algorithm against algorithm.

Yeah, that, to me, is the key military question: what happens when the whole scenario is faster than human decision making? Because as I understand the way the military works, when there’s a threat, in general people check with their superior; there’s a rule of engagement, there’s human judgment; it’s all built around some number of minutes, right? Not some number of nanoseconds. How will the military adjust its procedures to deal with this real possible threat?

It won’t be driven from above. The innovation will happen at the lowest possible level. What we have to be able to do in places like the JAIC or Maven is give people the policies and the authorities and the framework to do what they need to do. The innovation will come from the people who say: I have a solution to this, I’m going to write code, I’m going to develop an algorithm and apply it to this problem set in the field; if you’ve given me the data, the tools, the frameworks, the libraries, the standards, all those other things, we can do it. So that fight will be more decentralized than a lot of people are comfortable with today, and that brings risks with it. We’re talking about higher risk, higher consequence, but it’s either that or risk losing the fight. So it’s this idea of decentralized development, decentralized experimentation, decentralized innovation. The innovation, as was described in one of the panels this morning, happens at the bottom. We’ve got to give them the push from above to make it succeed.

And I’d also add, in addition to the tempo component, there are new fronts in cyber security and cyber defense. We’re already seeing efforts to destabilize with disinformation campaigns and the like. So the more we can work together to recognize those patterns across a wider battlefield, if you will, the better for everybody.

Kent, do you have a model for how the industry, one of the themes of our whole conference is that the industry and the government need to work together broadly, and obviously we have a senior general here, but I’m really referring to the government as a whole, and there’s a whole lot more than just the DoD that needs AI. Do you have a model for how the industry should work with the federal government, the state governments, the DoD and so forth?

Well, we’ve already talked about two important elements the DIB report touches on as well. The first is this notion of trying to build broad trust in the application of new technologies; the second is the need for a global framework, which helps with that process. The third, as General Shanahan alluded to, is a more operational, administrative question of how we make it as easy as possible for new companies to enter into these kinds of partnerships. A lot of the innovation, a lot of the cutting-edge research being done in Silicon Valley, is not being done by large companies; it’s being done by small companies. It’s a rich ecosystem of innovation, and it’s challenging even for a company of Google’s size to get more involved in that environment. It’s doubly difficult for some of these smaller companies. So we should look at modernizing procurement from the military side, working with Congress as well, to make it as quick, as nimble, as flexible as possible, responsive to new needs. Looking at increasing R&D funding across the board, because that’s traditionally been really fertile ground for a lot of these collaborative enterprises to move forward. Looking at human resources exchanges: there are a lot of authorities out there which authorize private sector people to come into the government, but in practice it’s harder than you would think. So a lot of that hard work on the ground, I think, is important to making this a success.

For both of you, because we're gonna be making recommendations that ideally will end up in legislation a year from now, plus or minus: are there specific things that we could do that would promote private-public partnerships? For example, as you know, the DoD has DIUx, SCO and a number of other groups that work very closely, or In-Q-Tel, and obviously there's the extraordinary contribution that DARPA has made to our industry, to technology, and to me personally, so forth and so on. So, the sum of all of that: do you have a model, and I'll ask Kent the same question, are there specific things that would be helpful, that would decrease the friction and increase the cohesion between small companies, large companies, the federal government, procurement, the DoD?

A couple of different thoughts on that. First of all, there's so much that has started to happen over the last couple of years with places like the Defense Digital Service, DIU, Kessel Run, Compile to Combat, all of what I call these beginning insurgencies to get things moving.

And we should pause and say that these are each small teams of software people inside the DoD that have had an outsized impact in changing procedures in important respects, in the Air Force for example, in some of the AOCs and things like that.

So that all got it started, but what we have to do now is figure out how to institutionalize it, how to make that systemic change across the Department of Defense, which is the next hard part. And you ask about the ways we do that. I'll tell you, one of the biggest ones is just talent, bringing in talent from the outside, from academia, from industry. Our chief scientist, Jo Crisman, who is here today, has spent time in the government, in ARPA, and worked for a startup in her last job. Nand Mulchandani, who is our chief technology officer, spent 25 years in the Valley; he comes in and within 24 hours takes a different view of what we're trying to do. We need a lot more of that. We need sabbaticals, people coming in from academia for a year or two and going back out. We need to put our people in education-with-industry programs and the Secretary of Defense Corporate Fellowship. That's all beginning to happen. It needs to scale to the next level for us to really start to understand what we're each talking about. Me going out to the Valley and talking to the C-suite only gets so far. It's the peer-to-peer relationships and discussion that I think are gonna be more important than anything else.

And I very much agree with that. I think you've seen examples of it with the JAIC and with DARPA, where we're priming the pump in a variety of really important areas, whether that's training, modeling and simulation, recruiting, a number of different areas. Another important component of this is IT modernization, because in many ways the AI kernel is critical, but it comes embedded within a larger environment of software that's oftentimes very difficult, because you have to get security clearances and appropriate certification for all the elements of that piece. So there's that combination of successful individual experiments and trial runs to build the familiarity at the peer-to-peer level, but also the systemic change to make it easier to have wider adoption of the technology more broadly.

It's time for us to finish up. My objective in this panel was to put to bed this notion that somehow Silicon Valley wouldn't work with the military, and I think we've clearly seen examples, small companies, large companies, a strong statement from Kent on those, and we just sort of move forward and build this collective partnership between the private and public sectors. Kent, can you sort of summarize the key takeaway that you want to offer us, the key message, the key word? Why are you here, and why did you make a special trip just to make this point?

I want to be clear, and I'll restate what I said at the beginning. We are a proud American company. We are committed to the cause of national defense for the United States of America, for our allies, and for peace, safety, and security in the world. We approach that task thoughtfully, as we do our approach to a variety of advanced technologies. We want to be thoughtful and make sure we have clear frameworks, transparency, and understanding as we move forward. I think it's a mission that the military and the U.S. government share, and I'm looking forward, we're looking forward, to working more closely together in the future.

So General, you never like these things, but you’re sort of the top, you’re the tip, you’re the fellow who’s going to make this change happen across 3.2 million people, $660 billion, an enormous bureaucracy. How are you going to pull this off?

One person at a time. (audience laughing) It has to be a combination of top down and bottom up. As was said on the previous panel, you must have the full support of leadership from the very top to show that it's a priority for the department. That's critical but also insufficient. You have to have the bottom-up innovation, the people pushing from below. It's there today, there's no question; some of them are represented in this room. They already know what that future needs to look like. How do we meet them in the middle and give them the resources, the tools to succeed? The last thing I'll say is this is intimidating, this is a daunting task. No way around it. It is a multi-generational problem, and it's going to require a multi-generational solution. I'm not gonna wake up tomorrow and suddenly realize we've got this all right. We're gonna have some fits and starts, some successes, some setbacks, but just keep plowing ahead, and with the resources and the commitment of the department behind us, I know we'll get there.

Well, thank you. I think it's worth saying, I've worked with Kent for 15 years, and with my Google hat on, I'll tell you I could not be more proud of the impact that he's had on our society and the scale and reach of our corporation. I think you can see this today. And General, I don't think Bob Work could have chosen a better person to lead this. In our partnership with you over the last three years, you really have moved the resources, gotten the money, gotten the attention, and delivered, and there was no one before you. You are that person. So thank you both very much, thank you all.