Top Defense Official for Artificial Intelligence Briefs Reporters


Marine Corps Lt. Gen. Michael S. Groen, director of the Defense Department’s Joint Artificial Intelligence Center, briefs reporters at the Pentagon on efforts to adopt and scale artificial intelligence capabilities, November 24, 2020.


Transcript

Good afternoon, everybody. We had a little bit of technical difficulty there. Before we get started, I'll do just a really brief roll call. As most of you know, I'm Lieutenant Commander (inaudible), and I'll be moderating. Today's press conference will cover artificial intelligence capabilities and initiatives in the Department of Defense. Our host for today's press conference is Lieutenant General Michael Groen, the director of the Joint Artificial Intelligence Center. We'll begin today's briefing with opening remarks from General Groen, and then we'll continue with a question-and-answer session. We do have a hard stop today at about 1340 because of another event that will be going on here at two o'clock, so we ask everybody to keep their questions brief: just one question and, at most, one very brief follow-up. I don't have the list that I normally have, so I'll just go ahead and do a quick roll call before we start the opening remarks. Please identify yourselves and your news organization when you ask a question. So if I could do a quick roll call out to the lines really quick, go ahead out to the phones. Okay, I think we still have a technical issue, so what we'll do is go ahead and get started. General Groen will deliver his opening remarks, and we will try to get the phone lines patched in here. So, without further ado, sir, if you could deliver your opening remarks.

Thanks. Okay, good afternoon, and welcome. I'm Mike Groen, lieutenant general, United States Marine Corps. I'm the new director of the Joint Artificial Intelligence Center, the JAIC. I'm very glad for the opportunity to interact with you and look forward to our conversation today. It's my great privilege to serve alongside the members of the JAIC, but also the much larger numbers across the department who are committed to changing the way we decide, the way we fight, the way we manage and the way we prepare.
It's clear to me that we do not have an awareness problem in the department, but as with any transformational set of technologies, we have a lot of work to do in broadly understanding the transformative nature and the implications of AI integration. We're challenged not so much in finding the technologies we need, but rather in getting about the hard work of AI implementation. I've often used the analogy of the transformation into industrial-age warfare: literally lancers riding into battle against machine guns, flying machines that scouted positions or dropped bombs en masse, long-range artillery machines, or even poison gas used as a weapon at an industrial scale. That transformation, which had been underway for decades, suddenly coalesced into something very lethal and very real, an understanding that came at great cost. Another example is blitzkrieg, literally "lightning war," which leveraged technology known to both sides but was used by one side to create a tempo that overwhelmed the slower, more methodical force. In either case, the artifacts of the new technological environment were plain to see in the society that surrounded the participants. These transformational moments were eminently foreseeable, but in many cases not foreseen.

I would submit that today we face a very similar situation. We're surrounded by the artifacts of the information age. We need to understand the impacts of this set of globally available technologies on the future of warfare. We need to work hard now to foresee what is foreseeable. We have a tech-native military and civilian workforce that enjoys a fast-flowing, responsive and tailored information environment at home when they're on their mobile phones; they want that same experience in the military and department systems that they operate. Our warfighters want responsive, data-driven decisions. Our commanders want to operate at speed.
And with a mix of manned and unmanned capabilities, the citizens seek efficiency and effectiveness from their investments in defense. Artificial intelligence can unlock all of these. We're surrounded by examples in every major industry of data-driven enterprises that operate with a speed and efficiency that leave their competitors in the dust. We want that. Most important of all, we need to ensure that the young men and women who go in harm's way on our behalf are prepared and equipped for the complex, high-tempo battlefields of the future. I often hear that AI is our future, and I don't disagree with that. But AI also needs to be our present. As an implementation organization, the JAIC will continue to work hard with many partners across the department to bring that into being.

So let me just talk a little bit about our priorities in the JAIC today, and then you can ask questions. In JAIC 1.0, we helped jump-start AI in the DOD through pathfinder projects we called mission initiatives. Over the last year, year and a half, we've been in that business. We developed over 30 AI products, working across a range of department use cases. We learned a great deal and brought on board some of the brightest talent in the business. It really is amazing. When we took stock, however, we realized that this was not transformational enough. We weren't going to be in a position to transform the department through the delivery of use cases. In JAIC 2.0, what we're calling our effort now, we seek to push harder across the department to accelerate the adoption of AI across every aspect of our warfighting and business operations. While the JAIC will continue to develop AI solutions, we're working in parallel to enable a broad range of customers across the department. We can't achieve scale without having a broader range of participants in the integration of AI.
That means a renewed focus on the Joint Common Foundation, which most of you are familiar with: the DevSecOps platform that is the key enabler for AI advancement within the department. It's a resource for all, but especially for disadvantaged users who don't have the infrastructure and the tech expertise to do it themselves. We're recrafting our engagement mechanism inside the JAIC to actively seek out problems and help make others successful. We will be more "problem pull" than "product push." One thing we note is that stovepipes don't scale, so we'll work through our partners in the AI Executive Steering Group and the subcommittees of that group to integrate and focus common architectures, AI standards, data-sharing strategies, educational norms and best practices for AI implementation. We'll continue to work across the department on AI ethics, AI policy and AI governance, and we'll do that as a community. We'll also continue to work with like-minded nations to enhance security cooperation and interoperability through our AI Partnership for Defense.

All of the JAIC's work comes back to enabling that broad transformation across the department. We want to help defense leaders see that AI is about generating essential warfighting advantages. AI is not IT. It's not a black box that a contractor is going to deliver to you. It's not some digital gadget that an IT rep will show you how to log into. Our primary implementation challenge is the hard work of decision engineering. It's commanders' business at every level and in every defense enterprise. How do you make your warfighting decisions? What data drives your decision-making? Do you have that data? Do you have access to it? If that's driving leaders to think, "You know, I could make a better decision if I knew X," the JAIC wants to help leaders at every level get to that X. We want data-informed, data-driven decisions across warfighting and functional enterprises.
We want to understand the enemy and ourselves, and benefit from data-driven insights into what happens next. We want the generation of tempo to respond to fast-moving threats across multiple domains. We want recursive, virtualized war-gaming and simulation at great fidelity. We want successful teaming among manned and unmanned platforms, and we want small-unit leaders who go into harm's way to go with a more complete understanding of their threats, their risks, their resources and their opportunities. We're grateful to Congress, to DOD leadership, to the enthusiastic service members who are helping us with this, and to the American people for their continued trust and support. I really appreciate your attention and look forward to your questions. Thank you very much.

Thank you, sir. We'll go to the phones now. The first question is going to come from Sydney Freedberg from Breaking Defense.

Hello, General. Sydney Freedberg here from Breaking Defense. Thank you for doing this, and apologies if we ask you to repeat yourself a little bit, because those of us on the phone lines weren't dialed in until you started speaking. You've talked repeatedly about the importance of AI being commanders' business, about the importance of this not being seen as, you know, nerd stuff. How do you actually socialize and institutionalize that across the Defense Department? There's clearly a lot of high-level interest from the service chiefs in AI, certainly a lot of lip service at least, and AI in people's briefing slides. But how do you really familiarize not the technical people, but the commanders, with the potential of this, working as the JAIC with an apparently limited number of people? You can't send a missionary out to every office in the Pentagon to preach the virtues of AI.

Great, great question, Sydney.
And so this really is the heart of the implementation challenge: getting commanders and senior leaders across the department to really understand that this is not IT. AI is not IT. This is warfighting business. It is assessment and analysis of warfighting decision-making, or of enterprise decision-making in our support infrastructure and in our business infrastructure. If you understand it that way, then we open the doors to much better and much more effective integration into our warfighting constructs, our service enterprises and our support enterprises across the department, and we really start to get traction.

This is why we focus on the Joint Common Foundation. There are two aspects that I think are important. First, the Joint Common Foundation provides a technical platform. It will reach initial operating capability early in 2021, and then we will rapidly change it; we expect to do monthly updates of tools and capabilities to that platform. That platform provides a technical basis, especially for disadvantaged users who don't have access to data scientists, who don't have access to algorithms, who are not sure how to leverage their data. We can bring those folks to a place where they can store their data. They might be able to leverage training data from some other program. We might be able to identify algorithms that can be repurposed and reused in similar problem sets. So there's that technical piece of it.

There's also what I call the soft-services side of it. We help them with AI testing and evaluation for verification and validation, those critical AI functions, and we help them with best practice in that regard. We help them with AI ethics and how to build an ethically grounded AI development program. And then we create an environment for sharing all of that through best practice.
If we do that, then, in addition to the platform piece of this, we're building what we call our missions directorate. We're recrafting that to be much more aggressive in going out to find those problems, to find the most compelling use cases across the department that we can then bring back home: help that user understand the problem, help that user get access to contracting vehicles, help that user get access to the technical platform, and do everything we can to facilitate a thousand AI sprouts across the department, so that it really starts to take hold and we start to see the impact on decision-making.

Thanks, sir. The next question comes from Khari Johnson of VentureBeat. If you're still on the line, go ahead, sir. He's not on the line, so we're going to go to the next question, which is from Jasmine from National Defense. Jasmine, if you're on the line, go ahead.

Thank you, sir. DOD and defense companies face a volley of attacks from adversarial nations attempting to steal their IP and get peeks at sensitive information. How is the JAIC keeping the important work it does with industry safe from these countries, or from bad actors who may want to steal and replicate it?

Yeah, great question, Jasmine. You know, we're reminded every day that the artificial intelligence space is a competitive space, and there are a lot of places where we compete. Probably the first thing I would throw out there is cybersecurity. Obviously, we participate, along with the rest of the department, in our cybersecurity initiatives here in the department to defend our networks, to defend our cloud architectures, to defend our algorithms. But in addition to that, we have developed a number of cybersecurity tools that can help industry detect those threats.
And then the third thing I'd throw in there is our effort to secure our platforms, so obviously we'll use defense-certified accessibility requirements. What we're focused on is building a trusted ecosystem, because one of the things that will make this powerful is our ability to share. So we have to be able to ascertain our data; we have to know its provenance. We have to know that the networks we pass that data on are sound and secure. We have to create an environment where we can readily move, through containerization or some other method, developments or code done on one platform to another platform. Doing all of this securely and safely is a primary demand signal on the Joint Common Foundation, and it is on all of our AI developments across the department, and on the other platforms that are out there across the department. We are wide awake to the threat posed by foreign actors, especially those who have a proven track record of stealing intellectual property from wherever they can get their hands on it. We're going to try to provide an effective defense to ensure that doesn't happen.

Okay, the next question is going to go out to Brandi Vincent from Nextgov. Go ahead, ma'am.

Hi, thank you so much for the call today. My question is on the Joint Common Foundation. You mentioned these soft services that it'll have, and I read recently that there will be some to keep users aware of ethical principles and other important considerations that they should make when using AI in warfare. Can you tell us a little bit more about how the platform will be used to support the Pentagon's ethical priorities, and, from your own experience, why you believe that's important?

Yeah, great question. I think this is so important, and I'll tell you, I didn't always think that way.
When I came into the JAIC job, I had my own epiphany about the role of an AI ethical foundation in everything that we do, and it just jumps right out at you. Many people might think, well, of course we do things ethically, so when we use AI, we'll do that ethically as well. But I think of it through the lens of the law of war: the determination of military necessity, the limiting of unnecessary suffering, all of the principles of the law of war that drive our decision-making actually have a significant impact on the way we organize and fight our force today. And you can see it. The fact that we have a very mature targeting doctrine, and a targeting process that is full of checks and balances, helps us ensure that we're complying with the law of war. This process is unprecedented, and it is thoroughly ingrained in the way we do things. It changes the way we do business in the targeting world.

We believe there's a similar approach for AI and ethical considerations. So when you think about the AI ethical principles, these things tell us how to build AI and then how to employ it responsibly. When we think about building AI, we want to make sure that our requirements and our outcomes are traceable. We want to make sure that it's equitable. We want to make sure that our systems are reliable, and we do that through test and evaluation in a very rigorous way. But then we also want to ensure that as we employ our AI, we're doing it in ways that are responsible and governable. So we know that we're using an AI within the boundaries within which it was tested, for example, or we use an AI in a manner where we can turn it off, or we can ask it, in some cases:
"Hey, how sure are you about that answer? What is your assessment of the quality of the answer you provided?" An AI gives us a window to be able to do that. Honestly, we and the nations we're working with in our AI Partnership for Defense really are breaking ground here in establishing that ethical foundation, and it will be just as important and just as impactful as the application of the law of war is on our targeting doctrine, for example.

Once you have that, it's really critical. There are not that many experts, ethicists, who really understand this topic and can communicate it in a way that helps designers design systems, helps testers test systems and helps implementers implement them. We have some of them in the JAIC. They're fantastic people, and they punch way above their weight. We're really hoping to give access to their expertise across the department by linking it to the Joint Common Foundation. Thanks for the question. I think that's a really important one.

Okay, the next question goes out to Jackson Barnett of FedScoop. Jackson, go ahead, sir.

Hi, thank you so much for doing this. Could you say what your expectation, or even baseline requirement, is for what everyone needs to understand about AI? When you talk about trying to enable AI across the department, what is it that you hope people, whether a commander out in the field or people working in the back-office parts of the Pentagon, need to know about AI for your vision of enabling AI across the department to work?

Yeah, great question, Jackson.
So the most important thing, I think, is what I alluded to in my opening comments: that AI is about decision-making, not decision-making in the abstract, but decision-making in the finite, in the moment, with the decision-maker who really defines: how do I want to make that decision? What process do I use today? And then what data do I use to make that decision? Today, in many cases, historically, a lot of our warfighting decisions are made by kind of seat-of-the-pants judgment: individuals with lots of experience and a mature understanding of the situation, but doing decision-making without necessarily having current data. We can fix that. We can make that better.

One way for us to do that is to help people visualize what AI means across the department and what an AI use case looks like. It's really easy for me to start at the tactical level. We want weapons that are more precise. We want weapons that guide on command to human-selected targets. We want automatic threat detection and threat identification on our bases. We want better information about the logistics support available to our small units. We would like better awareness of the medical situation, perhaps remote triage and medical dispatch processes. Everything you can imagine doing in a commercial environment today, here in the United States, we want to be able to do with the same ease and the same reliability on the battlefield. Reconnaissance and scouting with unmanned platforms. Equipment that's instrumented and will tell us if it thinks it will fail in the next hour, or the next flight, or whatever. Team members who have secure communications over small distances.
All of that tech exists today. And if you move up the value chain, into, say, theater or combatant-command decision support: visibility of data across the theater, available at the fingertips of a combatant commander at any time, what an incredible thing that would be to achieve. Today, those combatant commanders really are, in many cases, alone and unafraid in geographic regions around the world; they have to make real-time decisions based on imperfect knowledge, and so they do the best they can. But I think our commanders deserve better than that. They should be able to decide based on data, where we have data available and where we can make that data available for them.

At a service level, think of things like human capital management. Think "Moneyball," right? I need that kind of person for this job; I'm looking for an individual with this kind of skills. Where can I find such a person? When is that person going to rotate? Or the services that we can provide to service members. I don't know how many man-hours I've spent standing in lines in an administration section in my command, waiting for somebody to look at my record book or change an allowance or something like that. Why do we do that? I haven't set foot in a bank for years. Why would I have to set foot in an admin section to do those kinds of processes? This is the broad visualization: it includes support and enabling capabilities, but it extends all the way to warfighting decision-making. We have to do this. It will make us more effective and more efficient.

Thank you, sir. The next question comes from Lauren Williams from FCW. Lauren, if you're on the line, go ahead, ma'am.

Yes, thank you for doing this.
As you're talking about the new capabilities, the data strategy came out, and obviously that is a very important part of making AI work. Can you talk a little bit about what the JAIC is going to be doing in the near future, like what we can expect to see in terms of implementing the data strategy and what the JAIC's role is going to be there?

Great question, Lauren. So the data strategy, for those of you who don't know, comes from the chief data officer, within the chief information officer's suite. What the CDO organization has done is create a vision and a strategy for how we are going to manage the enormous amount of data that's going to be flowing through our networks, coming from our sensors, generated and curated for AI models, and everywhere else we use data. You can't be data-driven as a department, you can't do data-driven warfighting, if you don't have a strategy for how to manage your data. And so, as we establish the Joint Common Foundation, but also as we help other customers execute AI programs within their enterprises, we will help the CDO implement that strategy.

Take data sharing: data sharing is really important in an environment where we have enormous amounts of data available to us broadly across the department. We need to make sure that data is available from one consumer to another, and hand in hand with that is the security of that data. We need to make sure that we have the right security controls on the data, that data is shared, but shared within a construct in which we can protect it. One of the worst things we could do is create stovepipes of data that are not accessible across the department, and that result in the department spending millions and millions of dollars re-analyzing data, re-cleaning data, repurposing data,
when that data is already available. So we're working with the CDO, and then we'll work across the AI Executive Steering Group to figure out ways: how do we not only share models, but share code? How do we share training data? How do we share test and evaluation data? These are the kinds of things a data strategy will help us put the lines in the road for, so we can do it effectively but do it safely at the same time.

Thank you, sir. We've got two other journalists on the line, and I want to try to get to them before we have to cut off. So the next question is going to go to Scott from Federal News Network. Scott, if you're on the line, go ahead, sir.

Hi, General. Thanks for doing this. I'm just curious about your priorities for 2021. You're getting more money than you were a couple of years ago. Considering that your organization is growing and you've started to work with some of the combatant commands, where are you going to be investing money, and where are we going to see the JAIC start to grow?

Great question, Scott. So, as we look at where we are in this evolution of the JAIC and the department, one of the challenges is that we have a pipeline of use cases that vastly exceeds our resources, and so this is part of our enablement process. We want to find the most compelling use cases we can: the things that are most transformational, the things that will have the broadest application and the things that will lead to innovation in the space. And so there's a balance here that we're trying to achieve. On the one hand, we're working some very cutting-edge AI technologies with some pretty mature consumers, consumers who are working at the same level we are, and in partnership.
On the other side of the coin, we have partnerships with really important enterprises and organizations who haven't even really started their journey into AI. And so we've got to make sure that we have the right balance: investment in high-tech AI that moves the state of the art and shows the pathway for additional AI development and implementation, and then also helping consumers with their first forays into the AI environment. That includes things like doing data-readiness assessments. As I mentioned in my opening remarks, in recrafting our missions directorate we're creating fly-away teams, if you will, that can fall in on an enterprise, or a potential AI consumer, and help them understand their data environment, and help them understand what kinds of things they're going to have to do to create an environment that can support an artificial intelligence set of solutions. So we'll help them with that, and when we're done helping them with that, then we'll help find them the AI solution. In an unlimited budgetary environment, we might build that algorithm for them. In a limited budget environment, sometimes the best thing we can do is link them to a contractor who may have demonstrated expertise in their particular use case. In some cases, it may just be helping them find a contract vehicle so that they can bring somebody in. In any case, we'll inform them of the ethical standards, we'll inform them of best practices for test and evaluation, and we'll help them do their data analysis.
So our resourcing now is spread among the high-end use cases; the use cases we're building purposefully to meet specific needs; the Joint Common Foundation and building that common foundation; and then helping a broader base of consumers take AI on board and start to respond to the transformation by looking at their own problem sets, facilitated by us. It's a very nuanced program: how do you spread the resourcing to make sure all of those important functions are accomplished? Thanks for the question.

This will be the last question. The question comes from Peter, and we just have a couple more minutes here. So, Peter, if you could go ahead with your question, sir.

Absolutely, thank you very much. I wanted to ask about the security of algorithms, and how you attempt to deal with one of the biggest problems in AI, which is overfitting to the data, given that you will have to keep algorithms secure and so periodically update, reduce and renew them. What fear do you see of overfitting, or even underfitting, if you know that you have to throw out a bunch of data?

Yeah, that's a great question. Primarily, you know, we are limited in the data we have in many cases, and in the good data: the good labeled data, the well-conditioned data. So creating the standards and the environment so we can build high-quality data is an important step, one that we'll accomplish through the JCF, and we'll help other consumers in that same role. But then, once we have good data, we have to protect it. We had the security conversation a little while ago; we protect it through the right security apparatus so that we can share effectively yet ensure that the data remains protected.
We have to protect test and evaluation data. We have to protect labeled and conditioned data, for a lot of different reasons: for operational reasons, for technical reasons, and because it is a valuable resource. We have to protect the intellectual property of government data, and use it effectively to ensure that we have access to rapid and frequent algorithm updates without paying a proprietary price for data the government doesn't own, or data the government gave away. We want to make sure we have an environment that makes sense for that situation.

What your question reminds all of us of, though, is the technology of adversarial AI: the opportunities for AI exploitation, or spoofing, or deception. That research environment is very robust, and obviously we pay very close attention. We do have a pretty significant powerhouse bench of AI engineers, and experts in data science as well, who keep us up to date and abreast of all of the developments in those threatening aspects of artificial intelligence, and we work those into our processes to the degree we can. We're very sensitive to the idea of over-conditioned or overfitted data; we're very sensitive to the issues of AI vulnerability and adversarial AI, and we're working on how to build robust algorithms. In many cases, the science of responding to adversarial AI and the threat it poses is a very immature science. And so, from an implementation perspective, we find ourselves working with our partners, especially our academic partners and our industry partners, to really help us understand where we need to go as a department to make sure that our AI algorithms are safe and protected, and that our data is the same: safe, protected and usable when we need to use it. All of these are artifacts of AI implementation that the department is learning as we go.
And the JAIC is trying to show the way and get the conversation going across the department, so we don't have to discover it serially; we can discover it in parallel, with all of us learning together. So we'll keep pushing that. But your point is very well taken, and it's an important consideration for us: making sure that we have reliability in the outcomes of all of our artificial intelligence efforts.

Thank you, ladies and gentlemen, and thank you, General Groen, for your time today. Just a reminder for the folks out on the line: this broadcast will be replayed on DVIDS, and we should have a transcript up on defense.gov within the next 24 hours. If you have any follow-on questions, you can reach out to me at my contacts, most of you have those, or you can contact the OSD Public Affairs duty officers. Thank you very much, everybody, for attending today. Thank you.
