Artificial Intelligence Experts Discuss Diverse Skill Sets


Brian Drake, the Defense Intelligence Agency’s science and technology director of artificial intelligence, and Jane Pinells, chief of test and evaluation of AI for the Defense Department’s Joint Artificial Intelligence Center, discuss “Human Machine Team: The Intersection of Diverse Skill Sets” in an event sponsored by Defense One, June 24, 2020.


Transcript

We’re going to have a great discussion about all of the different skill sets that are a big part of the machine learning and AI world, and a look at some of the ways that different organizations, from government to the national security community, are beginning to understand AI, to use it to achieve different mission objectives, and to train different analysts and operators within the organization to use emerging technology tools for the sake of the mission and to create entirely new outcomes that were never possible before. So I’m very glad to have all of you here today. Joining me: Brian Drake, director of artificial intelligence at the Defense Intelligence Agency; Jane Pinells, chief of test and evaluation of artificial intelligence and machine learning at the Department of Defense’s Joint Artificial Intelligence Center, also called the JAIC; Paul Stanley, chief scientist at the National Counterterrorism Center; and Rebecca Anderson, assistant research professor in the Communications, Information and Navigation Office at Penn State University. And again, I am Patrick Tucker. So with that, I’d like to give everybody on the panel an opportunity to give a couple of quick sentences of introduction about who they are and what exactly they do. We’ll start with you, Brian.

Thank you very much, Patrick. Very much a pleasure to be here today, and I really appreciate the invitation. So I work in the Future Capabilities and Innovation Office. Our function in DIA is to identify issues that require some degree of either business process, human, or technical intervention. We have several different directors across every area of science. I am in the area of computer science, and so they renamed the position director of artificial intelligence. My charge in life is twofold.
One is to provide strategic thought leadership for the agency, to identify opportunities for us to meet the defense intelligence mission. And then, secondarily, it’s problem-and-solution matching: doing some tech scouting, having a fair amount of outreach, which is what this activity is about, and trying to bring together the mission components with the folks that are trying to do good work for us.

Okay. Paul.

Hi. My name is Paul Stanley. I am currently NCTC’s chief engineer with our Information Technology Services Directorate. I’ve spent quite a bit of time in the technical realm at NCTC, often in data science and innovation. I was a senior data scientist there for quite a while, and I’ve worked as a senior developer at NCTC as well. Prior to that, I worked at the NSA, worked at the FBI, and I was a cryptologic Arabic linguist in the Air Force. So I have kind of a wide swath of varied experience over the years, and I think we’ll probably get into, at some point in this discussion, how some of those soft skills that you learn in certain areas of work you’ve done in the past can apply to new realms, especially in the technological world.

Okay, great. Jane.

Hi, everyone, and thank you very much for the invitation to be here. My name is Jane Pinells. I’m the chief of test and evaluation over at the DoD Joint Artificial Intelligence Center. Our general mission at the JAIC is to accelerate the adoption of AI across the DoD, and my branch, the test and evaluation branch, participates in that vision in a few different ways. The JAIC’s number one charter is delivery, right? So our goal is to deliver AI capabilities that provide a competitive advantage for the warfighter and end user. And so one part of my job, and my team’s job, is to rigorously test and evaluate the JAIC-developed AI capabilities according to their mission and ethical requirements.
On the other hand, another one of the JAIC’s roles is to scale and cultivate AI capabilities and create repeatability in the rest of the DoD. And so another part of our charter in test and evaluation is really to productize whatever knowledge we get from testing the JAIC’s AI products, along with any lessons learned and information, and package it so that the rest of the department’s test community can use it as well. My personal background is in statistics, and I spent over 10 years doing various levels of operational test and evaluation for the DoD, both at the service level and also at the OSD level.

Over to Rebecca.

It’s nice to be here. I work with a research lab, coming from a liberal arts background, on science-intensive projects, training, documentation, and traineeship capabilities, and I tend to be the outlier on these projects because of that liberal arts background. My research areas are education and writing, so I’m deeply interested in what enables these collaborations.

Okay, great. So, a couple of quick housekeeping notes. First, I want to thank our sponsor, Esri, who has been putting on these events: a leader in geospatial intelligence for the Defense Department, but also for a lot of different organizations and corporations. You have most certainly interacted with their software, but very likely you didn’t notice it running in the background, for instance, of the big Johns Hopkins COVID-19 map that everyone is using. It helps a lot of different organizations figure out where to put things and where to go; computerized understanding of the geographical, geospatial world is what they do. A longtime partner. I also want to remind everybody that this is an interactive discussion, and there’s a Q&A tab over in the corner. We want to hear from you. As we get closer to the end of the broadcast, we typically find in these sorts of situations that we have more questions than we can accommodate in the broadcast.
We don’t have enough time for everybody to get theirs in. So, as yours occur to you, please go ahead and submit them. I want to try to get as many in as possible, because so many of them are so good. So keep that in mind: if you hear something that seems interesting to you, or have a question that comes to mind, please use that chat feature and get that question in. And so with that, I want to turn first to Brian. Tell us a little bit about some new ways that DIA is beginning to use artificial intelligence and machine learning, ways that might have seemed far-fetched perhaps a few years ago and that are creating a new reality for DIA in terms of using emerging tools.

Yeah, good question. When I came into this job, one of the first things that I had to do was figure out what DIA was already doing. So I had a very extensive tour of all the different operations and missions where we were doing things of interest. And one of the places that I first went to was the National Media Exploitation Center. This is actually a DNI activity where DIA is the executive agent, so it’s staffed with mostly DIA folks, but it is interagency; we have, of course, CIA and NGA and a whole bunch of folks that are in there with us. Its mission is to exploit captured media from the field. So when we do a raid, for example the Osama bin Laden raid, the media that was captured in that particular raid was taken from Abbottabad and then shipped over to our folks at the NMEC. And in the process of going through all those documents, they discovered a number of very interesting things about the future plans that al-Qaida had, the perspectives that bin Laden had, and so forth. They actually won DNI awards for that sort of work. Now, at that time, they had invested in some very rudimentary AI capabilities. But what I didn’t know was that for the past 15 years they have been investing in AI capabilities across the board.
So in their particular domain, they have made investments in text recognition technology, object detection, machine translation, and audio and image categorization. And what that allows them to do is to go through the exabytes of data that they get from document exploitation, which results in tens of billions of pieces of data. If you consider that we could dedicate all the federal employees and all the contractors in the entire federal government to going through data at that kind of scale, we would never get there. But what they have successfully done at the NMEC, because they’ve successfully deployed AI, is keep the ability to go through all of those pieces of data, and they are able to derive the kinds of insights that came out of the Abbottabad compound raid and do it extremely quickly. And by doing that, we can warn and alert on things that are emerging threats, or plots that we know are coming, or mysteries that we didn’t understand before. What I found, and I may be biased, is that their installation of that capability was probably the most impressive I’ve seen in government, because of the volume they’re able to take, the kinds of insights they can deliver, and the different customer sets that they serve. They serve analysts that are just doing a query for something that they know they’re looking for. They also serve analysts who don’t know what they’re looking for, who just kind of have a feeling, or who have a hypothesis that they’re testing. And they also service machine-to-machine work, where I need to take a raw piece of media intelligence and then take it to something else that takes it to the next level, for example in the vetting program; that is something that they do as well.
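As an editorial aside: the multi-stage exploitation workflow Drake describes (text recognition, language handling, categorization, feeding downstream consumers) can be sketched in miniature. This is purely illustrative; the real NMEC pipeline is not public, and every stage function below is a hypothetical toy stand-in for the capability classes he names.

```python
# Illustrative sketch only: each stage is a toy stand-in for a real
# capability (OCR/speech-to-text, language ID, trained categorizers).
from dataclasses import dataclass, field

@dataclass
class MediaItem:
    raw: bytes
    text: str = ""
    language: str = ""
    tags: list = field(default_factory=list)

def recognize_text(item: MediaItem) -> MediaItem:
    # Stand-in for text recognition: pretend the raw bytes decode to text.
    item.text = item.raw.decode("utf-8", errors="ignore")
    return item

def detect_language(item: MediaItem) -> MediaItem:
    # Toy heuristic in place of a real language-identification model.
    item.language = "ar" if any(ord(c) > 0x0600 for c in item.text) else "en"
    return item

def categorize(item: MediaItem) -> MediaItem:
    # Toy keyword rules in place of trained categorization models.
    keywords = {"plan": "planning", "travel": "movement"}
    for word, tag in keywords.items():
        if word in item.text.lower():
            item.tags.append(tag)
    return item

PIPELINE = [recognize_text, detect_language, categorize]

def exploit(raw: bytes) -> MediaItem:
    """Run one captured item through every stage in order."""
    item = MediaItem(raw=raw)
    for stage in PIPELINE:
        item = stage(item)
    return item
```

The point of the shape, not the toy logic: each stage enriches the same record, so analysts can query by tag or language, and machine-to-machine consumers can pick up the structured output.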
So, for me, I feel like that is one of the most impressive things that is going on right now.

Give me a sense: when you say you take media and then turn it into something actionable very quickly, can you give me a sense of the contrast between “very quickly” a few years ago, during the Abbottabad raid, versus “very quickly” in 2020?

Yes. It’s nearly the same problem set, but they had already deployed some light automation at that time, so there was light automation then, and it has now generationally gotten even faster. So from before, where it may have taken a couple of hours to go through data of that kind of size, now we’re starting to talk about milliseconds. So there’s a tremendous return in delivering insights to our analysts very, very quickly.

Fascinating. And you know, this speaks to, I think, an issue that Paul faces. When you look at the amount of information out there that might be useful for a counterterrorism mission, and you are tasked with the job of sifting through all of that and understanding the bits that are going to be most relevant to you within a particular time frame: can you give us a sense, in a general way, of the difference between the amount of data that might go into that process today versus a few years ago? There’s been, I imagine, this real explosion, right? And what does that mean for you in terms of operationalizing that data and getting to actual intelligence?
Right. I think, looking at it not just from the counterterrorism perspective but government in general: as communication becomes a little bit more prevalent and a little bit easier and a little bit more streamlined within agencies, we’re starting to see massive amounts of information flow to agencies that are tasked with consolidating that sort of information from other agencies and trying to figure out the key bits of information and distill that information down to something that’s consumable by either their agency or others. The National Counterterrorism Center is an example of an agency that works with other government partners, and partners from around the world, that provide information to us that may have a nexus to terrorism. So we are tasked with determining whether there is a nexus to terrorism. And as you can imagine, we get sources from all over the place. So what does that mean? There is no common schema; there are no common formats in the way that data is sent and transmitted. Even within government agencies, that’s a big problem. That’s a huge problem, because what has happened in the past, in large part, has been the use of things like regular expressions. People that are familiar with Python or Perl or any scripting language will know what I’m talking about, where on a case-by-case basis you’re writing code to be able to say, pull out this sort of information that I know might be relevant to this particular type of document. And what we’re able to do now with machine learning is to say: instead, let’s do some supervised machine learning. We’ve got this corpus of data from years and years and years of human exploitation. Let’s take the findings of that human exploitation, and let’s train a model to be able to then predict, as documents come in, at the very minimum, just to give an analyst a heads-up that this may be relevant to them.
We should probably put eyes on it, versus this is definitely something they don’t care about. Just by doing something that simple, I think not just at NCTC but across the community, we’ve seen a huge reduction in the amount of work that we have to do on the human side. One of the great things about that is that we’re not putting anyone out of a position. I think the great thing about machine learning in this context is that we’re actually empowering analysts to be able to do the work they really want to do, versus the work that they have to do on a daily basis. You know, we’re taking them off the assembly line, and we’re letting them work in the clouds a little bit more.

Okay. This is a very common theme among all of the discussions that we’ve had with folks in national security and defense: how people that are program managers or senior officers envision artificial intelligence as something that is going to enable their talent pool, that’s going to enable the human, either analyst or operator. It’s something that is kind of unique to the way the United States does national security, as opposed to China or Russia: very focused on individual talent development in human beings. It also provides an individual point of accountability for commanders, which is a thing that, certainly in the military context, they tend to like: having a human being that’s accountable for different decisions, with the artificial intelligence then supporting decision-making in sort of an advisory role at best, or just cutting down on the amount of work that’s required in order to really analyze the situation. And I want to get to Rebecca and Jane, but first I want to turn back to you, Brian, real quick and ask a couple of follow-ups on this very fascinating thing that you just told us. I had no idea that we were actually using rudimentary AI on the products of the Abbottabad raid. It’s kind of a big deal.
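The supervised-triage idea Stanley describes, training on past human-labeled exploitation results and flagging incoming documents as "needs eyes" versus "likely irrelevant," can be sketched as a bare-bones text classifier. The corpus, labels, and naive Bayes approach below are invented for illustration; a real system would use far richer features and far more data.

```python
# Hedged sketch of document triage via supervised learning.
# Toy labeled corpus and a minimal multinomial naive Bayes classifier.
import math
from collections import Counter

def tokenize(doc: str) -> list:
    return doc.lower().split()

def train(labeled_docs):
    """labeled_docs: list of (text, label) pairs from past human review."""
    word_counts = {"relevant": Counter(), "irrelevant": Counter()}
    label_counts = Counter()
    for text, label in labeled_docs:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def predict(model, doc: str) -> str:
    word_counts, label_counts = model
    vocab = set().union(*word_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihood with add-one smoothing
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values()) + len(vocab)
        for word in tokenize(doc):
            score += math.log((word_counts[label][word] + 1) / total)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented training examples standing in for years of human exploitation.
corpus = [
    ("meeting planned at the border crossing", "relevant"),
    ("funds transfer for the operation", "relevant"),
    ("grocery list milk eggs bread", "irrelevant"),
    ("birthday party photos from sunday", "irrelevant"),
]
model = train(corpus)
```

Even this crude a model captures the workflow change he describes: the classifier does the first pass over every incoming document, and analysts spend their time only on the fraction flagged relevant.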
What are the challenges to using those sorts of tools in the field, and how do you envision those challenges going forward? Because we’re talking increasingly about the U.S. potentially being deployed to a lot of places where they’re going to be operating in highly contested environments, where there’s a lot of electromagnetic interference, or you’re not going to be able to send stuff back as quickly. How do you envision that sort of tool working in the field today, especially if it’s a difficult field, and increasingly in the future?

It’s really a good question. Back in the seventies, when we were running the Corona satellite program, a satellite would fly over a target and take a variety of pictures. It had a cartridge of film in it, and that cartridge would be ejected once it was full. So we kind of had an idea, like, okay, it’s almost full; a plane would be sent up, the plane would have a net, the cartridge would get ejected and captured by the net, and it would get flown back to headquarters, where it would get exploited, and then it would be used for intelligence products. Now we don’t have to do that, right? We fly it back over electromagnetic waves. But in a denied environment, you’re not going to have those ways of sending it around. So I think in the future, where we have places that are more austere environments, you’re going to be going back to the future. I would expect that if we had something like the compound raid, we would probably put all that stuff on planes, and we would ship it back to the posts that have the processing power. Now, there are folks in the field who are, you know, doing computing at the edge, and that exists.
And there are probably other people in other programs that could speak to that with more authority. In our environment right now, that’s presently not a focus area for us. That doesn’t mean it won’t be in the future, but right now it’s just not something we’re focused on.

Okay. Let me turn to you, Rebecca, and ask: when you hear all of these things that we’re talking about today, where do you see the talent gaps in terms of the workforce? We’ve talked about a couple of different things here: all the different data streams coming in, both for a counterterrorism operation and also for media exploitation, the wide variety of different sources that come in. But when you look at folks out there that are first getting involved, either in analysis work or in operations, that talent pool, where is there a gap between the skills that they’ve got and the environment that we just described?

Can you hear me? Yes? Okay. So, what I see as an interesting gap is that liberal arts disciplines can be sort of on the margins sometimes with some of these really exciting AI collaborations. AI is really pushing us to involve more and more liberal arts disciplines in these projects. We’ve always had these collaborations, but AI is more intensive; it calls for a deeper and broader knowledge to realize these capabilities. And what I’m seeing, as somebody who is in the liberal arts and works on these kinds of collaborations, is that the great strength is communication. It sounds so banal, it’s so obvious, but let me break it down just a little bit. What you want to happen in an interdisciplinary collaboration is for the team to work together and identify, if there’s a problem, which disciplinary tool will address that problem.
And if there isn’t a specific disciplinary tool, then the team can work together to create one across disciplines. So how do we get there? What you want to have is this array of different disciplines, but each discipline trains you differently: it provides a different set of tools, a different language, different methods, different perspectives, and even more subtle things, like the volume and pace at which people communicate. So, for instance, you may have noticed on this panel that my communication style may be a bit different from the other panelists’ communication styles. There are a lot of factors that go into that, personality among them, but disciplinary training is actually one of those factors. For instance, I work on a lot of varied STEM collaborations, and what I notice is that when I communicate with my staff and colleagues, I tend to use a lot of qualifiers; I just used one in that sentence, “I tend to.” That’s a disciplinary habit that is common in the disciplines I come from. I happen to work with people who don’t use qualifiers very much, sometimes not at all. If we sat down and talked to each other about a particular topic where I used qualifiers and they didn’t, we would probably come to realize that we didn’t mean different things; we were just using different words. It’s not only the jargon of our disciplines, it’s our other words too. So these communication differences influence how we understand each other, and they influence the choices a collaboration makes. So what do we do about that? Well, there are a lot of courses now, like at Penn State, general education courses that combine two different disciplines in a single course. And I would say, in addition to that, focus on talking about how we talk, just as I am doing right now in this discussion.
You do that in schools and you do that in the workplace: workshops, opportunities where people talk about the content, and also guided discussion and practice where you talk about how you talk to each other. That way you can get the best possible contributions from these disciplines and from the next generation.

Yeah, I think that’s great. And that gets into a lot of the different so-called soft skills that are actually very relevant for a future that’s dominated by machine learning. Let me turn to Jane, though, and ask her, because you are in charge of the implementation of the ethical framework for the Defense Department, which is a huge deal. For those of you that are familiar with the new AI principles list that the Defense Department is signing on to: we’ve covered it; we actually broke the news that they were adopting it officially, and we also broke the news on the draft. That’s why you’re here, because you know we do that sort of thing. It’s a very extensive document as AI principles lists go; with the appendices it runs to something like 80 pages. You can compare that, for instance, to what Google has put out as its AI principles list, which runs 800 words; the story that I wrote on the Defense Department’s AI principles list was longer than Google’s actual AI principles. Jane, when you look at that enormous list of not just principles, governance, et cetera, but also what reads almost like instructions on how to build those AI ethics into different programs and activities: what do you see as the biggest challenge? And what skills is anyone on a team in the future that is involved in bringing in AI going to need?

So that’s a great question.
You know, this is a very unique time, specifically for testing and evaluation in the DoD, in that testing against ethical requirements is something completely new to our community. Previously, we never had to worry about something like this. And the U.S. has a very unique position, right, compared to other countries, because ultimately, if we cannot perform a mission using AI in an ethical and responsible way, then we will not use AI for that mission. We may not be able to make that statement about our adversaries. And I found it interesting the way you worded it: you said you, who are responsible for implementing the ethical principles. And the truth is that at the JAIC we’re all responsible for implementing them. That’s the training that the JAIC is giving us: everybody inside product development, and anybody inside our missions directorates, we’re all responsible, ultimately, for developing these products, these AI-enabled systems, in an ethical way. At the JAIC, we’re just finishing up our first Responsible AI Champions pilot, for which I was one of the guinea pigs, where all of us from various parts of the JAIC got together and went through many, many hours of reading and training and talking to each other and looking at case studies, et cetera. It was prepared for us by Alka Patel, who is our chief ethics officer at the JAIC. And so we’re all going through extensive training. For me, as a tester, when I read the principles, I think one of the biggest challenges is that, by definition, T&E has been very quantitative in this department, and, by design, the ethics principles are written by people who don’t think in these quantitative terms as much as we do.
And so I think the first challenge that we’re going to take on is translating those ethics principles into quantifiable objectives that we can test to, right? Because the job of test and evaluation is to create that risk space and to tell our leadership: if you field this system today, this is how much risk you’re taking in all these various parts of the system, and ethical requirements are no different. So we’ll have to take those ethical requirements, which, again, are very non-quantitative in nature, and translate them into something that we can test objectively and credibly. With that said, it’s a challenge that my T&E team certainly embraces. I think we’re all very excited to take on something like this. But it is very new and will no doubt be challenging.

Okay. I think that brings up something really important, because I’m familiar with the AI principles list, and for any of you out there that is looking to implement an AI program in your activity, whether you’re a contractor, or do any work with the Defense Department or the intelligence community, or you’re just someone out there that might be involved in bringing an AI program into your work sometime in the future: this list is going to set the industry standard in many ways, because it goes so much further, I think, than anything that has emerged out of Silicon Valley. And it goes much further out of necessity, because the stakes are a little bit higher. So this question of how to take ideas like data governance, ideas like responsible testing, and turn them into things that can be effectively quantified, as you say, sounds enormous. So would you recommend, by way of a skill set that should be, at this point, kind of common core for everybody, a basic understanding of statistics? Is that something that’s going to serve?
Do you think, basically, anyone involved in AI in the future, even peripherally?

I think that a basic understanding of statistics will serve everybody well in general. You know, when we read the papers today about COVID-19, understanding the numbers matters, so I think quantitative skills are really important. But I also think, especially coming to this intersection of ethics and T&E, I agree with Rebecca, and I think it’s the interdisciplinary collaboration that’s really going to help us here. It’s hard for me to imagine any one person with a set of skills that would enable them to take in everything that our ethicists are doing, and these are people with law degrees, right? So I can’t possibly imagine achieving their level of competency, and at the same time, you know, we’re educating them on how we usually think about test and evaluation. It has to be an interdisciplinary team. There’s no way, I think, that any one person can do all of that at the moment.

Okay. So let me turn to you, Paul, and ask you: you have kind of an eclectic, a deep but also eclectic, background, as a linguist and also doing this work with machine learning. What do you see as the hard skills and the soft skills that are going to be most relevant to this new era?

So I think I’ll start by answering that and bringing to light something that we all know but don’t always think about when we start talking about the world of AI, and that’s that this is a completely new field, something that’s literally come out of left field, out of nowhere, but that has already become ubiquitous across every single industry. Because of that, what we’re seeing is a talent pool problem: it’s very difficult to find someone, especially in the government, where you can’t bring someone on at the pay that they’re going to receive in the private sector.
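On the "basic statistics" point above, and Pinells's earlier point about translating qualitative requirements into quantifiable, testable objectives: one concrete pattern a tester might use is a confidence bound on observed accuracy. This is a hedged sketch, not JAIC methodology; the 90% threshold and the trial counts are invented for illustration.

```python
# Sketch: turn "the system shall be reliable" into a pass/fail test
# such as "demonstrate >= 90% accuracy with ~95% confidence".
import math

def wilson_lower_bound(successes: int, trials: int, z: float = 1.96) -> float:
    """Lower edge of the Wilson score interval for a binomial proportion.
    z = 1.96 corresponds to roughly 95% confidence."""
    if trials == 0:
        return 0.0
    p = successes / trials
    denom = 1 + z**2 / trials
    center = p + z**2 / (2 * trials)
    margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (center - margin) / denom

def meets_requirement(successes: int, trials: int, required: float = 0.90) -> bool:
    """Pass only if we are confident the true accuracy exceeds the threshold,
    not merely that the observed sample accuracy does."""
    return wilson_lower_bound(successes, trials) >= required

# 480 correct out of 500 trials: observed 96%, lower bound about 94%: pass.
# 46 correct out of 50 trials: observed 92%, but the small sample widens
# the interval below 90%: not yet demonstrated.
```

The design point is the last comment: a quantified requirement forces the evaluation to account for how much evidence the test actually produced, which is exactly the "risk space" framing Pinells describes briefing to leadership.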
So first, I think on the government side, we need to ensure that we’re offering the right sorts of things. At the minimum, we’ve got the data, and we’ve got great data; people that are into data science and AI love to play with data. But we also need to bear in mind that we need to make sure we are offering the other things to these people, like an environment that allows them to really thrive, and some different agencies may not have quite gotten there yet. As far as hard skills are concerned, as you mentioned earlier, a basic understanding of statistics is something that can get you headed in that direction. Mathematics is something you’d want. Me personally, in my background, I started off learning Arabic, as I mentioned earlier. I ended up getting a bachelor’s degree in natural sciences and mathematics; I went a different route and got a degree in something very generalized. And then I ended up working as an intelligence analyst and as an intelligence collector, and moved over into the technical realm only by the necessity of my job. I ended up having a job where, in some of the work I was doing, I identified a bunch of different ways that we could streamline that work, and that’s what got me involved with scripting. So I started learning programming languages. And I will say that having learned a foreign language in depth, and knowing it very well, I feel like that gave me a huge head start in learning a programming language; programming languages just came naturally, much easier. There’s much less syntax to understand, and everything is very logical. Any of the disciplines that you’ll learn in college that really lend themselves to the understanding and use of logic, anything that happens to be a very logical rather than theoretical discipline, is something that I think translates directly into anything involving coding.
When you start talking about AI specifically, and I’m looking at this from an intelligence community perspective, but we can expand that across the government: we have, as I said earlier, a brand-new field, right? And we’ve got tons of technologists across the government, but these people, in large part, are engineers, developers, and architects that worked on legacy systems. What I’m noticing is there are not a lot of people that are long-term technologists, folks that have been around for 10 or 15 years, that are jumping headfirst into the world of AI and machine learning. So what we’re ending up doing is pulling a lot of our talent directly from college, which is a great thing. I mean, I think that’s great for everyone; they just learned it, it’s fresh in their minds, so they’ve got a bit of the hard-skill experience. But what they’re lacking, coming directly from college, is an understanding of the workplace, and then, specifically, of the missions that they’re working: what do people actually care about? So it’s easy to come from a background of college data science and think that you can jump right into solving problems for the U.S. government. But what I’m seeing is there’s a bit of a learning curve. You have to really spend some time getting to know the jobs of analysts, and that takes a lot of work. You may have to sit down side by side with people and learn in depth what they’re doing, and then, once you’ve got a substantive understanding across your agency, I think then you’re ready to start brainstorming and doing stuff in the realm directly as a data scientist that’s coming from college. So I guess, in summation, I would say that from the soft-skill perspective, you really have to learn how to collaborate with people; you have to understand how to communicate with people.
You have to demonstrate a willingness to come into a new environment — a totally new world — and very quickly learn the jobs of the people around you. And for existing technologists who are looking to jump into AI, a lot of them are really lucky because they already have a base of coding, but it's a totally different realm when you're doing work with AI. So they have to be willing to be, for lack of a better term, the old dog that actually does learn new tricks, and they have to invest a lot of time outside of work. You can pick a lot of that up on unclassified data sets and then find a way to apply it to the job you're already doing. Okay, let me turn back to Jane, because I think she had a comment on that. Yes, thank you. A couple more things came to mind as he was talking. One of the skills that I think we've kind of missed as we're talking about this is human-factors skills, right? Even the title of this session is the human-machine team, and as we field this technology, one of the biggest things to evaluate is the human-machine team together — the human-systems-integration piece of it. And again, from a test and evaluation perspective, we're used to looking for these very quantitative backgrounds, but really, having experience in experimental psychology or neuroscience or education — things like that — enables you to evaluate an entirely different side of the system that is very, very relevant. Another thing I'll mention in terms of soft skills — and this is just my bias as a statistician — is the ability to explain technical issues to others. Inevitably, in the quantitative sciences, we're not necessarily known for being great communicators with people outside our discipline.
And we at the JAIC are right at the crossroads of really exciting, complicated technology and an operator who needs to understand exactly how this technology works and what the limitations are. So it is absolutely incumbent on us to be able to explain every in and out of that system, and every limitation and risk associated with that system, in a way that doesn't put that operator to sleep and in a way the operator understands. No matter how complex the analysis in the background is, we have to be able to get that across, and that's probably one of the skills that has served me best over the years. That's fascinating and, I think, very important. We talked about human factors and making things accessible to the operator. It's something the military now spends a lot of money developing for the guys behind the wire, who have to be able to communicate with drones and other systems with nothing more than haptic interfaces — their hands and their reflexes. The rest of us are going to be in a position where we have to get a program to understand us, or where we have to take the output of a process that is crude and mechanical and turn it into something that is logical and also has some qualitative feature to it. So it's a challenge. We've got a lot of questions that have come in — feel free to comment on that or anything else we've talked about so far — but I'm going to get to these great questions. The first: the National Security Commission on Artificial Intelligence released a white paper today on the role of AI in pandemic response. How do you see the technology being used right now, and in future pandemics? I think it's fascinating. Anybody — jump ball. Go ahead. Sorry.
What I was going to say is that we had a recent public solicitation where I was sitting on the source selection work, and we got a number of papers that came in that were not relevant to what we were asking for, but they did have a lot about the COVID-19 response. Some of the things they were suggesting were difficult for the Department of Defense to do, because they're really not our mission — they belong to the CDC or the NIH or somewhere like that. But what was interesting was that they were looking at wearable technologies. They were saying that if you had an Apple Watch, if you had a Fitbit, if you had something else — we're even getting to things like ocular implants or pacemakers — all that stuff is being instrumented with Bluetooth and can be collected. And they were making a fairly impassioned plea to anyone who would listen that you could deploy something like that for the crisis, so that everyone had these bands on them. If it was taking your temperature and your heart rate and so forth, you could start to approximate: we're pretty sure that this cluster of folks is infected, because they're showing certain symptoms. From a technologist's perspective, I was like, well, that's pretty cool. The civil libertarian in me says, oh my God, right? So there's this really difficult interplay — and we've already been circling around it — between what is technically possible and what is morally or ethically appropriate. I can see that future; I can see where it goes. I just don't know if that's where we want to take the country. Yeah, it's a really good point, and one that we're watching a lot of folks in Silicon Valley grapple with in terms of what data is usable about people. There is a privacy law that sort of lays that out.
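The wearable-surveillance idea described here — flag people whose vitals look symptomatic, then look for geographic clusters — can be sketched in a few lines. This is a hypothetical illustration only (the thresholds, locations, and data shape are all invented for the example), not any agency's actual system:

```python
# Hypothetical sketch: flag wearable readings whose temperature and
# heart rate both exceed simple thresholds, then group the flagged
# people by location to spot possible infection clusters.
from collections import defaultdict

FEVER_C = 38.0     # assumed fever threshold, degrees Celsius
ELEVATED_HR = 100  # assumed elevated resting heart rate, bpm

def flag_symptomatic(readings):
    """readings: list of dicts with 'person', 'location', 'temp_c', 'hr'."""
    return [r for r in readings if r["temp_c"] >= FEVER_C and r["hr"] >= ELEVATED_HR]

def clusters_by_location(readings, min_size=2):
    """Group symptomatic people by location; keep groups of min_size or more."""
    groups = defaultdict(set)
    for r in flag_symptomatic(readings):
        groups[r["location"]].add(r["person"])
    return {loc: sorted(people) for loc, people in groups.items() if len(people) >= min_size}

readings = [
    {"person": "a", "location": "bldg-1", "temp_c": 38.4, "hr": 105},
    {"person": "b", "location": "bldg-1", "temp_c": 38.9, "hr": 110},
    {"person": "c", "location": "bldg-2", "temp_c": 36.8, "hr": 72},
]
print(clusters_by_location(readings))  # {'bldg-1': ['a', 'b']}
```

Even this toy version makes the civil-liberties tension concrete: the clusters are only visible because every individual's identified vitals and location are collected continuously.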
You've got things that are personally identifiable pieces of information, and that's pretty much off limits for almost every use you can imagine outside of a warrant. But the privacy law was written in the seventies, and if you've got a data set of lots of other openly available data, it's very easy to infer what you might have been able to figure out if you'd had access to that personally identifiable piece of information. So in terms of figuring out what data is legal to use, I would say take a look at that privacy law. But it's a very good point. I was reading recently about a bunch of researchers at Carnegie Mellon who have come up with an AI app that can detect the presence of COVID-19 based on voice modulation. It's basically a testing app that doesn't take any fluid sample at all — it just listens to you talk. I think there's a big danger there in terms of false positives, but regardless, it's obviously an area worth further investigation for research, if not exactly implementation. And that speaks to your point: how much do you want different systems to be able to infer about you, given even openly available data? That's part of why I think we need to have this ongoing discussion, especially if you're dealing with a user group that is the public. At some point they're going to have feelings about all that you're able to infer about them, and those feelings will be good or bad depending on how they were brought into the process. I think so. I think that helping people know what data they're surrendering, how you're going to use it, and then presenting them with a product that actually improves their life or their safety — you're going to have more luck with that than you are harvesting or buying whatever you can from a data broker, turning it into an awesome demonstration, and then terrifying the crap out of people.
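The false-positive danger raised here is easy to quantify with Bayes' rule: when a condition is rare, even a fairly accurate screening test produces mostly false alarms. The accuracy and prevalence numbers below are illustrative assumptions, not figures from the Carnegie Mellon work:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive result is a true positive (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative: a 95%-sensitive, 95%-specific voice screen in a
# population where 1% of people actually have the disease.
ppv = positive_predictive_value(0.95, 0.95, 0.01)
print(f"{ppv:.1%}")  # 16.1% — most positive results are false alarms
```

In other words, at 1% prevalence roughly five out of six positives from this hypothetical screen would be wrong, which is why a no-fluid-sample app is interesting for research but risky for implementation.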
And then I'll write you up, and people will love my article — it's going to be great for me, not for you, if you do that. Another question: how do you see the emergence of AI into the intelligence community shaping the required backgrounds or skill sets for students entering the field of analysis? I'll throw that up to anybody. You — you did spend a lifetime learning Arabic. Do you think you would need to learn Arabic again? I definitely think so. In the intelligence community, and in other areas of the government, there is always going to be a need for good translators, right? I would consider myself to have always been a mediocre Arabic translator — I learned Modern Standard Arabic, which is called fusha, which is completely different from Egyptian dialect, Levantine dialect, and a million other things. You're often going to have colloquialisms in language, and I don't see, at any point in the near future, things like foreign languages being usurped by AI machine translation. But I do think that folks who are coming on board as analysts are going to be focusing, as a result of AI, a lot more on analysis than they actually do right now. I've worked at a couple of different agencies in a variety of different jobs, and I would say that as an analyst, maybe 15 to 20 percent of the work I ever did really got me going — where I felt like I was doing the analysis that was in my job description, I guess you could say. The rest of the work was everything surrounding that: it could be briefing people, it could be writing papers, it could be digesting information, it could be searching for things, it could be consolidating things. So much of the work I just listed falls within the realm of things that could be automated to some degree.
So I think the big thing for the intelligence community is going to be finding ways to automate the presentation of information to the right people, and finding ways to automate the communication of that information out to other people who should also see it — while having very intelligent folks in the middle of all of that, making sense of it and making human judgments on top of the automation that's occurring. So for people coming out of school, or people joining the intelligence field, the big thing is just going to be the same things they've always needed: having a good awareness of world events, having an interest in those sorts of things, having an interest in U.S. government affairs — but with a little bit of a technical bent, understanding what automation can do for you, and being willing, at any point when you're doing a job, to say: these are the things that are not working in my job, these are the things I think could be automated, I'm going to write up a white paper and submit it to see if AI might work in this realm. Rebecca, do you have anything to add on that real quick? Yeah. Take language analysis, for instance: the natural language processing behind language analysis is becoming more sophisticated, right? But there are nuances of language analysis that are always going to involve an analyst — semantic capacity, ambiguity, which meaning of a word is intended. It's not fully there. It seems we're heading toward a higher level of functioning AI, which means we need people with higher-level skills — skills that require you to assess what you're getting back, make sure it's reliable, and discuss it with each other. So, yeah — a very good point.
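The "automate the routing, keep the human judgment" idea described above can be sketched with a minimal dispatcher. The analyst names and watch topics are invented for illustration; a real system would use NLP classification rather than bare keyword matching:

```python
# Hypothetical routing sketch: match an incoming report's words against
# each analyst's watch topics and return who should see it. The humans
# still make the judgments; the code only handles distribution.
ANALYST_TOPICS = {
    "alice": {"missile", "launcher"},
    "bob": {"pandemic", "vaccine"},
}

def route_report(text):
    """Return the analysts whose watch topics appear in the report text."""
    words = set(text.lower().split())
    return sorted(name for name, topics in ANALYST_TOPICS.items() if topics & words)

print(route_report("New missile launcher sighted"))  # ['alice']
```

Even something this crude shows where the time savings come from: the searching, sorting, and forwarding the speaker describes as 80 percent of an analyst's day is exactly the part machines can take over.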
I recently sat down with Eric Schmidt, former CEO of Google, and we asked him what the biggest area for AI is in the next two years in terms of how quickly it's going to evolve, and he picked machine translation. But of course, machines don't have any experience of having an interaction on the street with an individual, so the nuance will be missed. That familiarity won't be there, and the way people actually talk is going to remain a mystery to anything that hasn't actually talked to a person. With that, I'd like to thank you for this insightful conversation, and I'd like to transition to Ben Conklin from Esri, our underwriter, to give a quick closing presentation. Take it away. Thanks so much, Patrick. As always, this has been an amazing panel discussion. This series has actually been going on for nearly two years at this point, which is pretty interesting to me, and each time I feel like we keep digging deeper and deeper into this question of how we actually enable this human-machine team. When we first started these panels and discussions on a regular basis, a lot of the reason behind them was this idea — which we felt, and I still feel is true, and I think everybody today would concur — that there's still a gap between AI and the analyst, in the ability to use and leverage and realize the promise of AI. I think that gap is definitely closing; some amazing work being done out there shows us that. One of the things I'd like to start with when I think about this: a lot of the news and buzz around AI when we first started diving in was cases like OpenAI versus Dota 2, or AlphaGo versus the Go world champion, or Watson versus the Jeopardy champion. They were all very exciting and novel — really showing the ideas and the promise of AI — but not really solving any real challenges.
So we created this series to really talk more about this idea of the AI human-machine team. The way I like to think about it is that in the future — and actually, I think even somewhat today — the AI is a team member, a teammate in these multi-disciplinary teams. In many ways I think of it like the helpful little robot in the Jetsons model, where it takes out the trash or feeds the dog. What it's really doing is helping to remove some of the burden on tasks that may be really hard and complex — hard in terms of large volumes of data, or hard in terms of patterns to sort and sift. It's helping to reduce the friction between analysts and the data, and it's helping to improve their ability to deal with more data. But at the end of the day — and I think we heard a lot of this today on the panel — humans are still the ones able to make connections, understand causes, deal with issues like ethics, tell stories, and communicate. That idea of bringing the soft skills and the hard skills together is really what's needed, and I don't think anybody can really see a future where AI is able to do those types of things effectively, or in any useful way. So why do we see this trend now? Why has this started happening over the last couple of years? Because we see an increasing volume of data, and increasingly this data has connections in it; we have cloud computing; and we have improved algorithms that are more accurate. Part of why we have all this data is that we're making our lives and the world around us digital — what is often called the digital twin of the world. This is just growing and increasing, and we see it both in people and their human activities, and also in cities, in the natural world, and in the environment.
And we like to think that data can be better used, leveraged, and organized in this world. Of course, the way I like to think about it, coming from a leading software company in geography, is that we can really think about and understand data using space and time. All activity happens in a place, and at a time, and using that as a way to organize and integrate data is very useful. It helps us provide context and content to the data: it can tell us a lot about the contextual environment of the data, it can give machines that contextual information to help improve the knowledge inferences being made from the data, and ultimately it can help us support decision making — communicating and collaborating in support of the decision-making process, bringing in multiple variables, testing different theories, and modeling different outcomes. So what we see as the key challenge in this human-machine team is really this idea of how you connect analysts and AI — and I loved the discussion on the future human interface and the user experience, because the needs are really there. As I think we saw a lot of today, we're in the very early days of that. The idea that you need to somewhat be a programmer to leverage AI is still probably one of the biggest stumbling blocks and biggest challenges to connecting these analysts. As the director Mike Rogers wrote a couple of years ago, the future is about the human analyst in combination with artificial intelligence.
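The "organize data by space and time" idea sketched above comes down to filtering and joining records on a location and a timestamp. Here is a minimal bounding-box-and-time-window filter; the coordinates and event names are made up for the example, and a real GIS platform would use spatial indexes rather than a linear scan:

```python
from datetime import datetime

# Hypothetical events: (name, latitude, longitude, timestamp)
events = [
    ("sensor-a", 38.90, -77.04, datetime(2020, 6, 24, 9, 0)),
    ("sensor-b", 38.95, -77.10, datetime(2020, 6, 24, 21, 0)),
    ("sensor-c", 40.71, -74.01, datetime(2020, 6, 24, 9, 30)),
]

def within(events, lat_range, lon_range, start, end):
    """Keep events inside a lat/lon bounding box and a time window."""
    return [
        name for name, lat, lon, t in events
        if lat_range[0] <= lat <= lat_range[1]
        and lon_range[0] <= lon <= lon_range[1]
        and start <= t <= end
    ]

# Events near Washington, DC during the morning of June 24:
print(within(events, (38.8, 39.0), (-77.2, -76.9),
             datetime(2020, 6, 24, 8, 0), datetime(2020, 6, 24, 12, 0)))
# ['sensor-a']
```

Because every record carries a place and a time, the same two keys let otherwise unrelated data sets be integrated dynamically — which is the context the speaker says both humans and machines need.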
We think about that really being expressed in the form of a platform for analysis, which is about integrating data in real time, in a distributed fashion, and providing a kind of workbench for analysts to condition and manage data, visualize it, explore it, and perform data modeling. It's about taking data and abstracting it into a form that can be integrated dynamically, and about visualizing and exploring that data dynamically — providing a human interface that can actually be understood by human beings, but also exposing the data in a way that works with advanced modeling tools, with programming, and with tools like Python and R that are built to visualize data. So what are the challenges we've been discussing as we've gone down this road of the human-machine team? We've been diving deeper and deeper into them in each of these series discussions, and I've found them to fall in the areas of training, trust, and our ability to transition our technologies. When we talk about training, we talk about both training the human and training the machine. I think it's a well-known fact of AI now that most of the power of AI comes from our ability to train the AI — and often to train the human. I liked the earlier example: when you take a data scientist and bring them into the analytic community, that's training that scientist in the analytic discipline that's needed; but then, of course, you also have to train the machines to think and understand the process in those areas, and that can be quite challenging. Then, of course, there's the trust that we need to build — and I would definitely add ethics to this list now as well. As Patrick mentioned, the ethics framework that has been developed has a lot to do with trust: both trust in the tools to give us the right answer,
but also trust in the tools to give us responsible and accurate information. And then, finally, I think the consistent challenge will be the need to transition technology. A lot of advances are going to happen in the AI world in the commercial space, and making that technology applicable to national security challenges is always going to be hard to do. It's going to require a constant mode of transitioning from the commercial world into the national security space — and also, where possible, leveraging and accelerating it. The fact is, there are unique needs in national security: searching for a person on Google Images may not directly translate into searching for surface-to-air missile launchers, for example. So this transition is really a two-way road, but it requires understanding, and honestly, I think the work the JAIC is doing offers fantastic examples of this. We've seen a lot of great advances through their ability to quickly integrate technology, and Esri's relationship with the JAIC has really been around that: taking our commercial technology that's used for things like the Johns Hopkins dashboard for the COVID pandemic response, and using that same technology for things like HADR missions — and looking, of course, toward the future of joint warfighting. And of course there are a lot more challenges that I think we're still discovering. Panels like this one today, and the questions that you've asked, really help us with that — they help shape the future. I'm very interested, of course, in hearing about future challenges and ideas, so the Q&A that you filled in — and feel free to throw more questions in there — helps inform us on future topics and future panels that we'll pull together. And with that, I guess that concludes our session for today. I really appreciate everybody's attention, and I definitely appreciate the panelists and the just wonderful job that they did.
And it is amazing to me, the insights that I gain in every one of these sessions. Okay, thank you — I appreciate it. So that does conclude our program. On behalf of our underwriter, Esri, whom you just heard from, thank you all for tuning in, and thank you especially to our panelists for taking some time to talk to us today. The webcast, and the rest of the human-machine team series, can be found on demand, accessible through GovExec.com, so if you missed anything, you can find it there. And with that — have a great day.