
Panel III: The Root of All Problems (And Solutions): Creating an AI Workforce



The root of all problems and solutions, creating an AI workforce, moderated by Mignon Clyburn, thank you.

Good afternoon everyone. So as we settle in, I often pick a word for the day; it kind of governs how I conduct myself oftentimes, and just my frame of mind. But today I'm going to pick an image of the day, and if you look at your screen, one of our commissioners, Commissioner Louis, shared with me this image. And I think it really sets the tone and the stage and underscores the importance of why we're here, and of course the significance of this panel. Now this is a Chinese book for kindergartners. And circled, even if you don't speak Chinese, are two letters: AI. A kindergarten textbook, teaching AI. Now the Senator talked about us tooling ourselves and getting ready from grade school to grad school. Now we're talking about kindergarten or even pre-K, if we're going to keep ahead and really stay number one when it comes to AI. So I just wanted to start with this image, because again, it really, for me, underscores the urgency of why this commission was formed, and why you think, and know, it's so important that we are here today. NSCAI has a broad mandate, and Working Group Three is charged with recommending concrete steps that the government should take to build and maintain an AI and machine learning workforce that can address the national security and defense needs of the United States. Over the last eight months this working group has assessed the current state of the National Security Enterprise's AI workforce, explored the roles an AI workforce might and should play, and examined how the government might recruit, train, educate, manage, and, to the extent that is necessary, retrain an AI workforce. Now here are our judgments thus far, and you will affirm this if you read the report. National security agencies need a holistic workforce renovation for the AI era. That includes extending AI familiarity throughout organizations, infusing ethical training at every level, and spreading the use of modern software tools.
Developing AI-ready leaders is especially critical, because without more well-informed leaders who can go beyond talking points and reshape their organizations, the defense and intelligence communities will fail to compete in the AI era. Now I am a little hesitant because of all of the military presence here today, but I am going to make this next point. The Department of Defense and the intelligence community do not have effective ways to identify AI-relevant skills that already exist in their workforce. (audience applauding) So we'll make it out alive, thank you very much. (audience laughs) They often fail to capitalize on their technical talent. Existing hiring authorities are adequate, or close to adequate. More to the point, government agencies and departments are not fully utilizing civilian hiring authorities to recruit AI talent, often due to risk-averse human resource teams, and commanders or civilian leaders who do not hold them sufficiently accountable. Am I gonna walk out still? It is less clear if the same holds true when it comes to pay scales. Fourth, expanding AI-focused fellowships and exchange opportunities can give officials and service members access to cutting-edge technology and bring talent from our top AI companies into federal service. These programs already exist, and we've been talking about this today, but they need to expand. Government employees who gain valuable skills from the private sector should have an opportunity to use them when they return to government service. And my complementary fifth point is that the military and national security agencies struggle to compete for top AI talent. The government needs to spend more effort showing that service is an opportunity to solve unique, exciting problems and have a positive impact. It should try to reduce, if it exists, any disparagement of its workforce, and better use pathways for recent graduates.
Now there are two additional hard questions that we will explore with our panelists today. The American AI talent pool, as you know, depends heavily on international students and workers; our global competitiveness hinges on our ability to attract and retain top minds from around the world. If we fail to do so, it is unclear how we will continue to compete. And colleges and universities are under strain to keep pace with student interest in AI and computer science generally. The number of computer science majors is increasing at ten times the rate of tenure-track faculty. So to begin and to continue this discussion, we've asked Dr. James Manyika, Chairman and Director of the McKinsey Global Institute. And I take liberties with names, especially if it allows me to use some consonants that I don't usually use a lot. The former Principal Deputy Director of National Intelligence, Sue Gordon. And Gary Bolles, the Chair for the Future of Work at Singularity University. They will provide their perspectives on these two questions, primarily, but not exclusively. How important is our organizational structure for capitalizing on emerging technology talent? And how should the National Security Enterprise educate leaders and users who do not participate in the development process to deploy, use, and resource AI/ML solutions effectively and ethically? Miss Gordon.

Very, and thoughtfully. No, I'm sorry. My "very" was: organizational construct is very important. And the second is, how do you deal with your existing workforce? You have to do it thoughtfully. But let me create a quick stack for you that I think begins both before that, and extends after it. I think of four things that we need to have in order to effectively integrate these technologies into our workflow. So, the first is, you have to have imperative. The organization has to believe it must. If the organization doesn't believe that it must, then it will be left to the innovators, and you will have change, but it will not be at scale and it will not be at speed. So for the intelligence community, you need to see the world as it is, and you need to understand what your mission is; it isn't about secrecy, it's about knowing a little bit more a little bit sooner. And if you look at this world, with abundant data, and ubiquitous technology, and the speed of decision making that you need, then if you're the intelligence community, and you're a leader, then you must find a way to introduce the ability to handle data, like I say, for speed and for volume, but also for sense-making. And the emergent technologies that let you do that differently are the ones that you must have. So you have to have imperative, and you have to have C-suite buy-in, 'cause if not, you'll fit it in to what's left over after you've done your real mission. So that's number one. The second thing is, you need infrastructure. And earlier panels talked about the information infrastructure to support it. We all are in various stages of building that infrastructure. Even those of us who have built infrastructure built it for humans to use, and now we're trying to figure out how machines use that infrastructure, 'cause data uses it differently than people use it. Algorithms use it differently than people do.
So there's that information infrastructure, but there's also the infrastructure that brings people into the mix, and the reason why you have to have that is so that people can play with the new capabilities. What's really important when you wanna have data is you need to be able to integrate it at no cost. And if your infrastructure puts up barriers, you aren't gonna be able to get that curiosity that's gonna get the organization to figure out what it can do. And you won't get the mission pull to pair with the technology push. So you've gotta have abundant infrastructure. Organizationally you need two types of organization. You need an organization to support your technologists. I would opine that we can attract anybody. In the intelligence community our mission is so exciting, still, and so much about possibility, that people will come. But when they come in, they find that they aren't supported with the same sorts of things that they can find outside, and at the five-to-ten-year mark, they cannot stand not being able to pursue their craft, so they go somewhere where they can. So you have to have a way to support them technically, and get them around people like them. But the other thing is, you need to think about whether our organizational model needs to change, because technology is so embedded in what we do that the serial process of the technologists sitting someplace else and pumping capability into a work unit is not necessarily the model we need. And so I think your organizational models change. So again: organizational construct for your technical humans, and the new work units that allow the integration and the transfer of ideas, at speed, to happen. And the last one is you need process.
You need process revolution, because even when a leader wants it, and you have the infrastructure supporting it, and you have the organizations that demand it, all of them come crashing into processes that were never designed for this moment, and we dash people on the shoals of despair because our contracting processes, or our information processes, are the rules that do that. So I think one of the things we need to do is think about who we're putting in charge of designing new processes, because the people we have now don't. As far as how you deal with a mixed workforce: you need to provide the opportunity, through those things I mentioned, for people who want to come along to be able to come. And you have to recognize that some people aren't going to be able to come, and you need to be able to treat them honorably and offer them other solutions. And we do have a demographic problem that we're gonna have to address. Leadership, I think, and middle leadership especially, is probably our most urgent need. Because if I have middle leadership that does not understand that this is fundamentally a technical world, they won't trust that the ideas coming up can actually effect the solution. And I'll end it there.

I appreciate that because it really underscores the culture, what people find when they get there. Again I appreciate those four points. Primarily everything that you said, I appreciate. Dr. Manyika.

Well, first of all, I'm delighted to be here, thank you for having me. I'd like to applaud the report that the Commission's put out. I think it's very, very spot-on. I know there's a lot more work still to come, but I really enjoyed seeing what was already in there. I also particularly like the fact that it puts talent, and a talented workforce, at the center of the AI conversation. That's actually absolutely critical. And in particular, when you think about, I think it was mentioned earlier in the discussions today, the triangle that is government, universities, and the private sector, that's a critical triangle when it comes to the issues of talent. Now, what is it about AI talent specifically that we need to address and that we then need to see reflected in our organizations? I would argue that there are basically four or five specific things that are worth understanding with regard to the AI workforce talent question. And I'll frame these as problems. The first problem we have is what I'll call the too-few problem, which is, we just don't have enough people with distinctive AI capabilities in the government, and you can argue, even more broadly, in the economy. So we have a too-few problem that we need to solve for somehow. Now this is coupled with the second problem, which is what I'll call the pipeline problem. If you look at the pipeline that's supposed to feed the talent needs we're gonna have in AI, it's woefully weak. Whether we look at K through 12, whether we look at universities, or whether we look at the places we've historically relied on for talent, which has been a good domestic pipeline, but also international students coming to the United States and other places. So the pipeline issues are actually enormous.
I was quite struck by the fact that, if you look at some of the data the Office of Personnel Management put out, it suggests that, for example, only something like, I think, less than 3% of all IT professionals are actually under the age of 30. I think that's problematic, if we think about this pipeline question. So the pipeline challenge is absolutely important. The third challenge I'll actually put out on the talented workforce is what I'll call the "we need many types" problem. What do I mean by that? What I mean by that is, I think often, when we have this conversation about the talented workforce for AI, we need many different types. We're not just talking about the deep experts. We need those, we need many of those, we don't have enough of those; they probably need to have PhDs or postdocs, whatever they've had. But we also need people who are developers, who are gonna, not be doing the fundamental research, but doing the development work to build applications. We're also gonna need users who understand enough to be able to know how this fits into workflows, and how they actually use these technologies. We're gonna need leaders. You can go on; in fact the report, I think, actually does some work trying to categorize the different types that are needed. But I think it's important to recognize there's a whole talent ecosystem here, and value chain, that has different kinds of capabilities and different kinds of roles. Some of those are easier to train and transition people into, some of them are harder, but there's no single monolithic problem when it comes to the AI workforce. Problem number four is what I might call the flow problem. And the flow problem is the challenge that, between the elements of the triangle, the flows don't work very well. And in fact you can argue, of the three legs of that triangle, government, universities, and the private sector, right now most of the flow is to the private sector.
Almost entirely, and the government is getting the short end of that stick. So how we unstick and solve the flow problem is actually a real challenge. And by the way, this problem is real even for universities. It used to be the case, so I did my PhD in robotics about 23 years ago, which tells you how old I am, but at that time, if you were looking for the best cutting-edge research in AI and robotics, you'd look at a handful of universities. That's where the best work was being done. That's not true anymore. Much of the most amazing, groundbreaking, fundamental research is actually in the private sector. So the flow problem here is a big challenge. Let me highlight one last one, and I know in some of the conversations this has come up. I might characterize this provocatively as a bit of a mission problem. It's a mission problem in the following sense: there was a time when people would imagine that if you wanted to do something good for the world, you'd go into public service, you'd go into the military, you'd do things that were good for society. I think in the realm of technology, technologists now have a few more choices. Look at the young graduates who now see the private sector as one of the ways to change the world. Technology for good. So I think, arguably, the monopoly that public service used to have as a mechanism for smart, talented people to go do amazing things in the world now has many more competitors. So I think there's more work that national security agencies, the government, need to do here. Now, what does this mean for organizations and the organizational structure, which was one of the questions you asked? I think here there are some useful lessons from the private sector. And I spent a fair amount of time in the private sector.
One thing you see nowadays is that, there was a time when companies had a hard time understanding that technology is now fundamental to what they do. I think now everybody is coming to realize that in fact every company is a technology company; it isn't something that those people in the corner room there do, it's actually fundamental to the whole enterprise. And I think that mindset needs to come to our federal agencies: that in fact this isn't just something that a few people are gonna do over there in the corner, it's gotta be a part of the system. This shows up in a few places. It should affect the processes, as you suggested, so I won't go into that, but we should also think about infrastructure. And let me take a particular twist on the infrastructure question. One of the things specific to AI, if you talk to any AI people, they'll tell you that, yes, you need amazingly smart people, and the algorithms, but you also need compute, you also need tools, and you also need data. One of the reasons people go to the private sector for AI is compute, and data, and tools. And so making sure the organizations have the ability to give people access to the leading tools, the amount of compute that they need, the infrastructure that they need to be able to even do the work in interesting ways, is actually another piece of the organizational change that's required. The other thing has to do with just ways of working. And I think General Shanahan pointed to this, talked about this in the morning: there's often a mismatch between the agility and pace at which our defense agencies have typically worked historically, and the pace and agility in ways of working that these technologies actually now require. Whether it's the ability to iterate, the ability to test things, and so forth. And all organizations have to be comfortable doing that.
Let me end on at least a couple notes that relate to people. One of the things that at least we've learned with, if you like, the investment in technology in the private sector is that there's a metric that people sometimes use, which is, for every dollar of investment in the technology, you need to invest another 20 in the change management. So it's not just about buying the technology; there's all the change that actually needs to happen in the organization before organizations can fully capitalize on this. And I think this is perhaps what you were alluding to, about the actual change that has to happen in how our agencies work. And I'll end on this note, at least for now, on something we haven't really talked about, which is career pathways. One of the things that actually helps a lot is when you bring people into organizations and there are actually career pathways where they can grow and succeed, to the highest levels of those organizations, on the basis of the unique skills they're building. Again, you see this in companies all the time. I think it wasn't until we started to see chief information officers and chief technology officers sit at the C-suite table, able to affect organizations, with career pathways that people could see, that this was taken seriously. It wasn't those kids in the basement doing technology stuff anymore; people could actually see how they could progress in the organization. I think that's some of the fundamental thinking that's gonna be required in our defense and national security agencies, which takes you all the way to the topic of leadership, which you have already spoken about. But those are at least some lessons learned from my experience.

Thank you, and to round things out.

Wonderful. I just want to second, or even third, the thanks for inviting me, and for the marvelous work that's been done on the report to date; I'm looking forward to seeing more of the output. Singularity University is a think tank based in Silicon Valley. It's neither about the singularity, nor is it a university, so we have some identity issues that we're working on. It's not a university because in the United States, to be accredited, you have to actually pour glue on your curriculum for two years, and we change our curriculum every two months. We have 300 brainiacs from around the world; they're experts in everything from artificial intelligence to next-generation medicine. And I get to pull from their brains a lot of the thoughts about the impact on the future of work, the future of organization, and the future of learning. And if I distill some of these things down into the way I read some of the questions that we were asked, typically the framing that I often get is: so wait a minute, let me understand this, are we trying to mostly put our efforts on upgrading humans, or are we trying to change the systems, including our organizations? And my answer is yes. You gotta do both, because the systems, in an outsize manner, disadvantage the opportunities for the right kinds of skills and capabilities to flow to solve the right kinds of problems, and if you don't help people to continually have the tools and learning that they need, then you're gonna have this continual mismatch. I'll focus first on the humans. What I talk a lot about is that the framing that I see is that we're going through as big a shift as we did going from agricultural to industrial economies. We're shifting to a digital work economy. And we're doing it in a blindingly short period of time. And so what that means for humans is that there's a whole bunch of ways that we're reacting to that, and technology is potentially a great enabler.
But it's also increasing that pace. And so we're shifting to what I call a portfolio of work, which is, rather than one person, one job, we're having this much more ambiguous set of different constructs, different activities, that people do. Parents ask me all the time, well, will my kid get a real job? And the answer is that working at a day job, driving for Uber at night, working on a startup with your friends, all simultaneously, is a rational response to an exponentially changing world. It's a hedge strategy. And so how do you think about how you then leverage that kind of unbundling of work, and channel human energies, to be able to solve the problems that you want? So that's the first opportunity, I think: as we're trying to help humans to be able to upgrade themselves, there are macro issues going on with the workforce that we can actually leverage. We can actually take advantage of them, because they create opportunity if we change our organizations in the right way. And it's one of those rare situations where the technology can actually be helpful, if we use it correctly. I talk about the half a dozen AI superpowers, not as Kai-Fu Lee talks about, China and Russia and others, but more: what are the superpowers that the technology can actually help us to have, so that we can be supported in solving the problems of tomorrow? And then to the organization issues. In the same way that we're seeing so many of the constructs around the way humans work changing, the organization itself is a leftover construct. Literally the whole idea of a corporate hierarchy, and that sort of thing, we trace back all the way to Alexander the Great. And in that shift from the agricultural to the industrial model we created this thing called the organization, and I use the analogy of a box: there's abundance outside the box, and there's scarcity inside the box.
There's a corporate hierarchy, there are slots, we want to stick people in the slots, and we did that as a rational response to needing to be able to build factories, and to be able to channel the energies of humans, when the best communications technology was a carrier pigeon. Well, now we've got all these digital distraction devices that we all carry around, and we can communicate instantaneously with half the people in the world. The organization has to change, and so I've written a lot on what we call unbundling the organization, a term from my friend John Hagel, who's ex-McKinsey. But basically the idea is the shift, if you want a picture of it, to the model of a network. The more that you unbundle the organization, soften the walls of the organization, and this is especially germane to agencies: apprenticeships, mentorships, leveraging crowdsourcing platforms, having people come through for tours of duty, anything that allows you to take advantage of the resources, the skillsets of people, that can actually help to solve these problems. The more you can open up that box, and turn it into a network, the better advantage you have. But the mentality that I push for, and I've got nine courses on LinkedIn Learning where I talk a lot about these issues, what I say is, it isn't any more about change management, it's about managing change. Change management was this mentality that there's a current state and a future state, and you can do the delta between them and then you're done. What's the difference between them? Okay, yeah, we got our plan. Now it's only managing change; we can't see any point at which exponential change is gonna slow down. As a matter of fact, one of our favorite phrases at Singularity University is: today is probably the slowest day of the rest of your life. You're gonna look back in ten years and say, you know, I remember when you kids didn't embed chips in your head, and you weren't printing your clothes into your closets, and that sort of thing.
We can only see that it's going to increase. And so the idea that an organization actually has some future static state? We don't see that. And so you need processes that help people continually adapt, especially as we think through the lens of AI, and the technologies themselves, which are not gonna slow down; they're only going to increase. Then we just need a bigger boat. We need a new way of thinking about solving these problems.

Now my head hurts, because I'm listening to the three of you, and if there is a very simple refrain that I could put forth, it is that you are demanding from us, or asking us, or asking these organizations, government, academia and the like, to do some things in ways that we're not organically poised to do. Because again, you're throwing out the entire model which has built this very framework. And you're saying, if you're not saying that, please counter, that the model going forward, the one that will enable all of the things that we speak of, and the things that are necessary for national security, is not the way we went about it up until now. The old way is not the way that's gonna get us to nirvana.

[Susan] I think that’s right and not scary.

Because I’m scared.

I would be scared if I thought the future was for the technology and the humans to self-organize. I think one of the difficulties of the last 20 years is communications. In the Pony Express days, when there was a lot of expense, the information you received had value. Now it is infinitely available, and yet humans are still trying to process it as though it all means something, and even our private sector is trying to figure out where we oughta go, or which technology. So here's what I think. You need government, but government can't act in this world to provide the functions that government does in the same way it did. I love your quotation on change. I have a different one, and that is: I hate change, but I love relevance more. So to me, what's the function of government, what's the function of national security? You must effect it, but you cannot effect it the way that we have. It just isn't working, it's ripping at the seams. It's too slow, it's not expansive.

But again, you still have the people and I know you’re gettin’ there.

You do still have the people, but people without imperative are going to have a hard time delivering the outcome that we need. And imperative just to prosecute a technology, or to take it as far as it will go, has limitations to it. Look at Mark Zuckerberg sitting in front of Congress: oh my God, I'm responsible. When he started, he didn't understand the responsibility that that volume and that technology carry, and now he does. The reason why I'm not concerned is that if organizations understand what their purpose is, but let go of the modality, and develop new craft, as you're articulating, I think we can get there. But if we either think it's willy-nilly, or think I've got to hold on to the ways that I've done it in the past, those two are antithetical to the kind of progress we need. Which is why some people look at China and say, well, that's attractive, we oughta lock it down. That isn't America; that isn't gonna yield it. It's some combination of those two things, but it's not either-or.

So how do you, I’m sorry, I’m gonna get to you. How do you, because you worked in government for more than a year or two.

Since I was 20.

Right, how in the world do we get to your nirvana? I'm serious. But again, you've got sticky floors, and very obvious ceilings, that would potentially prevent us from getting there. What are the outlines, what are the one, two, and three things that will get us there, getting rid of that ceiling, and unsticking us from that floor?

You cannot convince me that leadership doesn't matter. It does; it sets the direction, the course, and the parameters, and can make some of the rules. From a government perspective, I think one of our responsibilities is to have a little bit longer horizon, and a deeper pocketbook. I'm not gonna endorse Senator Schumer's proposal, but I like it, because I think that's a very foundational thing that you need to do; you need more. And the other thing is, we need to create a much more semi-permeable membrane between the public and the private sector, for talent, for processes, for ideas, but it can't just be one way. The private sector's gotta realize that their solutions have to work at scale. So: leadership, a semi-permeable membrane, and reinvestment in the foundation that will allow us to have the basis for application going forward.

Dr. Manyika.

I guess I'm a little more optimistic than the question suggests, for the following reasons,

So sorry, I’m from the south.

I'm optimistic for the following reasons, but there's still a call to action at the end of it. So, the reasons for optimism: I see lots of instances of the kind of change and innovation we're talking about. Look at what the Defense Innovation Board has recommended, look at what's in this report, look at what various leaders are doing, look at the fellows programs that have started to emerge that provide mechanisms to move back and forth between industry and the government. So you've got lots of these examples, so that's good. And look at some of the leaders who've emerged and stepped up, General Shanahan and others. I had the pleasure, for the last year, to work closely with, and co-chair alongside, Admiral Bill McRaven, a task force on national security and innovation. So you've got these leaders who are emerging, and these practices. I think the challenge is that it's all too small, too incremental, and not moving quickly enough. And I think, while in the past we might have been able to live with that, and slowly adapt and change over time, this time is a little bit different. Partly because, I love numbers, so take the investment question. We now have a competitor called China, who's at scale. Just some fun numbers on this: if you looked at the rate at which the U.S. was investing in basic research, which feeds a lot of this innovation, the peak of that was in 1964, when we were spending 2% of GDP on basic science research. We sustained that for a while, and then it dropped; today it's about 0.66% of GDP. Now look at the other side, at the trajectory that China's on. At the rate at which they're investing, they're on a path such that in about a decade, if they keep up the rate of investment spending, they'll be spending about 2.5% of their GDP, at a time when, by all expectations, their economy will be about the size of our economy. So the scale and the pace we're talking about require that we move much faster.
So while I love all the fledgling innovations and the calls to action, and the things that are in place, they just need to be bigger and faster. That’s the challenge.

[Gary] Need a bigger boat.

And you hinted at it, because you can’t not, particularly in terms of the budgetary allocations to achieve this. You cannot ignore the political dynamic.

Well, but that’s the reason why I think it’s important that we find a way to bring the public along, because we have to get the support. It’s a democracy; no one can just re-write the budgets any way we want, and that’s the beauty of this country. But we have to bring the public along to understand that this is quite important and quite foundational, and quite fundamental, to make these kinds of changes.

[Mignon] Mr. Chairman.

So there are a couple of things. First off, I’m the last person to suggest complacency in terms of the nimbleness of government agencies, but you have to know that out in industry, this is a work in progress. We’ve got all these poster children in Silicon Valley that talk about being very nimble companies, and I spend time with boards of directors and CEOs who are asking exactly the same questions. They’ve got all the same problems. It’s typically the innovator’s dilemma, or rather the incumbent’s dilemma. And so they’re all trying to focus on the same issues. What you find is there’s some consistency. The first is courageous leaders: there have to be some people who are setting the north star. Second is that they focus on the managers, because that’s the linchpin, and especially mid-management; they’re the ones who are gonna decide whether or not your organization lives or dies. They’re gonna manage all the information going up and the power going down, and they have to be trained in a new model. And I know this sounds a little random, but if you want a great book on the subject, read “Moonshots in Education” by our friend Esther Wojcicki. And what she says is, in teaching, and don’t get me started on education, ’cause I’ll go off on that one for a long time, although I have no moral standing ’cause I never actually went to college. But what she says is, the old model is the sage on the stage, and we need to move to the guide on the side. And that’s exactly the model for that adaptive manager: how you help them to think of themselves not as the one controlling the work of their employees, but as the one that is actually enabling them to dynamically bind in and around problems. And the third place these organizations often focus is alignment: once you’ve got the path of the direction you’re going, what is the role of every individual in being able to enable that change? An ongoing process of change.
So that’s part of my answer: industry doesn’t have this perfect, but they’ve got some processes that they’re going through, where they’re trying to continually build adaptive organizations, and that can be learned from.

But Gary, industry can get this wrong. For these issues, for defense and national security, we can’t get it wrong.

In an environment where the risks are higher, so yes, there’s a dynamic tension as to what kind of risk management processes you’re gonna put in place. And it isn’t just that you’re managing your citizens money, it’s that you’re also managing their ability to have a secure country.

So there are no shrinking violets on this panel, so if you have any questions, please raise your hands and we’ll get the mic to you. A lot of what we are speaking in terms of working group three, the questions, I was worried during the first part of today, I don’t even know what we’re gonna speak about now, because a lot of the questions were put forth. But if you want to get more granular, or re-ask a question, stated in a different way, now’s your opportunity to do so. By show of hands, please help me, because I don’t know how many questions I have, so by show of hands, if you would care to weigh into this conversation, please do so at this time. We’ve got one taker. If you could briefly state, who you are, where you’re from and your question.

Yes, I’m Russell Shilling, with the American Psychological Association, and a former DARPA and ONR PM. One of the things that I’m hearing you say up here, when we’re discussing what the workforce needs to look like for innovation and AI, is that I still hear mostly computer scientists and technologists. And again, I know my comrades in the DoD, since I served there for 22 years. I just wanted you to expand on the diversity of talent you need on AI, and what an AI professional actually is in your worlds.

Oh neat, why don’t you start? ‘Cause we’ve been fighting about this since the interim.

In a good-natured way. So first off, there is research going back to the 1950s, when we shifted from a war footing to a consumer economy, that was really good work on understanding human skills. And I like the framing of Sidney Fine, known as the father of the Dictionary of Occupational Titles; he was sort of an honorary uncle of mine. He basically said there are these things that are called knowledges and these things that are called transferable skills. Unfortunately, today we call them hard skills and soft skills. But really, these are skills that are anchored or rooted in a particular arena, and these are skills that are usable in a range of different situations, that are transferable. And what we’ve done is get so over-indexed on the specific knowledges that we believe are needed at a particular period of time, to train people to be able to solve certain kinds of problems. Our education systems are geared towards that, the way that we’re churning out people with degrees is geared towards that, and there are all these other skills that will allow people to be adaptive and collaborative and so on, and we’re not training for those. The shelf life of information is decaying rapidly. So instead, I push people to think: here’s the portfolio of skills that we need. And the truth is, there are always going to be these really deep knowledges that will continually change, to be the equivalent of the car mechanic, but a lot of people just don’t know how to drive the car, and they’ve got to have a range of different perspectives to be able to solve problems with dynamic teams. It’s really clear. Google did this analysis and found there are only two actual characteristics of high-performance teams: psychological safety, so you’ve got a bunch of people that all can brainstorm together, and psychological diversity.
So it has to be a lot of the skills that we think of as being soft, or you’ve got to have people trained in psychology and a range of different liberal arts backgrounds, because that’s the only way that you’re gonna solve the problem. But in Silicon Valley we haven’t got it right at all. We heavily over-index on the technical skills, and we are ignoring many of the others that are required to solve problems dynamically.

[Mignon] Doctor?

Yeah, two things to agree with that. One, the report that NSCAI just put out does in fact do a nice job of articulating the seven or eight different kinds of capabilities, and if you look at that, the majority of them are not computer science, so that’s one point. The other point, I think he’s asking whether the AI community is recognizing this itself. So for example, I’ve been involved in setting up what is now one of the largest academic institutions, the Stanford Human-Centered AI Institute, and by design, that institute is a multi-disciplinary institute. If you look at what’s going on there, yes, you have computer scientists and roboticists, but you’ve also got people from law, people from philosophy; in fact, of the co-directors of the institute, one is a computer scientist and the other’s a philosopher. So I think you’re starting to see this recognition that this takes multiple skills and capabilities, and I think we do need to move away from this AI skills topic as being primarily about computer science. It isn’t.

Just to take a twist on that. I think this is a technical world. I think everyone, I don’t care your discipline, needs to be comfortable with technology. You may not be the person who’s developing it, but if you are not a comfortable data swimmer, or you’re not comfortable with technology, you’re gonna have a hard time. Second is, as these technologies become more ubiquitous, the differentiator’s gonna be critical thinking. The way I characterize it: “Oh, you wanted to use it?” And so the technologists have got to have use in their head, and the decision makers have to have that responsibility of use. So if you take Secretary Kissinger’s comments, if you believe you have a responsibility for use, you will get to the issue of ethics. Because nothing changed about the responsibility of the organization or the human just because you introduced a technology.

I think he said application and interpretation.

Right, I’m just saying that you have to understand the responsibility of use, and to me that’s the critical thinking piece. And so I actually think you’re gonna see a resurgence in liberal arts education as the technology permeates even more broadly than it has now, because that’s what’s gonna make the difference in terms of progress.

I keep going back, and forgive me for being fixated, it’s kind of a southern thing too, to be honest with you. Again, you’re speaking about being disruptive within both public and private sectors, and affirming that the composition within those sectors will be more diverse in a number of ways, particularly when it relates to disciplines. But that again is not natural, it’s not comfortable, it’s not easy to manage, and—

[Susan] Especially in the government where historically our promise is stability of employment.

Right, you said that out loud?

Yeah, and that has given us some of the greatest accomplishments of free societies. But it isn’t necessarily the model that we need going forward. So I’m very proud to have served in the intelligence community for almost 40 years. When I talk to young people now, I say it’s the best first five to ten years of your career. It is: higher purpose, you’ll understand the use case, you will have more responsibility early. But at five to ten years I want you to move.

So how do we make that sexy?

No but I think we totally can. I think there are partnerships that we talked about in the field, that you can imagine careers differently, I can imagine a company saying, we are going after the same talent and I want that talent’s first five years to be in the government, and they’re still my employee.

You said, I think, a most key word. And so really, I think one of the greatest places you have to stand is purpose. When I was a teenager my father was a recovering minister,

[Mignon] Recovering minister.

A recovering minister who had been laid off from his work, written out of the budget. And he went to help other ministers who were being laid off, and he wrote a little pamphlet that turned into a book called “What Color Is Your Parachute?”, which is the world’s career manual. One of the reasons that I didn’t go to college is that I fell into the family business; I was actually trained as a career counselor when I was 19. And he had a construct: he broke down jobs, and also the characteristics of us as humans, into seven different characteristics, including our skills and knowledges, but the center of the target was purpose. And so what you’re finding right now, when the heads of companies and boards of directors pound on the table and ask me why young kids won’t come to work for their companies, is that kids keep asking them: what’s your purpose? What is the purpose of your organization? You’ve got purpose nailed in the public sector. And so that, I think, is your superpower. That’s the place you start from. It is about that process and helping them to onboard, even if it’s the first five to ten years, or a later five to ten years.

[Susan] And then it comes back around, right?

Exactly, but you don’t have to make that up. That’s the north star, to bring that talent in, it just has to be clearer about how they can actually help to move the needle.

So the public sector needs to be a better messenger. A better prophet?

And also be really clear as to what the… One example that I think is very indicative is Code for America. Jen Pahlka, the former U.S. Deputy CTO, built the whole model of getting a bunch of innovators together and making a problem clear. This is what I talk a lot about in The Future of Work: we have to become more problem-centric. Agencies become very process-centric, and then they forget what the problem was they were trying to solve. The more problem-centric you become, the more you can carve out the problem to be solved; it’s clear, and you can actually have an impact on it. Code for America basically threw a team at the state of California, which had just made marijuana legal, to change the records of 50,000 people who had convictions and wipe them out automatically. And that was only done because she brought a bunch of these innovators in to solve that problem.

Any questions? One to my right.

Hi, afternoon, my name is Jim Perkins and I’m gonna speak under my Army Reservist hat. The question here is about talent management, in particular talent management reform. One of you mentioned there’s both a tech problem, there’s data, and there’s recruiting talent, and I would like to push on what Miss Gordon had sort of mentioned about the off-ramp for what I’ll refer to as the frozen middle. And affectionately providing a… I’m so sorry, I teed it up and then I sort of blanked out. But the ability to retain the right talent in there, because many of the people that you have with these skills are leaving out of frustration. Even if you have the technology and the data, the lack of implementation is just killing them.

So I think you and I both said it, and we just need to tackle those problems. First, there’s the demographic group of, not people who want to participate, but people who are just waiting until their tenure is up. We need to address that, and that’s a difficult thing to say, but it’s something that we’re going to need to do. The second is, if we don’t create the environment where the talent we bring in can thrive, the promise won’t have been enough. It used to be that the government was the only place where some of these really tough problems were being attacked. If you wanted to do mathematics, or if you wanted to work with high-performance computing, you had to be in the government. If you wanted to do geospatial information, you had to be in the government. Now there are so many other outlets; we have to fix that. So when I talk about that whole stack of infrastructure and process, it is to get at that problem. Now, that is a big ole honkin’ problem for us, which is why I think the partnership and that membrane of saying, you know what, I want you to go out now and work on your craft and develop new things, and that is still part of our tent. I think the national security tent is much bigger than government institutions. That’s one of the ways I think we can address the numeric problems of supply and demand, and the talent problem of keeping people engaged in what I think the nation really needs, without having them wait until we solve the bureaucratic issues of government structures as they exist today. So if you just took the small step of saying, I’m gonna free that up, and not worry about the sole-source justification behind sending person A to company B, that is a way for us to jump-start this, and I think the companies would love it too. Because it kind of inoculates their people in terms of the issues of scale, the issues of regulation, and the issues of security that are important nationally as well.
So that is what I would do for yours.

I think I saw another hand back here, did I? Way in the back. And there was somebody else; whoever has the mic on this side, if you can approach them, so we can save a few seconds. Yes ma’am.

Hi, I’m Anna Mitchell. I work at Schmidt Futures as a product manager. I just graduated from Stanford’s computer science department, and I talk to a lot of people who are deciding about their first jobs out of college, a lot of people who are specializing in AI. One of the things that I’ve noticed over and over again is that it’s so hard to turn down a super high-paying job from the private sector with stock options and bonuses; it’s very hard to do that as a new graduate. What are your most concrete proposals for solving this pay gap, beyond tours of duty, for recruiting people to government for the long term?

We talked about that a bit in the report.

A couple things. One idea that has been proposed in various circles is that when people are taking on jobs that have mission and public service in mind, and that are based on these foundational technologies, such as people like yourselves who have graduated from these places, why wouldn’t the federal government write off their loans? Sure, the government may not pay them what Google will pay them, but why not underwrite the cost of the education that people have invested in these foundational technologies?

[Mignon] So there are other ways, other than that bonus check or that stock option, that are meaningful and have a financial impact?

I think so.

Any other?

So a couple things. The reason that things like tours of duty are the things you initially default to, the idea that there’s some period of time to focus on a specific problem but then go and make the bigger paycheck in industry, is simply risk reduction. Especially if you’ve got big student loans, you’re thinking about how you’re gonna pay all that off, but also about the long-term arc of your career. There are several things. The first, and I’m just being a broken record on this, is to up the volume on purpose. We just know from 50 years of work with Parachute that if you give people two jobs, one that pays pretty well but is lacking the purpose that they feel, the reason they’re on the planet, and another job that pays less but actually has that purpose baked in, then if you can factor out some of the circumstantial issues like heavy student loans, people will choose door number two over and over again, depending upon their risk profile. So you just gotta amp up the purpose part, and then you’ve gotta carve out the problem so that it’s very clear, and then there’s probably a public/private partnership in between, where they can actually be partially on loan, to solve problems over the longer term. It is not binary; these are all softening-the-walls-of-the-organization kinds of mentalities.

We have retention bonuses and hiring bonuses and all those sorts of things. What we could most do is make it faster. If I could give someone the offer at the same time they got it from the private sector, not deferred by 16 months…

Oh no, that’s real, that’s real. Last question from the audience?

I couldn’t decide how I wanted to go…

I’m so sorry about that.

Hi, John Radavan. I’m part of the U.S. Air Force MIT AI Accelerator in Cambridge. I just came out of a tour in industry with Amazon; I was part of the first cohort to go through their Machine Learning University. What was huge to me was this idea of the democratization of AI. A lot of these tech companies have developed these internal schoolhouses, if you will, to upskill their force. We talked about questions on the flow problem, the many types, and the pipeline problem. What do you see as the role for industry, for academia, and for the FFRDCs like Lincoln Laboratory and Oak Ridge National Laboratory, in helping to upskill the force and create this organic capability within the DoD?

[Mignon] Thank you, quick answer?

So first off, I don’t wanna be the word police, but I’m always worried about words like upskill and reskill, ’cause those sound like the industrial processes that we’re trying to leave behind. I’d much rather it be a person trying to upgrade their own capabilities. Again, I go back to the public/private interaction process. There’s no reason that organizations, the private companies that have so many of these resources, couldn’t basically construct boot camps, where government agencies could be continually identifying the skillsets they want, and have processes by which they could dynamically connect to that program, have people do the immersive process, and get trained very, very rapidly.

[Mignon] Doctor?

Tours of duty such as the one you just did, I think we can do on a much larger scale, with people from the defense agencies and national security agencies spending time with companies. Number two, I think the government agencies do a poor job of creating a sense of excitement for the kind of work people can do in government. Anybody wanting to do machine learning on weather or climate systems: the government has better data on that than anybody. So how do you attract people to come and work on the kinds of problems where the defense agencies and the government have better assets to offer to AI people, and to those problems?

[Mignon] Miss Gordon.

I’ll stand with their answers.

Okay, final word. Time is up, forgive me. A one-word answer from each of you about what you’re most excited about for an AI future. We’ve got negative 20 seconds. One-word answer.

It’s gonna allow us to solve the hardest problems in the world.

That is more than one word.

Maximizing human potential.

[Susan] Curiosity enablement.

Thank you very much, please.

So we’re gonna take a ten-minute break, and then my fellow commissioner, Katharina McFarland, will come up and introduce our next speaker, Secretary Esper.

[Man] Actually if I–
