Strengthening our Core: The Way Ahead for American R&D


National Security Commission on Artificial Intelligence Conference Panel 1 – Strengthening our Core: The Way Ahead for American R&D


Transcript

Good morning. It’s my pleasure to introduce the first panel of the day, Strengthening our Core: The Way Ahead for American R&D. It’ll be led by Commissioner Andrew Moore, who chairs our working group focusing on AI R&D. Please join me in welcoming him to the stage.

Thank you for coming here today. We are having a discussion related to working group one, which is looking at the basic questions around research in the United States. I’m privileged to be joined by four people for whom I individually have huge respect, as people who’ve been building up the worlds of artificial intelligence and contributing to the United States in doing so. These are Andy Jassy, Steve Walker, France Córdova and Eric Horvitz. Technology R&D in the United States has long been driven by a triangular alliance involving companies, universities and the federal government. It’s been this way ever since the days of the Manhattan Project, and it’s been of huge benefit to the United States. Artificial intelligence, as we all know, has demonstrably become a huge strategic tool. It’s no longer a question of whether it’s going to be useful; in the corporate world and the federal world, over and over again, it has been demonstrated to be a game changer, enabling organizations to produce more effective, safer technology. So we, the human race, have this new additional tool to help us, and that’s very exciting. And it’s not stopping. There are new things coming down the pike from the world of AI which really raise my hair, if I had hair. What we’ll be talking about in artificial intelligence in two or three years is even more exciting than what we’re talking about right now. So while the commercial sector plays a significant role in AI research, we in the commission are confident that it is not enough. It is insufficient to sustain the US advantages which we’ve enjoyed over the last 70 years. The government retains a critical role, especially in supporting basic scientific research and research directly relevant to national security. Many of these exciting things coming down the pike which are game changers are currently happening in our national labs and especially in our universities, not entirely in our commercial organizations.
What this working group has done since April is engage stakeholders across the AI R&D environment: people in academia, in government and in the private sector. And in doing so we’ve come to several key consensus judgments on the research environment. So I’m gonna quickly go through the list, and I’m going to ask our panelists to expand on their thoughts around it. Something we’d expected when we started, but have now confirmed, is that federal R&D funding for AI has not kept pace by almost any metric: the number of researchers, the economic impact, the pie charts of where our competitors are investing in research. On all of these the United States currently comes up short. We believe the government needs to re-evaluate its AI R&D budget levels: not necessarily running things as business as usual, but potentially being more flexible about the way we invest in these areas if we want to see this continuing wave of innovations coming down the pike. We recommend that the US government identify, prioritize, coordinate and urgently implement national security-focused AI R&D investments. This is particularly important. As we’ve been looking through the areas of investment, it’s kind of shocking that it’s in the national security-related arenas that we’re seeing far too little activity. There are untapped opportunities here, and we do believe that it is possible to build an AI R&D research ecosystem in the United States, mainly based on clusters, bringing the whole country and its different segments back into this grand challenge. Another of our conclusions is that in many places there are bureaucratic and resource constraints which are hindering the national labs and other federal R&D locations from reaching their full potential.
I personally have really seen this in heroic parts of the federal government, where folks have had to fight hard, maybe harder on the bureaucracy end than on the technology end, to get their artificial intelligences actually deployed and useful. I and many of my colleagues weren’t born in the United States, and we came to the United States because, at least for our generation, it was the absolute center of the world for excellent research. And we believe it’s essential to keep it so that, all around the world, the smartest minds in artificial intelligence aspire to be working in the United States for the United States. So we recommend that law enforcement and academic leaders find common ground on welcoming foreign talent while still being really careful about the very real security risks from state-directed illicit activity on American campuses. Officials should recognize that overly broad investigations or blunt restrictions on academic research carry major risks to national security and to preserving our AI leadership. So that’s a quick summary of what we’re talking about. I’m now gonna ask the panelists to help expand on this. We’re going to ask each of them for their perspectives on two central questions. First, what is the comparative advantage of the US research environment: where are we at risk or weakening, and where do we see opportunities? Second, how does our current R&D environment need to evolve to fully leverage the transformative potential of AI to strengthen our national security? I’m going to now ask my panelists to each take turns with short remarks, and we will begin from me outwards with Andy Jassy.

Thanks. I think that if you look at where we are today, the US still leads in AI research: in groundbreaking and foundational research and breakthroughs. I think that’s in significant part because of the amazing universities we have in this country and a lot of the companies who’ve made very significant investments here. I think we have an unusual amount of talent here, as Andrew referenced earlier, and an unusual entrepreneurship community in the United States. I mean, most of even the early-stage startups today are using machine learning and AI in their early-stage businesses. So I think we have a lot of advantages that we started with, but others are catching up. You see it particularly with China and to some extent Russia as well, and they’re primarily top-down, government driven in terms of what’s happening. In the business I work in every day, which is a cloud computing business called AWS, we often get asked, “What are the biggest differences between companies that are making a big move to the cloud versus those that just talk a lot about it?” And oftentimes what it really is, is alignment at the senior leadership levels of that organization and then an aggressive top-down goal that forces the organization to move faster than it organically otherwise would. And I think that in the US we haven’t yet taken up that mantle of top-down leadership. We don’t have a top-down aggressive goal; we’re not funding research in AI like we really mean it yet. We have a start, but not really like we mean it yet. We haven’t provided guidance and regulations on how to use the technology in AI so that we make sure we’re using it responsibly but also get out of our own way with all the debates on how to use it. And if you contrast that, you can look at China, where they are very aggressively driving it from the government top-down.
They have a very clear incentive and goal to be number one in AI by 2025, they have relatively few limitations, constraints and regulations on how they act, and they view it as a very important strategic initiative to lead in this area because they believe it’s the future of really almost everything that we touch in society. And I think that Russia, maybe less expansively than China, is still making a very big top-down investment to use AI to their benefit in the military. So when you get back to our country, we need to pull together a more significant top-down strategy. I think one of our jobs as a government, as a country, as companies, is to try to look around the corner and see what may be coming before it has a meaningfully negative impact for our citizens or our customers in whatever we do. And I think you can look at where we are today, and this may not be exactly that Sputnik moment, but you can see there is a huge amount of activity and momentum right now in AI, and we need to look around that corner and pull together a top-down strategy that allows us to invest in this transformative technology the way we should.

[Andrew] Thank you. Great.

Steve Walker, director of DARPA. I like the title of this panel, Strengthening our Core, because I think our core is pretty strong. When I think about the core of our R&D environment in the country, I think about how Vannevar Bush first set it up, right: academia, the private sector, industry and government pulling together in this loose federation, our development community. Out of that concept sprang NSF pretty quickly; it took a little bit longer for DARPA to pop up in 1958, after Sputnik. And this sort of loose federation of entities, or ecosystem, has really served us well through World War II and after, during the Cold War. I don’t wanna change that system. I think it works well, at least from the standpoint of innovation and allowing ideas to come bottom-up, which is really the DARPA model: hire the best people you can bring into the organization and turn them loose. Allow them to fail, measure their progress, but give them the freedom to innovate. That works well for our country, and I think it’s working well in the AI domain today. What may not be working as well are some of the seams that have grown up between those different communities for various reasons. And so I think one of the risks, or weak points, in the system that we’ve used over these many years is building those barriers up again between those communities. In national security and the defense sector, we have the defense industry. We work with them all the time on military applications, and that’s great and it works well. But we have a whole commercial sector out there that is really focused on developing AI tools and techniques, and we at DARPA interact with that community as well. Building the bridges a little tighter between the commercial sector and the defense sector on the application side of AI, I think, is gonna be critical as we move forward.
I’m pretty comfortable this country is still innovating better than anybody on the research side, and there is pressure at our academic institutions for various reasons. I was talking to Kathleen, who was a DARPA alum before she ran the computer science department at Tufts: lots of students. Lots of students coming on board, which is a great thing. But giving them something to do after they graduate, and not just in the commercial sector but also the defense sector, I think is gonna be important. You said it, Andrew, early on: we wanna continue to be the country that attracts the best and brightest people from all over the world, and the last thing we wanna do is train them here and have them go home. We wanna keep them here if possible, and so that’s gonna be important. That’s a weakness that’s popping up that we really need to deal with, I think, as a nation. I can’t remember what the second question was.

In fact, you’ve been super helpful in all of this, but it was the question of how the current R&D environment needs to evolve. Yeah.

So, and I’ll talk a little bit more about this hopefully in the open session, but one of the things we’ve done in the electronics sector is DARPA has led an effort called the Electronics Resurgence Initiative. It’s about two years old, but it has really been successful at bridging that gap between the commercial microelectronics companies and the defense sector companies working in electronics who need those devices. Academia is the foundation. We’re working with companies across the board, with both sides of that industrial aisle, to get the best electronics we can, which really reside in the commercial sector now, over to the defense sector for national security purposes. I’d like to explore that a little bit more for the AI community, because I think it’s not as far along.

I completely agree. So much of AI is now becoming hardware- and compute-bound. Dr. Córdova.

Hi. Good morning, everyone. I’ll try not to lose my voice with this cold. I am very glad to be on this panel, especially since we’ve had so many connections on this panel. Let me mention a couple of those first. Steve and I chair the White House’s Select Committee on Artificial Intelligence, which brings together all the agencies that are involved to talk about what we’re doing and how we can grow and do more together. And most recently, DARPA and the National Science Foundation, which I head, have agreed on a project, $10 million each, to foster work on machine learning and fund proposals there. And Andrew, of course, is a great role model. Andrew, earlier this week I was at both Stanford University and the University of California at Santa Barbara, where they’ve just launched, with Google, the big quantum computer. And Andrew’s my role model for how universities should change and think about what a professor is. He’s gone in and out of universities. He’s been a dean at Carnegie Mellon and now he’s back in industry, but he circulates, as do many of the computer and information scientists at our universities in this country. I think that should be extended to the whole professoriate: we should get out of our old-school models of professors getting tenure and so on, and really encourage the churn that is so important. I have a figure up here, a famous slide from about the year 2012, from a report of the National Academies. It’s called the tire tracks slide because they look like tire tracks, but what it shows are different industries within the whole computer and information science domain, which you can see at the top of the tracks, all connected, showing the uniqueness of the ecosystem in this country. The government funds the long-term investments; that’s the red line, which starts very early on. The National Science Foundation would be a good example, DARPA another.
We fund research when we don’t even know where it’s gonna ultimately wind up. As Eric Schmidt himself said in a commencement address that we often quote, NSF is where all the interesting things get started. Thank you, Eric. So interesting things get started on those red lines, way back, and we invest for a long time. Then industry gets interested in some of the potential applications, that’s the next track over, and starts seeing how those can be blended with other innovations to create products. And then the final line, the green line there, is when businesses develop and become million-dollar businesses and then billion-dollar businesses. This is something we have in this country that we should treasure: that interrelationship and feedback between industry and academia, because what the government is funding is all these amazing, talented university people. Another point that I wanted to make is where NSF’s investments are going. We spend about $500 million a year on artificial intelligence, and 85% of everything we fund goes to universities. That’s where the talent is. Some of the long-term programs that we’re investing in now have to do with the security and safety of AI and the fairness of AI. We have a collaboration with the Partnership on AI, which is composed of about 50 companies, to fund proposals out there on the security, safety, transparency and accountability of AI. And with Amazon we’re funding a large number of programs on the fairness of AI. Another thing that we’re funding is one of our big ideas in our strategic framework for future investment, and that’s the future of work. Because the workplace is gonna change, whether you’re a schoolteacher or a factory worker or in any occupation, and artificial intelligence lies at the core of that change.
In fact, we’re fond of saying that for all of the big ideas that comprise our strategic framework, whether it’s quantum or genotype-to-phenotype or the future of work, it is artificial intelligence that is the universal connector. And that’s why we shouldn’t limit ourselves to thinking about just funding artificial intelligence, because it is so broad it permeates all our fields. It connects all our fields. It will change everything we do, not only in the workplace but in education, how we play, how we think and how we work with machines. So the last point I want to make: in talking about universal connectors, I think about the glue that binds all of the disciplines together. Recently, as I said, I was in Santa Barbara, and that is the home of the person who invented Gorilla Glue, which some of you may have used. That person turns out to have been a friend of mine for a very long time. Our kids played soccer together, and he is a woodworker, and that’s why he developed Gorilla Glue. And to me that is a key example: when we talk about wonderful discoveries, discoveries that are transformative, either in a little space like that kind of glue or a huge space like the AI glue, it all comes down to people. To individuals that we fund early on to just experiment, and as they need to do things and improve what they’re doing, they become inventive. And they use chemistry and physics and mathematics and computer science, all the basics, in order to do that. So an emphasis on the workforce, funding the workforce, training the workforce, has to be a huge part of how we’re thinking about the future, and that’s a key goal of the administration and Congress and all of our agencies.

Thank you, Dr. Córdova and Dr. Walker. Both of you have been huge champions of artificial intelligence for the United States, and I really want to thank you for that. Our final panelist: Dr. Eric Horvitz.

Yes, so the US is in a very good place. We’ve had a great history of leadership in creative innovations with AI, going back to Vannevar Bush, Licklider at DARPA and the leadership there, and Herb Simon, Allen Newell and John McCarthy, founders of the field of AI. But we could do a lot better, and I see great opportunity in bringing together government agencies, academia and industry to think through next steps and where we go to address some of the hard challenges and opportunities coming to the fore. As the director of Microsoft Research worldwide, I’ve grappled with coming up with innovative models to bring together academia and industry: new kinds of positions, new kinds of resources, providing data sets of various kinds to academics and powerful compute that they might not have, for example, and providing homes in industrial environments for academics to understand some of the basic real-world challenges being faced in engineering as well as in the hard problem-solving tasks ahead. As chair of the Partnership on AI, I worked with the NSF to think through, as France said, innovative approaches to bringing the private sector together with government agencies to fund innovative topics in AI. And thank you very much, Irwin and others, for the great job that was done there. I thought I’d mention a few interesting hard challenges in AI facing us technically today, to give folks a sense for why we need to bring government, academia and industry together. AI R&D has been on a fabulous run, but we can do more. Today’s AI systems are actually quite brittle. Blind spots and biases show up all the time in these systems. They are narrowly focused in terms of how they can be applied. In some ways, much more is being said about the possibility than the reality of these systems, but we can make that reality come true over time.
We want to give systems, for example, the ability to understand their own limitations, to give them a sense of their own competence in different situations. They need to know what they don’t know, especially in high-stakes decision areas. We need to understand how to make AI systems more robust to attacks. The more we use AI throughout industry, government and defense, the more we create AI attack surfaces that others can render useless with focused adversarial attacks on these systems. So these defenses require investments in AI technologies themselves, as well as in complementary technologies like security and verification: technologies in computer science outside of AI that make AI systems better and more robust. And moving into the realm of basic science, where we really rely on academics to think broadly and long term: it’s clear that while we’ve made great progress, we have very little understanding of many mysteries of what I’ll call human intellect. Mysteries of mind. How is it that people can learn so much without large amounts of labeled data, in the wild, in an unsupervised way? How can people learn to do one thing by knowing another thing, the idea of generalizing across tasks? And where does all of our common sense come from, this ability to manipulate thousands upon thousands of facts daily in a very efficient manner? On another front, we need to better understand how to enable people and AI systems to work together collaboratively. Human oversight and controls are a critical foundation of AI in DoD and related national security efforts. And so there’s great opportunity ahead in R&D on methods that enable AI and people to work together in a seamless manner, focused on the principle and goal that AI augments and extends human intellect, where we work to mesh the intellect of machine and human to create a better whole. And finally, we need to really understand how to move even today’s technologies into practice.
It’s more difficult than one would imagine to take existing prototypes, and even specific solutions, and move or translate them into real-world, value-providing services: grappling with legacy systems and pipelines, with training, and with understanding the cases where the application of existing technologies is actually relevant and powerful and useful. There’s so much that can be done in R&D to make that translation better, more efficient and more effective, so that we can apply AI technologies, again, even today’s laboratory technologies, to the real world of healthcare, education, transportation and defense. And I wanted to stress just one last thing here, to emphasize what Eric Schmidt said in his opening remarks and Andrew amplified as well: we have to be careful about a fortress-America mentality when it comes to R&D and technical excellence. We need to recognize and preserve the role of our national strength, which comes from the power of free inquiry, going back to Vannevar Bush, Licklider and our history of science and technology. So US restrictions to protect national security and intellectual property have to be narrowly tailored to specific threats so that they don’t hinder our academic and commercial leadership in AI. The innovation culture that we’ve long enjoyed in our nation, and our values around innovation, are the very heart of our power as a country. I’ll stop there.
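Dr. Horvitz’s point that systems “need to know what they don’t know” can be made concrete with a minimal sketch: a classifier that abstains and defers to a human whenever its confidence falls below a threshold. The labels, probabilities and threshold here are invented for illustration and stand in for any real system.

```python
def classify_with_abstention(probabilities, threshold=0.75):
    """Return the top label, or None to signal 'defer to a human'.

    probabilities: dict mapping label -> model confidence in [0, 1].
    """
    # Pick the label the model is most confident about.
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return None  # the system "knows it doesn't know" and abstains
    return label

# High-confidence case: the system answers on its own.
print(classify_with_abstention({"tank": 0.92, "truck": 0.08}))  # tank

# Ambiguous case: the system defers rather than guess.
print(classify_with_abstention({"tank": 0.55, "truck": 0.45}))  # None
```

In a deployed high-stakes setting the threshold would be calibrated against validation data rather than fixed by hand, but the control flow, answer or abstain, is the essence of the idea.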

Thank you, Dr. Horvitz. And I wanna emphasize that the commission recognizes the need for this breadth of research, but we are absolutely convinced of, and have been briefed on, a very real series of threats and approaches by our adversaries to getting hold of US national security-relevant AI, and so we do have to bear that in mind as well. What I’m going to do now is hit the panel up with a couple of quick questions. I’m looking for short, 30-second answers, and then we’re going to open it up. So if you have questions for our panel about the AI R&D infrastructure in the United States, please prepare them, and that will come. I’m first gonna do a couple of questions myself, and the first one’s for Dr. Córdova. One thing that I think we all admire is that the NSF has launched an effort to establish AI Institutes. What impact do you think we’re gonna get out of this AI Institutes program?

Well, we did launch a solicitation to the wide community about a month ago, and it actually connects four of our government agencies: the US Department of Agriculture, the Veterans Administration and a couple of the defense branches as well. We are asking for proposals in six different areas for research institutes on artificial intelligence. These came out of a workshop in which people talked about the areas where we should be funding research institutes first, and the one I remember most is artificial intelligence for physics discovery, since I’m a physicist and wanna make discoveries in that realm, but there are ones on fairness and security too. And I think almost any area that you’re working in in artificial intelligence could fit under one of the six umbrellas. So we’re hopeful that the impact is going to be a lot of creativity and innovation. That’s what happens when you launch a solicitation to the wider community: we don’t know exactly what people are going to produce, but we assume there will be multi-institutional collaborations. There could be collaborations between industry and the universities. We’ll see what happens when we get the proposals in. It’s just important to get that creativity from out there and to fund and invest in it.

Thank you. And I absolutely agree that the big industrial pushes in AI all began with basic research, and pretty much entirely from the United States. My question for Mr. Jassy is—

[Jassy] I’m a doctor.

Oh, okay. (laughter) My question for Dr. Jassy: you head the largest organization on the planet at the moment for all this transformation of information technology into the cloud, and from that vantage point I’m sure you’ve had a chance to form some opinions about the national security things that we in the United States should most be concentrating on when it comes to AI R&D.

Yeah, I think, first, as a number of people on this panel said, we’ve gotta make sure that we fund research and education. I think we’re underfunding it today, and many of the most important discoveries, not just in AI but in computing in general, have come from research organizations and our educational institutions. If you look at our agencies that are in charge of national defense, we must make sure that they have modern and sophisticated infrastructure. And in this day and age, having the ability to leverage cloud computing, with the way the cost of storage and compute has radically changed and all the services available, will allow you not only to save money for the missions you’re pursuing but also to move in a much more agile and fast fashion, which, especially given what’s happening in the rest of the world, is essential. There’s all this data that exists, and we see this with the companies and governments that we serve in our business, but it’s true for sure in our country: we have all these data silos, with really valuable data living in different places, that we need to find a way to pull together so that we can leverage it for analytics and for machine learning and AI in a different way than we have in the past. I also think being able to help our national defense have the right data and the right AI and predictions at the edge of the battlefield, where you oftentimes don’t have time to send a request back to some kind of central place and where those decisions have to be made in real time, is incredibly important. So I think those are all important things. And the last piece is, as everyone has said: sometimes people mistakenly think that we’re okay on investment in AI in this country because you’ve got these big companies making giant investments, and the reality is it’s not enough. It has to be a deeply funded partnership between government, academia and companies.

Thank you. A quick question for Dr. Walker. This question of challenge-based R&D: DARPA has really pioneered that, not only for the US but for the world, and many of my own most exciting experiences in pushing on research have come from those challenges. I’m excited to see that Jake, for instance, is pushing on national mission initiatives. How do you see the future evolving in terms of mission-based or challenge-based AI research?

Thanks. Yes, DARPA has used challenges very successfully. The ones everybody talks about were last decade’s self-driving car challenges, which led to a lot of what we’re using today. The big challenge I see… DARPA has been funding AI research for 56 years now, through the expert-system phase and machine learning, and I very much agree with Eric’s comment that it’s still very fragile and can be manipulated. Now I think the next big challenge for AI, which was well articulated by you as well, is the human-machine partnership and how we rely on machines to help us do our jobs better. That’s gonna be incredibly important to warfighters. We have started a program at DARPA called the AI Exploration program, which is designed to explore very quickly all aspects of the human-machine partnership. And by quickly I mean we’ll put a solicitation out to the world and promise to look at the proposals that come in to solve a particular problem and award contracts in 90 days or less. That’s pretty quick. We’ve been successful at doing that now for the last 14 months. Every month a new topic comes out, and we’ve got a very diverse set of universities, small companies and large companies working with us on all aspects of this human-machine partnership: everything from machines being able to tell the human when they’re not competent to do a particular job, to learning with less labels, I think it’s called fewer labels now because that was grammatically incorrect. The department has a problem: we don’t have a lot of the data that the commercial sector does. So how do you learn with less data? And having the machine be able to explain itself to the human operator, building that trust between human and machine, is gonna be incredibly important, especially for the warfighter if they’re actually gonna use AI in life-or-death situations. That is the next big challenge in AI, and that is where we’re focused.
Will we do a big self-driving-car-style challenge in that space? We’re not there yet, but we’re exploring these different avenues and we hope they will turn into bigger programs.
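The “learning with fewer labels” theme Dr. Walker mentions can be sketched in miniature with self-training: start from a handful of labeled points, pseudo-label an unlabeled pool, then refit on everything. The one-dimensional data and nearest-centroid rule here are invented purely for illustration, not DARPA’s actual methods.

```python
def centroids(examples):
    """Compute the mean feature value per class from (x, label) pairs."""
    sums, counts = {}, {}
    for x, y in examples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(cents, x):
    """Assign x to the class with the nearest centroid."""
    return min(cents, key=lambda y: abs(x - cents[y]))

labeled = [(1.0, "a"), (9.0, "b")]        # only two labeled examples to start
unlabeled = [2.0, 1.5, 8.0, 8.5]          # cheap, unlabeled data

cents = centroids(labeled)
pseudo = [(x, predict(cents, x)) for x in unlabeled]  # pseudo-label the pool
cents = centroids(labeled + pseudo)                   # refit on everything

print(predict(cents, 4.0))  # a
```

With only two real labels, the refit centroids (1.5 for "a", 8.5 for "b") are already shaped by the unlabeled pool, which is the core trick when labeled data is scarce.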

Fantastic. Thank you. And a question for Dr. Horvitz, following on from Dr. Walker’s comments about access to data. We did look, in the commission, at the question of whether China has a large advantage because of both its volumes of data and perhaps fewer restrictions on federal use of data. What do you think about the US position on the availability of data?

So let me just say that whether you’re China or the US or any nation-state or a research team, we hear a lot about big data, but we’re really in a world of data scarcity when it comes to specific operational tasks and needs. This means we need to innovate. We need to come up with research programs that enable us to understand how best to leverage models, simulations and data generalization techniques, as well as, where we can, to transfer or generalize from the data we have to new settings, through technologies referred to broadly as transfer learning. These are all interesting challenge and opportunity areas for AI research. They’re all gonna require government engagement, with scenarios and people and ideas and funding; private-sector ideas, because it’s more competitive terrain when it comes to understanding how to make the most of our data and to generate new datasets that are specific and aimed at particular applications; and academia, for its incredible lifeblood of ideas and over-the-horizon thinking. Thinking about this whole area of simulation: right now there’s been a rise of the idea of taking physics-based or model-based simulations to generate data, to actually do machine learning in simulated worlds and become expert at doing that, and then, when you take your trained system that learned in a world that might not be full fidelity because it’s not the real world, understanding its blind spots and biases to make it as good as it can be in the open world. Again, this is a core research area that we can excel at. It’s gonna be important no matter how much data is collected copiously in the stream of operations, like the behavioral data that we hear so much about the Chinese collecting.
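Dr. Horvitz’s point about training in simulation and then auditing blind spots in the real world can be shown with a toy one-dimensional example: a model fit on clean simulated data is scored again on a shifted “real” set to measure the gap. Both datasets and the threshold “model” are invented solely for illustration.

```python
def fit_threshold(examples):
    """Fit a 1-D threshold classifier: midpoint between the class means."""
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, examples):
    """Fraction of (x, label) pairs the threshold rule gets right."""
    return sum((x > threshold) == (y == 1) for x, y in examples) / len(examples)

# Simulation is clean and well separated...
sim = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
# ...but the "real world" is shifted and overlapping.
real = [(0.3, 0), (0.55, 0), (0.45, 1), (0.7, 1)]

t = fit_threshold(sim)   # roughly 0.5
print(accuracy(t, sim))  # 1.0
print(accuracy(t, real)) # 0.5
```

The drop from perfect simulated accuracy to chance-level real accuracy is exactly the kind of blind spot that has to be measured, and then corrected with real-world data or better simulation, before such a system is trusted in the open world.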

Thank you very much. At this point, I’m going to open it to questions from the floor. You can raise your hand and a microphone will magically appear in it. Yes, please.

Thank you. My question relates to how efficient government investment is in helping to de-risk or otherwise encourage private investment. In the AI world there’s a lot of investment from all sources now, in this particular dawn of AI. So maybe the question relates more to the emerging technologies, for example quantum computing, which is a strategic area for the US and increasingly for US government investment. There, private sector investment is lagging because the return on investment is so far in the future it’s kind of incalculable. That’s not true for AI today. But my question is, with all the different streams in your entire graph of investment, what’s your view about the efficiency of government investment in helping to de-risk or otherwise encourage private investment in emerging technology areas, and possibly in other areas of AI that come up in the future?

It’s a great question, thank you. The whole idea of having some bodies, some agencies like the National Science Foundation, fund the earliest research is to de-risk it over the long term. Our investments sometimes are very, very small. Take 3D printing. We were the first agency to invest in it, in one single researcher at the University of Texas at Austin, and everybody thought it was crazy. “What is this about? Where’s this gonna go?” And of course it’s gone very, very big. And many of the technologies that comprise our cellphones, in the very beginning, were thought to be just not very interesting or useful. So we invest at a time, and for not very much money, when things can grow and start to develop some interest and some prototypes. Then we also, of course, invest in the Small Business Innovation Research program. We were the first agency to do so, back more than 30 years ago, and now the whole federal government does that. That’s kind of a next step, and our evolution has been to develop more and more programs that progress toward what Steve is investing in at DARPA, more challenge-style things. But the major part of our portfolio is to invest in the core and to do exactly what you’re saying, which is to de-risk bright, brilliant new ideas. We do have programs for once you get an idea that looks like there’s light at the end of the tunnel; the Innovation Corps program would be one, and our Convergence Accelerator is our newest idea to promote a fast track for research.

If I can just add to that: sometimes investing in top talent, in education, and in STEM has the effect of actually taking care of some of the concerns that you expressed by bringing great minds into even low-resource, low-investment areas. One example of this, I would say, is that what’s called the AI winter, 1986 to 1990 or so, was actually secretly AI summer. There was an explosion, a revolution called the probabilistic AI revolution, that happened on a very low budget with top talent.

Absolutely true. And if I could just add an example of a challenge, going back to challenges: the Spectrum Collaboration Challenge we just did, seeing if we could take hundreds of radios and actually build AI into those radios to have them collaborate automatically to share different pieces of the spectrum. This was not a lot of money for DARPA to spend. It’s developing a future workforce, and lots of companies showed up to see the final competition, and I think that’s gonna pay dividends for where we go in 5G and 6G in the future.

Thank you. Next question. I see a question in the back there. By the way, thanks everyone for contributing questions. It really helps, I think, to focus today if it’s a very interactive discussion.

So academic computer science is having a bit of an issue right now where many academics are moving to the private sector, because of salary, better support, or access to various other things. My question to the panelists is: is this a problem? Maybe it’s okay, since they’re staying in the United States. Or is this something we should be worried about, because they’re not staying in the university, they’re not training graduate students, they’re not teaching? As some of you are funders of research, how do we think about this? Is this a problem, is it okay? What do we do about it?

I think it is a problem. We need to have top talent teaching our future generations of researchers and scientists and training the PhDs who go off to become the next generation of leaders. It’s something that I personally think a lot about, my colleagues think about, and we’ve been thinking about on the commission: what’s the future of the incentive models and the support models that will put top talent into teaching positions and give people, even in industry, a chance to teach and mentor students moving forward? One direction is coming up with new kinds of hybrid models. We talked about that a bit in the report. We see interesting innovation going on there by major companies as well as universities who are on both sides, for example loosening up the tightrope of IP concerns to enable more back-and-forth visitation of top talent. So I think we’re gonna see some innovation there in terms of, we’ll call them, the education and business models.

If I could quickly add, I think it’s at crisis level. Faculty members in computer science typically spend about 40% of their entire time trying to raise money to fund their graduate students, who are the next generation. It’s getting to be a pretty brutal life, and we’re losing faculty members to other countries and to industry, and frankly the same problem is happening for our national labs as well. Those of us in the big internet companies are often asked whether we’re okay with hiring folks out of faculty positions. I want to turn this over to Andy: what do we big internet companies think about this issue?

Well, I think the panel’s capturing it. We do bring faculty members into these companies, and they’ve added quite a bit, but I think it’s really concerning if we don’t have people staying in academia and if we don’t build the right types of partnerships. The way we think about it, and I think it also just makes common sense, is that if you have the right types of people who are trying to do research and find breakthroughs that change the way we’re all able to operate, they’re a type of builder that needs challenges that are motivating to them. And so if the set of interesting challenges stops with the thesis, then they’re on to the next thing and they’re less likely to stay. So I think we have to, both industry as well as government, find ways to provide and fund the right types of initiatives that really are game changers, that incent the right types of people to stay in academia, to solve some of the hardest problems, and to help teach others as well.

[Andrew] Thank you.

And we have to provide stable funding for young people. That’s the comment that I got going around giving talks at universities this week: students are just not secure. They have to work so hard to get a grant and, as he said, 40% of their time is spent on all the infrastructure to do with the grant, so they get discouraged and wanna go off to industry. That said, one of the best examples I saw of the potential for the future was the quantum computer, the Sycamore example. We visited John Martinis, who’s the lead author on that paper that just came out about quantum supremacy in computing. He’s at Google in Santa Barbara, and he’s really a faculty member at UC Santa Barbara. He was showing us all around the lab, and the computers inside their insulators and such, and we got to do the programming and so forth, and I said, “Where do you make the chips?” And he said, “Over at the university. They have the clean labs.” So, I mean, look at how that’s leveraged that investment of ours. And we’re funding the quantum foundry there, and all the students that are now at Google are going back and forth. So there is hope that we can do this, but we have to do it over a broader set of domains than just specific examples here and there.

And, in fact, to make one last comment, maybe this is a call to myself and others: I believe that we should really promote this idea that experienced, industry-hardened scholars head back to academia for a portion of their careers to teach and train.

Very good. Question raised in the back over there. While the mic’s moving over there, I mentioned to Eric, we’d love you to do that. (laughter) We would love for you to do that.

So question for the panel, is there any low-hanging fruit in terms of legislation or regulation that we can take care of now? There’s a lot of talk of guidelines that need to be done in the future but it seems like a lot of those pertain to principles that we might not be quite sure on yet. There’s still a lot of research that needs to be conducted. So is there anything that can be done right now?

Thank you for the question. I want to emphasize, this is an interim report and so we will be making those very specific recommendations at the end of the commission study. At the moment we’re really seeking information and advice. Having said that, does the panel have comments on this?

There are rising AI technologies that provide new powers to private and public sector organizations and nation states, and this includes facial recognition, for example. And we already see countries using these technologies in ways that are at odds with the values of this country. There’s an opportunity for us to make some core statements, in the realm of regulation, about how private and public sector agencies use these technologies, and about best practices. I know there’s a bunch of work going on in this space with civil liberties organizations and civil society as well as technology groups. So that’s one example of where a technology has arisen. It’s out there now, it can be used in a number of ways, and it creates new kinds of pressures when it comes to human rights: potential risks to freedom of expression, freedom of assembly, and privacy that we can start thinking deeply about as a country.

I’ll add one. Just generically: we talked a little bit about workforce in academia and in corporations, but the Defense Department could actually use more AI experts. In order to do that, you’re gonna have to boost some of those salaries to attract the best and brightest people, at least for a time. So if there are proposals on the Hill to do that, at least in a pilot-type program, that would be extremely helpful to the Defense Department.

Yes.

Thank you. Jim Keffer from Lockheed Martin. We heard about going after top talent, university grants, going after those college-age students. My question refers to way before that time, when the kids are in kindergarten and first grade, second grade, and haven’t been told what they cannot do yet. How do we get them on a track so they think, “Hey, I can do this STEM stuff”? Because I think as a nation we need to grow the foundation. We need to increase the number of young people, very young people, children, who are interested in going down these tracks. We seem to focus a lot on the top-end talent, but I don’t see enough focus on that very early talent across the nation, in rural areas and underrepresented groups. Does the commission have a focus on reaching into that very young group and trying to get them on a path? Because that involves secondary schools, middle schools, elementary schools, teachers, teacher pay, all that. But growing that foundation I think is absolutely key. Otherwise we’ll be having this same discussion 10 years from now and 20 years from now. Thank you.

Really good point. Thank you for making it and it is something that I think the commission needs to focus on very carefully.

Certainly. The White House put out its strategic plan for STEM education almost a year ago, last December, and one of the three major goals in there is computer literacy; another is inclusion of all people in the US in STEM occupations. It’s something that NSF is really devoted to. We have a Computer Science for All program, which extends over many government agencies now, and also an INCLUDES program, which is about inclusion and diversity in the STEM workforce starting from very young, and we’re funding pilot programs all over the country. So this is very much a goal of the interagency groups.

If I could just quickly add: two-year colleges, and opportunities for students who aren’t gonna be getting into one or two of the elite four-year colleges we’ve talked about, are extremely important. And I think all of the big cloud vendors up here would comment that part of the big development work at the moment is creating tool sets to empower folks to do useful machine learning, computer vision, and speech recognition without needing that four-year education and a PhD. We have a question up here in the front. (mic distorts sound) Okay, thank you. Are we good? Go.

So I’m Adam Drake. I’m a Presidential Innovation Fellow. First of all, thank you all for being here today and talking with us. One thing that seems to be emphasized is leadership and strength as being the same thing, but another way of looking at strength is adaptability and resilience, and speed of adapting to changes that happen elsewhere. A lot of the context for the conversation today is being strong by being the first to develop something, which makes a lot of sense if the thing you have developed is hard to adapt, if it requires a lot of hardware or other stuff. What we see in artificial intelligence is that a lot of this is public research once it’s been done. So it becomes more important to adapt to what is available than to be the first to have created it. Can you help me understand a little more of your thoughts on the interplay between leadership in terms of being first and leadership in terms of being robust and speedy in how you adapt to what has been developed?

So let me just say that I think this whole effort of the National Security Commission on AI is part of being flexible and adapting, thinking across the nation about how to be more adaptable, what we should be doing differently, how to move forward. As I said in my opening remarks, taking today’s existing technologies and figuring out how to use them, starting with legacy systems and moving forward with technical refresh and use cases, is a deep and hard engineering and technical challenge in itself, a scientific challenge. There were technologies available in the late 80s and early 90s that, I will put out, would have changed healthcare in this nation if those early-stage technologies could have been translated into use cases. So I think we underestimate the challenge that’s going to take, and I’d like to see a focused effort in this country on that adaptability: the idea of understanding use cases across government agencies, and how to basically make better use of existing technologies. That will prepare us and give us a foundation for doing tech refresh as we advance the technology on top of the learnings. So it’s a really good comment.

I would just quickly add that, yeah, for sure we have to be adaptable and we have to use the technology responsibly. But if you look at the history of governments, and the history of business, speed disproportionately matters. It’s actually more important than people give it credit for, and there is so much we could be doing to help on almost every dimension of life right now if we were really investing like we truly meant it across the board and it was a broad national urgency. I think the other thing to remember is that we have incredible datasets that could potentially be at our disposal if we use them the right way, but you don’t just have the datasets and then build models and then have answers that work. It takes a lot of iterating. It takes a lot of building the models and training the models and tuning the models. We all work in organizations that do a fair bit of machine learning and AI, and we can all tell you that while incredible advancements have been made, there’s still so much more we have to do to make these algorithms as productive and as safe and as helpful as we want. And so while we have to continue to proceed adaptively all the way through, I do think speed matters.

Just to add a comment on this: putting AI aside for a second and looking at all the pieces you need to get it right, including data ingestion and flows across the organization, the infrastructural innovation that’s needed is not to be underestimated. You wouldn’t necessarily point at that and say that’s AI research, but it certainly is enabling and critical.

Thank you. I wanna make sure we have time for a couple more questions. There’s a question over here.

Hi. I’m Melisa Flagen with C-smart. I think we had an unintended consequence of federal funding of research in academia that led us to start buying professors out of teaching. So the teaching deficit, the training of the next generation of scientists and engineers, is not just a problem of them going to industry; it’s also a problem that as soon as they get an NSF grant or a DARPA grant or an ONR grant, they get bought out of teaching. So I think we could think about using the NDAA or other approaches to fund lecturers and fund adjunct faculty at a living wage. It shouldn’t be an act of charity to educate the next generation, and that is actually something that NSF and ONR and other places should really be thinking about. And I think the DoD has an opportunity here, specifically to get people excited about security as an avenue, in thinking about funding actual educators as opposed to just funding the research. We need both, but I think right now we’re disproportionately skewing toward the research.

Very very good points. Any comments on this?

She’s right.

Yeah. I’ve absolutely seen this.

Yes. By the way, it turns out there’s a line-of-sight issue: I cannot see in that direction or that direction, so please yell or something if you’re over there.

Hi. Thank you. Can everybody hear me? Hi. Yusuf Azizullah, CEO of GBAC, Global Board Advisors. Faculty at the University of Maryland and also strategy at Germany’s Mannheim Business School. My question: as we continue advancing in AI, the topic of academia comes up, but I think if you look at it from a practical point of view, Stanford, MIT, and all the big Ivy League academic institutions, we have to pause and see what our peers are doing in Europe, and a great case study comes from Finland. The University of Helsinki, whose course I actually took, is giving free AI courses, and more than 200,000 people, the masses, have taken this free artificial intelligence course, Elements of AI. I think the problem we have to solve is not only the labor force but also how we educate our citizens broadly. To the earlier point from Lockheed Martin, how do we give a fair chance to a kid sitting in West Virginia versus a kid growing up in Silicon Valley? All these academic Ivy Leagues, and I did go to one in Boston myself, cannot reach the masses. If we have to do free education, I think that’s the practical approach across universities for citizens to advance, and I think that’s something we should discuss here, and this would be a great forum for it. Thank you.

Thank you. I really appreciate that comment. I’ve noticed that we are now out of time for this session. I would like everyone to remain seated for the next session but prior to that I really want to sincerely thank our four amazing panelists. (applause) Thank you. And as this panel leaves the stage, I would like to welcome my fellow commissioner, Safra Catz, who leads the commission’s working group on applications of AI for national security. Thank you.
