The finalists for WashingtonExec’s Pinnacle Awards were announced Oct. 13, and we’ll be highlighting some of them until the event takes place virtually Dec. 8.
Next is Artificial Intelligence Industry Executive of the Year (Public Company) finalist Lloyd Greenwald, who’s vice president of advanced cyber science at CACI International. Here, he talks proud career moments, focus areas going forward, turning points in his career and more.
What was a turning point or inflection point in your career?
Learning how to write a winning competitive proposal was an inflection point in my career. Being selected to perform DARPA-hard research is exciting and opens up doors for future work and partnerships. As I learned from a series of smart and accomplished mentors, writing a winning proposal doesn’t start when the BAA is published. The process starts very early.
One component is meeting with potential customers (program managers at DARPA) and discussing their interests and the hardest problems they are trying to solve. However, it starts even earlier, with past performance and differentiating capabilities to bring to a new mission problem. It takes a great team to continue winning good work, supporting missions and building the past performance to bring to new problems.
I have been fortunate to work with great teams at Bell Labs, LGS Innovations and CACI. And I’m also grateful for the lessons and support of partners and customers.
What are you most proud of having been a part of in your current organization?
I’m proud that our team has been able to grow from a small organization working on niche research and development programs to being part of a large provider of technology and expertise to national security at CACI.
As part of CACI, we have been introduced to a wide range of national security problems across Defense Department services, the intelligence community and federal agencies. Each of these missions has unique requirements, both technically and organizationally, in terms of how we provide support.
Our team has taken our expertise in artificial intelligence, machine learning, natural language processing, networking and cyber, as well as foundational strengths in software development, integration and deployment and made a larger impact across CACI’s customer missions. We are continuing to find and adapt to new missions and extend the impact we can have on national security.
What are your primary focus areas going forward, and why are those so important to the future of the nation?
We are in a time of rapid change in our ability to create and field mission solutions based on AI and machine learning. AI provides the promise of building systems that have the speed and adaptation needed to confront our most difficult national security challenges, from federal enterprise systems to fielded tactical systems. While the algorithms and tools for doing this are maturing rapidly, there are risks that people recognize as well. We want to be able to apply AI at scale on real problems.
We are focused on being able to do so safely, removing the risk that comes with any new technology. For example, one risk is the emergence of new attack vectors that adversaries might exploit against AI-based systems. It is important that we explore those potential adversarial attacks and address them so that AI-based systems are not less but more secure than traditional systems.
What’s the biggest professional risk you’ve ever taken?
In my first job out of undergrad, at AT&T Bell Labs, things were going well. I was doing AI and computer vision on unique parallel processing hardware and impressing management and customers. I asked my department head if he would send me to a one-year graduate school program to get a master’s degree, a perk many of my peers had received. He told me that his group didn’t support that program.
I received a generous raise that year and could have been very comfortable staying, but I thought if I was ever going to go back to graduate school, now was the best time. So, I took a leave of absence to start a Ph.D. program, thinking I’d come back after a year with a master’s degree.
I stayed to complete my Ph.D. (in AI planning), and I learned new ways of thinking about problems that I don’t think I would have learned had I not taken a risk and left that comfortable position.
Looking back at your career, what are you most proud of?
We crossed the “valley of death.” This is a term applied to DARPA R&D that makes it to a prototype maturation level but doesn’t progress to an operational capability or product. Very few programs are able to cross that valley.
We did so with an active cyber defense program that combined novel natural language processing that considered mutual information across language in documents and a multi-objective optimization solution under uncertainty.
This was high-risk research that wasn’t likely to progress past the DARPA program. But the team impressed DARPA and potential transition partners across the program, and we were continually funded for many years after the DARPA program ended to harden the capability, integrate it with other capabilities and make it operational. That capability is now operational and has already contributed to national security.
I’m proud of our team and everything they have accomplished and continue to do to make this important capability available across government and industry. I am proud to have played a role in supporting this group of future leaders as they created this success early in their careers and themselves now mentor and support our future leaders.
What’s your best career advice for those who want to follow in your footsteps?
Take on challenges that scare you.