Our research is organized around four research thrusts. In each, the goal is to develop new capabilities to defend our systems while minimizing the human effort needed to do so.



Mayhem, an autonomous hacking system created by CyLab-spinoff company ForAllSecure, sits on stage at the DARPA Cyber Grand Challenge in 2016.

Source: CyLab
Automated vulnerability detection and prevention

Decades of research in software analysis have produced tools that find many bugs, but these tools still demand more resources and human effort than is often available. As a result, critical vulnerabilities in production software frequently remain undiscovered.

DARPA’s 2016 Cyber Grand Challenge showed that we can do better: that in some contexts, analysis algorithms can be smart enough to find and remove many exploitable vulnerabilities without any human intervention. The Cyber Autonomy Research Center pursues a broad version of this vision, developing tools to find vulnerabilities in binary code and in web applications, as well as to fix these vulnerabilities. We explore both fully automated approaches and tools that work in concert with developers and human analysts, leveraging their skill and experience and multiplying their effort.
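To make the idea of automated vulnerability discovery concrete, here is a minimal sketch of mutation-based fuzzing, one common technique in this space. It is illustrative only and not taken from the Center's tools; `parse_header` is a hypothetical buggy target invented for this example.

```python
import random

def parse_header(data: bytes) -> int:
    """Hypothetical target: crashes (IndexError) on inputs that
    start with the 'magic' byte 0x7F but are shorter than 4 bytes."""
    if data and data[0] == 0x7F:
        return data[3]  # out-of-bounds read when len(data) < 4
    return -1

def mutate(seed: bytes) -> bytes:
    """Randomly flip a bit, insert a byte, or drop a byte."""
    data = bytearray(seed)
    op = random.choice(("flip", "insert", "drop"))
    if op == "flip" and data:
        data[random.randrange(len(data))] ^= 1 << random.randrange(8)
    elif op == "insert":
        data.insert(random.randrange(len(data) + 1), random.randrange(256))
    elif op == "drop" and data:
        del data[random.randrange(len(data))]
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 50_000):
    """Return the first input that crashes the target, or None."""
    corpus = [seed]
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        try:
            target(candidate)
            corpus.append(candidate)  # keep inputs the target accepts
        except Exception:
            return candidate  # crashing input found
    return None

crash = fuzz(parse_header, b"\x7f\x00\x00\x00")
```

Even this naive loop quickly discovers the short-input crash; production fuzzers add coverage feedback, symbolic execution, and input minimization on top of the same core idea.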

Faculty: Lujo Bauer, David Brumley, Limin Jia, Claire Le Goues, Maverick Woo

Example projects:


Predictive analytics

Advances in machine learning, combined with the availability of big data, are revolutionizing many disciplines, from medicine to advertising. In this research thrust, we pursue a similar revolution in computer security, with a particular focus on producing actionable intelligence: predicting device compromise before it happens, and mining social-networking data to expose and explain hidden threats and trends.
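As a toy illustration of what "actionable intelligence" can look like (a sketch invented for this page, not one of the Center's systems), the following flags devices whose latest telemetry deviates sharply from their own baseline. The device names and failed-login counts are hypothetical.

```python
import statistics

# Hypothetical per-device telemetry: daily counts of failed logins.
history = {
    "device-a": [2, 3, 1, 2, 4, 2, 3],
    "device-b": [1, 2, 2, 1, 3, 2, 40],  # sudden spike on the last day
}

def anomaly_score(counts):
    """z-score of the most recent observation against the device's own
    baseline; a high score flags likely compromise for analyst triage."""
    baseline, latest = counts[:-1], counts[-1]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0  # guard against zero spread
    return (latest - mean) / stdev

flagged = [dev for dev, counts in history.items() if anomaly_score(counts) > 3.0]
```

Real predictive systems replace this single hand-picked feature with learned models over many signals, but the output is the same kind of ranked, actionable alert.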

Faculty: Leman Akoglu, Kathleen Carley, Nicolas Christin, Fei Fang

Example projects:


Network-level, rapid defense

In an ideal world, every computer system would be completely hardened against attack before it is deployed. While we strive to make this a reality, we also acknowledge that, at least in the short term, many deployed systems will continue to have vulnerabilities. This research thrust develops techniques to defend systems at the network level. Software-defined networks provide a key building block for such defenses, but making them effective at detecting and preventing attacks requires many advances, including efficiently moving data across networks to enable effective analysis, and developing defensive algorithms that use this data to reach near-optimal control decisions.
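The flavor of such a defensive control decision can be sketched as follows: a controller inspects flow records exported by switches and derives drop rules for suspicious sources. This is a hypothetical, simplified example (the flow data, threshold, and rule format are all invented for illustration), not an actual SDN controller.

```python
# Hypothetical flow records exported from switches: (src_ip, dst_port).
flows = [
    ("10.0.0.5", 80), ("10.0.0.5", 81), ("10.0.0.5", 82),
    ("10.0.0.5", 83), ("10.0.0.5", 84), ("10.0.0.5", 85),
    ("10.0.0.9", 443),
]

def scan_suspects(flows, port_threshold=5):
    """Flag sources contacting many distinct ports (a port-scan heuristic);
    a controller could translate these into drop rules for the switches."""
    ports_by_src = {}
    for src, port in flows:
        ports_by_src.setdefault(src, set()).add(port)
    return [src for src, ports in ports_by_src.items()
            if len(ports) >= port_threshold]

# Simplified stand-in for flow rules a controller would install.
block_rules = [{"match": {"src_ip": s}, "action": "drop"}
               for s in scan_suspects(flows)]
```

The research challenge is doing this at line rate, with limited telemetry budgets, and with decision algorithms that come with near-optimality guarantees rather than ad hoc thresholds.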

Faculty: Vyas Sekar

Example projects:


Counter-autonomy and hardened autonomy

Advances in machine learning and AI enable many new capabilities, including in computer security. At the same time, the pervasive deployment of machine learning and AI algorithms brings new risks. As our defenses and critical infrastructure start to depend on ML and AI, we must consider whether these algorithms themselves have weaknesses that attackers could exploit to compromise our systems. In the Cyber Autonomy Research Center, we address this risk in three steps: First, we investigate whether and how ML algorithms can be subverted by attackers, and what makes them particularly vulnerable. Second, we study how to make such algorithms safer, including by exposing their decision-making strategies to human analysts and by designing and training the algorithms so that they are verifiably more robust. Third, we develop techniques to build AI algorithms that are provably correct and cannot be subverted.
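The first step above — showing that ML models can be subverted — can be illustrated with a fast-gradient-sign-style attack on a toy linear classifier. The model, weights, and inputs below are hypothetical, chosen only to make the mechanics visible; real attacks apply the same gradient reasoning to deep networks.

```python
# Toy linear classifier (hypothetical, for illustration only):
# predicts class 1 when w·x + b > 0.
w = [1.0, -2.0, 0.5]
b = 0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    return int(score(x) > 0)

def sign(v):
    return (v > 0) - (v < 0)

def adversarial(x, eps):
    """Fast-gradient-sign-style perturbation: for a linear model the
    gradient of the score with respect to x is simply w, so nudging each
    feature by eps against the predicted class flips the decision."""
    direction = -1 if predict(x) == 1 else 1
    return [xi + eps * direction * sign(wi) for xi, wi in zip(x, w)]

x = [0.9, 0.2, 0.4]             # score 0.8 -> class 1
x_adv = adversarial(x, eps=0.3)  # small per-feature change flips the label
```

That a perturbation bounded by eps per feature suffices to change the prediction is precisely the kind of weakness the second and third steps — robust training and verification — aim to rule out.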

Faculty: Lujo Bauer, Anupam Datta, Matt Fredrikson, Zico Kolter, Stefan Mitsch, Corina Pasareanu, André Platzer

Example projects:

Join us!

Is your company or organization interested in working with our researchers on cyber autonomy? Let us know by contacting Michael Lisanti.