Automated vulnerability detection and prevention
Decades of research in software analysis have produced tools that find many bugs, but these tools still demand more resources and human effort than are often available. As a result, critical vulnerabilities in production software frequently remain undiscovered.
DARPA’s 2016 Cyber Grand Challenge showed that we can do better: that in some contexts, analysis algorithms can be smart enough to find and remove many exploitable vulnerabilities without any human intervention. The Cyber Autonomy Research Center pursues a broad version of this vision, developing tools to find vulnerabilities in binary code and in web applications, as well as to fix these vulnerabilities. We explore both fully automated approaches and tools that work in concert with developers and human analysts, leveraging their skill and experience and multiplying their effort.
- Detecting vulnerabilities in deployed web applications (Bauer, Jia)
- Automated program repair (Le Goues)
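To give a flavor of fully automated vulnerability discovery, the sketch below is a minimal mutation-based fuzzer that finds a crashing input with no human guidance. The `target` function is a hypothetical program under test invented for illustration; it stands in for real binary code and is not any system studied in the center:

```python
import random

def target(data):
    """Hypothetical program under test: crashes when the first byte is 0xFF."""
    if data and data[0] == 0xFF:
        raise RuntimeError("crash: out-of-bounds write triggered")

def mutate(seed):
    """Flip one randomly chosen byte of the input to a random value."""
    data = bytearray(seed)
    pos = random.randrange(len(data))
    data[pos] = random.randrange(256)
    return bytes(data)

def fuzz(seed, iterations=200000):
    """Repeatedly mutate inputs drawn from a growing corpus; return the
    first input that crashes the target, or None if none is found."""
    corpus = [seed]
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        try:
            target(candidate)
        except RuntimeError:
            return candidate  # crashing input discovered automatically
        corpus.append(candidate)
    return None
```

Production fuzzers add coverage feedback, input minimization, and crash triage on top of this basic mutate-and-observe loop; the toy version keeps every mutated input in its corpus regardless of whether it exercised new behavior.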
Actionable security intelligence
The combination of advances in machine learning and the availability of big data is revolutionizing many disciplines, from medicine to advertising. In this research thrust, we pursue a similar revolution in computer security, with a particular focus on producing actionable intelligence: from predicting device compromise before it happens to mining social networking data to expose and explain hidden threats and trends.
- Identifying imminent system compromise (Christin)
- Fast anomaly detection in streaming graphs (Akoglu)
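To suggest what anomaly detection over a stream can look like (the algorithms developed in this thrust are considerably more sophisticated), the sketch below flags a time step whose edge-arrival count deviates sharply from an exponentially weighted running estimate; all parameter values are illustrative:

```python
class BurstDetector:
    """Toy streaming detector: flags a time step when its edge count
    deviates from an exponentially weighted mean by more than
    `threshold` standard deviations."""

    def __init__(self, alpha=0.1, threshold=3.0, warmup=10):
        self.alpha = alpha          # smoothing factor for mean/variance
        self.threshold = threshold  # deviations needed to raise a flag
        self.warmup = warmup        # observations before flagging starts
        self.mean = None
        self.var = 0.0
        self.seen = 0

    def observe(self, edge_count):
        """Consume one time step's edge count; return True if anomalous."""
        self.seen += 1
        if self.mean is None:
            self.mean = float(edge_count)
            return False
        deviation = edge_count - self.mean
        flagged = (self.seen > self.warmup
                   and self.var > 0
                   and abs(deviation) > self.threshold * self.var ** 0.5)
        # Update the exponentially weighted mean and variance estimates.
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
        return flagged
```

Because the detector processes each observation in constant time and constant memory, it can keep pace with a graph stream; the warm-up period avoids false alarms while the variance estimate is still settling.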
Network-level, rapid defense
In an ideal world, each computer system would be completely hardened against attack before it is deployed. While we strive to make this a reality, we also acknowledge that, at least in the short term, many deployed systems will continue to have vulnerabilities. This research thrust develops techniques to defend systems at the network level. Software-defined networks provide a key building block for such defenses, but making them effective at detecting and preventing attacks requires many advances: in how data is efficiently moved across networks to enable effective analysis, and in defensive algorithms that use this data to reach near-optimal control decisions.
- Defending against core-link network flooding attacks (Sekar)
- Rethinking network flow monitoring (Sekar)
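One building block commonly used when rethinking flow monitoring is the sketch data structure, which trades a bounded overestimate for sublinear memory. Below is a minimal count-min sketch, offered as an illustration rather than as the center's specific designs; the flow identifiers are made-up five-tuple strings:

```python
import hashlib

class CountMinSketch:
    """Approximate per-flow packet counters in sublinear memory.
    Estimates may overcount (due to hash collisions) but never undercount."""

    def __init__(self, width=1024, depth=4):
        self.width = width
        self.depth = depth
        self.tables = [[0] * width for _ in range(depth)]

    def _indices(self, flow_id):
        """Yield one (row, column) cell per hash row for this flow."""
        for row in range(self.depth):
            digest = hashlib.sha256(f"{row}:{flow_id}".encode()).digest()
            yield row, int.from_bytes(digest[:8], "big") % self.width

    def add(self, flow_id, count=1):
        """Record `count` packets for the given flow."""
        for row, col in self._indices(flow_id):
            self.tables[row][col] += count

    def estimate(self, flow_id):
        """Return an upper-bound estimate of this flow's packet count."""
        return min(self.tables[row][col] for row, col in self._indices(flow_id))
```

Taking the minimum across rows keeps collision noise down while preserving the never-undercount guarantee, which is what makes such structures attractive for detecting heavy flows on switches with limited memory.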
Counter-autonomy and hardened autonomy
Advances in machine learning and AI enable many new capabilities, including in computer security. At the same time, the pervasive deployment of machine learning and AI algorithms brings new risks. As our defenses and critical infrastructure start to depend on ML and AI, we must consider whether these algorithms themselves have weaknesses that attackers could exploit to compromise our systems. In the Cyber Autonomy Research Center, we address this risk in three steps: First, we investigate whether and how ML algorithms can be subverted by attackers, and what makes them particularly vulnerable. Second, we study how to make such algorithms safer, including by exposing their decision-making strategies to human analysts and by designing and training the algorithms so that they are verifiably more robust. Third, we develop techniques to build AI algorithms that are provably correct and cannot be subverted.
- Attacking and defending state-of-the-art face-recognition algorithms (Bauer)
- Developing algorithms that can explain why they make decisions (Datta, Fredrikson)
- Developing algorithms provably safe against compromise (Platzer, Mitsch)
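The first step above, understanding how ML algorithms can be subverted, can be illustrated on a toy linear classifier with a fast-gradient-sign style perturbation. The weights, input, and epsilon below are invented for the example and do not come from any model studied in the center:

```python
def predict(weights, bias, x):
    """Linear classifier: returns +1 or -1 from the sign of w.x + b."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score >= 0 else -1

def sign(v):
    return 1 if v > 0 else -1 if v < 0 else 0

def fgsm_perturb(weights, x, true_label, eps):
    """Fast-gradient-sign style attack on a linear model: the loss gradient
    with respect to x points along -true_label * sign(w), so stepping each
    feature by eps in that direction shrinks the classification margin."""
    return [xi - eps * true_label * sign(w) for w, xi in zip(weights, x)]
```

Even though each feature moves by at most `eps`, the per-feature steps all push the score in the same direction, so a small, hard-to-notice perturbation can flip the model's decision; this is the kind of weakness the second and third steps aim to expose and eliminate.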