As the world of “big data” gradually becomes a world of “bigger data,” Carnegie Mellon University CyLab researchers are focused on advancing research in machine learning and artificial intelligence (AI), in which computers can “learn” trends from massive collections of data. This research is being conducted and tested in various applications ranging from facial recognition to systems that can autonomously find and fix software bugs before they are exploited.
Our researchers work in the following application areas of security and privacy. Check out their research:
Researchers develop adversarial training methods to improve machine learning-based malware detection software
Professor Lujo Bauer and a team of researchers investigate the effectiveness of using adversarial training methods to create malware detection models that are more robust to some of the state-of-the-art attacks.
Researchers discover new vulnerability in large language models
Researchers at the Carnegie Mellon University School of Computer Science, the CyLab Security and Privacy Institute, and the Center for AI Safety in San Francisco have developed an attack that can bypass security measures of large language models. Their method enables chatbots like ChatGPT, Claude, and Google Bard to generate objectionable content at high success rates.
CyLab’s Corina Pasareanu and colleagues receive $1.2 million grant to develop automated bug-finding techniques
The National Science Foundation has awarded a $1.2 million grant to researchers at Carnegie Mellon University, UC-Berkeley, and UC-Santa Barbara to develop automated bug-detection and repair techniques that work at large scales.
When privacy and the arts collide
Sophie Calle is a French artist who often blurs the lines between her life and her art. What if Calle knew how to code and took advantage of our personal data to create an even more personalized, privacy-intrusive form of art? That’s something CyLab’s Maggie Oates has been exploring.
$5M Knight Foundation Investment creates center to fight online disinformation
Carnegie Mellon University today announced the creation of a new research center dedicated to the study of online disinformation and its effects on democracy, funded by a $5 million investment from the John S. and James L. Knight Foundation. The new center will bring together researchers from within the institution and across the country.
Malicious social media bots tried, but failed, to diminish NATO during its 2018 exercise
A new study by Carnegie Mellon University researchers illustrates how fake news was spread on Twitter by bots during NATO’s 2018 Trident Juncture Exercise. The study is being presented this week at the 2019 SBP-BRiMS conference in Washington, D.C.
Carnegie Mellon researchers create an AI to help us make sense of privacy policies
Cyber Autonomy @ CyLab
Explore the Cyber Autonomy @ CyLab initiative.