CyLab names 2025 Presidential Fellows
Michael Cunningham
Oct 20, 2025
Each year, CyLab recognizes high-achieving Ph.D. students pursuing security- and/or privacy-related research with a CyLab Presidential Fellowship, which covers a full year of tuition.
This year’s CyLab Presidential Fellowship recipients are:
Elijah Bouma-Sims
Ph.D. Student, Software and Societal Systems Department
Advised by Lorrie Cranor, Director and Bosch Distinguished Professor in Security and Privacy Technologies, CyLab; FORE Systems University Professor, Engineering and Public Policy, Software and Societal Systems Department
Internet-based fraud, extortion, and other deceptive schemes (collectively called “scams”) are some of the most common digital safety threats people experience today. While anyone can be scammed, scam victimization is not evenly distributed across the population: certain “at-risk” or vulnerable user groups are more likely to be victimized by scams.
Bouma-Sims’s research seeks to understand how at-risk populations are disproportionately affected by scams and to develop inclusive solutions to protect users from harm. Building on insights from his previous research, he will develop and evaluate accessible interventions to mitigate the threat of instant messaging-based scams, which remain understudied. Specifically, he will design warning systems powered by generative AI (GAI) that analyze message content and user-specific information to detect potential fraud. These systems will provide context-sensitive advice, empowering users to distinguish legitimate messages from fraudulent ones.
Hao-Ping (Hank) Lee
Ph.D. Student, Human-Computer Interaction Institute
Advised by Sauvik Das, Associate Professor, Human-Computer Interaction Institute; and Jodi Forlizzi, Herbert A. Simon Professor in Computer Science and HCII, Human-Computer Interaction Institute
AI has created vast technological opportunities and a global market projected to exceed $244 billion by 2025. Yet, these advancements come with new privacy challenges. Lee’s prior research—both empirical (interviews, field deployments) and theoretical (AI privacy risk taxonomy)—demonstrates that consumer AI systems such as smart speakers, generative AI tools (e.g., ChatGPT), and behavioral advertising introduce unique and underaddressed privacy risks. This body of work underscores the need for privacy-preserving AI development practices and a shift toward human-centered design paradigms for AI privacy.
Building on his past work, and drawing from human-computer interaction (HCI), usable privacy and security, and human-centered AI, Lee’s thesis centers on building systems to support practitioners in identifying, reasoning about, and mitigating AI-entailed privacy risks in consumer-facing products. He is working to improve practitioners’ awareness of AI privacy harms pertinent to a product concept by developing interactive privacy risk assessment tools.
Terrance Liu
Ph.D. Student, Machine Learning Department
Advised by Steven Wu, Associate Professor, Software and Societal Systems Department
Greater availability of data has advanced the ways in which organizations and companies conduct statistical analyses and build AI systems. However, the adoption of data-driven decision-making has raised the question of how these methods can be deployed in a socially responsible manner. For example, sensitive data poses privacy risks for the individuals from whom it is collected.
Liu’s research aims to devise methods that contribute to social good and perform reliably under real-world constraints, particularly through the lens of privacy and uncertainty quantification. Much of his work has focused on differentially private (DP) machine learning and its applications in synthetic data generation and, more recently, privacy auditing. In his proposed work, Liu aims to study these principles of privacy and security more broadly, exploring how to quantify uncertainty when measuring the privacy risks of data-driven AI systems.
Alexandra Nisenoff
Ph.D. Student, Societal Computing, School of Computer Science
Advised by Nicolas Christin, Department Head and Professor, Software and Societal Systems Department; Professor, Engineering and Public Policy; and Lorrie Cranor, Director and Bosch Distinguished Professor in Security and Privacy Technologies, CyLab; FORE Systems University Professor, Engineering and Public Policy, Software and Societal Systems Department
In the face of prevalent online tracking and the potential for tracking-enabled discrimination, many individuals turn to tools like ad blockers and other extensions (e.g., Privacy Badger) to protect themselves. Most browsers have also taken the extra step of building tracking protection directly into the browser itself. Unfortunately, these tracking protection tools can have the unintended consequence of causing websites to stop functioning properly, which Nisenoff terms “breakage.” Breakage can manifest as being unable to add products to a cart, buttons that do not work, videos that will not play, page elements that are clearly misplaced, or major portions of a website missing entirely. This presents users with a lose-lose choice: put up with broken websites, or disable the protections they have in place and allow themselves to be tracked.
Nisenoff believes that people shouldn’t have to tolerate being tracked just to use the internet. While breakage is a deciding factor in which forms of tracking protection are rolled out in mainstream browsers, it is often only a side note in academic papers. Even when breakage is considered, it is usually to check whether new tools cause breakage, rather than to understand or fix existing issues. Nisenoff’s research investigates how breakage is experienced and how to empower users and developers to fix or avoid these problems.
Hugo Sadok
Ph.D. Student, Computer Science Department
Advised by Justine Sherry, A. Nico Habermann Associate Professor of Computer Science, Computer Science Department
Networked applications such as distributed ML training, databases, and web servers have prompted network operators to deploy substantially higher network capacities over the past decades. Today, datacenters and enterprise networks offer line rates of 100 Gbps or more. Unfortunately, existing operating systems struggle to expose these capacities to applications. This limitation is fundamental to the designs adopted by popular operating systems such as Linux and Windows, which force every network transfer to invoke system calls, context switches, and data copies. These overheads are usually seen as necessary to implement network security features such as firewalling, intrusion detection, and performance isolation (also known as QoS), because such features must be interposed between applications and the network. Sadok’s research question is: How can operating systems expose the network line rates that are available today while continuing to support interposed security features?
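To make the overhead concrete, the sketch below shows the conventional POSIX socket path that this line of work seeks to move past. It is a minimal, illustrative C example (the port, address, and loop count are arbitrary placeholders, not details from Sadok’s research): every transfer pays for a system call and a kernel data copy precisely so that the kernel can interpose on the traffic.

```c
/* A minimal sketch (not Sadok's code) of the conventional kernel
 * socket path: every transfer crosses the user/kernel boundary. */
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* syscall to create socket */

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9000);                /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    char buf[1400] = "payload";
    for (int i = 0; i < 1000; i++) {
        /* Each sendto() is a system call and context switch, and the
         * kernel copies buf into its own socket buffers so it can
         * interpose (firewall rules, QoS) before transmission. This
         * per-transfer cost adds up at 100 Gbps line rates. */
        sendto(fd, buf, sizeof(buf), 0,
               (struct sockaddr *)&dst, sizeof(dst));
    }
    close(fd);
    return 0;
}
```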
Sadok’s research has explored two different paths for reconciling the need for both performance and security. His past work explored a vision in which novel hardware implements security features through secure interposition, without the overheads of system calls, context switches, and data copies. His current and proposed work explores a new direction in which interposition is performed entirely in software, but with considerably less overhead.
Rose Silver
Ph.D. Student, Computer Science Department
Advised by Elaine Shi, Professor, Computer Science Department and Electrical and Computer Engineering
Silver studies foundational problems at the intersection of privacy and algorithms. She aims to design algorithms and data structures that are privacy-preserving, without compromising on performance. Silver is especially interested in privatizing algorithmic building blocks that are core to AI and machine learning applications.
A common narrative in the study of privacy-preserving algorithms is that, in many settings, privacy and efficiency are unavoidably at odds with one another. This is often due to inherent privacy-utility trade-offs in certain settings, or because many private algorithms are too unwieldy in practice. As a result, many practitioners struggle to meaningfully integrate privacy into their algorithmic workflows. This disconnect represents a missed opportunity: privacy-preserving methods could enable safe, high-value computation over sensitive data across a range of real-world systems.
In her research, Silver seeks to develop new algorithmic techniques that allow privacy and efficiency to coexist. Furthermore, these techniques are grounded in a design philosophy that emphasizes simplicity and practicality.
Taro Tsuchiya
Ph.D. Student, Societal Computing, School of Computer Science
Advised by Nicolas Christin, Department Head and Professor, Software and Societal Systems Department; Professor, Engineering and Public Policy
The traditional banking system is heavily regulated; it often requires identity verification to open accounts or conduct transactions through brokers. In the last decade, the financial sector has become increasingly digitized and has expanded what retail users can do: trading with just one dollar on mobile apps, or copying trading strategies from strangers online. With blockchains, users can even perform tasks traditionally carried out by governments or banks, such as minting new financial assets or verifying transactions themselves. However, it is not trivial for users, developers, and platforms to understand the risks behind these technologies (e.g., cryptography, game theory, and networking). As a result, new threats that exploit the complexity of these financial services have emerged, ranging from the societal to the software and system levels: sending spam or toxic comments, manipulating blockchain wallet UIs for phishing, or performing denial-of-service (DoS) attacks on blockchain nodes. Although these attacks are understudied, their underlying principles are not necessarily novel in computer security.
Tsuchiya’s research aims to analyze such attacks through the lens of computer security while incorporating the particularities of the financial sector. For instance, users typically go through four stages to complete a transaction on a blockchain: 1) retrieving information online, 2) setting up a blockchain wallet, 3) exchanging fiat money for cryptocurrency, and 4) sending the transaction to the network. He aims to capture the risks associated with each step.