CyLab study shows user-centric threat modeling framework helps privacy experts spot twice as many digital privacy flaws
Michael Cunningham
Feb 27, 2026
From left: Lorrie Cranor and Norman Sadeh presented the UsersFirst framework at the 2025 USENIX Conference on Privacy Engineering Practice and Respect (PEPR ’25)
As privacy laws grow more stringent around the world, companies face increasing pressure to clearly explain how they collect and use personal data, and to give individuals meaningful control over that information.
New CyLab research that expands on previous work with the UsersFirst threat modeling framework shows that a user-centered approach to evaluating privacy notices and choices can make a measurable difference.
At this week’s Symposium on Usable Security and Privacy (USEC) in San Diego, the research team will present findings from its paper, UsersFirst in Practice: Evaluating a User-Centric Threat Modeling Taxonomy for Privacy Notice and Choice. The study demonstrates that privacy professionals using the UsersFirst framework were able to identify significantly more privacy shortcomings in digital products than those who did not use it.
Alexandra Xinran Li, who served as lead author, will present the research at USEC. The paper was co-authored by Carnegie Mellon University researchers Miguel Rivera-Lanas, Debeshi Ghosh, Hana Habib, Lorrie Cranor, and Norman Sadeh, and by University of Illinois Urbana-Champaign researchers Tian Wang, a former CMU postdoctoral researcher, and Yu-Ju Yang, a CMU Master of Science in Public Policy and Management alumna.
Privacy regulations such as the California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR) require organizations not only to disclose their data practices, but to ensure that privacy notices and choices are accessible, understandable, and free from manipulation. Yet, until recently, most privacy “threat modeling” frameworks focused primarily on technical risks, rather than on whether users can meaningfully understand and exercise their rights.
The UsersFirst framework was designed to fill that gap.
“When it comes to privacy, there are many ways in which organizations can fall short in the design and implementation of their products,” said Sadeh. “One particular area that is attracting increasing scrutiny is failures in the way products notify users about their data practices and the choices available to them. Earlier frameworks largely ignored user-centric issues. Our goal in developing UsersFirst was to help analysts systematically identify where products fall short in communicating data practices and enabling real user choice.”
The framework includes four high-level categories and 28 specific types of user-centric privacy threats. These range from making privacy settings difficult to find and understand, to presenting users with choices that do not align with their concerns, to nudging users toward options that may not be in their best interest.
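To make the checklist idea concrete, the sketch below models a UsersFirst-style taxonomy as a simple mapping from categories to threat types that an analyst can record findings against. The category and threat names are hypothetical placeholders rather than the framework's actual labels, and the workflow is an assumed illustration of how such a taxonomy might be used, not the published method.

```python
from dataclasses import dataclass, field

# Hypothetical taxonomy for illustration only: UsersFirst defines four
# high-level categories and 28 threat types, but the names below are
# placeholders, not the framework's actual labels.
TAXONOMY = {
    "Discoverability": ["Hidden privacy settings", "Buried opt-out"],
    "Comprehension": ["Jargon-heavy notice", "Ambiguous data-use description"],
    "Choice alignment": ["Choices misaligned with user concerns"],
    "Manipulation": ["Nudging toward data sharing", "Confirm-shaming dialog"],
}

@dataclass
class Finding:
    category: str
    threat: str
    evidence: str  # where in the product the analyst observed the issue

@dataclass
class Review:
    product: str
    findings: list[Finding] = field(default_factory=list)

    def flag(self, category: str, threat: str, evidence: str) -> None:
        # Only accept threats that appear in the taxonomy, keeping the
        # analysis anchored to the checklist rather than ad hoc impressions.
        if threat not in TAXONOMY.get(category, []):
            raise ValueError(f"{threat!r} is not listed under {category!r}")
        self.findings.append(Finding(category, threat, evidence))

    def coverage(self) -> dict[str, int]:
        # Findings per category, so an analyst can see which parts of the
        # taxonomy they have not yet walked through.
        counts = {category: 0 for category in TAXONOMY}
        for finding in self.findings:
            counts[finding.category] += 1
        return counts

review = Review(product="Virtual try-on app")
review.flag("Discoverability", "Buried opt-out",
            "Consent withdrawal only reachable through a support email")
print(review.coverage())
```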
To put the UsersFirst framework to the test, the CyLab research team conducted a controlled experiment with 26 participants whose backgrounds in privacy resembled those of the analysts expected to use the framework.
Participants were divided into two groups: one group analyzed privacy risks using the UsersFirst taxonomy, while the other group conducted the same analysis without it.
“We specifically recruited people with real privacy expertise,” said Li. “These were not beginners. Twenty-five of the 26 participants had industry experience working in privacy, and all 26 had taken professional training in the area.”
To make the study realistic yet manageable, the team designed two detailed fictional scenarios inspired by real-world systems.
In the first scenario, participants evaluated a mobile app that allows users to virtually “try on” eyeglasses using their phone camera. The app required access to biometric data and presented a privacy policy consent screen. Later, the user persona attempted, and failed, to withdraw that consent.
The second scenario involved a smart TV with voice control features, where users navigated privacy-related tasks such as reviewing data collection disclosures and attempting to exercise privacy rights.
Both scenarios were carefully calibrated to mirror the “threat density” found in real consumer products. The researchers intentionally embedded a representative mix of notice and choice flaws aligned with the framework’s 28 threat types.
Alexandra Xinran Li, a Ph.D. student in CMU's Software and Societal Systems Department, will present the research team's work on UsersFirst in practice at this week’s Symposium on Usable Security and Privacy (USEC) in San Diego.
Participants using the UsersFirst taxonomy identified significantly more relevant privacy threats than participants who conducted the same analysis without it. In one scenario, they found more than twice as many meaningful threats; in the other, 50 percent more.
Importantly, accuracy did not decline.
“It’s not like they were just identifying random issues,” Sadeh explained. “They were zooming in on exactly the right flaws, in some cases twice as many of them, with the same or even greater levels of accuracy.”
The improvement held across all four high-level threat categories, including problems such as hidden privacy controls, confusing explanations, misaligned choices, and manipulative interface design.
In addition to demonstrating the effectiveness of the framework, the study revealed opportunities to refine the taxonomy. The version evaluated in the experiment was labeled version 0.9; the team has since updated it to version 1.1 based on the findings.
Beyond improving their own framework, the researchers say the project contributes something broader: a method for evaluating user-oriented privacy threat modeling tools themselves.
“In this field of privacy threat modeling, we haven’t really had prior work focusing on evaluating the effectiveness of frameworks,” said Li. “What we did shows that it’s possible to rigorously test whether these tools actually help, and that is an important development.”
As privacy laws increasingly emphasize usability, organizations need practical tools to meet those standards.
“It used to be enough to say, ‘you just need to provide people with an opt-out somewhere,’ and then bury that option where no one would ever find it,” said Sadeh. “Now, regulations make clear that choices must be accessible and not manipulative. These are exactly the types of threats our framework helps to identify.”
By helping privacy analysts more effectively detect shortcomings in notice and choice mechanisms, the UsersFirst framework aims to support both better user experiences and stronger regulatory compliance. And the researchers hope that presenting the findings at USEC will accelerate adoption.
“We’ve now demonstrated in a rigorous way that this framework makes a big difference,” Sadeh said. “At this point, you’re running out of excuses not to use it.”
Read the full paper and learn more about the UsersFirst framework at the project’s website.