Research Area: Privacy Protection
Cross-Cutting Thrusts: Usable Privacy and Security
Scope: Privacy decision-making has become increasingly complex, as information systems have vastly expanded our ability to share information about ourselves with others, often permanently. The complexity is such that our judgments in this area are prone to errors stemming from lack of information, insight, or computational ability, or from problems of self-control and limited self-insight uncovered by research in behavioral economics and decision research. In the latter case, affording users more information about privacy and more granular control over privacy settings does not solve their problems: it increases their cognitive costs without addressing their underlying cognitive and behavioral biases. We propose to study, design, and test systems that anticipate, and sometimes even exploit, the cognitive and behavioral biases that hamper users’ privacy and security decision-making. These systems “nudge” users toward behaviors that the users themselves have claimed to prefer, or that sound empirical evidence has shown to be preferable from a privacy perspective, an approach inspired by the growing body of behavioral research on “soft” paternalism. Our proposal combines theories and methodologies from behavioral economics, behavioral decision research, human-computer interaction, usability research, and machine learning to assist privacy decision-making in information systems. We will apply our models to three loosely defined and partially overlapping domains: online social networks, mobile and behavioral advertising, and location sharing.
Our approach will be, first, to conduct foundational studies to understand user privacy needs, preferences, and behaviors in these domains; second, to develop “nudging” technologies that support and improve privacy decision-making in these domains; and third, to conduct user studies evaluating the effectiveness of our technologies in countering users’ biases and increasing their overall welfare and satisfaction.
Outcomes: The objective is to anticipate, counter, and sometimes even exploit cognitive and behavioral biases that lead individuals to make decisions that deviate from their professed preferences and actual attitudes, that they stand to regret, or that reduce their overall welfare. To do so, we aim to develop a scientific body of knowledge about, and then empirically test, the design of privacy technologies that nudge users without restricting their choices. This is an ambitious multi-disciplinary effort, focused on three exemplary application domains: online social networks, mobile and behavioral advertising, and location sharing.