CyLab researchers to present at ACM CHI 2024
Michael Cunningham
Apr 10, 2024
CyLab Security and Privacy Institute researchers are set to present 10 papers and participate in one special interest group at the upcoming Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems (CHI 2024).
The conference will take place in Honolulu, Hawaiʻi, from May 11 through May 16, bringing together researchers, practitioners, and industry leaders to share their latest work and ideas and to foster collaboration and innovation in advancing digital technologies.
Carnegie Mellon University research is well represented at ACM CHI 2024. In addition to the CyLab-affiliated papers, CMU students and faculty members will present more than 35 papers at the conference, spanning a wide range of subject areas and research competencies.
Below, we’ve compiled a list of papers authored by CyLab Security and Privacy Institute members that are being presented at this year’s event, as well as a special interest group on human-centered privacy research in the age of LLMs.
Authors: Claire C. Chen, Dillon Shu, Hamsini Ravishankar, Xinran Li, Yuvraj Agarwal, Lorrie Faith Cranor; Carnegie Mellon University
Abstract: The U.S. Government is developing a package label to help consumers access reliable security and privacy information about Internet of Things (IoT) devices when making purchase decisions. The label will include the U.S. Cyber Trust Mark, a QR code to scan for more details, and potentially additional information. To examine how label information complexity and educational interventions affect comprehension of security and privacy attributes and label QR code use, we conducted an online survey with 518 IoT purchasers. We examined participants' comprehension and preferences for three labels of varying complexities, with and without an educational intervention. Participants favored and correctly utilized the two higher-complexity labels, showing a special interest in the privacy-relevant content. Furthermore, while the educational intervention improved understanding of the QR code’s purpose, it had a modest effect on QR scanning behavior. We highlight clear design and policy directions for creating and deploying IoT security and privacy labels.
Stranger Danger? Investor Behavior and Incentives on Cryptocurrency Copy-Trading Platforms
Authors: Daisuke Kawai, Carnegie Mellon University; Kyle Soska, Ramiel Capital; Bryan Routledge, Carnegie Mellon University; Ariel Zetlin-Jones, Carnegie Mellon University; Nicolas Christin, Carnegie Mellon University
Abstract: Several large financial trading platforms have recently begun implementing “copy trading,” a process by which a leader allows copiers to automatically mirror their trades in exchange for a share of the profits realized. While it has been shown in many contexts that platform design considerably influences user choices (users tend to disproportionately trust rankings presented to them), we would expect that here, copiers exercise due diligence given the money at stake, typically USD 500–2,000 or more. We perform a quantitative analysis of two major cryptocurrency copy-trading platforms, with different default leader ranking algorithms. One of these platforms additionally changed the information displayed during our study. In all cases, we show that the platform UI significantly influences copiers' decisions. Besides being sub-optimal, this influence is problematic as rankings are often easily gameable by unscrupulous leaders who prey on novice copiers, and they create perverse incentives for all platform users.
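To illustrate the mechanic the abstract describes, here is a minimal Python sketch of copy trading under simple assumptions: the copier's account mirrors the leader's trades scaled proportionally to account equity, and the leader collects a cut of realized profits. The names (Trade, mirror_trade, settle_profit_share) and the 10% share rate are illustrative assumptions, not any platform's actual API or fee schedule.

```python
from dataclasses import dataclass

@dataclass
class Trade:
    symbol: str
    side: str        # "buy" or "sell"
    quantity: float
    price: float

def mirror_trade(leader_trade: Trade, leader_equity: float,
                 copier_equity: float) -> Trade:
    """Scale the leader's position size to the copier's account size."""
    scale = copier_equity / leader_equity
    return Trade(leader_trade.symbol, leader_trade.side,
                 leader_trade.quantity * scale, leader_trade.price)

def settle_profit_share(copier_pnl: float, share_rate: float = 0.10) -> float:
    """Fee owed to the leader; only realized profits are shared (assumed)."""
    return max(copier_pnl, 0.0) * share_rate

# A copier with USD 1,000 mirrors a leader managing USD 50,000.
copy = mirror_trade(Trade("BTC-USD", "buy", 2.0, 65000.0), 50_000, 1_000)
print(copy.quantity)               # 0.04 -- 2 BTC scaled down by 50x
print(settle_profit_share(120.0))  # 12.0 -- 10% of realized profit
```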
Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks
Authors: Hao-Ping (Hank) Lee, Carnegie Mellon University; Yu-Ju Yang, Carnegie Mellon University; Thomas Serban von Davier, University of Oxford; Jodi Forlizzi, Carnegie Mellon University; Sauvik Das, Carnegie Mellon University
Abstract: Privacy is a key principle for developing ethical AI technologies, but how does including AI technologies in products and services change privacy risks? We constructed a taxonomy of AI privacy risks by analyzing 321 documented AI privacy incidents. We codified how the unique capabilities and requirements of AI technologies described in those incidents generated new privacy risks, exacerbated known ones, or otherwise did not meaningfully alter the risk. We present 12 high-level privacy risks that AI technologies either newly created (e.g., exposure risks from deepfake pornography) or exacerbated (e.g., surveillance risks from collecting training data). One upshot of our work is that incorporating AI technologies into a product can alter the privacy risks it entails. Yet, current approaches to privacy-preserving AI/ML (e.g., federated learning, differential privacy, checklists) only address a subset of the privacy risks arising from the capabilities and data requirements of AI.
Interdisciplinary Approaches to Cybervulnerability Impact Assessment for Energy Critical Infrastructure
Authors: Andrea Gallardo, Carnegie Mellon University; Robert Erbes, Idaho National Laboratory; Katya Le Blanc, Idaho National Laboratory; Lujo Bauer, Carnegie Mellon University; Lorrie Faith Cranor, Carnegie Mellon University
Abstract: As energy infrastructure becomes more interconnected, understanding cybersecurity risks to production systems requires integrating operational and computer security knowledge. We interviewed 18 experts working in the field of energy critical infrastructure to compare what information they find necessary to assess the impact of computer vulnerabilities on energy operational technology. These experts came from two groups: 1) computer security experts and 2) energy sector operations experts. We find that both groups responded similarly for general categories of information and displayed knowledge about both domains, perhaps due to their interdisciplinary work at the same organization. Yet, we found notable differences in the details of their responses and in their stated perceptions of each group’s approaches to impact assessment. Their suggestions for collaboration across domains highlighted how these two groups can work together to help each other secure the energy grid. Our findings inform the development of interdisciplinary security approaches in critical-infrastructure contexts.
A Framework for Reasoning about Social Influences on Security and Privacy Adoption
Authors: Cori Faklaris, University of North Carolina at Charlotte; Laura Dabbish, Carnegie Mellon University; Jason I. Hong, Carnegie Mellon University
Abstract: Much research has found that social influences (such as social proof, storytelling, and advice-seeking) help boost security awareness. But we have lacked a systematic approach to tracing how awareness leads to action, and to identifying which social influences can be leveraged at each step. Toward this goal, we develop a framework that synthesizes our design ideation, expertise, prior work, and new interview data into a six-step adoption process. This work contributes a prototype framework that accounts for social influences by step. It adds to what is known in the literature and the SIGCHI community about the social-psychological drivers of security adoption. Future work should establish whether this process is the same regardless of culture, demographic variation, or work vs. home context, and whether it is a reliable theoretical basis and method for designing experiments and focusing efforts where they are likely to be most productive.
SEAM-EZ: Simplifying Stateful Analytics through Visual Programming
Authors: Zhengyan Yu, Conviva; Hun Namkung, Conviva; Jiang Guo, Conviva; Henry Milner, Conviva; Joel Goldfoot, Conviva; Yang Wang, Conviva, University of Illinois Urbana-Champaign; Vyas Sekar, Conviva, Carnegie Mellon University
Abstract: Across many domains (e.g., media/entertainment, mobile apps, finance, IoT, cybersecurity), there is a growing need for stateful analytics over streams of events to meet key business outcomes. Stateful analytics over event streams entails carefully modeling the sequence, timing, and contextual correlations of events to dynamic attributes. Unfortunately, existing frameworks and languages (e.g., SQL, Flink, Spark) entail significant code complexity and expert effort to express such stateful analytics because of their dynamic and stateful nature. Our overarching goal is to simplify and democratize stateful analytics. Through an iterative design and evaluation process including a foundational user study and two rounds of formative evaluations with 15 industry practitioners, we created SEAM-EZ, a no-code visual programming platform for quickly creating and validating stateful metrics. SEAM-EZ features a node-graph editor, interactive tooltips, embedded data views, and auto-suggestion features to facilitate the creation and validation of stateful analytics. We then conducted three real-world case studies of SEAM-EZ with 20 additional practitioners. Our results suggest that practitioners who previously could not create stateful metrics, or could do so only with significant effort using traditional tools such as SQL or Spark, can now easily and quickly create and validate such metrics using SEAM-EZ.
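To make "stateful analytics" concrete, the sketch below computes one such metric, a per-session rebuffering ratio over a video event stream, in plain Python. It is our illustration of the order- and state-sensitive bookkeeping the abstract refers to, not SEAM-EZ code; the event schema and function name are assumptions made for the example.

```python
from collections import defaultdict

def rebuffering_ratio(events):
    """Per-session fraction of wall-clock time spent rebuffering.

    `events` is an iterable of (session_id, timestamp_seconds, kind)
    tuples, where kind is "play", "buffer_start", or "buffer_end".
    The metric is stateful: it depends on event order and on state
    carried across every event in a session.
    """
    state = defaultdict(lambda: {"since": None, "buffered": 0.0,
                                 "start": None, "end": None})
    for sid, ts, kind in sorted(events, key=lambda e: e[1]):
        s = state[sid]
        s["start"] = ts if s["start"] is None else s["start"]
        s["end"] = ts
        if kind == "buffer_start":
            s["since"] = ts
        elif kind == "buffer_end" and s["since"] is not None:
            s["buffered"] += ts - s["since"]
            s["since"] = None
    return {sid: s["buffered"] / (s["end"] - s["start"])
            for sid, s in state.items() if s["end"] > s["start"]}

events = [("s1", 0, "play"), ("s1", 40, "buffer_start"),
          ("s1", 44, "buffer_end"), ("s1", 100, "play")]
print(rebuffering_ratio(events))  # {'s1': 0.04}
```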
Authors: Yasmine Kotturi, Carnegie Mellon University; Angel Anderson, Community Forge; Glenn Ford, Community Forge; Michael Skirpan, Carnegie Mellon University, Community Forge; Jeffrey P. Bigham, Carnegie Mellon University
Abstract: Generative AI platforms and features are permeating many aspects of work. Entrepreneurs from lean economies in particular are well positioned to outsource tasks to generative AI, given limited resources. In this paper, we work to address a growing disparity in the use of these technologies by building on a four-year partnership with a local entrepreneurial hub dedicated to equity in tech and entrepreneurship. Together, we co-designed an interactive workshop series aimed at onboarding local entrepreneurs to generative AI platforms. Alongside four community-driven and iterative workshops with entrepreneurs across five months, we conducted interviews with 15 local entrepreneurs and community providers. We detail the importance of communal and supportive exposure to generative AI tools for local entrepreneurs, scaffolding actionable use (and supporting non-use), and demystifying generative AI technologies by emphasizing entrepreneurial power, while simultaneously deconstructing the veneer of simplicity to address the many operational skills needed for successful application.
Authors: Zhiping Zhang, Khoury College of Computer Sciences, Northeastern University; Michelle Jia, Carnegie Mellon University; Hao-Ping (Hank) Lee, Carnegie Mellon University; Bingsheng Yao, Rensselaer Polytechnic Institute; Sauvik Das, Carnegie Mellon University; Ada Lerner, Northeastern University; Dakuo Wang, Northeastern University; Tianshi Li, Northeastern University
Abstract: The widespread use of Large Language Model (LLM)-based conversational agents (CAs), especially in high-stakes domains, raises many privacy concerns. Building ethical LLM-based CAs that respect user privacy requires an in-depth understanding of the privacy risks that concern users the most. However, existing research, primarily model-centered, does not provide insight into users' perspectives. To bridge this gap, we analyzed sensitive disclosures in real-world ChatGPT conversations and conducted semi-structured interviews with 19 LLM-based CA users. We found that users are constantly faced with trade-offs between privacy, utility, and convenience when using LLM-based CAs. However, users' erroneous mental models and the dark patterns in system design limited their awareness and comprehension of the privacy risks. Additionally, the human-like interactions encouraged more sensitive disclosures, which complicated users' ability to navigate the trade-offs. We discuss practical design guidelines and the need for paradigm shifts to protect the privacy of LLM-based CA users.
Bring Privacy To The Table: Interactive Negotiation for Privacy Settings of Shared Sensing Devices
Authors: Haozhe Zhou, Mayank Goel, Yuvraj Agarwal; Carnegie Mellon University
Abstract: To address privacy concerns with Internet of Things (IoT) devices, researchers have proposed enhancements in data collection transparency and user control. However, managing privacy preferences for shared devices with multiple stakeholders remains challenging. We introduce ThingPoll, a system that helps users negotiate privacy configurations for IoT devices in shared settings. We designed ThingPoll by observing twelve participants verbally negotiating privacy preferences, from which we identified potentially successful and inefficient negotiation patterns. ThingPoll bootstraps a preference model from a custom crowdsourced privacy preferences dataset. During negotiations, ThingPoll strategically scaffolds the process by eliciting users’ privacy preferences, providing helpful context, and suggesting feasible configuration options. We evaluated ThingPoll with 30 participants negotiating the privacy settings of 4 devices. Using ThingPoll, participants reached an agreement in 97.5% of scenarios within an average of 3.27 minutes. Participants reported a high overall satisfaction rate of 83.3% with ThingPoll, as compared to baseline approaches.
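The abstract does not detail how ThingPoll ranks feasible options, so the following is a purely hypothetical sketch of one plausible rule for suggesting configurations acceptable to all stakeholders: keep only options every stakeholder scores above a comfort threshold, ranked by the worst-off stakeholder's score (a maximin rule). The preference data, threshold, and function names are invented for illustration.

```python
# Hypothetical preference scores on a 0-1 scale (1 = fully comfortable),
# elicited from each stakeholder for each option of a shared device setting.
preferences = {
    "alice": {"camera": {"off": 1.0, "on_device": 0.6, "cloud": 0.1}},
    "bob":   {"camera": {"off": 0.3, "on_device": 0.8, "cloud": 0.9}},
}

def suggest(preferences, threshold=0.5):
    """Per setting, keep options every stakeholder scores at or above the
    threshold, ranked by the worst-off stakeholder's score (maximin).
    Assumes all stakeholders rated the same settings and options."""
    any_user = next(iter(preferences.values()))
    suggestions = {}
    for setting, options in any_user.items():
        feasible = []
        for opt in options:
            worst = min(p[setting][opt] for p in preferences.values())
            if worst >= threshold:
                feasible.append((worst, opt))
        suggestions[setting] = [o for _, o in sorted(feasible, reverse=True)]
    return suggestions

print(suggest(preferences))  # {'camera': ['on_device']}
```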
“Don't put all your eggs in one basket”: How Cryptocurrency Users Choose and Secure Their Wallets
Authors: Yaman Yu, University of Illinois at Urbana-Champaign; Tanusree Sharma, University of Illinois at Urbana-Champaign; Sauvik Das, Carnegie Mellon University; Yang Wang, University of Illinois at Urbana-Champaign
Abstract: Cryptocurrency wallets come in various forms, each with unique usability and security features. Through interviews with 24 users, we explore reasons for selecting wallets in different contexts. Participants opt for smart contract wallets to simplify key management, leveraging social interactions. However, they prefer personal devices over individuals as guardians to avoid the social cybersecurity concerns of managing guardian relationships. When engaging in high-stakes or complex transactions, they often choose browser-based wallets, leveraging third-party security extensions. For simpler transactions, they prefer the convenience of mobile wallets. Many participants avoid hardware wallets due to usability issues and security concerns about manufacturer-provided key recovery services and phishing attacks. Social networks play a dual role: participants seek security advice from friends, but also express security concerns in soliciting this help. We offer novel insights into how and why users adopt specific wallets. We also discuss design recommendations for future wallet technologies based on our findings.
Special Interest Group: Human-Centered Privacy Research in the Age of Large Language Models
Participants: Tianshi Li, Northeastern University; Sauvik Das, Carnegie Mellon University; Hao-Ping (Hank) Lee, Carnegie Mellon University; Dakuo Wang, Northeastern University; Bingsheng Yao, Rensselaer Polytechnic Institute; Zhiping Zhang, Northeastern University
Abstract: The emergence of large language models (LLMs), and their increased use in user-facing systems, has led to substantial privacy concerns. To date, research on these privacy concerns has been model-centered: exploring how LLMs lead to privacy risks like memorization, or can be used to infer personal characteristics about people from their content. We argue that there is a need for more research focusing on the human aspect of these privacy issues: e.g., research on how design paradigms for LLMs affect users’ disclosure behaviors, users’ mental models and preferences for privacy controls, and the design of tools, systems, and artifacts that empower end-users to reclaim ownership over their personal data. To build usable, efficient, and privacy-friendly systems powered by these models with imperfect privacy properties, our goal is to initiate discussions to outline an agenda for conducting human-centered research on privacy issues in LLM-powered systems. This Special Interest Group (SIG) aims to bring together researchers with backgrounds in usable security and privacy, human-AI collaboration, NLP, or any other related domains to share their perspectives and experiences on this problem, to help our community establish a collective understanding of the challenges, research opportunities, research methods, and strategies to collaborate with researchers outside of HCI.