Research Area: Trustworthy Computing Platforms and Devices
Establishing new trust relations in interactive protocols increases the pool of usable services for protocol participants, removes cooperation barriers among users, and enables them to take advantage of “network effects.” Specifically, in an interactive protocol a receiver can use a new service made available by a sender only after some form of on-line payment. However, for these protocols to work, a paying receiver must trust the sender to provide the promised service. Once established, these new trust relations help increase competition among network-service providers, which spurs innovation, productivity, expanded markets, and ultimately economic development. Our research will investigate interactive protocols that promote trust between two untrusting parties (e.g., between service providers and receivers) that do not share a common trusted authority. While helpful in many cases, trusted third parties (TTPs) create additional complexity, uncertainty, and vulnerabilities, and sometimes become attractive attack targets (e.g., Google, Facebook, Microsoft). More fundamentally, the existence of TTPs presupposes an answer to the very question we seek to answer, namely how we can establish trust between two previously untrusting parties.
A basic question that we ask is whether safe protocol states exist in interactive protocols; namely, “is it ever safe to trust a stranger?” More specifically, under what precise conditions do these safe states exist; e.g., under what conditions is it safe to accept input from a stranger (without relying on a TTP)? In particular, we will explore the applicability of safety conditions to protocols that enable acceptance of identification and authentication information (e.g., identities, certificates, and network links) from others on an ad-hoc basis, namely in the absence of authentication infrastructures; e.g., without hierarchies of certification authorities, forests of peer-linked certification authorities, or webs of trust.
A second basic question is whether the safe interactive protocols can be used at the “street level,” i.e., by casual users. To be understandable, these protocols must mirror human expectations and mental models of trust. Furthermore, they must not create false metaphors and analogies with the physical world. As a specific practical test of the usefulness of the proposed safety conditions, we will formally specify real-life scams (as illustrated by the BBC’s “The Real Hustle” program) as interactive trust protocols and identify the common violations of protocol conditions.
A third basic question we intend to address is whether social collateral models, which have been successfully used to model interactive trust protocols among humans, can be used to derive safety conditions for interactive trust protocols in social networks. Specifically, we will investigate the extent to which social collateral models can provide robust semantics for different authentication infrastructures; e.g., single and interconnected hierarchies of certification authorities and authentication servers, and webs of trust.
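As one concrete illustration of how a social collateral model can yield a quantitative safety condition, the sketch below follows a trust-flow formulation: the value a source may safely entrust to a sink is bounded by the maximum flow of pairwise collateral through the network between them. This is a minimal sketch, not part of the proposed protocols; the graph, party names, and collateral values are purely hypothetical, and the max-flow routine is a standard Edmonds-Karp implementation.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow on a dict-of-dicts capacity graph."""
    # Build residual capacities, adding zero-capacity reverse edges.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Recover the path and its bottleneck capacity.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Hypothetical trust network: an edge weight is the collateral one
# party stakes on vouching for the next (values are illustrative).
collateral = {
    "alice": {"bob": 30, "carol": 20},
    "bob":   {"dave": 25},
    "carol": {"dave": 15},
    "dave":  {},
}

# Candidate safety condition: alice may safely extend to dave any
# value not exceeding the collateral flow between them.
trust_bound = max_flow(collateral, "alice", "dave")
print(trust_bound)  # 40 (25 via bob + 15 via carol)
```

Under this reading, a robust semantics for an authentication infrastructure would interpret each certification edge as staked collateral, so that “accepting input from a stranger” is safe exactly when the value at risk does not exceed the flow bound.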