About Nicolas Christin

Nicolas Christin is the Associate Director of the Information Networking Institute, where he also holds a faculty appointment, and a CyLab Systems Scientist. He was previously resident faculty at CyLab Japan, our research and education center located in Kōbe, Hyōgo Prefecture, Japan. He also serves as Faculty Advisor for the Master of Science in Information Technology-Information Security (MSIT-IS) degree program.


CyLab Chronicles

Q&A with Nicolas Christin (2008)

posted by Richard Power

CyLab Chronicles: The "Economics of Security" is a fascinating and vital area of research. Share with us some of your insights. What has your exploration of this topic revealed? What are some of the problems and opportunities that have been illuminated? What are some of the some of the answers or solutions that have proved elusive?

CHRISTIN: My interest in the topic started with a very simple question: why don't people and corporations invest more in security, when technology is available for cheap (or free!), is not necessarily that cumbersome to deploy, and can save a lot of money and time? There are actually many reasons for this, but we noticed some overarching trends.

First, we, the end users, like to gamble when it comes to security. We (incorrectly) think security incidents only happen to other people, and as such do not see the need to bother investing in potentially tedious preventive measures. Related to this, we have trouble perceiving security as an investment -- after all, implementing security usually doesn't bring any money in; it just avoids future losses, which, by definition, are vague and uncertain.

Also, we are presented with a decision problem that is too difficult for us to process. Simply speaking, we have several decision variables to assess and tune at the same time: should I proactively protect my systems, and if so, by how much? Should I invest in insurance for recovery? We are usually pretty good at finding a near-optimal choice when we only have to consider one decision variable, but the fact that security is a multi-dimensional problem makes it very confusing for us. And last, we are not good at dealing with externalities: if you and I are on the same network, your system's security will likely affect mine. But why should you be the only one to invest in security if I too benefit from it?
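To make the multi-variable decision concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than a model from the interview: the dollar amounts, the exponential risk-reduction curve, the insurer's premium loading, and the variance penalty standing in for risk aversion.

```python
import numpy as np

# Hypothetical sketch: a system owner jointly picks protection spending and
# insurance coverage to minimize a risk-adjusted expected yearly cost.
# All numbers and functional forms below are illustrative assumptions.
LOSS = 100_000.0      # assumed loss if a breach occurs ($)
BASE_PROB = 0.10      # assumed baseline yearly breach probability
LOADING = 1.2         # insurer's markup over the actuarially fair premium
RISK_AVERSION = 2e-5  # weight on loss variance (dislike of big hits)

def risk_adjusted_cost(protection, coverage):
    # Assumed: breach probability decays exponentially with protection spend.
    p = BASE_PROB * np.exp(-protection / 20_000.0)
    uninsured = (1.0 - coverage) * LOSS        # loss borne if breached
    premium = LOADING * p * coverage * LOSS    # loaded insurance premium
    expected = protection + premium + p * uninsured
    variance = p * (1.0 - p) * uninsured ** 2  # variance of the borne loss
    return expected + RISK_AVERSION * variance

# Brute-force search over both decision variables at once -- trivial for a
# machine, but exactly the two-dimensional tradeoff people tend to fumble.
protections = np.linspace(0, 60_000, 121)
coverages = np.linspace(0, 1, 101)
costs = np.array([[risk_adjusted_cost(s, c) for c in coverages]
                  for s in protections])
i, j = np.unravel_index(costs.argmin(), costs.shape)
print(f"protection ${protections[i]:,.0f}, coverage {coverages[j]:.0%}, "
      f"risk-adjusted cost ${costs[i, j]:,.0f}")
```

The point of the sketch is that neither variable can be tuned in isolation: more protection lowers the breach probability, which in turn changes how much insurance is worth buying.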

To be honest, we quickly realized that figuring out exactly how individuals or corporations behave with respect to security is likely too complex a problem. But if, in a given context, we can identify some of the major traits most people share, like the ones I mentioned above, we may be able to account for them in our designs and elicit individual behaviors that are beneficial to all users. Discovering security mechanisms that allow us to do this is the Holy Grail of our research.

CyLab Chronicles: What are some of the practical applications of research into the "Economics of Security"? How can it be put to use by business and government?

CHRISTIN: I can think of several. First, our research can help technical staff make more compelling arguments to executive management when it comes to deploying security measures. Second, it can help designers of secure systems figure out which levers to pull to make their products more attractive and usable to a larger base of customers. And third, we are not looking only at how people defend their systems; we are also looking at the other side of the fence, that is, at attackers. By gaining a better understanding of how criminal markets related to information security operate, we can likely figure out the right combination of technological, economic, and legal countermeasures to disrupt them.

CyLab Chronicles: The "Psychology of Security" is a related area of research, one that is just as compelling and just as important. What are you looking at in this realm? What do you hope to achieve? What are some of the significant insights that researchers have been contributed so far?

CHRISTIN: Economic models are great in the sense that they often give you neat, closed-form solutions to your problems. Unfortunately, they rely on a set of assumptions that are sometimes not a very good match with what happens in practice. For instance, a lot of economic models tend to assume that people are always perfectly rational in their behaviors, and that they have a lot more information available than they actually do in practice. Using behavioral and psychological analysis allows us to understand the weaknesses of these assumptions, and to refine our models into a better depiction of reality. We will mainly use insights from psychology research, but their application to the security context may yield very novel results useful to security engineers and practitioners.

For example, psychologists tell us that depending on how a problem is framed, people will have very different reactions. Given a simple situation, if we phrase potential losses as leading to "catastrophic security failures," as opposed to mere monetary losses, we might observe very different user behaviors, even though the economic consequences are the same. You could say that people's emotions get the better of them; psychologists will tell us what these "emotions" are; and our job will be to quantify how they bias individual choices. This will likely have a considerable public policy impact, for instance, in terms of designing better education models.

CyLab Chronicles: What are some of the practical applications of research into the "Psychology of Security"? How can it be put to use by business and government?

CHRISTIN: Well, one of the main applications is what some have termed "psychological acceptability." This is not really a new concept, as it was formulated over thirty years ago in a slightly different form, but until recently, we have not been very good at satisfying it. In short, there are legions of examples where a mechanism or a policy is imposed on the grounds that it is more secure, but ends up being circumvented by its users, which results in much worse security.

We also need to find ways to address the disconnect between our perceptions of security and the actual security we are provided. For instance, most people will insert their banking cards into a machine that looks like an ATM without checking that it is, indeed, a legitimate ATM and not a fake designed to skim their financial information; most people will trust that someone wearing a police uniform is a police officer; and so forth. So, we need to find ways to ensure that people will actually understand and properly use the security mechanisms in place; and to do this, we have to better understand our fundamental cognitive limitations and biases.

CyLab Chronicles: Tell us something about two other areas of interest: a) Incentive-compatible network topology design and b) Information flow security.

CHRISTIN: They are both closely related to my main research thrust. In the first project, we look at how people interconnect computer and communication networks, something usually referred to as network topology, and try to figure out which economic incentives drive that topology. This gives us considerable insight into potential instabilities or vulnerabilities due to conflicting economic incentives, and it also tells us which economic remedies we can apply to get people to build networks with desirable topological properties.
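As a toy illustration of the idea (a generic connections-model-style sketch under assumptions of my own, not this project's actual model), the following Python simulation has each node pay a fixed cost per link while earning distance-discounted benefits from every reachable peer; myopic best-response dynamics then let the incentives pick the topology:

```python
from collections import deque

# Illustrative parameters (made up): a link costs 0.4 to maintain, and a
# peer at hop distance d is worth BENEFIT**d.
N = 6
LINK_COST = 0.4
BENEFIT = 0.7

def distances(adj, src):
    """Hop distances from src to all reachable nodes (breadth-first search)."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def payoff(adj, u):
    reach = distances(adj, u)
    return (sum(BENEFIT ** d for v, d in reach.items() if v != u)
            - LINK_COST * len(adj[u]))

def toggle(adj, u, v):
    """Add the (mutual) link u-v if absent, remove it if present."""
    for a, b in ((u, v), (v, u)):
        adj[a].remove(b) if b in adj[a] else adj[a].add(b)

# Myopic best-response dynamics: nodes keep unilaterally adding or dropping
# single links that raise their own payoff until nobody wants to deviate.
adj = {u: set() for u in range(N)}
changed = True
while changed:
    changed = False
    for u in range(N):
        for v in range(N):
            if u == v:
                continue
            before = payoff(adj, u)
            toggle(adj, u, v)
            if payoff(adj, u) > before:
                changed = True       # keep the profitable deviation
            else:
                toggle(adj, u, v)    # revert it

print({u: sorted(adj[u]) for u in range(N)})
```

With these particular numbers the dynamics settle on a star: a direct link is worth its cost, but once a two-hop path exists, a redundant link is not, which is exactly the kind of topological outcome one can read off from the incentives.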

Our research in information flow security is motivated by the explosion of the amount of information available in today's society. Very often, information has at least as much value to a business as its network infrastructure, if not more. That makes information itself a potential target for security attacks. For instance, if a bad guy manages to insert content into the network so that a query on search engine X for, say, "Carnegie Mellon" yields ten pages of useless results before we can actually get to the university's website, we are likely to switch to a competing search engine -- not to mention that the university's reputation may take a hit. And inserting information into the network is very easy compared to actually attacking the search engine -- generating content is what we do every day when we write our blogs, web pages, and so on.

So, basically, any distributed information system (e.g., peer-to-peer networks, the web) can be vulnerable to this kind of attack, as long as the attacker knows how to insert content into the network to fool the search algorithm used. We are trying to come up with analytical models that quantify that type of risk, depending on the network topology and search primitives used, and to look at ways to defend against it.
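As a back-of-the-envelope illustration of such a model (the setup and numbers are my assumptions, not results from this research): if a fraction F of the content matching a query is attacker-inserted and the search primitive ranks matches without recognizing pollution, then in a large corpus each of the top K slots is junk roughly independently with probability F, so a whole first page of useless results appears with probability about F^K:

```python
import random

F = 0.6            # assumed fraction of matching content that is polluted
K = 10             # results shown on the first page
TRIALS = 100_000   # Monte Carlo repetitions

# Large-corpus approximation: each top-K slot is polluted independently
# with probability F under a pollution-blind ranking.
print(f"P(all top-{K} polluted) ~= {F ** K:.4f}")
print(f"expected polluted results in top-{K} ~= {K * F:.1f}")

# Monte Carlo check of the approximation.
all_junk = sum(all(random.random() < F for _ in range(K))
               for _ in range(TRIALS))
print(f"simulated P(all top-{K} polluted) ~= {all_junk / TRIALS:.4f}")
```

Even when a fully polluted first page is unlikely, the expected K*F junk results per page shows why fairly modest insertion rates can already degrade the user's experience enough to matter.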

