Researchers: Jie Yang
Development of De-identification Tools for Video Surveillance Data
Balancing the needs of safety and surveillance with the expectation of individual privacy has been an ongoing debate among privacy advocates, citizen groups, political leaders, and surveillance equipment manufacturers. In the wake of the September 2001 terrorist attacks, significantly more video surveillance systems have been deployed in a variety of locations. These systems raise significant questions not only about privacy but also about their effectiveness. A study conducted by Sandia in the 1970s for the U.S. Department of Energy found that even a highly motivated observer's attention becomes less than effective after just 20 minutes of monitoring. This is a critical point because effective counter-terrorism measures require video data to be processed in real time, yet there is more data than can practically be processed manually. To process video surveillance data automatically, these data must be stored and distributed over the network.
This proposal requests a seed grant from CyLab to develop de-identification tools for video surveillance data. These tools will support the automatic extraction of relevant information from video surveillance data that has been rendered sufficiently anonymous to protect the privacy of innocent civilians. The proposed work will build upon our successful efforts and the systems and technologies we have previously developed in real-time visual tracking, recognition, human activity analysis, and semi-automatic video data labeling (Yang 1998, Yang 1999, Chen 2002, Yan 2003, Chen 2004, Yan 2004). Figure 1 illustrates an example of masking people's identities based on real-time people tracking results.
Figure 1: An example of masking people's identities based on people tracking results.
We will develop three kinds of tools for reducing privacy concerns so that video surveillance data can be shared more freely. First, we will develop a set of real-time algorithms for locating faces in video surveillance data and then obscuring the found faces in such a way that people are de-identified. Second, we will study how well humans and face recognition software can recognize individuals whose images have been de-identified, and seek to develop algorithms that can render these individuals sufficiently anonymous (the latter being a higher standard than merely obscuring faces). Finally, we will develop a cryptographic version of our obscuring algorithms. With permission (realized as a cryptographic key), the distortions introduced into the data to protect people's identities can be removed so that the original image is restored. Without permission, the image is provably secure from such restoration. All of these tools are innovative and will be newly created, building on our past success in tracking faces and rendering other kinds of data sufficiently anonymous.
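To make the third tool concrete, the following is a minimal sketch of key-reversible region masking. It is illustrative only: the function names are hypothetical, the face region is assumed to arrive as raw pixel bytes from the tracker, and the SHA-256 counter-mode keystream stands in for a vetted cipher (a production system would use an authenticated cipher such as AES-GCM). The key property shown is that masking is an involution: applying it twice with the same key restores the original pixels, while without the key the masked bytes are pseudorandom.

```python
import hashlib


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream by hashing key || nonce || counter blocks.

    Illustrative construction only; a real system would use a standard cipher.
    """
    out = bytearray()
    counter = 0
    while len(out) < length:
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:length])


def mask_region(pixels: bytes, key: bytes, nonce: bytes) -> bytes:
    """XOR the face-region pixels with the keystream.

    Without the key, the region is pseudorandom noise; with the key,
    calling this function again on the masked bytes restores the original.
    """
    ks = keystream(key, nonce, len(pixels))
    return bytes(p ^ k for p, k in zip(pixels, ks))
```

A distinct nonce per frame (or per detected region) keeps the keystream from repeating across the video, so the same key can safely protect every frame.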