Abstract
Privacy enhancing technologies, or PETs, have been hailed as a promising
means to protect privacy without compromising on the functionality of digital
services. At the same time, and partly because they may encode a narrow
conceptualization of privacy as confidentiality that is popular among
policymakers, engineers and the public, PETs risk being co-opted to promote
privacy-invasive practices. In this paper, we draw on the theory of
Contextual Integrity (CI) to explain how privacy technologies may be misused to
erode privacy. To illustrate, we consider three PETs and scenarios: anonymous
credentials for age verification, client-side scanning for illegal content
detection, and homomorphic encryption for machine learning model training.
Using CI, we reason about the notion of privacy
that these PETs encode, and show that CI enables us to identify and reason
about the limitations of PETs and their misuse, which may ultimately lead
to privacy violations.