Abstract
The increasing reliance on deep computer vision models that process sensitive
data has raised significant privacy concerns, particularly regarding the
exposure of intermediate results in hidden layers. While traditional privacy
risk assessment techniques focus on protecting overall model outputs, they
often overlook vulnerabilities within these intermediate representations.
Current privacy risk assessment techniques typically rely on specific attack
simulations to assess risk, which can be computationally expensive and
incomplete. This paper introduces a novel approach to measuring privacy risks
in deep computer vision models based on the Degrees of Freedom (DoF) and
sensitivity of intermediate outputs, without requiring adversarial attack
simulations. We propose a framework that leverages DoF to evaluate the amount
of information retained in each layer and combines this with the rank of the
Jacobian matrix to assess sensitivity to input variations. This dual analysis
enables systematic measurement of privacy risk across a model's layers. Experiments
on real-world datasets demonstrate that the approach yields deeper insight into the
privacy risks carried by intermediate representations.
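The sensitivity side of the analysis can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: the toy layer, the weight matrix `W`, the test input, and the `jacobian` helper are all assumptions introduced for this example. It estimates the Jacobian of a single ReLU layer by central differences and takes its rank as a rough proxy for how many input directions the intermediate output remains sensitive to.

```python
import numpy as np

# Hypothetical toy layer f(x) = ReLU(W @ x); W is an illustrative
# weight matrix, not taken from the paper.
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

def layer(x):
    return np.maximum(W @ x, 0.0)

def jacobian(f, x, eps=1e-6):
    """Numerical Jacobian of f at x via central differences."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((f(x).size, x.size))
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        J[:, i] = (f(x + d) - f(x - d)) / (2 * eps)
    return J

x = np.array([1.0, 2.0])        # both pre-activations positive, so ReLU is active
J = jacobian(layer, x)
rank = np.linalg.matrix_rank(J)  # equals 2 here: both input directions pass through
print(rank)
```

In a real model the same idea would be applied per layer (e.g. via automatic differentiation rather than finite differences), with a lower Jacobian rank suggesting that fewer input directions survive into the intermediate representation.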