Model explanations provide a model builder with transparency into a trained
machine learning model's black-box behavior. They indicate the influence of
different input attributes on the corresponding model prediction. Since
explanations depend on the input, they raise privacy concerns for sensitive
user data. However, the current literature offers limited discussion of the
privacy risks of model explanations.
We focus on the specific privacy risk of an attribute inference attack,
wherein an adversary infers the sensitive attributes of an input (e.g., race
and sex) given its model explanations. We design the first attribute
inference attack against model explanations under two threat models, where
the model builder either (a) includes the sensitive attributes in the
training data and input, or (b) censors the sensitive attributes by excluding
them from the training data and input.
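For concreteness, the following is a minimal sketch of such an attack
pipeline, assuming a differentiable target model, input-gradient explanations,
and a scikit-learn attack classifier; the names target_model, aux_X, aux_s,
and victim_X are hypothetical placeholders for the adversary's auxiliary
knowledge and queries, not artifacts from the paper.

    import numpy as np
    import torch
    from sklearn.neural_network import MLPClassifier

    def gradient_explanation(model, X):
        # One simple attribution choice: gradient of the top logit
        # with respect to each input attribute.
        x = torch.tensor(np.asarray(X), dtype=torch.float32,
                         requires_grad=True)
        model(x).max(dim=1).values.sum().backward()
        return x.grad.numpy()

    def attribute_inference_attack(target_model, aux_X, aux_s, victim_X):
        # Adversary's auxiliary data: inputs aux_X with known sensitive
        # attribute values aux_s (e.g., sex). Train an attack model that
        # maps explanation vectors to sensitive attribute values.
        attack_model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
        attack_model.fit(gradient_explanation(target_model, aux_X), aux_s)
        # Infer victims' sensitive attributes from their explanations alone.
        return attack_model.predict(
            gradient_explanation(target_model, victim_X))

Under threat model (a), the attack features can further be restricted to the
explanation coordinates of the sensitive attributes themselves.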
We evaluate the proposed attack on four benchmark datasets and four
state-of-the-art explanation algorithms. We show that an adversary can
accurately infer the values of sensitive attributes from explanations under
both threat models. Moreover, the attack succeeds even when it exploits only
the explanations corresponding to the sensitive attributes. These results
suggest that our attack is effective against explanations and poses a
practical threat to data privacy.
Combining model predictions (an attack surface exploited by prior attacks)
with explanations does not improve attack success. Moreover, exploiting model
explanations alone yields higher attack success than exploiting model
predictions alone. These results suggest that model explanations are a strong
attack surface for an adversary to exploit.
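As one hypothetical way to realize the combined attack surface, the attack
features can simply concatenate the target model's prediction vector with its
explanation vector per input (a sketch under that assumption, not the paper's
implementation):

    import numpy as np

    def combined_attack_features(predictions, explanations):
        # Hypothetical combined attack surface: prediction vector joined
        # with explanation vector per input; per the results above, this
        # does not outperform using explanations alone.
        return np.concatenate([predictions, explanations], axis=1)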