Privacy and transparency are two key foundations of trustworthy machine
learning. Model explanations offer insights into a model's decisions on input
data, whereas privacy is primarily concerned with protecting information about
the training data. We analyze connections between model explanations and the
leakage of sensitive information about the model's training set. We investigate
the privacy risks of feature-based model explanations using membership
inference attacks: quantifying how much a model's predictions, together with
their explanations, leak information about the presence of a datapoint in the
model's training set. We extensively evaluate membership inference attacks
based on feature-based model explanations across a variety of datasets. We show that
backpropagation-based explanations can leak a significant amount of information
about individual training datapoints. This is because they reveal statistical
information about the model's decision boundary around an input, which can
expose that input's membership. We also empirically investigate the trade-off
between privacy and explanation quality by studying perturbation-based model
explanations.
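
To make the threat model concrete, the following is a minimal sketch (in
PyTorch) of a threshold-style membership inference attack that uses a
backpropagation-based explanation as its signal. The choice of explanation
variance as the membership statistic and the pre-calibrated threshold tau are
illustrative assumptions, not the exact attack pipeline evaluated here.

    import torch

    def gradient_explanation(model, x):
        # Backpropagation-based explanation: gradient of the predicted
        # class score with respect to the input features.
        x = x.clone().detach().requires_grad_(True)
        score = model(x).max()        # score of the most likely class
        score.backward()
        return x.grad.detach()

    def membership_score(model, x):
        # Inputs near a decision boundary tend to have high-variance
        # gradients; training points typically sit farther from it, so
        # a low-variance explanation hints at membership (illustrative
        # assumption).
        return gradient_explanation(model, x).var().item()

    def is_member(model, x, tau):
        # tau is a hypothetical threshold; in practice it would be
        # calibrated, e.g., on shadow models trained on similar data.
        return membership_score(model, x) < tau

The adversary here needs only query access to predictions and their
explanations, which is the setting the membership inference analysis above
assumes.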