The successful deployment of artificial intelligence (AI) in many domains,
from healthcare to hiring, requires its responsible use, particularly regarding
model explanations and privacy. Explainable artificial intelligence (XAI)
provides more information to help users understand model decisions, yet this
additional knowledge exposes new risks of privacy attacks. Hence, providing
explanations can harm privacy. We study this risk for image-based model
inversion attacks and identify several attack architectures with increasing
performance to reconstruct private image data from model explanations. We have
developed several multi-modal transposed CNN architectures that achieve
significantly higher inversion performance than using the target model
prediction alone. These XAI-aware inversion models are designed to exploit the
spatial knowledge in image explanations.
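A minimal sketch of such a multi-modal inversion network is given below in
PyTorch; the layer sizes, the 32x32 image resolution, the class count, and the
name XaiAwareInversionNet are illustrative assumptions rather than the exact
architecture described here. One branch embeds the target model's prediction
vector, a second encodes the spatial explanation map, and a transposed-CNN
decoder reconstructs the image from the fused features.

```python
import torch
import torch.nn as nn

class XaiAwareInversionNet(nn.Module):
    """Sketch of a multi-modal inversion decoder: fuses the target model's
    prediction vector with a spatial explanation map and reconstructs an
    image with transposed convolutions. Sizes are illustrative."""

    def __init__(self, num_classes=10, img_channels=1):
        super().__init__()
        # Branch 1: project the prediction vector onto a coarse 4x4 feature grid.
        self.pred_branch = nn.Sequential(
            nn.Linear(num_classes, 64 * 4 * 4),
            nn.ReLU(inplace=True),
        )
        # Branch 2: encode the explanation map (assumed 32x32) down to the same 4x4 grid.
        self.expl_branch = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),   # 32x32 -> 16x16
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 16x16 -> 8x8
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1),  # 8x8 -> 4x4
            nn.ReLU(inplace=True),
        )
        # Decoder: transposed CNN that upsamples the fused features back to an image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # 4x4 -> 8x8
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),   # 8x8 -> 16x16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, img_channels, kernel_size=4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.Sigmoid(),
        )

    def forward(self, prediction, explanation):
        p = self.pred_branch(prediction).view(-1, 64, 4, 4)
        e = self.expl_branch(explanation)
        fused = torch.cat([p, e], dim=1)  # channel-wise fusion of both modalities
        return self.decoder(fused)

# Example: reconstruct a batch of 32x32 images from predictions and saliency maps.
model = XaiAwareInversionNet(num_classes=10, img_channels=1)
pred = torch.softmax(torch.randn(8, 10), dim=1)  # target model outputs
saliency = torch.rand(8, 1, 32, 32)              # explanation maps (e.g., saliency/Grad-CAM style)
reconstruction = model(pred, saliency)           # shape: (8, 1, 32, 32)
```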
To understand which explanations pose higher privacy risks, we analyze how
various explanation types and factors influence inversion performance. Although
some target models do not provide explanations, we further demonstrate
increased inversion performance even for such non-explainable target models by
exploiting the explanations of surrogate models through attention transfer.
This method first inverts an explanation from the target prediction, then
reconstructs the target image.
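The two-stage pipeline can be sketched as follows, continuing the assumptions
of the previous sketch and reusing its XaiAwareInversionNet; the
ExplanationInverter name and layer sizes are hypothetical, and in practice the
first stage would be trained to match explanations produced by a surrogate
model.

```python
import torch
import torch.nn as nn

class ExplanationInverter(nn.Module):
    """Stage 1 sketch: predict a plausible explanation map from the target
    model's prediction vector alone. Sizes are illustrative."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(num_classes, 64 * 4 * 4), nn.ReLU(inplace=True))
        self.upsample = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 4x4 -> 8x8
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 8x8 -> 16x16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),   # 16x16 -> 32x32
            nn.Sigmoid(),
        )

    def forward(self, prediction):
        return self.upsample(self.fc(prediction).view(-1, 64, 4, 4))


# Two-stage attack on a non-explainable target model:
# stage 1 inverts an explanation from the prediction, stage 2 reconstructs the image.
explanation_inverter = ExplanationInverter(num_classes=10)
image_inverter = XaiAwareInversionNet(num_classes=10, img_channels=1)  # from the sketch above

pred = torch.softmax(torch.randn(8, 10), dim=1)               # only the target prediction is available
predicted_explanation = explanation_inverter(pred)            # stage 1: invert an explanation
reconstruction = image_inverter(pred, predicted_explanation)  # stage 2: reconstruct the image
```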
These threats highlight the urgent and significant privacy risks of
explanations and call for new privacy-preservation techniques that balance the
dual requirements of AI explainability and privacy.