Abstract
Deep learning's growing dominance across scientific domains has reshaped
high-stakes decision-making, making it essential to operate within rigorous
frameworks that respect both the Right-to-Privacy (RTP) and the
Right-to-Explanation (RTE). This paper examines the complexities of combining
these two requirements. For RTP, we focus on differential privacy (DP), which is
considered the current gold standard for privacy-preserving machine learning
due to its strong quantitative guarantee of privacy. For RTE, we focus on
post-hoc explainers: they are the go-to option for model auditing as they
operate independently of model training. We formally investigate DP models
together with several commonly used post-hoc explainers: we examine how to
evaluate these explainers subject to RTP, and we analyze the intrinsic
interactions between DP models and these explainers. Furthermore, our work
sheds light on how RTP and RTE can be
effectively combined in high-stakes applications. Our study concludes by
outlining an industrial software pipeline, illustrated with a widely used use
case, that respects both RTP and RTE requirements.