Abstract
The widespread adoption of machine learning necessitates robust privacy
protection alongside algorithmic resilience. While Local Differential Privacy
(LDP) provides foundational guarantees, sophisticated adversaries with prior
knowledge demand more nuanced Bayesian privacy notions, such as Maximum
Bayesian Privacy (MBP) and Average Bayesian Privacy (ABP), first introduced by
\cite{zhang2022no}. Concurrently, machine learning systems require inherent
robustness against data perturbations and adversarial manipulations. This paper
systematically investigates the intricate theoretical relationships among LDP,
MBP, and ABP. Crucially, we bridge these privacy concepts with algorithmic
robustness, particularly within the Probably Approximately Correct (PAC)
learning framework. Our work demonstrates that privacy-preserving mechanisms
inherently confer PAC robustness. We present key theoretical results, including
a formalization of the established LDP-MBP relationship, novel bounds between
MBP and ABP, and a proof that MBP implies PAC robustness. Furthermore, we
establish a novel theoretical relationship quantifying how privacy leakage
directly influences an algorithm's input robustness. These results provide a
unified theoretical framework for understanding and optimizing the
privacy-robustness trade-off, paving the way for the development of more
secure, trustworthy, and resilient machine learning systems.
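For orientation, the standard definition of $\varepsilon$-LDP that the abstract builds on can be stated as follows (notation is ours; the paper's own formalizations of MBP and ABP follow \cite{zhang2022no}): a randomized mechanism $M$ satisfies $\varepsilon$-LDP if
\[
\Pr[M(x) = y] \;\le\; e^{\varepsilon}\,\Pr[M(x') = y]
\qquad \forall\, x, x' \in \mathcal{X},\;\; \forall\, y \in \mathrm{Range}(M),
\]
i.e., no single output $y$ reveals much about which input ($x$ versus $x'$) produced it, regardless of an adversary's prior.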