Outlier detection and novelty detection are two important topics in anomaly
detection. Given a dataset whose majority of samples are drawn from a certain
distribution, both tasks aim to detect samples that do not fit that
distribution: outliers are anomalous samples within the dataset itself, while
novelties are new, previously unseen samples. Meanwhile, backdoor poisoning
attacks against machine learning models are carried out by injecting poisoning
samples into the training dataset; such samples can be regarded as "outliers"
intentionally added by attackers. Differential privacy has been proposed to
prevent the leakage of any individual's information when aggregate analysis is
performed on a given dataset. It is typically achieved by adding
random noise, either directly to the input dataset, or to intermediate results
of the aggregation mechanism. In this paper, we demonstrate that applying
differential privacy can improve the utility of outlier detection and novelty
detection, and we extend this approach to detecting the poisoning samples
injected by backdoor attacks.
We first present a theoretical analysis of how differential privacy aids
detection, and then conduct extensive experiments to validate the
effectiveness of differential privacy in improving outlier detection, novelty
detection, and backdoor attack detection.
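As a rough illustration of the noise-addition mechanism mentioned above (an
illustrative sketch, not code from the paper), the following minimal Python
example releases a differentially private mean of a dataset via the Laplace
mechanism; the function name dp_mean, the choice of a mean query, and the
clipping bounds are assumptions made for this example.

```python
import numpy as np

def dp_mean(values, epsilon, lower=0.0, upper=1.0):
    """Release an epsilon-differentially private mean via the Laplace mechanism.

    Each record is clipped to [lower, upper], so the L1 sensitivity of the
    mean query is (upper - lower) / n; Laplace noise with scale
    sensitivity / epsilon is then added to the true mean.
    """
    values = np.clip(np.asarray(values, dtype=float), lower, upper)
    n = len(values)
    sensitivity = (upper - lower) / n
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

# Example: a private estimate of the mean of a toy dataset.
data = np.random.rand(1000)
print(dp_mean(data, epsilon=0.5))
```

The same idea applies to input perturbation, where noise is added directly to
each record of the dataset rather than to an aggregate query result.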