Machine learning models are vulnerable to data inference attacks, such as
membership inference and model inversion attacks. In these breaches, an
adversary attempts to either infer whether a data record was a member of the
training dataset or reconstruct the record itself from the confidence score
vector predicted by the target model.
target model. However, most existing defense methods only protect against
membership inference attacks. Methods that can combat both types of attacks
require a new model to be trained, which may not be time-efficient. In this
paper, we propose a differentially private defense method that handles both
types of attacks in a time-efficient manner by tuning only one parameter, the
privacy budget. The central idea is to modify and normalize the confidence
score vectors with a differential privacy mechanism, which preserves privacy
and obscures both membership and the reconstructed data. Moreover, the method
guarantees that the order of scores in the vector is preserved, so no
classification accuracy is lost.
The experimental results show the method to be an effective and timely defense
against both membership inference and model inversion attacks with no reduction
in accuracy.
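To make the central idea concrete, the following is a minimal sketch of one way such a perturbation could look. It is an illustration under simple assumptions (Laplace noise scaled by the privacy budget, followed by a rank-preserving reassignment and renormalization), not the paper's exact mechanism; the function name `perturb_scores` and the noise scale are hypothetical.

```python
import numpy as np

def perturb_scores(scores, epsilon, rng=None):
    """Illustrative order-preserving perturbation of a confidence vector.

    Assumption for this sketch: Laplace noise with scale 1/epsilon is added
    to each score, the noisy values are reassigned so the original ranking
    is kept, and the result is renormalized to a probability vector.
    """
    rng = np.random.default_rng(rng)
    scores = np.asarray(scores, dtype=float)
    noisy = scores + rng.laplace(0.0, 1.0 / epsilon, size=len(scores))
    # Shift so all entries are positive before normalizing.
    noisy = noisy - noisy.min() + 1e-12
    # Preserve the original ranking: place the largest noisy value at the
    # index of the largest original score, and so on down the order.
    order = np.argsort(scores)          # indices from smallest to largest
    out = np.empty_like(noisy)
    out[order] = np.sort(noisy)
    return out / out.sum()
```

Because the ranking of the perturbed vector matches the original, the predicted class (the argmax) is unchanged, which is why such a defense need not reduce classification accuracy.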