Abstract
Auditing Data Provenance (ADP), i.e., auditing whether a given piece of data
has been used to train a machine learning model, is an important problem.
Existing auditing techniques, e.g., shadow auditing methods, have demonstrated
the feasibility of the task under certain conditions, such as the availability
of label information and knowledge of the target model's training protocol.
Unfortunately, neither condition typically holds in real applications. In this
paper, we introduce Data Provenance via Differential Auditing (DPDA), a
practical framework that audits data provenance through statistically
significant differentials: after a carefully designed transformation,
perturbed inputs drawn from the target model's training set induce far more
drastic changes in the model's output than perturbed inputs from outside the
training set. This framework allows auditors to distinguish training data from
non-training data without training any shadow models or relying on labeled
output data. Furthermore, we propose two effective implementations of the
auditing function, an additive one and a multiplicative one. We report
evaluations on real-world datasets that demonstrate the effectiveness of our
proposed auditing technique.
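To make the differential idea concrete, the sketch below implements one
plausible reading of the abstract in Python/NumPy: perturb an input several
times, measure how much the model's output moves, and compare that
differential against scores from known non-training references using either
an additive or a multiplicative threshold. The `model` and `perturb`
callables, the `reference_scores` baseline, and the `margin` parameter are
all illustrative assumptions; the paper's actual transformations and auditing
functions are defined in its body, not reproduced here.

```python
import numpy as np

def differential_score(model, x, perturb, n_trials=16, rng=None):
    """Mean output change under repeated perturbation of one input.

    model:   callable mapping a batch of inputs to output vectors
             (e.g., softmax scores) -- an assumption of this sketch.
    perturb: the auditor's transformation, e.g., small additive noise.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    base = model(x[None, ...])[0]
    changes = [
        np.linalg.norm(model(perturb(x, rng)[None, ...])[0] - base)
        for _ in range(n_trials)
    ]
    return float(np.mean(changes))

def audit(model, x, perturb, reference_scores, mode="additive", margin=0.1):
    """Flag x as likely training data when its differential score is
    markedly larger than those of known non-training references,
    mirroring the abstract's claim that training inputs react more
    drastically to the designed perturbation."""
    score = differential_score(model, x, perturb)
    baseline = float(np.mean(reference_scores))
    if mode == "additive":
        return score > baseline + margin          # additive auditing function
    return score > baseline * (1.0 + margin)      # multiplicative auditing function

# Hypothetical usage with a Gaussian-noise perturbation:
# pert = lambda x, rng: x + 0.05 * rng.standard_normal(x.shape)
# refs = [differential_score(model, z, pert) for z in non_training_samples]
# is_member = audit(model, x, pert, reference_scores=refs, mode="multiplicative")
```

As a design note, the additive variant suits output measures on an absolute
scale, while the multiplicative variant is scale-free; which calibration the
paper intends cannot be determined from the abstract alone.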