BlackCATT: Black-box Collusion Aware Traitor Tracing in Federated Learning

Abstract

Federated Learning has gained popularity in recent years for applications involving personal or sensitive data, as it enables the collaborative training of machine learning models through local updates at the data-owners' premises, without requiring the data itself to be shared. Considering the risk of leakage or misuse by any of the data-owners, many works attempt to protect the model's copyright, or even trace the origin of a potential leak, through unique watermarks identifying each participant's model copy. Realistic accusation scenarios impose a black-box setting, where watermarks are typically embedded as a set of sample-label pairs. The threat of collusion, however, where multiple bad actors conspire to produce an untraceable model, has rarely been addressed, and prior work has been limited to shallow networks and near-linearly separable main tasks. To the best of our knowledge, this work is the first to present a general collusion-resistant embedding method for black-box traitor tracing in Federated Learning: BlackCATT, which introduces a novel collusion-aware embedding loss term and, instead of using a fixed trigger set, iteratively optimizes the triggers to aid convergence and traitor-tracing performance. Experimental results confirm the efficacy of the proposed scheme across different architectures and datasets. Furthermore, for models that would otherwise suffer from update incompatibility on the main task after learning different watermarks (e.g., architectures including batch normalization layers), our proposed BlackCATT+FR incorporates functional regularization through a set of auxiliary examples at the aggregator, promoting a shared feature space among model copies without compromising traitor-tracing performance.
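To make the core idea concrete, the following is a minimal sketch, not the authors' implementation, of what a collusion-aware embedding loss could look like. All names, the loss weights `lam_wm` and `lam_col`, and the choice of simulating collusion by logit averaging are illustrative assumptions; the abstract specifies only that a collusion-aware loss term is combined with watermark embedding.

```python
# Hedged sketch (not the paper's code): a collusion-aware embedding loss in the
# spirit of the abstract. The simulated-collusion strategy (averaging the
# participants' logits on the triggers) is an assumption for illustration.
import torch
import torch.nn.functional as F


def collusion_aware_loss(models, idx, x_main, y_main, triggers, trigger_labels,
                         lam_wm=1.0, lam_col=1.0):
    """Training loss for participant `idx` (hypothetical setup).

    models         -- list of all participants' model copies
    triggers       -- per-participant trigger inputs; triggers[idx]: (B, ...)
    trigger_labels -- per-participant watermark labels; trigger_labels[idx]: (B,)
    """
    model = models[idx]

    # 1) Main-task loss on ordinary training data.
    main_loss = F.cross_entropy(model(x_main), y_main)

    # 2) Watermark embedding: copy idx must predict its own trigger labels.
    wm_loss = F.cross_entropy(model(triggers[idx]), trigger_labels[idx])

    # 3) Collusion awareness: simulate an averaging attack over all copies and
    #    require the averaged prediction to still reveal participant idx's
    #    watermark on its triggers (a deliberate simplification).
    with torch.no_grad():
        others = [m(triggers[idx]) for j, m in enumerate(models) if j != idx]
    avg_logits = (model(triggers[idx]) + sum(others)) / len(models)
    col_loss = F.cross_entropy(avg_logits, trigger_labels[idx])

    return main_loss + lam_wm * wm_loss + lam_col * col_loss
```

Under the same reading of the abstract, the iterative trigger optimization would correspond to alternating this model update with gradient steps on `triggers[idx]` itself to reduce the watermark loss, rather than keeping a fixed trigger set.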
