Federated Learning (FL), in theory, preserves the privacy of individual clients'
data while producing high-quality machine learning models. However, attacks such as
Deep Leakage from Gradients (DLG) call the practicality of FL into serious question. In
this paper, we empirically evaluate the efficacy of four defensive methods
against DLG: Masking, Clipping, Pruning, and Noising. Masking, although previously
studied only as a means of compressing information during parameter transfer,
shows surprisingly robust defensive utility compared with the other three
established methods. Our experimentation is two-fold. We first evaluate the
minimum hyperparameter threshold for each method on the MNIST, CIFAR-10, and
LFW datasets. Then, we train FL clients with each method at their respective minimum
threshold values to investigate the trade-off between DLG defense and training
performance. Results reveal that Masking and Clipping incur little to no
degradation in performance while obfuscating enough information to defend
effectively against DLG.
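For intuition, the following is a minimal sketch (ours, not a prescribed implementation) of how each of the four defenses might transform a client's gradient before it is shared, using NumPy; the hyperparameter values shown (keep ratio, norm bound, pruning ratio, noise scale) are illustrative placeholders rather than the thresholds studied in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask(grad, keep_ratio=0.5):
    """Masking: zero out a random subset of entries, keeping roughly keep_ratio of them."""
    keep = rng.random(grad.shape) < keep_ratio
    return grad * keep

def clip(grad, max_norm=1.0):
    """Clipping: rescale the gradient so its L2 norm does not exceed max_norm."""
    norm = np.linalg.norm(grad)
    return grad if norm <= max_norm else grad * (max_norm / norm)

def prune(grad, prune_ratio=0.5):
    """Pruning: zero out the smallest-magnitude entries, keeping only the largest."""
    k = int(grad.size * prune_ratio)
    if k == 0:
        return grad
    threshold = np.partition(np.abs(grad).ravel(), k - 1)[k - 1]
    return np.where(np.abs(grad) <= threshold, 0.0, grad)

def noise(grad, sigma=0.1):
    """Noising: add zero-mean Gaussian noise to every entry."""
    return grad + rng.normal(0.0, sigma, size=grad.shape)

# Stand-in for one layer's gradient; each defense is applied independently.
g = rng.normal(size=(4, 4))
defended = {f.__name__: f(g) for f in (mask, clip, prune, noise)}
```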