Federated learning has been proposed as a privacy-preserving machine learning
framework that enables multiple clients to collaboratively train a model
without sharing raw data. However, client privacy protection is not guaranteed
by design in this
framework. Prior work has shown that the gradient sharing strategies in
federated learning can be vulnerable to data reconstruction attacks. In
practice, however, clients may not transmit raw gradients, either because of
the high communication cost or to meet privacy enhancement requirements.
Empirical
studies have demonstrated that gradient obfuscation, including intentional
obfuscation via gradient noise injection and unintentional obfuscation via
gradient compression, can provide stronger privacy protection against
reconstruction attacks. In this work, we present a new data reconstruction
attack framework targeting the image classification task in federated learning.
We show that commonly adopted gradient postprocessing procedures, such as
gradient quantization, gradient sparsification, and gradient perturbation, may
give a false sense of security in federated learning. Contrary to prior
studies, we argue that privacy enhancement should not be treated as a byproduct
of gradient compression. Additionally, we design a new method under the
proposed framework to reconstruct images at the semantic level. We quantify
the semantic privacy leakage and compare it with conventional evaluations
based on image similarity scores. Our comparisons challenge the image data
leakage evaluation
schemes in the literature. The results emphasize the importance of revisiting
and redesigning the privacy protection mechanisms for client data in existing
federated learning algorithms.
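
To make the gradient postprocessing procedures named above concrete, the
following is a minimal NumPy sketch of the three operations: uniform gradient
quantization, top-k gradient sparsification, and Gaussian gradient
perturbation. The function names, parameter choices, and the processing order
in the usage line are illustrative assumptions, not the attack framework or
the specific defense configurations evaluated in this work.

```python
import numpy as np

def quantize(grad, num_bits=8):
    """Uniform quantization: snap gradient values onto 2**num_bits levels."""
    lo, hi = float(grad.min()), float(grad.max())
    if hi == lo:  # constant gradient: nothing to quantize
        return grad.copy()
    scale = (hi - lo) / (2 ** num_bits - 1)
    return np.round((grad - lo) / scale) * scale + lo

def sparsify_topk(grad, keep_ratio=0.1):
    """Top-k sparsification: keep the largest-magnitude entries, zero the rest."""
    flat = grad.ravel().copy()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.sort(np.abs(flat))[-k]  # k-th largest magnitude
    flat[np.abs(flat) < threshold] = 0.0   # ties at the threshold are kept
    return flat.reshape(grad.shape)

def perturb(grad, sigma=0.01, seed=None):
    """Gaussian perturbation: add zero-mean noise (DP-style noise injection)."""
    rng = np.random.default_rng(seed)
    return grad + rng.normal(0.0, sigma, size=grad.shape)

# Illustrative client-side pipeline: obfuscate a gradient before upload.
g = np.random.default_rng(0).normal(size=(4, 4))
g_obfuscated = perturb(sparsify_topk(quantize(g, num_bits=4), keep_ratio=0.25),
                       sigma=0.05)
```

Quantization and sparsification are the typical (unintentional) compression
steps, while noise injection is the intentional obfuscation; all three are
lossy transformations, which is why prior work treated them as
privacy-enhancing.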