Federated Learning (FL), a distributed machine learning paradigm, has been
adopted to mitigate privacy concerns for clients. Despite its appeal, model
updates shared in plaintext embed traces of clients' private information that
various inference attacks can exploit, leading to serious privacy
concerns. To alleviate this privacy issue, cryptographic techniques such as
Secure Multi-Party Computation and Homomorphic Encryption have been used for
privacy-preserving FL. However, the security of such privacy-preserving FL
schemes is itself poorly elucidated and underexplored. This work is the first
attempt to show how trivially model corruption attacks can be mounted on
privacy-preserving FL based on lightweight secret sharing. We consider
scenarios in which model updates are quantized to reduce communication
overhead; in this case, an adversary can simply submit local parameters outside
the legal range to corrupt the global model. We then propose the MUD-PQFed protocol,
which can precisely detect malicious clients performing attacks and enforce
fair penalties. By removing the contributions of detected malicious clients,
the protocol preserves a global model utility comparable to that of the
baseline global model trained without attack. Extensive experiments validate
its effectiveness in maintaining baseline accuracy and detecting malicious
clients in a fine-grained manner.
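As a hedged illustration of the attack surface described above (a minimal sketch with hypothetical parameters such as RING and B, not the paper's exact protocol), the following Python snippet shows how a single out-of-range quantized update corrupts an additively secret-shared aggregate, even though every individual share the servers see still looks uniformly random:

```python
# Minimal sketch (assumed parameters, not the paper's protocol): honest clients
# quantize updates into a small legal range and additively secret-share them
# over a large ring; servers only ever see random-looking shares, so a single
# out-of-range submission silently corrupts the reconstructed aggregate.
import random

RING = 2 ** 32          # modulus for additive secret sharing (assumed)
B = 8                   # quantization bit-width; legal codes lie in [0, 2**B - 1]

def quantize(x, lo=-1.0, hi=1.0, bits=B):
    """Map a float update into an integer code within the legal range."""
    x = min(max(x, lo), hi)
    return round((x - lo) / (hi - lo) * (2 ** bits - 1))

def dequantize(q_sum, n_clients, lo=-1.0, hi=1.0, bits=B):
    """Decode an aggregated sum of codes back to the average float update."""
    return (q_sum / n_clients) / (2 ** bits - 1) * (hi - lo) + lo

def share(value, n_servers=2):
    """Split an integer into additive shares modulo RING."""
    shares = [random.randrange(RING) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % RING)
    return shares

def aggregate(all_shares):
    """Each server sums its column of shares; reconstruction adds partial sums."""
    partials = [sum(col) % RING for col in zip(*all_shares)]
    return sum(partials) % RING

honest_updates = [0.12, -0.03, 0.08]
codes = [quantize(u) for u in honest_updates]
malicious_code = 10 ** 7        # far outside [0, 255], yet its shares look random
codes.append(malicious_code)

total = aggregate([share(c) for c in codes])
print("decoded aggregate:", dequantize(total, n_clients=len(codes)))
```

Running the sketch yields a decoded aggregate far outside the valid update range, whereas dropping the malicious code recovers a value close to the honest clients' mean, which is the corruption-versus-detection gap the protocol targets.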