Many existing privacy-enhanced speech emotion recognition (SER) frameworks
focus on perturbing the original speech data through adversarial training
within a centralized machine learning setup. However, this privacy protection
scheme can fail since the adversary can still access the perturbed data. In
recent years, distributed learning algorithms, especially federated learning
(FL), have gained popularity to protect privacy in machine learning
applications. While FL provides good intuition to safeguard privacy by keeping
the data on local devices, prior work has shown that privacy attacks, such as
attribute inference attacks, are achievable for SER systems trained using FL.
In this work, we evaluate user-level differential privacy (UDP) for mitigating
the privacy leakage of SER systems trained with FL. UDP provides theoretical
privacy guarantees parameterized by $\epsilon$ and $\delta$.
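A common way to realize user-level DP in FL is to clip each user's model update to a fixed L2 norm and add Gaussian noise at the server. The sketch below illustrates this mechanism; the function name `udp_aggregate` and the specific parameterization are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def udp_aggregate(client_updates, clip_norm, noise_multiplier, rng=None):
    """Illustrative user-level DP aggregation (Gaussian mechanism):
    clip each user's update to an L2 norm bound, average the clipped
    updates, then add Gaussian noise scaled to the clipping bound."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Scale down any update whose norm exceeds clip_norm.
        clipped.append(update * min(1.0, clip_norm / max(norm, 1e-12)))
    n = len(clipped)
    avg = np.mean(clipped, axis=0)
    # Per-user sensitivity of the average is clip_norm / n.
    sigma = noise_multiplier * clip_norm / n
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

The clipping bound limits any single user's influence on the aggregate, which is what makes the guarantee user-level rather than record-level.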
Our results show that UDP can effectively reduce attribute information leakage
while preserving the utility of the SER system when the adversary has access
to only a single model update. However, the efficacy of UDP degrades as the FL
system leaks more model updates to the adversary. Our code for reproducing
these results is publicly available at
https://github.com/usc-sail/fed-ser-leakage.
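The degradation with additional leaked updates can be illustrated with basic sequential composition: if each released update is $(\epsilon, \delta)$-DP, releasing $k$ of them is jointly $(k\epsilon, k\delta)$-DP, so the guarantee weakens linearly. This is a simplified, worst-case accounting sketch (tighter accountants such as advanced composition exist); the function below is illustrative, not the paper's accounting method.

```python
def basic_composition(eps, delta, k):
    """Basic sequential composition: k mechanisms, each (eps, delta)-DP,
    compose to a (k*eps, k*delta)-DP mechanism. The privacy guarantee
    thus degrades linearly in the number of model updates exposed."""
    return k * eps, k * delta
```

For example, a single update released at $\epsilon = 0.5$ yields $\epsilon = 2.0$ once four updates have leaked, which matches the observed drop in protection as more rounds are exposed.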