Wireless connectivity is instrumental in enabling scalable federated learning
(FL), yet wireless channels pose challenges for model training: channel
randomness perturbs each worker's model update, and the updates of multiple
workers incur significant interference under limited bandwidth. To address
these challenges, in this work we formulate a novel constrained optimization
problem and propose an FL framework that harnesses wireless channel
perturbations and interference to improve privacy, bandwidth efficiency, and
scalability.
The resulting algorithm, coined analog federated ADMM (A-FADMM), builds on
analog transmissions and the alternating direction method of multipliers
(ADMM). In A-FADMM, all workers upload their model updates to the parameter
server (PS) using a single channel via analog transmissions, during which all
models are perturbed and aggregated over-the-air. This not only saves
communication bandwidth, but also hides each worker's exact model update
trajectory from any eavesdropper, including the honest-but-curious PS, thereby
preserving data privacy against model inversion attacks. We formally prove the
convergence and privacy guarantees of A-FADMM for convex functions under
time-varying channels, and numerically demonstrate its effectiveness under
noisy channels and stochastic non-convex functions, in terms of convergence
speed and scalability, as well as communication bandwidth and energy
efficiency.
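
To make the over-the-air aggregation concrete, the following is a minimal sketch of a generic analog uplink model; the notation $K$, $h_{k,t}$, $x_{k,t}$, $n_t$, and $y_t$ is illustrative and not taken from the paper. When every worker $k$ transmits an analog signal $x_{k,t}$ encoding its model update over the same channel at iteration $t$, the PS observes only the superposition
\[
y_t = \sum_{k=1}^{K} h_{k,t}\, x_{k,t} + n_t,
\]
where $h_{k,t}$ is worker $k$'s fading coefficient and $n_t$ is additive channel noise. The individual terms $h_{k,t} x_{k,t}$ are not separately recoverable from $y_t$, which is why a single channel suffices for all uploads and why each worker's exact update trajectory remains hidden from the PS and any eavesdropper.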