Abstract
Due to the widespread availability of data, machine learning (ML) algorithms
are increasingly being deployed in distributed topologies, wherein various
nodes collaborate to train ML models under the coordination of a central server.
However, distributed learning approaches face significant vulnerabilities,
primarily stemming from two potential threats. Firstly, the presence of
Byzantine nodes poses a risk of corrupting the learning process by transmitting
inaccurate information to the server. Secondly, a curious server may compromise
the privacy of individual nodes, sometimes reconstructing the entirety of the
nodes' data. Homomorphic encryption (HE) has emerged as a leading security
measure to preserve privacy in distributed learning under non-Byzantine
scenarios. However, the extensive computational demands of HE, particularly for
high-dimensional ML models, have deterred attempts to design purely homomorphic
operators for non-linear robust aggregators. This paper introduces SABLE, the
first homomorphic and Byzantine robust distributed learning algorithm. SABLE
leverages HTS, a novel and efficient homomorphic operator implementing the
prominent coordinate-wise trimmed mean robust aggregator. Designing HTS enables
us to implement HMED, a novel homomorphic median aggregator. Extensive
experiments on standard ML tasks demonstrate that SABLE achieves practical
execution times while maintaining ML accuracy comparable to its non-private
counterpart.
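
To make the aggregators concrete, the following is a minimal plaintext (non-homomorphic) sketch of the coordinate-wise trimmed mean and coordinate-wise median referenced above, assuming `f` denotes the number of extreme values trimmed per coordinate; function names and the toy data are illustrative, not from the paper.

```python
import numpy as np

def coordinate_wise_trimmed_mean(updates, f):
    """Per coordinate: sort the nodes' values, drop the f smallest
    and f largest, and average the remaining values."""
    X = np.sort(np.stack(updates), axis=0)  # shape (n, d), sorted per coordinate
    return X[f:len(updates) - f].mean(axis=0)

def coordinate_wise_median(updates):
    """Per coordinate: take the median of the nodes' values."""
    return np.median(np.stack(updates), axis=0)

# Toy example: 5 nodes, one of which is Byzantine and sends an outlier.
updates = [np.array([1.0, 2.0]), np.array([1.1, 2.1]),
           np.array([0.9, 1.9]), np.array([1.0, 2.0]),
           np.array([100.0, -100.0])]  # Byzantine outlier

print(coordinate_wise_trimmed_mean(updates, f=1))  # outlier trimmed away
print(coordinate_wise_median(updates))
```

Both aggregators are non-linear (they require per-coordinate sorting), which is precisely what makes them costly to express homomorphically and motivates the HTS and HMED operators.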