Recent advances in sensor technologies and smart devices enable the
collaborative collection of massive volumes of data from multiple information
sources. As a promising tool to efficiently extract useful information from
such big data, machine learning has been pushed to the forefront and seen great
success in a wide range of relevant areas such as computer vision, health care,
and financial market analysis. To accommodate such large volumes of data, there
has been a surge of interest in the design of distributed machine learning
methods, among which stochastic gradient descent (SGD) is one of the most widely adopted.
Nonetheless, distributed machine learning methods may be vulnerable to
Byzantine attacks, in which an adversary deliberately shares falsified
information to disrupt the intended learning procedure. Therefore, two
asynchronous Byzantine-tolerant SGD algorithms are proposed in this work, in
which the honest collaborative workers are assumed to store the model
parameters derived from their own local data and use them as the ground truth.
The proposed algorithms can tolerate an arbitrary number of Byzantine
attackers and are provably convergent. Simulation results on a real-world
dataset verify the theoretical results and demonstrate the effectiveness of
the proposed algorithms.
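For intuition only, the sketch below illustrates one way an honest worker might use its locally derived parameters as a ground truth to screen asynchronously received updates before aggregation. The threshold `tau`, the helper names, and the simple averaging rule are illustrative assumptions for this sketch, not the paper's exact algorithms.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): an honest worker
# keeps a locally trained parameter vector as its "ground truth" and only
# accepts asynchronously received parameters that lie close to it.

def local_sgd_step(params, grad_fn, lr=0.01):
    """One SGD step on the worker's own local data (grad_fn is assumed)."""
    return params - lr * grad_fn(params)

def screen_and_aggregate(local_params, received_params, tau=1.0):
    """Keep only received parameter vectors within distance tau of the
    locally stored ground truth, then average them with the local copy.
    tau is a hypothetical threshold; the paper's acceptance rule may differ."""
    accepted = [p for p in received_params
                if np.linalg.norm(p - local_params) <= tau]
    # If every received update looks Byzantine, fall back to the local copy.
    if not accepted:
        return local_params
    return np.mean([local_params] + accepted, axis=0)

# Toy usage with a quadratic loss ||x - 1||^2 standing in for local data.
rng = np.random.default_rng(0)
params = rng.normal(size=5)
grad_fn = lambda x: 2.0 * (x - 1.0)

for _ in range(100):
    params = local_sgd_step(params, grad_fn)          # local ground truth
    honest = [params + 0.01 * rng.normal(size=5)]     # benign neighbor
    byzantine = [rng.normal(scale=100.0, size=5)]     # falsified update
    params = screen_and_aggregate(params, honest + byzantine)

print(params)  # converges near the all-ones optimum despite the attacker
```

In this toy setting the distance test lets the worker discard the falsified update regardless of how many attackers submit one, which is the intuition behind tolerating an arbitrary number of Byzantine workers; the actual acceptance criteria and convergence guarantees are developed in the paper.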