Distributed Support Vector Machines (DSVM) have been developed to solve
large-scale classification problems in networked systems with a large number of
sensors and control units. However, such networked systems become more
vulnerable to attacks, since detecting and defending against adversaries is
increasingly difficult and expensive. This work aims
to develop secure and resilient DSVM algorithms for adversarial environments
in which an attacker can manipulate the training data to achieve its objective.
We establish a game-theoretic framework to capture the conflicting interests
between an adversary and a set of distributed data processing units. The Nash
equilibrium of the game allows us to predict the outcome of learning algorithms
in adversarial environments and to enhance the resilience of the machine
learning algorithms through dynamic distributed learning. We prove that the convergence
of the distributed algorithm is guaranteed without assumptions on the training
data or network topologies. Numerical experiments are conducted to corroborate
the results. We show that network topology plays an important role in the
security of DSVM. Networks with fewer nodes and higher average degrees are more
secure. Moreover, a balanced network is found to be less vulnerable to attacks.
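As a rough illustration of the learner-attacker interaction the abstract refers to, the sketch below trains a linear SVM by subgradient descent while an attacker perturbs the training features within a fixed budget, with the two players alternating best responses. The synthetic data, the zero-sum payoff, the L2 perturbation budget, and all hyperparameters are assumptions made for this toy example only; the paper's actual game formulation, its distributed (multi-node) update rules, and its convergence guarantees are not reproduced here.

```python
# Toy learner-vs-attacker game for a linear SVM (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data (assumed purely for illustration).
n, d = 200, 2
shift = np.where(rng.random(n) < 0.5, 2.0, -2.0)[:, None]
X = rng.normal(size=(n, d)) + shift
y = np.sign(X.sum(axis=1))
y[y == 0] = 1.0

lam = 0.1      # regularization weight (assumed)
budget = 0.5   # attacker's per-sample L2 perturbation budget (assumed)
eta = 0.01     # learner's step size (assumed)

w = np.zeros(d)
delta = np.zeros_like(X)

for t in range(200):
    # Attacker best response: within the L2 budget, the hinge loss of each
    # sample increases fastest in the direction -y_i * w.
    norm_w = np.linalg.norm(w)
    if norm_w > 1e-12:
        delta = -budget * (y[:, None] * w[None, :]) / norm_w

    # Learner response: one subgradient step on the regularized hinge loss
    # evaluated at the attacker-perturbed data.
    Xp = X + delta
    margins = y * (Xp @ w)
    active = margins < 1.0
    grad = lam * w
    if active.any():
        grad = grad - (y[active, None] * Xp[active]).mean(axis=0)
    w = w - eta * grad

print("robust weights:", w)
print("accuracy on clean data:", np.mean(np.sign(X @ w) == y))
```

In the distributed setting the abstract describes, one would expect each node to run a similar learner update on its local data and exchange estimates with its neighbors to reach consensus, while the attacker manipulates the data held by a subset of nodes; those details are left to the paper itself.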