With a large number of sensors and control units in networked systems,
distributed support vector machines (DSVMs) play a fundamental role in scalable
and efficient multi-sensor classification and prediction tasks. However, DSVMs
are vulnerable to adversaries who can modify and generate data to deceive the
system into misclassification and misprediction. This work aims to design
defense strategies for a DSVM learner against a potential adversary. We establish a
game-theoretic framework to capture the conflicting interests between the DSVM
learner and the attacker. The Nash equilibrium of the game allows us to predict
the outcome of learning algorithms in adversarial environments and to enhance
the resilience of machine learning through dynamic distributed learning
algorithms. We show that the DSVM learner is less vulnerable when it uses a
balanced network with fewer nodes and a higher degree. We also show that adding
more training samples is an effective defense strategy against an attacker.
We present secure and resilient DSVM algorithms with a verification method and a
rejection method, and demonstrate their resilience against the adversary through
numerical experiments.
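The Nash-equilibrium idea above can be illustrated with a toy sketch (not taken from the paper): model the learner and the attacker as a two-player zero-sum game over a hypothetical payoff matrix of classification accuracies, where the learner (row player) maximizes accuracy and the attacker (column player) minimizes it. A pure-strategy equilibrium, if one exists, is a saddle point from which neither player benefits by deviating; it predicts the outcome of the interaction. The strategies and payoff values below are invented for illustration only.

```python
# Hypothetical payoffs: learner's classification accuracy under each
# (learner strategy, attacker strategy) pair. Learner maximizes, attacker
# minimizes. Values are illustrative, not from the paper's experiments.
payoff = [
    [0.90, 0.60],  # learner row 0: baseline training
    [0.85, 0.75],  # learner row 1: add extra training samples
]

def pure_nash(M):
    """Return all (row, col) saddle points: cells where the learner cannot
    improve by switching rows and the attacker cannot improve by switching
    columns."""
    equilibria = []
    for i, row in enumerate(M):
        for j, v in enumerate(row):
            row_best = all(v >= M[k][j] for k in range(len(M)))   # learner's best response
            col_best = all(v <= row[l] for l in range(len(row)))  # attacker's best response
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

print(pure_nash(payoff))  # → [(1, 1)]: equilibrium at (add samples, attack 1)
```

In this toy game the equilibrium selects the "add extra training samples" row, mirroring the abstract's claim that adding training samples is an effective defense; games without a pure-strategy saddle point would instead require mixed strategies (e.g., via linear programming).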