Recent work has highlighted the vulnerability of many deep learning
models to adversarial examples. This has drawn increasing attention to
adversarial attacks, which can be used to evaluate the security and
robustness of models before they are deployed. However, to the best of
our knowledge, there has been no specific research on adversarial
attacks against multi-view deep models. This paper
proposes two multi-view attack strategies, two-stage attack (TSA) and
end-to-end attack (ETEA). Under the mild assumption that the single-view
model underlying the target multi-view model is known, we first propose the
TSA strategy. The main idea of TSA is to attack the multi-view model with
adversarial examples generated by attacking the associated single-view model,
by which state-of-the-art single-view attack methods are directly extended to
the multi-view scenario. Then we further propose the ETEA strategy for the
case where the multi-view model itself is publicly available. ETEA mounts
direct attacks on the target multi-view model, for which we develop three
effective multi-view attack methods. Finally, based on the fact that
adversarial examples transfer well across different models, this paper takes
the adversarial attack on the multi-view convolutional neural network as an
example to validate the effectiveness of the proposed multi-view attacks.
Extensive experimental
results demonstrate that our multi-view attack strategies are capable of
attacking multi-view deep models, and we additionally find that multi-view
models are more robust than their single-view counterparts.
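The transfer-based (TSA) versus direct (ETEA) distinction can be sketched in a
toy setting. The snippet below uses two hypothetical linear "single-view"
scorers whose outputs are averaged into a "multi-view" score; the FGSM-style
sign step, the averaging fusion, and all weights are illustrative assumptions,
not the paper's actual models or attack methods.

```python
import numpy as np

# Toy setting: two linear single-view scorers, fused by score averaging.
# Everything here is a hypothetical stand-in for illustration only.
rng = np.random.default_rng(0)

x_view1 = rng.normal(size=8)   # view 1 of one sample
x_view2 = rng.normal(size=8)   # view 2 of the same sample
w1 = rng.normal(size=8)        # the known single-view model (view 1)
w2 = rng.normal(size=8)        # second view's weights

def single_view_score(x):
    """Score of the known single-view model; the sign gives the class."""
    return float(w1 @ x)

def multi_view_score(xv1, xv2):
    """Toy multi-view model: average-fusion of the per-view scores."""
    return 0.5 * (float(w1 @ xv1) + float(w2 @ xv2))

eps = 0.5
clean = multi_view_score(x_view1, x_view2)

# TSA idea: attack only the known single-view model (for a linear score
# the input gradient is just w1), then transfer the perturbed view.
s1 = single_view_score(x_view1)
x1_tsa = x_view1 - eps * np.sign(s1) * np.sign(w1)
tsa = multi_view_score(x1_tsa, x_view2)

# ETEA idea: perturb both views directly against the fused score.
x1_etea = x_view1 - eps * np.sign(clean) * np.sign(w1)
x2_etea = x_view2 - eps * np.sign(clean) * np.sign(w2)
etea = multi_view_score(x1_etea, x2_etea)

print(f"clean={clean:+.3f}  TSA={tsa:+.3f}  ETEA={etea:+.3f}")
```

In this toy setup the direct attack always shifts the fused score further than
the transferred one, since it uses the gradient of the fused score itself;
this mirrors the intuition that attacking the multi-view model directly is
stronger than transferring single-view adversarial examples.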