Abstract
Adversarial machine learning attacks on video action recognition models are a
growing research area, and many effective attacks have been introduced in recent
years. These attacks show that action recognition models can be breached in
many ways; hence, using these models in practice raises significant security
concerns. However, very few works focus on defending against or detecting such
attacks. In this work, we propose a novel universal detection method that is
compatible with any action recognition model. In extensive experiments, we show
that our method consistently detects various attacks against different target
models with high true positive rates while maintaining very low false positive
rates. Tested against four state-of-the-art attacks targeting four action
recognition models, the proposed detector achieves an average AUC of 0.911 over
16 test cases, whereas the best-performing existing detector achieves an
average AUC of 0.645. This 41.2% improvement stems from the robustness of the
proposed detector to varying attack methods and target models. The lowest AUC
achieved by our detector across the 16 test cases is 0.837, while the competing
detector's performance drops as low as 0.211. We also show that the proposed
detector is robust to varying attack strengths. In addition, we analyze our
method's real-time performance on different hardware setups to demonstrate its
potential as a practical defense mechanism.