Abstract
The proliferation of AI technology gives rise to a variety of security
threats, which significantly compromise the confidentiality and integrity of AI
models and applications. Existing software-based solutions each mainly target
one specific attack and must be implemented within the models themselves,
rendering them less practical. We design UniGuard, a novel unified and non-intrusive detection
methodology to safeguard FPGA-based AI accelerators. The core idea of UniGuard
is to harness power side-channel information generated during model inference
to spot anomalies. We employ a Time-to-Digital Converter (TDC) to capture power
fluctuations and train a supervised machine learning model to identify various
types of threats. Evaluations demonstrate that UniGuard achieves 94.0%
attack detection accuracy, generalizes well to unknown or adaptive
attacks, and remains robust across varied configurations (e.g., sensor frequency
and location).
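
The abstract describes a pipeline that classifies power traces captured during model inference to detect attacks. The following is a minimal sketch of such a supervised detection stage, assuming synthetic stand-in data; the trace length, label set, and the choice of a random-forest classifier are illustrative assumptions, not the paper's stated design.

```python
# Illustrative sketch: classify power-side-channel traces as benign or as one
# of several attack classes. All data here is synthetic; a real deployment
# would use traces captured by the on-chip TDC sensor during inference.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in for TDC power traces: one row per inference run,
# one column per sensor sample; label 0 = benign, 1..3 = attack classes.
X = rng.normal(size=(2000, 512))
y = rng.integers(0, 4, size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Train a supervised classifier on labeled traces, then report held-out
# detection accuracy (the paper reports 94.0% on real traces).
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print("detection accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Because the sensor sits outside the accelerated model, this stage needs no changes to the model itself, which is what makes the approach non-intrusive.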