Abstract
Machine learning-based hardware malware detectors (HMDs) offer a potentially game-changing advantage in defending systems against malware. However, HMDs suffer from adversarial attacks: they can be effectively reverse-engineered and subsequently evaded, allowing malware to hide from detection. We address this issue by proposing novel HMDs (Stochastic-HMDs) that leverage approximate computing to make HMDs' inference computation stochastic, thereby making HMDs resilient against adversarial evasion attacks. Specifically, we propose to leverage voltage overscaling to induce stochastic computation in the HMD's model. We show that this technique makes HMDs more resilient to both black-box adversarial attack scenarios, i.e., reverse-engineering and transferability. Our experimental results demonstrate that Stochastic-HMDs offer an effective defense against adversarial attacks along with by-product power savings, without requiring any changes to the hardware/software or to the HMDs' model, i.e., no retraining or fine-tuning is needed. Moreover, based on recent results in probably approximately correct (PAC) learnability theory, we show that Stochastic-HMDs are provably more difficult to reverse-engineer.
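To make the mechanism concrete, below is a minimal Python sketch (not the authors' implementation) of the core idea: emulating voltage-overscaling-induced timing errors by randomly perturbing a detector's inference computation, so that repeated identical queries can return different labels. All names (StochasticHMD, error_rate, the toy two-layer model) are hypothetical illustrations, and software-level noise injection here merely stands in for the hardware-level errors the paper exploits.

```python
import numpy as np

rng = np.random.default_rng(0)

class StochasticHMD:
    """A toy two-layer malware detector whose inference is made stochastic."""

    def __init__(self, in_dim=16, hidden=32, error_rate=0.02):
        self.W1 = rng.normal(0, 1, (in_dim, hidden))
        self.W2 = rng.normal(0, 1, (hidden, 1))
        # Probability that any intermediate value is corrupted; a software
        # stand-in for the timing-error rate induced by voltage overscaling.
        self.error_rate = error_rate

    def _inject_errors(self, x):
        # Emulate approximate-computing faults: each element is independently
        # perturbed with probability `error_rate`.
        mask = rng.random(x.shape) < self.error_rate
        noise = rng.normal(0, np.abs(x).mean() + 1e-8, x.shape)
        return np.where(mask, x + noise, x)

    def predict(self, x):
        h = np.maximum(0, self._inject_errors(x @ self.W1))  # ReLU layer
        logit = self._inject_errors(h @ self.W2)
        return int(logit.item() > 0)  # 1 = malware, 0 = benign

# An attacker probing the same input repeatedly observes inconsistent labels,
# which degrades the label oracle needed for reverse-engineering the model.
hmd = StochasticHMD()
x = rng.normal(0, 1, (1, 16))
labels = [hmd.predict(x) for _ in range(20)]
print("labels from 20 identical queries:", labels)
```

Under this sketch's assumptions, the noisy label oracle is what connects to the PAC-learnability argument: an attacker learning a surrogate of the detector from query responses faces a harder (noisier) learning problem than with a deterministic HMD.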