Abstract
The wide acceptance of the Internet of Things (IoT) for both household and industrial applications is accompanied by several security concerns. A major concern is the probable abuse of IoT devices by adversaries for malicious purposes. Understanding and analyzing IoT malicious behaviors is crucial, especially given their rapid growth and adoption in a wide range of applications. However, recent studies have shown that machine learning-based approaches are susceptible to adversarial attacks, for example, by adding junk code to binaries with the intention of fooling machine learning or deep learning-based detection systems. Realizing the importance of addressing this challenge, this study proposes a malware detection system that is robust to adversarial attacks. To do so, we examine the performance of state-of-the-art methods against adversarial IoT software crafted using graph embedding and augmentation techniques. In particular, we study the robustness of such methods against two black-box adversarial methods, GEA and SGEA, which generate Adversarial Examples (AEs) with reduced overhead while keeping their practicality intact. Our comprehensive experimentation with GEA-based AEs shows the relation between misclassification and the graph size of the injected sample. Upon optimization and with small perturbations, using SGEA, all IoT malware samples are misclassified as benign. This highlights the vulnerability of current detection systems in adversarial settings. Given the landscape of possible adversarial attacks, we then propose DL-FHMC, a fine-grained hierarchical learning approach for malware detection and classification that is robust to AEs, with the capability to detect 88.52% of malicious AEs.
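To make the graph embedding and augmentation (GEA) idea concrete, the following is a minimal sketch, assuming control-flow graphs are represented as networkx digraphs. The function name gea_augment, the entry/exit wiring, and the node relabeling scheme are illustrative assumptions for exposition, not the authors' exact construction; the point is only that composing a benign graph with a malicious one shifts graph-level features while the malicious code path remains reachable.

```python
import networkx as nx


def gea_augment(malware_cfg: nx.DiGraph, benign_cfg: nx.DiGraph) -> nx.DiGraph:
    """Embed a benign control-flow graph into a malicious one (illustrative).

    The combined graph keeps the malware's original structure but adds the
    benign graph's nodes and edges, which shifts graph-level features
    (node count, density, centrality statistics) toward benign samples.
    """
    # Relabel benign nodes to avoid id collisions with the malware graph.
    benign = nx.relabel_nodes(benign_cfg, {n: f"b_{n}" for n in benign_cfg.nodes})
    combined = nx.compose(malware_cfg, benign)

    # Wire the malware entry node to the injected benign subgraph and back,
    # so the injected blocks are reachable and the sample stays functional
    # (an assumed connection strategy for this sketch).
    mal_entry = next(iter(malware_cfg.nodes))
    ben_entry = next(iter(benign.nodes))
    combined.add_edge(mal_entry, ben_entry)
    for node in benign.nodes:
        if benign.out_degree(node) == 0:
            combined.add_edge(node, mal_entry)
    return combined
```

Under this sketch, an SGEA-style variant would search for the smallest benign subgraph that still flips the classifier's decision, which is consistent with the abstract's emphasis on reduced perturbation overhead.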