Abstract
Source code attribution approaches have achieved remarkable accuracy thanks to rapid advances in deep learning. However, recent studies shed light on their vulnerability to adversarial attacks. In particular, they can easily be deceived by adversaries who attempt either to create a forgery of another author or to mask the original author. To address these emerging issues, we formulate this security challenge as a general threat model, the relational adversary, which allows an arbitrary number of semantics-preserving transformations to be applied to an input in any problem space. Our theoretical investigation establishes the conditions for robustness and examines the trade-off between robustness and accuracy in depth. Motivated by these insights, we present a novel learning framework, normalize-and-predict (N&P), that in theory guarantees the robustness of any authorship-attribution approach. We conduct an extensive evaluation of N&P in defending two of the latest authorship-attribution approaches against state-of-the-art attack methods. Our evaluation demonstrates that N&P improves accuracy on adversarial inputs by as much as 70%. N&P also increases robust accuracy to 45% higher than adversarial training while running over 40 times faster.
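The normalize-and-predict idea described above can be sketched roughly as follows: map every input to a canonical representative of its equivalence class under the modeled semantics-preserving transformations, then classify only the canonical form. This is a minimal illustrative sketch, not the paper's implementation; the normalizer, classifier, and all names below are hypothetical stand-ins.

```python
# Hypothetical sketch of the normalize-and-predict (N&P) idea:
# if two inputs are related by the modeled transformations, they map to
# the same canonical form, so the prediction is invariant to the attack.

def normalize(code: str) -> str:
    """Toy normalizer: collapse whitespace and lowercase the text,
    standing in for real semantics-preserving rewrites of source code."""
    return " ".join(code.split()).lower()

def normalize_and_predict(classifier, code: str):
    # Classify the canonical form rather than the raw (possibly
    # adversarially transformed) input.
    return classifier(normalize(code))

# Usage: a trivial stand-in "classifier" keyed on the canonical form.
toy_model = lambda canon: "author_A" if "printf" in canon else "author_B"

# Two inputs that differ only by whitespace/case normalize identically,
# so they receive the same attribution.
assert normalize_and_predict(toy_model, "  PRINTF( x )") == \
       normalize_and_predict(toy_model, "printf(x)")
```

In this toy setting the robustness guarantee is immediate: any attack confined to the transformations the normalizer cancels cannot change the prediction, which mirrors the role normalization plays in the framework described above.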