Certifiable Black-Box Attacks with Randomized Adversarial Examples: Breaking Defenses with Provable Confidence
Abstract
Black-box adversarial attacks have demonstrated strong potential to compromise machine learning models by iteratively querying the target model or by leveraging transferability from a local surrogate model. However, such attacks can now be effectively mitigated by state-of-the-art (SOTA) defenses, e.g., detection via the pattern of sequential queries, or noise injection into the model. To the best of our knowledge, we take the first step toward a new paradigm of black-box attacks with provable guarantees -- certifiable black-box attacks that guarantee the attack success probability (ASP) of adversarial examples before querying the target model. Compared to traditional empirical black-box attacks, this new attack unveils significant vulnerabilities of machine learning models: it breaks strong SOTA defenses with provable confidence, constructs a space of (infinitely many) adversarial examples with high ASP, and theoretically guarantees the ASP of the generated adversarial examples without verification queries to the target model. Specifically, we establish a novel theoretical foundation for ensuring the ASP of black-box attacks with randomized adversarial examples (AEs). We then propose several novel techniques to craft the randomized AEs while reducing the perturbation size for better imperceptibility. Finally, we comprehensively evaluate the certifiable black-box attacks on the CIFAR10/100, ImageNet, and LibriSpeech datasets, benchmarking against 16 SOTA black-box attacks and various SOTA defenses in the domains of computer vision and speech recognition. Both theoretical and experimental results validate the significance of the proposed attack. The code and all the benchmarks are available at \url{https://github.com/datasec-lab/CertifiedAttack}.
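The abstract does not detail the paper's certification machinery, but the core idea of bounding the ASP of a randomized AE before any target query can be illustrated in a minimal sketch. The sketch below is our own assumption-laden stand-in: it uses Gaussian randomization around a candidate AE, a local surrogate classifier, and a one-sided Hoeffding confidence bound (the paper may use different randomization and tighter bounds); `surrogate_misclassifies` is a hypothetical callback, not part of the released code.

```python
import math
import random

def asp_lower_bound(successes, trials, alpha=0.01):
    """One-sided Hoeffding lower confidence bound on the true attack
    success probability (ASP) estimated from Monte Carlo trials.
    With probability >= 1 - alpha, the true ASP exceeds this bound."""
    p_hat = successes / trials
    margin = math.sqrt(math.log(1.0 / alpha) / (2.0 * trials))
    return max(0.0, p_hat - margin)

def certify_randomized_ae(surrogate_misclassifies, x_adv,
                          sigma=0.25, n=1000, alpha=0.01, seed=0):
    """Sample n Gaussian-randomized copies of a candidate adversarial
    example and certify a lower bound on its ASP using only a local
    surrogate -- no queries to the target model are issued."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x_adv]
        if surrogate_misclassifies(noisy):
            successes += 1
    return asp_lower_bound(successes, n, alpha)

# Toy usage: a surrogate that "misclassifies" any input whose mean is
# positive, and a candidate AE pushed well into that region.
if __name__ == "__main__":
    surrogate = lambda x: sum(x) / len(x) > 0.0
    x_adv = [2.0] * 16  # far from the decision boundary
    bound = certify_randomized_ae(surrogate, x_adv)
    print(f"certified ASP lower bound: {bound:.3f}")
```

The design point mirrored here is the one the abstract emphasizes: the bound is computed entirely offline, so the attacker spends zero queries on the defended target and evades sequential-query detection by construction.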