AI Security Portal, K Program
The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks
Abstract
In safety-critical applications such as medical imaging and autonomous driving, where decisions directly affect patient health and road safety, models must maintain both high adversarial robustness against potential adversarial attacks and reliable uncertainty quantification in decision-making. While extensive research has focused on enhancing adversarial robustness through various forms of adversarial training (AT), a notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models. To address this gap, this study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) under the standard adversarial attacks used in the adversarial-defense community. We first show that existing CP methods do not produce informative prediction sets under the commonly used $l_{\infty}$-norm bounded attack if the model is not adversarially trained, which underscores the importance of adversarial training for CP. We next demonstrate that the prediction set size (PSS) of CP with models trained by AT variants is often larger than with standard AT, motivating us to study CP-efficient AT for improved PSS. We propose to optimize a Beta-weighting loss with an entropy-minimization regularizer during AT to improve CP efficiency; our theoretical analysis shows that the Beta-weighting loss is an upper bound of the PSS at the population level. Finally, our empirical study on four image classification datasets across three popular AT baselines validates the effectiveness of the proposed Uncertainty-Reducing AT (AT-UR).
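The prediction sets whose size (PSS) the paper seeks to reduce come from conformal prediction. As background, here is a minimal sketch of split conformal prediction with the common "one minus true-class probability" non-conformity score; the function name and score choice are illustrative, not the paper's exact procedure.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction (a minimal sketch).

    cal_probs:  (n, K) softmax outputs on a held-out calibration set
    cal_labels: (n,)   true class indices for the calibration set
    test_probs: (m, K) softmax outputs on test inputs
    Returns an (m, K) boolean mask; True marks classes in the prediction set.
    """
    n = len(cal_labels)
    # Non-conformity score: one minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level guaranteeing >= 1 - alpha coverage.
    q_level = np.ceil((n + 1) * (1.0 - alpha)) / n
    qhat = np.quantile(scores, min(q_level, 1.0), method="higher")
    # A class enters the set when its score does not exceed the threshold.
    return test_probs >= 1.0 - qhat
```

Under exchangeability, the returned sets cover the true label with probability at least 1 - alpha; an attack that degrades the underlying probabilities forces the threshold down and the sets to grow, which is the inefficiency the proposed AT-UR targets.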
References
Uncertainty sets for image classifiers using conformal prediction
Angelopoulos, A. N., Bates, S., Jordan, M., Malik, J.
Published: 2020
Learnable boundary guided adversarial training
Cui, J., Liu, S., Wang, L., Jia, J.
Published: 2021
Accelerating Monte Carlo Bayesian prediction via approximating predictive uncertainty over the simplex
Cui, Y., Yao, W., Li, Q., Chan, A. B., Xue, C. J.
Published: 2020
Bayesian nested neural networks for uncertainty calibration and adaptive compression
Cui, Y., Liu, Z., Li, Q., Chan, A. B., Xue, C. J.
Published: 2021
Bayes-MIL: A new probabilistic perspective on attention-based multiple instance learning for whole slide images
Cui, Y., Liu, Z., Liu, X., Liu, X., Wang, C., Kuo, T.-W., Xue, C. J., Chan, A. B.
Published: 2023
Variational nested dropout
Cui, Y., Mao, Y., Liu, Z., Li, Q., Chan, A. B., Liu, X., Kuo, T.-W., Xue, C. J.
Published: 2023
Training uncertainty-aware classifiers with conformalized deep learning
Einbinder, B.-S., Romano, Y., Sesia, M., Zhou, Y.
Published: 2022
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Gal, Y., Ghahramani, Z.
Published: 2015
Adversarially robust conformal prediction
Gendler, A., Weng, T.-W., Daniel, L., Romano, Y.
Published: 2021
Probabilistically robust conformal prediction
Ghosh, S., Shi, Y., Belkhouja, T., Yan, Y., Doppa, J., Jones, B.
Published: 2023
Adaptive conformal inference under distribution shift
Gibbs, I., Candes, E.
Published: 2021
Semi-supervised learning by entropy minimization
Grandvalet, Y., Bengio, Y.
Published: 2004
Caltech-256 object category dataset
Griffin, G., Holub, A., Perona, P.
Published: 2007
On calibration of modern neural networks
Guo, C., Pleiss, G., Sun, Y., Weinberger, K. Q.
Published: 2017
Deep residual learning for image recognition
He, K., Zhang, X., Ren, S., Sun, J.
Published: 2016
What uncertainties do we need in Bayesian deep learning for computer vision?
Kendall, A., Gal, Y.
Published: 2017
Variational dropout and the local reparameterization trick
Kingma, D. P., Salimans, T., Welling, M.
Published: 2015
On the effectiveness of adversarial training against common corruptions
Kireev, K., Andriushchenko, M., Flammarion, N.
Published: 2022
Learning multiple layers of features from tiny images
Krizhevsky, A., Hinton, G.
Published: 2009
Distribution-free predictive inference for regression
Lei, J., G’Sell, M., Rinaldo, A., Tibshirani, R. J., Wasserman, L.
Published: 2018
Focal loss for dense object detection
Lin, T.-Y., Goyal, P., Girshick, R., He, K., Dollár, P.
Published: 2017
Probabilistic margins for instance reweighting in adversarial training
Liu, F., Han, B., Liu, T., Gong, C., Niu, G., Zhou, M., Sugiyama, M.
Published: 2021
Boosting adversarial robustness from the perspective of effective margin regularization
Liu, Z., Chan, A. B.
Published: 2022
Improve generalization and robustness of neural networks via weight scale shifting invariant regularizations
Liu, Z., Cui, Y., Chan, A. B.
Published: 2021
Twins: A fine-tuning framework for improved transferability of adversarial robustness and generalization
Liu, Z., Xu, Y., Ji, X., Chan, A. B.
Published: 2023
Towards deep learning models resistant to adversarial attacks
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.
Published: 2017
When does label smoothing help?
Muller, R., Kornblith, S., Hinton, G. E.
Published: 2019
Inductive confidence machines for regression
Papadopoulos, H., Proedrou, K., Vovk, V., Gammerman, A.
Published: 2002
On estimation of a probability density function and mode
Parzen, E.
Published: 1962
Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods
Platt, J.
Published: 1999
Adversarial robustness through local linearization
Qin, C., Martens, J., Gowal, S., Krishnan, D., Dvijotham, K., Fawzi, A., De, S., Stanforth, R., Kohli, P.
Published: 2019
Improving calibration through the relationship with adversarial robustness
Qin, Y., Wang, X., Beutel, A., Chi, E.
Published: 2021
Deep learning for medical image processing: Overview, challenges and the future
Razzak, M. I., Naz, S., Zaib, A.
Published: 2018
Classification with valid and adaptive coverage
Romano, Y., Sesia, M., Candes, E.
Published: 2020
Remarks on some nonparametric estimates of a density function
Rosenblatt, M.
Published: 1956
Do adversarially robust ImageNet models transfer better?
Salman, H., Ilyas, A., Engstrom, L., Kapoor, A., Madry, A.
Published: 2020
A tutorial on conformal prediction
Shafer, G., Vovk, V.
Published: 2008
Machine-learning applications of algorithmic randomness
Vovk, V., Gammerman, A., Saunders, C.
Published: 1999
Algorithmic Learning in a Random World
Vovk, V., Gammerman, A., Shafer, G.
Published: 2005
The Caltech-UCSD Birds-200-2011 dataset
Wah, C., Branson, S., Welinder, P., Perona, P., Belongie, S.
Published: 2011
Improving adversarial robustness requires revisiting misclassified examples
Wang, Y., Zou, D., Yi, J., Bailey, J., Ma, X., Gu, Q.
Published: 2019
Theoretically principled trade-off between robustness and accuracy
Zhang, H., Yu, Y., Jiao, J., Xing, E., El Ghaoui, L., Jordan, M.
Published: 2019