Abstract
Due to the vulnerability of deep neural networks (DNNs) to adversarial
examples, numerous defense techniques have been proposed in recent years to
alleviate this problem. However, progress toward more robust models is often
hampered by incomplete or incorrect robustness evaluations. To accelerate
research on reliably evaluating the adversarial robustness of current defense
models in image classification, the TSAIL
group at Tsinghua University and the Alibaba Security group organized this
competition along with a CVPR 2021 workshop on adversarial machine learning
(https://aisecure-workshop.github.io/amlcvpr2021/). The purpose of this
competition was to motivate the development of novel attack algorithms that
evaluate adversarial robustness more effectively and reliably. Participants
were encouraged to develop stronger white-box attacks to find the worst-case robustness
of different defenses. The competition was conducted on ARES
(https://github.com/thu-ml/ares), an adversarial robustness evaluation
platform, and was hosted on the TianChi platform
(https://tianchi.aliyun.com/competition/entrance/531847/introduction) as part
of the AI Security Challengers Program series. After the competition, we
summarized the results and established a new adversarial robustness benchmark
at https://ml.cs.tsinghua.edu.cn/ares-bench/, which allows users to upload
adversarial attack algorithms and defense models for evaluation.
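To illustrate the kind of white-box attack the competition solicited, the sketch below implements projected gradient descent (PGD), a standard baseline for worst-case robustness evaluation. It is a minimal toy, not any participant's method: a linear softmax classifier stands in for a DNN so the gradient can be computed by hand, and all names (`pgd_attack`, the step sizes, the model) are illustrative assumptions, not part of ARES or the competition code.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pgd_attack(W, x, y, eps=0.1, alpha=0.02, steps=10):
    """L_inf PGD against a toy linear softmax classifier (stand-in for a DNN).

    Ascends the cross-entropy loss of the true label y, projecting each
    iterate back into the eps-ball around x and the valid pixel range [0, 1].
    """
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_adv)
        p[y] -= 1.0                                # d(loss)/d(logits) = softmax - onehot
        grad = W.T @ p                             # chain rule through logits = W @ x_adv
        x_adv = x_adv + alpha * np.sign(grad)      # signed-gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into the eps-ball around x
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep pixels in the valid range
    return x_adv

# Usage on random toy data (hypothetical example, not competition data).
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 8))                    # 3 classes, 8 "pixels"
x = rng.random(8)
y = int(np.argmax(W @ x))                          # model's original prediction
x_adv = pgd_attack(W, x, y)
print(np.max(np.abs(x_adv - x)) <= 0.1 + 1e-9)     # prints True: perturbation bounded
```

A real evaluation would backpropagate through the full network instead of using the closed-form linear gradient, and stronger attacks (e.g., with restarts or adaptive step sizes) tighten the worst-case estimate further.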