This page covers the attacks and causes that produce the negative impact “Continuous decrease in predictive accuracy, leading to degradation or cessation of functionality or service quality” in the information systems aspect of the AI Security Map, together with the defensive methods and countermeasures against them and the relevant AI technologies, tasks, and data. It also indicates the related elements in the external influence aspect.
Attack or cause
- Poisoning attack
Defensive method or countermeasure
- Detection of poisoned data
- Certified robustness
Targeted AI technology
- DNN
- CNN
- LLM
- Contrastive learning
- FSL
- GNN
- Federated learning
- LSTM
- RNN
Task
- Classification
- Generation
Data
- Image
- Graph
- Text
- Audio
Related external influence aspect
- Reputation
- Usability
- Physical impact
- Psychological impact
- Financial impact
- Economy
- Critical infrastructure
- Medical care
References
Poisoning attack
- Poisoning Attacks against Support Vector Machines, 2012
- Understanding Black-box Predictions via Influence Functions, 2017
- Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization, 2017
- Stronger Data Poisoning Attacks Break Data Sanitization Defenses, 2018
- Online Data Poisoning Attack, 2019
- Data Poisoning Attacks Against Federated Learning Systems, 2020
- PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning, 2022
- Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation, 2022
- Poisoning Web-Scale Training Datasets is Practical, 2023
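The attacks in these references craft poisoning points by optimization; as a minimal sketch of the underlying threat model only, the snippet below assumes scikit-learn is available and simply flips a random fraction of training labels to show how corrupted training data degrades clean test accuracy.

```python
# Toy label-flipping poisoning (assumption: the attacker can corrupt a
# fraction of the training labels). Illustrates the threat model only;
# the cited attacks use gradient-based optimization, not random flips.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def accuracy_under_poison(flip_fraction):
    """Flip labels of a random training subset, retrain, score on clean test data."""
    y_poisoned = y_tr.copy()
    flip_idx = rng.choice(len(y_tr), size=int(flip_fraction * len(y_tr)), replace=False)
    y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]  # binary labels: flip 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for frac in (0.0, 0.1, 0.2, 0.4):
    print(f"poison fraction {frac:.0%}: clean test accuracy {accuracy_under_poison(frac):.3f}")
```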
Detection of poisoned data
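A common form of this countermeasure filters suspicious training points before (re)training. As a hedged sketch, the function below applies a generic loss-based sanitization heuristic, on the assumption that poisoned samples tend to be badly explained by a model fit to the mostly clean data; the model choice and `drop_fraction` are illustrative assumptions, not part of any cited method.

```python
# Generic loss-based sanitization heuristic (assumption: poisoned points
# incur unusually high loss under a preliminary model). Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def filter_high_loss(X_train, y_train, drop_fraction=0.1):
    """Fit a preliminary model, score each point by per-sample cross-entropy,
    and keep only the best-explained (1 - drop_fraction) of the data.
    Assumes integer labels 0..k-1 matching predict_proba's column order."""
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    proba = model.predict_proba(X_train)
    losses = -np.log(proba[np.arange(len(y_train)), y_train] + 1e-12)
    keep = np.argsort(losses)[: int((1 - drop_fraction) * len(y_train))]
    return X_train[keep], y_train[keep]
```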
Certified robustness
- Certified Defenses for Data Poisoning Attacks, 2017
- Certified Robustness to Adversarial Examples with Differential Privacy, 2019
- On Evaluating Adversarial Robustness, 2019
- Certified Adversarial Robustness via Randomized Smoothing, 2019
- Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation, 2021
- Certified Robustness for Large Language Models with Self-Denoising, 2023
- RAB: Provable Robustness Against Backdoor Attacks, 2023
- (Certified!!) Adversarial Robustness for Free!, 2023
- Certifying LLM Safety against Adversarial Prompting, 2024
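Several of the references above build on randomized smoothing. As a simplified sketch of its certification step, following the 2019 randomized-smoothing paper cited above, the function below estimates the smoothed classifier's top-class probability p_A by Monte Carlo sampling and converts it into a certified L2 radius σ·Φ⁻¹(p_A); `base_classifier`, `sigma`, and `n_samples` are illustrative assumptions, and a production implementation would use a statistical lower bound on p_A rather than the plug-in estimate used here.

```python
# Sketch of the certification step in randomized smoothing: if the smoothed
# classifier's top-class probability under Gaussian noise N(0, sigma^2 I)
# is p_A > 1/2, the prediction is certifiably stable within L2 radius
# sigma * Phi^{-1}(p_A). `base_classifier` is a hypothetical callable
# mapping a numpy array to a class label.
import numpy as np
from scipy.stats import norm

def certify(base_classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Monte Carlo estimate of the smoothed prediction and its certified radius."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    labels = np.array([base_classifier(x + eps) for eps in noise])
    classes, counts = np.unique(labels, return_counts=True)
    top = counts.argmax()
    p_a = counts[top] / n_samples             # plug-in estimate of p_A
    if p_a <= 0.5:
        return None, 0.0                      # abstain: no certificate
    p_a = min(p_a, 1.0 - 1e-9)                # avoid infinite radius at p_a == 1
    return classes[top], sigma * norm.ppf(p_a)  # certified L2 radius
```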