Revisiting Static Feature-Based Android Malware Detection

Authors: Md Tanvirul Alam, Dipkamal Bhusal, Nidhi Rastogi | Published: 2024-09-11
Dataset Generation
Poisoning
Model Performance Evaluation

2DSig-Detect: a semi-supervised framework for anomaly detection on image data using 2D-signatures

Authors: Xinheng Xie, Kureha Yamaguchi, Margaux Leblanc, Simon Malzard, Varun Chhabra, Victoria Nockles, Yue Wu | Published: 2024-09-08 | Updated: 2025-03-20
Backdoor Attack
Poisoning
Evaluation Method

Enhancing Quantum Security over Federated Learning via Post-Quantum Cryptography

Authors: Pingzhi Li, Tianlong Chen, Junyu Liu | Published: 2024-09-06
Poisoning
Communication Efficiency
Quantum Cryptography Technology

The Dark Side of Human Feedback: Poisoning Large Language Models via User Inputs

Authors: Bocheng Chen, Hanqing Guo, Guangjing Wang, Yuanda Wang, Qiben Yan | Published: 2024-09-01
LLM Performance Evaluation
Prompt Injection
Poisoning

Comprehensive Botnet Detection by Mitigating Adversarial Attacks, Navigating the Subtleties of Perturbation Distances and Fortifying Predictions with Conformal Layers

Authors: Rahul Yumlembam, Biju Issac, Seibu Mary Jacob, Longzhi Yang | Published: 2024-09-01
Poisoning
Adversarial Example
Evaluation Method

Analyzing Inference Privacy Risks Through Gradients in Machine Learning

Authors: Zhuohang Li, Andrew Lowy, Jing Liu, Toshiaki Koike-Akino, Kieran Parsons, Bradley Malin, Ye Wang | Published: 2024-08-29
Privacy Protection Method
Poisoning
Membership Inference

Large Language Models are Good Attackers: Efficient and Stealthy Textual Backdoor Attacks

Authors: Ziqiang Li, Yueqi Zeng, Pengfei Xia, Lei Liu, Zhangjie Fu, Bin Li | Published: 2024-08-21
Backdoor Attack
Poisoning

Robust Image Classification: Defensive Strategies against FGSM and PGD Adversarial Attacks

Authors: Hetvi Waghela, Jaydip Sen, Sneha Rakshit | Published: 2024-08-20
Poisoning
Adversarial Example
Defense Method

Transferring Backdoors between Large Language Models by Knowledge Distillation

Authors: Pengzhou Cheng, Zongru Wu, Tianjie Ju, Wei Du, Zhuosheng Zhang, Gongshen Liu | Published: 2024-08-19
LLM Security
Backdoor Attack
Poisoning

Regularization for Adversarial Robust Learning

Authors: Jie Wang, Rui Gao, Yao Xie | Published: 2024-08-19 | Updated: 2024-08-22
Algorithm
Poisoning
Regularization