Literature Database

Robustness bounds on the successful adversarial examples in probabilistic models: Implications from Gaussian processes
Authors: Hiroaki Maeshima, Akira Otsuka | Published: 2024-03-04 | Updated: 2025-03-19
Tags: Attack Method, Adversarial Example, Watermark Evaluation

AutoAttacker: A Large Language Model Guided System to Implement Automatic Cyber-attacks
Authors: Jiacen Xu, Jack W. Stokes, Geoff McDonald, Xuesong Bai, David Marshall, Siyue Wang, Adith Swaminathan, Zhou Li | Published: 2024-03-02
Tags: LLM Security, Prompt Injection, Attack Method

Attacking Delay-based PUFs with Minimal Adversary Model
Authors: Hongming Fei, Owen Millwood, Prosanta Gope, Jack Miskelly, Biplab Sikdar | Published: 2024-03-01
Tags: Evaluation Methods for PUF, Model Performance Evaluation, Attack Method

Coercing LLMs to do and reveal (almost) anything
Authors: Jonas Geiping, Alex Stein, Manli Shu, Khalid Saifullah, Yuxin Wen, Tom Goldstein | Published: 2024-02-21
Tags: LLM Security, Prompt Injection, Attack Method

The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative
Authors: Zhen Tan, Chengshuai Zhao, Raha Moraffah, Yifan Li, Yu Kong, Tianlong Chen, Huan Liu | Published: 2024-02-20 | Updated: 2024-06-03
Tags: LLM Security, Classification of Malicious Actors, Attack Method

IT Intrusion Detection Using Statistical Learning and Testbed Measurements
Authors: Xiaoxuan Wang, Rolf Stadler | Published: 2024-02-20
Tags: CVE Information Extraction, Intrusion Detection System, Attack Method

Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning
Authors: Shuai Zhao, Leilei Gan, Luu Anh Tuan, Jie Fu, Lingjuan Lyu, Meihuizi Jia, Jinming Wen | Published: 2024-02-19 | Updated: 2024-03-29
Tags: Backdoor Detection, Attack Method, Defense Method

Manipulating hidden-Markov-model inferences by corrupting batch data
Authors: William N. Caballero, Jose Manuel Camacho, Tahir Ekin, Roi Naveiro | Published: 2024-02-19
Tags: Quantification of Uncertainty, Attack Evaluation, Attack Method

FedRDF: A Robust and Dynamic Aggregation Function against Poisoning Attacks in Federated Learning
Authors: Enrique Mármol Campos, Aurora González Vidal, José Luis Hernández Ramos, Antonio Skarmeta | Published: 2024-02-15
Tags: Poisoning, Attack Method, Federated Learning

PAL: Proxy-Guided Black-Box Attack on Large Language Models
Authors: Chawin Sitawarin, Norman Mu, David Wagner, Alexandre Araujo | Published: 2024-02-15
Tags: LLM Security, Prompt Injection, Attack Method