BadRAG: Identifying Vulnerabilities in Retrieval Augmented Generation of Large Language Models | Authors: Jiaqi Xue, Mengxin Zheng, Yebowen Hu, Fei Liu, Xun Chen, Qian Lou | Published: 2024-06-03 | Updated: 2024-06-06 | Tags: LLM Performance Evaluation, Query Diversity, Query Generation Method | Literature Database
A Synergistic Approach In Network Intrusion Detection By Neurosymbolic AI | Authors: Alice Bizzarri, Chung-En Yu, Brian Jalaian, Fabrizio Riguzzi, Nathaniel D. Bastian | Published: 2024-06-03 | Tags: NSAI Integration, Model Interpretability, Unknown Attack Detection
Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data | Authors: Thibault Simonetto, Salah Ghamizi, Maxime Cordy | Published: 2024-06-02 | Tags: CAPGD Algorithm, Attack Method, Adversarial Training
Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models | Authors: Garrett Crumrine, Izzat Alsmadi, Jesus Guerrero, Yuvaraj Munian | Published: 2024-06-02 | Tags: LLM Security, Cybersecurity, Compliance with Ethical Guidelines
VeriSplit: Secure and Practical Offloading of Machine Learning Inferences across IoT Devices | Authors: Han Zhang, Zifan Wang, Mihir Dhamankar, Matt Fredrikson, Yuvraj Agarwal | Published: 2024-06-02 | Updated: 2025-03-31 | Tags: Watermarking, Data Privacy Assessment, Computational Efficiency
Exploring Vulnerabilities and Protections in Large Language Models: A Survey | Authors: Frank Weizhen Liu, Chenhui Hu | Published: 2024-06-01 | Tags: LLM Security, Prompt Injection, Defense Method
Improved Techniques for Optimization-Based Jailbreaking on Large Language Models | Authors: Xiaojun Jia, Tianyu Pang, Chao Du, Yihao Huang, Jindong Gu, Yang Liu, Xiaochun Cao, Min Lin | Published: 2024-05-31 | Updated: 2024-06-05 | Tags: LLM Security, Watermarking, Prompt Injection
ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning | Authors: Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bo Li, Radha Poovendran | Published: 2024-05-31 | Updated: 2024-06-05 | Tags: Poisoning, Evaluation Method, Defense Method
Defensive Prompt Patch: A Robust and Interpretable Defense of LLMs against Jailbreak Attacks | Authors: Chen Xiong, Xiangyu Qi, Pin-Yu Chen, Tsung-Yi Ho | Published: 2024-05-30 | Updated: 2025-06-04 | Tags: DPP Set Generation, Prompt Injection, Attack Method
Robust Kernel Hypothesis Testing under Data Corruption | Authors: Antonin Schrab, Ilmun Kim | Published: 2024-05-30 | Tags: Data Privacy Assessment, Data Protection Method, Hypothesis Testing