Robustness via Referencing: Defending against Prompt Injection Attacks by Referencing the Executed Instruction | Authors: Yulin Chen, Haoran Li, Yuan Sui, Yue Liu, Yufei He, Yangqiu Song, Bryan Hooi | Published: 2025-04-29 | Tags: Indirect Prompt Injection, Prompt Validation, Attack Method
Network Attack Traffic Detection With Hybrid Quantum-Enhanced Convolution Neural Network | Authors: Zihao Wang, Kar Wai Fok, Vrizlynn L. L. Thing | Published: 2025-04-29 | Tags: Performance Evaluation Method, Attack Detection, Quantum Framework
Enhancing Leakage Attacks on Searchable Symmetric Encryption Using LLM-Based Synthetic Data Generation | Authors: Joshua Chiu, Partha Protim Paul, Zahin Wahab | Published: 2025-04-29 | Tags: Indirect Prompt Injection, Attack Method, Hierarchical Clustering
A Cryptographic Perspective on Mitigation vs. Detection in Machine Learning | Authors: Greg Gluch, Shafi Goldwasser | Published: 2025-04-28 | Updated: 2025-07-10 | Tags: Certified Robustness, Adversarial Attack, Computational Problem
The Automation Advantage in AI Red Teaming | Authors: Rob Mulla, Ads Dawson, Vincent Abruzzon, Brian Greunke, Nick Landers, Brad Palm, Will Pearce | Published: 2025-04-28 | Updated: 2025-04-29 | Tags: Prompt Leaking, Attack Method, Effects of Automation
CodeBC: A More Secure Large Language Model for Smart Contract Code Generation in Blockchain | Authors: Lingxiang Wang, Hainan Zhang, Qinnan Zhang, Ziwei Wang, Hongwei Zheng, Jin Dong, Zhiming Zheng | Published: 2025-04-28 | Updated: 2025-05-07 | Tags: Program Verification, Performance Evaluation, Vulnerability Analysis
SAGE: A Generic Framework for LLM Safety Evaluation | Authors: Madhur Jindal, Hari Shrawgi, Parag Agrawal, Sandipan Dandapat | Published: 2025-04-28 | Tags: User Identification System, Large Language Model, Trade-Off Between Safety and Usability
Can Differentially Private Fine-tuning LLMs Protect Against Privacy Attacks? | Authors: Hao Du, Shang Liu, Yang Cao | Published: 2025-04-28 | Updated: 2025-05-01 | Tags: Privacy Risk Management, Membership Disclosure Risk, Differential Privacy
BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts | Authors: Qingyue Wang, Qi Pang, Xixun Lin, Shuai Wang, Daoyuan Wu | Published: 2025-04-24 | Updated: 2025-04-29 | Tags: Poisoning Attack on RAG, Backdoor Attack Techniques, Attack Method
Evaluating the Vulnerability of ML-Based Ethereum Phishing Detectors to Single-Feature Adversarial Perturbations | Authors: Ahod Alghuried, Ali Alkinoon, Abdulaziz Alghamdi, Soohyeon Choi, Manar Mohaisen, David Mohaisen | Published: 2025-04-24 | Tags: Detection Rate of Phishing Attacks, Certified Robustness, Adversarial Example Detection