Unmasking the Shadows: Pinpoint the Implementations of Anti-Dynamic Analysis Techniques in Malware Using LLM
Authors: Haizhou Wang, Nanqing Luo, Xusheng Li, Peng Liu | Published: 2024-11-08 | Updated: 2025-04-29 | Tags: Malware Evolution, Attack Method, Analysis of Detection Methods
Low-Rank Adversarial PGD Attack
Authors: Dayana Savostianova, Emanuele Zangrando, Francesco Tudisco | Published: 2024-10-16 | Tags: Attack Method
Unified Breakdown Analysis for Byzantine Robust Gossip
Authors: Renaud Gaucher, Aymeric Dieuleveut, Hadrien Hendrikx | Published: 2024-10-14 | Updated: 2025-02-03 | Tags: Framework, Attack Method
Can a large language model be a gaslighter?
Authors: Wei Li, Luyao Zhu, Yang Song, Ruixi Lin, Rui Mao, Yang You | Published: 2024-10-11 | Tags: Prompt Injection, Safety Alignment, Attack Method
F2A: An Innovative Approach for Prompt Injection by Utilizing Feign Security Detection Agents
Authors: Yupeng Ren | Published: 2024-10-11 | Updated: 2024-10-14 | Tags: Prompt Injection, Attack Evaluation, Attack Method
Time Traveling to Defend Against Adversarial Example Attacks in Image Classification
Authors: Anthony Etim, Jakub Szefer | Published: 2024-10-10 | Tags: Attack Method, Adversarial Example, Defense Method
Study of Attacks on the HHL Quantum Algorithm
Authors: Yizhuo Tan, Hrvoje Kukina, Jakub Szefer | Published: 2024-10-10 | Tags: Cybersecurity, Attack Evaluation, Attack Method
Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems
Authors: Donghyun Lee, Mo Tiwari | Published: 2024-10-09 | Tags: Prompt Injection, Attack Method, Defense Method
Hallucinating AI Hijacking Attack: Large Language Models and Malicious Code Recommenders
Authors: David Noever, Forrest McKee | Published: 2024-10-09 | Tags: Cybersecurity, Prompt Injection, Attack Method
Harnessing Task Overload for Scalable Jailbreak Attacks on Large Language Models
Authors: Yiting Dong, Guobin Shen, Dongcheng Zhao, Xiang He, Yi Zeng | Published: 2024-10-05 | Tags: LLM Security, Prompt Injection, Attack Method