Leveraging Large Language Models to Bridge On-chain and Off-chain Transparency in Stablecoins | Authors: Yuexin Xiang, Yuchen Lei, SM Mahir Shazeed Rish, Yuanzhe Zhang, Qin Wang, Tsz Hon Yuen, Jiangshan Yu | Published: 2025-12-02 | Tags: Blockchain Integration, Prompt Injection, Risk Analysis Method
A Wolf in Sheep’s Clothing: Bypassing Commercial LLM Guardrails via Harmless Prompt Weaving and Adaptive Tree Search | Authors: Rongzhe Wei, Peizhi Niu, Xinjie Shen, Tony Tu, Yifan Li, Ruihan Wu, Eli Chien, Olgica Milenkovic, Pan Li | Published: 2025-12-01 | Tags: Training Method, Prompt Injection, Ethical Considerations
DefenSee: Dissecting Threat from Sight and Text – A Multi-View Defensive Pipeline for Multi-modal Jailbreaks | Authors: Zihao Wang, Kar Wai Fok, Vrizlynn L. L. Thing | Published: 2025-12-01 | Tags: Prompt Injection, Model DoS, Robustness Improvement Method
Constructing and Benchmarking: A Labeled Email Dataset for Text-Based Phishing and Spam Detection Framework | Authors: Rebeka Toth, Tamas Bisztray, Richard Dubniczky | Published: 2025-11-26 | Tags: Social Engineering Attack, Dataset Integration, Prompt Injection
GuardTrace-VL: Detecting Unsafe Multimodal Reasoning via Iterative Safety Supervision | Authors: Yuxiao Xiang, Junchi Chen, Zhenchao Jin, Changtao Miao, Haojie Yuan, Qi Chu, Tao Gong, Nenghai Yu | Published: 2025-11-26 | Tags: Prompt Injection, Risk Assessment Method, Ethical Considerations
Can LLMs Make (Personalized) Access Control Decisions? | Authors: Friederike Groschupp, Daniele Lain, Aritra Dhar, Lara Magdalena Lazier, Srdjan Čapkun | Published: 2025-11-25 | Tags: Disabling Safety Mechanisms of LLM, Privacy Assessment, Prompt Injection
Adversarial Attack-Defense Co-Evolution for LLM Safety Alignment via Tree-Group Dual-Aware Search and Optimization | Authors: Xurui Li, Kaisong Song, Rui Zhu, Pin-Yu Chen, Haixu Tang | Published: 2025-11-24 | Tags: Prompt Injection, Large Language Model, Malicious Prompt
Can LLMs Threaten Human Survival? Benchmarking Potential Existential Threats from LLMs via Prefix Completion | Authors: Yu Cui, Yifei Liu, Hang Fu, Sicheng Pan, Haibin Zhang, Cong Zuo, Licheng Wang | Published: 2025-11-24 | Tags: Indirect Prompt Injection, Prompt Injection, Risk Assessment Method
Understanding and Mitigating Over-refusal for Large Language Models via Safety Representation | Authors: Junbo Zhang, Ran Chen, Qianli Zhou, Xinyang Deng, Wen Jiang | Published: 2025-11-24 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Malicious Prompt
Small Language Models for Phishing Website Detection: Cost, Performance, and Privacy Trade-Offs | Authors: Georg Goldenits, Philip Koenig, Sebastian Raubitzek, Andreas Ekelhart | Published: 2025-11-19 | Tags: Phishing Detection Method, Prompt Injection, Prompt Engineering