Literature Database

A Mousetrap: Fooling Large Reasoning Models for Jailbreak with Chain of Iterative Chaos

Authors: Yang Yao, Xuan Tong, Ruofan Wang, Yixu Wang, Lujundong Li, Liang Liu, Yan Teng, Yingchun Wang | Published: 2025-02-19 | Updated: 2025-06-03
Disabling Safety Mechanisms of LLM
Ethical Considerations
Large Language Model

SEA: Low-Resource Safety Alignment for Multimodal Large Language Models via Synthetic Embeddings

Authors: Weikai Lu, Hao Peng, Huiping Zhuang, Cen Chen, Ziqian Zeng | Published: 2025-02-18 | Updated: 2025-05-21
Alignment
Text Generation Method
Prompt Injection

Toward Integrated Solutions: A Systematic Interdisciplinary Review of Cybergrooming Research

Authors: Heajun An, Marcos Silva, Qi Zhang, Arav Singh, Minqian Liu, Xinyi Zhang, Sarvech Qadir, Sang Won Lee, Lifu Huang, Pamela J. Wisniewski, Jin-Hee Cho | Published: 2025-02-18 | Updated: 2025-07-31
Cybergrooming Research
Adversarial Learning
Literature Review Methodology

Unveiling Privacy Risks in LLM Agent Memory

Authors: Bo Wang, Weiyi He, Shenglai Zeng, Zhen Xiang, Yue Xing, Jiliang Tang, Pengfei He | Published: 2025-02-17 | Updated: 2025-06-03
Privacy Analysis
Prompt Leaking
Causes of Information Leakage

BackdoorDM: A Comprehensive Benchmark for Backdoor Learning on Diffusion Model

Authors: Weilin Lin, Nanjun Zhou, Yanyun Wang, Jianze Li, Hui Xiong, Li Liu | Published: 2025-02-17 | Updated: 2025-07-21
Trigger Detection
Backdoor Attack
Performance Evaluation

DELMAN: Dynamic Defense Against Large Language Model Jailbreaking with Model Editing

Authors: Yi Wang, Fenghua Weng, Sibei Yang, Zhan Qin, Minlie Huang, Wenjie Wang | Published: 2025-02-17 | Updated: 2025-05-29
LLM Security
Prompt Injection
Defense Method

Nuclear Deployed: Analyzing Catastrophic Risks in Decision-making of Autonomous LLM Agents

Authors: Rongwu Xu, Xiaojian Li, Shuo Chen, Wei Xu | Published: 2025-02-17 | Updated: 2025-03-23
Indirect Prompt Injection
Ethical Statement
Decision-Making Dynamics

QueryAttack: Jailbreaking Aligned Large Language Models Using Structured Non-natural Query Language

Authors: Qingsong Zou, Jingyu Xiao, Qing Li, Zhi Yan, Yuhang Wang, Li Xu, Wenxuan Wang, Kuofeng Gao, Ruoyu Li, Yong Jiang | Published: 2025-02-13 | Updated: 2025-05-26
Disabling Safety Mechanisms of LLM
Prompt Leaking
Educational Analysis

A hierarchical approach for assessing the vulnerability of tree-based classification models to membership inference attack

Authors: Richard J. Preen, Jim Smith | Published: 2025-02-13 | Updated: 2025-06-12
Privacy Protection Method
Model Extraction Attack
Risk Assessment

RLSA-PFL: Robust Lightweight Secure Aggregation with Model Inconsistency Detection in Privacy-Preserving Federated Learning

Authors: Nazatul H. Sultan, Yan Bo, Yansong Gao, Seyit Camtepe, Arash Mahboubi, Hang Thanh Bui, Aufeef Chauhan, Hamed Aboutorab, Michael Bewong, Dineshkumar Singh, Praveen Gauravaram, Rafiqul Islam, Sharif Abuadbba | Published: 2025-02-13 | Updated: 2025-04-16
Privacy Enhancing Protocol
Performance Evaluation Method
Federated Learning