Fixing 7,400 Bugs for $1: Cheap Crash-Site Program Repair

Authors: Han Zheng, Ilia Shumailov, Tianqi Fan, Aiden Hall, Mathias Payer | Published: 2025-05-19

The Hidden Dangers of Browsing AI Agents

Authors: Mykyta Mudryi, Markiyan Chaklosh, Grzegorz Wójcik | Published: 2025-05-19

Evaluating the efficacy of LLM Safety Solutions: The Palit Benchmark Dataset

Authors: Sayon Palit, Daniel Woods | Published: 2025-05-19

From Assistants to Adversaries: Exploring the Security Risks of Mobile LLM Agents

Authors: Liangxuan Wu, Chao Wang, Tianming Liu, Yanjie Zhao, Haoyu Wang | Published: 2025-05-19

Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks?

Authors: Zi Liang, Haibo Hu, Qingqing Ye, Yaxin Xiao, Ronghua Li | Published: 2025-05-19

Malware families discovery via Open-Set Recognition on Android manifest permissions

Authors: Filippo Leveni, Matteo Mistura, Francesco Iubatti, Carmine Giangregorio, Nicolò Pastore, Cesare Alippi, Giacomo Boracchi | Published: 2025-05-19

Web IP at Risk: Prevent Unauthorized Real-Time Retrieval by Large Language Models

Authors: Yisheng Zhong, Yizhu Wen, Junfeng Guo, Mehran Kafai, Heng Huang, Hanqing Guo, Zhuangdi Zhu | Published: 2025-05-19

R1dacted: Investigating Local Censorship in DeepSeek’s R1 Language Model

Authors: Ali Naseh, Harsh Chaudhari, Jaechul Roh, Mingshi Wu, Alina Oprea, Amir Houmansadr | Published: 2025-05-19

IP Leakage Attacks Targeting LLM-Based Multi-Agent Systems

Authors: Liwen Wang, Wenxuan Wang, Shuai Wang, Zongjie Li, Zhenlan Ji, Zongyi Lyu, Daoyuan Wu, Shing-Chi Cheung | Published: 2025-05-18 | Updated: 2025-05-20

JULI: Jailbreak Large Language Models by Self-Introspection

Authors: Jesson Wang, Zhanhao Hu, David Wagner | Published: 2025-05-17 | Updated: 2025-05-20