PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics

Authors: Sunay Bhat, Jeffrey Jiang, Omead Pooladzandi, Alexander Branch, Gregory Pottie | Published: 2024-05-28 | Updated: 2024-06-02

Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems

Authors: Ruochen Jiao, Shaoyuan Xie, Justin Yue, Takami Sato, Lixu Wang, Yixuan Wang, Qi Alfred Chen, Qi Zhu | Published: 2024-05-27 | Updated: 2024-10-05

Medical MLLM is Vulnerable: Cross-Modality Jailbreak and Mismatched Attacks on Medical Multimodal Large Language Models

Authors: Xijie Huang, Xinyuan Wang, Hantao Zhang, Yinghao Zhu, Jiawen Xi, Jingkun An, Hao Wang, Hao Liang, Chengwei Pan | Published: 2024-05-26 | Updated: 2024-08-21

Visual-RolePlay: Universal Jailbreak Attack on MultiModal Large Language Models via Role-playing Image Character

Authors: Siyuan Ma, Weidi Luo, Yu Wang, Xiaogeng Liu | Published: 2024-05-25 | Updated: 2024-06-12

Revisit, Extend, and Enhance Hessian-Free Influence Functions

Authors: Ziao Yang, Han Yue, Jian Chen, Hongfu Liu | Published: 2024-05-25 | Updated: 2024-10-20

BadGD: A unified data-centric framework to identify gradient descent vulnerabilities

Authors: Chi-Hua Wang, Guang Cheng | Published: 2024-05-24

Can Implicit Bias Imply Adversarial Robustness?

Authors: Hancheng Min, René Vidal | Published: 2024-05-24 | Updated: 2024-06-05

$\mathbf{L^2\cdot M = C^2}$ Large Language Models are Covert Channels

Authors: Simen Gaure, Stefanos Koffas, Stjepan Picek, Sondre Rønjom | Published: 2024-05-24 | Updated: 2024-10-07

Harnessing Large Language Models for Software Vulnerability Detection: A Comprehensive Benchmarking Study

Authors: Karl Tamberg, Hayretdin Bahsi | Published: 2024-05-24

Lost in the Averages: A New Specific Setup to Evaluate Membership Inference Attacks Against Machine Learning Models

Authors: Florent Guépin, Nataša Krčo, Matthieu Meeus, Yves-Alexandre de Montjoye | Published: 2024-05-24