Can LLMs be Scammed? A Baseline Measurement Study

Authors: Udari Madhushani Sehwag, Kelly Patel, Francesca Mosca, Vineeth Ravi, Jessica Staddon | Published: 2024-10-14

Evaluation of Machine Unlearning: Robustness Verification Without Prior Modifications

Authors: Heng Xu, Tianqing Zhu, Wanlei Zhou | Published: 2024-10-14

Survival of the Safest: Towards Secure Prompt Optimization through Interleaved Multi-Objective Evolution

Authors: Ankita Sinha, Wendi Cui, Kamalika Das, Jiaxin Zhang | Published: 2024-10-12

Minimax rates of convergence for nonparametric regression under adversarial attacks

Authors: Jingfu Peng, Yuhong Yang | Published: 2024-10-12

Can a large language model be a gaslighter?

Authors: Wei Li, Luyao Zhu, Yang Song, Ruixi Lin, Rui Mao, Yang You | Published: 2024-10-11

Federated Learning in Practice: Reflections and Projections

Authors: Katharine Daly, Hubert Eichner, Peter Kairouz, H. Brendan McMahan, Daniel Ramage, Zheng Xu | Published: 2024-10-11

Decoding Secret Memorization in Code LLMs Through Token-Level Characterization

Authors: Yuqing Nie, Chong Wang, Kailong Wang, Guoai Xu, Guosheng Xu, Haoyu Wang | Published: 2024-10-11

PoisonBench: Assessing Large Language Model Vulnerability to Data Poisoning

Authors: Tingchen Fu, Mrinank Sharma, Philip Torr, Shay B. Cohen, David Krueger, Fazl Barez | Published: 2024-10-11

F2A: An Innovative Approach for Prompt Injection by Utilizing Feign Security Detection Agents

Authors: Yupeng Ren | Published: 2024-10-11 | Updated: 2024-10-14

PILLAR: an AI-Powered Privacy Threat Modeling Tool

Authors: Majid Mollaeefar, Andrea Bissoli, Silvio Ranise | Published: 2024-10-11