Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models | Authors: Garrett Crumrine, Izzat Alsmadi, Jesus Guerrero, Yuvaraj Munian | Published: 2024-06-02
VeriSplit: Secure and Practical Offloading of Machine Learning Inferences across IoT Devices | Authors: Han Zhang, Zifan Wang, Mihir Dhamankar, Matt Fredrikson, Yuvraj Agarwal | Published: 2024-06-02 | Updated: 2025-03-31
Exploring Vulnerabilities and Protections in Large Language Models: A Survey | Authors: Frank Weizhen Liu, Chenhui Hu | Published: 2024-06-01
Improved Techniques for Optimization-Based Jailbreaking on Large Language Models | Authors: Xiaojun Jia, Tianyu Pang, Chao Du, Yihao Huang, Jindong Gu, Yang Liu, Xiaochun Cao, Min Lin | Published: 2024-05-31 | Updated: 2024-06-05
ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning | Authors: Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bo Li, Radha Poovendran | Published: 2024-05-31 | Updated: 2024-06-05
Robust Kernel Hypothesis Testing under Data Corruption | Authors: Antonin Schrab, Ilmun Kim | Published: 2024-05-30
Efficient Black-box Adversarial Attacks via Bayesian Optimization Guided by a Function Prior | Authors: Shuyu Cheng, Yibo Miao, Yinpeng Dong, Xiao Yang, Xiao-Shan Gao, Jun Zhu | Published: 2024-05-29
Toxicity Detection for Free | Authors: Zhanhao Hu, Julien Piet, Geng Zhao, Jiantao Jiao, David Wagner | Published: 2024-05-29 | Updated: 2024-11-08
PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics | Authors: Sunay Bhat, Jeffrey Jiang, Omead Pooladzandi, Alexander Branch, Gregory Pottie | Published: 2024-05-28 | Updated: 2024-06-02
Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems | Authors: Ruochen Jiao, Shaoyuan Xie, Justin Yue, Takami Sato, Lixu Wang, Yixuan Wang, Qi Alfred Chen, Qi Zhu | Published: 2024-05-27 | Updated: 2025-04-30