Dealing Doubt: Unveiling Threat Models in Gradient Inversion Attacks under Federated Learning, A Survey and Taxonomy

Authors: Yichuan Shi, Olivera Kotevska, Viktor Reshniak, Abhishek Singh, Ramesh Raskar | Published: 2024-05-16

The Effect of Quantization in Federated Learning: A Rényi Differential Privacy Perspective

Authors: Tianqu Kang, Lumin Liu, Hengtao He, Jun Zhang, S. H. Song, Khaled B. Letaief | Published: 2024-05-16

Learnable Privacy Neurons Localization in Language Models

Authors: Ruizhe Chen, Tianxiang Hu, Yang Feng, Zuozhu Liu | Published: 2024-05-16

Transfer Learning in Pre-Trained Large Language Models for Malware Detection Based on System Calls

Authors: Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdrán, Gérôme Bovet, Gregorio Martínez Pérez | Published: 2024-05-15

Cross-Input Certified Training for Universal Perturbations

Authors: Changming Xu, Gagandeep Singh | Published: 2024-05-15 | Updated: 2024-09-09

Towards Next-Generation Steganalysis: LLMs Unleash the Power of Detecting Steganography

Authors: Minhao Bai, Jinshuai Yang, Kaiyi Pang, Huili Wang, Yongfeng Huang | Published: 2024-05-15

The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks

Authors: Ziquan Liu, Yufei Cui, Yan Yan, Yi Xu, Xiangyang Ji, Xue Liu, Antoni B. Chan | Published: 2024-05-14

Differentially Private Federated Learning: A Systematic Review

Authors: Jie Fu, Yuan Hong, Xinpeng Ling, Leixia Wang, Xun Ran, Zhiyu Sun, Wendy Hui Wang, Zhili Chen, Yang Cao | Published: 2024-05-14 | Updated: 2024-05-20

Adversarial Machine Learning Threats to Spacecraft

Authors: Rajiv Thummala, Shristi Sharma, Matteo Calabrese, Gregory Falco | Published: 2024-05-14

DoLLM: How Large Language Models Understanding Network Flow Data to Detect Carpet Bombing DDoS

Authors: Qingyang Li, Yihang Zhang, Zhidong Jia, Yannan Hu, Lei Zhang, Jianrong Zhang, Yongming Xu, Yong Cui, Zongming Guo, Xinggong Zhang | Published: 2024-05-13