AI Security Portal bot

Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning

Authors: Xiaoting Lyu, Yufei Han, Wei Wang, Jingkai Liu, Yongsheng Zhu, Guangquan Xu, Jiqiang Liu, Xiangliang Zhang | Published: 2024-06-10
Backdoor Attack
Poisoning

A Survey on Machine Unlearning: Techniques and New Emerged Privacy Risks

Authors: Hengzhu Liu, Ping Xiong, Tianqing Zhu, Philip S. Yu | Published: 2024-06-10
Backdoor Attack
Poisoning
Membership Inference

Safety Alignment Should Be Made More Than Just a Few Tokens Deep

Authors: Xiangyu Qi, Ashwinee Panda, Kaifeng Lyu, Xiao Ma, Subhrajit Roy, Ahmad Beirami, Prateek Mittal, Peter Henderson | Published: 2024-06-10
LLM Security
Prompt Injection
Safety Alignment

Injecting Undetectable Backdoors in Obfuscated Neural Networks and Language Models

Authors: Alkis Kalavasis, Amin Karbasi, Argyris Oikonomou, Katerina Sotiraki, Grigoris Velegkas, Manolis Zampetakis | Published: 2024-06-09 | Updated: 2024-09-07
Watermarking
Backdoor Attack

How Alignment and Jailbreak Work: Explain LLM Safety through Intermediate Hidden States

Authors: Zhenhong Zhou, Haiyang Yu, Xinghua Zhang, Rongwu Xu, Fei Huang, Yongbin Li | Published: 2024-06-09 | Updated: 2024-06-13
LLM Security
Prompt Injection
Compliance with Ethical Guidelines

Blockchain Integrated Federated Learning in Edge-Fog-Cloud Systems for IoT-based Healthcare Applications: A Survey

Authors: Shinu M. Rajagopal, Supriya M., Rajkumar Buyya | Published: 2024-06-08
Edge Computing
Privacy Protection
Blockchain Technology

A Novel Generative AI-Based Framework for Anomaly Detection in Multicast Messages in Smart Grid Communications

Authors: Aydin Zaboli, Seong Lok Choi, Tai-Jin Song, Junho Hong | Published: 2024-06-08
LLM Performance Evaluation
Cybersecurity
Anomaly Detection Method

Individual Packet Features are a Risk to Model Generalisation in ML-Based Intrusion Detection

Authors: Kahraman Kostas, Mike Just, Michael A. Lones | Published: 2024-06-07
DDoS Attack Detection
Data Obfuscation
Packet Interaction

Adversarial Tuning: Defending Against Jailbreak Attacks for LLMs

Authors: Fan Liu, Zhao Xu, Hao Liu | Published: 2024-06-07
LLM Security
Prompt Injection
Adversarial Training

Concept Drift Detection using Ensemble of Integrally Private Models

Authors: Ayush K. Varshney, Vicenc Torra | Published: 2024-06-07
Data Privacy Assessment
Privacy Protection Method
Model Performance Evaluation