AI Security Portal Bot

Unbridled Icarus: A Survey of the Potential Perils of Image Inputs in Multimodal Large Language Model Security

Authors: Yihe Fan, Yuxin Cao, Ziyu Zhao, Ziyao Liu, Shaofeng Li | Published: 2024-04-08 | Updated: 2024-08-11
LLM Security
Prompt Injection
Threat Modeling

Enabling Privacy-Preserving Cyber Threat Detection with Federated Learning

Authors: Yu Bi, Yekai Li, Xuan Feng, Xianghang Mi | Published: 2024-04-08
Spam Detection
Poisoning
Federated Learning

Initial Exploration of Zero-Shot Privacy Utility Tradeoffs in Tabular Data Using GPT-4

Authors: Bishwas Mandal, George Amariucai, Shuangqing Wei | Published: 2024-04-07
Data Privacy Assessment
Privacy Protection Method
Prompt Injection

Contextual Chart Generation for Cyber Deception

Authors: David D. Nguyen, David Liebowitz, Surya Nepal, Salil S. Kanhere, Sharif Abuadbba | Published: 2024-04-07
Data Preprocessing
Model Design
Evaluation Method

PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics

Authors: Derui Zhu, Dingfan Chen, Qing Li, Zongxiong Chen, Lei Ma, Jens Grossklags, Mario Fritz | Published: 2024-04-06
LLM Security
LLM Performance Evaluation
Evaluation Method

Advances in Differential Privacy and Differentially Private Machine Learning

Authors: Saswat Das, Subhankar Mishra | Published: 2024-04-06
Watermarking
Data Privacy Assessment
Privacy Protection Method

CANEDERLI: On The Impact of Adversarial Training and Transferability on CAN Intrusion Detection Systems

Authors: Francesco Marchiori, Mauro Conti | Published: 2024-04-06
Intrusion Detection System
Adversarial Training
Threat Modeling

Optimization of Lightweight Malware Detection Models For AIoT Devices

Authors: Felicia Lo, Shin-Ming Cheng, Rafael Kaliski | Published: 2024-04-06
Membership Inference
Model Performance Evaluation
Resource Optimization

Fine-Tuning, Quantization, and LLMs: Navigating Unintended Outcomes

Authors: Divyanshu Kumar, Anurakt Kumar, Sahil Agarwal, Prashanth Harshangi | Published: 2024-04-05 | Updated: 2024-09-09
LLM Security
Prompt Injection
Safety Alignment

Prompt Public Large Language Models to Synthesize Data for Private On-device Applications

Authors: Shanshan Wu, Zheng Xu, Yanxiang Zhang, Yuanbo Zhang, Daniel Ramage | Published: 2024-04-05 | Updated: 2024-08-07
Dataset Generation
Privacy Protection Method
Federated Learning