Literature Database

Beyond Gradient and Priors in Privacy Attacks: Leveraging Pooler Layer Inputs of Language Models in Federated Learning

Authors: Jianwei Li, Sheng Liu, Qi Lei | Published: 2023-12-10 | Updated: 2024-03-15
Watermarking
Privacy Protection Method
Federated Learning

Towards Smart Healthcare: Challenges and Opportunities in IoT and ML

Authors: Munshi Saifuzzaman, Tajkia Nuri Ananna | Published: 2023-12-09 | Updated: 2024-01-12
Smart Healthcare
Data Preprocessing
Advancements in Medical IoT

Model Extraction Attacks Revisited

Authors: Jiacheng Liang, Ren Pang, Changjiang Li, Ting Wang | Published: 2023-12-08
Cyber Attack
Model Extraction Attack
Adversarial Attack

An Explainable Ensemble-based Intrusion Detection System for Software-Defined Vehicle Ad-hoc Networks

Authors: Shakil Ibne Ahsan, Phil Legg, S M Iftekharul Alam | Published: 2023-12-08 | Updated: 2024-10-11
Model Interpretability
Intrusion Detection System
Vehicle Network

Exploring the Limits of ChatGPT in Software Security Applications

Authors: Fangzhou Wu, Qingzhao Zhang, Ati Priya Bajaj, Tiffany Bao, Ning Zhang, Ruoyu "Fish" Wang, Chaowei Xiao | Published: 2023-12-08
Program Analysis
Prompt Injection
Vulnerability Management

Make Them Spill the Beans! Coercive Knowledge Extraction from (Production) LLMs

Authors: Zhuo Zhang, Guangyu Shen, Guanhong Tao, Siyuan Cheng, Xiangyu Zhang | Published: 2023-12-08
LLM Security
Prompt Injection
Inappropriate Content Generation

Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks

Authors: Shuli Jiang, Swanand Ravindra Kadhe, Yi Zhou, Ling Cai, Nathalie Baracaldo | Published: 2023-12-07
LLM Security
Poisoning Attack
Model Performance Evaluation

DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions

Authors: Fangzhou Wu, Xiaogeng Liu, Chaowei Xiao | Published: 2023-12-07 | Updated: 2023-12-12
LLM Security
Code Generation
Prompt Injection

Purple Llama CyberSecEval: A Secure Coding Benchmark for Language Models

Authors: Manish Bhatt, Sahana Chennabasappa, Cyrus Nikolaidis, Shengye Wan, Ivan Evtimov, Dominik Gabi, Daniel Song, Faizan Ahmad, Cornelius Aschermann, Lorenzo Fontana, Sasha Frolov, Ravi Prakash Giri, Dhaval Kapil, Yiannis Kozyrakis, David LeBlanc, James Milazzo, Aleksandar Straumann, Gabriel Synnaeve, Varun Vontimitta, Spencer Whitman, Joshua Saxe | Published: 2023-12-07
LLM Security
Cybersecurity
Prompt Injection

Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations

Authors: Hakan Inan, Kartikeya Upasani, Jianfeng Chi, Rashi Rungta, Krithika Iyer, Yuning Mao, Michael Tontchev, Qing Hu, Brian Fuller, Davide Testuggine, Madian Khabsa | Published: 2023-12-07
Alignment
Data Generation Method
Risk Analysis Method