Literature Database

A Review of Adversarial Attacks in Computer Vision

Authors: Yutong Zhang, Yao Li, Yin Li, Zhichang Guo | Published: 2023-08-15
Poisoning
Adversarial Attack Methods
Defense Method

DIVAS: An LLM-based End-to-End Framework for SoC Security Analysis and Policy-based Protection

Authors: Sudipta Paria, Aritra Dasgupta, Swarup Bhunia | Published: 2023-08-14
LLM Security
Security Assurance
Vulnerability Mitigation Technique

FedEdge AI-TC: A Semi-supervised Traffic Classification Method based on Trusted Federated Deep Learning for Mobile Edge Computing

Authors: Pan Wang, Zeyi Li, Mengyi Fu, Zixuan Wang, Ze Zhang, MinYao Liu | Published: 2023-08-14
Model Interpretability
Model Performance Evaluation
Federated Learning

S3C2 Summit 2023-06: Government Secure Supply Chain Summit

Authors: William Enck, Yasemin Acar, Michel Cukier, Alexandros Kapravelos, Christian Kästner, Laurie Williams | Published: 2023-08-13
SBOM Practices
Cybersecurity
Security Assurance

SoK: Realistic Adversarial Attacks and Defenses for Intelligent Network Intrusion Detection

Authors: João Vitorino, Isabel Praça, Eva Maia | Published: 2023-08-13
Backdoor Attack
Adversarial Training
Defense Method

PentestGPT: An LLM-empowered Automatic Penetration Testing Tool

Authors: Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, Stefan Rass | Published: 2023-08-13 | Updated: 2024-06-02
Prompt Injection
Penetration Testing Methods
Performance Evaluation

A Homomorphic Encryption Framework for Privacy-Preserving Spiking Neural Networks

Authors: Farzad Nikfam, Raffaele Casaburi, Alberto Marchisio, Maurizio Martina, Muhammad Shafique | Published: 2023-08-10 | Updated: 2023-10-12
Watermarking
Model Design and Accuracy
Performance Evaluation

You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content

Authors: Xinlei He, Savvas Zannettou, Yun Shen, Yang Zhang | Published: 2023-08-10
Text Detoxification
Prompt Leaking
Calculation of Output Harmfulness

An Empirical Study on Using Large Language Models to Analyze Software Supply Chain Security Failures

Authors: Tanmay Singla, Dharun Anandayuvaraj, Kelechi G. Kalu, Taylor R. Schorlemmer, James C. Davis | Published: 2023-08-09
Cyber Attack
Prompt Injection
Model Performance Evaluation

ModSec-AdvLearn: Countering Adversarial SQL Injections with Robust Machine Learning

Authors: Giuseppe Floris, Christian Scano, Biagio Montaruli, Luca Demetrio, Andrea Valenza, Luca Compagna, Davide Ariu, Luca Piras, Davide Balzarotti, Battista Biggio | Published: 2023-08-09 | Updated: 2025-05-21
Relationship between Robustness and Privacy
Adversarial Example Detection
Defense Mechanism