Literature Database

LegalGuardian: A Privacy-Preserving Framework for Secure Integration of Large Language Models in Legal Practice

Authors: M. Mikail Demir, Hakan T. Otal, M. Abdullah Canbaz | Published: 2025-01-19
Privacy Protection
Learning Improvement
Safety Alignment

Latent-space adversarial training with post-aware calibration for defending large language models against jailbreak attacks

Authors: Xin Yi, Yue Li, Linlin Wang, Xiaoling Wang, Liang He | Published: 2025-01-18
Prompt Injection
Adversarial Training
Over-Refusal Mitigation

AI/ML Based Detection and Categorization of Covert Communication in IPv6 Network

Authors: Mohammad Wali Ur Rahman, Yu-Zheng Lin, Carter Weeks, David Ruddell, Jeff Gabriellini, Bill Hayes, Salim Hariri, Edward V. Ziegler Jr | Published: 2025-01-18
IPv6 Security
Network Threat Detection
Communication Analysis

Differentiable Adversarial Attacks for Marked Temporal Point Processes

Authors: Pritish Chakraborty, Vinayak Gupta, Rahul R, Srikanta J. Bedathur, Abir De | Published: 2025-01-17
Adversarial Example
Optimization Problem

GaussMark: A Practical Approach for Structural Watermarking of Language Models

Authors: Adam Block, Ayush Sekhari, Alexander Rakhlin | Published: 2025-01-17
Watermarking
Hypothesis Testing
Experimental Validation

CaFA: Cost-aware, Feasible Attacks With Database Constraints Against Neural Tabular Classifiers

Authors: Matan Ben-Tov, Daniel Deutch, Nave Frost, Mahmood Sharif | Published: 2025-01-17
Data Integrity Constraints
Experimental Validation
Adversarial Example

Computing Optimization-Based Prompt Injections Against Closed-Weights Models By Misusing a Fine-Tuning API

Authors: Andrey Labunets, Nishit V. Pandya, Ashish Hooda, Xiaohan Fu, Earlence Fernandes | Published: 2025-01-16
Prompt Injection
Attack Evaluation
Optimization Problem

A Survey on Responsible LLMs: Inherent Risk, Malicious Use, and Mitigation Strategy

Authors: Huandong Wang, Wenjie Fu, Yingzhou Tang, Zhilong Chen, Yuxi Huang, Jinghua Piao, Chen Gao, Fengli Xu, Tao Jiang, Yong Li | Published: 2025-01-16
Survey Paper
Privacy Protection
Prompt Injection
Large Language Model

Neural Honeytrace: A Robust Plug-and-Play Watermarking Framework against Model Extraction Attacks

Authors: Yixiao Xu, Binxing Fang, Rui Wang, Yinghai Zhou, Shouling Ji, Yuan Liu, Mohan Li, Zhihong Tian | Published: 2025-01-16 | Updated: 2025-01-17
Watermarking
Model Extraction Attack
Attack Evaluation

Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography

Authors: Ilia Shumailov, Daniel Ramage, Sarah Meiklejohn, Peter Kairouz, Florian Hartmann, Borja Balle, Eugene Bagdasarian | Published: 2025-01-15
Trusted Capable Model Environments
Privacy Protection
Cryptography