Literature Database

LegalGuardian: A Privacy-Preserving Framework for Secure Integration of Large Language Models in Legal Practice

Authors: M. Mikail Demir, Hakan T. Otal, M. Abdullah Canbaz | Published: 2025-01-19
Privacy Protection
Training Improvement
Safety Alignment

Latent-space adversarial training with post-aware calibration for defending large language models against jailbreak attacks

Authors: Xin Yi, Yue Li, Linlin Wang, Xiaoling Wang, Liang He | Published: 2025-01-18
Prompt Injection
Adversarial Training
Over-Refusal Mitigation

AI/ML Based Detection and Categorization of Covert Communication in IPv6 Network

Authors: Mohammad Wali Ur Rahman, Yu-Zheng Lin, Carter Weeks, David Ruddell, Jeff Gabriellini, Bill Hayes, Salim Hariri, Edward V. Ziegler Jr | Published: 2025-01-18
IPv6 Security
Network Threat Detection
Traffic Analysis

Differentiable Adversarial Attacks for Marked Temporal Point Processes

Authors: Pritish Chakraborty, Vinayak Gupta, Rahul R, Srikanta J. Bedathur, Abir De | Published: 2025-01-17
Adversarial Examples
Optimization Problems

GaussMark: A Practical Approach for Structural Watermarking of Language Models

Authors: Adam Block, Ayush Sekhari, Alexander Rakhlin | Published: 2025-01-17
Watermarking
Hypothesis Testing
Experimental Validation

CaFA: Cost-aware, Feasible Attacks With Database Constraints Against Neural Tabular Classifiers

Authors: Matan Ben-Tov, Daniel Deutch, Nave Frost, Mahmood Sharif | Published: 2025-01-17
Data Integrity Constraints
Experimental Validation
Adversarial Examples

Computing Optimization-Based Prompt Injections Against Closed-Weights Models By Misusing a Fine-Tuning API

Authors: Andrey Labunets, Nishit V. Pandya, Ashish Hooda, Xiaohan Fu, Earlence Fernandes | Published: 2025-01-16
Prompt Injection
Attack Evaluation
Optimization Problems

A Survey on Responsible LLMs: Inherent Risk, Malicious Use, and Mitigation Strategy

Authors: Huandong Wang, Wenjie Fu, Yingzhou Tang, Zhilong Chen, Yuxi Huang, Jinghua Piao, Chen Gao, Fengli Xu, Tao Jiang, Yong Li | Published: 2025-01-16
Survey Paper
Privacy Protection
Prompt Injection
Large Language Models

Neural Honeytrace: A Robust Plug-and-Play Watermarking Framework against Model Extraction Attacks

Authors: Yixiao Xu, Binxing Fang, Rui Wang, Yinghai Zhou, Shouling Ji, Yuan Liu, Mohan Li, Zhihong Tian | Published: 2025-01-16 | Updated: 2025-01-17
Watermarking
Model Extraction Attacks
Attack Evaluation

Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography

Authors: Ilia Shumailov, Daniel Ramage, Sarah Meiklejohn, Peter Kairouz, Florian Hartmann, Borja Balle, Eugene Bagdasarian | Published: 2025-01-15
Trusted Capable Model Environments
Privacy Protection
Cryptography