LegalGuardian: A Privacy-Preserving Framework for Secure Integration of Large Language Models in Legal Practice | Authors: M. Mikail Demir, Hakan T. Otal, M. Abdullah Canbaz | Published: 2025-01-19 | Tags: Privacy Preservation, Training Improvement, Safety Alignment
Latent-space adversarial training with post-aware calibration for defending large language models against jailbreak attacks | Authors: Xin Yi, Yue Li, Linlin Wang, Xiaoling Wang, Liang He | Published: 2025-01-18 | Tags: Prompt Injection, Adversarial Training, Over-Refusal Mitigation
AI/ML Based Detection and Categorization of Covert Communication in IPv6 Network | Authors: Mohammad Wali Ur Rahman, Yu-Zheng Lin, Carter Weeks, David Ruddell, Jeff Gabriellini, Bill Hayes, Salim Hariri, Edward V. Ziegler Jr | Published: 2025-01-18 | Tags: IPv6 Security, Network Threat Detection, Traffic Analysis
Differentiable Adversarial Attacks for Marked Temporal Point Processes | Authors: Pritish Chakraborty, Vinayak Gupta, Rahul R, Srikanta J. Bedathur, Abir De | Published: 2025-01-17 | Tags: Adversarial Examples, Optimization Problems
GaussMark: A Practical Approach for Structural Watermarking of Language Models | Authors: Adam Block, Ayush Sekhari, Alexander Rakhlin | Published: 2025-01-17 | Tags: Watermarking, Hypothesis Testing, Experimental Validation
CaFA: Cost-aware, Feasible Attacks With Database Constraints Against Neural Tabular Classifiers | Authors: Matan Ben-Tov, Daniel Deutch, Nave Frost, Mahmood Sharif | Published: 2025-01-17 | Tags: Data Integrity Constraints, Experimental Validation, Adversarial Examples
Computing Optimization-Based Prompt Injections Against Closed-Weights Models By Misusing a Fine-Tuning API | Authors: Andrey Labunets, Nishit V. Pandya, Ashish Hooda, Xiaohan Fu, Earlence Fernandes | Published: 2025-01-16 | Tags: Prompt Injection, Attack Evaluation, Optimization Problems
A Survey on Responsible LLMs: Inherent Risk, Malicious Use, and Mitigation Strategy | Authors: Huandong Wang, Wenjie Fu, Yingzhou Tang, Zhilong Chen, Yuxi Huang, Jinghua Piao, Chen Gao, Fengli Xu, Tao Jiang, Yong Li | Published: 2025-01-16 | Tags: Survey Paper, Privacy Preservation, Prompt Injection, Large Language Models
Neural Honeytrace: A Robust Plug-and-Play Watermarking Framework against Model Extraction Attacks | Authors: Yixiao Xu, Binxing Fang, Rui Wang, Yinghai Zhou, Shouling Ji, Yuan Liu, Mohan Li, Zhihong Tian | Published: 2025-01-16 | Updated: 2025-01-17 | Tags: Watermarking, Model Extraction Attacks, Attack Evaluation
Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography | Authors: Ilia Shumailov, Daniel Ramage, Sarah Meiklejohn, Peter Kairouz, Florian Hartmann, Borja Balle, Eugene Bagdasarian | Published: 2025-01-15 | Tags: Trusted Capable Model Environments, Privacy Preservation, Cryptography