Literature Database

Poison-RAG: Adversarial Data Poisoning Attacks on Retrieval-Augmented Generation in Recommender Systems

Authors: Fatemeh Nazary, Yashar Deldjoo, Tommaso di Noia | Published: 2025-01-20
Poisoning Attacks on RAG
Tag Selection Strategy
Poisoning Attack

Everyone’s Privacy Matters! An Analysis of Privacy Leakage from Real-World Facial Images on Twitter and Associated User Behaviors

Authors: Yuqi Niu, Weidong Qiu, Peng Tang, Lifan Wang, Shuo Chen, Shujun Li, Nadin Kokciyan, Ben Niu | Published: 2025-01-20
Privacy Techniques
Attacks That Infer Personal Information via AI-Based Image Analysis
Facial Recognition Technology

LegalGuardian: A Privacy-Preserving Framework for Secure Integration of Large Language Models in Legal Practice

Authors: M. Mikail Demir, Hakan T. Otal, M. Abdullah Canbaz | Published: 2025-01-19
Privacy Protection
Training Improvement
Safety Alignment

Latent-space adversarial training with post-aware calibration for defending large language models against jailbreak attacks

Authors: Xin Yi, Yue Li, Linlin Wang, Xiaoling Wang, Liang He | Published: 2025-01-18
Prompt Injection
Adversarial Training
Over-Refusal Mitigation

AI/ML Based Detection and Categorization of Covert Communication in IPv6 Network

Authors: Mohammad Wali Ur Rahman, Yu-Zheng Lin, Carter Weeks, David Ruddell, Jeff Gabriellini, Bill Hayes, Salim Hariri, Edward V. Ziegler Jr | Published: 2025-01-18
IPv6 Security
Network Threat Detection
Traffic Analysis

Differentiable Adversarial Attacks for Marked Temporal Point Processes

Authors: Pritish Chakraborty, Vinayak Gupta, Rahul R, Srikanta J. Bedathur, Abir De | Published: 2025-01-17
Adversarial Examples
Optimization Problem

GaussMark: A Practical Approach for Structural Watermarking of Language Models

Authors: Adam Block, Ayush Sekhari, Alexander Rakhlin | Published: 2025-01-17
Watermarking
Hypothesis Testing
Experimental Validation

CaFA: Cost-aware, Feasible Attacks With Database Constraints Against Neural Tabular Classifiers

Authors: Matan Ben-Tov, Daniel Deutch, Nave Frost, Mahmood Sharif | Published: 2025-01-17
Data Integrity Constraints
Experimental Validation
Adversarial Examples

Computing Optimization-Based Prompt Injections Against Closed-Weights Models By Misusing a Fine-Tuning API

Authors: Andrey Labunets, Nishit V. Pandya, Ashish Hooda, Xiaohan Fu, Earlence Fernandes | Published: 2025-01-16
Prompt Injection
Attack Evaluation
Optimization Problem

A Survey on Responsible LLMs: Inherent Risk, Malicious Use, and Mitigation Strategy

Authors: Huandong Wang, Wenjie Fu, Yingzhou Tang, Zhilong Chen, Yuxi Huang, Jinghua Piao, Chen Gao, Fengli Xu, Tao Jiang, Yong Li | Published: 2025-01-16
Survey Paper
Privacy Protection
Prompt Injection
Large Language Models