Watermarking

The Steganographic Potentials of Language Models

Authors: Artem Karpov, Tinuade Adeleke, Seong Hah Cho, Natalia Perez-Campanero | Published: 2025-05-06
Tags: RAG, Author Contributions, Watermarking

Steering the CensorShip: Uncovering Representation Vectors for LLM “Thought” Control

Authors: Hannah Cyberey, David Evans | Published: 2025-04-23
Tags: Prompt Injection, Psychological Manipulation, Watermarking

Snorkeling in dark waters: A longitudinal surface exploration of unique Tor Hidden Services (Extended Version)

Authors: Alfonso Rodriguez Barredo-Valenzuela, Sergio Pastrana Portillo, Guillermo Suarez-Tangil | Published: 2025-04-23
Tags: Network Threat Detection, Research Methodology, Watermarking

MCMC for Bayesian estimation of Differential Privacy from Membership Inference Attacks

Authors: Ceren Yildirim, Kamer Kaya, Sinan Yildirim, Erkay Savas | Published: 2025-04-23
Tags: Privacy-Preserving Data Mining, Membership Inference, Watermarking

A Collaborative Intrusion Detection System Using Snort IDS Nodes

Authors: Tom Davies, Max Hashem Eiza, Nathan Shone, Rob Lyon | Published: 2025-04-23
Tags: Network Threat Detection, Malware Detection Methods, Watermarking

PiCo: Jailbreaking Multimodal Large Language Models via $\textbf{Pi}$ctorial $\textbf{Co}$de Contextualization

Authors: Aofan Liu, Lulu Tang, Ting Pan, Yuguo Yin, Bin Wang, Ao Yang | Published: 2025-04-02 | Updated: 2025-04-07
Tags: Model Performance Evaluation, Large Language Models, Watermarking

Generating Privacy-Preserving Personalized Advice with Zero-Knowledge Proofs and LLMs

Authors: Hiroki Watanabe, Motonobu Uchikoshi | Published: 2025-02-10 | Updated: 2025-04-24
Tags: Alignment, Privacy-Preserving Data Mining, Watermarking

Adversarial Reprogramming of Neural Networks

Authors: Gamaleldin F. Elsayed, Ian Goodfellow, Jascha Sohl-Dickstein | Published: 2018-06-28 | Updated: 2018-11-29
Tags: Model Robustness Guarantees, Adversarial Examples, Watermarking

On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses

Authors: Anish Athalye, Nicholas Carlini | Published: 2018-04-10
Tags: Model Robustness Guarantees, Adversarial Attacks, Watermarking

Robust Decentralized Learning Using ADMM with Unreliable Agents

Authors: Qunwei Li, Bhavya Kailkhura, Ryan Goldhahn, Priyadip Ray, Pramod K. Varshney | Published: 2017-10-14 | Updated: 2018-05-21
Tags: Robustness Improvement Methods, Convergence Properties, Watermarking