AI Security Portal Bot

MUBox: A Critical Evaluation Framework of Deep Machine Unlearning

Authors: Xiang Li, Bhavani Thuraisingham, Wenqi Wei | Published: 2025-05-13
Class-wise Unlearning Evaluation
Poisoned Data Detection
Watermarking Techniques

LibVulnWatch: A Deep Assessment Agent System and Leaderboard for Uncovering Hidden Vulnerabilities in Open-Source AI Libraries

Authors: Zekun Wu, Seonglae Cho, Umar Mohammed, Cristian Munoz, Kleyton Costa, Xin Guan, Theo King, Ze Wang, Emre Kazim, Adriano Koshiyama | Published: 2025-05-13 | Updated: 2025-06-30
Indirect Prompt Injection
Risk Assessment Methods
Dependency Management

Privacy-Preserving Analytics for Smart Meter (AMI) Data: A Hybrid Approach to Comply with CPUC Privacy Regulations

Authors: Benjamin Westrich | Published: 2025-05-13
Detection of Poisoned Data for Backdoor Attacks
Privacy-by-Design Principles
Cryptographic Techniques

Securing WiFi Fingerprint-based Indoor Localization Systems from Malicious Access Points

Authors: Fariha Tanjim Shifat, Sayma Sarwar Ela, Mosarrat Jahan | Published: 2025-05-12
Reliability Assessment
Machine Learning Techniques
Anomaly Detection Methods

SecReEvalBench: A Multi-turned Security Resilience Evaluation Benchmark for Large Language Models

Authors: Huining Cui, Wei Liu | Published: 2025-05-12
LLM Security
Prompt Injection
Prompt Leaking

Security through the Eyes of AI: How Visualization is Shaping Malware Detection

Authors: Asmitha K. A., Matteo Brosolo, Serena Nicolazzo, Antonino Nocera, Vinod P., Rafidha Rehiman K. A., Muhammed Shafi K. P | Published: 2025-05-12
Prompt Injection
Malware Classification
Adversarial Example Detection

Private LoRA Fine-tuning of Open-Source LLMs with Homomorphic Encryption

Authors: Jordan Frery, Roman Bredehoft, Jakub Klemsa, Arthur Meyre, Andrei Stoian | Published: 2025-05-12
LLM Security
Cryptographic Techniques
Machine Learning Techniques

Comet: Accelerating Private Inference for Large Language Model by Predicting Activation Sparsity

Authors: Guang Yan, Yuhui Zhang, Zimu Guo, Lutan Zhao, Xiaojun Chen, Chen Wang, Wenhao Wang, Dan Meng, Rui Hou | Published: 2025-05-12
Sparsity Optimization
Sparse Representation
Privacy-by-Design Principles

Securing Genomic Data Against Inference Attacks in Federated Learning Environments

Authors: Chetan Pathade, Shubham Patil | Published: 2025-05-12
Privacy-by-Design Principles
Attribute Disclosure Risk
Differential Privacy

One Trigger Token Is Enough: A Defense Strategy for Balancing Safety and Usability in Large Language Models

Authors: Haoran Gu, Handing Wang, Yi Mei, Mengjie Zhang, Yaochu Jin | Published: 2025-05-12
LLM Security
Bypassing LLM Safety Mechanisms
Prompt Injection