Prompt Injection

Evaluating Apple Intelligence’s Writing Tools for Privacy Against Large Language Model-Based Inference Attacks: Insights from Early Datasets

Authors: Mohd. Farhan Israk Soumik, Syed Mhamudul Hasan, Abdur R. Shahid | Published: 2025-06-04
Application of Text Classification
Privacy Issues
Prompt Injection

Client-Side Zero-Shot LLM Inference for Comprehensive In-Browser URL Analysis

Authors: Avihay Cohen | Published: 2025-06-04
Alignment
Prompt Injection
Dynamic Analysis

CyberGym: Evaluating AI Agents’ Cybersecurity Capabilities with Real-World Vulnerabilities at Scale

Authors: Zhun Wang, Tianneng Shi, Jingxuan He, Matthew Cai, Jialin Zhang, Dawn Song | Published: 2025-06-03
Prompt Injection
Dynamic Analysis Method
Watermark Evaluation

BitBypass: A New Direction in Jailbreaking Aligned Large Language Models with Bitstream Camouflage

Authors: Kalyan Nakka, Nitesh Saxena | Published: 2025-06-03
Disabling Safety Mechanisms of LLM
Detection Rate of Phishing Attacks
Prompt Injection

Beyond the Protocol: Unveiling Attack Vectors in the Model Context Protocol (MCP) Ecosystem

Authors: Hao Song, Yiming Shen, Wenxuan Luo, Leixin Guo, Ting Chen, Jiashui Wang, Beibei Li, Xiaosong Zhang, Jiachi Chen | Published: 2025-05-31 | Updated: 2025-08-20
Indirect Prompt Injection
Prompt Injection
Attack Type

A Systematic Review of Metaheuristics-Based and Machine Learning-Driven Intrusion Detection Systems in IoT

Authors: Mohammad Shamim Ahsan, Salekul Islam, Swakkhar Shatabda | Published: 2025-05-31 | Updated: 2025-06-03
Prompt Injection
Intrusion Detection System
Selection and Evaluation of Optimization Algorithms

SafeCOMM: A Study on Safety Degradation in Fine-Tuned Telecom Large Language Models

Authors: Aladin Djuhera, Swanand Ravindra Kadhe, Farhan Ahmed, Syed Zawad, Fernando Koch, Walid Saad, Holger Boche | Published: 2025-05-29 | Updated: 2025-10-27
Prompt Injection
Large Language Model
Safety Evaluation

Does Johnny Get the Message? Evaluating Cybersecurity Notifications for Everyday Users

Authors: Victor Jüttner, Erik Buchmann | Published: 2025-05-28
Personalization
Prompt Injection
Explanation of Countermeasures

Test-Time Immunization: A Universal Defense Framework Against Jailbreaks for (Multimodal) Large Language Models

Authors: Yongcan Yu, Yanbo Wang, Ran He, Jian Liang | Published: 2025-05-28
LLM Security
Prompt Injection
Large Language Model

Jailbreak Distillation: Renewable Safety Benchmarking

Authors: Jingyu Zhang, Ahmed Elgohary, Xiawei Wang, A S M Iftekhar, Ahmed Magooda, Benjamin Van Durme, Daniel Khashabi, Kyle Jackson | Published: 2025-05-28
Prompt Injection
Model Evaluation
Attack Evaluation