Prompt Injection

JavelinGuard: Low-Cost Transformer Architectures for LLM Security

Authors: Yash Datta, Sharath Rajasekar | Published: 2025-06-09
Privacy Enhancing Technology
Prompt Injection
Model Architecture

Chain-of-Code Collapse: Reasoning Failures in LLMs via Adversarial Prompting in Code Generation

Authors: Jaechul Roh, Varun Gandhi, Shivani Anilkumar, Arin Garg | Published: 2025-06-08 | Updated: 2025-06-12
Performance Evaluation
Prompt Injection
Prompt Leaking

Evaluating Apple Intelligence’s Writing Tools for Privacy Against Large Language Model-Based Inference Attacks: Insights from Early Datasets

Authors: Mohd. Farhan Israk Soumik, Syed Mhamudul Hasan, Abdur R. Shahid | Published: 2025-06-04
Application of Text Classification
Privacy Issues
Prompt Injection

Client-Side Zero-Shot LLM Inference for Comprehensive In-Browser URL Analysis

Authors: Avihay Cohen | Published: 2025-06-04
Alignment
Prompt Injection
Dynamic Analysis

CyberGym: Evaluating AI Agents’ Cybersecurity Capabilities with Real-World Vulnerabilities at Scale

Authors: Zhun Wang, Tianneng Shi, Jingxuan He, Matthew Cai, Jialin Zhang, Dawn Song | Published: 2025-06-03
Prompt Injection
Dynamic Analysis Method
Watermark Evaluation

BitBypass: A New Direction in Jailbreaking Aligned Large Language Models with Bitstream Camouflage

Authors: Kalyan Nakka, Nitesh Saxena | Published: 2025-06-03
Disabling Safety Mechanisms of LLM
Detection Rate of Phishing Attacks
Prompt Injection

A Systematic Review of Metaheuristics-Based and Machine Learning-Driven Intrusion Detection Systems in IoT

Authors: Mohammad Shamim Ahsan, Salekul Islam, Swakkhar Shatabda | Published: 2025-05-31 | Updated: 2025-06-03
Prompt Injection
Intrusion Detection System
Selection and Evaluation of Optimization Algorithms

Does Johnny Get the Message? Evaluating Cybersecurity Notifications for Everyday Users

Authors: Victor Jüttner, Erik Buchmann | Published: 2025-05-28
Personalization
Prompt Injection
Explanation of Countermeasures

Test-Time Immunization: A Universal Defense Framework Against Jailbreaks for (Multimodal) Large Language Models

Authors: Yongcan Yu, Yanbo Wang, Ran He, Jian Liang | Published: 2025-05-28
LLM Security
Prompt Injection
Large Language Model

Jailbreak Distillation: Renewable Safety Benchmarking

Authors: Jingyu Zhang, Ahmed Elgohary, Xiawei Wang, A S M Iftekhar, Ahmed Magooda, Benjamin Van Durme, Daniel Khashabi, Kyle Jackson | Published: 2025-05-28
Prompt Injection
Model Evaluation
Attack Evaluation