Hallucination

Is Reasoning Capability Enough for Safety in Long-Context Language Models?

Authors: Yu Fu, Haz Sameen Shahgir, Huanli Gong, Zhipeng Wei, N. Benjamin Erichson, Yue Dong | Published: 2026-02-09
Hallucination
Safety Analysis
Reasoning Capability

Hallucination-Resistant Security Planning with a Large Language Model

Authors: Kim Hammar, Tansu Alpcan, Emil Lupu | Published: 2026-02-05
LLM Performance Evaluation
Hallucination
Detection of Hallucinations

Large Language Models Cannot Reliably Detect Vulnerabilities in JavaScript: The First Systematic Benchmark and Evaluation

Authors: Qingyuan Fei, Xin Liu, Song Li, Shujiang Wu, Jianwei Hou, Ping Chen, Zifeng Kang | Published: 2025-12-01
Cybersecurity
Data-Driven Vulnerability Assessment
Hallucination

Auditing M-LLMs for Privacy Risks: A Synthetic Benchmark and Evaluation Framework

Authors: Junhao Li, Jiahao Chen, Zhou Feng, Chunyi Zhou | Published: 2025-11-05
Hallucination
Privacy Violation
Privacy Protection

SeedPrints: Fingerprints Can Even Tell Which Seed Your Large Language Model Was Trained From

Authors: Yao Tong, Haonan Wang, Siquan Li, Kenji Kawaguchi, Tianyang Hu | Published: 2025-09-30
Token Distribution Analysis
Hallucination
Model Performance Evaluation

Measuring Physical-World Privacy Awareness of Large Language Models: An Evaluation Benchmark

Authors: Xinjie Shen, Mufei Li, Pan Li | Published: 2025-09-27 | Updated: 2025-10-13
Hallucination
Privacy Enhancing Technology
Ethical Choice Evaluation

Strategic Dishonesty Can Undermine AI Safety Evaluations of Frontier LLM

Authors: Alexander Panfilov, Evgenii Kortukov, Kristina Nikolić, Matthias Bethge, Sebastian Lapuschkin, Wojciech Samek, Ameya Prabhu, Maksym Andriushchenko, Jonas Geiping | Published: 2025-09-22
Hallucination
Weapon Design Methods
Fraud Techniques

Proof-Carrying Numbers (PCN): A Protocol for Trustworthy Numeric Answers from LLMs via Claim Verification

Authors: Aivin V. Solatorio | Published: 2025-09-08
Hallucination
Efficient Proof System
Auditing Methods

AICrypto: A Comprehensive Benchmark for Evaluating Cryptography Capabilities of Large Language Models

Authors: Yu Wang, Yijian Liu, Liheng Ji, Han Luo, Wenjie Li, Xiaofei Zhou, Chiyun Feng, Puji Wang, Yuhan Cao, Geyuan Zhang, Xiaojian Li, Rongwu Xu, Yilei Chen, Tianxing He | Published: 2025-07-13 | Updated: 2025-09-30
Algorithm
Hallucination
Prompt validation

Using LLMs for Security Advisory Investigations: How Far Are We?

Authors: Bayu Fedra Abdullah, Yusuf Sulistyo Nugroho, Brittany Reid, Raula Gaikovina Kula, Kazumasa Shimari, Kenichi Matsumoto | Published: 2025-06-16
Advice Provision
Hallucination
Prompt leaking