Hallucination

Auditing M-LLMs for Privacy Risks: A Synthetic Benchmark and Evaluation Framework

Authors: Junhao Li, Jiahao Chen, Zhou Feng, Chunyi Zhou | Published: 2025-11-05
Hallucination
Privacy Violation
Privacy Protection

SeedPrints: Fingerprints Can Even Tell Which Seed Your Large Language Model Was Trained From

Authors: Yao Tong, Haonan Wang, Siquan Li, Kenji Kawaguchi, Tianyang Hu | Published: 2025-09-30
Token Distribution Analysis
Hallucination
Model Performance Evaluation

Measuring Physical-World Privacy Awareness of Large Language Models: An Evaluation Benchmark

Authors: Xinjie Shen, Mufei Li, Pan Li | Published: 2025-09-27 | Updated: 2025-10-13
Hallucination
Privacy Enhancing Technology
Ethical Choice Evaluation

Strategic Dishonesty Can Undermine AI Safety Evaluations of Frontier LLM

Authors: Alexander Panfilov, Evgenii Kortukov, Kristina Nikolić, Matthias Bethge, Sebastian Lapuschkin, Wojciech Samek, Ameya Prabhu, Maksym Andriushchenko, Jonas Geiping | Published: 2025-09-22
Hallucination
Weapon Design Methods
Fraud Techniques

Proof-Carrying Numbers (PCN): A Protocol for Trustworthy Numeric Answers from LLMs via Claim Verification

Authors: Aivin V. Solatorio | Published: 2025-09-08
Hallucination
Efficient Proof System
Auditing Methods

AICrypto: A Comprehensive Benchmark for Evaluating Cryptography Capabilities of Large Language Models

Authors: Yu Wang, Yijian Liu, Liheng Ji, Han Luo, Wenjie Li, Xiaofei Zhou, Chiyun Feng, Puji Wang, Yuhan Cao, Geyuan Zhang, Xiaojian Li, Rongwu Xu, Yilei Chen, Tianxing He | Published: 2025-07-13 | Updated: 2025-09-30
Algorithm
Hallucination
Prompt Validation

Using LLMs for Security Advisory Investigations: How Far Are We?

Authors: Bayu Fedra Abdullah, Yusuf Sulistyo Nugroho, Brittany Reid, Raula Gaikovina Kula, Kazumasa Shimari, Kenichi Matsumoto | Published: 2025-06-16
Advice Provision
Hallucination
Prompt Leaking

DFIR-Metric: A Benchmark Dataset for Evaluating Large Language Models in Digital Forensics and Incident Response

Authors: Bilel Cherif, Tamas Bisztray, Richard A. Dubniczky, Aaesha Aldahmani, Saeed Alshehhi, Norbert Tihanyi | Published: 2025-05-26
Hallucination
Model Performance Evaluation
Evaluation Method

VADER: A Human-Evaluated Benchmark for Vulnerability Assessment, Detection, Explanation, and Remediation

Authors: Ethan TS. Liu, Austin Wang, Spencer Mateega, Carlos Georgescu, Danny Tang | Published: 2025-05-26
Website Vulnerability
Hallucination
Dynamic Vulnerability Management

Phare: A Safety Probe for Large Language Models

Authors: Pierre Le Jeune, Benoît Malézieux, Weixuan Xiao, Matteo Dora | Published: 2025-05-16 | Updated: 2025-05-19
RAG
Bias Mitigation Techniques
Hallucination