Hallucination

Strategic Dishonesty Can Undermine AI Safety Evaluations of Frontier LLM

Authors: Alexander Panfilov, Evgenii Kortukov, Kristina Nikolić, Matthias Bethge, Sebastian Lapuschkin, Wojciech Samek, Ameya Prabhu, Maksym Andriushchenko, Jonas Geiping | Published: 2025-09-22
Hallucination
Weapon Design Techniques
Fraud Techniques

Proof-Carrying Numbers (PCN): A Protocol for Trustworthy Numeric Answers from LLMs via Claim Verification

Authors: Aivin V. Solatorio | Published: 2025-09-08
Hallucination
Efficient Proof System
Audit Techniques

Using LLMs for Security Advisory Investigations: How Far Are We?

Authors: Bayu Fedra Abdullah, Yusuf Sulistyo Nugroho, Brittany Reid, Raula Gaikovina Kula, Kazumasa Shimari, Kenichi Matsumoto | Published: 2025-06-16
Advice Provision
Hallucination
Prompt Leaking

DFIR-Metric: A Benchmark Dataset for Evaluating Large Language Models in Digital Forensics and Incident Response

Authors: Bilel Cherif, Tamas Bisztray, Richard A. Dubniczky, Aaesha Aldahmani, Saeed Alshehhi, Norbert Tihanyi | Published: 2025-05-26
Hallucination
Model Performance Evaluation
Evaluation Method

VADER: A Human-Evaluated Benchmark for Vulnerability Assessment, Detection, Explanation, and Remediation

Authors: Ethan TS. Liu, Austin Wang, Spencer Mateega, Carlos Georgescu, Danny Tang | Published: 2025-05-26
Website Vulnerability
Hallucination
Dynamic Vulnerability Management

Phare: A Safety Probe for Large Language Models

Authors: Pierre Le Jeune, Benoît Malézieux, Weixuan Xiao, Matteo Dora | Published: 2025-05-16 | Updated: 2025-05-19
RAG
Bias Mitigation Techniques
Hallucination

Cost-Effective Hallucination Detection for LLMs

Authors: Simon Valentin, Jinmiao Fu, Gianluca Detommaso, Shaoyuan Xu, Giovanni Zappella, Bryan Wang | Published: 2024-07-31 | Updated: 2024-08-09
Hallucination
Detection of Hallucinations
Generative Model

DefAn: Definitive Answer Dataset for LLMs Hallucination Evaluation

Authors: A B M Ashikur Rahman, Saeed Anwar, Muhammad Usman, Ajmal Mian | Published: 2024-06-13
Hallucination
Model Evaluation
Bias in Training Data

On Large Language Models’ Hallucination with Regard to Known Facts

Authors: Che Jiang, Biqing Qi, Xiangyu Hong, Dayuan Fu, Yang Cheng, Fandong Meng, Mo Yu, Bowen Zhou, Jie Zhou | Published: 2024-03-29 | Updated: 2024-10-28
Hallucination
Detection of Hallucinations
Model Architecture

The Dawn After the Dark: An Empirical Study on Factuality Hallucination in Large Language Models

Authors: Junyi Li, Jie Chen, Ruiyang Ren, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen | Published: 2024-01-06
LLM Hallucination
Hallucination
Detection of Hallucinations