Labels Predicted by AI
Hallucination, Audit Method, Efficient Proof System
Abstract
As stochastic systems, Large Language Models (LLMs) may generate numbers that deviate from the available data, a failure known as numeric hallucination. Existing safeguards (retrieval-augmented generation, citations, and uncertainty estimation) improve transparency but cannot guarantee fidelity: fabricated or misquoted values may still be displayed as if correct. We propose Proof-Carrying Numbers (PCN), a presentation-layer protocol that enforces numeric fidelity through mechanical verification. Under PCN, numeric spans are emitted as claim-bound tokens tied to structured claims, and a verifier checks each token under a declared policy (e.g., exact equality, rounding, aliases, or tolerance with qualifiers). Crucially, PCN places verification in the renderer, not the model: only claim-checked numbers are marked as verified, and all others default to unverified. This separation prevents spoofing and guarantees fail-closed behavior. We formalize PCN and prove soundness, completeness under honest tokens, fail-closed behavior, and monotonicity under policy refinement. PCN is lightweight and model-agnostic, integrates seamlessly into existing applications, and can be extended with cryptographic commitments. By enforcing verification as a mandatory step before display, PCN establishes a simple contract for numerically sensitive settings: trust is earned only by proof, while the absence of a mark communicates uncertainty.
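To make the renderer-side contract concrete, the following is a minimal sketch of how a PCN-style check might look, based only on the abstract's description. All names here (`Claim`, `NumericToken`, `check`, `render`, the policy strings, and the `[verified]` / `[unverified]` marks) are hypothetical illustrations, not the paper's actual API.

```python
import math
from dataclasses import dataclass

# Illustrative sketch of the PCN idea: a numeric span is emitted as a
# claim-bound token, and the *renderer* verifies it against a structured
# claim under a declared policy before marking it as verified.

@dataclass(frozen=True)
class Claim:
    claim_id: str
    value: float          # the value recorded in the trusted data source

@dataclass(frozen=True)
class NumericToken:
    text: str             # the number as the model rendered it, e.g. "12.5"
    claim_id: str | None  # claim the token is bound to (None = unbound)

def check(token: NumericToken, claims: dict[str, Claim],
          policy: str = "exact", tolerance: float = 0.0) -> bool:
    """Return True only if the token's number matches its claim under the policy."""
    if token.claim_id is None or token.claim_id not in claims:
        return False                      # fail-closed: unbound or unknown claim
    try:
        shown = float(token.text)
    except ValueError:
        return False                      # fail-closed: not parseable as a number
    truth = claims[token.claim_id].value
    if policy == "exact":
        return shown == truth
    if policy == "round":                 # truth rounded to the shown precision
        decimals = len(token.text.split(".")[1]) if "." in token.text else 0
        return round(truth, decimals) == shown
    if policy == "tolerance":
        return math.isclose(shown, truth, abs_tol=tolerance)
    return False                          # unknown policy: fail closed

def render(token: NumericToken, claims: dict[str, Claim], **kw) -> str:
    """Only claim-checked numbers get the verified mark; all others stay unverified."""
    mark = "[verified]" if check(token, claims, **kw) else "[unverified]"
    return f"{token.text} {mark}"

claims = {"c1": Claim("c1", 12.5)}
print(render(NumericToken("12.5", "c1"), claims))    # 12.5 [verified]
print(render(NumericToken("13.0", "c1"), claims))    # 13.0 [unverified]
print(render(NumericToken("12.50", None), claims))   # 12.50 [unverified]
print(render(NumericToken("12.49", "c1"), claims,
             policy="tolerance", tolerance=0.05))    # 12.49 [verified]
```

Because the verified mark is assigned by the rendering step rather than by the model, a fabricated number with no matching claim can never acquire it; this is the fail-closed default the abstract describes.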