Abstract
Large language models (LLMs) are increasingly being used to protect sensitive
user data. However, current LLM-based privacy solutions assume that these
models can reliably detect personally identifiable information (PII),
particularly named entities. In this paper, we challenge that assumption by
revealing systematic failures in LLM-based privacy tasks. Specifically, we show
that modern LLMs regularly overlook human names even in short text snippets when
the surrounding context is ambiguous, causing the names to be misinterpreted or
mishandled. We propose AMBENCH, a benchmark dataset of seemingly ambiguous
human names that leverages the name regularity bias phenomenon; the names are
embedded in concise text snippets alongside benign prompt injections. Our
experiments on modern LLMs tasked with detecting PII, as well as on specialized
tools, show that recall on ambiguous names drops by 20--40% compared to more
recognizable names.
Furthermore, when benign prompt injections are present, ambiguous human names
are four times more likely to be omitted from supposedly privacy-preserving
summaries generated by LLMs. These findings highlight the underexplored risks of
relying solely on LLMs to safeguard user privacy and underscore the need for a
more systematic investigation into their privacy failure modes.