As the scale of training corpora for large language models (LLMs) grows,
model developers have become increasingly reluctant to disclose details about
their data. This lack of transparency poses challenges to scientific evaluation and
ethical deployment. Recently, pretraining data detection approaches, which
infer whether a given text was part of an LLM's training data through black-box
access, have been explored. The Min-K\% Prob method, which has achieved
state-of-the-art results, assumes that a non-training example tends to contain
a few outlier words with low token probabilities. However, its effectiveness may
be limited because it tends to misclassify non-training texts that contain many
common words to which LLMs assign high probabilities. To address this issue,
we introduce a divergence-based calibration method, inspired by the
divergence-from-randomness concept, to calibrate token probabilities for
pretraining data detection. We compute the cross-entropy (i.e., the divergence)
between the token probability distribution and the token frequency distribution
to derive a detection score. In addition, we develop PatentMIA, a
Chinese-language benchmark for assessing the performance of detection approaches
for LLMs on Chinese text. Experimental results on English-language benchmarks and PatentMIA
demonstrate that our proposed method significantly outperforms existing
methods. Our code and PatentMIA benchmark are available at
https://github.com/zhang-wei-chao/DC-PDD.
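
The sketch below illustrates the kind of divergence-based score described above: a cross-entropy-style quantity between the probabilities an LLM assigns to the tokens of a candidate text and the frequencies of those tokens in a reference corpus. It is a minimal illustration under stated assumptions, not the paper's exact formulation; the function name `dc_pdd_score`, the mean aggregation, the `eps` smoothing, and the decision direction of the threshold are all assumptions introduced here for clarity.

```python
import numpy as np

def dc_pdd_score(token_probs, token_freqs, eps=1e-12):
    """Cross-entropy-style score between LLM token probabilities and
    reference-corpus token frequencies for one candidate text.

    token_probs: probability the target LLM assigns to each token of the text.
    token_freqs: relative frequency of each of those tokens in a large
                 reference corpus (the token frequency distribution).
    Returns a scalar detection score, to be compared against a threshold
    tuned on held-out data (threshold and direction are conventions chosen
    by the detector, not fixed here).
    """
    p = np.asarray(token_probs, dtype=float)
    q = np.asarray(token_freqs, dtype=float)
    # Divergence-based calibration (illustrative): weight each token's LLM
    # probability by how rare the token is in the reference corpus, i.e. a
    # per-token cross-entropy between the two distributions.
    return float(np.mean(p * np.log(1.0 / (q + eps))))

# Toy usage: three tokens with LLM-assigned probabilities and corpus frequencies.
print(dc_pdd_score([0.9, 0.2, 0.7], [0.05, 0.001, 0.02]))
```

In practice the token frequency distribution would be estimated from a large reference corpus in the same language as the candidate text; the calibration is what distinguishes texts that score highly merely because they are full of common words from texts the model has actually memorized.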