Abstract
We consider coverless steganography where a Large Language Model (LLM) drives
an arithmetic coding decoder to generate stego-texts. An efficient method
should embed secret message bits in as few language tokens as possible, while
still keeping the stego-text natural and fluent. We show that, at the
individual-token level, this problem is mathematically equivalent to maximizing
the entropy of a replacement probability distribution for next-token
generation, subject to a constraint on the KL divergence between the chosen
distribution and the original distribution given by the LLM. A closed-form
solution to this optimization problem is provided and can be computed
efficiently. Several important practical issues are also tackled: 1) an
often-overlooked tokenization mismatch issue is resolved with a simple
prompt-selection approach; 2) the combination of the optimized distribution
with the vocabulary truncation technique is considered; and 3) the combination
of the optimized distribution with other sequence-level selection heuristics is
studied to further enhance efficiency and reliability.
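As an illustration of the per-token optimization, the sketch below assumes the
constraint takes the form D_KL(q || p) <= eps, where p is the LLM's next-token
distribution. Under that assumption, stationarity yields a power-scaled family
q proportional to p^gamma with gamma in [0, 1], and the exponent can be found
by a one-dimensional search. The function name and the exact formulation here
are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def max_entropy_under_kl(p, eps, tol=1e-8):
        # Sketch (hypothetical formulation): maximize entropy H(q) subject to
        # KL(q || p) <= eps, where p is the LLM's next-token distribution.
        # Stationarity gives q proportional to p**gamma for gamma in [0, 1]:
        # gamma = 1 recovers p (KL = 0), gamma = 0 gives the uniform
        # (maximum-entropy) distribution. We binary-search for the smallest
        # feasible gamma. Assumes p is strictly positive, as a softmax
        # output is.
        p = np.asarray(p, dtype=float)
        p = p / p.sum()

        def q_of(gamma):
            w = p ** gamma
            return w / w.sum()

        def kl(q):
            return float(np.sum(q * np.log(q / p)))

        if kl(q_of(0.0)) <= eps:
            return q_of(0.0)  # budget permits the uniform distribution
        lo, hi = 0.0, 1.0     # infeasible at lo, feasible (KL = 0) at hi
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if kl(q_of(mid)) <= eps:
                hi = mid      # feasible: try flattening further
            else:
                lo = mid
        return q_of(hi)

    # Example: flatten a peaked 4-token distribution under a 0.1-nat KL budget.
    q = max_entropy_under_kl([0.7, 0.2, 0.05, 0.05], eps=0.1)

Flattening the distribution in this way raises the entropy available to the
arithmetic coder, which is what allows more message bits to be embedded per
token while the KL budget bounds the deviation from the LLM's own statistics.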