Abstract
AI assistants are becoming an integral part of society, used to ask for
advice or help with personal and confidential matters. In this paper, we unveil a
novel side-channel that can be used to read encrypted responses from AI
assistants over the web: the token-length side-channel. We found that the
services of many vendors, including OpenAI and Microsoft, exhibit this side-channel.
However, inferring the content of a response from a token-length sequence
alone proves challenging: tokens are akin to words, and responses can be
several sentences long, yielding millions of grammatically correct candidate
sentences. In this paper, we show how this challenge can be overcome by (1)
using a large language model (LLM) to translate these sequences into text,
(2) providing the LLM with inter-sentence context to narrow the
search space, and (3) performing a known-plaintext attack by fine-tuning the
model on the target model's writing style.
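To make the side-channel concrete, the following is a minimal illustrative sketch (not the paper's code) of how an eavesdropper could recover a token-length sequence from the sizes of successive streamed ciphertext records. It assumes, for simplicity, that each observed payload size is cumulative, that the stream delivers one token per record, and that the cipher adds no per-record padding; all names are hypothetical.

```python
def token_lengths(payload_sizes):
    """Infer per-token lengths from observed cumulative payload sizes.

    With a length-preserving stream cipher and one token per record,
    the growth in ciphertext size between consecutive records equals
    the plaintext length of the newly streamed token.
    """
    return [b - a for a, b in zip(payload_sizes, payload_sizes[1:])]

# Example: an eavesdropper observes cumulative payloads of
# 100, 103, 108, and 112 bytes on the wire.
print(token_lengths([100, 103, 108, 112]))  # → [3, 5, 4]
```

The recovered sequence (here, tokens of 3, 5, and 4 characters) is the raw signal that the LLM-based reconstruction steps above then translate into candidate text.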
Using these methods, we were able to accurately reconstruct 29\% of an AI
assistant's responses and to successfully infer the topic of 55\% of them. To
demonstrate the threat, we performed the attack against OpenAI's ChatGPT-4 and
Microsoft's Copilot, on both browser and API traffic.