Abstract
Private large language model (LLM) inference based on secure multi-party
computation (MPC) offers cryptographically secure protection for both user
prompts and proprietary model weights. However, it suffers from large latency
overhead, especially for long input sequences. While key-value (KV) cache
eviction algorithms have been proposed to reduce the computation and memory
cost of plaintext inference, they are not designed for MPC and cannot easily
benefit private inference. In this paper, we propose an accurate and
MPC-friendly KV cache eviction framework, dubbed MPCache. MPCache is built on
the observation that historical tokens in a long sequence may have different
effects on downstream decoding. Hence, MPCache combines a look-once static
eviction algorithm that discards unimportant tokens with a query-aware dynamic
selection algorithm that further selects a small subset of tokens for attention
computation. As existing dynamic selection algorithms incur too much latency,
we propose a series of optimizations to drastically reduce the KV cache
selection overhead, including an MPC-friendly similarity approximation,
hierarchical KV cache clustering, and a cross-layer index-sharing strategy. With
extensive experiments, we demonstrate that MPCache consistently outperforms
prior-art KV cache eviction baselines across different LLM generation tasks,
achieving 1.8~2.01x decoding latency reduction and 3.39~8.37x communication
reduction across different sequence lengths.
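
Below is a minimal plaintext sketch of the two-stage idea the abstract describes: a look-once static eviction pass followed by a query-aware dynamic selection at each decoding step. The importance scores, keep ratios, and dot-product similarity used here are illustrative assumptions, not MPCache's actual criteria, and none of the MPC protocols, similarity approximation, clustering, or index sharing are shown.

```python
import numpy as np

def static_evict(keys, values, importance, keep_ratio=0.5):
    """Look-once static eviction: drop tokens whose accumulated importance
    (e.g., prefill attention mass; a hypothetical score here) is low."""
    k = max(1, int(len(importance) * keep_ratio))
    idx = np.argsort(importance)[-k:]   # keep the k most important tokens
    idx.sort()                          # preserve sequence order
    return keys[idx], values[idx]

def dynamic_select(query, keys, values, top_k=8):
    """Query-aware dynamic selection: at each decoding step, score the
    remaining KV cache against the current query and keep the top_k entries.
    A plaintext dot product stands in for an MPC-friendly similarity."""
    scores = keys @ query               # similarity of each cached key to the query
    idx = np.argsort(scores)[-top_k:]
    idx.sort()
    return keys[idx], values[idx]

# Toy usage: a cache of 128 tokens with 64-dimensional heads.
rng = np.random.default_rng(0)
K = rng.standard_normal((128, 64))
V = rng.standard_normal((128, 64))
imp = rng.random(128)                   # hypothetical per-token importance scores
K_s, V_s = static_evict(K, V, imp, keep_ratio=0.5)
q = rng.standard_normal(64)
K_d, V_d = dynamic_select(q, K_s, V_s, top_k=16)
print(K_d.shape, V_d.shape)             # (16, 64) (16, 64)
```

In this sketch, attention at each decoding step would be computed only over the 16 selected entries rather than the full 128-token cache, which is the source of the computation and communication savings the abstract reports.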