Abstract
Queueing systems present many opportunities for applying machine-learning
predictions, such as estimated service times, to improve system performance.
This integration raises numerous open questions about how predictions can be
effectively leveraged to improve scheduling decisions. Recent studies explore
queues with predicted service times, typically aiming to minimize the time
jobs spend in the system. We review these works, highlight the effectiveness of predictions,
and present open questions on queue performance. We then turn to an
important practical example of using predictions in scheduling: Large
Language Model (LLM) systems, which present novel scheduling challenges and
highlight the potential for predictions to improve performance. In particular,
we consider LLMs performing inference. Inference requests (jobs) in LLM systems
are inherently complex; they have variable inference times, dynamic memory
footprints that are constrained by key-value (KV) store memory limitations, and
multiple possible preemption approaches that affect performance differently. We
provide background on the key aspects of scheduling in LLM systems, and
introduce new models and open problems that arise from these aspects. We argue that
there are significant opportunities for applying insights and analysis from
queueing theory to scheduling in LLM systems.
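
To make the flavor of such prediction-based policies concrete, the minimal sketch below combines the two ingredients the abstract highlights: predicted service times and KV-memory limits. It admits waiting jobs in shortest-predicted-first order while their predicted KV footprints fit a fixed memory budget. The names (Job, schedule_batch, predicted_kv_bytes) and the greedy admission rule are illustrative assumptions, not the method of the paper or of any particular LLM serving system.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical job model: the fields are illustrative assumptions,
# not taken from any system described in the abstract.
@dataclass(order=True)
class Job:
    predicted_service: float                        # ML-predicted service time
    job_id: int = field(compare=False)
    predicted_kv_bytes: int = field(compare=False)  # predicted peak KV footprint

def schedule_batch(waiting, kv_budget_bytes):
    """Greedy shortest-predicted-job-first admission under a KV-memory budget.

    `waiting` is a min-heap keyed on predicted service time. Jobs are admitted
    in predicted-shortest order while their predicted KV footprints fit the
    remaining budget; jobs that do not fit are deferred back to the queue.
    """
    batch, deferred, used = [], [], 0
    while waiting:
        job = heapq.heappop(waiting)
        if used + job.predicted_kv_bytes <= kv_budget_bytes:
            batch.append(job)
            used += job.predicted_kv_bytes
        else:
            deferred.append(job)
    for job in deferred:  # return non-admitted jobs to the queue
        heapq.heappush(waiting, job)
    return batch

# Example: three requests with different predicted lengths and footprints.
waiting = []
for jid, (svc, kv) in enumerate([(5.0, 6 << 20), (1.2, 2 << 20), (3.4, 5 << 20)]):
    heapq.heappush(waiting, Job(svc, jid, kv))
batch = schedule_batch(waiting, kv_budget_bytes=8 << 20)
print([j.job_id for j in batch])  # shortest-predicted jobs admitted first: [1, 2]
```

In a real serving system the predictions would come from a learned model, and admission would interact with batching and preemption; the sketch only shows where predictions enter the scheduling decision.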