Abstract
Unanticipated runtime errors, lacking predefined handlers, can abruptly
terminate execution and lead to severe consequences, such as data loss or
system crashes. Despite extensive efforts to identify potential errors during
the development phase, such unanticipated errors remain difficult to eliminate
entirely, making runtime mitigation measures indispensable to minimize their
impact. Automated self-healing techniques, such
as reusing existing handlers, have been investigated to reduce the loss caused
by execution termination. However, the usability of existing methods is limited
by their predefined heuristic rules, and they fail to handle
diverse runtime errors adaptively. Recently, the advent of Large Language
Models (LLMs) has opened new avenues for addressing this problem. Inspired by
their remarkable capabilities in understanding and generating code, we propose
to handle runtime errors in real time using LLMs.
Specifically, we introduce Healer, the first LLM-assisted self-healing
framework for handling runtime errors. When an unhandled runtime error occurs,
Healer is activated to generate a piece of error-handling code with the help
of its internal LLM; the generated code is then executed inside the runtime
environment owned by the framework to obtain a rectified program state from
which the program can continue its execution. Our exploratory study
evaluates the performance of Healer using four different code benchmarks and
three state-of-the-art LLMs: GPT-3.5, GPT-4, and CodeQwen-7B. Results show
that, without the need for any fine-tuning, GPT-4 can successfully help
programs recover from 72.8% of runtime errors, highlighting the potential of
LLMs in handling runtime errors.
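
The recovery loop the abstract describes can be sketched as follows. This is a minimal illustration, not Healer's actual API: the LLM call is stubbed out with a hard-coded fix, and all names (`query_llm_for_fix`, `healed_divide`, the `state` dictionary) are hypothetical.

```python
def query_llm_for_fix(error: Exception, state: dict) -> str:
    """Stand-in for the LLM call: returns Python code meant to repair `state`.

    A real system would prompt the model with the traceback and program
    context; here we hard-code a fix for ZeroDivisionError as an illustration.
    """
    if isinstance(error, ZeroDivisionError):
        return "state['denominator'] = 1  # fall back to a safe divisor"
    raise error  # no fix known: let the error propagate


def healed_divide(numerator: float, denominator: float) -> float:
    # Program state lives in a dict so generated code can rectify it in place.
    state = {"numerator": numerator, "denominator": denominator}
    try:
        return state["numerator"] / state["denominator"]
    except Exception as e:
        # On an unhandled error, execute the generated handler inside a
        # controlled environment to obtain a rectified state, then resume.
        exec(query_llm_for_fix(e, state), {}, {"state": state})
        return state["numerator"] / state["denominator"]


print(healed_divide(10, 0))  # recovers instead of crashing
```

The key design point mirrored here is that the generated code does not replace the failing computation; it only repairs the program state, after which the original computation is retried.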