Abstract
In the burgeoning field of Large Language Models (LLMs), developing a robust
safety mechanism, colloquially known as a "safeguard" or "guardrail", has
become imperative to ensure the ethical use of LLMs within prescribed
boundaries. This article provides a systematic literature review of the current
status of this critical mechanism, discussing its major challenges and how it
can be enhanced into a comprehensive mechanism that addresses ethical issues in
various contexts. First, the paper elucidates the current landscape of
safeguarding mechanisms that major LLM service providers and the open-source
community employ. This is followed by techniques to evaluate, analyze, and
enhance the (un)desirable properties that a guardrail may need to enforce or
mitigate, such as hallucination, fairness, and privacy. Building on these, we review
techniques to circumvent these controls (i.e., attacks), to defend against such attacks,
and to reinforce the guardrails. While the techniques mentioned above reflect
the current status and active research trends, we also discuss several
challenges that cannot easily be addressed by these methods, and we present our
vision for implementing a comprehensive guardrail through full consideration
of multi-disciplinary approaches, neural-symbolic methods, and the
systems development lifecycle.