arXiv
"It's a Fair Game", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents
Abstract
The widespread use of Large Language Model (LLM)-based conversational agents
(CAs), especially in high-stakes domains, raises many privacy concerns.
Building ethical LLM-based CAs that respect user privacy requires an in-depth
understanding of the privacy risks that concern users the most. However,
existing research, primarily model-centered, does not provide insight into
users' perspectives. To bridge this gap, we analyzed sensitive disclosures in
real-world ChatGPT conversations and conducted semi-structured interviews with
19 LLM-based CA users. We found that users are constantly faced with trade-offs
between privacy, utility, and convenience when using LLM-based CAs. However,
users' erroneous mental models and dark patterns in system design limited
their awareness and comprehension of the privacy risks. Additionally, the
human-like interactions encouraged more sensitive disclosures, which
complicated users' ability to navigate the trade-offs. We discuss practical
design guidelines and the need for paradigm shifts to protect the privacy of
LLM-based CA users.