Large Language Models (LLMs) have transformed software development, enabling
AI-powered applications known as LLM-based agents that promise to automate
tasks across diverse apps and workflows. Yet, the security implications of
deploying such agents in adversarial mobile environments remain poorly
understood. In this paper, we present the first systematic study of security
risks of mobile LLM agents. We design and evaluate a suite of adversarial case
studies, ranging from opportunistic manipulations such as pop-up advertisements
to advanced, end-to-end workflows involving malware installation and cross-app
data exfiltration. Our evaluation covers eight state-of-the-art mobile agents
across three architectures, with over 2,000 adversarial and paired benign
trials. The results reveal systemic vulnerabilities: low-barrier vectors such
as fraudulent ads succeed in over 80% of trials, while even workflows
such as malware installation, which require circumventing operating-system
warnings, are consistently completed by advanced multi-app agents. By
mapping these attacks to the MITRE ATT&CK for Mobile framework, we uncover
privilege-escalation and persistence pathways unique to LLM-driven automation.
Collectively, our findings provide the first end-to-end evidence that mobile
LLM agents are exploitable in realistic adversarial settings, where untrusted
third-party channels (e.g., ads, embedded webviews, cross-app notifications)
are an inherent part of the mobile ecosystem.