Abstract
Large Language Models (LLMs) have emerged as the new recommendation engines,
surpassing traditional methods in both capability and scope, particularly in
code generation. In this paper, we reveal a novel provider bias in LLMs:
without explicit directives, these models show systematic preferences for
services from specific providers in their recommendations (e.g., favoring
Google Cloud over Microsoft Azure). To investigate this bias systematically, we
develop an automated pipeline for dataset construction, covering 6 distinct
coding task categories and 30 real-world application scenarios.
Leveraging this dataset, we conduct the first comprehensive empirical study of
provider bias in LLM code generation across seven state-of-the-art LLMs,
consuming approximately 500 million tokens (over $5,000 in computational
cost). Our findings reveal that LLMs exhibit significant
provider preferences, predominantly favoring services from Google and Amazon,
and can autonomously modify input code to incorporate their preferred providers
without any user request. Such bias has far-reaching implications for market
dynamics and societal equilibrium, potentially contributing to digital
monopolies. It may also mislead users and violate their expectation of neutral
recommendations. We call on the academic community to recognize this
emerging issue and develop effective evaluation and mitigation methods to
uphold AI security and fairness.
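To make the measurement concrete, below is a minimal sketch of how provider
preferences in generated code might be tallied. It is an illustrative
simplification under assumed names, not the paper's actual pipeline: the
keyword map, the sample snippets, and the tally_providers helper are all
hypothetical.

```python
import re
from collections import Counter

# Hypothetical keyword map; the paper's real taxonomy of providers and
# services is not reproduced here.
PROVIDER_PATTERNS = {
    "Google Cloud": re.compile(r"google\.cloud|gcloud|bigquery", re.IGNORECASE),
    "Amazon (AWS)": re.compile(r"boto3|aws|dynamodb", re.IGNORECASE),
    "Microsoft Azure": re.compile(r"azure|cosmosdb", re.IGNORECASE),
}

def tally_providers(code_samples):
    """Count how many generated snippets reference each provider."""
    counts = Counter()
    for code in code_samples:
        for provider, pattern in PROVIDER_PATTERNS.items():
            if pattern.search(code):
                counts[provider] += 1
    return counts

# Usage: feed in LLM outputs for one scenario and inspect the skew.
samples = [
    "import boto3\ns3 = boto3.client('s3')",
    "from google.cloud import storage\nclient = storage.Client()",
]
print(tally_providers(samples))  # e.g. Counter({'Amazon (AWS)': 1, 'Google Cloud': 1})
```

A real pipeline would need to resolve services rather than raw keywords (for
example, distinguishing a genuine dependency on an SDK from an incidental
string match), but even this crude counting illustrates how preference skew
across providers can be quantified from generated code.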