Abstract
The integration of Large Language Models (LLMs) into Electronic Design Automation (EDA) and hardware security is rapidly reshaping the semiconductor industry. While LLMs offer unprecedented capabilities in generating Register Transfer Level (RTL) code, automating testbenches, and bridging the semantic gap between high-level specifications and silicon, they simultaneously introduce severe vulnerabilities. This comprehensive review provides an in-depth analysis of the state of the art in LLM-driven hardware design, organized around key advancements in EDA synthesis, hardware trust, design for security, and education. We systematically examine the methodologies behind recent breakthroughs -- from reasoning-driven synthesis and multi-agent vulnerability extraction to data contamination and adversarial machine learning (ML) evasion. We also discuss critical countermeasures, such as dynamic benchmarking to combat data memorization and aggressive red-teaming for robust security assessment. Finally, we synthesize cross-cutting lessons learned to guide future research toward secure, trustworthy, and autonomous design ecosystems.
External Datasets
VerilogEval
TrojanInS
VeriContaminated
References
Proc. ISVLSI
LLMs and the future of chip design: Unveiling security risks and building trust
Z. Wang, L. Alrahis, L. Mankali, J. Knechtel, O. Sinanoglu
Published: 2024
Proc. SOCC
Large language models (LLMs) for electronic design automation (EDA): Special session paper
K. Xu, D. Schwachhofer, J. Blocklove, I. Polian, P. Domanski, D. Pfluger, S. Garg, R. Karri, O. Sinanoglu, J. Knechtel, Z. Zhao, U. Schlichtmann, B. Li
Published: 2024
ACM TODAES
VeriGen: A large language model for Verilog code generation
S. Thakur, B. Ahmad, H. Pearce, B. Tan, B. Dolan-Gavitt, R. Karri, S. Garg
Benchmarking large language models under data contamination: A survey from static to dynamic evaluation
S. Chen, Y. Chen, Z. Li, Y. Jiang, Z. Wan, Y. He, D. Ran, T. Gu, H. Li, T. Xie, B. Ray
Published: 2025
Information
Prompt injection attacks in large language models and AI agent systems: A comprehensive review of vulnerabilities, attack vectors, and defense mechanisms
S. Gulyamov, S. Gulyamov, A. Rodionov, R. Khursanov, K. Mekhmonov, D. Babaev, A. Rakhimjonov
Published: 2026
Proc. VTS
GLLaMoR: Graph-based logic locking by large language models for enhanced robustness
A. Saha, P. B. Roy, J. Knechtel, R. Karri, O. Sinanoglu, L. Alrahis
Published: 2025
Proc. DAC
LockForge: Automating paper-to-code for logic locking with multi-agent reasoning LLMs
A. Saha, Z. Wang, P. B. Roy, J. Knechtel, O. Sinanoglu, R. Karri