ARMOR: Aligning Secure and Safe Large Language Models via Meticulous Reasoning
Authors: Zhengyue Zhao, Yingzi Ma, Somesh Jha, Marco Pavone, Patrick McDaniel, Chaowei Xiao | Published: 2025-07-14 | Updated: 2025-10-20
Tags: Large Language Model, Safety Analysis, Evaluation Criteria