Large language models (LLMs) have shown promise in software engineering, yet
their effectiveness for binary analysis remains unexplored. We present the
first comprehensive evaluation of commercial LLMs for assembly code
deobfuscation. Testing seven state-of-the-art models against four obfuscation
scenarios (bogus control flow, instruction substitution, control flow
flattening, and their combination), we found striking performance
variations, ranging from autonomous deobfuscation to complete failure. We
propose a theoretical framework based on four dimensions, Reasoning Depth,
Pattern Recognition, Noise Filtering, and Context Integration, to explain
these variations. Our analysis identifies five error patterns: predicate
misinterpretation, structural mapping errors, control flow misinterpretation,
arithmetic transformation errors, and constant propagation errors, revealing
fundamental limitations in LLM code processing. We establish a three-tier
resistance model: bogus control flow (low resistance), control flow flattening
(moderate resistance), and instruction substitution/combined techniques (high
resistance). Universal failure against combined techniques demonstrates that
sophisticated obfuscation remains effective against advanced LLMs. Our findings
suggest a human-AI collaboration paradigm where LLMs reduce expertise barriers
for certain reverse engineering tasks while requiring human guidance for
complex deobfuscation. This work provides a foundation for evaluating emerging
capabilities and developing resistant obfuscation techniques.