Abstract
Neural decompilers are machine learning models that reconstruct the source
code from an executable program. Critical to the lifecycle of any machine
learning model is an evaluation of its effectiveness. However, existing
techniques for evaluating neural decompilation models have substantial
weaknesses, especially when it comes to showing the correctness of the neural
decompiler's predictions. To address this, we introduce codealign, a novel
instruction-level code equivalence technique designed for neural decompilers.
We provide a formal definition of a relation between equivalent instructions,
which we term an equivalence alignment. We show how codealign generates
equivalence alignments, then evaluate codealign by comparing it with symbolic
execution. Finally, we show that the information codealign provides (which parts
of the functions are equivalent and how well the variable names match) is
substantially more detailed than existing state-of-the-art evaluation metrics,
which report unitless numbers measuring similarity.