For many IoT domains, Machine Learning, and more particularly Deep Learning,
brings very efficient solutions to handle complex data and perform challenging,
often critical, tasks. However, the deployment of models on a large variety
of devices faces several obstacles related to trust and security. The latter is
particularly critical since the demonstration of severe flaws impacting the
integrity, confidentiality and availability of neural network models. Moreover,
the attack surface of such embedded systems cannot be reduced to purely algorithmic flaws
but must encompass the physical threats related to the implementation of these
models within hardware platforms (e.g., 32-bit microcontrollers). Among
physical attacks, Fault Injection Analysis (FIA) is known to be very powerful,
with a large spectrum of attack vectors. Most importantly, highly focused FIA
techniques such as laser beam injection enable very accurate evaluation of the
vulnerabilities as well as the robustness of embedded systems. Here, we discuss
how laser injection with state-of-the-art equipment, combined with
theoretical evidence from Adversarial Machine Learning, highlights worrying
threats against the integrity of deep learning inference, and we argue that
joint efforts from the theoretical AI and Physical Security communities are
urgently needed.