With constant improvements in network architectures and training
methodologies, Neural Networks (NNs) are increasingly being deployed in
real-world Machine Learning systems. However, despite their impressive
performance on "known" inputs, these NNs can fail unpredictably on "unseen"
inputs, especially when such inputs deviate from the training dataset
distribution or contain certain types of input noise. This indicates the low
noise tolerance of NNs, which is a major enabler of the recent rise in
adversarial attacks. This is a serious concern, particularly for
safety-critical applications, where inaccurate results can lead to dire
consequences. We propose a novel methodology that leverages model checking for
the Formal Analysis of Neural Network (FANNet) under different input noise
ranges. Our methodology allows us to rigorously analyze the noise tolerance of
NNs, their input node sensitivity, and the effects of training bias on their
performance, e.g., in terms of classification accuracy. For evaluation, we use
a fully-connected feed-forward NN architecture trained for Leukemia
classification. Our experimental results show a $\pm 11\%$ noise tolerance for
the given trained network, identify the most sensitive input nodes, and confirm
the bias of the available training dataset.
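
To make the noise-tolerance query concrete, below is a minimal illustrative
sketch, not the FANNet implementation itself, of how an input noise bound and
a classification-flip check can be encoded for an off-the-shelf SMT solver
(Z3). The 2-2-1 topology, the weights, and all helper names here are
hypothetical and chosen only for illustration.

\begin{verbatim}
from z3 import And, If, Real, Solver, unsat

# Hypothetical weights for a tiny 2-2-1 feed-forward network with ReLU.
W1 = [[0.8, -0.4], [0.3, 0.9]]
b1 = [0.1, -0.2]
W2 = [1.2, -0.7]
b2 = 0.05

def relu(z):
    # Symbolic ReLU, encoded as an if-then-else term.
    return If(z > 0, z, 0)

def forward(x):
    # Symbolic forward pass over Z3 terms.
    h = [relu(sum(W1[i][j] * x[j] for j in range(2)) + b1[i])
         for i in range(2)]
    return sum(W2[i] * h[i] for i in range(2)) + b2

def forward_concrete(x):
    # Concrete forward pass on Python floats (for the nominal input).
    h = [max(0.0, sum(W1[i][j] * x[j] for j in range(2)) + b1[i])
         for i in range(2)]
    return sum(W2[i] * h[i] for i in range(2)) + b2

def tolerates_noise(x_nominal, eps, threshold=0.0):
    """True iff no additive noise within +/-eps flips the classification."""
    x = [Real(f"x{i}") for i in range(2)]
    s = Solver()
    # Constrain each input to lie within the noise range around the nominal.
    for i in range(2):
        s.add(And(x[i] >= x_nominal[i] - eps, x[i] <= x_nominal[i] + eps))
    # Ask the solver for a counterexample: a noisy input classified on
    # the opposite side of the decision threshold.
    if forward_concrete(x_nominal) >= threshold:
        s.add(forward(x) < threshold)
    else:
        s.add(forward(x) >= threshold)
    return s.check() == unsat  # unsat => the noise range is tolerated

print(tolerates_noise([1.0, 0.5], eps=0.11))
\end{verbatim}

In a FANNet-style analysis, such queries would be iterated over a range of
noise bounds to determine the largest tolerated bound and, by perturbing one
input node at a time, to rank input node sensitivity.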