Ensemble learning is a methodology that integrates multiple DNN learners to
improve prediction performance over that of any individual learner. Diversity
is greater when the prediction errors of the ensemble members are more
uniformly distributed, i.e., when members tend to err on different inputs.
Greater diversity is strongly correlated with higher ensemble accuracy.
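As a minimal illustration (not from the paper), assuming each member outputs hard class labels, a simple pairwise disagreement score captures this notion of diversity: it is higher when members err on different inputs.

```python
import numpy as np

def pairwise_disagreement(preds: np.ndarray) -> float:
    """Average fraction of samples on which each pair of members disagrees.

    preds: array of shape (n_members, n_samples) holding predicted class labels.
    Returns a value in [0, 1]; higher means a more diverse ensemble.
    """
    n_members = preds.shape[0]
    pairs = [(i, j) for i in range(n_members) for j in range(i + 1, n_members)]
    return float(np.mean([np.mean(preds[i] != preds[j]) for i, j in pairs]))

# Example: three members disagreeing on a few of five samples.
preds = np.array([[0, 1, 1, 2, 0],
                  [0, 1, 2, 2, 0],
                  [1, 1, 1, 2, 0]])
print(pairwise_disagreement(preds))  # 4 disagreements / 15 comparisons ~ 0.267
```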
Another attractive property of diversity-optimized ensemble learning is its
robustness against deception: an adversarial perturbation attack can mislead
one DNN model into misclassifying, but it may not fool the other ensemble
members consistently. In this paper we first give an overview of the concept of
ensemble diversity and examine the three types of ensemble diversity in the
context of DNN classifiers. We then describe a set of ensemble diversity
measures and a suite of algorithms for creating diversity-optimized ensembles
and for performing ensemble consensus (voted or learned), which generates
high-accuracy ensemble output by strategically combining the outputs of
individual members.
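For concreteness, below is a hedged sketch of the simplest voted-consensus scheme, plurality voting over member label outputs; a learned consensus would instead fit a combiner model on the members' outputs. The array shapes and function name here are illustrative assumptions, not an API defined by the paper.

```python
import numpy as np

def plurality_vote(preds: np.ndarray) -> np.ndarray:
    """Voted consensus: for each sample, return the class label predicted by
    the largest number of ensemble members.

    preds: array of shape (n_members, n_samples) of integer class labels.
    """
    n_classes = int(preds.max()) + 1
    # Count votes per class for each sample column, then pick the top class.
    counts = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return counts.argmax(axis=0)

# Example: the third member is outvoted on the first sample.
preds = np.array([[0, 1], [0, 1], [2, 1]])
print(plurality_vote(preds))  # [0 1]
```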
The paper concludes with a discussion of open issues in quantifying ensemble
diversity for robust deep learning.