Classification and Out-of-Distribution (OoD) detection in the few-shot setting
remain challenging because outlier samples are rare and available only in
limited numbers, and because of adversarial attacks. Accomplishing these aims
is important for critical systems in safety, security, and defence. In
parallel, OoD detection is challenging because deep neural network classifiers
assign high confidence to OoD samples that lie far from the training data. To
address such
limitations, we propose the Few-shot ROBust (FROB) model for classification and
few-shot OoD detection. We devise FROB to improve robustness and to provide
reliable confidence predictions for few-shot OoD detection. We generate the support
boundary of the normal class distribution and combine it with few-shot Outlier
Exposure (OE). We propose a self-supervised, few-shot confidence-boundary
methodology based on generative and discriminative models. The contribution of
FROB is the combination of the boundary, generated in a self-supervised manner,
with the imposition of low confidence at this learned boundary. FROB implicitly
generates strong adversarial samples on the boundary and forces the classifier
to assign low confidence to OoD samples, including samples on our boundary.
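To make the low-confidence objective concrete, here is a minimal sketch of an
OE-style training loss consistent with the description above; the notation is
ours, not the paper's, and the cross-entropy-to-uniform penalty is an
assumption borrowed from the standard Outlier Exposure formulation. Here $f$
is the classifier, $\mathcal{D}_{\mathrm{in}}$ the normal training data,
$\mathcal{B}$ the generated boundary samples, $\mathcal{D}_{\mathrm{OE}}$ the
few-shot outliers, $\mathcal{U}$ the uniform distribution over classes,
$\ell_{\mathrm{CE}}$ the cross-entropy loss, and $\lambda$ a penalty weight:
\[
\mathcal{L} \;=\;
\mathbb{E}_{(x,\,y)\sim\mathcal{D}_{\mathrm{in}}}\!\big[\ell_{\mathrm{CE}}\big(f(x),\,y\big)\big]
\;+\;
\lambda\,\mathbb{E}_{x'\sim\mathcal{B}\,\cup\,\mathcal{D}_{\mathrm{OE}}}\!\big[\ell_{\mathrm{CE}}\big(f(x'),\,\mathcal{U}\big)\big].
\]
The first term trains the classifier on normal data; the second pushes its
predictions toward the uniform distribution on boundary and outlier samples,
which drives confidence down away from the normal class. When no outlier
few-shots are available, $\mathcal{D}_{\mathrm{OE}}$ is empty and the generated
boundary $\mathcal{B}$ alone supplies the low-confidence term, consistent with
the zero-shot setting discussed below.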
FROB generalizes to unseen OoD data and applies to unknown, in-the-wild test
sets that are not correlated with the training datasets. To improve robustness,
FROB redesigns OE to work even in the zero-shot setting. By including our
boundary, FROB reduces the threshold linked to the model's few-shot robustness
and keeps OoD detection performance approximately independent of the number of
few-shots. Our few-shot robustness analysis, which evaluates FROB on several
datasets and on One-Class Classification (OCC) data, shows that FROB achieves
competitive performance and outperforms benchmarks in robustness to the
population and variability of the outlier few-shot samples.