Abstract
Standard deep learning architectures used for classification generate label
predictions with a projection head and softmax activation function. Although
successful, these methods fail to leverage the relational information between
samples in a batch when generating label predictions. In recent works,
graph-based learning techniques, namely Laplace learning, have been
heuristically combined with neural networks for both supervised and
semi-supervised learning (SSL) tasks. However, prior works either approximate
the gradient of the loss function through the graph learning algorithm or
decouple the two processes entirely; true end-to-end integration with the
neural network is not achieved. In this work, we derive backpropagation
equations, via the adjoint
method, for inclusion of a general family of graph learning layers into a
neural network. This allows us to precisely integrate graph Laplacian-based
label propagation into a neural network layer, replacing a projection head and
softmax activation function for classification tasks. Using this new framework,
our experimental results demonstrate smooth label transitions across data,
improved robustness to adversarial attacks, improved generalization, and
improved training dynamics compared to the standard softmax-based approach.
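To make the graph-based label prediction concrete, below is a minimal NumPy sketch of harmonic-function label propagation (Laplace learning) on a single batch, i.e. the kind of graph learning step the abstract describes integrating as a network layer. The function name, the Gaussian similarity kernel, and the `sigma` parameter are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def laplace_learning(features, labels_onehot, labeled_mask, sigma=1.0):
    """Harmonic-function label propagation over one batch.

    features:      (n, d) array of sample features.
    labels_onehot: (n, c) array; rows for unlabeled samples are ignored.
    labeled_mask:  boolean (n,) array marking labeled samples.
    Returns an (n, c) array of label scores whose rows sum to 1.
    """
    # Build a Gaussian similarity graph over the batch (illustrative choice).
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)

    # Graph Laplacian L = D - W.
    L = np.diag(W.sum(axis=1)) - W

    lab = labeled_mask
    unl = ~labeled_mask

    # Harmonic solution: L_uu F_u = W_ul Y_l (Zhu-style label propagation).
    F_u = np.linalg.solve(L[np.ix_(unl, unl)],
                          W[np.ix_(unl, lab)] @ labels_onehot[lab])

    F = labels_onehot.astype(float).copy()
    F[unl] = F_u
    return F
```

Because the constant function is harmonic, each unlabeled row of `F` sums to one, so the output behaves like a class distribution without a softmax; in the paper's setting this propagation step would replace the projection head, with gradients obtained through the adjoint method rather than by differentiating the solve naively.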