Abstract
With few exceptions, neural networks have relied on backpropagation and
gradient descent as the inference engine for learning the model parameters,
because closed-form Bayesian inference for neural networks has been
considered intractable. In this paper, we show how to leverage the
capabilities of tractable approximate Gaussian inference (TAGI) to infer
hidden states, rather than only using it to infer the network's parameters.
One novel aspect this enables is the inference of hidden states through the
imposition of constraints designed to achieve specific objectives, as
illustrated through three examples: (1) the generation of adversarial-attack
examples, (2) the use of a neural network as a black-box optimization method,
and (3) the application of inference to continuous-action reinforcement
learning. These applications showcase how tasks that were previously reserved
for gradient-based optimization approaches can now be addressed with
analytically tractable inference.