Abstract
The possibility of recovering the parameters (weights and biases) of a
neural network from knowledge of the function it implements on a subset of
the input space can be, depending on the situation, a curse or a blessing.
On the one hand, recovering the parameters enables stronger adversarial
attacks and could also disclose sensitive information about the dataset used
to construct the network. On the other hand, if the parameters of a network
can be recovered, the user is guaranteed that the features in the latent
spaces can be interpreted. It also provides foundations for obtaining formal
guarantees on the performance of the network. It is therefore important to
characterize the networks whose parameters can be identified and those whose
parameters cannot. In this article, we provide a set of conditions on a deep
fully-connected feedforward ReLU neural network under which the parameters
of the network are uniquely identified, modulo permutation and positive
rescaling, from the function it implements on a subset of the input space.
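The permutation and positive-rescaling symmetries mentioned above are standard properties of ReLU networks, not specific to this paper's construction. As a minimal NumPy sketch (all names and shapes here are illustrative assumptions), the following checks that permuting the hidden units of a two-layer ReLU network, and rescaling a unit's incoming weights by c > 0 while dividing its outgoing weights by c, leaves the implemented function unchanged, since relu(c*z) = c*relu(z) for c > 0:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# A toy two-layer fully-connected ReLU network:
# f(x) = W2 @ relu(W1 @ x + b1) + b2, with 3 inputs, 4 hidden units, 2 outputs.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def f(x, W1, b1, W2, b2):
    return W2 @ relu(W1 @ x + b1) + b2

# Positive rescaling: scale hidden unit i's incoming weights and bias by c > 0
# and its outgoing weights by 1/c; positive homogeneity of relu preserves f.
c, i = 2.5, 1
W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
W1s[i] *= c
b1s[i] *= c
W2s[:, i] /= c

# Permutation: reorder hidden units consistently in the rows of W1, b1
# and the columns of W2.
perm = [2, 0, 3, 1]
W1p, b1p, W2p = W1s[perm], b1s[perm], W2s[:, perm]

x = rng.normal(size=3)
assert np.allclose(f(x, W1, b1, W2, b2), f(x, W1p, b1p, W2p, b2))
```

Identifiability "modulo permutation and positive rescaling" means the parameters can be recovered only up to these two transformations, since any network related to the original by them implements exactly the same function.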