Abstract
Thanks to significant performance improvements in recent years, neural networks are now used in an ever-increasing number of applications. However, they have the drawback that their decisions are not readily interpretable or traceable by a human. This creates several problems, for instance regarding safety and IT security in high-risk applications, where assuring these properties is crucial. One of the most striking IT security problems aggravated by the opacity of neural networks is the possibility of so-called poisoning attacks during the training phase, in which an attacker inserts specially crafted data to manipulate the resulting model. We propose an approach to this problem that makes it possible to provably verify the integrity of the training procedure using standard cryptographic mechanisms.
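The abstract does not specify which cryptographic mechanisms the approach uses. As a rough, hypothetical illustration of the general idea (not the authors' protocol), the following Python sketch builds a hash chain over training batches and the resulting model checkpoints: each entry commits to the previous entry, so a verifier who replays the chain can detect any after-the-fact tampering with the recorded training data or checkpoints. The class and method names here are invented for illustration.

```python
import hashlib
import json


def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of `data` as a hex string."""
    return hashlib.sha256(data).hexdigest()


class TrainingLog:
    """Hypothetical hash chain over (batch, checkpoint) pairs.

    Each entry commits to the previous chain head, the training batch,
    and the model checkpoint produced by that step, so tampering with
    any recorded step invalidates all subsequent chain heads.
    """

    def __init__(self) -> None:
        self.entries = []
        self.head = sha256_hex(b"genesis")  # fixed chain anchor

    def record_step(self, batch_bytes: bytes, checkpoint_bytes: bytes) -> str:
        """Append one training step to the log and return the new head."""
        entry = {
            "prev": self.head,
            "batch": sha256_hex(batch_bytes),
            "checkpoint": sha256_hex(checkpoint_bytes),
        }
        # Canonical serialization so verification is deterministic.
        self.head = sha256_hex(json.dumps(entry, sort_keys=True).encode())
        self.entries.append(entry)
        return self.head

    def verify(self) -> bool:
        """Recompute the chain from the anchor and compare to the stored head."""
        head = sha256_hex(b"genesis")
        for entry in self.entries:
            if entry["prev"] != head:
                return False
            head = sha256_hex(json.dumps(entry, sort_keys=True).encode())
        return head == self.head


# Example: log two training steps, then verify the chain end to end.
log = TrainingLog()
log.record_step(b"batch-0 data", b"weights after step 0")
log.record_step(b"batch-1 data", b"weights after step 1")
assert log.verify()
```

A real scheme along these lines would additionally need to bind the log to the training code and hyperparameters and to convince the verifier that each checkpoint really resulted from applying that code to the committed batch; the sketch above only covers the integrity of the recorded log itself.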