Abstract
In response to the growing adoption of Machine Learning (ML) techniques
across industries, malicious groups have begun to target these techniques in
their attacks. However, because ML models are continuously updated with new
data, monitoring their integrity is difficult. One possible solution is to
use hashing techniques; however, this would require re-hashing the model each
time it is trained on newer data, which is computationally expensive and
infeasible for ML models trained on continuous data. Therefore, in this
paper, we propose a model integrity-checking mechanism that uses model
watermarking techniques to monitor the integrity of ML models. We then
demonstrate that the proposed technique can monitor the integrity of ML
models at low computational cost, even when the model is further trained on
newer data. Furthermore, the integrity-checking mechanism can be applied to
Deep Learning models that operate on complex data distributions, such as
Cyber-Physical System applications.
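
To make the contrast in the abstract concrete, the minimal Python sketch
below (not taken from the paper; all names such as hash_model,
watermark_intact, and the trigger-set variables are hypothetical) illustrates
why hash-based integrity checking breaks under continuous training while a
watermark check can persist: any parameter update invalidates the stored
hash, whereas a model fine-tuned on new data can still classify a secret
trigger set correctly. A Keras-style model.predict interface is assumed.

    # Illustrative sketch only, assuming a Keras-style model API and
    # NumPy arrays for the hypothetical secret trigger set.
    import hashlib
    import numpy as np

    def hash_model(weights: list[np.ndarray]) -> str:
        """Hash every parameter tensor. Any legitimate retraining changes
        the weights, so this digest must be recomputed after each update."""
        h = hashlib.sha256()
        for w in weights:
            h.update(w.tobytes())
        return h.hexdigest()

    def watermark_intact(model, trigger_inputs: np.ndarray,
                         trigger_labels: np.ndarray,
                         threshold: float = 0.9) -> bool:
        """Integrity holds if the model still classifies the secret trigger
        set correctly above a threshold, even after further training."""
        preds = model.predict(trigger_inputs).argmax(axis=1)
        return (preds == trigger_labels).mean() >= threshold

In this sketch, hash_model must be rerun (and its output re-stored) after
every training round, while watermark_intact only requires a forward pass
over a small trigger set, which is one way to read the low-computational-cost
claim made in the abstract.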