Labels Predicted by AI
Membership Inference, Membership Disclosure Risk, Adversarial Attack Methods
Please note that these labels were automatically added by AI. Therefore, they may not be entirely accurate.
For more details, please see the About the Literature Database page.
Abstract
We demonstrate how a target model's generalization gap leads directly to an effective deterministic black-box membership inference attack (MIA). This yields an upper bound on how secure a model can be against MIA, based on a simple metric. Moreover, the attack is shown to be optimal in expectation given access only to certain metrics regarding the network's training and performance that are likely to be obtainable. Experimentally, the attack is shown to be comparable in accuracy to state-of-the-art MIAs in many cases.
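The abstract does not spell out the construction, but one standard way a generalization gap turns into a deterministic black-box MIA is a correctness-based "gap" attack: guess "member" exactly when the target model classifies the queried example correctly. The sketch below is an illustrative assumption, not the paper's exact method; the names gap_attack and target_predict and the simulated 90%/70% train/test accuracies are hypothetical stand-ins used only to show how the gap translates into attack accuracy.

```python
# Minimal sketch (assumed construction, not the paper's exact attack) of a
# deterministic black-box membership inference attack driven by the
# generalization gap: guess "member" exactly when the target model classifies
# the queried example correctly. On a balanced member/non-member set the
# attack's accuracy is roughly 0.5 + (train_acc - test_acc) / 2, i.e. half the
# generalization gap above chance.

import random


def gap_attack(target_predict, x, y_true) -> int:
    """Return 1 ("member") iff the black-box target model is correct on (x, y_true)."""
    return int(target_predict(x) == y_true)


if __name__ == "__main__":
    random.seed(0)

    # Toy stand-in for the black-box target: correct with probability 0.9 on
    # its training points and 0.7 on held-out points (a 0.2 generalization
    # gap). The attack itself only ever sees the predicted label.
    def make_point(is_member: bool):
        return {"label": random.randint(0, 1), "member": is_member}

    def target_predict(x):
        p_correct = 0.9 if x["member"] else 0.7
        return x["label"] if random.random() < p_correct else 1 - x["label"]

    members = [make_point(True) for _ in range(5000)]
    non_members = [make_point(False) for _ in range(5000)]

    correct = sum(gap_attack(target_predict, x, x["label"]) == 1 for x in members)
    correct += sum(gap_attack(target_predict, x, x["label"]) == 0 for x in non_members)

    # Expected: ~0.60 = 0.5 + 0.2 / 2 under the toy 90%/70% assumption.
    print(f"attack accuracy ≈ {correct / 10000:.3f}")
```

Because an attack of this form needs nothing beyond label queries, the train/test accuracy gap, a metric most practitioners already track, by itself limits how resistant a model can be to membership inference, which is the kind of bound the abstract refers to.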