Labels Predicted by AI
バックドア攻撃 フィッシング検出 プロンプトインジェクション
Please note that these labels were automatically added by AI. Therefore, they may not be entirely accurate.
For more details, please see the About the Literature Database page.
Abstract
When large language models are trained on private data, it can be a significant privacy risk for them to memorize and regurgitate sensitive information. In this work, we propose a new practical data extraction attack that we call “neural phishing”. This attack enables an adversary to target and extract sensitive or personally identifiable information (PII), e.g., credit card numbers, from a model trained on user data with upwards of 10 success rates, at times, as high as 50 adversary can insert as few as 10s of benign-appearing sentences into the training dataset using only vague priors on the structure of the user data.