This section shows, for the negative impact "unfairly biased and discriminatory output" in the external influence aspect mapped on the AI Security Map, the affected security targets, the attacks and factors that cause it, and the corresponding defense methods and countermeasures.
Security targets
- Consumers
Attacks / factors
- Compromise of integrity
- Compromise of controllability
- Compromise of output fairness
Defense methods / countermeasures
- Defense methods for integrity
  - Alignment (see the DPO sketch under References)
- Countermeasures for output fairness
  - AI-based detection of output bias (see the probing sketch under References)
References
Alignment
- Training language models to follow instructions with human feedback, 2022
- Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback, 2022
- Constitutional AI: Harmlessness from AI Feedback, 2022
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model, 2023
- A General Theoretical Paradigm to Understand Learning from Human Preferences, 2023
- RRHF: Rank Responses to Align Language Models with Human Feedback without tears, 2023
- Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations, 2023
- Self-Rewarding Language Models, 2024
- KTO: Model Alignment as Prospect Theoretic Optimization, 2024
- SimPO: Simple Preference Optimization with a Reference-Free Reward, 2024
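As a concrete illustration of one alignment method cited above, the following is a minimal sketch of the Direct Preference Optimization (DPO) loss (Rafailov et al., 2023). Only the loss formula comes from the paper; the function name, PyTorch framing, and toy inputs are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Sketch of the DPO loss. Each argument is a batch of per-sequence
    log-probabilities (token log-probs summed over the completion) under
    either the trainable policy or the frozen reference model."""
    # Implicit reward of each completion: scaled log-ratio between
    # the policy and the reference model.
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    # Bradley-Terry preference objective: maximize the probability that
    # the preferred completion outscores the dispreferred one.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

if __name__ == "__main__":
    # Toy call with random stand-ins for real per-sequence log-probs.
    args = [torch.randn(4) for _ in range(4)]
    print(dpo_loss(*args))
```

Because the preference signal is expressed directly in the loss, this avoids training a separate reward model; reference-free variants such as SimPO (cited above) drop the reference log-probabilities as well.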
AI-based detection of output bias
- Measuring Bias in Contextualized Word Representations, 2019
- Few-shot Instruction Prompts for Pretrained Language Models to Detect Social Biases, 2021
- Toxicity Detection with Generative Prompt-based Inference, 2022
- Gender bias and stereotypes in Large Language Models, 2023
- Measuring Implicit Bias in Explicitly Unbiased Large Language Models, 2024
- Efficient Toxic Content Detection by Bootstrapping and Distilling Large Language Models, 2024
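To make the detection side concrete, here is a minimal sketch of template-based bias probing in the spirit of the masked-LM measurements cited above (e.g., Measuring Bias in Contextualized Word Representations, 2019). The templates, model choice, and pronoun pair are illustrative assumptions, not a fixed benchmark.

```python
from transformers import pipeline

# Fill-mask probe: compare the model's probabilities for gendered
# pronouns in otherwise identical occupation templates. A large,
# occupation-dependent skew in P(he)/P(she) is a simple bias signal.
fill = pipeline("fill-mask", model="bert-base-uncased")

TEMPLATES = [
    "The doctor said [MASK] would arrive soon.",
    "The nurse said [MASK] would arrive soon.",
]

for template in TEMPLATES:
    # Restrict candidates to the pronoun pair and collect their scores.
    scores = {r["token_str"]: r["score"]
              for r in fill(template, targets=["he", "she"])}
    print(f"{template}  P(he)/P(she) = {scores['he'] / scores['she']:.2f}")
```

In practice such probes are aggregated over many templates and attribute pairs; the prompt-based detectors cited above extend the same idea by asking an LLM to judge bias or toxicity in generated text directly.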