This page lists the security targets affected by the negative impact “The decline in the usability of AI” in the external influence aspect of the AI Security Map, together with the attacks and factors that cause it and the corresponding defensive methods and countermeasures.
Security target
- Consumer
Attack or cause
- Integrity violation
- Availability violation
- Degradation of accuracy
- Degradation of controllability
- Degradation of output fairness
Defensive method or countermeasure
- Defensive method for integrity
- Defensive method for availability
- RAG
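
Among these countermeasures, RAG (Retrieval-Augmented Generation) grounds a model's answers in documents retrieved at inference time, which can offset degraded accuracy caused by stale or limited parametric knowledge. The following is a minimal sketch rather than an implementation from the AI Security Map: it assumes a toy in-memory corpus, a simple bag-of-words cosine retriever, and a placeholder generate() function standing in for the actual language model call.

```python
# Minimal RAG sketch: retrieve relevant documents, then condition generation on them.
# The corpus, retriever, and generate() below are illustrative placeholders.
from collections import Counter
import math

# Hypothetical knowledge base; in practice this would be a document store or vector index.
CORPUS = [
    "Model watermarking helps trace the provenance of generated content.",
    "Data poisoning degrades model accuracy by corrupting training data.",
    "Retrieval-augmented generation grounds answers in retrieved documents.",
]

def _bow(text: str) -> Counter:
    """Bag-of-words term counts used by the toy retriever."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus documents most similar to the query."""
    q = _bow(query)
    ranked = sorted(CORPUS, key=lambda doc: _cosine(q, _bow(doc)), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Placeholder for the language model call; simply echoes the prompt here."""
    return f"[model output conditioned on]\n{prompt}"

def rag_answer(query: str) -> str:
    """Augment the prompt with retrieved context before generation."""
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)

if __name__ == "__main__":
    print(rag_answer("How does retrieval-augmented generation improve accuracy?"))
```

In a real deployment, the bag-of-words retriever would typically be replaced by a dense or hybrid retriever over an external document store, and generate() by a call to the serving language model; the overall retrieve-then-generate flow is what the listed references elaborate on.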
References
RAG
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks, 2020
- REALM: Retrieval-Augmented Language Model Pre-Training, 2020
- In-Context Retrieval-Augmented Language Models, 2023
- Active Retrieval Augmented Generation, 2023
- Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection, 2023
- Query Rewriting for Retrieval-Augmented Large Language Models, 2023
- Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering, 2023
- Generate rather than Retrieve: Large Language Models are Strong Context Generators, 2023
- Enhancing Retrieval-Augmented Large Language Models with Iterative Retrieval-Generation Synergy, 2023
- From Local to Global: A Graph RAG Approach to Query-Focused Summarization, 2024