The Wonders of RAG: Streamlining Knowledge with Advanced Techniques
A Systematic Literature Review Report
Author(s):
Wafaa Bazzi* and Mervat Gaith
Abstract
The Retrieval-Augmented Generation (RAG) framework enhances Large Language Model (LLM) performance by incorporating external knowledge through information retrieval, addressing inherent limitations of standard LLMs. RAG fine-tunes retrieval based on relevance to improve Open-Domain Question Answering, and it dynamically updates external data during model training, notably within Dense Passage Retrieval (DPR) models. This approach enables up-to-date dialogue generation, personalizes responses with external sources, and employs metrics to evaluate both the retrieved sources and the generated answers. While RAG offers substantial benefits in reducing hallucinations and improving answer quality, challenges remain: the quality of the external data directly influences response accuracy, and hallucinations can persist when the retrieved input is insufficient or the evaluation metrics are inadequate. Future research should prioritize enhancing data integration, refining query prompts, developing real-time correction mechanisms, and adapting RAG to specific domains to fully realize its potential.
Purpose
The aim of this report is to explore and analyze the advanced framework known as Retrieval-Augmented Generation (RAG). This framework enhances the capabilities of Large Language Models (LLMs) by incorporating external knowledge to refine answers and produce higher-quality responses; a minimal sketch of this retrieve-then-generate loop follows.
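To make the framework under review concrete, the sketch below illustrates the basic retrieve-then-generate loop that RAG implements: embed the query, rank passages from an external corpus by similarity, and condition the generator on the top results. This is a minimal illustration under stated assumptions: the toy KNOWLEDGE_BASE, the term-frequency embed() function, and the generate() stub are hypothetical stand-ins, not the dense DPR encoders or LLM generators discussed in the reviewed literature.

```python
import math
from collections import Counter

# Illustrative stand-in for an external knowledge source; a real RAG
# system would retrieve from a large indexed document collection.
KNOWLEDGE_BASE = [
    "RAG combines a retriever with a generator to ground answers in text.",
    "Dense Passage Retrieval encodes questions and passages into vectors.",
    "External knowledge can reduce hallucinations in language models.",
]

def embed(text: str) -> Counter:
    """Toy embedding: term-frequency counts (DPR would use a dense encoder)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank all passages by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(KNOWLEDGE_BASE, key=lambda p: cosine(q, embed(p)),
                    reverse=True)
    return ranked[:k]

def generate(query: str, passages: list[str]) -> str:
    """Placeholder for the generator: a real system would call an LLM
    conditioned on the retrieved passages; here we only build the prompt."""
    context = "\n".join(passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer: ..."

if __name__ == "__main__":
    question = "How does RAG reduce hallucinations?"
    print(generate(question, retrieve(question)))
```

In the systems surveyed in this report, embed() would be replaced by a dense bi-encoder (as in DPR) over a vector index, and generate() by an actual LLM call, but the control flow, retrieve relevant context first and then generate a grounded answer, is the same.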