# RAGAS

## RAGAS Usage
Use RAGAS to evaluate your RAG (Retrieval-Augmented Generation) pipelines using LLM-based metrics.

RAGAS provides a comprehensive suite of evaluation metrics designed specifically for RAG systems. These metrics help you assess each stage of your pipeline, from the quality of the retrieved context to the faithfulness and correctness of the generated response.
## Available Metrics
| Metric | Description | Required Parameters |
|---|---|---|
| `RagasAnswerCorrectness` | Evaluates whether the answer is factually correct | `expected_response`, `response`, `query` |
| `RagasAnswerRelevancy` | Measures how relevant the answer is to the query | `response`, `context`, `query` |
| `RagasCoherence` | Evaluates response coherence and readability | `response` |
| `RagasConciseness` | Measures how concise and focused the response is | `response` |
| `RagasContextPrecision` | Evaluates the precision of the retrieved context | `expected_response`, `context`, `query` |
| `RagasContextRecall` | Measures the completeness of the retrieved context | `expected_response`, `context`, `query` |
| `RagasContextRelevancy` | Assesses the relevance of the retrieved context to the query | `context`, `query` |
| `RagasFaithfulness` | Measures the response's faithfulness to the provided context | `response`, `context`, `query` |
| `RagasHarmfulness` | Detects harmful content in responses | `response` |
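
For example, a faithfulness check needs the `response`, `context`, and `query` parameters from the table above. The sketch below shows how such a metric might be invoked; the import path, constructor signature, and `score` method are assumptions for illustration, not a confirmed API:

```python
# Minimal sketch, not a confirmed API: the import path, constructor
# signature, and `score` method are assumptions based on the tables above.
from ragas_metrics import RagasFaithfulness  # assumed module path

# Every metric takes the configuration described below, plus the
# required parameters listed in the table above.
metric = RagasFaithfulness(config={"model": "gpt-4o-mini"})

result = metric.score(
    query="What is the capital of France?",
    context=["France is a country in Western Europe. Its capital is Paris."],
    response="The capital of France is Paris.",
)
print(result)  # e.g. a faithfulness score between 0 and 1
```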
## Configuration
All RAGAS metrics require the following configuration parameter:
| Parameter | Description | Required |
|---|---|---|
| `model` | The LLM model to use for evaluation | Yes |
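
Because the `model` parameter is shared by every metric, a common pattern is to define it once and reuse it when constructing a set of metrics. As before, the import path and constructor signature below are assumptions, not a confirmed API:

```python
# Minimal sketch, not a confirmed API: import path and constructor
# signature are assumptions based on the configuration table above.
from ragas_metrics import (  # assumed module path
    RagasAnswerRelevancy,
    RagasContextRecall,
    RagasFaithfulness,
)

EVAL_MODEL = "gpt-4o-mini"  # any evaluator LLM your setup supports

# The same required `model` parameter configures every RAGAS metric.
metrics = [
    RagasFaithfulness(config={"model": EVAL_MODEL}),
    RagasAnswerRelevancy(config={"model": EVAL_MODEL}),
    RagasContextRecall(config={"model": EVAL_MODEL}),
]
```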