```python
# Assumes `evaluator` is an already-initialized evaluation client from the SDK;
# see the quickstart for installation and authentication setup.
result = evaluator.evaluate(
    eval_templates="eval_ranking",  # the Eval Ranking template
    inputs={
        "input": "What is the solar system?",
        "context": [
            "The solar system consists of the Sun and celestial objects bound to it",
            "Our solar system formed 4.6 billion years ago"
        ]
    },
    model_name="turing_flash"  # model used to run the evaluation
)

print(result.eval_results[0].output)  # ranking score
print(result.eval_results[0].reason)  # explanation of the assessment
```
Input

| Required Input | Type | Description |
| --- | --- | --- |
| input | string | The input provided to the model |
| context | list[string] | List of contexts to rank |

Output

| Field | Description |
| --- | --- |
| Result | Returns a score, where higher values indicate better ranking quality of that context |
| Reason | Provides a detailed explanation of the ranking assessment |

What to do if the Eval Ranking is Low

If the evaluation returns a low ranking score, first review the ranking criteria to confirm they are well-defined, relevant, and aligned with the evaluation's objectives, adjusting them where clarity or coverage is lacking. Then analyse the contexts themselves for relevance and suitability, identify any gaps or inadequacies, and refine them so they better support the input, as sketched below.
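
Continuing from the example above, here is a minimal sketch of that remediation loop. It assumes the score is numeric (or castable to float); the 0.5 threshold and the replacement context are illustrative values, not SDK constants:

```python
# Hypothetical remediation loop: if the ranking score is low, swap in a
# context that addresses the input more directly and evaluate again.
LOW_SCORE_THRESHOLD = 0.5  # illustrative cutoff, tune to your use case

score = float(result.eval_results[0].output)  # assumes a numeric score
if score < LOW_SCORE_THRESHOLD:
    refined_contexts = [
        "The solar system consists of the Sun and celestial objects bound to it",
        # Replaces the loosely related formation fact with a context that
        # answers the question directly (illustrative text):
        "The solar system comprises eight planets plus moons, asteroids, and comets orbiting the Sun",
    ]
    result = evaluator.evaluate(
        eval_templates="eval_ranking",
        inputs={"input": "What is the solar system?", "context": refined_contexts},
        model_name="turing_flash",
    )
    print(result.eval_results[0].output, result.eval_results[0].reason)
```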

Differentiating Eval Ranking from Context Adherence

Eval Ranking and Context Adherence serve distinct purposes. Eval Ranking focuses on ranking contexts based on their relevance and suitability for the input, ensuring that the most appropriate context is identified. In contrast, Context Adherence evaluates how well a response stays within the provided context, ensuring that no external information is introduced.
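
To see the two side by side, the sketch below runs both templates on the same material. The `eval_context_adherence` template identifier and its `output` input field are assumptions inferred from the pattern above; check the template reference for the exact names your SDK version uses:

```python
# Eval Ranking: which of the provided contexts best fits the input?
ranking = evaluator.evaluate(
    eval_templates="eval_ranking",
    inputs={
        "input": "What is the solar system?",
        "context": ["The solar system consists of the Sun and celestial objects bound to it"],
    },
    model_name="turing_flash",
)

# Context Adherence: does a generated response stay within the given context?
# NOTE: "eval_context_adherence" and the "output" field are assumed names.
adherence = evaluator.evaluate(
    eval_templates="eval_context_adherence",
    inputs={
        "context": "The solar system consists of the Sun and celestial objects bound to it",
        "output": "The solar system is the Sun and the celestial objects gravitationally bound to it.",
    },
    model_name="turing_flash",
)

print(ranking.eval_results[0].output, adherence.eval_results[0].output)
```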