Evaluates the quality of the context retrieved for generating a response. This evaluation checks whether the retrieved context is relevant and sufficient to produce an accurate and coherent output.

Refer to the Python SDK documentation to learn how to set up this evaluation.
| Input Type | Parameter | Type | Description |
|---|---|---|---|
| Optional | input | string | The input provided to the LLM that triggers the function call. |
| | output | string | The resulting function call or response generated by the LLM. |
| | context | string or list[string] | The contextual information provided to the model. |
| Configuration Parameters | criteria | string | Description of the criteria for evaluation. |
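To show how these parameters might fit together in code, here is a minimal sketch. The `ContextQualityEvaluator` class, its `evaluate` method, and the word-overlap heuristic are all illustrative assumptions, not the SDK's actual API; the real evaluator would typically score context quality with an LLM judge against the given criteria.

```python
from typing import List, Optional, Union


class ContextQualityEvaluator:
    """Hypothetical stand-in for the SDK's context-quality evaluator."""

    def __init__(self, criteria: str):
        # criteria: description of the criteria for evaluation (see table above)
        self.criteria = criteria

    def evaluate(
        self,
        output: str,
        context: Union[str, List[str]],
        input: Optional[str] = None,  # optional: the prompt that triggered the call
    ) -> float:
        # Accept either a single string or a list of context chunks.
        if isinstance(context, list):
            context = " ".join(context)
        # Toy heuristic: fraction of output words that appear in the context.
        # (A real evaluator would typically use an LLM judge with self.criteria.)
        out_words = {w.strip(".,!?").lower() for w in output.split()}
        ctx_words = {w.strip(".,!?").lower() for w in context.split()}
        if not out_words:
            return 0.0
        return len(out_words & ctx_words) / len(out_words)


evaluator = ContextQualityEvaluator(
    criteria="Context must contain the facts needed to support the response."
)
score = evaluator.evaluate(
    input="What year was the Eiffel Tower completed?",
    output="The Eiffel Tower was completed in 1889.",
    context=["The Eiffel Tower was completed in 1889 for the World's Fair."],
)
print(f"Score: {score:.2f}")  # a float between 0 and 1
```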
| Output | Type | Description |
|---|---|---|
| Score | float | A score between 0 and 1. |
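Because the score is a float in [0, 1], a common usage pattern is to gate downstream behavior on a threshold. Continuing the sketch above, the 0.7 cutoff is an arbitrary illustration, not a value recommended by the SDK.

```python
# 0.7 is an arbitrary example threshold; tune it for your application.
THRESHOLD = 0.7

if score >= THRESHOLD:
    print("Context quality acceptable; keep the response.")
else:
    print(f"Context quality too low ({score:.2f}); consider re-retrieving context.")
```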