# Assumes `evaluator` is an already-initialized evals client (see the SDK setup docs)
result = evaluator.evaluate(
    eval_templates="prompt_instruction_adherence",
    inputs={
        "input": "Write a short poem about nature that has exactly 4 lines and includes the word 'sunshine'.",
        "output": "Morning rays filter through leaves,\nBirds sing in harmony with sunshine's glow,\nGreen meadows dance in the gentle breeze,\nNature's symphony in perfect flow."
    },
    model_name="turing_flash"
)

print(result.eval_results[0].output)
print(result.eval_results[0].reason)
Input

Required Input | Type   | Description
input          | string | The input prompt provided to the model
output         | string | The output generated by the model
Output

Field  | Description
Result | Returns a score, where higher values indicate better adherence to the prompt instructions
Reason | Provides a detailed explanation of the prompt instruction adherence assessment

What to Do if Prompt Instruction Adherence is Low

Identify the specific instructions the output violates, such as a missing format requirement or an ignored constraint; targeted feedback makes it easier to bring the content back in line with the prompt. Review the prompt itself for clarity and completeness, since ambiguous or vague instructions often contribute to poor adherence, and rewrite it to give clearer guidance where necessary. Fine-tuning or further prompt engineering can strengthen the model's ability to interpret and follow instructions; a simple refine-and-re-evaluate loop is sketched below.
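As a minimal sketch of that loop, the snippet below rewrites a vague prompt with explicit, numbered constraints and re-runs the same evaluation. The 0.8 threshold and the revised prompt wording are illustrative assumptions, not part of the SDK; only the evaluate() call shape comes from the example above.

# Vague instructions often score poorly; numbered, explicit constraints
# give the evaluator (and the model) something concrete to check against.
vague_prompt = "Write a short poem about nature."
clear_prompt = (
    "Write a poem about nature that:\n"
    "1. Has exactly 4 lines\n"
    "2. Includes the word 'sunshine'"
)

model_output = (
    "Morning rays filter through leaves,\n"
    "Birds sing in harmony with sunshine's glow,\n"
    "Green meadows dance in the gentle breeze,\n"
    "Nature's symphony in perfect flow."
)

result = evaluator.evaluate(
    eval_templates="prompt_instruction_adherence",
    inputs={"input": clear_prompt, "output": model_output},
    model_name="turing_flash",
)

score = result.eval_results[0].output
if score < 0.8:  # assumed quality bar for this example, not an SDK default
    # The reason field pinpoints which instruction was missed
    print(result.eval_results[0].reason)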

Differentiating Prompt/Instruction Adherence from Context Adherence

Context Adherence focuses on maintaining information boundaries and verifying sources, ensuring that responses are derived strictly from the given context. Prompt Adherence, by contrast, evaluates whether the output follows the instructions, completes the task, and conforms to the specified format. Their evaluation criteria differ accordingly: Context Adherence checks whether the information originates from the provided context, while Prompt Adherence checks whether every instruction is followed accurately. The side-by-side sketch below illustrates the difference.
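In the sketch below, the same output is scored by both evals. The "context_adherence" template name and its "context"/"output" input fields are assumptions here; confirm them against your SDK's template catalog before use. The sample context and output are invented for illustration.

# Hypothetical comparison: the same output, judged on two different criteria.
context = "The Eiffel Tower is 330 metres tall and located in Paris."
output = "The Eiffel Tower, a 330-metre landmark in Paris, was completed in 1889."

# Context Adherence: is every claim grounded in the provided context?
# (The 1889 completion date is not in the context, so this should score low.)
context_result = evaluator.evaluate(
    eval_templates="context_adherence",  # assumed template name
    inputs={"context": context, "output": output},
    model_name="turing_flash",
)

# Prompt Adherence: does the output follow the prompt's instructions?
# (One sentence describing the Eiffel Tower, so this should score high.)
prompt_result = evaluator.evaluate(
    eval_templates="prompt_instruction_adherence",
    inputs={
        "input": "Describe the Eiffel Tower in one sentence.",
        "output": output,
    },
    model_name="turing_flash",
)

print(context_result.eval_results[0].output, context_result.eval_results[0].reason)
print(prompt_result.eval_results[0].output, prompt_result.eval_results[0].reason)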