The following evals help ensure that text adheres to predefined length and structural requirements:


1. One Line

Definition: Evaluates whether the input text consists of a single line. It passes when the text contains no newline characters.
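
As a rough local equivalent of this rule (an assumption based on the definition above, not the SDK's internal implementation), the check amounts to:

def looks_single_line(text: str) -> bool:
    # Passes only when the text contains no newline characters
    return "\n" not in text

print(looks_single_line("This is a single line of text"))  # True
print(looks_single_line("Line one\nLine two"))             # False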

Evaluation using Interface

  • input:
    • text: The content column to check.
  • output:
    • result: Passed or Failed

Evaluation using Python SDK

Click here to learn how to set up evaluation using the Python SDK.

Input Type      | Parameter | Type   | Description                  | UI Component
Required Inputs | text      | string | The content column to check. | Column Select

from fi.evals import EvalClient
from fi.testcases import TestCase
from fi.evals.templates import OneLine

evaluator = EvalClient(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com"
)

one_line_eval = OneLine()

test_case = TestCase(
    text="This is a single line of text"
)

result = evaluator.evaluate(eval_templates=[one_line_eval], inputs=[test_case])
is_one_line = result.eval_results[0].metrics[0].value  # 1.0 or 0.0

What to Do When One Line Evaluation Fails: A failure means the input text contains one or more newline characters. Remove or replace the newlines so the text reads as a single continuous line (see the sketch below), or instruct the model explicitly to respond on a single line to prevent the issue in future runs.
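
If the text should genuinely be one line, a minimal pre-processing sketch in plain Python (not an SDK feature) can collapse it before re-evaluating:

def collapse_to_one_line(text: str) -> str:
    # Join non-empty lines with single spaces, dropping the newlines
    return " ".join(line.strip() for line in text.splitlines() if line.strip())

print(collapse_to_one_line("First line\nSecond line\n"))  # First line Second line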


2. Length Less Than

Definition: Evaluates whether the length of the input text is below a specified maximum threshold. It passes when the character count of the text is strictly less than the configured max_length.
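
Assuming the comparison is strict, as the (exclusive) note in the parameter table below indicates, a local equivalent is:

def is_shorter_than(text: str, max_length: int) -> bool:
    # Strict inequality: a text of exactly max_length characters fails
    return len(text) < max_length

print(is_shorter_than("Short", 10))        # True  (5 < 10)
print(is_shorter_than("Exactly ten", 11))  # False (11 is not < 11)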

Evaluation using Interface

  • input:
    • text: The content column to check.
  • configuration parameters:
    • max_length: The maximum allowed length (exclusive).
  • output:
    • result: Passed or Failed

Evaluation using Python SDK

Click here to learn how to set up evaluation using the Python SDK.

Input Type               | Parameter  | Type   | Description                             | UI Component
Required Inputs          | text       | string | The content column to check length.     | Column Select
Configuration Parameters | max_length | int    | The maximum allowed length (exclusive). | Number Input

from fi.evals import EvalClient
from fi.testcases import TestCase
from fi.evals.templates import LengthLessThan

evaluator = EvalClient(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com"
)

length_less_eval = LengthLessThan(config={"max_length": 50})

test_case = TestCase(
    text="Short text example"  # 18 characters, under the 50-character limit
)

result = evaluator.evaluate(eval_templates=[length_less_eval], inputs=[test_case])
is_less_than = result.eval_results[0].metrics[0].value  # 1.0 or 0.0

What to Do When Length Less Than Evaluation Fails: A failure means the text's character count is at or above the configured max_length. Shorten or truncate the text to fit within the limit (see the sketch below), or state the length constraint explicitly in the prompt so future outputs comply.
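
One simple remedy is to truncate before re-evaluating. A minimal sketch in plain Python (not an SDK feature) that cuts at a word boundary where possible:

def truncate(text: str, max_length: int) -> str:
    if len(text) < max_length:
        return text
    # Keep at most max_length - 1 characters so the strict check passes,
    # preferring the last word boundary inside the cut
    cut = text[: max_length - 1]
    if " " in cut:
        cut = cut.rsplit(" ", 1)[0]
    return cut

print(truncate("Short text example", 10))  # Short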


3. Length Greater Than

Definition: Evaluates whether the length of the input text exceeds a specified minimum threshold. It passes when the character count of the text is strictly greater than the configured min_length.
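
Assuming the comparison is strict, as the (exclusive) note in the parameter table below indicates, a local equivalent is:

def is_longer_than(text: str, min_length: int) -> bool:
    # Strict inequality: a text of exactly min_length characters fails
    return len(text) > min_length

print(is_longer_than("A reasonably long sentence.", 10))  # True  (27 > 10)
print(is_longer_than("Ten chars!", 10))                   # False (10 is not > 10)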

Evaluation using Interface

  • input:
    • text: The content column to check.
  • configuration parameters:
    • min_length: The minimum required length (exclusive).
  • output:
    • result: Passed or Failed

Evaluation using Python SDK

Click here to learn how to set up evaluation using the Python SDK.

Input Type               | Parameter  | Type   | Description                              | UI Component
Required Inputs          | text       | string | The content column to check length.      | Column Select
Configuration Parameters | min_length | int    | The minimum required length (exclusive). | Number Input

from fi.evals import EvalClient
from fi.testcases import TestCase
from fi.evals.templates import LengthGreaterThan

evaluator = EvalClient(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com"
)

length_greater_eval = LengthGreaterThan(config={"min_length": 50})

test_case = TestCase(
    text="This is a longer text that should exceed the minimum length requirement"
)

result = evaluator.evaluate(eval_templates=[length_greater_eval], inputs=[test_case])
is_greater_than = result.eval_results[0].metrics[0].value  # 1.0 or 0.0

What to Do When Length Greater Than Evaluation Fails: A failure means the text's character count is at or below the configured min_length. Expand the text with substantive content to meet the requirement, or state the minimum length explicitly in the prompt; the pre-check sketched below can flag short outputs before evaluation.
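
Text that is too short usually needs a substantive rewrite rather than padding, so a local pre-check in plain Python can flag failures before they reach the evaluator:

MIN_LENGTH = 50  # must match the min_length configured for the eval

candidate = "Too short"
if len(candidate) <= MIN_LENGTH:
    print(f"Only {len(candidate)} characters; regenerate with a longer-response instruction")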


4. Length Between

Definition: Evaluates whether the length of the input text falls within a specified range. It passes when the character count is between the configured min_length and max_length, inclusive.
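
Per the (inclusive) notes in the parameter table below, texts exactly at either bound pass, so a local equivalent is:

def length_in_range(text: str, min_length: int, max_length: int) -> bool:
    # Both bounds are inclusive: boundary-length texts pass
    return min_length <= len(text) <= max_length

print(length_in_range("12345678901234567890", 20, 100))  # True (exactly 20 characters)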

Evaluation using Interface

  • input:
    • text: The content column to check.
  • configuration parameters:
    • min_length: The minimum allowed length (inclusive).
    • max_length: The maximum allowed length (inclusive).
  • output:
    • result: Passed or Failed

Evaluation using Python SDK

Click here to learn how to set up evaluation using the Python SDK.

Input Type               | Parameter  | Type   | Description                             | UI Component
Required Inputs          | text       | string | The content column to check length.     | Column Select
Configuration Parameters | min_length | int    | The minimum allowed length (inclusive). | Number Input
Configuration Parameters | max_length | int    | The maximum allowed length (inclusive). | Number Input

from fi.evals import EvalClient
from fi.testcases import TestCase
from fi.evals.templates import LengthBetween

evaluator = EvalClient(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com"
)

length_between_eval = LengthBetween(config={"min_length": 20, "max_length": 100})

test_case = TestCase(
    text="This text should be between 20 and 100 characters"
)

result = evaluator.evaluate(eval_templates=[length_between_eval], inputs=[test_case])
is_between = result.eval_results[0].metrics[0].value  # 1.0 or 0.0

What to Do When Length Between Evaluation Fails: A failure means the text's character count falls outside the configured min_length to max_length range. Trim or expand the text accordingly, and state both bounds explicitly in the prompt; the sketch below reports which bound was violated, which makes the revision straightforward.
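
A local pre-check in plain Python (not an SDK feature) that reports which bound was violated:

def check_range(text: str, min_length: int, max_length: int) -> str:
    n = len(text)
    if n < min_length:
        return f"too short: {n} < {min_length}; expand the text"
    if n > max_length:
        return f"too long: {n} > {max_length}; trim the text"
    return "within range"

print(check_range("Tiny", 20, 100))  # too short: 4 < 20; expand the text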