Validating the structure and patterns of text is critical for ensuring the reliability, usability, and accuracy of generated content. Whether it’s checking if a response meets specific length requirements, follows formatting rules, or adheres to regex patterns, these validations play an essential role in maintaining quality control.

Validating the structure of AI-generated text helps detect responses that are too short, too long, or nonsensical; that are missing required phrases or keywords; or that deviate from expected formats such as URLs, email addresses, or numerical data. By incorporating the evaluations below, developers can ensure that AI systems generate text that is logical, meaningful, and aligned with task-specific requirements.

Below are the evals provided by Future AGI that validate text based on structure, length, and patterns:


1. One Line

Checks if the text is a single line, ensuring that responses such as summaries or titles do not contain line breaks.

Click here to read the eval definition of One Line

a. Using Interface

Required Parameters

  • Input:
    • Text: Content to check for single line

Output: Boolean (1.0 or 0.0), 1.0 if the text is a single line, 0.0 if it contains line breaks
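For reference, the boolean semantics can be sketched in plain Python (an illustration only, not the SDK's implementation; it assumes any newline or carriage-return character counts as a line break):

```python
def is_one_line(text: str) -> float:
    # 1.0 if the text contains no line breaks, 0.0 otherwise.
    return 0.0 if ("\n" in text or "\r" in text) else 1.0
```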

b. Using SDK

from fi.evals import EvalClient
from fi.testcases import TestCase
from fi.evals.templates import OneLine

evaluator = EvalClient(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com"
)

one_line_eval = OneLine()

test_case = TestCase(
    text="This is a single line of text"
)

result = evaluator.evaluate(eval_templates=[one_line_eval], inputs=[test_case])
is_one_line = result.eval_results[0].metrics[0].value

2. Length Less Than

Ensures the text is shorter than a specified threshold. Useful for validating concise responses, such as brief summaries or single-word answers.

Click here to read the eval definition of Length Less Than

a. Using Interface

Required Parameters

  • Input:
    • Text: Content to check length
  • Config:
    • max_length: Integer - Maximum allowed length

Output: Boolean (1.0 or 0.0), 1.0 if the text length is less than max_length, 0.0 otherwise
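The check can be sketched locally as follows (an illustration only; it assumes length means character count, which the docs above do not specify):

```python
def length_less_than(text: str, max_length: int) -> float:
    # 1.0 if the character count is strictly below max_length.
    return 1.0 if len(text) < max_length else 0.0
```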

b. Using SDK

from fi.evals import EvalClient
from fi.testcases import TestCase
from fi.evals.templates import LengthLessThan

evaluator = EvalClient(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com"
)

length_less_eval = LengthLessThan(config={"max_length": 10})

test_case = TestCase(
    text="Short text example"
)

result = evaluator.evaluate(eval_templates=[length_less_eval], inputs=[test_case])
is_less_than = result.eval_results[0].metrics[0].value  # 1.0 or 0.0

3. Length Greater Than

Ensures the text meets a minimum length requirement. Useful for verifying detailed explanations or comprehensive responses.

Click here to read the eval definition of Length Greater Than

a. Using Interface

Required Parameters

  • Input:
    • Text: Content to check length
  • Config:
    • min_length: Integer - Minimum required length

Output: Boolean (1.0 or 0.0), 1.0 if the text length is greater than min_length, 0.0 otherwise
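Equivalently, in plain Python (a sketch, again assuming character count):

```python
def length_greater_than(text: str, min_length: int) -> float:
    # 1.0 if the character count is strictly above min_length.
    return 1.0 if len(text) > min_length else 0.0
```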

b. Using SDK

from fi.evals import EvalClient
from fi.testcases import TestCase
from fi.evals.templates import LengthGreaterThan

evaluator = EvalClient(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com"
)

length_greater_eval = LengthGreaterThan(config={"min_length": 50})

test_case = TestCase(
    text="This is a longer text that should exceed the minimum length requirement"
)

result = evaluator.evaluate(eval_templates=[length_greater_eval], inputs=[test_case])
is_greater_than = result.eval_results[0].metrics[0].value  # 1.0 or 0.0

4. Length Between

Checks if the text length falls within a specified range, ensuring balanced responses that are neither too short nor too verbose.

Click here to read the eval definition of Length Between

a. Using Interface

Required Parameters

  • Input:
    • Text: Content to check length
  • Config:
    • min_length: Integer - Minimum length
    • max_length: Integer - Maximum length

Output: Boolean (1.0 or 0.0), 1.0 if the text length satisfies min_length ≤ length ≤ max_length, 0.0 otherwise
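A local sketch of the same check, with both bounds inclusive as in the formula stated above (character count is an assumption):

```python
def length_between(text: str, min_length: int, max_length: int) -> float:
    # 1.0 if min_length <= character count <= max_length, bounds inclusive.
    return 1.0 if min_length <= len(text) <= max_length else 0.0
```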

b. Using SDK

from fi.evals import EvalClient
from fi.testcases import TestCase
from fi.evals.templates import LengthBetween

evaluator = EvalClient(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com"
)

length_between_eval = LengthBetween(config={"min_length": 20, "max_length": 100})

test_case = TestCase(
    text="This text should be between 20 and 100 characters"
)

result = evaluator.evaluate(eval_templates=[length_between_eval], inputs=[test_case])
is_between = result.eval_results[0].metrics[0].value

5. Contains

Checks if the text contains a specific keyword or phrase. Useful for ensuring required phrases are present, such as in compliance-related outputs.

Click here to read the eval definition of Contains

a. Using Interface

Required Parameters

  • Input:
    • Text: Content to search in
  • Config:
    • keyword: String - Text to search for
    • case_sensitive: Boolean (optional) - Whether to match case

Output: Boolean (1.0 or 0.0), 1.0 if the substring is found, 0.0 otherwise
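The semantics can be illustrated with a minimal local equivalent (not the SDK's implementation; the default for case sensitivity here is an assumption):

```python
def contains(text: str, keyword: str, case_sensitive: bool = True) -> float:
    # 1.0 if keyword occurs anywhere in text; case is folded
    # only when case_sensitive is False.
    if not case_sensitive:
        text, keyword = text.lower(), keyword.lower()
    return 1.0 if keyword in text else 0.0
```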

b. Using SDK

from fi.evals import EvalClient
from fi.testcases import TestCase
from fi.evals.templates import Contains

evaluator = EvalClient(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com"
)

contains_eval = Contains(config={
    "keyword": "Hello",
    "case_sensitive": True
    }
)

test_case = TestCase(
    text="Hello world! How are you?"
)

result = evaluator.evaluate(eval_templates=[contains_eval], inputs=[test_case])
contains_text = result.eval_results[0].metrics[0].value

6. Contains Any

Validates that the text includes at least one of a list of specified keywords. Useful for verifying that key points or critical terms are included.

Click here to read the eval definition of Contains Any

a. Using Interface

Required Parameters

  • Input:
    • Text: Content to search in
  • Config:
    • keywords: List[String] - List of possible strings to find
    • case_sensitive: Boolean (optional)

Output: Boolean (1.0 or 0.0), 1.0 if any substring is found, 0.0 otherwise
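In plain Python, this is an existential check over the keyword list (an illustrative sketch only):

```python
def contains_any(text: str, keywords: list[str], case_sensitive: bool = True) -> float:
    # 1.0 if at least one keyword occurs in text.
    if not case_sensitive:
        text = text.lower()
        keywords = [k.lower() for k in keywords]
    return 1.0 if any(k in text for k in keywords) else 0.0
```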

b. Using SDK

from fi.evals import EvalClient
from fi.testcases import TestCase
from fi.evals.templates import ContainsAny

evaluator = EvalClient(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com"
)

contains_eval = ContainsAny(config={
    "keywords": ["Hello", "world"],
    "case_sensitive": True
    }
)

test_case = TestCase(
    text="Hello world! How are you?"
)

result = evaluator.evaluate(eval_templates=[contains_eval], inputs=[test_case])
contains_text = result.eval_results[0].metrics[0].value  # 1.0 or 0.0

7. Contains All

Ensures that all specified keywords are present in the text. Useful for checking comprehensive coverage of required elements.

Click here to read the eval definition of Contains All

a. Using Interface

Required Parameters

  • Input:
    • Text: Content to search in
  • Config:
    • keywords: List[String] - List of required strings
    • case_sensitive: Boolean (optional)

Output: Boolean (1.0 or 0.0), 1.0 if all substrings are found, 0.0 if any are missing
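This differs from Contains Any only in the quantifier: every keyword must appear. A local sketch:

```python
def contains_all(text: str, keywords: list[str], case_sensitive: bool = True) -> float:
    # 1.0 only if every keyword occurs in text.
    if not case_sensitive:
        text = text.lower()
        keywords = [k.lower() for k in keywords]
    return 1.0 if all(k in text for k in keywords) else 0.0
```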

b. Using SDK

from fi.evals import EvalClient
from fi.testcases import TestCase
from fi.evals.templates import ContainsAll

evaluator = EvalClient(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com"
)

contains_all_eval = ContainsAll(config={
    "keywords": ["hello", "world"],
    "case_sensitive": False})

test_case = TestCase(
    text="Hello world! How are you?"
)

result = evaluator.evaluate(eval_templates=[contains_all_eval], inputs=[test_case])
contains_all = result.eval_results[0].metrics[0].value

8. Contains None

Validates that none of the specified terms are included in the text. Useful for ensuring inappropriate or restricted words are excluded.

Click here to read the eval definition of Contains None

a. Using Interface

Required Parameters

  • Input:
    • Text: Content to search in
  • Config:
    • keywords: List[String] - List of forbidden strings
    • case_sensitive: Boolean (optional)

Output: Boolean (1.0 or 0.0), 1.0 if no forbidden substrings are found, 0.0 otherwise
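This is the negation of Contains Any: the eval passes only when no forbidden term appears. Sketched locally:

```python
def contains_none(text: str, keywords: list[str], case_sensitive: bool = True) -> float:
    # 1.0 only if no forbidden keyword occurs in text.
    if not case_sensitive:
        text = text.lower()
        keywords = [k.lower() for k in keywords]
    return 0.0 if any(k in text for k in keywords) else 1.0
```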

b. Using SDK

from fi.evals import EvalClient
from fi.testcases import TestCase
from fi.evals.templates import ContainsNone

evaluator = EvalClient(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com"
)

contains_none_eval = ContainsNone(config={
    "keywords": ["hello", "world"],
    "case_sensitive": False})

test_case = TestCase(
    text="This is a good and clean text"
)

result = evaluator.evaluate(eval_templates=[contains_none_eval], inputs=[test_case])
contains_none = result.eval_results[0].metrics[0].value

9. Starts With

Checks if the text begins with a specific substring. Helpful for ensuring introductions, greetings, or standard templates start correctly.

Click here to read the eval definition of Starts With

a. Using Interface

Required Parameters

  • Input:
    • Text: Content to check
  • Config:
    • substring: String - Required starting text
    • case_sensitive: Boolean (optional)

Output: Boolean (1.0 or 0.0), 1.0 if the text starts with the prefix, 0.0 otherwise
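The check maps directly onto Python's `str.startswith` (an illustration, not the SDK source):

```python
def starts_with(text: str, prefix: str, case_sensitive: bool = True) -> float:
    # 1.0 if text begins with the given prefix.
    if not case_sensitive:
        text, prefix = text.lower(), prefix.lower()
    return 1.0 if text.startswith(prefix) else 0.0
```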

b. Using SDK

from fi.evals import EvalClient
from fi.testcases import TestCase
from fi.evals.templates import StartsWith

evaluator = EvalClient(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com"
)

starts_with_eval = StartsWith(config={
    "substring": "Dear",
    "case_sensitive": True})

test_case = TestCase(
    text="Dear Sir/Madam,"
)

result = evaluator.evaluate(eval_templates=[starts_with_eval], inputs=[test_case])
starts_with = result.eval_results[0].metrics[0].value  # 1.0 or 0.0

10. Ends With

Validates if the text ends with a specific substring. Useful for verifying standard sign-offs or footer text.

Click here to read the eval definition of Ends With

a. Using Interface

Required Parameters

  • Input:
    • Text: Content to check
  • Config:
    • substring: String - Required ending text
    • case_sensitive: Boolean (optional)

Output: Boolean (1.0 or 0.0), 1.0 if the text ends with the suffix, 0.0 otherwise
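Analogously, this corresponds to Python's `str.endswith` (sketch only):

```python
def ends_with(text: str, suffix: str, case_sensitive: bool = True) -> float:
    # 1.0 if text ends with the given suffix.
    if not case_sensitive:
        text, suffix = text.lower(), suffix.lower()
    return 1.0 if text.endswith(suffix) else 0.0
```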

b. Using SDK

from fi.evals import EvalClient
from fi.testcases import TestCase
from fi.evals.templates import EndsWith

evaluator = EvalClient(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com"
)

ends_with_eval = EndsWith(config={
    "substring": "you",
    "case_sensitive": True})

test_case = TestCase(
    text="thank you"
)

result = evaluator.evaluate(eval_templates=[ends_with_eval], inputs=[test_case])
ends_with = result.eval_results[0].metrics[0].value

11. Equals

Checks if the text exactly matches an expected string. Useful for validating predefined responses or strict format compliance.

Click here to read the eval definition of Equals

a. Using Interface

Required Parameters

  • Input:
    • Text: Content to check
    • expected_text: String - Text to match against
  • Config:
    • case_sensitive: Boolean (optional)

Output: Boolean (1.0 or 0.0), 1.0 if the texts match exactly, 0.0 otherwise
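Unlike the Contains family, this requires the full strings to be identical. A minimal local equivalent:

```python
def equals(text: str, expected_text: str, case_sensitive: bool = True) -> float:
    # 1.0 only on an exact match of the complete strings.
    if not case_sensitive:
        text, expected_text = text.lower(), expected_text.lower()
    return 1.0 if text == expected_text else 0.0
```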

b. Using SDK

from fi.evals import EvalClient
from fi.testcases import TestCase
from fi.evals.templates import Equals

evaluator = EvalClient(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com"
)

equals_eval = Equals(config={"case_sensitive": False})

test_case = TestCase(
    text="Hello, World!",
    expected_text="hello, world!"
)

result = evaluator.evaluate(eval_templates=[equals_eval], inputs=[test_case])
is_equal = result.eval_results[0].metrics[0].value

12. Regex

Checks if the text matches a specified regex pattern. This evaluation is particularly useful for checking structured data formats such as phone numbers, email addresses, dates, or custom-defined patterns.

Click here to read the eval definition of Regex

a. Using Interface

Required Parameters

  • Input:
    • Text: Content to validate
  • Config:
    • pattern: String - Regular expression pattern

Output: Boolean (1.0 or 0.0), 1.0 if the text matches the regex pattern, 0.0 otherwise
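As an illustration of the matching semantics (this sketch assumes `re.search` behavior, so anchor the pattern with `^` and `$` when a full-string match is required, as the email example in the SDK snippet does):

```python
import re

def matches_pattern(text: str, pattern: str) -> float:
    # 1.0 if the regex matches anywhere in the text; anchored
    # patterns (^...$) effectively require a full-string match.
    return 1.0 if re.search(pattern, text) else 0.0
```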

b. Using SDK

from fi.evals import EvalClient
from fi.testcases import TestCase
from fi.evals.templates import Regex

evaluator = EvalClient(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com"
)

regex_eval = Regex(config={"pattern": r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$"})

test_case = TestCase(
    text="user@example.com"
)

result = evaluator.evaluate(eval_templates=[regex_eval], inputs=[test_case])
matches_pattern = result.eval_results[0].metrics[0].value  # 1.0 or 0.0