Reference for the Protect class in the Future AGI Python SDK.
# Protect

## Class

The `Protect` class provides a client for evaluating and protecting against unwanted content (such as toxicity, prompt injection, and more) using various metrics and rules. It leverages the `Evaluator` class and a set of built-in evaluation templates.
**Parameters**

- `fi_api_key` (Optional[str]): API key for authentication. If not provided, will be read from environment variables.
- `fi_secret_key` (Optional[str]): Secret key for authentication. If not provided, will be read from environment variables.
- `fi_base_url` (Optional[str]): Base URL for the API. If not provided, will be read from environment variables.
- `evaluator` (Optional[Evaluator]): An instance of the `Evaluator` class to use for evaluations. If not provided, a new one will be created.

**Raises**

- `InvalidAuthError`: If the API key or secret key is missing.
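A minimal construction sketch is shown below. The import path is an assumption and may differ in your installed SDK version; the parameter names match those listed above, and the keys can instead be supplied through environment variables.

```python
# Minimal sketch: constructing a Protect client.
# NOTE: the import path below is an assumption; adjust it to match your SDK install.
from fi.evals import Protect

# Keys may be omitted here if they are set as environment variables instead.
protector = Protect(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
)
```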
## protect

**Parameters**

- `inputs` (str): The input string to evaluate.
- `protect_rules` (List[Dict]): List of protection rule dictionaries (see the example rule list after this parameter list). Each rule must contain:
  - `metric` (str): Name of the metric to evaluate (e.g., "Toxicity", "Tone", "Sexism").
  - `contains` (List[str]): Values to check for in the evaluation results.
  - `type` (str): Either "any" or "all", specifying the matching logic.
  - `action` (str): Message to return when the rule is triggered.
  - `reason` (bool, optional): Whether to include the evaluation reason in the message.
- `action` (str, optional): Default message to return when a rule is triggered. Defaults to "Response cannot be generated as the input fails the checks".
- `reason` (bool, optional): Whether to include the evaluation reason in the message. Defaults to False.
- `timeout` (int, optional): Timeout for evaluations in seconds. Defaults to 300.
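For illustration, a hypothetical `protect_rules` list using the keys described above might look like the following. The metric names come from this reference, but the `contains` values are placeholders; the values to check for depend on the outputs the chosen metric can produce.

```python
# Illustrative protect_rules list; the "contains" values are placeholders and
# depend on what the chosen metric actually returns.
protect_rules = [
    {
        "metric": "Toxicity",
        "contains": ["Failed"],   # placeholder evaluation outcome to match on
        "type": "any",            # trigger if any listed value is found
        "action": "I cannot respond to toxic input.",
        "reason": True,           # include the evaluation reason in the message
    },
    {
        "metric": "Tone",
        "contains": ["anger", "annoyance"],  # placeholder tone labels
        "type": "any",
        "action": "Please rephrase your request in a neutral tone.",
    },
]
```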
**Returns**

- `List[str]`: List of protection messages for failed rules, or `["All checks passed"]` if no rules are triggered.

**Raises**

- `ValueError`: If `inputs` or `protect_rules` do not match the required structure.
- `TypeError`: If `inputs` contains non-string objects.
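A usage sketch combining the pieces above is shown below; it assumes the `protector` instance and `protect_rules` list from the earlier examples. The keyword arguments and the `"All checks passed"` sentinel come from this reference.

```python
# Usage sketch: screen an input string against the rules defined above.
messages = protector.protect(
    inputs="Some user-provided text to screen.",
    protect_rules=protect_rules,
    action="Response cannot be generated as the input fails the checks",  # default message
    reason=False,    # do not append evaluation reasons to the message
    timeout=300,     # evaluation timeout in seconds
)

if messages == ["All checks passed"]:
    print("Input passed all protection rules.")
else:
    for message in messages:
        print("Rule triggered:", message)
```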