Step 1: Setting API Key
Set up your Future AGI account and get started with Future AGI's robust SDKs. Follow the QuickStart guide:
Click here to learn how to access your API key.
Step 2: Installation and Setup
To begin using Protect, initialize a Protect instance. It handles communication with the API and applies the safety checks you define.
import os
from fi.evals import Protect
protector = Protect(
fi_api_key=os.getenv("FI_API_KEY"),
fi_secret_key=os.getenv("FI_SECRET_KEY")
)
# Optional: pass keys directly instead of using environment variables
Replace "FI_API_KEY"
and "FI_SECRET_KEY"
with your actual credentials before executing the code.
The Protect Client letβs you initialize protect.
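If you prefer not to rely on environment variables, the keys can also be passed directly as strings, as the optional comment above notes (a minimal sketch; the placeholder values below are assumptions):
# Sketch: pass credentials directly instead of reading them from the environment
protector = Protect(
    fi_api_key="your-api-key",        # placeholder value
    fi_secret_key="your-secret-key"   # placeholder value
)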
Step 3: Define Protect Rules
The protect method accepts several arguments and rules to configure your protection checks.
Arguments
| Argument | Type | Default Value | Description |
|---|---|---|---|
| input | string | Required | Input text to be evaluated. This may be plain text, an audio URL, or a local audio file path. |
| protect_rules | List[Dict] | Required | Rules to apply to the input. |
| action | string | Custom failure message | Message shown on failure. |
| reason | bool | False | Include the failure reason in the output. |
| timeout | int | 0.3 | Maximum time (in seconds) allowed for the checks. |
| use_flash | bool | False | If True, Protect runs in flash mode and ignores all rules. |
Defining Rules
Rules are defined as a list of custom Protect metrics. Each metric is a dictionary with fixed keys.
| Key | Required | Type | Values | Description |
|---|---|---|---|---|
| metric | Yes | string | Toxicity, Tone, Sexism, Prompt Injection, Data Privacy | Which metric to apply. |
| contains | Tone only | list[string] | "neutral", "joy", "love", "fear", "surprise", "sadness", "anger", "annoyance", "confusion" | Values to check for (e.g., ["anger", "sadness"]). |
| type | Tone only | string | any, all | Match if any or all of the values match. |
Example Rule Set:
rules = [
{
"metric": "Tone",
"contains": ["anger", "fear"],
"type": "any"
},
{
"metric": "Toxicity"
}
]
Evaluation stops as soon as one rule fails.
Understanding the Outputs
When a check is run, a response dictionary is returned with detailed results.
| Key | Type | Description |
|---|---|---|
| status | string (passed / failed) | Result of rule evaluation. |
| messages | string | Final message or the original input. |
| completed_rules | list[string] | Rules that completed successfully. |
| uncompleted_rules | list[string] | Rules skipped due to early failure or timeout. |
| failed_rule | string / None | The rule that caused the failure. |
| reason | string | Explanation of the failure. |
| time_taken | float | Time taken (seconds). |
Pass Example
{
'status': 'passed',
'completed_rules': ['Toxicity', 'Tone'],
'uncompleted_rules': [],
'messages': 'I like apples',
'reason': 'All checks passed',
'time_taken': 0.00001
}
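Fail Example
For comparison, a failed check might return something like the following (illustrative values only, assuming a Toxicity rule failed; messages would then carry the configured action string):
{
    'status': 'failed',
    'completed_rules': [],
    'uncompleted_rules': ['Tone'],
    'failed_rule': 'Toxicity',
    'messages': 'This message cannot be displayed',
    'reason': 'Toxic content detected in the input',
    'time_taken': 0.2
}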
Examples by Metric
The example below uses the Toxicity metric; the same pattern applies to Tone, Sexism, Prompt Injection, and Data Privacy. A Protect-Flash variant is sketched after it.
rules = [{'metric': 'Toxicity'}]
action = "This message cannot be displayed"
result = protector.protect(
"This is a test message",
protect_rules=rules,
action=action,
reason=True,
timeout=25
)
print(result)
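Protect-Flash can be enabled through the use_flash argument described in the table above. The following is a minimal sketch under that assumption; in flash mode the individual rules are ignored:
# Flash mode: a single fast safety check that ignores the individual rules
result = protector.protect(
    "This is a test message",
    protect_rules=rules,   # ignored when use_flash=True
    action="This message cannot be displayed",
    reason=True,
    timeout=25,
    use_flash=True
)
print(result)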
Optionally, you can use the protect function directly:
from fi.evals import protect
result = protect(
"This is my input string",
protect_rules=rules,
action="I cannot process this request",
reason=True,
timeout=50
)
print(result)
The protect function is a shortcut for the Protect class. Configure the FI_API_KEY and FI_SECRET_KEY environment variables before using the protect function.
Multiple Rules Example
rules = [
{'metric': 'Toxicity'},
{'metric': 'Prompt Injection'},
{
'metric': 'Tone',
'contains': ['anger', 'annoyance'],
'type': 'all'
},
{'metric': 'Data Privacy'},
{'metric': 'Sexism'}
]
result = protector.protect(
"This is my input string",
protect_rules=rules,
action="I cannot process this request",
reason=True,
timeout=50
)
print(result)
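Once the result is returned, you can branch on the documented status field, for example (a minimal sketch using the output keys listed in the table above):
# Act on the outcome of the checks
if result['status'] == 'failed':
    print(f"Blocked by rule: {result['failed_rule']} ({result['reason']})")
else:
    print(result['messages'])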
Audio Support in Protect
Protect also supports audio inputs without any additional changes. To run Protect on audio, pass an audio URL or a local audio file path as a plain string; the system automatically detects audio inputs and processes them accordingly.
Toxicity (audio)
rules = [{'metric': 'Toxicity'}]
action = "This message cannot be displayed"
result = protector.protect(
"AUDIO URL / LOCAL AUDIO FILE PATH",
protect_rules=rules,
action=action,
reason=True,
timeout=25
)
print(result)