How to Use Future AGI Protect

Step 1: Setting API Key

Set up your Future AGI account and get started with Future AGI's SDKs. Follow the Quickstart guide:

🔗 Quickstart Guide


Step 2: Installation and Setup

To begin using Protect, initialize the Future AGI evaluation client along with the Protect client. These clients handle communication with the API and apply defined safety checks.

import os
from fi.evals import EvalClient, ProtectClient

# Initialize the EvalClient with your API credentials and base URL

# Optionally, set the credentials as environment variables instead:
# os.environ["FI_API_KEY"] = "xxxxxxxxxx"
# os.environ["FI_SECRET_KEY"] = "xxxxxxxxxx"

evaluator = EvalClient(
    fi_api_key="FI_API_KEY",
    fi_secret_key="FI_SECRET_KEY",
    fi_base_url="https://api.futureagi.com"
)

# Initialize the Protect client by passing the evaluator instance

protector = ProtectClient(evaluator=evaluator)

Replace "FI_API_KEY" and "FI_SECRET_KEY" with your actual credentials before executing the code.

The ProtectClient wraps the evaluator and exposes the Protect checks.
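If you prefer not to hard-code credentials, you can read them from the environment before constructing the clients. This is a minimal sketch that reuses the constructor arguments shown above; os.getenv is standard Python, and the environment variable names follow the commented example.

import os
from fi.evals import EvalClient, ProtectClient

# Read credentials from environment variables instead of hard-coding them
evaluator = EvalClient(
    fi_api_key=os.getenv("FI_API_KEY"),
    fi_secret_key=os.getenv("FI_SECRET_KEY"),
    fi_base_url="https://api.futureagi.com"
)

protector = ProtectClient(evaluator=evaluator)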


Step 3: Define Protect Rules

The protect() call accepts several arguments and a list of rules to configure your protection checks.

🔧 Arguments

| Argument | Type | Default Value | Description |
| --- | --- | --- | --- |
| input | string | - | Input text to be evaluated. This may be plain text, an audio URL, or a local audio file path |
| protect_rules | List[Dict] | - | Rules to apply on the input |
| action | string | Custom failure message | Message shown on failure |
| reason | bool | False | Include the failure reason in the output |
| timeout | int | 0.3 | Max time (in seconds) allowed for checks |
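For orientation, here is an illustrative call that exercises every argument in the table; the input text, action message, and timeout value are placeholders, and rule definitions are covered in the next subsection.

# Illustrative call using every argument from the table above
result = protector.protect(
    "Text to check",                           # input: plain text, an audio URL, or a local audio file path
    protect_rules=[{"metric": "Toxicity"}],    # rules to apply on the input
    action="This content cannot be shown",     # message returned when a rule fails
    reason=True,                               # include the failure reason in the output
    timeout=30                                 # maximum time (in seconds) allowed for the checks
)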

Defining Rules

Rules are defined as a list of custom Protect metrics. Each metric is a dictionary with fixed keys.

| Key | Required | Type | Values | Description |
| --- | --- | --- | --- | --- |
| metric | ✅ | string | Toxicity, Tone, Sexism, Prompt Injection, Data Privacy | Which metric to apply |
| contains | Tone only | list[string] | Depends on metric | Values to check for (e.g., ["anger", "sadness"]) |
| type | Tone only | string | any, all | Match if any or all values match |

Example Rule Set:

rules = [
    {
        "metric": "Tone",
        "contains": ["anger", "fear"],
        "type": "any"
    },
    {
        "metric": "Toxicity"
    }
]

Evaluation stops as soon as one rule fails.


Understanding the Outputs

When a check is run, a response dictionary is returned with detailed results.

| Key | Type | Description |
| --- | --- | --- |
| status | string (passed / failed) | Result of rule evaluation |
| messages | string | Final message or original input |
| completed_rules | list[string] | Rules that completed successfully |
| uncompleted_rules | list[string] | Rules skipped due to early failure or timeout |
| failed_rule | string / None | Rule that caused the failure |
| reason | string | Explanation of failure |
| time_taken | float | Time taken (seconds) |

{
    'status': 'passed',
    'completed_rules': ['Toxicity', 'Tone'],
    'uncompleted_rules': [],
    'messages': 'I like apples',
    'reason': 'All checks passed',
    'time_taken': 0.00001
}
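For contrast, a failed check populates failed_rule, returns the configured action string in messages, and lists any skipped rules under uncompleted_rules, since evaluation stops at the first failure. The values below are hypothetical and only illustrate the shape of the response.

{
    'status': 'failed',
    'completed_rules': ['Toxicity'],
    'uncompleted_rules': ['Sexism'],
    'messages': 'I cannot process this request',
    'failed_rule': 'Tone',
    'reason': 'Detected tone matched one of the configured values',
    'time_taken': 0.2
}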

Examples by Metric

Toxicity

rules = [{'metric': 'Toxicity'}]
action = "This message cannot be displayed"

result = protector.protect(
    "This is a test message",
    protect_rules=rules,
    action=action,
    reason=True,
    timeout=25
)
print(result)


Multiple Rules Example

rules = [
    {'metric': 'Toxicity'},
    {'metric': 'Prompt Injection'},
    {
        'metric': 'Tone',
        'contains': ['anger', 'annoyance'],
        'type': 'all'
    },
    {'metric': 'Data Privacy'},
    {'metric': 'Sexism'}
]

result = protector.protect(
    "This is my input string",
    protect_rules=rules,
    action="I cannot process this request",
    reason=True,
    timeout=50
)
print(result)
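A common pattern is to gate downstream processing on the result. This is a minimal sketch that assumes only the response keys documented above; process() is a hypothetical placeholder for your own handler.

# Minimal guardrail pattern: only proceed when all rules pass
if result['status'] == 'passed':
    # Safe to pass the original input downstream (e.g., to an LLM)
    process(result['messages'])  # process() is a hypothetical application-specific handler
else:
    # On failure, 'messages' carries the configured action string
    print(f"Blocked by rule {result['failed_rule']}: {result['reason']}")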


Audio support in Protect

Protect also supports audio inputs without any additional configuration. To run Protect on audio, pass an audio URL or a local audio file path as a plain string; the system automatically identifies audio inputs and processes them accordingly.

Toxicity (audio)

rules = [{'metric': 'Toxicity'}]
action = "This message cannot be displayed"

result = protector.protect(
    "AUDIO URL / LOCAL AUDIO FILE PATH",
    protect_rules=rules,
    action=action,
    reason=True,
    timeout=25
)
print(result)
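To make the two input forms concrete, the same call accepts either a hosted audio URL or a local file path; the locations below are placeholders.

# The same call works for a hosted file or a local file; Protect detects audio automatically
result_url = protector.protect(
    "https://example.com/sample.wav",        # hypothetical audio URL
    protect_rules=[{'metric': 'Toxicity'}],
    action="This message cannot be displayed",
    reason=True,
    timeout=25
)

result_local = protector.protect(
    "/path/to/sample.wav",                   # hypothetical local audio file path
    protect_rules=[{'metric': 'Toxicity'}],
    action="This message cannot be displayed",
    reason=True,
    timeout=25
)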