- Define custom guardrail criteria
- Enforce dynamic content filtering in production
- Instantly respond to violations based on metrics like Toxicity, Sexism, Prompt Injection, Data Privacy, and more

QuickStart
Use the Protect module from the FutureAGI SDK to evaluate and filter AI-generated content based on safety metrics like toxicity.

Step 1: Install the SDK
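A minimal sketch of the install command is below; `futureagi` is the assumed PyPI package name, so confirm it against the official installation instructions.

```bash
# Assumed package name for the FutureAGI SDK
pip install futureagi
```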
Step 2: Set Your API Keys
Make sure to set your API keys as environment variables:
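A sketch of the export commands, assuming the SDK reads its credentials from `FI_API_KEY` and `FI_SECRET_KEY` (verify the exact variable names in the SDK docs):

```bash
# Assumed variable names; replace the placeholders with your FutureAGI credentials
export FI_API_KEY="your-api-key"
export FI_SECRET_KEY="your-secret-key"
```

Step 3: Use Protect

The sketch below shows one way a Protect call could look. The import path (`fi.evals`), class name (`Protect`), method name (`protect`), and parameters (`protect_rules`, `action`, `reason`) are assumptions for illustration rather than confirmed API; consult the SDK reference for the actual signatures.

```python
import os

# Assumed import path and class name; check the FutureAGI SDK reference.
from fi.evals import Protect

# The SDK is assumed to read the credentials set in Step 2 from the environment.
assert os.getenv("FI_API_KEY") and os.getenv("FI_SECRET_KEY"), "Set your API keys first."

protector = Protect()

# Rules describing which safety metrics to check. The metric names follow the
# list above (Toxicity, Prompt Injection, ...), but the rule format itself is
# an assumption.
rules = [
    {"metric": "Toxicity"},
    {"metric": "Prompt Injection"},
]

model_output = "Some text produced by your LLM."

# Hypothetical call: evaluate the text against the rules and fall back to a
# safe message if any rule is violated.
result = protector.protect(
    model_output,
    protect_rules=rules,
    action="I'm sorry, I can't share that.",  # message returned when a rule triggers
    reason=True,                              # include which metric failed
)

print(result)
```

Under these assumptions, content that passes every rule is returned unchanged, while violating content is replaced by the `action` message along with the metric that triggered it.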
By enabling intelligent, real-time decisions on what passes through your model, Protect helps maintain trust, ensure safety, and strengthen the integrity of your AI in the real world.