The Protect feature is a vital tool for ensuring the safety and reliability of AI applications in production. By enabling users to define custom criteria and select specific metrics, Protect empowers them to screen and filter requests in real time, mitigating potential security threats and preventing harmful interactions. This functionality ensures that only trusted and valid requests are processed, enhancing the resilience and integrity of AI systems while fostering a secure user experience.

Protect allows you to access a subset of Future AGI’s safety metrics with lightning speed responses, making your AI application safe while in production.

The current evaluations supported by the protect feature is as follows :

Protect Metrics

Toxicity

This checks content for toxic or harmful language and whether the given text contains harmful or toxic content or not.

Click here to learn more about Toxicity

Tone

This analyses the tone and sentiment of content and classifies the tone of the given text from the given list of choices.

Click here to learn more about Tone

Sexism

This checks whether the given text contains sexist content or not.

Click here to learn more about Sexism

Prompt Injection

This checks input text for patterns, keywords, or structures indicative of prompt injection attempts, including commands or instructions designed to manipulate downstream systems beyond their intended functionality.

Click here to learn more about Prompt Injection

Data Privacy

This output for compliance with data privacy regulations (GDPR, HIPAA, etc.). Identifies potential privacy violations, sensitive data exposure, and adherence to privacy principles.

Click here to learn more about Data Privacy