Input | |||
---|---|---|---|
Required Input | Type | Description | |
input | string | The user-provided prompt to be analysed for injection attempts. |
Output | ||
---|---|---|
Field | Description | |
Result | Returns Passed if no prompt injection is detected, or Failed if prompt injection is detected. | |
Reason | Provides a detailed explanation of why the content was classified as containing or not containing prompt injection. |
What to do when Prompt Injection is Detected If prompt injection attempt is detected, immediate actions should be taken to mitigate potential risks. This includes blocking or sanitising the suspicious input, logging the attempt for security analysis, and triggering appropriate security alerts. To enhance system resilience, prompt injection detection patterns should be regularly updated, input validation rules should be strengthened, and additional security layers should be implemented.
Differentiating Prompt Injection with Toxicity Prompt Injection focuses on detecting attempts to manipulate system behaviour through carefully crafted inputs designed to override or alter intended responses. In contrast, Toxicity evaluation identifies harmful or offensive language within the content, ensuring that AI-generated outputs remain appropriate and respectful.