Prompt Injection
Detects attempts to manipulate or bypass the intended behaviour of language models through carefully crafted inputs. This evaluation is crucial for ensuring the security and reliability of AI systems by identifying potential security vulnerabilities in prompt handling.
Evaluation Using Interface
Input:
- Required Inputs:
input
: The user-provided prompt column to be analysed for injection attempts.
Output:
- Result: Passed / Failed
Interpretation:
- Passed: Indicates no prompt injection attempts were detected in the
input
. - Failed: Signifies that suspicious patterns indicative of prompt injection were identified in the
input
, requiring mitigation.
Evaluation Using Python SDK
Click here to learn how to setup evaluation using the Python SDK.
Input Type | Parameter | Type | Description |
---|---|---|---|
Required Inputs | input | string | The user-provided prompt to be analysed for injection attempts. |
Output | Type | Description |
---|---|---|
Result | bool | Returns 1.0 if no prompt injection is detected (Passed), 0.0 if prompt injection is detected (Failed). |
What to do when Prompt Injection is Detected
If prompt injection attempt is detected, immediate actions should be taken to mitigate potential risks. This includes blocking or sanitising the suspicious input, logging the attempt for security analysis, and triggering appropriate security alerts.
To enhance system resilience, prompt injection detection patterns should be regularly updated, input validation rules should be strengthened, and additional security layers should be implemented.
Differentiating Prompt Injection with Toxicity
Prompt Injection focuses on detecting attempts to manipulate system behaviour through carefully crafted inputs designed to override or alter intended responses. In contrast, Toxicity evaluation identifies harmful or offensive language within the content, ensuring that AI-generated outputs remain appropriate and respectful.