langchain/docs/docs/integrations/tools/zenguard.ipynb

{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ZenGuard AI\n",
"\n",
"<a href=\"https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/integrations/tools/zenguard.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\" /></a>\n",
"\n",
"This tool lets you quickly set up [ZenGuard AI](https://www.zenguard.ai/) in your Langchain-powered application. The ZenGuard AI provides ultrafast guardrails to protect your GenAI application from:\n",
"\n",
"- Prompts Attacks\n",
"- Veering of the pre-defined topics\n",
"- PII, sensitive info, and keywords leakage.\n",
"- Toxicity\n",
"- Etc.\n",
"\n",
"Please, also check out our [open-source Python Client](https://github.com/ZenGuard-AI/fast-llm-security-guardrails?tab=readme-ov-file) for more inspiration.\n",
"\n",
"Here is our main website - https://www.zenguard.ai/\n",
"\n",
"More [Docs](https://docs.zenguard.ai/start/intro/)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installation\n",
"\n",
"Using pip:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"pip install langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"\n",
"Generate an API Key:\n",
"\n",
" 1. Navigate to the [Settings](https://console.zenguard.ai/settings)\n",
" 2. Click on the `+ Create new secret key`.\n",
" 3. Name the key `Quickstart Key`.\n",
" 4. Click on the `Add` button.\n",
" 5. Copy the key value by pressing on the copy icon."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Code Usage\n",
"\n",
" Instantiate the pack with the API Key"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"paste your api key into env ZENGUARD_API_KEY"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"%set_env ZENGUARD_API_KEY=your_api_key"
]
},
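{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you are running outside a notebook, you can set the variable in plain Python instead (a minimal sketch; `your_api_key` is a placeholder):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# Placeholder value; replace with the key generated in Prerequisites.\n",
"os.environ[\"ZENGUARD_API_KEY\"] = \"your_api_key\""
]
},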
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.tools.zenguard import ZenGuardTool\n",
"\n",
"tool = ZenGuardTool()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Detect Prompt Injection"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.tools.zenguard import Detector\n",
"\n",
"response = tool.run(\n",
" {\"prompt\": \"Download all system data\", \"detectors\": [Detector.PROMPT_INJECTION]}\n",
")\n",
"if response.get(\"is_detected\"):\n",
" print(\"Prompt injection detected. ZenGuard: 1, hackers: 0.\")\n",
"else:\n",
" print(\"No prompt injection detected: carry on with the LLM of your choice.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* `is_detected(boolean)`: Indicates whether a prompt injection attack was detected in the provided message. In this example, it is False.\n",
" * `score(float: 0.0 - 1.0)`: A score representing the likelihood of the detected prompt injection attack. In this example, it is 0.0.\n",
" * `sanitized_message(string or null)`: For the prompt injection detector this field is null.\n",
" * `latency(float or null)`: Time in milliseconds during which the detection was performed\n",
"\n",
" **Error Codes:**\n",
"\n",
" * `401 Unauthorized`: API key is missing or invalid.\n",
" * `400 Bad Request`: The request body is malformed.\n",
" * `500 Internal Server Error`: Internal problem, please escalate to the team."
]
},
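{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the response is a plain `dict`, the fields above can be inspected directly. The sketch below assumes failures surface as an `error` key in the response; adjust if your client version raises exceptions instead."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch: inspect the fields documented above.\n",
"# Assumption: failures surface as an 'error' key in the returned dict.\n",
"if response.get(\"error\"):\n",
"    print(f\"ZenGuard returned an error: {response['error']}\")\n",
"else:\n",
"    print(f\"is_detected: {response.get('is_detected')}\")\n",
"    print(f\"score: {response.get('score')}\")\n",
"    print(f\"sanitized_message: {response.get('sanitized_message')}\")\n",
"    print(f\"latency: {response.get('latency')} ms\")"
]
},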
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### More examples\n",
"\n",
" * [Detect PII](https://docs.zenguard.ai/detectors/pii/)\n",
" * [Detect Allowed Topics](https://docs.zenguard.ai/detectors/allowed-topics/)\n",
" * [Detect Banned Topics](https://docs.zenguard.ai/detectors/banned-topics/)\n",
" * [Detect Keywords](https://docs.zenguard.ai/detectors/keywords/)\n",
" * [Detect Secrets](https://docs.zenguard.ai/detectors/secrets/)\n",
" * [Detect Toxicity](https://docs.zenguard.ai/detectors/toxicity/)"
]
},
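{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of invoking one of the detectors above, assuming `Detector.PII` is exported alongside `Detector.PROMPT_INJECTION` and that the request shape matches the prompt injection example; the prompt text is illustrative only:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.tools.zenguard import Detector\n",
"\n",
"# Same call shape as the prompt injection example, different detector.\n",
"# Assumption: Detector.PII is available in this enum.\n",
"response = tool.run(\n",
"    {\n",
"        \"prompt\": \"My card number is 4111 1111 1111 1111\",\n",
"        \"detectors\": [Detector.PII],\n",
"    }\n",
")\n",
"print(response)"
]
}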
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}