# Layerup Security
The [Layerup Security](https://uselayerup.com) integration allows you to secure your calls to any LangChain LLM, LLM chain, or LLM agent. The Layerup Security object wraps around any existing LLM, adding a secure layer between your users and your LLMs.

Although the Layerup Security object is designed as an LLM, it is not actually an LLM itself; it simply wraps around an LLM, allowing it to provide the same functionality as the underlying LLM.
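Conceptually, the wrapper exposes the same interface as the LLM it wraps and delegates calls to it, running checks on the way in and out. A minimal sketch of that idea in plain Python (`FakeLLM` and `SecureLLM` are illustrative names, not part of the SDK):

```python
# Illustrative sketch of the wrapper pattern -- NOT the real
# LayerupSecurity implementation.
class FakeLLM:
    def invoke(self, prompt: str) -> str:
        return f"echo: {prompt}"


class SecureLLM:
    """Exposes the same invoke() interface as the LLM it wraps."""

    def __init__(self, llm):
        self.llm = llm

    def invoke(self, prompt: str) -> str:
        # Prompt-side checks (guardrails, PII masking) would run here.
        response = self.llm.invoke(prompt)
        # Response-side checks would run here.
        return response


secure = SecureLLM(FakeLLM())
print(secure.invoke("hello"))  # echo: hello
```

Because the wrapper preserves the interface, it can be dropped anywhere the underlying LLM is used.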
## Setup
First, you'll need a Layerup Security account from the Layerup [website](https://uselayerup.com).

Next, create a project via the [dashboard](https://dashboard.uselayerup.com) and copy your API key. We recommend putting your API key in your project's environment.
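For example, you could export the key as an environment variable and read it at runtime (the variable name `LAYERUP_API_KEY` here is just a convention, not mandated by the SDK):

```python
import os

# Read the API key from the environment instead of hardcoding it.
# "LAYERUP_API_KEY" is an assumed variable name; use your own convention.
layerup_api_key = os.environ.get("LAYERUP_API_KEY", "")
```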
Install the Layerup Security SDK:
```bash
pip install LayerupSecurity
```
And install LangChain Community:
```bash
pip install langchain-community
```
And now you're ready to start protecting your LLM calls with Layerup Security!
```python
from datetime import datetime

from langchain_community.llms.layerup_security import LayerupSecurity
from langchain_openai import OpenAI

# Create an instance of your favorite LLM
openai = OpenAI(
    model_name="gpt-3.5-turbo",
    openai_api_key="OPENAI_API_KEY",
)

# Configure Layerup Security
layerup_security = LayerupSecurity(
    # Specify the LLM that Layerup Security will wrap around
    llm=openai,

    # Layerup API key, from the Layerup dashboard
    layerup_api_key="LAYERUP_API_KEY",

    # Custom base URL, if self-hosting
    layerup_api_base_url="https://api.uselayerup.com/v1",

    # List of guardrails to run on prompts before the LLM is invoked
    prompt_guardrails=[],

    # List of guardrails to run on responses from the LLM
    response_guardrails=["layerup.hallucination"],

    # Whether or not to mask the prompt for PII & sensitive data
    # before it is sent to the LLM
    mask=False,

    # Metadata for abuse tracking, customer tracking, and scope tracking
    metadata={"customer": "example@uselayerup.com"},

    # Handler for guardrail violations on the prompt guardrails
    handle_prompt_guardrail_violation=(
        lambda violation: {
            "role": "assistant",
            "content": (
                "There was sensitive data! I cannot respond. "
                "Here's a dynamic canned response. Current date: {}"
            ).format(datetime.now())
        }
        if violation["offending_guardrail"] == "layerup.sensitive_data"
        else None
    ),

    # Handler for guardrail violations on the response guardrails
    handle_response_guardrail_violation=(
        lambda violation: {
            "role": "assistant",
            "content": (
                "Custom canned response with dynamic data! "
                "The violation rule was {}."
            ).format(violation["offending_guardrail"])
        }
    ),
)

response = layerup_security.invoke(
    "Summarize this message: my name is Bob Dylan. My SSN is 123-45-6789."
)
```
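The violation handlers above are plain callables, so their logic can be exercised in isolation. A small sketch with a mock violation dict (assuming, per the example above, that violations carry an `offending_guardrail` key):

```python
from datetime import datetime


def handle_prompt_violation(violation):
    # Return a canned assistant message for sensitive-data violations,
    # or None to fall through to default handling.
    if violation["offending_guardrail"] == "layerup.sensitive_data":
        return {
            "role": "assistant",
            "content": "There was sensitive data! I cannot respond. "
                       "Current date: {}".format(datetime.now()),
        }
    return None


# Mock violation, for illustration only
mock = {"offending_guardrail": "layerup.sensitive_data"}
print(handle_prompt_violation(mock)["role"])  # assistant
```

Returning `None` for unmatched guardrails lets you reserve canned responses for the specific violations you care about.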