Compare commits

...

31 Commits

Author SHA1 Message Date
Erick Friis
797b665581 docs: v02_url [rfc] 2024-05-21 13:26:23 -07:00
junefish
5a40413bfd docs: add Pinecone tab to vector stores page (#21969)
Thank you for contributing to LangChain!

- [x] **PR title**: docs: add Pinecone tab to [vector stores
page](https://python.langchain.com/v0.1/docs/modules/data_connection/vectorstores/).


- [x] **PR message**: Recreation of
https://github.com/langchain-ai/langchain/pull/21721.
Adds information about PineconeVectorStore to the LangChain vector
stores page. Although this page is deprecated, it still shows up
prominently in Google search results, so it will still be very helpful
to users to have correct information.
![search
results](https://github.com/langchain-ai/langchain/assets/19216250/e05d8d74-03da-44a1-b87f-0f8087d3c13a)


- [x] **Add tests and docs**: N/A


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
2024-05-21 09:35:20 -07:00
Bagatur
cb45caa02e docs: link to v0.1 api ref (#21945) 2024-05-20 19:01:55 -07:00
SN
bc84bc44b6 docs: Update pairwise evaluations page to link to LangSmith evaluate_comparative function (#21686)
2024-05-20 16:22:24 -07:00
Erick Friis
b199db9767 docs: update announcement bar (v0.1) (#21855) 2024-05-18 00:35:42 +00:00
Erick Friis
91c6117fd5 docs: align vercel core dep (v0.1) (#21838) 2024-05-17 14:21:28 -07:00
Erick Friis
65e6239f14 docs: cookbook redirect (v0.1) (#21823) 2024-05-17 10:02:00 -07:00
Erick Friis
4bf7add9b8 docs: version dropdown (v0.1) (#21786) 2024-05-16 17:01:37 -07:00
Erick Friis
c9ec8c5d07 docs: dont rewrite ipynb links that have double slash (v0.1) (#21776) 2024-05-16 13:26:09 -07:00
Erick Friis
d53de65aca docs: anthropic forced tool calling (#21774) 2024-05-16 18:34:02 +00:00
Mohammad Mohtashim
f02e27b664 Docs[patch]: v0.1 UseCases Notebooks' Google Colab Links fix (#21722)
- **Description:** The v0.1 use-case notebooks' Google Colab links were
pointing to master, but they should point to v0.1, which was raising the
following issue.
  - **Issue:** #21690
2024-05-15 15:44:35 -04:00
Bagatur
a79c8b3834 docs: openai bind tools nit (#21693) 2024-05-15 01:06:02 +00:00
Erick Friis
4ea6f882d4 infra: remove prints from notebook build (v0.1) (#21689) 2024-05-14 16:27:44 -07:00
Erick Friis
03bd0f51c9 docs: ignore nb echo:false blocks (v0.1) (#21642) 2024-05-14 00:22:53 +00:00
Erick Friis
be3d5904da infra: fix api ref link generation (v0.1) (#21632) 2024-05-13 15:16:54 -07:00
ccurme
9d850054bc docs: update heading in v0.1 extraction docs (#21605) 2024-05-13 07:07:43 -07:00
ccurme
5c07a96de1 langchain: release 0.1.20 (#21549) 2024-05-10 21:15:56 +00:00
ccurme
7440ce0cb4 langchain: cherry-pick moderation fix into v0.1 (#21544)
```
git checkout v0.1
git pull
git checkout -b cc/cherry_pick_into_v01
git cherry-pick d3ca2cc8c3
```

Co-authored-by: Matt Florence <matt@mattflo.com>
Co-authored-by: Emilia Katari <emilia@outpace.com>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-10 21:05:33 +00:00
Erick Friis
73a5b3dd98 infra: codespell in v1 (#21547) 2024-05-10 13:57:27 -07:00
Erick Friis
cfd827b574 docs: announcement bar (#21511) 2024-05-10 10:10:52 -07:00
Erick Friis
a71c4e22d1 docs: 404 page top level (#21510) 2024-05-09 17:14:56 -07:00
Erick Friis
29be604023 Revert "docs: redirect base slug (0.1)" (#21500)
Reverts langchain-ai/langchain#21458
2024-05-09 12:21:25 -07:00
Erick Friis
25f8467ffb docs: redirect base slug (0.1) (#21458) 2024-05-09 10:52:17 -07:00
Erick Friis
0c84aea1d4 docs: spell correction (#21459) 2024-05-09 10:51:04 -07:00
Erick Friis
d0f043a765 docs: baseUrl for ganalytics (0.1) (#21456) 2024-05-08 18:07:10 -07:00
Erick Friis
264f677528 langchain: bump core community (#21453) 2024-05-08 15:58:01 -07:00
Erick Friis
3b99f428a0 community: fix core version str (#21452) 2024-05-08 15:36:12 -07:00
Erick Friis
6eab0a4709 community: release 0.0.38 (#21451) 2024-05-08 15:33:08 -07:00
Erick Friis
be8006fa9c langchain: fix core version dep (#21449) 2024-05-08 15:08:55 -07:00
Erick Friis
e316d30dcb langchain: release 0.1.19 (#21447) 2024-05-08 14:52:03 -07:00
Erick Friis
61b5159ed0 docs: v0.1 build semantics [BEGIN v0.1 BRANCH] (#21444) 2024-05-08 14:29:49 -07:00
27 changed files with 858 additions and 656 deletions

View File

@@ -3,9 +3,9 @@ name: CI / cd . / make spell_check
on:
push:
branches: [master]
branches: [master, v0.1]
pull_request:
branches: [master]
branches: [master, v0.1]
permissions:
contents: read

View File

@@ -13,7 +13,7 @@ OUTPUT_NEW_DOCS_DIR = $(OUTPUT_NEW_DIR)/docs
PYTHON = .venv/bin/python
PARTNER_DEPS_LIST := $(shell ls -1 ../libs/partners | grep -vE "airbyte|ibm" | xargs -I {} echo "../libs/partners/{}" | tr '\n' ' ')
PARTNER_DEPS_LIST := $(shell find ../libs/partners -mindepth 1 -maxdepth 1 -type d -exec test -e "{}/pyproject.toml" \; -print | grep -vE "airbyte|ibm|ai21" | tr '\n' ' ')
PORT ?= 3001
@@ -48,8 +48,6 @@ generate-files:
wget -q https://raw.githubusercontent.com/langchain-ai/langgraph/main/README.md -O $(INTERMEDIATE_DIR)/langgraph.md
$(PYTHON) scripts/resolve_local_links.py $(INTERMEDIATE_DIR)/langgraph.md https://github.com/langchain-ai/langgraph/tree/main/
$(PYTHON) scripts/generate_api_reference_links.py --docs_dir $(INTERMEDIATE_DIR)
copy-infra:
mkdir -p $(OUTPUT_NEW_DIR)
cp -r src $(OUTPUT_NEW_DIR)
@@ -68,7 +66,20 @@ render:
md-sync:
rsync -avm --include="*/" --include="*.mdx" --include="*.md" --include="*.png" --exclude="*" $(INTERMEDIATE_DIR)/ $(OUTPUT_NEW_DOCS_DIR)
build: install-py-deps generate-files copy-infra render md-sync
generate-references:
$(PYTHON) scripts/generate_api_reference_links.py --docs_dir $(OUTPUT_NEW_DOCS_DIR)
build: install-py-deps generate-files copy-infra render md-sync generate-references
vercel-build: install-vercel-deps build
rm -rf docs
mv $(OUTPUT_NEW_DOCS_DIR) docs
rm -rf build
yarn run docusaurus build
mv build v0.1
mkdir build
mv v0.1 build
mv build/v0.1/404.html build
start:
cd $(OUTPUT_NEW_DIR) && yarn && yarn start --port=$(PORT)

View File

@@ -7,9 +7,9 @@
"source": [
"# Create a runnable with the @chain decorator\n",
"\n",
"You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionaly equivalent to wrapping in a [`RunnableLambda`](/docs/expression_language/primitives/functions).\n",
"You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionally equivalent to wrapping in a [`RunnableLambda`](/docs/expression_language/primitives/functions).\n",
"\n",
"This will have the benefit of improved observability by tracing your chain correctly. Any calls to runnables inside this function will be traced as nested childen.\n",
"This will have the benefit of improved observability by tracing your chain correctly. Any calls to runnables inside this function will be traced as nested children.\n",
"\n",
"It will also allow you to use this as any other runnable, compose it in chain, etc.\n",
"\n",

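For context, a minimal runnable sketch of the `@chain` decorator described in that cell (illustrative only, not part of this diff; `FakeListLLM` stands in for a real model so the snippet is self-contained):

```python
from langchain_community.llms.fake import FakeListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import chain

llm = FakeListLLM(responses=["Why did the chicken cross the road?"])
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")


@chain
def custom_chain(topic: str) -> str:
    # Runnables invoked inside the decorated function are traced as nested children.
    prompt_value = prompt.invoke({"topic": topic})
    output = llm.invoke(prompt_value)
    return StrOutputParser().invoke(output)


# Behaves like any other runnable and can be composed further.
print(custom_chain.invoke("chickens"))
```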
View File

@@ -13,12 +13,13 @@ LangChain simplifies every stage of the LLM application lifecycle:
- **Deployment**: Turn any chain into an API with [LangServe](/docs/langserve).
import ThemedImage from '@theme/ThemedImage';
import useBaseUrl from '@docusaurus/useBaseUrl';
<ThemedImage
alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
sources={{
light: '/svg/langchain_stack.svg',
dark: '/svg/langchain_stack_dark.svg',
light: useBaseUrl('/svg/langchain_stack.svg'),
dark: useBaseUrl('/svg/langchain_stack_dark.svg'),
}}
title="LangChain Framework Overview"
/>

View File

@@ -17,7 +17,7 @@ Here's a summary of the key methods and properties of a comparison evaluator:
- `requires_reference`: This property specifies whether this evaluator requires a reference label.
:::note LangSmith Support
The [run_on_dataset](https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.smith) evaluation method is designed to evaluate only a single model at a time, and thus, doesn't support these evaluators.
Pairwise evaluations are supported in LangSmith via the [`evaluate_comparative`](https://docs.smith.langchain.com/how_to_guides/evaluation/evaluate_pairwise) function.
:::
Detailed information about creating custom evaluators and the available built-in comparison evaluators is provided in the following sections.

View File

@@ -80,7 +80,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "238bdbaa-526a-4130-89e9-523aa44bb196",
"metadata": {},
"outputs": [],
@@ -250,16 +250,7 @@
"execution_count": 3,
"id": "42f87466-cb8e-490d-a9f8-aa0f8e9b4217",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/bagatur/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: The function `bind_tools` is in beta. It is actively being worked on, so the API may change.\n",
" warn_beta(\n"
]
}
],
"outputs": [],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
@@ -369,13 +360,49 @@
"id": "90e015e0-c6e5-4ff5-8fb9-be0cd3c86395",
"metadata": {},
"source": [
"::: {.callout-tip}\n",
":::tip\n",
"\n",
"ChatAnthropic model outputs are always a single AI message that can have either a single string or a list of content blocks. The content blocks can be text blocks or tool-use blocks. There can be multiple of each and they can be interspersed.\n",
"\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "b5145dea-0183-4cab-b9e2-0e35fb8370cf",
"metadata": {},
"source": [
"### Forcing tool calls\n",
"\n",
"By default the model can choose whether to call any tools. To force the model to call at least one tool we can specify `bind_tools(..., tool_choice=\"any\")` and to force the model to call a specific tool we can pass in that tool name `bind_tools(..., tool_choice=\"GetWeather\")`"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "05993626-060c-449f-8069-e52d31442977",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'GetWeather',\n",
" 'args': {'location': '<UNKNOWN>'},\n",
" 'id': 'toolu_01DwWjKzHPs6EHCUPxsGm9bN'}]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_with_force_tools = llm.bind_tools([GetWeather], tool_choice=\"GetWeather\")\n",
"# Notice the model will still return tool calls despite a message that\n",
"# doesn't have anything to do with the tools.\n",
"llm_with_force_tools.invoke(\"this doesn't really require tool use\").tool_calls"
]
},
{
"cell_type": "markdown",
"id": "8652ee98-814c-4ed6-9def-275eeaa9651e",

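As a side note, a hedged sketch of the `tool_choice="any"` variant mentioned in the added cell (illustrative, not part of this diff; it assumes the same `llm` and `GetWeather` tool defined earlier in the notebook):

```python
# Force the model to call at least one of the bound tools, whichever it picks.
llm_with_any_tool = llm.bind_tools([GetWeather], tool_choice="any")
llm_with_any_tool.invoke("this doesn't really require tool use").tool_calls
```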
View File

@@ -147,7 +147,7 @@
"\n",
"### ChatOpenAI.bind_tools()\n",
"\n",
"With `ChatAnthropic.bind_tools`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to an Anthropic tool schemas, which looks like:\n",
"With `ChatOpenAI.bind_tools`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to an OpenAI tool schemas, which looks like:\n",
"```\n",
"{\n",
" \"name\": \"...\",\n",

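For reference, a minimal sketch of the `ChatOpenAI.bind_tools` usage described above (assumes `langchain-openai` is installed and `OPENAI_API_KEY` is set; the model name is illustrative):

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI


class GetWeather(BaseModel):
    """Get the current weather in a given location."""

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
llm_with_tools = llm.bind_tools([GetWeather])
# The Pydantic class is converted to an OpenAI tool schema under the hood.
llm_with_tools.invoke("What's the weather like in San Francisco?").tool_calls
```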
View File

@@ -1,6 +1,7 @@
---
sidebar_position: 3
sidebar_class_name: hidden
v02_url: /v0.2/docs/how_to/vectorstores/
---
# Vector stores
@@ -56,6 +57,50 @@ documents = text_splitter.split_documents(raw_documents)
db = Chroma.from_documents(documents, OpenAIEmbeddings())
```
</TabItem>
<TabItem value="pinecone" label="Pinecone">
This walkthrough uses the `Pinecone` vector database, which provides broad functionality to store and search over vectors.
```bash
pip install langchain-pinecone
```
We want to use OpenAIEmbeddings, so we have to get the OpenAI API key.
```python
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
```
```python
from langchain_community.document_loaders import TextLoader
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter
# Load the document, split it into chunks, and embed each chunk.
loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
```
Next, go to the [Pinecone console](https://app.pinecone.io) and create a new index with `dimension=1536` called "langchain-test-index". Then, copy the API key and index name.
```python
from langchain_pinecone import PineconeVectorStore
os.environ['PINECONE_API_KEY'] = '<YOUR_PINECONE_API_KEY>'
index_name = "langchain-test-index"
# Connect to Pinecone index and insert the chunked docs as contents
docsearch = PineconeVectorStore.from_documents(docs, embeddings, index_name=index_name)
```
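Once the index is populated, you can query it like the other vector stores on this page; a quick sketch reusing the `docsearch` object created above:
```python
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
```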
</TabItem>
<TabItem value="faiss" label="FAISS">
@@ -280,4 +325,4 @@ Ive worked on these issues a long time.
I know what works: Investing in crime prevention and community police officers wholl walk the beat, wholl know the neighborhood, and who can restore trust and safety.
```
</CodeOutputBlock>
</CodeOutputBlock>

View File

@@ -16,7 +16,7 @@
"id": "a15e6a18",
"metadata": {},
"source": [
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/apis.ipynb)\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/v0.1/docs/docs/use_cases/apis.ipynb)\n",
"\n",
"## Use case \n",
"\n",

View File

@@ -14,7 +14,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/code_understanding.ipynb)\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/v0.1/docs/docs/use_cases/code_understanding.ipynb)\n",
"\n",
"## Use case\n",
"\n",

View File

@@ -17,7 +17,7 @@
"id": "aa3571cc",
"metadata": {},
"source": [
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/data_generation.ipynb)\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/v0.1/docs/docs/use_cases/data_generation.ipynb)\n",
"\n",
"## Use case\n",
"\n",

View File

@@ -56,9 +56,9 @@
"\n",
"Head to the [Guidelines](/docs/use_cases/extraction/guidelines) page to see a list of opinionated guidelines on how to get the best performance for extraction use cases.\n",
"\n",
"## Use Case Accelerant\n",
"## Reference Application\n",
"\n",
"[langchain-extract](https://github.com/langchain-ai/langchain-extract) is a starter repo that implements a simple web server for information extraction from text and files using LLMs. It is build using **FastAPI**, **LangChain** and **Postgresql**. Feel free to adapt it to your own use cases.\n",
"[langchain-extract](https://github.com/langchain-ai/langchain-extract) is a starter repo that implements a simple web server for information extraction from text and files using LLMs. It is built using **FastAPI**, **LangChain** and **Postgresql**. Feel free to adapt it to your own use cases.\n",
"\n",
"## Other Resources\n",
"\n",

View File

@@ -16,7 +16,7 @@
"id": "cf13f702",
"metadata": {},
"source": [
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/summarization.ipynb)\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/v0.1/docs/docs/use_cases/summarization.ipynb)\n",
"\n",
"## Use case\n",
"\n",

View File

@@ -16,7 +16,7 @@
"id": "a0507a4b",
"metadata": {},
"source": [
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/tagging.ipynb)\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/v0.1/docs/docs/use_cases/tagging.ipynb)\n",
"\n",
"## Use case\n",
"\n",

View File

@@ -16,7 +16,7 @@
"id": "6605e7f7",
"metadata": {},
"source": [
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/web_scraping.ipynb)\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/v0.1/docs/docs/use_cases/web_scraping.ipynb)\n",
"\n",
"## Use case\n",
"\n",

View File

@@ -9,6 +9,8 @@ require("dotenv").config();
const baseLightCodeBlockTheme = require("prism-react-renderer/themes/vsLight");
const baseDarkCodeBlockTheme = require("prism-react-renderer/themes/vsDark");
const baseUrl = "/v0.1/";
/** @type {import('@docusaurus/types').Config} */
const config = {
title: "🦜️🔗 LangChain",
@@ -18,7 +20,7 @@ const config = {
url: "https://python.langchain.com",
// Set the /<baseUrl>/ pathname under which your site is served
// For GitHub pages deployment, it is often '/<projectName>/'
baseUrl: "/",
baseUrl: baseUrl,
trailingSlash: true,
onBrokenLinks: "throw",
onBrokenMarkdownLinks: "throw",
@@ -118,6 +120,10 @@ const config = {
themeConfig:
/** @type {import('@docusaurus/preset-classic').ThemeConfig} */
({
announcementBar: {
content: 'LangChain v0.2 is out! You are currently viewing the old v0.1 docs. <strong>View the latest docs <a href="/v0.2/docs/introduction/">here</a>.</strong>',
isCloseable: true,
},
docs: {
sidebar: {
hideable: true,
@@ -165,7 +171,7 @@ const config = {
position: "left",
},
{
href: "https://api.python.langchain.com",
href: "https://api.python.langchain.com/en/v0.1/",
label: "API Reference",
position: "left",
},
@@ -205,6 +211,21 @@ const config = {
},
]
},
{
type: "dropdown",
label: "v0.1",
position: "right",
items: [
{
label: "v0.2",
href: "https://python.langchain.com/v0.2/docs/introduction"
},
{
label: "v0.1",
href: "/docs/get_started/introduction"
}
]
},
{
type: "dropdown",
label: "🦜️🔗",
@@ -318,7 +339,7 @@ const config = {
}),
scripts: [
"/js/google_analytics.js",
baseUrl + "js/google_analytics.js",
{
src: "https://www.googletagmanager.com/gtag/js?id=G-9B66JQQH2F",
async: true,

View File

@@ -5,7 +5,7 @@
"scripts": {
"docusaurus": "docusaurus",
"start": "rm -rf ./docs/api && docusaurus start",
"build": "bash vercel_build.sh && rm -rf ./build && docusaurus build",
"build": "make vercel-build",
"swizzle": "docusaurus swizzle",
"deploy": "docusaurus deploy",
"clear": "docusaurus clear",

View File

@@ -7,7 +7,7 @@ from typing import Iterable, Tuple
import nbformat
from nbconvert.exporters import MarkdownExporter
from nbconvert.preprocessors import Preprocessor, RegexRemovePreprocessor
from nbconvert.preprocessors import Preprocessor
class EscapePreprocessor(Preprocessor):
@@ -27,7 +27,9 @@ class EscapePreprocessor(Preprocessor):
)
# rewrite .ipynb links to .md
cell.source = re.sub(
r"\[([^\]]*)\]\(([^)]*).ipynb\)", r"[\1](\2.md)", cell.source
r"\[([^\]]*)\]\((?![^\)]*//)([^)]*)\.ipynb\)",
r"[\1](\2.md)",
cell.source,
)
return cell, resources
@@ -79,11 +81,26 @@ class ExtractAttachmentsPreprocessor(Preprocessor):
return cell, resources
class CustomRegexRemovePreprocessor(Preprocessor):
def check_conditions(self, cell):
pattern = re.compile(r"(?s)(?:\s*\Z)|(?:.*#\s*\|\s*output:\s*false.*)")
rtn = not pattern.match(cell.source)
if not rtn:
return False
else:
return True
def preprocess(self, nb, resources):
nb.cells = [cell for cell in nb.cells if self.check_conditions(cell)]
return nb, resources
exporter = MarkdownExporter(
preprocessors=[
EscapePreprocessor,
ExtractAttachmentsPreprocessor,
RegexRemovePreprocessor(patterns=[r"^\s*$"]),
CustomRegexRemovePreprocessor,
],
template_name="mdoutput",
extra_template_basedirs=["./scripts/notebook_convert_templates"],
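A small sketch exercising the updated link-rewriting regex from this diff: relative `.ipynb` links are rewritten to `.md`, while any link whose target contains `//` (e.g. an absolute URL) is left untouched.

```python
import re

# Same pattern as in the preprocessor above.
pattern = r"\[([^\]]*)\]\((?![^\)]*//)([^)]*)\.ipynb\)"
replacement = r"[\1](\2.md)"

local = "[guide](../how_to/example.ipynb)"
remote = "[colab](https://colab.research.google.com/example.ipynb)"

print(re.sub(pattern, replacement, local))   # [guide](../how_to/example.md)
print(re.sub(pattern, replacement, remote))  # unchanged: target contains //
```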

View File

@@ -243,3 +243,17 @@ nav, h1, h2, h3, h4 {
background: url("data:image/svg+xml,%3Csvg viewBox='0 0 24 24' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath fill='white' d='M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12'/%3E%3C/svg%3E")
no-repeat;
}
div[class^=announcementBar_] {
height:40px !important;
font-size: 20px !important;
}
[data-theme='dark'] div[class^=announcementBar_] {
background-color: #1b1b1b;
color: #fff;
}
[data-theme='dark'] div[class^=announcementBar_] button {
color: #fff;
}

View File

@@ -9,7 +9,8 @@
import React from "react";
import { Redirect } from "@docusaurus/router";
import useBaseUrl from "@docusaurus/useBaseUrl";
export default function Home() {
return <Redirect to="docs/get_started/introduction" />;
return <Redirect to={useBaseUrl("/docs/get_started/introduction")} />;
}

File diff suppressed because it is too large.

View File

@@ -1,6 +1,6 @@
-e ../libs/core
-e ../libs/langchain
-e ../libs/community
-e ../libs/core
-e ../libs/experimental
-e ../libs/text-splitters
langchain-cohere

View File

@@ -3964,7 +3964,7 @@ files = [
[[package]]
name = "langchain-core"
version = "0.1.51"
version = "0.1.52"
description = "Building applications with LLMs through composability"
optional = false
python-versions = ">=3.8.1,<4.0"
@@ -10044,4 +10044,4 @@ extended-testing = ["aiosqlite", "aleph-alpha-client", "anthropic", "arxiv", "as
[metadata]
lock-version = "2.0"
python-versions = ">=3.8.1,<4.0"
content-hash = "ca64e52a60e8ee6f2f4ea303e1779a4508f401e283f63861161cb6a9560e2178"
content-hash = "6427b52f752b6b8e46c1bc56f03db5a25e959fb471d0b2de3607696a97fdcd77"

View File

@@ -1,6 +1,6 @@
[tool.poetry]
name = "langchain-community"
version = "0.0.37"
version = "0.0.38"
description = "Community contributed LangChain integrations."
authors = []
license = "MIT"
@@ -9,7 +9,7 @@ repository = "https://github.com/langchain-ai/langchain"
[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
langchain-core = "^0.1.51"
langchain-core = "^0.1.52"
SQLAlchemy = ">=1.4,<3"
requests = "^2"
PyYAML = ">=5.3"

View File

@@ -1,9 +1,13 @@
"""Pass input through a moderation endpoint."""
from typing import Any, Dict, List, Optional
from langchain_core.callbacks import CallbackManagerForChainRun
from langchain_core.pydantic_v1 import root_validator
from langchain_core.utils import get_from_dict_or_env
from langchain_core.callbacks import (
AsyncCallbackManagerForChainRun,
CallbackManagerForChainRun,
)
from langchain_core.pydantic_v1 import Field, root_validator
from langchain_core.utils import check_package_version, get_from_dict_or_env
from langchain.chains.base import Chain
@@ -25,6 +29,7 @@ class OpenAIModerationChain(Chain):
"""
client: Any #: :meta private:
async_client: Any #: :meta private:
model_name: Optional[str] = None
"""Moderation model name to use."""
error: bool = False
@@ -33,6 +38,7 @@ class OpenAIModerationChain(Chain):
output_key: str = "output" #: :meta private:
openai_api_key: Optional[str] = None
openai_organization: Optional[str] = None
_openai_pre_1_0: bool = Field(default=None)
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
@@ -52,7 +58,16 @@ class OpenAIModerationChain(Chain):
openai.api_key = openai_api_key
if openai_organization:
openai.organization = openai_organization
values["client"] = openai.Moderation # type: ignore
values["_openai_pre_1_0"] = False
try:
check_package_version("openai", gte_version="1.0")
except ValueError:
values["_openai_pre_1_0"] = True
if values["_openai_pre_1_0"]:
values["client"] = openai.Moderation
else:
values["client"] = openai.OpenAI()
values["async_client"] = openai.AsyncOpenAI()
except ImportError:
raise ImportError(
"Could not import openai python package. "
@@ -76,8 +91,12 @@ class OpenAIModerationChain(Chain):
"""
return [self.output_key]
def _moderate(self, text: str, results: dict) -> str:
if results["flagged"]:
def _moderate(self, text: str, results: Any) -> str:
if self._openai_pre_1_0:
condition = results["flagged"]
else:
condition = results.flagged
if condition:
error_str = "Text was found that violates OpenAI's content policy."
if self.error:
raise ValueError(error_str)
@@ -87,10 +106,26 @@ class OpenAIModerationChain(Chain):
def _call(
self,
inputs: Dict[str, str],
inputs: Dict[str, Any],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, str]:
) -> Dict[str, Any]:
text = inputs[self.input_key]
results = self.client.create(text)
output = self._moderate(text, results["results"][0])
if self._openai_pre_1_0:
results = self.client.create(text)
output = self._moderate(text, results["results"][0])
else:
results = self.client.moderations.create(input=text)
output = self._moderate(text, results.results[0])
return {self.output_key: output}
async def _acall(
self,
inputs: Dict[str, Any],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
if self._openai_pre_1_0:
return await super()._acall(inputs, run_manager=run_manager)
text = inputs[self.input_key]
results = await self.async_client.moderations.create(input=text)
output = self._moderate(text, results.results[0])
return {self.output_key: output}
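For context, a minimal hedged usage sketch of the chain being patched here (assumes the `openai` package is installed and `OPENAI_API_KEY` is set; the sample text is illustrative):

```python
from langchain.chains import OpenAIModerationChain

# After this patch the chain works against both openai<1.0 and openai>=1.0.
moderation = OpenAIModerationChain()        # replaces flagged text with an error string
strict = OpenAIModerationChain(error=True)  # raises ValueError on flagged text instead

print(moderation.invoke({"input": "This is a perfectly harmless sentence."})["output"])
```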

View File

@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 1.7.1 and should not be changed by hand.
# This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand.
[[package]]
name = "aiodns"
@@ -3469,7 +3469,7 @@ files = [
[[package]]
name = "langchain-community"
version = "0.0.37"
version = "0.0.38"
description = "Community contributed LangChain integrations."
optional = false
python-versions = ">=3.8.1,<4.0"
@@ -3479,7 +3479,7 @@ develop = true
[package.dependencies]
aiohttp = "^3.8.3"
dataclasses-json = ">= 0.5.7, < 0.7"
langchain-core = "^0.1.51"
langchain-core = "^0.1.52"
langsmith = "^0.1.0"
numpy = "^1"
PyYAML = ">=5.3"
@@ -3497,7 +3497,7 @@ url = "../community"
[[package]]
name = "langchain-core"
version = "0.1.51"
version = "0.1.52"
description = "Building applications with LLMs through composability"
optional = false
python-versions = ">=3.8.1,<4.0"
@@ -4727,6 +4727,7 @@ description = "Nvidia JIT LTO Library"
optional = true
python-versions = ">=3"
files = [
{file = "nvidia_nvjitlink_cu12-12.4.99-py3-none-manylinux2014_aarch64.whl", hash = "sha256:75d6498c96d9adb9435f2bbdbddb479805ddfb97b5c1b32395c694185c20ca57"},
{file = "nvidia_nvjitlink_cu12-12.4.99-py3-none-manylinux2014_x86_64.whl", hash = "sha256:c6428836d20fe7e327191c175791d38570e10762edc588fb46749217cd444c74"},
{file = "nvidia_nvjitlink_cu12-12.4.99-py3-none-win_amd64.whl", hash = "sha256:991905ffa2144cb603d8ca7962d75c35334ae82bf92820b6ba78157277da1ad2"},
]
@@ -6074,26 +6075,31 @@ python-versions = ">=3.8"
files = [
{file = "PyMuPDF-1.23.26-cp310-none-macosx_10_9_x86_64.whl", hash = "sha256:645a05321aecc8c45739f71f0eb574ce33138d19189582ffa5241fea3a8e2549"},
{file = "PyMuPDF-1.23.26-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:2dfc9e010669ae92fade6fb72aaea49ebe3b8dcd7ee4dcbbe50115abcaa4d3fe"},
{file = "PyMuPDF-1.23.26-cp310-none-manylinux2014_aarch64.whl", hash = "sha256:734ee380b3abd038602be79114194a3cb74ac102b7c943bcb333104575922c50"},
{file = "PyMuPDF-1.23.26-cp310-none-manylinux2014_x86_64.whl", hash = "sha256:b22f8d854f8196ad5b20308c1cebad3d5189ed9f0988acbafa043947ea7e6c55"},
{file = "PyMuPDF-1.23.26-cp310-none-win32.whl", hash = "sha256:cc0f794e3466bc96b5bf79d42fbc1551428751e3fef38ebc10ac70396b676144"},
{file = "PyMuPDF-1.23.26-cp310-none-win_amd64.whl", hash = "sha256:2eb701247d8e685a24e45899d1175f01a3ce5fc792a4431c91fbb68633b29298"},
{file = "PyMuPDF-1.23.26-cp311-none-macosx_10_9_x86_64.whl", hash = "sha256:e2804a64bb57da414781e312fb0561f6be67658ad57ed4a73dce008b23fc70a6"},
{file = "PyMuPDF-1.23.26-cp311-none-macosx_11_0_arm64.whl", hash = "sha256:97b40bb22e3056874634617a90e0ed24a5172cf71791b9e25d1d91c6743bc567"},
{file = "PyMuPDF-1.23.26-cp311-none-manylinux2014_aarch64.whl", hash = "sha256:fab8833559bc47ab26ce736f915b8fc1dd37c108049b90396f7cd5e1004d7593"},
{file = "PyMuPDF-1.23.26-cp311-none-manylinux2014_x86_64.whl", hash = "sha256:f25aafd3e7fb9d7761a22acf2b67d704f04cc36d4dc33a3773f0eb3f4ec3606f"},
{file = "PyMuPDF-1.23.26-cp311-none-win32.whl", hash = "sha256:05e672ed3e82caca7ef02a88ace30130b1dd392a1190f03b2b58ffe7aa331400"},
{file = "PyMuPDF-1.23.26-cp311-none-win_amd64.whl", hash = "sha256:92b3c4dd4d0491d495f333be2d41f4e1c155a409bc9d04b5ff29655dccbf4655"},
{file = "PyMuPDF-1.23.26-cp312-none-macosx_10_9_x86_64.whl", hash = "sha256:a217689ede18cc6991b4e6a78afee8a440b3075d53b9dec4ba5ef7487d4547e9"},
{file = "PyMuPDF-1.23.26-cp312-none-macosx_11_0_arm64.whl", hash = "sha256:42ad2b819b90ce1947e11b90ec5085889df0a2e3aa0207bc97ecacfc6157cabc"},
{file = "PyMuPDF-1.23.26-cp312-none-manylinux2014_aarch64.whl", hash = "sha256:99607649f89a02bba7d8ebe96e2410664316adc95e9337f7dfeff6a154f93049"},
{file = "PyMuPDF-1.23.26-cp312-none-manylinux2014_x86_64.whl", hash = "sha256:bb42d4b8407b4de7cb58c28f01449f16f32a6daed88afb41108f1aeb3552bdd4"},
{file = "PyMuPDF-1.23.26-cp312-none-win32.whl", hash = "sha256:c40d044411615e6f0baa7d3d933b3032cf97e168c7fa77d1be8a46008c109aee"},
{file = "PyMuPDF-1.23.26-cp312-none-win_amd64.whl", hash = "sha256:3f876533aa7f9a94bcd9a0225ce72571b7808260903fec1d95c120bc842fb52d"},
{file = "PyMuPDF-1.23.26-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:52df831d46beb9ff494f5fba3e5d069af6d81f49abf6b6e799ee01f4f8fa6799"},
{file = "PyMuPDF-1.23.26-cp38-none-macosx_11_0_arm64.whl", hash = "sha256:0bbb0cf6593e53524f3fc26fb5e6ead17c02c64791caec7c4afe61b677dedf80"},
{file = "PyMuPDF-1.23.26-cp38-none-manylinux2014_aarch64.whl", hash = "sha256:5ef4360f20015673c20cf59b7e19afc97168795188c584254ed3778cde43ce77"},
{file = "PyMuPDF-1.23.26-cp38-none-manylinux2014_x86_64.whl", hash = "sha256:d7cd88842b2e7f4c71eef4d87c98c35646b80b60e6375392d7ce40e519261f59"},
{file = "PyMuPDF-1.23.26-cp38-none-win32.whl", hash = "sha256:6577e2f473625e2d0df5f5a3bf1e4519e94ae749733cc9937994d1b256687bfa"},
{file = "PyMuPDF-1.23.26-cp38-none-win_amd64.whl", hash = "sha256:fbe1a3255b2cd0d769b2da2c4efdd0c0f30d4961a1aac02c0f75cf951b337aa4"},
{file = "PyMuPDF-1.23.26-cp39-none-macosx_10_9_x86_64.whl", hash = "sha256:73fce034f2afea886a59ead2d0caedf27e2b2a8558b5da16d0286882e0b1eb82"},
{file = "PyMuPDF-1.23.26-cp39-none-macosx_11_0_arm64.whl", hash = "sha256:b3de8618b7cb5b36db611083840b3bcf09b11a893e2d8262f4e042102c7e65de"},
{file = "PyMuPDF-1.23.26-cp39-none-manylinux2014_aarch64.whl", hash = "sha256:879e7f5ad35709d8760ab6103c3d5dac8ab8043a856ab3653fd324af7358ee87"},
{file = "PyMuPDF-1.23.26-cp39-none-manylinux2014_x86_64.whl", hash = "sha256:deee96c2fd415ded7b5070d8d5b2c60679aee6ed0e28ac0d2cb998060d835c2c"},
{file = "PyMuPDF-1.23.26-cp39-none-win32.whl", hash = "sha256:9f7f4ef99dd8ac97fb0b852efa3dcbee515798078b6c79a6a13c7b1e7c5d41a4"},
{file = "PyMuPDF-1.23.26-cp39-none-win_amd64.whl", hash = "sha256:ba9a54552c7afb9ec85432c765e2fa9a81413acfaa7d70db7c9b528297749e5b"},
@@ -9404,4 +9410,4 @@ text-helpers = ["chardet"]
[metadata]
lock-version = "2.0"
python-versions = ">=3.8.1,<4.0"
content-hash = "9ed4d0b11749d1f98e8fbe2895a94e4bc90975817873e52a70f2bbcee934ce19"
content-hash = "56e0417edc75e591b9265c6986b87115c27ddd98b87466f698c68df6f3fb60d3"

View File

@@ -1,6 +1,6 @@
[tool.poetry]
name = "langchain"
version = "0.1.18"
version = "0.1.20"
description = "Building applications with LLMs through composability"
authors = []
license = "MIT"
@@ -12,9 +12,9 @@ langchain-server = "langchain.server:main"
[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
langchain-core = "^0.1.48"
langchain-core = "^0.1.52"
langchain-text-splitters = ">=0.0.1,<0.1"
langchain-community = ">=0.0.37,<0.1"
langchain-community = ">=0.0.38,<0.1"
langsmith = "^0.1.17"
pydantic = ">=1,<3"
SQLAlchemy = ">=1.4,<3"