Compare commits


1 Commit

Author: Bagatur · SHA1: 1812c1527f · Message: docs: version section sidebar · Date: 2024-07-31 02:49:26 -07:00
177 changed files with 3738 additions and 7899 deletions

View File

@@ -7,6 +7,7 @@
[![PyPI - License](https://img.shields.io/pypi/l/langchain-core?style=flat-square)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-core?style=flat-square)](https://pypistats.org/packages/langchain-core)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=flat-square)](https://star-history.com/#langchain-ai/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain?style=flat-square)](https://libraries.io/github/langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/issues)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)

View File

@@ -42,8 +42,6 @@ generate-files:
$(PYTHON) scripts/document_loader_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/kv_store_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/partner_pkg_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/copy_templates.py $(INTERMEDIATE_DIR)
@@ -69,13 +67,10 @@ render:
md-sync:
rsync -avm --include="*/" --include="*.mdx" --include="*.md" --include="*.png" --include="*/_category_.yml" --exclude="*" $(INTERMEDIATE_DIR)/ $(OUTPUT_NEW_DOCS_DIR)
append-related:
$(PYTHON) scripts/append_related_links.py $(OUTPUT_NEW_DOCS_DIR)
generate-references:
$(PYTHON) scripts/generate_api_reference_links.py --docs_dir $(OUTPUT_NEW_DOCS_DIR)
build: install-py-deps generate-files copy-infra render md-sync append-related
build: install-py-deps generate-files copy-infra render md-sync
vercel-build: install-vercel-deps build generate-references
rm -rf docs

View File

@@ -500,8 +500,7 @@ For specifics on how to use retrievers, see the [relevant how-to guides here](/d
### Key-value stores
For some techniques, such as [indexing and retrieval with multiple vectors per document](/docs/how_to/multi_vector/) or
[caching embeddings](/docs/how_to/caching_embeddings/), having a form of key-value (KV) storage is helpful.
For some techniques, such as [indexing and retrieval with multiple vectors per document](/docs/how_to/multi_vector/), having some sort of key-value (KV) storage is helpful.
LangChain includes a [`BaseStore`](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.BaseStore.html) interface,
which allows for storage of arbitrary data. However, LangChain components that require KV-storage accept a
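A minimal sketch of working against the `BaseStore` interface, using the in-memory implementation (`InMemoryStore` is assumed here as the simplest concrete store; any other `BaseStore` implementation is used the same way):

```python
from langchain_core.stores import InMemoryStore

# Every BaseStore exposes batched mset/mget/mdelete operations over key-value pairs.
store = InMemoryStore()
store.mset([("doc_1", {"text": "hello"}), ("doc_2", {"text": "world"})])

print(store.mget(["doc_1", "doc_2"]))  # [{'text': 'hello'}, {'text': 'world'}]
store.mdelete(["doc_1"])
print(list(store.yield_keys()))        # ['doc_2']
```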

View File

@@ -88,7 +88,6 @@ These are the core building blocks you can use when building applications.
- [How to: few shot prompt tool behavior](/docs/how_to/tools_few_shot)
- [How to: bind model-specific formatted tools](/docs/how_to/tools_model_specific)
- [How to: force a specific tool call](/docs/how_to/tool_choice)
- [How to: work with local models](/docs/how_to/local_llms)
- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
### Messages
@@ -107,7 +106,7 @@ What LangChain calls [LLMs](/docs/concepts/#llms) are older forms of language mo
- [How to: create a custom LLM class](/docs/how_to/custom_llm)
- [How to: stream a response back](/docs/how_to/streaming_llm)
- [How to: track token usage](/docs/how_to/llm_token_usage_tracking)
- [How to: work with local models](/docs/how_to/local_llms)
- [How to: work with local LLMs](/docs/how_to/local_llms)
### Output parsers

View File

@@ -5,11 +5,11 @@
"id": "b8982428",
"metadata": {},
"source": [
"# Run models locally\n",
"# Run LLMs locally\n",
"\n",
"## Use case\n",
"\n",
"The popularity of projects like [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://github.com/ollama/ollama), [GPT4All](https://github.com/nomic-ai/gpt4all), [llamafile](https://github.com/Mozilla-Ocho/llamafile), and others underscore the demand to run LLMs locally (on your own device).\n",
"The popularity of projects like [PrivateGPT](https://github.com/imartinez/privateGPT), [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://github.com/ollama/ollama), [GPT4All](https://github.com/nomic-ai/gpt4all), [llamafile](https://github.com/Mozilla-Ocho/llamafile), and others underscore the demand to run LLMs locally (on your own device).\n",
"\n",
"This has at least two important benefits:\n",
"\n",
@@ -66,12 +66,6 @@
"\n",
"![Image description](../../static/img/llama_t_put.png)\n",
"\n",
"### Formatting prompts\n",
"\n",
"Some providers have [chat model](/docs/concepts/#chat-models) wrappers that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models with a [text-in/text-out LLM](/docs/concepts/#llms) wrapper, you may need to use a prompt tailed for your specific model.\n",
"\n",
"This can [require the inclusion of special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). [Here's an example for LLaMA 2](https://smith.langchain.com/hub/rlm/rag-prompt-llama).\n",
"\n",
"## Quickstart\n",
"\n",
"[`Ollama`](https://ollama.ai/) is one way to easily run inference on macOS.\n",
@@ -79,20 +73,10 @@
"The instructions [here](https://github.com/jmorganca/ollama?tab=readme-ov-file#ollama) provide details, which we summarize:\n",
" \n",
"* [Download and run](https://ollama.ai/download) the app\n",
"* From command line, fetch a model from this [list of options](https://github.com/jmorganca/ollama): e.g., `ollama pull llama3.1:8b`\n",
"* From command line, fetch a model from this [list of options](https://github.com/jmorganca/ollama): e.g., `ollama pull llama2`\n",
"* When the app is running, all models are automatically served on `localhost:11434`\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "29450fc9",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_ollama"
]
},
{
"cell_type": "code",
"execution_count": 2,
@@ -102,7 +86,7 @@
{
"data": {
"text/plain": [
"'...Neil Armstrong!\\n\\nOn July 20, 1969, Neil Armstrong became the first person to set foot on the lunar surface, famously declaring \"That\\'s one small step for man, one giant leap for mankind\" as he stepped off the lunar module Eagle onto the Moon\\'s surface.\\n\\nWould you like to know more about the Apollo 11 mission or Neil Armstrong\\'s achievements?'"
"' The first man on the moon was Neil Armstrong, who landed on the moon on July 20, 1969 as part of the Apollo 11 mission. obviously.'"
]
},
"execution_count": 2,
@@ -111,78 +95,51 @@
}
],
"source": [
"from langchain_ollama import OllamaLLM\n",
"\n",
"llm = OllamaLLM(model=\"llama3.1:8b\")\n",
"from langchain_community.llms import Ollama\n",
"\n",
"llm = Ollama(model=\"llama2\")\n",
"llm.invoke(\"The first man on the moon was ...\")"
]
},
{
"cell_type": "markdown",
"id": "674cc672",
"id": "343ab645",
"metadata": {},
"source": [
"Stream tokens as they are being generated:"
"Stream tokens as they are being generated."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "1386a852",
"execution_count": 40,
"id": "9cd83603",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"...|"
" The first man to walk on the moon was Neil Armstrong, an American astronaut who was part of the Apollo 11 mission in 1969. февруари 20, 1969, Armstrong stepped out of the lunar module Eagle and onto the moon's surface, famously declaring \"That's one small step for man, one giant leap for mankind\" as he took his first steps. He was followed by fellow astronaut Edwin \"Buzz\" Aldrin, who also walked on the moon during the mission."
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Neil| Armstrong|,| an| American| astronaut|.| He| stepped| out| of| the| lunar| module| Eagle| and| onto| the| surface| of| the| Moon| on| July| |20|,| |196|9|,| famously| declaring|:| \"|That|'s| one| small| step| for| man|,| one| giant| leap| for| mankind|.\"||"
]
}
],
"source": [
"for chunk in llm.stream(\"The first man on the moon was ...\"):\n",
" print(chunk, end=\"|\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "e5731060",
"metadata": {},
"source": [
"Ollama also includes a chat model wrapper that handles formatting conversation turns:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f14a778a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The answer is a historic one!\\n\\nThe first man to walk on the Moon was Neil Armstrong, an American astronaut and commander of the Apollo 11 mission. On July 20, 1969, Armstrong stepped out of the lunar module Eagle onto the surface of the Moon, famously declaring:\\n\\n\"That\\'s one small step for man, one giant leap for mankind.\"\\n\\nArmstrong was followed by fellow astronaut Edwin \"Buzz\" Aldrin, who also walked on the Moon during the mission. Michael Collins remained in orbit around the Moon in the command module Columbia.\\n\\nNeil Armstrong passed away on August 25, 2012, but his legacy as a pioneering astronaut and engineer continues to inspire people around the world!', response_metadata={'model': 'llama3.1:8b', 'created_at': '2024-08-01T00:38:29.176717Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 10681861417, 'load_duration': 34270292, 'prompt_eval_count': 19, 'prompt_eval_duration': 6209448000, 'eval_count': 141, 'eval_duration': 4432022000}, id='run-7bed57c5-7f54-4092-912c-ae49073dcd48-0', usage_metadata={'input_tokens': 19, 'output_tokens': 141, 'total_tokens': 160})"
"' The first man to walk on the moon was Neil Armstrong, an American astronaut who was part of the Apollo 11 mission in 1969. февруари 20, 1969, Armstrong stepped out of the lunar module Eagle and onto the moon\\'s surface, famously declaring \"That\\'s one small step for man, one giant leap for mankind\" as he took his first steps. He was followed by fellow astronaut Edwin \"Buzz\" Aldrin, who also walked on the moon during the mission.'"
]
},
"execution_count": 4,
"execution_count": 40,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_ollama import ChatOllama\n",
"from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler\n",
"\n",
"chat_model = ChatOllama(model=\"llama3.1:8b\")\n",
"\n",
"chat_model.invoke(\"Who was the first man on the moon?\")"
"llm = Ollama(\n",
" model=\"llama2\", callback_manager=CallbackManager([StreamingStdOutCallbackHandler()])\n",
")\n",
"llm.invoke(\"The first man on the moon was ...\")"
]
},
{
@@ -242,7 +199,7 @@
"\n",
"With [Ollama](https://github.com/jmorganca/ollama), fetch a model via `ollama pull <model family>:<tag>`:\n",
"\n",
"* E.g., for Llama 2 7b: `ollama pull llama2` will download the most basic version of the model (e.g., smallest # parameters and 4 bit quantization)\n",
"* E.g., for Llama-7b: `ollama pull llama2` will download the most basic version of the model (e.g., smallest # parameters and 4 bit quantization)\n",
"* We can also specify a particular version from the [model list](https://github.com/jmorganca/ollama?tab=readme-ov-file#model-library), e.g., `ollama pull llama2:13b`\n",
"* See the full set of parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.ollama.Ollama.html)"
]
@@ -265,7 +222,9 @@
}
],
"source": [
"llm = OllamaLLM(model=\"llama2:13b\")\n",
"from langchain_community.llms import Ollama\n",
"\n",
"llm = Ollama(model=\"llama2:13b\")\n",
"llm.invoke(\"The first man on the moon was ... think step by step\")"
]
},
@@ -309,7 +268,11 @@
"cell_type": "code",
"execution_count": null,
"id": "5eba38dc",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"%env CMAKE_ARGS=\"-DLLAMA_METAL=on\"\n",
@@ -579,6 +542,7 @@
}
],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.chains.prompt_selector import ConditionalPromptSelector\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
@@ -649,9 +613,9 @@
],
"source": [
"# Chain\n",
"chain = prompt | llm\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"question = \"What NFL team won the Super Bowl in the year that Justin Bieber was born?\"\n",
"chain.invoke({\"question\": question})"
"llm_chain.run({\"question\": question})"
]
},
{
@@ -702,7 +666,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.11.7"
}
},
"nbformat": 4,

View File

@@ -43,7 +43,7 @@
"\n",
"This is the easiest and most reliable way to get structured outputs. `with_structured_output()` is implemented for models that provide native APIs for structuring outputs, like tool/function calling or JSON mode, and makes use of these capabilities under the hood.\n",
"\n",
"This method takes a schema as input which specifies the names, types, and descriptions of the desired output attributes. The method returns a model-like Runnable, except that instead of outputting strings or Messages it outputs objects corresponding to the given schema. The schema can be specified as a TypedDict class, [JSON Schema](https://json-schema.org/) or a Pydantic class. If TypedDict or JSON Schema are used then a dictionary will be returned by the Runnable, and if a Pydantic class is used then a Pydantic object will be returned.\n",
"This method takes a schema as input which specifies the names, types, and descriptions of the desired output attributes. The method returns a model-like Runnable, except that instead of outputting strings or Messages it outputs objects corresponding to the given schema. The schema can be specified as a [JSON Schema](https://json-schema.org/) or a Pydantic class. If JSON Schema is used then a dictionary will be returned by the Runnable, and if a Pydantic class is used then Pydantic objects will be returned.\n",
"\n",
"As an example, let's get a model to generate a joke and separate the setup from the punchline:\n",
"\n",
@@ -58,7 +58,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "6d55008f",
"metadata": {},
"outputs": [],
@@ -68,7 +68,7 @@
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o\", temperature=0)"
"llm = ChatOpenAI(model=\"gpt-4-0125-preview\", temperature=0)"
]
},
{
@@ -76,24 +76,22 @@
"id": "a808a401-be1f-49f9-ad13-58dd68f7db5f",
"metadata": {},
"source": [
"### Pydantic class\n",
"\n",
"If we want the model to return a Pydantic object, we just need to pass in the desired Pydantic class. The key advantage of using Pydantic is that the model-generated output will be validated. Pydantic will raise an error if any required fields are missing or if any fields are of the wrong type."
"If we want the model to return a Pydantic object, we just need to pass in the desired Pydantic class:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"id": "070bf702",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!', rating=7)"
"Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!', rating=8)"
]
},
"execution_count": 4,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -104,15 +102,12 @@
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"# Pydantic\n",
"class Joke(BaseModel):\n",
" \"\"\"Joke to tell user.\"\"\"\n",
"\n",
" setup: str = Field(description=\"The setup of the joke\")\n",
" punchline: str = Field(description=\"The punchline to the joke\")\n",
" rating: Optional[int] = Field(\n",
" default=None, description=\"How funny the joke is, from 1 to 10\"\n",
" )\n",
" rating: Optional[int] = Field(description=\"How funny the joke is, from 1 to 10\")\n",
"\n",
"\n",
"structured_llm = llm.with_structured_output(Joke)\n",
@@ -135,73 +130,12 @@
"id": "deddb6d3",
"metadata": {},
"source": [
"### TypedDict or JSON Schema\n",
"\n",
"If you don't want to use Pydantic, explicitly don't want validation of the arguments, or want to be able to stream the model outputs, you can define your schema using a TypedDict class. We can optionally use a special `Annotated` syntax supported by LangChain that allows you to specify the default value and description of a field. Note, the default value is *not* filled in automatically if the model doesn't generate it, it is only used in defining the schema that is passed to the model.\n",
"\n",
":::info Requirements\n",
"\n",
"- Core: `langchain-core>=0.2.26`\n",
"- Typing extensions: It is highly recommended to import `Annotated` and `TypedDict` from `typing_extensions` instead of `typing` to ensure consistent behavior across Python versions.\n",
"\n",
":::"
"We can also pass in a [JSON Schema](https://json-schema.org/) dict if you prefer not to use Pydantic. In this case, the response is also a dict:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "70d82891-42e8-424a-919e-07d83bcfec61",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'setup': 'Why was the cat sitting on the computer?',\n",
" 'punchline': 'Because it wanted to keep an eye on the mouse!',\n",
" 'rating': 7}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing_extensions import Annotated, TypedDict\n",
"\n",
"\n",
"# TypedDict\n",
"class Joke(TypedDict):\n",
" \"\"\"Joke to tell user.\"\"\"\n",
"\n",
" setup: Annotated[str, ..., \"The setup of the joke\"]\n",
"\n",
" # Alternatively, we could have specified setup as:\n",
"\n",
" # setup: str # no default, no description\n",
" # setup: Annotated[str, ...] # no default, no description\n",
" # setup: Annotated[str, \"foo\"] # default, no description\n",
"\n",
" punchline: Annotated[str, ..., \"The punchline of the joke\"]\n",
" rating: Annotated[Optional[int], None, \"How funny the joke is, from 1 to 10\"]\n",
"\n",
"\n",
"structured_llm = llm.with_structured_output(Joke)\n",
"\n",
"structured_llm.invoke(\"Tell me a joke about cats\")"
]
},
{
"cell_type": "markdown",
"id": "e4d7b4dc-f617-4ea8-aa58-847c228791b4",
"metadata": {},
"source": [
"Equivalently, we can pass in a [JSON Schema](https://json-schema.org/) dict. This requires no imports or classes and makes it very clear exactly how each parameter is documented, at the cost of being a bit more verbose."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "6700994a",
"metadata": {},
"outputs": [
@@ -210,10 +144,10 @@
"text/plain": [
"{'setup': 'Why was the cat sitting on the computer?',\n",
" 'punchline': 'Because it wanted to keep an eye on the mouse!',\n",
" 'rating': 7}"
" 'rating': 8}"
]
},
"execution_count": 6,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@@ -235,7 +169,6 @@
" \"rating\": {\n",
" \"type\": \"integer\",\n",
" \"description\": \"How funny the joke is, from 1 to 10\",\n",
" \"default\": None,\n",
" },\n",
" },\n",
" \"required\": [\"setup\", \"punchline\"],\n",
@@ -252,7 +185,7 @@
"source": [
"### Choosing between multiple schemas\n",
"\n",
"The simplest way to let the model choose from multiple schemas is to create a parent schema that has a Union-typed attribute:"
"The simplest way to let the model choose from multiple schemas is to create a parent Pydantic class that has a Union-typed attribute:"
]
},
{
@@ -276,17 +209,6 @@
"from typing import Union\n",
"\n",
"\n",
"# Pydantic\n",
"class Joke(BaseModel):\n",
" \"\"\"Joke to tell user.\"\"\"\n",
"\n",
" setup: str = Field(description=\"The setup of the joke\")\n",
" punchline: str = Field(description=\"The punchline to the joke\")\n",
" rating: Optional[int] = Field(\n",
" default=None, description=\"How funny the joke is, from 1 to 10\"\n",
" )\n",
"\n",
"\n",
"class ConversationalResponse(BaseModel):\n",
" \"\"\"Respond in a conversational manner. Be kind and helpful.\"\"\"\n",
"\n",
@@ -338,7 +260,7 @@
"source": [
"### Streaming\n",
"\n",
"We can stream outputs from our structured model when the output type is a dict (i.e., when the schema is specified as a TypedDict class or JSON Schema dict). \n",
"We can stream outputs from our structured model when the output type is a dict (i.e., when the schema is specified as a JSON Schema dict). \n",
"\n",
":::info\n",
"\n",
@@ -349,7 +271,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 43,
"id": "aff89877-28a3-472f-a1aa-eff893fe7736",
"metadata": {},
"outputs": [
@@ -380,24 +302,12 @@
"{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the'}\n",
"{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse'}\n",
"{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse!'}\n",
"{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse!', 'rating': 7}\n"
"{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse!', 'rating': 8}\n"
]
}
],
"source": [
"from typing_extensions import Annotated, TypedDict\n",
"\n",
"\n",
"# TypedDict\n",
"class Joke(TypedDict):\n",
" \"\"\"Joke to tell user.\"\"\"\n",
"\n",
" setup: Annotated[str, ..., \"The setup of the joke\"]\n",
" punchline: Annotated[str, ..., \"The punchline of the joke\"]\n",
" rating: Annotated[Optional[int], None, \"How funny the joke is, from 1 to 10\"]\n",
"\n",
"\n",
"structured_llm = llm.with_structured_output(Joke)\n",
"structured_llm = llm.with_structured_output(json_schema)\n",
"\n",
"for chunk in structured_llm.stream(\"Tell me a joke about cats\"):\n",
" print(chunk)"
@@ -417,7 +327,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 47,
"id": "283ba784-2072-47ee-9b2c-1119e3c69e8e",
"metadata": {},
"outputs": [
@@ -425,11 +335,11 @@
"data": {
"text/plain": [
"{'setup': 'Woodpecker',\n",
" 'punchline': \"Woodpecker who? Woodpecker who can't find a tree is just a bird with a headache!\",\n",
" 'rating': 7}"
" 'punchline': \"Woodpecker goes 'knock knock', but don't worry, they never expect you to answer the door!\",\n",
" 'rating': 8}"
]
},
"execution_count": 11,
"execution_count": 47,
"metadata": {},
"output_type": "execute_result"
}
@@ -467,7 +377,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 46,
"id": "d7381cb0-b2c3-4302-a319-ed72d0b9e43f",
"metadata": {},
"outputs": [
@@ -475,11 +385,11 @@
"data": {
"text/plain": [
"{'setup': 'Crocodile',\n",
" 'punchline': 'Crocodile be seeing you later, alligator!',\n",
" 'punchline': \"Crocodile 'see you later', but in a while, it becomes an alligator!\",\n",
" 'rating': 7}"
]
},
"execution_count": 12,
"execution_count": 46,
"metadata": {},
"output_type": "execute_result"
}
@@ -581,24 +491,23 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 6,
"id": "df0370e3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'setup': 'Why was the cat sitting on the computer?',\n",
" 'punchline': 'Because it wanted to keep an eye on the mouse!'}"
"Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!', rating=None)"
]
},
"execution_count": 15,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"structured_llm = llm.with_structured_output(None, method=\"json_mode\")\n",
"structured_llm = llm.with_structured_output(Joke, method=\"json_mode\")\n",
"\n",
"structured_llm.invoke(\n",
" \"Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys\"\n",
@@ -617,21 +526,19 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 5,
"id": "10ed2842",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_f25ZRmh8u5vHlOWfTUw8sJFZ', 'function': {'arguments': '{\"setup\":\"Why was the cat sitting on the computer?\",\"punchline\":\"Because it wanted to keep an eye on the mouse!\",\"rating\":7}', 'name': 'Joke'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 33, 'prompt_tokens': 93, 'total_tokens': 126}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518', 'finish_reason': 'stop', 'logprobs': None}, id='run-d880d7e2-df08-4e9e-ad92-dfc29f2fd52f-0', tool_calls=[{'name': 'Joke', 'args': {'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse!', 'rating': 7}, 'id': 'call_f25ZRmh8u5vHlOWfTUw8sJFZ', 'type': 'tool_call'}], usage_metadata={'input_tokens': 93, 'output_tokens': 33, 'total_tokens': 126}),\n",
" 'parsed': {'setup': 'Why was the cat sitting on the computer?',\n",
" 'punchline': 'Because it wanted to keep an eye on the mouse!',\n",
" 'rating': 7},\n",
"{'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_ASK4EmZeZ69Fi3p554Mb4rWy', 'function': {'arguments': '{\"setup\":\"Why was the cat sitting on the computer?\",\"punchline\":\"Because it wanted to keep an eye on the mouse!\"}', 'name': 'Joke'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 36, 'prompt_tokens': 107, 'total_tokens': 143}, 'model_name': 'gpt-4-0125-preview', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-6491d35b-9164-4656-b75c-d7882cfb76cb-0', tool_calls=[{'name': 'Joke', 'args': {'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse!'}, 'id': 'call_ASK4EmZeZ69Fi3p554Mb4rWy'}], usage_metadata={'input_tokens': 107, 'output_tokens': 36, 'total_tokens': 143}),\n",
" 'parsed': Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!', rating=None),\n",
" 'parsing_error': None}"
]
},
"execution_count": 17,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -639,7 +546,9 @@
"source": [
"structured_llm = llm.with_structured_output(Joke, include_raw=True)\n",
"\n",
"structured_llm.invoke(\"Tell me a joke about cats\")"
"structured_llm.invoke(\n",
" \"Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys\"\n",
")"
]
},
{
@@ -915,7 +824,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -929,7 +838,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -24,9 +24,10 @@
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [LangChain Tools](/docs/concepts/#tools)\n",
"- [Tool calling](/docs/concepts/#functiontool-calling)\n",
"- [Tools](/docs/concepts/#tools)\n",
"- [Output parsers](/docs/concepts/#output-parsers)\n",
"\n",
":::\n",
"\n",
"[Tool calling](/docs/concepts/#functiontool-calling) allows a chat model to respond to a given prompt by \"calling a tool\".\n",
@@ -37,11 +38,15 @@
"\n",
"![Diagram of calling a tool](/img/tool_call.png)\n",
"\n",
"If you want to see how to use the model-generated tool call to actually run a tool [check out this guide](/docs/how_to/tool_results_pass_to_model/).\n",
"If you want to see how to use the model-generated tool call to actually run a tool function [check out this guide](/docs/how_to/tool_results_pass_to_model/).\n",
"\n",
":::note Supported models\n",
"\n",
"Tool calling is not universal, but is supported by many popular LLM providers. You can find a [list of all models that support tool calling here](/docs/integrations/chat/).\n",
"Tool calling is not universal, but is supported by many popular LLM providers, including [Anthropic](/docs/integrations/chat/anthropic/), \n",
"[Cohere](/docs/integrations/chat/cohere/), [Google](/docs/integrations/chat/google_vertex_ai_palm/), \n",
"[Mistral](/docs/integrations/chat/mistralai/), [OpenAI](/docs/integrations/chat/openai/), and even for locally-running models via [Ollama](/docs/integrations/chat/ollama/).\n",
"\n",
"You can find a [list of all models that support tool calling here](/docs/integrations/chat/).\n",
"\n",
":::\n",
"\n",
@@ -53,12 +58,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Defining tool schemas\n",
"## Passing tools to chat models\n",
"\n",
"For a model to be able to call tools, we need to pass in tool schemas that describe what the tool does and what it's arguments are. Chat models that support tool calling features implement a `.bind_tools()` method for passing tool schemas to the model. Tool schemas can be passed in as Python functions (with typehints and docstrings), Pydantic models, TypedDict classes, or LangChain [Tool objects](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html#langchain_core.tools.BaseTool). Subsequent invocations of the model will pass in these tool schemas along with the prompt.\n",
"Chat models that support tool calling features implement a `.bind_tools` method, which \n",
"receives a list of functions, Pydantic models, or LangChain [tool objects](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html#langchain_core.tools.BaseTool) \n",
"and binds them to the chat model in its expected format. Subsequent invocations of the \n",
"chat model will include tool schemas in its calls to the LLM.\n",
"\n",
"### Python functions\n",
"Our tool schemas can be Python functions:"
"For example, below we implement simple tools for arithmetic:"
]
},
{
@@ -67,41 +74,26 @@
"metadata": {},
"outputs": [],
"source": [
"# The function name, type hints, and docstring are all part of the tool\n",
"# schema that's passed to the model. Defining good, descriptive schemas\n",
"# is an extension of prompt engineering and is an important part of\n",
"# getting models to perform well.\n",
"def add(a: int, b: int) -> int:\n",
" \"\"\"Add two integers.\n",
"\n",
" Args:\n",
" a: First integer\n",
" b: Second integer\n",
" \"\"\"\n",
" \"\"\"Adds a and b.\"\"\"\n",
" return a + b\n",
"\n",
"\n",
"def multiply(a: int, b: int) -> int:\n",
" \"\"\"Multiply two integers.\n",
" \"\"\"Multiplies a and b.\"\"\"\n",
" return a * b\n",
"\n",
" Args:\n",
" a: First integer\n",
" b: Second integer\n",
" \"\"\"\n",
" return a * b"
"\n",
"tools = [add, multiply]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### LangChain Tool\n",
"\n",
"LangChain also implements a `@tool` decorator that allows for further control of the tool schema, such as tool names and argument descriptions. See the how-to guide [here](/docs/how_to/custom_tools/#creating-tools-from-functions) for details.\n",
"\n",
"### Pydantic class\n",
"\n",
"You can equivalently define the schemas without the accompanying functions using [Pydantic](https://docs.pydantic.dev):"
"We can also define the schemas without the accompanying functions using [Pydantic](https://docs.pydantic.dev):"
]
},
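A minimal sketch of the `@tool` decorator mentioned above (the decorator and attributes shown come from `langchain_core.tools`; the arithmetic example simply mirrors the functions defined earlier):

```python
from langchain_core.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


# The decorator wraps the function in a tool object with a name, description,
# and argument schema derived from the signature and docstring.
print(multiply.name)         # multiply
print(multiply.description)  # Multiply two integers.
print(multiply.args)         # argument schema as a dict
```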
{
@@ -113,57 +105,23 @@
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class add(BaseModel):\n",
" \"\"\"Add two integers.\"\"\"\n",
"# Note that the docstrings here are crucial, as they will be passed along\n",
"# to the model along with the class name.\n",
"class Add(BaseModel):\n",
" \"\"\"Add two integers together.\"\"\"\n",
"\n",
" a: int = Field(..., description=\"First integer\")\n",
" b: int = Field(..., description=\"Second integer\")\n",
"\n",
"\n",
"class multiply(BaseModel):\n",
" \"\"\"Multiply two integers.\"\"\"\n",
"class Multiply(BaseModel):\n",
" \"\"\"Multiply two integers together.\"\"\"\n",
"\n",
" a: int = Field(..., description=\"First integer\")\n",
" b: int = Field(..., description=\"Second integer\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### TypedDict class\n",
"\n",
":::info Requires `langchain-core>=0.2.25`\n",
":::\n",
"\n",
"Or using TypedDicts and annotations:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from typing_extensions import Annotated, TypedDict\n",
" b: int = Field(..., description=\"Second integer\")\n",
"\n",
"\n",
"class add(TypedDict):\n",
" \"\"\"Add two integers.\"\"\"\n",
"\n",
" # Annotations must have the type and can optionally include a default value and description (in that order).\n",
" a: Annotated[int, ..., \"First integer\"]\n",
" b: Annotated[int, ..., \"Second integer\"]\n",
"\n",
"\n",
"class multiply(BaseModel):\n",
" \"\"\"Multiply two integers.\"\"\"\n",
"\n",
" a: Annotated[int, ..., \"First integer\"]\n",
" b: Annotated[int, ..., \"Second integer\"]\n",
"\n",
"\n",
"tools = [add, multiply]"
"tools = [Add, Multiply]"
]
},
{
@@ -171,7 +129,7 @@
"metadata": {},
"source": [
"To actually bind those schemas to a chat model, we'll use the `.bind_tools()` method. This handles converting\n",
"the `add` and `multiply` schemas to the proper format for the model. The tool schema will then be passed it in each time the model is invoked.\n",
"the `Add` and `Multiply` schemas to the proper format for the model. The tool schema will then be passed it in each time the model is invoked.\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
@@ -206,16 +164,16 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_BwYJ4UgU5pRVCBOUmiu7NhF9', 'function': {'arguments': '{\"a\":3,\"b\":12}', 'name': 'multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 80, 'total_tokens': 97}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_ba606877f9', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-7f05e19e-4561-40e2-a2d0-8f4e28e9a00f-0', tool_calls=[{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_BwYJ4UgU5pRVCBOUmiu7NhF9', 'type': 'tool_call'}], usage_metadata={'input_tokens': 80, 'output_tokens': 17, 'total_tokens': 97})"
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_wLTBasMppAwpdiA5CD92l9x7', 'function': {'arguments': '{\"a\":3,\"b\":12}', 'name': 'Multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 89, 'total_tokens': 107}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_0f03d4f0ee', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-d3f36cca-f225-416f-ac16-0217046f0b38-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_wLTBasMppAwpdiA5CD92l9x7', 'type': 'tool_call'}], usage_metadata={'input_tokens': 89, 'output_tokens': 18, 'total_tokens': 107})"
]
},
"execution_count": 5,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
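For reference, a minimal sketch of the binding step that produces an invocation like the one above, assuming an OpenAI chat model (in the rendered docs, the tabs widget lets you pick any tool-calling provider) and the `tools` list defined earlier:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

# Bind the Add/Multiply schemas so they are sent along with every model call.
llm_with_tools = llm.bind_tools(tools)

query = "What is 3 * 12? Also, what is 11 + 49?"
llm_with_tools.invoke(query)
```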
@@ -256,23 +214,23 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'multiply',\n",
"[{'name': 'Multiply',\n",
" 'args': {'a': 3, 'b': 12},\n",
" 'id': 'call_rcdMie7E89Xx06lEKKxJyB5N',\n",
" 'id': 'call_uqJsNrDJ8ZZnFa1BHHYAllEv',\n",
" 'type': 'tool_call'},\n",
" {'name': 'add',\n",
" {'name': 'Add',\n",
" 'args': {'a': 11, 'b': 49},\n",
" 'id': 'call_nheGN8yfvSJsnIuGZaXihou3',\n",
" 'id': 'call_ud1uHAaYsdpWuxugwoJ63BDs',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 6,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -294,49 +252,31 @@
"are populated in the `.invalid_tool_calls` attribute. An `InvalidToolCall` can have \n",
"a name, string arguments, identifier, and error message.\n",
"\n",
"\n",
"## Parsing\n",
"\n",
"If desired, [output parsers](/docs/how_to#output-parsers) can further process the output. For example, we can convert existing values populated on the `.tool_calls` to Pydantic objects using the\n",
"If desired, [output parsers](/docs/how_to#output-parsers) can further \n",
"process the output. For example, we can convert existing values populated on the `.tool_calls` attribute back to the original Pydantic class using the\n",
"[PydanticToolsParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.openai_tools.PydanticToolsParser.html):"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[multiply(a=3, b=12), add(a=11, b=49)]"
"[Multiply(a=3, b=12), Add(a=11, b=49)]"
]
},
"execution_count": 7,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.output_parsers import PydanticToolsParser\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class add(BaseModel):\n",
" \"\"\"Add two integers.\"\"\"\n",
"\n",
" a: int = Field(..., description=\"First integer\")\n",
" b: int = Field(..., description=\"Second integer\")\n",
"\n",
"\n",
"class multiply(BaseModel):\n",
" \"\"\"Multiply two integers.\"\"\"\n",
"\n",
" a: int = Field(..., description=\"First integer\")\n",
" b: int = Field(..., description=\"Second integer\")\n",
"\n",
"\n",
"chain = llm_with_tools | PydanticToolsParser(tools=[add, multiply])\n",
"chain = llm_with_tools | PydanticToolsParser(tools=[Multiply, Add])\n",
"chain.invoke(query)"
]
},
@@ -354,18 +294,18 @@
"\n",
"You can also check out some more specific uses of tool calling:\n",
"\n",
"- Getting [structured outputs](/docs/how_to/structured_output/) from models\n",
"- Few shot prompting [with tools](/docs/how_to/tools_few_shot/)\n",
"- Stream [tool calls](/docs/how_to/tool_streaming/)\n",
"- Pass [runtime values to tools](/docs/how_to/tool_runtime)"
"- Pass [runtime values to tools](/docs/how_to/tool_runtime)\n",
"- Getting [structured outputs](/docs/how_to/structured_output/) from models"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-311",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "poetry-venv-311"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -377,7 +317,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -17,25 +17,26 @@
"source": [
"# ChatAI21\n",
"\n",
"## Overview\n",
"\n",
"This notebook covers how to get started with AI21 chat models.\n",
"Note that different chat models support different parameters. See the [AI21 documentation](https://docs.ai21.com/reference) to learn more about the parameters in your chosen model.\n",
"Note that different chat models support different parameters. See the ",
"[AI21 documentation](https://docs.ai21.com/reference) to learn more about the parameters in your chosen model.\n",
"[See all AI21's LangChain components.](https://pypi.org/project/langchain-ai21/) \n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/__package_name_short_snake__) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatAI21](https://api.python.langchain.com/en/latest/chat_models/langchain_ai21.chat_models.ChatAI21.html#langchain_ai21.chat_models.ChatAI21) | [langchain-ai21](https://api.python.langchain.com/en/latest/ai21_api_reference.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-ai21?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-ai21?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"\n",
"## Setup"
"## Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4c3bef91",
"metadata": {
"ExecuteTime": {
"end_time": "2024-02-15T06:50:44.929635Z",
"start_time": "2024-02-15T06:50:41.209704Z"
}
},
"outputs": [],
"source": [
"!pip install -qU langchain-ai21"
]
},
{
@@ -43,9 +44,10 @@
"id": "2b4f3e15",
"metadata": {},
"source": [
"### Credentials\n",
"## Environment Setup\n",
"\n",
"We'll need to get an [AI21 API key](https://docs.ai21.com/) and set the `AI21_API_KEY` environment variable:\n"
"We'll need to get an [AI21 API key](https://docs.ai21.com/) and set the ",
"`AI21_API_KEY` environment variable:\n"
]
},
{
@@ -63,168 +65,50 @@
"os.environ[\"AI21_API_KEY\"] = getpass()"
]
},
{
"cell_type": "markdown",
"id": "f6844fff-3702-4489-ab74-732f69f3b9d7",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7c2e19d3-7c58-4470-9e1a-718b27a32056",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "98e22f31-8acc-42d6-916d-415d1263c56e",
"metadata": {},
"source": [
"### Installation"
]
},
{
"cell_type": "markdown",
"id": "f9699cd9-58f2-450e-aa64-799e66906c0f",
"metadata": {},
"source": [
"!pip install -qU langchain-ai21"
]
},
{
"cell_type": "markdown",
"id": "4828829d3da430ce",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
"collapsed": false
},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c40756fb-cbf8-4d44-a293-3989d707237e",
"metadata": {},
"outputs": [],
"source": [
"from langchain_ai21 import ChatAI21\n",
"\n",
"llm = ChatAI21(model=\"jamba-instruct\", temperature=0)"
]
},
{
"cell_type": "markdown",
"id": "2bdc5d68-2a19-495e-8c04-d11adc86d3ae",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "46b982dc-5d8a-46da-a711-81c03ccd6adc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore programmer.\", id='run-2e8d16d6-a06e-45cb-8d0c-1c8208645033-0')"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "markdown",
"id": "10a30f84-b531-4fd5-8b5b-91512fbdc75b",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 1,
"id": "39353473fce5dd2e",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', id='run-e1bd82dc-1a7e-4b2e-bde9-ac995929ac0f-0')"
"AIMessage(content='Bonjour, comment vas-tu?')"
]
},
"execution_count": 4,
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_ai21 import ChatAI21\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
"chat = ChatAI21(model=\"jamba-instruct\")\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" (\"system\", \"You are a helpful assistant that translates English to French.\"),\n",
" (\"human\", \"Translate this sentence from English to French. {english_text}.\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e79de691-9dd6-4697-b57e-59a4a3cc073a",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatAI21 features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_ai21.chat_models.ChatAI21.html"
"chain = prompt | chat\n",
"chain.invoke({\"english_text\": \"Hello, how are you?\"})"
]
}
],
@@ -244,7 +128,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.11.4"
}
},
"nbformat": 4,

View File

@@ -115,7 +115,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
@@ -123,8 +123,8 @@
"from langchain_openai import AzureChatOpenAI\n",
"\n",
"llm = AzureChatOpenAI(\n",
" azure_deployment=\"gpt-35-turbo\", # or your deployment\n",
" api_version=\"2023-06-01-preview\", # or your api version\n",
" azure_deployment=\"YOUR-DEPLOYMENT\",\n",
" api_version=\"2024-05-01-preview\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
@@ -143,7 +143,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
@@ -152,10 +152,10 @@
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", response_metadata={'token_usage': {'completion_tokens': 8, 'prompt_tokens': 31, 'total_tokens': 39}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='run-bea4b46c-e3e1-4495-9d3a-698370ad963d-0', usage_metadata={'input_tokens': 31, 'output_tokens': 8, 'total_tokens': 39})"
"AIMessage(content=\"J'adore la programmation.\", response_metadata={'token_usage': {'completion_tokens': 8, 'prompt_tokens': 31, 'total_tokens': 39}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='run-a6a732c2-cb02-4e50-9a9c-ab30eab034fc-0', usage_metadata={'input_tokens': 31, 'output_tokens': 8, 'total_tokens': 39})"
]
},
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -174,7 +174,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 11,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
@@ -202,17 +202,17 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 12,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 26, 'total_tokens': 32}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='run-cbc44038-09d3-40d4-9da2-c5910ee636ca-0', usage_metadata={'input_tokens': 26, 'output_tokens': 6, 'total_tokens': 32})"
"AIMessage(content='Ich liebe das Programmieren.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 26, 'total_tokens': 32}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='run-084967d7-06f2-441f-b5c1-477e2a9e9d03-0', usage_metadata={'input_tokens': 26, 'output_tokens': 6, 'total_tokens': 32})"
]
},
"execution_count": 5,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@@ -264,8 +264,8 @@
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2ca02d23-60d0-43eb-8d04-070f61f8fefd",
"execution_count": 5,
"id": "84c411b0-1790-4798-8bb7-47d8ece4c2dc",
"metadata": {},
"outputs": [
{
@@ -288,22 +288,22 @@
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e1b07ae2-3de7-44bd-bfdc-b76f4ba45a35",
"execution_count": 6,
"id": "21234693-d92b-4d69-8a7f-55aa062084bf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total Cost (USD): $0.000074\n"
"Total Cost (USD): $0.000078\n"
]
}
],
"source": [
"llm_0301 = AzureChatOpenAI(\n",
" azure_deployment=\"gpt-35-turbo\", # or your deployment\n",
" api_version=\"2023-06-01-preview\", # or your api version\n",
" azure_deployment=\"YOUR-DEPLOYMENT\",\n",
" api_version=\"2024-05-01-preview\",\n",
" model_version=\"0301\",\n",
")\n",
"with get_openai_callback() as cb:\n",
@@ -338,7 +338,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "raw",
"id": "53fbf15f",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
@@ -12,103 +12,129 @@
},
{
"cell_type": "markdown",
"id": "bf733a38-db84-4363-89e2-de6735c37230",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# Cohere\n",
"# ChatCohere\n",
"\n",
"This notebook covers how to get started with [Cohere chat models](https://cohere.com/chat).\n",
"This doc will help you get started with Cohere [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatCohere features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_cohere.chat_models.ChatCohere.html).\n",
"\n",
"For an overview of all Cohere models head to the [Cohere docs](https://docs.cohere.com/docs/models).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/cohere) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatCohere](https://api.python.langchain.com/en/latest/chat_models/langchain_cohere.chat_models.ChatCohere.html) | [langchain-cohere](https://api.python.langchain.com/en/latest/cohere_api_reference.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-cohere?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-cohere?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | \n",
"\n",
"Head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.cohere.ChatCohere.html) for detailed documentation of all attributes and methods."
]
},
{
"cell_type": "markdown",
"id": "3607d67e-e56c-4102-bbba-df2edc0e109e",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"The integration lives in the `langchain-cohere` package. We can install these with:\n",
"To access Cohere models you'll need to create a Cohere account, get an API key, and install the `langchain-cohere` integration package.\n",
"\n",
"```bash\n",
"pip install -U langchain-cohere\n",
"```\n",
"### Credentials\n",
"\n",
"We'll also need to get a [Cohere API key](https://cohere.com/) and set the `COHERE_API_KEY` environment variable:"
"Head to https://dashboard.cohere.com/welcome/login to sign up to Cohere and generate an API key. Once you've done this set the COHERE_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "2108b517-1e8d-473d-92fa-4f930e8072a7",
"execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"COHERE_API_KEY\"] = getpass.getpass()"
"os.environ[\"COHERE_API_KEY\"] = getpass.getpass(\"Enter your Cohere API key: \")"
]
},
{
"cell_type": "markdown",
"id": "cf690fbb",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability"
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "7f11de02",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "4c26754b-b3c9-4d93-8f36-43049bd943bf",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"## Usage\n",
"### Installation\n",
"\n",
"ChatCohere supports all [ChatModel](/docs/how_to#chat-models) functionality:"
"The LangChain Cohere integration lives in the `langchain-cohere` package:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"tags": []
},
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-cohere"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_cohere import ChatCohere\n",
"from langchain_core.messages import HumanMessage"
"\n",
"llm = ChatCohere(\n",
" model=\"command-r-plus\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
"metadata": {
"tags": []
},
"outputs": [],
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"chat = ChatCohere()"
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"execution_count": 2,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
@@ -116,223 +142,110 @@
{
"data": {
"text/plain": [
"AIMessage(content='4 && 5 \\n6 || 7 \\n\\nWould you like to play a game of odds and evens?', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '2076b614-52b3-4082-a259-cc92cd3d9fea', 'token_count': {'prompt_tokens': 68, 'response_tokens': 23, 'total_tokens': 91, 'billed_tokens': 77}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '2076b614-52b3-4082-a259-cc92cd3d9fea', 'token_count': {'prompt_tokens': 68, 'response_tokens': 23, 'total_tokens': 91, 'billed_tokens': 77}}, id='run-3475e0c8-c89b-4937-9300-e07d652455e1-0')"
"AIMessage(content=\"J'adore programmer.\", additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'd84f80f3-4611-46e6-aed0-9d8665a20a11', 'token_count': {'input_tokens': 89, 'output_tokens': 5}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'd84f80f3-4611-46e6-aed0-9d8665a20a11', 'token_count': {'input_tokens': 89, 'output_tokens': 5}}, id='run-514ab516-ed7e-48ac-b132-2598fb80ebef-0')"
]
},
"execution_count": 15,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [HumanMessage(content=\"1\"), HumanMessage(content=\"2 3\")]\n",
"chat.invoke(messages)"
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='4 && 5', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'f0708a92-f874-46ee-9b93-334d616ad92e', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'f0708a92-f874-46ee-9b93-334d616ad92e', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, id='run-1635e63e-2994-4e7f-986e-152ddfc95777-0')"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chat.ainvoke(messages)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
"metadata": {
"tags": []
},
"execution_count": 3,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"4 && 5"
"J'adore programmer.\n"
]
}
],
"source": [
"for chunk in chat.stream(messages):\n",
" print(chunk.content, end=\"\", flush=True)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "064288e4-f184-4496-9427-bcf148fa055e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[AIMessage(content='4 && 5', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6770ca86-f6c3-4ba3-a285-c4772160612f', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6770ca86-f6c3-4ba3-a285-c4772160612f', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, id='run-8d6fade2-1b39-4e31-ab23-4be622dd0027-0')]"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat.batch([messages])"
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "f1c56460",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "0851b103",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"chain = prompt | chat"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "ae950c0f-1691-47f1-b609-273033cae707",
"execution_count": 4,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='What color socks do bears wear?\\n\\nThey dont wear socks, they have bear feet. \\n\\nHope you laughed! If not, maybe this will help: laughter is the best medicine, and a good sense of humor is infectious!', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6edccf44-9bc8-4139-b30e-13b368f3563c', 'token_count': {'prompt_tokens': 68, 'response_tokens': 51, 'total_tokens': 119, 'billed_tokens': 108}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6edccf44-9bc8-4139-b30e-13b368f3563c', 'token_count': {'prompt_tokens': 68, 'response_tokens': 51, 'total_tokens': 119, 'billed_tokens': 108}}, id='run-ef7f9789-0d4d-43bf-a4f7-f2a0e27a5320-0')"
"AIMessage(content='Ich liebe Programmierung.', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '053bebde-4e1d-4d06-8ee6-3446e7afa25e', 'token_count': {'input_tokens': 84, 'output_tokens': 6}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '053bebde-4e1d-4d06-8ee6-3446e7afa25e', 'token_count': {'input_tokens': 84, 'output_tokens': 6}}, id='run-53700708-b7fb-417b-af36-1a6fcde38e7d-0')"
]
},
"execution_count": 20,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"topic\": \"bears\"})"
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "12db8d69",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## Tool calling\n",
"## API reference\n",
"\n",
"Cohere supports tool calling functionalities!"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "337e24af",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import (\n",
" HumanMessage,\n",
" ToolMessage,\n",
")\n",
"from langchain_core.tools import tool"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "74d292e7",
"metadata": {},
"outputs": [],
"source": [
"@tool\n",
"def magic_function(number: int) -> int:\n",
" \"\"\"Applies a magic operation to an integer\n",
" Args:\n",
" number: Number to have magic operation performed on\n",
" \"\"\"\n",
" return number + 10\n",
"\n",
"\n",
"def invoke_tools(tool_calls, messages):\n",
" for tool_call in tool_calls:\n",
" selected_tool = {\"magic_function\": magic_function}[tool_call[\"name\"].lower()]\n",
" tool_output = selected_tool.invoke(tool_call[\"args\"])\n",
" messages.append(ToolMessage(tool_output, tool_call_id=tool_call[\"id\"]))\n",
" return messages\n",
"\n",
"\n",
"tools = [magic_function]"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "ecafcbc6",
"metadata": {},
"outputs": [],
"source": [
"llm_with_tools = chat.bind_tools(tools=tools)\n",
"messages = [HumanMessage(content=\"What is the value of magic_function(2)?\")]"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "aa34fc39",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The value of magic_function(2) is 12.', additional_kwargs={'documents': [{'id': 'magic_function:0:2:0', 'output': '12', 'tool_name': 'magic_function'}], 'citations': [ChatCitation(start=34, end=36, text='12', document_ids=['magic_function:0:2:0'])], 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '96a55791-0c58-4e2e-bc2a-8550e137c46d', 'token_count': {'input_tokens': 998, 'output_tokens': 59}}, response_metadata={'documents': [{'id': 'magic_function:0:2:0', 'output': '12', 'tool_name': 'magic_function'}], 'citations': [ChatCitation(start=34, end=36, text='12', document_ids=['magic_function:0:2:0'])], 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '96a55791-0c58-4e2e-bc2a-8550e137c46d', 'token_count': {'input_tokens': 998, 'output_tokens': 59}}, id='run-f318a9cf-55c8-44f4-91d1-27cf46c6a465-0')"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"res = llm_with_tools.invoke(messages)\n",
"while res.tool_calls:\n",
" messages.append(res)\n",
" messages = invoke_tools(res.tool_calls, messages)\n",
" res = llm_with_tools.invoke(messages)\n",
"\n",
"res"
"For detailed documentation of all ChatCohere features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_cohere.chat_models.ChatCohere.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "poetry-venv-2",
"language": "python",
"name": "python3"
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {
@@ -344,7 +257,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.6"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -36,7 +36,7 @@
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"| | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"### Supported Methods\n",
"\n",
@@ -395,66 +395,6 @@
"chat_model_external.invoke(\"How to use Databricks?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Function calling on Databricks"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Databricks Function Calling is OpenAI-compatible and is only available during model serving as part of Foundation Model APIs.\n",
"\n",
"See [Databricks function calling introduction](https://docs.databricks.com/en/machine-learning/model-serving/function-calling.html#supported-models) for supported models."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.databricks import ChatDatabricks\n",
"\n",
"llm = ChatDatabricks(endpoint=\"databricks-meta-llama-3-70b-instruct\")\n",
"tools = [\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"get_current_weather\",\n",
" \"description\": \"Get the current weather in a given location\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"location\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city and state, e.g. San Francisco, CA\",\n",
" },\n",
" \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"]},\n",
" },\n",
" },\n",
" },\n",
" }\n",
"]\n",
"\n",
"# supported tool_choice values: \"auto\", \"required\", \"none\", function name in string format,\n",
"# or a dictionary as {\"type\": \"function\", \"function\": {\"name\": <<tool_name>>}}\n",
"model = llm.bind_tools(tools, tool_choice=\"auto\")\n",
"\n",
"messages = [{\"role\": \"user\", \"content\": \"What is the current temperature of Chicago?\"}]\n",
"print(model.invoke(messages))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"See [Databricks Unity Catalog](docs/integrations/tools/databricks.ipynb) about how to use UC functions in chains."
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -4,67 +4,18 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Hugging Face\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ChatHuggingFace\n",
"# Hugging Face\n",
"\n",
"## Overview\n",
"\n",
"This notebook shows how to get started using Hugging Face LLMs as chat models.\n",
"This notebook shows how to get started using `Hugging Face` LLM's as chat models.\n",
"\n",
"In particular, we will:\n",
"1. Utilize the [HuggingFaceEndpoint](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_endpoint.py) integrations to instantiate an LLM.\n",
"1. Utilize the [HuggingFaceEndpoint](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_endpoint.py) integrations to instantiate an `LLM`.\n",
"2. Utilize the `ChatHuggingFace` class to enable any of these LLMs to interface with LangChain's [Chat Messages](/docs/concepts/#message-types) abstraction.\n",
"3. Explore tool calling with the `ChatHuggingFace`.\n",
"4. Demonstrate how to use an open-source LLM to power an `ChatAgent` pipeline\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatHuggingFace](https://api.python.langchain.com/en/latest/chat_models/langchain_huggingface.chat_models.huggingface.ChatHuggingFace.html) | [langchain-huggingface](https://api.python.langchain.com/en/latest/huggingface_api_reference.html) | ✅ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_huggingface?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_huggingface?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access Hugging Face models you'll need to create a Hugging Face account, get an API key, and install the `langchain-huggingface` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Generate a [Hugging Face Access Token](https://huggingface.co/docs/hub/security-tokens) and store it as an environment variable: `HUGGINGFACEHUB_API_TOKEN`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"HUGGINGFACEHUB_API_TOKEN\"):\n",
" os.environ[\"HUGGINGFACEHUB_API_TOKEN\"] = getpass.getpass(\"Enter your token: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"Below we install additional packages as well for demonstration purposes:"
"> Note: To get started, you'll need to have a [Hugging Face Access Token](https://huggingface.co/docs/hub/security-tokens) saved as an environment variable: `HUGGINGFACEHUB_API_TOKEN`."
]
},
{
@@ -80,7 +31,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation"
"## 1. Instantiate an LLM"
]
},
{
@@ -129,7 +80,6 @@
" max_new_tokens=512,\n",
" do_sample=False,\n",
" repetition_penalty=1.03,\n",
" return_full_text=False,\n",
" ),\n",
")"
]
@@ -168,7 +118,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Invocation"
"## 2. Instantiate the `ChatHuggingFace` to apply chat templates"
]
},
{
@@ -299,44 +249,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tool calling with `ChatHuggingFace`\n",
"## 3. Explore the tool calling with `ChatHuggingFace`\n",
"\n",
"`text-generation-inference` supports tool with open source LLMs starting from v2.0.1"
]
@@ -400,7 +313,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use with agents\n",
"## 4. Take it for a spin as an agent!\n",
"\n",
"Here we'll test out `Zephyr-7B-beta` as a zero-shot `ReAct` Agent. \n",
"\n",
@@ -545,15 +458,6 @@
"\n",
"It's exciting to see how far open-source LLM's can go as general purpose reasoning agents. Give it a try yourself!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatHuggingFace features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_huggingface.chat_models.huggingface.ChatHuggingFace.html"
]
}
],
"metadata": {
@@ -572,7 +476,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.12"
}
},
"nbformat": 4,

View File

@@ -12,87 +12,43 @@
},
{
"cell_type": "markdown",
"id": "a14c83bf-af26-4f22-8c1a-d632c5795ecf",
"id": "bf733a38-db84-4363-89e2-de6735c37230",
"metadata": {},
"source": [
"# MistralAI\n",
"\n",
"This will help you getting started with Mistral [chat models](/docs/concepts/#chat-models), accessed via their [API](https://docs.mistral.ai/api/). For detailed documentation of all ChatMistralAI features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html).\n",
"This notebook covers how to get started with MistralAI chat models, via their [API](https://docs.mistral.ai/api/).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/mistral) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatMistralAI](https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html) | [langchain_mistralai](https://api.python.langchain.com/en/latest/mistralai_api_reference.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_mistralai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_mistralai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"A valid [API key](https://console.mistral.ai/users/api-keys/) is needed to communicate with the API.\n",
"\n",
"Head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html) for detailed documentation of all attributes and methods."
]
},
{
"cell_type": "markdown",
"id": "cc686b8f",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"To access Mistral models you'll need to create a Mistral account, get an API key, and install the `langchain-mistralai` integration package.\n",
"You will need the `langchain-core` and `langchain-mistralai` package to use the API. You can install these with:\n",
"\n",
"### Credentials\n",
"```bash\n",
"pip install -U langchain-core langchain-mistralai\n",
"\n",
"A valid [API key](https://console.mistral.ai/users/api-keys/) is needed to communicate with the API. Once you've obtained an API key, store it in the `MISTRAL_API_KEY` environment variable:"
"We'll also need to get a [Mistral API key](https://console.mistral.ai/users/api-keys/)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9acd8340-09d4-4ece-871a-a35b0732c7d8",
"execution_count": 7,
"id": "c3fd4184",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"__MODULE_NAME___API_KEY\"):\n",
" os.environ[\"__MODULE_NAME___API_KEY\"] = getpass.getpass(\n",
" \"Enter your __ModuleName__ API key: \"\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "42c979b1-df49-4f6c-9fe6-d9dbf3ea8c2a",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cc4f11ec-5cb3-4caf-b3cd-7a20c41b0cfe",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0fc42221-97b2-466b-95db-10368e17ca56",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain MistralAI integration lives in the `langchain-mistralai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "85cb1ab8-9f2c-4b93-8415-ad65819dcb38",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-mistralai"
"api_key = getpass.getpass()"
]
},
{
@@ -100,76 +56,57 @@
"id": "502127fd",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "2dfa801a-d040-4c09-9634-58604e8eaf16",
"metadata": {},
"outputs": [],
"source": [
"from langchain_mistralai.chat_models import ChatMistralAI\n",
"\n",
"llm = ChatMistralAI(model=\"mistral-large-latest\")"
]
},
{
"cell_type": "markdown",
"id": "f668acff-eb14-4b3a-959a-df5bfc02968b",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "86e3f9e6-67ec-4fbf-8ff1-85331200f412",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", response_metadata={'token_usage': {'prompt_tokens': 27, 'total_tokens': 36, 'completion_tokens': 9}, 'model': 'mistral-large-latest', 'finish_reason': 'stop'}, id='run-d6196c33-9410-413b-b454-4ed0bec1f0c7-0', usage_metadata={'input_tokens': 27, 'output_tokens': 9, 'total_tokens': 36})"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8f8a24bc-b7f0-4d3a-b310-8a4e0ba125dd",
"metadata": {},
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_mistralai.chat_models import ChatMistralAI"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# If api_key is not passed, default behavior is to use the `MISTRAL_API_KEY` environment variable.\n",
"chat = ChatMistralAI(api_key=api_key)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
"data": {
"text/plain": [
"AIMessage(content=\"Who's there? I was just about to ask the same thing! How can I assist you today?\")"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"print(ai_msg.content)"
"messages = [HumanMessage(content=\"knock knock\")]\n",
"chat.invoke(messages)"
]
},
{
@@ -182,7 +119,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 10,
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"metadata": {
"tags": []
@@ -191,16 +128,16 @@
{
"data": {
"text/plain": [
"AIMessage(content=\"J'aime programmer.\", response_metadata={'token_usage': {'prompt_tokens': 27, 'total_tokens': 34, 'completion_tokens': 7}, 'model': 'mistral-large-latest', 'finish_reason': 'stop'}, id='run-1873888a-186f-49a8-ab81-24335bd3099b-0', usage_metadata={'input_tokens': 27, 'output_tokens': 7, 'total_tokens': 34})"
"AIMessage(content='Who\\'s there?\\n\\n(You can then continue the \"knock knock\" joke by saying the name of the person or character who should be responding. For example, if I say \"Banana,\" you could respond with \"Banana who?\" and I would say \"Banana bunch! Get it? Because a group of bananas is called a \\'bunch\\'!\" and then we would both laugh and have a great time. But really, you can put anything you want in the spot where I put \"Banana\" and it will still technically be a \"knock knock\" joke. The possibilities are endless!)')"
]
},
"execution_count": 4,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await llm.ainvoke(messages)"
"await chat.ainvoke(messages)"
]
},
{
@@ -213,7 +150,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 11,
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
"metadata": {
"tags": []
@@ -223,12 +160,32 @@
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore programmer."
"Who's there?\n",
"\n",
"(After this, the conversation can continue as a call and response \"who's there\" joke. Here is an example of how it could go:\n",
"\n",
"You say: Orange.\n",
"I say: Orange who?\n",
"You say: Orange you glad I didn't say banana!?)\n",
"\n",
"But since you asked for a knock knock joke specifically, here's one for you:\n",
"\n",
"Knock knock.\n",
"\n",
"Me: Who's there?\n",
"\n",
"You: Lettuce.\n",
"\n",
"Me: Lettuce who?\n",
"\n",
"You: Lettuce in, it's too cold out here!\n",
"\n",
"I hope this brings a smile to your face! Do you have a favorite knock knock joke you'd like to share? I'd love to hear it."
]
}
],
"source": [
"for chunk in llm.stream(messages):\n",
"for chunk in chat.stream(messages):\n",
" print(chunk.content, end=\"\")"
]
},
@@ -242,23 +199,23 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 12,
"id": "e63aebcb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[AIMessage(content=\"J'adore la programmation.\", response_metadata={'token_usage': {'prompt_tokens': 27, 'total_tokens': 36, 'completion_tokens': 9}, 'model': 'mistral-large-latest', 'finish_reason': 'stop'}, id='run-2aa2a189-c405-4cf5-bd31-e9025e4c8536-0', usage_metadata={'input_tokens': 27, 'output_tokens': 9, 'total_tokens': 36})]"
"[AIMessage(content=\"Who's there? I was just about to ask the same thing! Go ahead and tell me who's there. I love a good knock-knock joke.\")]"
]
},
"execution_count": 6,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.batch([messages])"
"chat.batch([messages])"
]
},
{
@@ -273,52 +230,36 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 13,
"id": "ee43a1ae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"chain = prompt | chat"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "0dc49212",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmieren.', response_metadata={'token_usage': {'prompt_tokens': 21, 'total_tokens': 28, 'completion_tokens': 7}, 'model': 'mistral-large-latest', 'finish_reason': 'stop'}, id='run-409ebc9a-b4a0-4734-ab6f-e11f6b4f808f-0', usage_metadata={'input_tokens': 21, 'output_tokens': 7, 'total_tokens': 28})"
"AIMessage(content='Why do bears hate shoes so much? They like to run around in their bear feet.')"
]
},
"execution_count": 7,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "eb7e01fb-a433-48b1-a4c2-e6009523a896",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatMistralAI features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html"
"chain.invoke({\"topic\": \"bears\"})"
]
}
],
@@ -338,7 +279,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.12"
}
},
"nbformat": 4,

View File

@@ -2,24 +2,13 @@
"cells": [
{
"cell_type": "markdown",
"id": "1f666798-8635-4bc0-a515-04d318588d67",
"metadata": {},
"id": "cc6caafa",
"metadata": {
"id": "cc6caafa"
},
"source": [
"---\n",
"sidebar_label: NVIDIA AI Endpoints\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "fa8eb20e-4db8-45e3-9e79-c595f4f274da",
"metadata": {},
"source": [
"# ChatNVIDIA\n",
"# NVIDIA NIMs\n",
"\n",
"This will help you getting started with NVIDIA [chat models](/docs/concepts/#chat-models). For detailed documentation of all `ChatNVIDIA` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html).\n",
"\n",
"## Overview\n",
"The `langchain-nvidia-ai-endpoints` package contains LangChain integrations building applications with models on \n",
"NVIDIA NIM inference microservice. NIM supports models across domains like chat, embedding, and re-ranking models \n",
"from the community as well as NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA \n",
@@ -35,66 +24,7 @@
"\n",
"This example goes over how to use LangChain to interact with NVIDIA supported via the `ChatNVIDIA` class.\n",
"\n",
"For more information on accessing the chat models through this api, check out the [ChatNVIDIA](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/) documentation.\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatNVIDIA](https://api.python.langchain.com/en/latest/chat_models/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html) | [langchain_nvidia_ai_endpoints](https://api.python.langchain.com/en/latest/nvidia_ai_endpoints_api_reference.html) | ✅ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_nvidia_ai_endpoints?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_nvidia_ai_endpoints?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"**To get started:**\n",
"\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.\n",
"\n",
"2. Click on your model of choice.\n",
"\n",
"3. Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.\n",
"\n",
"4. Copy and save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints.\n",
"\n",
"### Credentials\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "208b72da-1535-4249-bbd3-2500028e25e9",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"NVIDIA_API_KEY\"):\n",
" # Note: the API key should start with \"nvapi-\"\n",
" os.environ[\"NVIDIA_API_KEY\"] = getpass.getpass(\"Enter your NVIDIA API key: \")"
]
},
{
"cell_type": "markdown",
"id": "52dc8dcb-0a48-4a4e-9947-764116d2ffd4",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2cd9cb12-6ca5-432a-9e42-8a57da073c7e",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
"For more information on accessing the chat models through this api, check out the [ChatNVIDIA](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/) documentation."
]
},
{
@@ -102,9 +32,7 @@
"id": "f2be90a9",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain NVIDIA AI Endpoints integration lives in the `langchain_nvidia_ai_endpoints` package:"
"## Installation"
]
},
{
@@ -117,14 +45,51 @@
"%pip install --upgrade --quiet langchain-nvidia-ai-endpoints"
]
},
{
"cell_type": "markdown",
"id": "ccff689e",
"metadata": {
"id": "ccff689e"
},
"source": [
"## Setup\n",
"\n",
"**To get started:**\n",
"\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.\n",
"\n",
"2. Click on your model of choice.\n",
"\n",
"3. Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.\n",
"\n",
"4. Copy and save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "686c4d2f",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# del os.environ['NVIDIA_API_KEY'] ## delete key and reset\n",
"if os.environ.get(\"NVIDIA_API_KEY\", \"\").startswith(\"nvapi-\"):\n",
" print(\"Valid NVIDIA_API_KEY already in environment. Delete to reset\")\n",
"else:\n",
" nvapi_key = getpass.getpass(\"NVAPI Key (starts with nvapi-): \")\n",
" assert nvapi_key.startswith(\"nvapi-\"), f\"{nvapi_key[:5]}... is not a valid key\"\n",
" os.environ[\"NVIDIA_API_KEY\"] = nvapi_key"
]
},
{
"cell_type": "markdown",
"id": "af0ce26b",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can access models in the NVIDIA API Catalog:"
"## Working with NVIDIA API Catalog"
]
},
{
@@ -143,24 +108,7 @@
"## Core LC Chat Interface\n",
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"llm = ChatNVIDIA(model=\"mistralai/mixtral-8x7b-instruct-v0.1\")"
]
},
{
"cell_type": "markdown",
"id": "469c8c7f-de62-457f-a30f-674763a8b717",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9512c81b-1f3a-4eca-9470-f52cedff5c74",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatNVIDIA(model=\"mistralai/mixtral-8x7b-instruct-v0.1\")\n",
"result = llm.invoke(\"Write a ballad about LangChain.\")\n",
"print(result.content)"
]
@@ -682,55 +630,6 @@
"source": [
"See [How to use chat models to call tools](https://python.langchain.com/v0.2/docs/how_to/tool_calling/) for additional examples."
]
},
{
"cell_type": "markdown",
"id": "a9a3c438-121d-46eb-8fb5-b8d5a13cd4a4",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "af585c6b-fe0a-4833-9860-a4209a71b3c6",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "f2f25dd3-0b4a-465f-a53e-95521cdc253c",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `ChatNVIDIA` features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html"
]
}
],
"metadata": {
@@ -752,7 +651,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.13"
}
},
"nbformat": 4,

View File

@@ -12,83 +12,14 @@
},
{
"cell_type": "markdown",
"id": "8f82e243-f4ee-44e2-b417-099b6401ae3e",
"id": "eb7e5679-aa06-47e4-a1a3-b6b70e604017",
"metadata": {},
"source": [
"# vLLM Chat\n",
"\n",
"vLLM can be deployed as a server that mimics the OpenAI API protocol. This allows vLLM to be used as a drop-in replacement for applications using OpenAI API. This server can be queried in the same format as OpenAI API.\n",
"\n",
"## Overview\n",
"This will help you getting started with vLLM [chat models](/docs/concepts/#chat-models), which leverage the `langchain-openai` package. For detailed documentation of all `ChatOpenAI` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html).\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [langchain_openai](https://api.python.langchain.com/en/latest/langchain_openai.html) | ✅ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_openai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_openai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"Specific model features-- such as tool calling, support for multi-modal inputs, support for token-level streaming, etc.-- will depend on the hosted model.\n",
"\n",
"## Setup\n",
"\n",
"See the vLLM docs [here](https://docs.vllm.ai/en/latest/).\n",
"\n",
"To access vLLM models through LangChain, you'll need to install the `langchain-openai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Authentication will depend on specifics of the inference server."
]
},
{
"cell_type": "markdown",
"id": "c3b1707a-cf2c-4367-94e3-436c43402503",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1e40bd5e-cbaa-41ef-aaf9-0858eb207184",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0739b647-609b-46d3-bdd3-e86fe4463288",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain vLLM integration can be accessed via the `langchain-openai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7afcfbdc-56aa-4529-825a-8acbe7aa5241",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "2cf576d6-7b67-4937-bf99-39071e85720c",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
"This notebook covers how to get started with vLLM chat models using langchain's `ChatOpenAI` **as it is**."
]
},
{
@@ -120,7 +51,7 @@
"source": [
"inference_server_url = \"http://localhost:8000/v1\"\n",
"\n",
"llm = ChatOpenAI(\n",
"chat = ChatOpenAI(\n",
" model=\"mosaicml/mpt-7b\",\n",
" openai_api_key=\"EMPTY\",\n",
" openai_api_base=inference_server_url,\n",
@@ -129,14 +60,6 @@
")"
]
},
{
"cell_type": "markdown",
"id": "34b18328-5e8b-4ff2-9b89-6fbb76b5c7f0",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 15,
@@ -165,66 +88,82 @@
" content=\"Translate the following sentence from English to Italian: I love programming.\"\n",
" ),\n",
"]\n",
"llm.invoke(messages)"
"chat(messages)"
]
},
{
"cell_type": "markdown",
"id": "a580a1e4-11a3-4277-bfba-bfb414ac7201",
"id": "55fc7046-a6dc-4720-8c0c-24a6db76a4f4",
"metadata": {},
"source": [
"## Chaining\n",
"You can make use of templating by using a `MessagePromptTemplate`. You can build a `ChatPromptTemplate` from one or more `MessagePromptTemplates`. You can use ChatPromptTemplate's format_prompt -- this returns a `PromptValue`, which you can convert to a string or `Message` object, depending on whether you want to use the formatted value as input to an llm or chat model.\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
"For convenience, there is a `from_template` method exposed on the template. If you were to use this template, this is what it would look like:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "123980e9-0dee-4ce5-bde6-d964dd90129c",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"template = (\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
")\n",
"system_message_prompt = SystemMessagePromptTemplate.from_template(template)\n",
"human_template = \"{text}\"\n",
"human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "b2fb8c59-8892-4270-85a2-4f8ab276b75d",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' I love programming too.', additional_kwargs={}, example=False)"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat_prompt = ChatPromptTemplate.from_messages(\n",
" [system_message_prompt, human_message_prompt]\n",
")\n",
"\n",
"# get a chat completion from the formatted messages\n",
"chat(\n",
" chat_prompt.format_prompt(\n",
" input_language=\"English\", output_language=\"Italian\", text=\"I love programming.\"\n",
" ).to_messages()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dd0f4043-48bd-4245-8bdb-e7669666a277",
"id": "0bbd9861-2b94-4920-8708-b690004f4c4d",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "265f5d51-0a76-4808-8d13-ef598ee6e366",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all features and configurations exposed via `langchain-openai`, head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html\n",
"\n",
"Refer to the vLLM [documentation](https://docs.vllm.ai/en/latest/) as well."
]
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "conda_pytorch_p310",
"language": "python",
"name": "python3"
"name": "conda_pytorch_p310"
},
"language_info": {
"codemirror_mode": {
@@ -236,7 +175,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.12"
}
},
"nbformat": 4,

View File

@@ -31,8 +31,7 @@
"### Local Partitioning (Optional)\n",
"\n",
"By default, `langchain-unstructured` installs a smaller footprint that requires\n",
"offloading of the partitioning logic to the Unstructured API, which requires an `api_key`. For\n",
"partitioning using the API, refer to the Unstructured API section below.\n",
"offloading of the partitioning logic to the Unstructured API.\n",
"\n",
"If you would like to run the partitioning logic locally, you will need to install\n",
"a combination of system dependencies, as outlined in the \n",
@@ -359,9 +358,8 @@
"Partitioning with the Unstructured API relies on the [Unstructured SDK\n",
"Client](https://docs.unstructured.io/api-reference/api-services/sdk).\n",
"\n",
"Below is an example showing how you can customize some features of the client and use your own `requests.Session()`, pass in an alternative `server_url`, or customize the `RetryConfig` object for more control over how failed requests are handled.\n",
"\n",
"Note that the example below may not use the latest version of the UnstructuredClient and there could be breaking changes in future releases. For the latest examples, refer to the [Unstructured Python SDK](https://docs.unstructured.io/api-reference/api-services/sdk-python) docs."
"Below is an example showing how you can customize some features of the client and use your own\n",
"`requests.Session()`, pass in an alternative `server_url`, or customize the `RetryConfig` object for more control over how failed requests are handled."
]
},
{

View File

@@ -108,7 +108,7 @@
"metadata": {},
"outputs": [],
"source": [
"model = Cohere(max_tokens=256, temperature=0.75)"
"model = Cohere(model=\"command\", max_tokens=256, temperature=0.75)"
]
},
{

View File

@@ -46,55 +46,6 @@ print(llm.invoke("Come up with a pet name"))
```
Usage of the Cohere (legacy) [LLM](/docs/integrations/llms/cohere)
### Tool calling
```python
from langchain_cohere import ChatCohere
from langchain_core.messages import (
HumanMessage,
ToolMessage,
)
from langchain_core.tools import tool
@tool
def magic_function(number: int) -> int:
"""Applies a magic operation to an integer
Args:
number: Number to have magic operation performed on
"""
return number + 10
def invoke_tools(tool_calls, messages):
for tool_call in tool_calls:
selected_tool = {"magic_function":magic_function}[
tool_call["name"].lower()
]
tool_output = selected_tool.invoke(tool_call["args"])
messages.append(ToolMessage(tool_output, tool_call_id=tool_call["id"]))
return messages
tools = [magic_function]
llm = ChatCohere()
llm_with_tools = llm.bind_tools(tools=tools)
messages = [
HumanMessage(
content="What is the value of magic_function(2)?"
)
]
res = llm_with_tools.invoke(messages)
while res.tool_calls:
messages.append(res)
messages = invoke_tools(res.tool_calls, messages)
res = llm_with_tools.invoke(messages)
print(res.content)
```
Tool calling with the Cohere LLM can be done by binding the necessary tools to the LLM as seen above.
An alternative is to use multi-hop tool calling with the ReAct agent, as seen below.
### ReAct Agent
The agent is based on the paper
@@ -126,7 +77,6 @@ agent_executor.invoke({
"input": "In what year was the company that was founded as Sound of Music added to the S&P 500?",
})
```
The ReAct agent can be used to call multiple tools in sequence.
### RAG Retriever

View File

@@ -34,7 +34,8 @@
},
"outputs": [],
"source": [
"from langchain_cohere import ChatCohere, CohereRagRetriever\n",
"from langchain_cohere import ChatCohere\n",
"from langchain_community.retrievers import CohereRagRetriever\n",
"from langchain_core.documents import Document"
]
},
@@ -199,7 +200,7 @@
"source": [
"docs = rag.invoke(\n",
" \"Does langchain support cohere RAG?\",\n",
" documents=[\n",
" source_documents=[\n",
" Document(page_content=\"Langchain supports cohere RAG!\"),\n",
" Document(page_content=\"The sky is blue!\"),\n",
" ],\n",
@@ -207,14 +208,6 @@
"_pretty_print(docs)"
]
},
{
"cell_type": "markdown",
"id": "45a9470f",
"metadata": {},
"source": [
"Please note that connectors and documents cannot be used simultaneously. If you choose to provide documents in the `invoke` method, they will take precedence, and connectors will not be utilized for that particular request, as shown in the snippet above!"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -1,27 +0,0 @@
---
sidebar_position: 0
sidebar_class_name: hidden
---
# Retrievers
A **retriever** is an interface that returns documents given an unstructured query.
It is more general than a vector store.
A retriever does not need to be able to store documents, only to return (or retrieve) them.
Retrievers can be created from vector stores, but are also broad enough to include [Wikipedia search](/docs/integrations/retrievers/wikipedia/) and [Amazon Kendra](/docs/integrations/retrievers/amazon_kendra_retriever/).
Retrievers accept a string query as input and return a list of `Document`s as output.
For specifics on how to use retrievers, see the [relevant how-to guides here](/docs/how_to/#retrievers).
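As a minimal sketch of that interface (a toy retriever for illustration, not one of the built-in integrations), a custom retriever only needs to implement `_get_relevant_documents` and can then be called with `invoke`:

```python
from typing import List

from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever


class ToyRetriever(BaseRetriever):
    """Toy retriever that returns a single canned document for any query."""

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        return [Document(page_content=f"Stub result for: {query}")]


docs = ToyRetriever().invoke("What is a retriever?")
print(docs[0].page_content)
```
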
This table lists common retrievers.
| Retriever | Namespace | Native async | Local |
|-----------|-----------|---------------|------|
| [AmazonKnowledgeBasesRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_aws.retrievers.bedrock.AmazonKnowledgeBasesRetriever.html) | langchain_aws.retrievers | ❌ | ❌ |
| [AzureAISearchRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.azure_ai_search.AzureAISearchRetriever.html) | langchain_community.retrievers | ✅ | ❌ |
| [ElasticsearchRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_elasticsearch.retrievers.ElasticsearchRetriever.html) | langchain_elasticsearch | ❌ | ❌ |
| [MilvusCollectionHybridSearchRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html) | langchain_milvus | ❌ | ❌ |
| [TavilySearchAPIRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.tavily_search_api.TavilySearchAPIRetriever.html) | langchain_community.retrievers | ❌ | ❌ |
| [VertexAISearchRetriever](https://api.python.langchain.com/en/latest/vertex_ai_search/langchain_google_community.vertex_ai_search.VertexAISearchRetriever.html) | langchain_google_community.vertex_ai_search | ❌ | ❌ |

View File

@@ -4,70 +4,20 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"sidebar_label: TavilySearchAPI\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# TavilySearchAPIRetriever\n",
"# Tavily Search API\n",
"\n",
"## Overview\n",
">[Tavily's Search API](https://tavily.com) is a search engine built specifically for AI agents (LLMs), delivering real-time, accurate, and factual results at speed.\n",
"\n",
"We can use this as a [retriever](/docs/how_to#retrievers). It will show functionality specific to this integration. After going through, it may be useful to explore [relevant use-case pages](/docs/how_to#qa-with-rag) to learn how to use this vectorstore as part of a larger chain.\n",
"\n",
"### Integration details\n",
"## Setup\n",
"\n",
"| Retriever | Namespace | Native async | Local |\n",
"| :--- | :--- | :---: | :---: |\n",
"[TavilySearchAPIRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.tavily_search_api.TavilySearchAPIRetriever.html) | langchain_community.retrievers | ❌ | ❌ |\n",
"The integration lives in the `langchain-community` package. We also need to install the `tavily-python` package itself.\n",
"\n",
"## Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"```bash\n",
"pip install -U langchain-community tavily-python\n",
"```\n",
"\n",
"The integration lives in the `langchain-community` package. We also need to install the `tavily-python` package itself."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community tavily-python"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We also need to set our Tavily API key."
]
},
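{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch for doing so (assuming the standard `TAVILY_API_KEY` environment variable) could look like:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# Prompt for the Tavily API key if it is not already set in the environment\n",
"if not os.environ.get(\"TAVILY_API_KEY\"):\n",
"    os.environ[\"TAVILY_API_KEY\"] = getpass.getpass(\"Enter your Tavily API key: \")"
]
},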
@@ -87,20 +37,17 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our retriever:"
"It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.retrievers import TavilySearchAPIRetriever\n",
"\n",
"retriever = TavilySearchAPIRetriever(k=3)"
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
@@ -112,40 +59,42 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(metadata={'title': 'The Legend of Zelda: Breath of the Wild - Nintendo Switch Wiki', 'source': 'https://nintendo-switch.fandom.com/wiki/The_Legend_of_Zelda:_Breath_of_the_Wild', 'score': 0.9961155, 'images': []}, page_content='The Legend of Zelda: Breath of the Wild is an open world action-adventure game published by Nintendo for the Wii U and as a launch title for the Nintendo Switch, and was released worldwide on March 3, 2017. It is the nineteenth installment of the The Legend of Zelda series and the first to be developed with a HD resolution. The game features a gigantic open world, with the player being able to ...'),\n",
" Document(metadata={'title': 'The Legend of Zelda: Breath of the Wild - Zelda Wiki', 'source': 'https://zelda.fandom.com/wiki/The_Legend_of_Zelda:_Breath_of_the_Wild', 'score': 0.9804313, 'images': []}, page_content='[]\\nReferences\\nThe Legend of Zelda \\xa0·\\nThe Adventure of Link \\xa0·\\nA Link to the Past (& Four Swords) \\xa0·\\nLink\\'s Awakening (DX; Nintendo Switch) \\xa0·\\nOcarina of Time (Master Quest; 3D) \\xa0·\\nMajora\\'s Mask (3D) \\xa0·\\nOracle of Ages \\xa0·\\nOracle of Seasons \\xa0·\\nFour Swords (Anniversary Edition) \\xa0·\\nThe Wind Waker (HD) \\xa0·\\nFour Swords Adventures \\xa0·\\nThe Minish Cap \\xa0·\\nTwilight Princess (HD) \\xa0·\\nPhantom Hourglass \\xa0·\\nSpirit Tracks \\xa0·\\nSkyward Sword (HD) \\xa0·\\nA Link Between Worlds \\xa0·\\nTri Force Heroes \\xa0·\\nBreath of the Wild \\xa0·\\nTears of the Kingdom\\nZelda (Game & Watch) \\xa0·\\nThe Legend of Zelda Game Watch \\xa0·\\nLink\\'s Crossbow Training \\xa0·\\nMy Nintendo Picross: Twilight Princess \\xa0·\\nCadence of Hyrule \\xa0·\\nGame & Watch: The Legend of Zelda\\nCD-i Games\\n Listings[]\\nCharacters[]\\nBosses[]\\nEnemies[]\\nDungeons[]\\nLocations[]\\nItems[]\\nTranslations[]\\nCredits[]\\nReception[]\\nSales[]\\nEiji Aonuma and Hidemaro Fujibayashi accepting the \"Game of the Year\" award for Breath of the Wild at The Game Awards 2017\\nBreath of the Wild was estimated to have sold approximately 1.3 million copies in its first three weeks and around 89% of Switch owners were estimated to have also purchased the game.[52] Sales of the game have remained strong and as of June 30, 2022, the Switch version has sold 27.14 million copies worldwide while the Wii U version has sold 1.69 million copies worldwide as of December 31, 2019,[53][54] giving Breath of the Wild a cumulative total of 28.83 million copies sold.\\n It also earned a Metacritic score of 97 from more than 100 critics, placing it among the highest-rated games of all time.[59][60] Notably, the game received the most perfect review scores for any game listed on Metacritic up to that point.[61]\\nIn 2022, Breath of the Wild was chosen as the best Legend of Zelda game of all time in their \"Top 10 Best Zelda Games\" list countdown; but was then placed as the \"second\" best Zelda game in their new revamped version of their \"Top 10 Best Zelda Games\" list in 2023, right behind it\\'s successor Tears of Video Game Canon ranks Breath of the Wild as one of the best video games of all time.[74] Metacritic ranked Breath of the Wild as the single best game of the 2010s.[75]\\nFan Reception[]\\nWatchMojo placed Breath of the Wild at the #2 spot in their \"Top 10 Legend of Zelda Games of All Time\" list countdown, right behind Ocarina of Time.[76] The Faces of Evil \\xa0·\\nThe Wand of Gamelon \\xa0·\\nZelda\\'s Adventure\\nHyrule Warriors Series\\nHyrule Warriors (Legends; Definitive Edition) \\xa0·\\nHyrule Warriors: Age of Calamity\\nSatellaview Games\\nBS The Legend of Zelda \\xa0·\\nAncient Stone Tablets\\nTingle Series\\nFreshly-Picked Tingle\\'s Rosy Rupeeland \\xa0·\\nTingle\\'s Balloon Fight DS \\xa0·\\n'),\n",
" Document(metadata={'title': 'The Legend of Zelda: Breath of the Wild - Zelda Wiki', 'source': 'https://zeldawiki.wiki/wiki/The_Legend_of_Zelda:_Breath_of_the_Wild', 'score': 0.9627432, 'images': []}, page_content='The Legend of Zelda\\xa0•\\nThe Adventure of Link\\xa0•\\nA Link to the Past (& Four Swords)\\xa0•\\nLink\\'s Awakening (DX; Nintendo Switch)\\xa0•\\nOcarina of Time (Master Quest; 3D)\\xa0•\\nMajora\\'s Mask (3D)\\xa0•\\nOracle of Ages\\xa0•\\nOracle of Seasons\\xa0•\\nFour Swords (Anniversary Edition)\\xa0•\\nThe Wind Waker (HD)\\xa0•\\nFour Swords Adventures\\xa0•\\nThe Minish Cap\\xa0•\\nTwilight Princess (HD)\\xa0•\\nPhantom Hourglass\\xa0•\\nSpirit Tracks\\xa0•\\nSkyward Sword (HD)\\xa0•\\nA Link Between Worlds\\xa0•\\nTri Force Heroes\\xa0•\\nBreath of the Wild\\xa0•\\nTears of the Kingdom\\nZelda (Game & Watch)\\xa0•\\nThe Legend of Zelda Game Watch\\xa0•\\nHeroes of Hyrule\\xa0•\\nLink\\'s Crossbow Training\\xa0•\\nMy Nintendo Picross: Twilight Princess\\xa0•\\nCadence of Hyrule\\xa0•\\nVermin\\nThe Faces of Evil\\xa0•\\nThe Wand of Gamelon\\xa0•\\nZelda\\'s Adventure\\nHyrule Warriors (Legends; Definitive Edition)\\xa0•\\nHyrule Warriors: Age of Calamity\\nBS The Legend of Zelda\\xa0•\\nAncient Stone Tablets\\nFreshly-Picked Tingle\\'s Rosy Rupeeland\\xa0•\\nTingle\\'s Balloon Fight DS\\xa0•\\nToo Much Tingle Pack\\xa0•\\nRipened Tingle\\'s Balloon Trip of Love\\nSoulcalibur II\\xa0•\\nWarioWare Series\\xa0•\\nCaptain Rainbow\\xa0•\\nNintendo Land\\xa0•\\nScribblenauts Unlimited\\xa0•\\nMario Kart 8\\xa0•\\nSplatoon 3\\nSuper Smash Bros (Series)\\nSuper Smash Bros.\\xa0•\\nSuper Smash Bros. Melee\\xa0•\\nSuper Smash Bros. Brawl\\xa0•\\nSuper Smash Bros. for Nintendo 3DS / Wii U\\xa0•\\n It also earned a Metacritic score of 97 from more than 100 critics, placing it among the highest-rated games of all time.[60][61] Notably, the game received the most perfect review scores for any game listed on Metacritic up to that point.[62]\\nAwards\\nThroughout 2016, Breath of the Wild won several awards as a highly anticipated game, including IGN\\'s and Destructoid\\'s Best of E3,[63][64] at the Game Critic Awards 2016,[65] and at The Game Awards 2016.[66] Following its release, Breath of the Wild received the title of \"Game of the Year\" from the Japan Game Awards 2017,[67] the Golden Joystick Awards 2017,<ref\"Our final award is for the Ultimate Game of the Year. Official website(s)\\nOfficial website(s)\\nCanonicity\\nCanonicity\\nCanon[citation needed]\\nPredecessor\\nPredecessor\\nTri Force Heroes\\nSuccessor\\nSuccessor\\nTears of the Kingdom\\nThe Legend of Zelda: Breath of the Wild guide at StrategyWiki\\nBreath of the Wild Guide at Zelda Universe\\nThe Legend of Zelda: Breath of the Wild is the nineteenth main installment of The Legend of Zelda series. 
Listings\\nCharacters\\nBosses\\nEnemies\\nDungeons\\nLocations\\nItems\\nTranslations\\nCredits\\nReception\\nSales\\nBreath of the Wild was estimated to have sold approximately 1.3 million copies in its first three weeks and around 89% of Switch owners were estimated to have also purchased the game.[53] Sales of the game have remained strong and as of September 30, 2023, the Switch version has sold 31.15 million copies worldwide while the Wii U version has sold 1.7 million copies worldwide as of December 31, 2021,[54][55] giving Breath of the Wild a cumulative total of 32.85 million copies sold.\\n The Legend of Zelda: Breath of the Wild\\nThe Legend of Zelda: Breath of the Wild\\nThe Legend of Zelda: Breath of the Wild\\nDeveloper(s)\\nDeveloper(s)\\nPublisher(s)\\nPublisher(s)\\nNintendo\\nDesigner(s)\\nDesigner(s)\\n')]"
"[Document(page_content='Trending topics\\nTrending topics\\nThe Legend of Zelda: Breath of the Wild\\nSelect a product\\nThe Legend of Zelda™: Breath of the Wild\\nThe Legend of Zelda™: Breath of the Wild\\nThe Legend of Zelda™: Breath of the Wild and The Legend of Zelda™: Breath of the Wild Expansion Pass Bundle\\nThis item will be sent to your system automatically after purchase or Nintendo Switch Game Voucher redemption. The Legend of Zelda: Breath of the Wild Expansion Pass\\nMore like this\\nSuper Mario Odyssey™\\nThe Legend of Zelda™: Tears of the Kingdom\\nMario + Rabbids® Kingdom Battle\\nThe Legend of Zelda™: Links Awakening\\nHollow Knight\\nThe Legend of Zelda™: Skyward Sword HD\\nStarlink: Battle for Atlas™ Digital Edition\\nDRAGON QUEST BUILDERS™ 2\\nDragon Quest Builders™\\nWARNING: If you have epilepsy or have had seizures or other unusual reactions to flashing lights or patterns, consult a doctor before playing video games. Saddle up with a herd of horse-filled games!\\nESRB rating\\nSupported play modes\\nTV\\nTabletop\\nHandheld\\nProduct information\\nRelease date\\nNo. of players\\nGenre\\nPublisher\\nESRB rating\\nSupported play modes\\nGame file size\\nSupported languages\\nPlay online, access classic NES™ and Super NES™ games, and more with a Nintendo Switch Online membership.\\n Two Game Boy games are now available for Nintendo Switch Online members\\n02/01/23\\nNintendo Switch Online member exclusive: Save on two digital games\\n09/13/22\\nOut of the Shadows … the Legend of Zelda: About Nintendo\\nShop\\nMy Nintendo Store orders\\nSupport\\nParents\\nCommunity\\nPrivacy\\n© Nintendo.', metadata={'title': 'The Legend of Zelda™: Breath of the Wild - Nintendo', 'source': 'https://www.nintendo.com/us/store/products/the-legend-of-zelda-breath-of-the-wild-switch/', 'score': 0.97451, 'images': None}),\n",
" Document(page_content='The Legend of Zelda: Breath of the Wild is a masterpiece of open-world design and exploration, released on March 3, 2017 for Nintendo Switch. Find out the latest news, reviews, guides, videos, and more for this award-winning game on IGN.', metadata={'title': 'The Legend of Zelda: Breath of the Wild - IGN', 'source': 'https://www.ign.com/games/the-legend-of-zelda-breath-of-the-wild', 'score': 0.94496, 'images': None}),\n",
" Document(page_content='Reviewers also commented on the unexpected permutations of interactions between Link, villagers, pets, and enemies,[129][130][131] many of which were shared widely on social media.[132] A tribute to former Nintendo president Satoru Iwata, who died during development, also attracted praise.[129][134]\\nJim Sterling was more critical than most, giving Breath of the Wild a 7/10 score, criticizing the difficulty, weapon durability, and level design, but praising the open world and variety of content.[135] Other criticism focused on the unstable frame rate and the low resolution of 900p;[136] updates addressed some of these problems.[137][138]\\nSales\\nBreath of the Wild broke sales records for a Nintendo launch game in multiple regions.[139][140] In Japan, the Switch and Wii U versions sold a combined 230,000 copies in the first week of release, with the Switch version becoming the top-selling game released that week.[141] Nintendo reported that Breath of the Wild sold more than one million copies in the US that month—925,000 of which were for Switch, outselling the Switch itself.[145][146][147][148] Nintendo president Tatsumi Kimishima said that the attach rate on the Switch was \"unprecedented\".[149] Breath of the Wild had sold 31.15 million copies on the Switch by September 2023 and 1.70 million copies on the Wii U by December 2020.[150][151]\\nAwards\\nFollowing its demonstration at E3 2016, Breath of the Wild received several accolades from the Game Critics Awards[152] and from publications such as IGN and Destructoid.[153][154] It was listed among the best games at E3 by Eurogamer,[81] The game, he continued, would challenge the series\\' conventions, such as the requirement that players complete dungeons in a set order.[2][73] The next year, Nintendo introduced the game\\'s high-definition, cel-shaded visual style with in-game footage at its E3 press event.[74][75] Once planned for release in 2015, the game was delayed early in the year and did not show at that year\\'s E3.[76][77] Zelda series creator Shigeru Miyamoto reaffirmed that the game would still release for the Wii U despite the development of Nintendo\\'s next console, the Nintendo Switch.[78] The Switch version also has higher-quality environmental sounds.[53][54] Certain ideas that were planned for the game, like flying and underground dungeons were not implemented due to the Wii Us limitations; they would eventually resurface in the game\\'s sequel.[55] Aonuma stated that the art design was inspired by gouache and en plein air art to help identify the vast world.[56] Takizawa has also cited the Jōmon period as an inspiration for the ancient Sheikah technology and architecture that is found in the game, due to the mystery surrounding the period.[57] Journalists commented on unexpected interactions between game elements,[129][130][131] with serendipitous moments proving popular on social media.[132] Chris Plante of The Verge predicted that whereas prior open-world games tended to feature prescribed challenges, Zelda would influence a new generation of games with open-ended problem-solving.[132] Digital Trends wrote that the game\\'s level of experimentation allowed players to interact with and exploit the environment in creative ways, resulting in various \"tricks\" still discovered years after release.[127]\\nReviewers lauded the sense of detail and immersion.[133][129] Kotaku recommended turning off UI elements in praise of the indirect cues that contextually indicate the same information, such 
as Link shivering in the cold or waypoints appearing when using the scope.[133]', metadata={'title': 'The Legend of Zelda: Breath of the Wild - Wikipedia', 'source': 'https://en.wikipedia.org/wiki/The_Legend_of_Zelda:_Breath_of_the_Wild', 'score': 0.93348, 'images': None})]"
]
},
"execution_count": 2,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"what year was breath of the wild released?\"\n",
"from langchain_community.retrievers import TavilySearchAPIRetriever\n",
"\n",
"retriever.invoke(query)"
"retriever = TavilySearchAPIRetriever(k=3)\n",
"\n",
"retriever.invoke(\"what year was breath of the wild released?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use within a chain\n",
"## Chaining\n",
"\n",
"We can easily combine this retriever in to a chain."
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
@@ -161,50 +110,40 @@
"\n",
"Question: {question}\"\"\"\n",
")\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" RunnablePassthrough.assign(context=(lambda x: x[\"question\"]) | retriever)\n",
" | prompt\n",
" | llm\n",
" | ChatOpenAI(model=\"gpt-4-1106-preview\")\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'As of August 2020, The Legend of Zelda: Breath of the Wild had sold over 20.1 million copies worldwide on Nintendo Switch and Wii U.'"
"'As of the end of 2020, \"The Legend of Zelda: Breath of the Wild\" sold over 21.45 million copies worldwide.'"
]
},
"execution_count": 4,
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"how many units did bretch of the wild sell in 2020\")"
"chain.invoke({\"question\": \"how many units did bretch of the wild sell in 2020\"})"
]
},
{
"cell_type": "markdown",
"cell_type": "code",
"execution_count": null,
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `TavilySearchAPIRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.tavily_search_api.TavilySearchAPIRetriever.html)."
]
"outputs": [],
"source": []
}
],
"metadata": {
@@ -223,7 +162,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.12"
}
},
"nbformat": 4,

View File

@@ -2,14 +2,10 @@
"cells": [
{
"cell_type": "raw",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"metadata": {},
"source": [
"---\n",
"sidebar_label: AstraDB\n",
"sidebar_label: Astra DB\n",
"---"
]
},
@@ -17,48 +13,55 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# AstraDBByteStore\n",
"\n",
"This will help you get started with Astra DB [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `AstraDBByteStore` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/storage/langchain_astradb.storage.AstraDBByteStore.html).\n",
"\n",
"## Overview\n",
"# Astra DB\n",
"\n",
"DataStax [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless vector-capable database built on Cassandra and made conveniently available through an easy-to-use JSON API.\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [AstraDBByteStore](https://api.python.langchain.com/en/latest/storage/langchain_astradb.storage.AstraDBByteStore.html) | [langchain_astradb](https://api.python.langchain.com/en/latest/astradb_api_reference.html) | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_astradb?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_astradb?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",
"\n",
"To create an `AstraDBByteStore` byte store, you'll need to [create a DataStax account](https://www.datastax.com/products/datastax-astra).\n",
"\n",
"### Credentials\n",
"\n",
"After signing up, set the following credentials:"
"`AstraDBStore` and `AstraDBByteStore` need the `astrapy` package to be installed:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from getpass import getpass\n",
"\n",
"ASTRA_DB_API_ENDPOINT = getpass(\"ASTRA_DB_API_ENDPOINT = \")\n",
"ASTRA_DB_APPLICATION_TOKEN = getpass(\"ASTRA_DB_APPLICATION_TOKEN = \")"
"%pip install --upgrade --quiet astrapy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"The Store takes the following parameters:\n",
"\n",
"The LangChain AstraDB integration lives in the `langchain_astradb` package:"
"* `api_endpoint`: Astra DB API endpoint. Looks like `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`\n",
"* `token`: Astra DB token. Looks like `AstraCS:6gBhNmsk135....`\n",
"* `collection_name` : Astra DB collection name\n",
"* `namespace`: (Optional) Astra DB namespace"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## AstraDBStore\n",
"\n",
"The `AstraDBStore` is an implementation of `BaseStore` that stores everything in your DataStax Astra DB instance.\n",
"The store keys must be strings and will be mapped to the `_id` field of the Astra DB document.\n",
"The store values can be any object that can be serialized by `json.dumps`.\n",
"In the database, entries will have the form:\n",
"\n",
"```json\n",
"{\n",
" \"_id\": \"<key>\",\n",
" \"value\": <value>\n",
"}\n",
"```"
]
},
{
@@ -67,71 +70,73 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_astradb"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our byte store:"
"from langchain_community.storage import AstraDBStore"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_astradb import AstraDBByteStore\n",
"from getpass import getpass\n",
"\n",
"kv_store = AstraDBByteStore(\n",
"ASTRA_DB_API_ENDPOINT = input(\"ASTRA_DB_API_ENDPOINT = \")\n",
"ASTRA_DB_APPLICATION_TOKEN = getpass(\"ASTRA_DB_APPLICATION_TOKEN = \")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"store = AstraDBStore(\n",
" api_endpoint=ASTRA_DB_API_ENDPOINT,\n",
" token=ASTRA_DB_APPLICATION_TOKEN,\n",
" collection_name=\"my_store\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['v1', [0.1, 0.2, 0.3]]\n"
]
}
],
"source": [
"store.mset([(\"k1\", \"v1\"), (\"k2\", [0.1, 0.2, 0.3])])\n",
"print(store.mget([\"k1\", \"k2\"]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"### Usage with CacheBackedEmbeddings\n",
"\n",
"You can set data under keys like this using the `mset` method:"
"You may use the `AstraDBStore` in conjunction with a [`CacheBackedEmbeddings`](/docs/how_to/caching_embeddings) to cache the result of embeddings computations.\n",
"Note that `AstraDBStore` stores the embeddings as a list of floats without converting them first to bytes so we don't use `fromByteStore` there."
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[b'value1', b'value2']"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"from langchain.embeddings import CacheBackedEmbeddings\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
"embeddings = CacheBackedEmbeddings(\n",
" underlying_embeddings=OpenAIEmbeddings(), document_embedding_store=store\n",
")"
]
},
@@ -139,67 +144,96 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"And you can delete data using the `mdelete` method:"
"## AstraDBByteStore\n",
"\n",
"The `AstraDBByteStore` is an implementation of `ByteStore` that stores everything in your DataStax Astra DB instance.\n",
"The store keys must be strings and will be mapped to the `_id` field of the Astra DB document.\n",
"The store `bytes` values are converted to base64 strings for storage into Astra DB.\n",
"In the database, entries will have the form:\n",
"\n",
"```json\n",
"{\n",
" \"_id\": \"<key>\",\n",
" \"value\": \"bytes encoded in base 64\"\n",
"}\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[None, None]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"from langchain_community.storage import AstraDBByteStore"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from getpass import getpass\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
"ASTRA_DB_API_ENDPOINT = input(\"ASTRA_DB_API_ENDPOINT = \")\n",
"ASTRA_DB_APPLICATION_TOKEN = getpass(\"ASTRA_DB_APPLICATION_TOKEN = \")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"store = AstraDBByteStore(\n",
" api_endpoint=ASTRA_DB_API_ENDPOINT,\n",
" token=ASTRA_DB_APPLICATION_TOKEN,\n",
" collection_name=\"my_store\",\n",
")"
]
},
{
"cell_type": "markdown",
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[b'v1', b'v2']\n"
]
}
],
"source": [
"You can use an `AstraDBByteStore` anywhere you'd use other ByteStores, including as a [cache for embeddings](/docs/how_to/caching_embeddings)."
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `AstraDBByteStore` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/storage/langchain_astradb.storage.AstraDBByteStore.html"
]
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"version": "3.10.5"
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,

View File

@@ -2,11 +2,7 @@
"cells": [
{
"cell_type": "raw",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"metadata": {},
"source": [
"---\n",
"sidebar_label: Cassandra\n",
@@ -17,34 +13,47 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# CassandraByteStore\n",
"\n",
"This will help you get started with Cassandra [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `CassandraByteStore` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.cassandra.CassandraByteStore.html).\n",
"\n",
"## Overview\n",
"# Cassandra\n",
"\n",
"[Cassandra](https://cassandra.apache.org/) is a NoSQL, row-oriented, highly scalable and highly available database.\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/v0.2/docs/integrations/stores/cassandra_storage) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [CassandraByteStore](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.cassandra.CassandraByteStore.html) | [langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ✅ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",
"\n",
"The `CassandraByteStore` is an implementation of `ByteStore` that stores the data in your Cassandra instance.\n",
"The store keys must be strings and will be mapped to the `row_id` column of the Cassandra table.\n",
"The store `bytes` values are mapped to the `body_blob` column of the Cassandra table."
"`CassandraByteStore` needs the `cassio` package to be installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet cassio"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"The Store takes the following parameters:\n",
"\n",
"The LangChain `CassandraByteStore` integration lives in the `langchain_community` package. You'll also need to install the `cassio` package or the `cassandra-driver` package as a peer dependency depending on which initialization method you're using:"
"* table: The table where to store the data.\n",
"* session: (Optional) The cassandra driver session. If not provided, the cassio resolved session will be used.\n",
"* keyspace: (Optional) The keyspace of the table. If not provided, the cassio resolved keyspace will be used.\n",
"* setup_mode: (Optional) The mode used to create the Cassandra table (SYNC, ASYNC or OFF). Defaults to SYNC."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## CassandraByteStore\n",
"\n",
"The `CassandraByteStore` is an implementation of `ByteStore` that stores the data in your Cassandra instance.\n",
"The store keys must be strings and will be mapped to the `row_id` column of the Cassandra table.\n",
"The store `bytes` values are mapped to the `body_blob` column of the Cassandra table."
]
},
{
@@ -53,26 +62,19 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_community\n",
"%pip install -qU cassandra-driver\n",
"%pip install -qU cassio"
"from langchain_community.storage import CassandraByteStore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You'll also need to create a `cassandra.cluster.Session` object, as described in the [Cassandra driver documentation](https://docs.datastax.com/en/developer/python-driver/latest/api/cassandra/cluster/#module-cassandra.cluster). The details vary (e.g. with network settings and authentication), but this might be something like:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"### Init from a cassandra driver Session\n",
"\n",
"You'll first need to create a `cassandra.cluster.Session` object, as described in the [Cassandra driver documentation](https://docs.datastax.com/en/developer/python-driver/latest/api/cassandra/cluster/#module-cassandra.cluster). The details vary (e.g. with network settings and authentication), but this might be something like:"
]
"You need to create a `cassandra.cluster.Session` object, as described in the [Cassandra driver documentation](https://docs.datastax.com/en/developer/python-driver/latest/api/cassandra/cluster/#module-cassandra.cluster). The details vary (e.g. with network settings and authentication), but this might be something like:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
@@ -88,10 +90,12 @@
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then you can create your store! You'll also need to provide the name of an existing keyspace of the Cassandra instance:"
]
"You need to provide the name of an existing keyspace of the Cassandra instance:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
@@ -99,94 +103,36 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.storage import CassandraByteStore\n",
"\n",
"kv_store = CassandraByteStore(\n",
" table=\"my_store\",\n",
" session=session,\n",
" keyspace=\"<YOUR KEYSPACE>\",\n",
")"
"CASSANDRA_KEYSPACE = input(\"CASSANDRA_KEYSPACE = \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"You can set data under keys like this using the `mset` method:"
]
"Creating the store:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[b'v1', b'v2']\n"
]
}
],
"source": [
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Init using `cassio`\n",
"\n",
"It's also possible to use cassio to configure the session and keyspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import cassio\n",
"\n",
"cassio.init(contact_points=\"127.0.0.1\", keyspace=\"<YOUR KEYSPACE>\")\n",
"\n",
"store = CassandraByteStore(\n",
" table=\"my_store\",\n",
" session=session,\n",
" keyspace=CASSANDRA_KEYSPACE,\n",
")\n",
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
@@ -195,23 +141,86 @@
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"### Init from cassio\n",
"\n",
"For detailed documentation of all `CassandraByteStore` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/storage/langchain_community.storage.cassandra.CassandraByteStore.html"
]
"It's also possible to use cassio to configure the session and keyspace."
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"import cassio\n",
"\n",
"cassio.init(contact_points=\"127.0.0.1\", keyspace=CASSANDRA_KEYSPACE)\n",
"\n",
"store = CassandraByteStore(\n",
" table=\"my_store\",\n",
")\n",
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"### Usage with CacheBackedEmbeddings\n",
"\n",
"You may use the `CassandraByteStore` in conjunction with a [`CacheBackedEmbeddings`](/docs/how_to/caching_embeddings) to cache the result of embeddings computations.\n"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"from langchain.embeddings import CacheBackedEmbeddings\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"cassio.init(contact_points=\"127.0.0.1\", keyspace=CASSANDRA_KEYSPACE)\n",
"\n",
"store = CassandraByteStore(\n",
" table=\"my_store\",\n",
")\n",
"\n",
"embeddings = CacheBackedEmbeddings.from_bytes_store(\n",
" underlying_embeddings=OpenAIEmbeddings(), document_embedding_cache=store\n",
")"
],
"metadata": {
"collapsed": false
}
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"version": "3.10.5"
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,

View File

@@ -2,14 +2,10 @@
"cells": [
{
"cell_type": "raw",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"metadata": {},
"source": [
"---\n",
"sidebar_label: Elasticsearch\n",
"sidebar_label: Elasticsearch \n",
"---"
]
},
@@ -19,30 +15,10 @@
"source": [
"# ElasticsearchEmbeddingsCache\n",
"\n",
"This will help you get started with Elasticsearch [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `ElasticsearchEmbeddingsCache` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html).\n",
"\n",
"## Overview\n",
"\n",
"The `ElasticsearchEmbeddingsCache` is a `ByteStore` implementation that uses your Elasticsearch instance for efficient storage and retrieval of embeddings.\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [ElasticsearchEmbeddingsCache](https://api.python.langchain.com/en/latest/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html) | [langchain_elasticsearch](https://api.python.langchain.com/en/latest/elasticsearch_api_reference.html) | ✅ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_elasticsearch?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_elasticsearch?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",
"\n",
"To create a `ElasticsearchEmbeddingsCache` byte store, you'll need an Elasticsearch cluster. You can [set one up locally](https://www.elastic.co/downloads/elasticsearch) or create an [Elastic account](https://www.elastic.co/elasticsearch)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain `ElasticsearchEmbeddingsCache` integration lives in the `__package_name__` package:"
"First install the LangChain integration with Elasticsearch."
]
},
{
@@ -51,78 +27,37 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_elasticsearch"
"%pip install -U langchain-elasticsearch"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our byte store:"
]
"source": "it can be instantiated using `CacheBackedEmbeddings.from_bytes_store` method."
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings import CacheBackedEmbeddings\n",
"from langchain_elasticsearch import ElasticsearchEmbeddingsCache\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"# Example config for a locally running Elasticsearch instance\n",
"kv_store = ElasticsearchEmbeddingsCache(\n",
" es_url=\"https://localhost:9200\",\n",
"underlying_embeddings = OpenAIEmbeddings(model=\"text-embedding-3-small\")\n",
"\n",
"store = ElasticsearchEmbeddingsCache(\n",
" es_url=\"http://localhost:9200\",\n",
" index_name=\"llm-chat-cache\",\n",
" metadata={\"project\": \"my_chatgpt_project\"},\n",
" namespace=\"my_chatgpt_project\",\n",
" es_user=\"elastic\",\n",
" es_password=\"<GENERATED PASSWORD>\",\n",
" es_params={\n",
" \"ca_certs\": \"~/http_ca.crt\",\n",
" },\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"You can set data under keys like this using the `mset` method:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[b'value1', b'value2']"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
"embeddings = CacheBackedEmbeddings.from_bytes_store(\n",
" underlying_embeddings=OpenAIEmbeddings(),\n",
" document_embedding_cache=store,\n",
" query_embedding_cache=store,\n",
")"
]
},
@@ -130,52 +65,19 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[None, None]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"The index_name parameter can also accept aliases. This allows to use the ILM: Manage the index lifecycle that we suggest to consider for managing retention and controlling cache growth.\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
"Look at the class docstring for all parameters."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use as an embeddings cache\n",
"## Index the generated vectors\n",
"The cached vectors won't be searchable by default. The developer can customize the building of the Elasticsearch document in order to add indexed vector field.\n",
"\n",
"Like other `ByteStores`, you can use an `ElasticsearchEmbeddingsCache` instance for [persistent caching in document ingestion](/docs/how_to/caching_embeddings/) for RAG.\n",
"\n",
"However, cached vectors won't be searchable by default. The developer can customize the building of the Elasticsearch document in order to add indexed vector field.\n",
"\n",
"This can be done by subclassing and overriding methods:"
"This can be done by subclassing end overriding methods. "
]
},
{
@@ -186,6 +88,8 @@
"source": [
"from typing import Any, Dict, List\n",
"\n",
"from langchain_elasticsearch import ElasticsearchEmbeddingsCache\n",
"\n",
"\n",
"class SearchableElasticsearchStore(ElasticsearchEmbeddingsCache):\n",
" @property\n",
@@ -208,29 +112,26 @@
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When overriding the mapping and the document building, please only make additive modifications, keeping the base mapping intact."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `ElasticsearchEmbeddingsCache` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html"
]
"source": "When overriding the mapping and the document building, please only make additive modifications, keeping the base mapping intact."
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"version": "3.10.5"
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,

View File

@@ -2,14 +2,11 @@
"cells": [
{
"cell_type": "raw",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"metadata": {},
"source": [
"---\n",
"sidebar_label: Local Filesystem\n",
"sidebar_position: 3\n",
"---"
]
},
@@ -19,119 +16,51 @@
"source": [
"# LocalFileStore\n",
"\n",
"This will help you get started with local filesystem [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all LocalFileStore features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/storage/langchain.storage.file_system.LocalFileStore.html).\n",
"\n",
"## Overview\n",
"\n",
"The `LocalFileStore` is a persistent implementation of `ByteStore` that stores everything in a folder of your choosing. It's useful if you're using a single machine and are tolerant of files being added or deleted.\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/v0.2/docs/integrations/stores/file_system) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [LocalFileStore](https://api.python.langchain.com/en/latest/storage/langchain.storage.file_system.LocalFileStore.html) | [langchain](https://api.python.langchain.com/en/latest/langchain_api_reference.html) | ✅ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain?style=flat-square&label=%20) |"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain `LocalFileStore` integration lives in the `langchain` package:"
"The `LocalFileStore` is a persistent implementation of `ByteStore` that stores everything in a folder of your choosing."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our byte store:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from pathlib import Path\n",
"\n",
"from langchain.storage import LocalFileStore\n",
"\n",
"root_path = Path.cwd() / \"data\" # can also be a path set by a string\n",
"\n",
"kv_store = LocalFileStore(root_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"You can set data under keys like this using the `mset` method:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[b'value1', b'value2']"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can see the created files in your `data` folder:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"key1 key2\n"
"[b'v1', b'v2']\n"
]
}
],
"source": [
"from pathlib import Path\n",
"\n",
"from langchain.storage import LocalFileStore\n",
"\n",
"root_path = Path.cwd() / \"data\" # can also be a path set by a string\n",
"store = LocalFileStore(root_path)\n",
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's see which files exist in our `data` folder:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"k1 k2\n"
]
}
],
@@ -139,58 +68,17 @@
"!ls {root_path}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[None, None]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `LocalFileStore` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/storage/langchain.storage.file_system.LocalFileStore.html"
]
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
@@ -204,7 +92,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.11.4"
}
},
"nbformat": 4,

View File

@@ -9,7 +9,7 @@
},
"source": [
"---\n",
"sidebar_label: In-memory\n",
"sidebar_label: InMemoryByteStore\n",
"---"
]
},
@@ -28,7 +28,7 @@
"### Integration details\n",
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/v0.2/docs/integrations/stores/in_memory/) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [InMemoryByteStore](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.InMemoryByteStore.html) | [langchain_core](https://api.python.langchain.com/en/latest/core_api_reference.html) | ✅ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_core?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_core?style=flat-square&label=%20) |"
]
},

View File

@@ -0,0 +1,12 @@
---
sidebar_position: 1
sidebar_class_name: hidden
---

# Key-value stores

[Key-value stores](/docs/concepts/#key-value-stores) are used by other LangChain components to store and retrieve data.

import DocCardList from "@theme/DocCardList";

<DocCardList />

View File

@@ -2,11 +2,7 @@
"cells": [
{
"cell_type": "raw",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"metadata": {},
"source": [
"---\n",
"sidebar_label: Redis\n",
@@ -19,30 +15,9 @@
"source": [
"# RedisStore\n",
"\n",
"This will help you get started with Redis [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `RedisStore` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.redis.RedisStore.html).\n",
"\n",
"## Overview\n",
"\n",
"The `RedisStore` is an implementation of `ByteStore` that stores everything in your Redis instance.\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/v0.2/docs/integrations/stores/ioredis_storage) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [RedisStore](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.redis.RedisStore.html) | [langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ✅ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",
"\n",
"To create a Redis byte store, you'll need to set up a Redis instance. You can do this locally or via a provider - see our [Redis guide](/docs/integrations/providers/redis) for an overview of options."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain `RedisStore` integration lives in the `langchain_community` package:"
"To configure Redis, follow our [Redis guide](/docs/integrations/providers/redis)."
]
},
{
@@ -51,128 +26,56 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_community redis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our byte store:"
"%pip install --upgrade --quiet redis"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[b'v1', b'v2']\n"
]
}
],
"source": [
"from langchain_community.storage import RedisStore\n",
"\n",
"kv_store = RedisStore(redis_url=\"redis://localhost:6379\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"store = RedisStore(redis_url=\"redis://localhost:6379\")\n",
"\n",
"You can set data under keys like this using the `mset` method:"
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[b'value1', b'value2']"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[None, None]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `RedisStore` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/storage/langchain_community.storage.redis.RedisStore.html"
]
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"version": "3.10.5"
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,

View File

@@ -2,11 +2,7 @@
"cells": [
{
"cell_type": "raw",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"metadata": {},
"source": [
"---\n",
"sidebar_label: Upstash Redis\n",
@@ -19,48 +15,11 @@
"source": [
"# UpstashRedisByteStore\n",
"\n",
"This will help you get started with Upstash redis [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `UpstashRedisByteStore` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html).\n",
"The `UpstashRedisStore` is an implementation of `ByteStore` that stores everything in your Upstash-hosted Redis instance.\n",
"\n",
"## Overview\n",
"To use the base `RedisStore` instead, see [this guide](/docs/integrations/stores/redis/)\n",
"\n",
"The `UpstashRedisStore` is an implementation of `ByteStore` that stores everything in your [Upstash](https://upstash.com/)-hosted Redis instance.\n",
"\n",
"To use the base `RedisStore` instead, see [this guide](/docs/integrations/stores/redis/).\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/v0.2/docs/integrations/stores/upstash_redis_storage) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [UpstashRedisByteStore](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html) | [langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ❌ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",
"\n",
"You'll first need to [sign up for an Upstash account](https://upstash.com/docs/redis/overall/getstarted). Next, you'll need to create a Redis database to connect to.\n",
"\n",
"### Credentials\n",
"\n",
"Once you've created your database, get your database URL (don't forget the `https://`!) and token:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from getpass import getpass\n",
"\n",
"URL = getpass(\"Enter your Upstash URL\")\n",
"TOKEN = getpass(\"Enter your Upstash REST token\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Upstash integration lives in the `langchain_community` package. You'll also need to install the `upstash-redis` package as a peer dependency:"
"To configure Upstash Redis, follow our [Upstash guide](/docs/integrations/providers/upstash)."
]
},
{
@@ -69,130 +28,61 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_community upstash-redis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our byte store:"
"%pip install --upgrade --quiet upstash-redis"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[b'v1', b'v2']\n"
]
}
],
"source": [
"from langchain_community.storage import UpstashRedisByteStore\n",
"from upstash_redis import Redis\n",
"\n",
"URL = \"<UPSTASH_REDIS_REST_URL>\"\n",
"TOKEN = \"<UPSTASH_REDIS_REST_TOKEN>\"\n",
"\n",
"redis_client = Redis(url=URL, token=TOKEN)\n",
"kv_store = UpstashRedisByteStore(client=redis_client, ttl=None, namespace=\"test-ns\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"store = UpstashRedisByteStore(client=redis_client, ttl=None, namespace=\"test-ns\")\n",
"\n",
"You can set data under keys like this using the `mset` method:"
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[b'value1', b'value2']"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[None, None]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `UpstashRedisByteStore` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html"
]
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"version": "3.10.5"
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,

View File

@@ -4,191 +4,17 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Github\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# GithubToolkit\n",
"# Github\n",
"\n",
"The `Github` toolkit contains tools that enable an LLM agent to interact with a github repository. \n",
"The tool is a wrapper for the [PyGitHub](https://github.com/PyGithub/PyGithub) library. \n",
"\n",
"For detailed documentation of all GithubToolkit features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.github.toolkit.GitHubToolkit.html).\n",
"\n",
"## Setup\n",
"\n",
"At a high-level, we will:\n",
"## Quickstart\n",
"\n",
"1. Install the pygithub library\n",
"2. Create a Github app\n",
"3. Set your environmental variables\n",
"4. Pass the tools to your agent with `toolkit.get_tools()`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"#### 1. Install dependencies\n",
"\n",
"This integration is implemented in `langchain-community`. We will also need the `pygithub` dependency:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet pygithub langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2. Create a Github App\n",
"\n",
"[Follow the instructions here](https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/registering-a-github-app) to create and register a Github app. Make sure your app has the following [repository permissions:](https://docs.github.com/en/rest/overview/permissions-required-for-github-apps?apiVersion=2022-11-28)\n",
"\n",
"* Commit statuses (read only)\n",
"* Contents (read and write)\n",
"* Issues (read and write)\n",
"* Metadata (read only)\n",
"* Pull requests (read and write)\n",
"\n",
"Once the app has been registered, you must give your app permission to access each of the repositories you whish it to act upon. Use the App settings on [github.com here](https://github.com/settings/installations).\n",
"\n",
"\n",
"#### 3. Set Environment Variables\n",
"\n",
"Before initializing your agent, the following environment variables need to be set:\n",
"\n",
"* **GITHUB_APP_ID**- A six digit number found in your app's general settings\n",
"* **GITHUB_APP_PRIVATE_KEY**- The location of your app's private key .pem file, or the full text of that file as a string.\n",
"* **GITHUB_REPOSITORY**- The name of the Github repository you want your bot to act upon. Must follow the format {username}/{repo-name}. *Make sure the app has been added to this repository first!*\n",
"* Optional: **GITHUB_BRANCH**- The branch where the bot will make its commits. Defaults to `repo.default_branch`.\n",
"* Optional: **GITHUB_BASE_BRANCH**- The base branch of your repo upon which PRs will based from. Defaults to `repo.default_branch`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"for env_var in [\n",
" \"GITHUB_APP_ID\",\n",
" \"GITHUB_APP_PRIVATE_KEY\",\n",
" \"GITHUB_REPOSITORY\",\n",
"]:\n",
" if not os.getenv(env_var):\n",
" os.environ[env_var] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our toolkit:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.agent_toolkits.github.toolkit import GitHubToolkit\n",
"from langchain_community.utilities.github import GitHubAPIWrapper\n",
"\n",
"github = GitHubAPIWrapper()\n",
"toolkit = GitHubToolkit.from_github_api_wrapper(github)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"View available tools:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Get Issues\n",
"Get Issue\n",
"Comment on Issue\n",
"List open pull requests (PRs)\n",
"Get Pull Request\n",
"Overview of files included in PR\n",
"Create Pull Request\n",
"List Pull Requests' Files\n",
"Create File\n",
"Read File\n",
"Update File\n",
"Delete File\n",
"Overview of existing files in Main branch\n",
"Overview of files in current working branch\n",
"List branches in this repository\n",
"Set active branch\n",
"Create a new branch\n",
"Get files from a directory\n",
"Search issues and pull requests\n",
"Search code\n",
"Create review request\n"
]
}
],
"source": [
"tools = toolkit.get_tools()\n",
"\n",
"for tool in tools:\n",
" print(tool.name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The purpose of these tools is as follows:\n",
"4. Pass the tools to your agent with `toolkit.get_tools()`\n",
"\n",
"Each of these steps will be explained in great detail below.\n",
"\n",
@@ -206,14 +32,70 @@
"\n",
"7. **Update File**- updates a file in the repository.\n",
"\n",
"8. **Delete File**- deletes a file from the repository."
"8. **Delete File**- deletes a file from the repository.\n",
"\n"
]
},
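{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you only want to expose a subset of these capabilities to an agent, one minimal sketch is to filter the tools by name before passing them along (the names below match the list printed above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: keep only issue-related tools before handing them to an agent\n",
"selected_names = {\"Get Issues\", \"Get Issue\", \"Comment on Issue\"}\n",
"issue_tools = [tool for tool in toolkit.get_tools() if tool.name in selected_names]\n",
"\n",
"print([tool.name for tool in issue_tools])"
]
},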
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use within an agent"
"## Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Install the `pygithub` library "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet pygithub langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2. Create a Github App\n",
"\n",
"[Follow the instructions here](https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/registering-a-github-app) to create and register a Github app. Make sure your app has the following [repository permissions:](https://docs.github.com/en/rest/overview/permissions-required-for-github-apps?apiVersion=2022-11-28)\n",
"\n",
"* Commit statuses (read only)\n",
"* Contents (read and write)\n",
"* Issues (read and write)\n",
"* Metadata (read only)\n",
"* Pull requests (read and write)\n",
"\n",
"\n",
"Once the app has been registered, you must give your app permission to access each of the repositories you whish it to act upon. Use the App settings on [github.com here](https://github.com/settings/installations).\n",
"\n",
"### 3. Set Environmental Variables\n",
"\n",
"Before initializing your agent, the following environmental variables need to be set:\n",
"\n",
"* **GITHUB_APP_ID**- A six digit number found in your app's general settings\n",
"* **GITHUB_APP_PRIVATE_KEY**- The location of your app's private key .pem file, or the full text of that file as a string.\n",
"* **GITHUB_REPOSITORY**- The name of the Github repository you want your bot to act upon. Must follow the format {username}/{repo-name}. *Make sure the app has been added to this repository first!*\n",
"* Optional: **GITHUB_BRANCH**- The branch where the bot will make its commits. Defaults to `repo.default_branch`.\n",
"* Optional: **GITHUB_BASE_BRANCH**- The base branch of your repo upon which PRs will based from. Defaults to `repo.default_branch`.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example: Simple Agent"
]
},
{
@@ -942,15 +824,6 @@
"\n",
"agent.run(prompt)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `GithubToolkit` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.github.toolkit.GitHubToolkit.html)."
]
}
],
"metadata": {
@@ -969,7 +842,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.13"
}
},
"nbformat": 4,

View File

@@ -4,31 +4,34 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"sidebar_label: GMail\n",
"---"
"# Gmail\n",
"\n",
"This notebook walks through connecting a LangChain email to the `Gmail API`.\n",
"\n",
"To use this toolkit, you will need to set up your credentials explained in the [Gmail API docs](https://developers.google.com/gmail/api/quickstart/python#authorize_credentials_for_a_desktop_application). Once you've downloaded the `credentials.json` file, you can start using the Gmail API. Once this is done, we'll install the required libraries."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-api-python-client > /dev/null\n",
"%pip install --upgrade --quiet google-auth-oauthlib > /dev/null\n",
"%pip install --upgrade --quiet google-auth-httplib2 > /dev/null\n",
"%pip install --upgrade --quiet beautifulsoup4 > /dev/null # This is optional but is useful for parsing HTML messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# GmailToolkit\n",
"You also need to install the `langchain-community` package where the integration lives:\n",
"\n",
"This will help you getting started with the GMail [toolkit](/docs/concepts/#toolkits). This toolkit interacts with the GMail API to read messages, draft and send messages, and more. For detailed documentation of all GmailToolkit features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.toolkit.GmailToolkit.html).\n",
"\n",
"## Setup\n",
"\n",
"To use this toolkit, you will need to set up your credentials explained in the [Gmail API docs](https://developers.google.com/gmail/api/quickstart/python#authorize_credentials_for_a_desktop_application). Once you've downloaded the `credentials.json` file, you can start using the Gmail API."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"This toolkit lives in the `langchain-google-community` package. We'll need the `gmail` extra:"
"```bash\n",
"pip install -U langchain-community\n",
"```"
]
},
{
@@ -37,14 +40,14 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-google-community\\[gmail\\]"
"%pip install -qU langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
"It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability"
]
},
{
@@ -54,14 +57,14 @@
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"## Create the Toolkit\n",
"\n",
"By default the toolkit reads the local `credentials.json` file. You can also manually provide a `Credentials` object."
]
@@ -69,10 +72,12 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_google_community import GmailToolkit\n",
"from langchain_community.agent_toolkits import GmailToolkit\n",
"\n",
"toolkit = GmailToolkit()"
]
@@ -95,7 +100,7 @@
},
"outputs": [],
"source": [
"from langchain_google_community.gmail.utils import (\n",
"from langchain_community.tools.gmail.utils import (\n",
" build_resource_service,\n",
" get_gmail_credentials,\n",
")\n",
@@ -111,15 +116,6 @@
"toolkit = GmailToolkit(api_resource=api_resource)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"View available tools:"
]
},
{
"cell_type": "code",
"execution_count": 5,
@@ -151,18 +147,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"- [GmailCreateDraft](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.create_draft.GmailCreateDraft.html)\n",
"- [GmailSendMessage](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.send_message.GmailSendMessage.html)\n",
"- [GmailSearch](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.search.GmailSearch.html)\n",
"- [GmailGetMessage](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.get_message.GmailGetMessage.html)\n",
"- [GmailGetThread](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.get_thread.GmailGetThread.html)"
]
},
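{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also call an individual tool outside of an agent. The sketch below assumes the search tool is named `search_gmail` and accepts a `query` argument; adjust if your installed version differs:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch: run the Gmail search tool directly (assumes a `query` argument)\n",
"search_tool = next(t for t in toolkit.get_tools() if t.name == \"search_gmail\")\n",
"\n",
"# print(search_tool.invoke({\"query\": \"from:me newer_than:7d\"}))"
]
},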
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use within an agent\n",
"## Usage\n",
"\n",
"We show here how to use it as part of an [agent](/docs/tutorials/agents). We use the OpenAI Functions Agent, so we will need to setup and install the required dependencies for that. We will also use [LangSmith Hub](https://smith.langchain.com/hub) to pull the prompt from, so we will need to install that.\n",
"\n",
@@ -318,7 +303,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.1"
}
},
"nbformat": 4,

View File

@@ -12,10 +12,10 @@ that share common authentication, services, or other objects. They can be implem
This table lists common toolkits.
| Toolkit | Package |
|------|---------------|
| [GitHubToolkit](/docs/integrations/toolkits/github) | [langchain_community.agent_toolkits.github](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.github.toolkit.GitHubToolkit.html) |
| [GmailToolkit](/docs/integrations/toolkits/gmail) | [langchain_google_community.gmail.toolkit](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.toolkit.GmailToolkit.html) |
| [RequestsToolkit](/docs/integrations/toolkits/requests) | [langchain_community.agent_toolkits.openapi](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.openapi.toolkit.RequestsToolkit.html) |
| [SlackToolkit](/docs/integrations/toolkits/slack) | [langchain_community.agent_toolkits.slack](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.slack.toolkit.SlackToolkit.html) |
| [SQLDatabaseToolkit](/docs/integrations/toolkits/sql_database) | [langchain_community.agent_toolkits.sql](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.sql.toolkit.SQLDatabaseToolkit.html) |
| Namespace 🔻 | Class |
|------------|---------|
| langchain_community.agent_toolkits.github | [GitHubToolkit](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.github.toolkit.GitHubToolkit.html) |
| langchain_community.agent_toolkits.gmail | [GmailToolkit](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.gmail.toolkit.GmailToolkit.html) |
| langchain_community.agent_toolkits.openapi | [RequestsToolkit](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.openapi.toolkit.RequestsToolkit.html) |
| langchain_community.agent_toolkits.slack | [SlackToolkit](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.slack.toolkit.SlackToolkit.html) |
| langchain_community.agent_toolkits.sql | [SQLDatabaseToolkit](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.sql.toolkit.SQLDatabaseToolkit.html) |

View File

@@ -1,361 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "050c5580-2c85-4763-8783-59dbd20395a5",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Requests\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "cfe4185a-34dc-4cdc-b831-001954f2d6e8",
"metadata": {},
"source": [
"# Requests Toolkit\n",
"\n",
"We can use the Requests [toolkit](/docs/concepts/#toolkits) to construct agents that generate HTTP requests.\n",
"\n",
"For detailed documentation of all API toolkit features and configurations head to the API reference for [RequestsToolkit](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.openapi.toolkit.RequestsToolkit.html).\n",
"\n",
"## ⚠️ Security note ⚠️\n",
"There are inherent risks in giving models discretion to execute real-world actions. Take precautions to mitigate these risks:\n",
"\n",
"- Make sure that permissions associated with the tools are narrowly-scoped (e.g., for database operations or API requests);\n",
"- When desired, make use of human-in-the-loop workflows."
]
},
{
"cell_type": "markdown",
"id": "d968e982-f370-4614-8469-c1bc71ee3e32",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"### Installation\n",
"\n",
"This toolkit lives in the `langchain-community` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f74f05fb-3f24-4c0b-a17f-cf4edeedbb9a",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community"
]
},
{
"cell_type": "markdown",
"id": "36a178eb-1f2c-411e-bf25-0240ead4c62a",
"metadata": {},
"source": [
"Note that if you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8e68d0cd-6233-481c-b048-e8d95cba4c35",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "a7e2f64a-a72e-4fef-be52-eaf7c5072d24",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"First we will demonstrate a minimal example.\n",
"\n",
"**NOTE**: There are inherent risks in giving models discretion to execute real-world actions. We must \"opt-in\" to these risks by setting `allow_dangerous_request=True` to use these tools.\n",
"**This can be dangerous for calling unwanted requests**. Please make sure your custom OpenAPI spec (yaml) is safe and that permissions associated with the tools are narrowly-scoped."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "018bd070-9fc8-459b-8d28-b4a3e283e640",
"metadata": {},
"outputs": [],
"source": [
"ALLOW_DANGEROUS_REQUEST = True"
]
},
{
"cell_type": "markdown",
"id": "a024f7b3-5437-4878-bd16-c4783bff394c",
"metadata": {},
"source": [
"We can use the [JSONPlaceholder](https://jsonplaceholder.typicode.com) API as a testing ground.\n",
"\n",
"Let's create (a subset of) its API spec:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "2dcbcf92-2ad5-49c3-94ac-91047ccc8c5b",
"metadata": {},
"outputs": [],
"source": [
"from typing import Any, Dict, Union\n",
"\n",
"import requests\n",
"import yaml\n",
"\n",
"\n",
"def _get_schema(response_json: Union[dict, list]) -> dict:\n",
" if isinstance(response_json, list):\n",
" response_json = response_json[0] if response_json else {}\n",
" return {key: type(value).__name__ for key, value in response_json.items()}\n",
"\n",
"\n",
"def _get_api_spec() -> str:\n",
" base_url = \"https://jsonplaceholder.typicode.com\"\n",
" endpoints = [\n",
" \"/posts\",\n",
" \"/comments\",\n",
" ]\n",
" common_query_parameters = [\n",
" {\n",
" \"name\": \"_limit\",\n",
" \"in\": \"query\",\n",
" \"required\": False,\n",
" \"schema\": {\"type\": \"integer\", \"example\": 2},\n",
" \"description\": \"Limit the number of results\",\n",
" }\n",
" ]\n",
" openapi_spec: Dict[str, Any] = {\n",
" \"openapi\": \"3.0.0\",\n",
" \"info\": {\"title\": \"JSONPlaceholder API\", \"version\": \"1.0.0\"},\n",
" \"servers\": [{\"url\": base_url}],\n",
" \"paths\": {},\n",
" }\n",
" # Iterate over the endpoints to construct the paths\n",
" for endpoint in endpoints:\n",
" response = requests.get(base_url + endpoint)\n",
" if response.status_code == 200:\n",
" schema = _get_schema(response.json())\n",
" openapi_spec[\"paths\"][endpoint] = {\n",
" \"get\": {\n",
" \"summary\": f\"Get {endpoint[1:]}\",\n",
" \"parameters\": common_query_parameters,\n",
" \"responses\": {\n",
" \"200\": {\n",
" \"description\": \"Successful response\",\n",
" \"content\": {\n",
" \"application/json\": {\n",
" \"schema\": {\"type\": \"object\", \"properties\": schema}\n",
" }\n",
" },\n",
" }\n",
" },\n",
" }\n",
" }\n",
" return yaml.dump(openapi_spec, sort_keys=False)\n",
"\n",
"\n",
"api_spec = _get_api_spec()"
]
},
{
"cell_type": "markdown",
"id": "db3d6148-ae65-4a1d-91a6-59ee3e4e6efa",
"metadata": {},
"source": [
"Next we can instantiate the toolkit. We require no authorization or other headers for this API:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "63a630b3-45bb-4525-865b-083f322b944b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.agent_toolkits.openapi.toolkit import RequestsToolkit\n",
"from langchain_community.utilities.requests import TextRequestsWrapper\n",
"\n",
"toolkit = RequestsToolkit(\n",
" requests_wrapper=TextRequestsWrapper(headers={}),\n",
" allow_dangerous_requests=ALLOW_DANGEROUS_REQUEST,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "f4224a64-843a-479d-8a7b-84719e4b9d0c",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"View available tools:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "70ea0f4e-9f10-4906-894b-08df832fd515",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[RequestsGetTool(requests_wrapper=TextRequestsWrapper(headers={}, aiosession=None, auth=None, response_content_type='text', verify=True), allow_dangerous_requests=True),\n",
" RequestsPostTool(requests_wrapper=TextRequestsWrapper(headers={}, aiosession=None, auth=None, response_content_type='text', verify=True), allow_dangerous_requests=True),\n",
" RequestsPatchTool(requests_wrapper=TextRequestsWrapper(headers={}, aiosession=None, auth=None, response_content_type='text', verify=True), allow_dangerous_requests=True),\n",
" RequestsPutTool(requests_wrapper=TextRequestsWrapper(headers={}, aiosession=None, auth=None, response_content_type='text', verify=True), allow_dangerous_requests=True),\n",
" RequestsDeleteTool(requests_wrapper=TextRequestsWrapper(headers={}, aiosession=None, auth=None, response_content_type='text', verify=True), allow_dangerous_requests=True)]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tools = toolkit.get_tools()\n",
"\n",
"tools"
]
},
{
"cell_type": "markdown",
"id": "a21a6ca4-d650-4b7d-a944-1a8771b5293a",
"metadata": {},
"source": [
"- [RequestsGetTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.requests.tool.RequestsGetTool.html)\n",
"- [RequestsPostTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.requests.tool.RequestsPostTool.html)\n",
"- [RequestsPatchTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.requests.tool.RequestsPatchTool.html)\n",
"- [RequestsPutTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.requests.tool.RequestsPutTool.html)\n",
"- [RequestsDeleteTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.requests.tool.RequestsDeleteTool.html)"
]
},
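{
"cell_type": "markdown",
"metadata": {},
"source": [
"These tools can also be invoked directly, without an agent. Below is a minimal sketch, assuming the GET tool is named `requests_get` and accepts a `url` argument:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: call the GET tool directly against the JSONPlaceholder API\n",
"get_tool = next(tool for tool in tools if tool.name == \"requests_get\")\n",
"\n",
"print(get_tool.invoke({\"url\": \"https://jsonplaceholder.typicode.com/posts/1\"}))"
]
},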
{
"cell_type": "markdown",
"id": "e2dbb304-abf2-472a-9130-f03150a40549",
"metadata": {},
"source": [
"## Use within an agent"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "db062da7-f22c-4f36-9df8-1da96c9f7538",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"system_message = \"\"\"\n",
"You have access to an API to help answer user queries.\n",
"Here is documentation on the API:\n",
"{api_spec}\n",
"\"\"\".format(api_spec=api_spec)\n",
"\n",
"agent_executor = create_react_agent(llm, tools, state_modifier=system_message)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "c1e47be9-374a-457c-928a-48f02b5530e3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Fetch the top two posts. What are their titles?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" requests_get (call_RV2SOyzCnV5h2sm4WPgG8fND)\n",
" Call ID: call_RV2SOyzCnV5h2sm4WPgG8fND\n",
" Args:\n",
" url: https://jsonplaceholder.typicode.com/posts?_limit=2\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: requests_get\n",
"\n",
"[\n",
" {\n",
" \"userId\": 1,\n",
" \"id\": 1,\n",
" \"title\": \"sunt aut facere repellat provident occaecati excepturi optio reprehenderit\",\n",
" \"body\": \"quia et suscipit\\nsuscipit recusandae consequuntur expedita et cum\\nreprehenderit molestiae ut ut quas totam\\nnostrum rerum est autem sunt rem eveniet architecto\"\n",
" },\n",
" {\n",
" \"userId\": 1,\n",
" \"id\": 2,\n",
" \"title\": \"qui est esse\",\n",
" \"body\": \"est rerum tempore vitae\\nsequi sint nihil reprehenderit dolor beatae ea dolores neque\\nfugiat blanditiis voluptate porro vel nihil molestiae ut reiciendis\\nqui aperiam non debitis possimus qui neque nisi nulla\"\n",
" }\n",
"]\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"The titles of the top two posts are:\n",
"1. \"sunt aut facere repellat provident occaecati excepturi optio reprehenderit\"\n",
"2. \"qui est esse\"\n"
]
}
],
"source": [
"example_query = \"Fetch the top two posts. What are their titles?\"\n",
"\n",
"events = agent_executor.stream(\n",
" {\"messages\": [(\"user\", example_query)]},\n",
" stream_mode=\"values\",\n",
")\n",
"for event in events:\n",
" event[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"id": "01ec4886-de3d-4fda-bd05-e3f254810969",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all API toolkit features and configurations head to the API reference for [RequestsToolkit](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.openapi.toolkit.RequestsToolkit.html)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -4,139 +4,109 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Slack\n",
"---"
"# Slack\n",
"\n",
"This notebook walks through connecting LangChain to your `Slack` account.\n",
"\n",
"To use this toolkit, you will need to get a token explained in the [Slack API docs](https://api.slack.com/tutorials/tracks/getting-a-token). Once you've received a SLACK_USER_TOKEN, you can input it as an environmental variable below."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.3.2\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n",
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.3.2\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n",
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.3.2\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet slack_sdk > /dev/null\n",
"%pip install --upgrade --quiet beautifulsoup4 > /dev/null # This is optional but is useful for parsing HTML messages\n",
"%pip install --upgrade --quiet python-dotenv > /dev/null # This is for loading environmental variables from a .env file"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# SlackToolkit\n",
"## Set Environmental Variables\n",
"\n",
"This will help you getting started with the Slack [toolkit](/docs/concepts/#toolkits). For detailed documentation of all SlackToolkit features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.slack.toolkit.SlackToolkit.html).\n",
"\n",
"## Setup\n",
"\n",
"To use this toolkit, you will need to get a token as explained in the [Slack API docs](https://api.slack.com/tutorials/tracks/getting-a-token). Once you've received a SLACK_USER_TOKEN, you can input it as an environment variable below."
"The toolkit will read the SLACK_USER_TOKEN environmental variable to authenticate the user so you need to set them here. You will also need to set your OPENAI_API_KEY to use the agent later."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"SLACK_USER_TOKEN\"):\n",
" os.environ[\"SLACK_USER_TOKEN\"] = getpass.getpass(\"Enter your Slack user token: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"This toolkit lives in the `langchain-community` package. We will also need the Slack SDK:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community slack_sdk"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, we can install beautifulsoup4 to assist in parsing HTML messages:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU beautifulsoup4 # This is optional but is useful for parsing HTML messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our toolkit:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.agent_toolkits import SlackToolkit\n",
"\n",
"toolkit = SlackToolkit()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"View available tools:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[SlackGetChannel(client=<slack_sdk.web.client.WebClient object at 0x10ce3a4d0>),\n",
" SlackGetMessage(client=<slack_sdk.web.client.WebClient object at 0x10ce3a0e0>),\n",
" SlackScheduleMessage(client=<slack_sdk.web.client.WebClient object at 0x10ce3a050>),\n",
" SlackSendMessage(client=<slack_sdk.web.client.WebClient object at 0x10ce3a020>)]"
"True"
]
},
"execution_count": 3,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tools = toolkit.get_tools()\n",
"# Set environmental variables here\n",
"# In this example, you set environmental variables by loading a .env file.\n",
"import dotenv\n",
"\n",
"dotenv.load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Toolkit and Get Tools\n",
"\n",
"To start, you need to create the toolkit, so you can access its tools later."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[SlackGetChannel(client=<slack_sdk.web.client.WebClient object at 0x11eba6a00>),\n",
" SlackGetMessage(client=<slack_sdk.web.client.WebClient object at 0x11eba69d0>),\n",
" SlackScheduleMessage(client=<slack_sdk.web.client.WebClient object at 0x11eba65b0>),\n",
" SlackSendMessage(client=<slack_sdk.web.client.WebClient object at 0x11eba6790>)]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.agent_toolkits import SlackToolkit\n",
"\n",
"toolkit = SlackToolkit()\n",
"tools = toolkit.get_tools()\n",
"tools"
]
},
@@ -144,78 +114,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This toolkit loads:\n",
"\n",
"- [SlackGetChannel](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.slack.get_channel.SlackGetChannel.html)\n",
"- [SlackGetMessage](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.slack.get_message.SlackGetMessage.html)\n",
"- [SlackScheduleMessage](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.slack.schedule_message.SlackScheduleMessage.html)\n",
"- [SlackSendMessage](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.slack.send_message.SlackSendMessage.html)"
]
},
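{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick, hedged sketch, you can also call one of these tools directly; here we take the first tool from the list above (SlackGetChannel), which appears to take no arguments:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: fetch the channel id/name mapping without an agent\n",
"get_channel_tool = tools[0]  # SlackGetChannel, per the list above\n",
"\n",
"# print(get_channel_tool.invoke({}))"
]
},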
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use within an agent\n",
"\n",
"Let's equip an agent with the Slack toolkit and query for information about a channel."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"agent_executor = create_react_agent(llm, tools)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"When was the #general channel created?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" get_channelid_name_dict (call_mINmB55OWDIkXykGXZXaL5Ar)\n",
" Call ID: call_mINmB55OWDIkXykGXZXaL5Ar\n",
" Args:\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"The #general channel was created on Unix timestamp 1671043305, which corresponds to \"Mon, 12 Dec 2022 18:41:45 GMT\" in human-readable format.\n"
]
}
],
"source": [
"example_query = \"When was the #general channel created?\"\n",
"\n",
"events = agent_executor.stream(\n",
" {\"messages\": [(\"user\", example_query)]},\n",
" stream_mode=\"values\",\n",
")\n",
"for event in events:\n",
" message = event[\"messages\"][-1]\n",
" if message.type != \"tool\": # mask sensitive information\n",
" event[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example with AgentExecutor:"
"## Use within an ReAct Agent"
]
},
{
@@ -337,13 +236,11 @@
]
},
{
"cell_type": "markdown",
"cell_type": "code",
"execution_count": null,
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all __ModuleName__Toolkit features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.slack.toolkit.SlackToolkit.html)."
]
"outputs": [],
"source": []
}
],
"metadata": {
@@ -362,7 +259,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.9.6"
}
},
"nbformat": 4,

View File

@@ -29,6 +29,10 @@
"\n",
"## Setup\n",
"\n",
"This uses the example `Chinook` database. \n",
"\n",
"To set it up follow [these instructions](https://database.guide/2-sample-databases-sqlite/). This notebook reads from the resulting .db file.\n",
"\n",
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
@@ -83,62 +87,7 @@
},
{
"cell_type": "markdown",
"id": "804533b1-2f16-497b-821b-c82d67fcf7b6",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"The `SQLDatabaseToolkit` toolkit requires:\n",
"\n",
"- a [SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html) object;\n",
"- a LLM or chat model (for instantiating the [QuerySQLCheckerTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.sql_database.tool.QuerySQLCheckerTool.html) tool).\n",
"\n",
"Below, we instantiate the toolkit with these objects. Let's first create a database object.\n",
"\n",
"This guide uses the example `Chinook` database based on [these instructions](https://database.guide/2-sample-databases-sqlite/).\n",
"\n",
"Below we will use the `requests` library to pull the `.sql` file and create an in-memory SQLite database. Note that this approach is lightweight, but ephemeral and not thread-safe. If you'd prefer, you can follow the instructions to save the file locally as `Chinook.db` and instantiate the database via `db = SQLDatabase.from_uri(\"sqlite:///Chinook.db\")`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "40d05f9b-5a8f-4307-8f8b-4153db0fdfa9",
"metadata": {},
"outputs": [],
"source": [
"import sqlite3\n",
"\n",
"import requests\n",
"from langchain_community.utilities.sql_database import SQLDatabase\n",
"from sqlalchemy import create_engine\n",
"from sqlalchemy.pool import StaticPool\n",
"\n",
"\n",
"def get_engine_for_chinook_db():\n",
" \"\"\"Pull sql file, populate in-memory database, and create engine.\"\"\"\n",
" url = \"https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql\"\n",
" response = requests.get(url)\n",
" sql_script = response.text\n",
"\n",
" connection = sqlite3.connect(\":memory:\", check_same_thread=False)\n",
" connection.executescript(sql_script)\n",
" return create_engine(\n",
" \"sqlite://\",\n",
" creator=lambda: connection,\n",
" poolclass=StaticPool,\n",
" connect_args={\"check_same_thread\": False},\n",
" )\n",
"\n",
"\n",
"engine = get_engine_for_chinook_db()\n",
"\n",
"db = SQLDatabase(engine)"
]
},
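{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, we can inspect the SQL dialect and the tables the database exposes:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sanity check: confirm the in-memory Chinook database loaded correctly\n",
"print(db.dialect)\n",
"print(db.get_usable_table_names())"
]
},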
{
"cell_type": "markdown",
"id": "2b9a6326-78fd-4c42-a1cb-4316619ac449",
"id": "79e86f98-3436-474d-ac67-529c93726b95",
"metadata": {},
"source": [
"We will also need a LLM or chat model:\n",
@@ -152,8 +101,8 @@
},
{
"cell_type": "code",
"execution_count": 2,
"id": "cc6e6108-83d9-404f-8f31-474c2fbf5f6c",
"execution_count": 1,
"id": "a5076e3d-3a04-4be9-ae82-41d7685e2197",
"metadata": {},
"outputs": [],
"source": [
@@ -167,20 +116,30 @@
},
{
"cell_type": "markdown",
"id": "77925e72-4730-43c3-8726-d68cedf635f4",
"id": "804533b1-2f16-497b-821b-c82d67fcf7b6",
"metadata": {},
"source": [
"We can now instantiate the toolkit:"
"## Instantiation\n",
"\n",
"The `SQLDatabaseToolkit` toolkit requires:\n",
"\n",
"- a [SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html) object;\n",
"- a LLM or chat model (for instantiating the [QuerySQLCheckerTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.sql_database.tool.QuerySQLCheckerTool.html) tool).\n",
"\n",
"Below, we instantiate the toolkit with these objects:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "42bd5a41-672a-4a53-b70a-2f0c0555758c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit\n",
"from langchain_community.utilities.sql_database import SQLDatabase\n",
"\n",
"db = SQLDatabase.from_uri(\"sqlite:///Chinook.db\")\n",
"\n",
"toolkit = SQLDatabaseToolkit(db=db, llm=llm)"
]
@@ -197,20 +156,20 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"id": "a18c3e69-bee0-4f5d-813e-eeb540f41b98",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[QuerySQLDataBaseTool(description=\"Input to this tool is a detailed and correct SQL query, output is a result from the database. If the query is not correct, an error message will be returned. If an error is returned, rewrite the query, check the query, and try again. If you encounter an issue with Unknown column 'xxxx' in 'field list', use sql_db_schema to query the correct table fields.\", db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x105e02860>),\n",
" InfoSQLDatabaseTool(description='Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables. Be sure that the tables actually exist by calling sql_db_list_tables first! Example Input: table1, table2, table3', db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x105e02860>),\n",
" ListSQLDatabaseTool(db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x105e02860>),\n",
" QuerySQLCheckerTool(description='Use this tool to double check if your query is correct before executing it. Always use this tool before executing a query with sql_db_query!', db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x105e02860>, llm=ChatOpenAI(client=<openai.resources.chat.completions.Completions object at 0x1148a97b0>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x1148aaec0>, temperature=0.0, openai_api_key=SecretStr('**********'), openai_proxy=''), llm_chain=LLMChain(prompt=PromptTemplate(input_variables=['dialect', 'query'], template='\\n{query}\\nDouble check the {dialect} query above for common mistakes, including:\\n- Using NOT IN with NULL values\\n- Using UNION when UNION ALL should have been used\\n- Using BETWEEN for exclusive ranges\\n- Data type mismatch in predicates\\n- Properly quoting identifiers\\n- Using the correct number of arguments for functions\\n- Casting to the correct data type\\n- Using the proper columns for joins\\n\\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.\\n\\nOutput the final SQL query only.\\n\\nSQL Query: '), llm=ChatOpenAI(client=<openai.resources.chat.completions.Completions object at 0x1148a97b0>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x1148aaec0>, temperature=0.0, openai_api_key=SecretStr('**********'), openai_proxy='')))]"
"[QuerySQLDataBaseTool(description=\"Input to this tool is a detailed and correct SQL query, output is a result from the database. If the query is not correct, an error message will be returned. If an error is returned, rewrite the query, check the query, and try again. If you encounter an issue with Unknown column 'xxxx' in 'field list', use sql_db_schema to query the correct table fields.\", db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x10e4c14b0>),\n",
" InfoSQLDatabaseTool(description='Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables. Be sure that the tables actually exist by calling sql_db_list_tables first! Example Input: table1, table2, table3', db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x10e4c14b0>),\n",
" ListSQLDatabaseTool(db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x10e4c14b0>),\n",
" QuerySQLCheckerTool(description='Use this tool to double check if your query is correct before executing it. Always use this tool before executing a query with sql_db_query!', db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x10e4c14b0>, llm=ChatOpenAI(client=<openai.resources.chat.completions.Completions object at 0x10e4a3190>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x10e4c08e0>, temperature=0.0, openai_api_key=SecretStr('**********'), openai_proxy=''), llm_chain=LLMChain(prompt=PromptTemplate(input_variables=['dialect', 'query'], template='\\n{query}\\nDouble check the {dialect} query above for common mistakes, including:\\n- Using NOT IN with NULL values\\n- Using UNION when UNION ALL should have been used\\n- Using BETWEEN for exclusive ranges\\n- Data type mismatch in predicates\\n- Properly quoting identifiers\\n- Using the correct number of arguments for functions\\n- Casting to the correct data type\\n- Using the proper columns for joins\\n\\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.\\n\\nOutput the final SQL query only.\\n\\nSQL Query: '), llm=ChatOpenAI(client=<openai.resources.chat.completions.Completions object at 0x10e4a3190>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x10e4c08e0>, temperature=0.0, openai_api_key=SecretStr('**********'), openai_proxy='')))]"
]
},
"execution_count": 4,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -244,7 +203,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"id": "eda12f8b-be90-4697-ac84-2ece9e2d1708",
"metadata": {},
"outputs": [
@@ -267,7 +226,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 5,
"id": "3470ae96-e5e5-4717-a6d6-d7d28c7b7347",
"metadata": {},
"outputs": [],
@@ -285,7 +244,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 6,
"id": "48bca92c-9b4b-4d5c-bcce-1b239c9e901c",
"metadata": {},
"outputs": [],
@@ -307,7 +266,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 7,
"id": "39e6d2bf-3194-4aba-854b-63faf919157b",
"metadata": {},
"outputs": [
@@ -320,8 +279,8 @@
"Which country's customers spent the most?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" sql_db_list_tables (call_eiheSxiL0s90KE50XyBnBtJY)\n",
" Call ID: call_eiheSxiL0s90KE50XyBnBtJY\n",
" sql_db_list_tables (call_xK4hUKXF8wb1tPM1s5e6gZVb)\n",
" Call ID: call_xK4hUKXF8wb1tPM1s5e6gZVb\n",
" Args:\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: sql_db_list_tables\n",
@@ -329,8 +288,8 @@
"Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" sql_db_schema (call_YKwGWt4UUVmxxY7vjjBDzFLJ)\n",
" Call ID: call_YKwGWt4UUVmxxY7vjjBDzFLJ\n",
" sql_db_schema (call_XnagYKuUNXo4FgK0a0bUSlIM)\n",
" Call ID: call_XnagYKuUNXo4FgK0a0bUSlIM\n",
" Args:\n",
" table_names: Customer, Invoice, InvoiceLine\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
@@ -407,8 +366,8 @@
"*/\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" sql_db_query (call_7WBDcMxl1h7MnI05njx1q8V9)\n",
" Call ID: call_7WBDcMxl1h7MnI05njx1q8V9\n",
" sql_db_query (call_tnibWEiAbTD0Al4u4lFRCcO0)\n",
" Call ID: call_tnibWEiAbTD0Al4u4lFRCcO0\n",
" Args:\n",
" query: SELECT c.Country, SUM(i.Total) AS TotalSpent FROM Customer c JOIN Invoice i ON c.CustomerId = i.CustomerId GROUP BY c.Country ORDER BY TotalSpent DESC LIMIT 1\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
@@ -442,7 +401,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 8,
"id": "23c1235c-6d18-43e4-98ab-85b426b53d94",
"metadata": {},
"outputs": [
@@ -455,8 +414,8 @@
"Who are the top 3 best selling artists?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" sql_db_query (call_9F6Bp2vwsDkeLW6FsJFqLiet)\n",
" Call ID: call_9F6Bp2vwsDkeLW6FsJFqLiet\n",
" sql_db_query (call_EBmGkOb4ceEc6VNCszE9s9N7)\n",
" Call ID: call_EBmGkOb4ceEc6VNCszE9s9N7\n",
" Args:\n",
" query: SELECT artist_name, SUM(quantity) AS total_sold FROM sales GROUP BY artist_name ORDER BY total_sold DESC LIMIT 3\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
@@ -467,8 +426,8 @@
"(Background on this error at: https://sqlalche.me/e/20/e3q8)\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" sql_db_list_tables (call_Gx5adzWnrBDIIxzUDzsn83zO)\n",
" Call ID: call_Gx5adzWnrBDIIxzUDzsn83zO\n",
" sql_db_list_tables (call_mEBlNVGQmf6IiikdqlFSoBzN)\n",
" Call ID: call_mEBlNVGQmf6IiikdqlFSoBzN\n",
" Args:\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: sql_db_list_tables\n",
@@ -476,8 +435,8 @@
"Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" sql_db_schema (call_ftywrZgEgGWLrnk9dYC0xtZv)\n",
" Call ID: call_ftywrZgEgGWLrnk9dYC0xtZv\n",
" sql_db_schema (call_ZEnt0V29DVZf2RDpyVDqCjyN)\n",
" Call ID: call_ZEnt0V29DVZf2RDpyVDqCjyN\n",
" Args:\n",
" table_names: Artist, Album, InvoiceLine\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
@@ -536,8 +495,8 @@
"*/\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" sql_db_query (call_i6n3lmS7E2ZivN758VOayTiy)\n",
" Call ID: call_i6n3lmS7E2ZivN758VOayTiy\n",
" sql_db_query (call_6tHsI79n3dYWphezh3fp9EKp)\n",
" Call ID: call_6tHsI79n3dYWphezh3fp9EKp\n",
" Args:\n",
" query: SELECT Artist.Name AS artist_name, SUM(InvoiceLine.Quantity) AS total_sold FROM Artist JOIN Album ON Artist.ArtistId = Album.ArtistId JOIN Track ON Album.AlbumId = Track.AlbumId JOIN InvoiceLine ON Track.TrackId = InvoiceLine.TrackId GROUP BY Artist.Name ORDER BY total_sold DESC LIMIT 3\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",

View File

@@ -38,7 +38,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet databricks-sdk langchain-community mlflow"
"%pip install --upgrade --quiet databricks-sdk langchain-community langchain-openai"
]
},
{
@@ -47,9 +47,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.databricks import ChatDatabricks\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatDatabricks(endpoint=\"databricks-meta-llama-3-70b-instruct\")"
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\")"
]
},
{

View File

@@ -19,28 +19,17 @@
"\n",
":::\n",
"\n",
"The popularity of projects like [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://github.com/ollama/ollama), and [llamafile](https://github.com/Mozilla-Ocho/llamafile) underscore the importance of running LLMs locally.\n",
"The popularity of projects like [PrivateGPT](https://github.com/imartinez/privateGPT), [llama.cpp](https://github.com/ggerganov/llama.cpp), [GPT4All](https://github.com/nomic-ai/gpt4all), and [llamafile](https://github.com/Mozilla-Ocho/llamafile) underscore the importance of running LLMs locally.\n",
"\n",
"LangChain has integrations with [many open-source LLM providers](/docs/how_to/local_llms) that can be run locally.\n",
"LangChain has [integrations](https://integrations.langchain.com/) with many open-source LLMs that can be run locally.\n",
"\n",
"This guide will show how to run `LLaMA 3.1` via one provider, [Ollama](/docs/integrations/providers/ollama/) locally (e.g., on your laptop) using local embeddings and a local LLM. However, you can set up and swap in other local providers, such as [LlamaCPP](/docs/integrations/chat/llamacpp/) if you prefer.\n",
"See [here](/docs/how_to/local_llms) for setup instructions for these LLMs. \n",
"\n",
"**Note:** This guide uses a [chat model](/docs/concepts/#chat-models) wrapper that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models directly with a [text-in/text-out LLM](/docs/concepts/#llms) wrapper, you may need to use a prompt tailed for your specific model. This will often [require the inclusion of special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). [Here's an example for LLaMA 2](https://smith.langchain.com/hub/rlm/rag-prompt-llama).\n",
"For example, here we show how to run `GPT4All` or `LLaMA2` locally (e.g., on your laptop) using local embeddings and a local LLM.\n",
"\n",
"## Setup\n",
"## Document Loading \n",
"\n",
"First we'll need to set up Ollama.\n",
"\n",
"The instructions [on their GitHub repo](https://github.com/ollama/ollama) provide details, which we summarize here:\n",
"\n",
"- [Download](https://ollama.com/download) and run their desktop app\n",
"- From command line, fetch models from [this list of options](https://ollama.com/library). For this guide, you'll need:\n",
" - A general purpose model like `llama3.1:8b`, which you can pull with something like `ollama pull llama3.1:8b`\n",
" - A [text embedding model](https://ollama.com/search?c=embedding) like `nomic-embed-text`, which you can pull with something like `ollama pull nomic-embed-text`\n",
"- When the app is running, all models are automatically served on `localhost:11434`\n",
"- Note that your model choice will depend on your hardware capabilities\n",
"\n",
"Next, install packages needed for local embeddings, vector storage, and inference."
"First, install packages needed for local embeddings and vector storage."
]
},
{
@@ -50,22 +39,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Document loading, retrieval methods and text splitting\n",
"%pip install -qU langchain langchain_community\n",
"\n",
"# Local vector store via Chroma\n",
"%pip install -qU langchain_chroma\n",
"\n",
"# Local inference and embeddings via Ollama\n",
"%pip install -qU langchain_ollama"
]
},
{
"cell_type": "markdown",
"id": "02b7914e",
"metadata": {},
"source": [
"You can also [see this page](/docs/integrations/text_embedding/) for a full list of available embeddings models"
"%pip install --upgrade --quiet langchain langchain-community langchainhub gpt4all langchain-chroma "
]
},
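{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can confirm that the local Ollama server is reachable before proceeding. This sketch assumes Ollama's default endpoint (`localhost:11434`) and its `/api/tags` model-listing route:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional check: list locally available Ollama models (assumes the default endpoint)\n",
"import requests\n",
"\n",
"# print(requests.get(\"http://localhost:11434/api/tags\").json())"
]
},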
{
@@ -73,22 +47,20 @@
"id": "5e7543fa",
"metadata": {},
"source": [
"## Document Loading\n",
"Load and split an example document.\n",
"\n",
"Now let's load and split an example document.\n",
"\n",
"We'll use a [blog post](https://lilianweng.github.io/posts/2023-06-23-agent/) by Lilian Weng on agents as an example."
"We'll use a blog post on agents as an example."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "f8cf5765",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n",
"data = loader.load()\n",
@@ -102,22 +74,20 @@
"id": "131d5059",
"metadata": {},
"source": [
"Next, the below steps will initialize your vector store. We use [`nomic-embed-text`](https://ollama.com/library/nomic-embed-text), but you can explore other providers or options as well:"
"Next, the below steps will download the `GPT4All` embeddings locally (if you don't already have them)."
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "fdce8923",
"metadata": {},
"outputs": [],
"source": [
"from langchain_chroma import Chroma\n",
"from langchain_ollama import OllamaEmbeddings\n",
"from langchain_community.embeddings import GPT4AllEmbeddings\n",
"\n",
"local_embeddings = OllamaEmbeddings(model=\"nomic-embed-text\")\n",
"\n",
"vectorstore = Chroma.from_documents(documents=all_splits, embedding=local_embeddings)"
"vectorstore = Chroma.from_documents(documents=all_splits, embedding=GPT4AllEmbeddings())"
]
},
{
@@ -125,12 +95,12 @@
"id": "29137915",
"metadata": {},
"source": [
"And now we have a working vector store! Test that similarity search is working:"
"Test similarity search is working with our local embeddings."
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"id": "b0c55e98",
"metadata": {},
"outputs": [
@@ -140,7 +110,7 @@
"4"
]
},
"execution_count": 4,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -153,17 +123,17 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 7,
"id": "32b43339",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\"}, page_content='Task decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.')"
"Document(page_content='Task decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.', metadata={'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\"})"
]
},
"execution_count": 5,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -174,100 +144,258 @@
},
{
"cell_type": "markdown",
"id": "fcf81052",
"id": "557cd9b8",
"metadata": {},
"source": [
"Next, set up a model. We use Ollama with `llama3.1:8b` here, but you can [explore other providers](/docs/how_to/local_llms/) or [model options depending on your hardware setup](https://ollama.com/library):"
"## Model \n",
"\n",
"### LLaMA2\n",
"\n",
"Note: new versions of `llama-cpp-python` use GGUF model files (see [here](https://github.com/abetlen/llama-cpp-python/pull/633)).\n",
"\n",
"If you have an existing GGML model, see [here](/docs/integrations/llms/llamacpp) for instructions for conversion for GGUF. \n",
" \n",
"And / or, you can download a GGUF converted model (e.g., [here](https://huggingface.co/TheBloke)).\n",
"\n",
"Finally, as noted in detail [here](/docs/how_to/local_llms) install `llama-cpp-python`"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "9f218576",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet llama-cpp-python"
]
},
{
"cell_type": "markdown",
"id": "0dd1804f",
"metadata": {},
"source": [
"To enable use of GPU on Apple Silicon, follow the steps [here](https://github.com/abetlen/llama-cpp-python/blob/main/docs/install/macos.md) to use the Python binding `with Metal support`.\n",
"\n",
"In particular, ensure that `conda` is using the correct virtual environment that you created (`miniforge3`).\n",
"\n",
"E.g., for me:\n",
"\n",
"```\n",
"conda activate /Users/rlm/miniforge3/envs/llama\n",
"```\n",
"\n",
"With this confirmed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5884779a-957e-4c4c-b447-bc8385edc67e",
"metadata": {},
"outputs": [],
"source": [
"! CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 /Users/rlm/miniforge3/envs/llama/bin/pip install -U llama-cpp-python --no-cache-dir"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "cd7164e3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import LlamaCpp"
]
},
{
"cell_type": "markdown",
"id": "fcf81052",
"metadata": {},
"source": [
"Setting model parameters as noted in the [llama.cpp docs](/docs/integrations/llms/llamacpp)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "af1176bb-d52a-4cf0-b983-8b7433d45b4f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_ollama import ChatOllama\n",
"n_gpu_layers = 1 # Metal set to 1 is enough.\n",
"n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.\n",
"\n",
"model = ChatOllama(\n",
" model=\"llama3.1:8b\",\n",
"# Make sure the model path is correct for your system!\n",
"llm = LlamaCpp(\n",
" model_path=\"/Users/rlm/Desktop/Code/llama.cpp/models/llama-2-13b-chat.ggufv3.q4_0.bin\",\n",
" n_gpu_layers=n_gpu_layers,\n",
" n_batch=n_batch,\n",
" n_ctx=2048,\n",
" f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls\n",
" verbose=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "8c4f7adf",
"id": "3831b16a",
"metadata": {},
"source": [
"Test it to make sure you've set everything up properly:"
"Note that these indicate that [Metal was enabled properly](/docs/integrations/llms/llamacpp):\n",
"\n",
"```\n",
"ggml_metal_init: allocating\n",
"ggml_metal_init: using MPS\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 11,
"id": "bf0162e0-8c41-4344-88ae-ff2bbaeb12eb",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"**The scene is set: a packed arena, the crowd on their feet. In the blue corner, we have Stephen Colbert, aka \"The O'Reilly Factor\" himself. In the red corner, the challenger, John Oliver. The judges are announced as Tina Fey, Larry Wilmore, and Patton Oswalt. The crowd roars as the two opponents face off.**\n",
"by jonathan \n",
"\n",
"**Stephen Colbert (aka \"The Truth with a Twist\"):**\n",
"Yo, I'm the king of satire, the one they all fear\n",
"My show's on late, but my jokes are clear\n",
"I skewer the politicians, with precision and might\n",
"They tremble at my wit, day and night\n",
"Here's the hypothetical rap battle:\n",
"\n",
"**John Oliver:**\n",
"Hold up, Stevie boy, you may have had your time\n",
"But I'm the new kid on the block, with a different prime\n",
"Time to wake up from that 90s coma, son\n",
"My show's got bite, and my facts are never done\n",
"[Stephen Colbert]: Yo, this is Stephen Colbert, known for my comedy show. I'm here to put some sense in your mind, like an enema do-go. Your opponent? A man of laughter and witty quips, John Oliver! Now let's see who gets the most laughs while taking shots at each other\n",
"\n",
"**Stephen Colbert:**\n",
"Oh, so you think you're the one, with the \"Last Week\" crown\n",
"But your jokes are stale, like the ones I wore down\n",
"I'm the master of absurdity, the lord of the spin\n",
"You're just a British import, trying to fit in\n",
"[John Oliver]: Yo, this is John Oliver, known for my own comedy show. I'm here to take your mind on an adventure through wit and humor. But first, allow me to you to our contestant: Stephen Colbert! His show has been around since the '90s, but it's time to see who can out-rap whom\n",
"\n",
"**John Oliver:**\n",
"Stevie, my friend, you may have been the first\n",
"But I've got the skill and the wit, that's never blurred\n",
"My show's not afraid, to take on the fray\n",
"I'm the one who'll make you think, come what may\n",
"[Stephen Colbert]: You claim to be a witty man, John Oliver, with your British charm and clever remarks. But my knows that I'm America's funnyman! Who's the one taking you? Nobody!\n",
"\n",
"**Stephen Colbert:**\n",
"Well, it's time for a showdown, like two old friends\n",
"Let's see whose satire reigns supreme, till the very end\n",
"But I've got a secret, that might just seal your fate\n",
"My humor's contagious, and it's already too late!\n",
"\n",
"**John Oliver:**\n",
"Bring it on, Stevie! I'm ready for you\n",
"I'll take on your jokes, and show them what to do\n",
"My sarcasm's sharp, like a scalpel in the night\n",
"You're just a relic of the past, without a fight\n",
"\n",
"**The judges deliberate, weighing the rhymes and the flow. Finally, they announce their decision:**\n",
"\n",
"Tina Fey: I've got to go with John Oliver. His jokes were sharper, and his delivery was smoother.\n",
"\n",
"Larry Wilmore: Agreed! But Stephen Colbert's still got that old-school charm.\n",
"\n",
"Patton Oswalt: You know what? It's a tie. Both of them brought the heat!\n",
"\n",
"**The crowd goes wild as both opponents take a bow. The rap battle may be over, but the satire war is just beginning...\n"
"[John Oliver]: Hey Stephen Colbert, don't get too cocky. You may"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 4481.74 ms\n",
"llama_print_timings: sample time = 183.05 ms / 256 runs ( 0.72 ms per token, 1398.53 tokens per second)\n",
"llama_print_timings: prompt eval time = 456.05 ms / 13 tokens ( 35.08 ms per token, 28.51 tokens per second)\n",
"llama_print_timings: eval time = 7375.20 ms / 255 runs ( 28.92 ms per token, 34.58 tokens per second)\n",
"llama_print_timings: total time = 8388.92 ms\n"
]
},
{
"data": {
"text/plain": [
"\"by jonathan \\n\\nHere's the hypothetical rap battle:\\n\\n[Stephen Colbert]: Yo, this is Stephen Colbert, known for my comedy show. I'm here to put some sense in your mind, like an enema do-go. Your opponent? A man of laughter and witty quips, John Oliver! Now let's see who gets the most laughs while taking shots at each other\\n\\n[John Oliver]: Yo, this is John Oliver, known for my own comedy show. I'm here to take your mind on an adventure through wit and humor. But first, allow me to you to our contestant: Stephen Colbert! His show has been around since the '90s, but it's time to see who can out-rap whom\\n\\n[Stephen Colbert]: You claim to be a witty man, John Oliver, with your British charm and clever remarks. But my knows that I'm America's funnyman! Who's the one taking you? Nobody!\\n\\n[John Oliver]: Hey Stephen Colbert, don't get too cocky. You may\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response_message = model.invoke(\n",
" \"Simulate a rap battle between Stephen Colbert and John Oliver\"\n",
")\n",
"llm.invoke(\"Simulate a rap battle between Stephen Colbert and John Oliver\")"
]
},
{
"cell_type": "markdown",
"id": "0d9579a7",
"metadata": {},
"source": [
"### GPT4All\n",
"\n",
"print(response_message.content)"
"Similarly, we can use `GPT4All`.\n",
"\n",
"[Download the GPT4All model binary](/docs/integrations/llms/gpt4all).\n",
"\n",
"The Model Explorer on the [GPT4All](https://gpt4all.io/index.html) is a great way to choose and download a model.\n",
"\n",
"Then, specify the path that you downloaded to to.\n",
"\n",
"E.g., for me, the model lives here:\n",
"\n",
"`/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "57c1aec0-04c7-479e-b9bf-af3c547ba0a3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import GPT4All\n",
"\n",
"gpt4all = GPT4All(\n",
" model=\"/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin\",\n",
" max_tokens=2048,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e6d012e4-0eef-4734-a826-89ec74fe9f88",
"metadata": {},
"source": [
"### llamafile\n",
"\n",
"One of the simplest ways to run an LLM locally is using a [llamafile](https://github.com/Mozilla-Ocho/llamafile). All you need to do is:\n",
"\n",
"1) Download a llamafile from [HuggingFace](https://huggingface.co/models?other=llamafile)\n",
"2) Make the file executable\n",
"3) Run the file\n",
"\n",
"llamafiles bundle model weights and a [specially-compiled](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#technical-details) version of [`llama.cpp`](https://github.com/ggerganov/llama.cpp) into a single file that can run on most computers without any additional dependencies. They also come with an embedded inference server that provides an [API](https://github.com/Mozilla-Ocho/llamafile/blob/main/llama.cpp/server/README.md#api-endpoints) for interacting with your model. \n",
"\n",
"Here's a simple bash script that shows all 3 setup steps:\n",
"\n",
"```bash\n",
"# Download a llamafile from HuggingFace\n",
"wget https://huggingface.co/jartine/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile\n",
"\n",
"# Make the file executable. On Windows, instead just rename the file to end in \".exe\".\n",
"chmod +x TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile\n",
"\n",
"# Start the model server. Listens at http://localhost:8080 by default.\n",
"./TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile --server --nobrowser\n",
"```\n",
"\n",
"After you run the above setup steps, you can interact with the model via LangChain:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "735e45b6-9aff-463e-aae4-bbf8ac2b21c5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n-1 1/2 (8 oz. Pounds) ground beef, browned and cooked until no longer pink\\n-3 cups whole wheat spaghetti\\n-4 (10 oz) cans diced tomatoes with garlic and basil\\n-2 eggs, beaten\\n-1 cup grated parmesan cheese\\n-1/2 teaspoon salt\\n-1/4 teaspoon black pepper\\n-1 cup breadcrumbs (16 oz)\\n-2 tablespoons olive oil\\n\\nInstructions:\\n1. Cook spaghetti according to package directions. Drain and set aside.\\n2. In a large skillet, brown ground beef over medium heat until no longer pink. Drain any excess grease.\\n3. Stir in diced tomatoes with garlic and basil, and season with salt and pepper. Cook for 5 to 7 minutes or until sauce is heated through. Set aside.\\n4. In a large bowl, beat eggs with a fork or whisk until fluffy. Add cheese, salt, and black pepper. Set aside.\\n5. In another bowl, combine breadcrumbs and olive oil. Dip each spaghetti into the egg mixture and then coat in the breadcrumb mixture. Place on baking sheet lined with parchment paper to prevent sticking. Repeat until all spaghetti are coated.\\n6. Heat oven to 375 degrees. Bake for 18 to 20 minutes, or until lightly golden brown.\\n7. Serve hot with meatballs and sauce on the side. Enjoy!'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.llms.llamafile import Llamafile\n",
"\n",
"llamafile = Llamafile()\n",
"\n",
"llamafile.invoke(\"Here is my grandmother's beloved recipe for spaghetti and meatballs:\")"
]
},
{
@@ -277,49 +405,79 @@
"source": [
"## Using in a chain\n",
"\n",
"We can create a summarization chain with either model by passing in retrieved docs and a simple prompt.\n",
"We can create a summarization chain with either model by passing in the retrieved docs and a simple prompt.\n",
"\n",
"It formats the prompt template using the input key values provided and passes the formatted string to the specified model:"
"It formats the prompt template using the input key values provided and passes the formatted string to `GPT4All`, `LLama-V2`, or another specified LLM."
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 27,
"id": "18a3716d",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Based on the retrieved documents, the main themes are:\n",
"1. Task decomposition: The ability to break down complex tasks into smaller subtasks, which can be handled by an LLM or other components of the agent system.\n",
"2. LLM as the core controller: The use of a large language model (LLM) as the primary controller of an autonomous agent system, complemented by other key components such as a knowledge graph and a planner.\n",
"3. Potentiality of LLM: The idea that LLMs have the potential to be used as powerful general problem solvers, not just for generating well-written copies but also for solving complex tasks and achieving human-like intelligence.\n",
"4. Challenges in long-term planning: The challenges in planning over a lengthy history and effectively exploring the solution space, which are important limitations of current LLM-based autonomous agent systems."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 1191.88 ms\n",
"llama_print_timings: sample time = 134.47 ms / 193 runs ( 0.70 ms per token, 1435.25 tokens per second)\n",
"llama_print_timings: prompt eval time = 39470.18 ms / 1055 tokens ( 37.41 ms per token, 26.73 tokens per second)\n",
"llama_print_timings: eval time = 8090.85 ms / 192 runs ( 42.14 ms per token, 23.73 tokens per second)\n",
"llama_print_timings: total time = 47943.12 ms\n"
]
},
{
"data": {
"text/plain": [
"'The main themes in these documents are:\\n\\n1. **Task Decomposition**: The process of breaking down complex tasks into smaller, manageable subgoals is crucial for efficient task handling.\\n2. **Autonomous Agent System**: A system powered by Large Language Models (LLMs) that can perform planning, reflection, and refinement to improve the quality of final results.\\n3. **Challenges in Planning and Decomposition**:\\n\\t* Long-term planning and task decomposition are challenging for LLMs.\\n\\t* Adjusting plans when faced with unexpected errors is difficult for LLMs.\\n\\t* Humans learn from trial and error, making them more robust than LLMs in certain situations.\\n\\nOverall, the documents highlight the importance of task decomposition and planning in autonomous agent systems powered by LLMs, as well as the challenges that still need to be addressed.'"
"'\\nBased on the retrieved documents, the main themes are:\\n1. Task decomposition: The ability to break down complex tasks into smaller subtasks, which can be handled by an LLM or other components of the agent system.\\n2. LLM as the core controller: The use of a large language model (LLM) as the primary controller of an autonomous agent system, complemented by other key components such as a knowledge graph and a planner.\\n3. Potentiality of LLM: The idea that LLMs have the potential to be used as powerful general problem solvers, not just for generating well-written copies but also for solving complex tasks and achieving human-like intelligence.\\n4. Challenges in long-term planning: The challenges in planning over a lengthy history and effectively exploring the solution space, which are important limitations of current LLM-based autonomous agent systems.'"
]
},
"execution_count": 8,
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
"# Prompt\n",
"prompt = PromptTemplate.from_template(\n",
" \"Summarize the main themes in these retrieved docs: {docs}\"\n",
")\n",
"\n",
"\n",
"# Convert loaded documents into strings by concatenating their content\n",
"# and ignoring metadata\n",
"# Chain\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"chain = {\"docs\": format_docs} | prompt | model | StrOutputParser()\n",
"chain = {\"docs\": format_docs} | prompt | llm | StrOutputParser()\n",
"\n",
"# Run\n",
"question = \"What are the approaches to Task Decomposition?\"\n",
"\n",
"docs = vectorstore.similarity_search(question)\n",
"\n",
"chain.invoke(docs)"
]
},
@@ -328,54 +486,184 @@
"id": "3cce6977-52e7-4944-89b4-c161d04f6698",
"metadata": {},
"source": [
"## Q&A\n",
"## Q&A \n",
"\n",
"You can also perform question-answering with your local model and vector store. Here's an example with a simple string prompt:"
"We can also use the LangChain Prompt Hub to store and fetch prompts that are model-specific.\n",
"\n",
"Let's try with a default RAG prompt, [here](https://smith.langchain.com/hub/rlm/rag-prompt)."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "67cefb46-acd3-4c2a-a8f6-b62c7c3e30dc",
"execution_count": 3,
"id": "59ed5f0d-7089-41cc-8486-af37b690dd33",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Task decomposition can be done through (1) simple prompting using LLM, (2) task-specific instructions, or (3) human inputs. This approach helps break down large tasks into smaller, manageable subgoals for efficient handling of complex tasks. It enables agents to plan ahead and improve the quality of final results through reflection and refinement.'"
"[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context', 'question'], template=\"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\\nQuestion: {question} \\nContext: {context} \\nAnswer:\"))]"
]
},
"execution_count": 9,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain import hub\n",
"\n",
"RAG_TEMPLATE = \"\"\"\n",
"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\n",
"\n",
"<context>\n",
"{context}\n",
"</context>\n",
"\n",
"Answer the following question:\n",
"\n",
"{question}\"\"\"\n",
"\n",
"rag_prompt = ChatPromptTemplate.from_template(RAG_TEMPLATE)\n",
"rag_prompt = hub.pull(\"rlm/rag-prompt\")\n",
"rag_prompt.messages"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "c01c1725",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Task can be done by down a task into smaller subtasks, using simple prompting like \"Steps for XYZ.\" or task-specific like \"Write a story outline\" for writing a novel."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 11326.20 ms\n",
"llama_print_timings: sample time = 33.03 ms / 47 runs ( 0.70 ms per token, 1422.86 tokens per second)\n",
"llama_print_timings: prompt eval time = 1387.31 ms / 242 tokens ( 5.73 ms per token, 174.44 tokens per second)\n",
"llama_print_timings: eval time = 1321.62 ms / 46 runs ( 28.73 ms per token, 34.81 tokens per second)\n",
"llama_print_timings: total time = 2801.08 ms\n"
]
},
{
"data": {
"text/plain": [
"{'output_text': '\\nTask can be done by down a task into smaller subtasks, using simple prompting like \"Steps for XYZ.\" or task-specific like \"Write a story outline\" for writing a novel.'}"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.runnables import RunnablePassthrough, RunnablePick\n",
"\n",
"# Chain\n",
"chain = (\n",
" RunnablePassthrough.assign(context=lambda input: format_docs(input[\"context\"]))\n",
" RunnablePassthrough.assign(context=RunnablePick(\"context\") | format_docs)\n",
" | rag_prompt\n",
" | model\n",
" | llm\n",
" | StrOutputParser()\n",
")\n",
"\n",
"question = \"What are the approaches to Task Decomposition?\"\n",
"\n",
"docs = vectorstore.similarity_search(question)\n",
"# Run\n",
"chain.invoke({\"context\": docs, \"question\": question})"
]
},
{
"cell_type": "markdown",
"id": "2e5913f0-cf92-4e21-8794-0502ba11b202",
"metadata": {},
"source": [
"Now, let's try with [a prompt specifically for LLaMA](https://smith.langchain.com/hub/rlm/rag-prompt-llama), which [includes special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2)."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "78f6862d-b7a6-4e03-84e4-45667185bf9b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptTemplate(input_variables=['question', 'context'], output_parser=None, partial_variables={}, messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['question', 'context'], output_parser=None, partial_variables={}, template=\"[INST]<<SYS>> You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.<</SYS>> \\nQuestion: {question} \\nContext: {context} \\nAnswer: [/INST]\", template_format='f-string', validate_template=True), additional_kwargs={})])"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Prompt\n",
"rag_prompt_llama = hub.pull(\"rlm/rag-prompt-llama\")\n",
"rag_prompt_llama.messages"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "67cefb46-acd3-4c2a-a8f6-b62c7c3e30dc",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" Sure, I'd be happy to help! Based on the context, here are some to task:\n",
"\n",
"1. LLM with simple prompting: This using a large model (LLM) with simple prompts like \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\" to decompose tasks into smaller steps.\n",
"2. Task-specific: Another is to use task-specific, such as \"Write a story outline\" for writing a novel, to guide the of tasks.\n",
"3. Human inputs:, human inputs can be used to supplement the process, in cases where the task a high degree of creativity or expertise.\n",
"\n",
"As fores in long-term and task, one major is that LLMs to adjust plans when faced with errors, making them less robust to humans who learn from trial and error."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 11326.20 ms\n",
"llama_print_timings: sample time = 144.81 ms / 207 runs ( 0.70 ms per token, 1429.47 tokens per second)\n",
"llama_print_timings: prompt eval time = 1506.13 ms / 258 tokens ( 5.84 ms per token, 171.30 tokens per second)\n",
"llama_print_timings: eval time = 6231.92 ms / 206 runs ( 30.25 ms per token, 33.06 tokens per second)\n",
"llama_print_timings: total time = 8158.41 ms\n"
]
},
{
"data": {
"text/plain": [
"{'output_text': ' Sure, I\\'d be happy to help! Based on the context, here are some to task:\\n\\n1. LLM with simple prompting: This using a large model (LLM) with simple prompts like \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\" to decompose tasks into smaller steps.\\n2. Task-specific: Another is to use task-specific, such as \"Write a story outline\" for writing a novel, to guide the of tasks.\\n3. Human inputs:, human inputs can be used to supplement the process, in cases where the task a high degree of creativity or expertise.\\n\\nAs fores in long-term and task, one major is that LLMs to adjust plans when faced with errors, making them less robust to humans who learn from trial and error.'}"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Chain\n",
"chain = (\n",
" RunnablePassthrough.assign(context=RunnablePick(\"context\") | format_docs)\n",
" | rag_prompt_llama\n",
" | llm\n",
" | StrOutputParser()\n",
")\n",
"\n",
"# Run\n",
"chain.invoke({\"context\": docs, \"question\": question})"
@@ -388,64 +676,82 @@
"source": [
"## Q&A with retrieval\n",
"\n",
"Finally, instead of manually passing in docs, you can automatically retrieve them from our vector store based on the user question:"
"Instead of manually passing in docs, we can automatically retrieve them from our vector store based on the user question.\n",
"\n",
"This will use a QA default prompt (shown [here](https://github.com/langchain-ai/langchain/blob/275b926cf745b5668d3ea30236635e20e7866442/langchain/chains/retrieval_qa/prompt.py#L4)) and will retrieve from the vectorDB."
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 29,
"id": "86c7a349",
"metadata": {},
"outputs": [],
"source": [
"retriever = vectorstore.as_retriever()\n",
"\n",
"qa_chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | rag_prompt\n",
" | model\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 30,
"id": "112ca227",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" Sure! Based on the context, here's my answer to your:\n",
"\n",
"There are several to task,:\n",
"\n",
"1. LLM-based with simple prompting, such as \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\"\n",
"2. Task-specific, like \"Write a story outline\" for writing a novel.\n",
"3. Human inputs to guide the process.\n",
"\n",
"These can be used to decompose complex tasks into smaller, more manageable subtasks, which can help improve the and effectiveness of task. However, long-term and task can being due to the need to plan over a lengthy history and explore the space., LLMs may to adjust plans when faced with errors, making them less robust to human learners who can learn from trial and error."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 11326.20 ms\n",
"llama_print_timings: sample time = 139.20 ms / 200 runs ( 0.70 ms per token, 1436.76 tokens per second)\n",
"llama_print_timings: prompt eval time = 1532.26 ms / 258 tokens ( 5.94 ms per token, 168.38 tokens per second)\n",
"llama_print_timings: eval time = 5977.62 ms / 199 runs ( 30.04 ms per token, 33.29 tokens per second)\n",
"llama_print_timings: total time = 7916.21 ms\n"
]
},
{
"data": {
"text/plain": [
"'Task decomposition can be done through (1) simple prompting in Large Language Models (LLM), (2) using task-specific instructions, or (3) with human inputs. This process involves breaking down large tasks into smaller, manageable subgoals for efficient handling of complex tasks.'"
"{'query': 'What are the approaches to Task Decomposition?',\n",
" 'result': ' Sure! Based on the context, here\\'s my answer to your:\\n\\nThere are several to task,:\\n\\n1. LLM-based with simple prompting, such as \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\"\\n2. Task-specific, like \"Write a story outline\" for writing a novel.\\n3. Human inputs to guide the process.\\n\\nThese can be used to decompose complex tasks into smaller, more manageable subtasks, which can help improve the and effectiveness of task. However, long-term and task can being due to the need to plan over a lengthy history and explore the space., LLMs may to adjust plans when faced with errors, making them less robust to human learners who can learn from trial and error.'}"
]
},
"execution_count": 11,
"execution_count": 30,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"question = \"What are the approaches to Task Decomposition?\"\n",
"\n",
"qa_chain.invoke(question)"
]
},
{
"cell_type": "markdown",
"id": "e75d3e9e",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"You've now seen how to build a RAG application using all local components. RAG is a very deep topic, and you might be interested in the following guides that discuss and demonstrate additional techniques:\n",
"\n",
"- [Video: Reliable, fully local RAG agents with LLaMA 3](https://www.youtube.com/watch?v=-ROS6gfYIts) for an agentic approach to RAG with local models\n",
"- [Video: Building Corrective RAG from scratch with open-source, local LLMs](https://www.youtube.com/watch?v=E2shqsYwxck)\n",
"- [Conceptual guide on retrieval](/docs/concepts/#retrieval) for an overview of various retrieval techniques you can apply to improve performance\n",
"- [How to guides on RAG](/docs/how_to/#qa-with-rag) for a deeper dive into different specifics around of RAG\n",
"- [How to run models locally](/docs/how_to/local_llms/) for different approaches to setting up different providers"
]
}
],
"metadata": {
@@ -464,7 +770,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.10.1"
}
},
"nbformat": 4,

View File

@@ -936,8 +936,7 @@
"- [Return sources](/docs/how_to/qa_sources): Learn how to return source documents\n",
"- [Streaming](/docs/how_to/streaming): Learn how to stream outputs and intermediate steps\n",
"- [Add chat history](/docs/how_to/message_history): Learn how to add chat history to your app\n",
"- [Retrieval conceptual guide](/docs/concepts/#retrieval): A high-level overview of specific retrieval techniques\n",
"- [Build a local RAG application](/docs/tutorials/local_rag): Create an app similar to the one above using all local components"
"- [Retrieval conceptual guide](/docs/concepts/#retrieval): A high-level overview of specific retrieval techniques"
]
}
],
@@ -957,7 +956,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -1,74 +0,0 @@
import itertools
import multiprocessing
import re
import sys
from pathlib import Path
def _generate_related_links_section(integration_type: str, notebook_name: str):
concept_display_name = None
concept_heading = None
if integration_type == "chat":
concept_display_name = "Chat model"
concept_heading = "chat-models"
elif integration_type == "llms":
concept_display_name = "LLM"
concept_heading = "llms"
elif integration_type == "text_embedding":
concept_display_name = "Embedding model"
concept_heading = "embedding-models"
elif integration_type == "document_loaders":
concept_display_name = "Document loader"
concept_heading = "document-loaders"
elif integration_type == "vectorstores":
concept_display_name = "Vector store"
concept_heading = "vector-stores"
elif integration_type == "retrievers":
concept_display_name = "Retriever"
concept_heading = "retrievers"
elif integration_type == "tools":
concept_display_name = "Tool"
concept_heading = "tools"
elif integration_type == "stores":
concept_display_name = "Key-value store"
concept_heading = "key-value-stores"
# Special case because there are no key-value store how-tos yet
return f"""## Related
- [{concept_display_name} conceptual guide](/docs/concepts/#{concept_heading})
"""
else:
return None
return f"""## Related
- {concept_display_name} [conceptual guide](/docs/concepts/#{concept_heading})
- {concept_display_name} [how-to guides](/docs/how_to/#{concept_heading})
"""
def _process_path(doc_path: Path):
content = doc_path.read_text()
print(doc_path)
pattern = r"/docs/integrations/([^/]+)/([^/]+).mdx?"
match = re.search(pattern, str(doc_path))
print(bool(match))
if match and match.group(2) != "index":
integration_type = match.group(1)
notebook_name = match.group(2)
related_links_section = _generate_related_links_section(
integration_type, notebook_name
)
if related_links_section:
content = content + "\n\n" + related_links_section
doc_path.write_text(content)
if __name__ == "__main__":
output_docs_dir = Path(sys.argv[1])
mds = output_docs_dir.rglob("integrations/**/*.md")
mdxs = output_docs_dir.rglob("integrations/**/*.mdx")
paths = itertools.chain(mds, mdxs)
# modify all md files in place
with multiprocessing.Pool() as pool:
pool.map(_process_path, paths)

View File

@@ -1,89 +1,69 @@
import json
import re
import sys
from functools import cache
from pathlib import Path
from typing import Dict, Iterable, List, Union
from typing import Union
CURR_DIR = Path(__file__).parent.absolute()
CLI_TEMPLATE_DIR = (
CURR_DIR.parent.parent / "libs/cli/langchain_cli/integration_template/docs"
CHAT_MODEL_HEADERS = (
"## Overview",
"### Integration details",
"### Model features",
"## Setup",
"## Instantiation",
"## Invocation",
"## Chaining",
"## API reference",
)
CHAT_MODEL_REGEX = r".*".join(CHAT_MODEL_HEADERS)
INFO_BY_DIR: Dict[str, Dict[str, Union[int, str]]] = {
"chat": {
"issue_number": 22296,
},
"document_loaders": {
"issue_number": 22866,
},
"stores": {},
"llms": {
"issue_number": 24803,
},
"text_embedding": {"issue_number": 14856},
"toolkits": {"issue_number": "TODO"},
"tools": {"issue_number": "TODO"},
"vectorstores": {"issue_number": 24800},
"retrievers": {"issue_number": "TODO"},
}
DOCUMENT_LOADER_HEADERS = (
"## Overview",
"### Integration details",
"### Loader features",
"## Setup",
"## Instantiation",
"## Load",
"## Lazy Load",
"## API reference",
)
DOCUMENT_LOADER_REGEX = r".*".join(DOCUMENT_LOADER_HEADERS)
@cache
def _get_headers(doc_dir: str) -> Iterable[str]:
"""Gets all markdown headers ## and below from the integration template.
Ignores headers that contain "TODO"."""
ipynb_name = f"{doc_dir}.ipynb"
if not (CLI_TEMPLATE_DIR / ipynb_name).exists():
raise FileNotFoundError(f"Could not find {ipynb_name} in {CLI_TEMPLATE_DIR}")
with open(CLI_TEMPLATE_DIR / ipynb_name, "r") as f:
nb = json.load(f)
headers: List[str] = []
for cell in nb["cells"]:
if cell["cell_type"] == "markdown":
for line in cell["source"]:
if not line.startswith("##") or "TODO" in line:
continue
header = line.strip()
headers.append(header)
return headers
def check_header_order(path: Path) -> None:
doc_dir = path.parent.name
if doc_dir not in INFO_BY_DIR:
# Skip if not a directory we care about
return
headers = _get_headers(doc_dir)
issue_number = INFO_BY_DIR[doc_dir].get("issue_number", "nonexistent")
print(f"Checking {doc_dir} page {path}")
def check_chat_model(path: Path) -> None:
with open(path, "r") as f:
doc = f.read()
regex = r".*".join(headers)
if not re.search(regex, doc, re.DOTALL):
issueline = (
(
" Please see https://github.com/langchain-ai/langchain/issues/"
f"{issue_number} for instructions on how to correctly format a "
f"{doc_dir} integration page."
)
if isinstance(issue_number, int)
else ""
)
if not re.search(CHAT_MODEL_REGEX, doc, re.DOTALL):
raise ValueError(
f"Document {path} does not match the expected header order.{issueline}"
f"Document {path} does not match the ChatModel Integration page template. "
f"Please see https://github.com/langchain-ai/langchain/issues/22296 for "
f"instructions on how to correctly format a ChatModel Integration page."
)
def check_document_loader(path: Path) -> None:
with open(path, "r") as f:
doc = f.read()
if not re.search(DOCUMENT_LOADER_REGEX, doc, re.DOTALL):
raise ValueError(
f"Document {path} does not match the DocumentLoader Integration page template. "
f"Please see https://github.com/langchain-ai/langchain/issues/22866 for "
f"instructions on how to correctly format a DocumentLoader Integration page."
)
def main(*new_doc_paths: Union[str, Path]) -> None:
for path in new_doc_paths:
path = Path(path).resolve().absolute()
if CURR_DIR.parent / "docs" / "integrations" in path.parents:
check_header_order(path)
if CURR_DIR.parent / "docs" / "integrations" / "chat" in path.parents:
print(f"Checking chat model page {path}")
check_chat_model(path)
elif (
CURR_DIR.parent / "docs" / "integrations" / "document_loaders"
in path.parents
):
print(f"Checking document loader page {path}")
check_document_loader(path)
else:
continue

View File

@@ -1,107 +0,0 @@
import sys
from pathlib import Path
from langchain_community import document_loaders
from langchain_core.document_loaders.base import BaseLoader
KV_STORE_TEMPLATE = """\
---
sidebar_class_name: hidden
keywords: [compatibility]
custom_edit_url:
hide_table_of_contents: true
---
# Key-value stores
[Key-value stores](/docs/concepts/#key-value-stores) are used by other LangChain components to store and retrieve data.
:::info
If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing/integrations/).
:::
## Features
The following table shows information on all available key-value stores.
{table}
"""
KV_STORE_FEAT_TABLE = {
"AstraDBByteStore": {
"class": "[AstraDBByteStore](https://api.python.langchain.com/en/latest/storage/langchain_astradb.storage.AstraDBByteStore.html)",
"local": False,
"package": "[langchain_astradb](https://api.python.langchain.com/en/latest/astradb_api_reference.html)",
"downloads": "![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_astradb?style=flat-square&label=%20)",
},
"CassandraByteStore": {
"class": "[CassandraByteStore](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.cassandra.CassandraByteStore.html)",
"local": False,
"package": "[langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html)",
"downloads": "![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20)",
},
"ElasticsearchEmbeddingsCache": {
"class": "[ElasticsearchEmbeddingsCache](https://api.python.langchain.com/en/latest/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html)",
"local": True,
"package": "[langchain_elasticsearch](https://api.python.langchain.com/en/latest/elasticsearch_api_reference.html)",
"downloads": "![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_elasticsearch?style=flat-square&label=%20)",
},
"LocalFileStore": {
"class": "[LocalFileStore](https://api.python.langchain.com/en/latest/storage/langchain.storage.file_system.LocalFileStore.html)",
"local": True,
"package": "[langchain](https://api.python.langchain.com/en/latest/langchain_api_reference.html)",
"downloads": "![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain?style=flat-square&label=%20)",
},
"InMemoryByteStore": {
"class": "[InMemoryByteStore](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.InMemoryByteStore.html)",
"local": True,
"package": "[langchain_core](https://api.python.langchain.com/en/latest/core_api_reference.html)",
"downloads": "![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_core?style=flat-square&label=%20)",
},
"RedisStore": {
"class": "[RedisStore](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.redis.RedisStore.html)",
"local": True,
"package": "[langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html)",
"downloads": "![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20)",
},
"UpstashRedisByteStore": {
"class": "[UpstashRedisByteStore](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html)",
"local": False,
"package": "[langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html)",
"downloads": "![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20)",
},
}
DEPRECATED = []
def get_kv_store_table() -> str:
"""Get the table of KV stores."""
header = ["name", "local", "package", "downloads"]
title = ["Class", "Local", "Package", "Downloads"]
rows = [title, [":-"] + [":-:"] * (len(title) - 1)]
for loader, feats in sorted(KV_STORE_FEAT_TABLE.items()):
if not feats or loader in DEPRECATED:
continue
rows += [
[feats["class"]]
+ ["" if feats.get(h) else "" for h in header[1:2]]
+ [feats["package"], feats["downloads"]]
]
return "\n".join(["|".join(row) for row in rows])
if __name__ == "__main__":
output_dir = Path(sys.argv[1])
output_integrations_dir = output_dir / "integrations"
output_integrations_dir_kv_stores = output_integrations_dir / "stores"
output_integrations_dir_kv_stores.mkdir(parents=True, exist_ok=True)
kv_stores_page = KV_STORE_TEMPLATE.format(table=get_kv_store_table())
with open(output_integrations_dir / "stores" / "index.mdx", "w") as f:
f.write(kv_stores_page)

View File

@@ -174,6 +174,8 @@ hide_table_of_contents: true
# Chat models
## Advanced features
:::info
If you'd like to write your own chat model, see [this how-to](/docs/how_to/custom_chat_model/).
@@ -181,8 +183,6 @@ If you'd like to contribute an integration, see [Contributing integrations](/doc
:::
## Advanced features
The following table shows all the chat model classes that support one or more advanced features.
:::info

View File

@@ -68,13 +68,11 @@ module.exports = {
},
{
type: "category",
label: "Versions",
collapsed: false,
collapsible: false,
label: "Versioning",
collapsed: true,
items: [
"versions/overview",
"versions/release_policy",
"versions/packages",
{
type: 'doc',
id: "how_to/pydantic_compatibility",
@@ -82,26 +80,20 @@ module.exports = {
},
{
type: "category",
label: "v0.2",
label: "Upgrading to v0.2",
link: {type: 'doc', id: 'versions/v0_2/index'},
collapsible: false,
collapsed: false,
items: [{
type: 'autogenerated',
dirName: 'versions/v0_2',
className: 'hidden',
}],
},
{
type: "category",
label: "Migrating to LCEL",
link: {type: 'doc', id: 'versions/migrating_chains/index'},
collapsible: false,
collapsed: false,
items: [{
type: 'autogenerated',
dirName: 'versions/migrating_chains',
className: 'hidden',
}],
},
],
@@ -275,8 +267,8 @@ module.exports = {
},
],
link: {
type: "doc",
id: "integrations/toolkits/index",
type: "generated-index",
slug: "integrations/toolkits",
},
},
{

View File

@@ -8,7 +8,7 @@ import CodeBlock from "@theme-original/CodeBlock";
* @typedef {Object} ChatModelTabsProps - Component props.
* @property {string} [openaiParams] - Parameters for OpenAI chat model. Defaults to `model="gpt-3.5-turbo-0125"`
* @property {string} [anthropicParams] - Parameters for Anthropic chat model. Defaults to `model="claude-3-sonnet-20240229"`
* @property {string} [cohereParams] - Parameters for Cohere chat model. Defaults to `model="command-r-plus"`
* @property {string} [cohereParams] - Parameters for Cohere chat model. Defaults to `model="command-r"`
* @property {string} [fireworksParams] - Parameters for Fireworks chat model. Defaults to `model="accounts/fireworks/models/mixtral-8x7b-instruct"`
* @property {string} [groqParams] - Parameters for Groq chat model. Defaults to `model="llama3-8b-8192"`
* @property {string} [mistralParams] - Parameters for Mistral chat model. Defaults to `model="mistral-large-latest"`

View File

@@ -27,7 +27,7 @@
"\n",
"## Overview\n",
"\n",
"- TODO: (Optional) A short introduction to the underlying technology/API.\n",
"- TODO: (Optional) A short introduciton to the underlying technology/API.\n",
"\n",
"### Integration details\n",
"\n",
@@ -36,7 +36,7 @@
"- TODO: Make sure API reference links are correct.\n",
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/v0.2/docs/integrations/stores/_package_name_) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [__ModuleName__ByteStore](https://api.python.langchain.com/en/latest/stores/__module_name__.stores.__ModuleName__ByteStore.html) | [__package_name__](https://api.python.langchain.com/en/latest/__package_name_short_snake___api_reference.html) | ✅/❌ | ✅/❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/__package_name__?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/__package_name__?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",

View File

@@ -1,217 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: __ModuleName__\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# __ModuleName__Retriever\n",
"\n",
"## Overview\n",
"- TODO: Make sure API reference link is correct.\n",
"\n",
"This will help you getting started with the __ModuleName__ [retriever](/docs/concepts/#retrievers). For detailed documentation of all __ModuleName__Retriever features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/__module_name__.retrievers.__ModuleName__.__ModuleName__Retriever.html).\n",
"\n",
"### Integration details\n",
"\n",
"| Retriever | Namespace | Native async | Local |\n",
"| :--- | :--- | :---: | :---: |\n",
"[__ModuleName__Retriever](https://api.python.langchain.com/en/latest/retrievers/__package_name__.retrievers.__module_name__.__ModuleName__Retriever.html) | __package_name__.retrievers | ❌ | ❌ |\n",
"\n",
"\n",
"## Setup\n",
"\n",
"- TODO: Update with relevant info."
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"This retriever lives in the `__package_name__` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU __package_name__"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our retriever:\n",
"\n",
"- TODO: Update model instantiation with relevant params."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "70cc8e65-2a02-408a-bbc6-8ef649057d82",
"metadata": {},
"outputs": [],
"source": [
"from __module_name__ import __ModuleName__Retriever\n",
"\n",
"retriever = __ModuleName__Retriever(\n",
" # ...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "5c5f2839-4020-424e-9fc9-07777eede442",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "51a60dbe-9f2e-4e04-bb62-23968f17164a",
"metadata": {},
"outputs": [],
"source": [
"query = \"...\"\n",
"\n",
"retriever.invoke(query)"
]
},
{
"cell_type": "markdown",
"id": "dfe8aad4-8626-4330-98a9-7ea1ca5d2e0e",
"metadata": {},
"source": [
"## Use within a chain"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "23e11cc9-abd6-4855-a7eb-799f45ca01ae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"Answer the question based only on the context provided.\n",
"\n",
"Context: {context}\n",
"\n",
"Question: {question}\"\"\"\n",
")\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d47c37dd-5c11-416c-a3b6-bec413cd70e8",
"metadata": {},
"outputs": [],
"source": [
"chain.invoke(\"...\")"
]
},
{
"cell_type": "markdown",
"id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
"metadata": {},
"source": [
"## TODO: Any functionality or considerations specific to this retriever\n",
"\n",
"Fill in or delete if not relevant."
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all __ModuleName__Retriever features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/__module_name__.retrievers.__ModuleName__.__ModuleName__Retriever.html)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,89 +0,0 @@
"""__ModuleName__ retrievers."""
from typing import List
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever
class __ModuleName__Retriever(BaseRetriever):
# TODO: Replace all TODOs in docstring. See example docstring:
# https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/retrievers/tavily_search_api.py#L17
"""__ModuleName__ retriever.
# TODO: Replace with relevant packages, env vars, etc.
Setup:
Install ``__package_name__`` and set environment variable ``__MODULE_NAME___API_KEY``.
.. code-block:: bash
pip install -U __package_name__
export __MODULE_NAME___API_KEY="your-api-key"
# TODO: Populate with relevant params.
Key init args:
arg 1: type
description
arg 2: type
description
# TODO: Replace with relevant init params.
Instantiate:
.. code-block:: python
from __package_name__ import __ModuleName__Retriever
retriever = __ModuleName__Retriever(
# ...
)
Usage:
.. code-block:: python
query = "..."
retriever.invoke(query)
.. code-block:: python
# TODO: Example output.
Use within a chain:
.. code-block:: python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
prompt = ChatPromptTemplate.from_template(
\"\"\"Answer the question based only on the context provided.
Context: {context}
Question: {question}\"\"\"
)
llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
def format_docs(docs):
return "\n\n".join(doc.page_content for doc in docs)
chain = (
{"context": retriever | format_docs, "question": RunnablePassthrough()}
| prompt
| llm
| StrOutputParser()
)
chain.invoke("...")
.. code-block:: python
# TODO: Example output.
""" # noqa: E501
# TODO: This method must be implemented to retrieve documents.
def _get_relevant_documents(self, query: str) -> List[Document]:
raise NotImplementedError()

View File

@@ -1,13 +1,12 @@
"""__ModuleName__ toolkits."""
"""__ModuleName__ chat models."""
from typing import List
from langchain_core.tools import BaseTool, BaseToolKit
class __ModuleName__Toolkit(BaseToolKit):
# TODO: Replace all TODOs in docstring. See example docstring:
# https://github.com/langchain-ai/langchain/blob/c123cb2b304f52ab65db4714eeec46af69a861ec/libs/community/langchain_community/agent_toolkits/sql/toolkit.py#L19
# https://github.com/langchain-ai/langchain/blob/a6d1fb4275801a4850e62b6209cfbf096a24f93f/libs/community/langchain_community/agent_toolkits/sql/toolkit.py#L20
"""__ModuleName__ toolkit.
# TODO: Replace with relevant packages, env vars, etc.
@@ -67,6 +66,6 @@ class __ModuleName__Toolkit(BaseToolKit):
""" # noqa: E501
# TODO: This method must be implemented to list tools.
# TODO: This method must be implemented to generate chat responses.
def get_tools(self) -> List[BaseTool]:
raise NotImplementedError()

View File

@@ -3,9 +3,10 @@
from typing import Optional, Type
from langchain_core.callbacks import (
AsyncCallbackManagerForToolRun,
CallbackManagerForToolRun,
)
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.tools import BaseTool

View File

@@ -9,7 +9,7 @@ if __name__ == "__main__":
try:
SourceFileLoader("x", file).load_module()
except Exception:
has_failure = True
has_faillure = True
print(file) # noqa: T201
traceback.print_exc()
print() # noqa: T201

View File

@@ -127,20 +127,6 @@ def new(
)
TEMPLATE_MAP: dict[str, str] = {
"ChatModel": "chat.ipynb",
"DocumentLoader": "document_loaders.ipynb",
"Tool": "tools.ipynb",
"VectorStore": "vectorstores.ipynb",
"Embeddings": "text_embedding.ipynb",
"ByteStore": "kv_store.ipynb",
"LLM": "llms.ipynb",
"Provider": "provider.ipynb",
"Toolkit": "toolkits.ipynb",
"Retriever": "retrievers.ipynb",
}
@integration_cli.command()
def create_doc(
name: Annotated[
@@ -187,7 +173,7 @@ def create_doc(
Creates a new integration doc.
"""
try:
replacements = _process_name(name, community=component_type == "Tool")
replacements = _process_name(name, community=component_type=="Tool")
except ValueError as e:
typer.echo(e)
raise typer.Exit(code=1)
@@ -216,8 +202,14 @@ def create_doc(
# copy over template from ../integration_template
template_dir = Path(__file__).parents[1] / "integration_template" / "docs"
if component_type in TEMPLATE_MAP:
docs_template = template_dir / TEMPLATE_MAP[component_type]
if component_type == "ChatModel":
docs_template = template_dir / "chat.ipynb"
elif component_type == "DocumentLoader":
docs_template = template_dir / "document_loaders.ipynb"
elif component_type == "Tool":
docs_template = template_dir / "tools.ipynb"
elif component_type == "VectorStore":
docs_template = template_dir / "vectorstores.ipynb"
else:
raise ValueError(
f"Unrecognized {component_type=}. Expected one of 'ChatModel', "

View File

@@ -1,6 +1,6 @@
[tool.poetry]
name = "langchain-cli"
version = "0.0.28"
version = "0.0.27"
description = "CLI for interacting with LangChain"
authors = ["Erick Friis <erick@langchain.dev>"]
readme = "README.md"

View File

@@ -91,4 +91,3 @@ vdms>=0.0.20
xata>=1.0.0a7,<2
xmltodict>=0.13.0,<0.14
nanopq==0.2.1
mlflow[genai]>=2.14.0

View File

@@ -58,7 +58,7 @@ class UpstashRatelimitHandler(BaseCallbackHandler):
every time you invoke.
"""
raise_error: bool = True
raise_error = True
_checked: bool = False
def __init__(

View File

@@ -8,7 +8,7 @@ import inspect
import json
import logging
from http import HTTPStatus
from typing import Any, Dict, List, Optional, Tuple
from typing import Any, Dict, List, Optional
import requests # type: ignore
from langchain.chains.base import Chain
@@ -37,7 +37,6 @@ from langchain_community.chains.pebblo_retrieval.utilities import (
CLASSIFIER_URL,
PEBBLO_CLOUD_URL,
PLUGIN_VERSION,
PROMPT_GOV_URL,
PROMPT_URL,
get_runtime,
)
@@ -76,12 +75,10 @@ class PebbloRetrievalQA(Chain):
"""Classifier endpoint."""
classifier_location: str = "local" #: :meta private:
"""Classifier location. It could be either of 'local' or 'pebblo-cloud'."""
_discover_sent: bool = False #: :meta private:
_discover_sent = False #: :meta private:
"""Flag to check if discover payload has been sent."""
_prompt_sent: bool = False #: :meta private:
"""Flag to check if prompt payload has been sent."""
enable_prompt_gov: bool = True #: :meta private:
"""Flag to check if prompt governance is enabled or not"""
def _call(
self,
@@ -105,8 +102,6 @@ class PebbloRetrievalQA(Chain):
question = inputs[self.input_key]
auth_context = inputs.get(self.auth_context_key, {})
semantic_context = inputs.get(self.semantic_context_key, {})
_, prompt_entities = self._check_prompt_validity(question)
accepts_run_manager = (
"run_manager" in inspect.signature(self._get_docs).parameters
)
@@ -138,12 +133,7 @@ class PebbloRetrievalQA(Chain):
for doc in docs
if isinstance(doc, Document)
],
"prompt": {
"data": question,
"entities": prompt_entities.get("entities", {}),
"entityCount": prompt_entities.get("entityCount", 0),
"prompt_gov_enabled": self.enable_prompt_gov,
},
"prompt": {"data": question},
"response": {
"data": answer,
},
@@ -154,7 +144,6 @@ class PebbloRetrievalQA(Chain):
else [],
"classifier_location": self.classifier_location,
}
qa_payload = Qa(**qa)
self._send_prompt(qa_payload)
@@ -186,9 +175,6 @@ class PebbloRetrievalQA(Chain):
accepts_run_manager = (
"run_manager" in inspect.signature(self._aget_docs).parameters
)
_, prompt_entities = self._check_prompt_validity(question)
if accepts_run_manager:
docs = await self._aget_docs(
question, auth_context, semantic_context, run_manager=_run_manager
@@ -527,66 +513,6 @@ class PebbloRetrievalQA(Chain):
logger.warning("API key is missing for sending prompt to Pebblo cloud.")
raise NameError("API key is missing for sending prompt to Pebblo cloud.")
def _check_prompt_validity(self, question: str) -> Tuple[bool, Dict[str, Any]]:
"""
Check the validity of the given prompt using a remote classification service.
This method sends the prompt to a remote classifier service and returns any
entities detected in it.
Args:
question (str): The prompt question to be validated.
Returns:
bool: True if the prompt is valid (does not contain deny list entities),
False otherwise.
dict: The entities present in the prompt
"""
headers = {
"Accept": "application/json",
"Content-Type": "application/json",
}
prompt_payload = {"prompt": question}
is_valid_prompt: bool = True
prompt_gov_api_url = f"{self.classifier_url}{PROMPT_GOV_URL}"
pebblo_resp = None
prompt_entities: dict = {"entities": {}, "entityCount": 0}
if self.classifier_location == "local":
try:
pebblo_resp = requests.post(
prompt_gov_api_url,
headers=headers,
json=prompt_payload,
timeout=20,
)
logger.debug("prompt-payload: %s", prompt_payload)
logger.debug(
"send_prompt[local]: request url %s, body %s len %s\
response status %s body %s",
pebblo_resp.request.url,
str(pebblo_resp.request.body),
str(
len(
pebblo_resp.request.body if pebblo_resp.request.body else []
)
),
str(pebblo_resp.status_code),
pebblo_resp.json(),
)
logger.debug(f"pebblo_resp.json() {pebblo_resp.json()}")
prompt_entities["entities"] = pebblo_resp.json().get("entities", {})
prompt_entities["entityCount"] = pebblo_resp.json().get(
"entityCount", 0
)
except requests.exceptions.RequestException:
logger.warning("Unable to reach pebblo server.")
except Exception as e:
logger.warning("An Exception caught in _send_discover: local %s", e)
return is_valid_prompt, prompt_entities
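For reference, a minimal sketch of the governance request that the removed _check_prompt_validity performed, assuming a Pebblo classifier reachable at the default local CLASSIFIER_URL; the endpoint path and response keys mirror the code above:

import requests

CLASSIFIER_URL = "http://localhost:8000"   # default from the Pebblo utilities module
PROMPT_GOV_URL = "/v1/prompt/governance"

def check_prompt_entities(question: str) -> dict:
    # Send the raw prompt to the local classifier and collect any detected entities.
    resp = requests.post(
        f"{CLASSIFIER_URL}{PROMPT_GOV_URL}",
        headers={"Accept": "application/json", "Content-Type": "application/json"},
        json={"prompt": question},
        timeout=20,
    )
    body = resp.json()
    return {
        "entities": body.get("entities", {}),
        "entityCount": body.get("entityCount", 0),
    }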
@classmethod
def get_chain_details(cls, llm: BaseLanguageModel, **kwargs): # type: ignore
llm_dict = llm.__dict__

View File

@@ -133,10 +133,7 @@ class Context(BaseModel):
class Prompt(BaseModel):
data: Optional[Union[list, str]]
entityCount: Optional[int]
entities: Optional[dict]
prompt_gov_enabled: Optional[bool]
data: str
class Qa(BaseModel):

View File

@@ -15,7 +15,6 @@ CLASSIFIER_URL = os.getenv("PEBBLO_CLASSIFIER_URL", "http://localhost:8000")
PEBBLO_CLOUD_URL = os.getenv("PEBBLO_CLOUD_URL", "https://api.daxa.ai")
PROMPT_URL = "/v1/prompt"
PROMPT_GOV_URL = "/v1/prompt/governance"
APP_DISCOVER_URL = "/v1/app/discover"

View File

@@ -111,7 +111,7 @@ class ChatCohere(BaseChatModel, BaseCohere):
from langchain_community.chat_models import ChatCohere
from langchain_core.messages import HumanMessage
chat = ChatCohere(max_tokens=256, temperature=0.75)
chat = ChatCohere(model="command", max_tokens=256, temperature=0.75)
messages = [HumanMessage(content="knock knock")]
chat.invoke(messages)

View File

@@ -134,9 +134,9 @@ class ChatFriendli(BaseChatModel, BaseFriendli):
for chunk in stream:
delta = chunk.choices[0].delta.content
if delta:
yield ChatGenerationChunk(message=AIMessageChunk(content=delta))
if run_manager:
run_manager.on_llm_new_token(delta)
yield ChatGenerationChunk(message=AIMessageChunk(content=delta))
async def _astream(
self,
@@ -152,9 +152,9 @@ class ChatFriendli(BaseChatModel, BaseFriendli):
async for chunk in stream:
delta = chunk.choices[0].delta.content
if delta:
yield ChatGenerationChunk(message=AIMessageChunk(content=delta))
if run_manager:
await run_manager.on_llm_new_token(delta)
yield ChatGenerationChunk(message=AIMessageChunk(content=delta))
def _generate(
self,

View File

@@ -1,7 +1,13 @@
import base64
import hashlib
import hmac
import json
import logging
import time
from typing import Any, Dict, Iterator, List, Mapping, Optional, Type
from urllib.parse import urlparse
import requests
from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.chat_models import (
BaseChatModel,
@@ -28,15 +34,18 @@ from langchain_core.utils import (
logger = logging.getLogger(__name__)
DEFAULT_API_BASE = "https://hunyuan.cloud.tencent.com"
DEFAULT_PATH = "/hyllm/v1/chat/completions"
def _convert_message_to_dict(message: BaseMessage) -> dict:
message_dict: Dict[str, Any]
if isinstance(message, ChatMessage):
message_dict = {"Role": message.role, "Content": message.content}
message_dict = {"role": message.role, "content": message.content}
elif isinstance(message, HumanMessage):
message_dict = {"Role": "user", "Content": message.content}
message_dict = {"role": "user", "content": message.content}
elif isinstance(message, AIMessage):
message_dict = {"Role": "assistant", "Content": message.content}
message_dict = {"role": "assistant", "content": message.content}
else:
raise TypeError(f"Got unknown type {message}")
@@ -44,20 +53,20 @@ def _convert_message_to_dict(message: BaseMessage) -> dict:
def _convert_dict_to_message(_dict: Mapping[str, Any]) -> BaseMessage:
role = _dict["Role"]
role = _dict["role"]
if role == "user":
return HumanMessage(content=_dict["Content"])
return HumanMessage(content=_dict["content"])
elif role == "assistant":
return AIMessage(content=_dict.get("Content", "") or "")
return AIMessage(content=_dict.get("content", "") or "")
else:
return ChatMessage(content=_dict["Content"], role=role)
return ChatMessage(content=_dict["content"], role=role)
def _convert_delta_to_message_chunk(
_dict: Mapping[str, Any], default_class: Type[BaseMessageChunk]
) -> BaseMessageChunk:
role = _dict.get("Role")
content = _dict.get("Content") or ""
role = _dict.get("role")
content = _dict.get("content") or ""
if role == "user" or default_class == HumanMessageChunk:
return HumanMessageChunk(content=content)
@@ -69,13 +78,43 @@ def _convert_delta_to_message_chunk(
return default_class(content=content) # type: ignore[call-arg]
# signature generation
# https://cloud.tencent.com/document/product/1729/97732#532252ce-e960-48a7-8821-940a9ce2ccf3
def _signature(secret_key: SecretStr, url: str, payload: Dict[str, Any]) -> str:
sorted_keys = sorted(payload.keys())
url_info = urlparse(url)
sign_str = url_info.netloc + url_info.path + "?"
for key in sorted_keys:
value = payload[key]
if isinstance(value, list) or isinstance(value, dict):
value = json.dumps(value, separators=(",", ":"), ensure_ascii=False)
elif isinstance(value, float):
value = "%g" % value
sign_str = sign_str + key + "=" + str(value) + "&"
sign_str = sign_str[:-1]
hmacstr = hmac.new(
key=secret_key.get_secret_value().encode("utf-8"),
msg=sign_str.encode("utf-8"),
digestmod=hashlib.sha1,
).digest()
return base64.b64encode(hmacstr).decode("utf-8")
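A small worked example of the signing scheme above, using made-up credentials and payload values; the sign string is the host, path, and alphabetically sorted parameters, HMAC-SHA1-signed with the secret key and base64-encoded:

from langchain_core.pydantic_v1 import SecretStr

# Hypothetical values for illustration only; _signature, DEFAULT_API_BASE and
# DEFAULT_PATH refer to the definitions shown above.
secret_key = SecretStr("my-secret-key")
url = DEFAULT_API_BASE + DEFAULT_PATH
payload = {
    "app_id": 123,
    "secret_id": "my-secret-id",
    "timestamp": 1700000000,
    "expired": 1700000000 + 24 * 60 * 60,
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 1.0,
    "top_p": 1.0,
}
token = _signature(secret_key=secret_key, url=url, payload=payload)
# token signs "hunyuan.cloud.tencent.com/hyllm/v1/chat/completions?app_id=123&expired=...&..."
# (keys sorted alphabetically) and is sent as the Authorization header in _chat below.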
def _create_chat_result(response: Mapping[str, Any]) -> ChatResult:
generations = []
for choice in response["Choices"]:
message = _convert_dict_to_message(choice["Message"])
for choice in response["choices"]:
message = _convert_dict_to_message(choice["messages"])
generations.append(ChatGeneration(message=message))
token_usage = response["Usage"]
token_usage = response["usage"]
llm_output = {"token_usage": token_usage}
return ChatResult(generations=generations, llm_output=llm_output)
@@ -98,6 +137,8 @@ class ChatHunyuan(BaseChatModel):
def lc_serializable(self) -> bool:
return True
hunyuan_api_base: str = Field(default=DEFAULT_API_BASE)
"""Hunyuan custom endpoints"""
hunyuan_app_id: Optional[int] = None
"""Hunyuan App ID"""
hunyuan_secret_id: Optional[str] = None
@@ -108,26 +149,13 @@ class ChatHunyuan(BaseChatModel):
"""Whether to stream the results or not."""
request_timeout: int = 60
"""Timeout for requests to Hunyuan API. Default is 60 seconds."""
query_id: Optional[str] = None
"""Query id for troubleshooting"""
temperature: float = 1.0
"""What sampling temperature to use."""
top_p: float = 1.0
"""What probability mass to use."""
model: str = "hunyuan-lite"
"""What Model to use.
Optional model:
- hunyuan-lite
- hunyuan-standard
- hunyuan-standard-256K
- hunyuan-pro
- hunyuan-code
- hunyuan-role
- hunyuan-functioncall
- hunyuan-vision
"""
stream_moderation: bool = False
"""Whether to review the results or not when streaming is true."""
enable_enhancement: bool = True
"""Whether to enhancement the results or not."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for API call not explicitly specified."""
@@ -165,6 +193,12 @@ class ChatHunyuan(BaseChatModel):
@pre_init
def validate_environment(cls, values: Dict) -> Dict:
values["hunyuan_api_base"] = get_from_dict_or_env(
values,
"hunyuan_api_base",
"HUNYUAN_API_BASE",
DEFAULT_API_BASE,
)
values["hunyuan_app_id"] = get_from_dict_or_env(
values,
"hunyuan_app_id",
@@ -182,19 +216,22 @@ class ChatHunyuan(BaseChatModel):
"HUNYUAN_SECRET_KEY",
)
)
return values
@property
def _default_params(self) -> Dict[str, Any]:
"""Get the default parameters for calling Hunyuan API."""
normal_params = {
"Temperature": self.temperature,
"TopP": self.top_p,
"Model": self.model,
"Stream": self.streaming,
"StreamModeration": self.stream_moderation,
"EnableEnhancement": self.enable_enhancement,
"app_id": self.hunyuan_app_id,
"secret_id": self.hunyuan_secret_id,
"temperature": self.temperature,
"top_p": self.top_p,
}
if self.query_id is not None:
normal_params["query_id"] = self.query_id
return {**normal_params, **self.model_kwargs}
def _generate(
@@ -211,7 +248,13 @@ class ChatHunyuan(BaseChatModel):
return generate_from_stream(stream_iter)
res = self._chat(messages, **kwargs)
return _create_chat_result(json.loads(res.to_json_string()))
response = res.json()
if "error" in response:
raise ValueError(f"Error from Hunyuan api response: {response}")
return _create_chat_result(response)
def _stream(
self,
@@ -223,17 +266,19 @@ class ChatHunyuan(BaseChatModel):
res = self._chat(messages, **kwargs)
default_chunk_class = AIMessageChunk
for chunk in res:
chunk = chunk.get("data", "")
for chunk in res.iter_lines():
chunk = chunk.decode(encoding="UTF-8", errors="strict").replace(
"data: ", ""
)
if len(chunk) == 0:
continue
response = json.loads(chunk)
if "error" in response:
raise ValueError(f"Error from Hunyuan api response: {response}")
for choice in response["Choices"]:
for choice in response["choices"]:
chunk = _convert_delta_to_message_chunk(
choice["Delta"], default_chunk_class
choice["delta"], default_chunk_class
)
default_chunk_class = chunk.__class__
cg_chunk = ChatGenerationChunk(message=chunk)
@@ -241,32 +286,42 @@ class ChatHunyuan(BaseChatModel):
run_manager.on_llm_new_token(chunk.content, chunk=cg_chunk)
yield cg_chunk
def _chat(self, messages: List[BaseMessage], **kwargs: Any) -> Any:
def _chat(self, messages: List[BaseMessage], **kwargs: Any) -> requests.Response:
if self.hunyuan_secret_key is None:
raise ValueError("Hunyuan secret key is not set.")
try:
from tencentcloud.common import credential
from tencentcloud.hunyuan.v20230901 import hunyuan_client, models
except ImportError:
raise ImportError(
"Could not import tencentcloud python package. "
"Please install it with `pip install tencentcloud-sdk-python`."
)
parameters = {**self._default_params, **kwargs}
cred = credential.Credential(
self.hunyuan_secret_id, str(self.hunyuan_secret_key.get_secret_value())
)
client = hunyuan_client.HunyuanClient(cred, "")
req = models.ChatCompletionsRequest()
params = {
"Messages": [_convert_message_to_dict(m) for m in messages],
headers = parameters.pop("headers", {})
timestamp = parameters.pop("timestamp", int(time.time()))
expired = parameters.pop("expired", timestamp + 24 * 60 * 60)
payload = {
"timestamp": timestamp,
"expired": expired,
"messages": [_convert_message_to_dict(m) for m in messages],
**parameters,
}
req.from_json_string(json.dumps(params))
resp = client.ChatCompletions(req)
return resp
if self.streaming:
payload["stream"] = 1
url = self.hunyuan_api_base + DEFAULT_PATH
res = requests.post(
url=url,
timeout=self.request_timeout,
headers={
"Content-Type": "application/json",
"Authorization": _signature(
secret_key=self.hunyuan_secret_key, url=url, payload=payload
),
**headers,
},
json=payload,
stream=self.streaming,
)
return res
@property
def _llm_type(self) -> str:

View File

@@ -9,7 +9,7 @@ import os
import re
from importlib.metadata import version
from pathlib import Path
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Pattern, cast
from typing import TYPE_CHECKING, Any, Dict, List, Optional, cast
from langchain_core.utils import pre_init
@@ -164,7 +164,7 @@ class _KineticaLlmFileContextParser:
"""Parser for Kinetica LLM context datafiles."""
# parse line into a dict containing role and content
PARSER: Pattern = re.compile(r"^<\|(?P<role>\w+)\|>\W*(?P<content>.*)$", re.DOTALL)
PARSER = re.compile(r"^<\|(?P<role>\w+)\|>\W*(?P<content>.*)$", re.DOTALL)
@classmethod
def _removesuffix(cls, text: str, suffix: str) -> str:

View File

@@ -1,19 +1,5 @@
import json
import logging
from typing import (
Any,
Callable,
Dict,
Iterator,
List,
Literal,
Mapping,
Optional,
Sequence,
Type,
Union,
cast,
)
from typing import Any, Dict, Iterator, List, Mapping, Optional, cast
from urllib.parse import urlparse
from langchain_core.callbacks import CallbackManagerForLLMRun
@@ -29,27 +15,15 @@ from langchain_core.messages import (
FunctionMessage,
HumanMessage,
HumanMessageChunk,
InvalidToolCall,
SystemMessage,
SystemMessageChunk,
ToolCall,
ToolMessage,
ToolMessageChunk,
)
from langchain_core.messages.tool import tool_call_chunk
from langchain_core.output_parsers.openai_tools import (
make_invalid_tool_call,
parse_tool_call,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import (
BaseModel,
Field,
PrivateAttr,
)
from langchain_core.runnables import Runnable, RunnableConfig
from langchain_core.tools import BaseTool
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_core.runnables import RunnableConfig
logger = logging.getLogger(__name__)
@@ -254,32 +228,11 @@ class ChatMlflow(BaseChatModel):
@staticmethod
def _convert_dict_to_message(_dict: Mapping[str, Any]) -> BaseMessage:
role = _dict["role"]
content = cast(str, _dict.get("content"))
content = _dict["content"]
if role == "user":
return HumanMessage(content=content)
elif role == "assistant":
content = content or ""
additional_kwargs: Dict = {}
tool_calls = []
invalid_tool_calls = []
if raw_tool_calls := _dict.get("tool_calls"):
additional_kwargs["tool_calls"] = raw_tool_calls
for raw_tool_call in raw_tool_calls:
try:
tool_calls.append(
parse_tool_call(raw_tool_call, return_id=True)
)
except Exception as e:
invalid_tool_calls.append(
make_invalid_tool_call(raw_tool_call, str(e))
)
return AIMessage(
content=content,
additional_kwargs=additional_kwargs,
id=_dict.get("id"),
tool_calls=tool_calls,
invalid_tool_calls=invalid_tool_calls,
)
return AIMessage(content=content)
elif role == "system":
return SystemMessage(content=content)
else:
@@ -290,38 +243,13 @@ class ChatMlflow(BaseChatModel):
_dict: Mapping[str, Any], default_role: str
) -> BaseMessageChunk:
role = _dict.get("role", default_role)
content = _dict.get("content") or ""
content = _dict["content"]
if role == "user":
return HumanMessageChunk(content=content)
elif role == "assistant":
additional_kwargs: Dict = {}
tool_call_chunks = []
if raw_tool_calls := _dict.get("tool_calls"):
additional_kwargs["tool_calls"] = raw_tool_calls
try:
tool_call_chunks = [
tool_call_chunk(
name=rtc["function"].get("name"),
args=rtc["function"].get("arguments"),
id=rtc.get("id"),
index=rtc["index"],
)
for rtc in raw_tool_calls
]
except KeyError:
pass
return AIMessageChunk(
content=content,
additional_kwargs=additional_kwargs,
id=_dict.get("id"),
tool_call_chunks=tool_call_chunks,
)
return AIMessageChunk(content=content)
elif role == "system":
return SystemMessageChunk(content=content)
elif role == "tool":
return ToolMessageChunk(
content=content, tool_call_id=_dict["tool_call_id"], id=_dict.get("id")
)
else:
return ChatMessageChunk(content=content, role=role)
@@ -334,47 +262,14 @@ class ChatMlflow(BaseChatModel):
@staticmethod
def _convert_message_to_dict(message: BaseMessage) -> dict:
message_dict = {"content": message.content}
if (name := message.name or message.additional_kwargs.get("name")) is not None:
message_dict["name"] = name
if isinstance(message, ChatMessage):
message_dict["role"] = message.role
message_dict = {"role": message.role, "content": message.content}
elif isinstance(message, HumanMessage):
message_dict["role"] = "user"
message_dict = {"role": "user", "content": message.content}
elif isinstance(message, AIMessage):
message_dict["role"] = "assistant"
if message.tool_calls or message.invalid_tool_calls:
message_dict["tool_calls"] = [
_lc_tool_call_to_openai_tool_call(tc) for tc in message.tool_calls
] + [
_lc_invalid_tool_call_to_openai_tool_call(tc)
for tc in message.invalid_tool_calls
] # type: ignore[assignment]
elif "tool_calls" in message.additional_kwargs:
message_dict["tool_calls"] = message.additional_kwargs["tool_calls"]
tool_call_supported_props = {"id", "type", "function"}
message_dict["tool_calls"] = [
{
k: v
for k, v in tool_call.items() # type: ignore[union-attr]
if k in tool_call_supported_props
}
for tool_call in message_dict["tool_calls"]
]
else:
pass
# If tool calls present, content null value should be None not empty string.
if "tool_calls" in message_dict:
message_dict["content"] = message_dict["content"] or None # type: ignore[assignment]
message_dict = {"role": "assistant", "content": message.content}
elif isinstance(message, SystemMessage):
message_dict["role"] = "system"
elif isinstance(message, ToolMessage):
message_dict["role"] = "tool"
message_dict["tool_call_id"] = message.tool_call_id
supported_props = {"content", "role", "tool_call_id"}
message_dict = {
k: v for k, v in message_dict.items() if k in supported_props
}
message_dict = {"role": "system", "content": message.content}
elif isinstance(message, FunctionMessage):
raise ValueError(
"Function messages are not supported by Databricks. Please"
@@ -385,6 +280,12 @@ class ChatMlflow(BaseChatModel):
if "function_call" in message.additional_kwargs:
ChatMlflow._raise_functions_not_supported()
if message.additional_kwargs:
logger.warning(
"Additional message arguments are unsupported by Databricks"
" and will be ignored: %s",
message.additional_kwargs,
)
return message_dict
@staticmethod
@@ -401,89 +302,3 @@ class ChatMlflow(BaseChatModel):
usage = response.get("usage", {})
return ChatResult(generations=generations, llm_output=usage)
def bind_tools(
self,
tools: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]],
*,
tool_choice: Optional[
Union[dict, str, Literal["auto", "none", "required", "any"], bool]
] = None,
**kwargs: Any,
) -> Runnable[LanguageModelInput, BaseMessage]:
"""Bind tool-like objects to this chat model.
Assumes model is compatible with OpenAI tool-calling API.
Args:
tools: A list of tool definitions to bind to this chat model.
Can be a dictionary, pydantic model, callable, or BaseTool. Pydantic
models, callables, and BaseTools will be automatically converted to
their schema dictionary representation.
tool_choice: Which tool to require the model to call.
Options are:
name of the tool (str): calls corresponding tool;
"auto": automatically selects a tool (including no tool);
"none": model does not generate any tool calls and instead must
generate a standard assistant message;
"required": the model picks the most relevant tool in tools and
must generate a tool call;
or a dict of the form:
{"type": "function", "function": {"name": <<tool_name>>}}.
**kwargs: Any additional parameters to pass to the
:class:`~langchain.runnable.Runnable` constructor.
"""
formatted_tools = [convert_to_openai_tool(tool) for tool in tools]
if tool_choice:
if isinstance(tool_choice, str):
# tool_choice is a tool/function name
if tool_choice not in ("auto", "none", "required"):
tool_choice = {
"type": "function",
"function": {"name": tool_choice},
}
elif isinstance(tool_choice, dict):
tool_names = [
formatted_tool["function"]["name"]
for formatted_tool in formatted_tools
]
if not any(
tool_name == tool_choice["function"]["name"]
for tool_name in tool_names
):
raise ValueError(
f"Tool choice {tool_choice} was specified, but the only "
f"provided tools were {tool_names}."
)
else:
raise ValueError(
f"Unrecognized tool_choice type. Expected str, bool or dict. "
f"Received: {tool_choice}"
)
kwargs["tool_choice"] = tool_choice
return super().bind(tools=formatted_tools, **kwargs)
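For reference, a minimal sketch of how the bind_tools API removed here was used; GetWeather and the llm instance are hypothetical:

from langchain_core.pydantic_v1 import BaseModel, Field

class GetWeather(BaseModel):
    """Get the current weather for a given city."""

    city: str = Field(..., description="City to look up")

# llm is assumed to be a ChatMlflow instance; a string tool_choice forces that tool.
llm_with_tools = llm.bind_tools([GetWeather], tool_choice="GetWeather")
ai_message = llm_with_tools.invoke("What is the weather in Paris?")
# ai_message.tool_calls would then contain a GetWeather call with {"city": "Paris"} as args.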
def _lc_tool_call_to_openai_tool_call(tool_call: ToolCall) -> dict:
return {
"type": "function",
"id": tool_call["id"],
"function": {
"name": tool_call["name"],
"arguments": json.dumps(tool_call["args"]),
},
}
def _lc_invalid_tool_call_to_openai_tool_call(
invalid_tool_call: InvalidToolCall,
) -> dict:
return {
"type": "function",
"id": invalid_tool_call["id"],
"function": {
"name": invalid_tool_call["name"],
"arguments": invalid_tool_call["args"],
},
}

View File

@@ -186,9 +186,9 @@ class ChatMLX(BaseChatModel):
# yield text, if any
if text:
chunk = ChatGenerationChunk(message=AIMessageChunk(content=text))
yield chunk
if run_manager:
run_manager.on_llm_new_token(text, chunk=chunk)
yield chunk
# break if stop sequence found
if token == eos_token_id or (stop is not None and text in stop):

View File

@@ -135,7 +135,7 @@ class Provider(ABC):
class CohereProvider(Provider):
stop_sequence_key: str = "stop_sequences"
stop_sequence_key = "stop_sequences"
def __init__(self) -> None:
from oci.generative_ai_inference import models
@@ -364,7 +364,7 @@ class CohereProvider(Provider):
class MetaProvider(Provider):
stop_sequence_key: str = "stop"
stop_sequence_key = "stop"
def __init__(self) -> None:
from oci.generative_ai_inference import models

View File

@@ -1,7 +1,7 @@
# LLM Lingua Document Compressor
import re
from typing import Any, Dict, List, Optional, Pattern, Sequence, Tuple
from typing import Any, Dict, List, Optional, Sequence, Tuple
from langchain_core.callbacks import Callbacks
from langchain_core.documents import Document
@@ -24,8 +24,8 @@ class LLMLinguaCompressor(BaseDocumentCompressor):
# Pattern to match ref tags at the beginning or end of the string,
# allowing for malformed tags
_pattern_beginning: Pattern = re.compile(r"\A(?:<#)?(?:ref)?(\d+)(?:#>?)?")
_pattern_ending: Pattern = re.compile(r"(?:<#)?(?:ref)?(\d+)(?:#>?)?\Z")
_pattern_beginning = re.compile(r"\A(?:<#)?(?:ref)?(\d+)(?:#>?)?")
_pattern_ending = re.compile(r"(?:<#)?(?:ref)?(\d+)(?:#>?)?\Z")
model_name: str = "NousResearch/Llama-2-7b-hf"
"""The hugging face model to use"""

View File

@@ -1,6 +1,6 @@
import re
from pathlib import Path
from typing import Iterator, Pattern, Union
from typing import Iterator, Union
from langchain_core.documents import Document
@@ -10,9 +10,7 @@ from langchain_community.document_loaders.base import BaseLoader
class AcreomLoader(BaseLoader):
"""Load `acreom` vault from a directory."""
FRONT_MATTER_REGEX: Pattern = re.compile(
r"^---\n(.*?)\n---\n", re.MULTILINE | re.DOTALL
)
FRONT_MATTER_REGEX = re.compile(r"^---\n(.*?)\n---\n", re.MULTILINE | re.DOTALL)
"""Regex to match front matter metadata in markdown files."""
def __init__(

View File

@@ -44,13 +44,13 @@ class DocugamiLoader(BaseLoader, BaseModel):
access_token: Optional[str] = os.environ.get("DOCUGAMI_API_KEY")
"""The Docugami API access token to use."""
max_text_length: int = 4096
max_text_length = 4096
"""Max length of chunk text returned."""
min_text_length: int = 32
"""Threshold under which chunks are appended to next to avoid over-chunking."""
max_metadata_length: int = 512
max_metadata_length = 512
"""Max length of metadata text returned."""
include_xml_tags: bool = False

View File

@@ -36,8 +36,8 @@ class HuggingFaceModelLoader(BaseLoader):
print(doc.metadata) # Metadata of the model
"""
BASE_URL: str = "https://huggingface.co/api/models"
README_BASE_URL: str = "https://huggingface.co/{model_id}/raw/main/README.md"
BASE_URL = "https://huggingface.co/api/models"
README_BASE_URL = "https://huggingface.co/{model_id}/raw/main/README.md"
def __init__(
self,

View File

@@ -1,4 +1,3 @@
import logging
from typing import Any, Dict, List, Optional
import requests
@@ -11,10 +10,6 @@ DATABASE_URL = NOTION_BASE_URL + "/databases/{database_id}/query"
PAGE_URL = NOTION_BASE_URL + "/pages/{page_id}"
BLOCK_URL = NOTION_BASE_URL + "/blocks/{block_id}/children"
# Configure logging
logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(__name__)
class NotionDBLoader(BaseLoader):
"""Load from `Notion DB`.
@@ -68,6 +63,7 @@ class NotionDBLoader(BaseLoader):
List[Document]: List of documents.
"""
page_summaries = self._retrieve_page_summaries()
return list(self.load_page(page_summary) for page_summary in page_summaries)
def _retrieve_page_summaries(
@@ -137,16 +133,11 @@ class NotionDBLoader(BaseLoader):
elif prop_type == "status":
value = prop_data["status"]["name"] if prop_data["status"] else None
elif prop_type == "people":
value = []
if prop_data["people"]:
for item in prop_data["people"]:
name = item.get("name")
if not name:
logger.warning(
"Missing 'name' in 'people' property "
f"for page {page_id}"
)
value.append(name)
value = (
[item["name"] for item in prop_data["people"]]
if prop_data["people"]
else []
)
elif prop_type == "date":
value = prop_data["date"] if prop_data["date"] else None
elif prop_type == "last_edited_time":

View File

@@ -2,7 +2,7 @@ import functools
import logging
import re
from pathlib import Path
from typing import Any, Dict, Iterator, Pattern, Union
from typing import Any, Dict, Iterator, Union
import yaml
from langchain_core.documents import Document
@@ -15,16 +15,12 @@ logger = logging.getLogger(__name__)
class ObsidianLoader(BaseLoader):
"""Load `Obsidian` files from directory."""
FRONT_MATTER_REGEX: Pattern = re.compile(r"^---\n(.*?)\n---\n", re.DOTALL)
TEMPLATE_VARIABLE_REGEX: Pattern = re.compile(r"{{(.*?)}}", re.DOTALL)
TAG_REGEX: Pattern = re.compile(r"[^\S\/]#([a-zA-Z_]+[-_/\w]*)")
DATAVIEW_LINE_REGEX: Pattern = re.compile(r"^\s*(\w+)::\s*(.*)$", re.MULTILINE)
DATAVIEW_INLINE_BRACKET_REGEX: Pattern = re.compile(
r"\[(\w+)::\s*(.*)\]", re.MULTILINE
)
DATAVIEW_INLINE_PAREN_REGEX: Pattern = re.compile(
r"\((\w+)::\s*(.*)\)", re.MULTILINE
)
FRONT_MATTER_REGEX = re.compile(r"^---\n(.*?)\n---\n", re.DOTALL)
TEMPLATE_VARIABLE_REGEX = re.compile(r"{{(.*?)}}", re.DOTALL)
TAG_REGEX = re.compile(r"[^\S\/]#([a-zA-Z_]+[-_/\w]*)")
DATAVIEW_LINE_REGEX = re.compile(r"^\s*(\w+)::\s*(.*)$", re.MULTILINE)
DATAVIEW_INLINE_BRACKET_REGEX = re.compile(r"\[(\w+)::\s*(.*)\]", re.MULTILINE)
DATAVIEW_INLINE_PAREN_REGEX = re.compile(r"\((\w+)::\s*(.*)\)", re.MULTILINE)
def __init__(
self,

View File

@@ -39,7 +39,7 @@ class OneNoteLoader(BaseLoader, BaseModel):
"""Personal access token"""
onenote_api_base_url: str = "https://graph.microsoft.com/v1.0/me/onenote"
"""URL of Microsoft Graph API for OneNote"""
authority_url: str = "https://login.microsoftonline.com/consumers/"
authority_url = "https://login.microsoftonline.com/consumers/"
"""A URL that identifies a token authority"""
token_path: FilePath = Path.home() / ".credentials" / "onenote_graph_token.txt"
"""Path to the file where the access token is stored"""

View File

@@ -1,5 +1,5 @@
import re
from typing import Callable, List, Pattern
from typing import Callable, List
from langchain_community.document_loaders.parsers.language.code_segmenter import (
CodeSegmenter,
@@ -9,11 +9,11 @@ from langchain_community.document_loaders.parsers.language.code_segmenter import
class CobolSegmenter(CodeSegmenter):
"""Code segmenter for `COBOL`."""
PARAGRAPH_PATTERN: Pattern = re.compile(r"^[A-Z0-9\-]+(\s+.*)?\.$", re.IGNORECASE)
DIVISION_PATTERN: Pattern = re.compile(
PARAGRAPH_PATTERN = re.compile(r"^[A-Z0-9\-]+(\s+.*)?\.$", re.IGNORECASE)
DIVISION_PATTERN = re.compile(
r"^\s*(IDENTIFICATION|DATA|PROCEDURE|ENVIRONMENT)\s+DIVISION.*$", re.IGNORECASE
)
SECTION_PATTERN: Pattern = re.compile(r"^\s*[A-Z0-9\-]+\s+SECTION.$", re.IGNORECASE)
SECTION_PATTERN = re.compile(r"^\s*[A-Z0-9\-]+\s+SECTION.$", re.IGNORECASE)
def __init__(self, code: str):
super().__init__(code)

View File

@@ -13,7 +13,6 @@ from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.utilities.pebblo import (
APP_DISCOVER_URL,
BATCH_SIZE_BYTES,
CLASSIFIER_URL,
LOADER_DOC_URL,
PEBBLO_CLOUD_URL,
@@ -21,7 +20,6 @@ from langchain_community.utilities.pebblo import (
App,
Doc,
IndexedDocument,
generate_size_based_batches,
get_full_path,
get_loader_full_path,
get_loader_type,
@@ -70,7 +68,6 @@ class PebbloSafeLoader(BaseLoader):
self.source_aggregate_size = 0
self.classifier_url = classifier_url or CLASSIFIER_URL
self.classifier_location = classifier_location
self.batch_size = BATCH_SIZE_BYTES
self.loader_details = {
"loader": loader_name,
"source_path": self.source_path,
@@ -92,37 +89,15 @@ class PebbloSafeLoader(BaseLoader):
list: Documents fetched from load method of the wrapped `loader`.
"""
self.docs = self.loader.load()
# Classify docs in batches
self.classify_in_batches()
self.docs_with_id = self._index_docs()
classified_docs = self._classify_doc(loading_end=True)
self._add_pebblo_specific_metadata(classified_docs)
if self.load_semantic:
self.docs = self._add_semantic_to_docs(classified_docs)
else:
self.docs = self._unindex_docs() # type: ignore
return self.docs
def classify_in_batches(self) -> None:
"""
Classify documents in batches.
This is to avoid API timeouts when sending a large number of documents.
Batches are generated based on the page_content size.
"""
batches: List[List[Document]] = generate_size_based_batches(
self.docs, self.batch_size
)
processed_docs: List[Document] = []
total_batches = len(batches)
for i, batch in enumerate(batches):
is_last_batch: bool = i == total_batches - 1
self.docs = batch
self.docs_with_id = self._index_docs()
classified_docs = self._classify_doc(loading_end=is_last_batch)
self._add_pebblo_specific_metadata(classified_docs)
if self.load_semantic:
batch_processed_docs = self._add_semantic_to_docs(classified_docs)
else:
batch_processed_docs = self._unindex_docs()
processed_docs.extend(batch_processed_docs)
self.docs = processed_docs
def lazy_load(self) -> Iterator[Document]:
"""Load documents in lazy fashion.
@@ -556,6 +531,7 @@ class PebbloSafeLoader(BaseLoader):
"full_path", doc_metadata.get("source", self.source_path)
)
)
doc_metadata["pb_id"] = doc.pb_id
doc_metadata["pb_checksum"] = classified_docs.get(doc.pb_id, {}).get(
"pb_checksum", None
)

View File

@@ -27,7 +27,7 @@ class ErnieEmbeddings(BaseModel, Embeddings):
chunk_size: int = 16
model_name: str = "ErnieBot-Embedding-V1"
model_name = "ErnieBot-Embedding-V1"
_lock = threading.Lock()

View File

@@ -3,7 +3,7 @@ from __future__ import annotations
import json
import re
from hashlib import md5
from typing import TYPE_CHECKING, Any, Dict, List, NamedTuple, Pattern, Tuple, Union
from typing import TYPE_CHECKING, Any, Dict, List, NamedTuple, Tuple, Union
from langchain_community.graphs.graph_document import GraphDocument
from langchain_community.graphs.graph_store import GraphStore
@@ -63,7 +63,7 @@ class AGEGraph(GraphStore):
}
# precompiled regex for checking chars in graph labels
label_regex: Pattern = re.compile("[^0-9a-zA-Z]+")
label_regex = re.compile("[^0-9a-zA-Z]+")
def __init__(
self, graph_name: str, conf: Dict[str, Any], create: bool = True

View File

@@ -75,7 +75,7 @@ class Arcee(LLM):
model_name=self.model,
)
@root_validator(pre=True)
@root_validator(pre=False)
def validate_environments(cls, values: Dict) -> Dict:
"""Validate Arcee environment variables."""

View File

@@ -39,9 +39,9 @@ class MoonshotCommon(BaseModel):
"""Moonshot API key. Get it here: https://platform.moonshot.cn/console/api-keys"""
model_name: str = Field(default="moonshot-v1-8k", alias="model")
"""Model name. Available models listed here: https://platform.moonshot.cn/pricing"""
max_tokens: int = 1024
max_tokens = 1024
"""Maximum number of tokens to generate."""
temperature: float = 0.3
temperature = 0.3
"""Temperature parameter (higher values make the model more creative)."""
class Config:

View File

@@ -244,7 +244,7 @@ class OCIModelDeploymentTGI(OCIModelDeploymentLLM):
"""Watermarking with `A Watermark for Large Language Models <https://arxiv.org/abs/2301.10226>`_.
Defaults to True."""
return_full_text: bool = False
return_full_text = False
"""Whether to prepend the prompt to the generated text. Defaults to False."""
@property

View File

@@ -26,7 +26,7 @@ class Provider(ABC):
class CohereProvider(Provider):
stop_sequence_key: str = "stop_sequences"
stop_sequence_key = "stop_sequences"
def __init__(self) -> None:
from oci.generative_ai_inference import models
@@ -38,7 +38,7 @@ class CohereProvider(Provider):
class MetaProvider(Provider):
stop_sequence_key: str = "stop"
stop_sequence_key = "stop"
def __init__(self) -> None:
from oci.generative_ai_inference import models

View File

@@ -16,7 +16,7 @@ class SVEndpointHandler:
:param str host_url: Base URL of the DaaS API service
"""
API_BASE_PATH: str = "/api/predict"
API_BASE_PATH = "/api/predict"
def __init__(self, host_url: str):
"""

View File

@@ -41,7 +41,7 @@ class SolarCommon(BaseModel):
model_name: str = Field(default="solar-1-mini-chat", alias="model")
"""Model name. Available models listed here: https://console.upstage.ai/services/solar"""
max_tokens: int = Field(default=1024)
temperature: float = 0.3
temperature = 0.3
class Config:
allow_population_by_field_name = True

View File

@@ -57,7 +57,7 @@ class _BaseYandexGPT(Serializable):
disable_request_logging: bool = False
"""YandexGPT API logs all request data by default.
If you provide personal data, confidential information, disable logging."""
_grpc_metadata: Optional[Sequence] = None
_grpc_metadata: Sequence
@property
def _llm_type(self) -> str:

View File

@@ -27,7 +27,7 @@ class SupabaseVectorTranslator(Visitor):
]
"""Subset of allowed logical comparators."""
metadata_column: str = "metadata"
metadata_column = "metadata"
def _map_comparator(self, comparator: Comparator) -> str:
"""

View File

@@ -15,71 +15,7 @@ class SearchDepth(Enum):
class TavilySearchAPIRetriever(BaseRetriever):
"""Tavily Search API retriever.
Setup:
Install ``langchain-community`` and set environment variable ``TAVILY_API_KEY``.
.. code-block:: bash
pip install -U langchain-community
export TAVILY_API_KEY="your-api-key"
Key init args:
k: int
Number of results to include.
include_generated_answer: bool
Include a generated answer with results
include_raw_content: bool
Include raw content with results.
include_images: bool
Return images in addition to text.
Instantiate:
.. code-block:: python
from langchain_community.retrievers import TavilySearchAPIRetriever
retriever = TavilySearchAPIRetriever(k=3)
Usage:
.. code-block:: python
query = "what year was breath of the wild released?"
retriever.invoke(query)
Use within a chain:
.. code-block:: python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
prompt = ChatPromptTemplate.from_template(
\"\"\"Answer the question based only on the context provided.
Context: {context}
Question: {question}\"\"\"
)
llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
def format_docs(docs):
return "\n\n".join(doc.page_content for doc in docs)
chain = (
{"context": retriever | format_docs, "question": RunnablePassthrough()}
| prompt
| llm
| StrOutputParser()
)
chain.invoke("how many units did breath of the wild sell in 2020")
""" # noqa: E501
"""Tavily Search API retriever."""
k: int = 10
include_generated_answer: bool = False

View File

@@ -74,8 +74,8 @@ class BearlyInterpreterTool:
"""Tool for evaluating python code in a sandbox environment."""
api_key: str
endpoint: str = "https://exec.bearly.ai/v1/interpreter"
name: str = "bearly_interpreter"
endpoint = "https://exec.bearly.ai/v1/interpreter"
name = "bearly_interpreter"
args_schema: Type[BaseModel] = BearlyInterpreterToolArguments
files: Dict[str, FileInfo] = {}

View File

@@ -51,12 +51,12 @@ class ZenGuardTool(BaseTool):
"ZenGuard AI integration package. ZenGuard AI - the fastest GenAI guardrails."
)
args_schema = ZenGuardInput
return_direct: bool = True
return_direct = True
zenguard_api_key: Optional[str] = Field(default=None)
_ZENGUARD_API_URL_ROOT: str = "https://api.zenguard.ai/"
_ZENGUARD_API_KEY_ENV_NAME: str = "ZENGUARD_API_KEY"
_ZENGUARD_API_URL_ROOT = "https://api.zenguard.ai/"
_ZENGUARD_API_KEY_ENV_NAME = "ZENGUARD_API_KEY"
@validator("zenguard_api_key", pre=True, always=True, check_fields=False)
def set_api_key(cls, v: str) -> str:

View File

@@ -4,7 +4,7 @@ import logging
import os
import pathlib
import platform
from typing import List, Optional, Tuple
from typing import Optional, Tuple
from langchain_core.documents import Document
from langchain_core.env import get_runtime_environment
@@ -20,7 +20,6 @@ PEBBLO_CLOUD_URL = os.getenv("PEBBLO_CLOUD_URL", "https://api.daxa.ai")
LOADER_DOC_URL = "/v1/loader/doc"
APP_DISCOVER_URL = "/v1/app/discover"
BATCH_SIZE_BYTES = 100 * 1024 # 100 KB
# Supported loaders for Pebblo safe data loading
file_loader = [
@@ -302,43 +301,3 @@ def get_ip() -> str:
except Exception:
public_ip = socket.gethostbyname("localhost")
return public_ip
def generate_size_based_batches(
docs: List[Document], max_batch_size: int = 100 * 1024
) -> List[List[Document]]:
"""
Generate batches of documents based on page_content size.
Args:
docs: List of documents to be batched.
max_batch_size: Maximum size of each batch in bytes. Defaults to 100*1024 (100 KB).
Returns:
List[List[Document]]: List of batches of documents
"""
batches: List[List[Document]] = []
current_batch: List[Document] = []
current_batch_size: int = 0
for doc in docs:
# Calculate the size of the document in bytes
doc_size: int = len(doc.page_content.encode("utf-8"))
if doc_size > max_batch_size:
# If a single document exceeds the max batch size, send it as a single batch
batches.append([doc])
else:
if current_batch_size + doc_size > max_batch_size:
# If adding this document exceeds the max batch size, start a new batch
batches.append(current_batch)
current_batch = []
current_batch_size = 0
# Add document to the current batch
current_batch.append(doc)
current_batch_size += doc_size
# Add the last batch if it has documents
if current_batch:
batches.append(current_batch)
return batches
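A quick illustration of the batching behaviour of the helper removed above, using toy documents and an assumed 50-byte cap for brevity:

from langchain_core.documents import Document

docs = [
    Document(page_content="x" * 80),   # oversized: shipped as its own batch
    Document(page_content="a" * 30),
    Document(page_content="b" * 30),
    Document(page_content="c" * 10),
]
batches = generate_size_based_batches(docs, max_batch_size=50)
# -> [[x-doc], [a-doc], [b-doc, c-doc]]
# The 80-byte document goes alone; the rest are packed greedily until adding
# the next document would push the batch past the 50-byte cap.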

View File

@@ -11,7 +11,7 @@ class Portkey:
Default: "https://api.portkey.ai/v1/proxy"
"""
base: str = "https://api.portkey.ai/v1/proxy"
base = "https://api.portkey.ai/v1/proxy"
@staticmethod
def Config(

View File

@@ -28,7 +28,7 @@ class TokenEscaper:
# Characters that RediSearch requires us to escape during queries.
# Source: https://redis.io/docs/stack/search/reference/escaping/#the-rules-of-text-field-tokenization
DEFAULT_ESCAPED_CHARS: str = r"[,.<>{}\[\]\\\"\':;!@#$%^&*()\-+=~\/ ]"
DEFAULT_ESCAPED_CHARS = r"[,.<>{}\[\]\\\"\':;!@#$%^&*()\-+=~\/ ]"
def __init__(self, escape_chars_re: Optional[Pattern] = None):
if escape_chars_re:

View File

@@ -29,7 +29,7 @@ class AtlasDB(VectorStore):
vectorstore = AtlasDB("my_project", embeddings.embed_query)
"""
_ATLAS_DEFAULT_ID_FIELD: str = "atlas_id"
_ATLAS_DEFAULT_ID_FIELD = "atlas_id"
def __init__(
self,

View File

@@ -21,7 +21,7 @@ DEFAULT_TOPN = 4
class AwaDB(VectorStore):
"""`AwaDB` vector store."""
_DEFAULT_TABLE_NAME: str = "langchain_awadb"
_DEFAULT_TABLE_NAME = "langchain_awadb"
def __init__(
self,

View File

@@ -53,7 +53,7 @@ class Bagel(VectorStore):
vectorstore = Bagel(cluster_name="langchain_store")
"""
_LANGCHAIN_DEFAULT_CLUSTER_NAME: str = "langchain"
_LANGCHAIN_DEFAULT_CLUSTER_NAME = "langchain"
def __init__(
self,

View File

@@ -66,7 +66,7 @@ class Chroma(VectorStore):
vectorstore = Chroma("langchain_store", embeddings)
"""
_LANGCHAIN_DEFAULT_COLLECTION_NAME: str = "langchain"
_LANGCHAIN_DEFAULT_COLLECTION_NAME = "langchain"
def __init__(
self,

View File

@@ -60,10 +60,10 @@ class CouchbaseVectorStore(VectorStore):
"""
# Default batch size
DEFAULT_BATCH_SIZE: int = 100
_metadata_key: str = "metadata"
_default_text_key: str = "text"
_default_embedding_key: str = "embedding"
DEFAULT_BATCH_SIZE = 100
_metadata_key = "metadata"
_default_text_key = "text"
_default_embedding_key = "embedding"
def _check_bucket_exists(self) -> bool:
"""Check if the bucket exists in the linked Couchbase cluster"""

View File

@@ -51,7 +51,7 @@ class DeepLake(VectorStore):
vectorstore = DeepLake("langchain_store", embeddings.embed_query)
"""
_LANGCHAIN_DEFAULT_DEEPLAKE_PATH: str = "./deeplake/"
_LANGCHAIN_DEFAULT_DEEPLAKE_PATH = "./deeplake/"
_valid_search_kwargs = ["lambda_mult"]
def __init__(

View File

@@ -45,9 +45,9 @@ class Epsilla(VectorStore):
epsilla = Epsilla(client, embeddings, db_path, db_name)
"""
_LANGCHAIN_DEFAULT_DB_NAME: str = "langchain_store"
_LANGCHAIN_DEFAULT_DB_PATH: str = "/tmp/langchain-epsilla"
_LANGCHAIN_DEFAULT_TABLE_NAME: str = "langchain_collection"
_LANGCHAIN_DEFAULT_DB_NAME = "langchain_store"
_LANGCHAIN_DEFAULT_DB_PATH = "/tmp/langchain-epsilla"
_LANGCHAIN_DEFAULT_TABLE_NAME = "langchain_collection"
def __init__(
self,

View File

@@ -13,7 +13,6 @@ from typing import (
Iterable,
List,
Optional,
Pattern,
Tuple,
Type,
)
@@ -224,7 +223,7 @@ class HanaDB(VectorStore):
return embedding
# Compile pattern only once, for better performance
_compiled_pattern: Pattern = re.compile("^[_a-zA-Z][_a-zA-Z0-9]*$")
_compiled_pattern = re.compile("^[_a-zA-Z][_a-zA-Z0-9]*$")
@staticmethod
def _sanitize_metadata_keys(metadata: dict) -> dict:

View File

@@ -48,7 +48,7 @@ class ManticoreSearchSettings(BaseSettings):
hnsw_m: int = 16 # The default is 16.
# An optional setting that defines a construction time/accuracy trade-off.
hnsw_ef_construction: int = 100
hnsw_ef_construction = 100
def get_connection_string(self) -> str:
return self.proto + "://" + self.host + ":" + str(self.port)

View File

@@ -85,8 +85,8 @@ class Qdrant(VectorStore):
qdrant = Qdrant(client, collection_name, embedding_function)
"""
CONTENT_KEY: str = "page_content"
METADATA_KEY: str = "metadata"
CONTENT_KEY = "page_content"
METADATA_KEY = "metadata"
VECTOR_NAME = None
def __init__(

View File

@@ -25,7 +25,7 @@ class SemaDB(VectorStore):
"""
HOST: str = "semadb.p.rapidapi.com"
HOST = "semadb.p.rapidapi.com"
BASE_URL = "https://" + HOST
def __init__(

Some files were not shown because too many files have changed in this diff Show More