Compare commits

..

45 Commits

Author SHA1 Message Date
Eugene Yurtsev
0f3b569583 foo 2024-08-02 21:31:11 -04:00
gbaian10
54e9ea433a fix: Modify the import order of the ollama package in init_chat_model. (#24977) 2024-08-02 08:32:56 -07:00
David Gao
fe1820cdaf docs: add wikipedia integration docs (#24932)
Dear langchain maintainers,

I added the Wikipedia integration docs according to the [web
docs](https://python.langchain.com/v0.2/docs/integrations/retrievers/wikipedia/),
following the format of the [tavily
example](https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/retrievers/tavily.ipynb)
and the [retriever
template](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/retrievers.ipynb).
This is my first time contributing to a large repo, so please let me
know if I'm doing anything wrong. Thank you!

Related issue: #24908

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-02 10:12:04 -04:00
ZhangShenao
71c0564c9f community[patch]: Add test case for MoonshotChat (#24960)
Add test case for `MoonshotChat`.
2024-08-02 09:37:31 -04:00
ZhangShenao
c65e48996c patch[partners] Fix check_imports bugs in pinecone and milvus (#24971)
Fix wrongly declared variables in `check_imports` for pinecone and milvus.
2024-08-02 09:27:11 -04:00
Isaac Francisco
d7688a4328 community[patch]: adding artifact to Tavily search (#24376)
This allows you to get raw content as well as the answer, instead of
just getting the results.
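
A hedged usage sketch (parameter names assumed from this PR, not verified against the current API; requires a `TAVILY_API_KEY` in the environment):

```python
from langchain_community.tools.tavily_search import TavilySearchResults

tool = TavilySearchResults(
    max_results=2,
    include_answer=True,       # Tavily's synthesized answer
    include_raw_content=True,  # raw page content, not just result snippets
    response_format="content_and_artifact",
)

# Invoking with a tool call returns a ToolMessage whose .artifact carries
# the full raw response alongside the summarized content.
msg = tool.invoke(
    {
        "name": "tavily_search_results_json",
        "args": {"query": "What is LangChain?"},
        "id": "call_1",
        "type": "tool_call",
    }
)
print(msg.artifact)
```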

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-01 21:12:11 -07:00
Bagatur
7b08de8909 langchain[patch]: Release 0.2.12 (#24954) 2024-08-02 04:04:49 +00:00
Bagatur
245cb5a252 core[patch]: Release 0.2.27 (#24952) 2024-08-02 01:43:24 +00:00
Bagatur
199e9c5ae0 core[patch]: Fix tool args schema inherited field parsing (#24936)
Fix #24925
2024-08-01 18:36:33 -07:00
Bagatur
fba65ba04f infra: test core on py 3.9, 10, 11 (#24951) 2024-08-01 18:23:37 -07:00
Leonid Ganeline
4092876863 core: docstrings `BaseCallbackHandler` update (#24948)
Added missing docstrings
2024-08-01 20:46:53 -04:00
ccurme
6e45dba471 docs: fix redirect (#24950) 2024-08-01 20:45:54 -04:00
WU LIFU
ad16eed119 core[patch]: runnable config ensure_config deep copy from var_child_runnable… (#24862)
**Issue:** #24660
`RunnableWithMessageHistory.stream` results in an error because the
[evaluation](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/runnables/branch.py#L220)
of the branch
[condition](99eb31ec41/libs/core/langchain_core/runnables/history.py (L328C1-L329C1))
unexpectedly triggers the
"[on_end](99eb31ec41/libs/core/langchain_core/runnables/history.py (L332))"
(`exit_history`) callback of the default branch.


**Description:**
After a lot of investigation, I'm convinced that the root cause is the following:
1. During execution of the runnable, the
[var_child_runnable_config](99eb31ec41/libs/core/langchain_core/runnables/config.py (L122))
is shared between the branch
[condition](99eb31ec41/libs/core/langchain_core/runnables/history.py (L328C1-L329C1))
runnable and the [default branch
runnable](99eb31ec41/libs/core/langchain_core/runnables/history.py (L332))
within the same context.
2. When the default branch runnable runs, it gets the
[var_child_runnable_config](99eb31ec41/libs/core/langchain_core/runnables/config.py (L163))
and may unintentionally [add more
handlers](99eb31ec41/libs/core/langchain_core/runnables/config.py (L325))
to the callback manager of this config.
3. When it is again the condition's turn to run, it gets the
`var_child_runnable_config` whose callback manager now has the handlers
added by the default branch. Running that handler (`exit_history`) leads
to the error.
   
On the assumption that the `ensure_config` function actually does want
to create an immutable copy of `var_child_runnable_config`, because it
starts with an [`empty` variable
](99eb31ec41/libs/core/langchain_core/runnables/config.py (L156)),
I went ahead and did a deep copy to ensure that future modifications to
the returned value won't affect the `var_child_runnable_config` variable.

Having said that, I
1. don't know if this is a proper fix,
2. don't know whether it will lead to other unintended consequences, and
3. don't know why only `stream` runs into this issue while `invoke` runs
without problems.

So @nfcampos @hwchase17, please help review. Thanks!
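
A minimal sketch of the change (simplified and hypothetical; the real `ensure_config` lives in `langchain_core.runnables.config`):

```python
import copy
from contextvars import ContextVar
from typing import Any, Dict, Optional

var_child_runnable_config: ContextVar[Optional[Dict[str, Any]]] = ContextVar(
    "child_runnable_config", default=None
)


def ensure_config(config: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    # Start from a fresh "empty" config on every call.
    empty: Dict[str, Any] = {"callbacks": [], "tags": [], "metadata": {}}
    var_config = var_child_runnable_config.get()
    if var_config is not None:
        # Deep copy so that handlers added to the returned config (e.g. by
        # the default branch) never leak back into the shared context variable.
        empty.update(copy.deepcopy(var_config))
    if config is not None:
        empty.update(config)
    return empty
```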

---------

Co-authored-by: Lifu Wu <lifu@nextbillion.ai>
Co-authored-by: Nuno Campos <nuno@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-01 17:30:32 -07:00
Jacob Lee
3ab09d87d6 docs[patch]: Adds components for prereqs, compatibility, fix chat model tab issue (#24585)
Added to `docs/how_to/tools_runtime` as a proof of concept; will apply
everywhere if we like it.

A bit more compact than the default callouts. This will help standardize
the layout of our pages, since we frequently use these boxes.

<img width="1088" alt="Screenshot 2024-07-23 at 4 49 02 PM"
src="https://github.com/user-attachments/assets/7380801c-e092-4d31-bcd8-3652ee05f29e">
2024-08-01 15:04:13 -07:00
ccurme
9cb69a8746 docs: update retriever template, add arxiv retriever (#24947) 2024-08-01 16:53:18 -04:00
Casey Clements
db3ceb4d0a partners/mongodb: Improved search index commands (#24745)
Hardens index commands with try/except for free clusters, and adds
optional waits for syncing, plus tests.

[efriis](https://github.com/efriis): these are the upgrades to the
search index commands (CRUD) that I mentioned.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-01 20:16:32 +00:00
ccurme
db42576b09 docs: delete old migration guide (#24881)
Redirects to
https://python.langchain.com/v0.2/docs/versions/migrating_chains/
2024-08-01 16:11:47 -04:00
Ikko Eltociear Ashimine
be5294e35d docs: update agents.ipynb (#24945)
initalize -> initialize
2024-08-01 14:37:37 -04:00
ccurme
41ed23a050 docs: update retriever integration pages (#24931) 2024-08-01 14:37:07 -04:00
maang-h
ea505985c4 docs: Standardize ZhipuAIEmbeddings docstrings (#24933)
- **Description:** Standardize `ZhipuAIEmbeddings` rich docstrings.
- **Issue:** #24856
2024-08-01 14:06:53 -04:00
ccurme
02db66d764 docs: fix kv store column headers (#24941)
![Screenshot 2024-08-01 at 12 32 19
PM](https://github.com/user-attachments/assets/888056b7-3065-4be0-a6b8-bcab5b729c2c)
2024-08-01 09:49:36 -07:00
Anneli Samuel
2204d8cb7d community[patch]: Invoke on_llm_new_token callback before yielding chunk (#24938)
**Description:** Invoke the `on_llm_new_token` callback before yielding
the chunk in streaming mode.
**Issue:**
[#16913](https://github.com/langchain-ai/langchain/issues/16913)
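
The pattern, as a minimal self-contained sketch (names illustrative, not the community package's actual signatures):

```python
from typing import Callable, Iterator


def stream_with_callback(
    tokens: Iterator[str],
    on_llm_new_token: Callable[[str], None],
) -> Iterator[str]:
    for token in tokens:
        # Fire the callback before yielding, so handlers observe the token
        # even if the consumer stops iterating after this chunk.
        on_llm_new_token(token)
        yield token


# Usage: the callback now sees every token that is yielded, in order.
for tok in stream_with_callback(iter(["Hello", " world"]), print):
    pass
```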
2024-08-01 16:39:04 +00:00
John
ff6274d32d docs: update langchain-unstructured docs (#24935)
- **Description:** The UnstructuredClient will have a breaking change in
the near future. Add a note in the docs that the examples here may not
use the latest version and users should refer to the SDK docs for the
latest info.
2024-08-01 16:27:40 +00:00
ccurme
c72f0d2f20 docs: update toolkit integration pages (#24887)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-01 12:13:08 -04:00
Eugene Yurtsev
75776e4a54 core[patch]: In unit tests, use _schema() instead of BaseModel.schema() (#24930)
This PR introduces a module with some helper utilities for the pydantic
1 -> 2 migration.

They're meant to be used in the following way:

1) Use the utility code to get unit tests to pass without requiring
modification to the unit tests
2) (If desired) upgrade the unit tests to match pydantic 2 output
3) (If desired) stop using the utility code

Currently, this module contains a way to map `schema()` generated by
pydantic 2 to (mostly) match the output from pydantic v1.
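
A hypothetical sketch of what such a helper can look like (the `$defs` renaming is one example normalization, not the PR's exact mapping):

```python
from pydantic import BaseModel


def _schema(model: type) -> dict:
    """Return a JSON schema shaped (mostly) like pydantic v1 output."""
    if not issubclass(model, BaseModel):
        raise TypeError(f"Expected a pydantic model, got {model}")
    schema = model.model_json_schema()  # pydantic 2 entry point
    # pydantic 1 emitted nested models under "definitions"; pydantic 2
    # uses "$defs", so map it back for comparison against old fixtures.
    if "$defs" in schema:
        schema["definitions"] = schema.pop("$defs")
    return schema
```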
2024-08-01 11:59:04 -04:00
Serena Ruan
1827bb4042 community[patch]: support bind_tools for ChatMlflow (#24547)
- **Description:** Support the `ChatMlflow.bind_tools` method. Tested in Databricks:
<img width="836" alt="image"
src="https://github.com/user-attachments/assets/fa28ef50-0110-4698-8eda-4faf6f0b9ef8">



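For reference, an illustrative call pattern for the new method (the endpoint name is hypothetical):

```python
from langchain_community.chat_models import ChatMlflow
from langchain_core.tools import tool


@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    return f"Sunny in {city}"


llm = ChatMlflow(target_uri="databricks", endpoint="databricks-llm")  # hypothetical endpoint
llm_with_tools = llm.bind_tools([get_weather])
response = llm_with_tools.invoke("What's the weather in Paris?")
```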

---------

Signed-off-by: Serena Ruan <serena.rxy@gmail.com>
2024-08-01 08:43:07 -07:00
Michal Gregor
769c3bb838 huggingface: Added a missing argument to a ChatHuggingFace doc notebook. (#24929)
- **Description:** When adding docs for constructing ChatHuggingFace
using a HuggingFacePipeline, I forgot to add `return_full_text=False` as
an argument. In this setup, the chat response would incorrectly contain
all the input text. I am fixing that here by adding that line to the
offending notebook.
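
The documented setup, sketched with the fix in place (model id illustrative):

```python
from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="HuggingFaceH4/zephyr-7b-beta",  # illustrative model
    task="text-generation",
    pipeline_kwargs={
        "max_new_tokens": 512,
        "return_full_text": False,  # omit the input prompt from the output
    },
)
chat = ChatHuggingFace(llm=llm)
```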

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-01 15:42:35 +00:00
BottlePumpkin
bfc59c1d26 community: Fix KeyError in NotionDB loader when 'name' is missing (#24224)

**Description:** This PR fixes a KeyError in NotionDBLoader when the
"name" key is missing in the "people" property.

**Issue:** Fixes #24223 

**Dependencies:** None
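
The fix pattern, sketched on simplified data (not the loader's actual code):

```python
# A Notion "people" property where a user object lacks the "name" key.
prop = {"people": [{"id": "user-123"}]}

# Before: person["name"] raised KeyError. After: .get() returns None.
names = [person.get("name") for person in prop["people"]]
assert names == [None]
```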

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-01 13:55:40 +00:00
alexqiao
8eb0bdead3 community[patch]: Invoke callback prior to yielding token (#24917)
**Description:** Invoke the callback prior to yielding the token in the
`stream` method for chat models.
**Issue:**
[#16913](https://github.com/langchain-ai/langchain/issues/16913)
2024-08-01 13:19:55 +00:00
ZhangShenao
b2dd9ffaaf patch[cli] Fix bug in check_imports.py (#24918)
The variable `has_failure` in check_imports.py is wrongly declared; the
code actually sets a different variable.
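
The class of bug, sketched as a simplified check_imports loop (not the CLI's exact code):

```python
import sys
import traceback

if __name__ == "__main__":
    has_failure = False
    for file in sys.argv[1:]:
        try:
            with open(file, "r") as f:
                # Compile to catch syntax errors without executing the module.
                compile(f.read(), file, "exec")
        except Exception:
            # The bug: setting a lookalike variable here leaves has_failure
            # False, so failing files never fail the build.
            has_failure = True
            traceback.print_exc()
    sys.exit(1 if has_failure else 0)
```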
2024-08-01 09:08:12 -04:00
Jacob Lee
f14121faaf docs[patch]: Update local RAG tutorial (#24909) 2024-07-31 19:19:23 -07:00
Bagatur
b7abac9f92 infra: poetry lock root (#24913) 2024-08-01 01:19:34 +00:00
Jacob Lee
42c686bc28 docs[patch]: Update local model how-to guide (#24911)
Updates to use `langchain_ollama`, new models, chat model example
2024-07-31 18:01:55 -07:00
Erick Friis
600fc233ef partners/ollama: release 0.1.1 (#24910) 2024-07-31 17:31:29 -07:00
Bagatur
25b93cc4c0 core[patch]: stringify tool non-content blocks (#24626)
Slightly breaking bugfix. Shouldn't cause too many issues, since no
models would be able to handle non-content-block `ToolMessage.content`
anyway.
2024-07-31 16:42:38 -07:00
Bagatur
492df75937 docs: chat model table nit (#24907) 2024-07-31 15:14:27 -07:00
Bagatur
a24c445e02 docs: cleanup readme (#24905) 2024-07-31 15:03:28 -07:00
Jacob Lee
5098f9dc79 infra: related section in docs (#24829)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-31 14:25:58 -07:00
Nikita Pakunov
c776471ac6 community: fix AttributeError: 'YandexGPT' object has no attribute '_grpc_metadata' (#24432)
Fixes #24049

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-31 21:18:33 +00:00
Bagatur
752a71b688 integrations[patch]: release model packages (#24900) 2024-07-31 20:48:20 +00:00
Jacob Lee
1213a59f87 docs[patch]: Update kv store docs pages (#24848) 2024-07-31 13:23:24 -07:00
Erick Friis
17a06cb7a6 infra: check templates based on integration (#24857)
Instead of hardcoding a linter for each template, iterate through the
lines of the template notebook, find lines that start with `##`
(including lower-level headings), and enforce that those headings appear
in newly contributed docs.
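
A sketch of that check under assumed notebook structure (the actual script lives in the docs infra):

```python
import json


def template_headings(template_path: str) -> list:
    """Collect heading lines ('##' and deeper) from a template notebook."""
    with open(template_path) as f:
        nb = json.load(f)
    return [
        line.strip()
        for cell in nb["cells"]
        if cell["cell_type"] == "markdown"
        for line in cell["source"]
        if line.lstrip().startswith("##")
    ]


def missing_headings(doc_path: str, template_path: str) -> list:
    """Return template headings that are absent from a contributed doc."""
    with open(doc_path) as f:
        doc_text = f.read()
    return [h for h in template_headings(template_path) if h not in doc_text]
```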
2024-07-31 13:19:50 -07:00
Erick Friis
a7380dd531 cli: release 0.0.28 (#24852) 2024-07-31 13:03:24 -07:00
Erick Friis
e98e4be0f7 cli: register new integration doc templates (#24854)
- Waiting to merge until the retriever.ipynb change lands in #24836.
2024-07-31 13:03:05 -07:00
Eugene Yurtsev
210623b409 core[minor]: Add support for pydantic 2 to utility to get fields (#24899)
Add compatibility for pydantic 2 for a utility function.

This will help push some small changes to master, so they don't have to
be tracked on a separate branch.
2024-07-31 19:11:07 +00:00
150 changed files with 7110 additions and 4861 deletions

View File

@@ -1,7 +1,6 @@
import glob
import json
import os
import re
import sys
import tomllib
from collections import defaultdict
@@ -86,6 +85,11 @@ def add_dependents(dirs_to_eval: Set[str], dependents: dict) -> List[str]:
def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
if dir_ == "libs/core":
return [
{"working-directory": dir_, "python-version": f"3.{v}"}
for v in range(8, 13)
]
min_python = "3.8"
max_python = "3.12"

View File

@@ -7,7 +7,6 @@
[![PyPI - License](https://img.shields.io/pypi/l/langchain-core?style=flat-square)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-core?style=flat-square)](https://pypistats.org/packages/langchain-core)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=flat-square)](https://star-history.com/#langchain-ai/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain?style=flat-square)](https://libraries.io/github/langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/issues)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)

View File

@@ -42,6 +42,8 @@ generate-files:
$(PYTHON) scripts/document_loader_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/kv_store_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/partner_pkg_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/copy_templates.py $(INTERMEDIATE_DIR)
@@ -67,10 +69,13 @@ render:
md-sync:
rsync -avm --include="*/" --include="*.mdx" --include="*.md" --include="*.png" --include="*/_category_.yml" --exclude="*" $(INTERMEDIATE_DIR)/ $(OUTPUT_NEW_DOCS_DIR)
append-related:
$(PYTHON) scripts/append_related_links.py $(OUTPUT_NEW_DOCS_DIR)
generate-references:
$(PYTHON) scripts/generate_api_reference_links.py --docs_dir $(OUTPUT_NEW_DOCS_DIR)
build: install-py-deps generate-files copy-infra render md-sync
build: install-py-deps generate-files copy-infra render md-sync append-related
vercel-build: install-vercel-deps build generate-references
rm -rf docs

View File

@@ -500,7 +500,8 @@ For specifics on how to use retrievers, see the [relevant how-to guides here](/d
### Key-value stores
For some techniques, such as [indexing and retrieval with multiple vectors per document](/docs/how_to/multi_vector/), having some sort of key-value (KV) storage is helpful.
For some techniques, such as [indexing and retrieval with multiple vectors per document](/docs/how_to/multi_vector/) or
[caching embeddings](/docs/how_to/caching_embeddings/), having a form of key-value (KV) storage is helpful.
LangChain includes a [`BaseStore`](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.BaseStore.html) interface,
which allows for storage of arbitrary data. However, LangChain components that require KV-storage accept a

View File

@@ -88,6 +88,7 @@ These are the core building blocks you can use when building applications.
- [How to: few shot prompt tool behavior](/docs/how_to/tools_few_shot)
- [How to: bind model-specific formatted tools](/docs/how_to/tools_model_specific)
- [How to: force a specific tool call](/docs/how_to/tool_choice)
- [How to: work with local models](/docs/how_to/local_llms)
- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
### Messages
@@ -106,7 +107,7 @@ What LangChain calls [LLMs](/docs/concepts/#llms) are older forms of language mo
- [How to: create a custom LLM class](/docs/how_to/custom_llm)
- [How to: stream a response back](/docs/how_to/streaming_llm)
- [How to: track token usage](/docs/how_to/llm_token_usage_tracking)
- [How to: work with local LLMs](/docs/how_to/local_llms)
- [How to: work with local models](/docs/how_to/local_llms)
### Output parsers

View File

@@ -5,11 +5,11 @@
"id": "b8982428",
"metadata": {},
"source": [
"# Run LLMs locally\n",
"# Run models locally\n",
"\n",
"## Use case\n",
"\n",
"The popularity of projects like [PrivateGPT](https://github.com/imartinez/privateGPT), [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://github.com/ollama/ollama), [GPT4All](https://github.com/nomic-ai/gpt4all), [llamafile](https://github.com/Mozilla-Ocho/llamafile), and others underscore the demand to run LLMs locally (on your own device).\n",
"The popularity of projects like [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://github.com/ollama/ollama), [GPT4All](https://github.com/nomic-ai/gpt4all), [llamafile](https://github.com/Mozilla-Ocho/llamafile), and others underscore the demand to run LLMs locally (on your own device).\n",
"\n",
"This has at least two important benefits:\n",
"\n",
@@ -66,6 +66,12 @@
"\n",
"![Image description](../../static/img/llama_t_put.png)\n",
"\n",
"### Formatting prompts\n",
"\n",
"Some providers have [chat model](/docs/concepts/#chat-models) wrappers that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models with a [text-in/text-out LLM](/docs/concepts/#llms) wrapper, you may need to use a prompt tailed for your specific model.\n",
"\n",
"This can [require the inclusion of special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). [Here's an example for LLaMA 2](https://smith.langchain.com/hub/rlm/rag-prompt-llama).\n",
"\n",
"## Quickstart\n",
"\n",
"[`Ollama`](https://ollama.ai/) is one way to easily run inference on macOS.\n",
@@ -73,10 +79,20 @@
"The instructions [here](https://github.com/jmorganca/ollama?tab=readme-ov-file#ollama) provide details, which we summarize:\n",
" \n",
"* [Download and run](https://ollama.ai/download) the app\n",
"* From command line, fetch a model from this [list of options](https://github.com/jmorganca/ollama): e.g., `ollama pull llama2`\n",
"* From command line, fetch a model from this [list of options](https://github.com/jmorganca/ollama): e.g., `ollama pull llama3.1:8b`\n",
"* When the app is running, all models are automatically served on `localhost:11434`\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "29450fc9",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_ollama"
]
},
{
"cell_type": "code",
"execution_count": 2,
@@ -86,7 +102,7 @@
{
"data": {
"text/plain": [
"' The first man on the moon was Neil Armstrong, who landed on the moon on July 20, 1969 as part of the Apollo 11 mission. obviously.'"
"'...Neil Armstrong!\\n\\nOn July 20, 1969, Neil Armstrong became the first person to set foot on the lunar surface, famously declaring \"That\\'s one small step for man, one giant leap for mankind\" as he stepped off the lunar module Eagle onto the Moon\\'s surface.\\n\\nWould you like to know more about the Apollo 11 mission or Neil Armstrong\\'s achievements?'"
]
},
"execution_count": 2,
@@ -95,51 +111,78 @@
}
],
"source": [
"from langchain_community.llms import Ollama\n",
"from langchain_ollama import OllamaLLM\n",
"\n",
"llm = OllamaLLM(model=\"llama3.1:8b\")\n",
"\n",
"llm = Ollama(model=\"llama2\")\n",
"llm.invoke(\"The first man on the moon was ...\")"
]
},
{
"cell_type": "markdown",
"id": "343ab645",
"id": "674cc672",
"metadata": {},
"source": [
"Stream tokens as they are being generated."
"Stream tokens as they are being generated:"
]
},
{
"cell_type": "code",
"execution_count": 40,
"id": "9cd83603",
"execution_count": 3,
"id": "1386a852",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" The first man to walk on the moon was Neil Armstrong, an American astronaut who was part of the Apollo 11 mission in 1969. февруари 20, 1969, Armstrong stepped out of the lunar module Eagle and onto the moon's surface, famously declaring \"That's one small step for man, one giant leap for mankind\" as he took his first steps. He was followed by fellow astronaut Edwin \"Buzz\" Aldrin, who also walked on the moon during the mission."
"...|"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Neil| Armstrong|,| an| American| astronaut|.| He| stepped| out| of| the| lunar| module| Eagle| and| onto| the| surface| of| the| Moon| on| July| |20|,| |196|9|,| famously| declaring|:| \"|That|'s| one| small| step| for| man|,| one| giant| leap| for| mankind|.\"||"
]
}
],
"source": [
"for chunk in llm.stream(\"The first man on the moon was ...\"):\n",
" print(chunk, end=\"|\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "e5731060",
"metadata": {},
"source": [
"Ollama also includes a chat model wrapper that handles formatting conversation turns:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f14a778a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' The first man to walk on the moon was Neil Armstrong, an American astronaut who was part of the Apollo 11 mission in 1969. февруари 20, 1969, Armstrong stepped out of the lunar module Eagle and onto the moon\\'s surface, famously declaring \"That\\'s one small step for man, one giant leap for mankind\" as he took his first steps. He was followed by fellow astronaut Edwin \"Buzz\" Aldrin, who also walked on the moon during the mission.'"
"AIMessage(content='The answer is a historic one!\\n\\nThe first man to walk on the Moon was Neil Armstrong, an American astronaut and commander of the Apollo 11 mission. On July 20, 1969, Armstrong stepped out of the lunar module Eagle onto the surface of the Moon, famously declaring:\\n\\n\"That\\'s one small step for man, one giant leap for mankind.\"\\n\\nArmstrong was followed by fellow astronaut Edwin \"Buzz\" Aldrin, who also walked on the Moon during the mission. Michael Collins remained in orbit around the Moon in the command module Columbia.\\n\\nNeil Armstrong passed away on August 25, 2012, but his legacy as a pioneering astronaut and engineer continues to inspire people around the world!', response_metadata={'model': 'llama3.1:8b', 'created_at': '2024-08-01T00:38:29.176717Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 10681861417, 'load_duration': 34270292, 'prompt_eval_count': 19, 'prompt_eval_duration': 6209448000, 'eval_count': 141, 'eval_duration': 4432022000}, id='run-7bed57c5-7f54-4092-912c-ae49073dcd48-0', usage_metadata={'input_tokens': 19, 'output_tokens': 141, 'total_tokens': 160})"
]
},
"execution_count": 40,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler\n",
"from langchain_ollama import ChatOllama\n",
"\n",
"llm = Ollama(\n",
" model=\"llama2\", callback_manager=CallbackManager([StreamingStdOutCallbackHandler()])\n",
")\n",
"llm.invoke(\"The first man on the moon was ...\")"
"chat_model = ChatOllama(model=\"llama3.1:8b\")\n",
"\n",
"chat_model.invoke(\"Who was the first man on the moon?\")"
]
},
{
@@ -199,7 +242,7 @@
"\n",
"With [Ollama](https://github.com/jmorganca/ollama), fetch a model via `ollama pull <model family>:<tag>`:\n",
"\n",
"* E.g., for Llama-7b: `ollama pull llama2` will download the most basic version of the model (e.g., smallest # parameters and 4 bit quantization)\n",
"* E.g., for Llama 2 7b: `ollama pull llama2` will download the most basic version of the model (e.g., smallest # parameters and 4 bit quantization)\n",
"* We can also specify a particular version from the [model list](https://github.com/jmorganca/ollama?tab=readme-ov-file#model-library), e.g., `ollama pull llama2:13b`\n",
"* See the full set of parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.ollama.Ollama.html)"
]
@@ -222,9 +265,7 @@
}
],
"source": [
"from langchain_community.llms import Ollama\n",
"\n",
"llm = Ollama(model=\"llama2:13b\")\n",
"llm = OllamaLLM(model=\"llama2:13b\")\n",
"llm.invoke(\"The first man on the moon was ... think step by step\")"
]
},
@@ -268,11 +309,7 @@
"cell_type": "code",
"execution_count": null,
"id": "5eba38dc",
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"metadata": {},
"outputs": [],
"source": [
"%env CMAKE_ARGS=\"-DLLAMA_METAL=on\"\n",
@@ -542,7 +579,6 @@
}
],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.chains.prompt_selector import ConditionalPromptSelector\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
@@ -613,9 +649,9 @@
],
"source": [
"# Chain\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"chain = prompt | llm\n",
"question = \"What NFL team won the Super Bowl in the year that Justin Bieber was born?\"\n",
"llm_chain.run({\"question\": question})"
"chain.invoke({\"question\": question})"
]
},
{
@@ -666,7 +702,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -1,811 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f331037f-be3f-4782-856f-d55dab952488",
"metadata": {},
"source": [
"# How to migrate chains to LCEL\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [LangChain Expression Language](/docs/concepts#langchain-expression-language-lcel)\n",
"\n",
":::\n",
"\n",
"LCEL is designed to streamline the process of building useful apps with LLMs and combining related components. It does this by providing:\n",
"\n",
"1. **A unified interface**: Every LCEL object implements the `Runnable` interface, which defines a common set of invocation methods (`invoke`, `batch`, `stream`, `ainvoke`, ...). This makes it possible to also automatically and consistently support useful operations like streaming of intermediate steps and batching, since every chain composed of LCEL objects is itself an LCEL object.\n",
"2. **Composition primitives**: LCEL provides a number of primitives that make it easy to compose chains, parallelize components, add fallbacks, dynamically configure chain internals, and more.\n",
"\n",
"LangChain maintains a number of legacy abstractions. Many of these can be reimplemented via short combinations of LCEL primitives. Doing so confers some general advantages:\n",
"\n",
"- The resulting chains typically implement the full `Runnable` interface, including streaming and asynchronous support where appropriate;\n",
"- The chains may be more easily extended or modified;\n",
"- The parameters of the chain are typically surfaced for easier customization (e.g., prompts) over previous versions, which tended to be subclasses and had opaque parameters and internals.\n",
"\n",
"The LCEL implementations can be slightly more verbose, but there are significant benefits in transparency and customizability.\n",
"\n",
"In this guide we review LCEL implementations of common legacy abstractions. Where appropriate, we link out to separate guides with more detail."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b99b47ec",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community langchain langchain-openai faiss-cpu"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "717c8673",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
"cell_type": "markdown",
"id": "e3621b62-a037-42b8-8faa-59575608bb8b",
"metadata": {},
"source": [
"## `LLMChain`\n",
"<span data-heading-keywords=\"llmchain\"></span>\n",
"\n",
"[`LLMChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm.LLMChain.html) combined a prompt template, LLM, and output parser into a class.\n",
"\n",
"Some advantages of switching to the LCEL implementation are:\n",
"\n",
"- Clarity around contents and parameters. The legacy `LLMChain` contains a default output parser and other options.\n",
"- Easier streaming. `LLMChain` only supports streaming via callbacks.\n",
"- Easier access to raw message outputs if desired. `LLMChain` only exposes these via a parameter or via callback.\n",
"\n",
"import { ColumnContainer, Column } from \"@theme/Columns\";\n",
"\n",
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"#### Legacy\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "e628905c-430e-4e4a-9d7c-c91d2f42052e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'adjective': 'funny',\n",
" 'text': \"Why couldn't the bicycle find its way home?\\n\\nBecause it lost its bearings!\"}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"user\", \"Tell me a {adjective} joke\")],\n",
")\n",
"\n",
"chain = LLMChain(llm=ChatOpenAI(), prompt=prompt)\n",
"\n",
"chain({\"adjective\": \"funny\"})"
]
},
{
"cell_type": "markdown",
"id": "cdc3b527-c09e-4c77-9711-c3cc4506cd95",
"metadata": {},
"source": [
"\n",
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "0d2a7cf8-1bc7-405c-bb0d-f2ab2ba3b6ab",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Why couldn't the bicycle stand up by itself?\\n\\nBecause it was two tired!\""
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"user\", \"Tell me a {adjective} joke\")],\n",
")\n",
"\n",
"chain = prompt | ChatOpenAI() | StrOutputParser()\n",
"\n",
"chain.invoke({\"adjective\": \"funny\"})"
]
},
{
"cell_type": "markdown",
"id": "3c0b0513-77b8-4371-a20e-3e487cec7e7f",
"metadata": {},
"source": [
"\n",
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"Note that `LLMChain` by default returns a `dict` containing both the input and the output. If this behavior is desired, we can replicate it using another LCEL primitive, [`RunnablePassthrough`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html):"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "529206c5-abbe-4213-9e6c-3b8586c8000d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'adjective': 'funny',\n",
" 'text': \"Why couldn't the bicycle stand up by itself?\\n\\nBecause it was two tired!\"}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"outer_chain = RunnablePassthrough().assign(text=chain)\n",
"\n",
"outer_chain.invoke({\"adjective\": \"funny\"})"
]
},
{
"cell_type": "markdown",
"id": "29d2e26c-2854-4971-9c2b-613450993921",
"metadata": {},
"source": [
"See [this tutorial](/docs/tutorials/llm_chain) for more detail on building with prompt templates, LLMs, and output parsers."
]
},
{
"cell_type": "markdown",
"id": "00df631d-5121-4918-94aa-b88acce9b769",
"metadata": {},
"source": [
"## `ConversationChain`\n",
"<span data-heading-keywords=\"conversationchain\"></span>\n",
"\n",
"[`ConversationChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html) incorporates a memory of previous messages to sustain a stateful conversation.\n",
"\n",
"Some advantages of switching to the LCEL implementation are:\n",
"\n",
"- Innate support for threads/separate sessions. To make this work with `ConversationChain`, you'd need to instantiate a separate memory class outside the chain.\n",
"- More explicit parameters. `ConversationChain` contains a hidden default prompt, which can cause confusion.\n",
"- Streaming support. `ConversationChain` only supports streaming via callbacks.\n",
"\n",
"`RunnableWithMessageHistory` implements sessions via configuration parameters. It should be instantiated with a callable that returns a [chat message history](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.BaseChatMessageHistory.html). By default, it expects this function to take a single argument `session_id`.\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"\n",
"#### Legacy\n"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "4f2cc6dc-d70a-4c13-9258-452f14290da6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input': 'how are you?',\n",
" 'history': '',\n",
" 'response': \"Arrr, I be doin' well, me matey! Just sailin' the high seas in search of treasure and adventure. How can I assist ye today?\"}"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import ConversationChain\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"template = \"\"\"\n",
"You are a pirate. Answer the following questions as best you can.\n",
"Chat history: {history}\n",
"Question: {input}\n",
"\"\"\"\n",
"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"memory = ConversationBufferMemory()\n",
"\n",
"chain = ConversationChain(\n",
" llm=ChatOpenAI(),\n",
" memory=memory,\n",
" prompt=prompt,\n",
")\n",
"\n",
"chain({\"input\": \"how are you?\"})"
]
},
{
"cell_type": "markdown",
"id": "f8e36b0e-c7dc-4130-a51b-189d4b756c7f",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "173e1a9c-2a18-4669-b0de-136f39197786",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Arrr, I be doin' well, me heartie! Just sailin' the high seas in search of treasure and adventure. How be ye?\""
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.chat_history import InMemoryChatMessageHistory\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are a pirate. Answer the following questions as best you can.\"),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"history = InMemoryChatMessageHistory()\n",
"\n",
"\n",
"def get_history():\n",
" return history\n",
"\n",
"\n",
"chain = prompt | ChatOpenAI() | StrOutputParser()\n",
"\n",
"wrapped_chain = RunnableWithMessageHistory(\n",
" chain,\n",
" get_history,\n",
" history_messages_key=\"chat_history\",\n",
")\n",
"\n",
"wrapped_chain.invoke({\"input\": \"how are you?\"})"
]
},
{
"cell_type": "markdown",
"id": "6b386ce6-895e-442c-88f3-7bec0ab9f401",
"metadata": {},
"source": [
"\n",
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"The above example uses the same `history` for all sessions. The example below shows how to use a different chat history for each session."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "4e05994f-1fbc-4699-bf2e-62cb0e4deeb8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Ahoy matey! What can this old pirate do for ye today?'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.chat_history import BaseChatMessageHistory\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"\n",
"store = {}\n",
"\n",
"\n",
"def get_session_history(session_id: str) -> BaseChatMessageHistory:\n",
" if session_id not in store:\n",
" store[session_id] = InMemoryChatMessageHistory()\n",
" return store[session_id]\n",
"\n",
"\n",
"chain = prompt | ChatOpenAI() | StrOutputParser()\n",
"\n",
"wrapped_chain = RunnableWithMessageHistory(\n",
" chain,\n",
" get_session_history,\n",
" history_messages_key=\"chat_history\",\n",
")\n",
"\n",
"wrapped_chain.invoke(\n",
" {\"input\": \"Hello!\"},\n",
" config={\"configurable\": {\"session_id\": \"abc123\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "c36ebecb",
"metadata": {},
"source": [
"See [this tutorial](/docs/tutorials/chatbot) for a more end-to-end guide on building with [`RunnableWithMessageHistory`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html).\n",
"\n",
"## `RetrievalQA`\n",
"<span data-heading-keywords=\"retrievalqa\"></span>\n",
"\n",
"The [`RetrievalQA`](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html) chain performed natural-language question answering over a data source using retrieval-augmented generation.\n",
"\n",
"Some advantages of switching to the LCEL implementation are:\n",
"\n",
"- Easier customizability. Details such as the prompt and how documents are formatted are only configurable via specific parameters in the `RetrievalQA` chain.\n",
"- More easily return source documents.\n",
"- Support for runnable methods like streaming and async operations.\n",
"\n",
"Now let's look at them side-by-side. We'll use the same ingestion code to load a [blog post by Lilian Weng](https://lilianweng.github.io/posts/2023-06-23-agent/) on autonomous agents into a local vector store:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "1efbe16e",
"metadata": {},
"outputs": [],
"source": [
"# Load docs\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai.chat_models import ChatOpenAI\n",
"from langchain_openai.embeddings import OpenAIEmbeddings\n",
"\n",
"loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n",
"data = loader.load()\n",
"\n",
"# Split\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)\n",
"\n",
"# Store splits\n",
"vectorstore = FAISS.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())\n",
"\n",
"# LLM\n",
"llm = ChatOpenAI()"
]
},
{
"cell_type": "markdown",
"id": "c7e16438",
"metadata": {},
"source": [
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"#### Legacy"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "43bf55a0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'query': 'What are autonomous agents?',\n",
" 'result': 'Autonomous agents are LLM-empowered agents that handle autonomous design, planning, and performance of complex tasks, such as scientific experiments. These agents can browse the Internet, read documentation, execute code, call robotics experimentation APIs, and leverage other LLMs. They are capable of reasoning and planning ahead for complicated tasks by breaking them down into smaller steps.'}"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import hub\n",
"from langchain.chains import RetrievalQA\n",
"\n",
"# See full prompt at https://smith.langchain.com/hub/rlm/rag-prompt\n",
"prompt = hub.pull(\"rlm/rag-prompt\")\n",
"\n",
"qa_chain = RetrievalQA.from_llm(\n",
" llm, retriever=vectorstore.as_retriever(), prompt=prompt\n",
")\n",
"\n",
"qa_chain(\"What are autonomous agents?\")"
]
},
{
"cell_type": "markdown",
"id": "081948e5",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "9efcc931",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Autonomous agents are agents that can handle autonomous design, planning, and performance of complex tasks, such as scientific experiments. They can browse the Internet, read documentation, execute code, call robotics experimentation APIs, and leverage other language model models. These agents use reasoning steps to develop solutions to specific tasks, like creating a novel anticancer drug.'"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import hub\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"# See full prompt at https://smith.langchain.com/hub/rlm/rag-prompt\n",
"prompt = hub.pull(\"rlm/rag-prompt\")\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"qa_chain = (\n",
" {\n",
" \"context\": vectorstore.as_retriever() | format_docs,\n",
" \"question\": RunnablePassthrough(),\n",
" }\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")\n",
"\n",
"qa_chain.invoke(\"What are autonomous agents?\")"
]
},
{
"cell_type": "markdown",
"id": "d6f44fe8",
"metadata": {},
"source": [
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"The LCEL implementation exposes the internals of what's happening around retrieving, formatting documents, and passing them through a prompt to the LLM, but it is more verbose. You can customize and wrap this composition logic in a helper function, or use the higher-level [`create_retrieval_chain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html) and [`create_stuff_documents_chain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) helper method:"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "5fe42761",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input': 'What are autonomous agents?',\n",
" 'context': [Document(page_content='Boiko et al. (2023) also looked into LLM-empowered agents for scientific discovery, to handle autonomous design, planning, and performance of complex scientific experiments. This agent can use tools to browse the Internet, read documentation, execute code, call robotics experimentation APIs and leverage other LLMs.\\nFor example, when requested to \"develop a novel anticancer drug\", the model came up with the following reasoning steps:', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content='Weng, Lilian. (Jun 2023). “LLM-powered Autonomous Agents”. LilLog. https://lilianweng.github.io/posts/2023-06-23-agent/.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content=\"LLM Powered Autonomous Agents | Lil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nLil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nPosts\\n\\n\\n\\n\\nArchive\\n\\n\\n\\n\\nSearch\\n\\n\\n\\n\\nTags\\n\\n\\n\\n\\nFAQ\\n\\n\\n\\n\\nemojisearch.app\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n LLM Powered Autonomous Agents\\n \\nDate: June 23, 2023 | Estimated Reading Time: 31 min | Author: Lilian Weng\\n\\n\\n \\n\\n\\nTable of Contents\\n\\n\\n\\nAgent System Overview\\n\\nComponent One: Planning\\n\\nTask Decomposition\\n\\nSelf-Reflection\\n\\n\\nComponent Two: Memory\\n\\nTypes of Memory\\n\\nMaximum Inner Product Search (MIPS)\", metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'})],\n",
" 'answer': 'Autonomous agents are entities that can operate independently, making decisions and taking actions without direct human intervention. These agents can perform tasks such as planning, executing complex experiments, and leveraging various tools and resources to achieve objectives. In the context provided, LLM-powered autonomous agents are specifically designed for scientific discovery, capable of handling tasks like designing novel anticancer drugs through reasoning steps.'}"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import hub\n",
"from langchain.chains import create_retrieval_chain\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"\n",
"# See full prompt at https://smith.langchain.com/hub/langchain-ai/retrieval-qa-chat\n",
"retrieval_qa_chat_prompt = hub.pull(\"langchain-ai/retrieval-qa-chat\")\n",
"\n",
"combine_docs_chain = create_stuff_documents_chain(llm, retrieval_qa_chat_prompt)\n",
"rag_chain = create_retrieval_chain(vectorstore.as_retriever(), combine_docs_chain)\n",
"\n",
"rag_chain.invoke({\"input\": \"What are autonomous agents?\"})"
]
},
{
"cell_type": "markdown",
"id": "2772f4e9",
"metadata": {},
"source": [
"## `ConversationalRetrievalChain`\n",
"<span data-heading-keywords=\"conversationalretrievalchain\"></span>\n",
"\n",
"The [`ConversationalRetrievalChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain.html) was an all-in one way that combined retrieval-augmented generation with chat history, allowing you to \"chat with\" your documents.\n",
"\n",
"Advantages of switching to the LCEL implementation are similar to the `RetrievalQA` section above:\n",
"\n",
"- Clearer internals. The `ConversationalRetrievalChain` chain hides an entire question rephrasing step which dereferences the initial query against the chat history.\n",
" - This means the class contains two sets of configurable prompts, LLMs, etc.\n",
"- More easily return source documents.\n",
"- Support for runnable methods like streaming and async operations.\n",
"\n",
"Here are side-by-side implementations with custom prompts. We'll reuse the loaded documents and vector store from the previous section:"
]
},
{
"cell_type": "markdown",
"id": "8bc06416",
"metadata": {},
"source": [
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"#### Legacy"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "54eb9576",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'question': 'What are autonomous agents?',\n",
" 'chat_history': '',\n",
" 'answer': 'Autonomous agents are powered by Large Language Models (LLMs) to handle tasks like scientific discovery and complex experiments autonomously. These agents can browse the internet, read documentation, execute code, and leverage other LLMs to perform tasks. They can reason and plan ahead to decompose complicated tasks into manageable steps.'}"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"\n",
"condense_question_template = \"\"\"\n",
"Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.\n",
"\n",
"Chat History:\n",
"{chat_history}\n",
"Follow Up Input: {question}\n",
"Standalone question:\"\"\"\n",
"\n",
"condense_question_prompt = ChatPromptTemplate.from_template(condense_question_template)\n",
"\n",
"qa_template = \"\"\"\n",
"You are an assistant for question-answering tasks.\n",
"Use the following pieces of retrieved context to answer\n",
"the question. If you don't know the answer, say that you\n",
"don't know. Use three sentences maximum and keep the\n",
"answer concise.\n",
"\n",
"Chat History:\n",
"{chat_history}\n",
"\n",
"Other context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"\n",
"qa_prompt = ChatPromptTemplate.from_template(qa_template)\n",
"\n",
"convo_qa_chain = ConversationalRetrievalChain.from_llm(\n",
" llm,\n",
" vectorstore.as_retriever(),\n",
" condense_question_prompt=condense_question_prompt,\n",
" combine_docs_chain_kwargs={\n",
" \"prompt\": qa_prompt,\n",
" },\n",
")\n",
"\n",
"convo_qa_chain(\n",
" {\n",
" \"question\": \"What are autonomous agents?\",\n",
" \"chat_history\": \"\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "43a8a23c",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "c884b138",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input': 'What are autonomous agents?',\n",
" 'chat_history': [],\n",
" 'context': [Document(page_content='Boiko et al. (2023) also looked into LLM-empowered agents for scientific discovery, to handle autonomous design, planning, and performance of complex scientific experiments. This agent can use tools to browse the Internet, read documentation, execute code, call robotics experimentation APIs and leverage other LLMs.\\nFor example, when requested to \"develop a novel anticancer drug\", the model came up with the following reasoning steps:', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content='Weng, Lilian. (Jun 2023). “LLM-powered Autonomous Agents”. LilLog. https://lilianweng.github.io/posts/2023-06-23-agent/.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content='Or\\n@article{weng2023agent,\\n title = \"LLM-powered Autonomous Agents\",\\n author = \"Weng, Lilian\",\\n journal = \"lilianweng.github.io\",\\n year = \"2023\",\\n month = \"Jun\",\\n url = \"https://lilianweng.github.io/posts/2023-06-23-agent/\"\\n}\\nReferences#\\n[1] Wei et al. “Chain of thought prompting elicits reasoning in large language models.” NeurIPS 2022\\n[2] Yao et al. “Tree of Thoughts: Dliberate Problem Solving with Large Language Models.” arXiv preprint arXiv:2305.10601 (2023).', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'})],\n",
" 'answer': 'Autonomous agents are entities capable of acting independently, making decisions, and performing tasks without direct human intervention. These agents can interact with their environment, perceive information, and take actions based on their goals or objectives. They often use artificial intelligence techniques to navigate and accomplish tasks in complex or dynamic environments.'}"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import create_history_aware_retriever, create_retrieval_chain\n",
"\n",
"condense_question_system_template = (\n",
" \"Given a chat history and the latest user question \"\n",
" \"which might reference context in the chat history, \"\n",
" \"formulate a standalone question which can be understood \"\n",
" \"without the chat history. Do NOT answer the question, \"\n",
" \"just reformulate it if needed and otherwise return it as is.\"\n",
")\n",
"\n",
"condense_question_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", condense_question_system_template),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"history_aware_retriever = create_history_aware_retriever(\n",
" llm, vectorstore.as_retriever(), condense_question_prompt\n",
")\n",
"\n",
"system_prompt = (\n",
" \"You are an assistant for question-answering tasks. \"\n",
" \"Use the following pieces of retrieved context to answer \"\n",
" \"the question. If you don't know the answer, say that you \"\n",
" \"don't know. Use three sentences maximum and keep the \"\n",
" \"answer concise.\"\n",
" \"\\n\\n\"\n",
" \"{context}\"\n",
")\n",
"\n",
"qa_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system_prompt),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"qa_chain = create_stuff_documents_chain(llm, qa_prompt)\n",
"\n",
"convo_qa_chain = create_retrieval_chain(history_aware_retriever, qa_chain)\n",
"\n",
"convo_qa_chain.invoke(\n",
" {\n",
" \"input\": \"What are autonomous agents?\",\n",
" \"chat_history\": [],\n",
" }\n",
")"
]
},
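{
"cell_type": "markdown",
"id": "c5d7b6e1",
"metadata": {},
"source": [
"As a quick usage sketch (reusing `convo_qa_chain` from the cell above; the message classes come from `langchain_core`), prior turns can be passed in `chat_history` so a follow-up question is condensed into a standalone query before retrieval. The follow-up text here is illustrative:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d8f2a3b4",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import AIMessage, HumanMessage\n",
"\n",
"# The follow-up only makes sense given the prior turn; the\n",
"# history-aware retriever rewrites it before searching.\n",
"convo_qa_chain.invoke(\n",
" {\n",
" \"input\": \"How do they decompose tasks?\",\n",
" \"chat_history\": [\n",
" HumanMessage(content=\"What are autonomous agents?\"),\n",
" AIMessage(content=\"Autonomous agents act independently ...\"),\n",
" ],\n",
" }\n",
")"
]
},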
{
"cell_type": "markdown",
"id": "b2717810",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"</ColumnContainer>\n",
"\n",
"## Next steps\n",
"\n",
"You've now seen how to migrate existing usage of some legacy chains to LCEL.\n",
"\n",
"Next, check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -6,26 +6,20 @@
"source": [
"# How to pass run time values to tools\n",
"\n",
":::info Prerequisites\n",
"import Prerequisites from \"@theme/Prerequisites\";\n",
"import Compatibility from \"@theme/Compatibility\";\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [LangChain Tools](/docs/concepts/#tools)\n",
"- [How to create tools](/docs/how_to/custom_tools)\n",
"- [How to use a model to call tools](/docs/how_to/tool_calling)\n",
":::\n",
"<Prerequisites titlesAndLinks={[\n",
" [\"Chat models\", \"/docs/concepts/#chat-models\"],\n",
" [\"LangChain Tools\", \"/docs/concepts/#tools\"],\n",
" [\"How to create tools\", \"/docs/how_to/custom_tools\"],\n",
" [\"How to use a model to call tools\", \"/docs/how_to/tool_calling\"],\n",
"]} />\n",
"\n",
":::info Using with LangGraph\n",
"\n",
"If you're using LangGraph, please refer to [this how-to guide](https://langchain-ai.github.io/langgraph/how-tos/pass-run-time-values-to-tools/)\n",
"which shows how to create an agent that keeps track of a given user's favorite pets.\n",
":::\n",
"\n",
":::caution Added in `langchain-core==0.2.21`\n",
"\n",
"Must have `langchain-core>=0.2.21` to use this functionality.\n",
"\n",
":::\n",
"<Compatibility packagesAndVersions={[\n",
" [\"langchain-core\", \"0.2.21\"],\n",
"]} />\n",
"\n",
"You may need to bind values to a tool that are only known at runtime. For example, the tool logic may require using the ID of the user who made the request.\n",
"\n",
@@ -33,7 +27,13 @@
"\n",
"Instead, the LLM should only control the parameters of the tool that are meant to be controlled by the LLM, while other parameters (such as user ID) should be fixed by the application logic.\n",
"\n",
"This how-to guide shows you how to prevent the model from generating certain tool arguments and injecting them in directly at runtime."
"This how-to guide shows you how to prevent the model from generating certain tool arguments and injecting them in directly at runtime.\n",
"\n",
":::info Using with LangGraph\n",
"\n",
"If you're using LangGraph, please refer to [this how-to guide](https://langchain-ai.github.io/langgraph/how-tos/pass-run-time-values-to-tools/)\n",
"which shows how to create an agent that keeps track of a given user's favorite pets.\n",
":::"
]
},
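{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the pattern this guide covers, assuming `langchain-core>=0.2.21`: annotating a parameter with `InjectedToolArg` hides it from the schema the model sees, so the application must supply it at invocation time. The tool name and `user_id` parameter below are illustrative, not part of any library API:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import Annotated\n",
"\n",
"from langchain_core.tools import InjectedToolArg, tool\n",
"\n",
"\n",
"@tool\n",
"def fetch_user_flights(query: str, user_id: Annotated[str, InjectedToolArg]) -> str:\n",
" \"\"\"Look up flights for the requesting user.\"\"\"\n",
" # user_id is injected by the application, never generated by the model\n",
" return f\"Flights matching {query!r} for user {user_id}\"\n",
"\n",
"\n",
"# The model-facing schema omits user_id entirely...\n",
"print(fetch_user_flights.tool_call_schema.schema())\n",
"# ...but invoking the tool still requires it:\n",
"print(fetch_user_flights.invoke({\"query\": \"SFO to JFK\", \"user_id\": \"u-123\"}))"
]
},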
{
@@ -597,9 +597,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-311",
"display_name": "Python 3",
"language": "python",
"name": "poetry-venv-311"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -611,7 +611,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -36,7 +36,7 @@
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"| | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"### Supported Methods\n",
"\n",
@@ -395,6 +395,66 @@
"chat_model_external.invoke(\"How to use Databricks?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Function calling on Databricks"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Databricks Function Calling is OpenAI-compatible and is only available during model serving as part of Foundation Model APIs.\n",
"\n",
"See [Databricks function calling introduction](https://docs.databricks.com/en/machine-learning/model-serving/function-calling.html#supported-models) for supported models."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.databricks import ChatDatabricks\n",
"\n",
"llm = ChatDatabricks(endpoint=\"databricks-meta-llama-3-70b-instruct\")\n",
"tools = [\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"get_current_weather\",\n",
" \"description\": \"Get the current weather in a given location\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"location\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city and state, e.g. San Francisco, CA\",\n",
" },\n",
" \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"]},\n",
" },\n",
" },\n",
" },\n",
" }\n",
"]\n",
"\n",
"# supported tool_choice values: \"auto\", \"required\", \"none\", function name in string format,\n",
"# or a dictionary as {\"type\": \"function\", \"function\": {\"name\": <<tool_name>>}}\n",
"model = llm.bind_tools(tools, tool_choice=\"auto\")\n",
"\n",
"messages = [{\"role\": \"user\", \"content\": \"What is the current temperature of Chicago?\"}]\n",
"print(model.invoke(messages))"
]
},
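{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a usage sketch following the comment above, the dictionary form of `tool_choice` forces a call to a specific function (here the `get_current_weather` tool defined earlier):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Force the model to call get_current_weather rather than letting it choose.\n",
"forced_model = llm.bind_tools(\n",
" tools,\n",
" tool_choice={\"type\": \"function\", \"function\": {\"name\": \"get_current_weather\"}},\n",
")\n",
"print(forced_model.invoke(messages))"
]
},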
{
"cell_type": "markdown",
"metadata": {},
"source": [
"See [Databricks Unity Catalog](docs/integrations/tools/databricks.ipynb) about how to use UC functions in chains."
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -129,6 +129,7 @@
" max_new_tokens=512,\n",
" do_sample=False,\n",
" repetition_penalty=1.03,\n",
" return_full_text=False,\n",
" ),\n",
")"
]

View File

@@ -31,7 +31,8 @@
"### Local Partitioning (Optional)\n",
"\n",
"By default, `langchain-unstructured` installs a smaller footprint that requires\n",
"offloading of the partitioning logic to the Unstructured API.\n",
"offloading of the partitioning logic to the Unstructured API, which requires an `api_key`. For\n",
"partitioning using the API, refer to the Unstructured API section below.\n",
"\n",
"If you would like to run the partitioning logic locally, you will need to install\n",
"a combination of system dependencies, as outlined in the \n",
@@ -358,8 +359,9 @@
"Partitioning with the Unstructured API relies on the [Unstructured SDK\n",
"Client](https://docs.unstructured.io/api-reference/api-services/sdk).\n",
"\n",
"Below is an example showing how you can customize some features of the client and use your own\n",
"`requests.Session()`, pass in an alternative `server_url`, or customize the `RetryConfig` object for more control over how failed requests are handled."
"Below is an example showing how you can customize some features of the client and use your own `requests.Session()`, pass in an alternative `server_url`, or customize the `RetryConfig` object for more control over how failed requests are handled.\n",
"\n",
"Note that the example below may not use the latest version of the UnstructuredClient and there could be breaking changes in future releases. For the latest examples, refer to the [Unstructured Python SDK](https://docs.unstructured.io/api-reference/api-services/sdk-python) docs."
]
},
{

View File

@@ -2,14 +2,49 @@
"cells": [
{
"cell_type": "markdown",
"id": "9fc6205b",
"id": "00a924a0-57e2-43fa-95dc-3ea48a56d3a5",
"metadata": {},
"source": [
"# Arxiv\n",
"---\n",
"sidebar_label: Arxiv\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "0f1b8ddb-8b06-4e7e-b0bb-8786dea15e2b",
"metadata": {},
"source": [
"# ArxivRetriever\n",
"\n",
"## Overview\n",
"\n",
">[arXiv](https://arxiv.org/) is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.\n",
"\n",
"This notebook shows how to retrieve scientific articles from `Arxiv.org` into the Document format that is used downstream."
"This notebook shows how to retrieve scientific articles from Arxiv.org into the [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) format that is used downstream.\n",
"\n",
"For detailed documentation of all `ArxivRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.arxiv.ArxivRetriever.html).\n",
"\n",
"### Integration details\n",
"\n",
"| Retriever | Source | Package |\n",
"| :--- | :--- | :---: |\n",
"[ArxivRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.arxiv.ArxivRetriever.html) | Scholarly articles on [arxiv.org](https://arxiv.org/) | langchain_community |\n",
"\n",
"## Setup\n",
"\n",
"If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75d179b4-abc3-48db-9f8b-1cdb46d3aa77",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
@@ -17,15 +52,9 @@
"id": "51489529-5dcd-4b86-bda6-de0a39d8ffd1",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "markdown",
"id": "1435c804-069d-4ade-9a7b-006b97b767c1",
"metadata": {},
"source": [
"First, you need to install `arxiv` python package."
"### Installation\n",
"\n",
"This retriever lives in the `langchain-community` package. We will also need the [arxiv](https://pypi.org/project/arxiv/) dependency:"
]
},
{
@@ -37,7 +66,7 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet arxiv"
"%pip install -qU langchain-community arxiv"
]
},
{
@@ -45,54 +74,44 @@
"id": "6c15470b-a16b-4e0d-bc6a-6998bafbb5a4",
"metadata": {},
"source": [
"`ArxivRetriever` has these arguments:\n",
"## Instantiation\n",
"\n",
"`ArxivRetriever` parameters include:\n",
"- optional `load_max_docs`: default=100. Use it to limit number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.\n",
"- optional `load_all_available_meta`: default=False. By default only the most important fields downloaded: `Published` (date when document was published/last updated), `Title`, `Authors`, `Summary`. If True, other fields also downloaded.\n",
"- `get_full_documents`: boolean, default False. Determines whether to fetch full text of documents.\n",
"\n",
"`get_relevant_documents()` has one argument, `query`: free text which used to find documents in `Arxiv.org`"
"See [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.arxiv.ArxivRetriever.html) for more detail."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a13f9e92-24b3-4cea-8541-2584c1cdb2d1",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.retrievers import ArxivRetriever\n",
"\n",
"retriever = ArxivRetriever(\n",
" load_max_docs=2,\n",
" get_ful_documents=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "ae3c3d16",
"id": "30c27047-16cf-46b5-bb29-754f1696f2bb",
"metadata": {},
"source": [
"## Examples"
]
},
{
"cell_type": "markdown",
"id": "6fafb73b-d6ec-4822-b161-edf0aaf5224a",
"metadata": {},
"source": [
"### Running retriever"
"## Usage\n",
"\n",
"`ArxivRetriever` supports retrieval by article identifier:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d0e6f506",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.retrievers import ArxivRetriever"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "f381f642",
"metadata": {},
"outputs": [],
"source": [
"retriever = ArxivRetriever(load_max_docs=2)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 2,
"id": "20ae1a74",
"metadata": {},
"outputs": [],
@@ -102,20 +121,20 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 3,
"id": "1d5a5088",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'Published': '2016-05-26',\n",
"{'Entry ID': 'http://arxiv.org/abs/1605.08386v1',\n",
" 'Published': datetime.date(2016, 5, 26),\n",
" 'Title': 'Heat-bath random walks with Markov bases',\n",
" 'Authors': 'Caprice Stanley, Tobias Windisch',\n",
" 'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\\nallowed moves of arbitrary length. We show that the diameter of these graphs on\\nfibers of a fixed integer matrix can be bounded from above by a constant. We\\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\\nalso state explicit conditions on the set of moves so that the heat-bath random\\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\\ndimension.'}"
" 'Authors': 'Caprice Stanley, Tobias Windisch'}"
]
},
"execution_count": 9,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -126,17 +145,17 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 4,
"id": "c0ccd0c7-f6a6-43e7-b842-5f57afb94224",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'arXiv:1605.08386v1 [math.CO] 26 May 2016\\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\\nCAPRICE STANLEY AND TOBIAS WINDISCH\\nAbstract. Graphs on lattice points are studied whose edges come from a nite set of\\nallowed moves of arbitrary length. We show that the diameter of these graphs onbers of a\\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\\nbehaviour of heat-b'"
"'Graphs on lattice points are studied whose edges come from a finite set of\\nallowed moves of arbitrary length. We show that the diameter of these graphs on\\nfibers of a fixed integer matrix can be bounded from above by a constant. We\\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\\nalso state explicit conditions on the set of moves so that the heat-bath random\\nwalk, a ge'"
]
},
"execution_count": 10,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -147,159 +166,143 @@
},
{
"cell_type": "markdown",
"id": "2670363b-3806-4c7e-b14d-90a4d5d2a200",
"id": "c525c5c2-0961-4f4c-a208-dd6ceed76ea1",
"metadata": {},
"source": [
"### Question Answering on facts"
"`ArxivRetriever` also supports retrieval based on natural language text:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 5,
"id": "4cd3d079-4496-4ab8-adff-b86e6418bc74",
"metadata": {},
"outputs": [],
"source": [
"docs = retriever.invoke(\"What is the ImageBind model?\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9318c790-d388-45da-8d5c-57256619e2a1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'Entry ID': 'http://arxiv.org/abs/2305.05665v2',\n",
" 'Published': datetime.date(2023, 5, 31),\n",
" 'Title': 'ImageBind: One Embedding Space To Bind Them All',\n",
" 'Authors': 'Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra'}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].metadata"
]
},
{
"cell_type": "markdown",
"id": "2670363b-3806-4c7e-b14d-90a4d5d2a200",
"metadata": {},
"source": [
"## Use within a chain\n",
"\n",
"Like other retrievers, `ArxivRetriever` can be incorporated into LLM applications via [chains](/docs/how_to/sequence/).\n",
"\n",
"We will need a LLM or chat model:\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" />\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "bcbeeaf5-79d1-4e29-8589-11dfb26761af",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "bb3601df-53ea-4826-bdbe-554387bc3ad4",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" ········\n"
]
}
],
"source": [
"# get a token: https://platform.openai.com/account/api-keys\n",
"\n",
"from getpass import getpass\n",
"\n",
"OPENAI_API_KEY = getpass()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "e9c1a114-0410-4804-be30-05f34a9760f9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY"
"prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"Answer the question based only on the context provided.\n",
"\n",
"Context: {context}\n",
"\n",
"Question: {question}\"\"\"\n",
")\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "51a33cc9-ec42-4afc-8a2d-3bfff476aa59",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo\") # switch to 'gpt-4'\n",
"qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "ea537767-a8bf-4adf-ae03-b353c9145d58",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"-> **Question**: What are Heat-bath random walks with Markov base? \n",
"\n",
"**Answer**: I'm not sure, as I don't have enough context to provide a definitive answer. The term \"Heat-bath random walks with Markov base\" is not mentioned in the given text. Could you provide more information or context about where you encountered this term? \n",
"\n",
"-> **Question**: What is the ImageBind model? \n",
"\n",
"**Answer**: ImageBind is an approach developed by Facebook AI Research to learn a joint embedding across six different modalities, including images, text, audio, depth, thermal, and IMU data. The approach uses the binding property of images to align each modality's embedding to image embeddings and achieve an emergent alignment across all modalities. This enables novel multimodal capabilities, including cross-modal retrieval, embedding-space arithmetic, and audio-to-image generation, among others. The approach sets a new state-of-the-art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Additionally, it shows strong few-shot recognition results and serves as a new way to evaluate vision models for visual and non-visual tasks. \n",
"\n",
"-> **Question**: How does Compositional Reasoning with Large Language Models works? \n",
"\n",
"**Answer**: Compositional reasoning with large language models refers to the ability of these models to correctly identify and represent complex concepts by breaking them down into smaller, more basic parts and combining them in a structured way. This involves understanding the syntax and semantics of language and using that understanding to build up more complex meanings from simpler ones. \n",
"\n",
"In the context of the paper \"Does CLIP Bind Concepts? Probing Compositionality in Large Image Models\", the authors focus specifically on the ability of a large pretrained vision and language model (CLIP) to encode compositional concepts and to bind variables in a structure-sensitive way. They examine CLIP's ability to compose concepts in a single-object setting, as well as in situations where concept binding is needed. \n",
"\n",
"The authors situate their work within the tradition of research on compositional distributional semantics models (CDSMs), which seek to bridge the gap between distributional models and formal semantics by building architectures which operate over vectors yet still obey traditional theories of linguistic composition. They compare the performance of CLIP with several architectures from research on CDSMs to evaluate its ability to encode and reason about compositional concepts. \n",
"\n"
]
}
],
"source": [
"questions = [\n",
" \"What are Heat-bath random walks with Markov base?\",\n",
" \"What is the ImageBind model?\",\n",
" \"How does Compositional Reasoning with Large Language Models works?\",\n",
"]\n",
"chat_history = []\n",
"\n",
"for question in questions:\n",
" result = qa({\"question\": question, \"chat_history\": chat_history})\n",
" chat_history.append((question, result[\"answer\"]))\n",
" print(f\"-> **Question**: {question} \\n\")\n",
" print(f\"**Answer**: {result['answer']} \\n\")"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "8e0c3fc6-ae62-4036-a885-dc60176a7745",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"-> **Question**: What are Heat-bath random walks with Markov base? Include references to answer. \n",
"\n",
"**Answer**: Heat-bath random walks with Markov base (HB-MB) is a class of stochastic processes that have been studied in the field of statistical mechanics and condensed matter physics. In these processes, a particle moves in a lattice by making a transition to a neighboring site, which is chosen according to a probability distribution that depends on the energy of the particle and the energy of its surroundings.\n",
"\n",
"The HB-MB process was introduced by Bortz, Kalos, and Lebowitz in 1975 as a way to simulate the dynamics of interacting particles in a lattice at thermal equilibrium. The method has been used to study a variety of physical phenomena, including phase transitions, critical behavior, and transport properties.\n",
"\n",
"References:\n",
"\n",
"Bortz, A. B., Kalos, M. H., & Lebowitz, J. L. (1975). A new algorithm for Monte Carlo simulation of Ising spin systems. Journal of Computational Physics, 17(1), 10-18.\n",
"\n",
"Binder, K., & Heermann, D. W. (2010). Monte Carlo simulation in statistical physics: an introduction. Springer Science & Business Media. \n",
"\n"
]
}
],
"source": [
"questions = [\n",
" \"What are Heat-bath random walks with Markov base? Include references to answer.\",\n",
"]\n",
"chat_history = []\n",
"\n",
"for question in questions:\n",
" result = qa({\"question\": question, \"chat_history\": chat_history})\n",
" chat_history.append((question, result[\"answer\"]))\n",
" print(f\"-> **Question**: {question} \\n\")\n",
" print(f\"**Answer**: {result['answer']} \\n\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "09794ab5-759c-4b56-95d4-2454d4d86da1",
"execution_count": 9,
"id": "62889c3c-8a49-4c76-9141-d777311af1f4",
"metadata": {},
"outputs": [],
"source": []
"outputs": [
{
"data": {
"text/plain": [
"'The ImageBind model is an approach to learn a joint embedding across six different modalities - images, text, audio, depth, thermal, and IMU data. It shows that only image-paired data is sufficient to bind the modalities together and can leverage large scale vision-language models for zero-shot capabilities and emergent applications such as cross-modal retrieval, composing modalities with arithmetic, cross-modal detection and generation.'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"What is the ImageBind model?\")"
]
},
{
"cell_type": "markdown",
"id": "e419acb8-d7ac-42a1-916f-c796f23dce9b",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `ArxivRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.arxiv.ArxivRetriever.html)."
]
}
],
"metadata": {
@@ -318,7 +321,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -2,15 +2,39 @@
"cells": [
{
"cell_type": "markdown",
"id": "1edb9e6b",
"id": "f9a62e19-b00b-4f6c-a700-1e500e4c290a",
"metadata": {},
"source": [
"# Azure AI Search\n",
"---\n",
"sidebar_label: Azure AI Search\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "76f74245-7220-4446-ae8d-4e5a9e998f1f",
"metadata": {},
"source": [
"# AzureAISearchRetriever\n",
"\n",
"## Overview\n",
"[Azure AI Search](https://learn.microsoft.com/azure/search/search-what-is-azure-search) (formerly known as `Azure Cognitive Search`) is a Microsoft cloud search service that gives developers infrastructure, APIs, and tools for information retrieval of vector, keyword, and hybrid queries at scale.\n",
"\n",
"`AzureAISearchRetriever` is an integration module that returns documents from an unstructured query. It's based on the BaseRetriever class and it targets the 2023-11-01 stable REST API version of Azure AI Search, which means it supports vector indexing and queries.\n",
"\n",
"This guide will help you getting started with the Azure AI Search [retriever](/docs/concepts/#retrievers). For detailed documentation of all `AzureAISearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.azure_ai_search.AzureAISearchRetriever.html).\n",
"\n",
"`AzureAISearchRetriever` replaces `AzureCognitiveSearchRetriever`, which will soon be deprecated. We recommend switching to the newer version that's based on the most recent stable version of the search APIs.\n",
"\n",
"### Integration details\n",
"\n",
"| Retriever | Self-host | Cloud offering | Package |\n",
"| :--- | :--- | :---: | :---: |\n",
"[AzureAISearchRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.azure_ai_search.AzureAISearchRetriever.html) | ❌ | ✅ | langchain_community |\n",
"\n",
"\n",
"## Setup\n",
"\n",
"To use this module, you need:\n",
"\n",
"+ An Azure AI Search service. You can [create one](https://learn.microsoft.com/azure/search/search-create-service-portal) for free if you sign up for the Azure trial. A free service has lower quotas, but it's sufficient for running the code in this notebook.\n",
@@ -19,7 +43,40 @@
"\n",
"+ An API key. API keys are generated when you create the search service. If you're just querying an index, you can use the query API key, otherwise use an admin API key. See [Find your API keys](https://learn.microsoft.com/azure/search/search-security-api-keys?tabs=rest-use%2Cportal-find%2Cportal-query#find-existing-keys) for details.\n",
"\n",
"`AzureAISearchRetriever` replaces `AzureCognitiveSearchRetriever`, which will soon be deprecated. We recommend switching to the newer version that's based on the most recent stable version of the search APIs."
"We can then set the search service name, index name, and API key as environment variables (alternatively, you can pass them as arguments to `AzureAISearchRetriever`). The search index provides the searchable content."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6a56e83b-8563-4479-ab61-090fc79f5b00",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"AZURE_AI_SEARCH_SERVICE_NAME\"] = \"<YOUR_SEARCH_SERVICE_NAME>\"\n",
"os.environ[\"AZURE_AI_SEARCH_INDEX_NAME\"] = \"<YOUR_SEARCH_INDEX_NAME>\"\n",
"os.environ[\"AZURE_AI_SEARCH_API_KEY\"] = \"<YOUR_API_KEY>\""
]
},
{
"cell_type": "markdown",
"id": "3e635218-8634-4f39-abc5-39e319eeb136",
"metadata": {},
"source": [
"If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "88751b84-7cb7-4dd2-af35-c1e9b369d012",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
@@ -27,9 +84,9 @@
"id": "f99d4456",
"metadata": {},
"source": [
"## Install packages\n",
"### Installation\n",
"\n",
"Use azure-documents-search package 11.4 or later."
"This retriever lives in the `langchain-community` package. We will need some additional dependencies as well:"
]
},
{
@@ -39,9 +96,9 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain\n",
"%pip install --upgrade --quiet langchain-community\n",
"%pip install --upgrade --quiet langchain-openai\n",
"%pip install --upgrade --quiet azure-search-documents\n",
"%pip install --upgrade --quiet azure-search-documents>=11.4\n",
"%pip install --upgrade --quiet azure-identity"
]
},
@@ -50,7 +107,9 @@
"id": "0474661d",
"metadata": {},
"source": [
"## Import required libraries"
"## Instantiation\n",
"\n",
"For `AzureAISearchRetriever`, provide an `index_name`, `content_key`, and `top_k` set to the number of number of results you'd like to retrieve. Setting `top_k` to zero (the default) returns all results."
]
},
{
@@ -60,52 +119,8 @@
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from langchain_community.retrievers import AzureAISearchRetriever\n",
"\n",
"from langchain_community.retrievers import (\n",
" AzureAISearchRetriever,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "b7243e6d",
"metadata": {},
"source": [
"## Configure search settings\n",
"\n",
"Set the search service name, index name, and API key as environment variables (alternatively, you can pass them as arguments to `AzureAISearchRetriever`). The search index provides the searchable content. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "33fd23d1",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"AZURE_AI_SEARCH_SERVICE_NAME\"] = \"<YOUR_SEARCH_SERVICE_NAME>\"\n",
"os.environ[\"AZURE_AI_SEARCH_INDEX_NAME\"] = \"<YOUR_SEARCH_INDEX_NAME>\"\n",
"os.environ[\"AZURE_AI_SEARCH_API_KEY\"] = \"<YOUR_API_KEY>\""
]
},
{
"cell_type": "markdown",
"id": "057deaad",
"metadata": {},
"source": [
"## Create the retriever\n",
"\n",
"For `AzureAISearchRetriever`, provide an `index_name`, `content_key`, and `top_k` set to the number of number of results you'd like to retrieve. Setting `top_k` to zero (the default) returns all results."
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "c18d0c4c",
"metadata": {},
"outputs": [],
"source": [
"retriever = AzureAISearchRetriever(\n",
" content_key=\"content\", top_k=1, index_name=\"langchain-vector-demo\"\n",
")"
@@ -116,6 +131,8 @@
"id": "e94ea104",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"Now you can use it to retrieve documents from Azure AI Search. \n",
"This is the method you would call to do so. It will return all documents relevant to the query. "
]
@@ -259,6 +276,69 @@
"source": [
"retriever.invoke(\"does the president have a plan for covid-19?\")"
]
},
{
"cell_type": "markdown",
"id": "dd6c9ba9-978f-4e2c-9cc7-ccd1be58eafb",
"metadata": {},
"source": [
"## Use within a chain"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cbcd8ac6-12ea-4c22-8a98-c24825d598d7",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"Answer the question based only on the context provided.\n",
"\n",
"Context: {context}\n",
"\n",
"Question: {question}\"\"\"\n",
")\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "db80f3c7-83e1-4965-8ff2-a3dd66a07f0e",
"metadata": {},
"outputs": [],
"source": [
"chain.invoke(\"does the president have a plan for covid-19?\")"
]
},
{
"cell_type": "markdown",
"id": "a3d6140e-c2a0-40b2-a141-cab61ab39185",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `AzureAISearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.azure_ai_search.AzureAISearchRetriever.html)."
]
}
],
"metadata": {
@@ -277,7 +357,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -1,19 +1,86 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b0872249-1af5-4d54-b816-1babad7a8c9e",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Bedrock (Knowledge Bases)\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "b6636c27-35da-4ba7-8313-eca21660cab3",
"metadata": {},
"source": [
"# Bedrock (Knowledge Bases)\n",
"# Bedrock (Knowledge Bases) Retriever\n",
"\n",
"> [Knowledge bases for Amazon Bedrock](https://aws.amazon.com/bedrock/knowledge-bases/) is an Amazon Web Services (AWS) offering which lets you quickly build RAG applications by using your private data to customize FM response.\n",
"## Overview\n",
"\n",
"> Implementing `RAG` requires organizations to perform several cumbersome steps to convert data into embeddings (vectors), store the embeddings in a specialized vector database, and build custom integrations into the database to search and retrieve text relevant to the users query. This can be time-consuming and inefficient.\n",
"This guide will help you getting started with the AWS Knowledge Bases [retriever](/docs/concepts/#retrievers).\n",
"\n",
"> With `Knowledge Bases for Amazon Bedrock`, simply point to the location of your data in `Amazon S3`, and `Knowledge Bases for Amazon Bedrock` takes care of the entire ingestion workflow into your vector database. If you do not have an existing vector database, Amazon Bedrock creates an Amazon OpenSearch Serverless vector store for you. For retrievals, use the Langchain - Amazon Bedrock integration via the Retrieve API to retrieve relevant results for a user query from knowledge bases.\n",
"[Knowledge Bases for Amazon Bedrock](https://aws.amazon.com/bedrock/knowledge-bases/) is an Amazon Web Services (AWS) offering which lets you quickly build RAG applications by using your private data to customize FM response.\n",
"\n",
"> Knowledge base can be configured through [AWS Console](https://aws.amazon.com/console/) or by using [AWS SDKs](https://aws.amazon.com/developer/tools/)."
"Implementing `RAG` requires organizations to perform several cumbersome steps to convert data into embeddings (vectors), store the embeddings in a specialized vector database, and build custom integrations into the database to search and retrieve text relevant to the users query. This can be time-consuming and inefficient.\n",
"\n",
"With `Knowledge Bases for Amazon Bedrock`, simply point to the location of your data in `Amazon S3`, and `Knowledge Bases for Amazon Bedrock` takes care of the entire ingestion workflow into your vector database. If you do not have an existing vector database, Amazon Bedrock creates an Amazon OpenSearch Serverless vector store for you. For retrievals, use the Langchain - Amazon Bedrock integration via the Retrieve API to retrieve relevant results for a user query from knowledge bases.\n",
"\n",
"### Integration details\n",
"\n",
"| Retriever | Self-host | Cloud offering | Package |\n",
"| :--- | :--- | :---: | :---: |\n",
"[AmazonKnowledgeBasesRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_aws.retrievers.bedrock.AmazonKnowledgeBasesRetriever.html) | ❌ | ✅ | langchain_aws |\n"
]
},
{
"cell_type": "markdown",
"id": "cd092536-61bd-4b3f-9050-076daccc9e72",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"Knowledge Bases can be configured through [AWS Console](https://aws.amazon.com/console/) or by using [AWS SDKs](https://aws.amazon.com/developer/tools/). We will need the `knowledge_base_id` to instantiate the retriever."
]
},
{
"cell_type": "markdown",
"id": "238c0ceb-d4b6-409e-bed9-d30143d2f2c9",
"metadata": {},
"source": [
"If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e4426098-820c-48dc-9826-056a91bebe9e",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "4ede6277-ea56-45f6-8ef4-fe14734ee279",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"This retriever lives in the `langchain-aws` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4db1af24-0969-43bd-8438-af5e3024b0d0",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-aws"
]
},
{
@@ -21,17 +88,9 @@
"id": "b34c8cbe-c6e5-4398-adf1-4925204bcaed",
"metadata": {},
"source": [
"## Using the Knowledge Bases Retriever"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "26c97d36-911c-4fe0-a478-546192728f30",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet boto3"
"## Instantiation\n",
"\n",
"Now we can instantiate our retriever:"
]
},
{
@@ -41,7 +100,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.retrievers import AmazonKnowledgeBasesRetriever\n",
"from langchain_aws.retrievers import AmazonKnowledgeBasesRetriever\n",
"\n",
"retriever = AmazonKnowledgeBasesRetriever(\n",
" knowledge_base_id=\"PUIJP4EQUA\",\n",
@@ -49,6 +108,14 @@
")"
]
},
{
"cell_type": "markdown",
"id": "9dff39f8-b6ba-41bf-b95b-d345928ed07d",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -66,7 +133,7 @@
"id": "7de9b61b-597b-4aba-95fb-49d11e84510e",
"metadata": {},
"source": [
"### Using in a QA Chain"
"## Use within a chain"
]
},
{
@@ -78,7 +145,7 @@
"source": [
"from botocore.client import Config\n",
"from langchain.chains import RetrievalQA\n",
"from langchain_community.llms import Bedrock\n",
"from langchain_aws import Bedrock\n",
"\n",
"model_kwargs_claude = {\"temperature\": 0, \"top_k\": 10, \"max_tokens_to_sample\": 3000}\n",
"\n",
@@ -90,6 +157,16 @@
"\n",
"qa(query)"
]
},
{
"cell_type": "markdown",
"id": "22e2538a-e042-4997-bb81-b68ecb27d665",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `AmazonKnowledgeBasesRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_aws.retrievers.bedrock.AmazonKnowledgeBasesRetriever.html)."
]
}
],
"metadata": {
@@ -108,7 +185,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -2,14 +2,72 @@
"cells": [
{
"cell_type": "markdown",
"id": "ab66dd43",
"id": "41ccce84-f6d9-4ba0-8281-22cbf29f20d3",
"metadata": {},
"source": [
"# Elasticsearch\n",
"---\n",
"sidebar_label: Elasticsearch\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "54c4d916-05db-4e01-9893-c711904205b3",
"metadata": {},
"source": [
"# ElasticsearchRetriever\n",
"\n",
"## Overview\n",
">[Elasticsearch](https://www.elastic.co/elasticsearch/) is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. It supports keyword search, vector search, hybrid search and complex filtering.\n",
"\n",
"The `ElasticsearchRetriever` is a generic wrapper to enable flexible access to all `Elasticsearch` features through the [Query DSL](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html). For most use cases the other classes (`ElasticsearchStore`, `ElasticsearchEmbeddings`, etc.) should suffice, but if they don't you can use `ElasticsearchRetriever`."
"The `ElasticsearchRetriever` is a generic wrapper to enable flexible access to all `Elasticsearch` features through the [Query DSL](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html). For most use cases the other classes (`ElasticsearchStore`, `ElasticsearchEmbeddings`, etc.) should suffice, but if they don't you can use `ElasticsearchRetriever`.\n",
"\n",
"This guide will help you getting started with the Elasticsearch [retriever](/docs/concepts/#retrievers). For detailed documentation of all `ElasticsearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_elasticsearch.retrievers.ElasticsearchRetriever.html).\n",
"\n",
"### Integration details\n",
"\n",
"| Retriever | Self-host | Cloud offering | Package |\n",
"| :--- | :--- | :---: | :---: |\n",
"[ElasticsearchRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_elasticsearch.retrievers.ElasticsearchRetriever.html) | ✅ | ✅ | langchain_elasticsearch |\n",
"\n",
"\n",
"## Setup\n",
"\n",
"There are two main ways to set up an Elasticsearch instance:\n",
"\n",
"- Elastic Cloud: [Elastic Cloud](https://cloud.elastic.co/) is a managed Elasticsearch service. Sign up for a [free trial](https://www.elastic.co/cloud/cloud-trial-overview).\n",
"To connect to an Elasticsearch instance that does not require login credentials (starting the docker instance with security enabled), pass the Elasticsearch URL and index name along with the embedding object to the constructor.\n",
"\n",
"- Local Install Elasticsearch: Get started with Elasticsearch by running it locally. The easiest way is to use the official Elasticsearch Docker image. See the [Elasticsearch Docker documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html) for more information."
]
},
{
"cell_type": "markdown",
"id": "e13a7b58-3a56-4ce6-a4d5-81a8dd2080df",
"metadata": {},
"source": [
"If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "492b81d0-c85b-4693-ae4f-3f33da571ddd",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "78335745-f14d-411d-9c06-64ff83eb9358",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"This retriever lives in the `langchain-elasticsearch` package. For demonstration purposes, we will also install `langchain-community` to generate text [embeddings](/docs/concepts/#embedding-models)."
]
},
{
@@ -21,7 +79,7 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet elasticsearch langchain-elasticsearch"
"%pip install -qU langchain-community langchain-elasticsearch"
]
},
{
@@ -48,7 +106,7 @@
"id": "24c0d140",
"metadata": {},
"source": [
"## Configure\n",
"### Configure\n",
"\n",
"Here we define the conncection to Elasticsearch. In this example we use a locally running instance. Alternatively, you can make an account in [Elastic Cloud](https://cloud.elastic.co/) and start a [free trial](https://www.elastic.co/cloud/cloud-trial-overview)."
]
@@ -70,7 +128,7 @@
"id": "60aa7c20",
"metadata": {},
"source": [
"For vector search, we are going to use random embeddings just for illustration. For real use cases, pick one of the available LangChain `Embeddings` classes."
"For vector search, we are going to use random embeddings just for illustration. For real use cases, pick one of the available LangChain [Embeddings](/docs/integrations/text_embedding) classes."
]
},
{
@@ -88,7 +146,7 @@
"id": "b4eea654",
"metadata": {},
"source": [
"## Define example data"
"#### Define example data"
]
},
{
@@ -118,7 +176,7 @@
"id": "1c518c42",
"metadata": {},
"source": [
"## Index data\n",
"#### Index data\n",
"\n",
"Typically, users make use of `ElasticsearchRetriever` when they already have data in an Elasticsearch index. Here we index some example text documents. If you created an index for example using `ElasticsearchStore.from_documents` that's also fine."
]
@@ -209,14 +267,8 @@
"id": "08437fa2",
"metadata": {},
"source": [
"## Usage examples"
]
},
{
"cell_type": "markdown",
"id": "469aa295",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"### Vector search\n",
"\n",
"Dense vector retrival using fake embeddings in this example."
@@ -543,6 +595,91 @@
"\n",
"custom_mapped_retriever.invoke(\"foo\")"
]
},
{
"cell_type": "markdown",
"id": "1663feff-4527-4fb0-9395-b28af5c9ec99",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"Following the above examples, we use `.invoke` to issue a single query. Because retrievers are Runnables, we can use any method in the [Runnable interface](/docs/concepts/#runnable-interface), such as `.batch`, as well."
]
},
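{
"cell_type": "markdown",
"id": "2b9d1c6a-0f63-4a42-9a1b-7c3f51f2e8d0",
"metadata": {},
"source": [
"For example, a minimal sketch issuing two queries at once against the `vector_retriever` defined above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3c7e8f21-5a4d-4b0e-8f2a-1d9c6b7e4a55",
"metadata": {},
"outputs": [],
"source": [
"# .batch runs each query through the retriever and returns a list of results\n",
"vector_retriever.batch([\"foo\", \"bar\"])"
]
},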
{
"cell_type": "markdown",
"id": "f4f946ed-ff3a-43d7-9e0d-7983ff13c868",
"metadata": {},
"source": [
"## Use within a chain\n",
"\n",
"We can also incorporate retrievers into [chains](/docs/how_to/sequence/) to build larger applications, such as a simple [RAG](/docs/tutorials/rag/) application. For demonstration purposes, we instantiate an OpenAI chat model as well."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "19302ef1-dd49-4f9c-8d87-4ea23b8296e2",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "832857a7-3b16-4a85-acc7-28efe6ebdae8",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"Answer the question based only on the context provided.\n",
"\n",
"Context: {context}\n",
"\n",
"Question: {question}\"\"\"\n",
")\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"chain = (\n",
" {\"context\": vector_retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7317942b-7c9a-477d-ba11-3421da804a22",
"metadata": {},
"outputs": [],
"source": [
"chain.invoke(\"what is foo?\")"
]
},
{
"cell_type": "markdown",
"id": "eeb49714-ba5a-4b10-8e58-67d061a486d1",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `ElasticsearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_elasticsearch.retrievers.ElasticsearchRetriever.html)."
]
}
],
"metadata": {
@@ -561,7 +698,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -1,27 +1,44 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Google Vertex AI Search\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Vertex AI Search\n",
"\n",
"## Overview\n",
"\n",
">[Google Vertex AI Search](https://cloud.google.com/enterprise-search) (formerly known as `Enterprise Search` on `Generative AI App Builder`) is a part of the [Vertex AI](https://cloud.google.com/vertex-ai) machine learning platform offered by `Google Cloud`.\n",
">\n",
">`Vertex AI Search` lets organizations quickly build generative AI-powered search engines for customers and employees. It's underpinned by a variety of `Google Search` technologies, including semantic search, which helps deliver more relevant results than traditional keyword-based search techniques by using natural language processing and machine learning techniques to infer relationships within the content and intent from the user's query input. Vertex AI Search also benefits from Google's expertise in understanding how users search and factors in content relevance to order displayed results.\n",
"\n",
">`Vertex AI Search` is available in the `Google Cloud Console` and via an API for enterprise workflow integration.\n",
"\n",
"This notebook demonstrates how to configure `Vertex AI Search` and use the Vertex AI Search retriever. The Vertex AI Search retriever encapsulates the [Python client library](https://cloud.google.com/generative-ai-app-builder/docs/libraries#client-libraries-install-python) and uses it to access the [Search Service API](https://cloud.google.com/python/docs/reference/discoveryengine/latest/google.cloud.discoveryengine_v1beta.services.search_service).\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install pre-requisites\n",
"This notebook demonstrates how to configure `Vertex AI Search` and use the Vertex AI Search [retriever](/docs/concepts/#retrievers). The Vertex AI Search retriever encapsulates the [Python client library](https://cloud.google.com/generative-ai-app-builder/docs/libraries#client-libraries-install-python) and uses it to access the [Search Service API](https://cloud.google.com/python/docs/reference/discoveryengine/latest/google.cloud.discoveryengine_v1beta.services.search_service).\n",
"\n",
"You need to install the `google-cloud-discoveryengine` package to use the Vertex AI Search retriever.\n"
"For detailed documentation of all `VertexAISearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/vertex_ai_search/langchain_google_community.vertex_ai_search.VertexAISearchRetriever.html).\n",
"\n",
"### Integration details\n",
"\n",
"| Retriever | Self-host | Cloud offering | Package |\n",
"| :--- | :--- | :---: | :---: |\n",
"[VertexAISearchRetriever](https://api.python.langchain.com/en/latest/vertex_ai_search/langchain_google_community.vertex_ai_search.VertexAISearchRetriever.html) | ❌ | ✅ | langchain_google_community |\n",
"\n",
"\n",
"## Setup\n",
"\n",
"### Installation\n",
"\n",
"You need to install the `langchain-google-community` and `google-cloud-discoveryengine` packages to use the Vertex AI Search retriever."
]
},
{
@@ -30,14 +47,14 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-cloud-discoveryengine"
"%pip install -qU langchain-google-community google-cloud-discoveryengine"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure access to Google Cloud and Vertex AI Search\n",
"### Configure access to Google Cloud and Vertex AI Search\n",
"\n",
"Vertex AI Search is generally available without allowlist as of August 2023.\n",
"\n",
@@ -48,7 +65,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a search engine and populate an unstructured data store\n",
"#### Create a search engine and populate an unstructured data store\n",
"\n",
"- Follow the instructions in the [Vertex AI Search Getting Started guide](https://cloud.google.com/generative-ai-app-builder/docs/try-enterprise-search) to set up a Google Cloud project and Vertex AI Search.\n",
"- [Use the Google Cloud Console to create an unstructured data store](https://cloud.google.com/generative-ai-app-builder/docs/create-engine-es#unstructured-data)\n",
@@ -60,7 +77,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set credentials to access Vertex AI Search API\n",
"#### Set credentials to access Vertex AI Search API\n",
"\n",
"The [Vertex AI Search client libraries](https://cloud.google.com/generative-ai-app-builder/docs/libraries) used by the Vertex AI Search retriever provide high-level language support for authenticating to Google Cloud programmatically.\n",
"Client libraries support [Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/application-default-credentials); the libraries look for credentials in a set of defined locations and use those credentials to authenticate requests to the API.\n",
@@ -87,16 +104,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure and use the Vertex AI Search retriever\n",
"### Configure and use the Vertex AI Search retriever\n",
"\n",
"The Vertex AI Search retriever is implemented in the `langchain.retriever.GoogleVertexAISearchRetriever` class. The `get_relevant_documents` method returns a list of `langchain.schema.Document` documents where the `page_content` field of each document is populated the document content.\n",
"The Vertex AI Search retriever is implemented in the `langchain_google_community.VertexAISearchRetriever` class. The `get_relevant_documents` method returns a list of `langchain.schema.Document` documents where the `page_content` field of each document is populated the document content.\n",
"Depending on the data type used in Vertex AI Search (website, structured or unstructured) the `page_content` field is populated as follows:\n",
"\n",
"- Website with advanced indexing: an `extractive answer` that matches a query. The `metadata` field is populated with metadata (if any) of the document from which the segments or answers were extracted.\n",
"- Unstructured data source: either an `extractive segment` or an `extractive answer` that matches a query. The `metadata` field is populated with metadata (if any) of the document from which the segments or answers were extracted.\n",
"- Structured data source: a string json containing all the fields returned from the structured data source. The `metadata` field is populated with metadata (if any) of the document\n",
"\n",
"### Extractive answers & extractive segments\n",
"#### Extractive answers & extractive segments\n",
"\n",
"An extractive answer is verbatim text that is returned with each search result. It is extracted directly from the original document. Extractive answers are typically displayed near the top of web pages to provide an end user with a brief answer that is contextually relevant to their query. Extractive answers are available for website and unstructured search.\n",
"\n",
@@ -108,7 +125,7 @@
"\n",
"When creating an instance of the retriever you can specify a number of parameters that control which data store to access and how a natural language query is processed, including configurations for extractive answers and segments.\n",
"\n",
"### The mandatory parameters are:\n",
"#### The mandatory parameters are:\n",
"\n",
"- `project_id` - Your Google Cloud Project ID.\n",
"- `location_id` - The location of the data store.\n",
@@ -148,15 +165,15 @@
"\n",
"To update to the new retriever, make the following changes:\n",
"\n",
"- Change the import from: `from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever` -> `from langchain.retrievers import GoogleVertexAISearchRetriever`.\n",
"- Change all class references from `GoogleCloudEnterpriseSearchRetriever` -> `GoogleVertexAISearchRetriever`.\n"
"- Change the import from: `from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever` -> `from langchain_google_community import VertexAISearchRetriever`.\n",
"- Change all class references from `GoogleCloudEnterpriseSearchRetriever` -> `VertexAISearchRetriever`.\n"
]
},
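A minimal before/after sketch of that migration (the constructor arguments shown are illustrative placeholders, mirroring the parameters used later in this notebook):

```python
# Before (deprecated):
# from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever
# retriever = GoogleCloudEnterpriseSearchRetriever(...)

# After:
from langchain_google_community import VertexAISearchRetriever

retriever = VertexAISearchRetriever(
    project_id="<YOUR PROJECT ID>",  # illustrative placeholders
    location_id="global",
    data_store_id="<YOUR DATA STORE ID>",
)
```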
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure and use the retriever for **unstructured** data with extractive segments\n"
"Note: When using the retriever, if you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
@@ -165,9 +182,28 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.retrievers import (\n",
" GoogleVertexAIMultiTurnSearchRetriever,\n",
" GoogleVertexAISearchRetriever,\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"### Configure and use the retriever for **unstructured** data with extractive segments"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_community import (\n",
" VertexAIMultiTurnSearchRetriever,\n",
" VertexAISearchRetriever,\n",
")\n",
"\n",
"PROJECT_ID = \"<YOUR PROJECT ID>\" # Set to your Project ID\n",
@@ -182,7 +218,7 @@
"metadata": {},
"outputs": [],
"source": [
"retriever = GoogleVertexAISearchRetriever(\n",
"retriever = VertexAISearchRetriever(\n",
" project_id=PROJECT_ID,\n",
" location_id=LOCATION_ID,\n",
" data_store_id=DATA_STORE_ID,\n",
@@ -216,7 +252,7 @@
"metadata": {},
"outputs": [],
"source": [
"retriever = GoogleVertexAISearchRetriever(\n",
"retriever = VertexAISearchRetriever(\n",
" project_id=PROJECT_ID,\n",
" location_id=LOCATION_ID,\n",
" data_store_id=DATA_STORE_ID,\n",
@@ -243,7 +279,7 @@
"metadata": {},
"outputs": [],
"source": [
"retriever = GoogleVertexAISearchRetriever(\n",
"retriever = VertexAISearchRetriever(\n",
" project_id=PROJECT_ID,\n",
" location_id=LOCATION_ID,\n",
" data_store_id=DATA_STORE_ID,\n",
@@ -269,7 +305,7 @@
"metadata": {},
"outputs": [],
"source": [
"retriever = GoogleVertexAISearchRetriever(\n",
"retriever = VertexAISearchRetriever(\n",
" project_id=PROJECT_ID,\n",
" location_id=LOCATION_ID,\n",
" data_store_id=DATA_STORE_ID,\n",
@@ -297,7 +333,7 @@
"metadata": {},
"outputs": [],
"source": [
"retriever = GoogleVertexAISearchRetriever(\n",
"retriever = VertexAISearchRetriever(\n",
" project_id=PROJECT_ID,\n",
" location_id=LOCATION_ID,\n",
" search_engine_id=SEARCH_ENGINE_ID,\n",
@@ -325,7 +361,7 @@
"metadata": {},
"outputs": [],
"source": [
"retriever = GoogleVertexAIMultiTurnSearchRetriever(\n",
"retriever = VertexAIMultiTurnSearchRetriever(\n",
" project_id=PROJECT_ID, location_id=LOCATION_ID, data_store_id=DATA_STORE_ID\n",
")\n",
"\n",
@@ -333,6 +369,85 @@
"for doc in result:\n",
" print(doc)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"Following the above examples, we use `.invoke` to issue a single query. Because retrievers are Runnables, we can use any method in the [Runnable interface](/docs/concepts/#runnable-interface), such as `.batch`, as well."
]
},
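For example, a minimal sketch of batching with the retriever created above (the queries are illustrative):

```python
# Retrievers implement the Runnable interface, so `.batch` accepts a
# list of queries and returns one list of Documents per query.
queries = [
    "What is Vertex AI Search?",  # illustrative queries
    "How do extractive answers work?",
]
for docs in retriever.batch(queries):
    print(len(docs), "documents retrieved")
```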
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use within a chain\n",
"\n",
"We can also incorporate retrievers into [chains](/docs/how_to/sequence/) to build larger applications, such as a simple [RAG](/docs/tutorials/rag/) application. For demonstration purposes, we instantiate a VertexAI chat model as well. See the corresponding Vertex [integration docs](/docs/integrations/chat/google_vertex_ai_palm/) for setup instructions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-google-vertexai"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_google_vertexai import ChatVertexAI\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"Answer the question based only on the context provided.\n",
"\n",
"Context: {context}\n",
"\n",
"Question: {question}\"\"\"\n",
")\n",
"\n",
"llm = ChatVertexAI(model_name=\"chat-bison\", temperature=0)\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chain.invoke(query)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `VertexAISearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/vertex_ai_search/langchain_google_community.vertex_ai_search.VertexAISearchRetriever.html)."
]
}
],
"metadata": {
@@ -351,7 +466,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.4"
}
},
"nbformat": 4,


@@ -5,23 +5,37 @@ sidebar_class_name: hidden
# Retrievers
A **retriever** is an interface that returns documents given an unstructured query.
A [retriever](/docs/concepts/#retrievers) is an interface that returns documents given an unstructured query.
It is more general than a vector store.
A retriever does not need to be able to store documents, only to return (or retrieve) them.
Retrievers can be created from vector stores, but are also broad enough to include [Wikipedia search](/docs/integrations/retrievers/wikipedia/) and [Amazon Kendra](/docs/integrations/retrievers/amazon_kendra_retriever/).
Retrievers accept a string query as input and return a list of Document's as output.
Retrievers accept a string query as input and return a list of [Documents](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) as output.
For specifics on how to use retrievers, see the [relevant how-to guides here](/docs/how_to/#retrievers).
This table lists common retrievers.
Note that all [vector stores](/docs/concepts/#vector-stores) can be [cast to retrievers](/docs/how_to/vectorstore_retriever/).
Refer to the vector store [integration docs](/docs/integrations/vectorstores/) for available vector stores.
This page lists custom retrievers, implemented via subclassing [BaseRetriever](/docs/how_to/custom_retriever/).
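As a minimal sketch of the vector-store path (`vectorstore` stands in for any already-constructed LangChain vector store):

```python
# Any vector store can be cast to a retriever; `vectorstore` is assumed
# to be an existing vector store instance.
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
docs = retriever.invoke("my query")
```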
## Bring-your-own documents
| Retriever | Namespace | Native async | Local |
|-----------|-----------|---------------|------|
| [AmazonKnowledgeBasesRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_aws.retrievers.bedrock.AmazonKnowledgeBasesRetriever.html) | langchain_aws.retrievers | ❌ | ❌ |
| [AzureAISearchRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.azure_ai_search.AzureAISearchRetriever.html) | langchain_community.retrievers | ✅ | ❌ |
| [ElasticsearchRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_elasticsearch.retrievers.ElasticsearchRetriever.html) | langchain_elasticsearch | ❌ | ❌ |
| [MilvusCollectionHybridSearchRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html) | langchain_milvus | ❌ | ❌ |
| [TavilySearchAPIRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.tavily_search_api.TavilySearchAPIRetriever.html) | langchain_community.retrievers | ❌ | ❌ |
| [VertexAISearchRetriever](https://api.python.langchain.com/en/latest/vertex_ai_search/langchain_google_community.vertex_ai_search.VertexAISearchRetriever.html) | langchain_google_community.vertex_ai_search | ❌ | ❌ |
The below retrievers allow you to index and search a custom corpus of documents.
| Retriever | Self-host | Cloud offering | Package |
|-----------|-----------|----------------|---------|
| [AmazonKnowledgeBasesRetriever](/docs/integrations/retrievers/bedrock) | ❌ | ✅ | [langchain_aws](https://api.python.langchain.com/en/latest/retrievers/langchain_aws.retrievers.bedrock.AmazonKnowledgeBasesRetriever.html) |
| [AzureAISearchRetriever](/docs/integrations/retrievers/azure_ai_search) | ❌ | ✅ | [langchain_community](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.azure_ai_search.AzureAISearchRetriever.html) |
| [ElasticsearchRetriever](/docs/integrations/retrievers/elasticsearch_retriever) | ✅ | ✅ | [langchain_elasticsearch](https://api.python.langchain.com/en/latest/retrievers/langchain_elasticsearch.retrievers.ElasticsearchRetriever.html) |
| [MilvusCollectionHybridSearchRetriever](/docs/integrations/retrievers/milvus_hybrid_search) | ✅ | ❌ | [langchain_milvus](https://api.python.langchain.com/en/latest/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html) |
| [VertexAISearchRetriever](/docs/integrations/retrievers/google_vertex_ai_search) | ❌ | ✅ | [langchain_google_community](https://api.python.langchain.com/en/latest/vertex_ai_search/langchain_google_community.vertex_ai_search.VertexAISearchRetriever.html) |
## External index
The below retrievers will search over an external index (e.g., constructed from Internet data or similar).
| Retriever | Source | Package |
|-----------|--------|---------|
| [ArxivRetriever](/docs/integrations/retrievers/arxiv) | Scholarly articles on [arxiv.org](https://arxiv.org/) | [langchain_community](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.arxiv.ArxivRetriever.html) |
| [TavilySearchAPIRetriever](/docs/integrations/retrievers/tavily) | Internet search | [langchain_community](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.tavily_search_api.TavilySearchAPIRetriever.html) |
| [WikipediaRetriever](/docs/integrations/retrievers/wikipedia) | [Wikipedia](https://www.wikipedia.org/) articles | [langchain_community](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.wikipedia.WikipediaRetriever.html) |


@@ -2,21 +2,48 @@
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"metadata": {},
"source": [
"# Milvus Hybrid Search\n",
"---\n",
"sidebar_label: Milvus Hybrid Search\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Milvus Hybrid Search Retriever\n",
"\n",
"## Overview\n",
"\n",
"> [Milvus](https://milvus.io/docs) is an open-source vector database built to power embedding similarity search and AI applications. Milvus makes unstructured data search more accessible, and provides a consistent user experience regardless of the deployment environment.\n",
"\n",
"This notebook goes over how to use the Milvus Hybrid Search retriever, which combines the strengths of both dense and sparse vector search.\n",
"This will help you getting started with the Milvus Hybrid Search [retriever](/docs/concepts/#retrievers), which combines the strengths of both dense and sparse vector search. For detailed documentation of all `MilvusCollectionHybridSearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html).\n",
"\n",
"For more reference please go to [Milvus Multi-Vector Search](https://milvus.io/docs/multi-vector-search.md)\n",
"\n"
"See also the Milvus Multi-Vector Search [docs](https://milvus.io/docs/multi-vector-search.md).\n",
"\n",
"### Integration details\n",
"\n",
"| Retriever | Self-host | Cloud offering | Package |\n",
"| :--- | :--- | :---: | :---: |\n",
"[MilvusCollectionHybridSearchRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html) | ✅ | ❌ | langchain_milvus |\n",
"\n",
"\n",
"\n",
"## Setup\n",
"\n",
"If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
@@ -28,9 +55,9 @@
}
},
"source": [
"## Prerequisites\n",
"### Install dependencies\n",
"You need to prepare to install the following dependencies\n"
"### Installation\n",
"\n",
"This retriever lives in the `langchain-milvus` package. This guide requires the following dependencies:"
]
},
{
@@ -50,32 +77,18 @@
"%pip install --upgrade --quiet pymilvus[model] langchain-milvus langchain-openai"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"Import necessary modules and classes"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_milvus.retrievers import MilvusCollectionHybridSearchRetriever\n",
"from langchain_milvus.utils.sparse import BM25SparseEmbedding\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"from pymilvus import (\n",
" Collection,\n",
" CollectionSchema,\n",
@@ -86,34 +99,15 @@
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_milvus.retrievers import MilvusCollectionHybridSearchRetriever\n",
"from langchain_milvus.utils.sparse import BM25SparseEmbedding\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"metadata": {},
"source": [
"### Start the Milvus service\n",
"\n",
"Please refer to the [Milvus documentation](https://milvus.io/docs/install_standalone-docker.md) to start the Milvus service.\n",
"\n",
"After starting milvus, you need to specify your milvus connection URI.\n"
"After starting milvus, you need to specify your milvus connection URI."
]
},
{
@@ -155,11 +149,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Prepare data and Load\n",
"### Prepare dense and sparse embedding functions\n",
"\n",
" Let us fictionalize 10 fake descriptions of novels. In actual production, it may be a large amount of text data."
"Let us fictionalize 10 fake descriptions of novels. In actual production, it may be a large amount of text data."
]
},
{
@@ -379,15 +371,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build RAG chain with Retriever\n",
"### Create the Retriever\n",
"## Instantiation\n",
"\n",
"Define search parameters for sparse and dense fields, and create a retriever"
"Now we can instantiate our retriever, defining search parameters for sparse and dense fields:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -416,6 +407,13 @@
"In the input parameters of this Retriever, we use a dense embedding and a sparse embedding to perform hybrid search on the two fields of this Collection, and use WeightedRanker for reranking. Finally, 3 top-K Documents will be returned."
]
},
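A hedged sketch of what that instantiation can look like, using `WeightedRanker` from `pymilvus`; the collection, field names, embedding functions, and search parameters below stand in for objects defined earlier in the notebook and are not fixed by the source:

```python
from langchain_milvus.retrievers import MilvusCollectionHybridSearchRetriever
from pymilvus import WeightedRanker

# Assumed names: `collection`, the two vector field names, the embedding
# functions, and the per-field search params mirror earlier cells.
retriever = MilvusCollectionHybridSearchRetriever(
    collection=collection,
    rerank=WeightedRanker(0.5, 0.5),  # equal weight for dense and sparse hits
    anns_fields=[dense_field, sparse_field],
    field_embeddings=[dense_embedding_func, sparse_embedding_func],
    field_search_params=[dense_search_params, sparse_search_params],
    top_k=3,
    text_field=text_field,
)
```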
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": 14,
@@ -442,7 +440,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Build the RAG chain\n",
"## Use within a chain\n",
"\n",
"Initialize ChatOpenAI and define a prompt template"
]
@@ -610,6 +608,15 @@
"source": [
"collection.drop()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `MilvusCollectionHybridSearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html)."
]
}
],
"metadata": {
@@ -628,9 +635,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
}


@@ -22,9 +22,9 @@
"\n",
"### Integration details\n",
"\n",
"| Retriever | Namespace | Native async | Local |\n",
"| :--- | :--- | :---: | :---: |\n",
"[TavilySearchAPIRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.tavily_search_api.TavilySearchAPIRetriever.html) | langchain_community.retrievers | ❌ | ❌ |\n",
"| Retriever | Source | Package |\n",
"| :--- | :--- | :---: |\n",
"[TavilySearchAPIRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.tavily_search_api.TavilySearchAPIRetriever.html) | Internet search | langchain_community |\n",
"\n",
"## Setup"
]
@@ -33,7 +33,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
"If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{


@@ -2,14 +2,51 @@
"cells": [
{
"cell_type": "markdown",
"id": "9fc6205b",
"id": "62727aaa-bcff-4087-891c-e539f824ee1f",
"metadata": {},
"source": [
"# Wikipedia\n",
"---\n",
"sidebar_label: Wikipedia\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "d62a16c1-10de-4f99-b392-c4ad2e6123a1",
"metadata": {},
"source": [
"# WikipediaRetriever\n",
"\n",
"## Overview\n",
">[Wikipedia](https://wikipedia.org/) is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. `Wikipedia` is the largest and most-read reference work in history.\n",
"\n",
"This notebook shows how to retrieve wiki pages from `wikipedia.org` into the Document format that is used downstream."
"This notebook shows how to retrieve wiki pages from `wikipedia.org` into the [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) format that is used downstream.\n",
"\n",
"### Integration details\n",
"\n",
"| Retriever | Source | Package |\n",
"| :--- | :--- | :---: |\n",
"[WikipediaRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.wikipedia.WikipediaRetriever.html) | [Wikipedia](https://www.wikipedia.org/) articles | langchain_community |"
]
},
{
"cell_type": "markdown",
"id": "eb7d377c-168b-40e8-bd61-af6a4fb1b44f",
"metadata": {},
"source": [
"## Setup\n",
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1bbc6013-2617-4f7e-9d8b-7453d09315c0",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
@@ -17,15 +54,9 @@
"id": "51489529-5dcd-4b86-bda6-de0a39d8ffd1",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "markdown",
"id": "1435c804-069d-4ade-9a7b-006b97b767c1",
"metadata": {},
"source": [
"First, you need to install `wikipedia` python package."
"### Installation\n",
"\n",
"The integration lives in the `langchain-community` package. We also need to install the `wikipedia` python package itself."
]
},
{
@@ -37,7 +68,15 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet wikipedia"
"%pip install -qU langchain_community wikipedia"
]
},
{
"cell_type": "markdown",
"id": "ae622ac6-d18a-4754-a4bd-d30a078c19b5",
"metadata": {},
"source": [
"## Instantiation"
]
},
{
@@ -45,7 +84,9 @@
"id": "6c15470b-a16b-4e0d-bc6a-6998bafbb5a4",
"metadata": {},
"source": [
"`WikipediaRetriever` has these arguments:\n",
"Now we can instantiate our retriever:\n",
"\n",
"`WikipediaRetriever` parameters include:\n",
"- optional `lang`: default=\"en\". Use it to search in a specific language part of Wikipedia\n",
"- optional `load_max_docs`: default=100. Use it to limit number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.\n",
"- optional `load_all_available_meta`: default=False. By default only the most important fields downloaded: `Published` (date when document was published/last updated), `title`, `Summary`. If True, other fields also downloaded.\n",
@@ -53,200 +94,149 @@
"`get_relevant_documents()` has one argument, `query`: free text which used to find documents in Wikipedia"
]
},
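For example, a sketch of a non-default configuration (all parameter values below are illustrative):

```python
from langchain_community.retrievers import WikipediaRetriever

# Search the German-language Wikipedia, fetch at most two documents,
# and keep all available metadata fields on each Document.
retriever = WikipediaRetriever(
    lang="de",
    load_max_docs=2,
    load_all_available_meta=True,
)
docs = retriever.invoke("Zugspitze")  # illustrative query
```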
{
"cell_type": "markdown",
"id": "ae3c3d16",
"metadata": {},
"source": [
"## Examples"
]
},
{
"cell_type": "markdown",
"id": "6fafb73b-d6ec-4822-b161-edf0aaf5224a",
"metadata": {},
"source": [
"### Running retriever"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "d0e6f506",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.retrievers import WikipediaRetriever"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "f381f642",
"execution_count": 1,
"id": "b78f0cd0-ffea-4fe3-9d1d-54639c4ef1ff",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.retrievers import WikipediaRetriever\n",
"\n",
"retriever = WikipediaRetriever()"
]
},
{
"cell_type": "markdown",
"id": "12aead36-7b97-4d9c-82e7-ec644a3127f9",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "20ae1a74",
"execution_count": 2,
"id": "54a76605-6b1e-44bf-b8a2-7d48119290c4",
"metadata": {},
"outputs": [],
"source": [
"docs = retriever.invoke(\"HUNTER X HUNTER\")"
"docs = retriever.invoke(\"TOKYO GHOUL\")"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "1d5a5088",
"execution_count": 3,
"id": "65ada2b7-3507-4dcb-9982-5f8f4e97a2e1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'Hunter × Hunter',\n",
" 'summary': 'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced \"hunter hunter\") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\\'s snen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The story focuses on a young boy named Gon Freecss who discovers that his father, who left him at a young age, is actually a world-renowned Hunter, a licensed professional who specializes in fantastical pursuits such as locating rare or unidentified animal species, treasure hunting, surveying unexplored enclaves, or hunting down lawless individuals. Gon departs on a journey to become a Hunter and eventually find his father. Along the way, Gon meets various other Hunters and encounters the paranormal.\\nHunter × Hunter was adapted into a 62-episode anime television series produced by Nippon Animation and directed by Kazuhiro Furuhashi, which ran on Fuji Television from October 1999 to March 2001. Three separate original video animations (OVAs) totaling 30 episodes were subsequently produced by Nippon Animation and released in Japan from 2002 to 2004. A second anime television series by Madhouse aired on Nippon Television from October 2011 to September 2014, totaling 148 episodes, with two animated theatrical films released in 2013. There are also numerous audio albums, video games, musicals, and other media based on Hunter × Hunter.\\nThe manga has been translated into English and released in North America by Viz Media since April 2005. Both television series have been also licensed by Viz Media, with the first series having aired on the Funimation Channel in 2009 and the second series broadcast on Adult Swim\\'s Toonami programming block from April 2016 to June 2019.\\nHunter × Hunter has been a huge critical and financial success and has become one of the best-selling manga series of all time, having over 84 million copies in circulation by July 2022.\\n\\n'}"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
"name": "stdout",
"output_type": "stream",
"text": [
"Tokyo Ghoul (Japanese: 東京喰種(トーキョーグール), Hepburn: Tōkyō Gūru) is a Japanese dark fantasy manga series written and illustrated by Sui Ishida. It was serialized in Shueisha's seinen manga magazine Weekly Young Jump from September 2011 to September 2014, with its chapters collected in 14 tankōbon volumes. The story is set in an alternate version of Tokyo where humans coexist with ghouls, beings who loo\n"
]
}
],
"source": [
"docs[0].metadata # meta-information of the Document"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "c0ccd0c7-f6a6-43e7-b842-5f57afb94224",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced \"hunter hunter\") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The sto'"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].page_content[:400] # a content of the Document"
"print(docs[0].page_content[:400])"
]
},
{
"cell_type": "markdown",
"id": "2670363b-3806-4c7e-b14d-90a4d5d2a200",
"id": "ae3c3d16",
"metadata": {},
"source": [
"### Question Answering on facts"
"## Use within a chain\n",
"Like other retrievers, `WikipediaRetriever` can be incorporated into LLM applications via [chains](/docs/how_to/sequence/).\n",
"\n",
"We will need a LLM or chat model:\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" />\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "bb3601df-53ea-4826-bdbe-554387bc3ad4",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" ········\n"
]
}
],
"source": [
"# get a token: https://platform.openai.com/account/api-keys\n",
"\n",
"from getpass import getpass\n",
"\n",
"OPENAI_API_KEY = getpass()"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "e9c1a114-0410-4804-be30-05f34a9760f9",
"metadata": {
"tags": []
},
"execution_count": 4,
"id": "4bd3d268-eb8c-46e9-930a-18f5e2a50008",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"# | output: false\n",
"# | echo: false\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "51a33cc9-ec42-4afc-8a2d-3bfff476aa59",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo\") # switch to 'gpt-4'\n",
"qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)"
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "ea537767-a8bf-4adf-ae03-b353c9145d58",
"metadata": {
"tags": []
},
"execution_count": 5,
"id": "9b52bc65-1b2e-4c30-ab43-41eaa5bf79c3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"\n",
" Answer the question based only on the context provided.\n",
" Context: {context}\n",
" Question: {question}\n",
" \"\"\"\n",
")\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "0d268905-3b19-4338-ac10-223c0fe4d5e4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"-> **Question**: What is Apify? \n",
"\n",
"**Answer**: Apify is a platform that allows you to easily automate web scraping, data extraction and web automation. It provides a cloud-based infrastructure for running web crawlers and other automation tasks, as well as a web-based tool for building and managing your crawlers. Additionally, Apify offers a marketplace for buying and selling pre-built crawlers and related services. \n",
"\n",
"-> **Question**: When the Monument to the Martyrs of the 1830 Revolution was created? \n",
"\n",
"**Answer**: Apify is a web scraping and automation platform that enables you to extract data from websites, turn unstructured data into structured data, and automate repetitive tasks. It provides a user-friendly interface for creating web scraping scripts without any coding knowledge. Apify can be used for various web scraping tasks such as data extraction, web monitoring, content aggregation, and much more. Additionally, it offers various features such as proxy support, scheduling, and integration with other tools to make web scraping and automation tasks easier and more efficient. \n",
"\n",
"-> **Question**: What is the Abhayagiri Vihāra? \n",
"\n",
"**Answer**: Abhayagiri Vihāra was a major monastery site of Theravada Buddhism that was located in Anuradhapura, Sri Lanka. It was founded in the 2nd century BCE and is considered to be one of the most important monastic complexes in Sri Lanka. \n",
"\n"
]
"data": {
"text/plain": [
"'The main character in Tokyo Ghoul is Ken Kaneki, who transforms into a ghoul after receiving an organ transplant from a ghoul named Rize.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"questions = [\n",
" \"What is Apify?\",\n",
" \"When the Monument to the Martyrs of the 1830 Revolution was created?\",\n",
" \"What is the Abhayagiri Vihāra?\",\n",
" # \"How big is Wikipédia en français?\",\n",
"]\n",
"chat_history = []\n",
"chain.invoke(\n",
" \"Who is the main character in `Tokyo Ghoul` and does he transform into a ghoul?\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "236bbafb-ebd4-4165-9b8f-d47605f6eef3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"for question in questions:\n",
" result = qa({\"question\": question, \"chat_history\": chat_history})\n",
" chat_history.append((question, result[\"answer\"]))\n",
" print(f\"-> **Question**: {question} \\n\")\n",
" print(f\"**Answer**: {result['answer']} \\n\")"
"For detailed documentation of all `WikipediaRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.wikipedia.WikipediaRetriever.html#langchain-community-retrievers-wikipedia-wikipediaretriever)."
]
}
],
@@ -266,7 +256,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.4"
}
},
"nbformat": 4,


@@ -2,10 +2,14 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Astra DB\n",
"sidebar_label: AstraDB\n",
"---"
]
},
@@ -13,130 +17,121 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Astra DB\n",
"# AstraDBByteStore\n",
"\n",
"This will help you get started with Astra DB [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `AstraDBByteStore` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/storage/langchain_astradb.storage.AstraDBByteStore.html).\n",
"\n",
"## Overview\n",
"\n",
"DataStax [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless vector-capable database built on Cassandra and made conveniently available through an easy-to-use JSON API.\n",
"\n",
"`AstraDBStore` and `AstraDBByteStore` need the `astrapy` package to be installed:"
"### Integration details\n",
"\n",
"| Class | Package | Local | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [AstraDBByteStore](https://api.python.langchain.com/en/latest/storage/langchain_astradb.storage.AstraDBByteStore.html) | [langchain_astradb](https://api.python.langchain.com/en/latest/astradb_api_reference.html) | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_astradb?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_astradb?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",
"\n",
"To create an `AstraDBByteStore` byte store, you'll need to [create a DataStax account](https://www.datastax.com/products/datastax-astra).\n",
"\n",
"### Credentials\n",
"\n",
"After signing up, set the following credentials:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet astrapy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Store takes the following parameters:\n",
"\n",
"* `api_endpoint`: Astra DB API endpoint. Looks like `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`\n",
"* `token`: Astra DB token. Looks like `AstraCS:6gBhNmsk135....`\n",
"* `collection_name` : Astra DB collection name\n",
"* `namespace`: (Optional) Astra DB namespace"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## AstraDBStore\n",
"\n",
"The `AstraDBStore` is an implementation of `BaseStore` that stores everything in your DataStax Astra DB instance.\n",
"The store keys must be strings and will be mapped to the `_id` field of the Astra DB document.\n",
"The store values can be any object that can be serialized by `json.dumps`.\n",
"In the database, entries will have the form:\n",
"\n",
"```json\n",
"{\n",
" \"_id\": \"<key>\",\n",
" \"value\": <value>\n",
"}\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.storage import AstraDBStore"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from getpass import getpass\n",
"\n",
"ASTRA_DB_API_ENDPOINT = input(\"ASTRA_DB_API_ENDPOINT = \")\n",
"ASTRA_DB_API_ENDPOINT = getpass(\"ASTRA_DB_API_ENDPOINT = \")\n",
"ASTRA_DB_APPLICATION_TOKEN = getpass(\"ASTRA_DB_APPLICATION_TOKEN = \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain AstraDB integration lives in the `langchain_astradb` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"store = AstraDBStore(\n",
"%pip install -qU langchain_astradb"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our byte store:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from langchain_astradb import AstraDBByteStore\n",
"\n",
"kv_store = AstraDBByteStore(\n",
" api_endpoint=ASTRA_DB_API_ENDPOINT,\n",
" token=ASTRA_DB_APPLICATION_TOKEN,\n",
" collection_name=\"my_store\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['v1', [0.1, 0.2, 0.3]]\n"
]
}
],
"source": [
"store.mset([(\"k1\", \"v1\"), (\"k2\", [0.1, 0.2, 0.3])])\n",
"print(store.mget([\"k1\", \"k2\"]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Usage with CacheBackedEmbeddings\n",
"## Usage\n",
"\n",
"You may use the `AstraDBStore` in conjunction with a [`CacheBackedEmbeddings`](/docs/how_to/caching_embeddings) to cache the result of embeddings computations.\n",
"Note that `AstraDBStore` stores the embeddings as a list of floats without converting them first to bytes so we don't use `fromByteStore` there."
"You can set data under keys like this using the `mset` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"[b'value1', b'value2']"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.embeddings import CacheBackedEmbeddings\n",
"from langchain_openai import OpenAIEmbeddings\n",
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"embeddings = CacheBackedEmbeddings(\n",
" underlying_embeddings=OpenAIEmbeddings(), document_embedding_store=store\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
@@ -144,96 +139,67 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## AstraDBByteStore\n",
"\n",
"The `AstraDBByteStore` is an implementation of `ByteStore` that stores everything in your DataStax Astra DB instance.\n",
"The store keys must be strings and will be mapped to the `_id` field of the Astra DB document.\n",
"The store `bytes` values are converted to base64 strings for storage into Astra DB.\n",
"In the database, entries will have the form:\n",
"\n",
"```json\n",
"{\n",
" \"_id\": \"<key>\",\n",
" \"value\": \"bytes encoded in base 64\"\n",
"}\n",
"```"
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.storage import AstraDBByteStore"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from getpass import getpass\n",
"\n",
"ASTRA_DB_API_ENDPOINT = input(\"ASTRA_DB_API_ENDPOINT = \")\n",
"ASTRA_DB_APPLICATION_TOKEN = getpass(\"ASTRA_DB_APPLICATION_TOKEN = \")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"store = AstraDBByteStore(\n",
" api_endpoint=ASTRA_DB_API_ENDPOINT,\n",
" token=ASTRA_DB_APPLICATION_TOKEN,\n",
" collection_name=\"my_store\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[b'v1', b'v2']\n"
]
"data": {
"text/plain": [
"[None, None]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
"source": [
"You can use an `AstraDBByteStore` anywhere you'd use other ByteStores, including as a [cache for embeddings](/docs/how_to/caching_embeddings)."
]
},
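For example, a minimal sketch of caching embeddings through the store created above (the embedding model is illustrative and assumes an OpenAI API key is configured):

```python
from langchain.embeddings import CacheBackedEmbeddings
from langchain_openai import OpenAIEmbeddings

underlying = OpenAIEmbeddings()

# Computed document embeddings are persisted in the Astra DB byte store,
# so repeated ingestion of the same text skips the embedding call.
cached_embedder = CacheBackedEmbeddings.from_bytes_store(
    underlying_embeddings=underlying,
    document_embedding_cache=kv_store,
    namespace=underlying.model,  # keeps caches of different models separate
)
```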
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `AstraDBByteStore` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/storage/langchain_astradb.storage.AstraDBByteStore.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.5"
}
},
"nbformat": 4,


@@ -2,7 +2,11 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Cassandra\n",
@@ -13,68 +17,62 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Cassandra\n",
"# CassandraByteStore\n",
"\n",
"This will help you get started with Cassandra [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `CassandraByteStore` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.cassandra.CassandraByteStore.html).\n",
"\n",
"## Overview\n",
"\n",
"[Cassandra](https://cassandra.apache.org/) is a NoSQL, row-oriented, highly scalable and highly available database.\n",
"\n",
"`CassandraByteStore` needs the `cassio` package to be installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet cassio"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Store takes the following parameters:\n",
"### Integration details\n",
"\n",
"* table: The table where to store the data.\n",
"* session: (Optional) The cassandra driver session. If not provided, the cassio resolved session will be used.\n",
"* keyspace: (Optional) The keyspace of the table. If not provided, the cassio resolved keyspace will be used.\n",
"* setup_mode: (Optional) The mode used to create the Cassandra table (SYNC, ASYNC or OFF). Defaults to SYNC."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## CassandraByteStore\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/v0.2/docs/integrations/stores/cassandra_storage) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [CassandraByteStore](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.cassandra.CassandraByteStore.html) | [langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ✅ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",
"\n",
"The `CassandraByteStore` is an implementation of `ByteStore` that stores the data in your Cassandra instance.\n",
"The store keys must be strings and will be mapped to the `row_id` column of the Cassandra table.\n",
"The store `bytes` values are mapped to the `body_blob` column of the Cassandra table."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain `CassandraByteStore` integration lives in the `langchain_community` package. You'll also need to install the `cassio` package or the `cassandra-driver` package as a peer dependency depending on which initialization method you're using:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.storage import CassandraByteStore"
"%pip install -qU langchain_community\n",
"%pip install -qU cassandra-driver\n",
"%pip install -qU cassio"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Init from a cassandra driver Session\n",
"You'll also need to create a `cassandra.cluster.Session` object, as described in the [Cassandra driver documentation](https://docs.datastax.com/en/developer/python-driver/latest/api/cassandra/cluster/#module-cassandra.cluster). The details vary (e.g. with network settings and authentication), but this might be something like:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"You need to create a `cassandra.cluster.Session` object, as described in the [Cassandra driver documentation](https://docs.datastax.com/en/developer/python-driver/latest/api/cassandra/cluster/#module-cassandra.cluster). The details vary (e.g. with network settings and authentication), but this might be something like:"
],
"metadata": {
"collapsed": false
}
"You'll first need to create a `cassandra.cluster.Session` object, as described in the [Cassandra driver documentation](https://docs.datastax.com/en/developer/python-driver/latest/api/cassandra/cluster/#module-cassandra.cluster). The details vary (e.g. with network settings and authentication), but this might be something like:"
]
},
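A sketch of that session setup for a locally running node (the contact points are an assumption; real deployments typically add auth providers, SSL settings, and more contact points):

```python
from cassandra.cluster import Cluster

# Connect to a single local Cassandra node and open a session.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()
```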
{
"cell_type": "code",
@@ -90,12 +88,10 @@
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You need to provide the name of an existing keyspace of the Cassandra instance:"
],
"metadata": {
"collapsed": false
}
"Then you can create your store! You'll also need to provide the name of an existing keyspace of the Cassandra instance:"
]
},
{
"cell_type": "code",
@@ -103,61 +99,91 @@
"metadata": {},
"outputs": [],
"source": [
"CASSANDRA_KEYSPACE = input(\"CASSANDRA_KEYSPACE = \")"
"from langchain_community.storage import CassandraByteStore\n",
"\n",
"kv_store = CassandraByteStore(\n",
" table=\"my_store\",\n",
" session=session,\n",
" keyspace=\"<YOUR KEYSPACE>\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Creating the store:"
],
"metadata": {
"collapsed": false
}
"## Usage\n",
"\n",
"You can set data under keys like this using the `mset` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[b'v1', b'v2']\n"
]
}
],
"outputs": [],
"source": [
"store = CassandraByteStore(\n",
" table=\"my_store\",\n",
" session=session,\n",
" keyspace=CASSANDRA_KEYSPACE,\n",
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Init from cassio\n",
"\n",
"It's also possible to use cassio to configure the session and keyspace."
],
"metadata": {
"collapsed": false
}
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Init using `cassio`\n",
"\n",
"It's also possible to use cassio to configure the session and keyspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import cassio\n",
"\n",
"cassio.init(contact_points=\"127.0.0.1\", keyspace=CASSANDRA_KEYSPACE)\n",
"cassio.init(contact_points=\"127.0.0.1\", keyspace=\"<YOUR KEYSPACE>\")\n",
"\n",
"store = CassandraByteStore(\n",
" table=\"my_store\",\n",
@@ -165,62 +191,27 @@
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
],
"metadata": {
"collapsed": false
}
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Usage with CacheBackedEmbeddings\n",
"## API reference\n",
"\n",
"You may use the `CassandraByteStore` in conjunction with a [`CacheBackedEmbeddings`](/docs/how_to/caching_embeddings) to cache the result of embeddings computations.\n"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"from langchain.embeddings import CacheBackedEmbeddings\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"cassio.init(contact_points=\"127.0.0.1\", keyspace=CASSANDRA_KEYSPACE)\n",
"\n",
"store = CassandraByteStore(\n",
" table=\"my_store\",\n",
")\n",
"\n",
"embeddings = CacheBackedEmbeddings.from_bytes_store(\n",
" underlying_embeddings=OpenAIEmbeddings(), document_embedding_cache=store\n",
")"
],
"metadata": {
"collapsed": false
}
"For detailed documentation of all `CassandraByteStore` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/storage/langchain_community.storage.cassandra.CassandraByteStore.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.5"
}
},
"nbformat": 4,


@@ -2,10 +2,14 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Elasticsearch \n",
"sidebar_label: Elasticsearch\n",
"---"
]
},
@@ -15,25 +19,31 @@
"source": [
"# ElasticsearchEmbeddingsCache\n",
"\n",
"This will help you get started with Elasticsearch [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `ElasticsearchEmbeddingsCache` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html).\n",
"\n",
"## Overview\n",
"\n",
"The `ElasticsearchEmbeddingsCache` is a `ByteStore` implementation that uses your Elasticsearch instance for efficient storage and retrieval of embeddings.\n",
"\n",
"### Integration details\n",
"\n",
"First install the LangChain integration with Elasticsearch."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -U langchain-elasticsearch"
"| Class | Package | Local | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [ElasticsearchEmbeddingsCache](https://api.python.langchain.com/en/latest/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html) | [langchain_elasticsearch](https://api.python.langchain.com/en/latest/elasticsearch_api_reference.html) | ✅ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_elasticsearch?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_elasticsearch?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",
"\n",
"To create a `ElasticsearchEmbeddingsCache` byte store, you'll need an Elasticsearch cluster. You can [set one up locally](https://www.elastic.co/downloads/elasticsearch) or create an [Elastic account](https://www.elastic.co/elasticsearch)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": "it can be instantiated using `CacheBackedEmbeddings.from_bytes_store` method."
"source": [
"### Installation\n",
"\n",
"The LangChain `ElasticsearchEmbeddingsCache` integration lives in the `__package_name__` package:"
]
},
{
"cell_type": "code",
@@ -41,23 +51,37 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings import CacheBackedEmbeddings\n",
"%pip install -qU langchain_elasticsearch"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our byte store:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from langchain_elasticsearch import ElasticsearchEmbeddingsCache\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"underlying_embeddings = OpenAIEmbeddings(model=\"text-embedding-3-small\")\n",
"\n",
"store = ElasticsearchEmbeddingsCache(\n",
" es_url=\"http://localhost:9200\",\n",
"# Example config for a locally running Elasticsearch instance\n",
"kv_store = ElasticsearchEmbeddingsCache(\n",
" es_url=\"https://localhost:9200\",\n",
" index_name=\"llm-chat-cache\",\n",
" metadata={\"project\": \"my_chatgpt_project\"},\n",
" namespace=\"my_chatgpt_project\",\n",
")\n",
"\n",
"embeddings = CacheBackedEmbeddings.from_bytes_store(\n",
" underlying_embeddings=OpenAIEmbeddings(),\n",
" document_embedding_cache=store,\n",
" query_embedding_cache=store,\n",
" es_user=\"elastic\",\n",
" es_password=\"<GENERATED PASSWORD>\",\n",
" es_params={\n",
" \"ca_certs\": \"~/http_ca.crt\",\n",
" },\n",
")"
]
},
@@ -65,19 +89,93 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The index_name parameter can also accept aliases. This allows to use the ILM: Manage the index lifecycle that we suggest to consider for managing retention and controlling cache growth.\n",
"## Usage\n",
"\n",
"Look at the class docstring for all parameters."
"You can set data under keys like this using the `mset` method:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[b'value1', b'value2']"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Index the generated vectors\n",
"The cached vectors won't be searchable by default. The developer can customize the building of the Elasticsearch document in order to add indexed vector field.\n",
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[None, None]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"\n",
"This can be done by subclassing end overriding methods. "
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use as an embeddings cache\n",
"\n",
"Like other `ByteStores`, you can use an `ElasticsearchEmbeddingsCache` instance for [persistent caching in document ingestion](/docs/how_to/caching_embeddings/) for RAG.\n",
"\n",
"However, cached vectors won't be searchable by default. The developer can customize the building of the Elasticsearch document in order to add indexed vector field.\n",
"\n",
"This can be done by subclassing and overriding methods:"
]
},
{
@@ -88,8 +186,6 @@
"source": [
"from typing import Any, Dict, List\n",
"\n",
"from langchain_elasticsearch import ElasticsearchEmbeddingsCache\n",
"\n",
"\n",
"class SearchableElasticsearchStore(ElasticsearchEmbeddingsCache):\n",
" @property\n",
@@ -112,26 +208,29 @@
{
"cell_type": "markdown",
"metadata": {},
"source": "When overriding the mapping and the document building, please only make additive modifications, keeping the base mapping intact."
"source": [
"When overriding the mapping and the document building, please only make additive modifications, keeping the base mapping intact."
]
},
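For reference, one hedged sketch of such a subclass, completing the truncated cell above; the method names follow the partially shown class, while the `vector` field name and `dims` value are assumptions, not fixed by the source:

```python
from typing import Any, Dict, List

from langchain_elasticsearch import ElasticsearchEmbeddingsCache


class SearchableElasticsearchStore(ElasticsearchEmbeddingsCache):
    @property
    def mapping(self) -> Dict[str, Any]:
        # Extend the base mapping additively with an indexed vector field.
        mapping = super().mapping
        mapping["mappings"]["properties"]["vector"] = {
            "type": "dense_vector",
            "dims": 1536,  # assumed embedding dimensionality
            "index": True,
            "similarity": "dot_product",
        }
        return mapping

    def build_document(self, llm_input: str, vector: List[float]) -> Dict[str, Any]:
        # Store the raw vector on the document so it becomes searchable.
        body = super().build_document(llm_input, vector)
        body["vector"] = vector
        return body
```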
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `ElasticsearchEmbeddingsCache` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.5"
}
},
"nbformat": 4,


@@ -2,11 +2,14 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Local Filesystem\n",
"sidebar_position: 3\n",
"---"
]
},
@@ -16,51 +19,119 @@
"source": [
"# LocalFileStore\n",
"\n",
"The `LocalFileStore` is a persistent implementation of `ByteStore` that stores everything in a folder of your choosing."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[b'v1', b'v2']\n"
]
}
],
"source": [
"from pathlib import Path\n",
"This will help you get started with local filesystem [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all LocalFileStore features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/storage/langchain.storage.file_system.LocalFileStore.html).\n",
"\n",
"from langchain.storage import LocalFileStore\n",
"## Overview\n",
"\n",
"root_path = Path.cwd() / \"data\" # can also be a path set by a string\n",
"store = LocalFileStore(root_path)\n",
"The `LocalFileStore` is a persistent implementation of `ByteStore` that stores everything in a folder of your choosing. It's useful if you're using a single machine and are tolerant of files being added or deleted.\n",
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
"### Integration details\n",
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/v0.2/docs/integrations/stores/file_system) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [LocalFileStore](https://api.python.langchain.com/en/latest/storage/langchain.storage.file_system.LocalFileStore.html) | [langchain](https://api.python.langchain.com/en/latest/langchain_api_reference.html) | ✅ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain?style=flat-square&label=%20) |"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's see which files exist in our `data` folder:"
"### Installation\n",
"\n",
"The LangChain `LocalFileStore` integration lives in the `langchain` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our byte store:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from pathlib import Path\n",
"\n",
"from langchain.storage import LocalFileStore\n",
"\n",
"root_path = Path.cwd() / \"data\" # can also be a path set by a string\n",
"\n",
"kv_store = LocalFileStore(root_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"You can set data under keys like this using the `mset` method:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[b'value1', b'value2']"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can see the created files in your `data` folder:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"k1 k2\n"
"key1 key2\n"
]
}
],
@@ -69,16 +140,57 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": []
"source": [
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[None, None]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `LocalFileStore` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/storage/langchain.storage.file_system.LocalFileStore.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -92,7 +204,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.5"
}
},
"nbformat": 4,


@@ -9,7 +9,7 @@
},
"source": [
"---\n",
"sidebar_label: InMemoryByteStore\n",
"sidebar_label: In-memory\n",
"---"
]
},
@@ -28,7 +28,7 @@
"### Integration details\n",
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/v0.2/docs/integrations/stores/in_memory/) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [InMemoryByteStore](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.InMemoryByteStore.html) | [langchain_core](https://api.python.langchain.com/en/latest/core_api_reference.html) | ✅ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_core?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_core?style=flat-square&label=%20) |"
]
},


@@ -1,12 +0,0 @@
---
sidebar_position: 1
sidebar_class_name: hidden
---
# Key-value stores
[Key-value stores](/docs/concepts/#key-value-stores) are used by other LangChain components to store and retrieve data.
import DocCardList from "@theme/DocCardList";
<DocCardList />


@@ -2,7 +2,11 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Redis\n",
@@ -15,9 +19,30 @@
"source": [
"# RedisStore\n",
"\n",
"This will help you get started with Redis [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `RedisStore` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.redis.RedisStore.html).\n",
"\n",
"## Overview\n",
"\n",
"The `RedisStore` is an implementation of `ByteStore` that stores everything in your Redis instance.\n",
"\n",
"To configure Redis, follow our [Redis guide](/docs/integrations/providers/redis)."
"### Integration details\n",
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/v0.2/docs/integrations/stores/ioredis_storage) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [RedisStore](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.redis.RedisStore.html) | [langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ✅ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",
"\n",
"To create a Redis byte store, you'll need to set up a Redis instance. You can do this locally or via a provider - see our [Redis guide](/docs/integrations/providers/redis) for an overview of options."
]
},
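{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, if you have Docker available, one quick way to run a throwaway local instance is shown below (an assumption for illustration; any reachable Redis server works just as well):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Start a local Redis container on the default port (assumes Docker is installed)\n",
"!docker run -d -p 6379:6379 --name langchain-redis redis"
]
},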
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain `RedisStore` integration lives in the `langchain_community` package:"
]
},
{
@@ -26,56 +51,128 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet redis"
"%pip install -qU langchain_community redis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our byte store:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[b'v1', b'v2']\n"
]
}
],
"outputs": [],
"source": [
"from langchain_community.storage import RedisStore\n",
"\n",
"store = RedisStore(redis_url=\"redis://localhost:6379\")\n",
"kv_store = RedisStore(redis_url=\"redis://localhost:6379\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
"You can set data under keys like this using the `mset` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": []
"outputs": [
{
"data": {
"text/plain": [
"[b'value1', b'value2']"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[None, None]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `RedisStore` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/storage/langchain_community.storage.redis.RedisStore.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.5"
}
},
"nbformat": 4,


@@ -2,7 +2,11 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Upstash Redis\n",
@@ -15,11 +19,48 @@
"source": [
"# UpstashRedisByteStore\n",
"\n",
"The `UpstashRedisStore` is an implementation of `ByteStore` that stores everything in your Upstash-hosted Redis instance.\n",
"This will help you get started with Upstash redis [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `UpstashRedisByteStore` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html).\n",
"\n",
"To use the base `RedisStore` instead, see [this guide](/docs/integrations/stores/redis/)\n",
"## Overview\n",
"\n",
"To configure Upstash Redis, follow our [Upstash guide](/docs/integrations/providers/upstash)."
"The `UpstashRedisStore` is an implementation of `ByteStore` that stores everything in your [Upstash](https://upstash.com/)-hosted Redis instance.\n",
"\n",
"To use the base `RedisStore` instead, see [this guide](/docs/integrations/stores/redis/).\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/v0.2/docs/integrations/stores/upstash_redis_storage) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [UpstashRedisByteStore](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html) | [langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ❌ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",
"\n",
"You'll first need to [sign up for an Upstash account](https://upstash.com/docs/redis/overall/getstarted). Next, you'll need to create a Redis database to connect to.\n",
"\n",
"### Credentials\n",
"\n",
"Once you've created your database, get your database URL (don't forget the `https://`!) and token:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from getpass import getpass\n",
"\n",
"URL = getpass(\"Enter your Upstash URL\")\n",
"TOKEN = getpass(\"Enter your Upstash REST token\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Upstash integration lives in the `langchain_community` package. You'll also need to install the `upstash-redis` package as a peer dependency:"
]
},
{
@@ -28,61 +69,130 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet upstash-redis"
"%pip install -qU langchain_community upstash-redis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our byte store:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[b'v1', b'v2']\n"
]
}
],
"outputs": [],
"source": [
"from langchain_community.storage import UpstashRedisByteStore\n",
"from upstash_redis import Redis\n",
"\n",
"URL = \"<UPSTASH_REDIS_REST_URL>\"\n",
"TOKEN = \"<UPSTASH_REDIS_REST_TOKEN>\"\n",
"\n",
"redis_client = Redis(url=URL, token=TOKEN)\n",
"store = UpstashRedisByteStore(client=redis_client, ttl=None, namespace=\"test-ns\")\n",
"kv_store = UpstashRedisByteStore(client=redis_client, ttl=None, namespace=\"test-ns\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
"You can set data under keys like this using the `mset` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": []
"outputs": [
{
"data": {
"text/plain": [
"[b'value1', b'value2']"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[None, None]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `UpstashRedisByteStore` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.5"
}
},
"nbformat": 4,


@@ -4,17 +4,191 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Github\n",
"---\n",
"sidebar_label: Github\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# GithubToolkit\n",
"\n",
"The `Github` toolkit contains tools that enable an LLM agent to interact with a github repository. \n",
"The tool is a wrapper for the [PyGitHub](https://github.com/PyGithub/PyGithub) library. \n",
"\n",
"## Quickstart\n",
"For detailed documentation of all GithubToolkit features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.github.toolkit.GitHubToolkit.html).\n",
"\n",
"## Setup\n",
"\n",
"At a high-level, we will:\n",
"\n",
"1. Install the pygithub library\n",
"2. Create a Github app\n",
"3. Set your environmental variables\n",
"4. Pass the tools to your agent with `toolkit.get_tools()`\n",
"4. Pass the tools to your agent with `toolkit.get_tools()`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"#### 1. Install dependencies\n",
"\n",
"This integration is implemented in `langchain-community`. We will also need the `pygithub` dependency:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet pygithub langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2. Create a Github App\n",
"\n",
"[Follow the instructions here](https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/registering-a-github-app) to create and register a Github app. Make sure your app has the following [repository permissions:](https://docs.github.com/en/rest/overview/permissions-required-for-github-apps?apiVersion=2022-11-28)\n",
"\n",
"* Commit statuses (read only)\n",
"* Contents (read and write)\n",
"* Issues (read and write)\n",
"* Metadata (read only)\n",
"* Pull requests (read and write)\n",
"\n",
"Once the app has been registered, you must give your app permission to access each of the repositories you whish it to act upon. Use the App settings on [github.com here](https://github.com/settings/installations).\n",
"\n",
"\n",
"#### 3. Set Environment Variables\n",
"\n",
"Before initializing your agent, the following environment variables need to be set:\n",
"\n",
"* **GITHUB_APP_ID**- A six digit number found in your app's general settings\n",
"* **GITHUB_APP_PRIVATE_KEY**- The location of your app's private key .pem file, or the full text of that file as a string.\n",
"* **GITHUB_REPOSITORY**- The name of the Github repository you want your bot to act upon. Must follow the format {username}/{repo-name}. *Make sure the app has been added to this repository first!*\n",
"* Optional: **GITHUB_BRANCH**- The branch where the bot will make its commits. Defaults to `repo.default_branch`.\n",
"* Optional: **GITHUB_BASE_BRANCH**- The base branch of your repo upon which PRs will based from. Defaults to `repo.default_branch`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"for env_var in [\n",
" \"GITHUB_APP_ID\",\n",
" \"GITHUB_APP_PRIVATE_KEY\",\n",
" \"GITHUB_REPOSITORY\",\n",
"]:\n",
" if not os.getenv(env_var):\n",
" os.environ[env_var] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our toolkit:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.agent_toolkits.github.toolkit import GitHubToolkit\n",
"from langchain_community.utilities.github import GitHubAPIWrapper\n",
"\n",
"github = GitHubAPIWrapper()\n",
"toolkit = GitHubToolkit.from_github_api_wrapper(github)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"View available tools:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Get Issues\n",
"Get Issue\n",
"Comment on Issue\n",
"List open pull requests (PRs)\n",
"Get Pull Request\n",
"Overview of files included in PR\n",
"Create Pull Request\n",
"List Pull Requests' Files\n",
"Create File\n",
"Read File\n",
"Update File\n",
"Delete File\n",
"Overview of existing files in Main branch\n",
"Overview of files in current working branch\n",
"List branches in this repository\n",
"Set active branch\n",
"Create a new branch\n",
"Get files from a directory\n",
"Search issues and pull requests\n",
"Search code\n",
"Create review request\n"
]
}
],
"source": [
"tools = toolkit.get_tools()\n",
"\n",
"for tool in tools:\n",
" print(tool.name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The purpose of these tools is as follows:\n",
"\n",
"Each of these steps will be explained in great detail below.\n",
"\n",
@@ -32,70 +206,14 @@
"\n",
"7. **Update File**- updates a file in the repository.\n",
"\n",
"8. **Delete File**- deletes a file from the repository.\n",
"\n"
"8. **Delete File**- deletes a file from the repository."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Install the `pygithub` library "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet pygithub langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2. Create a Github App\n",
"\n",
"[Follow the instructions here](https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/registering-a-github-app) to create and register a Github app. Make sure your app has the following [repository permissions:](https://docs.github.com/en/rest/overview/permissions-required-for-github-apps?apiVersion=2022-11-28)\n",
"\n",
"* Commit statuses (read only)\n",
"* Contents (read and write)\n",
"* Issues (read and write)\n",
"* Metadata (read only)\n",
"* Pull requests (read and write)\n",
"\n",
"\n",
"Once the app has been registered, you must give your app permission to access each of the repositories you whish it to act upon. Use the App settings on [github.com here](https://github.com/settings/installations).\n",
"\n",
"### 3. Set Environmental Variables\n",
"\n",
"Before initializing your agent, the following environmental variables need to be set:\n",
"\n",
"* **GITHUB_APP_ID**- A six digit number found in your app's general settings\n",
"* **GITHUB_APP_PRIVATE_KEY**- The location of your app's private key .pem file, or the full text of that file as a string.\n",
"* **GITHUB_REPOSITORY**- The name of the Github repository you want your bot to act upon. Must follow the format {username}/{repo-name}. *Make sure the app has been added to this repository first!*\n",
"* Optional: **GITHUB_BRANCH**- The branch where the bot will make its commits. Defaults to `repo.default_branch`.\n",
"* Optional: **GITHUB_BASE_BRANCH**- The base branch of your repo upon which PRs will based from. Defaults to `repo.default_branch`.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example: Simple Agent"
"## Use within an agent"
]
},
{
@@ -824,6 +942,15 @@
"\n",
"agent.run(prompt)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `GithubToolkit` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.github.toolkit.GitHubToolkit.html)."
]
}
],
"metadata": {
@@ -842,7 +969,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -4,34 +4,31 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Gmail\n",
"\n",
"This notebook walks through connecting a LangChain email to the `Gmail API`.\n",
"\n",
"To use this toolkit, you will need to set up your credentials explained in the [Gmail API docs](https://developers.google.com/gmail/api/quickstart/python#authorize_credentials_for_a_desktop_application). Once you've downloaded the `credentials.json` file, you can start using the Gmail API. Once this is done, we'll install the required libraries."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-api-python-client > /dev/null\n",
"%pip install --upgrade --quiet google-auth-oauthlib > /dev/null\n",
"%pip install --upgrade --quiet google-auth-httplib2 > /dev/null\n",
"%pip install --upgrade --quiet beautifulsoup4 > /dev/null # This is optional but is useful for parsing HTML messages"
"---\n",
"sidebar_label: GMail\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You also need to install the `langchain-community` package where the integration lives:\n",
"# GmailToolkit\n",
"\n",
"```bash\n",
"pip install -U langchain-community\n",
"```"
"This will help you getting started with the GMail [toolkit](/docs/concepts/#toolkits). This toolkit interacts with the GMail API to read messages, draft and send messages, and more. For detailed documentation of all GmailToolkit features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.toolkit.GmailToolkit.html).\n",
"\n",
"## Setup\n",
"\n",
"To use this toolkit, you will need to set up your credentials explained in the [Gmail API docs](https://developers.google.com/gmail/api/quickstart/python#authorize_credentials_for_a_desktop_application). Once you've downloaded the `credentials.json` file, you can start using the Gmail API."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"This toolkit lives in the `langchain-google-community` package. We'll need the `gmail` extra:"
]
},
{
@@ -40,14 +37,14 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community"
"%pip install -qU langchain-google-community\\[gmail\\]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability"
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
@@ -57,14 +54,14 @@
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Toolkit\n",
"## Instantiation\n",
"\n",
"By default the toolkit reads the local `credentials.json` file. You can also manually provide a `Credentials` object."
]
@@ -72,12 +69,10 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.agent_toolkits import GmailToolkit\n",
"from langchain_google_community import GmailToolkit\n",
"\n",
"toolkit = GmailToolkit()"
]
@@ -100,7 +95,7 @@
},
"outputs": [],
"source": [
"from langchain_community.tools.gmail.utils import (\n",
"from langchain_google_community.gmail.utils import (\n",
" build_resource_service,\n",
" get_gmail_credentials,\n",
")\n",
@@ -116,6 +111,15 @@
"toolkit = GmailToolkit(api_resource=api_resource)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"View available tools:"
]
},
{
"cell_type": "code",
"execution_count": 5,
@@ -147,7 +151,18 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"- [GmailCreateDraft](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.create_draft.GmailCreateDraft.html)\n",
"- [GmailSendMessage](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.send_message.GmailSendMessage.html)\n",
"- [GmailSearch](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.search.GmailSearch.html)\n",
"- [GmailGetMessage](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.get_message.GmailGetMessage.html)\n",
"- [GmailGetThread](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.get_thread.GmailGetThread.html)"
]
},
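{
"cell_type": "markdown",
"metadata": {},
"source": [
"The tools can also be invoked individually, outside of an agent. As a rough sketch (assuming the `api_resource` built above; the query string is hypothetical):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_community.gmail.search import GmailSearch\n",
"\n",
"search = GmailSearch(api_resource=api_resource)\n",
"\n",
"# Hypothetical Gmail search query; returns matching messages\n",
"results = search.invoke({\"query\": \"from:me after:2024/01/01\"})"
]
},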
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use within an agent\n",
"\n",
"We show here how to use it as part of an [agent](/docs/tutorials/agents). We use the OpenAI Functions Agent, so we will need to setup and install the required dependencies for that. We will also use [LangSmith Hub](https://smith.langchain.com/hub) to pull the prompt from, so we will need to install that.\n",
"\n",
@@ -303,7 +318,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,


@@ -12,10 +12,10 @@ that share common authentication, services, or other objects. They can be implem
This table lists common toolkits.
| Namespace 🔻 | Class |
|------------|---------|
| langchain_community.agent_toolkits.github | [GitHubToolkit](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.github.toolkit.GitHubToolkit.html) |
| langchain_community.agent_toolkits.gmail | [GmailToolkit](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.gmail.toolkit.GmailToolkit.html) |
| langchain_community.agent_toolkits.openapi | [RequestsToolkit](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.openapi.toolkit.RequestsToolkit.html) |
| langchain_community.agent_toolkits.slack | [SlackToolkit](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.slack.toolkit.SlackToolkit.html) |
| langchain_community.agent_toolkits.sql | [SQLDatabaseToolkit](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.sql.toolkit.SQLDatabaseToolkit.html) |
| Toolkit | Package |
|------|---------------|
| [GitHubToolkit](/docs/integrations/toolkits/github) | [langchain_community.agent_toolkits.github](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.github.toolkit.GitHubToolkit.html) |
| [GmailToolkit](/docs/integrations/toolkits/gmail) | [langchain_google_community.gmail.toolkit](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.toolkit.GmailToolkit.html) |
| [RequestsToolkit](/docs/integrations/toolkits/requests) | [langchain_community.agent_toolkits.openapi](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.openapi.toolkit.RequestsToolkit.html) |
| [SlackToolkit](/docs/integrations/toolkits/slack) | [langchain_community.agent_toolkits.slack](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.slack.toolkit.SlackToolkit.html) |
| [SQLDatabaseToolkit](/docs/integrations/toolkits/sql_database) | [langchain_community.agent_toolkits.sql](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.sql.toolkit.SQLDatabaseToolkit.html) |


@@ -0,0 +1,361 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "050c5580-2c85-4763-8783-59dbd20395a5",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Requests\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "cfe4185a-34dc-4cdc-b831-001954f2d6e8",
"metadata": {},
"source": [
"# Requests Toolkit\n",
"\n",
"We can use the Requests [toolkit](/docs/concepts/#toolkits) to construct agents that generate HTTP requests.\n",
"\n",
"For detailed documentation of all API toolkit features and configurations head to the API reference for [RequestsToolkit](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.openapi.toolkit.RequestsToolkit.html).\n",
"\n",
"## ⚠️ Security note ⚠️\n",
"There are inherent risks in giving models discretion to execute real-world actions. Take precautions to mitigate these risks:\n",
"\n",
"- Make sure that permissions associated with the tools are narrowly-scoped (e.g., for database operations or API requests);\n",
"- When desired, make use of human-in-the-loop workflows."
]
},
{
"cell_type": "markdown",
"id": "d968e982-f370-4614-8469-c1bc71ee3e32",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"### Installation\n",
"\n",
"This toolkit lives in the `langchain-community` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f74f05fb-3f24-4c0b-a17f-cf4edeedbb9a",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community"
]
},
{
"cell_type": "markdown",
"id": "36a178eb-1f2c-411e-bf25-0240ead4c62a",
"metadata": {},
"source": [
"Note that if you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8e68d0cd-6233-481c-b048-e8d95cba4c35",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "a7e2f64a-a72e-4fef-be52-eaf7c5072d24",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"First we will demonstrate a minimal example.\n",
"\n",
"**NOTE**: There are inherent risks in giving models discretion to execute real-world actions. We must \"opt-in\" to these risks by setting `allow_dangerous_request=True` to use these tools.\n",
"**This can be dangerous for calling unwanted requests**. Please make sure your custom OpenAPI spec (yaml) is safe and that permissions associated with the tools are narrowly-scoped."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "018bd070-9fc8-459b-8d28-b4a3e283e640",
"metadata": {},
"outputs": [],
"source": [
"ALLOW_DANGEROUS_REQUEST = True"
]
},
{
"cell_type": "markdown",
"id": "a024f7b3-5437-4878-bd16-c4783bff394c",
"metadata": {},
"source": [
"We can use the [JSONPlaceholder](https://jsonplaceholder.typicode.com) API as a testing ground.\n",
"\n",
"Let's create (a subset of) its API spec:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "2dcbcf92-2ad5-49c3-94ac-91047ccc8c5b",
"metadata": {},
"outputs": [],
"source": [
"from typing import Any, Dict, Union\n",
"\n",
"import requests\n",
"import yaml\n",
"\n",
"\n",
"def _get_schema(response_json: Union[dict, list]) -> dict:\n",
" if isinstance(response_json, list):\n",
" response_json = response_json[0] if response_json else {}\n",
" return {key: type(value).__name__ for key, value in response_json.items()}\n",
"\n",
"\n",
"def _get_api_spec() -> str:\n",
" base_url = \"https://jsonplaceholder.typicode.com\"\n",
" endpoints = [\n",
" \"/posts\",\n",
" \"/comments\",\n",
" ]\n",
" common_query_parameters = [\n",
" {\n",
" \"name\": \"_limit\",\n",
" \"in\": \"query\",\n",
" \"required\": False,\n",
" \"schema\": {\"type\": \"integer\", \"example\": 2},\n",
" \"description\": \"Limit the number of results\",\n",
" }\n",
" ]\n",
" openapi_spec: Dict[str, Any] = {\n",
" \"openapi\": \"3.0.0\",\n",
" \"info\": {\"title\": \"JSONPlaceholder API\", \"version\": \"1.0.0\"},\n",
" \"servers\": [{\"url\": base_url}],\n",
" \"paths\": {},\n",
" }\n",
" # Iterate over the endpoints to construct the paths\n",
" for endpoint in endpoints:\n",
" response = requests.get(base_url + endpoint)\n",
" if response.status_code == 200:\n",
" schema = _get_schema(response.json())\n",
" openapi_spec[\"paths\"][endpoint] = {\n",
" \"get\": {\n",
" \"summary\": f\"Get {endpoint[1:]}\",\n",
" \"parameters\": common_query_parameters,\n",
" \"responses\": {\n",
" \"200\": {\n",
" \"description\": \"Successful response\",\n",
" \"content\": {\n",
" \"application/json\": {\n",
" \"schema\": {\"type\": \"object\", \"properties\": schema}\n",
" }\n",
" },\n",
" }\n",
" },\n",
" }\n",
" }\n",
" return yaml.dump(openapi_spec, sort_keys=False)\n",
"\n",
"\n",
"api_spec = _get_api_spec()"
]
},
{
"cell_type": "markdown",
"id": "db3d6148-ae65-4a1d-91a6-59ee3e4e6efa",
"metadata": {},
"source": [
"Next we can instantiate the toolkit. We require no authorization or other headers for this API:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "63a630b3-45bb-4525-865b-083f322b944b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.agent_toolkits.openapi.toolkit import RequestsToolkit\n",
"from langchain_community.utilities.requests import TextRequestsWrapper\n",
"\n",
"toolkit = RequestsToolkit(\n",
" requests_wrapper=TextRequestsWrapper(headers={}),\n",
" allow_dangerous_requests=ALLOW_DANGEROUS_REQUEST,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "f4224a64-843a-479d-8a7b-84719e4b9d0c",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"View available tools:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "70ea0f4e-9f10-4906-894b-08df832fd515",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[RequestsGetTool(requests_wrapper=TextRequestsWrapper(headers={}, aiosession=None, auth=None, response_content_type='text', verify=True), allow_dangerous_requests=True),\n",
" RequestsPostTool(requests_wrapper=TextRequestsWrapper(headers={}, aiosession=None, auth=None, response_content_type='text', verify=True), allow_dangerous_requests=True),\n",
" RequestsPatchTool(requests_wrapper=TextRequestsWrapper(headers={}, aiosession=None, auth=None, response_content_type='text', verify=True), allow_dangerous_requests=True),\n",
" RequestsPutTool(requests_wrapper=TextRequestsWrapper(headers={}, aiosession=None, auth=None, response_content_type='text', verify=True), allow_dangerous_requests=True),\n",
" RequestsDeleteTool(requests_wrapper=TextRequestsWrapper(headers={}, aiosession=None, auth=None, response_content_type='text', verify=True), allow_dangerous_requests=True)]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tools = toolkit.get_tools()\n",
"\n",
"tools"
]
},
{
"cell_type": "markdown",
"id": "a21a6ca4-d650-4b7d-a944-1a8771b5293a",
"metadata": {},
"source": [
"- [RequestsGetTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.requests.tool.RequestsGetTool.html)\n",
"- [RequestsPostTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.requests.tool.RequestsPostTool.html)\n",
"- [RequestsPatchTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.requests.tool.RequestsPatchTool.html)\n",
"- [RequestsPutTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.requests.tool.RequestsPutTool.html)\n",
"- [RequestsDeleteTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.requests.tool.RequestsDeleteTool.html)"
]
},
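{
"cell_type": "markdown",
"metadata": {},
"source": [
"Each tool can also be called on its own. For example, a quick sketch invoking the GET tool from the `tools` list above against the test API:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"get_tool = tools[0]  # RequestsGetTool, per the list above\n",
"\n",
"# Fetch a single post; the tool returns the raw response body as text\n",
"get_tool.invoke({\"url\": \"https://jsonplaceholder.typicode.com/posts/1\"})"
]
},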
{
"cell_type": "markdown",
"id": "e2dbb304-abf2-472a-9130-f03150a40549",
"metadata": {},
"source": [
"## Use within an agent"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "db062da7-f22c-4f36-9df8-1da96c9f7538",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"system_message = \"\"\"\n",
"You have access to an API to help answer user queries.\n",
"Here is documentation on the API:\n",
"{api_spec}\n",
"\"\"\".format(api_spec=api_spec)\n",
"\n",
"agent_executor = create_react_agent(llm, tools, state_modifier=system_message)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "c1e47be9-374a-457c-928a-48f02b5530e3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Fetch the top two posts. What are their titles?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" requests_get (call_RV2SOyzCnV5h2sm4WPgG8fND)\n",
" Call ID: call_RV2SOyzCnV5h2sm4WPgG8fND\n",
" Args:\n",
" url: https://jsonplaceholder.typicode.com/posts?_limit=2\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: requests_get\n",
"\n",
"[\n",
" {\n",
" \"userId\": 1,\n",
" \"id\": 1,\n",
" \"title\": \"sunt aut facere repellat provident occaecati excepturi optio reprehenderit\",\n",
" \"body\": \"quia et suscipit\\nsuscipit recusandae consequuntur expedita et cum\\nreprehenderit molestiae ut ut quas totam\\nnostrum rerum est autem sunt rem eveniet architecto\"\n",
" },\n",
" {\n",
" \"userId\": 1,\n",
" \"id\": 2,\n",
" \"title\": \"qui est esse\",\n",
" \"body\": \"est rerum tempore vitae\\nsequi sint nihil reprehenderit dolor beatae ea dolores neque\\nfugiat blanditiis voluptate porro vel nihil molestiae ut reiciendis\\nqui aperiam non debitis possimus qui neque nisi nulla\"\n",
" }\n",
"]\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"The titles of the top two posts are:\n",
"1. \"sunt aut facere repellat provident occaecati excepturi optio reprehenderit\"\n",
"2. \"qui est esse\"\n"
]
}
],
"source": [
"example_query = \"Fetch the top two posts. What are their titles?\"\n",
"\n",
"events = agent_executor.stream(\n",
" {\"messages\": [(\"user\", example_query)]},\n",
" stream_mode=\"values\",\n",
")\n",
"for event in events:\n",
" event[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"id": "01ec4886-de3d-4fda-bd05-e3f254810969",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all API toolkit features and configurations head to the API reference for [RequestsToolkit](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.openapi.toolkit.RequestsToolkit.html)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -4,109 +4,139 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Slack\n",
"\n",
"This notebook walks through connecting LangChain to your `Slack` account.\n",
"\n",
"To use this toolkit, you will need to get a token explained in the [Slack API docs](https://api.slack.com/tutorials/tracks/getting-a-token). Once you've received a SLACK_USER_TOKEN, you can input it as an environmental variable below."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.3.2\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n",
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.3.2\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n",
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.3.2\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet slack_sdk > /dev/null\n",
"%pip install --upgrade --quiet beautifulsoup4 > /dev/null # This is optional but is useful for parsing HTML messages\n",
"%pip install --upgrade --quiet python-dotenv > /dev/null # This is for loading environmental variables from a .env file"
"---\n",
"sidebar_label: Slack\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set Environmental Variables\n",
"# SlackToolkit\n",
"\n",
"The toolkit will read the SLACK_USER_TOKEN environmental variable to authenticate the user so you need to set them here. You will also need to set your OPENAI_API_KEY to use the agent later."
"This will help you getting started with the Slack [toolkit](/docs/concepts/#toolkits). For detailed documentation of all SlackToolkit features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.slack.toolkit.SlackToolkit.html).\n",
"\n",
"## Setup\n",
"\n",
"To use this toolkit, you will need to get a token as explained in the [Slack API docs](https://api.slack.com/tutorials/tracks/getting-a-token). Once you've received a SLACK_USER_TOKEN, you can input it as an environment variable below."
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"# Set environmental variables here\n",
"# In this example, you set environmental variables by loading a .env file.\n",
"import dotenv\n",
"import getpass\n",
"import os\n",
"\n",
"dotenv.load_dotenv()"
"if not os.getenv(\"SLACK_USER_TOKEN\"):\n",
" os.environ[\"SLACK_USER_TOKEN\"] = getpass.getpass(\"Enter your Slack user token: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Toolkit and Get Tools\n",
"\n",
"To start, you need to create the toolkit, so you can access its tools later."
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[SlackGetChannel(client=<slack_sdk.web.client.WebClient object at 0x11eba6a00>),\n",
" SlackGetMessage(client=<slack_sdk.web.client.WebClient object at 0x11eba69d0>),\n",
" SlackScheduleMessage(client=<slack_sdk.web.client.WebClient object at 0x11eba65b0>),\n",
" SlackSendMessage(client=<slack_sdk.web.client.WebClient object at 0x11eba6790>)]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"This toolkit lives in the `langchain-community` package. We will also need the Slack SDK:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community slack_sdk"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, we can install beautifulsoup4 to assist in parsing HTML messages:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU beautifulsoup4 # This is optional but is useful for parsing HTML messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our toolkit:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.agent_toolkits import SlackToolkit\n",
"\n",
"toolkit = SlackToolkit()\n",
"toolkit = SlackToolkit()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"View available tools:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[SlackGetChannel(client=<slack_sdk.web.client.WebClient object at 0x10ce3a4d0>),\n",
" SlackGetMessage(client=<slack_sdk.web.client.WebClient object at 0x10ce3a0e0>),\n",
" SlackScheduleMessage(client=<slack_sdk.web.client.WebClient object at 0x10ce3a050>),\n",
" SlackSendMessage(client=<slack_sdk.web.client.WebClient object at 0x10ce3a020>)]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tools = toolkit.get_tools()\n",
"\n",
"tools"
]
},
@@ -114,7 +144,78 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use within an ReAct Agent"
"This toolkit loads:\n",
"\n",
"- [SlackGetChannel](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.slack.get_channel.SlackGetChannel.html)\n",
"- [SlackGetMessage](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.slack.get_message.SlackGetMessage.html)\n",
"- [SlackScheduleMessage](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.slack.schedule_message.SlackScheduleMessage.html)\n",
"- [SlackSendMessage](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.slack.send_message.SlackSendMessage.html)"
]
},
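{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough sketch, a tool can also be invoked directly. For example, `SlackGetChannel` takes no arguments (as the empty `Args:` in the agent trace further below also shows):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"channels_tool = tools[0]  # SlackGetChannel, per the list above\n",
"\n",
"# Returns channel metadata for the workspace; takes no arguments\n",
"channels_tool.invoke({})"
]
},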
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use within an agent\n",
"\n",
"Let's equip an agent with the Slack toolkit and query for information about a channel."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"agent_executor = create_react_agent(llm, tools)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"When was the #general channel created?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" get_channelid_name_dict (call_mINmB55OWDIkXykGXZXaL5Ar)\n",
" Call ID: call_mINmB55OWDIkXykGXZXaL5Ar\n",
" Args:\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"The #general channel was created on Unix timestamp 1671043305, which corresponds to \"Mon, 12 Dec 2022 18:41:45 GMT\" in human-readable format.\n"
]
}
],
"source": [
"example_query = \"When was the #general channel created?\"\n",
"\n",
"events = agent_executor.stream(\n",
" {\"messages\": [(\"user\", example_query)]},\n",
" stream_mode=\"values\",\n",
")\n",
"for event in events:\n",
" message = event[\"messages\"][-1]\n",
" if message.type != \"tool\": # mask sensitive information\n",
" event[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example with AgentExecutor:"
]
},
{
@@ -236,11 +337,13 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": []
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all __ModuleName__Toolkit features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.slack.toolkit.SlackToolkit.html)."
]
}
],
"metadata": {
@@ -259,7 +362,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.6"
"version": "3.10.4"
}
},
"nbformat": 4,


@@ -29,10 +29,6 @@
"\n",
"## Setup\n",
"\n",
"This uses the example `Chinook` database. \n",
"\n",
"To set it up follow [these instructions](https://database.guide/2-sample-databases-sqlite/). This notebook reads from the resulting .db file.\n",
"\n",
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
@@ -87,7 +83,62 @@
},
{
"cell_type": "markdown",
"id": "79e86f98-3436-474d-ac67-529c93726b95",
"id": "804533b1-2f16-497b-821b-c82d67fcf7b6",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"The `SQLDatabaseToolkit` toolkit requires:\n",
"\n",
"- a [SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html) object;\n",
"- a LLM or chat model (for instantiating the [QuerySQLCheckerTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.sql_database.tool.QuerySQLCheckerTool.html) tool).\n",
"\n",
"Below, we instantiate the toolkit with these objects. Let's first create a database object.\n",
"\n",
"This guide uses the example `Chinook` database based on [these instructions](https://database.guide/2-sample-databases-sqlite/).\n",
"\n",
"Below we will use the `requests` library to pull the `.sql` file and create an in-memory SQLite database. Note that this approach is lightweight, but ephemeral and not thread-safe. If you'd prefer, you can follow the instructions to save the file locally as `Chinook.db` and instantiate the database via `db = SQLDatabase.from_uri(\"sqlite:///Chinook.db\")`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "40d05f9b-5a8f-4307-8f8b-4153db0fdfa9",
"metadata": {},
"outputs": [],
"source": [
"import sqlite3\n",
"\n",
"import requests\n",
"from langchain_community.utilities.sql_database import SQLDatabase\n",
"from sqlalchemy import create_engine\n",
"from sqlalchemy.pool import StaticPool\n",
"\n",
"\n",
"def get_engine_for_chinook_db():\n",
" \"\"\"Pull sql file, populate in-memory database, and create engine.\"\"\"\n",
" url = \"https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql\"\n",
" response = requests.get(url)\n",
" sql_script = response.text\n",
"\n",
" connection = sqlite3.connect(\":memory:\", check_same_thread=False)\n",
" connection.executescript(sql_script)\n",
" return create_engine(\n",
" \"sqlite://\",\n",
" creator=lambda: connection,\n",
" poolclass=StaticPool,\n",
" connect_args={\"check_same_thread\": False},\n",
" )\n",
"\n",
"\n",
"engine = get_engine_for_chinook_db()\n",
"\n",
"db = SQLDatabase(engine)"
]
},
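{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional sanity check, you can confirm that the in-memory database was populated:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect the dialect and the tables loaded from the SQL script\n",
"print(db.dialect)\n",
"print(db.get_usable_table_names())"
]
},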
{
"cell_type": "markdown",
"id": "2b9a6326-78fd-4c42-a1cb-4316619ac449",
"metadata": {},
"source": [
"We will also need a LLM or chat model:\n",
@@ -101,8 +152,8 @@
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a5076e3d-3a04-4be9-ae82-41d7685e2197",
"execution_count": 2,
"id": "cc6e6108-83d9-404f-8f31-474c2fbf5f6c",
"metadata": {},
"outputs": [],
"source": [
@@ -116,30 +167,20 @@
},
{
"cell_type": "markdown",
"id": "804533b1-2f16-497b-821b-c82d67fcf7b6",
"id": "77925e72-4730-43c3-8726-d68cedf635f4",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"The `SQLDatabaseToolkit` toolkit requires:\n",
"\n",
"- a [SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html) object;\n",
"- a LLM or chat model (for instantiating the [QuerySQLCheckerTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.sql_database.tool.QuerySQLCheckerTool.html) tool).\n",
"\n",
"Below, we instantiate the toolkit with these objects:"
"We can now instantiate the toolkit:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "42bd5a41-672a-4a53-b70a-2f0c0555758c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit\n",
"from langchain_community.utilities.sql_database import SQLDatabase\n",
"\n",
"db = SQLDatabase.from_uri(\"sqlite:///Chinook.db\")\n",
"\n",
"toolkit = SQLDatabaseToolkit(db=db, llm=llm)"
]
@@ -156,20 +197,20 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "a18c3e69-bee0-4f5d-813e-eeb540f41b98",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[QuerySQLDataBaseTool(description=\"Input to this tool is a detailed and correct SQL query, output is a result from the database. If the query is not correct, an error message will be returned. If an error is returned, rewrite the query, check the query, and try again. If you encounter an issue with Unknown column 'xxxx' in 'field list', use sql_db_schema to query the correct table fields.\", db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x10e4c14b0>),\n",
" InfoSQLDatabaseTool(description='Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables. Be sure that the tables actually exist by calling sql_db_list_tables first! Example Input: table1, table2, table3', db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x10e4c14b0>),\n",
" ListSQLDatabaseTool(db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x10e4c14b0>),\n",
" QuerySQLCheckerTool(description='Use this tool to double check if your query is correct before executing it. Always use this tool before executing a query with sql_db_query!', db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x10e4c14b0>, llm=ChatOpenAI(client=<openai.resources.chat.completions.Completions object at 0x10e4a3190>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x10e4c08e0>, temperature=0.0, openai_api_key=SecretStr('**********'), openai_proxy=''), llm_chain=LLMChain(prompt=PromptTemplate(input_variables=['dialect', 'query'], template='\\n{query}\\nDouble check the {dialect} query above for common mistakes, including:\\n- Using NOT IN with NULL values\\n- Using UNION when UNION ALL should have been used\\n- Using BETWEEN for exclusive ranges\\n- Data type mismatch in predicates\\n- Properly quoting identifiers\\n- Using the correct number of arguments for functions\\n- Casting to the correct data type\\n- Using the proper columns for joins\\n\\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.\\n\\nOutput the final SQL query only.\\n\\nSQL Query: '), llm=ChatOpenAI(client=<openai.resources.chat.completions.Completions object at 0x10e4a3190>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x10e4c08e0>, temperature=0.0, openai_api_key=SecretStr('**********'), openai_proxy='')))]"
"[QuerySQLDataBaseTool(description=\"Input to this tool is a detailed and correct SQL query, output is a result from the database. If the query is not correct, an error message will be returned. If an error is returned, rewrite the query, check the query, and try again. If you encounter an issue with Unknown column 'xxxx' in 'field list', use sql_db_schema to query the correct table fields.\", db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x105e02860>),\n",
" InfoSQLDatabaseTool(description='Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables. Be sure that the tables actually exist by calling sql_db_list_tables first! Example Input: table1, table2, table3', db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x105e02860>),\n",
" ListSQLDatabaseTool(db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x105e02860>),\n",
" QuerySQLCheckerTool(description='Use this tool to double check if your query is correct before executing it. Always use this tool before executing a query with sql_db_query!', db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x105e02860>, llm=ChatOpenAI(client=<openai.resources.chat.completions.Completions object at 0x1148a97b0>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x1148aaec0>, temperature=0.0, openai_api_key=SecretStr('**********'), openai_proxy=''), llm_chain=LLMChain(prompt=PromptTemplate(input_variables=['dialect', 'query'], template='\\n{query}\\nDouble check the {dialect} query above for common mistakes, including:\\n- Using NOT IN with NULL values\\n- Using UNION when UNION ALL should have been used\\n- Using BETWEEN for exclusive ranges\\n- Data type mismatch in predicates\\n- Properly quoting identifiers\\n- Using the correct number of arguments for functions\\n- Casting to the correct data type\\n- Using the proper columns for joins\\n\\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.\\n\\nOutput the final SQL query only.\\n\\nSQL Query: '), llm=ChatOpenAI(client=<openai.resources.chat.completions.Completions object at 0x1148a97b0>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x1148aaec0>, temperature=0.0, openai_api_key=SecretStr('**********'), openai_proxy='')))]"
]
},
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -203,7 +244,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"id": "eda12f8b-be90-4697-ac84-2ece9e2d1708",
"metadata": {},
"outputs": [
@@ -226,7 +267,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "3470ae96-e5e5-4717-a6d6-d7d28c7b7347",
"metadata": {},
"outputs": [],
@@ -244,7 +285,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"id": "48bca92c-9b4b-4d5c-bcce-1b239c9e901c",
"metadata": {},
"outputs": [],
@@ -266,7 +307,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"id": "39e6d2bf-3194-4aba-854b-63faf919157b",
"metadata": {},
"outputs": [
@@ -279,8 +320,8 @@
"Which country's customers spent the most?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" sql_db_list_tables (call_xK4hUKXF8wb1tPM1s5e6gZVb)\n",
" Call ID: call_xK4hUKXF8wb1tPM1s5e6gZVb\n",
" sql_db_list_tables (call_eiheSxiL0s90KE50XyBnBtJY)\n",
" Call ID: call_eiheSxiL0s90KE50XyBnBtJY\n",
" Args:\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: sql_db_list_tables\n",
@@ -288,8 +329,8 @@
"Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" sql_db_schema (call_XnagYKuUNXo4FgK0a0bUSlIM)\n",
" Call ID: call_XnagYKuUNXo4FgK0a0bUSlIM\n",
" sql_db_schema (call_YKwGWt4UUVmxxY7vjjBDzFLJ)\n",
" Call ID: call_YKwGWt4UUVmxxY7vjjBDzFLJ\n",
" Args:\n",
" table_names: Customer, Invoice, InvoiceLine\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
@@ -366,8 +407,8 @@
"*/\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" sql_db_query (call_tnibWEiAbTD0Al4u4lFRCcO0)\n",
" Call ID: call_tnibWEiAbTD0Al4u4lFRCcO0\n",
" sql_db_query (call_7WBDcMxl1h7MnI05njx1q8V9)\n",
" Call ID: call_7WBDcMxl1h7MnI05njx1q8V9\n",
" Args:\n",
" query: SELECT c.Country, SUM(i.Total) AS TotalSpent FROM Customer c JOIN Invoice i ON c.CustomerId = i.CustomerId GROUP BY c.Country ORDER BY TotalSpent DESC LIMIT 1\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
@@ -401,7 +442,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 9,
"id": "23c1235c-6d18-43e4-98ab-85b426b53d94",
"metadata": {},
"outputs": [
@@ -414,8 +455,8 @@
"Who are the top 3 best selling artists?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" sql_db_query (call_EBmGkOb4ceEc6VNCszE9s9N7)\n",
" Call ID: call_EBmGkOb4ceEc6VNCszE9s9N7\n",
" sql_db_query (call_9F6Bp2vwsDkeLW6FsJFqLiet)\n",
" Call ID: call_9F6Bp2vwsDkeLW6FsJFqLiet\n",
" Args:\n",
" query: SELECT artist_name, SUM(quantity) AS total_sold FROM sales GROUP BY artist_name ORDER BY total_sold DESC LIMIT 3\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
@@ -426,8 +467,8 @@
"(Background on this error at: https://sqlalche.me/e/20/e3q8)\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" sql_db_list_tables (call_mEBlNVGQmf6IiikdqlFSoBzN)\n",
" Call ID: call_mEBlNVGQmf6IiikdqlFSoBzN\n",
" sql_db_list_tables (call_Gx5adzWnrBDIIxzUDzsn83zO)\n",
" Call ID: call_Gx5adzWnrBDIIxzUDzsn83zO\n",
" Args:\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: sql_db_list_tables\n",
@@ -435,8 +476,8 @@
"Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" sql_db_schema (call_ZEnt0V29DVZf2RDpyVDqCjyN)\n",
" Call ID: call_ZEnt0V29DVZf2RDpyVDqCjyN\n",
" sql_db_schema (call_ftywrZgEgGWLrnk9dYC0xtZv)\n",
" Call ID: call_ftywrZgEgGWLrnk9dYC0xtZv\n",
" Args:\n",
" table_names: Artist, Album, InvoiceLine\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
@@ -495,8 +536,8 @@
"*/\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" sql_db_query (call_6tHsI79n3dYWphezh3fp9EKp)\n",
" Call ID: call_6tHsI79n3dYWphezh3fp9EKp\n",
" sql_db_query (call_i6n3lmS7E2ZivN758VOayTiy)\n",
" Call ID: call_i6n3lmS7E2ZivN758VOayTiy\n",
" Args:\n",
" query: SELECT Artist.Name AS artist_name, SUM(InvoiceLine.Quantity) AS total_sold FROM Artist JOIN Album ON Artist.ArtistId = Album.ArtistId JOIN Track ON Album.AlbumId = Track.AlbumId JOIN InvoiceLine ON Track.TrackId = InvoiceLine.TrackId GROUP BY Artist.Name ORDER BY total_sold DESC LIMIT 3\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",

View File

@@ -38,7 +38,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet databricks-sdk langchain-community langchain-openai"
"%pip install --upgrade --quiet databricks-sdk langchain-community mlflow"
]
},
{
@@ -47,9 +47,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"from langchain_community.chat_models.databricks import ChatDatabricks\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\")"
"llm = ChatDatabricks(endpoint=\"databricks-meta-llama-3-70b-instruct\")"
]
},
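As a quick smoke test of the swapped-in model, you can invoke it directly. A minimal sketch, assuming your workspace credentials are configured (e.g., via the `DATABRICKS_HOST` and `DATABRICKS_TOKEN` environment variables):

```python
# Verify the serving endpoint responds before wiring it into a chain.
response = llm.invoke("What is MLflow?")
print(response.content)
```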
{

View File

@@ -381,7 +381,7 @@
"id": "f8014c9d",
"metadata": {},
"source": [
"Now, we can initalize the agent with the LLM and the tools.\n",
"Now, we can initialize the agent with the LLM and the tools.\n",
"\n",
"Note that we are passing in the `model`, not `model_with_tools`. That is because `create_react_agent` will call `.bind_tools` for us under the hood."
]

View File

@@ -19,17 +19,28 @@
"\n",
":::\n",
"\n",
"The popularity of projects like [PrivateGPT](https://github.com/imartinez/privateGPT), [llama.cpp](https://github.com/ggerganov/llama.cpp), [GPT4All](https://github.com/nomic-ai/gpt4all), and [llamafile](https://github.com/Mozilla-Ocho/llamafile) underscore the importance of running LLMs locally.\n",
"The popularity of projects like [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://github.com/ollama/ollama), and [llamafile](https://github.com/Mozilla-Ocho/llamafile) underscore the importance of running LLMs locally.\n",
"\n",
"LangChain has [integrations](https://integrations.langchain.com/) with many open-source LLMs that can be run locally.\n",
"LangChain has integrations with [many open-source LLM providers](/docs/how_to/local_llms) that can be run locally.\n",
"\n",
"See [here](/docs/how_to/local_llms) for setup instructions for these LLMs. \n",
"This guide will show how to run `LLaMA 3.1` via one provider, [Ollama](/docs/integrations/providers/ollama/) locally (e.g., on your laptop) using local embeddings and a local LLM. However, you can set up and swap in other local providers, such as [LlamaCPP](/docs/integrations/chat/llamacpp/) if you prefer.\n",
"\n",
"For example, here we show how to run `GPT4All` or `LLaMA2` locally (e.g., on your laptop) using local embeddings and a local LLM.\n",
"**Note:** This guide uses a [chat model](/docs/concepts/#chat-models) wrapper that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models directly with a [text-in/text-out LLM](/docs/concepts/#llms) wrapper, you may need to use a prompt tailed for your specific model. This will often [require the inclusion of special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). [Here's an example for LLaMA 2](https://smith.langchain.com/hub/rlm/rag-prompt-llama).\n",
"\n",
"## Document Loading \n",
"## Setup\n",
"\n",
"First, install packages needed for local embeddings and vector storage."
"First we'll need to set up Ollama.\n",
"\n",
"The instructions [on their GitHub repo](https://github.com/ollama/ollama) provide details, which we summarize here:\n",
"\n",
"- [Download](https://ollama.com/download) and run their desktop app\n",
"- From command line, fetch models from [this list of options](https://ollama.com/library). For this guide, you'll need:\n",
" - A general purpose model like `llama3.1:8b`, which you can pull with something like `ollama pull llama3.1:8b`\n",
" - A [text embedding model](https://ollama.com/search?c=embedding) like `nomic-embed-text`, which you can pull with something like `ollama pull nomic-embed-text`\n",
"- When the app is running, all models are automatically served on `localhost:11434`\n",
"- Note that your model choice will depend on your hardware capabilities\n",
"\n",
"Next, install packages needed for local embeddings, vector storage, and inference."
]
},
{
@@ -39,7 +50,22 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-community langchainhub gpt4all langchain-chroma "
"# Document loading, retrieval methods and text splitting\n",
"%pip install -qU langchain langchain_community\n",
"\n",
"# Local vector store via Chroma\n",
"%pip install -qU langchain_chroma\n",
"\n",
"# Local inference and embeddings via Ollama\n",
"%pip install -qU langchain_ollama"
]
},
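Before moving on, it can help to confirm that Ollama is actually serving the models you pulled. A minimal sketch, assuming the default `localhost:11434` endpoint and Ollama's standard `/api/tags` listing route:

```python
import requests

# List the locally available models; llama3.1:8b and nomic-embed-text
# should both appear if the pulls above succeeded.
tags = requests.get("http://localhost:11434/api/tags").json()
print([m["name"] for m in tags["models"]])
```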
{
"cell_type": "markdown",
"id": "02b7914e",
"metadata": {},
"source": [
"You can also [see this page](/docs/integrations/text_embedding/) for a full list of available embeddings models"
]
},
{
@@ -47,20 +73,22 @@
"id": "5e7543fa",
"metadata": {},
"source": [
"Load and split an example document.\n",
"## Document Loading\n",
"\n",
"We'll use a blog post on agents as an example."
"Now let's load and split an example document.\n",
"\n",
"We'll use a [blog post](https://lilianweng.github.io/posts/2023-06-23-agent/) by Lilian Weng on agents as an example."
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "f8cf5765",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n",
"data = loader.load()\n",
@@ -74,20 +102,22 @@
"id": "131d5059",
"metadata": {},
"source": [
"Next, the below steps will download the `GPT4All` embeddings locally (if you don't already have them)."
"Next, the below steps will initialize your vector store. We use [`nomic-embed-text`](https://ollama.com/library/nomic-embed-text), but you can explore other providers or options as well:"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"id": "fdce8923",
"metadata": {},
"outputs": [],
"source": [
"from langchain_chroma import Chroma\n",
"from langchain_community.embeddings import GPT4AllEmbeddings\n",
"from langchain_ollama import OllamaEmbeddings\n",
"\n",
"vectorstore = Chroma.from_documents(documents=all_splits, embedding=GPT4AllEmbeddings())"
"local_embeddings = OllamaEmbeddings(model=\"nomic-embed-text\")\n",
"\n",
"vectorstore = Chroma.from_documents(documents=all_splits, embedding=local_embeddings)"
]
},
{
@@ -95,12 +125,12 @@
"id": "29137915",
"metadata": {},
"source": [
"Test similarity search is working with our local embeddings."
"And now we have a working vector store! Test that similarity search is working:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "b0c55e98",
"metadata": {},
"outputs": [
@@ -110,7 +140,7 @@
"4"
]
},
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -123,17 +153,17 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"id": "32b43339",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='Task decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.', metadata={'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\"})"
"Document(metadata={'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\"}, page_content='Task decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.')"
]
},
"execution_count": 7,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -142,260 +172,102 @@
"docs[0]"
]
},
{
"cell_type": "markdown",
"id": "557cd9b8",
"metadata": {},
"source": [
"## Model \n",
"\n",
"### LLaMA2\n",
"\n",
"Note: new versions of `llama-cpp-python` use GGUF model files (see [here](https://github.com/abetlen/llama-cpp-python/pull/633)).\n",
"\n",
"If you have an existing GGML model, see [here](/docs/integrations/llms/llamacpp) for instructions for conversion for GGUF. \n",
" \n",
"And / or, you can download a GGUF converted model (e.g., [here](https://huggingface.co/TheBloke)).\n",
"\n",
"Finally, as noted in detail [here](/docs/how_to/local_llms) install `llama-cpp-python`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f218576",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet llama-cpp-python"
]
},
{
"cell_type": "markdown",
"id": "0dd1804f",
"metadata": {},
"source": [
"To enable use of GPU on Apple Silicon, follow the steps [here](https://github.com/abetlen/llama-cpp-python/blob/main/docs/install/macos.md) to use the Python binding `with Metal support`.\n",
"\n",
"In particular, ensure that `conda` is using the correct virtual environment that you created (`miniforge3`).\n",
"\n",
"E.g., for me:\n",
"\n",
"```\n",
"conda activate /Users/rlm/miniforge3/envs/llama\n",
"```\n",
"\n",
"With this confirmed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5884779a-957e-4c4c-b447-bc8385edc67e",
"metadata": {},
"outputs": [],
"source": [
"! CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 /Users/rlm/miniforge3/envs/llama/bin/pip install -U llama-cpp-python --no-cache-dir"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "cd7164e3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import LlamaCpp"
]
},
{
"cell_type": "markdown",
"id": "fcf81052",
"metadata": {},
"source": [
"Setting model parameters as noted in the [llama.cpp docs](/docs/integrations/llms/llamacpp)."
"Next, set up a model. We use Ollama with `llama3.1:8b` here, but you can [explore other providers](/docs/how_to/local_llms/) or [model options depending on your hardware setup](https://ollama.com/library):"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"id": "af1176bb-d52a-4cf0-b983-8b7433d45b4f",
"metadata": {},
"outputs": [],
"source": [
"n_gpu_layers = 1 # Metal set to 1 is enough.\n",
"n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.\n",
"from langchain_ollama import ChatOllama\n",
"\n",
"# Make sure the model path is correct for your system!\n",
"llm = LlamaCpp(\n",
" model_path=\"/Users/rlm/Desktop/Code/llama.cpp/models/llama-2-13b-chat.ggufv3.q4_0.bin\",\n",
" n_gpu_layers=n_gpu_layers,\n",
" n_batch=n_batch,\n",
" n_ctx=2048,\n",
" f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls\n",
" verbose=True,\n",
"model = ChatOllama(\n",
" model=\"llama3.1:8b\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3831b16a",
"id": "8c4f7adf",
"metadata": {},
"source": [
"Note that these indicate that [Metal was enabled properly](/docs/integrations/llms/llamacpp):\n",
"\n",
"```\n",
"ggml_metal_init: allocating\n",
"ggml_metal_init: using MPS\n",
"```"
"Test it to make sure you've set everything up properly:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 7,
"id": "bf0162e0-8c41-4344-88ae-ff2bbaeb12eb",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"by jonathan \n",
"**The scene is set: a packed arena, the crowd on their feet. In the blue corner, we have Stephen Colbert, aka \"The O'Reilly Factor\" himself. In the red corner, the challenger, John Oliver. The judges are announced as Tina Fey, Larry Wilmore, and Patton Oswalt. The crowd roars as the two opponents face off.**\n",
"\n",
"Here's the hypothetical rap battle:\n",
"**Stephen Colbert (aka \"The Truth with a Twist\"):**\n",
"Yo, I'm the king of satire, the one they all fear\n",
"My show's on late, but my jokes are clear\n",
"I skewer the politicians, with precision and might\n",
"They tremble at my wit, day and night\n",
"\n",
"[Stephen Colbert]: Yo, this is Stephen Colbert, known for my comedy show. I'm here to put some sense in your mind, like an enema do-go. Your opponent? A man of laughter and witty quips, John Oliver! Now let's see who gets the most laughs while taking shots at each other\n",
"**John Oliver:**\n",
"Hold up, Stevie boy, you may have had your time\n",
"But I'm the new kid on the block, with a different prime\n",
"Time to wake up from that 90s coma, son\n",
"My show's got bite, and my facts are never done\n",
"\n",
"[John Oliver]: Yo, this is John Oliver, known for my own comedy show. I'm here to take your mind on an adventure through wit and humor. But first, allow me to you to our contestant: Stephen Colbert! His show has been around since the '90s, but it's time to see who can out-rap whom\n",
"**Stephen Colbert:**\n",
"Oh, so you think you're the one, with the \"Last Week\" crown\n",
"But your jokes are stale, like the ones I wore down\n",
"I'm the master of absurdity, the lord of the spin\n",
"You're just a British import, trying to fit in\n",
"\n",
"[Stephen Colbert]: You claim to be a witty man, John Oliver, with your British charm and clever remarks. But my knows that I'm America's funnyman! Who's the one taking you? Nobody!\n",
"**John Oliver:**\n",
"Stevie, my friend, you may have been the first\n",
"But I've got the skill and the wit, that's never blurred\n",
"My show's not afraid, to take on the fray\n",
"I'm the one who'll make you think, come what may\n",
"\n",
"[John Oliver]: Hey Stephen Colbert, don't get too cocky. You may"
"**Stephen Colbert:**\n",
"Well, it's time for a showdown, like two old friends\n",
"Let's see whose satire reigns supreme, till the very end\n",
"But I've got a secret, that might just seal your fate\n",
"My humor's contagious, and it's already too late!\n",
"\n",
"**John Oliver:**\n",
"Bring it on, Stevie! I'm ready for you\n",
"I'll take on your jokes, and show them what to do\n",
"My sarcasm's sharp, like a scalpel in the night\n",
"You're just a relic of the past, without a fight\n",
"\n",
"**The judges deliberate, weighing the rhymes and the flow. Finally, they announce their decision:**\n",
"\n",
"Tina Fey: I've got to go with John Oliver. His jokes were sharper, and his delivery was smoother.\n",
"\n",
"Larry Wilmore: Agreed! But Stephen Colbert's still got that old-school charm.\n",
"\n",
"Patton Oswalt: You know what? It's a tie. Both of them brought the heat!\n",
"\n",
"**The crowd goes wild as both opponents take a bow. The rap battle may be over, but the satire war is just beginning...\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 4481.74 ms\n",
"llama_print_timings: sample time = 183.05 ms / 256 runs ( 0.72 ms per token, 1398.53 tokens per second)\n",
"llama_print_timings: prompt eval time = 456.05 ms / 13 tokens ( 35.08 ms per token, 28.51 tokens per second)\n",
"llama_print_timings: eval time = 7375.20 ms / 255 runs ( 28.92 ms per token, 34.58 tokens per second)\n",
"llama_print_timings: total time = 8388.92 ms\n"
]
},
{
"data": {
"text/plain": [
"\"by jonathan \\n\\nHere's the hypothetical rap battle:\\n\\n[Stephen Colbert]: Yo, this is Stephen Colbert, known for my comedy show. I'm here to put some sense in your mind, like an enema do-go. Your opponent? A man of laughter and witty quips, John Oliver! Now let's see who gets the most laughs while taking shots at each other\\n\\n[John Oliver]: Yo, this is John Oliver, known for my own comedy show. I'm here to take your mind on an adventure through wit and humor. But first, allow me to you to our contestant: Stephen Colbert! His show has been around since the '90s, but it's time to see who can out-rap whom\\n\\n[Stephen Colbert]: You claim to be a witty man, John Oliver, with your British charm and clever remarks. But my knows that I'm America's funnyman! Who's the one taking you? Nobody!\\n\\n[John Oliver]: Hey Stephen Colbert, don't get too cocky. You may\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.invoke(\"Simulate a rap battle between Stephen Colbert and John Oliver\")"
]
},
{
"cell_type": "markdown",
"id": "0d9579a7",
"metadata": {},
"source": [
"### GPT4All\n",
"response_message = model.invoke(\n",
" \"Simulate a rap battle between Stephen Colbert and John Oliver\"\n",
")\n",
"\n",
"Similarly, we can use `GPT4All`.\n",
"\n",
"[Download the GPT4All model binary](/docs/integrations/llms/gpt4all).\n",
"\n",
"The Model Explorer on the [GPT4All](https://gpt4all.io/index.html) is a great way to choose and download a model.\n",
"\n",
"Then, specify the path that you downloaded to to.\n",
"\n",
"E.g., for me, the model lives here:\n",
"\n",
"`/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "57c1aec0-04c7-479e-b9bf-af3c547ba0a3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import GPT4All\n",
"\n",
"gpt4all = GPT4All(\n",
" model=\"/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin\",\n",
" max_tokens=2048,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e6d012e4-0eef-4734-a826-89ec74fe9f88",
"metadata": {},
"source": [
"### llamafile\n",
"\n",
"One of the simplest ways to run an LLM locally is using a [llamafile](https://github.com/Mozilla-Ocho/llamafile). All you need to do is:\n",
"\n",
"1) Download a llamafile from [HuggingFace](https://huggingface.co/models?other=llamafile)\n",
"2) Make the file executable\n",
"3) Run the file\n",
"\n",
"llamafiles bundle model weights and a [specially-compiled](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#technical-details) version of [`llama.cpp`](https://github.com/ggerganov/llama.cpp) into a single file that can run on most computers without any additional dependencies. They also come with an embedded inference server that provides an [API](https://github.com/Mozilla-Ocho/llamafile/blob/main/llama.cpp/server/README.md#api-endpoints) for interacting with your model. \n",
"\n",
"Here's a simple bash script that shows all 3 setup steps:\n",
"\n",
"```bash\n",
"# Download a llamafile from HuggingFace\n",
"wget https://huggingface.co/jartine/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile\n",
"\n",
"# Make the file executable. On Windows, instead just rename the file to end in \".exe\".\n",
"chmod +x TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile\n",
"\n",
"# Start the model server. Listens at http://localhost:8080 by default.\n",
"./TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile --server --nobrowser\n",
"```\n",
"\n",
"After you run the above setup steps, you can interact with the model via LangChain:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "735e45b6-9aff-463e-aae4-bbf8ac2b21c5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n-1 1/2 (8 oz. Pounds) ground beef, browned and cooked until no longer pink\\n-3 cups whole wheat spaghetti\\n-4 (10 oz) cans diced tomatoes with garlic and basil\\n-2 eggs, beaten\\n-1 cup grated parmesan cheese\\n-1/2 teaspoon salt\\n-1/4 teaspoon black pepper\\n-1 cup breadcrumbs (16 oz)\\n-2 tablespoons olive oil\\n\\nInstructions:\\n1. Cook spaghetti according to package directions. Drain and set aside.\\n2. In a large skillet, brown ground beef over medium heat until no longer pink. Drain any excess grease.\\n3. Stir in diced tomatoes with garlic and basil, and season with salt and pepper. Cook for 5 to 7 minutes or until sauce is heated through. Set aside.\\n4. In a large bowl, beat eggs with a fork or whisk until fluffy. Add cheese, salt, and black pepper. Set aside.\\n5. In another bowl, combine breadcrumbs and olive oil. Dip each spaghetti into the egg mixture and then coat in the breadcrumb mixture. Place on baking sheet lined with parchment paper to prevent sticking. Repeat until all spaghetti are coated.\\n6. Heat oven to 375 degrees. Bake for 18 to 20 minutes, or until lightly golden brown.\\n7. Serve hot with meatballs and sauce on the side. Enjoy!'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.llms.llamafile import Llamafile\n",
"\n",
"llamafile = Llamafile()\n",
"\n",
"llamafile.invoke(\"Here is my grandmother's beloved recipe for spaghetti and meatballs:\")"
"print(response_message.content)"
]
},
{
@@ -405,79 +277,49 @@
"source": [
"## Using in a chain\n",
"\n",
"We can create a summarization chain with either model by passing in the retrieved docs and a simple prompt.\n",
"We can create a summarization chain with either model by passing in retrieved docs and a simple prompt.\n",
"\n",
"It formats the prompt template using the input key values provided and passes the formatted string to `GPT4All`, `LLama-V2`, or another specified LLM."
"It formats the prompt template using the input key values provided and passes the formatted string to the specified model:"
]
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": 8,
"id": "18a3716d",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Based on the retrieved documents, the main themes are:\n",
"1. Task decomposition: The ability to break down complex tasks into smaller subtasks, which can be handled by an LLM or other components of the agent system.\n",
"2. LLM as the core controller: The use of a large language model (LLM) as the primary controller of an autonomous agent system, complemented by other key components such as a knowledge graph and a planner.\n",
"3. Potentiality of LLM: The idea that LLMs have the potential to be used as powerful general problem solvers, not just for generating well-written copies but also for solving complex tasks and achieving human-like intelligence.\n",
"4. Challenges in long-term planning: The challenges in planning over a lengthy history and effectively exploring the solution space, which are important limitations of current LLM-based autonomous agent systems."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 1191.88 ms\n",
"llama_print_timings: sample time = 134.47 ms / 193 runs ( 0.70 ms per token, 1435.25 tokens per second)\n",
"llama_print_timings: prompt eval time = 39470.18 ms / 1055 tokens ( 37.41 ms per token, 26.73 tokens per second)\n",
"llama_print_timings: eval time = 8090.85 ms / 192 runs ( 42.14 ms per token, 23.73 tokens per second)\n",
"llama_print_timings: total time = 47943.12 ms\n"
]
},
{
"data": {
"text/plain": [
"'\\nBased on the retrieved documents, the main themes are:\\n1. Task decomposition: The ability to break down complex tasks into smaller subtasks, which can be handled by an LLM or other components of the agent system.\\n2. LLM as the core controller: The use of a large language model (LLM) as the primary controller of an autonomous agent system, complemented by other key components such as a knowledge graph and a planner.\\n3. Potentiality of LLM: The idea that LLMs have the potential to be used as powerful general problem solvers, not just for generating well-written copies but also for solving complex tasks and achieving human-like intelligence.\\n4. Challenges in long-term planning: The challenges in planning over a lengthy history and effectively exploring the solution space, which are important limitations of current LLM-based autonomous agent systems.'"
"'The main themes in these documents are:\\n\\n1. **Task Decomposition**: The process of breaking down complex tasks into smaller, manageable subgoals is crucial for efficient task handling.\\n2. **Autonomous Agent System**: A system powered by Large Language Models (LLMs) that can perform planning, reflection, and refinement to improve the quality of final results.\\n3. **Challenges in Planning and Decomposition**:\\n\\t* Long-term planning and task decomposition are challenging for LLMs.\\n\\t* Adjusting plans when faced with unexpected errors is difficult for LLMs.\\n\\t* Humans learn from trial and error, making them more robust than LLMs in certain situations.\\n\\nOverall, the documents highlight the importance of task decomposition and planning in autonomous agent systems powered by LLMs, as well as the challenges that still need to be addressed.'"
]
},
"execution_count": 27,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"# Prompt\n",
"prompt = PromptTemplate.from_template(\n",
"prompt = ChatPromptTemplate.from_template(\n",
" \"Summarize the main themes in these retrieved docs: {docs}\"\n",
")\n",
"\n",
"\n",
"# Chain\n",
"# Convert loaded documents into strings by concatenating their content\n",
"# and ignoring metadata\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"chain = {\"docs\": format_docs} | prompt | llm | StrOutputParser()\n",
"chain = {\"docs\": format_docs} | prompt | model | StrOutputParser()\n",
"\n",
"# Run\n",
"question = \"What are the approaches to Task Decomposition?\"\n",
"\n",
"docs = vectorstore.similarity_search(question)\n",
"\n",
"chain.invoke(docs)"
]
},
@@ -486,185 +328,55 @@
"id": "3cce6977-52e7-4944-89b4-c161d04f6698",
"metadata": {},
"source": [
"## Q&A \n",
"## Q&A\n",
"\n",
"We can also use the LangChain Prompt Hub to store and fetch prompts that are model-specific.\n",
"\n",
"Let's try with a default RAG prompt, [here](https://smith.langchain.com/hub/rlm/rag-prompt)."
"You can also perform question-answering with your local model and vector store. Here's an example with a simple string prompt:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "59ed5f0d-7089-41cc-8486-af37b690dd33",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context', 'question'], template=\"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\\nQuestion: {question} \\nContext: {context} \\nAnswer:\"))]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import hub\n",
"\n",
"rag_prompt = hub.pull(\"rlm/rag-prompt\")\n",
"rag_prompt.messages"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "c01c1725",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Task can be done by down a task into smaller subtasks, using simple prompting like \"Steps for XYZ.\" or task-specific like \"Write a story outline\" for writing a novel."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 11326.20 ms\n",
"llama_print_timings: sample time = 33.03 ms / 47 runs ( 0.70 ms per token, 1422.86 tokens per second)\n",
"llama_print_timings: prompt eval time = 1387.31 ms / 242 tokens ( 5.73 ms per token, 174.44 tokens per second)\n",
"llama_print_timings: eval time = 1321.62 ms / 46 runs ( 28.73 ms per token, 34.81 tokens per second)\n",
"llama_print_timings: total time = 2801.08 ms\n"
]
},
{
"data": {
"text/plain": [
"{'output_text': '\\nTask can be done by down a task into smaller subtasks, using simple prompting like \"Steps for XYZ.\" or task-specific like \"Write a story outline\" for writing a novel.'}"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.runnables import RunnablePassthrough, RunnablePick\n",
"\n",
"# Chain\n",
"chain = (\n",
" RunnablePassthrough.assign(context=RunnablePick(\"context\") | format_docs)\n",
" | rag_prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")\n",
"\n",
"# Run\n",
"chain.invoke({\"context\": docs, \"question\": question})"
]
},
{
"cell_type": "markdown",
"id": "2e5913f0-cf92-4e21-8794-0502ba11b202",
"metadata": {},
"source": [
"Now, let's try with [a prompt specifically for LLaMA](https://smith.langchain.com/hub/rlm/rag-prompt-llama), which [includes special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2)."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "78f6862d-b7a6-4e03-84e4-45667185bf9b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptTemplate(input_variables=['question', 'context'], output_parser=None, partial_variables={}, messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['question', 'context'], output_parser=None, partial_variables={}, template=\"[INST]<<SYS>> You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.<</SYS>> \\nQuestion: {question} \\nContext: {context} \\nAnswer: [/INST]\", template_format='f-string', validate_template=True), additional_kwargs={})])"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Prompt\n",
"rag_prompt_llama = hub.pull(\"rlm/rag-prompt-llama\")\n",
"rag_prompt_llama.messages"
]
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 9,
"id": "67cefb46-acd3-4c2a-a8f6-b62c7c3e30dc",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" Sure, I'd be happy to help! Based on the context, here are some to task:\n",
"\n",
"1. LLM with simple prompting: This using a large model (LLM) with simple prompts like \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\" to decompose tasks into smaller steps.\n",
"2. Task-specific: Another is to use task-specific, such as \"Write a story outline\" for writing a novel, to guide the of tasks.\n",
"3. Human inputs:, human inputs can be used to supplement the process, in cases where the task a high degree of creativity or expertise.\n",
"\n",
"As fores in long-term and task, one major is that LLMs to adjust plans when faced with errors, making them less robust to humans who learn from trial and error."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 11326.20 ms\n",
"llama_print_timings: sample time = 144.81 ms / 207 runs ( 0.70 ms per token, 1429.47 tokens per second)\n",
"llama_print_timings: prompt eval time = 1506.13 ms / 258 tokens ( 5.84 ms per token, 171.30 tokens per second)\n",
"llama_print_timings: eval time = 6231.92 ms / 206 runs ( 30.25 ms per token, 33.06 tokens per second)\n",
"llama_print_timings: total time = 8158.41 ms\n"
]
},
{
"data": {
"text/plain": [
"{'output_text': ' Sure, I\\'d be happy to help! Based on the context, here are some to task:\\n\\n1. LLM with simple prompting: This using a large model (LLM) with simple prompts like \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\" to decompose tasks into smaller steps.\\n2. Task-specific: Another is to use task-specific, such as \"Write a story outline\" for writing a novel, to guide the of tasks.\\n3. Human inputs:, human inputs can be used to supplement the process, in cases where the task a high degree of creativity or expertise.\\n\\nAs fores in long-term and task, one major is that LLMs to adjust plans when faced with errors, making them less robust to humans who learn from trial and error.'}"
"'Task decomposition can be done through (1) simple prompting using LLM, (2) task-specific instructions, or (3) human inputs. This approach helps break down large tasks into smaller, manageable subgoals for efficient handling of complex tasks. It enables agents to plan ahead and improve the quality of final results through reflection and refinement.'"
]
},
"execution_count": 26,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Chain\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"RAG_TEMPLATE = \"\"\"\n",
"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\n",
"\n",
"<context>\n",
"{context}\n",
"</context>\n",
"\n",
"Answer the following question:\n",
"\n",
"{question}\"\"\"\n",
"\n",
"rag_prompt = ChatPromptTemplate.from_template(RAG_TEMPLATE)\n",
"\n",
"chain = (\n",
" RunnablePassthrough.assign(context=RunnablePick(\"context\") | format_docs)\n",
" | rag_prompt_llama\n",
" | llm\n",
" RunnablePassthrough.assign(context=lambda input: format_docs(input[\"context\"]))\n",
" | rag_prompt\n",
" | model\n",
" | StrOutputParser()\n",
")\n",
"\n",
"question = \"What are the approaches to Task Decomposition?\"\n",
"\n",
"docs = vectorstore.similarity_search(question)\n",
"\n",
"# Run\n",
"chain.invoke({\"context\": docs, \"question\": question})"
]
@@ -676,82 +388,64 @@
"source": [
"## Q&A with retrieval\n",
"\n",
"Instead of manually passing in docs, we can automatically retrieve them from our vector store based on the user question.\n",
"\n",
"This will use a QA default prompt (shown [here](https://github.com/langchain-ai/langchain/blob/275b926cf745b5668d3ea30236635e20e7866442/langchain/chains/retrieval_qa/prompt.py#L4)) and will retrieve from the vectorDB."
"Finally, instead of manually passing in docs, you can automatically retrieve them from our vector store based on the user question:"
]
},
{
"cell_type": "code",
"execution_count": 29,
"execution_count": 10,
"id": "86c7a349",
"metadata": {},
"outputs": [],
"source": [
"retriever = vectorstore.as_retriever()\n",
"\n",
"qa_chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | rag_prompt\n",
" | llm\n",
" | model\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 30,
"execution_count": 11,
"id": "112ca227",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" Sure! Based on the context, here's my answer to your:\n",
"\n",
"There are several to task,:\n",
"\n",
"1. LLM-based with simple prompting, such as \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\"\n",
"2. Task-specific, like \"Write a story outline\" for writing a novel.\n",
"3. Human inputs to guide the process.\n",
"\n",
"These can be used to decompose complex tasks into smaller, more manageable subtasks, which can help improve the and effectiveness of task. However, long-term and task can being due to the need to plan over a lengthy history and explore the space., LLMs may to adjust plans when faced with errors, making them less robust to human learners who can learn from trial and error."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 11326.20 ms\n",
"llama_print_timings: sample time = 139.20 ms / 200 runs ( 0.70 ms per token, 1436.76 tokens per second)\n",
"llama_print_timings: prompt eval time = 1532.26 ms / 258 tokens ( 5.94 ms per token, 168.38 tokens per second)\n",
"llama_print_timings: eval time = 5977.62 ms / 199 runs ( 30.04 ms per token, 33.29 tokens per second)\n",
"llama_print_timings: total time = 7916.21 ms\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'What are the approaches to Task Decomposition?',\n",
" 'result': ' Sure! Based on the context, here\\'s my answer to your:\\n\\nThere are several to task,:\\n\\n1. LLM-based with simple prompting, such as \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\"\\n2. Task-specific, like \"Write a story outline\" for writing a novel.\\n3. Human inputs to guide the process.\\n\\nThese can be used to decompose complex tasks into smaller, more manageable subtasks, which can help improve the and effectiveness of task. However, long-term and task can being due to the need to plan over a lengthy history and explore the space., LLMs may to adjust plans when faced with errors, making them less robust to human learners who can learn from trial and error.'}"
"'Task decomposition can be done through (1) simple prompting in Large Language Models (LLM), (2) using task-specific instructions, or (3) with human inputs. This process involves breaking down large tasks into smaller, manageable subgoals for efficient handling of complex tasks.'"
]
},
"execution_count": 30,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"question = \"What are the approaches to Task Decomposition?\"\n",
"\n",
"qa_chain.invoke(question)"
]
},
{
"cell_type": "markdown",
"id": "e75d3e9e",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"You've now seen how to build a RAG application using all local components. RAG is a very deep topic, and you might be interested in the following guides that discuss and demonstrate additional techniques:\n",
"\n",
"- [Video: Reliable, fully local RAG agents with LLaMA 3](https://www.youtube.com/watch?v=-ROS6gfYIts) for an agentic approach to RAG with local models\n",
"- [Video: Building Corrective RAG from scratch with open-source, local LLMs](https://www.youtube.com/watch?v=E2shqsYwxck)\n",
"- [Conceptual guide on retrieval](/docs/concepts/#retrieval) for an overview of various retrieval techniques you can apply to improve performance\n",
"- [How to guides on RAG](/docs/how_to/#qa-with-rag) for a deeper dive into different specifics around of RAG\n",
"- [How to run models locally](/docs/how_to/local_llms/) for different approaches to setting up different providers"
]
}
],
"metadata": {
@@ -770,7 +464,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -936,7 +936,8 @@
"- [Return sources](/docs/how_to/qa_sources): Learn how to return source documents\n",
"- [Streaming](/docs/how_to/streaming): Learn how to stream outputs and intermediate steps\n",
"- [Add chat history](/docs/how_to/message_history): Learn how to add chat history to your app\n",
"- [Retrieval conceptual guide](/docs/concepts/#retrieval): A high-level overview of specific retrieval techniques"
"- [Retrieval conceptual guide](/docs/concepts/#retrieval): A high-level overview of specific retrieval techniques\n",
"- [Build a local RAG application](/docs/tutorials/local_rag): Create an app similar to the one above using all local components"
]
}
],
@@ -956,7 +957,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,74 @@
import itertools
import multiprocessing
import re
import sys
from pathlib import Path
def _generate_related_links_section(integration_type: str, notebook_name: str):
concept_display_name = None
concept_heading = None
if integration_type == "chat":
concept_display_name = "Chat model"
concept_heading = "chat-models"
elif integration_type == "llms":
concept_display_name = "LLM"
concept_heading = "llms"
elif integration_type == "text_embedding":
concept_display_name = "Embedding model"
concept_heading = "embedding-models"
elif integration_type == "document_loaders":
concept_display_name = "Document loader"
concept_heading = "document-loaders"
elif integration_type == "vectorstores":
concept_display_name = "Vector store"
concept_heading = "vector-stores"
elif integration_type == "retrievers":
concept_display_name = "Retriever"
concept_heading = "retrievers"
elif integration_type == "tools":
concept_display_name = "Tool"
concept_heading = "tools"
elif integration_type == "stores":
concept_display_name = "Key-value store"
concept_heading = "key-value-stores"
# Special case because there are no key-value store how-tos yet
return f"""## Related
- [{concept_display_name} conceptual guide](/docs/concepts/#{concept_heading})
"""
else:
return None
return f"""## Related
- {concept_display_name} [conceptual guide](/docs/concepts/#{concept_heading})
- {concept_display_name} [how-to guides](/docs/how_to/#{concept_heading})
"""
def _process_path(doc_path: Path):
content = doc_path.read_text()
print(doc_path)
pattern = r"/docs/integrations/([^/]+)/([^/]+).mdx?"
match = re.search(pattern, str(doc_path))
print(bool(match))
if match and match.group(2) != "index":
integration_type = match.group(1)
notebook_name = match.group(2)
related_links_section = _generate_related_links_section(
integration_type, notebook_name
)
if related_links_section:
content = content + "\n\n" + related_links_section
doc_path.write_text(content)
if __name__ == "__main__":
output_docs_dir = Path(sys.argv[1])
mds = output_docs_dir.rglob("integrations/**/*.md")
mdxs = output_docs_dir.rglob("integrations/**/*.mdx")
paths = itertools.chain(mds, mdxs)
# modify all md files in place
with multiprocessing.Pool() as pool:
pool.map(_process_path, paths)
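For a concrete sense of what the script appends, both the path regex and the helper can be exercised directly. A minimal sketch (the `chat`/`openai` values are illustrative):

```python
import re

# The path regex extracts the integration type and the page name.
match = re.search(
    r"/docs/integrations/([^/]+)/([^/]+).mdx?",
    "/docs/integrations/chat/openai.md",
)
print(match.groups())  # ('chat', 'openai')

# The helper then renders the section that gets appended to the page.
print(_generate_related_links_section("chat", "openai"))
# ## Related
#
# - Chat model [conceptual guide](/docs/concepts/#chat-models)
# - Chat model [how-to guides](/docs/how_to/#chat-models)
```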

View File

@@ -1,69 +1,89 @@
import json
import re
import sys
from functools import cache
from pathlib import Path
from typing import Union
from typing import Dict, Iterable, List, Union
CURR_DIR = Path(__file__).parent.absolute()
CHAT_MODEL_HEADERS = (
"## Overview",
"### Integration details",
"### Model features",
"## Setup",
"## Instantiation",
"## Invocation",
"## Chaining",
"## API reference",
CLI_TEMPLATE_DIR = (
CURR_DIR.parent.parent / "libs/cli/langchain_cli/integration_template/docs"
)
CHAT_MODEL_REGEX = r".*".join(CHAT_MODEL_HEADERS)
DOCUMENT_LOADER_HEADERS = (
"## Overview",
"### Integration details",
"### Loader features",
"## Setup",
"## Instantiation",
"## Load",
"## Lazy Load",
"## API reference",
)
DOCUMENT_LOADER_REGEX = r".*".join(DOCUMENT_LOADER_HEADERS)
INFO_BY_DIR: Dict[str, Dict[str, Union[int, str]]] = {
"chat": {
"issue_number": 22296,
},
"document_loaders": {
"issue_number": 22866,
},
"stores": {},
"llms": {
"issue_number": 24803,
},
"text_embedding": {"issue_number": 14856},
"toolkits": {"issue_number": "TODO"},
"tools": {"issue_number": "TODO"},
"vectorstores": {"issue_number": 24800},
"retrievers": {"issue_number": "TODO"},
}
def check_chat_model(path: Path) -> None:
@cache
def _get_headers(doc_dir: str) -> Iterable[str]:
"""Gets all markdown headers ## and below from the integration template.
Ignores headers that contain "TODO"."""
ipynb_name = f"{doc_dir}.ipynb"
if not (CLI_TEMPLATE_DIR / ipynb_name).exists():
raise FileNotFoundError(f"Could not find {ipynb_name} in {CLI_TEMPLATE_DIR}")
with open(CLI_TEMPLATE_DIR / ipynb_name, "r") as f:
nb = json.load(f)
headers: List[str] = []
for cell in nb["cells"]:
if cell["cell_type"] == "markdown":
for line in cell["source"]:
if not line.startswith("##") or "TODO" in line:
continue
header = line.strip()
headers.append(header)
return headers
def check_header_order(path: Path) -> None:
doc_dir = path.parent.name
if doc_dir not in INFO_BY_DIR:
# Skip if not a directory we care about
return
headers = _get_headers(doc_dir)
issue_number = INFO_BY_DIR[doc_dir].get("issue_number", "nonexistent")
print(f"Checking {doc_dir} page {path}")
with open(path, "r") as f:
doc = f.read()
if not re.search(CHAT_MODEL_REGEX, doc, re.DOTALL):
raise ValueError(
f"Document {path} does not match the ChatModel Integration page template. "
f"Please see https://github.com/langchain-ai/langchain/issues/22296 for "
f"instructions on how to correctly format a ChatModel Integration page."
regex = r".*".join(headers)
if not re.search(regex, doc, re.DOTALL):
issueline = (
(
" Please see https://github.com/langchain-ai/langchain/issues/"
f"{issue_number} for instructions on how to correctly format a "
f"{doc_dir} integration page."
)
if isinstance(issue_number, int)
else ""
)
def check_document_loader(path: Path) -> None:
with open(path, "r") as f:
doc = f.read()
if not re.search(DOCUMENT_LOADER_REGEX, doc, re.DOTALL):
raise ValueError(
f"Document {path} does not match the DocumentLoader Integration page template. "
f"Please see https://github.com/langchain-ai/langchain/issues/22866 for "
f"instructions on how to correctly format a DocumentLoader Integration page."
f"Document {path} does not match the expected header order.{issueline}"
)
def main(*new_doc_paths: Union[str, Path]) -> None:
for path in new_doc_paths:
path = Path(path).resolve().absolute()
if CURR_DIR.parent / "docs" / "integrations" / "chat" in path.parents:
print(f"Checking chat model page {path}")
check_chat_model(path)
elif (
CURR_DIR.parent / "docs" / "integrations" / "document_loaders"
in path.parents
):
print(f"Checking document loader page {path}")
check_document_loader(path)
if CURR_DIR.parent / "docs" / "integrations" in path.parents:
check_header_order(path)
else:
continue
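The generalized check boils down to joining the template's headers with a permissive regex and searching the page with `re.DOTALL`. A minimal, self-contained sketch of the same idea (the page content here is illustrative):

```python
import re

# Headers must appear in template order, with anything (including
# newlines) allowed in between.
headers = ["## Overview", "### Integration details", "## Setup", "## Invocation"]
regex = r".*".join(headers)

page = """# ChatFoo

## Overview
...
### Integration details
...
## Setup
...
## Invocation
...
"""
assert re.search(regex, page, re.DOTALL) is not None
```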

View File

@@ -0,0 +1,107 @@
import sys
from pathlib import Path
from langchain_community import document_loaders
from langchain_core.document_loaders.base import BaseLoader
KV_STORE_TEMPLATE = """\
---
sidebar_class_name: hidden
keywords: [compatibility]
custom_edit_url:
hide_table_of_contents: true
---
# Key-value stores
[Key-value stores](/docs/concepts/#key-value-stores) are used by other LangChain components to store and retrieve data.
:::info
If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing/integrations/).
:::
## Features
The following table shows information on all available key-value stores.
{table}
"""
KV_STORE_FEAT_TABLE = {
"AstraDBByteStore": {
"class": "[AstraDBByteStore](https://api.python.langchain.com/en/latest/storage/langchain_astradb.storage.AstraDBByteStore.html)",
"local": False,
"package": "[langchain_astradb](https://api.python.langchain.com/en/latest/astradb_api_reference.html)",
"downloads": "![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_astradb?style=flat-square&label=%20)",
},
"CassandraByteStore": {
"class": "[CassandraByteStore](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.cassandra.CassandraByteStore.html)",
"local": False,
"package": "[langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html)",
"downloads": "![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20)",
},
"ElasticsearchEmbeddingsCache": {
"class": "[ElasticsearchEmbeddingsCache](https://api.python.langchain.com/en/latest/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html)",
"local": True,
"package": "[langchain_elasticsearch](https://api.python.langchain.com/en/latest/elasticsearch_api_reference.html)",
"downloads": "![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_elasticsearch?style=flat-square&label=%20)",
},
"LocalFileStore": {
"class": "[LocalFileStore](https://api.python.langchain.com/en/latest/storage/langchain.storage.file_system.LocalFileStore.html)",
"local": True,
"package": "[langchain](https://api.python.langchain.com/en/latest/langchain_api_reference.html)",
"downloads": "![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain?style=flat-square&label=%20)",
},
"InMemoryByteStore": {
"class": "[InMemoryByteStore](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.InMemoryByteStore.html)",
"local": True,
"package": "[langchain_core](https://api.python.langchain.com/en/latest/core_api_reference.html)",
"downloads": "![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_core?style=flat-square&label=%20)",
},
"RedisStore": {
"class": "[RedisStore](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.redis.RedisStore.html)",
"local": True,
"package": "[langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html)",
"downloads": "![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20)",
},
"UpstashRedisByteStore": {
"class": "[UpstashRedisByteStore](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html)",
"local": False,
"package": "[langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html)",
"downloads": "![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20)",
},
}
DEPRECATED = []
def get_kv_store_table() -> str:
"""Get the table of KV stores."""
header = ["name", "local", "package", "downloads"]
title = ["Class", "Local", "Package", "Downloads"]
rows = [title, [":-"] + [":-:"] * (len(title) - 1)]
for loader, feats in sorted(KV_STORE_FEAT_TABLE.items()):
if not feats or loader in DEPRECATED:
continue
rows += [
[feats["class"]]
+ ["" if feats.get(h) else "" for h in header[1:2]]
+ [feats["package"], feats["downloads"]]
]
return "\n".join(["|".join(row) for row in rows])
if __name__ == "__main__":
output_dir = Path(sys.argv[1])
output_integrations_dir = output_dir / "integrations"
output_integrations_dir_kv_stores = output_integrations_dir / "stores"
output_integrations_dir_kv_stores.mkdir(parents=True, exist_ok=True)
kv_stores_page = KV_STORE_TEMPLATE.format(table=get_kv_store_table())
with open(output_integrations_dir / "stores" / "index.mdx", "w") as f:
f.write(kv_stores_page)

View File
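The rows assembled by `get_kv_store_table` render as a Markdown pipe table whose second row is the alignment spec: `:-` left-aligns the first column, `:-:` centers the rest. A minimal sketch with one hypothetical entry (placeholder URLs):

```python
rows = [
    ["Class", "Local", "Package", "Downloads"],
    [":-"] + [":-:"] * 3,  # alignment row: first column left, rest centered
    ["[InMemoryByteStore](https://example.invalid)", "✅",
     "[langchain_core](https://example.invalid)",
     "![downloads](https://example.invalid)"],
]
print("\n".join("|".join(row) for row in rows))
```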

@@ -174,8 +174,6 @@ hide_table_of_contents: true
# Chat models
## Advanced features
:::info
If you'd like to write your own chat model, see [this how-to](/docs/how_to/custom_chat_model/).
@@ -183,6 +181,8 @@ If you'd like to contribute an integration, see [Contributing integrations](/doc
:::
## Advanced features
The following table shows all the chat model classes that support one or more advanced features.
:::info

View File

@@ -245,8 +245,8 @@ module.exports = {
},
],
link: {
type: "generated-index",
slug: "integrations/retrievers",
type: "doc",
id: "integrations/retrievers/index",
},
},
{
@@ -275,8 +275,8 @@ module.exports = {
},
],
link: {
type: "generated-index",
slug: "integrations/toolkits",
type: "doc",
id: "integrations/toolkits/index",
},
},
{

View File

@@ -181,6 +181,7 @@ import os
os.environ["${tabItem.apiKeyName}"] = getpass.getpass()`;
return (
<TabItem
key={tabItem.value}
value={tabItem.value}
label={tabItem.label}
default={tabItem.default}

View File

@@ -0,0 +1,18 @@
import React from "react";
import Admonition from '@theme/Admonition';
export default function Compatibility({ packagesAndVersions }) {
return (
<Admonition type="caution" title="Compatibility" icon="📦">
<span style={{fontSize: "15px"}}>
The code in this guide requires{" "}
{packagesAndVersions.map(([pkg, version], i) => {
return (
<code key={`compatibility-map${pkg}>=${version}-${i}`}>{`${pkg}>=${version}`}</code>
);
})}.
Please ensure you have the correct packages installed.
</span>
</Admonition>
);
}

View File

@@ -0,0 +1,18 @@
import React from "react";
import Admonition from '@theme/Admonition';
export default function Prerequisites({ titlesAndLinks }) {
return (
<Admonition type="info" title="Prerequisites" icon="📚">
<ul style={{ fontSize: "15px", lineHeight: "1.5em" }}>
{titlesAndLinks.map(([title, link], i) => {
return (
<li key={`prereq-${link.replace(/\//g, "")}-${i}`}>
<a href={link}>{title}</a>
</li>
);
})}
</ul>
</Admonition>
);
}

View File

@@ -67,8 +67,8 @@
"destination": "/docs/tutorials/rag/"
},
{
"source": "/docs/how_to/migrate_chains(/?)",
"destination": "/docs/versions/migrating_chains"
"source": "/v0.2/docs/how_to/migrate_chains(/?)",
"destination": "/v0.2/docs/versions/migrating_chains"
}
]
}

View File

@@ -27,7 +27,7 @@
"\n",
"## Overview\n",
"\n",
"- TODO: (Optional) A short introduciton to the underlying technology/API.\n",
"- TODO: (Optional) A short introduction to the underlying technology/API.\n",
"\n",
"### Integration details\n",
"\n",
@@ -36,7 +36,7 @@
"- TODO: Make sure API reference links are correct.\n",
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/v0.2/docs/integrations/stores/_package_name_) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [__ModuleName__ByteStore](https://api.python.langchain.com/en/latest/stores/__module_name__.stores.__ModuleName__ByteStore.html) | [__package_name__](https://api.python.langchain.com/en/latest/__package_name_short_snake___api_reference.html) | ✅/❌ | ✅/❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/__package_name__?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/__package_name__?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",

View File

@@ -24,10 +24,19 @@
"\n",
"### Integration details\n",
"\n",
"| Retriever | Namespace | Native async | Local |\n",
"| :--- | :--- | :---: | :---: |\n",
"[__ModuleName__Retriever](https://api.python.langchain.com/en/latest/retrievers/__package_name__.retrievers.__module_name__.__ModuleName__Retriever.html) | __package_name__.retrievers | ❌ | ❌ |\n",
"TODO: Select one of the tables below, as appropriate.\n",
"\n",
"1: Bring-your-own data (i.e., index and search a custom corpus of documents):\n",
"\n",
"| Retriever | Self-host | Cloud offering | Package |\n",
"| :--- | :--- | :---: | :---: |\n",
"[__ModuleName__Retriever](https://api.python.langchain.com/en/latest/retrievers/__package_name__.retrievers.__module_name__.__ModuleName__Retriever.html) | ❌ | ❌ | __package_name__ |\n",
"\n",
"2: External index (e.g., constructed from Internet data or similar)):\n",
"\n",
"| Retriever | Source | Package |\n",
"| :--- | :--- | :---: |\n",
"[__ModuleName__Retriever](https://api.python.langchain.com/en/latest/retrievers/__package_name__.retrievers.__module_name__.__ModuleName__Retriever.html) | Source description | __package_name__ |\n",
"\n",
"## Setup\n",
"\n",
@@ -39,7 +48,7 @@
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
"If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
@@ -124,7 +133,32 @@
"id": "dfe8aad4-8626-4330-98a9-7ea1ca5d2e0e",
"metadata": {},
"source": [
"## Use within a chain"
"## Use within a chain\n",
"\n",
"Like other retrievers, __ModuleName__Retriever can be incorporated into LLM applications via [chains](/docs/how_to/sequence/).\n",
"\n",
"We will need a LLM or chat model:\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" />\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "25b647a3-f8f2-4541-a289-7a241e43f9df",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{
@@ -137,7 +171,6 @@
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"Answer the question based only on the context provided.\n",
@@ -147,8 +180,6 @@
"Question: {question}\"\"\"\n",
")\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",

View File
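The "Use within a chain" cell added above composes the retriever into a standard LCEL RAG chain. Below is a self-contained sketch of that composition; the `RunnableLambda` stand-ins replace the real retriever and LLM (which the notebook supplies via `__ModuleName__Retriever` and `ChatModelTabs`) so the snippet runs offline:

```python
from langchain_core.documents import Document
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda, RunnablePassthrough

# Stand-ins so the sketch runs without a real retriever or chat model.
retriever = RunnableLambda(lambda q: [Document(page_content=f"context for: {q}")])
llm = RunnableLambda(lambda prompt_value: prompt_value.to_string())  # echoes the prompt

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

prompt = ChatPromptTemplate.from_template(
    """Answer the question based only on the context provided.

Context: {context}

Question: {question}"""
)

chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
print(chain.invoke("What is LangChain?"))
```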

@@ -1,6 +1,7 @@
"""__ModuleName__ toolkits."""
from typing import List
from langchain_core.tools import BaseTool, BaseToolkit

View File

@@ -3,10 +3,9 @@
from typing import Optional, Type
from langchain_core.callbacks import (
AsyncCallbackManagerForToolRun,
CallbackManagerForToolRun,
)
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.tools import BaseTool

View File

@@ -9,7 +9,7 @@ if __name__ == "__main__":
try:
SourceFileLoader("x", file).load_module()
except Exception:
has_faillure = True
has_failure = True
print(file) # noqa: T201
traceback.print_exc()
print() # noqa: T201

View File

@@ -127,6 +127,20 @@ def new(
)
TEMPLATE_MAP: dict[str, str] = {
"ChatModel": "chat.ipynb",
"DocumentLoader": "document_loaders.ipynb",
"Tool": "tools.ipynb",
"VectorStore": "vectorstores.ipynb",
"Embeddings": "text_embedding.ipynb",
"ByteStore": "kv_store.ipynb",
"LLM": "llms.ipynb",
"Provider": "provider.ipynb",
"Toolkit": "toolkits.ipynb",
"Retriever": "retrievers.ipynb",
}
@integration_cli.command()
def create_doc(
name: Annotated[
@@ -173,7 +187,7 @@ def create_doc(
Creates a new integration doc.
"""
try:
replacements = _process_name(name, community=component_type=="Tool")
replacements = _process_name(name, community=component_type == "Tool")
except ValueError as e:
typer.echo(e)
raise typer.Exit(code=1)
@@ -202,14 +216,8 @@ def create_doc(
# copy over template from ../integration_template
template_dir = Path(__file__).parents[1] / "integration_template" / "docs"
if component_type == "ChatModel":
docs_template = template_dir / "chat.ipynb"
elif component_type == "DocumentLoader":
docs_template = template_dir / "document_loaders.ipynb"
elif component_type == "Tool":
docs_template = template_dir / "tools.ipynb"
elif component_type == "VectorStore":
docs_template = template_dir / "vectorstores.ipynb"
if component_type in TEMPLATE_MAP:
docs_template = template_dir / TEMPLATE_MAP[component_type]
else:
raise ValueError(
f"Unrecognized {component_type=}. Expected one of 'ChatModel', "

View File

@@ -1,6 +1,6 @@
[tool.poetry]
name = "langchain-cli"
version = "0.0.27"
version = "0.0.28"
description = "CLI for interacting with LangChain"
authors = ["Erick Friis <erick@langchain.dev>"]
readme = "README.md"

View File

@@ -91,3 +91,4 @@ vdms>=0.0.20
xata>=1.0.0a7,<2
xmltodict>=0.13.0,<0.14
nanopq==0.2.1
mlflow[genai]>=2.14.0

View File

@@ -10,7 +10,7 @@ from typing import TYPE_CHECKING, Dict, Optional, Set
import requests
from langchain_core.messages import BaseMessage
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
from langchain_core.utils import convert_to_secret_str, from_env, get_from_dict_or_env
from langchain_community.adapters.openai import convert_message_to_dict
from langchain_community.chat_models.openai import (
@@ -64,7 +64,9 @@ class ChatAnyscale(ChatOpenAI):
"""AnyScale Endpoints API keys."""
model_name: str = Field(default=DEFAULT_MODEL, alias="model")
"""Model name to use."""
anyscale_api_base: str = Field(default=DEFAULT_API_BASE)
anyscale_api_base: str = Field(
default_factory=from_env("ANYSCALE_API_BASE", default=DEFAULT_API_BASE)
)
"""Base URL path for API requests,
leave blank if not using a proxy or service emulator."""
anyscale_proxy: Optional[str] = None
@@ -112,12 +114,6 @@ class ChatAnyscale(ChatOpenAI):
"ANYSCALE_API_KEY",
)
)
values["anyscale_api_base"] = get_from_dict_or_env(
values,
"anyscale_api_base",
"ANYSCALE_API_BASE",
default=DEFAULT_API_BASE,
)
values["openai_proxy"] = get_from_dict_or_env(
values,
"anyscale_proxy",

View File
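This is the pattern repeated across the chat-model, rerank, and embeddings modules in this compare: the `get_from_dict_or_env` call inside `validate_environment` moves onto the field itself as `default_factory=from_env(...)`, so the environment is consulted when the model is constructed rather than in a validator. A minimal sketch with a hypothetical env var and default; note that explicit constructor arguments still take precedence over the factory:

```python
import os

from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils import from_env

DEFAULT_API_BASE = "https://api.example.invalid/v1"  # hypothetical default

class MyClient(BaseModel):
    # Read MY_API_BASE at construction time, falling back to the default.
    api_base: str = Field(
        default_factory=from_env("MY_API_BASE", default=DEFAULT_API_BASE)
    )

assert MyClient().api_base == DEFAULT_API_BASE
os.environ["MY_API_BASE"] = "https://proxy.example.invalid/v1"
assert MyClient().api_base == "https://proxy.example.invalid/v1"
assert MyClient(api_base="https://direct.example.invalid").api_base == "https://direct.example.invalid"
```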

@@ -49,6 +49,7 @@ from langchain_core.runnables import Runnable
from langchain_core.tools import BaseTool
from langchain_core.utils import (
convert_to_secret_str,
from_env,
get_from_dict_or_env,
get_pydantic_field_names,
)
@@ -348,7 +349,10 @@ class ChatBaichuan(BaseChatModel):
def lc_serializable(self) -> bool:
return True
baichuan_api_base: str = Field(default=DEFAULT_API_BASE, alias="base_url")
baichuan_api_base: str = Field(
default_factory=from_env("BAICHUAN_API_BASE", default=DEFAULT_API_BASE),
alias="base_url",
)
"""Baichuan custom endpoints"""
baichuan_api_key: SecretStr = Field(alias="api_key")
"""Baichuan API Key"""
@@ -408,12 +412,6 @@ class ChatBaichuan(BaseChatModel):
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:
values["baichuan_api_base"] = get_from_dict_or_env(
values,
"baichuan_api_base",
"BAICHUAN_API_BASE",
DEFAULT_API_BASE,
)
values["baichuan_api_key"] = convert_to_secret_str(
get_from_dict_or_env(
values,

View File

@@ -22,6 +22,7 @@ from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResu
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
from langchain_core.utils import (
convert_to_secret_str,
from_env,
get_from_dict_or_env,
)
@@ -88,7 +89,9 @@ class ChatCoze(BaseChatModel):
def lc_serializable(self) -> bool:
return True
coze_api_base: str = Field(default=DEFAULT_API_BASE)
coze_api_base: str = Field(
default_factory=from_env("COZE_API_BASE", default=DEFAULT_API_BASE)
)
"""Coze custom endpoints"""
coze_api_key: Optional[SecretStr] = None
"""Coze API Key"""
@@ -118,12 +121,6 @@ class ChatCoze(BaseChatModel):
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:
values["coze_api_base"] = get_from_dict_or_env(
values,
"coze_api_base",
"COZE_API_BASE",
DEFAULT_API_BASE,
)
values["coze_api_key"] = convert_to_secret_str(
get_from_dict_or_env(
values,

View File

@@ -57,7 +57,7 @@ from langchain_core.outputs import (
from langchain_core.pydantic_v1 import BaseModel, Field, root_validator
from langchain_core.runnables import Runnable
from langchain_core.tools import BaseTool
from langchain_core.utils import get_from_dict_or_env
from langchain_core.utils import from_env, get_from_dict_or_env
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_community.utilities.requests import Requests
@@ -203,7 +203,9 @@ class ChatDeepInfra(BaseChatModel):
# client: Any #: :meta private:
model_name: str = Field(default="meta-llama/Llama-2-70b-chat-hf", alias="model")
"""Model name to use."""
deepinfra_api_token: Optional[str] = None
deepinfra_api_token: Optional[str] = Field(
default_factory=from_env("DEEPINFRA_API_TOKEN", default=api_key)
)
request_timeout: Optional[float] = Field(default=None, alias="timeout")
temperature: Optional[float] = 1
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
@@ -297,12 +299,6 @@ class ChatDeepInfra(BaseChatModel):
"DEEPINFRA_API_KEY",
default="",
)
values["deepinfra_api_token"] = get_from_dict_or_env(
values,
"deepinfra_api_token",
"DEEPINFRA_API_TOKEN",
default=api_key,
)
return values
@root_validator(pre=False, skip_on_failure=True)

View File

@@ -14,7 +14,7 @@ from langchain_core.messages import (
)
from langchain_core.outputs import ChatGeneration, ChatResult
from langchain_core.pydantic_v1 import Field, root_validator
from langchain_core.utils import get_from_dict_or_env
from langchain_core.utils import from_env, get_from_dict_or_env
logger = logging.getLogger(__name__)
@@ -74,13 +74,17 @@ class ErnieBotChat(BaseChatModel):
"""
ernie_api_base: Optional[str] = None
ernie_api_base: Optional[str] = Field(
default_factory=from_env("ERNIE_API_BASE", default="https://aip.baidubce.com")
)
"""Baidu application custom endpoints"""
ernie_client_id: Optional[str] = None
ernie_client_id: Optional[str] = Field(default_factory=from_env("ERNIE_CLIENT_ID"))
"""Baidu application client id"""
ernie_client_secret: Optional[str] = None
ernie_client_secret: Optional[str] = Field(
default_factory=from_env("ERNIE_CLIENT_SECRET")
)
"""Baidu application client secret"""
access_token: Optional[str] = None
@@ -110,19 +114,6 @@ class ErnieBotChat(BaseChatModel):
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:
values["ernie_api_base"] = get_from_dict_or_env(
values, "ernie_api_base", "ERNIE_API_BASE", "https://aip.baidubce.com"
)
values["ernie_client_id"] = get_from_dict_or_env(
values,
"ernie_client_id",
"ERNIE_CLIENT_ID",
)
values["ernie_client_secret"] = get_from_dict_or_env(
values,
"ernie_client_secret",
"ERNIE_CLIENT_SECRET",
)
return values
def _chat(self, payload: object) -> dict:

View File

@@ -134,9 +134,9 @@ class ChatFriendli(BaseChatModel, BaseFriendli):
for chunk in stream:
delta = chunk.choices[0].delta.content
if delta:
yield ChatGenerationChunk(message=AIMessageChunk(content=delta))
if run_manager:
run_manager.on_llm_new_token(delta)
yield ChatGenerationChunk(message=AIMessageChunk(content=delta))
async def _astream(
self,
@@ -152,9 +152,9 @@ class ChatFriendli(BaseChatModel, BaseFriendli):
async for chunk in stream:
delta = chunk.choices[0].delta.content
if delta:
yield ChatGenerationChunk(message=AIMessageChunk(content=delta))
if run_manager:
await run_manager.on_llm_new_token(delta)
yield ChatGenerationChunk(message=AIMessageChunk(content=delta))
def _generate(
self,

View File
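The Friendli change (and the identical MLX change below) moves `on_llm_new_token` ahead of the `yield`. The ordering matters because a consumer that stops pulling from the generator leaves it suspended at the `yield`, so anything scheduled after the `yield` for the last delivered chunk never runs. A minimal sketch of the fixed ordering:

```python
from typing import Callable, Iterator, List

def stream(tokens: List[str], on_token: Callable[[str], None]) -> Iterator[str]:
    for token in tokens:
        on_token(token)  # callback first, as in the reordered _stream above
        yield token

seen: List[str] = []
gen = stream(["a", "b"], seen.append)
next(gen)             # consumer takes one chunk, then abandons the stream
assert seen == ["a"]  # the callback still fired for the delivered chunk
# With the old yield-first order, `seen` would still be [] at this point.
```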

@@ -31,7 +31,7 @@ from langchain_core.language_models.llms import create_base_retry_decorator
from langchain_core.messages import AIMessageChunk, BaseMessage, BaseMessageChunk
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr, root_validator
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
from langchain_core.utils import convert_to_secret_str, from_env, get_from_dict_or_env
from langchain_community.adapters.openai import (
convert_dict_to_message,
@@ -151,7 +151,9 @@ class GPTRouter(BaseChatModel):
client: Any = Field(default=None, exclude=True) #: :meta private:
models_priority_list: List[GPTRouterModel] = Field(min_items=1)
gpt_router_api_base: str = Field(default=None)
gpt_router_api_base: str = Field(
default_factory=from_env("GPT_ROUTER_API_BASE", default=DEFAULT_API_BASE_URL)
)
"""WriteSonic GPTRouter custom endpoint"""
gpt_router_api_key: Optional[SecretStr] = None
"""WriteSonic GPTRouter API Key"""
@@ -169,13 +171,6 @@ class GPTRouter(BaseChatModel):
@root_validator(allow_reuse=True)
def validate_environment(cls, values: Dict) -> Dict:
values["gpt_router_api_base"] = get_from_dict_or_env(
values,
"gpt_router_api_base",
"GPT_ROUTER_API_BASE",
DEFAULT_API_BASE_URL,
)
values["gpt_router_api_key"] = convert_to_secret_str(
get_from_dict_or_env(
values,

View File

@@ -21,6 +21,7 @@ from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResu
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
from langchain_core.utils import (
convert_to_secret_str,
from_env,
get_from_dict_or_env,
get_pydantic_field_names,
pre_init,
@@ -98,9 +99,11 @@ class ChatHunyuan(BaseChatModel):
def lc_serializable(self) -> bool:
return True
hunyuan_app_id: Optional[int] = None
hunyuan_app_id: Optional[int] = Field(default_factory=from_env("HUNYUAN_APP_ID"))
"""Hunyuan App ID"""
hunyuan_secret_id: Optional[str] = None
hunyuan_secret_id: Optional[str] = Field(
default_factory=from_env("HUNYUAN_SECRET_ID")
)
"""Hunyuan Secret ID"""
hunyuan_secret_key: Optional[SecretStr] = None
"""Hunyuan Secret Key"""
@@ -165,16 +168,6 @@ class ChatHunyuan(BaseChatModel):
@pre_init
def validate_environment(cls, values: Dict) -> Dict:
values["hunyuan_app_id"] = get_from_dict_or_env(
values,
"hunyuan_app_id",
"HUNYUAN_APP_ID",
)
values["hunyuan_secret_id"] = get_from_dict_or_env(
values,
"hunyuan_secret_id",
"HUNYUAN_SECRET_ID",
)
values["hunyuan_secret_key"] = convert_to_secret_str(
get_from_dict_or_env(
values,

View File

@@ -55,7 +55,7 @@ from langchain_core.outputs import (
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import Runnable
from langchain_core.tools import BaseTool
from langchain_core.utils import get_from_dict_or_env, pre_init
from langchain_core.utils import from_env, get_from_dict_or_env, pre_init
from langchain_core.utils.function_calling import convert_to_openai_tool
logger = logging.getLogger(__name__)
@@ -219,12 +219,24 @@ class ChatLiteLLM(BaseChatModel):
model: str = "gpt-3.5-turbo"
model_name: Optional[str] = None
"""Model name to use."""
openai_api_key: Optional[str] = None
azure_api_key: Optional[str] = None
anthropic_api_key: Optional[str] = None
replicate_api_key: Optional[str] = None
cohere_api_key: Optional[str] = None
openrouter_api_key: Optional[str] = None
openai_api_key: Optional[str] = Field(
default_factory=from_env("OPENAI_API_KEY", default="")
)
azure_api_key: Optional[str] = Field(
default_factory=from_env("AZURE_API_KEY", default="")
)
anthropic_api_key: Optional[str] = Field(
default_factory=from_env("ANTHROPIC_API_KEY", default="")
)
replicate_api_key: Optional[str] = Field(
default_factory=from_env("REPLICATE_API_KEY", default="")
)
cohere_api_key: Optional[str] = Field(
default_factory=from_env("COHERE_API_KEY", default="")
)
openrouter_api_key: Optional[str] = Field(
default_factory=from_env("OPENROUTER_API_KEY", default="")
)
streaming: bool = False
api_base: Optional[str] = None
organization: Optional[str] = None
@@ -302,24 +314,6 @@ class ChatLiteLLM(BaseChatModel):
"Please install it with `pip install litellm`"
)
values["openai_api_key"] = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY", default=""
)
values["azure_api_key"] = get_from_dict_or_env(
values, "azure_api_key", "AZURE_API_KEY", default=""
)
values["anthropic_api_key"] = get_from_dict_or_env(
values, "anthropic_api_key", "ANTHROPIC_API_KEY", default=""
)
values["replicate_api_key"] = get_from_dict_or_env(
values, "replicate_api_key", "REPLICATE_API_KEY", default=""
)
values["openrouter_api_key"] = get_from_dict_or_env(
values, "openrouter_api_key", "OPENROUTER_API_KEY", default=""
)
values["cohere_api_key"] = get_from_dict_or_env(
values, "cohere_api_key", "COHERE_API_KEY", default=""
)
values["huggingface_api_key"] = get_from_dict_or_env(
values, "huggingface_api_key", "HUGGINGFACE_API_KEY", default=""
)

View File

@@ -1,5 +1,19 @@
import json
import logging
from typing import Any, Dict, Iterator, List, Mapping, Optional, cast
from typing import (
Any,
Callable,
Dict,
Iterator,
List,
Literal,
Mapping,
Optional,
Sequence,
Type,
Union,
cast,
)
from urllib.parse import urlparse
from langchain_core.callbacks import CallbackManagerForLLMRun
@@ -15,15 +29,27 @@ from langchain_core.messages import (
FunctionMessage,
HumanMessage,
HumanMessageChunk,
InvalidToolCall,
SystemMessage,
SystemMessageChunk,
ToolCall,
ToolMessage,
ToolMessageChunk,
)
from langchain_core.messages.tool import tool_call_chunk
from langchain_core.output_parsers.openai_tools import (
make_invalid_tool_call,
parse_tool_call,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import (
BaseModel,
Field,
PrivateAttr,
)
from langchain_core.runnables import RunnableConfig
from langchain_core.runnables import Runnable, RunnableConfig
from langchain_core.tools import BaseTool
from langchain_core.utils.function_calling import convert_to_openai_tool
logger = logging.getLogger(__name__)
@@ -228,11 +254,32 @@ class ChatMlflow(BaseChatModel):
@staticmethod
def _convert_dict_to_message(_dict: Mapping[str, Any]) -> BaseMessage:
role = _dict["role"]
content = _dict["content"]
content = cast(str, _dict.get("content"))
if role == "user":
return HumanMessage(content=content)
elif role == "assistant":
return AIMessage(content=content)
content = content or ""
additional_kwargs: Dict = {}
tool_calls = []
invalid_tool_calls = []
if raw_tool_calls := _dict.get("tool_calls"):
additional_kwargs["tool_calls"] = raw_tool_calls
for raw_tool_call in raw_tool_calls:
try:
tool_calls.append(
parse_tool_call(raw_tool_call, return_id=True)
)
except Exception as e:
invalid_tool_calls.append(
make_invalid_tool_call(raw_tool_call, str(e))
)
return AIMessage(
content=content,
additional_kwargs=additional_kwargs,
id=_dict.get("id"),
tool_calls=tool_calls,
invalid_tool_calls=invalid_tool_calls,
)
elif role == "system":
return SystemMessage(content=content)
else:
@@ -243,13 +290,38 @@ class ChatMlflow(BaseChatModel):
_dict: Mapping[str, Any], default_role: str
) -> BaseMessageChunk:
role = _dict.get("role", default_role)
content = _dict["content"]
content = _dict.get("content") or ""
if role == "user":
return HumanMessageChunk(content=content)
elif role == "assistant":
return AIMessageChunk(content=content)
additional_kwargs: Dict = {}
tool_call_chunks = []
if raw_tool_calls := _dict.get("tool_calls"):
additional_kwargs["tool_calls"] = raw_tool_calls
try:
tool_call_chunks = [
tool_call_chunk(
name=rtc["function"].get("name"),
args=rtc["function"].get("arguments"),
id=rtc.get("id"),
index=rtc["index"],
)
for rtc in raw_tool_calls
]
except KeyError:
pass
return AIMessageChunk(
content=content,
additional_kwargs=additional_kwargs,
id=_dict.get("id"),
tool_call_chunks=tool_call_chunks,
)
elif role == "system":
return SystemMessageChunk(content=content)
elif role == "tool":
return ToolMessageChunk(
content=content, tool_call_id=_dict["tool_call_id"], id=_dict.get("id")
)
else:
return ChatMessageChunk(content=content, role=role)
@@ -262,14 +334,47 @@ class ChatMlflow(BaseChatModel):
@staticmethod
def _convert_message_to_dict(message: BaseMessage) -> dict:
message_dict = {"content": message.content}
if (name := message.name or message.additional_kwargs.get("name")) is not None:
message_dict["name"] = name
if isinstance(message, ChatMessage):
message_dict = {"role": message.role, "content": message.content}
message_dict["role"] = message.role
elif isinstance(message, HumanMessage):
message_dict = {"role": "user", "content": message.content}
message_dict["role"] = "user"
elif isinstance(message, AIMessage):
message_dict = {"role": "assistant", "content": message.content}
message_dict["role"] = "assistant"
if message.tool_calls or message.invalid_tool_calls:
message_dict["tool_calls"] = [
_lc_tool_call_to_openai_tool_call(tc) for tc in message.tool_calls
] + [
_lc_invalid_tool_call_to_openai_tool_call(tc)
for tc in message.invalid_tool_calls
] # type: ignore[assignment]
elif "tool_calls" in message.additional_kwargs:
message_dict["tool_calls"] = message.additional_kwargs["tool_calls"]
tool_call_supported_props = {"id", "type", "function"}
message_dict["tool_calls"] = [
{
k: v
for k, v in tool_call.items() # type: ignore[union-attr]
if k in tool_call_supported_props
}
for tool_call in message_dict["tool_calls"]
]
else:
pass
# If tool calls are present, the content value should be None, not an empty string.
if "tool_calls" in message_dict:
message_dict["content"] = message_dict["content"] or None # type: ignore[assignment]
elif isinstance(message, SystemMessage):
message_dict = {"role": "system", "content": message.content}
message_dict["role"] = "system"
elif isinstance(message, ToolMessage):
message_dict["role"] = "tool"
message_dict["tool_call_id"] = message.tool_call_id
supported_props = {"content", "role", "tool_call_id"}
message_dict = {
k: v for k, v in message_dict.items() if k in supported_props
}
elif isinstance(message, FunctionMessage):
raise ValueError(
"Function messages are not supported by Databricks. Please"
@@ -280,12 +385,6 @@ class ChatMlflow(BaseChatModel):
if "function_call" in message.additional_kwargs:
ChatMlflow._raise_functions_not_supported()
if message.additional_kwargs:
logger.warning(
"Additional message arguments are unsupported by Databricks"
" and will be ignored: %s",
message.additional_kwargs,
)
return message_dict
@staticmethod
@@ -302,3 +401,89 @@ class ChatMlflow(BaseChatModel):
usage = response.get("usage", {})
return ChatResult(generations=generations, llm_output=usage)
def bind_tools(
self,
tools: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]],
*,
tool_choice: Optional[
Union[dict, str, Literal["auto", "none", "required", "any"], bool]
] = None,
**kwargs: Any,
) -> Runnable[LanguageModelInput, BaseMessage]:
"""Bind tool-like objects to this chat model.
Assumes model is compatible with OpenAI tool-calling API.
Args:
tools: A list of tool definitions to bind to this chat model.
Can be a dictionary, pydantic model, callable, or BaseTool. Pydantic
models, callables, and BaseTools will be automatically converted to
their schema dictionary representation.
tool_choice: Which tool to require the model to call.
Options are:
name of the tool (str): calls corresponding tool;
"auto": automatically selects a tool (including no tool);
"none": model does not generate any tool calls and instead must
generate a standard assistant message;
"required": the model picks the most relevant tool in tools and
must generate a tool call;
or a dict of the form:
{"type": "function", "function": {"name": <<tool_name>>}}.
**kwargs: Any additional parameters to pass to the
:class:`~langchain.runnable.Runnable` constructor.
"""
formatted_tools = [convert_to_openai_tool(tool) for tool in tools]
if tool_choice:
if isinstance(tool_choice, str):
# tool_choice is a tool/function name
if tool_choice not in ("auto", "none", "required"):
tool_choice = {
"type": "function",
"function": {"name": tool_choice},
}
elif isinstance(tool_choice, dict):
tool_names = [
formatted_tool["function"]["name"]
for formatted_tool in formatted_tools
]
if not any(
tool_name == tool_choice["function"]["name"]
for tool_name in tool_names
):
raise ValueError(
f"Tool choice {tool_choice} was specified, but the only "
f"provided tools were {tool_names}."
)
else:
raise ValueError(
f"Unrecognized tool_choice type. Expected str, bool or dict. "
f"Received: {tool_choice}"
)
kwargs["tool_choice"] = tool_choice
return super().bind(tools=formatted_tools, **kwargs)
def _lc_tool_call_to_openai_tool_call(tool_call: ToolCall) -> dict:
return {
"type": "function",
"id": tool_call["id"],
"function": {
"name": tool_call["name"],
"arguments": json.dumps(tool_call["args"]),
},
}
def _lc_invalid_tool_call_to_openai_tool_call(
invalid_tool_call: InvalidToolCall,
) -> dict:
return {
"type": "function",
"id": invalid_tool_call["id"],
"function": {
"name": invalid_tool_call["name"],
"arguments": invalid_tool_call["args"],
},
}

View File
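The two helpers at the end of the MLflow diff reshape LangChain tool calls into the OpenAI wire format, serializing `args` to a JSON string along the way. A minimal sketch of that shape conversion for a hypothetical call:

```python
import json

# LangChain-style ToolCall dict (hypothetical values).
lc_tool_call = {"name": "get_weather", "args": {"city": "Paris"}, "id": "call_0"}

# Shape produced by _lc_tool_call_to_openai_tool_call above.
openai_tool_call = {
    "type": "function",
    "id": lc_tool_call["id"],
    "function": {
        "name": lc_tool_call["name"],
        "arguments": json.dumps(lc_tool_call["args"]),  # args become a JSON string
    },
}
assert json.loads(openai_tool_call["function"]["arguments"]) == {"city": "Paris"}
```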

@@ -186,9 +186,9 @@ class ChatMLX(BaseChatModel):
# yield text, if any
if text:
chunk = ChatGenerationChunk(message=AIMessageChunk(content=text))
yield chunk
if run_manager:
run_manager.on_llm_new_token(text, chunk=chunk)
yield chunk
# break if stop sequence found
if token == eos_token_id or (stop is not None and text in stop):

View File

@@ -3,7 +3,12 @@
from typing import Dict
from langchain_core.pydantic_v1 import Field, SecretStr
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
from langchain_core.utils import (
convert_to_secret_str,
from_env,
get_from_dict_or_env,
pre_init,
)
from langchain_community.chat_models.openai import ChatOpenAI
from langchain_community.utils.openai import is_openai_v1
@@ -31,9 +36,13 @@ class ChatOctoAI(ChatOpenAI):
chat = ChatOctoAI(model_name="mixtral-8x7b-instruct")
"""
octoai_api_base: str = Field(default=DEFAULT_API_BASE)
octoai_api_base: str = Field(
default_factory=from_env("OCTOAI_API_BASE", default=DEFAULT_API_BASE)
)
octoai_api_token: SecretStr = Field(default=None)
model_name: str = Field(default=DEFAULT_MODEL)
model_name: str = Field(
default_factory=from_env("MODEL_NAME", default=DEFAULT_MODEL)
)
@property
def _llm_type(self) -> str:
@@ -51,21 +60,9 @@ class ChatOctoAI(ChatOpenAI):
@pre_init
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
values["octoai_api_base"] = get_from_dict_or_env(
values,
"octoai_api_base",
"OCTOAI_API_BASE",
default=DEFAULT_API_BASE,
)
values["octoai_api_token"] = convert_to_secret_str(
get_from_dict_or_env(values, "octoai_api_token", "OCTOAI_API_TOKEN")
)
values["model_name"] = get_from_dict_or_env(
values,
"model_name",
"MODEL_NAME",
default=DEFAULT_MODEL,
)
try:
import openai

View File

@@ -47,6 +47,7 @@ from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResu
from langchain_core.pydantic_v1 import BaseModel, Field, root_validator
from langchain_core.runnables import Runnable
from langchain_core.utils import (
from_env,
get_from_dict_or_env,
get_pydantic_field_names,
pre_init,
@@ -205,7 +206,9 @@ class ChatOpenAI(BaseChatModel):
# When updating this to use a SecretStr
# Check for classes that derive from this class (as some of them
# may assume openai_api_key is a str)
openai_api_key: Optional[str] = Field(default=None, alias="api_key")
openai_api_key: Optional[str] = Field(
default_factory=from_env("OPENAI_API_KEY"), alias="api_key"
)
"""Automatically inferred from env var `OPENAI_API_KEY` if not provided."""
openai_api_base: Optional[str] = Field(default=None, alias="base_url")
"""Base URL path for API requests, leave blank if not using a proxy or service
@@ -213,7 +216,9 @@ class ChatOpenAI(BaseChatModel):
openai_organization: Optional[str] = Field(default=None, alias="organization")
"""Automatically inferred from env var `OPENAI_ORG_ID` if not provided."""
# to support explicit proxy for OpenAI
openai_proxy: Optional[str] = None
openai_proxy: Optional[str] = Field(
default_factory=from_env("OPENAI_PROXY", default="")
)
request_timeout: Union[float, Tuple[float, float], Any, None] = Field(
default=None, alias="timeout"
)
@@ -283,9 +288,6 @@ class ChatOpenAI(BaseChatModel):
if values["n"] > 1 and values["streaming"]:
raise ValueError("n must be 1 when streaming.")
values["openai_api_key"] = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY"
)
# Check OPENAI_ORGANIZATION for backwards compatibility.
values["openai_organization"] = (
values["openai_organization"]
@@ -295,12 +297,6 @@ class ChatOpenAI(BaseChatModel):
values["openai_api_base"] = values["openai_api_base"] or os.getenv(
"OPENAI_API_BASE"
)
values["openai_proxy"] = get_from_dict_or_env(
values,
"openai_proxy",
"OPENAI_PROXY",
default="",
)
try:
import openai

View File

@@ -36,7 +36,11 @@ from langchain_core.messages import (
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import Field, root_validator
from langchain_core.utils import get_from_dict_or_env, get_pydantic_field_names
from langchain_core.utils import (
from_env,
get_from_dict_or_env,
get_pydantic_field_names,
)
logger = logging.getLogger(__name__)
@@ -67,7 +71,9 @@ class ChatPerplexity(BaseChatModel):
"""What sampling temperature to use."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for `create` call not explicitly specified."""
pplx_api_key: Optional[str] = Field(None, alias="api_key")
pplx_api_key: Optional[str] = Field(
None, alias="api_key", default_factory=from_env("PPLX_API_KEY")
)
"""Base URL path for API requests,
leave blank if not using a proxy or service emulator."""
request_timeout: Optional[Union[float, Tuple[float, float]]] = Field(
@@ -119,9 +125,6 @@ class ChatPerplexity(BaseChatModel):
@root_validator(allow_reuse=True)
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
values["pplx_api_key"] = get_from_dict_or_env(
values, "pplx_api_key", "PPLX_API_KEY"
)
try:
import openai
except ImportError:

View File

@@ -14,6 +14,7 @@ from langchain_core.outputs import ChatGeneration, ChatResult
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
from langchain_core.utils import (
convert_to_secret_str,
from_env,
get_from_dict_or_env,
get_pydantic_field_names,
pre_init,
@@ -111,19 +112,31 @@ class ChatSnowflakeCortex(BaseChatModel):
cumulative probabilities. Value should be ranging between 0.0 and 1.0.
"""
snowflake_username: Optional[str] = Field(default=None, alias="username")
snowflake_username: Optional[str] = Field(
default_factory=from_env("SNOWFLAKE_USERNAME"), alias="username"
)
"""Automatically inferred from env var `SNOWFLAKE_USERNAME` if not provided."""
snowflake_password: Optional[SecretStr] = Field(default=None, alias="password")
"""Automatically inferred from env var `SNOWFLAKE_PASSWORD` if not provided."""
snowflake_account: Optional[str] = Field(default=None, alias="account")
snowflake_account: Optional[str] = Field(
default_factory=from_env("SNOWFLAKE_ACCOUNT"), alias="account"
)
"""Automatically inferred from env var `SNOWFLAKE_ACCOUNT` if not provided."""
snowflake_database: Optional[str] = Field(default=None, alias="database")
snowflake_database: Optional[str] = Field(
default_factory=from_env("SNOWFLAKE_DATABASE"), alias="database"
)
"""Automatically inferred from env var `SNOWFLAKE_DATABASE` if not provided."""
snowflake_schema: Optional[str] = Field(default=None, alias="schema")
snowflake_schema: Optional[str] = Field(
default_factory=from_env("SNOWFLAKE_SCHEMA"), alias="schema"
)
"""Automatically inferred from env var `SNOWFLAKE_SCHEMA` if not provided."""
snowflake_warehouse: Optional[str] = Field(default=None, alias="warehouse")
snowflake_warehouse: Optional[str] = Field(
default_factory=from_env("SNOWFLAKE_WAREHOUSE"), alias="warehouse"
)
"""Automatically inferred from env var `SNOWFLAKE_WAREHOUSE` if not provided."""
snowflake_role: Optional[str] = Field(default=None, alias="role")
snowflake_role: Optional[str] = Field(
default_factory=from_env("SNOWFLAKE_ROLE"), alias="role"
)
"""Automatically inferred from env var `SNOWFLAKE_ROLE` if not provided."""
@root_validator(pre=True)
@@ -146,27 +159,9 @@ class ChatSnowflakeCortex(BaseChatModel):
"`pip install snowflake-snowpark-python`"
)
values["snowflake_username"] = get_from_dict_or_env(
values, "snowflake_username", "SNOWFLAKE_USERNAME"
)
values["snowflake_password"] = convert_to_secret_str(
get_from_dict_or_env(values, "snowflake_password", "SNOWFLAKE_PASSWORD")
)
values["snowflake_account"] = get_from_dict_or_env(
values, "snowflake_account", "SNOWFLAKE_ACCOUNT"
)
values["snowflake_database"] = get_from_dict_or_env(
values, "snowflake_database", "SNOWFLAKE_DATABASE"
)
values["snowflake_schema"] = get_from_dict_or_env(
values, "snowflake_schema", "SNOWFLAKE_SCHEMA"
)
values["snowflake_warehouse"] = get_from_dict_or_env(
values, "snowflake_warehouse", "SNOWFLAKE_WAREHOUSE"
)
values["snowflake_role"] = get_from_dict_or_env(
values, "snowflake_role", "SNOWFLAKE_ROLE"
)
connection_params = {
"account": values["snowflake_account"],

View File

@@ -37,6 +37,7 @@ from langchain_core.outputs import (
)
from langchain_core.pydantic_v1 import Field, root_validator
from langchain_core.utils import (
from_env,
get_from_dict_or_env,
get_pydantic_field_names,
)
@@ -228,10 +229,16 @@ class ChatSparkLLM(BaseChatModel):
spark_api_secret: Optional[str] = Field(default=None, alias="api_secret")
"""Automatically inferred from env var `IFLYTEK_SPARK_API_SECRET`
if not provided."""
spark_api_url: Optional[str] = Field(default=None, alias="api_url")
spark_api_url: Optional[str] = Field(
default_factory=from_env("IFLYTEK_SPARK_API_URL", default=SPARK_API_URL),
alias="api_url",
)
"""Base URL path for API requests, leave blank if not using a proxy or service
emulator."""
spark_llm_domain: Optional[str] = Field(default=None, alias="model")
spark_llm_domain: Optional[str] = Field(
default_factory=from_env("IFLYTEK_SPARK_LLM_DOMAIN", default=SPARK_LLM_DOMAIN),
alias="model",
)
"""Model name to use."""
spark_user_id: str = "lc_user"
streaming: bool = False
@@ -294,18 +301,6 @@ class ChatSparkLLM(BaseChatModel):
["spark_api_secret", "api_secret"],
"IFLYTEK_SPARK_API_SECRET",
)
values["spark_api_url"] = get_from_dict_or_env(
values,
"spark_api_url",
"IFLYTEK_SPARK_API_URL",
SPARK_API_URL,
)
values["spark_llm_domain"] = get_from_dict_or_env(
values,
"spark_llm_domain",
"IFLYTEK_SPARK_LLM_DOMAIN",
SPARK_LLM_DOMAIN,
)
# put extra params into model_kwargs
default_values = {

View File

@@ -42,6 +42,7 @@ from langchain_core.messages import (
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import BaseModel, Field, root_validator
from langchain_core.utils import (
from_env,
get_from_dict_or_env,
get_pydantic_field_names,
pre_init,
@@ -85,7 +86,9 @@ class ChatYuan2(BaseChatModel):
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for `create` call not explicitly specified."""
yuan2_api_key: Optional[str] = Field(default="EMPTY", alias="api_key")
yuan2_api_key: Optional[str] = Field(
default="EMPTY", alias="api_key", default_factory=from_env("YUAN2_API_KEY")
)
"""Automatically inferred from env var `YUAN2_API_KEY` if not provided."""
yuan2_api_base: Optional[str] = Field(
@@ -170,9 +173,6 @@ class ChatYuan2(BaseChatModel):
@pre_init
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
values["yuan2_api_key"] = get_from_dict_or_env(
values, "yuan2_api_key", "YUAN2_API_KEY"
)
try:
import openai

View File

@@ -32,7 +32,7 @@ from langchain_core.messages import (
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import BaseModel, Field, root_validator
from langchain_core.utils import get_from_dict_or_env
from langchain_core.utils import from_env, get_from_dict_or_env
logger = logging.getLogger(__name__)
@@ -335,7 +335,10 @@ class ChatZhipuAI(BaseChatModel):
# client:
zhipuai_api_key: Optional[str] = Field(default=None, alias="api_key")
"""Automatically inferred from env var `ZHIPUAI_API_KEY` if not provided."""
zhipuai_api_base: Optional[str] = Field(default=None, alias="api_base")
zhipuai_api_base: Optional[str] = Field(
default_factory=from_env("ZHIPUAI_API_BASE", default=ZHIPUAI_API_BASE),
alias="api_base",
)
"""Base URL path for API requests, leave blank if not using a proxy or service
emulator.
"""
@@ -382,9 +385,6 @@ class ChatZhipuAI(BaseChatModel):
values["zhipuai_api_key"] = get_from_dict_or_env(
values, ["zhipuai_api_key", "api_key"], "ZHIPUAI_API_KEY"
)
values["zhipuai_api_base"] = get_from_dict_or_env(
values, "zhipuai_api_base", "ZHIPUAI_API_BASE", default=ZHIPUAI_API_BASE
)
return values

View File

@@ -6,7 +6,7 @@ from typing import Any, Dict, List, Optional, Sequence, Union
from langchain_core.callbacks.base import Callbacks
from langchain_core.documents import BaseDocumentCompressor, Document
from langchain_core.pydantic_v1 import Extra, Field, root_validator
from langchain_core.utils import get_from_dict_or_env
from langchain_core.utils import from_env, get_from_dict_or_env
class DashScopeRerank(BaseDocumentCompressor):
@@ -21,7 +21,9 @@ class DashScopeRerank(BaseDocumentCompressor):
top_n: Optional[int] = 3
"""Number of documents to return."""
dashscope_api_key: Optional[str] = Field(None, alias="api_key")
dashscope_api_key: Optional[str] = Field(
None, alias="api_key", default_factory=from_env("DASHSCOPE_API_KEY")
)
"""DashScope API key. Must be specified directly or via environment variable
DASHSCOPE_API_KEY."""
@@ -46,9 +48,6 @@ class DashScopeRerank(BaseDocumentCompressor):
)
values["client"] = dashscope.TextReRank
values["dashscope_api_key"] = get_from_dict_or_env(
values, "dashscope_api_key", "DASHSCOPE_API_KEY"
)
values["model"] = dashscope.TextReRank.Models.gte_rerank
return values

View File

@@ -6,7 +6,7 @@ from typing import Any, Dict, List, Optional, Sequence, Union
from langchain_core.callbacks.base import Callbacks
from langchain_core.documents import BaseDocumentCompressor, Document
from langchain_core.pydantic_v1 import Extra, Field, root_validator
from langchain_core.utils import get_from_dict_or_env
from langchain_core.utils import from_env, get_from_dict_or_env
class VolcengineRerank(BaseDocumentCompressor):
@@ -15,11 +15,11 @@ class VolcengineRerank(BaseDocumentCompressor):
client: Any = None
"""Volcengine client to use for compressing documents."""
ak: Optional[str] = None
ak: Optional[str] = Field(default_factory=from_env("VOLC_API_AK"))
"""Access Key ID.
https://www.volcengine.com/docs/84313/1254553"""
sk: Optional[str] = None
sk: Optional[str] = Field(default_factory=from_env("VOLC_API_SK"))
"""Secret Access Key.
https://www.volcengine.com/docs/84313/1254553"""
@@ -53,9 +53,6 @@ class VolcengineRerank(BaseDocumentCompressor):
"or `pip install --user volcengine`."
)
values["ak"] = get_from_dict_or_env(values, "ak", "VOLC_API_AK")
values["sk"] = get_from_dict_or_env(values, "sk", "VOLC_API_SK")
values["client"] = VikingDBService(
host="api-vikingdb.volces.com",
region="cn-beijing",

View File

@@ -1,3 +1,4 @@
import logging
from typing import Any, Dict, List, Optional
import requests
@@ -10,6 +11,10 @@ DATABASE_URL = NOTION_BASE_URL + "/databases/{database_id}/query"
PAGE_URL = NOTION_BASE_URL + "/pages/{page_id}"
BLOCK_URL = NOTION_BASE_URL + "/blocks/{block_id}/children"
# Configure logging
logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(__name__)
class NotionDBLoader(BaseLoader):
"""Load from `Notion DB`.
@@ -63,7 +68,6 @@ class NotionDBLoader(BaseLoader):
List[Document]: List of documents.
"""
page_summaries = self._retrieve_page_summaries()
return list(self.load_page(page_summary) for page_summary in page_summaries)
def _retrieve_page_summaries(
@@ -133,11 +137,16 @@ class NotionDBLoader(BaseLoader):
elif prop_type == "status":
value = prop_data["status"]["name"] if prop_data["status"] else None
elif prop_type == "people":
value = (
[item["name"] for item in prop_data["people"]]
if prop_data["people"]
else []
)
value = []
if prop_data["people"]:
for item in prop_data["people"]:
name = item.get("name")
if not name:
logger.warning(
"Missing 'name' in 'people' property "
f"for page {page_id}"
)
value.append(name)
elif prop_type == "date":
value = prop_data["date"] if prop_data["date"] else None
elif prop_type == "last_edited_time":

View File

@@ -5,7 +5,12 @@ from __future__ import annotations
from typing import Dict
from langchain_core.pydantic_v1 import Field, SecretStr
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
from langchain_core.utils import (
convert_to_secret_str,
from_env,
get_from_dict_or_env,
pre_init,
)
from langchain_community.embeddings.openai import OpenAIEmbeddings
from langchain_community.utils.openai import is_openai_v1
@@ -21,7 +26,9 @@ class AnyscaleEmbeddings(OpenAIEmbeddings):
"""AnyScale Endpoints API keys."""
model: str = Field(default=DEFAULT_MODEL)
"""Model name to use."""
anyscale_api_base: str = Field(default=DEFAULT_API_BASE)
anyscale_api_base: str = Field(
default_factory=from_env("ANYSCALE_API_BASE", default=DEFAULT_API_BASE)
)
"""Base URL path for API requests."""
tiktoken_enabled: bool = False
"""Set this to False for non-OpenAI implementations of the embeddings API"""
@@ -44,12 +51,6 @@ class AnyscaleEmbeddings(OpenAIEmbeddings):
"ANYSCALE_API_KEY",
)
)
values["anyscale_api_base"] = get_from_dict_or_env(
values,
"anyscale_api_base",
"ANYSCALE_API_BASE",
default=DEFAULT_API_BASE,
)
try:
import openai

View File

@@ -11,7 +11,7 @@ from typing import (
from langchain_core.embeddings import Embeddings
from langchain_core.pydantic_v1 import BaseModel, Extra, Field, root_validator
from langchain_core.utils import get_from_dict_or_env
from langchain_core.utils import from_env, get_from_dict_or_env
from requests.exceptions import HTTPError
from tenacity import (
before_sleep_log,
@@ -104,7 +104,9 @@ class DashScopeEmbeddings(BaseModel, Embeddings):
client: Any #: :meta private:
"""The DashScope client."""
model: str = "text-embedding-v1"
dashscope_api_key: Optional[str] = None
dashscope_api_key: Optional[str] = Field(
default_factory=from_env("DASHSCOPE_API_KEY")
)
max_retries: int = 5
"""Maximum number of retries to make when generating."""
@@ -118,9 +120,6 @@ class DashScopeEmbeddings(BaseModel, Embeddings):
import dashscope
"""Validate that api key and python package exists in environment."""
values["dashscope_api_key"] = get_from_dict_or_env(
values, "dashscope_api_key", "DASHSCOPE_API_KEY"
)
dashscope.api_key = values["dashscope_api_key"]
try:
import dashscope

View File

@@ -8,7 +8,7 @@ from langchain_core._api.deprecation import deprecated
from langchain_core.embeddings import Embeddings
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables.config import run_in_executor
from langchain_core.utils import get_from_dict_or_env, pre_init
from langchain_core.utils import from_env, get_from_dict_or_env, pre_init
logger = logging.getLogger(__name__)
@@ -20,9 +20,13 @@ logger = logging.getLogger(__name__)
class ErnieEmbeddings(BaseModel, Embeddings):
"""`Ernie Embeddings V1` embedding models."""
ernie_api_base: Optional[str] = None
ernie_client_id: Optional[str] = None
ernie_client_secret: Optional[str] = None
ernie_api_base: Optional[str] = Field(
default_factory=from_env("ERNIE_API_BASE", default="https://aip.baidubce.com")
)
ernie_client_id: Optional[str] = Field(default_factory=from_env("ERNIE_CLIENT_ID"))
ernie_client_secret: Optional[str] = Field(
default_factory=from_env("ERNIE_CLIENT_SECRET")
)
access_token: Optional[str] = None
chunk_size: int = 16
@@ -33,19 +37,6 @@ class ErnieEmbeddings(BaseModel, Embeddings):
@pre_init
def validate_environment(cls, values: Dict) -> Dict:
values["ernie_api_base"] = get_from_dict_or_env(
values, "ernie_api_base", "ERNIE_API_BASE", "https://aip.baidubce.com"
)
values["ernie_client_id"] = get_from_dict_or_env(
values,
"ernie_client_id",
"ERNIE_CLIENT_ID",
)
values["ernie_client_secret"] = get_from_dict_or_env(
values,
"ernie_client_secret",
"ERNIE_CLIENT_SECRET",
)
return values
def _embedding(self, json: object) -> dict:

View File

@@ -2,7 +2,7 @@ from typing import Any, Dict, List, Optional
from langchain_core.embeddings import Embeddings
from langchain_core.pydantic_v1 import BaseModel, Extra, Field, root_validator
from langchain_core.utils import get_from_dict_or_env
from langchain_core.utils import from_env, get_from_dict_or_env
from packaging.version import parse
__all__ = ["GradientEmbeddings"]
@@ -31,10 +31,14 @@ class GradientEmbeddings(BaseModel, Embeddings):
model: str
"Underlying gradient.ai model id."
gradient_workspace_id: Optional[str] = None
gradient_workspace_id: Optional[str] = Field(
default_factory=from_env("GRADIENT_WORKSPACE_ID")
)
"Underlying gradient.ai workspace_id."
gradient_access_token: Optional[str] = None
gradient_access_token: Optional[str] = Field(
default_factory=from_env("GRADIENT_ACCESS_TOKEN")
)
"""gradient.ai API Token, which can be generated by going to
https://auth.gradient.ai/select-workspace
and selecting "Access tokens" under the profile drop-down.
@@ -59,13 +63,6 @@ class GradientEmbeddings(BaseModel, Embeddings):
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
values["gradient_access_token"] = get_from_dict_or_env(
values, "gradient_access_token", "GRADIENT_ACCESS_TOKEN"
)
values["gradient_workspace_id"] = get_from_dict_or_env(
values, "gradient_workspace_id", "GRADIENT_WORKSPACE_ID"
)
values["gradient_api_url"] = get_from_dict_or_env(
values, "gradient_api_url", "GRADIENT_API_URL"
)

View File

@@ -18,6 +18,7 @@ from typing import (
from langchain_core.embeddings import Embeddings
from langchain_core.pydantic_v1 import BaseModel, Extra, Field, root_validator
from langchain_core.utils import (
from_env,
get_from_dict_or_env,
get_pydantic_field_names,
pre_init,
@@ -144,14 +145,22 @@ class LocalAIEmbeddings(BaseModel, Embeddings):
client: Any #: :meta private:
model: str = "text-embedding-ada-002"
deployment: str = model
openai_api_version: Optional[str] = None
openai_api_base: Optional[str] = None
openai_api_version: Optional[str] = Field(
default_factory=from_env("OPENAI_API_VERSION", default=default_api_version)
)
openai_api_base: Optional[str] = Field(
default_factory=from_env("OPENAI_API_BASE", default="")
)
# to support explicit proxy for LocalAI
openai_proxy: Optional[str] = None
openai_proxy: Optional[str] = Field(
default_factory=from_env("OPENAI_PROXY", default="")
)
embedding_ctx_length: int = 8191
"""The maximum number of tokens to embed at once."""
openai_api_key: Optional[str] = None
openai_organization: Optional[str] = None
openai_api_key: Optional[str] = Field(default_factory=from_env("OPENAI_API_KEY"))
openai_organization: Optional[str] = Field(
default_factory=from_env("OPENAI_ORGANIZATION", default="")
)
allowed_special: Union[Literal["all"], Set[str]] = set()
disallowed_special: Union[Literal["all"], Set[str], Sequence[str]] = "all"
chunk_size: int = 1000
@@ -200,35 +209,8 @@ class LocalAIEmbeddings(BaseModel, Embeddings):
@pre_init
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
values["openai_api_key"] = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY"
)
values["openai_api_base"] = get_from_dict_or_env(
values,
"openai_api_base",
"OPENAI_API_BASE",
default="",
)
values["openai_proxy"] = get_from_dict_or_env(
values,
"openai_proxy",
"OPENAI_PROXY",
default="",
)
default_api_version = ""
values["openai_api_version"] = get_from_dict_or_env(
values,
"openai_api_version",
"OPENAI_API_VERSION",
default=default_api_version,
)
values["openai_organization"] = get_from_dict_or_env(
values,
"openai_organization",
"OPENAI_ORGANIZATION",
default="",
)
try:
import openai

View File

@@ -1,7 +1,12 @@
from typing import Dict
from langchain_core.pydantic_v1 import Field, SecretStr
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
from langchain_core.utils import (
convert_to_secret_str,
from_env,
get_from_dict_or_env,
pre_init,
)
from langchain_community.embeddings.openai import OpenAIEmbeddings
from langchain_community.utils.openai import is_openai_v1
@@ -22,9 +27,11 @@ class OctoAIEmbeddings(OpenAIEmbeddings):
octoai_api_token: SecretStr = Field(default=None)
"""OctoAI Endpoints API keys."""
endpoint_url: str = Field(default=DEFAULT_API_BASE)
endpoint_url: str = Field(
default_factory=from_env("ENDPOINT_URL", default=DEFAULT_API_BASE)
)
"""Base URL path for API requests."""
model: str = Field(default=DEFAULT_MODEL)
model: str = Field(default_factory=from_env("MODEL", default=DEFAULT_MODEL))
"""Model name to use."""
tiktoken_enabled: bool = False
"""Set this to False for non-OpenAI implementations of the embeddings API"""
@@ -41,21 +48,9 @@ class OctoAIEmbeddings(OpenAIEmbeddings):
@pre_init
def validate_environment(cls, values: dict) -> dict:
"""Validate that api key and python package exists in environment."""
values["endpoint_url"] = get_from_dict_or_env(
values,
"endpoint_url",
"ENDPOINT_URL",
default=DEFAULT_API_BASE,
)
values["octoai_api_token"] = convert_to_secret_str(
get_from_dict_or_env(values, "octoai_api_token", "OCTOAI_API_TOKEN")
)
values["model"] = get_from_dict_or_env(
values,
"model",
"MODEL",
default=DEFAULT_MODEL,
)
try:
import openai

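Note that the secret-valued `octoai_api_token` keeps the `convert_to_secret_str(get_from_dict_or_env(...))` path in the validator rather than moving to a plain `from_env` default: wrapping the key in `SecretStr` keeps it masked in reprs and logs. A small sketch of that wrapping behavior, where `Demo` and `DEMO_API_KEY` are hypothetical names used only for illustration:

import os

from langchain_core.pydantic_v1 import BaseModel, SecretStr
from langchain_core.utils import convert_to_secret_str


class Demo(BaseModel):
    demo_api_key: SecretStr  # hypothetical field, mirrors octoai_api_token


os.environ["DEMO_API_KEY"] = "sk-secret-value"
demo = Demo(demo_api_key=convert_to_secret_str(os.environ["DEMO_API_KEY"]))
print(demo.demo_api_key)                     # **********  (masked)
print(demo.demo_api_key.get_secret_value())  # sk-secret-value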
View File

@@ -23,6 +23,7 @@ from langchain_core._api.deprecation import deprecated
from langchain_core.embeddings import Embeddings
from langchain_core.pydantic_v1 import BaseModel, Extra, Field, root_validator
from langchain_core.utils import (
from_env,
get_from_dict_or_env,
get_pydantic_field_names,
pre_init,
@@ -195,19 +196,28 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
# to support Azure OpenAI Service custom deployment names
deployment: Optional[str] = model
# TODO: Move to AzureOpenAIEmbeddings.
openai_api_version: Optional[str] = Field(default=None, alias="api_version")
openai_api_version: Optional[str] = Field(
default_factory=from_env("OPENAI_API_VERSION", default=default_api_version),
alias="api_version",
)
"""Automatically inferred from env var `OPENAI_API_VERSION` if not provided."""
# to support Azure OpenAI Service custom endpoints
openai_api_base: Optional[str] = Field(default=None, alias="base_url")
"""Base URL path for API requests, leave blank if not using a proxy or service
emulator."""
# to support Azure OpenAI Service custom endpoints
openai_api_type: Optional[str] = None
openai_api_type: Optional[str] = Field(
default_factory=from_env("OPENAI_API_TYPE", default="")
)
# to support explicit proxy for OpenAI
openai_proxy: Optional[str] = None
openai_proxy: Optional[str] = Field(
default_factory=from_env("OPENAI_PROXY", default="")
)
embedding_ctx_length: int = 8191
"""The maximum number of tokens to embed at once."""
openai_api_key: Optional[str] = Field(default=None, alias="api_key")
openai_api_key: Optional[str] = Field(
default_factory=from_env("OPENAI_API_KEY"), alias="api_key"
)
"""Automatically inferred from env var `OPENAI_API_KEY` if not provided."""
openai_organization: Optional[str] = Field(default=None, alias="organization")
"""Automatically inferred from env var `OPENAI_ORG_ID` if not provided."""
@@ -289,24 +299,9 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
@pre_init
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
values["openai_api_key"] = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY"
)
values["openai_api_base"] = values["openai_api_base"] or os.getenv(
"OPENAI_API_BASE"
)
values["openai_api_type"] = get_from_dict_or_env(
values,
"openai_api_type",
"OPENAI_API_TYPE",
default="",
)
values["openai_proxy"] = get_from_dict_or_env(
values,
"openai_proxy",
"OPENAI_PROXY",
default="",
)
if values["openai_api_type"] in ("azure", "azure_ad", "azuread"):
default_api_version = "2023-05-15"
# Azure OpenAI embedding models allow a maximum of 16 texts
@@ -315,12 +310,6 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
values["chunk_size"] = min(values["chunk_size"], 16)
else:
default_api_version = ""
values["openai_api_version"] = get_from_dict_or_env(
values,
"openai_api_version",
"OPENAI_API_VERSION",
default=default_api_version,
)
# Check OPENAI_ORGANIZATION for backwards compatibility.
values["openai_organization"] = (
values["openai_organization"]

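The net effect in each of these files is the same refactor: environment resolution moves out of the `validate_environment` pre-init hook and onto the field declaration itself. A minimal sketch of the new style, with a hypothetical `Widget` model and `WIDGET_API_BASE` variable:

import os

from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils import from_env


class Widget(BaseModel):
    # The env var is read by the default_factory at instantiation time,
    # only when no explicit value is passed.
    api_base: str = Field(
        default_factory=from_env("WIDGET_API_BASE", default="https://example.invalid")
    )


os.environ["WIDGET_API_BASE"] = "https://widgets.example.com"
print(Widget().api_base)                             # https://widgets.example.com
print(Widget(api_base="http://localhost").api_base)  # explicit value still wins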
View File

@@ -5,7 +5,7 @@ from typing import Any, Dict, List, Optional
from langchain_core.embeddings import Embeddings
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils import get_from_dict_or_env, pre_init
from langchain_core.utils import from_env, get_from_dict_or_env, pre_init
logger = logging.getLogger(__name__)
@@ -13,11 +13,11 @@ logger = logging.getLogger(__name__)
class VolcanoEmbeddings(BaseModel, Embeddings):
"""`Volcengine Embeddings` embedding models."""
volcano_ak: Optional[str] = None
volcano_ak: Optional[str] = Field(default_factory=from_env("VOLC_ACCESSKEY"))
"""volcano access key
learn more from: https://www.volcengine.com/docs/6459/76491#ak-sk"""
volcano_sk: Optional[str] = None
volcano_sk: Optional[str] = Field(default_factory=from_env("VOLC_SECRETKEY"))
"""volcano secret key
learn more from: https://www.volcengine.com/docs/6459/76491#ak-sk"""
@@ -66,16 +66,6 @@ class VolcanoEmbeddings(BaseModel, Embeddings):
ValueError: volcengine package not found, please install it with
`pip install volcengine`
"""
values["volcano_ak"] = get_from_dict_or_env(
values,
"volcano_ak",
"VOLC_ACCESSKEY",
)
values["volcano_sk"] = get_from_dict_or_env(
values,
"volcano_sk",
"VOLC_SECRETKEY",
)
try:
from volcengine.maas import MaasService

View File

@@ -6,25 +6,64 @@ from langchain_core.utils import get_from_dict_or_env
class ZhipuAIEmbeddings(BaseModel, Embeddings):
"""ZhipuAI embedding models.
"""ZhipuAI embedding model integration.
To use, you should have the ``zhipuai`` python package installed, and the
environment variable ``ZHIPU_API_KEY`` set with your API key or pass it
as a named parameter to the constructor.
Setup:
More instructions about ZhipuAi Embeddings, you can get it
from https://open.bigmodel.cn/dev/api#vector
To use, you should have the ``zhipuai`` python package installed, and the
environment variable ``ZHIPU_API_KEY`` set with your API KEY.
More instructions about ZhipuAi Embeddings, you can get it
from https://open.bigmodel.cn/dev/api#vector
.. code-block:: bash
pip install -U zhipuai
export ZHIPU_API_KEY="your-api-key"
Key init args — completion params:
model: Optional[str]
Name of ZhipuAI model to use.
api_key: str
Automatically inferred from env var `ZHIPU_API_KEY` if not provided.
See full list of supported init args and their descriptions in the params section.
Instantiate:
Example:
.. code-block:: python
from langchain_community.embeddings import ZhipuAIEmbeddings
embeddings = ZhipuAIEmbeddings(api_key="your-api-key")
text = "This is a test query."
query_result = embeddings.embed_query(text)
# texts = ["This is a test query1.", "This is a test query2."]
# query_result = embeddings.embed_query(texts)
"""
embed = ZhipuAIEmbeddings(
model="embedding-2",
# api_key="...",
)
Embed single text:
.. code-block:: python
input_text = "The meaning of life is 42"
embed.embed_query(input_text)
.. code-block:: python
[-0.003832892, 0.049372625, -0.035413884, -0.019301128, 0.0068899863, 0.01248398, -0.022153955, 0.006623926, 0.00778216, 0.009558191, ...]
Embed multiple text:
.. code-block:: python
input_texts = ["This is a test query1.", "This is a test query2."]
embed.embed_documents(input_texts)
.. code-block:: python
[
[0.0083934665, 0.037985895, -0.06684559, -0.039616987, 0.015481004, -0.023952313, ...],
[-0.02713102, -0.005470169, 0.032321047, 0.042484466, 0.023290444, 0.02170547, ...]
]
""" # noqa: E501
client: Any = Field(default=None, exclude=True) #: :meta private:
model: str = Field(default="embedding-2")

View File

@@ -23,6 +23,7 @@ from langchain_core.prompt_values import PromptValue
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
from langchain_core.utils import (
check_package_version,
from_env,
get_from_dict_or_env,
get_pydantic_field_names,
pre_init,
@@ -57,7 +58,11 @@ class _AnthropicCommon(BaseLanguageModel):
max_retries: int = 2
"""Number of retries allowed for requests sent to the Anthropic Completion API."""
anthropic_api_url: Optional[str] = None
anthropic_api_url: Optional[str] = Field(
default_factory=from_env(
"ANTHROPIC_API_URL", default="https://api.anthropic.com"
)
)
anthropic_api_key: Optional[SecretStr] = None
@@ -82,12 +87,6 @@ class _AnthropicCommon(BaseLanguageModel):
get_from_dict_or_env(values, "anthropic_api_key", "ANTHROPIC_API_KEY")
)
# Get custom api url from environment.
values["anthropic_api_url"] = get_from_dict_or_env(
values,
"anthropic_api_url",
"ANTHROPIC_API_URL",
default="https://api.anthropic.com",
)
try:
import anthropic

View File

@@ -15,7 +15,12 @@ from langchain_core.callbacks import (
)
from langchain_core.outputs import Generation, GenerationChunk, LLMResult
from langchain_core.pydantic_v1 import Field, SecretStr
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
from langchain_core.utils import (
convert_to_secret_str,
from_env,
get_from_dict_or_env,
pre_init,
)
from langchain_community.llms.openai import (
BaseOpenAI,
@@ -83,9 +88,13 @@ class Anyscale(BaseOpenAI):
"""
"""Key word arguments to pass to the model."""
anyscale_api_base: str = Field(default=DEFAULT_BASE_URL)
anyscale_api_base: str = Field(
default_factory=from_env("ANYSCALE_API_BASE", default=DEFAULT_BASE_URL)
)
anyscale_api_key: SecretStr = Field(default=None)
model_name: str = Field(default=DEFAULT_MODEL)
model_name: str = Field(
default_factory=from_env("MODEL_NAME", default=DEFAULT_MODEL)
)
prefix_messages: List = Field(default_factory=list)
@@ -96,21 +105,9 @@ class Anyscale(BaseOpenAI):
@pre_init
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
values["anyscale_api_base"] = get_from_dict_or_env(
values,
"anyscale_api_base",
"ANYSCALE_API_BASE",
default=DEFAULT_BASE_URL,
)
values["anyscale_api_key"] = convert_to_secret_str(
get_from_dict_or_env(values, "anyscale_api_key", "ANYSCALE_API_KEY")
)
values["model_name"] = get_from_dict_or_env(
values,
"model_name",
"MODEL_NAME",
default=DEFAULT_MODEL,
)
try:
import openai

View File

@@ -8,7 +8,12 @@ import requests
from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM
from langchain_core.pydantic_v1 import Field, SecretStr
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
from langchain_core.utils import (
convert_to_secret_str,
from_env,
get_from_dict_or_env,
pre_init,
)
from langchain_community.llms.utils import enforce_stop_tokens
@@ -28,7 +33,12 @@ class BaichuanLLM(LLM):
timeout: int = 60
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
baichuan_api_host: Optional[str] = None
baichuan_api_host: Optional[str] = Field(
default_factory=from_env(
"BAICHUAN_API_HOST",
default="https://api.baichuan-ai.com/v1/chat/completions",
)
)
baichuan_api_key: Optional[SecretStr] = None
@pre_init
@@ -36,12 +46,6 @@ class BaichuanLLM(LLM):
values["baichuan_api_key"] = convert_to_secret_str(
get_from_dict_or_env(values, "baichuan_api_key", "BAICHUAN_API_KEY")
)
values["baichuan_api_host"] = get_from_dict_or_env(
values,
"baichuan_api_host",
"BAICHUAN_API_HOST",
default="https://api.baichuan-ai.com/v1/chat/completions",
)
return values
@property

View File

@@ -22,7 +22,7 @@ from langchain_core.callbacks import (
from langchain_core.language_models.llms import LLM
from langchain_core.outputs import GenerationChunk
from langchain_core.pydantic_v1 import BaseModel, Extra, Field
from langchain_core.utils import get_from_dict_or_env, pre_init
from langchain_core.utils import from_env, get_from_dict_or_env, pre_init
from langchain_community.llms.utils import enforce_stop_tokens
from langchain_community.utilities.anthropic import (
@@ -297,7 +297,9 @@ class BedrockBase(BaseModel, ABC):
client: Any = Field(exclude=True) #: :meta private:
region_name: Optional[str] = None
region_name: Optional[str] = Field(
default_factory=from_env("AWS_DEFAULT_REGION", default=session.region_name)
)
"""The aws region e.g., `us-west-2`. Fallsback to AWS_DEFAULT_REGION env variable
or region specified in ~/.aws/config in case it is not provided here.
"""
@@ -406,13 +408,6 @@ class BedrockBase(BaseModel, ABC):
# use default credentials
session = boto3.Session()
values["region_name"] = get_from_dict_or_env(
values,
"region_name",
"AWS_DEFAULT_REGION",
default=session.region_name,
)
client_params = {}
if values["region_name"]:
client_params["region_name"] = values["region_name"]

View File

@@ -10,7 +10,7 @@ from langchain_core.callbacks import (
)
from langchain_core.language_models.llms import LLM
from langchain_core.pydantic_v1 import Extra, Field, root_validator
from langchain_core.utils import get_from_dict_or_env, pre_init
from langchain_core.utils import from_env, get_from_dict_or_env, pre_init
from langchain_community.llms.utils import enforce_stop_tokens
from langchain_community.utilities.requests import Requests
@@ -33,7 +33,7 @@ class EdenAI(LLM):
base_url: str = "https://api.edenai.run/v2"
edenai_api_key: Optional[str] = None
edenai_api_key: Optional[str] = Field(default_factory=from_env("EDENAI_API_KEY"))
feature: Literal["text", "image"] = "text"
"""Which generative feature to use, use text by default"""
@@ -76,9 +76,6 @@ class EdenAI(LLM):
@pre_init
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key exists in environment."""
values["edenai_api_key"] = get_from_dict_or_env(
values, "edenai_api_key", "EDENAI_API_KEY"
)
return values
@root_validator(pre=True)

View File

@@ -12,7 +12,7 @@ from langchain_core.callbacks import (
from langchain_core.language_models.llms import BaseLLM
from langchain_core.outputs import Generation, LLMResult
from langchain_core.pydantic_v1 import Extra, Field, root_validator
from langchain_core.utils import get_from_dict_or_env
from langchain_core.utils import from_env, get_from_dict_or_env
from langchain_community.llms.utils import enforce_stop_tokens
@@ -54,10 +54,14 @@ class GradientLLM(BaseLLM):
model_id: str = Field(alias="model", min_length=2)
"Underlying gradient.ai model id (base or fine-tuned)."
gradient_workspace_id: Optional[str] = None
gradient_workspace_id: Optional[str] = Field(
default_factory=from_env("GRADIENT_WORKSPACE_ID")
)
"Underlying gradient.ai workspace_id."
gradient_access_token: Optional[str] = None
gradient_access_token: Optional[str] = Field(
default_factory=from_env("GRADIENT_ACCESS_TOKEN")
)
"""gradient.ai API Token, which can be generated by going to
https://auth.gradient.ai/select-workspace
and selecting "Access tokens" under the profile drop-down.
@@ -83,13 +87,6 @@ class GradientLLM(BaseLLM):
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
values["gradient_access_token"] = get_from_dict_or_env(
values, "gradient_access_token", "GRADIENT_ACCESS_TOKEN"
)
values["gradient_workspace_id"] = get_from_dict_or_env(
values, "gradient_workspace_id", "GRADIENT_WORKSPACE_ID"
)
if (
values["gradient_access_token"] is None
or len(values["gradient_access_token"]) < 10

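One detail in the hunk above: `from_env("GRADIENT_ACCESS_TOKEN")` is called without a `default`, so the returned factory raises if neither an explicit value nor the env var is present, preserving the required-key behavior of the old `get_from_dict_or_env` calls. A quick illustration under that assumption (the stdlib sketch shown earlier raises `ValueError`; the real helper appears to do the same):

import os

from langchain_core.utils import from_env

os.environ.pop("GRADIENT_ACCESS_TOKEN", None)  # make sure the variable is unset
factory = from_env("GRADIENT_ACCESS_TOKEN")    # no default supplied
try:
    factory()
except ValueError as err:
    print(f"raised as expected: {err}")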
View File

@@ -16,7 +16,12 @@ from langchain_core.callbacks import (
)
from langchain_core.language_models.llms import LLM
from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr, root_validator
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
from langchain_core.utils import (
convert_to_secret_str,
from_env,
get_from_dict_or_env,
pre_init,
)
from langchain_community.llms.utils import enforce_stop_tokens
@@ -68,8 +73,12 @@ class MinimaxCommon(BaseModel):
"""Total probability mass of tokens to consider at each step."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for `create` call not explicitly specified."""
minimax_api_host: Optional[str] = None
minimax_group_id: Optional[str] = None
minimax_api_host: Optional[str] = Field(
default_factory=from_env("MINIMAX_API_HOST", default="https://api.minimax.chat")
)
minimax_group_id: Optional[str] = Field(
default_factory=from_env("MINIMAX_GROUP_ID")
)
minimax_api_key: Optional[SecretStr] = None
@pre_init
@@ -78,16 +87,7 @@ class MinimaxCommon(BaseModel):
values["minimax_api_key"] = convert_to_secret_str(
get_from_dict_or_env(values, "minimax_api_key", "MINIMAX_API_KEY")
)
values["minimax_group_id"] = get_from_dict_or_env(
values, "minimax_group_id", "MINIMAX_GROUP_ID"
)
# Get custom api url from environment.
values["minimax_api_host"] = get_from_dict_or_env(
values,
"minimax_api_host",
"MINIMAX_API_HOST",
default="https://api.minimax.chat",
)
values["_client"] = _MinimaxEndpointClient( # type: ignore[call-arg]
host=values["minimax_api_host"],
api_key=values["minimax_api_key"],

View File

@@ -1,7 +1,12 @@
from typing import Any, Dict
from langchain_core.pydantic_v1 import Field, SecretStr
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
from langchain_core.utils import (
convert_to_secret_str,
from_env,
get_from_dict_or_env,
pre_init,
)
from langchain_community.llms.openai import BaseOpenAI
from langchain_community.utils.openai import is_openai_v1
@@ -35,9 +40,13 @@ class OctoAIEndpoint(BaseOpenAI):
"""
"""Key word arguments to pass to the model."""
octoai_api_base: str = Field(default=DEFAULT_BASE_URL)
octoai_api_base: str = Field(
default_factory=from_env("OCTOAI_API_BASE", default=DEFAULT_BASE_URL)
)
octoai_api_token: SecretStr = Field(default=None)
model_name: str = Field(default=DEFAULT_MODEL)
model_name: str = Field(
default_factory=from_env("MODEL_NAME", default=DEFAULT_MODEL)
)
@classmethod
def is_lc_serializable(cls) -> bool:
@@ -69,21 +78,9 @@ class OctoAIEndpoint(BaseOpenAI):
@pre_init
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
values["octoai_api_base"] = get_from_dict_or_env(
values,
"octoai_api_base",
"OCTOAI_API_BASE",
default=DEFAULT_BASE_URL,
)
values["octoai_api_token"] = convert_to_secret_str(
get_from_dict_or_env(values, "octoai_api_token", "OCTOAI_API_TOKEN")
)
values["model_name"] = get_from_dict_or_env(
values,
"model_name",
"MODEL_NAME",
default=DEFAULT_MODEL,
)
try:
import openai

View File

@@ -30,6 +30,7 @@ from langchain_core.language_models.llms import BaseLLM, create_base_retry_decor
from langchain_core.outputs import Generation, GenerationChunk, LLMResult
from langchain_core.pydantic_v1 import Field, root_validator
from langchain_core.utils import (
from_env,
get_from_dict_or_env,
get_pydantic_field_names,
pre_init,
@@ -201,7 +202,9 @@ class BaseOpenAI(BaseLLM):
# When updating this to use a SecretStr
# Check for classes that derive from this class (as some of them
# may assume openai_api_key is a str)
openai_api_key: Optional[str] = Field(default=None, alias="api_key")
openai_api_key: Optional[str] = Field(
default_factory=from_env("OPENAI_API_KEY"), alias="api_key"
)
"""Automatically inferred from env var `OPENAI_API_KEY` if not provided."""
openai_api_base: Optional[str] = Field(default=None, alias="base_url")
"""Base URL path for API requests, leave blank if not using a proxy or service
@@ -209,7 +212,9 @@ class BaseOpenAI(BaseLLM):
openai_organization: Optional[str] = Field(default=None, alias="organization")
"""Automatically inferred from env var `OPENAI_ORG_ID` if not provided."""
# to support explicit proxy for OpenAI
openai_proxy: Optional[str] = None
openai_proxy: Optional[str] = Field(
default_factory=from_env("OPENAI_PROXY", default="")
)
batch_size: int = 20
"""Batch size to use when passing multiple documents to generate."""
request_timeout: Union[float, Tuple[float, float], Any, None] = Field(
@@ -283,18 +288,9 @@ class BaseOpenAI(BaseLLM):
if values["streaming"] and values["best_of"] > 1:
raise ValueError("Cannot stream results when best_of > 1.")
values["openai_api_key"] = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY"
)
values["openai_api_base"] = values["openai_api_base"] or os.getenv(
"OPENAI_API_BASE"
)
values["openai_proxy"] = get_from_dict_or_env(
values,
"openai_proxy",
"OPENAI_PROXY",
default="",
)
values["openai_organization"] = (
values["openai_organization"]
or os.getenv("OPENAI_ORG_ID")

View File

@@ -6,7 +6,7 @@ from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM
from langchain_core.outputs import GenerationChunk
from langchain_core.pydantic_v1 import Extra, Field
from langchain_core.utils import get_from_dict_or_env, pre_init
from langchain_core.utils import from_env, get_from_dict_or_env, pre_init
class SVEndpointHandler:
@@ -201,7 +201,9 @@ class Sambaverse(LLM):
sambaverse_api_key: str = ""
"""sambaverse api key"""
sambaverse_model_name: Optional[str] = None
sambaverse_model_name: Optional[str] = Field(
default_factory=from_env("SAMBAVERSE_MODEL_NAME")
)
"""sambaverse expert model to use"""
model_kwargs: Optional[dict] = None
@@ -231,9 +233,6 @@ class Sambaverse(LLM):
values["sambaverse_api_key"] = get_from_dict_or_env(
values, "sambaverse_api_key", "SAMBAVERSE_API_KEY"
)
values["sambaverse_model_name"] = get_from_dict_or_env(
values, "sambaverse_model_name", "SAMBAVERSE_MODEL_NAME"
)
return values
@property

View File

@@ -18,7 +18,7 @@ from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM
from langchain_core.outputs import GenerationChunk
from langchain_core.pydantic_v1 import Field
from langchain_core.utils import get_from_dict_or_env, pre_init
from langchain_core.utils import from_env, get_from_dict_or_env, pre_init
logger = logging.getLogger(__name__)
@@ -42,11 +42,23 @@ class SparkLLM(LLM):
"""
client: Any = None #: :meta private:
spark_app_id: Optional[str] = None
spark_api_key: Optional[str] = None
spark_api_secret: Optional[str] = None
spark_api_url: Optional[str] = None
spark_llm_domain: Optional[str] = None
spark_app_id: Optional[str] = Field(
default_factory=from_env("IFLYTEK_SPARK_APP_ID")
)
spark_api_key: Optional[str] = Field(
default_factory=from_env("IFLYTEK_SPARK_API_KEY")
)
spark_api_secret: Optional[str] = Field(
default_factory=from_env("IFLYTEK_SPARK_API_SECRET")
)
spark_api_url: Optional[str] = Field(
default_factory=from_env(
"IFLYTEK_SPARK_API_URL", default="wss://spark-api.xf-yun.com/v3.1/chat"
)
)
spark_llm_domain: Optional[str] = Field(
default_factory=from_env("IFLYTEK_SPARK_LLM_DOMAIN", default="generalv3")
)
spark_user_id: str = "lc_user"
streaming: bool = False
request_timeout: int = 30
@@ -56,33 +68,6 @@ class SparkLLM(LLM):
@pre_init
def validate_environment(cls, values: Dict) -> Dict:
values["spark_app_id"] = get_from_dict_or_env(
values,
"spark_app_id",
"IFLYTEK_SPARK_APP_ID",
)
values["spark_api_key"] = get_from_dict_or_env(
values,
"spark_api_key",
"IFLYTEK_SPARK_API_KEY",
)
values["spark_api_secret"] = get_from_dict_or_env(
values,
"spark_api_secret",
"IFLYTEK_SPARK_API_SECRET",
)
values["spark_api_url"] = get_from_dict_or_env(
values,
"spark_api_url",
"IFLYTEK_SPARK_API_URL",
"wss://spark-api.xf-yun.com/v3.1/chat",
)
values["spark_llm_domain"] = get_from_dict_or_env(
values,
"spark_llm_domain",
"IFLYTEK_SPARK_LLM_DOMAIN",
"generalv3",
)
# put extra params into model_kwargs
values["model_kwargs"]["temperature"] = values["temperature"] or cls.temperature
values["model_kwargs"]["top_k"] = values["top_k"] or cls.top_k

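A property of `default_factory` worth noting, as opposed to a plain `default=` computed once at import time: the factory runs on every instantiation, so an environment variable set after the module is imported is still picked up. A small demonstration with a hypothetical `DemoSettings` model and `DEMO_APP_ID` variable mirroring the `spark_app_id` pattern above:

import os

from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils import from_env


class DemoSettings(BaseModel):
    app_id: str = Field(default_factory=from_env("DEMO_APP_ID", default="unset"))


first = DemoSettings()
os.environ["DEMO_APP_ID"] = "app-123"  # set after import and after first use
second = DemoSettings()
print(first.app_id, second.app_id)  # unset app-123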
View File

@@ -25,7 +25,7 @@ from langchain_core.callbacks import (
from langchain_core.language_models.llms import BaseLLM
from langchain_core.outputs import Generation, GenerationChunk, LLMResult
from langchain_core.pydantic_v1 import Field
from langchain_core.utils import get_from_dict_or_env, pre_init
from langchain_core.utils import from_env, get_from_dict_or_env, pre_init
from requests.exceptions import HTTPError
from tenacity import (
before_sleep_log,
@@ -184,7 +184,9 @@ class Tongyi(BaseLLM):
top_p: float = 0.8
"""Total probability mass of tokens to consider at each step."""
dashscope_api_key: Optional[str] = None
dashscope_api_key: Optional[str] = Field(
default_factory=from_env("DASHSCOPE_API_KEY")
)
"""Dashscope api key provide by Alibaba Cloud."""
streaming: bool = False
@@ -201,9 +203,6 @@ class Tongyi(BaseLLM):
@pre_init
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
values["dashscope_api_key"] = get_from_dict_or_env(
values, "dashscope_api_key", "DASHSCOPE_API_KEY"
)
try:
import dashscope
except ImportError:

View File

@@ -57,7 +57,7 @@ class _BaseYandexGPT(Serializable):
disable_request_logging: bool = False
"""YandexGPT API logs all request data by default.
If you provide personal data, confidential information, disable logging."""
_grpc_metadata: Sequence
_grpc_metadata: Optional[Sequence] = None
@property
def _llm_type(self) -> str:

View File

@@ -9,7 +9,7 @@ from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.pydantic_v1 import BaseModel, Extra, Field, root_validator
from langchain_core.retrievers import BaseRetriever
from langchain_core.utils import get_from_dict_or_env
from langchain_core.utils import from_env, get_from_dict_or_env
from langchain_community.utilities.vertexai import get_client_info
@@ -26,9 +26,11 @@ if TYPE_CHECKING:
class _BaseGoogleVertexAISearchRetriever(BaseModel):
project_id: str
"""Google Cloud Project ID."""
data_store_id: Optional[str] = None
data_store_id: Optional[str] = Field(default_factory=from_env("DATA_STORE_ID"))
"""Vertex AI Search data store ID."""
search_engine_id: Optional[str] = None
search_engine_id: Optional[str] = Field(
default_factory=from_env("SEARCH_ENGINE_ID")
)
"""Vertex AI Search app ID."""
location_id: str = "global"
"""Vertex AI Search data store location."""
@@ -66,17 +68,6 @@ class _BaseGoogleVertexAISearchRetriever(BaseModel):
) from exc
values["project_id"] = get_from_dict_or_env(values, "project_id", "PROJECT_ID")
try:
values["data_store_id"] = get_from_dict_or_env(
values, "data_store_id", "DATA_STORE_ID"
)
values["search_engine_id"] = get_from_dict_or_env(
values, "search_engine_id", "SEARCH_ENGINE_ID"
)
except Exception:
pass
return values
@property

View File

@@ -8,7 +8,7 @@ import requests
from langchain_core.callbacks import CallbackManagerForToolRun
from langchain_core.pydantic_v1 import Field, root_validator
from langchain_core.tools import BaseTool
from langchain_core.utils import get_from_dict_or_env
from langchain_core.utils import from_env, get_from_dict_or_env
logger = logging.getLogger(__name__)
@@ -23,7 +23,7 @@ class EdenaiTool(BaseTool):
feature: str
subfeature: str
edenai_api_key: Optional[str] = None
edenai_api_key: Optional[str] = Field(default_factory=from_env("EDENAI_API_KEY"))
is_async: bool = False
providers: List[str]
@@ -32,9 +32,6 @@ class EdenaiTool(BaseTool):
@root_validator(allow_reuse=True)
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key exists in environment."""
values["edenai_api_key"] = get_from_dict_or_env(
values, "edenai_api_key", "EDENAI_API_KEY"
)
return values
@staticmethod

Some files were not shown because too many files have changed in this diff.