Compare commits
252 Commits
| SHA1 |
| --- |
| 76d3afaef0 |
| 5dd2161c4b |
| 720ecacb1c |
| 8425f33363 |
| 4adabd33ac |
| c9f1768cb9 |
| 84d250f781 |
| 7db6aabf65 |
| ed62984cb2 |
| f818ec49b8 |
| 1da6d92369 |
| a6b483dcbc |
| 008c7df80d |
| 77fc2f7644 |
| 2661dc94f3 |
| 4b6fdd7bf0 |
| 2038c7fd5d |
| dfb4baa3f9 |
| 12f8e87a0e |
| bdecc5bade |
| 26d0858a60 |
| e26559f512 |
| 71b0f51003 |
| 5ba7a7d2bc |
| 642d2e4b67 |
| fd7ab539c8 |
| f4bec9686d |
| 3d81c76160 |
| b81a4c1d94 |
| 35c7c1f050 |
| 122af2effe |
| c149954cc5 |
| 9e24626e87 |
| 6bd9c1d2b3 |
| 9bc7e1851a |
| 653cf56e0e |
| debcf053eb |
| e4ae690244 |
| 8e1b1db90d |
| b753bf3323 |
| 202acce0c9 |
| 392df7b2e3 |
| 7f17ce3742 |
| 908c7bf33e |
| 43dc669332 |
| 2beb767ae5 |
| a27fa9bf10 |
| dcd0392423 |
| 1fd21ed21c |
| 9373b9c004 |
| b647505280 |
| 67300567d3 |
| 23c261ba57 |
| f4742dce50 |
| 0a24ac7388 |
| 3fb5e4d185 |
| 9ecb7240a4 |
| 42dcc502c7 |
| 90e9ec6962 |
| ba0d729961 |
| 83162649bb |
| 12d7eaa0c2 |
| 5f4a697ce3 |
| 8b79cf9566 |
| 2a8ded6c8c |
| 57a02929d5 |
| 42cd2ef329 |
| 778e7c526e |
| 19319e1746 |
| b0d5882fe1 |
| 12596b9a9b |
| 754aca794f |
| cf448a6314 |
| 31f264169d |
| eca8a5e5b8 |
| e8c1850369 |
| c4341463e8 |
| c15701eebf |
| c1d811c4bc |
| 0169d45ba8 |
| c87b5c209d |
| 321506fcd1 |
| be04695554 |
| e69218504b |
| 7f0145315a |
| ab145d85ec |
| ff8e6981ff |
| 634ccb8ccd |
| a1120e2685 |
| 2a6d4acc9d |
| 7c0f1bf23f |
| c2c0814a94 |
| cb7e12f6ba |
| 96e3e06d50 |
| 40d188948e |
| e669f9d731 |
| 8bb8c56f74 |
| 9fdf1059a4 |
| 8b697ff0ee |
| d269dd2e2f |
| 38ed55245f |
| 5019f59724 |
| efa9ef75c0 |
| d62369f478 |
| 52bf03d786 |
| 3be76ee2fa |
| ea0982eede |
| 18a4fdded6 |
| e3664272f0 |
| 049a0357e7 |
| 210a48cfb5 |
| 201b7ce9af |
| 25b1d65305 |
| ece22b6b6a |
| ffa1b3a758 |
| 4321d192ea |
| 6c5bb1b2e1 |
| ccd1400423 |
| 8bf16d5275 |
| a506302772 |
| 4a2f0c51a1 |
| f3ad22e64a |
| 6e78dacd78 |
| 0d37b4c27d |
| d6e34ca2ee |
| 233a904f2e |
| 6876b02c87 |
| 1559ba4bfc |
| 9f0a718198 |
| 9d200e6cbe |
| 7fb25b4154 |
| f06fcde0d7 |
| a3330c4258 |
| 1861cc7100 |
| 98c8516ef1 |
| 17c69678ab |
| 56653c53aa |
| 694d768174 |
| 8e6fa5f1d7 |
| 9e1e0f54d2 |
| 63e516c2b0 |
| a9db2b0b92 |
| 6c61315067 |
| 11cdfe44af |
| 008348ce71 |
| d3a5090e12 |
| acdbdbddb1 |
| 48cf978391 |
| e42a576cb2 |
| 9e32120cbb |
| 01b7b46908 |
| 35965df20d |
| 9d1867c77f |
| 6402c33299 |
| 3759a34229 |
| bd74eba152 |
| b54727fbad |
| 9c0584be74 |
| bb2ed4615c |
| 361f8e1bc6 |
| ead9d5b55c |
| 15687a28d5 |
| 467b082c34 |
| 51193309ea |
| 70a793ca9d |
| e61b528c0e |
| f386ac3bef |
| ac73154005 |
| af9ce3c224 |
| 77fcaa410a |
| ca9de26f2b |
| 7f4734c0dd |
| 1c0857b53e |
| 44da27c07b |
| 0b743f005b |
| 2aba9ab47e |
| 629d9b78fa |
| a477ddda45 |
| 9e81ab47be |
| e75766b759 |
| 17b5090c18 |
| c14a8df2ee |
| 17439daa6a |
| 4ba2c8ba75 |
| 7ae8b7f065 |
| 93bb19f69a |
| 18ebce2032 |
| 9beb03e771 |
| 1f7edcd08b |
| ef99b06362 |
| 3c83779661 |
| 51a3a86022 |
| 70f7558db2 |
| 2363c02cf3 |
| fbb82608cd |
| 9f39c23a13 |
| d5e762d328 |
| 3cd0827785 |
| dd0cd98861 |
| d0603c86b6 |
| 28ee6a7c12 |
| 2c1e735403 |
| 539941281d |
| 7d0dda7e41 |
| cf86447623 |
| 99adcdb1c9 |
| 06d5971be9 |
| 64969bc8ae |
| ce0019b646 |
| 8f06085b24 |
| 5451b724fc |
| 0bff399af1 |
| c9d4d53545 |
| db67ccb0bb |
| 78b4c7d5a0 |
| 6dd7362a54 |
| 3a82bd7bdb |
| 9a0ed75a95 |
| 0ca8d4449c |
| eedfddac2d |
| 7232e082de |
| 58220cda72 |
| 683f4a93b9 |
| fca34eb122 |
| 49de862076 |
| b6a2507794 |
| b56ca0c2a4 |
| 59adeaddb3 |
| c9bce5bbfb |
| 22abeb9f6c |
| b642d00f9f |
| c7c03d4709 |
| e2a9072b80 |
| 55fef4b64b |
| fd7f129f10 |
| 316dddc7cd |
| 1acfe86353 |
| 5de64e6d60 |
| 447a523662 |
| 8e45f720a8 |
| ca2eed36b7 |
| 923e9f9596 |
| 258ae1ba5f |
| 2aabfafe1e |
| d8fa94e6fa |
| b42f218cfc |
| f64522fbaf |
| b14b65d62a |
| 4d62def9ff |
| a992b9670d |
| 0a754fa286 |
| 2f2a5fd582 |
**.github/CONTRIBUTING.md** (7 changes, vendored)

````diff
@@ -289,6 +289,13 @@ make docs_linkcheck
 make api_docs_linkcheck
 ```
 
+### Verify Documentation changes
+
+After pushing documentation changes to the repository, you can preview and verify that the changes are
+what you wanted by clicking the `View deployment` or `Visit Preview` buttons on the pull request `Conversation` page.
+This will take you to a preview of the documentation changes.
+This preview is created by [Vercel](https://vercel.com/docs/getting-started-with-vercel).
+
 ## 🏭 Release Process
 
 As of now, LangChain has an ad hoc release process: releases are cut with high frequency by
````
**.github/workflows/doc_lint.yml** (2 changes, vendored)

```diff
@@ -19,4 +19,4 @@ jobs:
       run: |
         # We should not encourage imports directly from main init file
         # Expect for hub
-        git grep 'from langchain import' docs/{extras,docs_skeleton,snippets} | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
+        git grep 'from langchain import' docs/{docs,snippets} | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
```
**.github/workflows/scheduled_test.yml** (4 changes, vendored)

```diff
@@ -61,6 +61,10 @@ jobs:
         env:
           OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
           ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+          AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
+          AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
+          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
+          AZURE_OPENAI_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_DEPLOYMENT_NAME }}
         run: |
           make scheduled_tests
```
**.gitignore** (7 changes, vendored)

```diff
@@ -174,6 +174,7 @@ docs/api_reference/*/
 !docs/api_reference/_static/
 !docs/api_reference/templates/
 !docs/api_reference/themes/
-docs/docs_skeleton/build
-docs/docs_skeleton/node_modules
-docs/docs_skeleton/yarn.lock
+docs/docs/build
+docs/docs/node_modules
+docs/docs/yarn.lock
+_dist
```
**.gitmodules** (4 changes, vendored)

```diff
@@ -1,4 +0,0 @@
-[submodule "docs/_docs_skeleton"]
-	path = docs/_docs_skeleton
-	url = https://github.com/langchain-ai/langchain-shared-docs
-	branch = main
```
**.readthedocs.yaml**

```diff
@@ -9,9 +9,14 @@ build:
   os: ubuntu-22.04
   tools:
     python: "3.11"
   jobs:
     pre_build:
       commands:
         - python -mvirtualenv $READTHEDOCS_VIRTUALENV_PATH
         - python -m pip install --upgrade --no-cache-dir pip setuptools
         - python -m pip install --upgrade --no-cache-dir sphinx readthedocs-sphinx-ext
         - python -m pip install --exists-action=w --no-cache-dir -r docs/api_reference/requirements.txt
         - python docs/api_reference/create_api_rst.py
         - cat docs/api_reference/conf.py
         - python -m sphinx -T -E -b html -d _build/doctrees -c docs/api_reference docs/api_reference $READTHEDOCS_OUTPUT/html -j auto
 
 # Build documentation in the docs/ directory with Sphinx
 sphinx:
```
**Makefile** (6 changes)

```diff
@@ -15,10 +15,10 @@ docs_build:
 	docs/.local_build.sh
 
 docs_clean:
-	rm -r docs/_dist
+	rm -r _dist
 
 docs_linkcheck:
-	poetry run linkchecker docs/_dist/docs_skeleton/ --ignore-url node_modules
+	poetry run linkchecker _dist/docs/ --ignore-url node_modules
 
 api_docs_build:
 	poetry run python docs/api_reference/create_api_rst.py
@@ -53,4 +53,4 @@ help:
 	@echo 'api_docs_linkcheck - run linkchecker on the API Reference documentation'
 	@echo 'spell_check - run codespell on the project'
 	@echo 'spell_fix - run codespell on the project and fix the errors'
-	@echo '-- TEST and LINT tasks are within libs/*/ per-package --'
+	@echo '-- TEST and LINT tasks are within libs/*/ per-package --'
```
**README.md**

```diff
@@ -18,8 +18,9 @@
 
 Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
 
-**Production Support:** As you move your LangChains into production, we'd love to offer more hands-on support.
-Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to share more about what you're building, and our team will get in touch.
+To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com).
+[LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications.
+Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to get off the waitlist or speak with our sales team
 
 ## 🚨Breaking Changes for select chains (SQLDatabase) on 7/28/23
```
**cookbook/LLaMA2_sql_chat.ipynb** (400 changes, new file)

```json
{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "fc935871-7640-41c6-b798-58514d860fe0",
   "metadata": {},
   "source": [
    "## LLaMA2 chat with SQL\n",
    "\n",
    "Open source, local LLMs are great to consider for any application that demands data privacy.\n",
    "\n",
    "SQL is one good example. \n",
    "\n",
    "This cookbook shows how to perform text-to-SQL using various local versions of LLaMA2 run locally.\n",
    "\n",
    "## Packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "81adcf8b-395a-4f02-8749-ac976942b446",
   "metadata": {},
   "outputs": [],
   "source": [
    "! pip install langchain replicate"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8e13ed66-300b-4a23-b8ac-44df68ee4733",
   "metadata": {},
   "source": [
    "## LLM\n",
    "\n",
    "There are a few ways to access LLaMA2.\n",
    "\n",
    "To run locally, we use Ollama.ai. \n",
    "\n",
    "See [here](https://python.langchain.com/docs/integrations/chat/ollama) for details on installation and setup.\n",
    "\n",
    "Also, see [here](https://python.langchain.com/docs/guides/local_llms) for our full guide on local LLMs.\n",
    " \n",
    "To use an external API, which is not private, we can use Replicate."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "6a75a5c6-34ee-4ab9-a664-d9b432d812ee",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Init param `input` is deprecated, please use `model_kwargs` instead.\n"
     ]
    }
   ],
   "source": [
    "# Local \n",
    "from langchain.chat_models import ChatOllama\n",
    "llama2_chat = ChatOllama(model=\"llama2:13b-chat\")\n",
    "llama2_code = ChatOllama(model=\"codellama:7b-instruct\")\n",
    "\n",
    "# API\n",
    "from getpass import getpass\n",
    "from langchain.llms import Replicate\n",
    "# REPLICATE_API_TOKEN = getpass()\n",
    "# os.environ[\"REPLICATE_API_TOKEN\"] = REPLICATE_API_TOKEN\n",
    "replicate_id = \"meta/llama-2-13b-chat:f4e2de70d66816a838a89eeeb621910adffb0dd0baba3976c96980970978018d\"\n",
    "llama2_chat_replicate = Replicate(\n",
    "    model=replicate_id,\n",
    "    input={\"temperature\": 0.01, \n",
    "           \"max_length\": 500, \n",
    "           \"top_p\": 1}\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "ce96f7ea-b3d5-44e1-9fa5-a79e04a9e1fb",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Simply set the LLM we want to use\n",
    "llm = llama2_chat"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "80222165-f353-4e35-a123-5f70fd70c6c8",
   "metadata": {},
   "source": [
    "## DB\n",
    "\n",
    "Connect to a SQLite DB.\n",
    "\n",
    "To create this particular DB, you can use the code and follow the steps shown [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "025bdd82-3bb1-4948-bc7c-c3ccd94fd05c",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.utilities import SQLDatabase\n",
    "db = SQLDatabase.from_uri(\"sqlite:///nba_roster.db\", sample_rows_in_table_info= 0)\n",
    "\n",
    "def get_schema(_):\n",
    "    return db.get_table_info()\n",
    "\n",
    "def run_query(query):\n",
    "    return db.run(query)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "654b3577-baa2-4e12-a393-f40e5db49ac7",
   "metadata": {},
   "source": [
    "## Query a SQL DB \n",
    "\n",
    "Follow the runnables workflow [here](https://python.langchain.com/docs/expression_language/cookbook/sql_db)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "5a4933ea-d9c0-4b0a-8177-ba4490c6532b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "' SELECT \"Team\" FROM nba_roster WHERE \"NAME\" = \\'Klay Thompson\\';'"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Prompt\n",
    "from langchain.prompts import ChatPromptTemplate\n",
    "template = \"\"\"Based on the table schema below, write a SQL query that would answer the user's question:\n",
    "{schema}\n",
    "\n",
    "Question: {question}\n",
    "SQL Query:\"\"\"\n",
    "prompt = ChatPromptTemplate.from_messages([\n",
    "    (\"system\", \"Given an input question, convert it to a SQL query. No pre-amble.\"),\n",
    "    (\"human\", template)\n",
    "])\n",
    "\n",
    "# Chain to query\n",
    "from langchain.schema.output_parser import StrOutputParser\n",
    "from langchain.schema.runnable import RunnablePassthrough\n",
    "\n",
    "sql_response = (\n",
    "    RunnablePassthrough.assign(schema=get_schema)\n",
    "    | prompt\n",
    "    | llm.bind(stop=[\"\\nSQLResult:\"])\n",
    "    | StrOutputParser()\n",
    "    )\n",
    "\n",
    "sql_response.invoke({\"question\": \"What team is Klay Thompson on?\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0e9e2c8-9b88-4853-ac86-001bc6cc6695",
   "metadata": {},
   "source": [
    "We can review the results:\n",
    "\n",
    "* [LangSmith trace](https://smith.langchain.com/public/afa56a06-b4e2-469a-a60f-c1746e75e42b/r) LLaMA2-13 Replicate API\n",
    "* [LangSmith trace](https://smith.langchain.com/public/2d4ecc72-6b8f-4523-8f0b-ea95c6b54a1d/r) LLaMA2-13 local \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "2a2825e3-c1b6-4f7d-b9c9-d9835de323bb",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content=' Based on the table schema and SQL query, there are 30 unique teams in the NBA.')"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Chain to answer\n",
    "template = \"\"\"Based on the table schema below, question, sql query, and sql response, write a natural language response:\n",
    "{schema}\n",
    "\n",
    "Question: {question}\n",
    "SQL Query: {query}\n",
    "SQL Response: {response}\"\"\"\n",
    "prompt_response = ChatPromptTemplate.from_messages([\n",
    "    (\"system\", \"Given an input question and SQL response, convert it to a natural langugae answer. No pre-amble.\"),\n",
    "    (\"human\", template)\n",
    "])\n",
    "\n",
    "full_chain = (\n",
    "    RunnablePassthrough.assign(query=sql_response) \n",
    "    | RunnablePassthrough.assign(\n",
    "        schema=get_schema,\n",
    "        response=lambda x: db.run(x[\"query\"]),\n",
    "    )\n",
    "    | prompt_response \n",
    "    | llm\n",
    ")\n",
    "\n",
    "full_chain.invoke({\"question\": \"How many unique teams are there?\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ec17b3ee-6618-4681-b6df-089bbb5ffcd7",
   "metadata": {},
   "source": [
    "We can review the results:\n",
    "\n",
    "* [LangSmith trace](https://smith.langchain.com/public/10420721-746a-4806-8ecf-d6dc6399d739/r) LLaMA2-13 Replicate API\n",
    "* [LangSmith trace](https://smith.langchain.com/public/5265ebab-0a22-4f37-936b-3300f2dfa1c1/r) LLaMA2-13 local "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1e85381b-1edc-4bb3-a7bd-2ab23f81e54d",
   "metadata": {},
   "source": [
    "## Chat with a SQL DB \n",
    "\n",
    "Next, we can add memory."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "1985aa1c-eb8f-4fb1-a54f-c8aa10744687",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "' SELECT \"Team\" FROM nba_roster WHERE \"NAME\" = \\'Klay Thompson\\';'"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Prompt\n",
    "from langchain.memory import ConversationBufferMemory\n",
    "from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
    "template = \"\"\"Based on the table schema below, write a SQL query that would answer the user's question:\n",
    "{schema}\n",
    "\n",
    "Question: {question}\n",
    "SQL Query:\"\"\"\n",
    "prompt = ChatPromptTemplate.from_messages([\n",
    "    (\"system\", \"Given an input question, convert it to a SQL query. No pre-amble.\"),\n",
    "    MessagesPlaceholder(variable_name=\"history\"),\n",
    "    (\"human\", template)\n",
    "])\n",
    "\n",
    "memory = ConversationBufferMemory(return_messages=True)\n",
    "\n",
    "# Chain to query with memory \n",
    "from langchain.schema.runnable import RunnableLambda\n",
    "\n",
    "sql_chain = (\n",
    "    RunnablePassthrough.assign(\n",
    "        schema=get_schema,\n",
    "        history=RunnableLambda(lambda x: memory.load_memory_variables(x)[\"history\"])\n",
    "    )| prompt\n",
    "    | llm.bind(stop=[\"\\nSQLResult:\"])\n",
    "    | StrOutputParser()\n",
    ")\n",
    "\n",
    "def save(input_output):\n",
    "    output = {\"output\": input_output.pop(\"output\")}\n",
    "    memory.save_context(input_output, output)\n",
    "    return output['output']\n",
    "    \n",
    "sql_response_memory = RunnablePassthrough.assign(output=sql_chain) | save\n",
    "sql_response_memory.invoke({\"question\": \"What team is Klay Thompson on?\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "0b45818a-1498-441d-b82d-23c29428c2bb",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "' SELECT \"SALARY\" FROM nba_roster WHERE \"NAME\" = \\'Klay Thompson\\';'"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sql_response_memory.invoke({\"question\": \"What is his salary?\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "800a7a3b-f411-478b-af51-2310cd6e0425",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content=' Sure! Here\\'s the natural language response based on the given input:\\n\\n\"Klay Thompson\\'s salary is $43,219,440.\"')"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Chain to answer\n",
    "template = \"\"\"Based on the table schema below, question, sql query, and sql response, write a natural language response:\n",
    "{schema}\n",
    "\n",
    "Question: {question}\n",
    "SQL Query: {query}\n",
    "SQL Response: {response}\"\"\"\n",
    "prompt_response = ChatPromptTemplate.from_messages([\n",
    "    (\"system\", \"Given an input question and SQL response, convert it to a natural langugae answer. No pre-amble.\"),\n",
    "    (\"human\", template)\n",
    "])\n",
    "\n",
    "full_chain = (\n",
    "    RunnablePassthrough.assign(query=sql_response_memory) \n",
    "    | RunnablePassthrough.assign(\n",
    "        schema=get_schema,\n",
    "        response=lambda x: db.run(x[\"query\"]),\n",
    "    )\n",
    "    | prompt_response \n",
    "    | llm\n",
    ")\n",
    "\n",
    "full_chain.invoke({\"question\": \"What is his salary?\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b77fee61-f4da-4bb1-8285-14101e505518",
   "metadata": {},
   "source": [
    "Here is the [trace](https://smith.langchain.com/public/54794d18-2337-4ce2-8b9f-3d8a2df89e51/r)."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
```
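Distilled from the notebook JSON above, the core text-to-SQL pipeline is a short LCEL chain. The sketch below is a minimal standalone version that uses only names and calls appearing in the notebook itself; it assumes the same `nba_roster.db` SQLite file and a local Ollama `llama2:13b-chat` model.

```python
# Minimal sketch of the notebook's text-to-SQL chain; assumes nba_roster.db
# exists and an Ollama llama2:13b-chat model is running locally.
from langchain.chat_models import ChatOllama
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough
from langchain.utilities import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///nba_roster.db", sample_rows_in_table_info=0)
llm = ChatOllama(model="llama2:13b-chat")

template = """Based on the table schema below, write a SQL query that would answer the user's question:
{schema}

Question: {question}
SQL Query:"""
prompt = ChatPromptTemplate.from_messages([
    ("system", "Given an input question, convert it to a SQL query. No pre-amble."),
    ("human", template),
])

# assign() injects the live schema into the prompt inputs; the stop sequence
# keeps the model from inventing a "SQLResult:" section after the query.
sql_response = (
    RunnablePassthrough.assign(schema=lambda _: db.get_table_info())
    | prompt
    | llm.bind(stop=["\nSQLResult:"])
    | StrOutputParser()
)

print(sql_response.invoke({"question": "What team is Klay Thompson on?"}))
```

The rest of the notebook reuses the same pieces: a second `RunnablePassthrough.assign(query=sql_response)` stage executes the generated SQL with `db.run` and feeds schema, query, and result into an answer prompt, and the chat variant threads `ConversationBufferMemory` into the same prompt through a `MessagesPlaceholder`.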
**cookbook/README.md** (52 changes, new file)

```markdown
# LangChain cookbook

Example code for building applications with LangChain, with an emphasis on more applied and end-to-end examples than contained in the [main documentation](https://python.langchain.com).

Notebook | Description
:- | :-
[LLaMA2_sql_chat.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/LLaMA2_sql_chat.ipynb) | Build a chat application that interacts with a sql database using an open source llm (llama2), specifically demonstrated on a sqlite database containing nba rosters.
[Semi_Structured_RAG.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_Structured_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data, including text and tables, using unstructured for parsing, multi-vector retriever for storing, and lcel for implementing chains.
[Semi_structured_and_multi_moda...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using unstructured for parsing, multi-vector retriever for storage and retrieval, and lcel for implementing chains.
[Semi_structured_multi_modal_RA...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using various tools and methods such as unstructured for parsing, multi-vector retriever for storing, lcel for implementing chains, and open source language models like llama2, llava, and gpt4all.
[autogpt/autogpt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/autogpt.ipynb) | Implement autogpt, a language model, with langchain primitives such as llms, prompttemplates, vectorstores, embeddings, and tools.
[autogpt/marathon_times.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/marathon_times.ipynb) | Implement autogpt for finding winning marathon times.
[baby_agi.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi.ipynb) | Implement babyagi, an ai agent that can generate and execute tasks based on a given objective, with the flexibility to swap out specific vectorstores/model providers.
[baby_agi_with_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi_with_agent.ipynb) | Swap out the execution chain in the babyagi notebook with an agent that has access to tools, aiming to obtain more reliable information.
[camel_role_playing.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/camel_role_playing.ipynb) | Implement the camel framework for creating autonomous cooperative agents in large scale language models, using role-playing and inception prompting to guide chat agents towards task completion.
[causal_program_aided_language_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/causal_program_aided_language_model.ipynb) | Implement the causal program-aided language (cpal) chain, which improves upon the program-aided language (pal) by incorporating causal structure to prevent hallucination in language models, particularly when dealing with complex narratives and math problems with nested dependencies.
[code-analysis-deeplake.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/code-analysis-deeplake.ipynb) | Analyze its own code base with the help of gpt and activeloop's deep lake.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval.ipynb) | Build a custom agent that can interact with ai plugins by retrieving tools and creating natural language wrappers around openapi endpoints.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval_using_plugnplai.ipynb) | Build a custom agent with plugin retrieval functionality, utilizing ai plugins from the `plugnplai` directory.
[databricks_sql_db.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/databricks_sql_db.ipynb) | Connect to databricks runtimes and databricks sql.
[deeplake_semantic_search_over_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/deeplake_semantic_search_over_chat.ipynb) | Perform semantic search and question-answering over a group chat using activeloop's deep lake with gpt4.
[elasticsearch_db_qa.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/elasticsearch_db_qa.ipynb) | Interact with elasticsearch analytics databases in natural language and build search queries via the elasticsearch dsl api.
[forward_looking_retrieval_augm...](https://github.com/langchain-ai/langchain/tree/master/cookbook/forward_looking_retrieval_augmented_generation.ipynb) | Implement the forward-looking active retrieval augmented generation (flare) method, which generates answers to questions, identifies uncertain tokens, generates hypothetical questions based on these tokens, and retrieves relevant documents to continue generating the answer.
[generative_agents_interactive_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb) | Implement a generative agent that simulates human behavior, based on a research paper, using a time-weighted memory object backed by a langchain retriever.
[gymnasium_agent_simulation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/gymnasium_agent_simulation.ipynb) | Create a simple agent-environment interaction loop in simulated environments like text-based games with gymnasium.
[hugginggpt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/hugginggpt.ipynb) | Implement hugginggpt, a system that connects language models like chatgpt with the machine learning community via hugging face.
[hypothetical_document_embeddin...](https://github.com/langchain-ai/langchain/tree/master/cookbook/hypothetical_document_embeddings.ipynb) | Improve document indexing with hypothetical document embeddings (hyde), an embedding technique that generates and embeds hypothetical answers to queries.
[learned_prompt_optimization.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/learned_prompt_optimization.ipynb) | Automatically enhance language model prompts by injecting specific terms using reinforcement learning, which can be used to personalize responses based on user preferences.
[llm_bash.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_bash.ipynb) | Perform simple filesystem commands using language learning models (llms) and a bash process.
[llm_checker.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_checker.ipynb) | Create a self-checking chain using the llmcheckerchain function.
[llm_math.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_math.ipynb) | Solve complex word math problems using language models and python repls.
[llm_summarization_checker.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_summarization_checker.ipynb) | Check the accuracy of text summaries, with the option to run the checker multiple times for improved results.
[llm_symbolic_math.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_symbolic_math.ipynb) | Solve algebraic equations with the help of llms (language learning models) and sympy, a python library for symbolic mathematics.
[meta_prompt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/meta_prompt.ipynb) | Implement the meta-prompt concept, which is a method for building self-improving agents that reflect on their own performance and modify their instructions accordingly.
[multi_modal_output_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multi_modal_output_agent.ipynb) | Generate multi-modal outputs, specifically images and text.
[multi_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multi_player_dnd.ipynb) | Simulate multi-player dungeons & dragons games, with a custom function determining the speaking schedule of the agents.
[multiagent_authoritarian.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_authoritarian.ipynb) | Implement a multi-agent simulation where a privileged agent controls the conversation, including deciding who speaks and when the conversation ends, in the context of a simulated news network.
[multiagent_bidding.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_bidding.ipynb) | Implement a multi-agent simulation where agents bid to speak, with the highest bidder speaking next, demonstrated through a fictitious presidential debate example.
[myscale_vector_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/myscale_vector_sql.ipynb) | Access and interact with the myscale integrated vector database, which can enhance the performance of language model (llm) applications.
[openai_functions_retrieval_qa....](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_functions_retrieval_qa.ipynb) | Structure response output in a question answering system by incorporating openai functions into a retrieval pipeline.
[petting_zoo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/petting_zoo.ipynb) | Create multi-agent simulations with simulated environments using the petting zoo library.
[plan_and_execute_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/plan_and_execute_agent.ipynb) | Create plan-and-execute agents that accomplish objectives by planning tasks with a language model (llm) and executing them with a separate agent.
[press_releases.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/press_releases.ipynb) | Retrieve and query company press release data powered by [Kay.ai](https://kay.ai).
[program_aided_language_model.i...](https://github.com/langchain-ai/langchain/tree/master/cookbook/program_aided_language_model.ipynb) | Implement program-aided language models as described in the provided research paper.
[sales_agent_with_context.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/sales_agent_with_context.ipynb) | Implement a context-aware ai sales agent, salesgpt, that can have natural sales conversations, interact with other systems, and use a product knowledge base to discuss a company's offerings.
[self_query_hotel_search.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/self_query_hotel_search.ipynb) | Build a hotel room search feature with self-querying retrieval, using a specific hotel recommendation dataset.
[smart_llm.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/smart_llm.ipynb) | Implement a smartllmchain, a self-critique chain that generates multiple output proposals, critiques them to find the best one, and then improves upon it to produce a final output.
[tree_of_thought.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/tree_of_thought.ipynb) | Query a large language model using the tree of thought technique.
[twitter-the-algorithm-analysis...](https://github.com/langchain-ai/langchain/tree/master/cookbook/twitter-the-algorithm-analysis-deeplake.ipynb) | Analyze the source code of the twitter algorithm with the help of gpt4 and activeloop's deep lake.
[two_agent_debate_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_agent_debate_tools.ipynb) | Simulate multi-agent dialogues where the agents can utilize various tools.
[two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player dungeons & dragons game, where a dialoguesimulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
[wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple wikibase agent that utilizes sparql generation, with testing done on http://wikidata.org.
```
**cookbook/Semi_Structured_RAG.ipynb** (449 changes, new file)

**cookbook/Semi_structured_and_multi_modal_RAG.ipynb** (706 changes, new file)

**cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb** (597 changes, new file)
**elasticsearch.ipynb**

```diff
@@ -6,7 +6,7 @@
   "source": [
    "# Elasticsearch\n",
    "\n",
-   "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs_skeleton/docs/use_cases/qa_structured/integrations/elasticsearch.ipynb)\n",
+   "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/qa_structured/integrations/elasticsearch.ipynb)\n",
    "\n",
    "We can use LLMs to interact with Elasticsearch analytics databases in natural language.\n",
    "\n",
@@ -135,9 +135,9 @@
   "outputs": [],
   "source": [
    "# We set this so we can see what exactly is going on\n",
-   "import langchain\n",
+   "from langchain.globals import set_verbose\n",
    "\n",
-   "langchain.verbose = True"
+   "set_verbose(True)"
   ]
  },
  {
@@ -489,7 +489,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.11.3"
+   "version": "3.10.1"
   }
  },
  "nbformat": 4,
```
**learned_prompt_optimization.ipynb**

```diff
@@ -57,7 +57,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "##### Intialize the RL chain with provided defaults\n",
+   "##### Initialize the RL chain with provided defaults\n",
    "\n",
    "The prompt template which will be used to query the LLM needs to be defined.\n",
    "It can be anything, but here `{meal}` is being used and is going to be replaced by one of the meals above, the RL chain will try to pick and inject the best meal\n"
@@ -212,9 +212,9 @@
    "\n",
    "It's important to note that while the RL model can make sophisticated selections, it doesn't inherently recognize concepts like \"vegetarian\" or understand that \"beef enchiladas\" aren't vegetarian-friendly. Instead, it leverages the LLM to ground its choices in common sense.\n",
    "\n",
-   "The way the chain is learning that Tom prefers veggetarian meals is via an AutoSelectionScorer that is built into the chain. The scorer will call the LLM again and ask it to evaluate the selection (`ToSelectFrom`) using the information wrapped in (`BasedOn`).\n",
+   "The way the chain is learning that Tom prefers vegetarian meals is via an AutoSelectionScorer that is built into the chain. The scorer will call the LLM again and ask it to evaluate the selection (`ToSelectFrom`) using the information wrapped in (`BasedOn`).\n",
    "\n",
-   "You can set `langchain.debug=True` if you want to see the details of the auto-scorer, but you can also define the scoring prompt yourself."
+   "You can set `set_debug(True)` if you want to see the details of the auto-scorer, but you can also define the scoring prompt yourself."
   ]
  },
  {
@@ -286,7 +286,7 @@
    "    print(event.to_select_from)\n",
    "\n",
    "    # you can build a complex scoring function here\n",
-   "    # it is prefereable that the score ranges between 0 and 1 but it is not enforced\n",
+   "    # it is preferable that the score ranges between 0 and 1 but it is not enforced\n",
    "\n",
    "    selected_meal = event.to_select_from[\"meal\"][event.selected.index]\n",
    "    print(f\"selected meal: {selected_meal}\")\n",
@@ -617,7 +617,7 @@
    "\n",
    "### other advanced featurization options\n",
    "\n",
-   "Explictly numerical features can be provided with a colon separator:\n",
+   "Explicitly numerical features can be provided with a colon separator:\n",
    "`age = rl_chain.BasedOn(\"age:32\")`\n",
    "\n",
    "`ToSelectFrom` can be a bit more complex if the scenario demands it, instead of being a list of strings it can be:\n",
@@ -672,7 +672,7 @@
    "\n",
    "```\n",
    "\n",
-   "Internally the AutoSelectionScorer adjusted the scoring prompt to make sure that the llm scoring retured a single float.\n",
+   "Internally the AutoSelectionScorer adjusted the scoring prompt to make sure that the llm scoring returned a single float.\n",
    "\n",
    "However, if needed, a FULL scoring prompt can also be provided:\n"
   ]
@@ -730,7 +730,7 @@
    "\u001b[32;1m\u001b[1;3m[llm/start]\u001b[0m \u001b[1m[1:chain:LLMChain > 2:llm:OpenAI] Entering LLM run with input:\n",
    "\u001b[0m{\n",
    "  \"prompts\": [\n",
-   "    \"Given ['Vegetarian', 'regular dairy is ok'] rank how good or bad this selection is ['Beef Enchiladas with Feta cheese. Mexican-Greek fusion', 'Chicken Flatbreads with red sauce. Italian-Mexican fusion', 'Veggie sweet potato quesadillas with vegan cheese', 'One-Pan Tortelonni bake with peppers and onions']\\n\\nIMPORANT: you MUST return a single number between -1 and 1, -1 being bad, 1 being good\"\n",
+   "    \"Given ['Vegetarian', 'regular dairy is ok'] rank how good or bad this selection is ['Beef Enchiladas with Feta cheese. Mexican-Greek fusion', 'Chicken Flatbreads with red sauce. Italian-Mexican fusion', 'Veggie sweet potato quesadillas with vegan cheese', 'One-Pan Tortelonni bake with peppers and onions']\\n\\nIMPORTANT: you MUST return a single number between -1 and 1, -1 being bad, 1 being good\"\n",
    "  ]\n",
    "}\n",
    "\u001b[36;1m\u001b[1;3m[llm/end]\u001b[0m \u001b[1m[1:chain:LLMChain > 2:llm:OpenAI] [274ms] Exiting LLM run with output:\n",
@@ -778,14 +778,15 @@
   ],
   "source": [
    "from langchain.prompts.prompt import PromptTemplate\n",
-   "import langchain\n",
-   "langchain.debug = True\n",
+   "from langchain.globals import set_debug\n",
+   "\n",
+   "set_debug(True)\n",
    "\n",
    "REWARD_PROMPT_TEMPLATE = \"\"\"\n",
    "\n",
    "Given {preference} rank how good or bad this selection is {meal}\n",
    "\n",
-   "IMPORANT: you MUST return a single number between -1 and 1, -1 being bad, 1 being good\n",
+   "IMPORTANT: you MUST return a single number between -1 and 1, -1 being bad, 1 being good\n",
    "\n",
    "\"\"\"\n",
    "\n",
@@ -812,9 +813,9 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "poetry-venv",
+   "display_name": "Python 3 (ipykernel)",
    "language": "python",
-   "name": "poetry-venv"
+   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
@@ -826,7 +827,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.1"
+   "version": "3.10.1"
   }
  },
  "nbformat": 4,
```
**llm_bash.ipynb**

```diff
@@ -10,7 +10,7 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 9,
+  "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
@@ -37,13 +37,13 @@
      "'Hello World\\n'"
     ]
    },
-   "execution_count": 9,
+   "execution_count": 1,
    "metadata": {},
    "output_type": "execute_result"
   }
  ],
  "source": [
-  "from langchain.chains import LLMBashChain\n",
+  "from langchain_experimental.llm_bash.base import LLMBashChain\n",
   "from langchain.llms import OpenAI\n",
   "\n",
   "llm = OpenAI(temperature=0)\n",
@@ -65,7 +65,7 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 10,
+  "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
@@ -98,7 +98,7 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 11,
+  "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
@@ -125,7 +125,7 @@
      "'Hello World\\n'"
     ]
    },
-   "execution_count": 11,
+   "execution_count": 3,
    "metadata": {},
    "output_type": "execute_result"
   }
@@ -149,7 +149,7 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 12,
+  "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
@@ -166,28 +166,24 @@
   "cd ..\n",
   "```\u001b[0m\n",
   "Code: \u001b[33;1m\u001b[1;3m['ls', 'cd ..']\u001b[0m\n",
-  "Answer: \u001b[33;1m\u001b[1;3mapi.html\t\t\tllm_summarization_checker.html\n",
-  "constitutional_chain.html\tmoderation.html\n",
-  "llm_bash.html\t\t\topenai_openapi.yaml\n",
-  "llm_checker.html\t\topenapi.html\n",
-  "llm_math.html\t\t\tpal.html\n",
-  "llm_requests.html\t\tsqlite.html\u001b[0m\n",
+  "Answer: \u001b[33;1m\u001b[1;3mcpal.ipynb llm_bash.ipynb llm_symbolic_math.ipynb\n",
+  "index.mdx llm_math.ipynb pal.ipynb\u001b[0m\n",
   "\u001b[1m> Finished chain.\u001b[0m\n"
  ]
  },
  {
   "data": {
    "text/plain": [
-    "'api.html\\t\\t\\tllm_summarization_checker.html\\r\\nconstitutional_chain.html\\tmoderation.html\\r\\nllm_bash.html\\t\\t\\topenai_openapi.yaml\\r\\nllm_checker.html\\t\\topenapi.html\\r\\nllm_math.html\\t\\t\\tpal.html\\r\\nllm_requests.html\\t\\tsqlite.html'"
+    "'cpal.ipynb llm_bash.ipynb llm_symbolic_math.ipynb\\r\\nindex.mdx llm_math.ipynb pal.ipynb'"
    ]
   },
-  "execution_count": 12,
+  "execution_count": 4,
   "metadata": {},
   "output_type": "execute_result"
  }
 ],
 "source": [
- "from langchain.utilities.bash import BashProcess\n",
+ "from langchain_experimental.llm_bash.bash import BashProcess\n",
  "\n",
  "\n",
  "persistent_process = BashProcess(persistent=True)\n",
@@ -200,7 +196,7 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 13,
+  "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
@@ -217,18 +213,19 @@
   "cd ..\n",
   "```\u001b[0m\n",
   "Code: \u001b[33;1m\u001b[1;3m['ls', 'cd ..']\u001b[0m\n",
-  "Answer: \u001b[33;1m\u001b[1;3mexamples\t\tgetting_started.html\tindex_examples\n",
-  "generic\t\t\thow_to_guides.rst\u001b[0m\n",
+  "Answer: \u001b[33;1m\u001b[1;3m_category_.yml\tdata_generation.ipynb\t\t self_check\n",
+  "agents\t\tgraph\n",
+  "code_writing\tlearned_prompt_optimization.ipynb\u001b[0m\n",
   "\u001b[1m> Finished chain.\u001b[0m\n"
  ]
  },
  {
   "data": {
    "text/plain": [
-    "'examples\\t\\tgetting_started.html\\tindex_examples\\r\\ngeneric\\t\\t\\thow_to_guides.rst'"
+    "'_category_.yml\\tdata_generation.ipynb\\t\\t self_check\\r\\nagents\\t\\tgraph\\r\\ncode_writing\\tlearned_prompt_optimization.ipynb'"
    ]
   },
-  "execution_count": 13,
+  "execution_count": 5,
   "metadata": {},
   "output_type": "execute_result"
  }
@@ -237,13 +234,6 @@
   "# Run the same command again and see that the state is maintained between calls\n",
   "bash_chain.run(text)"
  ]
  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": []
-  }
 ],
 "metadata": {
@@ -262,7 +252,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.11.3"
+  "version": "3.11.4"
  }
 },
 "nbformat": 4,
```
**llm_symbolic_math.ipynb**

```diff
@@ -10,12 +10,12 @@
  },
  {
   "cell_type": "code",
-  "execution_count": null,
+  "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.llms import OpenAI\n",
-   "from langchain.chains.llm_symbolic_math.base import LLMSymbolicMathChain\n",
+   "from langchain_experimental.llm_symbolic_math.base import LLMSymbolicMathChain\n",
    "\n",
    "llm = OpenAI(temperature=0)\n",
    "llm_symbolic_math = LLMSymbolicMathChain.from_llm(llm)"
@@ -30,7 +30,7 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 23,
+  "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
@@ -39,7 +39,7 @@
      "'Answer: exp(x)*sin(x) + exp(x)*cos(x)'"
     ]
    },
-   "execution_count": 23,
+   "execution_count": 4,
    "metadata": {},
    "output_type": "execute_result"
   }
@@ -50,7 +50,7 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 18,
+  "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
@@ -59,7 +59,7 @@
      "'Answer: exp(x)*sin(x)'"
     ]
    },
-   "execution_count": 18,
+   "execution_count": 5,
    "metadata": {},
    "output_type": "execute_result"
   }
@@ -79,7 +79,7 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 19,
+  "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
@@ -88,7 +88,7 @@
      "'Answer: Eq(y(t), C2*exp(-t) + (C1 + t/2)*exp(t))'"
     ]
    },
-   "execution_count": 19,
+   "execution_count": 6,
    "metadata": {},
    "output_type": "execute_result"
   }
@@ -99,7 +99,7 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 21,
+  "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
@@ -108,7 +108,7 @@
      "'Answer: {0, -sqrt(3)*I/3, sqrt(3)*I/3}'"
     ]
    },
-   "execution_count": 21,
+   "execution_count": 7,
    "metadata": {},
    "output_type": "execute_result"
   }
@@ -119,7 +119,7 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 22,
+  "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
@@ -128,7 +128,7 @@
      "'Answer: (3 - sqrt(7), -sqrt(7) - 2, 1 - sqrt(7)), (sqrt(7) + 3, -2 + sqrt(7), 1 + sqrt(7))'"
     ]
    },
-   "execution_count": 22,
+   "execution_count": 8,
    "metadata": {},
    "output_type": "execute_result"
   }
@@ -140,9 +140,9 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "venv",
+   "display_name": "Python 3 (ipykernel)",
    "language": "python",
-   "name": "venv"
+   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
@@ -154,9 +154,9 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.11.3"
+  "version": "3.11.4"
  }
 },
 "nbformat": 4,
- "nbformat_minor": 2
+ "nbformat_minor": 4
}
```
**cookbook/plan_and_execute_agent.ipynb** (252 changes, new file)

```json
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "0ddfef23-3c74-444c-81dd-6753722997fa",
   "metadata": {},
   "source": [
    "# Plan-and-execute\n",
    "\n",
    "Plan-and-execute agents accomplish an objective by first planning what to do, then executing the sub tasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the [\"Plan-and-Solve\" paper](https://arxiv.org/abs/2305.04091).\n",
    "\n",
    "The planning is almost always done by an LLM.\n",
    "\n",
    "The execution is usually done by a separate agent (equipped with tools)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a7ecb22a-7009-48ec-b14e-f0fa5aac1cd0",
   "metadata": {},
   "source": [
    "## Imports"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "5fbbd4ee-bfe8-4a25-afe4-8d1a552a3d2e",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.agents.tools import Tool\n",
    "from langchain.chains import LLMMathChain\n",
    "from langchain.chat_models import ChatOpenAI\n",
    "from langchain.llms import OpenAI\n",
    "from langchain.utilities import DuckDuckGoSearchAPIWrapper\n",
    "from langchain_experimental.plan_and_execute import PlanAndExecute, load_agent_executor, load_chat_planner"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e0e995e5-af9d-4988-bcd0-467a2a2e18cd",
   "metadata": {},
   "source": [
    "## Tools"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "1d789f4e-54e3-4602-891a-f076e0ab9594",
   "metadata": {},
   "outputs": [],
   "source": [
    "search = DuckDuckGoSearchAPIWrapper()\n",
    "llm = OpenAI(temperature=0)\n",
    "llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)\n",
    "tools = [\n",
    "    Tool(\n",
    "        name=\"Search\",\n",
    "        func=search.run,\n",
    "        description=\"useful for when you need to answer questions about current events\"\n",
    "    ),\n",
    "    Tool(\n",
    "        name=\"Calculator\",\n",
    "        func=llm_math_chain.run,\n",
    "        description=\"useful for when you need to answer questions about math\"\n",
    "    ),\n",
    "]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "04dc6452-a07f-49f9-be12-95be1e2afccc",
   "metadata": {},
   "source": [
    "## Planner, Executor, and Agent\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "d8f49c03-c804-458b-8122-c92b26c7b7dd",
   "metadata": {},
   "outputs": [],
   "source": [
    "model = ChatOpenAI(temperature=0)\n",
    "planner = load_chat_planner(model)\n",
    "executor = load_agent_executor(model, tools, verbose=True)\n",
    "agent = PlanAndExecute(planner=planner, executor=executor)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "78ba03dd-0322-4927-b58d-a7e2027fdbb3",
   "metadata": {},
   "source": [
    "## Run example"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "a57f7efe-7866-47a7-bce5-9c7b1047964e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3mAction:\n",
      "{\n",
      "  \"action\": \"Search\",\n",
      "  \"action_input\": \"current prime minister of the UK\"\n",
      "}\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3mAction:\n",
      "```\n",
      "{\n",
      "  \"action\": \"Search\",\n",
      "  \"action_input\": \"current prime minister of the UK\"\n",
      "}\n",
      "```\u001b[0m\n",
      "Observation: \u001b[36;1m\u001b[1;3mBottom right: Rishi Sunak is the current prime minister and the first non-white prime minister. The prime minister of the United Kingdom is the principal minister of the crown of His Majesty's Government, and the head of the British Cabinet. 3 min. British Prime Minister Rishi Sunak asserted his stance on gender identity in a speech Wednesday, stating it was \"common sense\" that \"a man is a man and a woman is a woman\" — a ... The former chancellor Rishi Sunak is the UK's new prime minister. Here's what you need to know about him. He won after running for the second time this year He lost to Liz Truss in September,... Isaeli Prime Minister Benjamin Netanyahu spoke with US President Joe Biden on Wednesday, the prime minister's office said in a statement. Netanyahu \"thanked the President for the powerful words of ... By Yasmeen Serhan/London Updated: October 25, 2022 12:56 PM EDT | Originally published: October 24, 2022 9:17 AM EDT S top me if you've heard this one before: After a tumultuous period of political...\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3mThe search results indicate that Rishi Sunak is the current prime minister of the UK. However, it's important to note that this information may not be accurate or up to date.\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3mAction:\n",
      "```\n",
      "{\n",
      "  \"action\": \"Search\",\n",
      "  \"action_input\": \"current age of the prime minister of the UK\"\n",
      "}\n",
      "```\u001b[0m\n",
      "Observation: \u001b[36;1m\u001b[1;3mHow old is Rishi Sunak? Mr Sunak was born on 12 May, 1980, making him 42 years old. He first became an MP in 2015, aged 34, and has served the constituency of Richmond in Yorkshire ever since. He... Prime Ministers' ages when they took office From oldest to youngest, the ages of the PMs were as follows: Winston Churchill - 65 years old James Callaghan - 64 years old Clement Attlee - 62 years... Anna Kaufman USA TODAY Just a few days after Liz Truss resigned as prime minister, the UK has a new prime minister. Truss, who lasted a mere 45 days in office, will be replaced by Rishi... Advertisement Rishi Sunak is the youngest British prime minister of modern times. Mr. Sunak is 42 and started out in Parliament in 2015. Rishi Sunak was appointed as chancellor of the Exchequer... The first prime minister of the current United Kingdom of Great Britain and Northern Ireland upon its effective creation in 1922 (when 26 Irish counties seceded and created the Irish Free State) was Bonar Law, [10] although the country was not renamed officially until 1927, when Stanley Baldwin was the serving prime minister. [11]\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3mBased on the search results, it seems that Rishi Sunak is the current prime minister of the UK. However, I couldn't find any specific information about his age. Would you like me to search again for the current age of the prime minister?\n",
      "\n",
      "Action:\n",
      "```\n",
      "{\n",
      "  \"action\": \"Search\",\n",
      "  \"action_input\": \"age of Rishi Sunak\"\n",
      "}\n",
      "```\u001b[0m\n",
      "Observation: \u001b[36;1m\u001b[1;3mRishi Sunak is 42 years old, making him the youngest person to hold the office of prime minister in modern times. How tall is Rishi Sunak? How Old Is Rishi Sunak? Rishi Sunak was born on May 12, 1980, in Southampton, England. Parents and Nationality Sunak's parents were born to Indian-origin families in East Africa before... Born on May 12, 1980, Rishi is currently 42 years old. He has been a member of parliament since 2015 where he was an MP for Richmond and has served in roles including Chief Secretary to the Treasury and the Chancellor of Exchequer while Boris Johnson was PM. Family Murty, 42, is the daughter of the Indian billionaire NR Narayana Murthy, often described as the Bill Gates of India, who founded the software company Infosys. According to reports, his... Sunak became the first non-White person to lead the country and, at age 42, the youngest to take on the role in more than a century. Like most politicians, Sunak is revered by some and...\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3mBased on the search results, Rishi Sunak is currently 42 years old. He was born on May 12, 1980.\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3mThought: To calculate the age raised to the power of 0.43, I can use the calculator tool.\n",
      "\n",
      "Action:\n",
      "```json\n",
      "{\n",
      "  \"action\": \"Calculator\",\n",
      "  \"action_input\": \"42^0.43\"\n",
      "}\n",
      "```\u001b[0m\n",
      "\n",
      "\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
      "42^0.43\u001b[32;1m\u001b[1;3m```text\n",
      "42**0.43\n",
      "```\n",
      "...numexpr.evaluate(\"42**0.43\")...\n",
      "\u001b[0m\n",
      "Answer: \u001b[33;1m\u001b[1;3m4.9888126515157\u001b[0m\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "\n",
      "Observation: \u001b[33;1m\u001b[1;3mAnswer: 4.9888126515157\u001b[0m\n",
      "Thought:\u001b[32;1m\u001b[1;3mThe age raised to the power of 0.43 is approximately 4.9888126515157.\n",
      "\n",
      "Final Answer:\n",
      "```json\n",
      "{\n",
      "  \"action\": \"Final Answer\",\n",
      "  \"action_input\": \"The age raised to the power of 0.43 is approximately 4.9888126515157.\"\n",
      "}\n",
      "```\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3mAction:\n",
      "```\n",
      "{\n",
      "  \"action\": \"Final Answer\",\n",
      "  \"action_input\": \"The current prime minister of the UK is Rishi Sunak. His age raised to the power of 0.43 is approximately 4.9888126515157.\"\n",
      "}\n",
      "```\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'The current prime minister of the UK is Rishi Sunak. His age raised to the power of 0.43 is approximately 4.9888126515157.'"
```
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent.run(\"Who is the current prime minister of the UK? What is their current age raised to the 0.43 power?\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "0ef78a07-1a2a-46f8-9bc9-ae45f9bd706c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "poetry-venv",
|
||||
"language": "python",
|
||||
"name": "poetry-venv"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
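
The defining cell for the agent traced above appears earlier in this notebook, so the following is only a plausible reconstruction; the tool names, model, and temperature are assumptions.

```python
# A hedged sketch of the setup behind the traces above: a structured-chat
# agent with a SerpAPI "Search" tool and an LLMMathChain-backed "Calculator"
# (the Calculator evaluates expressions such as 42**0.43 via numexpr).
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)  # Search + Calculator

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
```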
|
||||
152
cookbook/press_releases.ipynb
Normal file
@@ -0,0 +1,152 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "62ee82e4-2ad8-498b-8438-fac388afe1a2",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Press Releases Data\n",
|
||||
"=\n",
|
||||
"\n",
|
||||
"Press Releases data powered by [Kay.ai](https://kay.ai).\n",
|
||||
"\n",
|
||||
">Press releases are used by companies to announce something noteworthy, including product launches, financial performance reports, partnerships, and other significant news. They are widely used by analysts to track corporate strategy, operational updates and financial performance.\n",
|
||||
"Kay.ai obtains press releases of all US public companies from a variety of sources, which include the company's official press room and partnerships with various data API providers. \n",
|
||||
"This data is updated till Sept 30th for free access, if you want to access the real-time feed, reach out to us at hello@kay.ai or [tweet at us](https://twitter.com/vishalrohra_)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "8183d85d-365f-4672-a963-52b533547de0",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Setup\n",
|
||||
"=\n",
|
||||
"\n",
|
||||
"First you will need to install the `kay` package. You will also need an API key: you can get one for free at [https://kay.ai](https://kay.ai/). Once you have an API key, you must set it as an environment variable `KAY_API_KEY`.\n",
|
||||
"\n",
|
||||
"In this example we're going to use the `KayAiRetriever`. Take a look at the [kay notebook](/docs/integrations/retrievers/kay) for more detailed information for the parmeters that it accepts."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "02ec21c7-49fe-4844-b58a-bf064ad40b2a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Examples\n",
|
||||
"="
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "bf0395f7-6ebe-4136-8b0d-00b9dea3becd",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdin",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
" ········\n",
|
||||
" ········\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Setup API keys for Kay and OpenAI\n",
|
||||
"from getpass import getpass\n",
|
||||
"KAY_API_KEY = getpass()\n",
|
||||
"OPENAI_API_KEY = getpass()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "f7fcaf70-29a4-444b-8f07-9784f808c300",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"os.environ[\"KAY_API_KEY\"] = KAY_API_KEY\n",
|
||||
"os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "ac00bf93-3635-4ffe-b9a6-a8b4f35c0c85",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chains import ConversationalRetrievalChain\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.retrievers import KayAiRetriever\n",
|
||||
"\n",
|
||||
"model = ChatOpenAI(model_name=\"gpt-3.5-turbo\")\n",
|
||||
"retriever = KayAiRetriever.create(dataset_id=\"company\", data_types=[\"PressRelease\"], num_contexts=6)\n",
|
||||
"qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "8d9d927c-35b2-4a7b-8ea7-4d0350797941",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"-> **Question**: How is the healthcare industry adopting generative AI tools? \n",
|
||||
"\n",
|
||||
"**Answer**: The healthcare industry is adopting generative AI tools to improve various aspects of patient care and administrative tasks. Companies like HCA Healthcare Inc, Amazon Com Inc, and Mayo Clinic have collaborated with technology providers like Google Cloud, AWS, and Microsoft to implement generative AI solutions.\n",
|
||||
"\n",
|
||||
"HCA Healthcare is testing a nurse handoff tool that generates draft reports quickly and accurately, which nurses have shown interest in using. They are also exploring the use of Google's medically-tuned Med-PaLM 2 LLM to support caregivers in asking complex medical questions.\n",
|
||||
"\n",
|
||||
"Amazon Web Services (AWS) has introduced AWS HealthScribe, a generative AI-powered service that automatically creates clinical documentation. However, integrating multiple AI systems into a cohesive solution requires significant engineering resources, including access to AI experts, healthcare data, and compute capacity.\n",
|
||||
"\n",
|
||||
"Mayo Clinic is among the first healthcare organizations to deploy Microsoft 365 Copilot, a generative AI service that combines large language models with organizational data from Microsoft 365. This tool has the potential to automate tasks like form-filling, relieving administrative burdens on healthcare providers and allowing them to focus more on patient care.\n",
|
||||
"\n",
|
||||
"Overall, the healthcare industry is recognizing the potential benefits of generative AI tools in improving efficiency, automating tasks, and enhancing patient care. \n",
|
||||
"\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# More sample questions in the Playground on https://kay.ai\n",
|
||||
"questions = [\n",
|
||||
" \"How is the healthcare industry adopting generative AI tools?\",\n",
|
||||
" #\"What are some recent challenges faced by the renewable energy sector?\",\n",
|
||||
"]\n",
|
||||
"chat_history = []\n",
|
||||
"\n",
|
||||
"for question in questions:\n",
|
||||
" result = qa({\"question\": question, \"chat_history\": chat_history})\n",
|
||||
" chat_history.append((question, result[\"answer\"]))\n",
|
||||
" print(f\"-> **Question**: {question} \\n\")\n",
|
||||
" print(f\"**Answer**: {result['answer']} \\n\")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.18"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
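
As an aside (not part of the notebook above), the retriever can also be queried directly when you want to inspect the raw press-release snippets before they reach the chain; `get_relevant_documents` is the standard retriever entry point, and the metadata key shown is an assumption about the Kay.ai payload.

```python
# Hedged sketch: inspect the raw documents returned by the Kay.ai retriever.
docs = retriever.get_relevant_documents(
    "How is the healthcare industry adopting generative AI tools?"
)
for doc in docs:
    # Metadata fields depend on the Kay.ai payload; "title" is an assumption.
    print(doc.metadata.get("title"), "->", doc.page_content[:120])
```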
|
||||
@@ -12,7 +12,7 @@
|
||||
"\n",
|
||||
"SalesGPT is context-aware, which means it can understand what section of a sales conversation it is in and act accordingly.\n",
|
||||
" \n",
|
||||
"As such, this agent can have a natural sales conversation with a prospect and behaves based on the conversation stage. Hence, this notebook demonstrates how we can use AI to automate sales development representatives activites, such as outbound sales calls. \n",
|
||||
"As such, this agent can have a natural sales conversation with a prospect and behaves based on the conversation stage. Hence, this notebook demonstrates how we can use AI to automate sales development representatives activities, such as outbound sales calls. \n",
|
||||
"\n",
|
||||
"Additionally, the AI Sales agent has access to tools, which allow it to interact with other systems.\n",
|
||||
"\n",
|
||||
@@ -66,7 +66,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# install aditional dependencies\n",
|
||||
"# install additional dependencies\n",
|
||||
"# ! pip install chromadb openai tiktoken"
|
||||
]
|
||||
},
|
||||
@@ -150,7 +150,7 @@
|
||||
" {conversation_history}\n",
|
||||
" ===\n",
|
||||
"\n",
|
||||
" Now determine what should be the next immediate conversation stage for the agent in the sales conversation by selecting ony from the following options:\n",
|
||||
" Now determine what should be the next immediate conversation stage for the agent in the sales conversation by selecting only from the following options:\n",
|
||||
" 1. Introduction: Start the conversation by introducing yourself and your company. Be polite and respectful while keeping the tone of the conversation professional.\n",
|
||||
" 2. Qualification: Qualify the prospect by confirming if they are the right person to talk to regarding your product/service. Ensure that they have the authority to make purchasing decisions.\n",
|
||||
" 3. Value proposition: Briefly explain how your product/service can benefit the prospect. Focus on the unique selling points and value proposition of your product/service that sets it apart from competitors.\n",
|
||||
@@ -277,7 +277,7 @@
|
||||
" \n",
|
||||
" ===\n",
|
||||
"\n",
|
||||
" Now determine what should be the next immediate conversation stage for the agent in the sales conversation by selecting ony from the following options:\n",
|
||||
" Now determine what should be the next immediate conversation stage for the agent in the sales conversation by selecting only from the following options:\n",
|
||||
" 1. Introduction: Start the conversation by introducing yourself and your company. Be polite and respectful while keeping the tone of the conversation professional.\n",
|
||||
" 2. Qualification: Qualify the prospect by confirming if they are the right person to talk to regarding your product/service. Ensure that they have the authority to make purchasing decisions.\n",
|
||||
" 3. Value proposition: Briefly explain how your product/service can benefit the prospect. Focus on the unique selling points and value proposition of your product/service that sets it apart from competitors.\n",
|
||||
1181
cookbook/self_query_hotel_search.ipynb
Normal file
@@ -17,7 +17,7 @@
|
||||
"\n",
|
||||
"Note that SmartLLMChains\n",
|
||||
"- use more LLM passes (ie n+2 instead of just 1)\n",
|
||||
"- only work then the underlying LLM has the capability for reflection, whicher smaller models often don't\n",
|
||||
"- only work then the underlying LLM has the capability for reflection, which smaller models often don't\n",
|
||||
"- only work with underlying models that return exactly 1 output, not multiple\n",
|
||||
"\n",
|
||||
"This notebook demonstrates how to use a SmartLLMChain."
|
||||
@@ -241,7 +241,7 @@
|
||||
" ideation_llm=ChatOpenAI(temperature=0.9, model_name=\"gpt-4\"),\n",
|
||||
" llm=ChatOpenAI(\n",
|
||||
" temperature=0, model_name=\"gpt-4\"\n",
|
||||
" ), # will be used for critqiue and resolution as no specific llms are given\n",
|
||||
" ), # will be used for critique and resolution as no specific llms are given\n",
|
||||
" prompt=prompt,\n",
|
||||
" n_ideas=3,\n",
|
||||
" verbose=True,\n",
|
||||
@@ -1,3 +1,7 @@
|
||||
# SQL Database Chain
|
||||
|
||||
This example demonstrates the use of the `SQLDatabaseChain` for answering questions over a SQL database.
|
||||
|
||||
Under the hood, LangChain uses SQLAlchemy to connect to SQL databases. The `SQLDatabaseChain` can therefore be used with any SQL dialect supported by SQLAlchemy, such as MS SQL, MySQL, MariaDB, PostgreSQL, Oracle SQL, [Databricks](/docs/ecosystem/integrations/databricks.html) and SQLite. Please refer to the SQLAlchemy documentation for more information about requirements for connecting to your database. For example, a connection to MySQL requires an appropriate connector such as PyMySQL. A URI for a MySQL connection might look like: `mysql+pymysql://user:pass@some_mysql_db_address/db_name`.
|
||||
|
||||
This demonstration uses SQLite and the example Chinook database.
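
A minimal end-to-end sketch under those assumptions (the Chinook SQLite file is assumed to sit in the working directory, and import paths vary across LangChain versions):

```python
# Connect a SQLDatabaseChain to the local Chinook SQLite database.
from langchain.chains import SQLDatabaseChain
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///Chinook.db")  # path is an assumption
llm = OpenAI(temperature=0)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)

db_chain.run("How many employees are there?")
```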
|
||||
@@ -31,8 +35,8 @@ db_chain.run("How many employees are there?")
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
> Entering new SQLDatabaseChain chain...
|
||||
How many employees are there?
|
||||
SQLQuery:
|
||||
@@ -71,8 +75,8 @@ db_chain.run("How many albums by Aerosmith?")
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
> Entering new SQLDatabaseChain chain...
|
||||
How many albums by Aerosmith?
|
||||
SQLQuery:SELECT COUNT(*) FROM Album WHERE ArtistId = 3;
|
||||
@@ -129,8 +133,8 @@ db_chain.run("How many employees are there in the foobar table?")
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
> Entering new SQLDatabaseChain chain...
|
||||
How many employees are there in the foobar table?
|
||||
SQLQuery:SELECT COUNT(*) FROM Employee;
|
||||
@@ -165,8 +169,8 @@ result["intermediate_steps"]
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
> Entering new SQLDatabaseChain chain...
|
||||
How many employees are there in the foobar table?
|
||||
SQLQuery:SELECT COUNT(*) FROM Employee;
|
||||
@@ -313,8 +317,8 @@ db_chain.run("What are some example tracks by composer Johann Sebastian Bach?")
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
> Entering new SQLDatabaseChain chain...
|
||||
What are some example tracks by composer Johann Sebastian Bach?
|
||||
SQLQuery:SELECT Name FROM Track WHERE Composer = 'Johann Sebastian Bach' LIMIT 3
|
||||
@@ -352,23 +356,23 @@ print(db.table_info)
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
|
||||
|
||||
CREATE TABLE "Track" (
|
||||
"TrackId" INTEGER NOT NULL,
|
||||
"Name" NVARCHAR(200) NOT NULL,
|
||||
"AlbumId" INTEGER,
|
||||
"MediaTypeId" INTEGER NOT NULL,
|
||||
"GenreId" INTEGER,
|
||||
"Composer" NVARCHAR(220),
|
||||
"Milliseconds" INTEGER NOT NULL,
|
||||
"Bytes" INTEGER,
|
||||
"UnitPrice" NUMERIC(10, 2) NOT NULL,
|
||||
PRIMARY KEY ("TrackId"),
|
||||
FOREIGN KEY("MediaTypeId") REFERENCES "MediaType" ("MediaTypeId"),
|
||||
FOREIGN KEY("GenreId") REFERENCES "Genre" ("GenreId"),
|
||||
"TrackId" INTEGER NOT NULL,
|
||||
"Name" NVARCHAR(200) NOT NULL,
|
||||
"AlbumId" INTEGER,
|
||||
"MediaTypeId" INTEGER NOT NULL,
|
||||
"GenreId" INTEGER,
|
||||
"Composer" NVARCHAR(220),
|
||||
"Milliseconds" INTEGER NOT NULL,
|
||||
"Bytes" INTEGER,
|
||||
"UnitPrice" NUMERIC(10, 2) NOT NULL,
|
||||
PRIMARY KEY ("TrackId"),
|
||||
FOREIGN KEY("MediaTypeId") REFERENCES "MediaType" ("MediaTypeId"),
|
||||
FOREIGN KEY("GenreId") REFERENCES "Genre" ("GenreId"),
|
||||
FOREIGN KEY("AlbumId") REFERENCES "Album" ("AlbumId")
|
||||
)
|
||||
|
||||
|
||||
/*
|
||||
2 rows from Track table:
|
||||
TrackId Name AlbumId MediaTypeId GenreId Composer Milliseconds Bytes UnitPrice
|
||||
@@ -392,8 +396,8 @@ db_chain.run("What are some example tracks by Bach?")
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
> Entering new SQLDatabaseChain chain...
|
||||
What are some example tracks by Bach?
|
||||
SQLQuery:SELECT "Name", "Composer" FROM "Track" WHERE "Composer" LIKE '%Bach%' LIMIT 5
|
||||
@@ -411,7 +415,7 @@ db_chain.run("What are some example tracks by Bach?")
|
||||
</CodeOutputBlock>
|
||||
|
||||
### Custom Table Info
|
||||
In some cases, it can be useful to provide custom table information instead of using the automatically generated table definitions and the first `sample_rows_in_table_info` sample rows. For example, if you know that the first few rows of a table are uninformative, it could help to manually provide example rows that are more diverse or provide more information to the model. It is also possible to limit the columns that will be visible to the model if there are unnecessary columns.
|
||||
In some cases, it can be useful to provide custom table information instead of using the automatically generated table definitions and the first `sample_rows_in_table_info` sample rows. For example, if you know that the first few rows of a table are uninformative, it could help to manually provide example rows that are more diverse or provide more information to the model. It is also possible to limit the columns that will be visible to the model if there are unnecessary columns.
|
||||
|
||||
This information can be provided as a dictionary with table names as the keys and table information as the values. For example, let's provide a custom definition and sample rows for the Track table with only a few columns:
|
||||
|
||||
@@ -419,7 +423,7 @@ This information can be provided as a dictionary with table names as the keys an
|
||||
```python
|
||||
custom_table_info = {
|
||||
"Track": """CREATE TABLE Track (
|
||||
"TrackId" INTEGER NOT NULL,
|
||||
"TrackId" INTEGER NOT NULL,
|
||||
"Name" NVARCHAR(200) NOT NULL,
|
||||
"Composer" NVARCHAR(220),
|
||||
PRIMARY KEY ("TrackId")
|
||||
@@ -448,22 +452,22 @@ print(db.table_info)
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
|
||||
|
||||
CREATE TABLE "Playlist" (
|
||||
"PlaylistId" INTEGER NOT NULL,
|
||||
"Name" NVARCHAR(120),
|
||||
"PlaylistId" INTEGER NOT NULL,
|
||||
"Name" NVARCHAR(120),
|
||||
PRIMARY KEY ("PlaylistId")
|
||||
)
|
||||
|
||||
|
||||
/*
|
||||
2 rows from Playlist table:
|
||||
PlaylistId Name
|
||||
1 Music
|
||||
2 Movies
|
||||
*/
|
||||
|
||||
|
||||
CREATE TABLE Track (
|
||||
"TrackId" INTEGER NOT NULL,
|
||||
"TrackId" INTEGER NOT NULL,
|
||||
"Name" NVARCHAR(200) NOT NULL,
|
||||
"Composer" NVARCHAR(220),
|
||||
PRIMARY KEY ("TrackId")
|
||||
@@ -490,8 +494,8 @@ db_chain.run("What are some example tracks by Bach?")
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
> Entering new SQLDatabaseChain chain...
|
||||
What are some example tracks by Bach?
|
||||
SQLQuery:SELECT "Name" FROM Track WHERE "Composer" LIKE '%Bach%' LIMIT 5;
|
||||
@@ -501,31 +505,31 @@ db_chain.run("What are some example tracks by Bach?")
|
||||
Unless the user specifies in the question a specific number of examples to obtain, query for at most 5 results using the LIMIT clause as per SQLite. You can order the results to return the most informative data in the database.
|
||||
Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (") to denote them as delimited identifiers.
|
||||
Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.
|
||||
|
||||
|
||||
Use the following format:
|
||||
|
||||
|
||||
Question: "Question here"
|
||||
SQLQuery: "SQL Query to run"
|
||||
SQLResult: "Result of the SQLQuery"
|
||||
Answer: "Final answer here"
|
||||
|
||||
|
||||
Only use the following tables:
|
||||
|
||||
|
||||
CREATE TABLE "Playlist" (
|
||||
"PlaylistId" INTEGER NOT NULL,
|
||||
"Name" NVARCHAR(120),
|
||||
"PlaylistId" INTEGER NOT NULL,
|
||||
"Name" NVARCHAR(120),
|
||||
PRIMARY KEY ("PlaylistId")
|
||||
)
|
||||
|
||||
|
||||
/*
|
||||
2 rows from Playlist table:
|
||||
PlaylistId Name
|
||||
1 Music
|
||||
2 Movies
|
||||
*/
|
||||
|
||||
|
||||
CREATE TABLE Track (
|
||||
"TrackId" INTEGER NOT NULL,
|
||||
"TrackId" INTEGER NOT NULL,
|
||||
"Name" NVARCHAR(200) NOT NULL,
|
||||
"Composer" NVARCHAR(220),
|
||||
PRIMARY KEY ("TrackId")
|
||||
@@ -537,7 +541,7 @@ db_chain.run("What are some example tracks by Bach?")
|
||||
2 Balls to the Wall None
|
||||
3 My favorite song ever The coolest composer of all time
|
||||
*/
|
||||
|
||||
|
||||
Question: What are some example tracks by Bach?
|
||||
SQLQuery:SELECT "Name" FROM Track WHERE "Composer" LIKE '%Bach%' LIMIT 5;
|
||||
SQLResult: [('American Woman',), ('Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace',), ('Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria',), ('Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude',), ('Toccata and Fugue in D Minor, BWV 565: I. Toccata',)]
|
||||
@@ -557,7 +561,7 @@ db_chain.run("What are some example tracks by Bach?")
|
||||
|
||||
### SQL Views
|
||||
|
||||
In some case, the table schema can be hidden behind a JSON or JSONB column. Adding row samples into the prompt might help won't always describe the data perfectly.
|
||||
In some cases, the table schema can be hidden behind a JSON or JSONB column. Adding row samples into the prompt might help, but they won't always describe the data perfectly.
|
||||
|
||||
For this reason, custom SQL views can help.
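
A hedged sketch of that approach: flatten the JSONB payload into a view, then restrict the chain to the view. All names here are illustrative, and the `view_support` flag is assumed to be available in your LangChain version.

```python
# Create a flattening view over a JSONB column, then expose only that view.
from sqlalchemy import create_engine, text
from langchain.sql_database import SQLDatabase

uri = "postgresql+psycopg2://user:pass@localhost/db_name"  # illustrative
engine = create_engine(uri)
with engine.begin() as conn:
    conn.execute(text(
        "CREATE VIEW orders_flat AS "
        "SELECT payload->>'customer' AS customer, "
        "(payload->>'total')::numeric AS total FROM orders"
    ))

db = SQLDatabase.from_uri(uri, include_tables=["orders_flat"], view_support=True)
```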
|
||||
|
||||
@@ -609,19 +613,19 @@ chain.run("How many employees are also customers?")
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
> Entering new SQLDatabaseSequentialChain chain...
|
||||
Table names to use:
|
||||
['Employee', 'Customer']
|
||||
|
||||
|
||||
> Entering new SQLDatabaseChain chain...
|
||||
How many employees are also customers?
|
||||
SQLQuery:SELECT COUNT(*) FROM Employee e INNER JOIN Customer c ON e.EmployeeId = c.SupportRepId;
|
||||
SQLResult: [(59,)]
|
||||
Answer:59 employees are also customers.
|
||||
> Finished chain.
|
||||
|
||||
|
||||
> Finished chain.
|
||||
|
||||
|
||||
@@ -692,8 +696,8 @@ local_chain("How many customers are there?")
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
> Entering new SQLDatabaseChain chain...
|
||||
How many customers are there?
|
||||
SQLQuery:
|
||||
@@ -879,8 +883,8 @@ print("\n" + yaml_example)
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
> Entering new SQLDatabaseChain chain...
|
||||
List all the customer first names that start with 'a'
|
||||
SQLQuery:
|
||||
@@ -900,7 +904,7 @@ print("\n" + yaml_example)
|
||||
[('François', 'František', 'Helena', 'Astrid', 'Daan', 'Kara', 'Eduardo', 'Alexandre', 'Fernanda', 'Mark', 'Frank', 'Jack', 'Dan', 'Kathy', 'Heather', 'Frank', 'Richard', 'Patrick', 'Julia', 'Edward', 'Martha', 'Aaron', 'Madalena', 'Hannah', 'Niklas', 'Camille', 'Marc', 'Wyatt', 'Isabelle', 'Ladislav', 'Lucas', 'Johannes', 'Stanisław', 'Joakim', 'Emma', 'Mark', 'Manoj', 'Puja']
|
||||
> Finished chain.
|
||||
*** Query succeeded
|
||||
|
||||
|
||||
answer: '[(''François'', ''František'', ''Helena'', ''Astrid'', ''Daan'', ''Kara'',
|
||||
''Eduardo'', ''Alexandre'', ''Fernanda'', ''Mark'', ''Frank'', ''Jack'', ''Dan'',
|
||||
''Kathy'', ''Heather'', ''Frank'', ''Richard'', ''Patrick'', ''Julia'', ''Edward'',
|
||||
@@ -931,7 +935,7 @@ print("\n" + yaml_example)
|
||||
None\tGermany\t70174\t+49 0711 2842222\tNone\tleonekohler@surfeu.de\t5\n3\tFrançois\t\
|
||||
Tremblay\tNone\t1498 rue Bélanger\tMontréal\tQC\tCanada\tH2G 1A7\t+1 (514) 721-4711\t\
|
||||
None\tftremblay@gmail.com\t3\n*/"
|
||||
|
||||
|
||||
```
|
||||
|
||||
</CodeOutputBlock>
|
||||
@@ -944,20 +948,20 @@ YAML_EXAMPLES = """
|
||||
- input: How many customers are not from Brazil?
|
||||
table_info: |
|
||||
CREATE TABLE "Customer" (
|
||||
"CustomerId" INTEGER NOT NULL,
|
||||
"FirstName" NVARCHAR(40) NOT NULL,
|
||||
"LastName" NVARCHAR(20) NOT NULL,
|
||||
"Company" NVARCHAR(80),
|
||||
"Address" NVARCHAR(70),
|
||||
"City" NVARCHAR(40),
|
||||
"State" NVARCHAR(40),
|
||||
"Country" NVARCHAR(40),
|
||||
"PostalCode" NVARCHAR(10),
|
||||
"Phone" NVARCHAR(24),
|
||||
"Fax" NVARCHAR(24),
|
||||
"Email" NVARCHAR(60) NOT NULL,
|
||||
"SupportRepId" INTEGER,
|
||||
PRIMARY KEY ("CustomerId"),
|
||||
"CustomerId" INTEGER NOT NULL,
|
||||
"FirstName" NVARCHAR(40) NOT NULL,
|
||||
"LastName" NVARCHAR(20) NOT NULL,
|
||||
"Company" NVARCHAR(80),
|
||||
"Address" NVARCHAR(70),
|
||||
"City" NVARCHAR(40),
|
||||
"State" NVARCHAR(40),
|
||||
"Country" NVARCHAR(40),
|
||||
"PostalCode" NVARCHAR(10),
|
||||
"Phone" NVARCHAR(24),
|
||||
"Fax" NVARCHAR(24),
|
||||
"Email" NVARCHAR(60) NOT NULL,
|
||||
"SupportRepId" INTEGER,
|
||||
PRIMARY KEY ("CustomerId"),
|
||||
FOREIGN KEY("SupportRepId") REFERENCES "Employee" ("EmployeeId")
|
||||
)
|
||||
sql_cmd: SELECT COUNT(*) FROM "Customer" WHERE NOT "Country" = "Brazil";
|
||||
@@ -966,8 +970,8 @@ YAML_EXAMPLES = """
|
||||
- input: list all the genres that start with 'r'
|
||||
table_info: |
|
||||
CREATE TABLE "Genre" (
|
||||
"GenreId" INTEGER NOT NULL,
|
||||
"Name" NVARCHAR(120),
|
||||
"GenreId" INTEGER NOT NULL,
|
||||
"Name" NVARCHAR(120),
|
||||
PRIMARY KEY ("GenreId")
|
||||
)
|
||||
|
||||
@@ -980,7 +984,7 @@ YAML_EXAMPLES = """
|
||||
*/
|
||||
sql_cmd: SELECT "Name" FROM "Genre" WHERE "Name" LIKE 'r%';
|
||||
sql_result: "[('Rock',), ('Rock and Roll',), ('Reggae',), ('R&B/Soul',)]"
|
||||
answer: The genres that start with 'r' are Rock, Rock and Roll, Reggae and R&B/Soul.
|
||||
answer: The genres that start with 'r' are Rock, Rock and Roll, Reggae and R&B/Soul.
|
||||
"""
|
||||
```
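
One plausible way to consume these examples (the notebook's exact wiring is not reproduced in this hunk, so the prompt layout below is an assumption): parse the YAML and feed it to a `FewShotPromptTemplate`.

```python
# Hedged sketch: build a few-shot prompt from the YAML_EXAMPLES string above.
import yaml
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

examples = yaml.safe_load(YAML_EXAMPLES)  # list of dicts with the keys below

example_prompt = PromptTemplate(
    input_variables=["input", "table_info", "sql_cmd", "sql_result", "answer"],
    template=(
        "{input}\n{table_info}\nSQLQuery: {sql_cmd}\n"
        "SQLResult: {sql_result}\nAnswer: {answer}"
    ),
)

few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Only use the following tables:\n{table_info}\n",
    suffix="Question: {input}\nSQLQuery:",
    input_variables=["table_info", "input"],
)
```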
|
||||
|
||||
@@ -1046,8 +1050,8 @@ result = local_chain("How many customers are from Brazil?")
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
> Entering new SQLDatabaseChain chain...
|
||||
How many customers are from Brazil?
|
||||
SQLQuery:SELECT count(*) FROM Customer WHERE Country = "Brazil";
|
||||
@@ -1066,8 +1070,8 @@ result = local_chain("How many customers are not from Brazil?")
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
> Entering new SQLDatabaseChain chain...
|
||||
How many customers are not from Brazil?
|
||||
SQLQuery:SELECT count(*) FROM customer WHERE country NOT IN (SELECT country FROM customer WHERE country = 'Brazil')
|
||||
@@ -1086,8 +1090,8 @@ result = local_chain("How many customers are there in total?")
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
> Entering new SQLDatabaseChain chain...
|
||||
How many customers are there in total?
|
||||
SQLQuery:SELECT count(*) FROM Customer;
|
||||
@@ -1,3 +1,3 @@
|
||||
FROM python:latest
|
||||
FROM python:3.11
|
||||
|
||||
RUN pip install langchain
|
||||
|
||||
@@ -8,10 +8,11 @@ set -o xtrace
|
||||
SCRIPT_DIR="$(cd "$(dirname "$0")"; pwd)"
|
||||
cd "${SCRIPT_DIR}"
|
||||
|
||||
mkdir -p _dist/docs_skeleton
|
||||
cp -r {docs_skeleton,snippets} _dist
|
||||
cd _dist/docs_skeleton
|
||||
poetry run nbdoc_build
|
||||
poetry run python generate_api_reference_links.py
|
||||
mkdir -p ../_dist
|
||||
cp -r . ../_dist
|
||||
cd ../_dist
|
||||
poetry run python scripts/model_feat_table.py
|
||||
poetry run nbdoc_build --srcdir docs
|
||||
poetry run python scripts/generate_api_reference_links.py
|
||||
yarn install
|
||||
yarn start
|
||||
|
||||
@@ -42,7 +42,7 @@ If you are using GitHub pages for hosting, this command is a convenient way to b
|
||||
|
||||
### Continuous Integration
|
||||
|
||||
Some common defaults for linting/formatting have been set for you. If you integrate your project with an open source Continuous Integration system (e.g. Travis CI, CircleCI), you may check for issues using the following command.
|
||||
Some common defaults for linting/formatting have been set for you. If you integrate your project with an open-source Continuous Integration system (e.g. Travis CI, CircleCI), you may check for issues using the following command.
|
||||
|
||||
```
|
||||
$ yarn ci
|
||||
@@ -122,8 +122,7 @@ def _merge_module_members(
|
||||
|
||||
|
||||
def _load_package_modules(
|
||||
package_directory: Union[str, Path],
|
||||
submodule: Optional[str] = None
|
||||
package_directory: Union[str, Path], submodule: Optional[str] = None
|
||||
) -> Dict[str, ModuleMembers]:
|
||||
"""Recursively load modules of a package based on the file system.
|
||||
|
||||
@@ -171,7 +170,8 @@ def _load_package_modules(
|
||||
# different way
|
||||
if submodule is not None:
|
||||
module_members = _load_module_members(
|
||||
f"{package_name}.{submodule}.{namespace}", f"{submodule}.{namespace}"
|
||||
f"{package_name}.{submodule}.{namespace}",
|
||||
f"{submodule}.{namespace}",
|
||||
)
|
||||
else:
|
||||
module_members = _load_module_members(
|
||||
@@ -280,18 +280,9 @@ Functions
|
||||
return full_doc
|
||||
|
||||
|
||||
def main() -> None:
|
||||
"""Generate the reference.rst file for each package."""
|
||||
lc_members = _load_package_modules(PKG_DIR)
|
||||
# Put some packages at top level
|
||||
tools = _load_package_modules(PKG_DIR, "tools")
|
||||
lc_members['tools.render'] = tools['render']
|
||||
agents = _load_package_modules(PKG_DIR, "agents")
|
||||
lc_members['agents.output_parsers'] = agents['output_parsers']
|
||||
lc_members['agents.format_scratchpad'] = agents['format_scratchpad']
|
||||
lc_doc = ".. _api_reference:\n\n" + _construct_doc("langchain", lc_members)
|
||||
with open(WRITE_FILE, "w") as f:
|
||||
f.write(lc_doc)
|
||||
def _document_langchain_experimental() -> None:
|
||||
"""Document the langchain_experimental package."""
|
||||
# Generate experimental_api_reference.rst
|
||||
exp_members = _load_package_modules(EXP_DIR)
|
||||
exp_doc = ".. _experimental_api_reference:\n\n" + _construct_doc(
|
||||
"langchain_experimental", exp_members
|
||||
@@ -300,5 +291,36 @@ def main() -> None:
|
||||
f.write(exp_doc)
|
||||
|
||||
|
||||
def _document_langchain_core() -> None:
|
||||
"""Document the main langchain package."""
|
||||
# load top level module members
|
||||
lc_members = _load_package_modules(PKG_DIR)
|
||||
|
||||
# Add additional packages
|
||||
tools = _load_package_modules(PKG_DIR, "tools")
|
||||
agents = _load_package_modules(PKG_DIR, "agents")
|
||||
schema = _load_package_modules(PKG_DIR, "schema")
|
||||
|
||||
lc_members.update(
|
||||
{
|
||||
"agents.output_parsers": agents["output_parsers"],
|
||||
"agents.format_scratchpad": agents["format_scratchpad"],
|
||||
"tools.render": tools["render"],
|
||||
"schema.runnable": schema["runnable"],
|
||||
}
|
||||
)
|
||||
|
||||
lc_doc = ".. _api_reference:\n\n" + _construct_doc("langchain", lc_members)
|
||||
|
||||
with open(WRITE_FILE, "w") as f:
|
||||
f.write(lc_doc)
|
||||
|
||||
|
||||
def main() -> None:
|
||||
"""Generate the reference.rst file for each package."""
|
||||
_document_langchain_core()
|
||||
_document_langchain_experimental()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
|
||||
|
[Binary image diffs: 14 image files changed; each image's width, height, and size (542 B to 3.5 MiB) are unchanged.]
465
docs/docs/additional_resources/dependents.mdx
Normal file
@@ -0,0 +1,465 @@
|
||||
# Dependents
|
||||
|
||||
Dependents stats for `langchain-ai/langchain`
|
||||
|
||||
[](https://github.com/langchain-ai/langchain/network/dependents)
|
||||
[&message=451&color=informational&logo=slickpic)](https://github.com/langchain-ai/langchain/network/dependents)
|
||||
[&message=30083&color=informational&logo=slickpic)](https://github.com/langchain-ai/langchain/network/dependents)
|
||||
[&message=37822&color=informational&logo=slickpic)](https://github.com/langchain-ai/langchain/network/dependents)
|
||||
|
||||
|
||||
[update: `2023-10-06`; only dependent repositories with Stars > 100]
|
||||
|
||||
|
||||
| Repository | Stars |
|
||||
| :-------- | -----: |
|
||||
|[openai/openai-cookbook](https://github.com/openai/openai-cookbook) | 49006 |
|
||||
|[AntonOsika/gpt-engineer](https://github.com/AntonOsika/gpt-engineer) | 44368 |
|
||||
|[imartinez/privateGPT](https://github.com/imartinez/privateGPT) | 38300 |
|
||||
|[LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant) | 35327 |
|
||||
|[hpcaitech/ColossalAI](https://github.com/hpcaitech/ColossalAI) | 34799 |
|
||||
|[microsoft/TaskMatrix](https://github.com/microsoft/TaskMatrix) | 34161 |
|
||||
|[streamlit/streamlit](https://github.com/streamlit/streamlit) | 27697 |
|
||||
|[geekan/MetaGPT](https://github.com/geekan/MetaGPT) | 27302 |
|
||||
|[reworkd/AgentGPT](https://github.com/reworkd/AgentGPT) | 26805 |
|
||||
|[OpenBB-finance/OpenBBTerminal](https://github.com/OpenBB-finance/OpenBBTerminal) | 24473 |
|
||||
|[StanGirard/quivr](https://github.com/StanGirard/quivr) | 23323 |
|
||||
|[run-llama/llama_index](https://github.com/run-llama/llama_index) | 22151 |
|
||||
|[openai/chatgpt-retrieval-plugin](https://github.com/openai/chatgpt-retrieval-plugin) | 19741 |
|
||||
|[mindsdb/mindsdb](https://github.com/mindsdb/mindsdb) | 18062 |
|
||||
|[PromtEngineer/localGPT](https://github.com/PromtEngineer/localGPT) | 16413 |
|
||||
|[chatchat-space/Langchain-Chatchat](https://github.com/chatchat-space/Langchain-Chatchat) | 16300 |
|
||||
|[cube-js/cube](https://github.com/cube-js/cube) | 16261 |
|
||||
|[mlflow/mlflow](https://github.com/mlflow/mlflow) | 15487 |
|
||||
|[logspace-ai/langflow](https://github.com/logspace-ai/langflow) | 12599 |
|
||||
|[GaiZhenbiao/ChuanhuChatGPT](https://github.com/GaiZhenbiao/ChuanhuChatGPT) | 12501 |
|
||||
|[openai/evals](https://github.com/openai/evals) | 12056 |
|
||||
|[airbytehq/airbyte](https://github.com/airbytehq/airbyte) | 11919 |
|
||||
|[go-skynet/LocalAI](https://github.com/go-skynet/LocalAI) | 11767 |
|
||||
|[databrickslabs/dolly](https://github.com/databrickslabs/dolly) | 10609 |
|
||||
|[AIGC-Audio/AudioGPT](https://github.com/AIGC-Audio/AudioGPT) | 9240 |
|
||||
|[aws/amazon-sagemaker-examples](https://github.com/aws/amazon-sagemaker-examples) | 8892 |
|
||||
|[langgenius/dify](https://github.com/langgenius/dify) | 8764 |
|
||||
|[gventuri/pandas-ai](https://github.com/gventuri/pandas-ai) | 8687 |
|
||||
|[jmorganca/ollama](https://github.com/jmorganca/ollama) | 8628 |
|
||||
|[langchain-ai/langchainjs](https://github.com/langchain-ai/langchainjs) | 8392 |
|
||||
|[h2oai/h2ogpt](https://github.com/h2oai/h2ogpt) | 7953 |
|
||||
|[arc53/DocsGPT](https://github.com/arc53/DocsGPT) | 7730 |
|
||||
|[PipedreamHQ/pipedream](https://github.com/PipedreamHQ/pipedream) | 7261 |
|
||||
|[joshpxyne/gpt-migrate](https://github.com/joshpxyne/gpt-migrate) | 6349 |
|
||||
|[bentoml/OpenLLM](https://github.com/bentoml/OpenLLM) | 6213 |
|
||||
|[mage-ai/mage-ai](https://github.com/mage-ai/mage-ai) | 5600 |
|
||||
|[zauberzeug/nicegui](https://github.com/zauberzeug/nicegui) | 5499 |
|
||||
|[wenda-LLM/wenda](https://github.com/wenda-LLM/wenda) | 5497 |
|
||||
|[sweepai/sweep](https://github.com/sweepai/sweep) | 5489 |
|
||||
|[embedchain/embedchain](https://github.com/embedchain/embedchain) | 5428 |
|
||||
|[zilliztech/GPTCache](https://github.com/zilliztech/GPTCache) | 5311 |
|
||||
|[Shaunwei/RealChar](https://github.com/Shaunwei/RealChar) | 5264 |
|
||||
|[GreyDGL/PentestGPT](https://github.com/GreyDGL/PentestGPT) | 5146 |
|
||||
|[gkamradt/langchain-tutorials](https://github.com/gkamradt/langchain-tutorials) | 5134 |
|
||||
|[serge-chat/serge](https://github.com/serge-chat/serge) | 5009 |
|
||||
|[assafelovic/gpt-researcher](https://github.com/assafelovic/gpt-researcher) | 4836 |
|
||||
|[openchatai/OpenChat](https://github.com/openchatai/OpenChat) | 4697 |
|
||||
|[intel-analytics/BigDL](https://github.com/intel-analytics/BigDL) | 4412 |
|
||||
|[continuedev/continue](https://github.com/continuedev/continue) | 4324 |
|
||||
|[postgresml/postgresml](https://github.com/postgresml/postgresml) | 4267 |
|
||||
|[madawei2699/myGPTReader](https://github.com/madawei2699/myGPTReader) | 4214 |
|
||||
|[MineDojo/Voyager](https://github.com/MineDojo/Voyager) | 4204 |
|
||||
|[danswer-ai/danswer](https://github.com/danswer-ai/danswer) | 3973 |
|
||||
|[RayVentura/ShortGPT](https://github.com/RayVentura/ShortGPT) | 3922 |
|
||||
|[Azure/azure-sdk-for-python](https://github.com/Azure/azure-sdk-for-python) | 3849 |
|
||||
|[khoj-ai/khoj](https://github.com/khoj-ai/khoj) | 3817 |
|
||||
|[langchain-ai/chat-langchain](https://github.com/langchain-ai/chat-langchain) | 3742 |
|
||||
|[Azure-Samples/azure-search-openai-demo](https://github.com/Azure-Samples/azure-search-openai-demo) | 3731 |
|
||||
|[marqo-ai/marqo](https://github.com/marqo-ai/marqo) | 3627 |
|
||||
|[kyegomez/tree-of-thoughts](https://github.com/kyegomez/tree-of-thoughts) | 3553 |
|
||||
|[llm-workflow-engine/llm-workflow-engine](https://github.com/llm-workflow-engine/llm-workflow-engine) | 3483 |
|
||||
|[PrefectHQ/marvin](https://github.com/PrefectHQ/marvin) | 3460 |
|
||||
|[aiwaves-cn/agents](https://github.com/aiwaves-cn/agents) | 3413 |
|
||||
|[OpenBMB/ToolBench](https://github.com/OpenBMB/ToolBench) | 3388 |
|
||||
|[shroominic/codeinterpreter-api](https://github.com/shroominic/codeinterpreter-api) | 3218 |
|
||||
|[whitead/paper-qa](https://github.com/whitead/paper-qa) | 3085 |
|
||||
|[project-baize/baize-chatbot](https://github.com/project-baize/baize-chatbot) | 3039 |
|
||||
|[OpenGVLab/InternGPT](https://github.com/OpenGVLab/InternGPT) | 2911 |
|
||||
|[ParisNeo/lollms-webui](https://github.com/ParisNeo/lollms-webui) | 2907 |
|
||||
|[Unstructured-IO/unstructured](https://github.com/Unstructured-IO/unstructured) | 2874 |
|
||||
|[openchatai/OpenCopilot](https://github.com/openchatai/OpenCopilot) | 2759 |
|
||||
|[OpenBMB/BMTools](https://github.com/OpenBMB/BMTools) | 2657 |
|
||||
|[homanp/superagent](https://github.com/homanp/superagent) | 2624 |
|
||||
|[SamurAIGPT/EmbedAI](https://github.com/SamurAIGPT/EmbedAI) | 2575 |
|
||||
|[GerevAI/gerev](https://github.com/GerevAI/gerev) | 2488 |
|
||||
|[microsoft/promptflow](https://github.com/microsoft/promptflow) | 2475 |
|
||||
|[OpenBMB/AgentVerse](https://github.com/OpenBMB/AgentVerse) | 2445 |
|
||||
|[Mintplex-Labs/anything-llm](https://github.com/Mintplex-Labs/anything-llm) | 2434 |
|
||||
|[emptycrown/llama-hub](https://github.com/emptycrown/llama-hub) | 2432 |
|
||||
|[NVIDIA/NeMo-Guardrails](https://github.com/NVIDIA/NeMo-Guardrails) | 2327 |
|
||||
|[ShreyaR/guardrails](https://github.com/ShreyaR/guardrails) | 2307 |
|
||||
|[thomas-yanxin/LangChain-ChatGLM-Webui](https://github.com/thomas-yanxin/LangChain-ChatGLM-Webui) | 2305 |
|
||||
|[yanqiangmiffy/Chinese-LangChain](https://github.com/yanqiangmiffy/Chinese-LangChain) | 2291 |
|
||||
|[keephq/keep](https://github.com/keephq/keep) | 2252 |
|
||||
|[OpenGVLab/Ask-Anything](https://github.com/OpenGVLab/Ask-Anything) | 2194 |
|
||||
|[IntelligenzaArtificiale/Free-Auto-GPT](https://github.com/IntelligenzaArtificiale/Free-Auto-GPT) | 2169 |
|
||||
|[Farama-Foundation/PettingZoo](https://github.com/Farama-Foundation/PettingZoo) | 2031 |
|
||||
|[YiVal/YiVal](https://github.com/YiVal/YiVal) | 2014 |
|
||||
|[hwchase17/notion-qa](https://github.com/hwchase17/notion-qa) | 2014 |
|
||||
|[jupyterlab/jupyter-ai](https://github.com/jupyterlab/jupyter-ai) | 1977 |
|
||||
|[paulpierre/RasaGPT](https://github.com/paulpierre/RasaGPT) | 1887 |
|
||||
|[dot-agent/dotagent-WIP](https://github.com/dot-agent/dotagent-WIP) | 1812 |
|
||||
|[hegelai/prompttools](https://github.com/hegelai/prompttools) | 1775 |
|
||||
|[vocodedev/vocode-python](https://github.com/vocodedev/vocode-python) | 1734 |
|
||||
|[Vonng/pigsty](https://github.com/Vonng/pigsty) | 1693 |
|
||||
|[psychic-api/psychic](https://github.com/psychic-api/psychic) | 1597 |
|
||||
|[avinashkranjan/Amazing-Python-Scripts](https://github.com/avinashkranjan/Amazing-Python-Scripts) | 1546 |
|
||||
|[pinterest/querybook](https://github.com/pinterest/querybook) | 1539 |
|
||||
|[Forethought-Technologies/AutoChain](https://github.com/Forethought-Technologies/AutoChain) | 1531 |
|
||||
|[Kav-K/GPTDiscord](https://github.com/Kav-K/GPTDiscord) | 1503 |
|
||||
|[jina-ai/langchain-serve](https://github.com/jina-ai/langchain-serve) | 1487 |
|
||||
|[noahshinn024/reflexion](https://github.com/noahshinn024/reflexion) | 1481 |
|
||||
|[jina-ai/dev-gpt](https://github.com/jina-ai/dev-gpt) | 1436 |
|
||||
|[ttengwang/Caption-Anything](https://github.com/ttengwang/Caption-Anything) | 1425 |
|
||||
|[milvus-io/bootcamp](https://github.com/milvus-io/bootcamp) | 1420 |
|
||||
|[agiresearch/OpenAGI](https://github.com/agiresearch/OpenAGI) | 1401 |
|
||||
|[greshake/llm-security](https://github.com/greshake/llm-security) | 1381 |
|
||||
|[jina-ai/thinkgpt](https://github.com/jina-ai/thinkgpt) | 1366 |
|
||||
|[lunasec-io/lunasec](https://github.com/lunasec-io/lunasec) | 1352 |
|
||||
|[101dotxyz/GPTeam](https://github.com/101dotxyz/GPTeam) | 1339 |
|
||||
|[refuel-ai/autolabel](https://github.com/refuel-ai/autolabel) | 1320 |
|
||||
|[melih-unsal/DemoGPT](https://github.com/melih-unsal/DemoGPT) | 1320 |
|
||||
|[mmz-001/knowledge_gpt](https://github.com/mmz-001/knowledge_gpt) | 1320 |
|
||||
|[richardyc/Chrome-GPT](https://github.com/richardyc/Chrome-GPT) | 1315 |
|
||||
|[run-llama/sec-insights](https://github.com/run-llama/sec-insights) | 1312 |
|
||||
|[Azure/azureml-examples](https://github.com/Azure/azureml-examples) | 1305 |
|
||||
|[cofactoryai/textbase](https://github.com/cofactoryai/textbase) | 1286 |
|
||||
|[dataelement/bisheng](https://github.com/dataelement/bisheng) | 1273 |
|
||||
|[eyurtsev/kor](https://github.com/eyurtsev/kor) | 1263 |
|
||||
|[pluralsh/plural](https://github.com/pluralsh/plural) | 1188 |
|
||||
|[FlagOpen/FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding) | 1184 |
|
||||
|[juncongmoo/chatllama](https://github.com/juncongmoo/chatllama) | 1144 |
|
||||
|[poe-platform/server-bot-quick-start](https://github.com/poe-platform/server-bot-quick-start) | 1139 |
|
||||
|[visual-openllm/visual-openllm](https://github.com/visual-openllm/visual-openllm) | 1137 |
|
||||
|[griptape-ai/griptape](https://github.com/griptape-ai/griptape) | 1124 |
|
||||
|[microsoft/X-Decoder](https://github.com/microsoft/X-Decoder) | 1119 |
|
||||
|[ThousandBirdsInc/chidori](https://github.com/ThousandBirdsInc/chidori) | 1116 |
|
||||
|[filip-michalsky/SalesGPT](https://github.com/filip-michalsky/SalesGPT) | 1112 |
|
||||
|[psychic-api/rag-stack](https://github.com/psychic-api/rag-stack) | 1110 |
|
||||
|[irgolic/AutoPR](https://github.com/irgolic/AutoPR) | 1100 |
|
||||
|[promptfoo/promptfoo](https://github.com/promptfoo/promptfoo) | 1099 |
|
||||
|[nod-ai/SHARK](https://github.com/nod-ai/SHARK) | 1062 |
|
||||
|[SamurAIGPT/Camel-AutoGPT](https://github.com/SamurAIGPT/Camel-AutoGPT) | 1036 |
|
||||
|[Farama-Foundation/chatarena](https://github.com/Farama-Foundation/chatarena) | 1020 |
|
||||
|[peterw/Chat-with-Github-Repo](https://github.com/peterw/Chat-with-Github-Repo) | 993 |
|
||||
|[jiran214/GPT-vup](https://github.com/jiran214/GPT-vup) | 967 |
|
||||
|[alejandro-ao/ask-multiple-pdfs](https://github.com/alejandro-ao/ask-multiple-pdfs) | 958 |
|
||||
|[run-llama/llama-lab](https://github.com/run-llama/llama-lab) | 953 |
|
||||
|[LC1332/Chat-Haruhi-Suzumiya](https://github.com/LC1332/Chat-Haruhi-Suzumiya) | 950 |
|
||||
|[rlancemartin/auto-evaluator](https://github.com/rlancemartin/auto-evaluator) | 927 |
|
||||
|[cheshire-cat-ai/core](https://github.com/cheshire-cat-ai/core) | 902 |
|
||||
|[Anil-matcha/ChatPDF](https://github.com/Anil-matcha/ChatPDF) | 894 |
|
||||
|[cirediatpl/FigmaChain](https://github.com/cirediatpl/FigmaChain) | 881 |
|
||||
|[seanpixel/Teenage-AGI](https://github.com/seanpixel/Teenage-AGI) | 876 |
|
||||
|[xusenlinzy/api-for-open-llm](https://github.com/xusenlinzy/api-for-open-llm) | 865 |
|
||||
|[ricklamers/shell-ai](https://github.com/ricklamers/shell-ai) | 864 |
|
||||
|[codeacme17/examor](https://github.com/codeacme17/examor) | 856 |
|
||||
|[corca-ai/EVAL](https://github.com/corca-ai/EVAL) | 836 |
|
||||
|[microsoft/Llama-2-Onnx](https://github.com/microsoft/Llama-2-Onnx) | 835 |
|
||||
|[explodinggradients/ragas](https://github.com/explodinggradients/ragas) | 833 |
|
||||
|[ajndkr/lanarky](https://github.com/ajndkr/lanarky) | 817 |
|
||||
|[kennethleungty/Llama-2-Open-Source-LLM-CPU-Inference](https://github.com/kennethleungty/Llama-2-Open-Source-LLM-CPU-Inference) | 814 |
|
||||
|[ray-project/llm-applications](https://github.com/ray-project/llm-applications) | 804 |
|
||||
|[hwchase17/chat-your-data](https://github.com/hwchase17/chat-your-data) | 801 |
|
||||
|[LambdaLabsML/examples](https://github.com/LambdaLabsML/examples) | 759 |
|
||||
|[kreneskyp/ix](https://github.com/kreneskyp/ix) | 758 |
|
||||
|[pyspark-ai/pyspark-ai](https://github.com/pyspark-ai/pyspark-ai) | 750 |
|
||||
|[billxbf/ReWOO](https://github.com/billxbf/ReWOO) | 746 |
|
||||
|[e-johnstonn/BriefGPT](https://github.com/e-johnstonn/BriefGPT) | 738 |
|
||||
|[akshata29/entaoai](https://github.com/akshata29/entaoai) | 733 |
|
||||
|[getmetal/motorhead](https://github.com/getmetal/motorhead) | 717 |
|
||||
|[ruoccofabrizio/azure-open-ai-embeddings-qna](https://github.com/ruoccofabrizio/azure-open-ai-embeddings-qna) | 712 |
|
||||
|[msoedov/langcorn](https://github.com/msoedov/langcorn) | 698 |
|
||||
|[Dataherald/dataherald](https://github.com/Dataherald/dataherald) | 684 |
|
||||
|[jondurbin/airoboros](https://github.com/jondurbin/airoboros) | 657 |
|
||||
|[Ikaros-521/AI-Vtuber](https://github.com/Ikaros-521/AI-Vtuber) | 651 |
|
||||
|[whyiyhw/chatgpt-wechat](https://github.com/whyiyhw/chatgpt-wechat) | 644 |
|
||||
|[langchain-ai/streamlit-agent](https://github.com/langchain-ai/streamlit-agent) | 637 |
|
||||
|[SamurAIGPT/ChatGPT-Developer-Plugins](https://github.com/SamurAIGPT/ChatGPT-Developer-Plugins) | 637 |
|
||||
|[OpenGenerativeAI/GenossGPT](https://github.com/OpenGenerativeAI/GenossGPT) | 632 |
|
||||
|[AILab-CVC/GPT4Tools](https://github.com/AILab-CVC/GPT4Tools) | 629 |
|
||||
|[langchain-ai/auto-evaluator](https://github.com/langchain-ai/auto-evaluator) | 614 |
|
||||
|[explosion/spacy-llm](https://github.com/explosion/spacy-llm) | 613 |
|
||||
|[alexanderatallah/window.ai](https://github.com/alexanderatallah/window.ai) | 607 |
|
||||
|[MiuLab/Taiwan-LLaMa](https://github.com/MiuLab/Taiwan-LLaMa) | 601 |
|
||||
|[microsoft/PodcastCopilot](https://github.com/microsoft/PodcastCopilot) | 600 |
|
||||
|[Dicklesworthstone/swiss_army_llama](https://github.com/Dicklesworthstone/swiss_army_llama) | 596 |
|
||||
|[NoDataFound/hackGPT](https://github.com/NoDataFound/hackGPT) | 596 |
|
||||
|[namuan/dr-doc-search](https://github.com/namuan/dr-doc-search) | 593 |
|
||||
|[amosjyng/langchain-visualizer](https://github.com/amosjyng/langchain-visualizer) | 582 |
|
||||
|[microsoft/sample-app-aoai-chatGPT](https://github.com/microsoft/sample-app-aoai-chatGPT) | 581 |
|
||||
|[yvann-hub/Robby-chatbot](https://github.com/yvann-hub/Robby-chatbot) | 581 |
|
||||
|[yeagerai/yeagerai-agent](https://github.com/yeagerai/yeagerai-agent) | 547 |
|
||||
|[tgscan-dev/tgscan](https://github.com/tgscan-dev/tgscan) | 533 |
|
||||
|[Azure-Samples/openai](https://github.com/Azure-Samples/openai) | 531 |
|
||||
|[plastic-labs/tutor-gpt](https://github.com/plastic-labs/tutor-gpt) | 531 |
|
||||
|[xuwenhao/geektime-ai-course](https://github.com/xuwenhao/geektime-ai-course) | 526 |
|
||||
|[michaelthwan/searchGPT](https://github.com/michaelthwan/searchGPT) | 526 |
|
||||
|[jonra1993/fastapi-alembic-sqlmodel-async](https://github.com/jonra1993/fastapi-alembic-sqlmodel-async) | 522 |
|
||||
|[jina-ai/agentchain](https://github.com/jina-ai/agentchain) | 519 |
|
||||
|[mckaywrigley/repo-chat](https://github.com/mckaywrigley/repo-chat) | 518 |
|
||||
|[modelscope/modelscope-agent](https://github.com/modelscope/modelscope-agent) | 512 |
|
||||
|[daveebbelaar/langchain-experiments](https://github.com/daveebbelaar/langchain-experiments) | 504 |
|
||||
|[freddyaboulton/gradio-tools](https://github.com/freddyaboulton/gradio-tools) | 497 |
|
||||
|[sidhq/Multi-GPT](https://github.com/sidhq/Multi-GPT) | 494 |
|
||||
|[continuum-llms/chatgpt-memory](https://github.com/continuum-llms/chatgpt-memory) | 489 |
|
||||
|[langchain-ai/langchain-aiplugin](https://github.com/langchain-ai/langchain-aiplugin) | 487 |
|
||||
|[mpaepper/content-chatbot](https://github.com/mpaepper/content-chatbot) | 483 |
|
||||
|[steamship-core/steamship-langchain](https://github.com/steamship-core/steamship-langchain) | 481 |
|
||||
|[alejandro-ao/langchain-ask-pdf](https://github.com/alejandro-ao/langchain-ask-pdf) | 474 |
|[truera/trulens](https://github.com/truera/trulens) | 464 |
|[marella/chatdocs](https://github.com/marella/chatdocs) | 459 |
|[opencopilotdev/opencopilot](https://github.com/opencopilotdev/opencopilot) | 453 |
|[poe-platform/poe-protocol](https://github.com/poe-platform/poe-protocol) | 444 |
|[DataDog/dd-trace-py](https://github.com/DataDog/dd-trace-py) | 441 |
|[logan-markewich/llama_index_starter_pack](https://github.com/logan-markewich/llama_index_starter_pack) | 441 |
|[opentensor/bittensor](https://github.com/opentensor/bittensor) | 433 |
|[DjangoPeng/openai-quickstart](https://github.com/DjangoPeng/openai-quickstart) | 425 |
|[CarperAI/OpenELM](https://github.com/CarperAI/OpenELM) | 424 |
|[daodao97/chatdoc](https://github.com/daodao97/chatdoc) | 423 |
|[showlab/VLog](https://github.com/showlab/VLog) | 411 |
|[Anil-matcha/Chatbase](https://github.com/Anil-matcha/Chatbase) | 402 |
|[yakami129/VirtualWife](https://github.com/yakami129/VirtualWife) | 399 |
|[wandb/weave](https://github.com/wandb/weave) | 399 |
|[mtenenholtz/chat-twitter](https://github.com/mtenenholtz/chat-twitter) | 398 |
|[LinkSoul-AI/AutoAgents](https://github.com/LinkSoul-AI/AutoAgents) | 397 |
|[Agenta-AI/agenta](https://github.com/Agenta-AI/agenta) | 389 |
|[huchenxucs/ChatDB](https://github.com/huchenxucs/ChatDB) | 386 |
|[mallorbc/Finetune_LLMs](https://github.com/mallorbc/Finetune_LLMs) | 379 |
|[junruxiong/IncarnaMind](https://github.com/junruxiong/IncarnaMind) | 372 |
|[MagnivOrg/prompt-layer-library](https://github.com/MagnivOrg/prompt-layer-library) | 368 |
|[mosaicml/examples](https://github.com/mosaicml/examples) | 366 |
|[rsaryev/talk-codebase](https://github.com/rsaryev/talk-codebase) | 364 |
|[morpheuslord/GPT_Vuln-analyzer](https://github.com/morpheuslord/GPT_Vuln-analyzer) | 362 |
|[monarch-initiative/ontogpt](https://github.com/monarch-initiative/ontogpt) | 362 |
|[JayZeeDesign/researcher-gpt](https://github.com/JayZeeDesign/researcher-gpt) | 361 |
|[personoids/personoids-lite](https://github.com/personoids/personoids-lite) | 361 |
|[intel/intel-extension-for-transformers](https://github.com/intel/intel-extension-for-transformers) | 357 |
|[jerlendds/osintbuddy](https://github.com/jerlendds/osintbuddy) | 357 |
|[steamship-packages/langchain-production-starter](https://github.com/steamship-packages/langchain-production-starter) | 356 |
|[onlyphantom/llm-python](https://github.com/onlyphantom/llm-python) | 354 |
|[Azure-Samples/miyagi](https://github.com/Azure-Samples/miyagi) | 340 |
|[mrwadams/attackgen](https://github.com/mrwadams/attackgen) | 338 |
|[rgomezcasas/dotfiles](https://github.com/rgomezcasas/dotfiles) | 337 |
|[eosphoros-ai/DB-GPT-Hub](https://github.com/eosphoros-ai/DB-GPT-Hub) | 336 |
|[andylokandy/gpt-4-search](https://github.com/andylokandy/gpt-4-search) | 335 |
|[NimbleBoxAI/ChainFury](https://github.com/NimbleBoxAI/ChainFury) | 330 |
|[momegas/megabots](https://github.com/momegas/megabots) | 329 |
|[Nuggt-dev/Nuggt](https://github.com/Nuggt-dev/Nuggt) | 315 |
|[itamargol/openai](https://github.com/itamargol/openai) | 315 |
|[BlackHC/llm-strategy](https://github.com/BlackHC/llm-strategy) | 315 |
|[aws-samples/aws-genai-llm-chatbot](https://github.com/aws-samples/aws-genai-llm-chatbot) | 312 |
|[Cheems-Seminar/grounded-segment-any-parts](https://github.com/Cheems-Seminar/grounded-segment-any-parts) | 312 |
|[preset-io/promptimize](https://github.com/preset-io/promptimize) | 311 |
|[dgarnitz/vectorflow](https://github.com/dgarnitz/vectorflow) | 309 |
|[langchain-ai/langsmith-cookbook](https://github.com/langchain-ai/langsmith-cookbook) | 309 |
|[CambioML/pykoi](https://github.com/CambioML/pykoi) | 309 |
|[wandb/edu](https://github.com/wandb/edu) | 301 |
|[XzaiCloud/luna-ai](https://github.com/XzaiCloud/luna-ai) | 300 |
|[liangwq/Chatglm_lora_multi-gpu](https://github.com/liangwq/Chatglm_lora_multi-gpu) | 294 |
|[Haste171/langchain-chatbot](https://github.com/Haste171/langchain-chatbot) | 291 |
|[sullivan-sean/chat-langchainjs](https://github.com/sullivan-sean/chat-langchainjs) | 286 |
|[sugarforever/LangChain-Tutorials](https://github.com/sugarforever/LangChain-Tutorials) | 285 |
|[facebookresearch/personal-timeline](https://github.com/facebookresearch/personal-timeline) | 283 |
|[hnawaz007/pythondataanalysis](https://github.com/hnawaz007/pythondataanalysis) | 282 |
|[yuanjie-ai/ChatLLM](https://github.com/yuanjie-ai/ChatLLM) | 280 |
|[MetaGLM/FinGLM](https://github.com/MetaGLM/FinGLM) | 279 |
|[JohnSnowLabs/langtest](https://github.com/JohnSnowLabs/langtest) | 277 |
|[Em1tSan/NeuroGPT](https://github.com/Em1tSan/NeuroGPT) | 274 |
|[Safiullah-Rahu/CSV-AI](https://github.com/Safiullah-Rahu/CSV-AI) | 274 |
|[conceptofmind/toolformer](https://github.com/conceptofmind/toolformer) | 274 |
|[airobotlab/KoChatGPT](https://github.com/airobotlab/KoChatGPT) | 266 |
|[gia-guar/JARVIS-ChatGPT](https://github.com/gia-guar/JARVIS-ChatGPT) | 263 |
|[Mintplex-Labs/vector-admin](https://github.com/Mintplex-Labs/vector-admin) | 262 |
|[artitw/text2text](https://github.com/artitw/text2text) | 262 |
|[kaarthik108/snowChat](https://github.com/kaarthik108/snowChat) | 261 |
|[paolorechia/learn-langchain](https://github.com/paolorechia/learn-langchain) | 260 |
|[shamspias/customizable-gpt-chatbot](https://github.com/shamspias/customizable-gpt-chatbot) | 260 |
|[ur-whitelab/exmol](https://github.com/ur-whitelab/exmol) | 258 |
|[hwchase17/chroma-langchain](https://github.com/hwchase17/chroma-langchain) | 257 |
|[bborn/howdoi.ai](https://github.com/bborn/howdoi.ai) | 255 |
|[ur-whitelab/chemcrow-public](https://github.com/ur-whitelab/chemcrow-public) | 253 |
|[pablomarin/GPT-Azure-Search-Engine](https://github.com/pablomarin/GPT-Azure-Search-Engine) | 251 |
|[gustavz/DataChad](https://github.com/gustavz/DataChad) | 249 |
|[radi-cho/datasetGPT](https://github.com/radi-cho/datasetGPT) | 249 |
|[ennucore/clippinator](https://github.com/ennucore/clippinator) | 247 |
|[recalign/RecAlign](https://github.com/recalign/RecAlign) | 244 |
|[lilacai/lilac](https://github.com/lilacai/lilac) | 243 |
|[kaleido-lab/dolphin](https://github.com/kaleido-lab/dolphin) | 236 |
|[iusztinpaul/hands-on-llms](https://github.com/iusztinpaul/hands-on-llms) | 233 |
|[PradipNichite/Youtube-Tutorials](https://github.com/PradipNichite/Youtube-Tutorials) | 231 |
|[shaman-ai/agent-actors](https://github.com/shaman-ai/agent-actors) | 231 |
|[hwchase17/langchain-streamlit-template](https://github.com/hwchase17/langchain-streamlit-template) | 231 |
|[yym68686/ChatGPT-Telegram-Bot](https://github.com/yym68686/ChatGPT-Telegram-Bot) | 226 |
|[grumpyp/aixplora](https://github.com/grumpyp/aixplora) | 222 |
|[su77ungr/CASALIOY](https://github.com/su77ungr/CASALIOY) | 222 |
|[alvarosevilla95/autolang](https://github.com/alvarosevilla95/autolang) | 222 |
|[arthur-ai/bench](https://github.com/arthur-ai/bench) | 220 |
|[miaoshouai/miaoshouai-assistant](https://github.com/miaoshouai/miaoshouai-assistant) | 219 |
|[AutoPackAI/beebot](https://github.com/AutoPackAI/beebot) | 217 |
|[edreisMD/plugnplai](https://github.com/edreisMD/plugnplai) | 216 |
|[nicknochnack/LangchainDocuments](https://github.com/nicknochnack/LangchainDocuments) | 214 |
|[AkshitIreddy/Interactive-LLM-Powered-NPCs](https://github.com/AkshitIreddy/Interactive-LLM-Powered-NPCs) | 213 |
|[SpecterOps/Nemesis](https://github.com/SpecterOps/Nemesis) | 210 |
|[kyegomez/swarms](https://github.com/kyegomez/swarms) | 210 |
|[wpydcr/LLM-Kit](https://github.com/wpydcr/LLM-Kit) | 208 |
|[orgexyz/BlockAGI](https://github.com/orgexyz/BlockAGI) | 204 |
|[Chainlit/cookbook](https://github.com/Chainlit/cookbook) | 202 |
|[WongSaang/chatgpt-ui-server](https://github.com/WongSaang/chatgpt-ui-server) | 202 |
|[jbrukh/gpt-jargon](https://github.com/jbrukh/gpt-jargon) | 202 |
|[handrew/browserpilot](https://github.com/handrew/browserpilot) | 202 |
|[langchain-ai/web-explorer](https://github.com/langchain-ai/web-explorer) | 200 |
|[plchld/InsightFlow](https://github.com/plchld/InsightFlow) | 200 |
|[alphasecio/langchain-examples](https://github.com/alphasecio/langchain-examples) | 199 |
|[Gentopia-AI/Gentopia](https://github.com/Gentopia-AI/Gentopia) | 198 |
|[SamPink/dev-gpt](https://github.com/SamPink/dev-gpt) | 196 |
|[yasyf/compress-gpt](https://github.com/yasyf/compress-gpt) | 196 |
|[benthecoder/ClassGPT](https://github.com/benthecoder/ClassGPT) | 195 |
|[voxel51/voxelgpt](https://github.com/voxel51/voxelgpt) | 193 |
|[CL-lau/SQL-GPT](https://github.com/CL-lau/SQL-GPT) | 192 |
|[blob42/Instrukt](https://github.com/blob42/Instrukt) | 191 |
|[streamlit/llm-examples](https://github.com/streamlit/llm-examples) | 191 |
|[stepanogil/autonomous-hr-chatbot](https://github.com/stepanogil/autonomous-hr-chatbot) | 190 |
|[TsinghuaDatabaseGroup/DB-GPT](https://github.com/TsinghuaDatabaseGroup/DB-GPT) | 189 |
|[PJLab-ADG/DriveLikeAHuman](https://github.com/PJLab-ADG/DriveLikeAHuman) | 187 |
|[Azure-Samples/azure-search-power-skills](https://github.com/Azure-Samples/azure-search-power-skills) | 187 |
|[microsoft/azure-openai-in-a-day-workshop](https://github.com/microsoft/azure-openai-in-a-day-workshop) | 187 |
|[ju-bezdek/langchain-decorators](https://github.com/ju-bezdek/langchain-decorators) | 182 |
|[hardbyte/qabot](https://github.com/hardbyte/qabot) | 181 |
|[hongbo-miao/hongbomiao.com](https://github.com/hongbo-miao/hongbomiao.com) | 180 |
|[QwenLM/Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) | 179 |
|[showlab/UniVTG](https://github.com/showlab/UniVTG) | 179 |
|[Azure-Samples/jp-azureopenai-samples](https://github.com/Azure-Samples/jp-azureopenai-samples) | 176 |
|[afaqueumer/DocQA](https://github.com/afaqueumer/DocQA) | 174 |
|[ethanyanjiali/minChatGPT](https://github.com/ethanyanjiali/minChatGPT) | 174 |
|[shauryr/S2QA](https://github.com/shauryr/S2QA) | 174 |
|[RoboCoachTechnologies/GPT-Synthesizer](https://github.com/RoboCoachTechnologies/GPT-Synthesizer) | 173 |
|[chakkaradeep/pyCodeAGI](https://github.com/chakkaradeep/pyCodeAGI) | 172 |
|[vaibkumr/prompt-optimizer](https://github.com/vaibkumr/prompt-optimizer) | 171 |
|[ccurme/yolopandas](https://github.com/ccurme/yolopandas) | 170 |
|[anarchy-ai/LLM-VM](https://github.com/anarchy-ai/LLM-VM) | 169 |
|[ray-project/langchain-ray](https://github.com/ray-project/langchain-ray) | 169 |
|[fengyuli-dev/multimedia-gpt](https://github.com/fengyuli-dev/multimedia-gpt) | 169 |
|[ibiscp/LLM-IMDB](https://github.com/ibiscp/LLM-IMDB) | 168 |
|[mayooear/private-chatbot-mpt30b-langchain](https://github.com/mayooear/private-chatbot-mpt30b-langchain) | 167 |
|[OpenPluginACI/openplugin](https://github.com/OpenPluginACI/openplugin) | 165 |
|[jmpaz/promptlib](https://github.com/jmpaz/promptlib) | 165 |
|[kjappelbaum/gptchem](https://github.com/kjappelbaum/gptchem) | 162 |
|[JorisdeJong123/7-Days-of-LangChain](https://github.com/JorisdeJong123/7-Days-of-LangChain) | 161 |
|[retr0reg/Ret2GPT](https://github.com/retr0reg/Ret2GPT) | 161 |
|[menloparklab/falcon-langchain](https://github.com/menloparklab/falcon-langchain) | 159 |
|[summarizepaper/summarizepaper](https://github.com/summarizepaper/summarizepaper) | 158 |
|[emarco177/ice_breaker](https://github.com/emarco177/ice_breaker) | 157 |
|[AmineDiro/cria](https://github.com/AmineDiro/cria) | 156 |
|[morpheuslord/HackBot](https://github.com/morpheuslord/HackBot) | 156 |
|[homanp/vercel-langchain](https://github.com/homanp/vercel-langchain) | 156 |
|[mlops-for-all/mlops-for-all.github.io](https://github.com/mlops-for-all/mlops-for-all.github.io) | 155 |
|[positive666/Prompt-Can-Anything](https://github.com/positive666/Prompt-Can-Anything) | 154 |
|[deeppavlov/dream](https://github.com/deeppavlov/dream) | 153 |
|[flurb18/AgentOoba](https://github.com/flurb18/AgentOoba) | 151 |
|[Open-Swarm-Net/GPT-Swarm](https://github.com/Open-Swarm-Net/GPT-Swarm) | 151 |
|[v7labs/benchllm](https://github.com/v7labs/benchllm) | 150 |
|[Klingefjord/chatgpt-telegram](https://github.com/Klingefjord/chatgpt-telegram) | 150 |
|[Aggregate-Intellect/sherpa](https://github.com/Aggregate-Intellect/sherpa) | 148 |
|[Coding-Crashkurse/Langchain-Full-Course](https://github.com/Coding-Crashkurse/Langchain-Full-Course) | 148 |
|[SuperDuperDB/superduperdb](https://github.com/SuperDuperDB/superduperdb) | 147 |
|[defenseunicorns/leapfrogai](https://github.com/defenseunicorns/leapfrogai) | 147 |
|[menloparklab/langchain-cohere-qdrant-doc-retrieval](https://github.com/menloparklab/langchain-cohere-qdrant-doc-retrieval) | 147 |
|[Jaseci-Labs/jaseci](https://github.com/Jaseci-Labs/jaseci) | 146 |
|[realminchoi/babyagi-ui](https://github.com/realminchoi/babyagi-ui) | 146 |
|[iMagist486/ElasticSearch-Langchain-Chatglm2](https://github.com/iMagist486/ElasticSearch-Langchain-Chatglm2) | 144 |
|[peterw/StoryStorm](https://github.com/peterw/StoryStorm) | 143 |
|[kulltc/chatgpt-sql](https://github.com/kulltc/chatgpt-sql) | 142 |
|[Teahouse-Studios/akari-bot](https://github.com/Teahouse-Studios/akari-bot) | 142 |
|[hirokidaichi/wanna](https://github.com/hirokidaichi/wanna) | 141 |
|[yasyf/summ](https://github.com/yasyf/summ) | 141 |
|[solana-labs/chatgpt-plugin](https://github.com/solana-labs/chatgpt-plugin) | 140 |
|[ssheng/BentoChain](https://github.com/ssheng/BentoChain) | 139 |
|[mallahyari/drqa](https://github.com/mallahyari/drqa) | 139 |
|[petehunt/langchain-github-bot](https://github.com/petehunt/langchain-github-bot) | 139 |
|[dbpunk-labs/octogen](https://github.com/dbpunk-labs/octogen) | 138 |
|[RedisVentures/redis-openai-qna](https://github.com/RedisVentures/redis-openai-qna) | 138 |
|[eunomia-bpf/GPTtrace](https://github.com/eunomia-bpf/GPTtrace) | 138 |
|[langchain-ai/langsmith-sdk](https://github.com/langchain-ai/langsmith-sdk) | 137 |
|[jina-ai/fastapi-serve](https://github.com/jina-ai/fastapi-serve) | 137 |
|[yeagerai/genworlds](https://github.com/yeagerai/genworlds) | 137 |
|[aurelio-labs/arxiv-bot](https://github.com/aurelio-labs/arxiv-bot) | 137 |
|[luisroque/large_laguage_models](https://github.com/luisroque/large_laguage_models) | 136 |
|[ChuloAI/BrainChulo](https://github.com/ChuloAI/BrainChulo) | 136 |
|[3Alan/DocsMind](https://github.com/3Alan/DocsMind) | 136 |
|[KylinC/ChatFinance](https://github.com/KylinC/ChatFinance) | 133 |
|[langchain-ai/text-split-explorer](https://github.com/langchain-ai/text-split-explorer) | 133 |
|[davila7/file-gpt](https://github.com/davila7/file-gpt) | 133 |
|[tencentmusic/supersonic](https://github.com/tencentmusic/supersonic) | 132 |
|[kimtth/azure-openai-llm-vector-langchain](https://github.com/kimtth/azure-openai-llm-vector-langchain) | 131 |
|[ciare-robotics/world-creator](https://github.com/ciare-robotics/world-creator) | 129 |
|[zenml-io/zenml-projects](https://github.com/zenml-io/zenml-projects) | 129 |
|[log1stics/voice-generator-webui](https://github.com/log1stics/voice-generator-webui) | 129 |
|[snexus/llm-search](https://github.com/snexus/llm-search) | 129 |
|[fixie-ai/fixie-examples](https://github.com/fixie-ai/fixie-examples) | 128 |
|[MedalCollector/Orator](https://github.com/MedalCollector/Orator) | 127 |
|[grumpyp/chroma-langchain-tutorial](https://github.com/grumpyp/chroma-langchain-tutorial) | 127 |
|[langchain-ai/langchain-aws-template](https://github.com/langchain-ai/langchain-aws-template) | 127 |
|[prof-frink-lab/slangchain](https://github.com/prof-frink-lab/slangchain) | 126 |
|[KMnO4-zx/huanhuan-chat](https://github.com/KMnO4-zx/huanhuan-chat) | 124 |
|[RCGAI/SimplyRetrieve](https://github.com/RCGAI/SimplyRetrieve) | 124 |
|[Dicklesworthstone/llama2_aided_tesseract](https://github.com/Dicklesworthstone/llama2_aided_tesseract) | 123 |
|[sdaaron/QueryGPT](https://github.com/sdaaron/QueryGPT) | 122 |
|[athina-ai/athina-sdk](https://github.com/athina-ai/athina-sdk) | 121 |
|[AIAnytime/Llama2-Medical-Chatbot](https://github.com/AIAnytime/Llama2-Medical-Chatbot) | 121 |
|[MuhammadMoinFaisal/LargeLanguageModelsProjects](https://github.com/MuhammadMoinFaisal/LargeLanguageModelsProjects) | 121 |
|[Azure/business-process-automation](https://github.com/Azure/business-process-automation) | 121 |
|[definitive-io/code-indexer-loop](https://github.com/definitive-io/code-indexer-loop) | 119 |
|[nrl-ai/pautobot](https://github.com/nrl-ai/pautobot) | 119 |
|[Azure/app-service-linux-docs](https://github.com/Azure/app-service-linux-docs) | 118 |
|[zilliztech/akcio](https://github.com/zilliztech/akcio) | 118 |
|[CodeAlchemyAI/ViLT-GPT](https://github.com/CodeAlchemyAI/ViLT-GPT) | 117 |
|[georgesung/llm_qlora](https://github.com/georgesung/llm_qlora) | 117 |
|[nicknochnack/Nopenai](https://github.com/nicknochnack/Nopenai) | 115 |
|[nftblackmagic/flask-langchain](https://github.com/nftblackmagic/flask-langchain) | 115 |
|[mortium91/langchain-assistant](https://github.com/mortium91/langchain-assistant) | 115 |
|[Ngonie-x/langchain_csv](https://github.com/Ngonie-x/langchain_csv) | 114 |
|[wombyz/HormoziGPT](https://github.com/wombyz/HormoziGPT) | 114 |
|[langchain-ai/langchain-teacher](https://github.com/langchain-ai/langchain-teacher) | 113 |
|[mluogh/eastworld](https://github.com/mluogh/eastworld) | 112 |
|[mudler/LocalAGI](https://github.com/mudler/LocalAGI) | 112 |
|[marimo-team/marimo](https://github.com/marimo-team/marimo) | 111 |
|[trancethehuman/entities-extraction-web-scraper](https://github.com/trancethehuman/entities-extraction-web-scraper) | 111 |
|[xuwenhao/mactalk-ai-course](https://github.com/xuwenhao/mactalk-ai-course) | 111 |
|[dcaribou/transfermarkt-datasets](https://github.com/dcaribou/transfermarkt-datasets) | 111 |
|[rabbitmetrics/langchain-13-min](https://github.com/rabbitmetrics/langchain-13-min) | 111 |
|[dotvignesh/PDFChat](https://github.com/dotvignesh/PDFChat) | 111 |
|[aws-samples/cdk-eks-blueprints-patterns](https://github.com/aws-samples/cdk-eks-blueprints-patterns) | 110 |
|[topoteretes/PromethAI-Backend](https://github.com/topoteretes/PromethAI-Backend) | 110 |
|[jlonge4/local_llama](https://github.com/jlonge4/local_llama) | 110 |
|[RUC-GSAI/YuLan-Rec](https://github.com/RUC-GSAI/YuLan-Rec) | 108 |
|[gh18l/CrawlGPT](https://github.com/gh18l/CrawlGPT) | 107 |
|[c0sogi/LLMChat](https://github.com/c0sogi/LLMChat) | 107 |
|[hwchase17/langchain-gradio-template](https://github.com/hwchase17/langchain-gradio-template) | 107 |
|[ArjanCodes/examples](https://github.com/ArjanCodes/examples) | 106 |
|[genia-dev/GeniA](https://github.com/genia-dev/GeniA) | 105 |
|[nexus-stc/stc](https://github.com/nexus-stc/stc) | 105 |
|[mbchang/data-driven-characters](https://github.com/mbchang/data-driven-characters) | 105 |
|[ademakdogan/ChatSQL](https://github.com/ademakdogan/ChatSQL) | 104 |
|[crosleythomas/MirrorGPT](https://github.com/crosleythomas/MirrorGPT) | 104 |
|[IvanIsCoding/ResuLLMe](https://github.com/IvanIsCoding/ResuLLMe) | 104 |
|[avrabyt/MemoryBot](https://github.com/avrabyt/MemoryBot) | 104 |
|[Azure/azure-sdk-tools](https://github.com/Azure/azure-sdk-tools) | 103 |
|[aniketmaurya/llm-inference](https://github.com/aniketmaurya/llm-inference) | 103 |
|[Anil-matcha/Youtube-to-chatbot](https://github.com/Anil-matcha/Youtube-to-chatbot) | 103 |
|[nyanp/chat2plot](https://github.com/nyanp/chat2plot) | 102 |
|[aws-samples/amazon-kendra-langchain-extensions](https://github.com/aws-samples/amazon-kendra-langchain-extensions) | 101 |
|[atisharma/llama_farm](https://github.com/atisharma/llama_farm) | 100 |
|[Xueheng-Li/SynologyChatbotGPT](https://github.com/Xueheng-Li/SynologyChatbotGPT) | 100 |
_Generated by [github-dependents-info](https://github.com/nvuillam/github-dependents-info)_
`github-dependents-info --repo langchain-ai/langchain --markdownfile dependents.md --minstars 100 --sort stars`
@@ -91,7 +91,7 @@
 - [Chat with a `CSV` | `LangChain Agents` Tutorial (Beginners)](https://youtu.be/tjeti5vXWOU) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
 - [Create Your Own ChatGPT with `PDF` Data in 5 Minutes (LangChain Tutorial)](https://youtu.be/au2WVVGUvc8) by [Liam Ottley](https://www.youtube.com/@LiamOttley)
 - [Build a Custom Chatbot with OpenAI: `GPT-Index` & LangChain | Step-by-Step Tutorial](https://youtu.be/FIDv6nc4CgU) by [Fabrikod](https://www.youtube.com/@fabrikod)
-- [`Flowise` is an open source no-code UI visual tool to build 🦜🔗LangChain applications](https://youtu.be/CovAPtQPU0k) by [Cobus Greyling](https://www.youtube.com/@CobusGreylingZA)
+- [`Flowise` is an open-source no-code UI visual tool to build 🦜🔗LangChain applications](https://youtu.be/CovAPtQPU0k) by [Cobus Greyling](https://www.youtube.com/@CobusGreylingZA)
 - [LangChain & GPT 4 For Data Analysis: The `Pandas` Dataframe Agent](https://youtu.be/rFQ5Kmkd4jc) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
 - [`GirlfriendGPT` - AI girlfriend with LangChain](https://youtu.be/LiN3D1QZGQw) by [Toolfinder AI](https://www.youtube.com/@toolfinderai)
 - [How to build with Langchain 10x easier | ⛓️ LangFlow & `Flowise`](https://youtu.be/Ya1oGL7ZTvU) by [AI Jason](https://www.youtube.com/@AIJasonZ)
@@ -48,7 +48,6 @@ If you’re working on something you’re proud of, and think the LangChain comm
 Here’s where our team hangs out, talks shop, spotlights cool work, and shares what we’re up to. We’d love to see you there too.
-
 - **[Twitter](https://twitter.com/LangChainAI):** We post about what we’re working on and what cool things we’re seeing in the space. If you tag @langchainai in your post, we’ll almost certainly see it, and can show you some love!
-- **[Discord](https://discord.gg/6adMQxSpJS):** connect with >30k developers who are building with LangChain
+- **[Discord](https://discord.gg/6adMQxSpJS):** connect with over 30,000 developers who are building with LangChain.
 - **[GitHub](https://github.com/langchain-ai/langchain):** Open pull requests, contribute to a discussion, and/or contribute
 - **[Subscribe to our bi-weekly Release Notes](https://6w1pwbss0py.typeform.com/to/KjZB1auB):** a twice-monthly email roundup of the coolest things going on in our orbit
 - **Slack:** If you’re building an application in production at your company, we’d love to get into a Slack channel together. Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) and we’ll get in touch about setting one up.
@@ -20,7 +20,7 @@
 "from operator import itemgetter\n",
 "from langchain.chat_models import ChatOpenAI\n",
 "from langchain.memory import ConversationBufferMemory\n",
-"from langchain.schema.runnable import RunnablePassthrough\n",
+"from langchain.schema.runnable import RunnablePassthrough, RunnableLambda\n",
 "from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
 "\n",
 "model = ChatOpenAI()\n",
@@ -70,7 +70,7 @@
 "outputs": [],
 "source": [
 "chain = RunnablePassthrough.assign(\n",
-"    memory=memory.load_memory_variables | itemgetter(\"history\")\n",
+"    memory=RunnableLambda(memory.load_memory_variables) | itemgetter(\"history\")\n",
 ") | prompt | model\n"
 ]
 },
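The hunks above add `RunnableLambda` to the imports and wrap `memory.load_memory_variables` in it; the later hunks in this changeset repeat the same fix. As a hedged aside on why that matters: LCEL's `|` operator is defined on `Runnable` objects, so a bare bound method cannot be piped into `itemgetter` directly. A minimal sketch of the before/after behavior, assuming the same `langchain` APIs the notebook imports:

```python
from operator import itemgetter

from langchain.memory import ConversationBufferMemory
from langchain.schema.runnable import RunnableLambda, RunnablePassthrough

memory = ConversationBufferMemory(return_messages=True)

# Before: memory.load_memory_variables is a plain method, and neither it nor
# itemgetter("history") defines `|`, so this expression raises a TypeError:
#   memory.load_memory_variables | itemgetter("history")

# After: RunnableLambda turns the method into a Runnable, which makes the
# pipe work and lets `assign` add a "memory" key to the input dict.
load_history = RunnableLambda(memory.load_memory_variables) | itemgetter("history")
with_memory = RunnablePassthrough.assign(memory=load_history)

print(with_memory.invoke({"input": "hi"}))  # {'input': 'hi', 'memory': []}
```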
@@ -184,7 +184,7 @@
 "source": [
 "## PromptTemplate + LLM + OutputParser\n",
 "\n",
-"We can also add in an output parser to easily trasform the raw LLM/ChatModel output into a more workable format"
+"We can also add in an output parser to easily transform the raw LLM/ChatModel output into a more workable format"
 ]
 },
 {
@@ -42,7 +42,7 @@
 "from langchain.chat_models import ChatOpenAI\n",
 "from langchain.embeddings import OpenAIEmbeddings\n",
 "from langchain.schema.output_parser import StrOutputParser\n",
-"from langchain.schema.runnable import RunnablePassthrough\n",
+"from langchain.schema.runnable import RunnablePassthrough, RunnableLambda\n",
 "from langchain.vectorstores import FAISS\n"
 ]
 },
@@ -338,7 +338,7 @@
 "# First we add a step to load memory\n",
 "# This adds a \"memory\" key to the input object\n",
 "loaded_memory = RunnablePassthrough.assign(\n",
-"    chat_history=memory.load_memory_variables | itemgetter(\"history\"),\n",
+"    chat_history=RunnableLambda(memory.load_memory_variables) | itemgetter(\"history\"),\n",
 ")\n",
 "# Now we calculate the standalone question\n",
 "standalone_question = {\n",
@@ -363,7 +363,7 @@
 "    \"docs\": itemgetter(\"docs\"),\n",
 "}\n",
 "# And now we put it all together!\n",
-"final_chain = loaded_memory | expanded_memory | standalone_question | retrieved_documents | answer\n"
+"final_chain = loaded_memory | standalone_question | retrieved_documents | answer\n"
 ]
 },
 {
docs/docs/expression_language/how_to/configure.ipynb (new file, 594 additions)
@@ -0,0 +1,594 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "39eaf61b",
"metadata": {},
"source": [
"# Configuration\n",
"\n",
"Oftentimes you may want to experiment with, or even expose to the end user, multiple different ways of doing things.\n",
"In order to make this experience as easy as possible, we have defined two methods.\n",
"\n",
"First, a `configurable_fields` method.\n",
"This lets you configure particular fields of a runnable.\n",
"\n",
"Second, a `configurable_alternatives` method.\n",
"With this method, you can list out alternatives for any particular runnable that can be set at runtime."
]
},
{
"cell_type": "markdown",
"id": "f2347a11",
"metadata": {},
"source": [
"## Configuration Fields"
]
},
{
"cell_type": "markdown",
"id": "a06f6e2d",
"metadata": {},
"source": [
"### With LLMs\n",
"With LLMs we can configure things like temperature"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "7ba735f4",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema.runnable import ConfigurableField\n",
"\n",
"model = ChatOpenAI(temperature=0).configurable_fields(\n",
"    temperature=ConfigurableField(\n",
"        id=\"llm_temperature\",\n",
"        name=\"LLM Temperature\",\n",
"        description=\"The temperature of the LLM\",\n",
"    )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "63a71165",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='7')"
]
},
"execution_count": 38,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.invoke(\"pick a random number\")"
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "4f83245c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='34')"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.with_config(configurable={\"llm_temperature\": .9}).invoke(\"pick a random number\")"
]
},
{
"cell_type": "markdown",
"id": "9da1fcd2",
"metadata": {},
"source": [
"We can also do this when it's used as part of a chain"
]
},
{
"cell_type": "code",
"execution_count": 40,
"id": "e75ae678",
"metadata": {},
"outputs": [],
"source": [
"prompt = PromptTemplate.from_template(\"Pick a random number above {x}\")\n",
"chain = prompt | model"
]
},
{
"cell_type": "code",
"execution_count": 41,
"id": "44886071",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='57')"
]
},
"execution_count": 41,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"x\": 0})"
]
},
{
"cell_type": "code",
"execution_count": 42,
"id": "c09fac15",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='6')"
]
},
"execution_count": 42,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.with_config(configurable={\"llm_temperature\": .9}).invoke({\"x\": 0})"
]
},
{
"cell_type": "markdown",
"id": "fb9637d0",
"metadata": {},
"source": [
"### With HubRunnables\n",
"\n",
"This is useful for switching between prompts"
]
},
{
"cell_type": "code",
"execution_count": 43,
"id": "7d5836b2",
"metadata": {},
"outputs": [],
"source": [
"from langchain.runnables.hub import HubRunnable"
]
},
{
"cell_type": "code",
"execution_count": 46,
"id": "9a9ea077",
"metadata": {},
"outputs": [],
"source": [
"prompt = HubRunnable(\"rlm/rag-prompt\").configurable_fields(\n",
"    owner_repo_commit=ConfigurableField(\n",
"        id=\"hub_commit\",\n",
"        name=\"Hub Commit\",\n",
"        description=\"The Hub commit to pull from\",\n",
"    )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 47,
"id": "c4a62cee",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptValue(messages=[HumanMessage(content=\"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\\nQuestion: foo \\nContext: bar \\nAnswer:\")])"
]
},
"execution_count": 47,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt.invoke({\"question\": \"foo\", \"context\": \"bar\"})"
]
},
{
"cell_type": "code",
"execution_count": 49,
"id": "f33f3cf2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptValue(messages=[HumanMessage(content=\"[INST]<<SYS>> You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.<</SYS>> \\nQuestion: foo \\nContext: bar \\nAnswer: [/INST]\")])"
]
},
"execution_count": 49,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt.with_config(configurable={\"hub_commit\": \"rlm/rag-prompt-llama\"}).invoke({\"question\": \"foo\", \"context\": \"bar\"})"
]
},
{
"cell_type": "markdown",
"id": "79d51519",
"metadata": {},
"source": [
"## Configurable Alternatives\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "ac733d35",
"metadata": {},
"source": [
"### With LLMs\n",
"\n",
"Let's take a look at doing this with LLMs"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "430ab8cc",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI, ChatAnthropic\n",
"from langchain.schema.runnable import ConfigurableField\n",
"from langchain.prompts import PromptTemplate"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "71248a9f",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatAnthropic(temperature=0).configurable_alternatives(\n",
"    # This gives this field an id\n",
"    # When configuring the end runnable, we can then use this id to configure this field\n",
"    ConfigurableField(id=\"llm\"),\n",
"    # This sets a default_key.\n",
"    # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used\n",
"    default_key=\"anthropic\",\n",
"    # This adds a new option, with name `openai` that is equal to `ChatOpenAI()`\n",
"    openai=ChatOpenAI(),\n",
"    # This adds a new option, with name `gpt4` that is equal to `ChatOpenAI(model=\"gpt-4\")`\n",
"    gpt4=ChatOpenAI(model=\"gpt-4\"),\n",
"    # You can add more configuration options here\n",
")\n",
"prompt = PromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"chain = prompt | llm"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "e598b1f1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" Here's a silly joke about bears:\\n\\nWhat do you call a bear with no teeth?\\nA gummy bear!\")"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# By default it will call Anthropic\n",
"chain.invoke({\"topic\": \"bears\"})"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "48b45337",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Sure, here's a bear joke for you:\\n\\nWhy don't bears wear shoes?\\n\\nBecause they already have bear feet!\")"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# We can use `.with_config(configurable={\"llm\": \"openai\"})` to specify an llm to use\n",
"chain.with_config(configurable={\"llm\": \"openai\"}).invoke({\"topic\": \"bears\"})"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "42647fb7",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" Here's a silly joke about bears:\\n\\nWhat do you call a bear with no teeth?\\nA gummy bear!\")"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# If we use the `default_key` then it uses the default\n",
"chain.with_config(configurable={\"llm\": \"anthropic\"}).invoke({\"topic\": \"bears\"})"
]
},
{
"cell_type": "markdown",
"id": "a9134559",
"metadata": {},
"source": [
"### With Prompts\n",
"\n",
"We can do a similar thing, but alternate between prompts\n"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "9f6a7c6c",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatAnthropic(temperature=0)\n",
"prompt = PromptTemplate.from_template(\"Tell me a joke about {topic}\").configurable_alternatives(\n",
"    # This gives this field an id\n",
"    # When configuring the end runnable, we can then use this id to configure this field\n",
"    ConfigurableField(id=\"prompt\"),\n",
"    # This sets a default_key.\n",
"    # If we specify this key, the default prompt (the joke prompt initialized above) will be used\n",
"    default_key=\"joke\",\n",
"    # This adds a new option, with name `poem`\n",
"    poem=PromptTemplate.from_template(\"Write a short poem about {topic}\"),\n",
"    # You can add more configuration options here\n",
")\n",
"chain = prompt | llm"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "97eda915",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" Here's a silly joke about bears:\\n\\nWhat do you call a bear with no teeth?\\nA gummy bear!\")"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# By default it will write a joke\n",
"chain.invoke({\"topic\": \"bears\"})"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "927297a1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' Here is a short poem about bears:\\n\\nThe bears awaken from their sleep\\nAnd lumber out into the deep\\nForests filled with trees so tall\\nForaging for food before nightfall \\nTheir furry coats and claws so sharp\\nSniffing for berries and fish to nab\\nLumbering about without a care\\nThe mighty grizzly and black bear\\nProud creatures, wild and free\\nRuling their domain majestically\\nWandering the woods they call their own\\nBefore returning to their dens alone')"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# We can configure it to write a poem\n",
"chain.with_config(configurable={\"prompt\": \"poem\"}).invoke({\"topic\": \"bears\"})"
]
},
{
"cell_type": "markdown",
"id": "0c77124e",
"metadata": {},
"source": [
"### With Prompts and LLMs\n",
"\n",
"We can also have multiple things configurable!\n",
"Here's an example doing that with both prompts and LLMs."
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "97538c23",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatAnthropic(temperature=0).configurable_alternatives(\n",
"    # This gives this field an id\n",
"    # When configuring the end runnable, we can then use this id to configure this field\n",
"    ConfigurableField(id=\"llm\"),\n",
"    # This sets a default_key.\n",
"    # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used\n",
"    default_key=\"anthropic\",\n",
"    # This adds a new option, with name `openai` that is equal to `ChatOpenAI()`\n",
"    openai=ChatOpenAI(),\n",
"    # This adds a new option, with name `gpt4` that is equal to `ChatOpenAI(model=\"gpt-4\")`\n",
"    gpt4=ChatOpenAI(model=\"gpt-4\"),\n",
"    # You can add more configuration options here\n",
")\n",
"prompt = PromptTemplate.from_template(\"Tell me a joke about {topic}\").configurable_alternatives(\n",
"    # This gives this field an id\n",
"    # When configuring the end runnable, we can then use this id to configure this field\n",
"    ConfigurableField(id=\"prompt\"),\n",
"    # This sets a default_key.\n",
"    # If we specify this key, the default prompt (the joke prompt initialized above) will be used\n",
"    default_key=\"joke\",\n",
"    # This adds a new option, with name `poem`\n",
"    poem=PromptTemplate.from_template(\"Write a short poem about {topic}\"),\n",
"    # You can add more configuration options here\n",
")\n",
"chain = prompt | llm"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "1dcc7ccc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"In the forest, where tall trees sway,\\nA creature roams, both fierce and gray.\\nWith mighty paws and piercing eyes,\\nThe bear, a symbol of strength, defies.\\n\\nThrough snow-kissed mountains, it does roam,\\nA guardian of its woodland home.\\nWith fur so thick, a shield of might,\\nIt braves the coldest winter night.\\n\\nA gentle giant, yet wild and free,\\nThe bear commands respect, you see.\\nWith every step, it leaves a trace,\\nOf untamed power and ancient grace.\\n\\nFrom honeyed feast to salmon's leap,\\nIt takes its place, in nature's keep.\\nA symbol of untamed delight,\\nThe bear, a wonder, day and night.\\n\\nSo let us honor this noble beast,\\nIn forests where its soul finds peace.\\nFor in its presence, we come to know,\\nThe untamed spirit that in us also flows.\")"
]
},
"execution_count": 29,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# We can configure it to write a poem with OpenAI\n",
"chain.with_config(configurable={\"prompt\": \"poem\", \"llm\": \"openai\"}).invoke({\"topic\": \"bears\"})"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "e4ee9fbc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Sure, here's a bear joke for you:\\n\\nWhy don't bears wear shoes?\\n\\nBecause they have bear feet!\")"
]
},
"execution_count": 30,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# We can always configure just one if we want\n",
"chain.with_config(configurable={\"llm\": \"openai\"}).invoke({\"topic\": \"bears\"})"
]
},
{
"cell_type": "markdown",
"id": "02fc4841",
"metadata": {},
"source": [
"### Saving configurations\n",
"\n",
"We can also easily save configured chains as their own objects"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "5cf53202",
"metadata": {},
"outputs": [],
"source": [
"openai_poem = chain.with_config(configurable={\"llm\": \"openai\"})"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "9486d701",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\")"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"openai_poem.invoke({\"topic\": \"bears\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a43e3b70",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
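The new notebook demonstrates `configurable_fields` and `configurable_alternatives` separately. As a hedged sketch (not part of the diff), the two methods can also be stacked on one model, since each returns a Runnable; this assumes the same imports the notebook uses and that a configured field on the default model still applies when that default is selected through the alternatives wrapper:

```python
from langchain.chat_models import ChatAnthropic, ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.schema.runnable import ConfigurableField

# Temperature is exposed as a configurable field of the Anthropic model...
configurable_model = ChatAnthropic(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="LLM Temperature",
        description="The temperature of the LLM",
    )
)

# ...and the whole model slot is swappable for an OpenAI alternative.
llm = configurable_model.configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI(),
)

chain = PromptTemplate.from_template("Tell me a joke about {topic}") | llm

# One with_config call can set both knobs; note that llm_temperature only
# takes effect when the "anthropic" default (the configured model) is chosen.
chain.with_config(
    configurable={"llm": "anthropic", "llm_temperature": 0.9}
).invoke({"topic": "bears"})
```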
@@ -101,7 +101,7 @@
 "source": [
 "Here the input to prompt is expected to be a map with keys \"context\" and \"question\". The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the \"question\" key.\n",
 "\n",
-"Note that when composing a RunnableMap when another Runnable we don't even need to wrap our dictuionary in the RunnableMap class — the type conversion is handled for us."
+"Note that when composing a RunnableMap with another Runnable we don't even need to wrap our dictionary in the RunnableMap class — the type conversion is handled for us."
 ]
 },
 {
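To make the type-conversion point concrete, here is a small sketch (not from the diff; the stand-in `retriever` is a `RunnableLambda` so the example runs without a vector store) showing that a bare dict composed with another Runnable is coerced into a `RunnableMap` automatically:

```python
from langchain.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnableLambda, RunnableMap, RunnablePassthrough

# Stand-in retriever so the example is self-contained.
retriever = RunnableLambda(lambda question: f"(docs about {question})")

prompt = ChatPromptTemplate.from_template(
    "Answer using this context: {context}\n\nQuestion: {question}"
)

# Explicit form: wrap the dict in RunnableMap ourselves.
explicit = RunnableMap({"context": retriever, "question": RunnablePassthrough()}) | prompt

# Implicit form: the bare dict on the left of `|` is converted to a
# RunnableMap for us -- both chains behave identically.
implicit = {"context": retriever, "question": RunnablePassthrough()} | prompt

print(implicit.invoke("what is LCEL?"))
```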