Compare commits
135 Commits
| Author | SHA1 | Date |
|---|---|---|
| | a3330c4258 | |
| | 1861cc7100 | |
| | 98c8516ef1 | |
| | 17c69678ab | |
| | 56653c53aa | |
| | 694d768174 | |
| | 8e6fa5f1d7 | |
| | 9e1e0f54d2 | |
| | 63e516c2b0 | |
| | a9db2b0b92 | |
| | 6c61315067 | |
| | 11cdfe44af | |
| | 008348ce71 | |
| | d3a5090e12 | |
| | acdbdbddb1 | |
| | 48cf978391 | |
| | e42a576cb2 | |
| | 9e32120cbb | |
| | 01b7b46908 | |
| | 35965df20d | |
| | 9d1867c77f | |
| | 6402c33299 | |
| | 3759a34229 | |
| | bd74eba152 | |
| | b54727fbad | |
| | 9c0584be74 | |
| | bb2ed4615c | |
| | 361f8e1bc6 | |
| | ead9d5b55c | |
| | 15687a28d5 | |
| | 467b082c34 | |
| | 51193309ea | |
| | 70a793ca9d | |
| | e61b528c0e | |
| | f386ac3bef | |
| | ac73154005 | |
| | af9ce3c224 | |
| | 77fcaa410a | |
| | ca9de26f2b | |
| | 7f4734c0dd | |
| | 1c0857b53e | |
| | 44da27c07b | |
| | 0b743f005b | |
| | 2aba9ab47e | |
| | 629d9b78fa | |
| | a477ddda45 | |
| | 9e81ab47be | |
| | e75766b759 | |
| | 17b5090c18 | |
| | c14a8df2ee | |
| | 17439daa6a | |
| | 4ba2c8ba75 | |
| | 7ae8b7f065 | |
| | 93bb19f69a | |
| | 18ebce2032 | |
| | 9beb03e771 | |
| | 1f7edcd08b | |
| | ef99b06362 | |
| | 3c83779661 | |
| | 51a3a86022 | |
| | 70f7558db2 | |
| | 2363c02cf3 | |
| | fbb82608cd | |
| | 9f39c23a13 | |
| | d5e762d328 | |
| | 3cd0827785 | |
| | dd0cd98861 | |
| | d0603c86b6 | |
| | 28ee6a7c12 | |
| | 2c1e735403 | |
| | 539941281d | |
| | 7d0dda7e41 | |
| | cf86447623 | |
| | 99adcdb1c9 | |
| | 06d5971be9 | |
| | 64969bc8ae | |
| | ce0019b646 | |
| | 8f06085b24 | |
| | 5451b724fc | |
| | 0bff399af1 | |
| | c9d4d53545 | |
| | db67ccb0bb | |
| | 78b4c7d5a0 | |
| | 6dd7362a54 | |
| | 3a82bd7bdb | |
| | 9a0ed75a95 | |
| | 0ca8d4449c | |
| | eedfddac2d | |
| | 7232e082de | |
| | 58220cda72 | |
| | 683f4a93b9 | |
| | fca34eb122 | |
| | 49de862076 | |
| | b6a2507794 | |
| | b56ca0c2a4 | |
| | 59adeaddb3 | |
| | c9bce5bbfb | |
| | 22abeb9f6c | |
| | b642d00f9f | |
| | c7c03d4709 | |
| | e2a9072b80 | |
| | 55fef4b64b | |
| | fd7f129f10 | |
| | 316dddc7cd | |
| | 1acfe86353 | |
| | 5de64e6d60 | |
| | 447a523662 | |
| | 8e45f720a8 | |
| | ca2eed36b7 | |
| | 923e9f9596 | |
| | 258ae1ba5f | |
| | 2aabfafe1e | |
| | d8fa94e6fa | |
| | b42f218cfc | |
| | f64522fbaf | |
| | b14b65d62a | |
| | 4d62def9ff | |
| | a992b9670d | |
| | 0a754fa286 | |
| | 2f2a5fd582 | |
| | 8932ed3f07 | |
| | e7a0def1bc | |
| | eec53fa294 | |
| | 09c66fe04f | |
| | 628cc4cce8 | |
| | 6a10e8ef31 | |
| | eb572f41a6 | |
| | 484947c492 | |
| | c3d2b01adf | |
| | 5470e730d2 | |
| | 29f5f70415 | |
| | 872836c541 | |
| | 8f50b616c5 | |
| | bcd308c368 | |
| | 88ab69c288 | |
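A listing like the table above is what `git log --oneline` produces over a compare range. A self-contained demo on a throwaway repository (in the real repo the equivalent would be `git log --oneline <base>..<head>`; the base and head SHAs are not shown on this page):

```shell
# Build a two-commit repo, then list it the way a compare page does:
# one line per commit, short SHA plus subject, newest first.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo one > file.txt && git add file.txt && git commit -qm "first"
echo two > file.txt && git commit -aqm "second"
git log --oneline
```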
.github/workflows/doc_lint.yml (vendored, 2 changes)

```diff
@@ -19,4 +19,4 @@ jobs:
 run: |
 # We should not encourage imports directly from main init file
 # Expect for hub
-git grep 'from langchain import' docs/{extras,docs_skeleton,snippets} | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
+git grep 'from langchain import' docs/{docs,snippets} | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
```
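This lint gate fails CI whenever docs import from the top-level `langchain` package, with `hub` allowlisted; the change above only narrows the searched paths to the new docs layout. A self-contained sketch of how the `grep -vE` allowlist behaves, using an inline sample instead of `git grep` over the docs tree:

```shell
# Two sample lines: the first is allowlisted, the second is a violation.
sample='from langchain import hub
from langchain import OpenAI'

# Keep top-level langchain imports, then drop the allowed `hub` import.
# Anything that survives is a violation (the workflow exits 1 on it).
violations=$(printf '%s\n' "$sample" \
  | grep 'from langchain import' \
  | grep -vE 'from langchain import (hub)' || true)

if [ -n "$violations" ]; then
  echo "violation: $violations"
else
  echo "docs clean"
fi
```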
.github/workflows/scheduled_test.yml (vendored, 4 changes)

```diff
@@ -61,6 +61,10 @@ jobs:
 env:
 OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
 ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
+AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
+AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
+AZURE_OPENAI_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_DEPLOYMENT_NAME }}
 run: |
 make scheduled_tests
```
.gitignore (vendored, 6 changes)

```diff
@@ -174,6 +174,6 @@ docs/api_reference/*/
 !docs/api_reference/_static/
 !docs/api_reference/templates/
 !docs/api_reference/themes/
-docs/docs_skeleton/build
-docs/docs_skeleton/node_modules
-docs/docs_skeleton/yarn.lock
+docs/docs/build
+docs/docs/node_modules
+docs/docs/yarn.lock
```
.gitmodules (vendored, 4 changes)

```diff
@@ -1,4 +0,0 @@
-[submodule "docs/_docs_skeleton"]
-	path = docs/_docs_skeleton
-	url = https://github.com/langchain-ai/langchain-shared-docs
-	branch = main
```
Makefile (6 changes)

```diff
@@ -15,10 +15,10 @@ docs_build:
 	docs/.local_build.sh
 
 docs_clean:
-	rm -r docs/_dist
+	rm -r _dist
 
 docs_linkcheck:
-	poetry run linkchecker docs/_dist/docs_skeleton/ --ignore-url node_modules
+	poetry run linkchecker _dist/docs/ --ignore-url node_modules
 
 api_docs_build:
 	poetry run python docs/api_reference/create_api_rst.py
@@ -53,4 +53,4 @@ help:
 	@echo 'api_docs_linkcheck - run linkchecker on the API Reference documentation'
 	@echo 'spell_check - run codespell on the project'
 	@echo 'spell_fix - run codespell on the project and fix the errors'
-	@echo '-- TEST and LINT tasks are within libs/*/ per-package --'
+	@echo '-- TEST and LINT tasks are within libs/*/ per-package --'
```
```diff
@@ -18,8 +18,9 @@
 
 Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
 
-**Production Support:** As you move your LangChains into production, we'd love to offer more hands-on support.
-Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to share more about what you're building, and our team will get in touch.
+To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com).
+[LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications.
+Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to get off the waitlist or speak with our sales team
 
 ## 🚨Breaking Changes for select chains (SQLDatabase) on 7/28/23
```
cookbook/Semi_Structured_RAG.ipynb (new file, 386 lines)
cookbook/Semi_structured_and_multi_modal_RAG.ipynb (new file, 613 lines)
cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb (new file, 559 lines)
```diff
@@ -6,7 +6,7 @@
 "# Elasticsearch\n",
 "\n",
-"[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/extras/use_cases/qa_structured/integrations/elasticsearch.ipynb)\n",
+"[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/qa_structured/integrations/elasticsearch.ipynb)\n",
 "\n",
 "We can use LLMs to interact with Elasticsearch analytics databases in natural language.\n",
 "\n",
@@ -135,9 +135,9 @@
 "outputs": [],
 "source": [
 "# We set this so we can see what exactly is going on\n",
-"import langchain\n",
+"from langchain.globals import set_verbose\n",
 "\n",
-"langchain.verbose = True"
+"set_verbose(True)"
 ]
 },
 {
@@ -489,7 +489,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.3"
+"version": "3.10.1"
 }
 },
 "nbformat": 4,
```
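This hunk replaces the old `langchain.verbose = True` attribute write with `set_verbose(True)` from `langchain.globals`. A minimal sketch of the setter/getter pattern behind that migration; the names mirror `langchain.globals`, but `_globals` and the function bodies here are illustrative stand-ins, not langchain's actual implementation:

```python
# Instead of callers mutating a module attribute directly, the library
# owns one canonical flag and exposes accessor functions, which lets it
# warn on or redirect legacy access later.
_globals = {"verbose": False, "debug": False}

def set_verbose(value: bool) -> None:
    """Set the global verbose flag."""
    _globals["verbose"] = value

def get_verbose() -> bool:
    """Read the global verbose flag."""
    return _globals["verbose"]

set_verbose(True)
print(get_verbose())  # -> True
```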
```diff
@@ -214,7 +214,7 @@
 "\n",
 "The way the chain is learning that Tom prefers veggetarian meals is via an AutoSelectionScorer that is built into the chain. The scorer will call the LLM again and ask it to evaluate the selection (`ToSelectFrom`) using the information wrapped in (`BasedOn`).\n",
 "\n",
-"You can set `langchain.debug=True` if you want to see the details of the auto-scorer, but you can also define the scoring prompt yourself."
+"You can set `set_debug(True)` if you want to see the details of the auto-scorer, but you can also define the scoring prompt yourself."
 ]
 },
 {
@@ -778,8 +778,9 @@
 ],
 "source": [
 "from langchain.prompts.prompt import PromptTemplate\n",
-"import langchain\n",
-"langchain.debug = True\n",
+"from langchain.globals import set_debug\n",
+"\n",
+"set_debug(True)\n",
 "\n",
 "REWARD_PROMPT_TEMPLATE = \"\"\"\n",
 "\n",
@@ -812,9 +813,9 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "poetry-venv",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
-"name": "poetry-venv"
+"name": "python3"
 },
 "language_info": {
 "codemirror_mode": {
@@ -826,7 +827,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.1"
+"version": "3.10.1"
 }
 },
 "nbformat": 4,
```
```diff
@@ -10,7 +10,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 9,
+"execution_count": 1,
 "metadata": {},
 "outputs": [
 {
@@ -37,13 +37,13 @@
 "'Hello World\\n'"
 ]
 },
-"execution_count": 9,
+"execution_count": 1,
 "metadata": {},
 "output_type": "execute_result"
 }
 ],
 "source": [
-"from langchain.chains import LLMBashChain\n",
+"from langchain_experimental.llm_bash.base import LLMBashChain\n",
 "from langchain.llms import OpenAI\n",
 "\n",
 "llm = OpenAI(temperature=0)\n",
@@ -65,7 +65,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 10,
+"execution_count": 2,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -98,7 +98,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 11,
+"execution_count": 3,
 "metadata": {},
 "outputs": [
 {
@@ -125,7 +125,7 @@
 "'Hello World\\n'"
 ]
 },
-"execution_count": 11,
+"execution_count": 3,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -149,7 +149,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 12,
+"execution_count": 4,
 "metadata": {},
 "outputs": [
 {
@@ -166,28 +166,24 @@
 "cd ..\n",
 "```\u001b[0m\n",
 "Code: \u001b[33;1m\u001b[1;3m['ls', 'cd ..']\u001b[0m\n",
-"Answer: \u001b[33;1m\u001b[1;3mapi.html\t\t\tllm_summarization_checker.html\n",
-"constitutional_chain.html\tmoderation.html\n",
-"llm_bash.html\t\t\topenai_openapi.yaml\n",
-"llm_checker.html\t\topenapi.html\n",
-"llm_math.html\t\t\tpal.html\n",
-"llm_requests.html\t\tsqlite.html\u001b[0m\n",
+"Answer: \u001b[33;1m\u001b[1;3mcpal.ipynb llm_bash.ipynb llm_symbolic_math.ipynb\n",
+"index.mdx llm_math.ipynb pal.ipynb\u001b[0m\n",
 "\u001b[1m> Finished chain.\u001b[0m\n"
 ]
 },
 {
 "data": {
 "text/plain": [
-"'api.html\\t\\t\\tllm_summarization_checker.html\\r\\nconstitutional_chain.html\\tmoderation.html\\r\\nllm_bash.html\\t\\t\\topenai_openapi.yaml\\r\\nllm_checker.html\\t\\topenapi.html\\r\\nllm_math.html\\t\\t\\tpal.html\\r\\nllm_requests.html\\t\\tsqlite.html'"
+"'cpal.ipynb llm_bash.ipynb llm_symbolic_math.ipynb\\r\\nindex.mdx llm_math.ipynb pal.ipynb'"
 ]
 },
-"execution_count": 12,
+"execution_count": 4,
 "metadata": {},
 "output_type": "execute_result"
 }
 ],
 "source": [
-"from langchain.utilities.bash import BashProcess\n",
+"from langchain_experimental.llm_bash.bash import BashProcess\n",
 "\n",
 "\n",
 "persistent_process = BashProcess(persistent=True)\n",
@@ -200,7 +196,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 13,
+"execution_count": 5,
 "metadata": {},
 "outputs": [
 {
@@ -217,18 +213,19 @@
 "cd ..\n",
 "```\u001b[0m\n",
 "Code: \u001b[33;1m\u001b[1;3m['ls', 'cd ..']\u001b[0m\n",
-"Answer: \u001b[33;1m\u001b[1;3mexamples\t\tgetting_started.html\tindex_examples\n",
-"generic\t\t\thow_to_guides.rst\u001b[0m\n",
+"Answer: \u001b[33;1m\u001b[1;3m_category_.yml\tdata_generation.ipynb\t\t self_check\n",
+"agents\t\tgraph\n",
+"code_writing\tlearned_prompt_optimization.ipynb\u001b[0m\n",
 "\u001b[1m> Finished chain.\u001b[0m\n"
 ]
 },
 {
 "data": {
 "text/plain": [
-"'examples\\t\\tgetting_started.html\\tindex_examples\\r\\ngeneric\\t\\t\\thow_to_guides.rst'"
+"'_category_.yml\\tdata_generation.ipynb\\t\\t self_check\\r\\nagents\\t\\tgraph\\r\\ncode_writing\\tlearned_prompt_optimization.ipynb'"
 ]
 },
-"execution_count": 13,
+"execution_count": 5,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -237,13 +234,6 @@
 "# Run the same command again and see that the state is maintained between calls\n",
 "bash_chain.run(text)"
 ]
 },
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": []
-}
 ],
 "metadata": {
@@ -262,7 +252,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.3"
+"version": "3.11.4"
 }
 },
 "nbformat": 4,
```
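The diff above moves `BashProcess` to `langchain_experimental.llm_bash.bash`, and the notebook output shows that with `persistent=True` state such as the working directory survives between `run()` calls. An illustrative sketch of that idea, a simplified stand-in and not the library's implementation:

```python
# Keep one long-lived bash process and feed it commands over stdin, so
# shell state (cwd, environment) carries over from call to call.
import subprocess

class PersistentShell:
    """Run commands in a single bash session that persists across calls."""

    def __init__(self) -> None:
        self.proc = subprocess.Popen(
            ["bash"],
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            text=True,
        )

    def run(self, command: str) -> str:
        # Print a sentinel after each command so we know where output ends.
        sentinel = "__CMD_DONE__"
        self.proc.stdin.write(f"{command}\necho {sentinel}\n")
        self.proc.stdin.flush()
        lines = []
        while True:
            line = self.proc.stdout.readline()
            if not line or line.strip() == sentinel:
                break
            lines.append(line)
        return "".join(lines)

sh = PersistentShell()
sh.run("cd /tmp")
print(sh.run("pwd").strip())  # the earlier `cd` is still in effect
```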
```diff
@@ -10,12 +10,12 @@
 },
 {
 "cell_type": "code",
-"execution_count": null,
+"execution_count": 3,
 "metadata": {},
 "outputs": [],
 "source": [
 "from langchain.llms import OpenAI\n",
-"from langchain.chains.llm_symbolic_math.base import LLMSymbolicMathChain\n",
+"from langchain_experimental.llm_symbolic_math.base import LLMSymbolicMathChain\n",
 "\n",
 "llm = OpenAI(temperature=0)\n",
 "llm_symbolic_math = LLMSymbolicMathChain.from_llm(llm)"
@@ -30,7 +30,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 23,
+"execution_count": 4,
 "metadata": {},
 "outputs": [
 {
@@ -39,7 +39,7 @@
 "'Answer: exp(x)*sin(x) + exp(x)*cos(x)'"
 ]
 },
-"execution_count": 23,
+"execution_count": 4,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 18,
+"execution_count": 5,
 "metadata": {},
 "outputs": [
 {
@@ -59,7 +59,7 @@
 "'Answer: exp(x)*sin(x)'"
 ]
 },
-"execution_count": 18,
+"execution_count": 5,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -79,7 +79,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 19,
+"execution_count": 6,
 "metadata": {},
 "outputs": [
 {
@@ -88,7 +88,7 @@
 "'Answer: Eq(y(t), C2*exp(-t) + (C1 + t/2)*exp(t))'"
 ]
 },
-"execution_count": 19,
+"execution_count": 6,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -99,7 +99,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 21,
+"execution_count": 7,
 "metadata": {},
 "outputs": [
 {
@@ -108,7 +108,7 @@
 "'Answer: {0, -sqrt(3)*I/3, sqrt(3)*I/3}'"
 ]
 },
-"execution_count": 21,
+"execution_count": 7,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -119,7 +119,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 22,
+"execution_count": 8,
 "metadata": {},
 "outputs": [
 {
@@ -128,7 +128,7 @@
 "'Answer: (3 - sqrt(7), -sqrt(7) - 2, 1 - sqrt(7)), (sqrt(7) + 3, -2 + sqrt(7), 1 + sqrt(7))'"
 ]
 },
-"execution_count": 22,
+"execution_count": 8,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -140,9 +140,9 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "venv",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
-"name": "venv"
+"name": "python3"
 },
 "language_info": {
 "codemirror_mode": {
@@ -154,9 +154,9 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.3"
+"version": "3.11.4"
 }
 },
 "nbformat": 4,
-"nbformat_minor": 2
+"nbformat_minor": 4
 }
```
cookbook/plan_and_execute_agent.ipynb (new file, 252 lines)

@@ -0,0 +1,252 @@

```json
{
 "cells": [
 {
 "cell_type": "markdown",
 "id": "0ddfef23-3c74-444c-81dd-6753722997fa",
 "metadata": {},
 "source": [
 "# Plan-and-execute\n",
 "\n",
 "Plan-and-execute agents accomplish an objective by first planning what to do, then executing the sub tasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the [\"Plan-and-Solve\" paper](https://arxiv.org/abs/2305.04091).\n",
 "\n",
 "The planning is almost always done by an LLM.\n",
 "\n",
 "The execution is usually done by a separate agent (equipped with tools)."
 ]
 },
 {
 "cell_type": "markdown",
 "id": "a7ecb22a-7009-48ec-b14e-f0fa5aac1cd0",
 "metadata": {},
 "source": [
 "## Imports"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 3,
 "id": "5fbbd4ee-bfe8-4a25-afe4-8d1a552a3d2e",
 "metadata": {},
 "outputs": [],
 "source": [
 "from langchain.agents.tools import Tool\n",
 "from langchain.chains import LLMMathChain\n",
 "from langchain.chat_models import ChatOpenAI\n",
 "from langchain.llms import OpenAI\n",
 "from langchain.utilities import DuckDuckGoSearchAPIWrapper\n",
 "from langchain_experimental.plan_and_execute import PlanAndExecute, load_agent_executor, load_chat_planner"
 ]
 },
 {
 "cell_type": "markdown",
 "id": "e0e995e5-af9d-4988-bcd0-467a2a2e18cd",
 "metadata": {},
 "source": [
 "## Tools"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 4,
 "id": "1d789f4e-54e3-4602-891a-f076e0ab9594",
 "metadata": {},
 "outputs": [],
 "source": [
 "search = DuckDuckGoSearchAPIWrapper()\n",
 "llm = OpenAI(temperature=0)\n",
 "llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)\n",
 "tools = [\n",
 " Tool(\n",
 " name=\"Search\",\n",
 " func=search.run,\n",
 " description=\"useful for when you need to answer questions about current events\"\n",
 " ),\n",
 " Tool(\n",
 " name=\"Calculator\",\n",
 " func=llm_math_chain.run,\n",
 " description=\"useful for when you need to answer questions about math\"\n",
 " ),\n",
 "]"
 ]
 },
 {
 "cell_type": "markdown",
 "id": "04dc6452-a07f-49f9-be12-95be1e2afccc",
 "metadata": {},
 "source": [
 "## Planner, Executor, and Agent\n"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 5,
 "id": "d8f49c03-c804-458b-8122-c92b26c7b7dd",
 "metadata": {},
 "outputs": [],
 "source": [
 "model = ChatOpenAI(temperature=0)\n",
 "planner = load_chat_planner(model)\n",
 "executor = load_agent_executor(model, tools, verbose=True)\n",
 "agent = PlanAndExecute(planner=planner, executor=executor)"
 ]
 },
 {
 "cell_type": "markdown",
 "id": "78ba03dd-0322-4927-b58d-a7e2027fdbb3",
 "metadata": {},
 "source": [
 "## Run example"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 6,
 "id": "a57f7efe-7866-47a7-bce5-9c7b1047964e",
 "metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
 "\n",
 "\n",
 "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
 "\u001b[32;1m\u001b[1;3mAction:\n",
 "{\n",
 " \"action\": \"Search\",\n",
 " \"action_input\": \"current prime minister of the UK\"\n",
 "}\u001b[0m\n",
 "\n",
 "\u001b[1m> Finished chain.\u001b[0m\n",
 "\n",
 "\n",
 "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
 "\u001b[32;1m\u001b[1;3mAction:\n",
 "```\n",
 "{\n",
 " \"action\": \"Search\",\n",
 " \"action_input\": \"current prime minister of the UK\"\n",
 "}\n",
 "```\u001b[0m\n",
 "Observation: \u001b[36;1m\u001b[1;3mBottom right: Rishi Sunak is the current prime minister and the first non-white prime minister. The prime minister of the United Kingdom is the principal minister of the crown of His Majesty's Government, and the head of the British Cabinet. 3 min. British Prime Minister Rishi Sunak asserted his stance on gender identity in a speech Wednesday, stating it was \"common sense\" that \"a man is a man and a woman is a woman\" — a ... The former chancellor Rishi Sunak is the UK's new prime minister. Here's what you need to know about him. He won after running for the second time this year He lost to Liz Truss in September,... Isaeli Prime Minister Benjamin Netanyahu spoke with US President Joe Biden on Wednesday, the prime minister's office said in a statement. Netanyahu \"thanked the President for the powerful words of ... By Yasmeen Serhan/London Updated: October 25, 2022 12:56 PM EDT | Originally published: October 24, 2022 9:17 AM EDT S top me if you've heard this one before: After a tumultuous period of political...\u001b[0m\n",
 "Thought:\u001b[32;1m\u001b[1;3mThe search results indicate that Rishi Sunak is the current prime minister of the UK. However, it's important to note that this information may not be accurate or up to date.\u001b[0m\n",
 "\n",
 "\u001b[1m> Finished chain.\u001b[0m\n",
 "\n",
 "\n",
 "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
 "\u001b[32;1m\u001b[1;3mAction:\n",
 "```\n",
 "{\n",
 " \"action\": \"Search\",\n",
 " \"action_input\": \"current age of the prime minister of the UK\"\n",
 "}\n",
 "```\u001b[0m\n",
 "Observation: \u001b[36;1m\u001b[1;3mHow old is Rishi Sunak? Mr Sunak was born on 12 May, 1980, making him 42 years old. He first became an MP in 2015, aged 34, and has served the constituency of Richmond in Yorkshire ever since. He... Prime Ministers' ages when they took office From oldest to youngest, the ages of the PMs were as follows: Winston Churchill - 65 years old James Callaghan - 64 years old Clement Attlee - 62 years... Anna Kaufman USA TODAY Just a few days after Liz Truss resigned as prime minister, the UK has a new prime minister. Truss, who lasted a mere 45 days in office, will be replaced by Rishi... Advertisement Rishi Sunak is the youngest British prime minister of modern times. Mr. Sunak is 42 and started out in Parliament in 2015. Rishi Sunak was appointed as chancellor of the Exchequer... The first prime minister of the current United Kingdom of Great Britain and Northern Ireland upon its effective creation in 1922 (when 26 Irish counties seceded and created the Irish Free State) was Bonar Law, [10] although the country was not renamed officially until 1927, when Stanley Baldwin was the serving prime minister. [11]\u001b[0m\n",
 "Thought:\u001b[32;1m\u001b[1;3mBased on the search results, it seems that Rishi Sunak is the current prime minister of the UK. However, I couldn't find any specific information about his age. Would you like me to search again for the current age of the prime minister?\n",
 "\n",
 "Action:\n",
 "```\n",
 "{\n",
 " \"action\": \"Search\",\n",
 " \"action_input\": \"age of Rishi Sunak\"\n",
 "}\n",
 "```\u001b[0m\n",
 "Observation: \u001b[36;1m\u001b[1;3mRishi Sunak is 42 years old, making him the youngest person to hold the office of prime minister in modern times. How tall is Rishi Sunak? How Old Is Rishi Sunak? Rishi Sunak was born on May 12, 1980, in Southampton, England. Parents and Nationality Sunak's parents were born to Indian-origin families in East Africa before... Born on May 12, 1980, Rishi is currently 42 years old. He has been a member of parliament since 2015 where he was an MP for Richmond and has served in roles including Chief Secretary to the Treasury and the Chancellor of Exchequer while Boris Johnson was PM. Family Murty, 42, is the daughter of the Indian billionaire NR Narayana Murthy, often described as the Bill Gates of India, who founded the software company Infosys. According to reports, his... Sunak became the first non-White person to lead the country and, at age 42, the youngest to take on the role in more than a century. Like most politicians, Sunak is revered by some and...\u001b[0m\n",
 "Thought:\u001b[32;1m\u001b[1;3mBased on the search results, Rishi Sunak is currently 42 years old. He was born on May 12, 1980.\u001b[0m\n",
 "\n",
 "\u001b[1m> Finished chain.\u001b[0m\n",
 "\n",
 "\n",
 "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
 "\u001b[32;1m\u001b[1;3mThought: To calculate the age raised to the power of 0.43, I can use the calculator tool.\n",
 "\n",
 "Action:\n",
 "```json\n",
 "{\n",
 " \"action\": \"Calculator\",\n",
 " \"action_input\": \"42^0.43\"\n",
 "}\n",
 "```\u001b[0m\n",
 "\n",
 "\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
 "42^0.43\u001b[32;1m\u001b[1;3m```text\n",
 "42**0.43\n",
 "```\n",
 "...numexpr.evaluate(\"42**0.43\")...\n",
 "\u001b[0m\n",
 "Answer: \u001b[33;1m\u001b[1;3m4.9888126515157\u001b[0m\n",
 "\u001b[1m> Finished chain.\u001b[0m\n",
 "\n",
 "Observation: \u001b[33;1m\u001b[1;3mAnswer: 4.9888126515157\u001b[0m\n",
 "Thought:\u001b[32;1m\u001b[1;3mThe age raised to the power of 0.43 is approximately 4.9888126515157.\n",
 "\n",
 "Final Answer:\n",
 "```json\n",
 "{\n",
 " \"action\": \"Final Answer\",\n",
 " \"action_input\": \"The age raised to the power of 0.43 is approximately 4.9888126515157.\"\n",
 "}\n",
 "```\u001b[0m\n",
 "\n",
 "\u001b[1m> Finished chain.\u001b[0m\n",
 "\n",
 "\n",
 "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
 "\u001b[32;1m\u001b[1;3mAction:\n",
 "```\n",
 "{\n",
 " \"action\": \"Final Answer\",\n",
 " \"action_input\": \"The current prime minister of the UK is Rishi Sunak. His age raised to the power of 0.43 is approximately 4.9888126515157.\"\n",
 "}\n",
 "```\u001b[0m\n",
 "\n",
 "\u001b[1m> Finished chain.\u001b[0m\n"
 ]
 },
 {
 "data": {
 "text/plain": [
 "'The current prime minister of the UK is Rishi Sunak. His age raised to the power of 0.43 is approximately 4.9888126515157.'"
 ]
 },
 "execution_count": 6,
 "metadata": {},
 "output_type": "execute_result"
 }
 ],
 "source": [
 "agent.run(\"Who is the current prime minister of the UK? What is their current age raised to the 0.43 power?\")"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "id": "0ef78a07-1a2a-46f8-9bc9-ae45f9bd706c",
 "metadata": {},
 "outputs": [],
 "source": []
 }
 ],
 "metadata": {
 "kernelspec": {
 "display_name": "poetry-venv",
 "language": "python",
 "name": "poetry-venv"
 },
 "language_info": {
 "codemirror_mode": {
 "name": "ipython",
 "version": 3
 },
 "file_extension": ".py",
 "mimetype": "text/x-python",
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
 "version": "3.9.1"
 }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
```
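The notebook above plans first, then executes each sub task. A bare-bones sketch of that control flow; the planner and executor here are plain functions standing in for the LLM-backed components (`load_chat_planner`, `load_agent_executor`) used in the notebook:

```python
# Plan-and-execute in miniature: a planner turns the objective into
# ordered steps, and an executor runs each step in turn.

def planner(objective: str) -> list[str]:
    # A real planner would ask an LLM to decompose the objective.
    return [f"search: {objective}", f"summarize results for: {objective}"]

def executor(step: str) -> str:
    # A real executor would be a tool-using agent.
    return f"done({step})"

def plan_and_execute(objective: str) -> list[str]:
    steps = planner(objective)
    return [executor(step) for step in steps]

print(plan_and_execute("current UK prime minister"))
```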
```diff
@@ -66,7 +66,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# install aditional dependencies\n",
+"# install additional dependencies\n",
 "# ! pip install chromadb openai tiktoken"
 ]
 },
```
cookbook/self_query_hotel_search.ipynb (new file, 1181 lines)
```diff
@@ -17,7 +17,7 @@
 "\n",
 "Note that SmartLLMChains\n",
 "- use more LLM passes (ie n+2 instead of just 1)\n",
-"- only work then the underlying LLM has the capability for reflection, whicher smaller models often don't\n",
+"- only work then the underlying LLM has the capability for reflection, which smaller models often don't\n",
 "- only work with underlying models that return exactly 1 output, not multiple\n",
 "\n",
 "This notebook demonstrates how to use a SmartLLMChain."
@@ -241,7 +241,7 @@
 " ideation_llm=ChatOpenAI(temperature=0.9, model_name=\"gpt-4\"),\n",
 " llm=ChatOpenAI(\n",
 " temperature=0, model_name=\"gpt-4\"\n",
-" ), # will be used for critqiue and resolution as no specific llms are given\n",
+" ), # will be used for critique and resolution as no specific llms are given\n",
 " prompt=prompt,\n",
 " n_ideas=3,\n",
 " verbose=True,\n",
```
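The notes above say a SmartLLMChain uses n+2 LLM passes: n ideation passes, one critique pass, and one resolution pass. A sketch of that pass structure; `fake_llm` is a stand-in for a real model, and this is not the `langchain_experimental` implementation:

```python
# Count the calls a SmartLLM-style chain would make: n ideas, then one
# critique over all ideas, then one resolution, for n + 2 total.

def fake_llm(prompt: str) -> str:
    return f"response({prompt[:20]})"

def smart_llm(question: str, n_ideas: int = 3) -> tuple[str, int]:
    calls = 0
    ideas = []
    for i in range(n_ideas):  # n ideation passes
        ideas.append(fake_llm(f"idea {i}: {question}"))
        calls += 1
    critique = fake_llm("critique: " + " | ".join(ideas))  # one critique pass
    calls += 1
    answer = fake_llm("resolve: " + critique)  # one resolution pass
    calls += 1
    return answer, calls

_, calls = smart_llm("What is 1 + 1?", n_ideas=3)
print(calls)  # -> 5, i.e. n + 2 with n = 3
```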
```diff
@@ -1,3 +1,3 @@
-FROM python:latest
+FROM python:3.11
 
 RUN pip install langchain
```
```diff
@@ -8,11 +8,10 @@ set -o xtrace
 SCRIPT_DIR="$(cd "$(dirname "$0")"; pwd)"
 cd "${SCRIPT_DIR}"
 
-mkdir -p _dist/docs_skeleton
-cp -r {docs_skeleton,snippets} _dist
-cp -r extras/* _dist/docs_skeleton/docs
-cd _dist/docs_skeleton
-poetry run nbdoc_build
-poetry run python generate_api_reference_links.py
+mkdir -p ../_dist
+cp -r . ../_dist
+cd ../_dist
+poetry run nbdoc_build --srcdir docs
+poetry run python scripts/generate_api_reference_links.py
 yarn install
 yarn start
```
````diff
@@ -42,7 +42,7 @@ If you are using GitHub pages for hosting, this command is a convenient way to b
 
 ### Continuous Integration
 
-Some common defaults for linting/formatting have been set for you. If you integrate your project with an open source Continuous Integration system (e.g. Travis CI, CircleCI), you may check for issues using the following command.
+Some common defaults for linting/formatting have been set for you. If you integrate your project with an open-source Continuous Integration system (e.g. Travis CI, CircleCI), you may check for issues using the following command.
 
 ```
 $ yarn ci
````
14 image files listed with identical before/after sizes: 559 KiB, 157 KiB, 235 KiB, 148 KiB, 3.5 MiB, 18 KiB, 85 KiB, 16 KiB, 542 B, 1.2 KiB, 15 KiB, 103 KiB, 136 KiB, 34 KiB.
docs/docs/additional_resources/dependents.mdx (new file, 465 lines)
@@ -0,0 +1,465 @@
# Dependents

Dependents stats for `langchain-ai/langchain`

[](https://github.com/langchain-ai/langchain/network/dependents)
[&message=451&color=informational&logo=slickpic)](https://github.com/langchain-ai/langchain/network/dependents)
[&message=30083&color=informational&logo=slickpic)](https://github.com/langchain-ai/langchain/network/dependents)
[&message=37822&color=informational&logo=slickpic)](https://github.com/langchain-ai/langchain/network/dependents)

[update: `2023-10-06`; only dependent repositories with Stars > 100]
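The badge links above appear to be shields.io static badges whose image URLs were partially lost in extraction (only the `message` counts and target links survive). As a hedged sketch, such a URL is assembled from `label`, `message`, and `color` query parameters; the label value below is an assumption for illustration, not recovered from the source:

```shell
# Illustrative reconstruction of a shields.io static-badge URL.
# "Used%20by" is an assumed, URL-encoded label; 37822 is one of the
# message counts that survives in the text above.
label="Used%20by"
message="37822"
badge="https://img.shields.io/static/v1?label=${label}&message=${message}&color=informational&logo=slickpic"
echo "${badge}"
```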
| Repository | Stars |
| :-------- | -----: |
|[openai/openai-cookbook](https://github.com/openai/openai-cookbook) | 49006 |
|[AntonOsika/gpt-engineer](https://github.com/AntonOsika/gpt-engineer) | 44368 |
|[imartinez/privateGPT](https://github.com/imartinez/privateGPT) | 38300 |
|[LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant) | 35327 |
|[hpcaitech/ColossalAI](https://github.com/hpcaitech/ColossalAI) | 34799 |
|[microsoft/TaskMatrix](https://github.com/microsoft/TaskMatrix) | 34161 |
|[streamlit/streamlit](https://github.com/streamlit/streamlit) | 27697 |
|[geekan/MetaGPT](https://github.com/geekan/MetaGPT) | 27302 |
|[reworkd/AgentGPT](https://github.com/reworkd/AgentGPT) | 26805 |
|[OpenBB-finance/OpenBBTerminal](https://github.com/OpenBB-finance/OpenBBTerminal) | 24473 |
|[StanGirard/quivr](https://github.com/StanGirard/quivr) | 23323 |
|[run-llama/llama_index](https://github.com/run-llama/llama_index) | 22151 |
|[openai/chatgpt-retrieval-plugin](https://github.com/openai/chatgpt-retrieval-plugin) | 19741 |
|[mindsdb/mindsdb](https://github.com/mindsdb/mindsdb) | 18062 |
|[PromtEngineer/localGPT](https://github.com/PromtEngineer/localGPT) | 16413 |
|[chatchat-space/Langchain-Chatchat](https://github.com/chatchat-space/Langchain-Chatchat) | 16300 |
|[cube-js/cube](https://github.com/cube-js/cube) | 16261 |
|[mlflow/mlflow](https://github.com/mlflow/mlflow) | 15487 |
|[logspace-ai/langflow](https://github.com/logspace-ai/langflow) | 12599 |
|[GaiZhenbiao/ChuanhuChatGPT](https://github.com/GaiZhenbiao/ChuanhuChatGPT) | 12501 |
|[openai/evals](https://github.com/openai/evals) | 12056 |
|[airbytehq/airbyte](https://github.com/airbytehq/airbyte) | 11919 |
|[go-skynet/LocalAI](https://github.com/go-skynet/LocalAI) | 11767 |
|[databrickslabs/dolly](https://github.com/databrickslabs/dolly) | 10609 |
|[AIGC-Audio/AudioGPT](https://github.com/AIGC-Audio/AudioGPT) | 9240 |
|[aws/amazon-sagemaker-examples](https://github.com/aws/amazon-sagemaker-examples) | 8892 |
|[langgenius/dify](https://github.com/langgenius/dify) | 8764 |
|[gventuri/pandas-ai](https://github.com/gventuri/pandas-ai) | 8687 |
|[jmorganca/ollama](https://github.com/jmorganca/ollama) | 8628 |
|[langchain-ai/langchainjs](https://github.com/langchain-ai/langchainjs) | 8392 |
|[h2oai/h2ogpt](https://github.com/h2oai/h2ogpt) | 7953 |
|[arc53/DocsGPT](https://github.com/arc53/DocsGPT) | 7730 |
|[PipedreamHQ/pipedream](https://github.com/PipedreamHQ/pipedream) | 7261 |
|[joshpxyne/gpt-migrate](https://github.com/joshpxyne/gpt-migrate) | 6349 |
|[bentoml/OpenLLM](https://github.com/bentoml/OpenLLM) | 6213 |
|[mage-ai/mage-ai](https://github.com/mage-ai/mage-ai) | 5600 |
|[zauberzeug/nicegui](https://github.com/zauberzeug/nicegui) | 5499 |
|[wenda-LLM/wenda](https://github.com/wenda-LLM/wenda) | 5497 |
|[sweepai/sweep](https://github.com/sweepai/sweep) | 5489 |
|[embedchain/embedchain](https://github.com/embedchain/embedchain) | 5428 |
|[zilliztech/GPTCache](https://github.com/zilliztech/GPTCache) | 5311 |
|[Shaunwei/RealChar](https://github.com/Shaunwei/RealChar) | 5264 |
|[GreyDGL/PentestGPT](https://github.com/GreyDGL/PentestGPT) | 5146 |
|[gkamradt/langchain-tutorials](https://github.com/gkamradt/langchain-tutorials) | 5134 |
|[serge-chat/serge](https://github.com/serge-chat/serge) | 5009 |
|[assafelovic/gpt-researcher](https://github.com/assafelovic/gpt-researcher) | 4836 |
|[openchatai/OpenChat](https://github.com/openchatai/OpenChat) | 4697 |
|[intel-analytics/BigDL](https://github.com/intel-analytics/BigDL) | 4412 |
|[continuedev/continue](https://github.com/continuedev/continue) | 4324 |
|[postgresml/postgresml](https://github.com/postgresml/postgresml) | 4267 |
|[madawei2699/myGPTReader](https://github.com/madawei2699/myGPTReader) | 4214 |
|[MineDojo/Voyager](https://github.com/MineDojo/Voyager) | 4204 |
|[danswer-ai/danswer](https://github.com/danswer-ai/danswer) | 3973 |
|[RayVentura/ShortGPT](https://github.com/RayVentura/ShortGPT) | 3922 |
|[Azure/azure-sdk-for-python](https://github.com/Azure/azure-sdk-for-python) | 3849 |
|[khoj-ai/khoj](https://github.com/khoj-ai/khoj) | 3817 |
|[langchain-ai/chat-langchain](https://github.com/langchain-ai/chat-langchain) | 3742 |
|[Azure-Samples/azure-search-openai-demo](https://github.com/Azure-Samples/azure-search-openai-demo) | 3731 |
|[marqo-ai/marqo](https://github.com/marqo-ai/marqo) | 3627 |
|[kyegomez/tree-of-thoughts](https://github.com/kyegomez/tree-of-thoughts) | 3553 |
|[llm-workflow-engine/llm-workflow-engine](https://github.com/llm-workflow-engine/llm-workflow-engine) | 3483 |
|[PrefectHQ/marvin](https://github.com/PrefectHQ/marvin) | 3460 |
|[aiwaves-cn/agents](https://github.com/aiwaves-cn/agents) | 3413 |
|[OpenBMB/ToolBench](https://github.com/OpenBMB/ToolBench) | 3388 |
|[shroominic/codeinterpreter-api](https://github.com/shroominic/codeinterpreter-api) | 3218 |
|[whitead/paper-qa](https://github.com/whitead/paper-qa) | 3085 |
|[project-baize/baize-chatbot](https://github.com/project-baize/baize-chatbot) | 3039 |
|[OpenGVLab/InternGPT](https://github.com/OpenGVLab/InternGPT) | 2911 |
|[ParisNeo/lollms-webui](https://github.com/ParisNeo/lollms-webui) | 2907 |
|[Unstructured-IO/unstructured](https://github.com/Unstructured-IO/unstructured) | 2874 |
|[openchatai/OpenCopilot](https://github.com/openchatai/OpenCopilot) | 2759 |
|[OpenBMB/BMTools](https://github.com/OpenBMB/BMTools) | 2657 |
|[homanp/superagent](https://github.com/homanp/superagent) | 2624 |
|[SamurAIGPT/EmbedAI](https://github.com/SamurAIGPT/EmbedAI) | 2575 |
|[GerevAI/gerev](https://github.com/GerevAI/gerev) | 2488 |
|[microsoft/promptflow](https://github.com/microsoft/promptflow) | 2475 |
|[OpenBMB/AgentVerse](https://github.com/OpenBMB/AgentVerse) | 2445 |
|[Mintplex-Labs/anything-llm](https://github.com/Mintplex-Labs/anything-llm) | 2434 |
|[emptycrown/llama-hub](https://github.com/emptycrown/llama-hub) | 2432 |
|[NVIDIA/NeMo-Guardrails](https://github.com/NVIDIA/NeMo-Guardrails) | 2327 |
|[ShreyaR/guardrails](https://github.com/ShreyaR/guardrails) | 2307 |
|[thomas-yanxin/LangChain-ChatGLM-Webui](https://github.com/thomas-yanxin/LangChain-ChatGLM-Webui) | 2305 |
|[yanqiangmiffy/Chinese-LangChain](https://github.com/yanqiangmiffy/Chinese-LangChain) | 2291 |
|[keephq/keep](https://github.com/keephq/keep) | 2252 |
|[OpenGVLab/Ask-Anything](https://github.com/OpenGVLab/Ask-Anything) | 2194 |
|[IntelligenzaArtificiale/Free-Auto-GPT](https://github.com/IntelligenzaArtificiale/Free-Auto-GPT) | 2169 |
|[Farama-Foundation/PettingZoo](https://github.com/Farama-Foundation/PettingZoo) | 2031 |
|[YiVal/YiVal](https://github.com/YiVal/YiVal) | 2014 |
|[hwchase17/notion-qa](https://github.com/hwchase17/notion-qa) | 2014 |
|[jupyterlab/jupyter-ai](https://github.com/jupyterlab/jupyter-ai) | 1977 |
|[paulpierre/RasaGPT](https://github.com/paulpierre/RasaGPT) | 1887 |
|[dot-agent/dotagent-WIP](https://github.com/dot-agent/dotagent-WIP) | 1812 |
|[hegelai/prompttools](https://github.com/hegelai/prompttools) | 1775 |
|[vocodedev/vocode-python](https://github.com/vocodedev/vocode-python) | 1734 |
|[Vonng/pigsty](https://github.com/Vonng/pigsty) | 1693 |
|[psychic-api/psychic](https://github.com/psychic-api/psychic) | 1597 |
|[avinashkranjan/Amazing-Python-Scripts](https://github.com/avinashkranjan/Amazing-Python-Scripts) | 1546 |
|[pinterest/querybook](https://github.com/pinterest/querybook) | 1539 |
|[Forethought-Technologies/AutoChain](https://github.com/Forethought-Technologies/AutoChain) | 1531 |
|[Kav-K/GPTDiscord](https://github.com/Kav-K/GPTDiscord) | 1503 |
|[jina-ai/langchain-serve](https://github.com/jina-ai/langchain-serve) | 1487 |
|[noahshinn024/reflexion](https://github.com/noahshinn024/reflexion) | 1481 |
|[jina-ai/dev-gpt](https://github.com/jina-ai/dev-gpt) | 1436 |
|[ttengwang/Caption-Anything](https://github.com/ttengwang/Caption-Anything) | 1425 |
|[milvus-io/bootcamp](https://github.com/milvus-io/bootcamp) | 1420 |
|[agiresearch/OpenAGI](https://github.com/agiresearch/OpenAGI) | 1401 |
|[greshake/llm-security](https://github.com/greshake/llm-security) | 1381 |
|[jina-ai/thinkgpt](https://github.com/jina-ai/thinkgpt) | 1366 |
|[lunasec-io/lunasec](https://github.com/lunasec-io/lunasec) | 1352 |
|[101dotxyz/GPTeam](https://github.com/101dotxyz/GPTeam) | 1339 |
|[refuel-ai/autolabel](https://github.com/refuel-ai/autolabel) | 1320 |
|[melih-unsal/DemoGPT](https://github.com/melih-unsal/DemoGPT) | 1320 |
|[mmz-001/knowledge_gpt](https://github.com/mmz-001/knowledge_gpt) | 1320 |
|[richardyc/Chrome-GPT](https://github.com/richardyc/Chrome-GPT) | 1315 |
|[run-llama/sec-insights](https://github.com/run-llama/sec-insights) | 1312 |
|[Azure/azureml-examples](https://github.com/Azure/azureml-examples) | 1305 |
|[cofactoryai/textbase](https://github.com/cofactoryai/textbase) | 1286 |
|[dataelement/bisheng](https://github.com/dataelement/bisheng) | 1273 |
|[eyurtsev/kor](https://github.com/eyurtsev/kor) | 1263 |
|[pluralsh/plural](https://github.com/pluralsh/plural) | 1188 |
|[FlagOpen/FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding) | 1184 |
|[juncongmoo/chatllama](https://github.com/juncongmoo/chatllama) | 1144 |
|[poe-platform/server-bot-quick-start](https://github.com/poe-platform/server-bot-quick-start) | 1139 |
|[visual-openllm/visual-openllm](https://github.com/visual-openllm/visual-openllm) | 1137 |
|[griptape-ai/griptape](https://github.com/griptape-ai/griptape) | 1124 |
|[microsoft/X-Decoder](https://github.com/microsoft/X-Decoder) | 1119 |
|[ThousandBirdsInc/chidori](https://github.com/ThousandBirdsInc/chidori) | 1116 |
|[filip-michalsky/SalesGPT](https://github.com/filip-michalsky/SalesGPT) | 1112 |
|[psychic-api/rag-stack](https://github.com/psychic-api/rag-stack) | 1110 |
|[irgolic/AutoPR](https://github.com/irgolic/AutoPR) | 1100 |
|[promptfoo/promptfoo](https://github.com/promptfoo/promptfoo) | 1099 |
|[nod-ai/SHARK](https://github.com/nod-ai/SHARK) | 1062 |
|[SamurAIGPT/Camel-AutoGPT](https://github.com/SamurAIGPT/Camel-AutoGPT) | 1036 |
|[Farama-Foundation/chatarena](https://github.com/Farama-Foundation/chatarena) | 1020 |
|[peterw/Chat-with-Github-Repo](https://github.com/peterw/Chat-with-Github-Repo) | 993 |
|[jiran214/GPT-vup](https://github.com/jiran214/GPT-vup) | 967 |
|[alejandro-ao/ask-multiple-pdfs](https://github.com/alejandro-ao/ask-multiple-pdfs) | 958 |
|[run-llama/llama-lab](https://github.com/run-llama/llama-lab) | 953 |
|[LC1332/Chat-Haruhi-Suzumiya](https://github.com/LC1332/Chat-Haruhi-Suzumiya) | 950 |
|[rlancemartin/auto-evaluator](https://github.com/rlancemartin/auto-evaluator) | 927 |
|[cheshire-cat-ai/core](https://github.com/cheshire-cat-ai/core) | 902 |
|[Anil-matcha/ChatPDF](https://github.com/Anil-matcha/ChatPDF) | 894 |
|[cirediatpl/FigmaChain](https://github.com/cirediatpl/FigmaChain) | 881 |
|[seanpixel/Teenage-AGI](https://github.com/seanpixel/Teenage-AGI) | 876 |
|[xusenlinzy/api-for-open-llm](https://github.com/xusenlinzy/api-for-open-llm) | 865 |
|[ricklamers/shell-ai](https://github.com/ricklamers/shell-ai) | 864 |
|[codeacme17/examor](https://github.com/codeacme17/examor) | 856 |
|[corca-ai/EVAL](https://github.com/corca-ai/EVAL) | 836 |
|[microsoft/Llama-2-Onnx](https://github.com/microsoft/Llama-2-Onnx) | 835 |
|[explodinggradients/ragas](https://github.com/explodinggradients/ragas) | 833 |
|[ajndkr/lanarky](https://github.com/ajndkr/lanarky) | 817 |
|[kennethleungty/Llama-2-Open-Source-LLM-CPU-Inference](https://github.com/kennethleungty/Llama-2-Open-Source-LLM-CPU-Inference) | 814 |
|[ray-project/llm-applications](https://github.com/ray-project/llm-applications) | 804 |
|[hwchase17/chat-your-data](https://github.com/hwchase17/chat-your-data) | 801 |
|[LambdaLabsML/examples](https://github.com/LambdaLabsML/examples) | 759 |
|[kreneskyp/ix](https://github.com/kreneskyp/ix) | 758 |
|[pyspark-ai/pyspark-ai](https://github.com/pyspark-ai/pyspark-ai) | 750 |
|[billxbf/ReWOO](https://github.com/billxbf/ReWOO) | 746 |
|[e-johnstonn/BriefGPT](https://github.com/e-johnstonn/BriefGPT) | 738 |
|[akshata29/entaoai](https://github.com/akshata29/entaoai) | 733 |
|[getmetal/motorhead](https://github.com/getmetal/motorhead) | 717 |
|[ruoccofabrizio/azure-open-ai-embeddings-qna](https://github.com/ruoccofabrizio/azure-open-ai-embeddings-qna) | 712 |
|[msoedov/langcorn](https://github.com/msoedov/langcorn) | 698 |
|[Dataherald/dataherald](https://github.com/Dataherald/dataherald) | 684 |
|[jondurbin/airoboros](https://github.com/jondurbin/airoboros) | 657 |
|[Ikaros-521/AI-Vtuber](https://github.com/Ikaros-521/AI-Vtuber) | 651 |
|[whyiyhw/chatgpt-wechat](https://github.com/whyiyhw/chatgpt-wechat) | 644 |
|[langchain-ai/streamlit-agent](https://github.com/langchain-ai/streamlit-agent) | 637 |
|[SamurAIGPT/ChatGPT-Developer-Plugins](https://github.com/SamurAIGPT/ChatGPT-Developer-Plugins) | 637 |
|[OpenGenerativeAI/GenossGPT](https://github.com/OpenGenerativeAI/GenossGPT) | 632 |
|[AILab-CVC/GPT4Tools](https://github.com/AILab-CVC/GPT4Tools) | 629 |
|[langchain-ai/auto-evaluator](https://github.com/langchain-ai/auto-evaluator) | 614 |
|[explosion/spacy-llm](https://github.com/explosion/spacy-llm) | 613 |
|[alexanderatallah/window.ai](https://github.com/alexanderatallah/window.ai) | 607 |
|[MiuLab/Taiwan-LLaMa](https://github.com/MiuLab/Taiwan-LLaMa) | 601 |
|[microsoft/PodcastCopilot](https://github.com/microsoft/PodcastCopilot) | 600 |
|[Dicklesworthstone/swiss_army_llama](https://github.com/Dicklesworthstone/swiss_army_llama) | 596 |
|[NoDataFound/hackGPT](https://github.com/NoDataFound/hackGPT) | 596 |
|[namuan/dr-doc-search](https://github.com/namuan/dr-doc-search) | 593 |
|[amosjyng/langchain-visualizer](https://github.com/amosjyng/langchain-visualizer) | 582 |
|[microsoft/sample-app-aoai-chatGPT](https://github.com/microsoft/sample-app-aoai-chatGPT) | 581 |
|[yvann-hub/Robby-chatbot](https://github.com/yvann-hub/Robby-chatbot) | 581 |
|[yeagerai/yeagerai-agent](https://github.com/yeagerai/yeagerai-agent) | 547 |
|[tgscan-dev/tgscan](https://github.com/tgscan-dev/tgscan) | 533 |
|[Azure-Samples/openai](https://github.com/Azure-Samples/openai) | 531 |
|[plastic-labs/tutor-gpt](https://github.com/plastic-labs/tutor-gpt) | 531 |
|[xuwenhao/geektime-ai-course](https://github.com/xuwenhao/geektime-ai-course) | 526 |
|[michaelthwan/searchGPT](https://github.com/michaelthwan/searchGPT) | 526 |
|[jonra1993/fastapi-alembic-sqlmodel-async](https://github.com/jonra1993/fastapi-alembic-sqlmodel-async) | 522 |
|[jina-ai/agentchain](https://github.com/jina-ai/agentchain) | 519 |
|[mckaywrigley/repo-chat](https://github.com/mckaywrigley/repo-chat) | 518 |
|[modelscope/modelscope-agent](https://github.com/modelscope/modelscope-agent) | 512 |
|[daveebbelaar/langchain-experiments](https://github.com/daveebbelaar/langchain-experiments) | 504 |
|[freddyaboulton/gradio-tools](https://github.com/freddyaboulton/gradio-tools) | 497 |
|[sidhq/Multi-GPT](https://github.com/sidhq/Multi-GPT) | 494 |
|[continuum-llms/chatgpt-memory](https://github.com/continuum-llms/chatgpt-memory) | 489 |
|[langchain-ai/langchain-aiplugin](https://github.com/langchain-ai/langchain-aiplugin) | 487 |
|[mpaepper/content-chatbot](https://github.com/mpaepper/content-chatbot) | 483 |
|[steamship-core/steamship-langchain](https://github.com/steamship-core/steamship-langchain) | 481 |
|[alejandro-ao/langchain-ask-pdf](https://github.com/alejandro-ao/langchain-ask-pdf) | 474 |
|[truera/trulens](https://github.com/truera/trulens) | 464 |
|[marella/chatdocs](https://github.com/marella/chatdocs) | 459 |
|[opencopilotdev/opencopilot](https://github.com/opencopilotdev/opencopilot) | 453 |
|[poe-platform/poe-protocol](https://github.com/poe-platform/poe-protocol) | 444 |
|[DataDog/dd-trace-py](https://github.com/DataDog/dd-trace-py) | 441 |
|[logan-markewich/llama_index_starter_pack](https://github.com/logan-markewich/llama_index_starter_pack) | 441 |
|[opentensor/bittensor](https://github.com/opentensor/bittensor) | 433 |
|[DjangoPeng/openai-quickstart](https://github.com/DjangoPeng/openai-quickstart) | 425 |
|[CarperAI/OpenELM](https://github.com/CarperAI/OpenELM) | 424 |
|[daodao97/chatdoc](https://github.com/daodao97/chatdoc) | 423 |
|[showlab/VLog](https://github.com/showlab/VLog) | 411 |
|[Anil-matcha/Chatbase](https://github.com/Anil-matcha/Chatbase) | 402 |
|[yakami129/VirtualWife](https://github.com/yakami129/VirtualWife) | 399 |
|[wandb/weave](https://github.com/wandb/weave) | 399 |
|[mtenenholtz/chat-twitter](https://github.com/mtenenholtz/chat-twitter) | 398 |
|[LinkSoul-AI/AutoAgents](https://github.com/LinkSoul-AI/AutoAgents) | 397 |
|[Agenta-AI/agenta](https://github.com/Agenta-AI/agenta) | 389 |
|[huchenxucs/ChatDB](https://github.com/huchenxucs/ChatDB) | 386 |
|[mallorbc/Finetune_LLMs](https://github.com/mallorbc/Finetune_LLMs) | 379 |
|[junruxiong/IncarnaMind](https://github.com/junruxiong/IncarnaMind) | 372 |
|[MagnivOrg/prompt-layer-library](https://github.com/MagnivOrg/prompt-layer-library) | 368 |
|[mosaicml/examples](https://github.com/mosaicml/examples) | 366 |
|[rsaryev/talk-codebase](https://github.com/rsaryev/talk-codebase) | 364 |
|[morpheuslord/GPT_Vuln-analyzer](https://github.com/morpheuslord/GPT_Vuln-analyzer) | 362 |
|[monarch-initiative/ontogpt](https://github.com/monarch-initiative/ontogpt) | 362 |
|[JayZeeDesign/researcher-gpt](https://github.com/JayZeeDesign/researcher-gpt) | 361 |
|[personoids/personoids-lite](https://github.com/personoids/personoids-lite) | 361 |
|[intel/intel-extension-for-transformers](https://github.com/intel/intel-extension-for-transformers) | 357 |
|[jerlendds/osintbuddy](https://github.com/jerlendds/osintbuddy) | 357 |
|[steamship-packages/langchain-production-starter](https://github.com/steamship-packages/langchain-production-starter) | 356 |
|[onlyphantom/llm-python](https://github.com/onlyphantom/llm-python) | 354 |
|[Azure-Samples/miyagi](https://github.com/Azure-Samples/miyagi) | 340 |
|[mrwadams/attackgen](https://github.com/mrwadams/attackgen) | 338 |
|[rgomezcasas/dotfiles](https://github.com/rgomezcasas/dotfiles) | 337 |
|[eosphoros-ai/DB-GPT-Hub](https://github.com/eosphoros-ai/DB-GPT-Hub) | 336 |
|[andylokandy/gpt-4-search](https://github.com/andylokandy/gpt-4-search) | 335 |
|[NimbleBoxAI/ChainFury](https://github.com/NimbleBoxAI/ChainFury) | 330 |
|[momegas/megabots](https://github.com/momegas/megabots) | 329 |
|[Nuggt-dev/Nuggt](https://github.com/Nuggt-dev/Nuggt) | 315 |
|[itamargol/openai](https://github.com/itamargol/openai) | 315 |
|[BlackHC/llm-strategy](https://github.com/BlackHC/llm-strategy) | 315 |
|[aws-samples/aws-genai-llm-chatbot](https://github.com/aws-samples/aws-genai-llm-chatbot) | 312 |
|[Cheems-Seminar/grounded-segment-any-parts](https://github.com/Cheems-Seminar/grounded-segment-any-parts) | 312 |
|[preset-io/promptimize](https://github.com/preset-io/promptimize) | 311 |
|[dgarnitz/vectorflow](https://github.com/dgarnitz/vectorflow) | 309 |
|[langchain-ai/langsmith-cookbook](https://github.com/langchain-ai/langsmith-cookbook) | 309 |
|[CambioML/pykoi](https://github.com/CambioML/pykoi) | 309 |
|[wandb/edu](https://github.com/wandb/edu) | 301 |
|[XzaiCloud/luna-ai](https://github.com/XzaiCloud/luna-ai) | 300 |
|[liangwq/Chatglm_lora_multi-gpu](https://github.com/liangwq/Chatglm_lora_multi-gpu) | 294 |
|[Haste171/langchain-chatbot](https://github.com/Haste171/langchain-chatbot) | 291 |
|[sullivan-sean/chat-langchainjs](https://github.com/sullivan-sean/chat-langchainjs) | 286 |
|[sugarforever/LangChain-Tutorials](https://github.com/sugarforever/LangChain-Tutorials) | 285 |
|[facebookresearch/personal-timeline](https://github.com/facebookresearch/personal-timeline) | 283 |
|[hnawaz007/pythondataanalysis](https://github.com/hnawaz007/pythondataanalysis) | 282 |
|[yuanjie-ai/ChatLLM](https://github.com/yuanjie-ai/ChatLLM) | 280 |
|[MetaGLM/FinGLM](https://github.com/MetaGLM/FinGLM) | 279 |
|[JohnSnowLabs/langtest](https://github.com/JohnSnowLabs/langtest) | 277 |
|[Em1tSan/NeuroGPT](https://github.com/Em1tSan/NeuroGPT) | 274 |
|[Safiullah-Rahu/CSV-AI](https://github.com/Safiullah-Rahu/CSV-AI) | 274 |
|[conceptofmind/toolformer](https://github.com/conceptofmind/toolformer) | 274 |
|[airobotlab/KoChatGPT](https://github.com/airobotlab/KoChatGPT) | 266 |
|[gia-guar/JARVIS-ChatGPT](https://github.com/gia-guar/JARVIS-ChatGPT) | 263 |
|[Mintplex-Labs/vector-admin](https://github.com/Mintplex-Labs/vector-admin) | 262 |
|[artitw/text2text](https://github.com/artitw/text2text) | 262 |
|[kaarthik108/snowChat](https://github.com/kaarthik108/snowChat) | 261 |
|[paolorechia/learn-langchain](https://github.com/paolorechia/learn-langchain) | 260 |
|[shamspias/customizable-gpt-chatbot](https://github.com/shamspias/customizable-gpt-chatbot) | 260 |
|[ur-whitelab/exmol](https://github.com/ur-whitelab/exmol) | 258 |
|[hwchase17/chroma-langchain](https://github.com/hwchase17/chroma-langchain) | 257 |
|[bborn/howdoi.ai](https://github.com/bborn/howdoi.ai) | 255 |
|[ur-whitelab/chemcrow-public](https://github.com/ur-whitelab/chemcrow-public) | 253 |
|[pablomarin/GPT-Azure-Search-Engine](https://github.com/pablomarin/GPT-Azure-Search-Engine) | 251 |
|[gustavz/DataChad](https://github.com/gustavz/DataChad) | 249 |
|[radi-cho/datasetGPT](https://github.com/radi-cho/datasetGPT) | 249 |
|[ennucore/clippinator](https://github.com/ennucore/clippinator) | 247 |
|[recalign/RecAlign](https://github.com/recalign/RecAlign) | 244 |
|[lilacai/lilac](https://github.com/lilacai/lilac) | 243 |
|[kaleido-lab/dolphin](https://github.com/kaleido-lab/dolphin) | 236 |
|[iusztinpaul/hands-on-llms](https://github.com/iusztinpaul/hands-on-llms) | 233 |
|[PradipNichite/Youtube-Tutorials](https://github.com/PradipNichite/Youtube-Tutorials) | 231 |
|[shaman-ai/agent-actors](https://github.com/shaman-ai/agent-actors) | 231 |
|[hwchase17/langchain-streamlit-template](https://github.com/hwchase17/langchain-streamlit-template) | 231 |
|[yym68686/ChatGPT-Telegram-Bot](https://github.com/yym68686/ChatGPT-Telegram-Bot) | 226 |
|[grumpyp/aixplora](https://github.com/grumpyp/aixplora) | 222 |
|[su77ungr/CASALIOY](https://github.com/su77ungr/CASALIOY) | 222 |
|[alvarosevilla95/autolang](https://github.com/alvarosevilla95/autolang) | 222 |
|[arthur-ai/bench](https://github.com/arthur-ai/bench) | 220 |
|[miaoshouai/miaoshouai-assistant](https://github.com/miaoshouai/miaoshouai-assistant) | 219 |
|[AutoPackAI/beebot](https://github.com/AutoPackAI/beebot) | 217 |
|[edreisMD/plugnplai](https://github.com/edreisMD/plugnplai) | 216 |
|[nicknochnack/LangchainDocuments](https://github.com/nicknochnack/LangchainDocuments) | 214 |
|[AkshitIreddy/Interactive-LLM-Powered-NPCs](https://github.com/AkshitIreddy/Interactive-LLM-Powered-NPCs) | 213 |
|[SpecterOps/Nemesis](https://github.com/SpecterOps/Nemesis) | 210 |
|[kyegomez/swarms](https://github.com/kyegomez/swarms) | 210 |
|[wpydcr/LLM-Kit](https://github.com/wpydcr/LLM-Kit) | 208 |
|[orgexyz/BlockAGI](https://github.com/orgexyz/BlockAGI) | 204 |
|[Chainlit/cookbook](https://github.com/Chainlit/cookbook) | 202 |
|[WongSaang/chatgpt-ui-server](https://github.com/WongSaang/chatgpt-ui-server) | 202 |
|[jbrukh/gpt-jargon](https://github.com/jbrukh/gpt-jargon) | 202 |
|[handrew/browserpilot](https://github.com/handrew/browserpilot) | 202 |
|[langchain-ai/web-explorer](https://github.com/langchain-ai/web-explorer) | 200 |
|[plchld/InsightFlow](https://github.com/plchld/InsightFlow) | 200 |
|[alphasecio/langchain-examples](https://github.com/alphasecio/langchain-examples) | 199 |
|[Gentopia-AI/Gentopia](https://github.com/Gentopia-AI/Gentopia) | 198 |
|[SamPink/dev-gpt](https://github.com/SamPink/dev-gpt) | 196 |
|[yasyf/compress-gpt](https://github.com/yasyf/compress-gpt) | 196 |
|[benthecoder/ClassGPT](https://github.com/benthecoder/ClassGPT) | 195 |
|[voxel51/voxelgpt](https://github.com/voxel51/voxelgpt) | 193 |
|[CL-lau/SQL-GPT](https://github.com/CL-lau/SQL-GPT) | 192 |
|[blob42/Instrukt](https://github.com/blob42/Instrukt) | 191 |
|[streamlit/llm-examples](https://github.com/streamlit/llm-examples) | 191 |
|[stepanogil/autonomous-hr-chatbot](https://github.com/stepanogil/autonomous-hr-chatbot) | 190 |
|[TsinghuaDatabaseGroup/DB-GPT](https://github.com/TsinghuaDatabaseGroup/DB-GPT) | 189 |
|[PJLab-ADG/DriveLikeAHuman](https://github.com/PJLab-ADG/DriveLikeAHuman) | 187 |
|[Azure-Samples/azure-search-power-skills](https://github.com/Azure-Samples/azure-search-power-skills) | 187 |
|[microsoft/azure-openai-in-a-day-workshop](https://github.com/microsoft/azure-openai-in-a-day-workshop) | 187 |
|[ju-bezdek/langchain-decorators](https://github.com/ju-bezdek/langchain-decorators) | 182 |
|[hardbyte/qabot](https://github.com/hardbyte/qabot) | 181 |
|[hongbo-miao/hongbomiao.com](https://github.com/hongbo-miao/hongbomiao.com) | 180 |
|[QwenLM/Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) | 179 |
|[showlab/UniVTG](https://github.com/showlab/UniVTG) | 179 |
|[Azure-Samples/jp-azureopenai-samples](https://github.com/Azure-Samples/jp-azureopenai-samples) | 176 |
|[afaqueumer/DocQA](https://github.com/afaqueumer/DocQA) | 174 |
|[ethanyanjiali/minChatGPT](https://github.com/ethanyanjiali/minChatGPT) | 174 |
|[shauryr/S2QA](https://github.com/shauryr/S2QA) | 174 |
|[RoboCoachTechnologies/GPT-Synthesizer](https://github.com/RoboCoachTechnologies/GPT-Synthesizer) | 173 |
|[chakkaradeep/pyCodeAGI](https://github.com/chakkaradeep/pyCodeAGI) | 172 |
|[vaibkumr/prompt-optimizer](https://github.com/vaibkumr/prompt-optimizer) | 171 |
|[ccurme/yolopandas](https://github.com/ccurme/yolopandas) | 170 |
|[anarchy-ai/LLM-VM](https://github.com/anarchy-ai/LLM-VM) | 169 |
|[ray-project/langchain-ray](https://github.com/ray-project/langchain-ray) | 169 |
|[fengyuli-dev/multimedia-gpt](https://github.com/fengyuli-dev/multimedia-gpt) | 169 |
|[ibiscp/LLM-IMDB](https://github.com/ibiscp/LLM-IMDB) | 168 |
|[mayooear/private-chatbot-mpt30b-langchain](https://github.com/mayooear/private-chatbot-mpt30b-langchain) | 167 |
|[OpenPluginACI/openplugin](https://github.com/OpenPluginACI/openplugin) | 165 |
|[jmpaz/promptlib](https://github.com/jmpaz/promptlib) | 165 |
|[kjappelbaum/gptchem](https://github.com/kjappelbaum/gptchem) | 162 |
|[JorisdeJong123/7-Days-of-LangChain](https://github.com/JorisdeJong123/7-Days-of-LangChain) | 161 |
|[retr0reg/Ret2GPT](https://github.com/retr0reg/Ret2GPT) | 161 |
|[menloparklab/falcon-langchain](https://github.com/menloparklab/falcon-langchain) | 159 |
|[summarizepaper/summarizepaper](https://github.com/summarizepaper/summarizepaper) | 158 |
|[emarco177/ice_breaker](https://github.com/emarco177/ice_breaker) | 157 |
|[AmineDiro/cria](https://github.com/AmineDiro/cria) | 156 |
|[morpheuslord/HackBot](https://github.com/morpheuslord/HackBot) | 156 |
|[homanp/vercel-langchain](https://github.com/homanp/vercel-langchain) | 156 |
|[mlops-for-all/mlops-for-all.github.io](https://github.com/mlops-for-all/mlops-for-all.github.io) | 155 |
|[positive666/Prompt-Can-Anything](https://github.com/positive666/Prompt-Can-Anything) | 154 |
|[deeppavlov/dream](https://github.com/deeppavlov/dream) | 153 |
|[flurb18/AgentOoba](https://github.com/flurb18/AgentOoba) | 151 |
|[Open-Swarm-Net/GPT-Swarm](https://github.com/Open-Swarm-Net/GPT-Swarm) | 151 |
|[v7labs/benchllm](https://github.com/v7labs/benchllm) | 150 |
|[Klingefjord/chatgpt-telegram](https://github.com/Klingefjord/chatgpt-telegram) | 150 |
|[Aggregate-Intellect/sherpa](https://github.com/Aggregate-Intellect/sherpa) | 148 |
|[Coding-Crashkurse/Langchain-Full-Course](https://github.com/Coding-Crashkurse/Langchain-Full-Course) | 148 |
|[SuperDuperDB/superduperdb](https://github.com/SuperDuperDB/superduperdb) | 147 |
|[defenseunicorns/leapfrogai](https://github.com/defenseunicorns/leapfrogai) | 147 |
|[menloparklab/langchain-cohere-qdrant-doc-retrieval](https://github.com/menloparklab/langchain-cohere-qdrant-doc-retrieval) | 147 |
|[Jaseci-Labs/jaseci](https://github.com/Jaseci-Labs/jaseci) | 146 |
|[realminchoi/babyagi-ui](https://github.com/realminchoi/babyagi-ui) | 146 |
|[iMagist486/ElasticSearch-Langchain-Chatglm2](https://github.com/iMagist486/ElasticSearch-Langchain-Chatglm2) | 144 |
|[peterw/StoryStorm](https://github.com/peterw/StoryStorm) | 143 |
|[kulltc/chatgpt-sql](https://github.com/kulltc/chatgpt-sql) | 142 |
|[Teahouse-Studios/akari-bot](https://github.com/Teahouse-Studios/akari-bot) | 142 |
|[hirokidaichi/wanna](https://github.com/hirokidaichi/wanna) | 141 |
|[yasyf/summ](https://github.com/yasyf/summ) | 141 |
|[solana-labs/chatgpt-plugin](https://github.com/solana-labs/chatgpt-plugin) | 140 |
|[ssheng/BentoChain](https://github.com/ssheng/BentoChain) | 139 |
|[mallahyari/drqa](https://github.com/mallahyari/drqa) | 139 |
|[petehunt/langchain-github-bot](https://github.com/petehunt/langchain-github-bot) | 139 |
|[dbpunk-labs/octogen](https://github.com/dbpunk-labs/octogen) | 138 |
|[RedisVentures/redis-openai-qna](https://github.com/RedisVentures/redis-openai-qna) | 138 |
|[eunomia-bpf/GPTtrace](https://github.com/eunomia-bpf/GPTtrace) | 138 |
|[langchain-ai/langsmith-sdk](https://github.com/langchain-ai/langsmith-sdk) | 137 |
|[jina-ai/fastapi-serve](https://github.com/jina-ai/fastapi-serve) | 137 |
|[yeagerai/genworlds](https://github.com/yeagerai/genworlds) | 137 |
|[aurelio-labs/arxiv-bot](https://github.com/aurelio-labs/arxiv-bot) | 137 |
|[luisroque/large_laguage_models](https://github.com/luisroque/large_laguage_models) | 136 |
|[ChuloAI/BrainChulo](https://github.com/ChuloAI/BrainChulo) | 136 |
|[3Alan/DocsMind](https://github.com/3Alan/DocsMind) | 136 |
|[KylinC/ChatFinance](https://github.com/KylinC/ChatFinance) | 133 |
|[langchain-ai/text-split-explorer](https://github.com/langchain-ai/text-split-explorer) | 133 |
|[davila7/file-gpt](https://github.com/davila7/file-gpt) | 133 |
|[tencentmusic/supersonic](https://github.com/tencentmusic/supersonic) | 132 |
|[kimtth/azure-openai-llm-vector-langchain](https://github.com/kimtth/azure-openai-llm-vector-langchain) | 131 |
|[ciare-robotics/world-creator](https://github.com/ciare-robotics/world-creator) | 129 |
|[zenml-io/zenml-projects](https://github.com/zenml-io/zenml-projects) | 129 |
|[log1stics/voice-generator-webui](https://github.com/log1stics/voice-generator-webui) | 129 |
|[snexus/llm-search](https://github.com/snexus/llm-search) | 129 |
|
||||
|[fixie-ai/fixie-examples](https://github.com/fixie-ai/fixie-examples) | 128 |
|
||||
|[MedalCollector/Orator](https://github.com/MedalCollector/Orator) | 127 |
|
||||
|[grumpyp/chroma-langchain-tutorial](https://github.com/grumpyp/chroma-langchain-tutorial) | 127 |
|
||||
|[langchain-ai/langchain-aws-template](https://github.com/langchain-ai/langchain-aws-template) | 127 |
|
||||
|[prof-frink-lab/slangchain](https://github.com/prof-frink-lab/slangchain) | 126 |
|
||||
|[KMnO4-zx/huanhuan-chat](https://github.com/KMnO4-zx/huanhuan-chat) | 124 |
|
||||
|[RCGAI/SimplyRetrieve](https://github.com/RCGAI/SimplyRetrieve) | 124 |
|
||||
|[Dicklesworthstone/llama2_aided_tesseract](https://github.com/Dicklesworthstone/llama2_aided_tesseract) | 123 |
|
||||
|[sdaaron/QueryGPT](https://github.com/sdaaron/QueryGPT) | 122 |
|
||||
|[athina-ai/athina-sdk](https://github.com/athina-ai/athina-sdk) | 121 |
|
||||
|[AIAnytime/Llama2-Medical-Chatbot](https://github.com/AIAnytime/Llama2-Medical-Chatbot) | 121 |
|
||||
|[MuhammadMoinFaisal/LargeLanguageModelsProjects](https://github.com/MuhammadMoinFaisal/LargeLanguageModelsProjects) | 121 |
|
||||
|[Azure/business-process-automation](https://github.com/Azure/business-process-automation) | 121 |
|
||||
|[definitive-io/code-indexer-loop](https://github.com/definitive-io/code-indexer-loop) | 119 |
|
||||
|[nrl-ai/pautobot](https://github.com/nrl-ai/pautobot) | 119 |
|
||||
|[Azure/app-service-linux-docs](https://github.com/Azure/app-service-linux-docs) | 118 |
|
||||
|[zilliztech/akcio](https://github.com/zilliztech/akcio) | 118 |
|
||||
|[CodeAlchemyAI/ViLT-GPT](https://github.com/CodeAlchemyAI/ViLT-GPT) | 117 |
|
||||
|[georgesung/llm_qlora](https://github.com/georgesung/llm_qlora) | 117 |
|
||||
|[nicknochnack/Nopenai](https://github.com/nicknochnack/Nopenai) | 115 |
|
||||
|[nftblackmagic/flask-langchain](https://github.com/nftblackmagic/flask-langchain) | 115 |
|
||||
|[mortium91/langchain-assistant](https://github.com/mortium91/langchain-assistant) | 115 |
|
||||
|[Ngonie-x/langchain_csv](https://github.com/Ngonie-x/langchain_csv) | 114 |
|
||||
|[wombyz/HormoziGPT](https://github.com/wombyz/HormoziGPT) | 114 |
|
||||
|[langchain-ai/langchain-teacher](https://github.com/langchain-ai/langchain-teacher) | 113 |
|
||||
|[mluogh/eastworld](https://github.com/mluogh/eastworld) | 112 |
|
||||
|[mudler/LocalAGI](https://github.com/mudler/LocalAGI) | 112 |
|
||||
|[marimo-team/marimo](https://github.com/marimo-team/marimo) | 111 |
|
||||
|[trancethehuman/entities-extraction-web-scraper](https://github.com/trancethehuman/entities-extraction-web-scraper) | 111 |
|
||||
|[xuwenhao/mactalk-ai-course](https://github.com/xuwenhao/mactalk-ai-course) | 111 |
|
||||
|[dcaribou/transfermarkt-datasets](https://github.com/dcaribou/transfermarkt-datasets) | 111 |
|
||||
|[rabbitmetrics/langchain-13-min](https://github.com/rabbitmetrics/langchain-13-min) | 111 |
|
||||
|[dotvignesh/PDFChat](https://github.com/dotvignesh/PDFChat) | 111 |
|
||||
|[aws-samples/cdk-eks-blueprints-patterns](https://github.com/aws-samples/cdk-eks-blueprints-patterns) | 110 |
|
||||
|[topoteretes/PromethAI-Backend](https://github.com/topoteretes/PromethAI-Backend) | 110 |
|
||||
|[jlonge4/local_llama](https://github.com/jlonge4/local_llama) | 110 |
|
||||
|[RUC-GSAI/YuLan-Rec](https://github.com/RUC-GSAI/YuLan-Rec) | 108 |
|
||||
|[gh18l/CrawlGPT](https://github.com/gh18l/CrawlGPT) | 107 |
|
||||
|[c0sogi/LLMChat](https://github.com/c0sogi/LLMChat) | 107 |
|
||||
|[hwchase17/langchain-gradio-template](https://github.com/hwchase17/langchain-gradio-template) | 107 |
|
||||
|[ArjanCodes/examples](https://github.com/ArjanCodes/examples) | 106 |
|
||||
|[genia-dev/GeniA](https://github.com/genia-dev/GeniA) | 105 |
|
||||
|[nexus-stc/stc](https://github.com/nexus-stc/stc) | 105 |
|
||||
|[mbchang/data-driven-characters](https://github.com/mbchang/data-driven-characters) | 105 |
|
||||
|[ademakdogan/ChatSQL](https://github.com/ademakdogan/ChatSQL) | 104 |
|
||||
|[crosleythomas/MirrorGPT](https://github.com/crosleythomas/MirrorGPT) | 104 |
|
||||
|[IvanIsCoding/ResuLLMe](https://github.com/IvanIsCoding/ResuLLMe) | 104 |
|
||||
|[avrabyt/MemoryBot](https://github.com/avrabyt/MemoryBot) | 104 |
|
||||
|[Azure/azure-sdk-tools](https://github.com/Azure/azure-sdk-tools) | 103 |
|
||||
|[aniketmaurya/llm-inference](https://github.com/aniketmaurya/llm-inference) | 103 |
|
||||
|[Anil-matcha/Youtube-to-chatbot](https://github.com/Anil-matcha/Youtube-to-chatbot) | 103 |
|
||||
|[nyanp/chat2plot](https://github.com/nyanp/chat2plot) | 102 |
|
||||
|[aws-samples/amazon-kendra-langchain-extensions](https://github.com/aws-samples/amazon-kendra-langchain-extensions) | 101 |
|
||||
|[atisharma/llama_farm](https://github.com/atisharma/llama_farm) | 100 |
|
||||
|[Xueheng-Li/SynologyChatbotGPT](https://github.com/Xueheng-Li/SynologyChatbotGPT) | 100 |
|
||||
|
||||
|
||||
|
||||
_Generated by [github-dependents-info](https://github.com/nvuillam/github-dependents-info)_
|
||||
|
||||
`github-dependents-info --repo langchain-ai/langchain --markdownfile dependents.md --minstars 100 --sort stars`
|
||||
@@ -91,7 +91,7 @@

- [Chat with a `CSV` | `LangChain Agents` Tutorial (Beginners)](https://youtu.be/tjeti5vXWOU) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
- [Create Your Own ChatGPT with `PDF` Data in 5 Minutes (LangChain Tutorial)](https://youtu.be/au2WVVGUvc8) by [Liam Ottley](https://www.youtube.com/@LiamOttley)
- [Build a Custom Chatbot with OpenAI: `GPT-Index` & LangChain | Step-by-Step Tutorial](https://youtu.be/FIDv6nc4CgU) by [Fabrikod](https://www.youtube.com/@fabrikod)
- [`Flowise` is an open source no-code UI visual tool to build 🦜🔗LangChain applications](https://youtu.be/CovAPtQPU0k) by [Cobus Greyling](https://www.youtube.com/@CobusGreylingZA)
- [`Flowise` is an open-source no-code UI visual tool to build 🦜🔗LangChain applications](https://youtu.be/CovAPtQPU0k) by [Cobus Greyling](https://www.youtube.com/@CobusGreylingZA)
- [LangChain & GPT 4 For Data Analysis: The `Pandas` Dataframe Agent](https://youtu.be/rFQ5Kmkd4jc) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
- [`GirlfriendGPT` - AI girlfriend with LangChain](https://youtu.be/LiN3D1QZGQw) by [Toolfinder AI](https://www.youtube.com/@toolfinderai)
- [How to build with Langchain 10x easier | ⛓️ LangFlow & `Flowise`](https://youtu.be/Ya1oGL7ZTvU) by [AI Jason](https://www.youtube.com/@AIJasonZ)

@@ -48,7 +48,6 @@ If you’re working on something you’re proud of, and think the LangChain comm

Here’s where our team hangs out, talks shop, spotlights cool work, and shares what we’re up to. We’d love to see you there too.

- **[Twitter](https://twitter.com/LangChainAI):** We post about what we’re working on and what cool things we’re seeing in the space. If you tag @langchainai in your post, we’ll almost certainly see it, and can show you some love!
- **[Discord](https://discord.gg/6adMQxSpJS):** connect with >30k developers who are building with LangChain
- **[Discord](https://discord.gg/6adMQxSpJS):** connect with over 30,000 developers who are building with LangChain.
- **[GitHub](https://github.com/langchain-ai/langchain):** Open pull requests, contribute to a discussion, and/or contribute
- **[Subscribe to our bi-weekly Release Notes](https://6w1pwbss0py.typeform.com/to/KjZB1auB):** a twice/month email roundup of the coolest things going on in our orbit
- **Slack:** If you’re building an application in production at your company, we’d love to get into a Slack channel together. Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) and we’ll get in touch about setting one up.
@@ -17,9 +17,10 @@
    "metadata": {},
    "outputs": [],
    "source": [
    "from operator import itemgetter\n",
    "from langchain.chat_models import ChatOpenAI\n",
    "from langchain.memory import ConversationBufferMemory\n",
    "from langchain.schema.runnable import RunnableMap\n",
    "from langchain.schema.runnable import RunnablePassthrough\n",
    "from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
    "\n",
    "model = ChatOpenAI()\n",

@@ -27,7 +28,7 @@
    "    (\"system\", \"You are a helpful chatbot\"),\n",
    "    MessagesPlaceholder(variable_name=\"history\"),\n",
    "    (\"human\", \"{input}\")\n",
    "])"
    "])\n"
   ]
  },
  {

@@ -37,7 +38,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
    "memory = ConversationBufferMemory(return_messages=True)"
    "memory = ConversationBufferMemory(return_messages=True)\n"
   ]
  },
  {

@@ -58,7 +59,7 @@
    }
   ],
   "source": [
    "memory.load_memory_variables({})"
    "memory.load_memory_variables({})\n"
   ]
  },
  {

@@ -68,13 +69,9 @@
    "metadata": {},
    "outputs": [],
    "source": [
    "chain = RunnableMap({\n",
    "    \"input\": lambda x: x[\"input\"],\n",
    "    \"memory\": memory.load_memory_variables\n",
    "}) | {\n",
    "    \"input\": lambda x: x[\"input\"],\n",
    "    \"history\": lambda x: x[\"memory\"][\"history\"]\n",
    "} | prompt | model"
    "chain = RunnablePassthrough.assign(\n",
    "    memory=memory.load_memory_variables | itemgetter(\"history\")\n",
    ") | prompt | model\n"
   ]
  },
  {

@@ -97,7 +94,7 @@
   "source": [
    "inputs = {\"input\": \"hi im bob\"}\n",
    "response = chain.invoke(inputs)\n",
    "response"
    "response\n"
   ]
  },
  {

@@ -107,7 +104,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
    "memory.save_context(inputs, {\"output\": response.content})"
    "memory.save_context(inputs, {\"output\": response.content})\n"
   ]
  },
  {

@@ -129,7 +126,7 @@
    }
   ],
   "source": [
    "memory.load_memory_variables({})"
    "memory.load_memory_variables({})\n"
   ]
  },
  {

@@ -152,7 +149,7 @@
   "source": [
    "inputs = {\"input\": \"whats my name\"}\n",
    "response = chain.invoke(inputs)\n",
    "response"
    "response\n"
   ]
  }
 ],
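The hunk above replaces a hand-built two-step `RunnableMap` with `RunnablePassthrough.assign`. As a langchain-free illustration of why the rewritten chain no longer needs to re-list `"input"` by hand, the semantics of `assign` can be sketched in plain Python (the `assign` helper below is a hypothetical stand-in, not the langchain implementation):

```python
# Hypothetical sketch of RunnablePassthrough.assign semantics (plain Python,
# not langchain code): compute each new field from the input dict and merge
# it into a copy of the input, passing every existing key through untouched.
def assign(**fields):
    def step(inputs: dict) -> dict:
        out = dict(inputs)  # passthrough: keep the original keys
        for name, fn in fields.items():
            out[name] = fn(inputs)
        return out
    return step

load_memory = assign(memory=lambda x: {"history": []})
print(load_memory({"input": "hi im bob"}))
# {'input': 'hi im bob', 'memory': {'history': []}}
```

Because the input keys are merged through automatically, only the genuinely new field (`memory`) has to be named, which is the whole point of the diff above.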
@@ -8,7 +8,7 @@
    "---\n",
    "sidebar_position: 0\n",
    "title: Prompt + LLM\n",
    "---"
    "---\n"
   ]
  },
  {

@@ -47,7 +47,7 @@
    "\n",
    "prompt = ChatPromptTemplate.from_template(\"tell me a joke about {foo}\")\n",
    "model = ChatOpenAI()\n",
    "chain = prompt | model"
    "chain = prompt | model\n"
   ]
  },
  {

@@ -68,7 +68,7 @@
    }
   ],
   "source": [
    "chain.invoke({\"foo\": \"bears\"})"
    "chain.invoke({\"foo\": \"bears\"})\n"
   ]
  },
  {

@@ -94,7 +94,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
    "chain = prompt | model.bind(stop=[\"\\n\"])"
    "chain = prompt | model.bind(stop=[\"\\n\"])\n"
   ]
  },
  {

@@ -115,7 +115,7 @@
    }
   ],
   "source": [
    "chain.invoke({\"foo\": \"bears\"})"
    "chain.invoke({\"foo\": \"bears\"})\n"
   ]
  },
  {

@@ -153,7 +153,7 @@
    "      }\n",
    "    }\n",
    "  ]\n",
    "chain = prompt | model.bind(function_call= {\"name\": \"joke\"}, functions= functions)"
    "chain = prompt | model.bind(function_call= {\"name\": \"joke\"}, functions= functions)\n"
   ]
  },
  {

@@ -174,7 +174,7 @@
    }
   ],
   "source": [
    "chain.invoke({\"foo\": \"bears\"}, config={})"
    "chain.invoke({\"foo\": \"bears\"}, config={})\n"
   ]
  },
  {

@@ -196,7 +196,7 @@
   "source": [
    "from langchain.schema.output_parser import StrOutputParser\n",
    "\n",
    "chain = prompt | model | StrOutputParser()"
    "chain = prompt | model | StrOutputParser()\n"
   ]
  },
  {

@@ -225,7 +225,7 @@
    }
   ],
   "source": [
    "chain.invoke({\"foo\": \"bears\"})"
    "chain.invoke({\"foo\": \"bears\"})\n"
   ]
  },
  {

@@ -251,7 +251,7 @@
    "    prompt \n",
    "    | model.bind(function_call= {\"name\": \"joke\"}, functions= functions) \n",
    "    | JsonOutputFunctionsParser()\n",
    ")"
    ")\n"
   ]
  },
  {

@@ -273,7 +273,7 @@
    }
   ],
   "source": [
    "chain.invoke({\"foo\": \"bears\"})"
    "chain.invoke({\"foo\": \"bears\"})\n"
   ]
  },
  {

@@ -289,7 +289,7 @@
    "    prompt \n",
    "    | model.bind(function_call= {\"name\": \"joke\"}, functions= functions) \n",
    "    | JsonKeyOutputFunctionsParser(key_name=\"setup\")\n",
    ")"
    ")\n"
   ]
  },
  {

@@ -310,7 +310,7 @@
    }
   ],
   "source": [
    "chain.invoke({\"foo\": \"bears\"})"
    "chain.invoke({\"foo\": \"bears\"})\n"
   ]
  },
  {

@@ -332,13 +332,13 @@
   "source": [
    "from langchain.schema.runnable import RunnableMap, RunnablePassthrough\n",
    "\n",
    "map_ = RunnableMap({\"foo\": RunnablePassthrough()})\n",
    "map_ = RunnableMap(foo=RunnablePassthrough())\n",
    "chain = (\n",
    "    map_ \n",
    "    | prompt\n",
    "    | model.bind(function_call= {\"name\": \"joke\"}, functions= functions) \n",
    "    | JsonKeyOutputFunctionsParser(key_name=\"setup\")\n",
    ")"
    ")\n"
   ]
  },
  {

@@ -359,7 +359,7 @@
    }
   ],
   "source": [
    "chain.invoke(\"bears\")"
    "chain.invoke(\"bears\")\n"
   ]
  },
  {

@@ -382,7 +382,7 @@
    "    | prompt\n",
    "    | model.bind(function_call= {\"name\": \"joke\"}, functions= functions) \n",
    "    | JsonKeyOutputFunctionsParser(key_name=\"setup\")\n",
    ")"
    ")\n"
   ]
  },
  {

@@ -403,7 +403,7 @@
    }
   ],
   "source": [
    "chain.invoke(\"bears\")"
    "chain.invoke(\"bears\")\n"
   ]
  }
 ],
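The `model.bind(stop=["\n"])` cells in the notebook above attach call-time arguments to a model once, so every later invocation includes them. Outside langchain, the same idea is just partial application; here is a sketch using an invented `call_model` stand-in (not a real model API):

```python
from functools import partial

# Invented stand-in for a model call that honors a stop sequence.
def call_model(prompt, stop=None):
    text = f"echo: {prompt}"
    if stop:
        # Truncate the output at the first stop sequence, as LLM APIs do.
        text = text.split(stop[0])[0]
    return text

# "Binding" the stop argument once, in the spirit of model.bind(stop=["\n"]):
bound = partial(call_model, stop=["\n"])
print(bound("line one\nline two"))  # echo: line one
```

The design benefit is the same as in the notebook: the pipeline (`prompt | model.bind(...)`) stays a single composable object, with the extra argument baked in rather than threaded through every call site.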
@@ -8,7 +8,7 @@
    "---\n",
    "sidebar_position: 1\n",
    "title: RAG\n",
    "---"
    "---\n"
   ]
  },
  {

@@ -26,7 +26,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
    "!pip install langchain openai faiss-cpu tiktoken"
    "!pip install langchain openai faiss-cpu tiktoken\n"
   ]
  },
  {

@@ -43,7 +43,7 @@
    "from langchain.embeddings import OpenAIEmbeddings\n",
    "from langchain.schema.output_parser import StrOutputParser\n",
    "from langchain.schema.runnable import RunnablePassthrough\n",
    "from langchain.vectorstores import FAISS"
    "from langchain.vectorstores import FAISS\n"
   ]
  },
  {

@@ -63,7 +63,7 @@
    "\"\"\"\n",
    "prompt = ChatPromptTemplate.from_template(template)\n",
    "\n",
    "model = ChatOpenAI()"
    "model = ChatOpenAI()\n"
   ]
  },
  {

@@ -78,7 +78,7 @@
    "    | prompt \n",
    "    | model \n",
    "    | StrOutputParser()\n",
    ")"
    ")\n"
   ]
  },
  {

@@ -99,7 +99,7 @@
    }
   ],
   "source": [
    "chain.invoke(\"where did harrison work?\")"
    "chain.invoke(\"where did harrison work?\")\n"
   ]
  },
  {

@@ -122,7 +122,7 @@
    "    \"context\": itemgetter(\"question\") | retriever, \n",
    "    \"question\": itemgetter(\"question\"), \n",
    "    \"language\": itemgetter(\"language\")\n",
    "} | prompt | model | StrOutputParser()"
    "} | prompt | model | StrOutputParser()\n"
   ]
  },
  {

@@ -143,7 +143,7 @@
    }
   ],
   "source": [
    "chain.invoke({\"question\": \"where did harrison work\", \"language\": \"italian\"})"
    "chain.invoke({\"question\": \"where did harrison work\", \"language\": \"italian\"})\n"
   ]
  },
  {

@@ -164,7 +164,7 @@
    "outputs": [],
    "source": [
    "from langchain.schema.runnable import RunnableMap\n",
    "from langchain.schema import format_document"
    "from langchain.schema import format_document\n"
   ]
  },
  {

@@ -182,7 +182,7 @@
    "{chat_history}\n",
    "Follow Up Input: {question}\n",
    "Standalone question:\"\"\"\n",
    "CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)"
    "CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)\n"
   ]
  },
  {

@@ -197,7 +197,7 @@
    "\n",
    "Question: {question}\n",
    "\"\"\"\n",
    "ANSWER_PROMPT = ChatPromptTemplate.from_template(template)"
    "ANSWER_PROMPT = ChatPromptTemplate.from_template(template)\n"
   ]
  },
  {

@@ -210,7 +210,7 @@
    "DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template=\"{page_content}\")\n",
    "def _combine_documents(docs, document_prompt = DEFAULT_DOCUMENT_PROMPT, document_separator=\"\\n\\n\"):\n",
    "    doc_strings = [format_document(doc, document_prompt) for doc in docs]\n",
    "    return document_separator.join(doc_strings)"
    "    return document_separator.join(doc_strings)\n"
   ]
  },
  {

@@ -227,7 +227,7 @@
    "        human = \"Human: \" + dialogue_turn[0]\n",
    "        ai = \"Assistant: \" + dialogue_turn[1]\n",
    "        buffer += \"\\n\" + \"\\n\".join([human, ai])\n",
    "    return buffer"
    "    return buffer\n"
   ]
  },
  {

@@ -238,18 +238,15 @@
    "outputs": [],
    "source": [
    "_inputs = RunnableMap(\n",
    "    {\n",
    "        \"standalone_question\": {\n",
    "            \"question\": lambda x: x[\"question\"],\n",
    "            \"chat_history\": lambda x: _format_chat_history(x['chat_history'])\n",
    "        } | CONDENSE_QUESTION_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser(),\n",
    "    }\n",
    "    standalone_question=RunnablePassthrough.assign(\n",
    "        chat_history=lambda x: _format_chat_history(x['chat_history'])\n",
    "    ) | CONDENSE_QUESTION_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser(),\n",
    ")\n",
    "_context = {\n",
    "    \"context\": itemgetter(\"standalone_question\") | retriever | _combine_documents,\n",
    "    \"question\": lambda x: x[\"standalone_question\"]\n",
    "}\n",
    "conversational_qa_chain = _inputs | _context | ANSWER_PROMPT | ChatOpenAI()"
    "conversational_qa_chain = _inputs | _context | ANSWER_PROMPT | ChatOpenAI()\n"
   ]
  },
  {

@@ -273,7 +270,7 @@
    "conversational_qa_chain.invoke({\n",
    "    \"question\": \"where did harrison work?\",\n",
    "    \"chat_history\": [],\n",
    "})"
    "})\n"
   ]
  },
  {

@@ -297,7 +294,7 @@
    "conversational_qa_chain.invoke({\n",
    "    \"question\": \"where did he work?\",\n",
    "    \"chat_history\": [(\"Who wrote this notebook?\", \"Harrison\")],\n",
    "})"
    "})\n"
   ]
  },
  {

@@ -317,7 +314,8 @@
    "metadata": {},
    "outputs": [],
    "source": [
    "from langchain.memory import ConversationBufferMemory"
    "from operator import itemgetter\n",
    "from langchain.memory import ConversationBufferMemory\n"
   ]
  },
  {

@@ -327,7 +325,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
    "memory = ConversationBufferMemory(return_messages=True, output_key=\"answer\", input_key=\"question\")"
    "memory = ConversationBufferMemory(return_messages=True, output_key=\"answer\", input_key=\"question\")\n"
   ]
  },
  {

@@ -338,19 +336,10 @@
    "outputs": [],
    "source": [
    "# First we add a step to load memory\n",
    "# This needs to be a RunnableMap because its the first input\n",
    "loaded_memory = RunnableMap(\n",
    "    {\n",
    "        \"question\": itemgetter(\"question\"),\n",
    "        \"memory\": memory.load_memory_variables,\n",
    "    }\n",
    "# This adds a \"memory\" key to the input object\n",
    "loaded_memory = RunnablePassthrough.assign(\n",
    "    chat_history=memory.load_memory_variables | itemgetter(\"history\"),\n",
    ")\n",
    "# Next we add a step to expand memory into the variables\n",
    "expanded_memory = {\n",
    "    \"question\": itemgetter(\"question\"),\n",
    "    \"chat_history\": lambda x: x[\"memory\"][\"history\"]\n",
    "}\n",
    "\n",
    "# Now we calculate the standalone question\n",
    "standalone_question = {\n",
    "    \"standalone_question\": {\n",

@@ -374,7 +363,7 @@
    "    \"docs\": itemgetter(\"docs\"),\n",
    "}\n",
    "# And now we put it all together!\n",
    "final_chain = loaded_memory | expanded_memory | standalone_question | retrieved_documents | answer"
    "final_chain = loaded_memory | expanded_memory | standalone_question | retrieved_documents | answer\n"
   ]
  },
  {

@@ -398,7 +387,7 @@
   "source": [
    "inputs = {\"question\": \"where did harrison work?\"}\n",
    "result = final_chain.invoke(inputs)\n",
    "result"
    "result\n"
   ]
  },
  {

@@ -411,7 +400,7 @@
    "# Note that the memory does not save automatically\n",
    "# This will be improved in the future\n",
    "# For now you need to save it yourself\n",
    "memory.save_context(inputs, {\"answer\": result[\"answer\"].content})"
    "memory.save_context(inputs, {\"answer\": result[\"answer\"].content})\n"
   ]
  },
  {

@@ -433,7 +422,7 @@
    }
   ],
   "source": [
    "memory.load_memory_variables({})"
    "memory.load_memory_variables({})\n"
   ]
  }
 ],
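The RAG chains above route dictionary keys with `operator.itemgetter` and join retrieved documents with `_combine_documents`. Both behaviors are easy to check in isolation; the `combine_documents` below is a simplified, langchain-free stand-in that joins plain strings rather than `Document` objects:

```python
from operator import itemgetter

# itemgetter("question") is a callable that extracts that key from a dict,
# which is why it can be piped in front of a retriever in the chains above.
get_question = itemgetter("question")
inputs = {"question": "where did harrison work", "language": "italian"}
print(get_question(inputs))  # where did harrison work

# Simplified stand-in for _combine_documents: join document texts with a
# separator so they fit into a single "context" slot of the prompt.
def combine_documents(doc_texts, separator="\n\n"):
    return separator.join(doc_texts)

print(combine_documents(["harrison worked at kensho", "bears like honey"]))
```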
@@ -8,7 +8,7 @@
    "---\n",
    "sidebar_position: 3\n",
    "title: Querying a SQL DB\n",
    "---"
    "---\n"
   ]
  },
  {

@@ -33,7 +33,7 @@
    "\n",
    "Question: {question}\n",
    "SQL Query:\"\"\"\n",
    "prompt = ChatPromptTemplate.from_template(template)"
    "prompt = ChatPromptTemplate.from_template(template)\n"
   ]
  },
  {

@@ -43,7 +43,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
    "from langchain.utilities import SQLDatabase"
    "from langchain.utilities import SQLDatabase\n"
   ]
  },
  {

@@ -61,7 +61,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
    "db = SQLDatabase.from_uri(\"sqlite:///./Chinook.db\")"
    "db = SQLDatabase.from_uri(\"sqlite:///./Chinook.db\")\n"
   ]
  },
  {

@@ -72,7 +72,7 @@
    "outputs": [],
    "source": [
    "def get_schema(_):\n",
    "    return db.get_table_info()"
    "    return db.get_table_info()\n"
   ]
  },
  {

@@ -83,7 +83,7 @@
    "outputs": [],
    "source": [
    "def run_query(query):\n",
    "    return db.run(query)"
    "    return db.run(query)\n"
   ]
  },
  {

@@ -93,24 +93,18 @@
    "metadata": {},
    "outputs": [],
    "source": [
    "from operator import itemgetter\n",
    "\n",
    "from langchain.chat_models import ChatOpenAI\n",
    "from langchain.schema.output_parser import StrOutputParser\n",
    "from langchain.schema.runnable import RunnableLambda, RunnableMap\n",
    "from langchain.schema.runnable import RunnablePassthrough\n",
    "\n",
    "model = ChatOpenAI()\n",
    "\n",
    "inputs = {\n",
    "    \"schema\": RunnableLambda(get_schema),\n",
    "    \"question\": itemgetter(\"question\")\n",
    "}\n",
    "sql_response = (\n",
    "        RunnableMap(inputs)\n",
    "        RunnablePassthrough.assign(schema=get_schema)\n",
    "        | prompt\n",
    "        | model.bind(stop=[\"\\nSQLResult:\"])\n",
    "        | StrOutputParser()\n",
    "    )"
    "    )\n"
   ]
  },
  {

@@ -131,7 +125,7 @@
    }
   ],
   "source": [
    "sql_response.invoke({\"question\": \"How many employees are there?\"})"
    "sql_response.invoke({\"question\": \"How many employees are there?\"})\n"
   ]
  },
  {

@@ -147,7 +141,7 @@
    "Question: {question}\n",
    "SQL Query: {query}\n",
    "SQL Response: {response}\"\"\"\n",
    "prompt_response = ChatPromptTemplate.from_template(template)"
    "prompt_response = ChatPromptTemplate.from_template(template)\n"
   ]
  },
  {

@@ -158,19 +152,14 @@
    "outputs": [],
    "source": [
    "full_chain = (\n",
    "    RunnableMap({\n",
    "        \"question\": itemgetter(\"question\"),\n",
    "        \"query\": sql_response,\n",
    "    }) \n",
    "    | {\n",
    "        \"schema\": RunnableLambda(get_schema),\n",
    "        \"question\": itemgetter(\"question\"),\n",
    "        \"query\": itemgetter(\"query\"),\n",
    "        \"response\": lambda x: db.run(x[\"query\"])    \n",
    "    } \n",
    "    RunnablePassthrough.assign(query=sql_response) \n",
    "    | RunnablePassthrough.assign(\n",
    "        schema=get_schema,\n",
    "        response=lambda x: db.run(x[\"query\"]),\n",
    "    )\n",
    "    | prompt_response \n",
    "    | model\n",
    ")"
    ")\n"
   ]
  },
  {

@@ -191,7 +180,7 @@
    }
   ],
   "source": [
    "full_chain.invoke({\"question\": \"How many employees are there?\"})"
    "full_chain.invoke({\"question\": \"How many employees are there?\"})\n"
   ]
  },
  {
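The SQL notebook above assumes a local `Chinook.db` file. To see what `get_schema` and `run_query` amount to without langchain or that file, here is a sketch against an in-memory SQLite database; the miniature `Employee` table is invented for illustration (the real Chinook schema has many more columns):

```python
import sqlite3

# In-memory stand-in for Chinook.db with a tiny Employee table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (EmployeeId INTEGER PRIMARY KEY, LastName TEXT)")
conn.executemany("INSERT INTO Employee (LastName) VALUES (?)", [("Adams",), ("Edwards",)])

def get_schema(_):
    # Rough equivalent of db.get_table_info(): dump the CREATE statements
    # so the model can see the available tables and columns.
    rows = conn.execute("SELECT sql FROM sqlite_master WHERE type = 'table'")
    return "\n".join(row[0] for row in rows)

def run_query(query):
    return conn.execute(query).fetchall()

print(run_query("SELECT COUNT(*) FROM Employee"))  # [(2,)]
```

The `full_chain` above simply wires these two helpers into the prompt: the schema feeds the query-writing step, and `run_query` feeds the answer-writing step.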
@@ -5,9 +5,9 @@
    "id": "b022ab74-794d-4c54-ad47-ff9549ddb9d2",
    "metadata": {},
    "source": [
    "# Use RunnableMaps\n",
    "# Use RunnableParallel/RunnableMap\n",
    "\n",
    "RunnableMaps make it easy to execute multiple Runnables in parallel, and to return the output of these Runnables as a map."
    "RunnableParallel (aka. RunnableMap) makes it easy to execute multiple Runnables in parallel, and to return the output of these Runnables as a map."
   ]
  },
  {

@@ -31,16 +31,16 @@
   "source": [
    "from langchain.chat_models import ChatOpenAI\n",
    "from langchain.prompts import ChatPromptTemplate\n",
    "from langchain.schema.runnable import RunnableMap\n",
    "from langchain.schema.runnable import RunnableParallel\n",
    "\n",
    "\n",
    "model = ChatOpenAI()\n",
    "joke_chain = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\") | model\n",
    "poem_chain = ChatPromptTemplate.from_template(\"write a 2-line poem about {topic}\") | model\n",
    "\n",
    "map_chain = RunnableMap({\"joke\": joke_chain, \"poem\": poem_chain,})\n",
    "map_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)\n",
    "\n",
    "map_chain.invoke({\"topic\": \"bear\"})"
    "map_chain.invoke({\"topic\": \"bear\"})\n"
   ]
  },
  {

@@ -91,7 +91,7 @@
    "    | StrOutputParser()\n",
    ")\n",
    "\n",
    "retrieval_chain.invoke(\"where did harrison work?\")"
    "retrieval_chain.invoke(\"where did harrison work?\")\n"
   ]
  },
  {

@@ -101,7 +101,7 @@
   "source": [
    "Here the input to prompt is expected to be a map with keys \"context\" and \"question\". The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the \"question\" key.\n",
    "\n",
    "Note that when composing a RunnableMap when another Runnable we don't even need to wrap our dictuionary in the RunnableMap class — the type conversion is handled for us."
    "Note that when composing a RunnableMap when another Runnable we don't even need to wrap our dictionary in the RunnableMap class — the type conversion is handled for us."
   ]
  },
  {

@@ -131,7 +131,7 @@
   "source": [
    "%%timeit\n",
    "\n",
    "joke_chain.invoke({\"topic\": \"bear\"})"
    "joke_chain.invoke({\"topic\": \"bear\"})\n"
   ]
  },
  {

@@ -151,7 +151,7 @@
   "source": [
    "%%timeit\n",
    "\n",
    "poem_chain.invoke({\"topic\": \"bear\"})"
    "poem_chain.invoke({\"topic\": \"bear\"})\n"
   ]
  },
  {

@@ -171,7 +171,7 @@
   "source": [
    "%%timeit\n",
    "\n",
    "map_chain.invoke({\"topic\": \"bear\"})"
    "map_chain.invoke({\"topic\": \"bear\"})\n"
   ]
  }
 ],
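The contract being renamed above (RunnableMap to RunnableParallel) is: run every branch on the same input and return the results as a dict keyed by branch name. A langchain-free sketch of that contract, using a thread pool so the branches genuinely overlap (the branch functions here are trivial placeholders for the joke and poem chains):

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of RunnableParallel semantics: invoke each branch on the same
# input concurrently and collect the outputs under the branch's key.
def run_parallel(branches: dict, value):
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, value) for name, fn in branches.items()}
        return {name: fut.result() for name, fut in futures.items()}

result = run_parallel(
    {
        "joke": lambda x: f"a joke about {x['topic']}",
        "poem": lambda x: f"a poem about {x['topic']}",
    },
    {"topic": "bear"},
)
print(result)  # {'joke': 'a joke about bear', 'poem': 'a poem about bear'}
```

This overlap is also why the `%%timeit` cells in the notebook show the parallel map taking roughly as long as the slower branch alone, not the sum of both.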
@@ -31,3 +31,6 @@ How to use core features of LCEL
|
||||
|
||||
#### [Cookbook](/docs/expression_language/cookbook)
|
||||
Examples of common LCEL usage patterns
|
||||
|
||||
#### [Why use LCEL](/docs/expression_language/why)
|
||||
A deeper dive into the benefits of LCEL
|
||||
@@ -16,7 +16,7 @@
 "id": "9a9acd2e",
 "metadata": {},
 "source": [
-"In an effort to make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.Runnable.html#langchain.schema.runnable.Runnable) protocol that most components implement. This is a standard interface with a few different methods, which makes it easy to define custom chains as well as making it possible to invoke them in a standard way. The standard interface exposed includes:\n",
+"In an effort to make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.base.Runnable.html#langchain.schema.runnable.base.Runnable) protocol that most components implement. This is a standard interface with a few different methods, which makes it easy to define custom chains as well as making it possible to invoke them in a standard way. The standard interface exposed includes:\n",
 "\n",
 "- [`stream`](#stream): stream back chunks of the response\n",
 "- [`invoke`](#invoke): call the chain on an input\n",
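The hunk above corrects the API-reference link for the Runnable protocol. As a rough illustration of what such a protocol looks like, here is a minimal sketch in plain Python: the method names mirror the interface described (`invoke`, `stream`, and `|` composition), but the classes are invented for illustration and are not LangChain's actual implementation.

```python
from typing import Any, Iterator


class Runnable:
    """Conceptual sketch of a Runnable-style protocol (not LangChain's real class)."""

    def invoke(self, input: Any) -> Any:
        raise NotImplementedError

    def stream(self, input: Any) -> Iterator[Any]:
        # Default behaviour: yield the full result as a single chunk.
        yield self.invoke(input)

    def __or__(self, other: "Runnable") -> "Runnable":
        # The | operator composes two runnables into a two-step chain.
        return Chain(self, other)


class Chain(Runnable):
    def __init__(self, first: Runnable, second: Runnable):
        self.first, self.second = first, second

    def invoke(self, input: Any) -> Any:
        # Feed the first step's output into the second step.
        return self.second.invoke(self.first.invoke(input))


class Lambda(Runnable):
    """Wrap a plain function as a runnable step."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, input: Any) -> Any:
        return self.fn(input)


chain = Lambda(lambda t: f"tell me a joke about {t}") | Lambda(str.upper)
print(chain.invoke("bears"))  # TELL ME A JOKE ABOUT BEARS
```

Composing with `|` and calling everything through the same `invoke`/`stream` surface is the core idea the linked protocol standardises.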
@@ -131,7 +131,7 @@
 ],
 "source": [
 "# The input schema of the chain is the input schema of its first part, the prompt.\n",
-"chain.input_schema.schema()"
+"chain.input_schema.schema()\n"
 ]
 },
 {
@@ -244,7 +244,7 @@
 ],
 "source": [
 "# The output schema of the chain is the output schema of its last part, in this case a ChatModel, which outputs a ChatMessage\n",
-"chain.output_schema.schema()"
+"chain.output_schema.schema()\n"
 ]
 },
 {
@@ -783,7 +783,7 @@
 ],
 "source": [
 "async for chunk in retrieval_chain.astream_log(\"where did harrison work?\", include_names=['Docs'], diff=False):\n",
-" print(chunk)"
+" print(chunk)\n"
 ]
 },
 {
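The `astream_log` call in the hunk above consumes an asynchronous stream of intermediate run states. A conceptual sketch of that consumption pattern, using only the standard library; the generator name, its signature, and the chunk contents are invented placeholders, not real LangChain log entries:

```python
import asyncio


async def fake_stream_log(question):
    # Toy async generator standing in for a chain's log stream:
    # yield intermediate steps first, then the final answer.
    yield {"step": "retriever", "docs": ["harrison worked at kensho"]}
    yield {"step": "llm", "output": "Harrison worked at Kensho."}


async def collect(question):
    chunks = []
    # `async for` drains the stream chunk by chunk, exactly like the
    # notebook cell does with the real astream_log.
    async for chunk in fake_stream_log(question):
        chunks.append(chunk)
    return chunks


print(asyncio.run(collect("where did harrison work?")))
```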
@@ -793,7 +793,7 @@
 "source": [
 "## Parallelism\n",
 "\n",
-"Let's take a look at how LangChain Expression Language support parallel requests as much as possible. For example, when using a RunnableMap (often written as a dictionary) it executes each element in parallel."
+"Let's take a look at how LangChain Expression Language support parallel requests as much as possible. For example, when using a RunnableParallel (often written as a dictionary) it executes each element in parallel."
 ]
 },
 {
@@ -803,13 +803,10 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from langchain.schema.runnable import RunnableMap\n",
+"from langchain.schema.runnable import RunnableParallel\n",
 "chain1 = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\") | model\n",
 "chain2 = ChatPromptTemplate.from_template(\"write a short (2 line) poem about {topic}\") | model\n",
-"combined = RunnableMap({\n",
-" \"joke\": chain1,\n",
-" \"poem\": chain2,\n",
-"})\n"
+"combined = RunnableParallel(joke=chain1, poem=chain2)\n"
 ]
 },
 {
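The hunk above swaps RunnableMap for the equivalent RunnableParallel, which runs each branch of the map concurrently and returns a dict of results. A toy stand-in for that behaviour using only the standard library (the helper name `run_parallel` is invented; this is not the LangChain API):

```python
from concurrent.futures import ThreadPoolExecutor


def run_parallel(steps, input):
    """Toy stand-in for a RunnableParallel: run every branch concurrently
    on the same input and return a dict of results keyed like `steps`."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, input) for name, fn in steps.items()}
        return {name: f.result() for name, f in futures.items()}


# Stand-ins for the joke and poem chains from the notebook cell.
combined = {
    "joke": lambda topic: f"a joke about {topic}",
    "poem": lambda topic: f"a short poem about {topic}",
}
print(run_parallel(combined, "bear"))
```

Because both branches run at once, the combined call takes roughly as long as the slowest branch, not the sum of both, which is what the `%%timeit` cells earlier in the diff are measuring.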
docs/docs/expression_language/why.mdx (new file, 11 additions)
@@ -0,0 +1,11 @@
+# Why use LCEL?
+
+The LangChain Expression Language was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we’ve seen folks successfully running in production LCEL chains with 100s of steps). To highlight a few of the reasons you might want to use LCEL:
+
+- first-class support for streaming: when you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means eg. we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens. We’re constantly improving streaming support, recently we added a [streaming JSON parser](https://twitter.com/LangChainAI/status/1709690468030914584), and more is in the works.
+- first-class async support: any chain built with LCEL can be called both with the synchronous API (eg. in your Jupyter notebook while prototyping) as well as with the asynchronous API (eg. in a [LangServe](https://github.com/langchain-ai/langserve) server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server.
+- optimised parallel execution: whenever your LCEL chains have steps that can be executed in parallel (eg if you fetch documents from multiple retrievers) we automatically do it, both in the sync and the async interfaces, for the smallest possible latency.
+- support for retries and fallbacks: more recently we’ve added support for configuring retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We’re currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.
+- accessing intermediate results: for more complex chains it’s often very useful to access the results of intermediate steps even before the final output is produced. This can be used let end-users know something is happening, or even just to debug your chain. We’ve added support for [streaming intermediate results](https://x.com/LangChainAI/status/1711806009097044193?s=20), and it’s available on every LangServe server.
+- [input and output schemas](https://x.com/LangChainAI/status/1711805322195861934?s=20): this week we launched input and output schemas for LCEL, giving every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe.
+- tracing with LangSmith: all chains built with LCEL have first-class tracing support, which can be used to debug your chains, or to understand what’s happening in production. To enable this all you have to do is add your [LangSmith](https://www.langchain.com/langsmith) API key as an environment variable.
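The retries-and-fallbacks bullet in the new page can be illustrated with a small standalone sketch. `with_retries_and_fallback` is a hypothetical helper written for this example, not LangChain's actual retry/fallback API:

```python
def with_retries_and_fallback(primary, fallback, max_retries=2):
    """Hypothetical helper illustrating the retry/fallback pattern:
    try `primary` up to max_retries times, then run `fallback`."""

    def run(input):
        for _attempt in range(max_retries):
            try:
                return primary(input)
            except Exception:
                continue  # a real implementation might back off here
        return fallback(input)

    return run


calls = {"n": 0}


def flaky_model(question):
    # Simulate a model endpoint that is currently down.
    calls["n"] += 1
    raise RuntimeError("model unavailable")


safe = with_retries_and_fallback(flaky_model, lambda q: f"fallback answer for {q!r}")
print(safe("where did harrison work?"))  # fallback answer for 'where did harrison work?'
```

Wrapping any step of a chain this way is what makes the chain "more reliable at scale": transient failures are retried, and persistent ones degrade to a secondary model instead of erroring out.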