Compare commits


48 Commits

Author SHA1 Message Date
Jacob Lee
187ddca02f docs: Make deprecation docs bar bigger (#28674)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-12-11 17:19:47 -08:00
Bagatur
f28e339129 infra: test release skip attestations (#27853) 2024-11-01 17:22:00 -07:00
Bagatur
974cf1f330 infra: fix test release permissions (#27852) 2024-11-01 17:09:47 -07:00
Bagatur
e711034713 core[patch]: rm image prompt file loading (#27848) 2024-11-01 23:42:14 +00:00
Erick Friis
bc6600d86f docs: canonical url fix (#27554)
canonical wrong on e.g.
https://python.langchain.com/v0.1/docs/modules/agents/concepts/
2024-10-22 12:46:18 -07:00
Erick Friis
3b0e993e20 docs: v0.1 canonical (#26515) 2024-09-16 13:32:04 -07:00
Erick Friis
5b5a782196 docs: v0.1 docs deprecation show (#26510) 2024-09-16 02:21:35 +00:00
Erick Friis
86b6d9df9a docs: v0.1 link to latest in dropdown (#26503) 2024-09-15 18:36:00 +00:00
Jacob Lee
d6e9d6fee1 Try removing v0.2 rewrite (#25918) 2024-09-01 07:33:22 -07:00
ccurme
4b7a7ed4b1 v0.1 docs: add callout to OllamaFunctions docs (#25579) 2024-08-20 08:59:11 -04:00
Erick Friis
be7a5aa726 docs: fix mdx fences (#25184) 2024-08-08 17:07:24 +00:00
Bagatur
27170b08da docs: nvidia v0.2 warning (#24831)
Add explicit warning and link to v0.2 docs. cc @dglogo
2024-07-30 12:46:46 -07:00
Jacob Lee
6ed6c7f409 docs[patch]: Increase v0.1 banner size (#23072)
CC @hwchase17
2024-06-18 02:34:35 -07:00
Erick Friis
cd17444711 multiple: fix codespell (v0.1) (#22785) 2024-06-11 13:31:06 -07:00
Jacob Lee
3a5523bf82 docs[patch]: Redirect LangGraph docs link (#22784)
FYI @efriis
2024-06-11 12:28:02 -07:00
Jacob Lee
f883981446 docs[patch]: Add canonical URLs pointing at 0.2 docs for popular pages (#22250)
@efriis
2024-05-31 10:55:10 -04:00
Erick Friis
b8550e7d3a infra: fix make docs_build (0.1) (#22247) 2024-05-28 13:54:49 -07:00
Nelson Auner
946a0a38bd experimental[patch]: Fix import errors and docs of experimental prompt injection identifier (#21586)
**Description**

I noticed that code in experimental was not functional due to small
errors, and users were complaining at
https://stackoverflow.com/questions/78272521/hugging-face-prompt-injection-identifier
and https://github.com/langchain-ai/langchain/discussions/19995.

I know folks are busy with `v2.0` now, but I believe this is a worthy fix
because it is isolated, minor, backwards compatible, and fixes an error
in the current v0.1 code.

**Issue**
https://github.com/langchain-ai/langchain/discussions/19995

**Dependencies**
None

**Twitter handle**
https://twitter.com/nelsonauner


cc @asofter

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-21 16:29:25 -07:00
junefish
5a40413bfd docs: add Pinecone tab to vector stores page (#21969)
Thank you for contributing to LangChain!

- [x] **PR title**: docs: add Pinecone tab to [vector stores
page](https://python.langchain.com/v0.1/docs/modules/data_connection/vectorstores/).


- [x] **PR message**: Recreation of
https://github.com/langchain-ai/langchain/pull/21721.
Adds information about PineconeVectorStore to the LangChain vector
stores page. Although this page is deprecated, it still shows up
prominently in Google search results, so it will still be very helpful
to users to have correct information.
![search
results](https://github.com/langchain-ai/langchain/assets/19216250/e05d8d74-03da-44a1-b87f-0f8087d3c13a)


- [x] **Add tests and docs**: N/A


2024-05-21 09:35:20 -07:00
Bagatur
cb45caa02e docs: link to v0.1 api ref (#21945) 2024-05-20 19:01:55 -07:00
SN
bc84bc44b6 docs: Update pairwise evaluations page to link to LangSmith evaluate_comparative function (#21686)
2024-05-20 16:22:24 -07:00
Erick Friis
b199db9767 docs: update announcement bar (v0.1) (#21855) 2024-05-18 00:35:42 +00:00
Erick Friis
91c6117fd5 docs: align vercel core dep (v0.1) (#21838) 2024-05-17 14:21:28 -07:00
Erick Friis
65e6239f14 docs: cookbook redirect (v0.1) (#21823) 2024-05-17 10:02:00 -07:00
Erick Friis
4bf7add9b8 docs: version dropdown (v0.1) (#21786) 2024-05-16 17:01:37 -07:00
Erick Friis
c9ec8c5d07 docs: dont rewrite ipynb links that have double slash (v0.1) (#21776) 2024-05-16 13:26:09 -07:00
Erick Friis
d53de65aca docs: anthropic forced tool calling (#21774) 2024-05-16 18:34:02 +00:00
Mohammad Mohtashim
f02e27b664 Docs[patch]: v0.1 UseCases Notebooks' Google Colab Links fix (#21722)
- **Description:** The v0.1 use-case notebooks' Google Colab links were
pointing to master, but they should point to v0.1; this was causing the
issue below.
  - **Issue:** #21690
2024-05-15 15:44:35 -04:00
Bagatur
a79c8b3834 docs: openai bind tools nit (#21693) 2024-05-15 01:06:02 +00:00
Erick Friis
4ea6f882d4 infra: remove prints from notebook build (v0.1) (#21689) 2024-05-14 16:27:44 -07:00
Erick Friis
03bd0f51c9 docs: ignore nb echo:false blocks (v0.1) (#21642) 2024-05-14 00:22:53 +00:00
Erick Friis
be3d5904da infra: fix api ref link generation (v0.1) (#21632) 2024-05-13 15:16:54 -07:00
ccurme
9d850054bc docs: update heading in v0.1 extraction docs (#21605) 2024-05-13 07:07:43 -07:00
ccurme
5c07a96de1 langchain: release 0.1.20 (#21549) 2024-05-10 21:15:56 +00:00
ccurme
7440ce0cb4 langchain: cherry-pick moderation fix into v0.1 (#21544)
```
git checkout v0.1
git pull
git checkout -b cc/cherry_pick_into_v01
git cherry-pick d3ca2cc8c3
```

Co-authored-by: Matt Florence <matt@mattflo.com>
Co-authored-by: Emilia Katari <emilia@outpace.com>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-10 21:05:33 +00:00
Erick Friis
73a5b3dd98 infra: codespell in v1 (#21547) 2024-05-10 13:57:27 -07:00
Erick Friis
cfd827b574 docs: announcement bar (#21511) 2024-05-10 10:10:52 -07:00
Erick Friis
a71c4e22d1 docs: 404 page top level (#21510) 2024-05-09 17:14:56 -07:00
Erick Friis
29be604023 Revert "docs: redirect base slug (0.1)" (#21500)
Reverts langchain-ai/langchain#21458
2024-05-09 12:21:25 -07:00
Erick Friis
25f8467ffb docs: redirect base slug (0.1) (#21458) 2024-05-09 10:52:17 -07:00
Erick Friis
0c84aea1d4 docs: spell correction (#21459) 2024-05-09 10:51:04 -07:00
Erick Friis
d0f043a765 docs: baseUrl for ganalytics (0.1) (#21456) 2024-05-08 18:07:10 -07:00
Erick Friis
264f677528 langchain: bump core community (#21453) 2024-05-08 15:58:01 -07:00
Erick Friis
3b99f428a0 community: fix core version str (#21452) 2024-05-08 15:36:12 -07:00
Erick Friis
6eab0a4709 community: release 0.0.38 (#21451) 2024-05-08 15:33:08 -07:00
Erick Friis
be8006fa9c langchain: fix core version dep (#21449) 2024-05-08 15:08:55 -07:00
Erick Friis
e316d30dcb langchain: release 0.1.19 (#21447) 2024-05-08 14:52:03 -07:00
Erick Friis
61b5159ed0 docs: v0.1 build semantics [BEGIN v0.1 BRANCH] (#21444) 2024-05-08 14:29:49 -07:00
3475 changed files with 354234 additions and 208946 deletions

View File

@@ -10,7 +10,7 @@ You can use the dev container configuration in this folder to build and run the
You may use the button above, or follow these steps to open this repo in a Codespace:
1. Click the **Code** drop-down menu at the top of https://github.com/langchain-ai/langchain.
1. Click on the **Codespaces** tab.
1. Click **Create codespace on master**.
1. Click **Create codespace on master** .
For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).

View File

@@ -5,10 +5,10 @@ services:
dockerfile: libs/langchain/dev.Dockerfile
context: ..
volumes:
# Update this to wherever you want VS Code to mount the folder of your project
# Update this to wherever you want VS Code to mount the folder of your project
- ..:/workspaces/langchain:cached
networks:
- langchain-network
- langchain-network
# environment:
# MONGO_ROOT_USERNAME: root
# MONGO_ROOT_PASSWORD: example123
@@ -28,3 +28,5 @@ services:
networks:
langchain-network:
driver: bridge

View File

@@ -4,6 +4,9 @@ contact_links:
- name: 🤔 Question or Problem
about: Ask a question or ask about a problem in GitHub Discussions.
url: https://www.github.com/langchain-ai/langchain/discussions/categories/q-a
- name: Discord
url: https://discord.gg/6adMQxSpJS
about: General community discussions
- name: Feature Request
url: https://www.github.com/langchain-ai/langchain/discussions/categories/ideas
about: Suggest a feature or an idea

View File

@@ -26,13 +26,6 @@ body:
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: input
id: url
attributes:
label: URL
description: URL to documentation
validations:
required: false
- type: checkboxes
id: checks
attributes:
@@ -55,4 +48,4 @@ body:
label: "Idea or request for content:"
description: >
Please describe as clearly as possible what topics you think are missing
from the current documentation.
from the current documentation.

View File

@@ -26,4 +26,4 @@ Additional guidelines:
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.
If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, hwchase17.

View File

@@ -350,7 +350,11 @@ def get_graphql_pr_edges(*, settings: Settings, after: Union[str, None] = None):
print("Querying PRs...")
else:
print(f"Querying PRs with cursor {after}...")
data = get_graphql_response(settings=settings, query=prs_query, after=after)
data = get_graphql_response(
settings=settings,
query=prs_query,
after=after
)
graphql_response = PRsResponse.model_validate(data)
return graphql_response.data.repository.pullRequests.edges
@@ -480,16 +484,10 @@ def get_contributors(settings: Settings):
lines_changed = pr.additions + pr.deletions
score = _logistic(files_changed, 20) + _logistic(lines_changed, 100)
contributor_scores[pr.author.login] += score
three_months_ago = datetime.now(timezone.utc) - timedelta(days=3 * 30)
three_months_ago = (datetime.now(timezone.utc) - timedelta(days=3*30))
if pr.createdAt > three_months_ago:
recent_contributor_scores[pr.author.login] += score
return (
contributors,
contributor_scores,
recent_contributor_scores,
reviewers,
authors,
)
return contributors, contributor_scores, recent_contributor_scores, reviewers, authors
def get_top_users(
@@ -526,13 +524,9 @@ if __name__ == "__main__":
# question_commentors, question_last_month_commentors, question_authors = get_experts(
# settings=settings
# )
(
contributors,
contributor_scores,
recent_contributor_scores,
reviewers,
pr_authors,
) = get_contributors(settings=settings)
contributors, contributor_scores, recent_contributor_scores, reviewers, pr_authors = get_contributors(
settings=settings
)
# authors = {**question_authors, **pr_authors}
authors = {**pr_authors}
maintainers_logins = {
@@ -543,9 +537,7 @@ if __name__ == "__main__":
"nfcampos",
"efriis",
"eyurtsev",
"rlancemartin",
"ccurme",
"vbarda",
"rlancemartin"
}
hidden_logins = {
"dev2049",
@@ -553,7 +545,6 @@ if __name__ == "__main__":
"obi1kenobi",
"langchain-infra",
"jacoblee93",
"isahers1",
"dqbd",
"bracesproul",
"akira",
@@ -565,7 +556,7 @@ if __name__ == "__main__":
maintainers.append(
{
"login": login,
"count": contributors[login], # + question_commentors[login],
"count": contributors[login], #+ question_commentors[login],
"avatarUrl": user.avatarUrl,
"twitterUsername": user.twitterUsername,
"url": user.url,
@@ -621,7 +612,9 @@ if __name__ == "__main__":
new_people_content = yaml.dump(
people, sort_keys=False, width=200, allow_unicode=True
)
if people_old_content == new_people_content:
if (
people_old_content == new_people_content
):
logging.info("The LangChain People data hasn't changed, finishing.")
sys.exit(0)
people_path.write_text(new_people_content, encoding="utf-8")
@@ -634,7 +627,9 @@ if __name__ == "__main__":
logging.info(f"Creating a new branch {branch_name}")
subprocess.run(["git", "checkout", "-B", branch_name], check=True)
logging.info("Adding updated file")
subprocess.run(["git", "add", str(people_path)], check=True)
subprocess.run(
["git", "add", str(people_path)], check=True
)
logging.info("Committing updated file")
message = "👥 Update LangChain people data"
result = subprocess.run(["git", "commit", "-m", message], check=True)
@@ -643,4 +638,4 @@ if __name__ == "__main__":
logging.info("Creating PR")
pr = repo.create_pull(title=message, body=message, base="master", head=branch_name)
logging.info(f"Created PR: {pr.number}")
logging.info("Finished")
logging.info("Finished")

View File

@@ -1,128 +1,16 @@
import glob
import json
import os
import re
import sys
import tomllib
from collections import defaultdict
from typing import Dict, List, Set
from pathlib import Path
import os
from typing import Dict
LANGCHAIN_DIRS = [
"libs/core",
"libs/text-splitters",
"libs/langchain",
"libs/community",
"libs/langchain",
"libs/experimental",
]
def all_package_dirs() -> Set[str]:
return {
"/".join(path.split("/")[:-1]).lstrip("./")
for path in glob.glob("./libs/**/pyproject.toml", recursive=True)
if "libs/cli" not in path and "libs/standard-tests" not in path
}
def dependents_graph() -> dict:
"""
Construct a mapping of package -> dependents, such that we can
run tests on all dependents of a package when a change is made.
"""
dependents = defaultdict(set)
for path in glob.glob("./libs/**/pyproject.toml", recursive=True):
if "template" in path:
continue
# load regular and test deps from pyproject.toml
with open(path, "rb") as f:
pyproject = tomllib.load(f)["tool"]["poetry"]
pkg_dir = "libs" + "/".join(path.split("libs")[1].split("/")[:-1])
for dep in [
*pyproject["dependencies"].keys(),
*pyproject["group"]["test"]["dependencies"].keys(),
]:
if "langchain" in dep:
dependents[dep].add(pkg_dir)
continue
# load extended deps from extended_testing_deps.txt
package_path = Path(path).parent
extended_requirement_path = package_path / "extended_testing_deps.txt"
if extended_requirement_path.exists():
with open(extended_requirement_path, "r") as f:
extended_deps = f.read().splitlines()
for depline in extended_deps:
if depline.startswith("-e "):
# editable dependency
assert depline.startswith(
"-e ../partners/"
), "Extended test deps should only editable install partner packages"
partner = depline.split("partners/")[1]
dep = f"langchain-{partner}"
else:
dep = depline.split("==")[0]
if "langchain" in dep:
dependents[dep].add(pkg_dir)
return dependents
def add_dependents(dirs_to_eval: Set[str], dependents: dict) -> List[str]:
updated = set()
for dir_ in dirs_to_eval:
# handle core manually because it has so many dependents
if "core" in dir_:
updated.add(dir_)
continue
pkg = "langchain-" + dir_.split("/")[-1]
updated.update(dependents[pkg])
updated.add(dir_)
return list(updated)
def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
min_python = "3.8"
max_python = "3.12"
# custom logic for specific directories
if dir_ == "libs/partners/milvus":
# milvus poetry doesn't allow 3.12 because they
# declare deps in funny way
max_python = "3.11"
return [
{"working-directory": dir_, "python-version": min_python},
{"working-directory": dir_, "python-version": max_python},
]
def _get_configs_for_multi_dirs(
job: str, dirs_to_run: List[str], dependents: dict
) -> List[Dict[str, str]]:
if job == "lint":
dirs = add_dependents(
dirs_to_run["lint"] | dirs_to_run["test"] | dirs_to_run["extended-test"],
dependents,
)
elif job in ["test", "compile-integration-tests", "dependencies"]:
dirs = add_dependents(
dirs_to_run["test"] | dirs_to_run["extended-test"], dependents
)
elif job == "extended-tests":
dirs = list(dirs_to_run["extended-test"])
else:
raise ValueError(f"Unknown job: {job}")
return [
config for dir_ in dirs for config in _get_configs_for_single_dir(job, dir_)
]
if __name__ == "__main__":
files = sys.argv[1:]
@@ -133,11 +21,10 @@ if __name__ == "__main__":
}
docs_edited = False
if len(files) >= 300:
if len(files) == 300:
# max diff length is 300 files - there are likely files missing
dirs_to_run["lint"] = all_package_dirs()
dirs_to_run["test"] = all_package_dirs()
dirs_to_run["extended-test"] = set(LANGCHAIN_DIRS)
raise ValueError("Max diff reached. Please manually run CI on changed libs.")
for file in files:
if any(
file.startswith(dir_)
@@ -194,25 +81,14 @@ if __name__ == "__main__":
docs_edited = True
dirs_to_run["lint"].add(".")
dependents = dependents_graph()
# we now have dirs_by_job
# todo: clean this up
map_job_to_configs = {
job: _get_configs_for_multi_dirs(job, dirs_to_run, dependents)
for job in [
"lint",
"test",
"extended-tests",
"compile-integration-tests",
"dependencies",
]
outputs = {
"dirs-to-lint": list(
dirs_to_run["lint"] | dirs_to_run["test"] | dirs_to_run["extended-test"]
),
"dirs-to-test": list(dirs_to_run["test"] | dirs_to_run["extended-test"]),
"dirs-to-extended-test": list(dirs_to_run["extended-test"]),
"docs-edited": "true" if docs_edited else "",
}
map_job_to_configs["test-doc-imports"] = (
[{"python-version": "3.12"}] if docs_edited else []
)
for key, value in map_job_to_configs.items():
for key, value in outputs.items():
json_output = json.dumps(value)
print(f"{key}={json_output}")
print(f"{key}={json_output}") # noqa: T201

View File

@@ -1,35 +0,0 @@
import sys
import tomllib
if __name__ == "__main__":
# Get the TOML file path from the command line argument
toml_file = sys.argv[1]
# read toml file
with open(toml_file, "rb") as file:
toml_data = tomllib.load(file)
# see if we're releasing an rc
version = toml_data["tool"]["poetry"]["version"]
releasing_rc = "rc" in version
# if not, iterate through dependencies and make sure none allow prereleases
if not releasing_rc:
dependencies = toml_data["tool"]["poetry"]["dependencies"]
for lib in dependencies:
dep_version = dependencies[lib]
dep_version_string = (
dep_version["version"] if isinstance(dep_version, dict) else dep_version
)
if "rc" in dep_version_string:
raise ValueError(
f"Dependency {lib} has a prerelease version. Please remove this."
)
if isinstance(dep_version, dict) and dep_version.get(
"allow-prereleases", False
):
raise ValueError(
f"Dependency {lib} has allow-prereleases set to true. Please remove this."
)
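
As a rough illustration of what the prerelease check above rejects, here is a hypothetical pyproject excerpt, shaped the way `tomllib.load()` would return it (names and versions are made up):

```python
# Hypothetical input that would trigger the ValueError above:
toml_data = {
    "tool": {"poetry": {
        "version": "0.1.19",  # not an rc release, so prerelease pins are not allowed
        "dependencies": {
            "python": "^3.8",
            "langchain-core": {"version": "^0.1.52rc1"},  # "rc" in the pin -> rejected
        },
    }},
}
```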

View File

@@ -1,11 +1,6 @@
import sys
if sys.version_info >= (3, 11):
import tomllib
else:
# for python 3.10 and below, which doesnt have stdlib tomllib
import tomli as tomllib
import tomllib
from packaging.version import parse as parse_version
import re
@@ -14,11 +9,8 @@ MIN_VERSION_LIBS = [
"langchain-community",
"langchain",
"langchain-text-splitters",
"SQLAlchemy",
]
SKIP_IF_PULL_REQUEST = ["langchain-core"]
def get_min_version(version: str) -> str:
# base regex for x.x.x with cases for rc/post/etc
@@ -45,7 +37,7 @@ def get_min_version(version: str) -> str:
raise ValueError(f"Unrecognized version format: {version}")
def get_min_version_from_toml(toml_path: str, versions_for: str):
def get_min_version_from_toml(toml_path: str):
# Parse the TOML file
with open(toml_path, "rb") as file:
toml_data = tomllib.load(file)
@@ -58,10 +50,6 @@ def get_min_version_from_toml(toml_path: str, versions_for: str):
# Iterate over the libs in MIN_VERSION_LIBS
for lib in MIN_VERSION_LIBS:
if versions_for == "pull_request" and lib in SKIP_IF_PULL_REQUEST:
# some libs only get checked on release because of simultaneous
# changes
continue
# Check if the lib is present in the dependencies
if lib in dependencies:
# Get the version string
@@ -82,10 +70,10 @@ def get_min_version_from_toml(toml_path: str, versions_for: str):
if __name__ == "__main__":
# Get the TOML file path from the command line argument
toml_file = sys.argv[1]
versions_for = sys.argv[2]
assert versions_for in ["release", "pull_request"]
# Call the function to get the minimum versions
min_versions = get_min_version_from_toml(toml_file, versions_for)
min_versions = get_min_version_from_toml(toml_file)
print(" ".join([f"{lib}=={version}" for lib, version in min_versions.items()]))
print(
" ".join([f"{lib}=={version}" for lib, version in min_versions.items()])
) # noqa: T201
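
`get_min_version` itself is only context in the hunk above (its body is elided). As a hedged sketch of the kind of normalization it performs, the snippet below turns a Poetry-style constraint such as `^0.1.52` or `>=1.4,<3` into its lower bound; it is not the repo's exact implementation.

```python
import re
from packaging.version import parse as parse_version

def min_version_sketch(constraint: str) -> str:
    # Hypothetical: strip a leading ^ / ~ / >= and drop any upper bound after a comma.
    lower = re.sub(r"^[\^~>=\s]+", "", constraint).split(",")[0].strip()
    return str(parse_version(lower))

assert min_version_sketch("^0.1.52") == "0.1.52"
assert min_version_sketch(">=1.4,<3") == "1.4"
```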

View File

@@ -1,7 +0,0 @@
libs/community/langchain_community/llms/yuan2.py
"NotIn": "not in",
- `/checkin`: Check-in
docs/docs/integrations/providers/trulens.mdx
self.assertIn(
from trulens_eval import Tru
tru = Tru()

View File

@@ -7,10 +7,6 @@ on:
required: true
type: string
description: "From which folder this pipeline executes"
python-version:
required: true
type: string
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
@@ -21,14 +17,21 @@ jobs:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
name: "poetry run pytest -m compile tests/integration_tests #${{ inputs.python-version }}"
strategy:
matrix:
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
name: "poetry run pytest -m compile tests/integration_tests #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
- name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ inputs.python-version }}
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: compile-integration

View File

@@ -11,10 +11,6 @@ on:
required: false
type: string
description: "Relative path to the langchain library folder"
python-version:
required: true
type: string
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
@@ -25,14 +21,21 @@ jobs:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
name: dependency checks ${{ inputs.python-version }}
strategy:
matrix:
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
name: dependency checks ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
- name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ inputs.python-version }}
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: pydantic-cross-compat

View File

@@ -6,28 +6,30 @@ on:
working-directory:
required: true
type: string
python-version:
required: true
type: string
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
jobs:
build:
environment: Scheduled testing
defaults:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
name: Python ${{ inputs.python-version }}
strategy:
matrix:
python-version:
- "3.8"
- "3.11"
name: Python ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
- name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ inputs.python-version }}
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: core
@@ -51,15 +53,8 @@ jobs:
shell: bash
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

View File

@@ -11,10 +11,6 @@ on:
required: false
type: string
description: "Relative path to the langchain library folder"
python-version:
required: true
type: string
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
@@ -25,15 +21,27 @@ env:
jobs:
build:
name: "make lint #${{ inputs.python-version }}"
name: "make lint #${{ matrix.python-version }}"
runs-on: ubuntu-latest
strategy:
matrix:
# Only lint on the min and max supported Python versions.
# It's extremely unlikely that there's a lint issue on any version in between
# that doesn't show up on the min or max versions.
#
# GitHub rate-limits how many jobs can be running at any one time.
# Starting new jobs is also relatively slow,
# so linting on fewer versions makes CI faster.
python-version:
- "3.8"
- "3.11"
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
- name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ inputs.python-version }}
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: lint-with-extras
@@ -78,7 +86,7 @@ jobs:
with:
path: |
${{ env.WORKDIR }}/.mypy_cache
key: mypy-lint-${{ runner.os }}-${{ runner.arch }}-py${{ inputs.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}
key: mypy-lint-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}
- name: Analysing the code with our lint
@@ -112,7 +120,7 @@ jobs:
with:
path: |
${{ env.WORKDIR }}/.mypy_cache_test
key: mypy-test-${{ runner.os }}-${{ runner.arch }}-py${{ inputs.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}
key: mypy-test-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}
- name: Analysing the code with our lint
working-directory: ${{ inputs.working-directory }}

View File

@@ -72,66 +72,10 @@ jobs:
run: |
echo pkg-name="$(poetry version | cut -d ' ' -f 1)" >> $GITHUB_OUTPUT
echo version="$(poetry version --short)" >> $GITHUB_OUTPUT
release-notes:
needs:
- build
runs-on: ubuntu-latest
outputs:
release-body: ${{ steps.generate-release-body.outputs.release-body }}
steps:
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain
path: langchain
sparse-checkout: | # this only grabs files for relevant dir
${{ inputs.working-directory }}
ref: master # this scopes to just master branch
fetch-depth: 0 # this fetches entire commit history
- name: Check Tags
id: check-tags
shell: bash
working-directory: langchain/${{ inputs.working-directory }}
env:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
run: |
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
echo $REGEX
PREV_TAG=$(git tag --sort=-creatordate | grep -P $REGEX || true | head -1)
TAG="${PKG_NAME}==${VERSION}"
if [ "$TAG" == "$PREV_TAG" ]; then
echo "No new version to release"
exit 1
fi
echo tag="$TAG" >> $GITHUB_OUTPUT
echo prev-tag="$PREV_TAG" >> $GITHUB_OUTPUT
- name: Generate release body
id: generate-release-body
working-directory: langchain
env:
WORKING_DIR: ${{ inputs.working-directory }}
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
TAG: ${{ steps.check-tags.outputs.tag }}
PREV_TAG: ${{ steps.check-tags.outputs.prev-tag }}
run: |
PREAMBLE="Changes since $PREV_TAG"
# if PREV_TAG is empty, then we are releasing the first version
if [ -z "$PREV_TAG" ]; then
PREAMBLE="Initial release"
PREV_TAG=$(git rev-list --max-parents=0 HEAD)
fi
{
echo 'release-body<<EOF'
echo $PREAMBLE
echo
git log --format="%s" "$PREV_TAG"..HEAD -- $WORKING_DIR
echo EOF
} >> "$GITHUB_OUTPUT"
test-pypi-publish:
needs:
- build
- release-notes
uses:
./.github/workflows/_test_release.yml
permissions: write-all
@@ -143,7 +87,6 @@ jobs:
pre-release-checks:
needs:
- build
- release-notes
- test-pypi-publish
runs-on: ubuntu-latest
steps:
@@ -189,7 +132,7 @@ jobs:
--extra-index-url https://test.pypi.org/simple/ \
"$PKG_NAME==$VERSION" || \
( \
sleep 15 && \
sleep 5 && \
poetry run pip install \
--extra-index-url https://test.pypi.org/simple/ \
"$PKG_NAME==$VERSION" \
@@ -202,7 +145,7 @@ jobs:
poetry run python -c "import $IMPORT_NAME; print(dir($IMPORT_NAME))"
- name: Import test dependencies
run: poetry install --with test
run: poetry install --with test,test_integration
working-directory: ${{ inputs.working-directory }}
# Overwrite the local version of the package with the test PyPI version.
@@ -221,17 +164,12 @@ jobs:
run: make tests
working-directory: ${{ inputs.working-directory }}
- name: Check for prerelease versions
working-directory: ${{ inputs.working-directory }}
run: |
poetry run python $GITHUB_WORKSPACE/.github/scripts/check_prerelease_dependencies.py pyproject.toml
- name: Get minimum versions
working-directory: ${{ inputs.working-directory }}
id: min-version
run: |
poetry run pip install packaging
min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml release)"
min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml)"
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
@@ -240,7 +178,7 @@ jobs:
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
poetry run pip install --force-reinstall $MIN_VERSIONS --editable .
poetry run pip install --force-reinstall $MIN_VERSIONS
make tests
working-directory: ${{ inputs.working-directory }}
@@ -250,10 +188,6 @@ jobs:
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Import integration test dependencies
run: poetry install --with test,test_integration
working-directory: ${{ inputs.working-directory }}
- name: Run integration tests
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
env:
@@ -289,14 +223,12 @@ jobs:
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}
publish:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
runs-on: ubuntu-latest
@@ -338,7 +270,6 @@ jobs:
mark-release:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
- publish
@@ -375,6 +306,5 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
generateReleaseNotes: false
tag: ${{needs.build.outputs.pkg-name}}==${{ needs.build.outputs.version }}
body: ${{ needs.release-notes.outputs.release-body }}
body: "# Release ${{needs.build.outputs.pkg-name}}==${{ needs.build.outputs.version }}\n\nPackage-specific release note generation coming soon."
commit: ${{ github.sha }}
makeLatest: ${{ needs.build.outputs.pkg-name == 'langchain-core'}}
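
The `release-notes` job removed in the hunk above locates the previous `pkg==X.Y.Z` tag via `git tag --sort=-creatordate | grep -P ...`. A hedged Python equivalent of that lookup, for illustration only (not part of the workflow):

```python
import re
import subprocess
from typing import Optional

def previous_release_tag(pkg_name: str) -> Optional[str]:
    # Newest-first tag list, same ordering as `git tag --sort=-creatordate`.
    tags = subprocess.run(
        ["git", "tag", "--sort=-creatordate"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    pattern = re.compile(rf"^{re.escape(pkg_name)}==\d+\.\d+\.\d+$")
    return next((tag for tag in tags if pattern.match(tag)), None)

# e.g. previous_release_tag("langchain-core") -> "langchain-core==0.1.52" or None
```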

View File

@@ -11,10 +11,6 @@ on:
required: false
type: string
description: "Relative path to the langchain library folder"
python-version:
required: true
type: string
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
@@ -25,14 +21,21 @@ jobs:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
name: "make test #${{ inputs.python-version }}"
strategy:
matrix:
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
name: "make test #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
- name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ inputs.python-version }}
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: core
@@ -65,22 +68,3 @@ jobs:
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'
- name: Get minimum versions
working-directory: ${{ inputs.working-directory }}
id: min-version
run: |
poetry run pip install packaging tomli
min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml pull_request)"
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
# Temporarily disabled until we can get the minimum versions working
# - name: Run unit tests with minimum dependency versions
# if: ${{ steps.min-version.outputs.min-versions != '' }}
# env:
# MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
# run: |
# poetry run pip install --force-reinstall $MIN_VERSIONS --editable .
# make tests
# working-directory: ${{ inputs.working-directory }}

View File

@@ -2,11 +2,6 @@ name: test_doc_imports
on:
workflow_call:
inputs:
python-version:
required: true
type: string
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
@@ -14,14 +9,18 @@ env:
jobs:
build:
runs-on: ubuntu-latest
name: "check doc imports #${{ inputs.python-version }}"
strategy:
matrix:
python-version:
- "3.11"
name: "check doc imports #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
- name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ inputs.python-version }}
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
cache-key: core

View File

@@ -98,3 +98,4 @@ jobs:
# This is *only for CI use* and is *extremely dangerous* otherwise!
# https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
skip-existing: true
attestations: false

View File

@@ -7,7 +7,6 @@ on:
jobs:
check-links:
if: github.repository_owner == 'langchain-ai'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4

View File

@@ -26,112 +26,104 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.11'
python-version: '3.10'
- id: files
uses: Ana06/get-changed-files@v2.2.0
- id: set-matrix
run: |
python .github/scripts/check_diff.py ${{ steps.files.outputs.all }} >> $GITHUB_OUTPUT
outputs:
lint: ${{ steps.set-matrix.outputs.lint }}
test: ${{ steps.set-matrix.outputs.test }}
extended-tests: ${{ steps.set-matrix.outputs.extended-tests }}
compile-integration-tests: ${{ steps.set-matrix.outputs.compile-integration-tests }}
dependencies: ${{ steps.set-matrix.outputs.dependencies }}
test-doc-imports: ${{ steps.set-matrix.outputs.test-doc-imports }}
dirs-to-lint: ${{ steps.set-matrix.outputs.dirs-to-lint }}
dirs-to-test: ${{ steps.set-matrix.outputs.dirs-to-test }}
dirs-to-extended-test: ${{ steps.set-matrix.outputs.dirs-to-extended-test }}
docs-edited: ${{ steps.set-matrix.outputs.docs-edited }}
lint:
name: cd ${{ matrix.job-configs.working-directory }}
name: cd ${{ matrix.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.lint != '[]' }}
if: ${{ needs.build.outputs.dirs-to-lint != '[]' }}
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.lint) }}
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-lint) }}
uses: ./.github/workflows/_lint.yml
with:
working-directory: ${{ matrix.job-configs.working-directory }}
python-version: ${{ matrix.job-configs.python-version }}
working-directory: ${{ matrix.working-directory }}
secrets: inherit
test:
name: cd ${{ matrix.job-configs.working-directory }}
name: cd ${{ matrix.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.test != '[]' }}
if: ${{ needs.build.outputs.dirs-to-test != '[]' }}
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.test) }}
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-test) }}
uses: ./.github/workflows/_test.yml
with:
working-directory: ${{ matrix.job-configs.working-directory }}
python-version: ${{ matrix.job-configs.python-version }}
working-directory: ${{ matrix.working-directory }}
secrets: inherit
test-doc-imports:
needs: [ build ]
if: ${{ needs.build.outputs.test-doc-imports != '[]' }}
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.test-doc-imports) }}
if: ${{ needs.build.outputs.dirs-to-test != '[]' || needs.build.outputs.docs-edited }}
uses: ./.github/workflows/_test_doc_imports.yml
secrets: inherit
with:
python-version: ${{ matrix.job-configs.python-version }}
compile-integration-tests:
name: cd ${{ matrix.job-configs.working-directory }}
name: cd ${{ matrix.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.compile-integration-tests != '[]' }}
if: ${{ needs.build.outputs.dirs-to-test != '[]' }}
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.compile-integration-tests) }}
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-test) }}
uses: ./.github/workflows/_compile_integration_test.yml
with:
working-directory: ${{ matrix.job-configs.working-directory }}
python-version: ${{ matrix.job-configs.python-version }}
working-directory: ${{ matrix.working-directory }}
secrets: inherit
dependencies:
name: cd ${{ matrix.job-configs.working-directory }}
name: cd ${{ matrix.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.dependencies != '[]' }}
if: ${{ needs.build.outputs.dirs-to-test != '[]' }}
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.dependencies) }}
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-test) }}
uses: ./.github/workflows/_dependencies.yml
with:
working-directory: ${{ matrix.job-configs.working-directory }}
python-version: ${{ matrix.job-configs.python-version }}
working-directory: ${{ matrix.working-directory }}
secrets: inherit
extended-tests:
name: "cd ${{ matrix.job-configs.working-directory }} / make extended_tests #${{ matrix.job-configs.python-version }}"
name: "cd ${{ matrix.working-directory }} / make extended_tests #${{ matrix.python-version }}"
needs: [ build ]
if: ${{ needs.build.outputs.extended-tests != '[]' }}
if: ${{ needs.build.outputs.dirs-to-extended-test != '[]' }}
strategy:
matrix:
# note different variable for extended test dirs
job-configs: ${{ fromJson(needs.build.outputs.extended-tests) }}
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-extended-test) }}
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
runs-on: ubuntu-latest
defaults:
run:
working-directory: ${{ matrix.job-configs.working-directory }}
working-directory: ${{ matrix.working-directory }}
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.job-configs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
- name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ matrix.job-configs.python-version }}
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ matrix.job-configs.working-directory }}
working-directory: ${{ matrix.working-directory }}
cache-key: extended
- name: Install dependencies
shell: bash
run: |
echo "Running extended tests, installing dependencies with poetry..."
poetry install --with test
poetry run pip install uv
poetry run uv pip install -r extended_testing_deps.txt
poetry install -E extended_testing --with test
- name: Run extended tests
run: make extended_tests

View File

@@ -1,36 +0,0 @@
---
name: Integration docs lint
on:
push:
branches: [master]
pull_request:
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.10'
- id: files
uses: Ana06/get-changed-files@v2.2.0
with:
filter: |
*.ipynb
*.md
*.mdx
- name: Check new docs
run: |
python docs/scripts/check_templates.py ${{ steps.files.outputs.added }}

View File

@@ -29,9 +29,9 @@ jobs:
python .github/workflows/extract_ignored_words_list.py
id: extract_ignore_words
# - name: Codespell
# uses: codespell-project/actions-codespell@v2
# with:
# skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
# ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
# exclude_file: ./.github/workflows/codespell-exclude
- name: Codespell
uses: codespell-project/actions-codespell@v2
with:
skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
exclude_file: libs/community/langchain_community/llms/yuan2.py

View File

@@ -7,4 +7,4 @@ ignore_words_list = (
pyproject_toml.get("tool", {}).get("codespell", {}).get("ignore-words-list")
)
print(f"::set-output name=ignore_words_list::{ignore_words_list}")
print(f"::set-output name=ignore_words_list::{ignore_words_list}") # noqa: T201

View File

@@ -16,7 +16,6 @@ jobs:
langchain-people:
if: github.repository_owner == 'langchain-ai'
runs-on: ubuntu-latest
permissions: write-all
steps:
- name: Dump GitHub context
env:

View File

@@ -10,11 +10,8 @@ env:
jobs:
build:
if: github.repository_owner == 'langchain-ai'
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version:
- "3.8"
@@ -27,38 +24,16 @@ jobs:
- "libs/partners/groq"
- "libs/partners/mistralai"
- "libs/partners/together"
- "libs/partners/google-vertexai"
- "libs/partners/google-genai"
- "libs/partners/aws"
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
steps:
- uses: actions/checkout@v4
with:
path: langchain
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-google
path: langchain-google
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-aws
path: langchain-aws
- name: Move libs
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai
mv langchain-google/libs/genai langchain/libs/partners/google-genai
mv langchain-google/libs/vertexai langchain/libs/partners/google-vertexai
mv langchain-aws/libs/aws langchain/libs/partners/aws
- name: Set up Python ${{ matrix.python-version }}
uses: "./langchain/.github/actions/poetry_setup"
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: langchain/${{ matrix.working-directory }}
working-directory: ${{ matrix.working-directory }}
cache-key: scheduled
- name: 'Authenticate to Google Cloud'
@@ -67,20 +42,16 @@ jobs:
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: Install dependencies
working-directory: ${{ matrix.working-directory }}
shell: bash
run: |
echo "Running scheduled tests, installing dependencies with poetry..."
cd langchain/${{ matrix.working-directory }}
poetry install --with=test_integration,test
- name: Run integration tests
working-directory: ${{ matrix.working-directory }}
shell: bash
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
@@ -95,24 +66,12 @@ jobs:
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
run: |
cd langchain/${{ matrix.working-directory }}
make integration_tests
- name: Remove external libraries
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai \
langchain/libs/partners/aws
make integration_test
- name: Ensure the tests did not create any additional files
working-directory: langchain
working-directory: ${{ matrix.working-directory }}
shell: bash
run: |
set -eu

.gitignore (vendored)

View File

@@ -133,7 +133,6 @@ env.bak/
# mypy
.mypy_cache/
.mypy_cache_test/
.dmypy.json
dmypy.json
@@ -179,4 +178,3 @@ _dist
docs/docs/templates
prof
virtualenv/

View File

@@ -3,7 +3,7 @@
## help: Show this help info.
help: Makefile
@printf "\n\033[1mUsage: make <TARGETS> ...\033[0m\n\n\033[1mTargets:\033[0m\n\n"
@sed -n 's/^## //p' $< | awk -F':' '{printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' | sort | sed -e 's/^/ /'
@sed -n 's/^##//p' $< | awk -F':' '{printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' | sort | sed -e 's/^/ /'
## all: Default target, shows help.
all: help
@@ -32,19 +32,10 @@ api_docs_build:
poetry run python docs/api_reference/create_api_rst.py
cd docs/api_reference && poetry run make html
API_PKG ?= text-splitters
api_docs_quick_preview:
poetry run pip install "pydantic<2"
poetry run python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && poetry run make html
open docs/api_reference/_build/html/$(shell echo $(API_PKG) | sed 's/-/_/g')_api_reference.html
## api_docs_clean: Clean the API Reference documentation build artifacts.
api_docs_clean:
find ./docs/api_reference -name '*_api_reference.rst' -delete
git clean -fdX ./docs/api_reference
cd docs/api_reference && poetry run make clean
## api_docs_linkcheck: Run linkchecker on the API Reference documentation.
api_docs_linkcheck:

View File

@@ -2,16 +2,17 @@
⚡ Build context-aware reasoning applications ⚡
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/releases)
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/releases)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-core?style=flat-square)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-core?style=flat-square)](https://pypistats.org/packages/langchain-core)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=flat-square)](https://star-history.com/#langchain-ai/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain?style=flat-square)](https://libraries.io/github/langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/issues)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
[![Downloads](https://static.pepy.tech/badge/langchain/month)](https://pepy.tech/project/langchain)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
[![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=social)](https://star-history.com/#langchain-ai/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain)](https://libraries.io/github/langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/issues)
Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
@@ -37,44 +38,43 @@ conda install langchain -c conda-forge
For these applications, LangChain simplifies the entire application lifecycle:
- **Open-source libraries**: Build your applications using LangChain's open-source [building blocks](https://python.langchain.com/v0.2/docs/concepts#langchain-expression-language-lcel), [components](https://python.langchain.com/v0.2/docs/concepts), and [third-party integrations](https://python.langchain.com/v0.2/docs/integrations/platforms/).
Use [LangGraph](/docs/concepts/#langgraph) to build stateful agents with first-class streaming and human-in-the-loop support.
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://docs.smith.langchain.com/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/).
- **Open-source libraries**: Build your applications using LangChain's [modular building blocks](https://python.langchain.com/docs/expression_language/) and [components](https://python.langchain.com/docs/modules/). Integrate with hundreds of [third-party providers](https://python.langchain.com/docs/integrations/platforms/).
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://python.langchain.com/docs/langsmith/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn any chain into a REST API with [LangServe](https://python.langchain.com/docs/langserve).
### Open-source libraries
- **`langchain-core`**: Base abstractions and LangChain Expression Language.
- **`langchain-community`**: Third party integrations.
- Some integrations have been further split into **partner packages** that only rely on **`langchain-core`**. Examples include **`langchain_openai`** and **`langchain_anthropic`**.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **[`LangGraph`](https://langchain-ai.github.io/langgraph/)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it.
- **[`LangGraph`](https://python.langchain.com/docs/langgraph)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
### Productionization:
- **[LangSmith](https://docs.smith.langchain.com/)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
- **[LangSmith](https://python.langchain.com/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
### Deployment:
- **[LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/)**: Turn your LangGraph applications into production-ready APIs and Assistants.
- **[LangServe](https://python.langchain.com/docs/langserve)**: A library for deploying LangChain chains as REST APIs.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_062024.svg "LangChain Architecture Overview")
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack.svg "LangChain Architecture Overview")
## 🧱 What can you build with LangChain?
**❓ Question answering with RAG**
- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/rag/)
- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)
- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)
**🧱 Extracting structured output**
- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/extraction/)
- [Documentation](https://python.langchain.com/docs/use_cases/extraction/)
- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain-extract/)
**🤖 Chatbots**
- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/chatbot/)
- [Documentation](https://python.langchain.com/docs/use_cases/chatbots)
- End-to-end Example: [Web LangChain (web researcher chatbot)](https://weblangchain.vercel.app) and [repo](https://github.com/langchain-ai/weblangchain)
And much more! Head to the [Tutorials](https://python.langchain.com/v0.2/docs/tutorials/) section of the docs for more.
And much more! Head to the [Use cases](https://python.langchain.com/docs/use_cases/) section of the docs for more.
## 🚀 How does LangChain help?
The main value props of the LangChain libraries are:
@@ -87,49 +87,49 @@ Off-the-shelf chains make it easy to get started. Components make it easy to cus
LCEL is the foundation of many of LangChain's components, and is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.
- **[Overview](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel)**: LCEL and its benefits
- **[Interface](https://python.langchain.com/v0.2/docs/concepts/#runnable-interface)**: The standard Runnable interface for LCEL objects
- **[Primitives](https://python.langchain.com/v0.2/docs/how_to/#langchain-expression-language-lcel)**: More on the primitives LCEL includes
- **[Cheatsheet](https://python.langchain.com/v0.2/docs/how_to/lcel_cheatsheet/)**: Quick overview of the most common usage patterns
- **[Overview](https://python.langchain.com/docs/expression_language/)**: LCEL and its benefits
- **[Interface](https://python.langchain.com/docs/expression_language/interface)**: The standard interface for LCEL objects
- **[Primitives](https://python.langchain.com/docs/expression_language/primitives)**: More on the primitives LCEL includes
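
For readers skimming this README hunk: LCEL composition is the pipe syntax referenced in both the old and new link sets. A minimal, hedged sketch (assumes `langchain-openai` is installed and `OPENAI_API_KEY` is set; the model name is illustrative):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# prompt | model | parser: each step is a Runnable, composed declaratively.
prompt = ChatPromptTemplate.from_template("Tell me a one-line joke about {topic}")
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()
print(chain.invoke({"topic": "bears"}))
```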
## Components
Components fall into the following **modules**:
**📃 Model I/O**
**📃 Model I/O:**
This includes [prompt management](https://python.langchain.com/v0.2/docs/concepts/#prompt-templates), [prompt optimization](https://python.langchain.com/v0.2/docs/concepts/#example-selectors), a generic interface for [chat models](https://python.langchain.com/v0.2/docs/concepts/#chat-models) and [LLMs](https://python.langchain.com/v0.2/docs/concepts/#llms), and common utilities for working with [model outputs](https://python.langchain.com/v0.2/docs/concepts/#output-parsers).
This includes [prompt management](https://python.langchain.com/docs/modules/model_io/prompts/), [prompt optimization](https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/), a generic interface for [chat models](https://python.langchain.com/docs/modules/model_io/chat/) and [LLMs](https://python.langchain.com/docs/modules/model_io/llms/), and common utilities for working with [model outputs](https://python.langchain.com/docs/modules/model_io/output_parsers/).
**📚 Retrieval**
**📚 Retrieval:**
Retrieval Augmented Generation involves [loading data](https://python.langchain.com/v0.2/docs/concepts/#document-loaders) from a variety of sources, [preparing it](https://python.langchain.com/v0.2/docs/concepts/#text-splitters), then [searching over (a.k.a. retrieving from)](https://python.langchain.com/v0.2/docs/concepts/#retrievers) it for use in the generation step.
Retrieval Augmented Generation involves [loading data](https://python.langchain.com/docs/modules/data_connection/document_loaders/) from a variety of sources, [preparing it](https://python.langchain.com/docs/modules/data_connection/document_loaders/), [then retrieving it](https://python.langchain.com/docs/modules/data_connection/retrievers/) for use in the generation step.
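A rough sketch of that load, prepare, and retrieve flow (it assumes a local `state_of_the_union.txt` file, the Chroma integration package, and an OpenAI key for embeddings):

```python
from langchain_chroma import Chroma
from langchain_community.document_loaders import TextLoader
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load: read raw documents from disk
docs = TextLoader("state_of_the_union.txt").load()

# Prepare: split into overlapping chunks sized for embedding
splits = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# Retrieve: index the chunks and search over them
vectorstore = Chroma.from_documents(splits, embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
for doc in retriever.invoke("What was said about the economy?"):
    print(doc.page_content[:100])
```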
**🤖 Agents**
**🤖 Agents:**
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. LangChain provides a [standard interface for agents](https://python.langchain.com/v0.2/docs/concepts/#agents), along with [LangGraph](https://github.com/langchain-ai/langgraph) for building custom agents.
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. LangChain provides a [standard interface for agents](https://python.langchain.com/docs/modules/agents/), a [selection of agents](https://python.langchain.com/docs/modules/agents/agent_types/) to choose from, and examples of end-to-end agents.
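One possible (illustrative, not canonical) agent setup using the standard interface; it assumes `OPENAI_API_KEY` and `TAVILY_API_KEY` are set and that `langchainhub` and `tavily-python` are installed. The hub prompt name is an example of a published community prompt:

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI

# The agent decides which tool (here, web search) to call, observes the result, and repeats
tools = [TavilySearchResults(max_results=2)]
prompt = hub.pull("hwchase17/openai-tools-agent")  # example prompt from LangChain Hub
agent = create_openai_tools_agent(ChatOpenAI(model="gpt-3.5-turbo", temperature=0), tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

executor.invoke({"input": "What is LangGraph and when would I use it?"})
```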
## 📖 Documentation
Please see [here](https://python.langchain.com) for full documentation, which includes:
- [Introduction](https://python.langchain.com/v0.2/docs/introduction/): Overview of the framework and the structure of the docs.
- [Tutorials](https://python.langchain.com/docs/use_cases/): If you're looking to build something specific or are more of a hands-on learner, check out our tutorials. This is the best place to get started.
- [How-to guides](https://python.langchain.com/v0.2/docs/how_to/): Answers to “How do I….?” type questions. These guides are goal-oriented and concrete; they're meant to help you complete a specific task.
- [Conceptual guide](https://python.langchain.com/v0.2/docs/concepts/): Conceptual explanations of the key parts of the framework.
- [API Reference](https://api.python.langchain.com): Thorough documentation of every class and method.
- [Getting started](https://python.langchain.com/docs/get_started/introduction): installation, setting up the environment, simple examples
- [Use case](https://python.langchain.com/docs/use_cases/) walkthroughs and best practice [guides](https://python.langchain.com/docs/guides/)
- Overviews of the [interfaces](https://python.langchain.com/docs/expression_language/), [components](https://python.langchain.com/docs/modules/), and [integrations](https://python.langchain.com/docs/integrations/providers)
You can also check out the full [API Reference docs](https://api.python.langchain.com).
## 🌐 Ecosystem
- [🦜🛠️ LangSmith](https://docs.smith.langchain.com/): Trace and evaluate your language model applications and intelligent agents to help you move from prototype to production.
- [🦜🕸️ LangGraph](https://langchain-ai.github.io/langgraph/): Create stateful, multi-actor applications with LLMs. Integrates smoothly with LangChain, but can be used without it.
- [🦜🏓 LangServe](https://python.langchain.com/docs/langserve): Deploy LangChain runnables and chains as REST APIs.
- [🦜🛠️ LangSmith](https://python.langchain.com/docs/langsmith/): Tracing and evaluating your language model applications and intelligent agents to help you move from prototype to production.
- [🦜🕸️ LangGraph](https://python.langchain.com/docs/langgraph): Creating stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.
- [🦜🏓 LangServe](https://python.langchain.com/docs/langserve): Deploying LangChain runnables and chains as REST APIs.
- [LangChain Templates](https://python.langchain.com/docs/templates/): Example applications hosted with LangServe.
## 💁 Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see [here](https://python.langchain.com/v0.2/docs/contributing/).
For detailed information on how to contribute, see [here](https://python.langchain.com/docs/contributing/).
## 🌟 Contributors

View File

@@ -64,7 +64,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install -U langchain openai langchain-chroma langchain-experimental # (newest versions required for multi-modal)"
"! pip install -U langchain openai chromadb langchain-experimental # (newest versions required for multi-modal)"
]
},
{
@@ -355,7 +355,7 @@
"\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",

View File

@@ -37,7 +37,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -U --quiet langchain langchain-chroma langchain-community openai langchain-experimental\n",
"%pip install -U --quiet langchain langchain_community openai chromadb langchain-experimental\n",
"%pip install --quiet \"unstructured[all-docs]\" pypdf pillow pydantic lxml pillow matplotlib chromadb tiktoken"
]
},
@@ -344,8 +344,8 @@
"\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.embeddings import VertexAIEmbeddings\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"\n",
"\n",

View File

@@ -7,7 +7,7 @@
"metadata": {},
"outputs": [],
"source": [
"pip install -U langchain umap-learn scikit-learn langchain_community tiktoken langchain-openai langchainhub langchain-chroma langchain-anthropic"
"pip install -U langchain umap-learn scikit-learn langchain_community tiktoken langchain-openai langchainhub chromadb langchain-anthropic"
]
},
{
@@ -645,7 +645,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_chroma import Chroma\n",
"from langchain_community.vectorstores import Chroma\n",
"\n",
"# Initialize all_texts with leaf_texts\n",
"all_texts = leaf_texts.copy()\n",

View File

@@ -57,6 +57,4 @@ Notebook | Description
[two_agent_debate_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_agent_debate_tools.ipynb) | Simulate multi-agent dialogues where the agents can utilize various tools.
[two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player dungeons & dragons game, where a dialogue simulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
[wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple wikibase agent that utilizes sparql generation, with testing done on http://wikidata.org.
[oracleai_demo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/oracleai_demo.ipynb) | This guide outlines how to utilize Oracle AI Vector Search alongside Langchain for an end-to-end RAG pipeline, providing step-by-step examples. The process includes loading documents from various sources using OracleDocLoader, summarizing them either within or outside the database with OracleSummary, and generating embeddings similarly through OracleEmbeddings. It also covers chunking documents according to specific requirements using Advanced Oracle Capabilities from OracleTextSplitter, and finally, storing and indexing these documents in a Vector Store for querying with OracleVS.
[rag-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag-locally-on-intel-cpu.ipynb) | Perform Retrieval-Augmented-Generation (RAG) on locally downloaded open-source models using langchain and open source tools and execute it on Intel Xeon CPU. We showed an example of how to apply RAG on Llama 2 model and enable it to answer the queries related to Intel Q1 2024 earnings release.
[visual_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/visual_RAG_vdms.ipynb) | Performs Visual Retrieval-Augmented-Generation (RAG) using videos and scene descriptions generated by open source models.
[oracleai_demo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/oracleai_demo.ipynb) | This guide outlines how to utilize Oracle AI Vector Search alongside Langchain for an end-to-end RAG pipeline, providing step-by-step examples. The process includes loading documents from various sources using OracleDocLoader, summarizing them either within or outside the database with OracleSummary, and generating embeddings similarly through OracleEmbeddings. It also covers chunking documents according to specific requirements using Advanced Oracle Capabilities from OracleTextSplitter, and finally, storing and indexing these documents in a Vector Store for querying with OracleVS.

View File

@@ -39,7 +39,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain langchain-chroma unstructured[all-docs] pydantic lxml langchainhub"
"! pip install langchain unstructured[all-docs] pydantic lxml langchainhub"
]
},
{
@@ -320,7 +320,7 @@
"\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",

View File

@@ -59,7 +59,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain langchain-chroma unstructured[all-docs] pydantic lxml"
"! pip install langchain unstructured[all-docs] pydantic lxml"
]
},
{
@@ -375,7 +375,7 @@
"\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",

View File

@@ -59,7 +59,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain langchain-chroma unstructured[all-docs] pydantic lxml"
"! pip install langchain unstructured[all-docs] pydantic lxml"
]
},
{
@@ -378,8 +378,8 @@
"\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.embeddings import GPT4AllEmbeddings\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"\n",
"# The vectorstore to use to index the child chunks\n",

View File

@@ -19,7 +19,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install -U langchain openai langchain_chroma langchain-experimental # (newest versions required for multi-modal)"
"! pip install -U langchain openai chromadb langchain-experimental # (newest versions required for multi-modal)"
]
},
{
@@ -132,7 +132,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_chroma import Chroma\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"baseline = Chroma.from_texts(\n",

View File

@@ -28,7 +28,7 @@
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",

View File

@@ -14,7 +14,7 @@
}
],
"source": [
"%pip install -qU langchain-airbyte langchain_chroma"
"%pip install -qU langchain-airbyte"
]
},
{
@@ -123,7 +123,7 @@
"outputs": [],
"source": [
"import tiktoken\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"enc = tiktoken.get_encoding(\"cl100k_base\")\n",

View File

@@ -46,7 +46,7 @@
"from langchain_experimental.autonomous_agents import AutoGPT\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Needed since jupyter runs an async eventloop\n",
"# Needed synce jupyter runs an async eventloop\n",
"nest_asyncio.apply()"
]
},

File diff suppressed because one or more lines are too long

View File

@@ -39,7 +39,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain docugami==0.0.8 dgml-utils==0.3.0 pydantic langchainhub langchain-chroma hnswlib --upgrade --quiet"
"! pip install langchain docugami==0.0.8 dgml-utils==0.3.0 pydantic langchainhub chromadb hnswlib --upgrade --quiet"
]
},
{
@@ -547,7 +547,7 @@
"\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.vectorstores.chroma import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",

View File

@@ -84,7 +84,7 @@
}
],
"source": [
"%pip install --quiet pypdf langchain-chroma tiktoken openai \n",
"%pip install --quiet pypdf chromadb tiktoken openai \n",
"%pip uninstall -y langchain-fireworks\n",
"%pip install --editable /mnt/disks/data/langchain/libs/partners/fireworks"
]
@@ -138,7 +138,7 @@
"all_splits = text_splitter.split_documents(data)\n",
"\n",
"# Add to vectorDB\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_fireworks.embeddings import FireworksEmbeddings\n",
"\n",
"vectorstore = Chroma.from_documents(\n",

View File

@@ -170,7 +170,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_chroma import Chroma\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"with open(\"../../state_of_the_union.txt\") as f:\n",

File diff suppressed because one or more lines are too long

View File

@@ -7,7 +7,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain-chroma langchain_community tiktoken langchain-openai langchainhub langchain langgraph"
"! pip install langchain_community tiktoken langchain-openai langchainhub chromadb langchain langgraph"
]
},
{
@@ -30,8 +30,8 @@
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"urls = [\n",

View File

@@ -7,7 +7,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain-chroma langchain_community tiktoken langchain-openai langchainhub langchain langgraph tavily-python"
"! pip install langchain_community tiktoken langchain-openai langchainhub chromadb langchain langgraph tavily-python"
]
},
{
@@ -77,8 +77,8 @@
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"urls = [\n",
@@ -180,8 +180,8 @@
"from langchain.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema import Document\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.tools.tavily_search import TavilySearchResults\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.messages import BaseMessage, FunctionMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",

View File

@@ -7,7 +7,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain-chroma langchain_community tiktoken langchain-openai langchainhub langchain langgraph"
"! pip install langchain_community tiktoken langchain-openai langchainhub chromadb langchain langgraph"
]
},
{
@@ -86,8 +86,8 @@
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"urls = [\n",
@@ -188,7 +188,7 @@
"from langchain.output_parsers import PydanticOutputParser\n",
"from langchain.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.messages import BaseMessage, FunctionMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",

View File

@@ -58,7 +58,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install -U langchain openai langchain-chroma langchain-experimental # (newest versions required for multi-modal)"
"! pip install -U langchain openai chromadb langchain-experimental # (newest versions required for multi-modal)"
]
},
{
@@ -187,7 +187,7 @@
"\n",
"import chromadb\n",
"import numpy as np\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_experimental.open_clip import OpenCLIPEmbeddings\n",
"from PIL import Image as _PILImage\n",
"\n",

View File

@@ -58,7 +58,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install -U langchain-nomic langchain-chroma langchain-community tiktoken langchain-openai langchain"
"! pip install -U langchain-nomic langchain_community tiktoken langchain-openai chromadb langchain"
]
},
{
@@ -167,7 +167,7 @@
"source": [
"import os\n",
"\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_nomic import NomicEmbeddings\n",

View File

@@ -1,497 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "9fc3897d-176f-4729-8fd1-cfb4add53abd",
"metadata": {},
"source": [
"## Nomic multi-modal RAG\n",
"\n",
"Many documents contain a mixture of content types, including text and images. \n",
"\n",
"Yet, information captured in images is lost in most RAG applications.\n",
"\n",
"With the emergence of multimodal LLMs, like [GPT-4V](https://openai.com/research/gpt-4v-system-card), it is worth considering how to utilize images in RAG:\n",
"\n",
"In this demo we\n",
"\n",
"* Use multimodal embeddings from Nomic Embed [Vision](https://huggingface.co/nomic-ai/nomic-embed-vision-v1.5) and [Text](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) to embed images and text\n",
"* Retrieve both using similarity search\n",
"* Pass raw images and text chunks to a multimodal LLM for answer synthesis \n",
"\n",
"## Signup\n",
"\n",
"Get your API token, then run:\n",
"```\n",
"! nomic login\n",
"```\n",
"\n",
"Then run with your generated API token \n",
"```\n",
"! nomic login < token > \n",
"```\n",
"\n",
"## Packages\n",
"\n",
"For `unstructured`, you will also need `poppler` ([installation instructions](https://pdf2image.readthedocs.io/en/latest/installation.html)) and `tesseract` ([installation instructions](https://tesseract-ocr.github.io/tessdoc/Installation.html)) in your system."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "54926b9b-75c2-4cd4-8f14-b3882a0d370b",
"metadata": {},
"outputs": [],
"source": [
"! nomic login token"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "febbc459-ebba-4c1a-a52b-fed7731593f8",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"! pip install -U langchain-nomic langchain-chroma langchain-community tiktoken langchain-openai langchain # (newest versions required for multi-modal)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "acbdc603-39e2-4a5f-836c-2bbaecd46b0b",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# lock to 0.10.19 due to a persistent bug in more recent versions\n",
"! pip install \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml pillow matplotlib tiktoken"
]
},
{
"cell_type": "markdown",
"id": "1e94b3fb-8e3e-4736-be0a-ad881626c7bd",
"metadata": {},
"source": [
"## Data Loading\n",
"\n",
"### Partition PDF text and images\n",
" \n",
"Let's look at an example pdfs containing interesting images.\n",
"\n",
"1/ Art from the J Paul Getty museum:\n",
"\n",
" * Here is a [zip file](https://drive.google.com/file/d/18kRKbq2dqAhhJ3DfZRnYcTBEUfYxe1YR/view?usp=sharing) with the PDF and the already extracted images. \n",
"* https://www.getty.edu/publications/resources/virtuallibrary/0892360224.pdf\n",
"\n",
"2/ Famous photographs from library of congress:\n",
"\n",
"* https://www.loc.gov/lcm/pdf/LCM_2020_1112.pdf\n",
"* We'll use this as an example below\n",
"\n",
"We can use `partition_pdf` below from [Unstructured](https://unstructured-io.github.io/unstructured/introduction.html#key-concepts) to extract text and images.\n",
"\n",
"To supply this to extract the images:\n",
"```\n",
"extract_images_in_pdf=True\n",
"```\n",
"\n",
"\n",
"\n",
"If using this zip file, then you can simply process the text only with:\n",
"```\n",
"extract_images_in_pdf=False\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9646b524-71a7-4b2a-bdc8-0b81f77e968f",
"metadata": {},
"outputs": [],
"source": [
"# Folder with pdf and extracted images\n",
"from pathlib import Path\n",
"\n",
"# replace with actual path to images\n",
"path = Path(\"../art\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "77f096ab-a933-41d0-8f4e-1efc83998fc3",
"metadata": {},
"outputs": [],
"source": [
"path.resolve()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bc4839c0-8773-4a07-ba59-5364501269b2",
"metadata": {},
"outputs": [],
"source": [
"# Extract images, tables, and chunk text\n",
"from unstructured.partition.pdf import partition_pdf\n",
"\n",
"raw_pdf_elements = partition_pdf(\n",
" filename=str(path.resolve()) + \"/getty.pdf\",\n",
" extract_images_in_pdf=False,\n",
" infer_table_structure=True,\n",
" chunking_strategy=\"by_title\",\n",
" max_characters=4000,\n",
" new_after_n_chars=3800,\n",
" combine_text_under_n_chars=2000,\n",
" image_output_dir_path=path,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "969545ad",
"metadata": {},
"outputs": [],
"source": [
"# Categorize text elements by type\n",
"tables = []\n",
"texts = []\n",
"for element in raw_pdf_elements:\n",
" if \"unstructured.documents.elements.Table\" in str(type(element)):\n",
" tables.append(str(element))\n",
" elif \"unstructured.documents.elements.CompositeElement\" in str(type(element)):\n",
" texts.append(str(element))"
]
},
{
"cell_type": "markdown",
"id": "5d8e6349-1547-4cbf-9c6f-491d8610ec10",
"metadata": {},
"source": [
"## Multi-modal embeddings with our document\n",
"\n",
"We will use [nomic-embed-vision-v1.5](https://huggingface.co/nomic-ai/nomic-embed-vision-v1.5) embeddings. This model is aligned \n",
"to [nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) allowing for multimodal semantic search and Multimodal RAG!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4bc15842-cb95-4f84-9eb5-656b0282a800",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import uuid\n",
"\n",
"import chromadb\n",
"import numpy as np\n",
"from langchain_chroma import Chroma\n",
"from langchain_nomic import NomicEmbeddings\n",
"from PIL import Image as _PILImage\n",
"\n",
"# Create chroma\n",
"text_vectorstore = Chroma(\n",
" collection_name=\"mm_rag_clip_photos_text\",\n",
" embedding_function=NomicEmbeddings(\n",
" vision_model=\"nomic-embed-vision-v1.5\", model=\"nomic-embed-text-v1.5\"\n",
" ),\n",
")\n",
"image_vectorstore = Chroma(\n",
" collection_name=\"mm_rag_clip_photos_image\",\n",
" embedding_function=NomicEmbeddings(\n",
" vision_model=\"nomic-embed-vision-v1.5\", model=\"nomic-embed-text-v1.5\"\n",
" ),\n",
")\n",
"\n",
"# Get image URIs with .jpg extension only\n",
"image_uris = sorted(\n",
" [\n",
" os.path.join(path, image_name)\n",
" for image_name in os.listdir(path)\n",
" if image_name.endswith(\".jpg\")\n",
" ]\n",
")\n",
"\n",
"# Add images\n",
"image_vectorstore.add_images(uris=image_uris)\n",
"\n",
"# Add documents\n",
"text_vectorstore.add_texts(texts=texts)\n",
"\n",
"# Make retriever\n",
"image_retriever = image_vectorstore.as_retriever()\n",
"text_retriever = text_vectorstore.as_retriever()"
]
},
{
"cell_type": "markdown",
"id": "02a186d0-27e0-4820-8092-63b5349dd25d",
"metadata": {},
"source": [
"## RAG\n",
"\n",
"`vectorstore.add_images` will store / retrieve images as base64 encoded strings.\n",
"\n",
"These can be passed to [GPT-4V](https://platform.openai.com/docs/guides/vision)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "344f56a8-0dc3-433e-851c-3f7600c7a72b",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"import io\n",
"from io import BytesIO\n",
"\n",
"import numpy as np\n",
"from PIL import Image\n",
"\n",
"\n",
"def resize_base64_image(base64_string, size=(128, 128)):\n",
" \"\"\"\n",
" Resize an image encoded as a Base64 string.\n",
"\n",
" Args:\n",
" base64_string (str): Base64 string of the original image.\n",
" size (tuple): Desired size of the image as (width, height).\n",
"\n",
" Returns:\n",
" str: Base64 string of the resized image.\n",
" \"\"\"\n",
" # Decode the Base64 string\n",
" img_data = base64.b64decode(base64_string)\n",
" img = Image.open(io.BytesIO(img_data))\n",
"\n",
" # Resize the image\n",
" resized_img = img.resize(size, Image.LANCZOS)\n",
"\n",
" # Save the resized image to a bytes buffer\n",
" buffered = io.BytesIO()\n",
" resized_img.save(buffered, format=img.format)\n",
"\n",
" # Encode the resized image to Base64\n",
" return base64.b64encode(buffered.getvalue()).decode(\"utf-8\")\n",
"\n",
"\n",
"def is_base64(s):\n",
" \"\"\"Check if a string is Base64 encoded\"\"\"\n",
" try:\n",
" return base64.b64encode(base64.b64decode(s)) == s.encode()\n",
" except Exception:\n",
" return False\n",
"\n",
"\n",
"def split_image_text_types(docs):\n",
" \"\"\"Split numpy array images and texts\"\"\"\n",
" images = []\n",
" text = []\n",
" for doc in docs:\n",
" doc = doc.page_content # Extract Document contents\n",
" if is_base64(doc):\n",
" # Resize image to avoid OAI server error\n",
" images.append(\n",
" resize_base64_image(doc, size=(250, 250))\n",
" ) # base64 encoded str\n",
" else:\n",
" text.append(doc)\n",
" return {\"images\": images, \"texts\": text}"
]
},
{
"cell_type": "markdown",
"id": "23a2c1d8-fea6-4152-b184-3172dd46c735",
"metadata": {},
"source": [
"Currently, we format the inputs using a `RunnableLambda` while we add image support to `ChatPromptTemplates`.\n",
"\n",
"Our runnable follows the classic RAG flow - \n",
"\n",
"* We first compute the context (both \"texts\" and \"images\" in this case) and the question (just a RunnablePassthrough here) \n",
"* Then we pass this into our prompt template, which is a custom function that formats the message for the gpt-4-vision-preview model. \n",
"* And finally we parse the output as a string."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5d8919dc-c238-4746-86ba-45d940a7d260",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4c93fab3-74c4-4f1d-958a-0bc4cdd0797e",
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"\n",
"def prompt_func(data_dict):\n",
" # Joining the context texts into a single string\n",
" formatted_texts = \"\\n\".join(data_dict[\"text_context\"][\"texts\"])\n",
" messages = []\n",
"\n",
" # Adding image(s) to the messages if present\n",
" if data_dict[\"image_context\"][\"images\"]:\n",
" image_message = {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\n",
" \"url\": f\"data:image/jpeg;base64,{data_dict['image_context']['images'][0]}\"\n",
" },\n",
" }\n",
" messages.append(image_message)\n",
"\n",
" # Adding the text message for analysis\n",
" text_message = {\n",
" \"type\": \"text\",\n",
" \"text\": (\n",
" \"As an expert art critic and historian, your task is to analyze and interpret images, \"\n",
" \"considering their historical and cultural significance. Alongside the images, you will be \"\n",
" \"provided with related text to offer context. Both will be retrieved from a vectorstore based \"\n",
" \"on user-input keywords. Please use your extensive knowledge and analytical skills to provide a \"\n",
" \"comprehensive summary that includes:\\n\"\n",
" \"- A detailed description of the visual elements in the image.\\n\"\n",
" \"- The historical and cultural context of the image.\\n\"\n",
" \"- An interpretation of the image's symbolism and meaning.\\n\"\n",
" \"- Connections between the image and the related text.\\n\\n\"\n",
" f\"User-provided keywords: {data_dict['question']}\\n\\n\"\n",
" \"Text and / or tables:\\n\"\n",
" f\"{formatted_texts}\"\n",
" ),\n",
" }\n",
" messages.append(text_message)\n",
"\n",
" return [HumanMessage(content=messages)]\n",
"\n",
"\n",
"model = ChatOpenAI(temperature=0, model=\"gpt-4-vision-preview\", max_tokens=1024)\n",
"\n",
"# RAG pipeline\n",
"chain = (\n",
" {\n",
" \"text_context\": text_retriever | RunnableLambda(split_image_text_types),\n",
" \"image_context\": image_retriever | RunnableLambda(split_image_text_types),\n",
" \"question\": RunnablePassthrough(),\n",
" }\n",
" | RunnableLambda(prompt_func)\n",
" | model\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "1566096d-97c2-4ddc-ba4a-6ef88c525e4e",
"metadata": {},
"source": [
"## Test retrieval and run RAG"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "90121e56-674b-473b-871d-6e4753fd0c45",
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import HTML, display\n",
"\n",
"\n",
"def plt_img_base64(img_base64):\n",
" # Create an HTML img tag with the base64 string as the source\n",
" image_html = f'<img src=\"data:image/jpeg;base64,{img_base64}\" />'\n",
"\n",
" # Display the image by rendering the HTML\n",
" display(HTML(image_html))\n",
"\n",
"\n",
"docs = text_retriever.invoke(\"Women with children\", k=5)\n",
"for doc in docs:\n",
" if is_base64(doc.page_content):\n",
" plt_img_base64(doc.page_content)\n",
" else:\n",
" print(doc.page_content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "44eaa532-f035-4c04-b578-02339d42554c",
"metadata": {},
"outputs": [],
"source": [
"docs = image_retriever.invoke(\"Women with children\", k=5)\n",
"for doc in docs:\n",
" if is_base64(doc.page_content):\n",
" plt_img_base64(doc.page_content)\n",
" else:\n",
" print(doc.page_content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "69fb15fd-76fc-49b4-806d-c4db2990027d",
"metadata": {},
"outputs": [],
"source": [
"chain.invoke(\"Women with children\")"
]
},
{
"cell_type": "markdown",
"id": "227f08b8-e732-4089-b65c-6eb6f9e48f15",
"metadata": {},
"source": [
"We can see the images retrieved in the LangSmith trace:\n",
"\n",
"LangSmith [trace](https://smith.langchain.com/public/69c558a5-49dc-4c60-a49b-3adbb70f74c5/r/e872c2c8-528c-468f-aefd-8b5cd730a673)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -20,8 +20,8 @@
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter"
]

View File

@@ -80,7 +80,7 @@
"outputs": [],
"source": [
"from langchain.schema import Document\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings()"

View File

@@ -6,24 +6,23 @@
"source": [
"# Oracle AI Vector Search with Document Processing\n",
"Oracle AI Vector Search is designed for Artificial Intelligence (AI) workloads that allows you to query data based on semantics, rather than keywords.\n",
"One of the biggest benefits of Oracle AI Vector Search is that semantic search on unstructured data can be combined with relational search on business data in one single system.\n",
"This is not only powerful but also significantly more effective because you don't need to add a specialized vector database, eliminating the pain of data fragmentation between multiple systems.\n",
"One of the biggest benefit of Oracle AI Vector Search is that semantic search on unstructured data can be combined with relational search on business data in one single system. This is not only powerful but also significantly more effective because you don't need to add a specialized vector database, eliminating the pain of data fragmentation between multiple systems.\n",
"\n",
"In addition, your vectors can benefit from all of Oracle Databases most powerful features, like the following:\n",
"In addition, because Oracle has been building database technologies for so long, your vectors can benefit from all of Oracle Database's most powerful features, like the following:\n",
"\n",
" * [Partitioning Support](https://www.oracle.com/database/technologies/partitioning.html)\n",
" * [Real Application Clusters scalability](https://www.oracle.com/database/real-application-clusters/)\n",
" * [Exadata smart scans](https://www.oracle.com/database/technologies/exadata/software/smartscan/)\n",
" * [Shard processing across geographically distributed databases](https://www.oracle.com/database/distributed-database/)\n",
" * [Transactions](https://docs.oracle.com/en/database/oracle/oracle-database/23/cncpt/transactions.html)\n",
" * [Parallel SQL](https://docs.oracle.com/en/database/oracle/oracle-database/21/vldbg/parallel-exec-intro.html#GUID-D28717E4-0F77-44F5-BB4E-234C31D4E4BA)\n",
" * [Disaster recovery](https://www.oracle.com/database/data-guard/)\n",
" * [Security](https://www.oracle.com/security/database-security/)\n",
" * [Oracle Machine Learning](https://www.oracle.com/artificial-intelligence/database-machine-learning/)\n",
" * [Oracle Graph Database](https://www.oracle.com/database/integrated-graph-database/)\n",
" * [Oracle Spatial and Graph](https://www.oracle.com/database/spatial/)\n",
" * [Oracle Blockchain](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_blockchain_table.html#GUID-B469E277-978E-4378-A8C1-26D3FF96C9A6)\n",
" * [JSON](https://docs.oracle.com/en/database/oracle/oracle-database/23/adjsn/json-in-oracle-database.html)\n",
" * Partitioning Support\n",
" * Real Application Clusters scalability\n",
" * Exadata smart scans\n",
" * Shard processing across geographically distributed databases\n",
" * Transactions\n",
" * Parallel SQL\n",
" * Disaster recovery\n",
" * Security\n",
" * Oracle Machine Learning\n",
" * Oracle Graph Database\n",
" * Oracle Spatial and Graph\n",
" * Oracle Blockchain\n",
" * JSON\n",
"\n",
"This guide demonstrates how Oracle AI Vector Search can be used with Langchain to serve an end-to-end RAG pipeline. This guide goes through examples of:\n",
"\n",
@@ -34,13 +33,6 @@
" * Storing and Indexing them in a Vector Store and querying them for queries in OracleVS"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you are just starting with Oracle Database, consider exploring the [free Oracle 23 AI](https://www.oracle.com/database/free/#resources) which provides a great introduction to setting up your database environment. While working with the database, it is often advisable to avoid using the system user by default; instead, you can create your own user for enhanced security and customization. For detailed steps on user creation, refer to our [end-to-end guide](https://github.com/langchain-ai/langchain/blob/master/cookbook/oracleai_demo.ipynb) which also shows how to set up a user in Oracle. Additionally, understanding user privileges is crucial for managing database security effectively. You can learn more about this topic in the official [Oracle guide](https://docs.oracle.com/en/database/oracle/oracle-database/19/admqs/administering-user-accounts-and-security.html#GUID-36B21D72-1BBB-46C9-A0C9-F0D2A8591B8D) on administering user accounts and security."
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -86,7 +78,8 @@
"\n",
"import oracledb\n",
"\n",
"# Update with your username, password, hostname, and service_name\n",
"# please update with your username, password, hostname and service_name\n",
"# please make sure this user has sufficient privileges to perform all below\n",
"username = \"\"\n",
"password = \"\"\n",
"dsn = \"\"\n",
@@ -96,45 +89,40 @@
" print(\"Connection successful!\")\n",
"\n",
" cursor = conn.cursor()\n",
" try:\n",
" cursor.execute(\n",
" \"\"\"\n",
" begin\n",
" -- Drop user\n",
" begin\n",
" execute immediate 'drop user testuser cascade';\n",
" exception\n",
" when others then\n",
" dbms_output.put_line('Error dropping user: ' || SQLERRM);\n",
" end;\n",
" \n",
" -- Create user and grant privileges\n",
" execute immediate 'create user testuser identified by testuser';\n",
" execute immediate 'grant connect, unlimited tablespace, create credential, create procedure, create any index to testuser';\n",
" execute immediate 'create or replace directory DEMO_PY_DIR as ''/scratch/hroy/view_storage/hroy_devstorage/demo/orachain''';\n",
" execute immediate 'grant read, write on directory DEMO_PY_DIR to public';\n",
" execute immediate 'grant create mining model to testuser';\n",
" \n",
" -- Network access\n",
" begin\n",
" DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(\n",
" host => '*',\n",
" ace => xs$ace_type(privilege_list => xs$name_list('connect'),\n",
" principal_name => 'testuser',\n",
" principal_type => xs_acl.ptype_db)\n",
" );\n",
" end;\n",
" end;\n",
" \"\"\"\n",
" )\n",
" print(\"User setup done!\")\n",
" except Exception as e:\n",
" print(f\"User setup failed with error: {e}\")\n",
" finally:\n",
" cursor.close()\n",
" cursor.execute(\n",
" \"\"\"\n",
" begin\n",
" -- drop user\n",
" begin\n",
" execute immediate 'drop user testuser cascade';\n",
" exception\n",
" when others then\n",
" dbms_output.put_line('Error setting up user.');\n",
" end;\n",
" execute immediate 'create user testuser identified by testuser';\n",
" execute immediate 'grant connect, unlimited tablespace, create credential, create procedure, create any index to testuser';\n",
" execute immediate 'create or replace directory DEMO_PY_DIR as ''/scratch/hroy/view_storage/hroy_devstorage/demo/orachain''';\n",
" execute immediate 'grant read, write on directory DEMO_PY_DIR to public';\n",
" execute immediate 'grant create mining model to testuser';\n",
"\n",
" -- network access\n",
" begin\n",
" DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(\n",
" host => '*',\n",
" ace => xs$ace_type(privilege_list => xs$name_list('connect'),\n",
" principal_name => 'testuser',\n",
" principal_type => xs_acl.ptype_db));\n",
" end;\n",
" end;\n",
" \"\"\"\n",
" )\n",
" print(\"User setup done!\")\n",
" cursor.close()\n",
" conn.close()\n",
"except Exception as e:\n",
" print(f\"Connection failed with error: {e}\")\n",
" print(\"User setup failed!\")\n",
" cursor.close()\n",
" conn.close()\n",
" sys.exit(1)"
]
},
@@ -143,13 +131,13 @@
"metadata": {},
"source": [
"## Process Documents using Oracle AI\n",
"Consider the following scenario: users possess documents stored either in an Oracle Database or a file system and intend to utilize this data with Oracle AI Vector Search powered by Langchain.\n",
"Let's think about a scenario that the users have some documents in Oracle Database or in a file system. They want to use the data for Oracle AI Vector Search using Langchain.\n",
"\n",
"To prepare the documents for analysis, a comprehensive preprocessing workflow is necessary. Initially, the documents must be retrieved, summarized (if required), and chunked as needed. Subsequent steps involve generating embeddings for these chunks and integrating them into the Oracle AI Vector Store. Users can then conduct semantic searches on this data.\n",
"For that, the users need to do some document preprocessing. The first step would be to read the documents, generate their summary(if needed) and then chunk/split them if needed. After that, they need to generate the embeddings for those chunks and store into Oracle AI Vector Store. Finally, the users will perform some semantic queries on those data. \n",
"\n",
"The Oracle AI Vector Search Langchain library encompasses a suite of document processing tools that facilitate document loading, chunking, summary generation, and embedding creation.\n",
"Oracle AI Vector Search Langchain library provides a range of document processing functionalities including document loading, splitting, generating summary and embeddings.\n",
"\n",
"In the sections that follow, we will detail the utilization of Oracle AI Langchain APIs to effectively implement each of these processes."
"In the following sections, we will go through how to use Oracle AI Langchain APIs to achieve each of these functionalities individually. "
]
},
{
@@ -157,7 +145,7 @@
"metadata": {},
"source": [
"### Connect to Demo User\n",
"The following sample code will show how to connect to Oracle Database. By default, python-oracledb runs in a Thin mode which connects directly to Oracle Database. This mode does not need Oracle Client libraries. However, some additional functionality is available when python-oracledb uses them. Python-oracledb is said to be in Thick mode when Oracle Client libraries are used. Both modes have comprehensive functionality supporting the Python Database API v2.0 Specification. See the following [guide](https://python-oracledb.readthedocs.io/en/latest/user_guide/appendix_a.html#featuresummary) that talks about features supported in each mode. You might want to switch to thick-mode if you are unable to use thin-mode."
"The following sample code will show how to connect to Oracle Database. "
]
},
{
@@ -254,7 +242,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"With the inclusion of a demo user and a populated sample table, the remaining configuration involves setting up embedding and summary functionalities. Users are presented with multiple provider options, including local database solutions and third-party services such as Ocigenai, Hugging Face, and OpenAI. Should users opt for a third-party provider, they are required to establish credentials containing the necessary authentication details. Conversely, if selecting a database as the provider for embeddings, it is necessary to upload an ONNX model to the Oracle Database. No additional setup is required for summary functionalities when using the database option."
"\n",
"\n",
"Now that we have a demo user and a demo table with some data, we just need to do one more setup. For embedding and summary, we have a few provider options that the users can choose from such as database, 3rd party providers like ocigenai, huggingface, openai, etc. If the users choose to use 3rd party provider, they need to create a credential with corresponding authentication information. On the other hand, if the users choose to use 'database' as provider, they need to load an onnx model to Oracle Database for embeddings; however, for summary, they don't need to do anything."
]
},
{
@@ -263,13 +253,13 @@
"source": [
"### Load ONNX Model\n",
"\n",
"Oracle accommodates a variety of embedding providers, enabling users to choose between proprietary database solutions and third-party services such as OCIGENAI and HuggingFace. This selection dictates the methodology for generating and managing embeddings.\n",
"To generate embeddings, Oracle provides a few provider options for users to choose from. The users can choose 'database' provider or some 3rd party providers like OCIGENAI, HuggingFace, etc.\n",
"\n",
"***Important*** : Should users opt for the database option, they must upload an ONNX model into the Oracle Database. Conversely, if a third-party provider is selected for embedding generation, uploading an ONNX model to Oracle Database is not required.\n",
"***Note*** If the users choose database option, they need to load an ONNX model to Oracle Database. The users do not need to load an ONNX model to Oracle Database if they choose to use 3rd party provider to generate embeddings.\n",
"\n",
"A significant advantage of utilizing an ONNX model directly within Oracle is the enhanced security and performance it offers by eliminating the need to transmit data to external parties. Additionally, this method avoids the latency typically associated with network or REST API calls.\n",
"One of the core benefits of using an ONNX model is that the users do not need to transfer their data to 3rd party to generate embeddings. And also, since it does not involve any network or REST API calls, it may provide better performance.\n",
"\n",
"Below is the example code to upload an ONNX model into Oracle Database:"
"Here is the sample code to load an ONNX model to Oracle Database:"
]
},
{
@@ -308,11 +298,11 @@
"source": [
"### Create Credential\n",
"\n",
"When selecting third-party providers for generating embeddings, users are required to establish credentials to securely access the provider's endpoints.\n",
"On the other hand, if the users choose to use 3rd party provider to generate embeddings and summary, they need to create credential to access 3rd party provider's end points.\n",
"\n",
"***Important:*** No credentials are necessary when opting for the 'database' provider to generate embeddings. However, should users decide to utilize a third-party provider, they must create credentials specific to the chosen provider.\n",
"***Note:*** The users do not need to create any credential if they choose to use 'database' provider to generate embeddings and summary. Should the users choose to 3rd party provider, they need to create credential for the 3rd party provider they want to use. \n",
"\n",
"Below is an illustrative example:"
"Here is a sample example:"
]
},
{
@@ -362,11 +352,11 @@
"metadata": {},
"source": [
"### Load Documents\n",
"Users have the flexibility to load documents from either the Oracle Database, a file system, or both, by appropriately configuring the loader parameters. For comprehensive details on these parameters, please consult the [Oracle AI Vector Search Guide](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_vector_chain1.html#GUID-73397E89-92FB-48ED-94BB-1AD960C4EA1F).\n",
"The users can load the documents from Oracle Database or a file system or both. They just need to set the loader parameters accordingly. Please refer to the Oracle AI Vector Search Guide book for complete information about these parameters.\n",
"\n",
"A significant advantage of utilizing OracleDocLoader is its capability to process over 150 distinct file formats, eliminating the need for multiple loaders for different document types. For a complete list of the supported formats, please refer to the [Oracle Text Supported Document Formats](https://docs.oracle.com/en/database/oracle/oracle-database/23/ccref/oracle-text-supported-document-formats.html).\n",
"The main benefit of using OracleDocLoader is that it can handle 150+ different file formats. You don't need to use different types of loader for different file formats. Here is the list formats that we support: [Oracle Text Supported Document Formats](https://docs.oracle.com/en/database/oracle/oracle-database/23/ccref/oracle-text-supported-document-formats.html)\n",
"\n",
"Below is a sample code snippet that demonstrates how to use OracleDocLoader"
"The following sample code will show how to do that:"
]
},
{
@@ -409,7 +399,7 @@
"metadata": {},
"source": [
"### Generate Summary\n",
"Now that the user loaded the documents, they may want to generate a summary for each document. The Oracle AI Vector Search Langchain library offers a suite of APIs designed for document summarization. It supports multiple summarization providers such as Database, OCIGENAI, HuggingFace, among others, allowing users to select the provider that best meets their needs. To utilize these capabilities, users must configure the summary parameters as specified. For detailed information on these parameters, please consult the [Oracle AI Vector Search Guide book](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_vector_chain1.html#GUID-EC9DDB58-6A15-4B36-BA66-ECBA20D2CE57)."
"Now that the user loaded the documents, they may want to generate a summary for each document. The Oracle AI Vector Search Langchain library provides an API to do that. There are a few summary generation provider options including Database, OCIGENAI, HuggingFace and so on. The users can choose their preferred provider to generate a summary. Like before, they just need to set the summary parameters accordingly. Please refer to the Oracle AI Vector Search Guide book for complete information about these parameters."
]
},
{
@@ -480,9 +470,9 @@
"metadata": {},
"source": [
"### Split Documents\n",
"The documents may vary in size, ranging from small to very large. Users often prefer to chunk their documents into smaller sections to facilitate the generation of embeddings. A wide array of customization options is available for this splitting process. For comprehensive details regarding these parameters, please consult the [Oracle AI Vector Search Guide](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_vector_chain1.html#GUID-4E145629-7098-4C7C-804F-FC85D1F24240).\n",
"The documents can be in different sizes: small, medium, large, or very large. The users like to split/chunk their documents into smaller pieces to generate embeddings. There are lots of different splitting customizations the users can do. Please refer to the Oracle AI Vector Search Guide book for complete information about these parameters.\n",
"\n",
"Below is a sample code illustrating how to implement this:"
"The following sample code will show how to do that:"
]
},
{
@@ -523,14 +513,14 @@
"metadata": {},
"source": [
"### Generate Embeddings\n",
"Now that the documents are chunked as per requirements, the users may want to generate embeddings for these chunks. Oracle AI Vector Search provides multiple methods for generating embeddings, utilizing either locally hosted ONNX models or third-party APIs. For comprehensive instructions on configuring these alternatives, please refer to the [Oracle AI Vector Search Guide](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_vector_chain1.html#GUID-C6439E94-4E86-4ECD-954E-4B73D53579DE)."
"Now that the documents are chunked as per requirements, the users may want to generate embeddings for these chunks. Oracle AI Vector Search provides a number of ways to generate embeddings. The users can load an ONNX embedding model to Oracle Database and use it to generate embeddings or use some 3rd party API's end points to generate embeddings. Please refer to the Oracle AI Vector Search Guide book for complete information about these parameters."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"***Note:*** Users may need to configure a proxy to utilize third-party embedding generation providers, excluding the 'database' provider that utilizes an ONNX model."
"***Note:*** The users may need to set proxy if they want to use some 3rd party embedding generation providers other than 'database' provider (aka using ONNX model)."
]
},
{
@@ -762,18 +752,20 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The example provided illustrates the creation of a vector store using the DOT_PRODUCT distance strategy. Users have the flexibility to employ various distance strategies with the Oracle AI Vector Store, as detailed in our [comprehensive guide](https://python.langchain.com/v0.1/docs/integrations/vectorstores/oracle/)."
"The above example creates a vector store with DOT_PRODUCT distance strategy. \n",
"\n",
"However, the users can create Oracle AI Vector Store provides different distance strategies. Please see the [comprehensive guide](/docs/integrations/vectorstores/oracle) for more information."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With embeddings now stored in vector stores, it is advisable to establish an index to enhance semantic search performance during query execution.\n",
"Now that we have embeddings stored in vector stores, let's create an index on them to get better semantic search performance during query time.\n",
"\n",
"***Note*** Should you encounter an \"insufficient memory\" error, it is recommended to increase the ***vector_memory_size*** in your database configuration\n",
"***Note*** If you are getting some insufficient memory error, please increase ***vector_memory_size*** in your database.\n",
"\n",
"Below is a sample code snippet for creating an index:"
"Here is the sample code to create an index:"
]
},
{
@@ -793,9 +785,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This example demonstrates the creation of a default HNSW index on embeddings within the 'oravs' table. Users may adjust various parameters according to their specific needs. For detailed information on these parameters, please consult the [Oracle AI Vector Search Guide book](https://docs.oracle.com/en/database/oracle/oracle-database/23/vecse/manage-different-categories-vector-indexes.html).\n",
"The above example creates a default HNSW index on the embeddings stored in 'oravs' table. The users can set different parameters as per their requirements. Please refer to the Oracle AI Vector Search Guide book for complete information about these parameters.\n",
"\n",
"Additionally, various types of vector indices can be created to meet diverse requirements. More details can be found in our [comprehensive guide](https://python.langchain.com/v0.1/docs/integrations/vectorstores/oracle/).\n"
"Also, there are different types of vector indices that the users can create. Please see the [comprehensive guide](/docs/integrations/vectorstores/oracle) for more information.\n"
]
},
{
@@ -805,9 +797,9 @@
"## Perform Semantic Search\n",
"All set!\n",
"\n",
"We have successfully processed the documents and stored them in the vector store, followed by the creation of an index to enhance query performance. We are now prepared to proceed with semantic searches.\n",
"We have processed the documents, stored them to vector store, and then created index to get better query performance. Now let's do some semantic searches.\n",
"\n",
"Below is the sample code for this process:"
"Here is the sample code for this:"
]
},
{

View File

@@ -1,756 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "10f50955-be55-422f-8c62-3a32f8cf02ed",
"metadata": {},
"source": [
"# RAG application running locally on Intel Xeon CPU using langchain and open-source models"
]
},
{
"cell_type": "markdown",
"id": "48113be6-44bb-4aac-aed3-76a1365b9561",
"metadata": {},
"source": [
"Author - Pratool Bharti (pratool.bharti@intel.com)"
]
},
{
"cell_type": "markdown",
"id": "8b10b54b-1572-4ea1-9c1e-1d29fcc3dcd9",
"metadata": {},
"source": [
"In this cookbook, we use langchain tools and open source models to execute locally on CPU. This notebook has been validated to run on Intel Xeon 8480+ CPU. Here we implement a RAG pipeline for Llama2 model to answer questions about Intel Q1 2024 earnings release."
]
},
{
"cell_type": "markdown",
"id": "acadbcec-3468-4926-8ce5-03b678041c0a",
"metadata": {},
"source": [
"**Create a conda or virtualenv environment with python >=3.10 and install following libraries**\n",
"<br>\n",
"\n",
"`pip install --upgrade langchain langchain-community langchainhub langchain-chroma bs4 gpt4all pypdf pysqlite3-binary` <br>\n",
"`pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu`"
]
},
{
"cell_type": "markdown",
"id": "84c392c8-700a-42ec-8e94-806597f22e43",
"metadata": {},
"source": [
"**Load pysqlite3 in sys modules since ChromaDB requires sqlite3.**"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "145cd491-b388-4ea7-bdc8-2f4995cac6fd",
"metadata": {},
"outputs": [],
"source": [
"__import__(\"pysqlite3\")\n",
"import sys\n",
"\n",
"sys.modules[\"sqlite3\"] = sys.modules.pop(\"pysqlite3\")"
]
},
{
"cell_type": "markdown",
"id": "14dde7e2-b236-49b9-b3a0-08c06410418c",
"metadata": {},
"source": [
"**Import essential components from langchain to load and split data**"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "887643ba-249e-48d6-9aa7-d25087e8dfbf",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.document_loaders import PyPDFLoader"
]
},
{
"cell_type": "markdown",
"id": "922c0eba-8736-4de5-bd2f-3d0f00b16e43",
"metadata": {},
"source": [
"**Download Intel Q1 2024 earnings release**"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "2d6a2419-5338-4188-8615-a40a65ff8019",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--2024-07-15 15:04:43-- https://d1io3yog0oux5.cloudfront.net/_11d435a500963f99155ee058df09f574/intel/db/887/9014/earnings_release/Q1+24_EarningsRelease_FINAL.pdf\n",
"Resolving proxy-dmz.intel.com (proxy-dmz.intel.com)... 10.7.211.16\n",
"Connecting to proxy-dmz.intel.com (proxy-dmz.intel.com)|10.7.211.16|:912... connected.\n",
"Proxy request sent, awaiting response... 200 OK\n",
"Length: 133510 (130K) [application/pdf]\n",
"Saving to: intel_q1_2024_earnings.pdf\n",
"\n",
"intel_q1_2024_earni 100%[===================>] 130.38K --.-KB/s in 0.005s \n",
"\n",
"2024-07-15 15:04:44 (24.6 MB/s) - intel_q1_2024_earnings.pdf saved [133510/133510]\n",
"\n"
]
}
],
"source": [
"!wget 'https://d1io3yog0oux5.cloudfront.net/_11d435a500963f99155ee058df09f574/intel/db/887/9014/earnings_release/Q1+24_EarningsRelease_FINAL.pdf' -O intel_q1_2024_earnings.pdf"
]
},
{
"cell_type": "markdown",
"id": "e3612627-e105-453d-8a50-bbd6e39dedb5",
"metadata": {},
"source": [
"**Loading earning release pdf document through PyPDFLoader**"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "cac6278e-ebad-4224-a062-bf6daca24cb0",
"metadata": {},
"outputs": [],
"source": [
"loader = PyPDFLoader(\"intel_q1_2024_earnings.pdf\")\n",
"data = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "a7dca43b-1c62-41df-90c7-6ed2904f823d",
"metadata": {},
"source": [
"**Splitting entire document in several chunks with each chunk size is 500 tokens**"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "4486adbe-0d0e-4685-8c08-c1774ed6e993",
"metadata": {},
"outputs": [],
"source": [
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)"
]
},
{
"cell_type": "markdown",
"id": "af142346-e793-4a52-9a56-63e3be416b3d",
"metadata": {},
"source": [
"**Looking at the first split of the document**"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e4240fd1-898e-4bfc-a377-02c9bc25b56e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'source': 'intel_q1_2024_earnings.pdf', 'page': 0}, page_content='Intel Corporation\\n2200 Mission College Blvd.\\nSanta Clara, CA 95054-1549\\n \\nNews Release\\n Intel Reports First -Quarter 2024 Financial Results\\nNEWS SUMMARY\\n▪First-quarter revenue of $12.7 billion , up 9% year over year (YoY).\\n▪First-quarter GAAP earnings (loss) per share (EPS) attributable to Intel was $(0.09) ; non-GAAP EPS \\nattributable to Intel was $0.18 .')"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"all_splits[0]"
]
},
{
"cell_type": "markdown",
"id": "b88d2632-7c1b-49ef-a691-c0eb67d23e6a",
"metadata": {},
"source": [
"**One of the major step in RAG is to convert each split of document into embeddings and store in a vector database such that searching relevant documents are efficient.** <br>\n",
"**For that, importing Chroma vector database from langchain. Also, importing open source GPT4All for embedding models**"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "9ff99dd7-9d47-4239-ba0a-d775792334ba",
"metadata": {},
"outputs": [],
"source": [
"from langchain_chroma import Chroma\n",
"from langchain_community.embeddings import GPT4AllEmbeddings"
]
},
{
"cell_type": "markdown",
"id": "b5d1f4dd-dd8d-4a20-95d1-2dbdd204375a",
"metadata": {},
"source": [
"**In next step, we will download one of the most popular embedding model \"all-MiniLM-L6-v2\". Find more details of the model at this link https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2**"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "05db3494-5d8e-4a13-9941-26330a86f5e5",
"metadata": {},
"outputs": [],
"source": [
"model_name = \"all-MiniLM-L6-v2.gguf2.f16.gguf\"\n",
"gpt4all_kwargs = {\"allow_download\": \"True\"}\n",
"embeddings = GPT4AllEmbeddings(model_name=model_name, gpt4all_kwargs=gpt4all_kwargs)"
]
},
{
"cell_type": "markdown",
"id": "4e53999e-1983-46ac-8039-2783e194c3ae",
"metadata": {},
"source": [
"**Store all the embeddings in the Chroma database**"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "0922951a-9ddf-4761-973d-8e9a86f61284",
"metadata": {},
"outputs": [],
"source": [
"vectorstore = Chroma.from_documents(documents=all_splits, embedding=embeddings)"
]
},
{
"cell_type": "markdown",
"id": "29f94fa0-6c75-4a65-a1a3-debc75422479",
"metadata": {},
"source": [
"**Now, let's find relevant splits from the documents related to the question**"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "88c8152d-ec7a-4f0b-9d86-877789407537",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"4\n"
]
}
],
"source": [
"question = \"What is Intel CCG revenue in Q1 2024\"\n",
"docs = vectorstore.similarity_search(question)\n",
"print(len(docs))"
]
},
{
"cell_type": "markdown",
"id": "53330c6b-cb0f-43f9-b379-2e57ac1e5335",
"metadata": {},
"source": [
"**Look at the first retrieved document from the vector database**"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "43a6d94f-b5c4-47b0-a353-2db4c3d24d9c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'page': 1, 'source': 'intel_q1_2024_earnings.pdf'}, page_content='Client Computing Group (CCG) $7.5 billion up31%\\nData Center and AI (DCAI) $3.0 billion up5%\\nNetwork and Edge (NEX) $1.4 billion down 8%\\nTotal Intel Products revenue $11.9 billion up17%\\nIntel Foundry $4.4 billion down 10%\\nAll other:\\nAltera $342 million down 58%\\nMobileye $239 million down 48%\\nOther $194 million up17%\\nTotal all other revenue $775 million down 46%\\nIntersegment eliminations $(4.4) billion\\nTotal net revenue $12.7 billion up9%\\nIntel Products Highlights')"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0]"
]
},
{
"cell_type": "markdown",
"id": "64ba074f-4b36-442e-b7e2-b26d6e2815c3",
"metadata": {},
"source": [
"**Download Lllama-2 model from Huggingface and store locally** <br>\n",
"**You can download different quantization variant of Lllama-2 model from the link below. We are using Q8 version here (7.16GB).** <br>\n",
"https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c8dd0811-6f43-4bc6-b854-2ab377639c9a",
"metadata": {},
"outputs": [],
"source": [
"!huggingface-cli download TheBloke/Llama-2-7b-Chat-GGUF llama-2-7b-chat.Q8_0.gguf --local-dir . --local-dir-use-symlinks False"
]
},
{
"cell_type": "markdown",
"id": "3895b1f5-f51d-4539-abf0-af33d7ca48ea",
"metadata": {},
"source": [
"**Import langchain components required to load downloaded LLMs model**"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "fb087088-aa62-44c0-8356-061e9b9f1186",
"metadata": {},
"outputs": [],
"source": [
"from langchain.callbacks.manager import CallbackManager\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain_community.llms import LlamaCpp"
]
},
{
"cell_type": "markdown",
"id": "5a8a111e-2614-4b70-b034-85cd3e7304cb",
"metadata": {},
"source": [
"**Loading the local Lllama-2 model using Llama-cpp library**"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "fb917da2-c0d7-4995-b56d-26254276e0da",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from llama-2-7b-chat.Q8_0.gguf (version GGUF V2)\n",
"llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.\n",
"llama_model_loader: - kv 0: general.architecture str = llama\n",
"llama_model_loader: - kv 1: general.name str = LLaMA v2\n",
"llama_model_loader: - kv 2: llama.context_length u32 = 4096\n",
"llama_model_loader: - kv 3: llama.embedding_length u32 = 4096\n",
"llama_model_loader: - kv 4: llama.block_count u32 = 32\n",
"llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008\n",
"llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128\n",
"llama_model_loader: - kv 7: llama.attention.head_count u32 = 32\n",
"llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32\n",
"llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000001\n",
"llama_model_loader: - kv 10: general.file_type u32 = 7\n",
"llama_model_loader: - kv 11: tokenizer.ggml.model str = llama\n",
"llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = [\"<unk>\", \"<s>\", \"</s>\", \"<0x00>\", \"<...\n",
"llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...\n",
"llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...\n",
"llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32 = 1\n",
"llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 2\n",
"llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32 = 0\n",
"llama_model_loader: - kv 18: general.quantization_version u32 = 2\n",
"llama_model_loader: - type f32: 65 tensors\n",
"llama_model_loader: - type q8_0: 226 tensors\n",
"llm_load_vocab: special tokens cache size = 259\n",
"llm_load_vocab: token to piece cache size = 0.1684 MB\n",
"llm_load_print_meta: format = GGUF V2\n",
"llm_load_print_meta: arch = llama\n",
"llm_load_print_meta: vocab type = SPM\n",
"llm_load_print_meta: n_vocab = 32000\n",
"llm_load_print_meta: n_merges = 0\n",
"llm_load_print_meta: vocab_only = 0\n",
"llm_load_print_meta: n_ctx_train = 4096\n",
"llm_load_print_meta: n_embd = 4096\n",
"llm_load_print_meta: n_layer = 32\n",
"llm_load_print_meta: n_head = 32\n",
"llm_load_print_meta: n_head_kv = 32\n",
"llm_load_print_meta: n_rot = 128\n",
"llm_load_print_meta: n_swa = 0\n",
"llm_load_print_meta: n_embd_head_k = 128\n",
"llm_load_print_meta: n_embd_head_v = 128\n",
"llm_load_print_meta: n_gqa = 1\n",
"llm_load_print_meta: n_embd_k_gqa = 4096\n",
"llm_load_print_meta: n_embd_v_gqa = 4096\n",
"llm_load_print_meta: f_norm_eps = 0.0e+00\n",
"llm_load_print_meta: f_norm_rms_eps = 1.0e-06\n",
"llm_load_print_meta: f_clamp_kqv = 0.0e+00\n",
"llm_load_print_meta: f_max_alibi_bias = 0.0e+00\n",
"llm_load_print_meta: f_logit_scale = 0.0e+00\n",
"llm_load_print_meta: n_ff = 11008\n",
"llm_load_print_meta: n_expert = 0\n",
"llm_load_print_meta: n_expert_used = 0\n",
"llm_load_print_meta: causal attn = 1\n",
"llm_load_print_meta: pooling type = 0\n",
"llm_load_print_meta: rope type = 0\n",
"llm_load_print_meta: rope scaling = linear\n",
"llm_load_print_meta: freq_base_train = 10000.0\n",
"llm_load_print_meta: freq_scale_train = 1\n",
"llm_load_print_meta: n_ctx_orig_yarn = 4096\n",
"llm_load_print_meta: rope_finetuned = unknown\n",
"llm_load_print_meta: ssm_d_conv = 0\n",
"llm_load_print_meta: ssm_d_inner = 0\n",
"llm_load_print_meta: ssm_d_state = 0\n",
"llm_load_print_meta: ssm_dt_rank = 0\n",
"llm_load_print_meta: model type = 7B\n",
"llm_load_print_meta: model ftype = Q8_0\n",
"llm_load_print_meta: model params = 6.74 B\n",
"llm_load_print_meta: model size = 6.67 GiB (8.50 BPW) \n",
"llm_load_print_meta: general.name = LLaMA v2\n",
"llm_load_print_meta: BOS token = 1 '<s>'\n",
"llm_load_print_meta: EOS token = 2 '</s>'\n",
"llm_load_print_meta: UNK token = 0 '<unk>'\n",
"llm_load_print_meta: LF token = 13 '<0x0A>'\n",
"llm_load_print_meta: max token length = 48\n",
"llm_load_tensors: ggml ctx size = 0.14 MiB\n",
"llm_load_tensors: CPU buffer size = 6828.64 MiB\n",
"...................................................................................................\n",
"llama_new_context_with_model: n_ctx = 2048\n",
"llama_new_context_with_model: n_batch = 512\n",
"llama_new_context_with_model: n_ubatch = 512\n",
"llama_new_context_with_model: flash_attn = 0\n",
"llama_new_context_with_model: freq_base = 10000.0\n",
"llama_new_context_with_model: freq_scale = 1\n",
"llama_kv_cache_init: CPU KV buffer size = 1024.00 MiB\n",
"llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB\n",
"llama_new_context_with_model: CPU output buffer size = 0.12 MiB\n",
"llama_new_context_with_model: CPU compute buffer size = 164.01 MiB\n",
"llama_new_context_with_model: graph nodes = 1030\n",
"llama_new_context_with_model: graph splits = 1\n",
"AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | \n",
"Model metadata: {'tokenizer.ggml.unknown_token_id': '0', 'tokenizer.ggml.eos_token_id': '2', 'general.architecture': 'llama', 'llama.context_length': '4096', 'general.name': 'LLaMA v2', 'llama.embedding_length': '4096', 'llama.feed_forward_length': '11008', 'llama.attention.layer_norm_rms_epsilon': '0.000001', 'llama.rope.dimension_count': '128', 'llama.attention.head_count': '32', 'tokenizer.ggml.bos_token_id': '1', 'llama.block_count': '32', 'llama.attention.head_count_kv': '32', 'general.quantization_version': '2', 'tokenizer.ggml.model': 'llama', 'general.file_type': '7'}\n",
"Using fallback chat format: llama-2\n"
]
}
],
"source": [
"llm = LlamaCpp(\n",
" model_path=\"llama-2-7b-chat.Q8_0.gguf\",\n",
" n_gpu_layers=-1,\n",
" n_batch=512,\n",
" n_ctx=2048,\n",
" f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls\n",
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
" verbose=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "43e06f56-ef97-451b-87d9-8465ea442aed",
"metadata": {},
"source": [
"**Now let's ask the same question to Llama model without showing them the earnings release.**"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "1033dd82-5532-437d-a548-27695e109589",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"?\n",
"(NASDAQ:INTC)\n",
"Intel's CCG (Client Computing Group) revenue for Q1 2024 was $9.6 billion, a decrease of 35% from the previous quarter and a decrease of 42% from the same period last year."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 131.20 ms\n",
"llama_print_timings: sample time = 16.05 ms / 68 runs ( 0.24 ms per token, 4236.76 tokens per second)\n",
"llama_print_timings: prompt eval time = 131.14 ms / 16 tokens ( 8.20 ms per token, 122.01 tokens per second)\n",
"llama_print_timings: eval time = 3225.00 ms / 67 runs ( 48.13 ms per token, 20.78 tokens per second)\n",
"llama_print_timings: total time = 3466.40 ms / 83 tokens\n"
]
},
{
"data": {
"text/plain": [
"\"?\\n(NASDAQ:INTC)\\nIntel's CCG (Client Computing Group) revenue for Q1 2024 was $9.6 billion, a decrease of 35% from the previous quarter and a decrease of 42% from the same period last year.\""
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.invoke(question)"
]
},
{
"cell_type": "markdown",
"id": "75f5cb10-746f-4e37-9386-b85a4d2b84ef",
"metadata": {},
"source": [
"**As you can see, model is giving wrong information. Correct asnwer is CCG revenue in Q1 2024 is $7.5B. Now let's apply RAG using the earning release document**"
]
},
{
"cell_type": "markdown",
"id": "0f4150ec-5692-4756-b11a-22feb7ab88ff",
"metadata": {},
"source": [
"**in RAG, we modify the input prompt by adding relevent documents with the question. Here, we use one of the popular RAG prompt**"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "226c14b0-f43e-4a1f-a1e4-04731d467ec4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context', 'question'], template=\"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\\nQuestion: {question} \\nContext: {context} \\nAnswer:\"))]"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import hub\n",
"\n",
"rag_prompt = hub.pull(\"rlm/rag-prompt\")\n",
"rag_prompt.messages"
]
},
{
"cell_type": "markdown",
"id": "77deb6a0-0950-450a-916a-f2a029676c20",
"metadata": {},
"source": [
"**Appending all retreived documents in a single document**"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "2dbc3327-6ef3-4c1f-8797-0c71964b0921",
"metadata": {},
"outputs": [],
"source": [
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)"
]
},
{
"cell_type": "markdown",
"id": "2e2d9f18-49d0-43a3-bea8-78746ffa86b7",
"metadata": {},
"source": [
"**The last step is to create a chain using langchain tool that will create an e2e pipeline. It will take question and context as an input.**"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "427379c2-51ff-4e0f-8278-a45221363299",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough, RunnablePick\n",
"\n",
"# Chain\n",
"chain = (\n",
" RunnablePassthrough.assign(context=RunnablePick(\"context\") | format_docs)\n",
" | rag_prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "095d6280-c949-4d00-8e32-8895a82d245f",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" Based on the provided context, Intel CCG revenue in Q1 2024 was $7.5 billion up 31%."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 131.20 ms\n",
"llama_print_timings: sample time = 7.74 ms / 31 runs ( 0.25 ms per token, 4004.13 tokens per second)\n",
"llama_print_timings: prompt eval time = 2529.41 ms / 674 tokens ( 3.75 ms per token, 266.46 tokens per second)\n",
"llama_print_timings: eval time = 1542.94 ms / 30 runs ( 51.43 ms per token, 19.44 tokens per second)\n",
"llama_print_timings: total time = 4123.68 ms / 704 tokens\n"
]
},
{
"data": {
"text/plain": [
"' Based on the provided context, Intel CCG revenue in Q1 2024 was $7.5 billion up 31%.'"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"context\": docs, \"question\": question})"
]
},
{
"cell_type": "markdown",
"id": "638364b2-6bd2-4471-9961-d3a1d1b9d4ee",
"metadata": {},
"source": [
"**Now we see the results are correct as it is mentioned in earnings release.** <br>\n",
"**To further automate, we will create a chain that will take input as question and retriever so that we don't need to retrieve documents seperately**"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "4654e5b7-635f-4767-8b31-4c430164cdd5",
"metadata": {},
"outputs": [],
"source": [
"retriever = vectorstore.as_retriever()\n",
"qa_chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | rag_prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "0979f393-fd0a-4e82-b844-68371c6ad68f",
"metadata": {},
"source": [
"**Now we only need to pass the question to the chain and it will fetch the contexts directly from the vector database to generate the answer**\n",
"<br>\n",
"**Let's try with another question**"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "3ea07b82-e6ec-4084-85f4-191373530172",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" According to the provided context, Intel DCAI revenue in Q1 2024 was $3.0 billion up 5%."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 131.20 ms\n",
"llama_print_timings: sample time = 6.28 ms / 31 runs ( 0.20 ms per token, 4937.88 tokens per second)\n",
"llama_print_timings: prompt eval time = 2681.93 ms / 730 tokens ( 3.67 ms per token, 272.19 tokens per second)\n",
"llama_print_timings: eval time = 1471.07 ms / 30 runs ( 49.04 ms per token, 20.39 tokens per second)\n",
"llama_print_timings: total time = 4206.77 ms / 760 tokens\n"
]
},
{
"data": {
"text/plain": [
"' According to the provided context, Intel DCAI revenue in Q1 2024 was $3.0 billion up 5%.'"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"qa_chain.invoke(\"what is Intel DCAI revenue in Q1 2024?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9407f2a0-4a35-4315-8e96-02fcb80f210c",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "rag-on-intel",
"language": "python",
"name": "rag-on-intel"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -36,9 +36,7 @@
"\n",
"docs = loader.load()\n",
"\n",
"vectorstore = DocArrayInMemorySearch.from_documents(\n",
" docs, embedding=UpstageEmbeddings(model=\"solar-embedding-1-large\")\n",
")\n",
"vectorstore = DocArrayInMemorySearch.from_documents(docs, embedding=UpstageEmbeddings())\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",

View File

@@ -36,13 +36,15 @@
"from bs4 import BeautifulSoup as Soup\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryByteStore, LocalFileStore\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.document_loaders.recursive_url_loader import (\n",
" RecursiveUrlLoader,\n",
")\n",
"\n",
"# noqa\n",
"from langchain_community.vectorstores import Chroma\n",
"\n",
"# For our example, we'll load docs from the web\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter # noqa\n",
"\n",
"DOCSTORE_DIR = \".\"\n",
"DOCSTORE_ID_KEY = \"doc_id\""
@@ -370,14 +372,13 @@
],
"source": [
"import torch\n",
"from langchain_huggingface.llms import HuggingFacePipeline\n",
"from optimum.intel.ipex import IPEXModelForCausalLM\n",
"from transformers import AutoTokenizer, pipeline\n",
"from langchain.llms.huggingface_pipeline import HuggingFacePipeline\n",
"from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n",
"\n",
"model_id = \"Intel/neural-chat-7b-v3-3\"\n",
"tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
"model = IPEXModelForCausalLM.from_pretrained(\n",
" model_id, torch_dtype=torch.bfloat16, export=True\n",
"model = AutoModelForCausalLM.from_pretrained(\n",
" model_id, device_map=\"auto\", torch_dtype=torch.bfloat16\n",
")\n",
"\n",
"pipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer, max_new_tokens=100)\n",
@@ -582,7 +583,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.14"
"version": "3.9.18"
}
},
"nbformat": 4,

View File

@@ -647,7 +647,7 @@ Sometimes you may not have the luxury of using OpenAI or other service-hosted la
import logging
import torch
from transformers import AutoTokenizer, GPT2TokenizerFast, pipeline, AutoModelForSeq2SeqLM, AutoModelForCausalLM
from langchain_huggingface import HuggingFacePipeline
from langchain_community.llms import HuggingFacePipeline
# Note: This model requires a large GPU, e.g. an 80GB A100. See documentation for other ways to run private non-OpenAI models.
model_id = "google/flan-ul2"
@@ -740,7 +740,7 @@ Even this relatively large model will most likely fail to generate more complica
```bash
poetry run pip install pyyaml langchain_chroma
poetry run pip install pyyaml chromadb
import yaml
```
@@ -992,9 +992,9 @@ Now that you have some examples (with manually corrected output SQL), you can do
```python
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.chains.sql_database.prompt import _sqlite_prompt, PROMPT_SUFFIX
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_community.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.prompts.example_selector.semantic_similarity import SemanticSimilarityExampleSelector
from langchain_chroma import Chroma
from langchain_community.vectorstores import Chroma
example_prompt = PromptTemplate(
input_variables=["table_info", "input", "sql_cmd", "sql_result", "answer"],
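# The hunk above is truncated by the compare view. As orientation, here is a hedged
# sketch of how these few-shot pieces typically fit together; the example row, field
# names, and prompt text are illustrative only, and either of the two import paths
# shown in the diff should work.
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.prompts.example_selector.semantic_similarity import (
    SemanticSimilarityExampleSelector,
)
from langchain_chroma import Chroma
from langchain_huggingface import HuggingFaceEmbeddings

# How each stored example is rendered inside the prompt (illustrative fields).
example_prompt = PromptTemplate(
    input_variables=["input", "sql_cmd", "answer"],
    template="Question: {input}\nSQL: {sql_cmd}\nAnswer: {answer}",
)

# A single made-up example row; in practice these are the manually corrected SQL pairs.
examples = [
    {
        "input": "How many albums are there?",
        "sql_cmd": "SELECT COUNT(*) FROM Album;",
        "answer": "There are 347 albums.",
    }
]

# Index the examples in Chroma and pick the k most similar ones per question.
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples,
    HuggingFaceEmbeddings(),
    Chroma,
    k=2,
)

few_shot_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="You are a SQLite expert. Use the examples below as guidance.",
    suffix="Question: {input}\nSQL:",
    input_variables=["input"],
)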

View File

@@ -22,7 +22,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install --quiet pypdf tiktoken openai langchain-chroma langchain-together"
"! pip install --quiet pypdf chromadb tiktoken openai langchain-together"
]
},
{
@@ -45,8 +45,8 @@
"all_splits = text_splitter.split_documents(data)\n",
"\n",
"# Add to vectorDB\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.embeddings import OpenAIEmbeddings\n",
"from langchain_community.vectorstores import Chroma\n",
"\n",
"\"\"\"\n",
"from langchain_together.embeddings import TogetherEmbeddings\n",

File diff suppressed because one or more lines are too long

View File

@@ -13,7 +13,7 @@ OUTPUT_NEW_DOCS_DIR = $(OUTPUT_NEW_DIR)/docs
PYTHON = .venv/bin/python
PARTNER_DEPS_LIST := $(shell find ../libs/partners -mindepth 1 -maxdepth 1 -type d -exec test -e "{}/pyproject.toml" \; -print | grep -vE "airbyte|ibm|couchbase" | tr '\n' ' ')
PARTNER_DEPS_LIST := $(shell find ../libs/partners -mindepth 1 -maxdepth 1 -type d -exec test -e "{}/pyproject.toml" \; -print | grep -vE "airbyte|ibm|ai21" | tr '\n' ' ')
PORT ?= 3001
@@ -35,11 +35,11 @@ generate-files:
mkdir -p $(INTERMEDIATE_DIR)
cp -r $(SOURCE_DIR)/* $(INTERMEDIATE_DIR)
mkdir -p $(INTERMEDIATE_DIR)/templates
cp ../templates/docs/INDEX.md $(INTERMEDIATE_DIR)/templates/index.md
cp ../cookbook/README.md $(INTERMEDIATE_DIR)/cookbook.mdx
$(PYTHON) scripts/model_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/document_loader_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/copy_templates.py $(INTERMEDIATE_DIR)
wget -q https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O $(INTERMEDIATE_DIR)/langserve.md
@@ -61,22 +61,22 @@ render:
$(PYTHON) scripts/notebook_convert.py $(INTERMEDIATE_DIR) $(OUTPUT_NEW_DOCS_DIR)
md-sync:
rsync -avm --include="*/" --include="*.mdx" --include="*.md" --include="*.png" --include="*/_category_.yml" --exclude="*" $(INTERMEDIATE_DIR)/ $(OUTPUT_NEW_DOCS_DIR)
rsync -avm --include="*/" --include="*.mdx" --include="*.md" --include="*.png" --exclude="*" $(INTERMEDIATE_DIR)/ $(OUTPUT_NEW_DOCS_DIR)
generate-references:
$(PYTHON) scripts/generate_api_reference_links.py --docs_dir $(OUTPUT_NEW_DOCS_DIR)
build: install-py-deps generate-files copy-infra render md-sync
build: install-py-deps generate-files copy-infra render md-sync generate-references
vercel-build: install-vercel-deps build generate-references
vercel-build: install-vercel-deps build
rm -rf docs
mv $(OUTPUT_NEW_DOCS_DIR) docs
rm -rf build
yarn run docusaurus build
mv build v0.2
mv build v0.1
mkdir build
mv v0.2 build
mv build/v0.2/404.html build
mv v0.1 build
mv build/v0.1/404.html build
start:
cd $(OUTPUT_NEW_DIR) && yarn && yarn start --port=$(PORT)

View File

@@ -12,8 +12,7 @@ pre {
}
}
#my-component-root *,
#headlessui-portal-root * {
#my-component-root *, #headlessui-portal-root * {
z-index: 10000;
}

View File

@@ -178,10 +178,3 @@ autosummary_generate = True
html_copy_source = False
html_show_sourcelink = False
# Set canonical URL from the Read the Docs Domain
html_baseurl = os.environ.get("READTHEDOCS_CANONICAL_URL", "")
# Tell Jinja2 templates the build is running on Read the Docs
if os.environ.get("READTHEDOCS", "") == "True":
html_context["READTHEDOCS"] = True

View File

@@ -10,21 +10,12 @@ from pathlib import Path
from typing import Dict, List, Literal, Optional, Sequence, TypedDict, Union
import toml
import typing_extensions
from langchain_core.runnables import Runnable, RunnableSerializable
from pydantic import BaseModel
ROOT_DIR = Path(__file__).parents[2].absolute()
HERE = Path(__file__).parent
ClassKind = Literal[
"TypedDict",
"Regular",
"Pydantic",
"enum",
"RunnablePydantic",
"RunnableNonPydantic",
]
ClassKind = Literal["TypedDict", "Regular", "Pydantic", "enum"]
class ClassInfo(TypedDict):
@@ -78,36 +69,8 @@ def _load_module_members(module_path: str, namespace: str) -> ModuleMembers:
continue
if inspect.isclass(type_):
# The type of the class is used to select a template
# for the object when rendering the documentation.
# See `templates` directory for defined templates.
# This is a hacky solution to distinguish between different
# kinds of thing that we want to render.
if type(type_) is typing_extensions._TypedDictMeta: # type: ignore
if type(type_) == typing._TypedDictMeta: # type: ignore
kind: ClassKind = "TypedDict"
elif type(type_) is typing._TypedDictMeta: # type: ignore
kind: ClassKind = "TypedDict"
elif (
issubclass(type_, Runnable)
and issubclass(type_, BaseModel)
and type_ is not Runnable
):
# RunnableSerializable subclasses from Pydantic which
# for which we use autodoc_pydantic for rendering.
# We need to distinguish these from regular Pydantic
# classes so we can hide inherited Runnable methods
# and provide a link to the Runnable interface from
# the template.
kind = "RunnablePydantic"
elif (
issubclass(type_, Runnable)
and not issubclass(type_, BaseModel)
and type_ is not Runnable
):
# These are not pydantic classes but are Runnable.
# We'll hide all the inherited methods from Runnable
# but use a regular class template to render.
kind = "RunnableNonPydantic"
elif issubclass(type_, Enum):
kind = "enum"
elif issubclass(type_, BaseModel):
@@ -165,11 +128,11 @@ def _load_package_modules(
of the modules/packages are part of the package vs. 3rd party or built-in.
Parameters:
package_directory (Union[str, Path]): Path to the package directory.
submodule (Optional[str]): Optional name of submodule to load.
package_directory: Path to the package directory.
submodule: Optional name of submodule to load.
Returns:
Dict[str, ModuleMembers]: A dictionary where keys are module names and values are ModuleMembers objects.
list: A list of loaded module objects.
"""
package_path = (
Path(package_directory)
@@ -224,7 +187,7 @@ def _load_package_modules(
modules_by_namespace[top_namespace] = _module_members
except ImportError as e:
print(f"Error: Unable to import module '{namespace}' with error: {e}")
print(f"Error: Unable to import module '{namespace}' with error: {e}") # noqa: T201
return modules_by_namespace
@@ -288,10 +251,6 @@ Classes
template = "enum.rst"
elif class_["kind"] == "Pydantic":
template = "pydantic.rst"
elif class_["kind"] == "RunnablePydantic":
template = "runnable_pydantic.rst"
elif class_["kind"] == "RunnableNonPydantic":
template = "runnable_non_pydantic.rst"
else:
template = "class.rst"
@@ -400,14 +359,9 @@ def main(dirs: Optional[list] = None) -> None:
dirs = [
dir_
for dir_ in os.listdir(ROOT_DIR / "libs")
if dir_ not in ("cli", "partners", "standard-tests")
]
dirs += [
dir_
for dir_ in os.listdir(ROOT_DIR / "libs" / "partners")
if os.path.isdir(ROOT_DIR / "libs" / "partners" / dir_)
and "pyproject.toml" in os.listdir(ROOT_DIR / "libs" / "partners" / dir_)
if dir_ not in ("cli", "partners")
]
dirs += os.listdir(ROOT_DIR / "libs" / "partners")
for dir_ in dirs:
# Skip any hidden directories
# Some of these could be present by mistake in the code base

File diff suppressed because one or more lines are too long

View File

@@ -33,4 +33,4 @@
{% endblock %}
.. example_links:: {{ objname }}
.. example_links:: {{ objname }}

View File

@@ -15,8 +15,6 @@
:member-order: groupwise
:show-inheritance: True
:special-members: __call__
:exclude-members: construct, copy, dict, from_orm, parse_file, parse_obj, parse_raw, schema, schema_json, update_forward_refs, validate, json, is_lc_serializable, to_json, to_json_not_implemented, lc_secrets, lc_attributes, lc_id, get_lc_namespace
{% block attributes %}
{% endblock %}

View File

@@ -1,40 +0,0 @@
:mod:`{{module}}`.{{objname}}
{{ underline }}==============
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃
The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
.. currentmodule:: {{ module }}
.. autoclass:: {{ objname }}
{% block attributes %}
{% if attributes %}
.. rubric:: {{ _('Attributes') }}
.. autosummary::
{% for item in attributes %}
~{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block methods %}
{% if methods %}
.. rubric:: {{ _('Methods') }}
.. autosummary::
{% for item in methods %}
~{{ name }}.{{ item }}
{%- endfor %}
{% for item in methods %}
.. automethod:: {{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
.. example_links:: {{ objname }}

View File

@@ -1,24 +0,0 @@
:mod:`{{module}}`.{{objname}}
{{ underline }}==============
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃
The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
.. currentmodule:: {{ module }}
.. autopydantic_model:: {{ objname }}
:model-show-json: False
:model-show-config-summary: False
:model-show-validator-members: False
:model-show-field-summary: False
:field-signature-prefix: param
:members:
:undoc-members:
:inherited-members:
:member-order: groupwise
:show-inheritance: True
:special-members: __call__
:exclude-members: construct, copy, dict, from_orm, parse_file, parse_obj, parse_raw, schema, schema_json, update_forward_refs, validate, json, is_lc_serializable, to_json_not_implemented, lc_secrets, lc_attributes, lc_id, get_lc_namespace, astream_log, transform, atransform, get_output_schema, get_prompts, config_schema, map, pick, pipe, with_listeners, with_alisteners, with_config, with_fallbacks, with_types, with_retry, InputType, OutputType, config_specs, output_schema, get_input_schema, get_graph, get_name, input_schema, name, bind, assign
.. example_links:: {{ objname }}

View File

@@ -2,129 +2,132 @@
{%- set url_root = pathto('', 1) %}
{%- if url_root == '#' %}{% set url_root = '' %}{% endif %}
{%- if not embedded and docstitle %}
{%- set titlesuffix = " &mdash; "|safe + docstitle|e %}
{%- set titlesuffix = " &mdash; "|safe + docstitle|e %}
{%- else %}
{%- set titlesuffix = "" %}
{%- set titlesuffix = "" %}
{%- endif %}
{%- set lang_attr = 'en' %}
<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="{{ lang_attr }}" > <![endif]-->
<!--[if gt IE 8]><!-->
<html class="no-js" lang="{{ lang_attr }}"> <!--<![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="{{ lang_attr }}" > <!--<![endif]-->
<head>
<meta charset="utf-8">
{{ metatags }}
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta charset="utf-8">
{{ metatags }}
<meta name="viewport" content="width=device-width, initial-scale=1.0">
{% block htmltitle %}
<title>{{ title|striptags|e }}{{ titlesuffix }}</title>
{% endblock %}
<link rel="canonical"
href="https://api.python.langchain.com/en/latest/{{ pagename }}.html"/>
{% block htmltitle %}
<title>{{ title|striptags|e }}{{ titlesuffix }}</title>
{% endblock %}
<link rel="canonical" href="https://api.python.langchain.com/en/latest/{{pagename}}.html" />
{% if favicon_url %}
<link rel="shortcut icon" href="{{ favicon_url|e }}"/>
{% endif %}
{% if favicon_url %}
<link rel="shortcut icon" href="{{ favicon_url|e }}"/>
{% endif %}
<link rel="stylesheet"
href="{{ pathto('_static/css/vendor/bootstrap.min.css', 1) }}"
type="text/css"/>
{%- for css in css_files %}
{%- if css|attr("rel") %}
<link rel="{{ css.rel }}" href="{{ pathto(css.filename, 1) }}"
type="text/css"{% if css.title is not none %}
title="{{ css.title }}"{% endif %} />
{%- else %}
<link rel="stylesheet" href="{{ pathto(css, 1) }}" type="text/css"/>
{%- endif %}
{%- endfor %}
<link rel="stylesheet" href="{{ pathto('_static/' + style, 1) }}" type="text/css"/>
<script id="documentation_options" data-url_root="{{ pathto('', 1) }}"
src="{{ pathto('_static/documentation_options.js', 1) }}"></script>
<script src="{{ pathto('_static/jquery.js', 1) }}"></script>
{%- block extrahead %} {% endblock %}
<link rel="stylesheet" href="{{ pathto('_static/css/vendor/bootstrap.min.css', 1) }}" type="text/css" />
{%- for css in css_files %}
{%- if css|attr("rel") %}
<link rel="{{ css.rel }}" href="{{ pathto(css.filename, 1) }}" type="text/css"{% if css.title is not none %} title="{{ css.title }}"{% endif %} />
{%- else %}
<link rel="stylesheet" href="{{ pathto(css, 1) }}" type="text/css" />
{%- endif %}
{%- endfor %}
<link rel="stylesheet" href="{{ pathto('_static/' + style, 1) }}" type="text/css" />
<script id="documentation_options" data-url_root="{{ pathto('', 1) }}" src="{{ pathto('_static/documentation_options.js', 1) }}"></script>
<script src="{{ pathto('_static/jquery.js', 1) }}"></script>
{%- block extrahead %} {% endblock %}
</head>
<body>
{% include "nav.html" %}
{%- block content %}
<div class="d-flex" id="sk-doc-wrapper">
<input type="checkbox" name="sk-toggle-checkbox" id="sk-toggle-checkbox">
<label id="sk-sidemenu-toggle" class="sk-btn-toggle-toc btn sk-btn-primary"
for="sk-toggle-checkbox">Toggle Menu</label>
<div id="sk-sidebar-wrapper" class="border-right">
<div class="sk-sidebar-toc-wrapper">
{%- if meta and meta['parenttoc']|tobool %}
<div class="sk-sidebar-toc">
{% set nav = get_nav_object(maxdepth=3, collapse=True, numbered=True) %}
<ul>
{% for main_nav_item in nav %}
{% if main_nav_item.active %}
<li>
<a href="{{ main_nav_item.url }}"
class="sk-toc-active">{{ main_nav_item.title }}</a>
</li>
<ul>
{% for nav_item in main_nav_item.children %}
<li>
<a href="{{ nav_item.url }}"
class="{% if nav_item.active %}sk-toc-active{% endif %}">{{ nav_item.title }}</a>
{% if nav_item.children %}
<ul>
{% for inner_child in nav_item.children %}
<li class="sk-toctree-l3">
<a href="{{ inner_child.url }}">{{ inner_child.title }}</a>
</li>
{% endfor %}
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
{% endif %}
{% endfor %}
</ul>
</div>
{%- elif meta and meta['globalsidebartoc']|tobool %}
<div class="sk-sidebar-toc sk-sidebar-global-toc">
{{ toctree(maxdepth=2, titles_only=True) }}
</div>
{%- else %}
<div class="sk-sidebar-toc">
{{ toc }}
</div>
{%- endif %}
</div>
<div class="d-flex" id="sk-doc-wrapper">
<input type="checkbox" name="sk-toggle-checkbox" id="sk-toggle-checkbox">
<label id="sk-sidemenu-toggle" class="sk-btn-toggle-toc btn sk-btn-primary" for="sk-toggle-checkbox">Toggle Menu</label>
<div id="sk-sidebar-wrapper" class="border-right">
<div class="sk-sidebar-toc-wrapper">
<div class="btn-group w-100 mb-2" role="group" aria-label="rellinks">
{%- if prev %}
<a href="{{ prev.link|e }}" role="button" class="btn sk-btn-rellink py-1" sk-rellink-tooltip="{{ prev.title|striptags }}">Prev</a>
{%- else %}
<a href="#" role="button" class="btn sk-btn-rellink py-1 disabled"">Prev</a>
{%- endif %}
{%- if parents -%}
<a href="{{ parents[-1].link|e }}" role="button" class="btn sk-btn-rellink py-1" sk-rellink-tooltip="{{ parents[-1].title|striptags }}">Up</a>
{%- else %}
<a href="#" role="button" class="btn sk-btn-rellink disabled py-1">Up</a>
{%- endif %}
{%- if next %}
<a href="{{ next.link|e }}" role="button" class="btn sk-btn-rellink py-1" sk-rellink-tooltip="{{ next.title|striptags }}">Next</a>
{%- else %}
<a href="#" role="button" class="btn sk-btn-rellink py-1 disabled"">Next</a>
{%- endif %}
</div>
<div id="sk-page-content-wrapper">
<div class="sk-page-content container-fluid body px-md-3" role="main">
{% block body %}{% endblock %}
{%- if meta and meta['parenttoc']|tobool %}
<div class="sk-sidebar-toc">
{% set nav = get_nav_object(maxdepth=3, collapse=True, numbered=True) %}
<ul>
{% for main_nav_item in nav %}
{% if main_nav_item.active %}
<li>
<a href="{{ main_nav_item.url }}" class="sk-toc-active">{{ main_nav_item.title }}</a>
</li>
<ul>
{% for nav_item in main_nav_item.children %}
<li>
<a href="{{ nav_item.url }}" class="{% if nav_item.active %}sk-toc-active{% endif %}">{{ nav_item.title }}</a>
{% if nav_item.children %}
<ul>
{% for inner_child in nav_item.children %}
<li class="sk-toctree-l3">
<a href="{{ inner_child.url }}">{{ inner_child.title }}</a>
</li>
{% endfor %}
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
{% endif %}
{% endfor %}
</ul>
</div>
<div class="container">
<footer class="sk-content-footer">
{%- if pagename != 'index' %}
{%- if show_copyright %}
{%- if hasdoc('copyright') %}
{% trans path=pathto('copyright'), copyright=copyright|e %}
&copy; {{ copyright }}.{% endtrans %}
{%- else %}
{% trans copyright=copyright|e %}&copy; {{ copyright }}
.{% endtrans %}
{%- endif %}
{%- endif %}
{%- if last_updated %}
{% trans last_updated=last_updated|e %}Last updated
on {{ last_updated }}.{% endtrans %}
{%- endif %}
{%- if show_source and has_source and sourcename %}
<a href="{{ pathto('_sources/' + sourcename, true)|e }}"
rel="nofollow">{{ _('Show this page source') }}</a>
{%- endif %}
{%- endif %}
</footer>
{%- elif meta and meta['globalsidebartoc']|tobool %}
<div class="sk-sidebar-toc sk-sidebar-global-toc">
{{ toctree(maxdepth=2, titles_only=True) }}
</div>
</div>
{%- else %}
<div class="sk-sidebar-toc">
{{ toc }}
</div>
{%- endif %}
</div>
</div>
<div id="sk-page-content-wrapper">
<div class="sk-page-content container-fluid body px-md-3" role="main">
{% block body %}{% endblock %}
</div>
<div class="container">
<footer class="sk-content-footer">
{%- if pagename != 'index' %}
{%- if show_copyright %}
{%- if hasdoc('copyright') %}
{% trans path=pathto('copyright'), copyright=copyright|e %}&copy; {{ copyright }}.{% endtrans %}
{%- else %}
{% trans copyright=copyright|e %}&copy; {{ copyright }}.{% endtrans %}
{%- endif %}
{%- endif %}
{%- if last_updated %}
{% trans last_updated=last_updated|e %}Last updated on {{ last_updated }}.{% endtrans %}
{%- endif %}
{%- if show_source and has_source and sourcename %}
<a href="{{ pathto('_sources/' + sourcename, true)|e }}" rel="nofollow">{{ _('Show this page source') }}</a>
{%- endif %}
{%- endif %}
</footer>
</div>
</div>
</div>
{%- endblock %}
<script src="{{ pathto('_static/js/vendor/bootstrap.min.js', 1) }}"></script>
{% include "javascript.html" %}

View File

@@ -1398,20 +1398,3 @@ table.sk-sponsor-table td {
.highlight .vi { color: #bb60d5 } /* Name.Variable.Instance */
.highlight .vm { color: #bb60d5 } /* Name.Variable.Magic */
.highlight .il { color: #208050 } /* Literal.Number.Integer.Long */
/** Custom styles overriding certain values */
div.sk-sidebar-toc-wrapper {
width: unset;
overflow-x: auto;
}
div.sk-sidebar-toc-wrapper > [aria-label="rellinks"] {
position: sticky;
left: 0;
}
.navbar-nav .dropdown-menu {
max-height: 80vh;
overflow-y: auto;
}

File diff suppressed because it is too large

View File

@@ -1,912 +0,0 @@
# arXiv
LangChain implements the latest research in the field of Natural Language Processing.
This page contains `arXiv` papers referenced in the LangChain Documentation, API Reference,
Templates, and Cookbooks.
In the other direction, scientists use LangChain in their research and reference it in their papers.
You can find [such papers](https://arxiv.org/search/?query=langchain&searchtype=all&source=header) here.
## Summary
| arXiv id / Title | Authors | Published date 🔻 | LangChain Documentation|
|------------------|---------|-------------------|------------------------|
| `2402.03620v1` [Self-Discover: Large Language Models Self-Compose Reasoning Structures](http://arxiv.org/abs/2402.03620v1) | Pei Zhou, Jay Pujara, Xiang Ren, et al. | 2024-02-06 | `Cookbook:` [self-discover](https://github.com/langchain-ai/langchain/blob/master/cookbook/self-discover.ipynb)
| `2401.18059v1` [RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval](http://arxiv.org/abs/2401.18059v1) | Parth Sarthi, Salman Abdullah, Aditi Tuli, et al. | 2024-01-31 | `Cookbook:` [RAPTOR](https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb)
| `2401.15884v2` [Corrective Retrieval Augmented Generation](http://arxiv.org/abs/2401.15884v2) | Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, et al. | 2024-01-29 | `Cookbook:` [langgraph_crag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_crag.ipynb)
| `2401.04088v1` [Mixtral of Experts](http://arxiv.org/abs/2401.04088v1) | Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, et al. | 2024-01-08 | `Cookbook:` [together_ai](https://github.com/langchain-ai/langchain/blob/master/cookbook/together_ai.ipynb)
| `2312.06648v2` [Dense X Retrieval: What Retrieval Granularity Should We Use?](http://arxiv.org/abs/2312.06648v2) | Tong Chen, Hongwei Wang, Sihao Chen, et al. | 2023-12-11 | `Template:` [propositional-retrieval](https://python.langchain.com/docs/templates/propositional-retrieval)
| `2311.09210v1` [Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models](http://arxiv.org/abs/2311.09210v1) | Wenhao Yu, Hongming Zhang, Xiaoman Pan, et al. | 2023-11-15 | `Template:` [chain-of-note-wiki](https://python.langchain.com/docs/templates/chain-of-note-wiki)
| `2310.11511v1` [Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection](http://arxiv.org/abs/2310.11511v1) | Akari Asai, Zeqiu Wu, Yizhong Wang, et al. | 2023-10-17 | `Cookbook:` [langgraph_self_rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_self_rag.ipynb)
| `2310.06117v2` [Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models](http://arxiv.org/abs/2310.06117v2) | Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, et al. | 2023-10-09 | `Template:` [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting), `Cookbook:` [stepback-qa](https://github.com/langchain-ai/langchain/blob/master/cookbook/stepback-qa.ipynb)
| `2307.09288v2` [Llama 2: Open Foundation and Fine-Tuned Chat Models](http://arxiv.org/abs/2307.09288v2) | Hugo Touvron, Louis Martin, Kevin Stone, et al. | 2023-07-18 | `Cookbook:` [Semi_Structured_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)
| `2305.14283v3` [Query Rewriting for Retrieval-Augmented Large Language Models](http://arxiv.org/abs/2305.14283v3) | Xinbei Ma, Yeyun Gong, Pengcheng He, et al. | 2023-05-23 | `Template:` [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read), `Cookbook:` [rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb)
| `2305.08291v1` [Large Language Model Guided Tree-of-Thought](http://arxiv.org/abs/2305.08291v1) | Jieyi Long | 2023-05-15 | `API:` [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot), `Cookbook:` [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
| `2305.04091v3` [Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models](http://arxiv.org/abs/2305.04091v3) | Lei Wang, Wanyu Xu, Yihuai Lan, et al. | 2023-05-06 | `Cookbook:` [plan_and_execute_agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb)
| `2304.08485v2` [Visual Instruction Tuning](http://arxiv.org/abs/2304.08485v2) | Haotian Liu, Chunyuan Li, Qingyang Wu, et al. | 2023-04-17 | `Cookbook:` [Semi_structured_and_multi_modal_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb), [Semi_structured_multi_modal_RAG_LLaMA2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb)
| `2304.03442v2` [Generative Agents: Interactive Simulacra of Human Behavior](http://arxiv.org/abs/2304.03442v2) | Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al. | 2023-04-07 | `Cookbook:` [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb), [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb)
| `2303.17760v2` [CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society](http://arxiv.org/abs/2303.17760v2) | Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al. | 2023-03-31 | `Cookbook:` [camel_role_playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb)
| `2303.17580v4` [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http://arxiv.org/abs/2303.17580v4) | Yongliang Shen, Kaitao Song, Xu Tan, et al. | 2023-03-30 | `API:` [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents), `Cookbook:` [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
| `2303.08774v6` [GPT-4 Technical Report](http://arxiv.org/abs/2303.08774v6) | OpenAI, Josh Achiam, Steven Adler, et al. | 2023-03-15 | `Docs:` [docs/integrations/vectorstores/mongodb_atlas](https://python.langchain.com/docs/integrations/vectorstores/mongodb_atlas)
| `2301.10226v4` [A Watermark for Large Language Models](http://arxiv.org/abs/2301.10226v4) | John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al. | 2023-01-24 | `API:` [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
| `2212.10496v1` [Precise Zero-Shot Dense Retrieval without Relevance Labels](http://arxiv.org/abs/2212.10496v1) | Luyu Gao, Xueguang Ma, Jimmy Lin, et al. | 2022-12-20 | `API:` [langchain...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder), `Template:` [hyde](https://python.langchain.com/docs/templates/hyde), `Cookbook:` [hypothetical_document_embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)
| `2212.07425v3` [Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments](http://arxiv.org/abs/2212.07425v3) | Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al. | 2022-12-12 | `API:` [langchain_experimental.fallacy_removal](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.fallacy_removal)
| `2211.13892v2` [Complementary Explanations for Effective In-Context Learning](http://arxiv.org/abs/2211.13892v2) | Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, et al. | 2022-11-25 | `API:` [langchain_core...MaxMarginalRelevanceExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html#langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector)
| `2211.10435v2` [PAL: Program-aided Language Models](http://arxiv.org/abs/2211.10435v2) | Luyu Gao, Aman Madaan, Shuyan Zhou, et al. | 2022-11-18 | `API:` [langchain_experimental...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain), `Cookbook:` [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
| `2210.03629v3` [ReAct: Synergizing Reasoning and Acting in Language Models](http://arxiv.org/abs/2210.03629v3) | Shunyu Yao, Jeffrey Zhao, Dian Yu, et al. | 2022-10-06 | `Docs:` [docs/integrations/providers/cohere](https://python.langchain.com/docs/integrations/providers/cohere), [docs/integrations/chat/huggingface](https://python.langchain.com/docs/integrations/chat/huggingface), [docs/integrations/tools/ionic_shopping](https://python.langchain.com/docs/integrations/tools/ionic_shopping), `API:` [langchain...create_react_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.react.agent.create_react_agent.html#langchain.agents.react.agent.create_react_agent), [langchain...TrajectoryEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html#langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain)
| `2209.10785v2` [Deep Lake: a Lakehouse for Deep Learning](http://arxiv.org/abs/2209.10785v2) | Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, et al. | 2022-09-22 | `Docs:` [docs/integrations/providers/activeloop_deeplake](https://python.langchain.com/docs/integrations/providers/activeloop_deeplake)
| `2205.12654v1` [Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages](http://arxiv.org/abs/2205.12654v1) | Kevin Heffernan, Onur Çelebi, Holger Schwenk | 2022-05-25 | `API:` [langchain_community...LaserEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.laser.LaserEmbeddings.html#langchain_community.embeddings.laser.LaserEmbeddings)
| `2204.00498v1` [Evaluating the Text-to-SQL Capabilities of Large Language Models](http://arxiv.org/abs/2204.00498v1) | Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau | 2022-03-15 | `API:` [langchain_community...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL), [langchain_community...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase)
| `2202.00666v5` [Locally Typical Sampling](http://arxiv.org/abs/2202.00666v5) | Clara Meister, Tiago Pimentel, Gian Wiher, et al. | 2022-02-01 | `API:` [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
| `2103.00020v1` [Learning Transferable Visual Models From Natural Language Supervision](http://arxiv.org/abs/2103.00020v1) | Alec Radford, Jong Wook Kim, Chris Hallacy, et al. | 2021-02-26 | `API:` [langchain_experimental.open_clip](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.open_clip)
| `1909.05858v2` [CTRL: A Conditional Transformer Language Model for Controllable Generation](http://arxiv.org/abs/1909.05858v2) | Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al. | 2019-09-11 | `API:` [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
| `1908.10084v1` [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](http://arxiv.org/abs/1908.10084v1) | Nils Reimers, Iryna Gurevych | 2019-08-27 | `Docs:` [docs/integrations/text_embedding/sentence_transformers](https://python.langchain.com/docs/integrations/text_embedding/sentence_transformers)
## Self-Discover: Large Language Models Self-Compose Reasoning Structures
- **arXiv id:** 2402.03620v1
- **Title:** Self-Discover: Large Language Models Self-Compose Reasoning Structures
- **Authors:** Pei Zhou, Jay Pujara, Xiang Ren, et al.
- **Published Date:** 2024-02-06
- **URL:** http://arxiv.org/abs/2402.03620v1
- **LangChain:**
- **Cookbook:** [self-discover](https://github.com/langchain-ai/langchain/blob/master/cookbook/self-discover.ipynb)
**Abstract:** We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the
task-intrinsic reasoning structures to tackle complex reasoning problems that
are challenging for typical prompting methods. Core to the framework is a
self-discovery process where LLMs select multiple atomic reasoning modules such
as critical thinking and step-by-step thinking, and compose them into an
explicit reasoning structure for LLMs to follow during decoding. SELF-DISCOVER
substantially improves GPT-4 and PaLM 2's performance on challenging reasoning
benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as
much as 32% compared to Chain of Thought (CoT). Furthermore, SELF-DISCOVER
outperforms inference-intensive methods such as CoT-Self-Consistency by more
than 20%, while requiring 10-40x less inference compute. Finally, we show that
the self-discovered reasoning structures are universally applicable across
model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share
commonalities with human reasoning patterns.
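
The linked self-discover cookbook wires these stages together with plain LCEL. Below is a minimal, hedged sketch of the same four-stage flow (select, adapt, implement, solve); the prompts, reasoning-module list, model name, and task are illustrative rather than the paper's exact wording.
```python
# Minimal sketch of the SELF-DISCOVER stages using LCEL.
# Prompts, the reasoning-module list, and the model are illustrative only.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4", temperature=0)

def stage(template: str):
    return ChatPromptTemplate.from_template(template) | llm | StrOutputParser()

select = stage("Select the reasoning modules useful for this task:\n{modules}\nTask: {task}")
adapt = stage("Adapt the selected modules to the task:\n{selected}\nTask: {task}")
implement = stage("Turn the adapted modules into a step-by-step reasoning structure:\n{adapted}\nTask: {task}")
solve = stage("Follow this reasoning structure to solve the task:\n{plan}\nTask: {task}")

modules = "1. Critical thinking\n2. Step-by-step thinking\n3. Break the problem into sub-problems"
task = "Lisa has 10 apples, gives away 3, and then buys 5 more. How many apples does she have?"

selected = select.invoke({"modules": modules, "task": task})
adapted = adapt.invoke({"selected": selected, "task": task})
plan = implement.invoke({"adapted": adapted, "task": task})
print(solve.invoke({"plan": plan, "task": task}))
```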
## RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
- **arXiv id:** 2401.18059v1
- **Title:** RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
- **Authors:** Parth Sarthi, Salman Abdullah, Aditi Tuli, et al.
- **Published Date:** 2024-01-31
- **URL:** http://arxiv.org/abs/2401.18059v1
- **LangChain:**
- **Cookbook:** [RAPTOR](https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb)
**Abstract:** Retrieval-augmented language models can better adapt to changes in world
state and incorporate long-tail knowledge. However, most existing methods
retrieve only short contiguous chunks from a retrieval corpus, limiting
holistic understanding of the overall document context. We introduce the novel
approach of recursively embedding, clustering, and summarizing chunks of text,
constructing a tree with differing levels of summarization from the bottom up.
At inference time, our RAPTOR model retrieves from this tree, integrating
information across lengthy documents at different levels of abstraction.
Controlled experiments show that retrieval with recursive summaries offers
significant improvements over traditional retrieval-augmented LMs on several
tasks. On question-answering tasks that involve complex, multi-step reasoning,
we show state-of-the-art results; for example, by coupling RAPTOR retrieval
with the use of GPT-4, we can improve the best performance on the QuALITY
benchmark by 20% in absolute accuracy.
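
The RAPTOR cookbook builds the tree by repeatedly embedding, clustering, and summarizing chunks. A heavily simplified one-level sketch follows; it substitutes KMeans for the paper's GMM-based clustering, and the chunk texts and models are placeholders.
```python
# One level of RAPTOR-style "embed -> cluster -> summarize -> index".
# The cookbook uses soft GMM clustering over multiple levels; KMeans here is
# only for illustration.
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from sklearn.cluster import KMeans

chunks = ["chunk one ...", "chunk two ...", "chunk three ...", "chunk four ..."]
embeddings = OpenAIEmbeddings()
vectors = embeddings.embed_documents(chunks)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)

summarize = (
    ChatPromptTemplate.from_template("Summarize these passages:\n{passages}")
    | ChatOpenAI(temperature=0)
    | StrOutputParser()
)
summaries = [
    summarize.invoke({"passages": "\n".join(c for c, l in zip(chunks, labels) if l == k)})
    for k in sorted(set(labels))
]

# "Collapsed tree" retrieval: index leaf chunks and cluster summaries together.
retriever = FAISS.from_texts(chunks + summaries, embeddings).as_retriever()
```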
## Corrective Retrieval Augmented Generation
- **arXiv id:** 2401.15884v2
- **Title:** Corrective Retrieval Augmented Generation
- **Authors:** Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, et al.
- **Published Date:** 2024-01-29
- **URL:** http://arxiv.org/abs/2401.15884v2
- **LangChain:**
- **Cookbook:** [langgraph_crag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_crag.ipynb)
**Abstract:** Large language models (LLMs) inevitably exhibit hallucinations since the
accuracy of generated texts cannot be secured solely by the parametric
knowledge they encapsulate. Although retrieval-augmented generation (RAG) is a
practicable complement to LLMs, it relies heavily on the relevance of retrieved
documents, raising concerns about how the model behaves if retrieval goes
wrong. To this end, we propose the Corrective Retrieval Augmented Generation
(CRAG) to improve the robustness of generation. Specifically, a lightweight
retrieval evaluator is designed to assess the overall quality of retrieved
documents for a query, returning a confidence degree based on which different
knowledge retrieval actions can be triggered. Since retrieval from static and
limited corpora can only return sub-optimal documents, large-scale web searches
are utilized as an extension for augmenting the retrieval results. Besides, a
decompose-then-recompose algorithm is designed for retrieved documents to
selectively focus on key information and filter out irrelevant information in
them. CRAG is plug-and-play and can be seamlessly coupled with various
RAG-based approaches. Experiments on four datasets covering short- and
long-form generation tasks show that CRAG can significantly improve the
performance of RAG-based approaches.
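
The langgraph_crag cookbook implements this as a LangGraph state machine; the core grade-then-correct step can be sketched without LangGraph. The grading prompt, example texts, and fallback below are illustrative only.
```python
# Sketch of CRAG's core step: grade retrieved documents, keep only the relevant
# ones, and fall back to another source (e.g. web search) if none survive.
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

retriever = FAISS.from_texts(
    ["LangChain is a framework for building LLM applications.", "Bananas are yellow."],
    OpenAIEmbeddings(),
).as_retriever()

grader = (
    ChatPromptTemplate.from_template(
        "Is this document relevant to the question? Answer yes or no.\n"
        "Question: {question}\nDocument: {document}"
    )
    | ChatOpenAI(temperature=0)
    | StrOutputParser()
)

question = "What is LangChain?"
docs = retriever.invoke(question)
relevant = [
    d for d in docs
    if "yes" in grader.invoke({"question": question, "document": d.page_content}).lower()
]
if not relevant:
    # The cookbook triggers a web-search tool here to augment the context.
    print("No relevant documents; would fall back to web search.")
```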
## Mixtral of Experts
- **arXiv id:** 2401.04088v1
- **Title:** Mixtral of Experts
- **Authors:** Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, et al.
- **Published Date:** 2024-01-08
- **URL:** http://arxiv.org/abs/2401.04088v1
- **LangChain:**
- **Cookbook:** [together_ai](https://github.com/langchain-ai/langchain/blob/master/cookbook/together_ai.ipynb)
**Abstract:** We introduce Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model.
Mixtral has the same architecture as Mistral 7B, with the difference that each
layer is composed of 8 feedforward blocks (i.e. experts). For every token, at
each layer, a router network selects two experts to process the current state
and combine their outputs. Even though each token only sees two experts, the
selected experts can be different at each timestep. As a result, each token has
access to 47B parameters, but only uses 13B active parameters during inference.
Mixtral was trained with a context size of 32k tokens and it outperforms or
matches Llama 2 70B and GPT-3.5 across all evaluated benchmarks. In particular,
Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and
multilingual benchmarks. We also provide a model fine-tuned to follow
instructions, Mixtral 8x7B - Instruct, that surpasses GPT-3.5 Turbo,
Claude-2.1, Gemini Pro, and Llama 2 70B - chat model on human benchmarks. Both
the base and instruct models are released under the Apache 2.0 license.
## Dense X Retrieval: What Retrieval Granularity Should We Use?
- **arXiv id:** 2312.06648v2
- **Title:** Dense X Retrieval: What Retrieval Granularity Should We Use?
- **Authors:** Tong Chen, Hongwei Wang, Sihao Chen, et al.
- **Published Date:** 2023-12-11
- **URL:** http://arxiv.org/abs/2312.06648v2
- **LangChain:**
- **Template:** [propositional-retrieval](https://python.langchain.com/docs/templates/propositional-retrieval)
**Abstract:** Dense retrieval has become a prominent method to obtain relevant context or
world knowledge in open-domain NLP tasks. When we use a learned dense retriever
on a retrieval corpus at inference time, an often-overlooked design choice is
the retrieval unit in which the corpus is indexed, e.g. document, passage, or
sentence. We discover that the retrieval unit choice significantly impacts the
performance of both retrieval and downstream tasks. Distinct from the typical
approach of using passages or sentences, we introduce a novel retrieval unit,
proposition, for dense retrieval. Propositions are defined as atomic
expressions within text, each encapsulating a distinct factoid and presented in
a concise, self-contained natural language format. We conduct an empirical
comparison of different retrieval granularity. Our results reveal that
proposition-based retrieval significantly outperforms traditional passage or
sentence-based methods in dense retrieval. Moreover, retrieval by proposition
also enhances the performance of downstream QA tasks, since the retrieved texts
are more condensed with question-relevant information, reducing the need for
lengthy input tokens and minimizing the inclusion of extraneous, irrelevant
information.
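
The propositional-retrieval template indexes LLM-generated propositions rather than raw chunks. A minimal sketch of that idea follows; the decomposition prompt and example passage are illustrative.
```python
# Sketch of proposition-level indexing: decompose a passage into atomic,
# self-contained propositions with an LLM, then index those for dense retrieval.
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

passage = (
    "Marie Curie won the Nobel Prize in Physics in 1903 and the Nobel Prize "
    "in Chemistry in 1911."
)
decompose = (
    ChatPromptTemplate.from_template(
        "Rewrite the passage as a list of short, self-contained factual "
        "propositions, one per line:\n{passage}"
    )
    | ChatOpenAI(temperature=0)
    | StrOutputParser()
)
propositions = [
    line.strip("-* ") for line in decompose.invoke({"passage": passage}).splitlines()
    if line.strip()
]

retriever = FAISS.from_texts(propositions, OpenAIEmbeddings()).as_retriever()
print(retriever.invoke("When did Curie win the chemistry Nobel?"))
```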
## Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models
- **arXiv id:** 2311.09210v1
- **Title:** Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models
- **Authors:** Wenhao Yu, Hongming Zhang, Xiaoman Pan, et al.
- **Published Date:** 2023-11-15
- **URL:** http://arxiv.org/abs/2311.09210v1
- **LangChain:**
- **Template:** [chain-of-note-wiki](https://python.langchain.com/docs/templates/chain-of-note-wiki)
**Abstract:** Retrieval-augmented language models (RALMs) represent a substantial
advancement in the capabilities of large language models, notably in reducing
factual hallucination by leveraging external knowledge sources. However, the
reliability of the retrieved information is not always guaranteed. The
retrieval of irrelevant data can lead to misguided responses, potentially
causing the model to overlook its inherent knowledge, even when it possesses
adequate information to address the query. Moreover, standard RALMs often
struggle to assess whether they possess adequate knowledge, both intrinsic and
retrieved, to provide an accurate answer. In situations where knowledge is
lacking, these systems should ideally respond with "unknown" when the answer is
unattainable. In response to these challenges, we introduce Chain-of-Noting
(CoN), a novel approach aimed at improving the robustness of RALMs in facing
noisy, irrelevant documents and in handling unknown scenarios. The core idea of
CoN is to generate sequential reading notes for retrieved documents, enabling a
thorough evaluation of their relevance to the given question and integrating
this information to formulate the final answer. We employed ChatGPT to create
training data for CoN, which was subsequently trained on an LLaMa-2 7B model.
Our experiments across four open-domain QA benchmarks show that RALMs equipped
with CoN significantly outperform standard RALMs. Notably, CoN achieves an
average improvement of +7.9 in EM score given entirely noisy retrieved
documents and +10.5 in rejection rates for real-time questions that fall
outside the pre-training knowledge scope.
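
The chain-of-note-wiki template packages this pattern around Wikipedia retrieval. A hedged sketch follows; the prompt wording is illustrative, and WikipediaRetriever is just one possible source.
```python
# Sketch of Chain-of-Note: retrieve documents, have the model write a short
# reading note per document, then answer (or say "unknown") from the notes.
from langchain_community.retrievers import WikipediaRetriever
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

retriever = WikipediaRetriever(top_k_results=3)

prompt = ChatPromptTemplate.from_template(
    "Read the passages and write a short note on each one's relevance to the "
    "question. Then answer, or reply 'unknown' if the passages and your own "
    "knowledge are insufficient.\n\nPassages:\n{context}\n\nQuestion: {question}"
)

chain = (
    {
        "context": retriever | (lambda docs: "\n\n".join(d.page_content for d in docs)),
        "question": RunnablePassthrough(),
    }
    | prompt
    | ChatOpenAI(temperature=0)
    | StrOutputParser()
)
print(chain.invoke("Who designed the Eiffel Tower?"))
```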
## Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
- **arXiv id:** 2310.11511v1
- **Title:** Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
- **Authors:** Akari Asai, Zeqiu Wu, Yizhong Wang, et al.
- **Published Date:** 2023-10-17
- **URL:** http://arxiv.org/abs/2310.11511v1
- **LangChain:**
- **Cookbook:** [langgraph_self_rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_self_rag.ipynb)
**Abstract:** Despite their remarkable capabilities, large language models (LLMs) often
produce responses containing factual inaccuracies due to their sole reliance on
the parametric knowledge they encapsulate. Retrieval-Augmented Generation
(RAG), an ad hoc approach that augments LMs with retrieval of relevant
knowledge, decreases such issues. However, indiscriminately retrieving and
incorporating a fixed number of retrieved passages, regardless of whether
retrieval is necessary, or passages are relevant, diminishes LM versatility or
can lead to unhelpful response generation. We introduce a new framework called
Self-Reflective Retrieval-Augmented Generation (Self-RAG) that enhances an LM's
quality and factuality through retrieval and self-reflection. Our framework
trains a single arbitrary LM that adaptively retrieves passages on-demand, and
generates and reflects on retrieved passages and its own generations using
special tokens, called reflection tokens. Generating reflection tokens makes
the LM controllable during the inference phase, enabling it to tailor its
behavior to diverse task requirements. Experiments show that Self-RAG (7B and
13B parameters) significantly outperforms state-of-the-art LLMs and
retrieval-augmented models on a diverse set of tasks. Specifically, Self-RAG
outperforms ChatGPT and retrieval-augmented Llama2-chat on Open-domain QA,
reasoning and fact verification tasks, and it shows significant gains in
improving factuality and citation accuracy for long-form generations relative
to these models.
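
The langgraph_self_rag cookbook approximates the paper's reflection tokens with grader prompts inside a LangGraph loop. Below is a compressed sketch of just the generate-then-reflect step, with illustrative prompts and data.
```python
# Sketch of the generate-then-reflect step: grade whether the generated answer
# is grounded in the retrieved context before returning it.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)

generate = (
    ChatPromptTemplate.from_template(
        "Answer using only this context:\n{context}\nQuestion: {question}"
    )
    | llm
    | StrOutputParser()
)
grade = (
    ChatPromptTemplate.from_template(
        "Is the answer fully supported by the context? Answer yes or no.\n"
        "Context: {context}\nAnswer: {answer}"
    )
    | llm
    | StrOutputParser()
)

context = "The Eiffel Tower was completed in 1889 for the World's Fair."
question = "When was the Eiffel Tower completed?"
answer = generate.invoke({"context": context, "question": question})
if "yes" in grade.invoke({"context": context, "answer": answer}).lower():
    print(answer)
else:
    print("Answer not grounded; re-retrieve or regenerate.")
```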
## Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models
- **arXiv id:** 2310.06117v2
- **Title:** Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models
- **Authors:** Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, et al.
- **Published Date:** 2023-10-09
- **URL:** http://arxiv.org/abs/2310.06117v2
- **LangChain:**
- **Template:** [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting)
- **Cookbook:** [stepback-qa](https://github.com/langchain-ai/langchain/blob/master/cookbook/stepback-qa.ipynb)
**Abstract:** We present Step-Back Prompting, a simple prompting technique that enables
LLMs to do abstractions to derive high-level concepts and first principles from
instances containing specific details. Using the concepts and principles to
guide reasoning, LLMs significantly improve their abilities in following a
correct reasoning path towards the solution. We conduct experiments of
Step-Back Prompting with PaLM-2L, GPT-4 and Llama2-70B models, and observe
substantial performance gains on various challenging reasoning-intensive tasks
including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back
Prompting improves PaLM-2L performance on MMLU (Physics and Chemistry) by 7%
and 11% respectively, TimeQA by 27%, and MuSiQue by 7%.
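
The stepback-qa template and cookbook chain two prompts: one generates the step-back question, the other answers with both contexts. A hedged sketch follows; the prompts are illustrative and `retrieve` is a placeholder for a real retriever or search tool.
```python
# Sketch of Step-Back Prompting: ask a more generic question first, gather
# context for both questions, then answer the original one.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)

step_back = (
    ChatPromptTemplate.from_template(
        "Rewrite this question as a more generic, higher-level question:\n{question}"
    )
    | llm
    | StrOutputParser()
)

def retrieve(query: str) -> str:
    # Placeholder for a real retriever or web-search tool.
    return f"(background passages for: {query})"

answer = (
    ChatPromptTemplate.from_template(
        "Original question: {question}\n"
        "Context for the question: {normal_context}\n"
        "Context for the step-back question: {step_back_context}\n"
        "Answer the original question."
    )
    | llm
    | StrOutputParser()
)

question = "What happens to the pressure of an ideal gas if its temperature doubles at constant volume?"
generic = step_back.invoke({"question": question})
print(answer.invoke({
    "question": question,
    "normal_context": retrieve(question),
    "step_back_context": retrieve(generic),
}))
```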
## Llama 2: Open Foundation and Fine-Tuned Chat Models
- **arXiv id:** 2307.09288v2
- **Title:** Llama 2: Open Foundation and Fine-Tuned Chat Models
- **Authors:** Hugo Touvron, Louis Martin, Kevin Stone, et al.
- **Published Date:** 2023-07-18
- **URL:** http://arxiv.org/abs/2307.09288v2
- **LangChain:**
- **Cookbook:** [Semi_Structured_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)
**Abstract:** In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs.
## Query Rewriting for Retrieval-Augmented Large Language Models
- **arXiv id:** 2305.14283v3
- **Title:** Query Rewriting for Retrieval-Augmented Large Language Models
- **Authors:** Xinbei Ma, Yeyun Gong, Pengcheng He, et al.
- **Published Date:** 2023-05-23
- **URL:** http://arxiv.org/abs/2305.14283v3
- **LangChain:**
- **Template:** [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read)
- **Cookbook:** [rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb)
**Abstract:** Large Language Models (LLMs) play powerful, black-box readers in the
retrieve-then-read pipeline, making remarkable progress in knowledge-intensive
tasks. This work introduces a new framework, Rewrite-Retrieve-Read, instead of
the previous retrieve-then-read, for retrieval-augmented LLMs from the
perspective of query rewriting. Unlike prior studies focusing on adapting
either the retriever or the reader, our approach pays attention to the
adaptation of the search query itself, for there is inevitably a gap between
the input text and the needed knowledge in retrieval. We first prompt an LLM to
generate the query, then use a web search engine to retrieve contexts.
Furthermore, to better align the query to the frozen modules, we propose a
trainable scheme for our pipeline. A small language model is adopted as a
trainable rewriter to cater to the black-box LLM reader. The rewriter is
trained using the feedback of the LLM reader by reinforcement learning.
Evaluation is conducted on downstream tasks, open-domain QA and multiple-choice
QA. Experimental results show consistent performance improvement, indicating
that our framework is effective and scalable, and brings a new paradigm
for retrieval-augmented LLMs.
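
The rewrite-retrieve-read template wires these three stages in sequence. A hedged sketch follows, using DuckDuckGo as the search step; the prompts and tool choice are illustrative.
```python
# Sketch of Rewrite-Retrieve-Read: rewrite the user question into a better
# search query, retrieve with a search tool, then read/answer.
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)
search = DuckDuckGoSearchRun()

rewrite = (
    ChatPromptTemplate.from_template(
        "Write a concise web search query for answering:\n{question}"
    )
    | llm
    | StrOutputParser()
)
read = (
    ChatPromptTemplate.from_template(
        "Answer the question using the search results.\n"
        "Results: {results}\nQuestion: {question}"
    )
    | llm
    | StrOutputParser()
)

question = "Which team won the most recent FIFA Women's World Cup?"
query = rewrite.invoke({"question": question})
print(read.invoke({"results": search.invoke(query), "question": question}))
```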
## Large Language Model Guided Tree-of-Thought
- **arXiv id:** 2305.08291v1
- **Title:** Large Language Model Guided Tree-of-Thought
- **Authors:** Jieyi Long
- **Published Date:** 2023-05-15
- **URL:** http://arxiv.org/abs/2305.08291v1
- **LangChain:**
- **API Reference:** [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot)
- **Cookbook:** [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
**Abstract:** In this paper, we introduce the Tree-of-Thought (ToT) framework, a novel
approach aimed at improving the problem-solving capabilities of auto-regressive
large language models (LLMs). The ToT technique is inspired by the human mind's
approach for solving complex reasoning tasks through trial and error. In this
process, the human mind explores the solution space through a tree-like thought
process, allowing for backtracking when necessary. To implement ToT as a
software system, we augment an LLM with additional modules including a prompter
agent, a checker module, a memory module, and a ToT controller. In order to
solve a given problem, these modules engage in a multi-round conversation with
the LLM. The memory module records the conversation and state history of the
problem solving process, which allows the system to backtrack to the previous
steps of the thought-process and explore other directions from there. To verify
the effectiveness of the proposed technique, we implemented a ToT-based solver
for the Sudoku Puzzle. Experimental results show that the ToT framework can
significantly increase the success rate of Sudoku puzzle solving. Our
implementation of the ToT-based Sudoku solver is available on GitHub:
https://github.com/jieyilong/tree-of-thought-puzzle-solver.
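
langchain_experimental.tot wraps this loop: you provide a checker that classifies each candidate thought, and the chain handles proposal and backtracking. The sketch below follows the shape of the tree_of_thought cookbook, but the toy problem, checker logic, and parameter values are illustrative, and class paths may differ between versions.
```python
# Hedged sketch of langchain_experimental.tot: a custom checker tells the ToT
# controller whether a candidate thought is invalid, a valid intermediate step,
# or a valid final answer. Exact module/class names may vary across releases.
from langchain_experimental.tot.base import ToTChain
from langchain_experimental.tot.checker import ToTChecker
from langchain_experimental.tot.thought import ThoughtValidity
from langchain_openai import ChatOpenAI


class SimpleChecker(ToTChecker):
    def evaluate(self, problem_description: str, thoughts=()) -> ThoughtValidity:
        last = thoughts[-1] if thoughts else ""
        if "42" in last:  # toy acceptance test for the toy problem below
            return ThoughtValidity.VALID_FINAL
        if last:
            return ThoughtValidity.VALID_INTERMEDIATE
        return ThoughtValidity.INVALID


tot_chain = ToTChain(
    llm=ChatOpenAI(temperature=1.0),
    checker=SimpleChecker(),
    k=10,  # maximum number of rounds
    c=3,   # number of child thoughts per expansion
)
tot_chain.run(problem_description="Find x such that 2x - 4 = 80. Show your reasoning step by step.")
```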
## Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
- **arXiv id:** 2305.04091v3
- **Title:** Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
- **Authors:** Lei Wang, Wanyu Xu, Yihuai Lan, et al.
- **Published Date:** 2023-05-06
- **URL:** http://arxiv.org/abs/2305.04091v3
- **LangChain:**
- **Cookbook:** [plan_and_execute_agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb)
**Abstract:** Large language models (LLMs) have recently been shown to deliver impressive
performance in various NLP tasks. To tackle multi-step reasoning tasks,
few-shot chain-of-thought (CoT) prompting includes a few manually crafted
step-by-step reasoning demonstrations which enable LLMs to explicitly generate
reasoning steps and improve their reasoning task accuracy. To eliminate the
manual effort, Zero-shot-CoT concatenates the target problem statement with
"Let's think step by step" as an input prompt to LLMs. Despite the success of
Zero-shot-CoT, it still suffers from three pitfalls: calculation errors,
missing-step errors, and semantic misunderstanding errors. To address the
missing-step errors, we propose Plan-and-Solve (PS) Prompting. It consists of
two components: first, devising a plan to divide the entire task into smaller
subtasks, and then carrying out the subtasks according to the plan. To address
the calculation errors and improve the quality of generated reasoning steps, we
extend PS prompting with more detailed instructions and derive PS+ prompting.
We evaluate our proposed prompting strategy on ten datasets across three
reasoning problems. The experimental results over GPT-3 show that our proposed
zero-shot prompting consistently outperforms Zero-shot-CoT across all datasets
by a large margin, is comparable to or exceeds Zero-shot-Program-of-Thought
Prompting, and has comparable performance with 8-shot CoT prompting on the math
reasoning problem. The code can be found at
https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting.
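
The plan_and_execute_agent cookbook exposes this two-component design through langchain_experimental's PlanAndExecute chain. A minimal, hedged example follows; the tool set and question are illustrative.
```python
# Minimal plan-and-execute setup: a planner LLM decomposes the task into steps
# and an agent executor carries them out one by one.
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)
tools = [DuckDuckGoSearchRun()]

agent = PlanAndExecute(
    planner=load_chat_planner(llm),
    executor=load_agent_executor(llm, tools, verbose=True),
)
agent.run("Who is the current CEO of OpenAI, and how many letters are in their surname?")
```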
## Visual Instruction Tuning
- **arXiv id:** 2304.08485v2
- **Title:** Visual Instruction Tuning
- **Authors:** Haotian Liu, Chunyuan Li, Qingyang Wu, et al.
- **Published Date:** 2023-04-17
- **URL:** http://arxiv.org/abs/2304.08485v2
- **LangChain:**
- **Cookbook:** [Semi_structured_and_multi_modal_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb), [Semi_structured_multi_modal_RAG_LLaMA2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb)
**Abstract:** Instruction tuning large language models (LLMs) using machine-generated
instruction-following data has improved zero-shot capabilities on new tasks,
but the idea is less explored in the multimodal field. In this paper, we
present the first attempt to use language-only GPT-4 to generate multimodal
language-image instruction-following data. By instruction tuning on such
generated data, we introduce LLaVA: Large Language and Vision Assistant, an
end-to-end trained large multimodal model that connects a vision encoder and
LLM for general-purpose visual and language understanding. Our early experiments
show that LLaVA demonstrates impressive multimodal chat abilities, sometimes
exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and
yields an 85.1% relative score compared with GPT-4 on a synthetic multimodal
instruction-following dataset. When fine-tuned on Science QA, the synergy of
LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make
GPT-4 generated visual instruction tuning data, our model and code base
publicly available.
## Generative Agents: Interactive Simulacra of Human Behavior
- **arXiv id:** 2304.03442v2
- **Title:** Generative Agents: Interactive Simulacra of Human Behavior
- **Authors:** Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al.
- **Published Date:** 2023-04-07
- **URL:** http://arxiv.org/abs/2304.03442v2
- **LangChain:**
- **Cookbook:** [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb), [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb)
**Abstract:** Believable proxies of human behavior can empower interactive applications
ranging from immersive environments to rehearsal spaces for interpersonal
communication to prototyping tools. In this paper, we introduce generative
agents--computational software agents that simulate believable human behavior.
Generative agents wake up, cook breakfast, and head to work; artists paint,
while authors write; they form opinions, notice each other, and initiate
conversations; they remember and reflect on days past as they plan the next
day. To enable generative agents, we describe an architecture that extends a
large language model to store a complete record of the agent's experiences
using natural language, synthesize those memories over time into higher-level
reflections, and retrieve them dynamically to plan behavior. We instantiate
generative agents to populate an interactive sandbox environment inspired by
The Sims, where end users can interact with a small town of twenty-five agents
using natural language. In an evaluation, these generative agents produce
believable individual and emergent social behaviors: for example, starting with
only a single user-specified notion that one agent wants to throw a Valentine's
Day party, the agents autonomously spread invitations to the party over the
next two days, make new acquaintances, ask each other out on dates to the
party, and coordinate to show up for the party together at the right time. We
demonstrate through ablation that the components of our agent
architecture--observation, planning, and reflection--each contribute critically
to the believability of agent behavior. By fusing large language models with
computational, interactive agents, this work introduces architectural and
interaction patterns for enabling believable simulations of human behavior.
## CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
- **arXiv id:** 2303.17760v2
- **Title:** CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
- **Authors:** Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al.
- **Published Date:** 2023-03-31
- **URL:** http://arxiv.org/abs/2303.17760v2
- **LangChain:**
- **Cookbook:** [camel_role_playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb)
**Abstract:** The rapid advancement of chat-based language models has led to remarkable
progress in complex task-solving. However, their success heavily relies on
human input to guide the conversation, which can be challenging and
time-consuming. This paper explores the potential of building scalable
techniques to facilitate autonomous cooperation among communicative agents, and
provides insight into their "cognitive" processes. To address the challenges of
achieving autonomous cooperation, we propose a novel communicative agent
framework named role-playing. Our approach involves using inception prompting
to guide chat agents toward task completion while maintaining consistency with
human intentions. We showcase how role-playing can be used to generate
conversational data for studying the behaviors and capabilities of a society of
agents, providing a valuable resource for investigating conversational language
models. In particular, we conduct comprehensive studies on
instruction-following cooperation in multi-agent settings. Our contributions
include introducing a novel communicative agent framework, offering a scalable
approach for studying the cooperative behaviors and capabilities of multi-agent
systems, and open-sourcing our library to support research on communicative
agents and beyond: https://github.com/camel-ai/camel.
## HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face
- **arXiv id:** 2303.17580v4
- **Title:** HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face
- **Authors:** Yongliang Shen, Kaitao Song, Xu Tan, et al.
- **Published Date:** 2023-03-30
- **URL:** http://arxiv.org/abs/2303.17580v4
- **LangChain:**
- **API Reference:** [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents)
- **Cookbook:** [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
**Abstract:** Solving complicated AI tasks with different domains and modalities is a key
step toward artificial general intelligence. While there are numerous AI models
available for various domains and modalities, they cannot handle complicated AI
tasks autonomously. Considering large language models (LLMs) have exhibited
exceptional abilities in language understanding, generation, interaction, and
reasoning, we advocate that LLMs could act as a controller to manage existing
AI models to solve complicated AI tasks, with language serving as a generic
interface to empower this. Based on this philosophy, we present HuggingGPT, an
LLM-powered agent that leverages LLMs (e.g., ChatGPT) to connect various AI
models in machine learning communities (e.g., Hugging Face) to solve AI tasks.
Specifically, we use ChatGPT to conduct task planning when receiving a user
request, select models according to their function descriptions available in
Hugging Face, execute each subtask with the selected AI model, and summarize
the response according to the execution results. By leveraging the strong
language capability of ChatGPT and abundant AI models in Hugging Face,
HuggingGPT can tackle a wide range of sophisticated AI tasks spanning different
modalities and domains and achieve impressive results in language, vision,
speech, and other challenging tasks, which paves a new way towards the
realization of artificial general intelligence.
## GPT-4 Technical Report
- **arXiv id:** 2303.08774v6
- **Title:** GPT-4 Technical Report
- **Authors:** OpenAI, Josh Achiam, Steven Adler, et al.
- **Published Date:** 2023-03-15
- **URL:** http://arxiv.org/abs/2303.08774v6
- **LangChain:**
- **Documentation:** [docs/integrations/vectorstores/mongodb_atlas](https://python.langchain.com/docs/integrations/vectorstores/mongodb_atlas)
**Abstract:** We report the development of GPT-4, a large-scale, multimodal model which can
accept image and text inputs and produce text outputs. While less capable than
humans in many real-world scenarios, GPT-4 exhibits human-level performance on
various professional and academic benchmarks, including passing a simulated bar
exam with a score around the top 10% of test takers. GPT-4 is a
Transformer-based model pre-trained to predict the next token in a document.
The post-training alignment process results in improved performance on measures
of factuality and adherence to desired behavior. A core component of this
project was developing infrastructure and optimization methods that behave
predictably across a wide range of scales. This allowed us to accurately
predict some aspects of GPT-4's performance based on models trained with no
more than 1/1,000th the compute of GPT-4.
## A Watermark for Large Language Models
- **arXiv id:** 2301.10226v4
- **Title:** A Watermark for Large Language Models
- **Authors:** John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al.
- **Published Date:** 2023-01-24
- **URL:** http://arxiv.org/abs/2301.10226v4
- **LangChain:**
- **API Reference:** [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
**Abstract:** Potential harms of large language models can be mitigated by watermarking
model output, i.e., embedding signals into generated text that are invisible to
humans but algorithmically detectable from a short span of tokens. We propose a
watermarking framework for proprietary language models. The watermark can be
embedded with negligible impact on text quality, and can be detected using an
efficient open-source algorithm without access to the language model API or
parameters. The watermark works by selecting a randomized set of "green" tokens
before a word is generated, and then softly promoting use of green tokens
during sampling. We propose a statistical test for detecting the watermark with
interpretable p-values, and derive an information-theoretic framework for
analyzing the sensitivity of the watermark. We test the watermark using a
multi-billion parameter model from the Open Pretrained Transformer (OPT)
family, and discuss robustness and security.
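
The HuggingFaceEndpoint / text-generation-inference integrations listed above forward a `watermark` flag to the server, which implements this green-token scheme. A hedged example follows; the repo id is illustrative and a Hugging Face API token is assumed to be configured.
```python
# Requesting watermarked generation from a text-generation-inference-backed
# endpoint; the flag is simply forwarded to the server.
# Assumes HUGGINGFACEHUB_API_TOKEN is set in the environment.
from langchain_huggingface import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",
    max_new_tokens=200,
    watermark=True,  # embed the statistical watermark in generated tokens
)
print(llm.invoke("Write two sentences about data lakes."))
```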
## Precise Zero-Shot Dense Retrieval without Relevance Labels
- **arXiv id:** 2212.10496v1
- **Title:** Precise Zero-Shot Dense Retrieval without Relevance Labels
- **Authors:** Luyu Gao, Xueguang Ma, Jimmy Lin, et al.
- **Published Date:** 2022-12-20
- **URL:** http://arxiv.org/abs/2212.10496v1
- **LangChain:**
- **API Reference:** [langchain...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder)
- **Template:** [hyde](https://python.langchain.com/docs/templates/hyde)
- **Cookbook:** [hypothetical_document_embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)
**Abstract:** While dense retrieval has been shown effective and efficient across tasks and
languages, it remains difficult to create effective fully zero-shot dense
retrieval systems when no relevance label is available. In this paper, we
recognize the difficulty of zero-shot learning and encoding relevance. Instead,
we propose to pivot through Hypothetical Document Embeddings (HyDE). Given a
query, HyDE first zero-shot instructs an instruction-following language model
(e.g. InstructGPT) to generate a hypothetical document. The document captures
relevance patterns but is unreal and may contain false details. Then, an
unsupervised contrastively learned encoder (e.g. Contriever) encodes the
document into an embedding vector. This vector identifies a neighborhood in the
corpus embedding space, where similar real documents are retrieved based on
vector similarity. This second step grounds the generated document to the actual
corpus, with the encoder's dense bottleneck filtering out the incorrect
details. Our experiments show that HyDE significantly outperforms the
state-of-the-art unsupervised dense retriever Contriever and shows strong
performance comparable to fine-tuned retrievers, across various tasks (e.g. web
search, QA, fact verification) and languages (e.g. sw, ko, ja).
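
LangChain ships this as HypotheticalDocumentEmbedder: at query time an LLM writes the hypothetical document and only that text is embedded. A brief example follows; the model choices and indexed text are illustrative, and "web_search" is one of the built-in HyDE prompt keys.
```python
# HyDE in LangChain: queries go through the hypothetical-document step, while
# document indexing still uses the base embeddings.
from langchain.chains import HypotheticalDocumentEmbedder
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

hyde = HypotheticalDocumentEmbedder.from_llm(
    llm=ChatOpenAI(temperature=0),
    base_embeddings=OpenAIEmbeddings(),
    prompt_key="web_search",  # one of the built-in HyDE prompts
)

vectorstore = FAISS.from_texts(
    ["LangChain helps build applications powered by language models."],
    embedding=hyde,
)
print(vectorstore.similarity_search("What is LangChain for?"))
```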
## Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments
- **arXiv id:** 2212.07425v3
- **Title:** Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments
- **Authors:** Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al.
- **Published Date:** 2022-12-12
- **URL:** http://arxiv.org/abs/2212.07425v3
- **LangChain:**
- **API Reference:** [langchain_experimental.fallacy_removal](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.fallacy_removal)
**Abstract:** The spread of misinformation, propaganda, and flawed argumentation has been
amplified in the Internet era. Given the volume of data and the subtlety of
identifying violations of argumentation norms, supporting information analytics
tasks, like content moderation, with trustworthy methods that can identify
logical fallacies is essential. In this paper, we formalize prior theoretical
work on logical fallacies into a comprehensive three-stage evaluation framework
of detection, coarse-grained, and fine-grained classification. We adapt
existing evaluation datasets for each stage of the evaluation. We employ three
families of robust and explainable methods based on prototype reasoning,
instance-based reasoning, and knowledge injection. The methods combine language
models with background knowledge and explainable mechanisms. Moreover, we
address data sparsity with strategies for data augmentation and curriculum
learning. Our three-stage framework natively consolidates prior datasets and
methods from existing tasks, like propaganda detection, serving as an
overarching evaluation testbed. We extensively evaluate these methods on our
datasets, focusing on their robustness and explainability. Our results provide
insight into the strengths and weaknesses of the methods on different
components and fallacy classes, indicating that fallacy identification is a
challenging task that may require specialized forms of reasoning to capture
various classes. We share our open-source code and data on GitHub to support
further work on logical fallacy identification.
## Complementary Explanations for Effective In-Context Learning
- **arXiv id:** 2211.13892v2
- **Title:** Complementary Explanations for Effective In-Context Learning
- **Authors:** Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, et al.
- **Published Date:** 2022-11-25
- **URL:** http://arxiv.org/abs/2211.13892v2
- **LangChain:**
- **API Reference:** [langchain_core...MaxMarginalRelevanceExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html#langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector)
**Abstract:** Large language models (LLMs) have exhibited remarkable capabilities in
learning from explanations in prompts, but there has been limited understanding
of exactly how these explanations function or why they are effective. This work
aims to better understand the mechanisms by which explanations are used for
in-context learning. We first study the impact of two different factors on the
performance of prompts with explanations: the computation trace (the way the
solution is decomposed) and the natural language used to express the prompt. By
perturbing explanations on three controlled tasks, we show that both factors
contribute to the effectiveness of explanations. We further study how to form
maximally effective sets of explanations for solving a given test query. We
find that LLMs can benefit from the complementarity of the explanation set:
diverse reasoning skills shown by different exemplars can lead to better
performance. Therefore, we propose a maximal marginal relevance-based exemplar
selection approach for constructing exemplar sets that are both relevant as
well as complementary, which successfully improves the in-context learning
performance across three real-world tasks on multiple LLMs.
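
This is the motivation behind the MaxMarginalRelevanceExampleSelector linked above: few-shot exemplars are chosen to be relevant to the input yet diverse from one another. A short example in the style of the few-shot prompting docs (the example data is illustrative):
```python
# Selecting few-shot examples by maximal marginal relevance: relevant to the
# input, but complementary to each other.
from langchain_community.vectorstores import FAISS
from langchain_core.example_selectors import MaxMarginalRelevanceExampleSelector
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
from langchain_openai import OpenAIEmbeddings

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "windy", "output": "calm"},
]

selector = MaxMarginalRelevanceExampleSelector.from_examples(
    examples, OpenAIEmbeddings(), FAISS, k=2
)

prompt = FewShotPromptTemplate(
    example_selector=selector,
    example_prompt=PromptTemplate.from_template("Input: {input}\nOutput: {output}"),
    prefix="Give the antonym of every input.",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
print(prompt.format(adjective="rainy"))
```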
## PAL: Program-aided Language Models
- **arXiv id:** 2211.10435v2
- **Title:** PAL: Program-aided Language Models
- **Authors:** Luyu Gao, Aman Madaan, Shuyan Zhou, et al.
- **Published Date:** 2022-11-18
- **URL:** http://arxiv.org/abs/2211.10435v2
- **LangChain:**
- **API Reference:** [langchain_experimental...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain)
- **Cookbook:** [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
**Abstract:** Large language models (LLMs) have recently demonstrated an impressive ability
to perform arithmetic and symbolic reasoning tasks, when provided with a few
examples at test time ("few-shot prompting"). Much of this success can be
attributed to prompting methods such as "chain-of-thought", which employ LLMs
for both understanding the problem description by decomposing it into steps, as
well as solving each step of the problem. While LLMs seem to be adept at this
sort of step-by-step decomposition, LLMs often make logical and arithmetic
mistakes in the solution part, even when the problem is decomposed correctly.
In this paper, we present Program-Aided Language models (PAL): a novel approach
that uses the LLM to read natural language problems and generate programs as
the intermediate reasoning steps, but offloads the solution step to a runtime
such as a Python interpreter. With PAL, decomposing the natural language
problem into runnable steps remains the only learning task for the LLM, while
solving is delegated to the interpreter. We demonstrate this synergy between a
neural LLM and a symbolic interpreter across 13 mathematical, symbolic, and
algorithmic reasoning tasks from BIG-Bench Hard and other benchmarks. In all
these natural language reasoning tasks, generating code using an LLM and
reasoning using a Python interpreter leads to more accurate results than much
larger models. For example, PAL using Codex achieves state-of-the-art few-shot
accuracy on the GSM8K benchmark of math word problems, surpassing PaLM-540B
which uses chain-of-thought by absolute 15% top-1. Our code and data are
publicly available at http://reasonwithpal.com/ .
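
PALChain in langchain_experimental implements this directly: the LLM writes a short Python program and a Python runtime produces the answer. Minimal usage is shown below; note that the generated code is executed locally (recent langchain_experimental releases also require an explicit opt-in flag for this), so sandbox accordingly.
```python
# PAL: the LLM generates Python as its reasoning trace; the interpreter
# computes the final answer. Treat this as untrusted-code execution.
from langchain_experimental.pal_chain import PALChain
from langchain_openai import OpenAI

llm = OpenAI(temperature=0, max_tokens=512)
pal_chain = PALChain.from_math_prompt(llm, verbose=True)

question = (
    "Jan has three times the number of pets as Marcia. Marcia has two more "
    "pets than Cindy. If Cindy has four pets, how many pets do the three have in total?"
)
print(pal_chain.run(question))
```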
## ReAct: Synergizing Reasoning and Acting in Language Models
- **arXiv id:** 2210.03629v3
- **Title:** ReAct: Synergizing Reasoning and Acting in Language Models
- **Authors:** Shunyu Yao, Jeffrey Zhao, Dian Yu, et al.
- **Published Date:** 2022-10-06
- **URL:** http://arxiv.org/abs/2210.03629v3
- **LangChain:**
- **Documentation:** [docs/integrations/providers/cohere](https://python.langchain.com/docs/integrations/providers/cohere), [docs/integrations/chat/huggingface](https://python.langchain.com/docs/integrations/chat/huggingface), [docs/integrations/tools/ionic_shopping](https://python.langchain.com/docs/integrations/tools/ionic_shopping)
- **API Reference:** [langchain...create_react_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.react.agent.create_react_agent.html#langchain.agents.react.agent.create_react_agent), [langchain...TrajectoryEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html#langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain)
**Abstract:** While large language models (LLMs) have demonstrated impressive capabilities
across tasks in language understanding and interactive decision making, their
abilities for reasoning (e.g. chain-of-thought prompting) and acting (e.g.
action plan generation) have primarily been studied as separate topics. In this
paper, we explore the use of LLMs to generate both reasoning traces and
task-specific actions in an interleaved manner, allowing for greater synergy
between the two: reasoning traces help the model induce, track, and update
action plans as well as handle exceptions, while actions allow it to interface
with external sources, such as knowledge bases or environments, to gather
additional information. We apply our approach, named ReAct, to a diverse set of
language and decision making tasks and demonstrate its effectiveness over
state-of-the-art baselines, as well as improved human interpretability and
trustworthiness over methods without reasoning or acting components.
Concretely, on question answering (HotpotQA) and fact verification (Fever),
ReAct overcomes issues of hallucination and error propagation prevalent in
chain-of-thought reasoning by interacting with a simple Wikipedia API, and
generates human-like task-solving trajectories that are more interpretable than
baselines without reasoning traces. On two interactive decision making
benchmarks (ALFWorld and WebShop), ReAct outperforms imitation and
reinforcement learning methods by an absolute success rate of 34% and 10%
respectively, while being prompted with only one or two in-context examples.
Project site with code: https://react-lm.github.io
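
create_react_agent builds exactly this loop: the standard ReAct prompt interleaves Thought/Action/Observation steps, and AgentExecutor runs the chosen tools. A minimal example follows; the search tool and question are illustrative.
```python
# A minimal ReAct agent: the LLM alternates reasoning ("Thought") with tool
# calls ("Action"); observations are fed back until it emits a final answer.
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_openai import ChatOpenAI

tools = [DuckDuckGoSearchRun()]
prompt = hub.pull("hwchase17/react")  # the standard ReAct prompt from LangChain Hub

agent = create_react_agent(ChatOpenAI(temperature=0), tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
print(executor.invoke({"input": "What is the capital of the country where the Eiffel Tower stands?"}))
```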
## Deep Lake: a Lakehouse for Deep Learning
- **arXiv id:** 2209.10785v2
- **Title:** Deep Lake: a Lakehouse for Deep Learning
- **Authors:** Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, et al.
- **Published Date:** 2022-09-22
- **URL:** http://arxiv.org/abs/2209.10785v2
- **LangChain:**
- **Documentation:** [docs/integrations/providers/activeloop_deeplake](https://python.langchain.com/docs/integrations/providers/activeloop_deeplake)
**Abstract:** Traditional data lakes provide critical data infrastructure for analytical
workloads by enabling time travel, running SQL queries, ingesting data with
ACID transactions, and visualizing petabyte-scale datasets on cloud storage.
They allow organizations to break down data silos, unlock data-driven
decision-making, improve operational efficiency, and reduce costs. However, as
deep learning usage increases, traditional data lakes are not well-designed for
applications such as natural language processing (NLP), audio processing,
computer vision, and applications involving non-tabular datasets. This paper
presents Deep Lake, an open-source lakehouse for deep learning applications
developed at Activeloop. Deep Lake maintains the benefits of a vanilla data
lake with one key difference: it stores complex data, such as images, videos,
annotations, as well as tabular data, in the form of tensors and rapidly
streams the data over the network to (a) Tensor Query Language, (b) in-browser
visualization engine, or (c) deep learning frameworks without sacrificing GPU
utilization. Datasets stored in Deep Lake can be accessed from PyTorch,
TensorFlow, JAX, and integrate with numerous MLOps tools.
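
In LangChain, Deep Lake is exposed as a vector store. A brief hedged example with a local dataset path follows (a managed `hub://` path is used the same way; the indexed text is illustrative).
```python
# Using Deep Lake as a LangChain vector store with a local dataset path.
from langchain_community.vectorstores import DeepLake
from langchain_openai import OpenAIEmbeddings

db = DeepLake.from_texts(
    ["Deep Lake streams tensors to training and retrieval workloads."],
    OpenAIEmbeddings(),
    dataset_path="./my_deeplake",  # or "hub://<org>/<dataset>" for managed storage
)
print(db.similarity_search("What does Deep Lake stream?"))
```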
## Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages
- **arXiv id:** 2205.12654v1
- **Title:** Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages
- **Authors:** Kevin Heffernan, Onur Çelebi, Holger Schwenk
- **Published Date:** 2022-05-25
- **URL:** http://arxiv.org/abs/2205.12654v1
- **LangChain:**
- **API Reference:** [langchain_community...LaserEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.laser.LaserEmbeddings.html#langchain_community.embeddings.laser.LaserEmbeddings)
**Abstract:** Scaling multilingual representation learning beyond the hundred most frequent
languages is challenging, in particular to cover the long tail of low-resource
languages. A promising approach has been to train one-for-all multilingual
models capable of cross-lingual transfer, but these models often suffer from
insufficient capacity and interference between unrelated languages. Instead, we
move away from this approach and focus on training multiple language (family)
specific representations, but most prominently enable all languages to still be
encoded in the same representational space. To achieve this, we focus on
teacher-student training, allowing all encoders to be mutually compatible for
bitext mining, and enabling fast learning of new languages. We introduce a new
teacher-student training scheme which combines supervised and self-supervised
training, allowing encoders to take advantage of monolingual training data,
which is valuable in the low-resource setting.
Our approach significantly outperforms the original LASER encoder. We study
very low-resource languages and handle 50 African languages, many of which are
not covered by any other model. For these languages, we train sentence
encoders, mine bitexts, and validate the bitexts by training NMT systems.
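
LaserEmbeddings wraps these LASER encoders for LangChain. A hedged example follows; it assumes the `laser_encoders` package is installed, and the `lang` code selects the language-specific tokenizer.
```python
# Multilingual sentence embeddings via LASER; embeddings from different
# languages share a single representational space.
from langchain_community.embeddings import LaserEmbeddings

embedder = LaserEmbeddings(lang="eng_Latn")
vectors = embedder.embed_documents(["Hello, world!", "Low-resource languages matter."])
print(len(vectors), len(vectors[0]))
```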
## Evaluating the Text-to-SQL Capabilities of Large Language Models
- **arXiv id:** 2204.00498v1
- **Title:** Evaluating the Text-to-SQL Capabilities of Large Language Models
- **Authors:** Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau
- **Published Date:** 2022-03-15
- **URL:** http://arxiv.org/abs/2204.00498v1
- **LangChain:**
- **API Reference:** [langchain_community...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL), [langchain_community...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase)
**Abstract:** We perform an empirical evaluation of Text-to-SQL capabilities of the Codex
language model. We find that, without any finetuning, Codex is a strong
baseline on the Spider benchmark; we also analyze the failure modes of Codex in
this setting. Furthermore, we demonstrate on the GeoQuery and Scholar
benchmarks that a small number of in-domain examples provided in the prompt
enables Codex to perform better than state-of-the-art models finetuned on such
few-shot examples.
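
This line of work underpins LangChain's SQL chains, where SQLDatabase puts the schema into the prompt so the LLM can write the query. A brief example (the SQLite URI and question are illustrative):
```python
# Text-to-SQL with schema-in-prompt: the chain generates SQL, and the database
# utility executes it.
from langchain.chains import create_sql_query_chain
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("sqlite:///Chinook.db")  # any SQLAlchemy URI
chain = create_sql_query_chain(ChatOpenAI(temperature=0), db)

sql = chain.invoke({"question": "How many employees are there?"})
print(sql)
print(db.run(sql))
```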
## Locally Typical Sampling
- **arXiv id:** 2202.00666v5
- **Title:** Locally Typical Sampling
- **Authors:** Clara Meister, Tiago Pimentel, Gian Wiher, et al.
- **Published Date:** 2022-02-01
- **URL:** http://arxiv.org/abs/2202.00666v5
- **LangChain:**
- **API Reference:** [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
**Abstract:** Today's probabilistic language generators fall short when it comes to
producing coherent and fluent text despite the fact that the underlying models
perform well under standard metrics, e.g., perplexity. This discrepancy has
puzzled the language generation community for the last few years. In this work,
we posit that the abstraction of natural language generation as a discrete
stochastic process--which allows for an information-theoretic analysis--can
provide new insights into the behavior of probabilistic language generators,
e.g., why high-probability texts can be dull or repetitive. Humans use language
as a means of communicating information, aiming to do so in a simultaneously
efficient and error-minimizing manner; in fact, psycholinguistics research
suggests humans choose each word in a string with this subconscious goal in
mind. We formally define the set of strings that meet this criterion: those for
which each word has an information content close to the expected information
content, i.e., the conditional entropy of our model. We then propose a simple
and efficient procedure for enforcing this criterion when generating from
probabilistic models, which we call locally typical sampling. Automatic and
human evaluations show that, in comparison to nucleus and top-k sampling,
locally typical sampling offers competitive performance (in both abstractive
summarization and story generation) in terms of quality while consistently
reducing degenerate repetitions.
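
The HuggingFaceEndpoint integrations listed above expose this decoding strategy through the `typical_p` parameter. A hedged example follows; the repo id is illustrative and an API token is assumed to be configured.
```python
# Enabling locally typical sampling on a Hugging Face text-generation endpoint.
# Assumes HUGGINGFACEHUB_API_TOKEN is set in the environment.
from langchain_huggingface import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",
    max_new_tokens=200,
    do_sample=True,
    typical_p=0.95,  # keep tokens whose information content is near the expected value
)
print(llm.invoke("Tell a short story about a lighthouse keeper."))
```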
## Learning Transferable Visual Models From Natural Language Supervision
- **arXiv id:** 2103.00020v1
- **Title:** Learning Transferable Visual Models From Natural Language Supervision
- **Authors:** Alec Radford, Jong Wook Kim, Chris Hallacy, et al.
- **Published Date:** 2021-02-26
- **URL:** http://arxiv.org/abs/2103.00020v1
- **LangChain:**
- **API Reference:** [langchain_experimental.open_clip](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.open_clip)
**Abstract:** State-of-the-art computer vision systems are trained to predict a fixed set
of predetermined object categories. This restricted form of supervision limits
their generality and usability since additional labeled data is needed to
specify any other visual concept. Learning directly from raw text about images
is a promising alternative which leverages a much broader source of
supervision. We demonstrate that the simple pre-training task of predicting
which caption goes with which image is an efficient and scalable way to learn
SOTA image representations from scratch on a dataset of 400 million (image,
text) pairs collected from the internet. After pre-training, natural language
is used to reference learned visual concepts (or describe new ones) enabling
zero-shot transfer of the model to downstream tasks. We study the performance
of this approach by benchmarking on over 30 different existing computer vision
datasets, spanning tasks such as OCR, action recognition in videos,
geo-localization, and many types of fine-grained object classification. The
model transfers non-trivially to most tasks and is often competitive with a
fully supervised baseline without the need for any dataset specific training.
For instance, we match the accuracy of the original ResNet-50 on ImageNet
zero-shot without needing to use any of the 1.28 million training examples it
was trained on. We release our code and pre-trained model weights at
https://github.com/OpenAI/CLIP.
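
langchain_experimental.open_clip wraps OpenCLIP so that texts and images land in one shared embedding space, which is what the multi-modal RAG cookbooks rely on. A hedged example follows; the model/checkpoint names must match an installed open_clip release, and the image path is a placeholder.
```python
# CLIP-style joint text/image embeddings via OpenCLIP.
from langchain_experimental.open_clip import OpenCLIPEmbeddings

clip = OpenCLIPEmbeddings(model_name="ViT-B-32", checkpoint="laion2b_s34b_b79k")

text_vec = clip.embed_query("a photo of a cat")
image_vecs = clip.embed_image(["./cat.jpg"])  # local image paths
print(len(text_vec), len(image_vecs[0]))
```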
## CTRL: A Conditional Transformer Language Model for Controllable Generation
- **arXiv id:** 1909.05858v2
- **Title:** CTRL: A Conditional Transformer Language Model for Controllable Generation
- **Authors:** Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al.
- **Published Date:** 2019-09-11
- **URL:** http://arxiv.org/abs/1909.05858v2
- **LangChain:**
- **API Reference:** [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
**Abstract:** Large-scale language models show promising text generation capabilities, but
users cannot easily control particular aspects of the generated text. We
release CTRL, a 1.63 billion-parameter conditional transformer language model,
trained to condition on control codes that govern style, content, and
task-specific behavior. Control codes were derived from structure that
naturally co-occurs with raw text, preserving the advantages of unsupervised
learning while providing more explicit control over text generation. These
codes also allow CTRL to predict which parts of the training data are most
likely given a sequence. This provides a potential method for analyzing large
amounts of data via model-based source attribution. We have released multiple
full-sized, pretrained versions of CTRL at https://github.com/salesforce/ctrl.
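A minimal sketch of the referenced `HuggingFaceEndpoint` wrapper calling a hosted text-generation model (the repo id and parameters are illustrative assumptions):
```python
# Hedged sketch: calling a Hugging Face-hosted text-generation model through LangChain.
# Assumes HUGGINGFACEHUB_API_TOKEN is set in the environment.
from langchain_huggingface import HuggingFaceEndpoint

# repo_id is illustrative; any text-generation repo on the Hub can be substituted.
llm = HuggingFaceEndpoint(
    repo_id="HuggingFaceH4/zephyr-7b-beta",
    max_new_tokens=128,
    temperature=0.7,
)

print(llm.invoke("Write one sentence about controllable text generation."))
```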
## Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
- **arXiv id:** 1908.10084v1
- **Title:** Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
- **Authors:** Nils Reimers, Iryna Gurevych
- **Published Date:** 2019-08-27
- **URL:** http://arxiv.org/abs/1908.10084v1
- **LangChain:**
- **Documentation:** [docs/integrations/text_embedding/sentence_transformers](https://python.langchain.com/docs/integrations/text_embedding/sentence_transformers)
**Abstract:** BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) have set a new
state-of-the-art performance on sentence-pair regression tasks like semantic
textual similarity (STS). However, it requires that both sentences are fed into
the network, which causes a massive computational overhead: Finding the most
similar pair in a collection of 10,000 sentences requires about 50 million
inference computations (~65 hours) with BERT. The construction of BERT makes it
unsuitable for semantic similarity search as well as for unsupervised tasks
like clustering.
In this publication, we present Sentence-BERT (SBERT), a modification of the
pretrained BERT network that uses siamese and triplet network structures to
derive semantically meaningful sentence embeddings that can be compared using
cosine-similarity. This reduces the effort for finding the most similar pair
from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while
maintaining the accuracy from BERT.
We evaluate SBERT and SRoBERTa on common STS tasks and transfer learning
tasks, where it outperforms other state-of-the-art sentence embeddings methods.
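A minimal sketch of the documented Sentence-Transformers integration, which exposes SBERT-style embeddings for similarity search (the model name is an illustrative default):
```python
# Hedged sketch: SBERT-style sentence embeddings through LangChain's
# HuggingFaceEmbeddings wrapper (backed by the sentence-transformers library).
from langchain_community.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

docs = [
    "LangChain integrates SBERT embeddings.",
    "Cosine similarity compares sentence vectors.",
]
doc_vectors = embeddings.embed_documents(docs)   # one vector per document
query_vector = embeddings.embed_query("How do I compare sentences?")
print(len(doc_vectors), len(query_vector))
```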


@@ -1,17 +1,25 @@
# 3rd Party Tutorials
# Tutorials
## Books and Handbooks
- [Generative AI with LangChain](https://www.amazon.com/Generative-AI-LangChain-language-ChatGPT/dp/1835083463/ref=sr_1_1?crid=1GMOMH0G7GLR&keywords=generative+ai+with+langchain&qid=1703247181&sprefix=%2Caps%2C298&sr=8-1) by [Ben Auffarth](https://www.amazon.com/stores/Ben-Auffarth/author/B08JQKSZ7D?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true), ©️ 2023 Packt Publishing
- [LangChain AI Handbook](https://www.pinecone.io/learn/langchain/) By **James Briggs** and **Francisco Ingham**
- [LangChain Cheatsheet](https://pub.towardsai.net/langchain-cheatsheet-all-secrets-on-a-single-page-8be26b721cde) by **Ivan Reznikov**
## Tutorials
### [LangChain v 0.1 by LangChain.ai](https://www.youtube.com/playlist?list=PLfaIDFEXuae0gBSJ9T0w7cu7iJZbH3T31)
### [Build with Langchain - Advanced by LangChain.ai](https://www.youtube.com/playlist?list=PLfaIDFEXuae06tclDATrMYY0idsTdLg9v)
### [LangGraph by LangChain.ai](https://www.youtube.com/playlist?list=PLfaIDFEXuae16n2TWUkKq5PgJ0w6Pkwtg)
### [by Greg Kamradt](https://www.youtube.com/playlist?list=PLqZXAkvF1bPNQER9mLmDbntNfSpzdDIU5)
### [by Sam Witteveen](https://www.youtube.com/playlist?list=PL8motc6AQftk1Bs42EW45kwYbyJ4jOdiZ)
### [by James Briggs](https://www.youtube.com/playlist?list=PLIUOU7oqGTLieV9uTIFMm6_4PXg-hlN6F)
### [by Prompt Engineering](https://www.youtube.com/playlist?list=PLVEEucA9MYhOu89CX8H3MBZqayTbcCTMr)
### [by Mayo Oshin](https://www.youtube.com/@chatwithdata/search?query=langchain)
### [by 1 little Coder](https://www.youtube.com/playlist?list=PLpdmBGJ6ELUK-v0MK-t4wZmVEbxM5xk6L)
### [by BobLin (Chinese language)](https://www.youtube.com/playlist?list=PLbd7ntv6PxC3QMFQvtWfk55p-Op_syO1C)
## Courses
@@ -25,7 +33,6 @@
### Online courses
- [Udemy](https://www.udemy.com/courses/search/?q=langchain)
- [DataCamp](https://www.datacamp.com/courses/developing-llm-applications-with-langchain)
- [Pluralsight](https://www.pluralsight.com/search?q=langchain)
- [Coursera](https://www.coursera.org/search?query=langchain)
- [Maven](https://maven.com/courses?query=langchain)
@@ -41,11 +48,8 @@
- [by Rabbitmetrics](https://youtu.be/aywZrzNaKjs)
- [by Ivan Reznikov](https://medium.com/@ivanreznikov/langchain-101-course-updated-668f7b41d6cb)
## Books and Handbooks
- [Generative AI with LangChain](https://www.amazon.com/Generative-AI-LangChain-language-ChatGPT/dp/1835083463/ref=sr_1_1?crid=1GMOMH0G7GLR&keywords=generative+ai+with+langchain&qid=1703247181&sprefix=%2Caps%2C298&sr=8-1) by [Ben Auffarth](https://www.amazon.com/stores/Ben-Auffarth/author/B08JQKSZ7D?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true), ©️ 2023 Packt Publishing
- [LangChain AI Handbook](https://www.pinecone.io/learn/langchain/) By **James Briggs** and **Francisco Ingham**
- [LangChain Cheatsheet](https://pub.towardsai.net/langchain-cheatsheet-all-secrets-on-a-single-page-8be26b721cde) by **Ivan Reznikov**
- [Dive into Langchain (Chinese language)](https://langchain.boblin.app/)
## [Documentation: Use cases](/docs/use_cases)
---------------------


@@ -1,63 +1,137 @@
# YouTube videos
[Updated 2024-05-16]
⛓ icon marks a new addition [last update 2023-09-21]
### [Official LangChain YouTube channel](https://www.youtube.com/@LangChain)
### [Tutorials on YouTube](/docs/additional_resources/tutorials/#tutorials)
### Introduction to LangChain with Harrison Chase, creator of LangChain
- [Building the Future with LLMs, `LangChain`, & `Pinecone`](https://youtu.be/nMniwlGyX-c) by [Pinecone](https://www.youtube.com/@pinecone-io)
- [LangChain and Weaviate with Harrison Chase and Bob van Luijt - Weaviate Podcast #36](https://youtu.be/lhby7Ql7hbk) by [Weaviate • Vector Database](https://www.youtube.com/@Weaviate)
- [LangChain Demo + Q&A with Harrison Chase](https://youtu.be/zaYTXQFR0_s?t=788) by [Full Stack Deep Learning](https://www.youtube.com/@The_Full_Stack)
- [LangChain Agents: Build Personal Assistants For Your Data (Q&A with Harrison Chase and Mayo Oshin)](https://youtu.be/gVkF8cwfBLI) by [Chat with data](https://www.youtube.com/@chatwithdata)
## Videos (sorted by views)
Only videos with 40K+ views:
- [Using `ChatGPT` with YOUR OWN Data. This is magical. (LangChain OpenAI API)](https://youtu.be/9AXP7tCI9PI) by [TechLead](https://www.youtube.com/@TechLead)
- [First look - `ChatGPT` + `WolframAlpha` (`GPT-3.5` and Wolfram|Alpha via LangChain by James Weaver)](https://youtu.be/wYGbY811oMo) by [Dr Alan D. Thompson](https://www.youtube.com/@DrAlanDThompson)
- [LangChain explained - The hottest new Python framework](https://youtu.be/RoR4XJw8wIc) by [AssemblyAI](https://www.youtube.com/@AssemblyAI)
- [Chatbot with INFINITE MEMORY using `OpenAI` & `Pinecone` - `GPT-3`, `Embeddings`, `ADA`, `Vector DB`, `Semantic`](https://youtu.be/2xNzB7xq8nk) by [David Shapiro ~ AI](https://www.youtube.com/@DaveShap)
- [LangChain for LLMs is... basically just an Ansible playbook](https://youtu.be/X51N9C-OhlE) by [David Shapiro ~ AI](https://www.youtube.com/@DaveShap)
- [Build your own LLM Apps with LangChain & `GPT-Index`](https://youtu.be/-75p09zFUJY) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [`BabyAGI` - New System of Autonomous AI Agents with LangChain](https://youtu.be/lg3kJvf1kXo) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [Run `BabyAGI` with Langchain Agents (with Python Code)](https://youtu.be/WosPGHPObx8) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [How to Use Langchain With `Zapier` | Write and Send Email with GPT-3 | OpenAI API Tutorial](https://youtu.be/p9v2-xEa9A0) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [Use Your Locally Stored Files To Get Response From GPT - `OpenAI` | Langchain | Python](https://youtu.be/NC1Ni9KS-rk) by [Shweta Lodha](https://www.youtube.com/@shweta-lodha)
- [`Langchain JS` | How to Use GPT-3, GPT-4 to Reference your own Data | `OpenAI Embeddings` Intro](https://youtu.be/veV2I-NEjaM) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [The easiest way to work with large language models | Learn LangChain in 10min](https://youtu.be/kmbS6FDQh7c) by [Sophia Yang](https://www.youtube.com/@SophiaYangDS)
- [4 Autonomous AI Agents: “Westworld” simulation `BabyAGI`, `AutoGPT`, `Camel`, `LangChain`](https://youtu.be/yWbnH6inT_U) by [Sophia Yang](https://www.youtube.com/@SophiaYangDS)
- [AI CAN SEARCH THE INTERNET? Langchain Agents + OpenAI ChatGPT](https://youtu.be/J-GL0htqda8) by [tylerwhatsgood](https://www.youtube.com/@tylerwhatsgood)
- [Query Your Data with GPT-4 | Embeddings, Vector Databases | Langchain JS Knowledgebase](https://youtu.be/jRnUPUTkZmU) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [`Weaviate` + LangChain for LLM apps presented by Erika Cardenas](https://youtu.be/7AGj4Td5Lgw) by [`Weaviate` • Vector Database](https://www.youtube.com/@Weaviate)
- [Langchain Overview — How to Use Langchain & `ChatGPT`](https://youtu.be/oYVYIq0lOtI) by [Python In Office](https://www.youtube.com/@pythoninoffice6568)
- [Langchain Overview - How to Use Langchain & `ChatGPT`](https://youtu.be/oYVYIq0lOtI) by [Python In Office](https://www.youtube.com/@pythoninoffice6568)
- [LangChain Tutorials](https://www.youtube.com/watch?v=FuqdVNB_8c0&list=PL9V0lbeJ69brU-ojMpU1Y7Ic58Tap0Cw6) by [Edrick](https://www.youtube.com/@edrickdch):
- [LangChain, Chroma DB, OpenAI Beginner Guide | ChatGPT with your PDF](https://youtu.be/FuqdVNB_8c0)
- [LangChain 101: The Complete Beginner's Guide](https://youtu.be/P3MAbZ2eMUI)
- [Custom langchain Agent & Tools with memory. Turn any `Python function` into langchain tool with Gpt 3](https://youtu.be/NIG8lXk0ULg) by [echohive](https://www.youtube.com/@echohive)
- [Building AI LLM Apps with LangChain (and more?) - LIVE STREAM](https://www.youtube.com/live/M-2Cj_2fzWI?feature=share) by [Nicholas Renotte](https://www.youtube.com/@NicholasRenotte)
- [`ChatGPT` with any `YouTube` video using langchain and `chromadb`](https://youtu.be/TQZfB2bzVwU) by [echohive](https://www.youtube.com/@echohive)
- [How to Talk to a `PDF` using LangChain and `ChatGPT`](https://youtu.be/v2i1YDtrIwk) by [Automata Learning Lab](https://www.youtube.com/@automatalearninglab)
- [Langchain Document Loaders Part 1: Unstructured Files](https://youtu.be/O5C0wfsen98) by [Merk](https://www.youtube.com/@heymichaeldaigler)
- [LangChain - Prompt Templates (what all the best prompt engineers use)](https://youtu.be/1aRu8b0XNOQ) by [Nick Daigler](https://www.youtube.com/@nickdaigler)
- [LangChain. Crear aplicaciones Python impulsadas por GPT](https://youtu.be/DkW_rDndts8) by [Jesús Conde](https://www.youtube.com/@0utKast)
- [Easiest Way to Use GPT In Your Products | LangChain Basics Tutorial](https://youtu.be/fLy0VenZyGc) by [Rachel Woods](https://www.youtube.com/@therachelwoods)
- [`BabyAGI` + `GPT-4` Langchain Agent with Internet Access](https://youtu.be/wx1z_hs5P6E) by [tylerwhatsgood](https://www.youtube.com/@tylerwhatsgood)
- [Learning LLM Agents. How does it actually work? LangChain, AutoGPT & OpenAI](https://youtu.be/mb_YAABSplk) by [Arnoldas Kemeklis](https://www.youtube.com/@processusAI)
- [Get Started with LangChain in `Node.js`](https://youtu.be/Wxx1KUWJFv4) by [Developers Digest](https://www.youtube.com/@DevelopersDigest)
- [LangChain + `OpenAI` tutorial: Building a Q&A system w/ own text data](https://youtu.be/DYOU_Z0hAwo) by [Samuel Chan](https://www.youtube.com/@SamuelChan)
- [Langchain + `Zapier` Agent](https://youtu.be/yribLAb-pxA) by [Merk](https://www.youtube.com/@heymichaeldaigler)
- [Connecting the Internet with `ChatGPT` (LLMs) using Langchain And Answers Your Questions](https://youtu.be/9Y0TBC63yZg) by [Kamalraj M M](https://www.youtube.com/@insightbuilder)
- [Build More Powerful LLM Applications for Business with LangChain (Beginners Guide)](https://youtu.be/sp3-WLKEcBg) by [No Code Blackbox](https://www.youtube.com/@nocodeblackbox)
- [LangFlow LLM Agent Demo for 🦜🔗LangChain](https://youtu.be/zJxDHaWt-6o) by [Cobus Greyling](https://www.youtube.com/@CobusGreylingZA)
- [Chatbot Factory: Streamline Python Chatbot Creation with LLMs and Langchain](https://youtu.be/eYer3uzrcuM) by [Finxter](https://www.youtube.com/@CobusGreylingZA)
- [LangChain Tutorial - ChatGPT mit eigenen Daten](https://youtu.be/0XDLyY90E2c) by [Coding Crashkurse](https://www.youtube.com/@codingcrashkurse6429)
- [Chat with a `CSV` | LangChain Agents Tutorial (Beginners)](https://youtu.be/tjeti5vXWOU) by [GoDataProf](https://www.youtube.com/@godataprof)
- [Introdução ao Langchain - #Cortes - Live DataHackers](https://youtu.be/fw8y5VRei5Y) by [Prof. João Gabriel Lima](https://www.youtube.com/@profjoaogabriellima)
- [LangChain: Level up `ChatGPT` !? | LangChain Tutorial Part 1](https://youtu.be/vxUGx8aZpDE) by [Code Affinity](https://www.youtube.com/@codeaffinitydev)
- [KI schreibt krasses Youtube Skript 😲😳 | LangChain Tutorial Deutsch](https://youtu.be/QpTiXyK1jus) by [SimpleKI](https://www.youtube.com/@simpleki)
- [Chat with Audio: Langchain, `Chroma DB`, OpenAI, and `Assembly AI`](https://youtu.be/Kjy7cx1r75g) by [AI Anytime](https://www.youtube.com/@AIAnytime)
- [QA over documents with Auto vector index selection with Langchain router chains](https://youtu.be/9G05qybShv8) by [echohive](https://www.youtube.com/@echohive)
- [Build your own custom LLM application with `Bubble.io` & Langchain (No Code & Beginner friendly)](https://youtu.be/O7NhQGu1m6c) by [No Code Blackbox](https://www.youtube.com/@nocodeblackbox)
- [Simple App to Question Your Docs: Leveraging `Streamlit`, `Hugging Face Spaces`, LangChain, and `Claude`!](https://youtu.be/X4YbNECRr7o) by [Chris Alexiuk](https://www.youtube.com/@chrisalexiuk)
- [LANGCHAIN AI- `ConstitutionalChainAI` + Databutton AI ASSISTANT Web App](https://youtu.be/5zIU6_rdJCU) by [Avra](https://www.youtube.com/@Avra_b)
- [LANGCHAIN AI AUTONOMOUS AGENT WEB APP - 👶 `BABY AGI` 🤖 with EMAIL AUTOMATION using `DATABUTTON`](https://youtu.be/cvAwOGfeHgw) by [Avra](https://www.youtube.com/@Avra_b)
- [The Future of Data Analysis: Using A.I. Models in Data Analysis (LangChain)](https://youtu.be/v_LIcVyg5dk) by [Absent Data](https://www.youtube.com/@absentdata)
- [Memory in LangChain | Deep dive (python)](https://youtu.be/70lqvTFh_Yg) by [Eden Marco](https://www.youtube.com/@EdenMarco)
- [9 LangChain UseCases | Beginner's Guide | 2023](https://youtu.be/zS8_qosHNMw) by [Data Science Basics](https://www.youtube.com/@datasciencebasics)
- [Use Large Language Models in Jupyter Notebook | LangChain | Agents & Indexes](https://youtu.be/JSe11L1a_QQ) by [Abhinaw Tiwari](https://www.youtube.com/@AbhinawTiwariAT)
- [How to Talk to Your Langchain Agent | `11 Labs` + `Whisper`](https://youtu.be/N4k459Zw2PU) by [VRSEN](https://www.youtube.com/@vrsen)
- [LangChain Deep Dive: 5 FUN AI App Ideas To Build Quickly and Easily](https://youtu.be/mPYEPzLkeks) by [James NoCode](https://www.youtube.com/@jamesnocode)
- [LangChain 101: Models](https://youtu.be/T6c_XsyaNSQ) by [Mckay Wrigley](https://www.youtube.com/@realmckaywrigley)
- [LangChain with JavaScript Tutorial #1 | Setup & Using LLMs](https://youtu.be/W3AoeMrg27o) by [Leon van Zyl](https://www.youtube.com/@leonvanzyl)
- [LangChain Overview & Tutorial for Beginners: Build Powerful AI Apps Quickly & Easily (ZERO CODE)](https://youtu.be/iI84yym473Q) by [James NoCode](https://www.youtube.com/@jamesnocode)
- [LangChain In Action: Real-World Use Case With Step-by-Step Tutorial](https://youtu.be/UO699Szp82M) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
- [Summarizing and Querying Multiple Papers with LangChain](https://youtu.be/p_MQRWH5Y6k) by [Automata Learning Lab](https://www.youtube.com/@automatalearninglab)
- [Using Langchain (and `Replit`) through `Tana`, ask `Google`/`Wikipedia`/`Wolfram Alpha` to fill out a table](https://youtu.be/Webau9lEzoI) by [Stian Håklev](https://www.youtube.com/@StianHaklev)
- [Langchain PDF App (GUI) | Create a ChatGPT For Your `PDF` in Python](https://youtu.be/wUAUdEw5oxM) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
- [Auto-GPT with LangChain 🔥 | Create Your Own Personal AI Assistant](https://youtu.be/imDfPmMKEjM) by [Data Science Basics](https://www.youtube.com/@datasciencebasics)
- [Create Your OWN Slack AI Assistant with Python & LangChain](https://youtu.be/3jFXRNn2Bu8) by [Dave Ebbelaar](https://www.youtube.com/@daveebbelaar)
- [How to Create LOCAL Chatbots with GPT4All and LangChain [Full Guide]](https://youtu.be/4p1Fojur8Zw) by [Liam Ottley](https://www.youtube.com/@LiamOttley)
- [Build a `Multilingual PDF` Search App with LangChain, `Cohere` and `Bubble`](https://youtu.be/hOrtuumOrv8) by [Menlo Park Lab](https://www.youtube.com/@menloparklab)
- [Building a LangChain Agent (code-free!) Using `Bubble` and `Flowise`](https://youtu.be/jDJIIVWTZDE) by [Menlo Park Lab](https://www.youtube.com/@menloparklab)
- [Build a LangChain-based Semantic PDF Search App with No-Code Tools Bubble and Flowise](https://youtu.be/s33v5cIeqA4) by [Menlo Park Lab](https://www.youtube.com/@menloparklab)
- [LangChain Memory Tutorial | Building a ChatGPT Clone in Python](https://youtu.be/Cwq91cj2Pnc) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
- [ChatGPT For Your DATA | Chat with Multiple Documents Using LangChain](https://youtu.be/TeDgIDqQmzs) by [Data Science Basics](https://www.youtube.com/@datasciencebasics)
- [`Llama Index`: Chat with Documentation using URL Loader](https://youtu.be/XJRoDEctAwA) by [Merk](https://www.youtube.com/@heymichaeldaigler)
- [Using OpenAI, LangChain, and `Gradio` to Build Custom GenAI Applications](https://youtu.be/1MsmqMg3yUc) by [David Hundley](https://www.youtube.com/@dkhundley)
- [Build AI chatbot with custom knowledge base using OpenAI API and GPT Index](https://youtu.be/vDZAZuaXf48) by [Irina Nik](https://www.youtube.com/@irina_nik)
- [Build Your Own Auto-GPT Apps with LangChain (Python Tutorial)](https://youtu.be/NYSWn1ipbgg) by [Dave Ebbelaar](https://www.youtube.com/@daveebbelaar)
- [Chat with Multiple `PDFs` | LangChain App Tutorial in Python (Free LLMs and Embeddings)](https://youtu.be/dXxQ0LR-3Hg) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
- [Chat with a `CSV` | `LangChain Agents` Tutorial (Beginners)](https://youtu.be/tjeti5vXWOU) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
- [Create Your Own ChatGPT with `PDF` Data in 5 Minutes (LangChain Tutorial)](https://youtu.be/au2WVVGUvc8) by [Liam Ottley](https://www.youtube.com/@LiamOttley)
- [Build a Custom Chatbot with OpenAI: `GPT-Index` & LangChain | Step-by-Step Tutorial](https://youtu.be/FIDv6nc4CgU) by [Fabrikod](https://www.youtube.com/@fabrikod)
- [`Flowise` is an open-source no-code UI visual tool to build 🦜🔗LangChain applications](https://youtu.be/CovAPtQPU0k) by [Cobus Greyling](https://www.youtube.com/@CobusGreylingZA)
- [LangChain & GPT 4 For Data Analysis: The `Pandas` Dataframe Agent](https://youtu.be/rFQ5Kmkd4jc) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
- [`GirlfriendGPT` - AI girlfriend with LangChain](https://youtu.be/LiN3D1QZGQw) by [Girlfriend GPT](https://www.youtube.com/@girlfriendGPT)
- [How to build with Langchain 10x easier | ⛓️ LangFlow & `Flowise`](https://youtu.be/Ya1oGL7ZTvU) by [AI Jason](https://www.youtube.com/@AIJasonZ)
- [Getting Started With LangChain In 20 Minutes- Build Celebrity Search Application](https://youtu.be/_FpT1cwcSLg) by [Krish Naik](https://www.youtube.com/@krishnaik06)
- ⛓ [Vector Embeddings Tutorial Code Your Own AI Assistant with `GPT-4 API` + LangChain + NLP](https://youtu.be/yfHHvmaMkcA?si=5uJhxoh2tvdnOXok) by [FreeCodeCamp.org](https://www.youtube.com/@freecodecamp)
- ⛓ [Fully LOCAL `Llama 2` Q&A with LangChain](https://youtu.be/wgYctKFnQ74?si=UX1F3W-B3MqF4-K-) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- ⛓ [Fully LOCAL `Llama 2` Langchain on CPU](https://youtu.be/yhECvKMu8kM?si=IvjxwlA1c09VwHZ4) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- ⛓ [Build LangChain Audio Apps with Python in 5 Minutes](https://youtu.be/7w7ysaDz2W4?si=BvdMiyHhormr2-vr) by [AssemblyAI](https://www.youtube.com/@AssemblyAI)
- ⛓ [`Voiceflow` & `Flowise`: Want to Beat Competition? New Tutorial with Real AI Chatbot](https://youtu.be/EZKkmeFwag0?si=-4dETYDHEstiK_bb) by [AI SIMP](https://www.youtube.com/@aisimp)
- ⛓ [THIS Is How You Build Production-Ready AI Apps (`LangSmith` Tutorial)](https://youtu.be/tFXm5ijih98?si=lfiqpyaivxHFyI94) by [Dave Ebbelaar](https://www.youtube.com/@daveebbelaar)
- ⛓ [Build POWERFUL LLM Bots EASILY with Your Own Data - `Embedchain` - Langchain 2.0? (Tutorial)](https://youtu.be/jE24Y_GasE8?si=0yEDZt3BK5Q-LIuF) by [WorldofAI](https://www.youtube.com/@intheworldofai)
- ⛓ [`Code Llama` powered Gradio App for Coding: Runs on CPU](https://youtu.be/AJOhV6Ryy5o?si=ouuQT6IghYlc1NEJ) by [AI Anytime](https://www.youtube.com/@AIAnytime)
- ⛓ [LangChain Complete Course in One Video | Develop LangChain (AI) Based Solutions for Your Business](https://youtu.be/j9mQd-MyIg8?si=_wlNT3nP2LpDKztZ) by [UBprogrammer](https://www.youtube.com/@UBprogrammer)
- ⛓ [How to Run `LLaMA` Locally on CPU or GPU | Python & Langchain & CTransformers Guide](https://youtu.be/SvjWDX2NqiM?si=DxFml8XeGhiLTzLV) by [Code With Prince](https://www.youtube.com/@CodeWithPrince)
- ⛓ [PyData Heidelberg #11 - TimeSeries Forecasting & LLM Langchain](https://www.youtube.com/live/Glbwb5Hxu18?si=PIEY8Raq_C9PCHuW) by [PyData](https://www.youtube.com/@PyDataTV)
- ⛓ [Prompt Engineering in Web Development | Using LangChain and Templates with OpenAI](https://youtu.be/pK6WzlTOlYw?si=fkcDQsBG2h-DM8uQ) by [Akamai Developer](https://www.youtube.com/@AkamaiDeveloper)
- ⛓ [Retrieval-Augmented Generation (RAG) using LangChain and `Pinecone` - The RAG Special Episode](https://youtu.be/J_tCD_J6w3s?si=60Mnr5VD9UED9bGG) by [Generative AI and Data Science On AWS](https://www.youtube.com/@GenerativeAIOnAWS)
- ⛓ [`LLAMA2 70b-chat` Multiple Documents Chatbot with Langchain & Streamlit |All OPEN SOURCE|Replicate API](https://youtu.be/vhghB81vViM?si=dszzJnArMeac7lyc) by [DataInsightEdge](https://www.youtube.com/@DataInsightEdge01)
- ⛓ [Chatting with 44K Fashion Products: LangChain Opportunities and Pitfalls](https://youtu.be/Zudgske0F_s?si=8HSshHoEhh0PemJA) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
- ⛓ [Structured Data Extraction from `ChatGPT` with LangChain](https://youtu.be/q1lYg8JISpQ?si=0HctzOHYZvq62sve) by [MG](https://www.youtube.com/@MG_cafe)
- ⛓ [Chat with Multiple PDFs using `Llama 2`, `Pinecone` and LangChain (Free LLMs and Embeddings)](https://youtu.be/TcJ_tVSGS4g?si=FZYnMDJyoFfL3Z2i) by [Muhammad Moin](https://www.youtube.com/@muhammadmoinfaisal)
- ⛓ [Integrate Audio into `LangChain.js` apps in 5 Minutes](https://youtu.be/hNpUSaYZIzs?si=Gb9h7W9A8lzfvFKi) by [AssemblyAI](https://www.youtube.com/@AssemblyAI)
- ⛓ [`ChatGPT` for your data with Local LLM](https://youtu.be/bWrjpwhHEMU?si=uM6ZZ18z9og4M90u) by [Jacob Jedryszek](https://www.youtube.com/@jj09)
- ⛓ [Training `Chatgpt` with your personal data using langchain step by step in detail](https://youtu.be/j3xOMde2v9Y?si=179HsiMU-hEPuSs4) by [NextGen Machines](https://www.youtube.com/@MayankGupta-kb5yc)
- ⛓ [Use ANY language in `LangSmith` with REST](https://youtu.be/7BL0GEdMmgY?si=iXfOEdBLqXF6hqRM) by [Nerding I/O](https://www.youtube.com/@nerding_io)
- ⛓ [How to Leverage the Full Potential of LLMs for Your Business with Langchain - Leon Ruddat](https://youtu.be/vZmoEa7oWMg?si=ZhMmydq7RtkZd56Q) by [PyData](https://www.youtube.com/@PyDataTV)
- ⛓ [`ChatCSV` App: Chat with CSV files using LangChain and `Llama 2`](https://youtu.be/PvsMg6jFs8E?si=Qzg5u5gijxj933Ya) by [Muhammad Moin](https://www.youtube.com/@muhammadmoinfaisal)
- ⛓ [Build Chat PDF app in Python with LangChain, OpenAI, Streamlit | Full project | Learn Coding](https://www.youtube.com/watch?v=WYzFzZg4YZI) by [Jutsupoint](https://www.youtube.com/@JutsuPoint)
- ⛓ [Build Eminem Bot App with LangChain, Streamlit, OpenAI | Full Python Project | Tutorial | AI ChatBot](https://www.youtube.com/watch?v=a2shHB4MRZ4) by [Jutsupoint](https://www.youtube.com/@JutsuPoint)
### [Prompt Engineering and LangChain](https://www.youtube.com/watch?v=muXbPpG_ys4&list=PLEJK-H61Xlwzm5FYLDdKt_6yibO33zoMW) by [Venelin Valkov](https://www.youtube.com/@venelin_valkov)
- [Getting Started with LangChain: Load Custom Data, Run OpenAI Models, Embeddings and `ChatGPT`](https://www.youtube.com/watch?v=muXbPpG_ys4)
- [Loaders, Indexes & Vectorstores in LangChain: Question Answering on `PDF` files with `ChatGPT`](https://www.youtube.com/watch?v=FQnvfR8Dmr0)
- [LangChain Models: `ChatGPT`, `Flan Alpaca`, `OpenAI Embeddings`, Prompt Templates & Streaming](https://www.youtube.com/watch?v=zy6LiK5F5-s)
- [LangChain Chains: Use `ChatGPT` to Build Conversational Agents, Summaries and Q&A on Text With LLMs](https://www.youtube.com/watch?v=h1tJZQPcimM)
- [Analyze Custom CSV Data with `GPT-4` using Langchain](https://www.youtube.com/watch?v=Ew3sGdX8at4)
- [Build ChatGPT Chatbots with LangChain Memory: Understanding and Implementing Memory in Conversations](https://youtu.be/CyuUlf54wTs)
- [Using `ChatGPT` with YOUR OWN Data. This is magical. (LangChain `OpenAI API`)](https://youtu.be/9AXP7tCI9PI)
- [Chat with Multiple `PDFs` | LangChain App Tutorial in Python (Free LLMs and Embeddings)](https://youtu.be/dXxQ0LR-3Hg?si=pjXKhsHRzn10vOqX)
- [`Hugging Face` + Langchain in 5 mins | Access 200k+ FREE AI models for your AI apps](https://youtu.be/_j7JEDWuqLE?si=psimQscN3qo2dOa9)
- [LangChain Crash Course For Beginners | LangChain Tutorial](https://youtu.be/nAmC7SoVLd8?si=qJdvyG5-rnjqfdj1)
- [Vector Embeddings Tutorial Code Your Own AI Assistant with GPT-4 API + LangChain + NLP](https://youtu.be/yfHHvmaMkcA?si=UBP3yw50cLm3a2nj)
- [Development with Large Language Models Tutorial `OpenAI`, Langchain, Agents, `Chroma`](https://youtu.be/xZDB1naRUlk?si=v8J1q6oFHRyTkf7Y)
- [Langchain: `PDF` Chat App (GUI) | ChatGPT for Your PDF FILES | Step-by-Step Tutorial](https://youtu.be/RIWbalZ7sTo?si=LbKsCcuyv0BtnrTY)
- [Vector Search `RAG` Tutorial Combine Your Data with LLMs with Advanced Search](https://youtu.be/JEBDfGqrAUA?si=pD7oxpfwWeJCxfBt)
- [LangChain Crash Course for Beginners](https://youtu.be/lG7Uxts9SXs?si=Yte4S5afN7KNCw0F)
- [Learn `RAG` From Scratch Python AI Tutorial from a LangChain Engineer](https://youtu.be/sVcwVQRHIc8?si=_LN4g0vOgSdtlB3S)
- [`Llama 2` in LangChain — FIRST Open Source Conversational Agent!](https://youtu.be/6iHVJyX2e50?si=rtq1maPrzWKHbwVV)
- [LangChain Tutorial for Beginners | Generative AI Series](https://youtu.be/cQUUkZnyoD0?si=KYz-bvcocdqGh9f_)
- [Chatbots with `RAG`: LangChain Full Walkthrough](https://youtu.be/LhnCsygAvzY?si=yS7T98VLfcWdkDek)
- [LangChain Explained In 15 Minutes - A MUST Learn For Python Programmers](https://youtu.be/mrjq3lFz23s?si=wkQGcSKUJjuiiEPf)
- [LLM Project | End to End LLM Project Using Langchain, `OpenAI` in Finance Domain](https://youtu.be/MoqgmWV1fm8?si=oVl-5kJVgd3a07Y_)
- [What is LangChain?](https://youtu.be/1bUy-1hGZpI?si=NZ0D51VM5y-DhjGe)
- [`RAG` + Langchain Python Project: Easy AI/Chat For Your Doc](https://youtu.be/tcqEUSNCn8I?si=RLcWPBVLIErRqdmU)
- [Getting Started With LangChain In 20 Minutes- Build Celebrity Search Application](https://youtu.be/_FpT1cwcSLg?si=X9qVazlXYucN_JBP)
- [LangChain GEN AI Tutorial 6 End-to-End Projects using OpenAI, Google `Gemini Pro`, `LLAMA2`](https://youtu.be/x0AnCE9SE4A?si=_92gJYm7kb-V2bi0)
- [Complete Langchain GEN AI Crash Course With 6 End To End LLM Projects With OPENAI, `LLAMA2`, `Gemini Pro`](https://youtu.be/aWKrL4z5H6w?si=NVLi7Yiq0ccE7xXE)
- [AI Leader Reveals The Future of AI AGENTS (LangChain CEO)](https://youtu.be/9ZhbA0FHZYc?si=1r4P6kRvKVvEhRgE)
- [Learn How To Query Pdf using Langchain Open AI in 5 min](https://youtu.be/5Ghv-F1wF_0?si=ZZRjrWfeiFOVrcvu)
- [Reliable, fully local RAG agents with `LLaMA3`](https://youtu.be/-ROS6gfYIts?si=75CXA8W_BbnkIxcV)
- [Learn `LangChain.js` - Build LLM apps with JavaScript and `OpenAI`](https://youtu.be/HSZ_uaif57o?si=Icj-RAhwMT-vHaYA)
- [LLM Project | End to End LLM Project Using LangChain, Google Palm In Ed-Tech Industry](https://youtu.be/AjQPRomyd-k?si=eC3NT6kn02Lhpz-_)
- [Chatbot Answering from Your Own Knowledge Base: Langchain, `ChatGPT`, `Pinecone`, and `Streamlit`: | Code](https://youtu.be/nAKhxQ3hcMA?si=9Zd_Nd_jiYhtml5w)
- [LangChain is AMAZING | Quick Python Tutorial](https://youtu.be/I4mFqyqFkxg?si=aJ66qh558OfNAczD)
- [`GirlfriendGPT` - AI girlfriend with LangChain](https://youtu.be/LiN3D1QZGQw?si=kZR-lnJwixeVrjmh)
- [Using NEW `MPT-7B` in `Hugging Face` and LangChain](https://youtu.be/DXpk9K7DgMo?si=99JDpV_ueimwJhMi)
- [LangChain - COMPLETE TUTORIAL - Basics to advanced concept!](https://youtu.be/a89vqgK-Qcs?si=0aVO2EOqsw7GE5e3)
- [LangChain Agents: Simply Explained!](https://youtu.be/Xi9Ui-9qcPw?si=DCuG7nGx8dxcfhkx)
- [Chat With Multiple `PDF` Documents With Langchain And Google `Gemini Pro`](https://youtu.be/uus5eLz6smA?si=YUwvHtaZsGeIl0WD)
- [LLM Project | End to end LLM project Using Langchain, `Google Palm` in Retail Industry](https://youtu.be/4wtrl4hnPT8?si=_eOKPpdLfWu5UXMQ)
- [Tutorial | Chat with any Website using Python and Langchain](https://youtu.be/bupx08ZgSFg?si=KRrjYZFnuLsstGwW)
- [Prompt Engineering And LLM's With LangChain In One Shot-Generative AI](https://youtu.be/t2bSApmPzU4?si=87vPQQtYEWTyu2Kx)
- [Build a Custom Chatbot with `OpenAI`: `GPT-Index` & LangChain | Step-by-Step Tutorial](https://youtu.be/FIDv6nc4CgU?si=gR1u3DUG9lvzBIKK)
- [Search Your `PDF` App using Langchain, `ChromaDB`, and Open Source LLM: No OpenAI API (Runs on CPU)](https://youtu.be/rIV1EseKwU4?si=UxZEoXSiPai8fXgl)
- [Building a `RAG` application from scratch using Python, LangChain, and the `OpenAI API`](https://youtu.be/BrsocJb-fAo?si=hvkh9iTGzJ-LnsX-)
- [Function Calling via `ChatGPT API` - First Look With LangChain](https://youtu.be/0-zlUy7VUjg?si=Vc6LFseckEc6qvuk)
- [Private GPT, free deployment! Langchain-Chatchat helps you easily play with major mainstream AI models! | Zero Degree Commentary](https://youtu.be/3LLUyaHP-3I?si=AZumEeFXsvqaLl0f)
- [Create a ChatGPT clone using `Streamlit` and LangChain](https://youtu.be/IaTiyQ2oYUQ?si=WbgsYmqPDnMidSUK)
- [What's next for AI agents ft. LangChain's Harrison Chase](https://youtu.be/pBBe1pk8hf4?si=H4vdBF9nmkNZxiHt)
- [`LangFlow`: Build Chatbots without Writing Code - LangChain](https://youtu.be/KJ-ux3hre4s?si=TJuDu4bAlva1myNL)
- [Building a LangChain Custom Medical Agent with Memory](https://youtu.be/6UFtRwWnHws?si=wymYad26VgigRkHy)
- [`Ollama` meets LangChain](https://youtu.be/k_1pOF1mj8k?si=RlBiCrmaR3s7SnMK)
- [End To End LLM Langchain Project using `Pinecone` Vector Database](https://youtu.be/erUfLIi9OFM?si=aHpuHXdIEmAfS4eF)
- [`LLaMA2` with LangChain - Basics | LangChain TUTORIAL](https://youtu.be/cIRzwSXB4Rc?si=FUs0OLVJpzKhut0h)
- [Understanding `ReACT` with LangChain](https://youtu.be/Eug2clsLtFs?si=imgj534ggxlypS0d)
---------------------
[Updated 2024-05-16]
⛓ icon marks a new addition [last update 2024-02-04]


@@ -1,10 +1,27 @@
# langchain-core
## 0.1.x
## 0.1.7 (Jan 5, 2024)
#### Deleted
No deletions.
#### Deprecated
- `BaseChatModel` methods `__call__`, `call_as_llm`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.invoke` instead.
- `BaseChatModel` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.ainvoke` instead.
- `BaseLLM` methods `__call__`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseLLM.invoke` instead.
- `BaseLLM` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseLLM.ainvoke` instead.
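For example, migrating off the deprecated call styles onto the Runnable interface might look like the following (a hedged sketch; `ChatOpenAI` is used purely for illustration and is not part of `langchain-core`):
```python
# Hedged sketch of the migration: prefer invoke/ainvoke over the deprecated helpers.
from langchain_openai import ChatOpenAI  # illustrative chat model; any BaseChatModel works

chat = ChatOpenAI()

# Deprecated (scheduled for removal in 0.2.0):
# chat.predict("Tell me a joke")
# chat("Tell me a joke")

# Preferred:
result = chat.invoke("Tell me a joke")
print(result.content)
```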
#### Fixed
- Restrict recursive URL scraping: [#15559](https://github.com/langchain-ai/langchain/pull/15559)
#### Added
No additions.
#### Beta
- Marked `langchain_core.load.load` and `langchain_core.load.loads` as beta.
- Marked `langchain_core.beta.runnables.context.ContextGet` and `langchain_core.beta.runnables.context.ContextSet` as beta.
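A minimal sketch of the beta serialization helpers mentioned above (expect a beta warning when calling them):
```python
# Hedged sketch: round-trip a serializable LangChain object with the beta load utilities.
from langchain_core.load import dumps, loads
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([("human", "Summarize {topic} in one line.")])

serialized = dumps(prompt)    # JSON string representation
restored = loads(serialized)  # reconstructs the ChatPromptTemplate (emits a beta warning)
print(restored.format_messages(topic="locally typical sampling"))
```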


@@ -1,73 +1,16 @@
# langchain
## 0.2.0
### Deleted
As of release 0.2.0, `langchain` is required to be integration-agnostic. This means that code in `langchain` should not by default instantiate any specific chat models, LLMs, embedding models, vector stores, etc.; instead, the user will be required to specify those explicitly.
The following functions and classes require an explicit LLM to be passed as an argument:
- `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit`
- `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit`
- `langchain.chains.openai_functions.get_openapi_chain`
- `langchain.chains.router.MultiRetrievalQAChain.from_retrievers`
- `langchain.indexes.VectorStoreIndexWrapper.query`
- `langchain.indexes.VectorStoreIndexWrapper.query_with_sources`
- `langchain.indexes.VectorStoreIndexWrapper.aquery_with_sources`
- `langchain.chains.flare.FlareChain`
The following classes now require passing an explicit Embedding model as an argument:
- `langchain.indexes.VectorstoreIndexCreator`
The following code has been removed:
- `langchain.natbot.NatBotChain.from_default` removed in favor of the `from_llm` class method.
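For example, code that previously relied on implicit defaults now passes the models in explicitly (a hedged sketch; the OpenAI classes and the file path are illustrative stand-ins for whichever integration you use):
```python
# Hedged sketch: under 0.2.0 the index helpers listed above need explicit embedding and LLM objects.
from langchain.indexes import VectorstoreIndexCreator
from langchain_community.document_loaders import TextLoader
from langchain_openai import ChatOpenAI, OpenAIEmbeddings  # illustrative integrations

# Depending on your version, you may also need to pass vectorstore_cls explicitly.
index = VectorstoreIndexCreator(embedding=OpenAIEmbeddings()).from_loaders(
    [TextLoader("state_of_the_union.txt")]  # placeholder document
)

# query() now requires an explicit LLM as well.
print(index.query("What was said about the economy?", llm=ChatOpenAI()))
```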
### Deprecated
We have two main types of deprecations:
1. Code that was moved from `langchain` into another package (e.g, `langchain-community`)
If you try to import it from `langchain`, the import will keep on working, but will raise a deprecation warning. The warning will provide a replacement import statement.
```bash
python -c "from langchain.document_loaders.markdown import UnstructuredMarkdownLoader"
```
```text
LangChainDeprecationWarning: Importing UnstructuredMarkdownLoader from langchain.document_loaders is deprecated. Please replace deprecated imports:
>> from langchain.document_loaders import UnstructuredMarkdownLoader
with new imports of:
>> from langchain_community.document_loaders import UnstructuredMarkdownLoader
```
We will continue supporting the imports in `langchain` until release 0.4 as long as the relevant package where the code lives is installed. (e.g., as long as `langchain_community` is installed.)
However, we advise users not to rely on these imports and instead migrate to the new imports. To help with this process, we're releasing a migration script via the LangChain CLI (a hedged usage sketch follows this list). See further instructions in the migration guide.
2. Code that has better alternatives available and will eventually be removed, so there's only a single way to do things (e.g., the `predict_messages` method in chat models has been deprecated in favor of `invoke`).
Many of these were marked for removal in 0.2. We have bumped the removal to 0.3.
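A hedged sketch of running the CLI migration script mentioned above (the exact command and options may differ; defer to the migration guide):
```bash
# Hedged sketch: upgrade deprecated imports in-place with the LangChain CLI.
pip install -U langchain-cli
langchain-cli migrate ./path/to/your/project   # path is a placeholder
```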
## 0.1.0 (Jan 5, 2024)
### Deleted
#### Deleted
No deletions.
### Deprecated
#### Deprecated
Deprecated classes and methods will be removed in 0.2.0
| Deprecated | Alternative | Reason |
|---------------------------------|-----------------------------------|------------------------------------------------|
| ChatVectorDBChain | ConversationalRetrievalChain | More general to all retrievers |
| create_ernie_fn_chain | create_ernie_fn_runnable | Use LCEL under the hood |

File diff suppressed because it is too large.


@@ -1,9 +1,36 @@
# Setup
---
sidebar_position: 1
---
# Contribute Code
This guide walks through how to run the repository locally and check in your first code.
To contribute to this project, please follow the ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow.
Please do not try to push directly to this repo unless you are a maintainer.
Please follow the checked-in pull request template when opening pull requests. Note related issues and tag relevant
maintainers.
Pull requests cannot land without passing the formatting, linting, and testing checks first. See [Testing](#testing) and
[Formatting and Linting](#formatting-and-linting) for how to run these checks locally.
It's essential that we maintain great documentation and testing. If you:
- Fix a bug
- Add a relevant unit or integration test when possible. These live in `tests/unit_tests` and `tests/integration_tests`.
- Make an improvement
- Update any affected example notebooks and documentation. These live in `docs`.
- Update unit and integration tests when relevant.
- Add a feature
- Add a demo notebook in `docs/docs/`.
- Add unit and integration tests.
We are a small, progress-oriented team. If there's something you'd like to add or change, opening a pull request is the
best way to get our attention.
## 🚀 Quick Start
This quick start guide explains how to run the repository locally.
For a [development container](https://containers.dev/), see the [.devcontainer folder](https://github.com/langchain-ai/langchain/tree/master/.devcontainer).
## Dependency Management: Poetry and other env/dependency managers
### Dependency Management: Poetry and other env/dependency managers
This project utilizes [Poetry](https://python-poetry.org/) v1.7.1+ as a dependency manager.
@@ -14,7 +41,7 @@ Install Poetry: **[documentation on how to install it](https://python-poetry.org
❗Note: If you use `Conda` or `Pyenv` as your environment/package manager, after installing Poetry,
tell Poetry to use the virtualenv python environment (`poetry config virtualenvs.prefer-active-python true`)
## Different packages
### Different packages
This repository contains multiple packages:
- `langchain-core`: Base interfaces for key abstractions as well as logic for combining them in chains (LangChain Expression Language).
@@ -32,7 +59,7 @@ For this quickstart, start with langchain-community:
cd libs/community
```
## Local Development Dependencies
### Local Development Dependencies
Install langchain-community development requirements (for running langchain, running examples, linting, formatting, tests, and coverage):
@@ -52,9 +79,9 @@ If you are still seeing this bug on v1.6.1+, you may also try disabling "modern
(`poetry config installer.modern-installation false`) and re-installing requirements.
See [this `debugpy` issue](https://github.com/microsoft/debugpy/issues/1246) for more details.
## Testing
### Testing
**Note:** In `langchain`, `langchain-community`, and `langchain-experimental`, some test dependencies are optional. See the following section about optional dependencies.
_In `langchain`, `langchain-community`, and `langchain-experimental`, some test dependencies are optional; see section about optional dependencies_.
Unit tests cover modular logic that does not require calls to outside APIs.
If you add new logic, please add a unit test.
@@ -91,11 +118,11 @@ poetry install --with test
make test
```
## Formatting and Linting
### Formatting and Linting
Run these locally before submitting a PR; the CI system will also check them.
### Code Formatting
#### Code Formatting
Formatting for this project is done via [ruff](https://docs.astral.sh/ruff/rules/).
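The exact commands fall outside the hunks shown here; as a hedged sketch, formatting and linting are typically run through the per-package Makefiles:
```bash
# Hedged sketch: run formatting and linting from the package directory you changed.
cd libs/community   # or libs/langchain, libs/core, ...
make format         # auto-format with ruff
make lint           # run lint checks
```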
@@ -147,7 +174,7 @@ This can be very helpful when you've made changes to only certain parts of the p
We recognize linting can be annoying - if you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
### Spellcheck
#### Spellcheck
Spellchecking for this project is done via [codespell](https://github.com/codespell-project/codespell).
Note that `codespell` finds common typos, so it can produce false positives (flagging correctly spelled but rarely used words) and false negatives (missing genuinely misspelled words).
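As a hedged sketch, the spellcheck is typically invoked through the Makefile targets (target names assumed from the repository's conventions):
```bash
# Hedged sketch: run codespell via the Makefile from the package directory.
make spell_check   # report suspected typos
make spell_fix     # apply codespell's suggested fixes
```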
@@ -179,7 +206,9 @@ ignore-words-list = 'momento,collison,ned,foor,reworkd,parth,whats,aapply,mysogy
`langchain-core` and partner packages **do not use** optional dependencies in this way.
You'll notice that `pyproject.toml` and `poetry.lock` are **not** touched when you add optional dependencies below.
You only need to add a new dependency if a **unit test** relies on the package.
If your package is only required for **integration tests**, then you can skip these
steps and leave all pyproject.toml and poetry.lock files alone.
If you're adding a new dependency to Langchain, assume that it will be an optional dependency, and
that most users won't have it installed.
@@ -187,12 +216,20 @@ that most users won't have it installed.
Users who do not have the dependency installed should be able to **import** your code without
any side effects (no warnings, no errors, no exceptions).
To introduce the dependency to a library, please do the following:
To introduce the dependency to the pyproject.toml file correctly, please do the following:
1. Open extended_testing_deps.txt and add the dependency
2. Add a unit test that at the very least attempts to import the new code. Ideally, the unit
1. Add the dependency to the main group as an optional dependency
```bash
poetry add --optional [package_name]
```
2. Open pyproject.toml and add the dependency to the `extended_testing` extra
3. Relock the poetry file to update the extra.
```bash
poetry lock --no-update
```
4. Add a unit test that at the very least attempts to import the new code. Ideally, the unit
test makes use of lightweight fixtures to test the logic of the code.
3. Please use the `@pytest.mark.requires(package_name)` decorator for any unit tests that require the dependency.
5. Please use the `@pytest.mark.requires(package_name)` decorator for any tests that require the dependency.
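A hedged sketch of such a guarded unit test (the package and module names below are hypothetical placeholders):
```python
# Hedged sketch: a unit test gated on an optional dependency.
# "somepackage" is a hypothetical placeholder for the optional dependency you added.
import pytest


@pytest.mark.requires("somepackage")
def test_import_somepackage_integration() -> None:
    """At the very least, check that the optional integration imports cleanly."""
    import somepackage  # noqa: F401  # hypothetical optional dependency

    assert somepackage is not None
```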
## Adding a Jupyter Notebook


@@ -1,35 +0,0 @@
# General guidelines
Here are some things to keep in mind for all types of contributions:
- Follow the ["fork and pull request"](https://docs.github.com/en/get-started/exploring-projects-on-github/contributing-to-a-project) workflow.
- Fill out the checked-in pull request template when opening pull requests. Note related issues and tag relevant maintainers.
- Ensure your PR passes formatting, linting, and testing checks before requesting a review.
- If you would like comments or feedback on your current progress, please open an issue or discussion and tag a maintainer.
- See the sections on [Testing](/docs/contributing/code/setup#testing) and [Formatting and Linting](/docs/contributing/code/setup#formatting-and-linting) for how to run these checks locally.
- Backwards compatibility is key. Your changes must not be breaking, except in case of critical bug and security fixes.
- Look for duplicate PRs or issues that have already been opened before opening a new one.
- Keep scope as isolated as possible. As a general rule, your changes should not affect more than one package at a time.
## Bugfixes
We encourage and appreciate bugfixes. We ask that you:
- Explain the bug in enough detail for maintainers to be able to reproduce it.
- If an accompanying issue exists, link to it. Prefix with `Fixes` so that the issue will close automatically when the PR is merged.
- Avoid breaking changes if possible.
- Include unit tests that fail without the bugfix.
If you come across a bug and don't know how to fix it, we ask that you open an issue for it describing in detail the environment in which you encountered the bug.
## New features
We aim to keep the bar high for new features. We generally don't accept new core abstractions, changes to infra, changes to dependencies,
or new agents/chains from outside contributors without an existing GitHub discussion or issue that demonstrates an acute need for them.
- New features must come with docs, unit tests, and (if appropriate) integration tests.
- New integrations must come with docs, unit tests, and (if appropriate) integration tests.
- See [this page](/docs/contributing/integrations) for more details on contributing new integrations.
- New functionality should not inherit from or use deprecated methods or classes.
- We will reject features that are likely to lead to security vulnerabilities or reports.
- Do not add any hard dependencies. Integrations may add optional dependencies.


@@ -1,6 +0,0 @@
# Contribute Code
If you would like to add a new feature or update an existing one, please read the resources below before getting started:
- [General guidelines](/docs/contributing/code/guidelines/)
- [Setup](/docs/contributing/code/setup/)


@@ -0,0 +1,2 @@
label: 'Documentation'
position: 3


@@ -1,7 +0,0 @@
# Contribute Documentation
Documentation is a vital part of LangChain. We welcome both new documentation for new features and
community improvements to our current documentation. Please read the resources below before getting started:
- [Documentation style guide](/docs/contributing/documentation/style_guide/)
- [Setup](/docs/contributing/documentation/setup/)


@@ -1,8 +1,10 @@
---
sidebar_class_name: "hidden"
sidebar_label: "Style guide"
---
# Documentation Style Guide
# LangChain Documentation Style Guide
## Introduction
As LangChain continues to grow, the surface area of documentation required to cover it continues to grow too.
This page provides guidelines for anyone writing documentation for LangChain, as well as some of our philosophies around
@@ -10,139 +12,116 @@ organization and structure.
## Philosophy
LangChain's documentation follows the [Diataxis framework](https://diataxis.fr).
Under this framework, all documentation falls under one of four categories: [Tutorials](/docs/contributing/documentation/style_guide/#tutorials),
[How-to guides](/docs/contributing/documentation/style_guide/#how-to-guides),
[References](/docs/contributing/documentation/style_guide/#references), and [Explanations](/docs/contributing/documentation/style_guide/#conceptual-guide).
LangChain's documentation aspires to follow the [Diataxis framework](https://diataxis.fr).
Under this framework, all documentation falls under one of four categories:
### Tutorials
Tutorials are lessons that take the reader through a practical activity. Their purpose is to help the user
gain understanding of concepts and how they interact by showing one way to achieve some goal in a hands-on way. They should **avoid** giving
multiple permutations of ways to achieve that goal in-depth. Instead, it should guide a new user through a recommended path to accomplishing the tutorial's goal. While the end result of a tutorial does not necessarily need to
be completely production-ready, it should be useful and practically satisfy the goal that you clearly stated in the tutorial's introduction. Information on how to address additional scenarios
belongs in how-to guides.
To quote the Diataxis website:
> A tutorial serves the user's *acquisition* of skills and knowledge - their study. Its purpose is not to help the user get something done, but to help them learn.
In LangChain, these are often higher level guides that show off end-to-end use cases.
Some examples include:
- [Build a Simple LLM Application with LCEL](/docs/tutorials/llm_chain/)
- [Build a Retrieval Augmented Generation (RAG) App](/docs/tutorials/rag/)
A good structural rule of thumb is to follow the structure of this [example from Numpy](https://numpy.org/numpy-tutorials/content/tutorial-svd.html).
Here are some high-level tips on writing a good tutorial:
- Focus on guiding the user to get something done, but keep in mind the end-goal is more to impart principles than to create a perfect production system.
- Be specific, not abstract, and follow one path.
- No need to go deeply into alternative approaches, but it's okay to reference them, ideally with a link to an appropriate how-to guide.
- Get "a point on the board" as soon as possible - something the user can run that outputs something.
- You can iterate and expand afterwards.
- Try to frequently checkpoint at given steps where the user can run code and see progress.
- Focus on results, not technical explanation.
- Crosslink heavily to appropriate conceptual/reference pages.
- The first time you mention a LangChain concept, use its full name (e.g. "LangChain Expression Language (LCEL)"), and link to its conceptual/other documentation page.
- It's also helpful to add a prerequisite callout that links to any pages with necessary background information.
- End with a recap/next steps section summarizing what the tutorial covered and future reading, such as related how-to guides.
### How-to guides
A how-to guide, as the name implies, demonstrates how to do something discrete and specific.
It should assume that the user is already familiar with underlying concepts, and is trying to solve an immediate problem, but
should still give some background or list the scenarios where the information contained within can be relevant.
They can and should discuss alternatives if one approach may be better than another in certain cases.
To quote the Diataxis website:
> A how-to guide serves the work of the already-competent user, whom you can assume to know what they want to do, and to be able to follow your instructions correctly.
Some examples include:
- [How to: return structured data from a model](/docs/how_to/structured_output/)
- [How to: write a custom chat model](/docs/how_to/custom_chat_model/)
Here are some high-level tips on writing a good how-to guide:
- Clearly explain what you are guiding the user through at the start.
- Assume higher intent than a tutorial and show what the user needs to do to get that task done.
- Assume familiarity of concepts, but explain why suggested actions are helpful.
- Crosslink heavily to conceptual/reference pages.
- Discuss alternatives and responses to real-world tradeoffs that may arise when solving a problem.
- Use lots of example code.
- Prefer full code blocks that the reader can copy and run.
- End with a recap/next steps section summarizing what the how-to guide covered and future reading, such as other related how-to guides.
### Conceptual guide
LangChain's conceptual guide falls under the **Explanation** quadrant of Diataxis. They should cover LangChain terms and concepts
in a more abstract way than how-to guides or tutorials, and should be geared towards curious users interested in
gaining a deeper understanding of the framework. Try to avoid excessively large code examples - the goal here is to
impart perspective to the user rather than to finish a practical project. These guides should cover **why** things work the way they do.
This guide on documentation style is meant to fall under this category.
To quote the Diataxis website:
> The perspective of explanation is higher and wider than that of the other types. It does not take the user's eye-level view, as in a how-to guide, or a close-up view of the machinery, like reference material. Its scope in each case is a topic - “an area of knowledge”, that somehow has to be bounded in a reasonable, meaningful way.
Some examples include:
- [Retrieval conceptual docs](/docs/concepts/#retrieval)
- [Chat model conceptual docs](/docs/concepts/#chat-models)
Here are some high-level tips on writing a good conceptual guide:
- Explain design decisions. Why does concept X exist and why was it designed this way?
- Use analogies and reference other concepts and alternatives
- Avoid blending in too much reference content
- You can and should reference content covered in other guides, but make sure to link to them
### References
References contain detailed, low-level information that describes exactly what functionality exists and how to use it.
In LangChain, this is mainly our API reference pages, which are populated from docstrings within code.
References pages are generally not read end-to-end, but are consulted as necessary when a user needs to know
how to use something specific.
To quote the Diataxis website:
> The only purpose of a reference guide is to describe, as succinctly as possible, and in an orderly way. Whereas the content of tutorials and how-to guides are led by needs of the user, reference material is led by the product it describes.
Many of the reference pages in LangChain are automatically generated from code,
but here are some high-level tips on writing a good docstring:
- Be concise
- Discuss special cases and deviations from a user's expectations
- Go into detail on required inputs and outputs
- Light details on when one might use the feature are fine, but in-depth details belong in other sections.
- **Tutorials**: Lessons that take the reader by the hand through a series of conceptual steps to complete a project.
- An example of this is our [LCEL streaming guide](/docs/expression_language/streaming).
- Our guide on [custom components](/docs/modules/model_io/chat/custom_chat_model) is another one.
- **How-to guides**: Guides that take the reader through the steps required to solve a real-world problem.
- The clearest examples of this are our [Use case](/docs/use_cases/) quickstart pages.
- **Reference**: Technical descriptions of the machinery and how to operate it.
- Our [Runnable interface](/docs/expression_language/interface) page is an example of this.
- The [API reference pages](https://api.python.langchain.com/) are another.
- **Explanation**: Explanations that clarify and illuminate a particular topic.
- The [LCEL primitives pages](/docs/expression_language/primitives/sequence) are an example of this.
Each category serves a distinct purpose and requires a specific approach to writing and structuring the content.
## General guidelines
## Taxonomy
Keeping the above in mind, we have sorted LangChain's docs into categories. It is helpful to think in these terms
when contributing new documentation:
### Getting started
The [getting started section](/docs/get_started/introduction) includes a high-level introduction to LangChain, a quickstart that
tours LangChain's various features, and logistical instructions around installation and project setup.
It contains elements of **How-to guides** and **Explanations**.
### Use cases
[Use cases](/docs/use_cases/) are guides that are meant to show how to use LangChain to accomplish a specific task (RAG, information extraction, etc.).
The quickstarts should be good entrypoints for first-time LangChain developers who prefer to learn by getting something practical prototyped,
then taking the pieces apart retrospectively. These should mirror what LangChain is good at.
The quickstart pages here should fit the **How-to guide** category, with the other pages intended to be **Explanations** of more
in-depth concepts and strategies that accompany the main happy paths.
:::note
The below sections are listed roughly in order of increasing level of abstraction.
:::
### Expression Language
[LangChain Expression Language (LCEL)](/docs/expression_language/) is the fundamental way that most LangChain components fit together, and this section is designed to teach
developers how to use it to build with LangChain's primitives effectively.
This section should contain **Tutorials** that teach how to stream and use LCEL primitives for more abstract tasks, **Explanations** of specific behaviors,
and some **References** for how to use different methods in the Runnable interface.
### Components
The [components section](/docs/modules) covers concepts one level of abstraction higher than LCEL.
Abstract base classes like `BaseChatModel` and `BaseRetriever` should be covered here, as well as core implementations of these base classes,
such as `ChatPromptTemplate` and `RecursiveCharacterTextSplitter`. Customization guides belong here too.
This section should contain mostly conceptual **Tutorials**, **References**, and **Explanations** of the components they cover.
:::note
As a general rule of thumb, everything covered in the `Expression Language` and `Components` sections (with the exception of the `Composition` section of components) should
cover only components that exist in `langchain_core`.
:::
### Integrations
The [integrations](/docs/integrations/platforms/) are specific implementations of components. These often involve third-party APIs and services.
If this is the case, as a general rule, these are maintained by the third-party partner.
This section should contain mostly **Explanations** and **References**, though the actual content here is more flexible than other sections and more at the
discretion of the third-party provider.
:::note
Concepts covered in `Integrations` should generally exist in `langchain_community` or specific partner packages.
:::
### Guides and Ecosystem
The [Guides](/docs/guides) and [Ecosystem](/docs/langsmith/) sections should contain guides that address higher-level problems than the sections above.
This includes, but is not limited to, considerations around productionization and development workflows.
These should contain mostly **How-to guides**, **Explanations**, and **Tutorials**.
### API references
LangChain's API references should act as **References** (as the name implies), with some **Explanation**-focused content as well.
## Sample developer journey
We have set up our docs to assist a developer who is new to LangChain. Let's walk through the intended path:
- The developer lands on https://python.langchain.com, and reads through the introduction and the diagram.
- If they are just curious, they may be drawn to the [Quickstart](/docs/get_started/quickstart) to get a high-level tour of what LangChain contains.
- If they have a specific task in mind that they want to accomplish, they will be drawn to the Use-Case section. The use-case should provide a good, concrete hook that shows the value LangChain can provide them and be a good entrypoint to the framework.
- They can then move to learn more about the fundamentals of LangChain through the Expression Language sections.
- Next, they can learn about LangChain's various components and integrations.
- Finally, they can get additional knowledge through the Guides.
This is only an ideal path, of course; sections will inevitably reference lower- or higher-level concepts that are documented in other sections.
## Guidelines
Here are some other guidelines you should think about when writing and organizing documentation.
We generally do not merge new tutorials from outside contributors without an acute need.
We welcome updates as well as new integration docs, how-tos, and references.
### Avoid duplication
Multiple pages that cover the same material in depth are difficult to maintain and cause confusion. There should
be only one (very rarely two) canonical page for a given concept or feature. Instead, you should link to other guides.
### Link to other sections
Because sections of the docs do not exist in a vacuum, it is important to link to other sections as often as possible
to allow a developer to learn more about an unfamiliar topic inline.
This includes linking to the API references as well as conceptual sections!
### Be concise
In general, take a less-is-more approach. If a section with a good explanation of a concept already exists, you should link to it rather than
re-explain it, unless the concept you are documenting presents some new wrinkle.
@@ -151,10 +130,9 @@ Be concise, including in code samples.
### General style
- Use active voice and present tense whenever possible.
- Use examples and code snippets to illustrate concepts and usage.
- Use appropriate header levels (`#`, `##`, `###`, etc.) to organize the content hierarchically.
- Use fewer cells with more code to make copy/paste easier.
- Use bullet points and numbered lists to break down information into easily digestible chunks.
- Use tables (especially for **Reference** sections) and diagrams often to present information visually.
- Include the table of contents for longer documentation pages to help readers navigate the content, but hide it for shorter pages.

View File

@@ -1,8 +1,4 @@
---
sidebar_class_name: "hidden"
---
# Setup
# Technical logistics
LangChain documentation consists of two components:
@@ -16,6 +12,8 @@ used to generate the externally facing [API Reference](https://api.python.langch
The content for the API reference is autogenerated by scanning the docstrings in the codebase. For this reason we ask that
developers document their code well.
The main documentation is built using [Quarto](https://quarto.org) and [Docusaurus 2](https://docusaurus.io/).
The `API Reference` is largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/)
from the code and is hosted by [Read the Docs](https://readthedocs.org/).
@@ -31,7 +29,7 @@ The content for the main documentation is located in the `/docs` directory of th
The documentation is written using a combination of ipython notebooks (`.ipynb` files)
and markdown (`.mdx` files). The notebooks are converted to markdown
and then built using [Docusaurus 2](https://docusaurus.io/).
using [Quarto](https://quarto.org) and then built using [Docusaurus 2](https://docusaurus.io/).
Feel free to make contributions to the main documentation! 🥰
@@ -50,6 +48,10 @@ locally to ensure that it looks good and is free of errors.
If you're unable to build it locally that's okay as well, as you will be able to
see a preview of the documentation on the pull request page.
### Install dependencies
- [Quarto](https://quarto.org) - package that converts Jupyter notebooks (`.ipynb` files) into mdx files for serving in Docusaurus. [Download link](https://quarto.org/docs/download/).
From the **monorepo root**, run the following command to install the dependencies:
```bash
@@ -76,18 +78,6 @@ make docs_build
make api_docs_build
```
:::tip
The `make api_docs_build` command takes a long time. If you're making cosmetic changes to the API docs and want to see how they look, use:
```bash
make api_docs_quick_preview
```
which will just build a small subset of the API reference.
:::
Finally, run the link checker to ensure all links are valid:
```bash

View File

@@ -12,8 +12,8 @@ As an open-source project in a rapidly developing field, we are extremely open t
There are many ways to contribute to LangChain. Here are some common ways people contribute:
- [**Documentation**](/docs/contributing/documentation/): Help improve our docs, including this one!
- [**Code**](/docs/contributing/code/): Help us write code, fix bugs, or improve our infrastructure.
- [**Documentation**](/docs/contributing/documentation/style_guide): Help improve our docs, including this one!
- [**Code**](./code.mdx): Help us write code, fix bugs, or improve our infrastructure.
- [**Integrations**](integrations.mdx): Help us integrate with your favorite vendors and tools.
- [**Discussions**](https://github.com/langchain-ai/langchain/discussions): Help answer usage questions and discuss issues with users.
@@ -48,7 +48,7 @@ In a similar vein, we do enforce certain linting, formatting, and documentation
If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help -
we do not want these to get in the way of getting good code into the codebase.
### 🌟 Recognition
# 🌟 Recognition
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
If you have a Twitter account you would like us to mention, please let us know in the PR or through another means.

View File

@@ -1,7 +1,6 @@
---
sidebar_position: 5
---
# Contribute Integrations
To begin, make sure you have all the dependencies outlined in guide on [Contributing Code](/docs/contributing/code/).
@@ -11,7 +10,7 @@ There are a few different places you can contribute integrations for LangChain:
- **Community**: For lighter-weight integrations that are primarily maintained by LangChain and the Open Source Community.
- **Partner Packages**: For independent packages that are co-maintained by LangChain and a partner.
For the most part, **new integrations should be added to the Community package**. Partner packages require more maintenance as separate packages, so please confirm with the LangChain team before creating a new partner package.
For the most part, new integrations should be added to the Community package. Partner packages require more maintenance as separate packages, so please confirm with the LangChain team before creating a new partner package.
In the following sections, we'll walk through how to contribute to each of these packages from a fake company, `Parrot Link AI`.
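As a rough, hypothetical sketch (the class name, module, and `parrot_buffer_length` field below are placeholders, not an existing integration), a minimal community chat model for such a provider would subclass `BaseChatModel` from `langchain_core` and implement `_generate` and `_llm_type`:

```python
from typing import Any, List, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.messages import AIMessage, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatResult


class ChatParrotLink(BaseChatModel):
    """Toy chat model that echoes part of the last message (illustration only)."""

    parrot_buffer_length: int = 20  # hypothetical provider-specific setting

    @property
    def _llm_type(self) -> str:
        # Identifier used for logging and tracing.
        return "chat-parrot-link"

    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        # A real integration would call the provider's API here.
        text = str(messages[-1].content)[: self.parrot_buffer_length]
        return ChatResult(generations=[ChatGeneration(message=AIMessage(content=text))])
```

Such a class can then be used anywhere a chat model is expected, e.g. `ChatParrotLink().invoke("hello")`.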
@@ -60,10 +59,6 @@ And add documentation to:
## Partner package in LangChain repo
:::caution
Before starting a **partner** package, please confirm your intent with the LangChain team. Partner packages require more maintenance as separate packages, so we will close PRs that add new partner packages without prior discussion. See the above section for how to add a community integration.
:::
Partner packages can be hosted in the `LangChain` monorepo or in an external repo.
Partner package in the `LangChain` repo is placed in `libs/partners/{partner}`

View File

@@ -6,8 +6,7 @@ sidebar_position: 0.5
If you plan on contributing to LangChain code or documentation, it can be useful
to understand the high level structure of the repository.
LangChain is organized as a [monorepo](https://en.wikipedia.org/wiki/Monorepo) that contains multiple packages.
You can check out our [installation guide](/docs/how_to/installation/) for more on how they fit together.
LangChain is organized as a [monorepo](https://en.wikipedia.org/wiki/Monorepo) that contains multiple packages.
Here's the structure visualized as a tree:
@@ -16,22 +15,12 @@ Here's the structure visualized as a tree:
├── cookbook # Tutorials and examples
├── docs # Contains content for the documentation here: https://python.langchain.com/
├── libs
│ ├── langchain
│ │ ├── langchain
│ ├── langchain # Main package
│ │ ├── tests/unit_tests # Unit tests (present in each package not shown for brevity)
│ │ ├── tests/integration_tests # Integration tests (present in each package not shown for brevity)
│ ├── community # Third-party integrations
│ ├── langchain-community
│ ├── core # Base interfaces for key abstractions
│ │ ├── langchain-core
│ ├── experimental # Experimental components and chains
│ │ ├── langchain-experimental
| ├── cli # Command line interface
│ │ ├── langchain-cli
│ ├── text-splitters
│ │ ├── langchain-text-splitters
│ ├── standard-tests
│ │ ├── langchain-standard-tests
│ ├── langchain-community # Third-party integrations
│ ├── langchain-core # Base interfaces for key abstractions
│ ├── langchain-experimental # Experimental components and chains
│ ├── partners
│ ├── langchain-partner-1
│ ├── langchain-partner-2
@@ -52,7 +41,7 @@ There are other files in the root directory level, but their presence should be
The `/docs` directory contains the content for the documentation that is shown
at https://python.langchain.com/ and the associated API Reference https://api.python.langchain.com/en/latest/langchain_api_reference.html.
See the [documentation](/docs/contributing/documentation/) guidelines to learn how to contribute to the documentation.
See the [documentation](/docs/contributing/documentation/style_guide) guidelines to learn how to contribute to the documentation.
## Code
@@ -60,6 +49,6 @@ The `/libs` directory contains the code for the LangChain packages.
To learn more about how to contribute code see the following guidelines:
- [Code](/docs/contributing/code/): Learn how to develop in the LangChain codebase.
- [Integrations](./integrations.mdx): Learn how to contribute to third-party integrations to `langchain-community` or to start a new partner package.
- [Testing](./testing.mdx): Guidelines to learn how to write tests for the packages.
- [Code](./code.mdx) Learn how to develop in the LangChain codebase.
- [Integrations](./integrations.mdx) to learn how to contribute to third-party integrations to langchain-community or to start a new partner package.
- [Testing](./testing.mdx) guidelines to learn how to write tests for the packages.

View File

@@ -1,5 +1,5 @@
---
sidebar_position: 6
sidebar_position: 2
---
# Testing

View File

@@ -0,0 +1,139 @@
{
"cells": [
{
"cell_type": "raw",
"id": "1e997ab7",
"metadata": {},
"source": [
"---\n",
"sidebar_class_name: hidden\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "f09fd305",
"metadata": {},
"source": [
"# Code writing\n",
"\n",
"Example of how to use LCEL to write Python code."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0653c7c7",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-core langchain-experimental langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "bd7c259a",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import (\n",
" ChatPromptTemplate,\n",
")\n",
"from langchain_experimental.utilities import PythonREPL\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "73795d2d",
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"Write some python code to solve the user's problem. \n",
"\n",
"Return only python code in Markdown format, e.g.:\n",
"\n",
"```python\n",
"....\n",
"```\"\"\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", template), (\"human\", \"{input}\")])\n",
"\n",
"model = ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "42859e8a",
"metadata": {},
"outputs": [],
"source": [
"def _sanitize_output(text: str):\n",
" _, after = text.split(\"```python\")\n",
" return after.split(\"```\")[0]"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "5ded1a86",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model | StrOutputParser() | _sanitize_output | PythonREPL().run"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "208c2b75",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Python REPL can execute arbitrary code. Use with caution.\n"
]
},
{
"data": {
"text/plain": [
"'4\\n'"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"input\": \"whats 2 plus 2\"})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
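One detail worth noting from the chain above: plain Python callables such as `_sanitize_output` and `PythonREPL().run` are implicitly wrapped in a `RunnableLambda` when they are piped into a chain. A minimal, self-contained sketch of that coercion (no API key needed; the `shout` helper is just an illustrative stand-in):

```python
from langchain_core.runnables import RunnableLambda, RunnablePassthrough


def shout(text: str) -> str:
    # Stand-in for helpers like _sanitize_output above.
    return text.upper() + "!"


# Piping a bare function is shorthand for wrapping it in RunnableLambda.
implicit = RunnablePassthrough() | shout
explicit = RunnablePassthrough() | RunnableLambda(shout)

assert implicit.invoke("two plus two") == explicit.invoke("two plus two") == "TWO PLUS TWO!"
```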

View File

@@ -0,0 +1,267 @@
{
"cells": [
{
"cell_type": "raw",
"id": "877102d1-02ea-4fa3-8ec7-a08e242b95b3",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 2\n",
"title: Multiple chains\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "0f2bf8d3",
"metadata": {},
"source": [
"Runnables can easily be used to string together multiple Chains"
]
},
{
"cell_type": "code",
"id": "0f316b5c",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d65d4e9e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'El país donde se encuentra la ciudad de Honolulu, donde nació Barack Obama, el 44º Presidente de los Estados Unidos, es Estados Unidos. Honolulu se encuentra en la isla de Oahu, en el estado de Hawái.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt1 = ChatPromptTemplate.from_template(\"what is the city {person} is from?\")\n",
"prompt2 = ChatPromptTemplate.from_template(\n",
" \"what country is the city {city} in? respond in {language}\"\n",
")\n",
"\n",
"model = ChatOpenAI()\n",
"\n",
"chain1 = prompt1 | model | StrOutputParser()\n",
"\n",
"chain2 = (\n",
" {\"city\": chain1, \"language\": itemgetter(\"language\")}\n",
" | prompt2\n",
" | model\n",
" | StrOutputParser()\n",
")\n",
"\n",
"chain2.invoke({\"person\": \"obama\", \"language\": \"spanish\"})"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "878f8176",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"prompt1 = ChatPromptTemplate.from_template(\n",
" \"generate a {attribute} color. Return the name of the color and nothing else:\"\n",
")\n",
"prompt2 = ChatPromptTemplate.from_template(\n",
" \"what is a fruit of color: {color}. Return the name of the fruit and nothing else:\"\n",
")\n",
"prompt3 = ChatPromptTemplate.from_template(\n",
" \"what is a country with a flag that has the color: {color}. Return the name of the country and nothing else:\"\n",
")\n",
"prompt4 = ChatPromptTemplate.from_template(\n",
" \"What is the color of {fruit} and the flag of {country}?\"\n",
")\n",
"\n",
"model_parser = model | StrOutputParser()\n",
"\n",
"color_generator = (\n",
" {\"attribute\": RunnablePassthrough()} | prompt1 | {\"color\": model_parser}\n",
")\n",
"color_to_fruit = prompt2 | model_parser\n",
"color_to_country = prompt3 | model_parser\n",
"question_generator = (\n",
" color_generator | {\"fruit\": color_to_fruit, \"country\": color_to_country} | prompt4\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "d621a870",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptValue(messages=[HumanMessage(content='What is the color of strawberry and the flag of China?', additional_kwargs={}, example=False)])"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"question_generator.invoke(\"warm\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "b4a9812b-bead-4fd9-ae27-0b8be57e5dc1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The color of an apple is typically red or green. The flag of China is predominantly red with a large yellow star in the upper left corner and four smaller yellow stars surrounding it.', additional_kwargs={}, example=False)"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt = question_generator.invoke(\"warm\")\n",
"model.invoke(prompt)"
]
},
{
"cell_type": "markdown",
"id": "6d75a313-f1c8-4e94-9a17-24e0bf4a2bdc",
"metadata": {},
"source": [
"### Branching and Merging\n",
"\n",
"You may want the output of one component to be processed by 2 or more other components. [RunnableParallels](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableParallel.html#langchain_core.runnables.base.RunnableParallel) let you split or fork the chain so multiple components can process the input in parallel. Later, other components can join or merge the results to synthesize a final response. This type of chain creates a computation graph that looks like the following:\n",
"\n",
"```text\n",
" Input\n",
" / \\\n",
" / \\\n",
" Branch1 Branch2\n",
" \\ /\n",
" \\ /\n",
" Combine\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "247fa0bd-4596-4063-8cb3-1d7fc119d982",
"metadata": {},
"outputs": [],
"source": [
"planner = (\n",
" ChatPromptTemplate.from_template(\"Generate an argument about: {input}\")\n",
" | ChatOpenAI()\n",
" | StrOutputParser()\n",
" | {\"base_response\": RunnablePassthrough()}\n",
")\n",
"\n",
"arguments_for = (\n",
" ChatPromptTemplate.from_template(\n",
" \"List the pros or positive aspects of {base_response}\"\n",
" )\n",
" | ChatOpenAI()\n",
" | StrOutputParser()\n",
")\n",
"arguments_against = (\n",
" ChatPromptTemplate.from_template(\n",
" \"List the cons or negative aspects of {base_response}\"\n",
" )\n",
" | ChatOpenAI()\n",
" | StrOutputParser()\n",
")\n",
"\n",
"final_responder = (\n",
" ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"ai\", \"{original_response}\"),\n",
" (\"human\", \"Pros:\\n{results_1}\\n\\nCons:\\n{results_2}\"),\n",
" (\"system\", \"Generate a final response given the critique\"),\n",
" ]\n",
" )\n",
" | ChatOpenAI()\n",
" | StrOutputParser()\n",
")\n",
"\n",
"chain = (\n",
" planner\n",
" | {\n",
" \"results_1\": arguments_for,\n",
" \"results_2\": arguments_against,\n",
" \"original_response\": itemgetter(\"base_response\"),\n",
" }\n",
" | final_responder\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "2564f310-0674-4bb1-9c4e-d7848ca73511",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'While Scrum has its potential cons and challenges, many organizations have successfully embraced and implemented this project management framework to great effect. The cons mentioned above can be mitigated or overcome with proper training, support, and a commitment to continuous improvement. It is also important to note that not all cons may be applicable to every organization or project.\\n\\nFor example, while Scrum may be complex initially, with proper training and guidance, teams can quickly grasp the concepts and practices. The lack of predictability can be mitigated by implementing techniques such as velocity tracking and release planning. The limited documentation can be addressed by maintaining a balance between lightweight documentation and clear communication among team members. The dependency on team collaboration can be improved through effective communication channels and regular team-building activities.\\n\\nScrum can be scaled and adapted to larger projects by using frameworks like Scrum of Scrums or LeSS (Large Scale Scrum). Concerns about speed versus quality can be addressed by incorporating quality assurance practices, such as continuous integration and automated testing, into the Scrum process. Scope creep can be managed by having a well-defined and prioritized product backlog, and a strong product owner can be developed through training and mentorship.\\n\\nResistance to change can be overcome by providing proper education and communication to stakeholders and involving them in the decision-making process. Ultimately, the cons of Scrum can be seen as opportunities for growth and improvement, and with the right mindset and support, they can be effectively managed.\\n\\nIn conclusion, while Scrum may have its challenges and potential cons, the benefits and advantages it offers in terms of collaboration, flexibility, adaptability, transparency, and customer satisfaction make it a widely adopted and successful project management framework. With proper implementation and continuous improvement, organizations can leverage Scrum to drive innovation, efficiency, and project success.'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"input\": \"scrum\"})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
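The branching pattern above relies on the fact that a dict appearing in a chain (like `{"fruit": color_to_fruit, "country": color_to_country}`) is coerced into a `RunnableParallel`, which runs each branch and merges the results into a single dict. A minimal sketch with no model calls (the lambdas are illustrative stand-ins):

```python
from langchain_core.runnables import RunnableLambda, RunnableParallel

add_one = RunnableLambda(lambda x: x + 1)
double = RunnableLambda(lambda x: x * 2)
square = RunnableLambda(lambda x: x * x)

# A dict in a chain and an explicit RunnableParallel are interchangeable.
branched = add_one | {"double": double, "square": square}
explicit = add_one | RunnableParallel(double=double, square=square)

print(branched.invoke(3))  # {'double': 8, 'square': 16}
print(explicit.invoke(3))  # {'double': 8, 'square': 16}
```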

View File

@@ -0,0 +1,436 @@
{
"cells": [
{
"cell_type": "raw",
"id": "abf7263d-3a62-4016-b5d5-b157f92f2070",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0\n",
"title: Prompt + LLM\n",
"---\n"
]
},
{
"cell_type": "markdown",
"id": "9a434f2b-9405-468c-9dfd-254d456b57a6",
"metadata": {},
"source": [
"The most common and valuable composition is taking:\n",
"\n",
"``PromptTemplate`` / ``ChatPromptTemplate`` -> ``LLM`` / ``ChatModel`` -> ``OutputParser``\n",
"\n",
"Almost any other chains you build will use this building block."
]
},
{
"cell_type": "markdown",
"id": "93aa2c87",
"metadata": {},
"source": [
"## PromptTemplate + LLM\n",
"\n",
"The simplest composition is just combining a prompt and model to create a chain that takes user input, adds it to a prompt, passes it to a model, and returns the raw model output.\n",
"\n",
"Note, you can mix and match PromptTemplate/ChatPromptTemplates and LLMs/ChatModels as you like here."
]
},
{
"cell_type": "raw",
"id": "ef79a54b",
"metadata": {},
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "466b65b3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"tell me a joke about {foo}\")\n",
"model = ChatOpenAI()\n",
"chain = prompt | model"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "e3d0a6cd",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False)"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"foo\": \"bears\"})"
]
},
{
"cell_type": "markdown",
"id": "7eb9ef50",
"metadata": {},
"source": [
"Often times we want to attach kwargs that'll be passed to each model call. Here are a few examples of that:"
]
},
{
"cell_type": "markdown",
"id": "0b1d8f88",
"metadata": {},
"source": [
"### Attaching Stop Sequences"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "562a06bf",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model.bind(stop=[\"\\n\"])"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "43f5d04c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Why did the bear never wear shoes?', additional_kwargs={}, example=False)"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"foo\": \"bears\"})"
]
},
{
"cell_type": "markdown",
"id": "f3eaf88a",
"metadata": {},
"source": [
"### Attaching Function Call information"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "f94b71b2",
"metadata": {},
"outputs": [],
"source": [
"functions = [\n",
" {\n",
" \"name\": \"joke\",\n",
" \"description\": \"A joke\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"setup\": {\"type\": \"string\", \"description\": \"The setup for the joke\"},\n",
" \"punchline\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The punchline for the joke\",\n",
" },\n",
" },\n",
" \"required\": [\"setup\", \"punchline\"],\n",
" },\n",
" }\n",
"]\n",
"chain = prompt | model.bind(function_call={\"name\": \"joke\"}, functions=functions)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "decf7710",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'function_call': {'name': 'joke', 'arguments': '{\\n \"setup\": \"Why don\\'t bears wear shoes?\",\\n \"punchline\": \"Because they have bear feet!\"\\n}'}}, example=False)"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"foo\": \"bears\"}, config={})"
]
},
{
"cell_type": "markdown",
"id": "9098c5ed",
"metadata": {},
"source": [
"## PromptTemplate + LLM + OutputParser\n",
"\n",
"We can also add in an output parser to easily transform the raw LLM/ChatModel output into a more workable format"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "cc194c78",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"\n",
"chain = prompt | model | StrOutputParser()"
]
},
{
"cell_type": "markdown",
"id": "77acf448",
"metadata": {},
"source": [
"Notice that this now returns a string - a much more workable format for downstream tasks"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "e3d69a18",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\""
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"foo\": \"bears\"})"
]
},
{
"cell_type": "markdown",
"id": "c01864e5",
"metadata": {},
"source": [
"### Functions Output Parser\n",
"\n",
"When you specify the function to return, you may just want to parse that directly"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "ad0dd88e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser\n",
"\n",
"chain = (\n",
" prompt\n",
" | model.bind(function_call={\"name\": \"joke\"}, functions=functions)\n",
" | JsonOutputFunctionsParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "1e7aa8eb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'setup': \"Why don't bears like fast food?\",\n",
" 'punchline': \"Because they can't catch it!\"}"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"foo\": \"bears\"})"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "d4aa1a01",
"metadata": {},
"outputs": [],
"source": [
"from langchain.output_parsers.openai_functions import JsonKeyOutputFunctionsParser\n",
"\n",
"chain = (\n",
" prompt\n",
" | model.bind(function_call={\"name\": \"joke\"}, functions=functions)\n",
" | JsonKeyOutputFunctionsParser(key_name=\"setup\")\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "8b6df9ba",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Why don't bears wear shoes?\""
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"foo\": \"bears\"})"
]
},
{
"cell_type": "markdown",
"id": "023fbccb-ef7d-489e-a9ba-f98e17283d51",
"metadata": {},
"source": [
"## Simplifying input\n",
"\n",
"To make invocation even simpler, we can add a `RunnableParallel` to take care of creating the prompt input dict for us:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "9601c0f0-71f9-4bd4-a672-7bd04084b018",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n",
"\n",
"map_ = RunnableParallel(foo=RunnablePassthrough())\n",
"chain = (\n",
" map_\n",
" | prompt\n",
" | model.bind(function_call={\"name\": \"joke\"}, functions=functions)\n",
" | JsonKeyOutputFunctionsParser(key_name=\"setup\")\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "7ec4f154-fda5-4847-9220-41aa902fdc33",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Why don't bears wear shoes?\""
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"bears\")"
]
},
{
"cell_type": "markdown",
"id": "def00bfe-0f83-4805-8c8f-8a53f99fa8ea",
"metadata": {},
"source": [
"Since we're composing our map with another Runnable, we can even use some syntactic sugar and just use a dict:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "7bf3846a-02ee-41a3-ba1b-a708827d4f3a",
"metadata": {},
"outputs": [],
"source": [
"chain = (\n",
" {\"foo\": RunnablePassthrough()}\n",
" | prompt\n",
" | model.bind(function_call={\"name\": \"joke\"}, functions=functions)\n",
" | JsonKeyOutputFunctionsParser(key_name=\"setup\")\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "e566d6a1-538d-4cb5-a210-a63e082e4c74",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Why don't bears like fast food?\""
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"bears\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
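The same composition works if you swap in a string `PromptTemplate` and a completion-style `LLM`, per the "mix and match" note above. A short sketch (assumes an OpenAI API key is configured; the model name is illustrative):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

prompt = PromptTemplate.from_template("tell me a joke about {foo}")
llm = OpenAI(model="gpt-3.5-turbo-instruct")  # completion-style model that returns a string

chain = prompt | llm | StrOutputParser()
chain.invoke({"foo": "bears"})
```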

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,537 @@
{
"cells": [
{
"cell_type": "raw",
"id": "366a0e68-fd67-4fe5-a292-5c33733339ea",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0\n",
"title: Get started\n",
"keywords: [chain.invoke]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "befa7fd1",
"metadata": {},
"source": [
"LCEL makes it easy to build complex chains from basic components, and supports out of the box functionality such as streaming, parallelism, and logging."
]
},
{
"cell_type": "markdown",
"id": "9a9acd2e",
"metadata": {},
"source": [
"## Basic example: prompt + model + output parser\n",
"\n",
"The most basic and common use case is chaining a prompt template and a model together. To see how this works, let's create a chain that takes a topic and generates a joke:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "278b0027",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-core langchain-community langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "c3d54f72",
"metadata": {},
"source": [
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs openaiParams={`model=\"gpt-4\"`} />\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f9eed8e8",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4\")"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "466b65b3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Why don't ice creams ever get invited to parties?\\n\\nBecause they always drip when things heat up!\""
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"tell me a short joke about {topic}\")\n",
"output_parser = StrOutputParser()\n",
"\n",
"chain = prompt | model | output_parser\n",
"\n",
"chain.invoke({\"topic\": \"ice cream\"})"
]
},
{
"cell_type": "markdown",
"id": "81c502c5-85ee-4f36-aaf4-d6e350b7792f",
"metadata": {},
"source": [
"Notice this line of the code, where we piece together these different components into a single chain using LCEL:\n",
"\n",
"```\n",
"chain = prompt | model | output_parser\n",
"```\n",
"\n",
"The `|` symbol is similar to a [unix pipe operator](https://en.wikipedia.org/wiki/Pipeline_(Unix)), which chains together the different components, feeding the output from one component as input into the next component. \n",
"\n",
"In this chain the user input is passed to the prompt template, then the prompt template output is passed to the model, then the model output is passed to the output parser. Let's take a look at each component individually to really understand what's going on."
]
},
{
"cell_type": "markdown",
"id": "aa1b77fa",
"metadata": {},
"source": [
"### 1. Prompt\n",
"\n",
"`prompt` is a `BasePromptTemplate`, which means it takes in a dictionary of template variables and produces a `PromptValue`. A `PromptValue` is a wrapper around a completed prompt that can be passed to either an `LLM` (which takes a string as input) or `ChatModel` (which takes a sequence of messages as input). It can work with either language model type because it defines logic both for producing `BaseMessage`s and for producing a string."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "b8656990",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptValue(messages=[HumanMessage(content='tell me a short joke about ice cream')])"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt_value = prompt.invoke({\"topic\": \"ice cream\"})\n",
"prompt_value"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "e6034488",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='tell me a short joke about ice cream')]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt_value.to_messages()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "60565463",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Human: tell me a short joke about ice cream'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt_value.to_string()"
]
},
{
"cell_type": "markdown",
"id": "577f0f76",
"metadata": {},
"source": [
"### 2. Model\n",
"\n",
"The `PromptValue` is then passed to `model`. In this case our `model` is a `ChatModel`, meaning it will output a `BaseMessage`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "33cf5f72",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Why don't ice creams ever get invited to parties?\\n\\nBecause they always bring a melt down!\")"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"message = model.invoke(prompt_value)\n",
"message"
]
},
{
"cell_type": "markdown",
"id": "327e7db8",
"metadata": {},
"source": [
"If our `model` was an `LLM`, it would output a string."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "8feb05da",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nRobot: Why did the ice cream truck break down? Because it had a meltdown!'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(model=\"gpt-3.5-turbo-instruct\")\n",
"llm.invoke(prompt_value)"
]
},
{
"cell_type": "markdown",
"id": "91847478",
"metadata": {},
"source": [
"### 3. Output parser\n",
"\n",
"And lastly we pass our `model` output to the `output_parser`, which is a `BaseOutputParser` meaning it takes either a string or a \n",
"`BaseMessage` as input. The specific `StrOutputParser` simply converts any input into a string."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "533e59a8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Why did the ice cream go to therapy? \\n\\nBecause it had too many toppings and couldn't find its cone-fidence!\""
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"output_parser.invoke(message)"
]
},
{
"cell_type": "markdown",
"id": "9851e842",
"metadata": {},
"source": [
"### 4. Entire Pipeline\n",
"\n",
"To follow the steps along:\n",
"\n",
"1. We pass in user input on the desired topic as `{\"topic\": \"ice cream\"}`\n",
"2. The `prompt` component takes the user input, which is then used to construct a PromptValue after using the `topic` to construct the prompt. \n",
"3. The `model` component takes the generated prompt, and passes into the OpenAI LLM model for evaluation. The generated output from the model is a `ChatMessage` object. \n",
"4. Finally, the `output_parser` component takes in a `ChatMessage`, and transforms this into a Python string, which is returned from the invoke method. \n"
]
},
{
"cell_type": "markdown",
"id": "c4873109",
"metadata": {},
"source": [
"```mermaid\n",
"graph LR\n",
" A(Input: topic=ice cream) --> |Dict| B(PromptTemplate)\n",
" B -->|PromptValue| C(ChatModel) \n",
" C -->|ChatMessage| D(StrOutputParser)\n",
" D --> |String| F(Result)\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "fe63534d",
"metadata": {},
"source": [
":::info\n",
"\n",
"Note that if youre curious about the output of any components, you can always test out a smaller version of the chain such as `prompt` or `prompt | model` to see the intermediate results:\n",
"\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "11089b6f-23f8-474f-97ec-8cae8d0ca6d4",
"metadata": {},
"outputs": [],
"source": [
"input = {\"topic\": \"ice cream\"}\n",
"\n",
"prompt.invoke(input)\n",
"# > ChatPromptValue(messages=[HumanMessage(content='tell me a short joke about ice cream')])\n",
"\n",
"(prompt | model).invoke(input)\n",
"# > AIMessage(content=\"Why did the ice cream go to therapy?\\nBecause it had too many toppings and couldn't cone-trol itself!\")"
]
},
{
"cell_type": "markdown",
"id": "cc7d3b9d-e400-4c9b-9188-f29dac73e6bb",
"metadata": {},
"source": [
"## RAG Search Example\n",
"\n",
"For our next example, we want to run a retrieval-augmented generation chain to add some context when responding to questions."
]
},
{
"cell_type": "markdown",
"id": "b8fe8eb4",
"metadata": {},
"source": [
"```{=mdx}\n",
"<ChatModelTabs />\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "662426e8-4316-41dc-8312-9b58edc7e0c9",
"metadata": {},
"outputs": [],
"source": [
"# Requires:\n",
"# pip install langchain docarray tiktoken\n",
"\n",
"from langchain_community.vectorstores import DocArrayInMemorySearch\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"vectorstore = DocArrayInMemorySearch.from_texts(\n",
" [\"harrison worked at kensho\", \"bears like to eat honey\"],\n",
" embedding=OpenAIEmbeddings(),\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"output_parser = StrOutputParser()\n",
"\n",
"setup_and_retrieval = RunnableParallel(\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
")\n",
"chain = setup_and_retrieval | prompt | model | output_parser\n",
"\n",
"chain.invoke(\"where did harrison work?\")"
]
},
{
"cell_type": "markdown",
"id": "f0999140-6001-423b-970b-adf1dfdb4dec",
"metadata": {},
"source": [
"In this case, the composed chain is: "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5b88e9bb-f04a-4a56-87ec-19a0e6350763",
"metadata": {},
"outputs": [],
"source": [
"chain = setup_and_retrieval | prompt | model | output_parser"
]
},
{
"cell_type": "markdown",
"id": "6e929e15-40a5-4569-8969-384f636cab87",
"metadata": {},
"source": [
"To explain this, we first can see that the prompt template above takes in `context` and `question` as values to be substituted in the prompt. Before building the prompt template, we want to retrieve relevant documents to the search and include them as part of the context. \n",
"\n",
"As a preliminary step, weve setup the retriever using an in memory store, which can retrieve documents based on a query. This is a runnable component as well that can be chained together with other components, but you can also try to run it separately:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7319ef6-613b-4638-ad7d-4a2183702c1d",
"metadata": {},
"outputs": [],
"source": [
"retriever.invoke(\"where did harrison work?\")"
]
},
{
"cell_type": "markdown",
"id": "e6833844-f1c4-444c-a3d2-31b3c6b31d46",
"metadata": {},
"source": [
"We then use the `RunnableParallel` to prepare the expected inputs into the prompt by using the entries for the retrieved documents as well as the original user question, using the retriever for document search, and `RunnablePassthrough` to pass the users question:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dcbca26b-d6b9-4c24-806c-1ec8fdaab4ed",
"metadata": {},
"outputs": [],
"source": [
"setup_and_retrieval = RunnableParallel(\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
")"
]
},
{
"cell_type": "markdown",
"id": "68c721c1-048b-4a64-9d78-df54fe465992",
"metadata": {},
"source": [
"To review, the complete chain is:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1d5115a7-7b8e-458b-b936-26cc87ee81c4",
"metadata": {},
"outputs": [],
"source": [
"setup_and_retrieval = RunnableParallel(\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
")\n",
"chain = setup_and_retrieval | prompt | model | output_parser"
]
},
{
"cell_type": "markdown",
"id": "5c6f5f74-b387-48a0-bedd-1fae202cd10a",
"metadata": {},
"source": [
"With the flow being:\n",
"\n",
"1. The first steps create a `RunnableParallel` object with two entries. The first entry, `context` will include the document results fetched by the retriever. The second entry, `question` will contain the users original question. To pass on the question, we use `RunnablePassthrough` to copy this entry. \n",
"2. Feed the dictionary from the step above to the `prompt` component. It then takes the user input which is `question` as well as the retrieved document which is `context` to construct a prompt and output a PromptValue. \n",
"3. The `model` component takes the generated prompt, and passes into the OpenAI LLM model for evaluation. The generated output from the model is a `ChatMessage` object. \n",
"4. Finally, the `output_parser` component takes in a `ChatMessage`, and transforms this into a Python string, which is returned from the invoke method.\n",
"\n",
"```mermaid\n",
"graph LR\n",
" A(Question) --> B(RunnableParallel)\n",
" B -->|Question| C(Retriever)\n",
" B -->|Question| D(RunnablePassThrough)\n",
" C -->|context=retrieved docs| E(PromptTemplate)\n",
" D -->|question=Question| E\n",
" E -->|PromptValue| F(ChatModel) \n",
" F -->|ChatMessage| G(StrOutputParser)\n",
" G --> |String| H(Result)\n",
"```\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "8c2438df-164e-4bbe-b5f4-461695e45b0f",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"We recommend reading our [Advantages of LCEL](/docs/expression_language/why) section next to see a side-by-side comparison of the code needed to produce common functionality with and without LCEL."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
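Because every LCEL chain exposes the same Runnable interface, the streaming and parallelism mentioned at the top come along for free. A brief sketch, assuming the `chain` from the basic example above (`prompt | model | output_parser`) and a configured OpenAI API key:

```python
# Stream tokens as they are generated.
for chunk in chain.stream({"topic": "ice cream"}):
    print(chunk, end="", flush=True)

# Run several inputs concurrently with a single call.
chain.batch([{"topic": "ice cream"}, {"topic": "spaghetti"}])
```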

View File

@@ -0,0 +1,136 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b45110ef",
"metadata": {},
"source": [
"# Create a runnable with the @chain decorator\n",
"\n",
"You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionally equivalent to wrapping in a [`RunnableLambda`](/docs/expression_language/primitives/functions).\n",
"\n",
"This will have the benefit of improved observability by tracing your chain correctly. Any calls to runnables inside this function will be traced as nested children.\n",
"\n",
"It will also allow you to use this as any other runnable, compose it in chain, etc.\n",
"\n",
"Let's take a look at this in action!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "23b2b564",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "d9370420",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import chain\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "b7f74f7e",
"metadata": {},
"outputs": [],
"source": [
"prompt1 = ChatPromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"prompt2 = ChatPromptTemplate.from_template(\"What is the subject of this joke: {joke}\")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "2b0365c4",
"metadata": {},
"outputs": [],
"source": [
"@chain\n",
"def custom_chain(text):\n",
" prompt_val1 = prompt1.invoke({\"topic\": text})\n",
" output1 = ChatOpenAI().invoke(prompt_val1)\n",
" parsed_output1 = StrOutputParser().invoke(output1)\n",
" chain2 = prompt2 | ChatOpenAI() | StrOutputParser()\n",
" return chain2.invoke({\"joke\": parsed_output1})"
]
},
{
"cell_type": "markdown",
"id": "904d6872",
"metadata": {},
"source": [
"`custom_chain` is now a runnable, meaning you will need to use `invoke`"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "6448bdd3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The subject of this joke is bears.'"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"custom_chain.invoke(\"bears\")"
]
},
{
"cell_type": "markdown",
"id": "aa767ea9",
"metadata": {},
"source": [
"If you check out your LangSmith traces, you should see a `custom_chain` trace in there, with the calls to OpenAI nested underneath"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f1245bdc",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
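As noted above, decorating a function with `@chain` is functionally equivalent to wrapping it in a `RunnableLambda`. A self-contained sketch that needs no API key (the `add_prefix` helper is illustrative):

```python
from langchain_core.runnables import RunnableLambda, chain


@chain
def add_prefix(text: str) -> str:
    return "topic: " + text


wrapped = RunnableLambda(lambda text: "topic: " + text)

assert add_prefix.invoke("bears") == wrapped.invoke("bears") == "topic: bears"
```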

View File

@@ -5,21 +5,11 @@
"id": "8c5eb99a",
"metadata": {},
"source": [
"# How to inspect runnables\n",
"# Inspect your runnables\n",
"\n",
":::info Prerequisites\n",
"Once you create a runnable with LCEL, you may often want to inspect it to get a better sense for what is going on. This notebook covers some methods for doing so.\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n",
"- [Chaining runnables](/docs/how_to/sequence/)\n",
"\n",
":::\n",
"\n",
"Once you create a runnable with [LangChain Expression Language](/docs/concepts/#langchain-expression-language), you may often want to inspect it to get a better sense for what is going on. This notebook covers some methods for doing so.\n",
"\n",
"This guide shows some ways you can programmatically introspect the internal steps of chains. If you are instead interested in debugging issues in your chain, see [this section](/docs/how_to/debugging) instead.\n",
"\n",
"First, let's create an example chain. We will create one that does retrieval:"
"First, let's create an example LCEL. We will create one that does retrieval"
]
},
{
@@ -29,7 +19,21 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain langchain-openai faiss-cpu tiktoken"
"%pip install --upgrade --quiet langchain langchain-openai faiss-cpu tiktoken"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a88f4b24",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings"
]
},
{
@@ -39,12 +43,6 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"\n",
"vectorstore = FAISS.from_texts(\n",
" [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
")\n",
@@ -57,8 +55,16 @@
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"model = ChatOpenAI()\n",
"\n",
"model = ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "70e3fe93",
"metadata": {},
"outputs": [],
"source": [
"chain = (\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
@@ -74,7 +80,7 @@
"source": [
"## Get a graph\n",
"\n",
"You can use the `get_graph()` method to get a graph representation of the runnable:"
"You can get a graph of the runnable"
]
},
{
@@ -94,7 +100,7 @@
"source": [
"## Print a graph\n",
"\n",
"While that is not super legible, you can use the `print_ascii()` method to show that graph in a way that's easier to understand:"
"While that is not super legible, you can print it to get a display that's easier to understand"
]
},
{
@@ -160,7 +166,7 @@
"source": [
"## Get the prompts\n",
"\n",
"You may want to see just the prompts that are used in a chain with the `get_prompts()` method:"
"An important part of every chain is the prompts that are used. You can get the prompts present in the chain:"
]
},
{
@@ -184,18 +190,6 @@
"chain.get_prompts()"
]
},
{
"cell_type": "markdown",
"id": "c5a74bd5",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"You've now learned how to introspect your composed LCEL chains.\n",
"\n",
"Next, check out the other how-to guides on runnables in this section, or the related how-to guide on [debugging your chains](/docs/how_to/debugging)."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -221,7 +215,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.1"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,592 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "6a4becbd-238e-4c1d-a02d-08e61fbc3763",
"metadata": {},
"source": [
"# Add message history (memory)\n",
"\n",
"The `RunnableWithMessageHistory` lets us add message history to certain types of chains. It wraps another Runnable and manages the chat message history for it.\n",
"\n",
"Specifically, it can be used for any Runnable that takes as input one of\n",
"\n",
"* a sequence of `BaseMessage`\n",
"* a dict with a key that takes a sequence of `BaseMessage`\n",
"* a dict with a key that takes the latest message(s) as a string or sequence of `BaseMessage`, and a separate key that takes historical messages\n",
"\n",
"And returns as output one of\n",
"\n",
"* a string that can be treated as the contents of an `AIMessage`\n",
"* a sequence of `BaseMessage`\n",
"* a dict with a key that contains a sequence of `BaseMessage`\n",
"\n",
"Let's take a look at some examples to see how it works. First we construct a runnable (which here accepts a dict as input and returns a message as output):"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "2ed413b4-33a1-48ee-89b0-2d4917ec101a",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_openai.chat_models import ChatOpenAI\n",
"\n",
"model = ChatOpenAI()\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You're an assistant who's good at {ability}. Respond in 20 words or fewer\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"runnable = prompt | model"
]
},
{
"cell_type": "markdown",
"id": "9fd175e1-c7b8-4929-a57e-3331865fe7aa",
"metadata": {},
"source": [
"To manage the message history, we will need:\n",
"1. This runnable;\n",
"2. A callable that returns an instance of `BaseChatMessageHistory`.\n",
"\n",
"Check out the [memory integrations](https://integrations.langchain.com/memory) page for implementations of chat message histories using Redis and other providers. Here we demonstrate using an in-memory `ChatMessageHistory` as well as more persistent storage using `RedisChatMessageHistory`."
]
},
{
"cell_type": "markdown",
"id": "3d83adad-9672-496d-9f25-5747e7b8c8bb",
"metadata": {},
"source": [
"## In-memory\n",
"\n",
"Below we show a simple example in which the chat history lives in memory, in this case via a global Python dict.\n",
"\n",
"We construct a callable `get_session_history` that references this dict to return an instance of `ChatMessageHistory`. The arguments to the callable can be specified by passing a configuration to the `RunnableWithMessageHistory` at runtime. By default, the configuration parameter is expected to be a single string `session_id`. This can be adjusted via the `history_factory_config` kwarg.\n",
"\n",
"Using the single-parameter default:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "54348d02-d8ee-440c-bbf9-41bc0fbbc46c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import ChatMessageHistory\n",
"from langchain_core.chat_history import BaseChatMessageHistory\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"\n",
"store = {}\n",
"\n",
"\n",
"def get_session_history(session_id: str) -> BaseChatMessageHistory:\n",
" if session_id not in store:\n",
" store[session_id] = ChatMessageHistory()\n",
" return store[session_id]\n",
"\n",
"\n",
"with_message_history = RunnableWithMessageHistory(\n",
" runnable,\n",
" get_session_history,\n",
" input_messages_key=\"input\",\n",
" history_messages_key=\"history\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "01acb505-3fd3-4ab4-9f04-5ea07e81542e",
"metadata": {},
"source": [
"Note that we've specified `input_messages_key` (the key to be treated as the latest input message) and `history_messages_key` (the key to add historical messages to).\n",
"\n",
"When invoking this new runnable, we specify the corresponding chat history via a configuration parameter:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "01384412-f08e-4634-9edb-3f46f475b582",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Cosine is a trigonometric function that calculates the ratio of the adjacent side to the hypotenuse of a right triangle.')"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"with_message_history.invoke(\n",
" {\"ability\": \"math\", \"input\": \"What does cosine mean?\"},\n",
" config={\"configurable\": {\"session_id\": \"abc123\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "954688a2-9a3f-47ee-a9e8-fa0c83e69477",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Cosine is a mathematical function used to calculate the length of a side in a right triangle.')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Remembers\n",
"with_message_history.invoke(\n",
" {\"ability\": \"math\", \"input\": \"What?\"},\n",
" config={\"configurable\": {\"session_id\": \"abc123\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "39350d7c-2641-4744-bc2a-fd6a57c4ea90",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='I can help with math problems. What do you need assistance with?')"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# New session_id --> does not remember.\n",
"with_message_history.invoke(\n",
" {\"ability\": \"math\", \"input\": \"What?\"},\n",
" config={\"configurable\": {\"session_id\": \"def234\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d29497be-3366-408d-bbb9-d4a8bf4ef37c",
"metadata": {},
"source": [
"The configuration parameters by which we track message histories can be customized by passing in a list of ``ConfigurableFieldSpec`` objects to the ``history_factory_config`` parameter. Below, we use two parameters: a `user_id` and `conversation_id`."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "1c89daee-deff-4fdf-86a3-178f7d8ef536",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import ConfigurableFieldSpec\n",
"\n",
"store = {}\n",
"\n",
"\n",
"def get_session_history(user_id: str, conversation_id: str) -> BaseChatMessageHistory:\n",
" if (user_id, conversation_id) not in store:\n",
" store[(user_id, conversation_id)] = ChatMessageHistory()\n",
" return store[(user_id, conversation_id)]\n",
"\n",
"\n",
"with_message_history = RunnableWithMessageHistory(\n",
" runnable,\n",
" get_session_history,\n",
" input_messages_key=\"input\",\n",
" history_messages_key=\"history\",\n",
" history_factory_config=[\n",
" ConfigurableFieldSpec(\n",
" id=\"user_id\",\n",
" annotation=str,\n",
" name=\"User ID\",\n",
" description=\"Unique identifier for the user.\",\n",
" default=\"\",\n",
" is_shared=True,\n",
" ),\n",
" ConfigurableFieldSpec(\n",
" id=\"conversation_id\",\n",
" annotation=str,\n",
" name=\"Conversation ID\",\n",
" description=\"Unique identifier for the conversation.\",\n",
" default=\"\",\n",
" is_shared=True,\n",
" ),\n",
" ],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "65c5622e-09b8-4f2f-8c8a-2dab0fd040fa",
"metadata": {},
"outputs": [],
"source": [
"with_message_history.invoke(\n",
" {\"ability\": \"math\", \"input\": \"Hello\"},\n",
" config={\"configurable\": {\"user_id\": \"123\", \"conversation_id\": \"1\"}},\n",
")"
]
},
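{
"cell_type": "markdown",
"id": "7a3c9b2e-5d41-4f6a-9c08-2b7e1f4d6a90",
"metadata": {},
"source": [
"As a quick (hypothetical) check, invoking with a different `conversation_id` for the same user selects a separate `ChatMessageHistory`, so the exchange above is not carried over:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b4e8d1c7-2f95-4a63-8d0b-5c9e7a2f1b38",
"metadata": {},
"outputs": [],
"source": [
"# Same user, new conversation: a fresh history is created (example identifiers).\n",
"with_message_history.invoke(\n",
"    {\"ability\": \"math\", \"input\": \"What was my last question?\"},\n",
"    config={\"configurable\": {\"user_id\": \"123\", \"conversation_id\": \"2\"}},\n",
")"
]
},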
{
"cell_type": "markdown",
"id": "18f1a459-3f88-4ee6-8542-76a907070dd6",
"metadata": {},
"source": [
"### Examples with runnables of different signatures\n",
"\n",
"The above runnable takes a dict as input and returns a BaseMessage. Below we show some alternatives."
]
},
{
"cell_type": "markdown",
"id": "48eae1bf-b59d-4a61-8e62-b6dbf667e866",
"metadata": {},
"source": [
"#### Messages input, dict output"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "17733d4f-3a32-4055-9d44-5d58b9446a26",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output_message': AIMessage(content=\"Simone de Beauvoir believed in the existence of free will. She argued that individuals have the ability to make choices and determine their own actions, even in the face of social and cultural constraints. She rejected the idea that individuals are purely products of their environment or predetermined by biology or destiny. Instead, she emphasized the importance of personal responsibility and the need for individuals to actively engage in creating their own lives and defining their own existence. De Beauvoir believed that freedom and agency come from recognizing one's own freedom and actively exercising it in the pursuit of personal and collective liberation.\")}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_core.runnables import RunnableParallel\n",
"\n",
"chain = RunnableParallel({\"output_message\": ChatOpenAI()})\n",
"\n",
"\n",
"def get_session_history(session_id: str) -> BaseChatMessageHistory:\n",
" if session_id not in store:\n",
" store[session_id] = ChatMessageHistory()\n",
" return store[session_id]\n",
"\n",
"\n",
"with_message_history = RunnableWithMessageHistory(\n",
" chain,\n",
" get_session_history,\n",
" output_messages_key=\"output_message\",\n",
")\n",
"\n",
"with_message_history.invoke(\n",
" [HumanMessage(content=\"What did Simone de Beauvoir believe about free will\")],\n",
" config={\"configurable\": {\"session_id\": \"baz\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "efb57ef5-91f9-426b-84b9-b77f071a9dd7",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output_message': AIMessage(content='Simone de Beauvoir\\'s views on free will were closely aligned with those of her contemporary and partner Jean-Paul Sartre. Both de Beauvoir and Sartre were existentialist philosophers who emphasized the importance of individual freedom and the rejection of determinism. They believed that human beings have the capacity to transcend their circumstances and create their own meaning and values.\\n\\nSartre, in his famous work \"Being and Nothingness,\" argued that human beings are condemned to be free, meaning that we are burdened with the responsibility of making choices and defining ourselves in a world that lacks inherent meaning. Like de Beauvoir, Sartre believed that individuals have the ability to exercise their freedom and make choices in the face of external and internal constraints.\\n\\nWhile there may be some nuanced differences in their philosophical writings, overall, de Beauvoir and Sartre shared a similar belief in the existence of free will and the importance of individual agency in shaping one\\'s own life.')}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"with_message_history.invoke(\n",
" [HumanMessage(content=\"How did this compare to Sartre\")],\n",
" config={\"configurable\": {\"session_id\": \"baz\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "a39eac5f-a9d8-4729-be06-5e7faf0c424d",
"metadata": {},
"source": [
"#### Messages input, messages output"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e45bcd95-e31f-4a9a-967a-78f96e8da881",
"metadata": {},
"outputs": [],
"source": [
"RunnableWithMessageHistory(\n",
" ChatOpenAI(),\n",
" get_session_history,\n",
")"
]
},
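{
"cell_type": "markdown",
"id": "c5f0a2d9-8b36-4e71-9a4c-6d2e8f1b0c57",
"metadata": {},
"source": [
"Because the wrapped model accepts a list of messages directly, no `input_messages_key` is needed. A minimal (hypothetical) invocation might look like this, with an arbitrary `session_id`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d9b3e6f1-4a72-4c85-b0d9-7e1f2a3c4d58",
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical usage: pass a list of messages in, get an AIMessage back,\n",
"# with history stored under the given session_id.\n",
"with_message_history = RunnableWithMessageHistory(\n",
"    ChatOpenAI(),\n",
"    get_session_history,\n",
")\n",
"\n",
"with_message_history.invoke(\n",
"    [HumanMessage(content=\"Hi, can you summarize what cosine means?\")],\n",
"    config={\"configurable\": {\"session_id\": \"messages-io-example\"}},\n",
")"
]
},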
{
"cell_type": "markdown",
"id": "04daa921-a2d1-40f9-8cd1-ae4e9a4163a7",
"metadata": {},
"source": [
"#### Dict with single key for all messages input, messages output"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "27157f15-9fb0-4167-9870-f4d7f234b3cb",
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"RunnableWithMessageHistory(\n",
" itemgetter(\"input_messages\") | ChatOpenAI(),\n",
" get_session_history,\n",
" input_messages_key=\"input_messages\",\n",
")"
]
},
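{
"cell_type": "markdown",
"id": "e7c1f4a8-3b59-4d26-8e0a-9f2b5c6d7e81",
"metadata": {},
"source": [
"And a minimal (hypothetical) invocation of this variant, where the single `input_messages` key holds the full list of messages and the `session_id` is again an arbitrary example value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f8d2a5b9-6c47-4e13-9b0f-1a3c5d7e9f02",
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical usage: the itemgetter step pulls the message list out of the dict.\n",
"with_message_history = RunnableWithMessageHistory(\n",
"    itemgetter(\"input_messages\") | ChatOpenAI(),\n",
"    get_session_history,\n",
"    input_messages_key=\"input_messages\",\n",
")\n",
"\n",
"with_message_history.invoke(\n",
"    {\"input_messages\": [HumanMessage(content=\"What is the capital of France?\")]},\n",
"    config={\"configurable\": {\"session_id\": \"dict-input-example\"}},\n",
")"
]
},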
{
"cell_type": "markdown",
"id": "418ca7af-9ed9-478c-8bca-cba0de2ca61e",
"metadata": {},
"source": [
"## Persistent storage"
]
},
{
"cell_type": "markdown",
"id": "76799a13-d99a-4c4f-91f2-db699e40b8df",
"metadata": {},
"source": [
"In many cases it is preferable to persist conversation histories. `RunnableWithMessageHistory` is agnostic as to how the `get_session_history` callable retrieves its chat message histories. See [here](https://github.com/langchain-ai/langserve/blob/main/examples/chat_with_persistence_and_user/server.py) for an example using a local filesystem. Below we demonstrate how one could use Redis. Check out the [memory integrations](https://integrations.langchain.com/memory) page for implementations of chat message histories using other providers."
]
},
{
"cell_type": "markdown",
"id": "6bca45e5-35d9-4603-9ca9-6ac0ce0e35cd",
"metadata": {},
"source": [
"### Setup\n",
"\n",
"We'll need to install Redis if it's not installed already:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "477d04b3-c2b6-4ba5-962f-492c0d625cd5",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet redis"
]
},
{
"cell_type": "markdown",
"id": "6a0ec9e0-7b1c-4c6f-b570-e61d520b47c6",
"metadata": {},
"source": [
"Start a local Redis Stack server if we don't have an existing Redis deployment to connect to:\n",
"```bash\n",
"docker run -d -p 6379:6379 -p 8001:8001 redis/redis-stack:latest\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "cd6a250e-17fe-4368-a39d-1fe6b2cbde68",
"metadata": {},
"outputs": [],
"source": [
"REDIS_URL = \"redis://localhost:6379/0\""
]
},
{
"cell_type": "markdown",
"id": "36f43b87-655c-4f64-aa7b-bd8c1955d8e5",
"metadata": {},
"source": [
"### [LangSmith](/docs/langsmith)\n",
"\n",
"LangSmith is especially useful for something like message history injection, where it can be hard to otherwise understand what the inputs are to various parts of the chain.\n",
"\n",
"Note that LangSmith is not needed, but it is helpful.\n",
"If you do want to use LangSmith, after you sign up at the link above, make sure to uncoment the below and set your environment variables to start logging traces:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "2afc1556-8da1-4499-ba11-983b66c58b18",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "f9d81796-ce61-484c-89e2-6c567d5e54ef",
"metadata": {},
"source": [
"Updating the message history implementation just requires us to define a new callable, this time returning an instance of `RedisChatMessageHistory`:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "ca7c64d8-e138-4ef8-9734-f82076c47d80",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import RedisChatMessageHistory\n",
"\n",
"\n",
"def get_message_history(session_id: str) -> RedisChatMessageHistory:\n",
" return RedisChatMessageHistory(session_id, url=REDIS_URL)\n",
"\n",
"\n",
"with_message_history = RunnableWithMessageHistory(\n",
" runnable,\n",
" get_message_history,\n",
" input_messages_key=\"input\",\n",
" history_messages_key=\"history\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "37eefdec-9901-4650-b64c-d3c097ed5f4d",
"metadata": {},
"source": [
"We can invoke as before:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "a85bcc22-ca4c-4ad5-9440-f94be7318f3e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Cosine is a trigonometric function that represents the ratio of the adjacent side to the hypotenuse in a right triangle.')"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"with_message_history.invoke(\n",
" {\"ability\": \"math\", \"input\": \"What does cosine mean?\"},\n",
" config={\"configurable\": {\"session_id\": \"foobar\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "ab29abd3-751f-41ce-a1b0-53f6b565e79d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The inverse of cosine is the arccosine function, denoted as acos or cos^-1, which gives the angle corresponding to a given cosine value.')"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"with_message_history.invoke(\n",
" {\"ability\": \"math\", \"input\": \"What's its inverse\"},\n",
" config={\"configurable\": {\"session_id\": \"foobar\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "da3d1feb-b4bb-4624-961c-7db2e1180df7",
"metadata": {},
"source": [
":::{.callout-tip}\n",
"\n",
"[Langsmith trace](https://smith.langchain.com/public/bd73e122-6ec1-48b2-82df-e6483dc9cb63/r)\n",
"\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "61d5115e-64a1-4ad5-b676-8afd4ef6093e",
"metadata": {},
"source": [
"Looking at the Langsmith trace for the second call, we can see that when constructing the prompt, a \"history\" variable has been injected which is a list of two messages (our first input and first output)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -7,6 +7,7 @@
"source": [
"---\n",
"sidebar_position: 3\n",
"title: \"Route logic based on input\"\n",
"keywords: [RunnableBranch, LCEL]\n",
"---"
]
@@ -16,25 +17,16 @@
"id": "4b47436a",
"metadata": {},
"source": [
"# How to route between sub-chains\n",
"# Dynamically route logic based on input\n",
"\n",
":::info Prerequisites\n",
"This notebook covers how to do routing in the LangChain Expression Language.\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n",
"- [Chaining runnables](/docs/how_to/sequence/)\n",
"- [Configuring chain parameters at runtime](/docs/how_to/configure)\n",
"- [Prompt templates](/docs/concepts/#prompt-templates)\n",
"- [Chat Messages](/docs/concepts/#message-types)\n",
"\n",
":::\n",
"\n",
"Routing allows you to create non-deterministic chains where the output of a previous step defines the next step. Routing can help provide structure and consistency around interactions with models by allowing you to define states and use information related to those states as context to model calls.\n",
"Routing allows you to create non-deterministic chains where the output of a previous step defines the next step. Routing helps provide structure and consistency around interactions with LLMs.\n",
"\n",
"There are two ways to perform routing:\n",
"\n",
"1. Conditionally return runnables from a [`RunnableLambda`](/docs/how_to/functions) (recommended)\n",
"2. Using a `RunnableBranch` (legacy)\n",
"1. Conditionally return runnables from a [`RunnableLambda`](/docs/expression_language/primitives/functions) (recommended)\n",
"2. Using a `RunnableBranch`.\n",
"\n",
"We'll illustrate both methods using a two step sequence where the first step classifies an input question as being about `LangChain`, `Anthropic`, or `Other`, then routes to a corresponding prompt chain."
]
@@ -323,7 +315,7 @@
"id": "fa0f589d",
"metadata": {},
"source": [
"## Routing by semantic similarity\n",
"# Routing by semantic similarity\n",
"\n",
"One especially useful technique is to use embeddings to route a query to the most relevant prompt. Here's an example."
]
@@ -335,7 +327,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.utils.math import cosine_similarity\n",
"from langchain.utils.math import cosine_similarity\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
@@ -371,7 +363,7 @@
"chain = (\n",
" {\"query\": RunnablePassthrough()}\n",
" | RunnableLambda(prompt_router)\n",
" | ChatAnthropic(model=\"claude-3-haiku-20240307\")\n",
" | ChatAnthropic(model_name=\"claude-3-haiku-20240307\")\n",
" | StrOutputParser()\n",
")"
]
@@ -438,18 +430,6 @@
"print(chain.invoke(\"What's a path integral\"))"
]
},
{
"cell_type": "markdown",
"id": "ff40bcb3",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"You've now learned how to add routing to your composed LCEL chains.\n",
"\n",
"Next, check out the other how-to guides on runnables in this section."
]
},
{
"cell_type": "markdown",
"id": "927b7498",
@@ -473,7 +453,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.5"
}
},
"nbformat": 4,


@@ -0,0 +1,34 @@
---
sidebar_class_name: hidden
---
# LangChain Expression Language (LCEL)
LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together.
LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we've seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:
[**First-class streaming support**](/docs/expression_language/streaming)
When you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means, for example, that we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens.
[**Async support**](/docs/expression_language/interface)
Any chain built with LCEL can be called both with the synchronous API (e.g. in your Jupyter notebook while prototyping) and with the asynchronous API (e.g. in a [LangServe](/docs/langserve) server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server.
[**Optimized parallel execution**](/docs/expression_language/primitives/parallel)
Whenever your LCEL chains have steps that can be executed in parallel (e.g. if you fetch documents from multiple retrievers), we automatically do it, in both the sync and the async interfaces, for the smallest possible latency.
[**Retries and fallbacks**](/docs/guides/productionization/fallbacks)
Configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We're currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.
[**Access intermediate results**](/docs/expression_language/interface#async-stream-events-beta)
For more complex chains it's often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. You can stream intermediate results, and it's available on every [LangServe](/docs/langserve) server.
[**Input and output schemas**](/docs/expression_language/interface#input-schema)
Input and output schemas give every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe.
[**Seamless LangSmith tracing**](/docs/langsmith)
As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step.
With LCEL, **all** steps are automatically logged to [LangSmith](/docs/langsmith/) for maximum observability and debuggability.
[**Seamless LangServe deployment**](/docs/langserve)
Any chain created with LCEL can be easily deployed using [LangServe](/docs/langserve).
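For illustration, here is a minimal sketch of a chain composed with LCEL (assuming the `langchain-openai` package is installed and an OpenAI API key is configured); the same chain object exposes `invoke`, `stream`, and their async counterparts:
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai.chat_models import ChatOpenAI

# Compose a "prompt + LLM + output parser" chain with the | operator.
prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
chain = prompt | ChatOpenAI() | StrOutputParser()

# Synchronous call
chain.invoke({"topic": "bears"})

# Streaming: parsed chunks arrive as the model generates tokens
for chunk in chain.stream({"topic": "bears"}):
    print(chunk, end="", flush=True)

# Async call (inside an async context):
# await chain.ainvoke({"topic": "bears"})
```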

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff