Mirror of https://github.com/hwchase17/langchain.git
Synced 2026-02-07 01:30:24 +00:00

Compare commits: copilot/fi...cc/bench (57 commits)
| SHA1 |
|---|
| d3b05c1753 |
| fb4996ccda |
| 45a067509f |
| 23c3fa65d4 |
| 13d67cf37e |
| 7f989d3c3b |
| b7968c2b7d |
| 2f0c6421a1 |
| c31236264e |
| cfe13f673a |
| 02001212b0 |
| 00244122bd |
| 5599c59d4a |
| 6727d6e8c8 |
| 5036bd7adb |
| ec2b34a02d |
| 11d68a0b9e |
| 566774a893 |
| 255a6d668a |
| cbf4c0e565 |
| 145d38f7dd |
| 68c70da33e |
| 754528d23f |
| dc66737f03 |
| 499dc35cfb |
| 42c1159991 |
| ac706c77d4 |
| 8493887b6f |
| a647073b26 |
| e120604774 |
| 06d8754b0b |
| 6e108c1cb4 |
| cc6139860c |
| ae8f58ac6f |
| 346731544b |
| c1b86cc929 |
| 376f70be96 |
| bc4251b9e0 |
| 2543007436 |
| ac2de920b1 |
| e02eed5489 |
| ba83f58141 |
| fb490b0c39 |
| 419c173225 |
| 4011257c25 |
| 9de0892a77 |
| 5414527236 |
| dd9f5d7cde |
| d348cfe968 |
| 84c5048cb8 |
| d318c655b6 |
| df4eed0cea |
| a25e196fe9 |
| 3137d49bd9 |
| 9a2f49df1f |
| 881c6534a6 |
| 5e9eb19a83 |
````diff
@@ -15,12 +15,12 @@ You may use the button above, or follow these steps to open this repo in a Codes
 1. Click **Create codespace on master**.
 
 For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).
 
 ## VS Code Dev Containers
 
 [](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
 
-> [!NOTE]
+> [!NOTE]
 > If you click the link above you will open the main repo (`langchain-ai/langchain`) and *not* your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below and replace with your username and cloned repo name:
 
 ```txt
````
```diff
@@ -4,7 +4,7 @@ services:
   build:
     dockerfile: libs/langchain/dev.Dockerfile
     context: ..
 
   networks:
     - langchain-network
```
.github/CODE_OF_CONDUCT.md (2 changes, vendored)

```diff
@@ -129,4 +129,4 @@ For answers to common questions about this code of conduct, see the FAQ at
 [v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
 [Mozilla CoC]: https://github.com/mozilla/diversity
 [FAQ]: https://www.contributor-covenant.org/faq
-[translations]: https://www.contributor-covenant.org/translations
+[translations]: https://www.contributor-covenant.org/translations
```
.github/ISSUE_TEMPLATE/bug-report.yml (8 changes, vendored)

````diff
@@ -5,7 +5,7 @@ body:
   - type: markdown
     attributes:
       value: |
-        Thank you for taking the time to file a bug report.
+        Thank you for taking the time to file a bug report.
 
         Use this to report BUGS in LangChain. For usage questions, feature requests and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).
 
@@ -50,7 +50,7 @@ body:
 
         If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.
 
-        **Important!**
+        **Important!**
 
        * Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
        * Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
@@ -58,14 +58,14 @@ body:
        * INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
 
       placeholder: |
-        The following code:
+        The following code:
 
        ```python
        from langchain_core.runnables import RunnableLambda
 
        def bad_code(inputs) -> int:
            raise NotImplementedError('For demo purpose')
 
        chain = RunnableLambda(bad_code)
        chain.invoke('Hello!')
        ```
````
.github/ISSUE_TEMPLATE/documentation.yml (2 changes, vendored)

```diff
@@ -14,7 +14,7 @@ body:
 
        Do **NOT** use this to ask usage questions or reporting issues with your code.
 
-        If you have usage questions or need help solving some problem,
+        If you have usage questions or need help solving some problem,
        please use the [LangChain Forum](https://forum.langchain.com/).
 
        If you're in the wrong place, here are some helpful links to find a better
```
.github/ISSUE_TEMPLATE/privileged.yml (2 changes, vendored)

```diff
@@ -8,7 +8,7 @@ body:
 
        If you are not a LangChain maintainer or were not asked directly by a maintainer to create an issue, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead.
 
-        You are a LangChain maintainer if you maintain any of the packages inside of the LangChain repository
+        You are a LangChain maintainer if you maintain any of the packages inside of the LangChain repository
        or are a regular contributor to LangChain with previous merged pull requests.
  - type: checkboxes
    id: privileged
```
.github/actions/people/Dockerfile (2 changes, vendored)

```diff
@@ -4,4 +4,4 @@ RUN pip install httpx PyGithub "pydantic==2.0.2" pydantic-settings "pyyaml>=5.3.
 
 COPY ./app /app
 
-CMD ["python", "/app/main.py"]
+CMD ["python", "/app/main.py"]
```
.github/actions/people/action.yml (6 changes, vendored)

```diff
@@ -4,8 +4,8 @@ description: "Generate the data for the LangChain People page"
 author: "Jacob Lee <jacob@langchain.dev>"
 inputs:
   token:
-    description: 'User token, to read the GitHub API. Can be passed in using {{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}'
+    description: "User token, to read the GitHub API. Can be passed in using {{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}"
     required: true
 runs:
-  using: 'docker'
-  image: 'Dockerfile'
+  using: "docker"
+  image: "Dockerfile"
```
.github/scripts/check_diff.py (25 changes, vendored)

```diff
@@ -3,14 +3,12 @@ import json
 import os
 import sys
 from collections import defaultdict
-from typing import Dict, List, Set
 from pathlib import Path
+from typing import Dict, List, Set
 
 import tomllib
-
-from packaging.requirements import Requirement
-
 from get_min_versions import get_min_version_from_toml
+from packaging.requirements import Requirement
 
 LANGCHAIN_DIRS = [
     "libs/core",
@@ -38,7 +36,7 @@ IGNORED_PARTNERS = [
 ]
 
 PY_312_MAX_PACKAGES = [
-    "libs/partners/chroma", # https://github.com/chroma-core/chroma/issues/4382
+    "libs/partners/chroma",  # https://github.com/chroma-core/chroma/issues/4382
 ]
 
 
@@ -85,9 +83,9 @@ def dependents_graph() -> dict:
         for depline in extended_deps:
             if depline.startswith("-e "):
                 # editable dependency
-                assert depline.startswith(
-                    "-e ../partners/"
-                ), "Extended test deps should only editable install partner packages"
+                assert depline.startswith("-e ../partners/"), (
+                    "Extended test deps should only editable install partner packages"
+                )
                 partner = depline.split("partners/")[1]
                 dep = f"langchain-{partner}"
             else:
@@ -271,7 +269,7 @@ if __name__ == "__main__":
             dirs_to_run["extended-test"].add(dir_)
         elif file.startswith("libs/standard-tests"):
             # TODO: update to include all packages that rely on standard-tests (all partner packages)
-            # note: won't run on external repo partners
+            # Note: won't run on external repo partners
             dirs_to_run["lint"].add("libs/standard-tests")
             dirs_to_run["test"].add("libs/standard-tests")
             dirs_to_run["lint"].add("libs/cli")
@@ -285,7 +283,7 @@ if __name__ == "__main__":
         elif file.startswith("libs/cli"):
             dirs_to_run["lint"].add("libs/cli")
             dirs_to_run["test"].add("libs/cli")
 
         elif file.startswith("libs/partners"):
             partner_dir = file.split("/")[2]
             if os.path.isdir(f"libs/partners/{partner_dir}") and [
@@ -303,7 +301,10 @@ if __name__ == "__main__":
                 f"Unknown lib: {file}. check_diff.py likely needs "
                 "an update for this new library!"
             )
-        elif file.startswith("docs/") or file in ["pyproject.toml", "uv.lock"]:  # docs or root uv files
+        elif file.startswith("docs/") or file in [
+            "pyproject.toml",
+            "uv.lock",
+        ]:  # docs or root uv files
             docs_edited = True
             dirs_to_run["lint"].add(".")
```
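The `check_diff.py` hunks above are mostly import ordering and formatter churn, but the final hunk shows the script's core pattern: map each changed path to the set of directories whose CI jobs must run. A condensed, self-contained sketch of that dispatch pattern (reduced to two branches; the file list is a made-up stand-in for the real CI input):

```python
from collections import defaultdict

# Hypothetical changed paths; the real script receives these from the CI event.
changed_files = ["libs/core/runnables.py", "docs/intro.mdx", "uv.lock"]

dirs_to_run: defaultdict = defaultdict(set)
for file in changed_files:
    if file.startswith("libs/core"):
        # Library change: schedule that library's test job.
        dirs_to_run["test"].add("libs/core")
    elif file.startswith("docs/") or file in [
        "pyproject.toml",
        "uv.lock",
    ]:  # docs or root uv files
        dirs_to_run["lint"].add(".")

print(dict(dirs_to_run))  # {'test': {'libs/core'}, 'lint': {'.'}}
```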
```diff
@@ -1,4 +1,5 @@
 import sys
+
 import tomllib
 
 if __name__ == "__main__":
```
.github/scripts/get_min_versions.py (26 changes, vendored)

```diff
@@ -1,5 +1,5 @@
-from collections import defaultdict
 import sys
+from collections import defaultdict
 from typing import Optional
 
 if sys.version_info >= (3, 11):
@@ -8,17 +8,13 @@ else:
     # for python 3.10 and below, which doesnt have stdlib tomllib
     import tomli as tomllib
 
-from packaging.requirements import Requirement
-from packaging.specifiers import SpecifierSet
-from packaging.version import Version
-
-
-import requests
-from packaging.version import parse
-import re
-from typing import List
+import re
+
+import requests
+from packaging.requirements import Requirement
+from packaging.specifiers import SpecifierSet
+from packaging.version import Version, parse
 
 MIN_VERSION_LIBS = [
     "langchain-core",
@@ -72,11 +68,13 @@ def get_minimum_version(package_name: str, spec_string: str) -> Optional[str]:
     spec_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", spec_string)
     # rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1 (can be anywhere in constraint string)
     for y in range(1, 10):
-        spec_string = re.sub(rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y+1}", spec_string)
+        spec_string = re.sub(
+            rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y + 1}", spec_string
+        )
     # rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
     for x in range(1, 10):
         spec_string = re.sub(
-            rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x+1}", spec_string
+            rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x + 1}", spec_string
         )
 
     spec_set = SpecifierSet(spec_string)
@@ -169,12 +167,12 @@ def check_python_version(version_string, constraint_string):
     # rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1.0 (can be anywhere in constraint string)
     for y in range(1, 10):
         constraint_string = re.sub(
-            rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y+1}.0", constraint_string
+            rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y + 1}.0", constraint_string
         )
     # rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
     for x in range(1, 10):
         constraint_string = re.sub(
-            rf"\^{x}\.0\.(\d+)", rf">={x}.0.\1,<{x+1}.0.0", constraint_string
+            rf"\^{x}\.0\.(\d+)", rf">={x}.0.\1,<{x + 1}.0.0", constraint_string
         )
 
     try:
```
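The regex edits in `get_minimum_version` above are formatter spacing only (`{y+1}` becomes `{y + 1}`); the underlying logic converts Poetry-style caret constraints into PEP 440 ranges. A minimal standalone sketch of that conversion (not the repository's module, but the same substitutions):

```python
import re


def caret_to_range(spec: str) -> str:
    """Rewrite Poetry-style caret constraints into PEP 440 ranges."""
    # ^0.0.z pins the exact version
    spec = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", spec)
    # ^0.y.z allows patch releases only: >=0.y.z,<0.(y+1)
    for y in range(1, 10):
        spec = re.sub(rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y + 1}", spec)
    # ^x.y.z allows minor and patch releases: >=x.y.z,<(x+1)
    for x in range(1, 10):
        spec = re.sub(rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x + 1}", spec)
    return spec


assert caret_to_range("^0.3.5") == ">=0.3.5,<0.4"
assert caret_to_range("^2.1.4") == ">=2.1.4,<3"
```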
.github/scripts/prep_api_docs_build.py (47 changes, vendored)

```diff
@@ -3,9 +3,10 @@
 
 import os
 import shutil
-import yaml
 from pathlib import Path
-from typing import Dict, Any
+from typing import Any, Dict
+
+import yaml
 
 
 def load_packages_yaml() -> Dict[str, Any]:
@@ -28,7 +29,6 @@ def get_target_dir(package_name: str) -> Path:
 def clean_target_directories(packages: list) -> None:
     """Remove old directories that will be replaced."""
     for package in packages:
-
         target_dir = get_target_dir(package["name"])
         if target_dir.exists():
             print(f"Removing {target_dir}")
@@ -38,7 +38,6 @@ def clean_target_directories(packages: list) -> None:
 def move_libraries(packages: list) -> None:
     """Move libraries from their source locations to the target directories."""
     for package in packages:
-
         repo_name = package["repo"].split("/")[1]
         source_path = package["path"]
         target_dir = get_target_dir(package["name"])
@@ -68,23 +67,33 @@ def main():
     package_yaml = load_packages_yaml()
 
     # Clean target directories
-    clean_target_directories([
-        p
-        for p in package_yaml["packages"]
-        if (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
-        and p["repo"] != "langchain-ai/langchain"
-        and p["name"] != "langchain-ai21"  # Skip AI21 due to dependency conflicts
-    ])
+    clean_target_directories(
+        [
+            p
+            for p in package_yaml["packages"]
+            if (
+                p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref")
+            )
+            and p["repo"] != "langchain-ai/langchain"
+            and p["name"]
+            != "langchain-ai21"  # Skip AI21 due to dependency conflicts
+        ]
+    )
 
     # Move libraries to their new locations
-    move_libraries([
-        p
-        for p in package_yaml["packages"]
-        if not p.get("disabled", False)
-        and (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
-        and p["repo"] != "langchain-ai/langchain"
-        and p["name"] != "langchain-ai21"  # Skip AI21 due to dependency conflicts
-    ])
+    move_libraries(
+        [
+            p
+            for p in package_yaml["packages"]
+            if not p.get("disabled", False)
+            and (
+                p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref")
+            )
+            and p["repo"] != "langchain-ai/langchain"
+            and p["name"]
+            != "langchain-ai21"  # Skip AI21 due to dependency conflicts
+        ]
+    )
 
     # Delete ones without a pyproject.toml
     for partner in Path("langchain/libs/partners").iterdir():
```
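Both reformatted calls in `main()` apply the same selection predicate to the package manifest. Isolated for readability, with hypothetical entries standing in for the real `packages.yml`:

```python
# Hypothetical manifest entries; only the keys the filter reads are shown.
packages = [
    {"name": "langchain-foo", "repo": "langchain-ai/langchain-foo"},
    {"name": "langchain", "repo": "langchain-ai/langchain"},
    {"name": "langchain-ai21", "repo": "langchain-ai/langchain-ai21"},
    {"name": "langchain-bar", "repo": "other-org/bar", "include_in_api_ref": True},
]

selected = [
    p
    for p in packages
    if (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
    and p["repo"] != "langchain-ai/langchain"  # the monorepo itself is excluded
    and p["name"] != "langchain-ai21"  # Skip AI21 due to dependency conflicts
]
print([p["name"] for p in selected])  # ['langchain-foo', 'langchain-bar']
```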
.github/tools/git-restore-mtime (416 changes, vendored)

```diff
@@ -81,56 +81,93 @@ import time
 __version__ = "2022.12+dev"
 
 # Update symlinks only if the platform supports not following them
-UPDATE_SYMLINKS = bool(os.utime in getattr(os, 'supports_follow_symlinks', []))
+UPDATE_SYMLINKS = bool(os.utime in getattr(os, "supports_follow_symlinks", []))
 
 # Call os.path.normpath() only if not in a POSIX platform (Windows)
-NORMALIZE_PATHS = (os.path.sep != '/')
+NORMALIZE_PATHS = os.path.sep != "/"
 
 # How many files to process in each batch when re-trying merge commits
 STEPMISSING = 100
 
 # (Extra) keywords for the os.utime() call performed by touch()
-UTIME_KWS = {} if not UPDATE_SYMLINKS else {'follow_symlinks': False}
+UTIME_KWS = {} if not UPDATE_SYMLINKS else {"follow_symlinks": False}
 
 
 # Command-line interface ######################################################
 
 
 def parse_args():
-    parser = argparse.ArgumentParser(
-        description=__doc__.split('\n---')[0])
+    parser = argparse.ArgumentParser(description=__doc__.split("\n---")[0])
 
     group = parser.add_mutually_exclusive_group()
-    group.add_argument('--quiet', '-q', dest='loglevel',
-        action="store_const", const=logging.WARNING, default=logging.INFO,
-        help="Suppress informative messages and summary statistics.")
-    group.add_argument('--verbose', '-v', action="count", help="""
+    group.add_argument(
+        "--quiet",
+        "-q",
+        dest="loglevel",
+        action="store_const",
+        const=logging.WARNING,
+        default=logging.INFO,
+        help="Suppress informative messages and summary statistics.",
+    )
+    group.add_argument(
+        "--verbose",
+        "-v",
+        action="count",
+        help="""
         Print additional information for each processed file.
         Specify twice to further increase verbosity.
-        """)
+        """,
+    )
 
-    parser.add_argument('--cwd', '-C', metavar="DIRECTORY", help="""
+    parser.add_argument(
+        "--cwd",
+        "-C",
+        metavar="DIRECTORY",
+        help="""
         Run as if %(prog)s was started in directory %(metavar)s.
         This affects how --work-tree, --git-dir and PATHSPEC arguments are handled.
        See 'man 1 git' or 'git --help' for more information.
-        """)
+        """,
+    )
 
-    parser.add_argument('--git-dir', dest='gitdir', metavar="GITDIR", help="""
+    parser.add_argument(
+        "--git-dir",
+        dest="gitdir",
+        metavar="GITDIR",
+        help="""
         Path to the git repository, by default auto-discovered by searching
         the current directory and its parents for a .git/ subdirectory.
-        """)
+        """,
+    )
 
-    parser.add_argument('--work-tree', dest='workdir', metavar="WORKTREE", help="""
+    parser.add_argument(
+        "--work-tree",
+        dest="workdir",
+        metavar="WORKTREE",
+        help="""
         Path to the work tree root, by default the parent of GITDIR if it's
         automatically discovered, or the current directory if GITDIR is set.
-        """)
+        """,
+    )
 
-    parser.add_argument('--force', '-f', default=False, action="store_true", help="""
+    parser.add_argument(
+        "--force",
+        "-f",
+        default=False,
+        action="store_true",
+        help="""
         Force updating files with uncommitted modifications.
         Untracked files and uncommitted deletions, renames and additions are
         always ignored.
-        """)
+        """,
+    )
 
-    parser.add_argument('--merge', '-m', default=False, action="store_true", help="""
+    parser.add_argument(
+        "--merge",
+        "-m",
+        default=False,
+        action="store_true",
+        help="""
         Include merge commits.
         Leads to more recent times and more files per commit, thus with the same
         time, which may or may not be what you want.
@@ -138,71 +175,130 @@ def parse_args():
         are found sooner, which can improve performance, sometimes substantially.
         But as merge commits are usually huge, processing them may also take longer.
         By default, merge commits are only used for files missing from regular commits.
-        """)
+        """,
+    )
 
-    parser.add_argument('--first-parent', default=False, action="store_true", help="""
+    parser.add_argument(
+        "--first-parent",
+        default=False,
+        action="store_true",
+        help="""
         Consider only the first parent, the "main branch", when evaluating merge commits.
         Only effective when merge commits are processed, either when --merge is
         used or when finding missing files after the first regular log search.
         See --skip-missing.
-        """)
+        """,
+    )
 
-    parser.add_argument('--skip-missing', '-s', dest="missing", default=True,
-        action="store_false", help="""
+    parser.add_argument(
+        "--skip-missing",
+        "-s",
+        dest="missing",
+        default=True,
+        action="store_false",
+        help="""
         Do not try to find missing files.
         If merge commits were not evaluated with --merge and some files were
         not found in regular commits, by default %(prog)s searches for these
         files again in the merge commits.
         This option disables this retry, so files found only in merge commits
         will not have their timestamp updated.
-        """)
+        """,
+    )
 
-    parser.add_argument('--no-directories', '-D', dest='dirs', default=True,
-        action="store_false", help="""
+    parser.add_argument(
+        "--no-directories",
+        "-D",
+        dest="dirs",
+        default=True,
+        action="store_false",
+        help="""
         Do not update directory timestamps.
         By default, use the time of its most recently created, renamed or deleted file.
         Note that just modifying a file will NOT update its directory time.
-        """)
+        """,
+    )
 
-    parser.add_argument('--test', '-t', default=False, action="store_true",
-        help="Test run: do not actually update any file timestamp.")
+    parser.add_argument(
+        "--test",
+        "-t",
+        default=False,
+        action="store_true",
+        help="Test run: do not actually update any file timestamp.",
+    )
 
-    parser.add_argument('--commit-time', '-c', dest='commit_time', default=False,
-        action='store_true', help="Use commit time instead of author time.")
+    parser.add_argument(
+        "--commit-time",
+        "-c",
+        dest="commit_time",
+        default=False,
+        action="store_true",
+        help="Use commit time instead of author time.",
+    )
 
-    parser.add_argument('--oldest-time', '-o', dest='reverse_order', default=False,
-        action='store_true', help="""
+    parser.add_argument(
+        "--oldest-time",
+        "-o",
+        dest="reverse_order",
+        default=False,
+        action="store_true",
+        help="""
         Update times based on the oldest, instead of the most recent commit of a file.
         This reverses the order in which the git log is processed to emulate a
         file "creation" date. Note this will be inaccurate for files deleted and
         re-created at later dates.
-        """)
+        """,
+    )
 
-    parser.add_argument('--skip-older-than', metavar='SECONDS', type=int, help="""
+    parser.add_argument(
+        "--skip-older-than",
+        metavar="SECONDS",
+        type=int,
+        help="""
         Ignore files that are currently older than %(metavar)s.
         Useful in workflows that assume such files already have a correct timestamp,
         as it may improve performance by processing fewer files.
-        """)
+        """,
+    )
 
-    parser.add_argument('--skip-older-than-commit', '-N', default=False,
-        action='store_true', help="""
+    parser.add_argument(
+        "--skip-older-than-commit",
+        "-N",
+        default=False,
+        action="store_true",
+        help="""
         Ignore files older than the timestamp it would be updated to.
         Such files may be considered "original", likely in the author's repository.
-        """)
+        """,
+    )
 
-    parser.add_argument('--unique-times', default=False, action="store_true", help="""
+    parser.add_argument(
+        "--unique-times",
+        default=False,
+        action="store_true",
+        help="""
         Set the microseconds to a unique value per commit.
         Allows telling apart changes that would otherwise have identical timestamps,
         as git's time accuracy is in seconds.
-        """)
+        """,
+    )
 
-    parser.add_argument('pathspec', nargs='*', metavar='PATHSPEC', help="""
+    parser.add_argument(
+        "pathspec",
+        nargs="*",
+        metavar="PATHSPEC",
+        help="""
         Only modify paths matching %(metavar)s, relative to current directory.
         By default, update all but untracked files and submodules.
-        """)
+        """,
+    )
 
-    parser.add_argument('--version', '-V', action='version',
-        version='%(prog)s version {version}'.format(version=get_version()))
+    parser.add_argument(
+        "--version",
+        "-V",
+        action="version",
+        version="%(prog)s version {version}".format(version=get_version()),
+    )
 
     args_ = parser.parse_args()
     if args_.verbose:
@@ -212,17 +308,18 @@ def parse_args():
 
 
 def get_version(version=__version__):
-    if not version.endswith('+dev'):
+    if not version.endswith("+dev"):
         return version
     try:
         cwd = os.path.dirname(os.path.realpath(__file__))
-        return Git(cwd=cwd, errors=False).describe().lstrip('v')
+        return Git(cwd=cwd, errors=False).describe().lstrip("v")
     except Git.Error:
-        return '-'.join((version, "unknown"))
+        return "-".join((version, "unknown"))
 
 
 # Helper functions ############################################################
 
 
 def setup_logging():
     """Add TRACE logging level and corresponding method, return the root logger"""
     logging.TRACE = TRACE = logging.DEBUG // 2
@@ -255,11 +352,13 @@ def normalize(path):
     if path and path[0] == '"':
         # Python 2: path = path[1:-1].decode("string-escape")
         # Python 3: https://stackoverflow.com/a/46650050/624066
-        path = (path[1:-1]  # Remove enclosing double quotes
-            .encode('latin1')  # Convert to bytes, required by 'unicode-escape'
-            .decode('unicode-escape')  # Perform the actual octal-escaping decode
-            .encode('latin1')  # 1:1 mapping to bytes, UTF-8 encoded
-            .decode('utf8', 'surrogateescape'))  # Decode from UTF-8
+        path = (
+            path[1:-1]  # Remove enclosing double quotes
+            .encode("latin1")  # Convert to bytes, required by 'unicode-escape'
+            .decode("unicode-escape")  # Perform the actual octal-escaping decode
+            .encode("latin1")  # 1:1 mapping to bytes, UTF-8 encoded
+            .decode("utf8", "surrogateescape")
+        )  # Decode from UTF-8
     if NORMALIZE_PATHS:
         # Make sure the slash matches the OS; for Windows we need a backslash
         path = os.path.normpath(path)
@@ -282,12 +381,12 @@ def touch_ns(path, mtime_ns):
 
 def isodate(secs: int):
     # time.localtime() accepts floats, but discards fractional part
-    return time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(secs))
+    return time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(secs))
 
 
 def isodate_ns(ns: int):
     # for integers fromtimestamp() is equivalent and ~16% slower than isodate()
-    return datetime.datetime.fromtimestamp(ns / 1000000000).isoformat(sep=' ')
+    return datetime.datetime.fromtimestamp(ns / 1000000000).isoformat(sep=" ")
 
 
 def get_mtime_ns(secs: int, idx: int):
@@ -305,35 +404,49 @@ def get_mtime_path(path):
 
 # Git class and parse_log(), the heart of the script ##########################
 
 
 class Git:
     def __init__(self, workdir=None, gitdir=None, cwd=None, errors=True):
-        self.gitcmd = ['git']
+        self.gitcmd = ["git"]
         self.errors = errors
         self._proc = None
-        if workdir: self.gitcmd.extend(('--work-tree', workdir))
-        if gitdir: self.gitcmd.extend(('--git-dir', gitdir))
-        if cwd: self.gitcmd.extend(('-C', cwd))
+        if workdir:
+            self.gitcmd.extend(("--work-tree", workdir))
+        if gitdir:
+            self.gitcmd.extend(("--git-dir", gitdir))
+        if cwd:
+            self.gitcmd.extend(("-C", cwd))
         self.workdir, self.gitdir = self._get_repo_dirs()
 
     def ls_files(self, paths: list = None):
-        return (normalize(_) for _ in self._run('ls-files --full-name', paths))
+        return (normalize(_) for _ in self._run("ls-files --full-name", paths))
 
     def ls_dirty(self, force=False):
-        return (normalize(_[3:].split(' -> ', 1)[-1])
-            for _ in self._run('status --porcelain')
-            if _[:2] != '??' and (not force or (_[0] in ('R', 'A')
-                or _[1] == 'D')))
+        return (
+            normalize(_[3:].split(" -> ", 1)[-1])
+            for _ in self._run("status --porcelain")
+            if _[:2] != "??" and (not force or (_[0] in ("R", "A") or _[1] == "D"))
+        )
 
-    def log(self, merge=False, first_parent=False, commit_time=False,
-            reverse_order=False, paths: list = None):
-        cmd = 'whatchanged --pretty={}'.format('%ct' if commit_time else '%at')
-        if merge: cmd += ' -m'
-        if first_parent: cmd += ' --first-parent'
-        if reverse_order: cmd += ' --reverse'
+    def log(
+        self,
+        merge=False,
+        first_parent=False,
+        commit_time=False,
+        reverse_order=False,
+        paths: list = None,
+    ):
+        cmd = "whatchanged --pretty={}".format("%ct" if commit_time else "%at")
+        if merge:
+            cmd += " -m"
+        if first_parent:
+            cmd += " --first-parent"
+        if reverse_order:
+            cmd += " --reverse"
         return self._run(cmd, paths)
 
     def describe(self):
-        return self._run('describe --tags', check=True)[0]
+        return self._run("describe --tags", check=True)[0]
 
     def terminate(self):
         if self._proc is None:
@@ -345,18 +458,22 @@ class Git:
             pass
 
     def _get_repo_dirs(self):
-        return (os.path.normpath(_) for _ in
-            self._run('rev-parse --show-toplevel --absolute-git-dir', check=True))
+        return (
+            os.path.normpath(_)
+            for _ in self._run(
+                "rev-parse --show-toplevel --absolute-git-dir", check=True
+            )
+        )
 
     def _run(self, cmdstr: str, paths: list = None, output=True, check=False):
         cmdlist = self.gitcmd + shlex.split(cmdstr)
         if paths:
-            cmdlist.append('--')
+            cmdlist.append("--")
             cmdlist.extend(paths)
-        popen_args = dict(universal_newlines=True, encoding='utf8')
+        popen_args = dict(universal_newlines=True, encoding="utf8")
         if not self.errors:
-            popen_args['stderr'] = subprocess.DEVNULL
-        log.trace("Executing: %s", ' '.join(cmdlist))
+            popen_args["stderr"] = subprocess.DEVNULL
+        log.trace("Executing: %s", " ".join(cmdlist))
         if not output:
             return subprocess.call(cmdlist, **popen_args)
         if check:
@@ -379,30 +496,26 @@ def parse_log(filelist, dirlist, stats, git, merge=False, filterlist=None):
     mtime = 0
     datestr = isodate(0)
     for line in git.log(
-        merge,
-        args.first_parent,
-        args.commit_time,
-        args.reverse_order,
-        filterlist
+        merge, args.first_parent, args.commit_time, args.reverse_order, filterlist
     ):
-        stats['loglines'] += 1
+        stats["loglines"] += 1
 
         # Blank line between Date and list of files
         if not line:
            continue
 
        # Date line
-        if line[0] != ':':  # Faster than `not line.startswith(':')`
-            stats['commits'] += 1
+        if line[0] != ":":  # Faster than `not line.startswith(':')`
+            stats["commits"] += 1
             mtime = int(line)
             if args.unique_times:
-                mtime = get_mtime_ns(mtime, stats['commits'])
+                mtime = get_mtime_ns(mtime, stats["commits"])
             if args.debug:
                 datestr = isodate(mtime)
             continue
 
         # File line: three tokens if it describes a renaming, otherwise two
-        tokens = line.split('\t')
+        tokens = line.split("\t")
 
         # Possible statuses:
         # M: Modified (content changed)
@@ -411,7 +524,7 @@ def parse_log(filelist, dirlist, stats, git, merge=False, filterlist=None):
         # T: Type changed: to/from regular file, symlinks, submodules
         # R099: Renamed (moved), with % of unchanged content. 100 = pure rename
         # Not possible in log: C=Copied, U=Unmerged, X=Unknown, B=pairing Broken
-        status = tokens[0].split(' ')[-1]
+        status = tokens[0].split(" ")[-1]
         file = tokens[-1]
 
         # Handles non-ASCII chars and OS path separator
@@ -419,56 +532,76 @@ def parse_log(filelist, dirlist, stats, git, merge=False, filterlist=None):
 
         def do_file():
             if args.skip_older_than_commit and get_mtime_path(file) <= mtime:
-                stats['skip'] += 1
+                stats["skip"] += 1
                 return
             if args.debug:
-                log.debug("%d\t%d\t%d\t%s\t%s",
-                    stats['loglines'], stats['commits'], stats['files'],
-                    datestr, file)
+                log.debug(
+                    "%d\t%d\t%d\t%s\t%s",
+                    stats["loglines"],
+                    stats["commits"],
+                    stats["files"],
+                    datestr,
+                    file,
+                )
             try:
                 touch(os.path.join(git.workdir, file), mtime)
-                stats['touches'] += 1
+                stats["touches"] += 1
             except Exception as e:
                 log.error("ERROR: %s: %s", e, file)
-                stats['errors'] += 1
+                stats["errors"] += 1
 
         def do_dir():
             if args.debug:
-                log.debug("%d\t%d\t-\t%s\t%s",
-                    stats['loglines'], stats['commits'],
-                    datestr, "{}/".format(dirname or '.'))
+                log.debug(
+                    "%d\t%d\t-\t%s\t%s",
+                    stats["loglines"],
+                    stats["commits"],
+                    datestr,
+                    "{}/".format(dirname or "."),
+                )
             try:
                 touch(os.path.join(git.workdir, dirname), mtime)
-                stats['dirtouches'] += 1
+                stats["dirtouches"] += 1
             except Exception as e:
                 log.error("ERROR: %s: %s", e, dirname)
-                stats['direrrors'] += 1
+                stats["direrrors"] += 1
 
         if file in filelist:
-            stats['files'] -= 1
+            stats["files"] -= 1
             filelist.remove(file)
             do_file()
 
-        if args.dirs and status in ('A', 'D'):
+        if args.dirs and status in ("A", "D"):
             dirname = os.path.dirname(file)
             if dirname in dirlist:
                 dirlist.remove(dirname)
                 do_dir()
 
         # All files done?
-        if not stats['files']:
+        if not stats["files"]:
             git.terminate()
             return
 
 
 # Main Logic ##################################################################
 
 
 def main():
     start = time.time()  # yes, Wall time. CPU time is not realistic for users.
-    stats = {_: 0 for _ in ('loglines', 'commits', 'touches', 'skip', 'errors',
-        'dirtouches', 'direrrors')}
+    stats = {
+        _: 0
+        for _ in (
+            "loglines",
+            "commits",
+            "touches",
+            "skip",
+            "errors",
+            "dirtouches",
+            "direrrors",
+        )
+    }
 
-    logging.basicConfig(level=args.loglevel, format='%(message)s')
+    logging.basicConfig(level=args.loglevel, format="%(message)s")
     log.trace("Arguments: %s", args)
 
     # First things first: Where and Who are we?
@@ -499,13 +632,16 @@ def main():
 
         # Symlink (to file, to dir or broken - git handles the same way)
         if not UPDATE_SYMLINKS and os.path.islink(fullpath):
-            log.warning("WARNING: Skipping symlink, no OS support for updates: %s",
-                path)
+            log.warning(
+                "WARNING: Skipping symlink, no OS support for updates: %s", path
+            )
             continue
 
         # skip files which are older than given threshold
-        if (args.skip_older_than
-            and start - get_mtime_path(fullpath) > args.skip_older_than):
+        if (
+            args.skip_older_than
+            and start - get_mtime_path(fullpath) > args.skip_older_than
+        ):
             continue
 
         # Always add files relative to worktree root
@@ -519,15 +655,17 @@ def main():
     else:
         dirty = set(git.ls_dirty())
         if dirty:
-            log.warning("WARNING: Modified files in the working directory were ignored."
-                "\nTo include such files, commit your changes or use --force.")
+            log.warning(
+                "WARNING: Modified files in the working directory were ignored."
+                "\nTo include such files, commit your changes or use --force."
+            )
             filelist -= dirty
 
     # Build dir list to be processed
     dirlist = set(os.path.dirname(_) for _ in filelist) if args.dirs else set()
 
-    stats['totalfiles'] = stats['files'] = len(filelist)
-    log.info("{0:,} files to be processed in work dir".format(stats['totalfiles']))
+    stats["totalfiles"] = stats["files"] = len(filelist)
+    log.info("{0:,} files to be processed in work dir".format(stats["totalfiles"]))
 
     if not filelist:
         # Nothing to do. Exit silently and without errors, just like git does
@@ -544,10 +682,18 @@ def main():
     if args.missing and not args.merge:
         filterlist = list(filelist)
         missing = len(filterlist)
-        log.info("{0:,} files not found in log, trying merge commits".format(missing))
+        log.info(
+            "{0:,} files not found in log, trying merge commits".format(missing)
+        )
         for i in range(0, missing, STEPMISSING):
-            parse_log(filelist, dirlist, stats, git,
-                merge=True, filterlist=filterlist[i:i + STEPMISSING])
+            parse_log(
+                filelist,
+                dirlist,
+                stats,
+                git,
+                merge=True,
+                filterlist=filterlist[i : i + STEPMISSING],
+            )
 
     # Still missing some?
     for file in filelist:
@@ -556,29 +702,33 @@ def main():
     # Final statistics
     # Suggestion: use git-log --before=mtime to brag about skipped log entries
     def log_info(msg, *a, width=13):
-        ifmt = '{:%d,}' % (width,)  # not using 'n' for consistency with ffmt
-        ffmt = '{:%d,.2f}' % (width,)
+        ifmt = "{:%d,}" % (width,)  # not using 'n' for consistency with ffmt
+        ffmt = "{:%d,.2f}" % (width,)
         # %-formatting lacks a thousand separator, must pre-render with .format()
-        log.info(msg.replace('%d', ifmt).replace('%f', ffmt).format(*a))
+        log.info(msg.replace("%d", ifmt).replace("%f", ffmt).format(*a))
 
     log_info(
-        "Statistics:\n"
-        "%f seconds\n"
-        "%d log lines processed\n"
-        "%d commits evaluated",
-        time.time() - start, stats['loglines'], stats['commits'])
+        "Statistics:\n%f seconds\n%d log lines processed\n%d commits evaluated",
+        time.time() - start,
+        stats["loglines"],
+        stats["commits"],
+    )
 
     if args.dirs:
-        if stats['direrrors']: log_info("%d directory update errors", stats['direrrors'])
-        log_info("%d directories updated", stats['dirtouches'])
+        if stats["direrrors"]:
+            log_info("%d directory update errors", stats["direrrors"])
+        log_info("%d directories updated", stats["dirtouches"])
 
-    if stats['touches'] != stats['totalfiles']:
-        log_info("%d files", stats['totalfiles'])
-    if stats['skip']: log_info("%d files skipped", stats['skip'])
-    if stats['files']: log_info("%d files missing", stats['files'])
-    if stats['errors']: log_info("%d file update errors", stats['errors'])
+    if stats["touches"] != stats["totalfiles"]:
+        log_info("%d files", stats["totalfiles"])
+    if stats["skip"]:
+        log_info("%d files skipped", stats["skip"])
+    if stats["files"]:
+        log_info("%d files missing", stats["files"])
+    if stats["errors"]:
+        log_info("%d file update errors", stats["errors"])
 
-    log_info("%d files updated", stats['touches'])
+    log_info("%d files updated", stats["touches"])
 
     if args.test:
         log.info("TEST RUN - No files modified!")
```
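Among the quoting and layout churn above, the `UPDATE_SYMLINKS` / `UTIME_KWS` pair is the one genuinely platform-sensitive detail: `follow_symlinks=False` may only be passed to `os.utime()` where the OS supports it. A sketch of that pattern with an assumed `touch()` body (the script's own `touch()` is not visible in these hunks):

```python
import os

# Only pass follow_symlinks=False where the platform supports it.
UPDATE_SYMLINKS = bool(os.utime in getattr(os, "supports_follow_symlinks", []))
UTIME_KWS = {} if not UPDATE_SYMLINKS else {"follow_symlinks": False}


def touch(path: str, mtime: float) -> None:
    """Set access and modification times of path to mtime (seconds); assumed body."""
    os.utime(path, (mtime, mtime), **UTIME_KWS)
```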
.github/workflows/_release.yml (3 changes, vendored)

```diff
@@ -388,11 +388,12 @@ jobs:
       - name: Test against ${{ matrix.partner }}
         if: startsWith(inputs.working-directory, 'libs/core')
         run: |
-          # Identify latest tag
+          # Identify latest tag, excluding pre-releases
           LATEST_PACKAGE_TAG="$(
             git ls-remote --tags origin "langchain-${{ matrix.partner }}*" \
               | awk '{print $2}' \
               | sed 's|refs/tags/||' \
+              | grep -Ev '==[^=]*(\.?dev[0-9]*|\.?rc[0-9]*)$' \
               | sort -Vr \
               | head -n 1
           )"
```
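The added `grep -Ev` stage drops dev and release-candidate tags before `sort -Vr | head -n 1` picks the newest. A rough Python rendering of the same filter (the tag names below are invented examples):

```python
import re

# Invented tag names in the "package==version" shape the pipeline processes.
tags = [
    "langchain-openai==0.3.1",
    "langchain-openai==0.3.2rc1",
    "langchain-openai==0.3.2.dev0",
]

# Same pattern as: grep -Ev '==[^=]*(\.?dev[0-9]*|\.?rc[0-9]*)$'
pre_release = re.compile(r"==[^=]*(\.?dev[0-9]*|\.?rc[0-9]*)$")
stable = [t for t in tags if not pre_release.search(t)]
print(stable)  # ['langchain-openai==0.3.1']
```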
.github/workflows/_test.yml (2 changes, vendored)

```diff
@@ -79,4 +79,4 @@ jobs:
           # grep will exit non-zero if the target message isn't found,
           # and `set -e` above will cause the step to fail.
-          echo "$STATUS" | grep 'nothing to commit, working tree clean'
+          echo "$STATUS" | grep 'nothing to commit, working tree clean'
```
.github/workflows/_test_pydantic.yml (2 changes, vendored)

```diff
@@ -64,4 +64,4 @@ jobs:
 
           # grep will exit non-zero if the target message isn't found,
           # and `set -e` above will cause the step to fail.
-          echo "$STATUS" | grep 'nothing to commit, working tree clean'
+          echo "$STATUS" | grep 'nothing to commit, working tree clean'
```
.github/workflows/api_doc_build.yml (2 changes, vendored)

```diff
@@ -52,7 +52,6 @@ jobs:
         run: |
           # Get unique repositories
           REPOS=$(echo "$REPOS_UNSORTED" | sort -u)
-
           # Checkout each unique repository
           for repo in $REPOS; do
             # Validate repository format (allow any org with proper format)
@@ -68,7 +67,6 @@ jobs:
               echo "Error: Invalid repository name: $REPO_NAME"
               exit 1
             fi
-
             echo "Checking out $repo to $REPO_NAME"
             git clone --depth 1 https://github.com/$repo.git $REPO_NAME
           done
```
.github/workflows/check_diffs.yml (1 change, vendored)

```diff
@@ -30,6 +30,7 @@ jobs:
   build:
     name: 'Detect Changes & Set Matrix'
     runs-on: ubuntu-latest
+    if: ${{ !contains(github.event.pull_request.labels.*.name, 'ci-ignore') }}
     steps:
       - name: '📋 Checkout Code'
         uses: actions/checkout@v4
```
.github/workflows/codspeed.yml (1 change, vendored)

```diff
@@ -20,6 +20,7 @@ jobs:
   codspeed:
     name: 'Benchmark'
     runs-on: ubuntu-latest
+    if: ${{ !contains(github.event.pull_request.labels.*.name, 'codspeed-ignore') }}
     strategy:
       matrix:
         include:
```
```diff
@@ -11,4 +11,4 @@
   "MD046": {
     "style": "fenced"
   }
-}
+}
```
.vscode/settings.json (4 changes, vendored)

```diff
@@ -21,7 +21,7 @@
   "[python]": {
     "editor.formatOnSave": true,
     "editor.codeActionsOnSave": {
-      "source.organizeImports": "explicit",
+      "source.organizeImports.ruff": "explicit",
       "source.fixAll": "explicit"
     },
     "editor.defaultFormatter": "charliermarsh.ruff"
@@ -77,4 +77,6 @@
     "editor.tabSize": 2,
     "editor.insertSpaces": true
   },
+  "python.terminal.activateEnvironment": false,
+  "python.defaultInterpreterPath": "./.venv/bin/python"
 }
```
```diff
@@ -63,4 +63,4 @@ Notebook | Description
 [rag-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag-locally-on-intel-cpu.ipynb) | Perform Retrieval-Augmented-Generation (RAG) on locally downloaded open-source models using langchain and open source tools and execute it on Intel Xeon CPU. We showed an example of how to apply RAG on Llama 2 model and enable it to answer the queries related to Intel Q1 2024 earnings release.
 [visual_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/visual_RAG_vdms.ipynb) | Performs Visual Retrieval-Augmented-Generation (RAG) using videos and scene descriptions generated by open source models.
 [contextual_rag.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/contextual_rag.ipynb) | Performs contextual retrieval-augmented generation (RAG) prepending chunk-specific explanatory context to each chunk before embedding.
-[rag-agents-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/local_rag_agents_intel_cpu.ipynb) | Build a RAG agent locally with open source models that routes questions through one of two paths to find answers. The agent generates answers based on documents retrieved from either the vector database or retrieved from web search. If the vector database lacks relevant information, the agent opts for web search. Open-source models for LLM and embeddings are used locally on an Intel Xeon CPU to execute this pipeline.
+[rag-agents-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/local_rag_agents_intel_cpu.ipynb) | Build a RAG agent locally with open source models that routes questions through one of two paths to find answers. The agent generates answers based on documents retrieved from either the vector database or retrieved from web search. If the vector database lacks relevant information, the agent opts for web search. Open-source models for LLM and embeddings are used locally on an Intel Xeon CPU to execute this pipeline.
```
```diff
@@ -97,7 +97,7 @@ def _load_module_members(module_path: str, namespace: str) -> ModuleMembers:
         if type(type_) is typing_extensions._TypedDictMeta:  # type: ignore
             kind: ClassKind = "TypedDict"
         elif type(type_) is typing._TypedDictMeta:  # type: ignore
-            kind: ClassKind = "TypedDict"
+            kind = "TypedDict"
         elif (
             issubclass(type_, Runnable)
             and issubclass(type_, BaseModel)
@@ -189,7 +189,7 @@ def _load_package_modules(
         if isinstance(package_directory, str)
         else package_directory
     )
-    modules_by_namespace = {}
+    modules_by_namespace: Dict[str, ModuleMembers] = {}
 
     # Get the high level package name
     package_name = package_path.name
@@ -217,7 +217,11 @@ def _load_package_modules(
         # Get the full namespace of the module
         namespace = str(relative_module_name).replace(".py", "").replace("/", ".")
         # Keep only the top level namespace
-        top_namespace = namespace.split(".")[0]
+        # (but make special exception for content_blocks and v1.messages)
+        if namespace == "messages.content_blocks" or namespace == "v1.messages":
+            top_namespace = namespace  # Keep full namespace for content_blocks
+        else:
+            top_namespace = namespace.split(".")[0]
 
         try:
             # If submodule is present, we need to construct the paths in a slightly
@@ -283,7 +287,7 @@ def _construct_doc(
     .. toctree::
         :hidden:
         :maxdepth: 2
 
 """
     index_autosummary = """
 """
@@ -365,9 +369,9 @@ def _construct_doc(
 
         module_doc += f"""\
 :template: {template}
 
 {class_["qualified_name"]}
 
 """
         index_autosummary += f"""
 {class_["qualified_name"]}
@@ -550,8 +554,8 @@ def _build_index(dirs: List[str]) -> None:
     integrations = sorted(dir_ for dir_ in dirs if dir_ not in main_)
     doc = """# LangChain Python API Reference
 
-Welcome to the LangChain Python API reference. This is a reference for all
-`langchain-x` packages.
+Welcome to the LangChain Python API reference. This is a reference for all
+`langchain-x` packages.
 
 For user guides see [https://python.langchain.com](https://python.langchain.com).
```
````diff
@@ -89,7 +89,7 @@ Please see the [API reference for @tool](https://python.langchain.com/api_refere
 
 ## Tool artifacts
 
-**Tools** are utilities that can be called by a model, and whose outputs are designed to be fed back to a model. Sometimes, however, there are artifacts of a tool's execution that we want to make accessible to downstream components in our chain or agent, but that we don't want to expose to the model itself. For example if a tool returns a custom object, a dataframe or an image, we may want to pass some metadata about this output to the model without passing the actual output to the model. At the same time, we may want to be able to access this full output elsewhere, for example in downstream tools.
+**Tools** are utilities that can be called by a model, and whose outputs are designed to be fed back to a model. Sometimes, however, there are artifacts of a tool's execution that we want to make accessible to downstream components in our chain or agent, but that we don't want to expose to the model itself. For example if a tool returns a custom object, a dataframe or an image, we may want to pass some metadata about this output to the model without passing the actual output. At the same time, we may want to be able to access this full output elsewhere, for example in downstream tools.
 
 ```python
 @tool(response_format="content_and_artifact")
````
````diff
@@ -9,6 +9,14 @@ This project utilizes [uv](https://docs.astral.sh/uv/) v0.5+ as a dependency man
 
 Install `uv`: **[documentation on how to install it](https://docs.astral.sh/uv/getting-started/installation/)**.
 
+### Windows Users
+
+If you're on Windows and don't have `make` installed, you can install it via:
+- **Option 1**: Install via [Chocolatey](https://chocolatey.org/): `choco install make`
+- **Option 2**: Install via [Scoop](https://scoop.sh/): `scoop install make`
+- **Option 3**: Use [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/)
+- **Option 4**: Use the direct `uv` commands shown in the sections below
+
 ## Different packages
 
 This repository contains multiple packages:
@@ -48,7 +56,11 @@ uv sync
 Then verify dependency installation:
 
 ```bash
+# If you have `make` installed:
 make test
+
+# If you don't have `make` (Windows alternative):
+uv run --group test pytest -n auto --disable-socket --allow-unix-socket tests/unit_tests
 ```
 
 ## Testing
@@ -61,7 +73,11 @@ If you add new logic, please add a unit test.
 To run unit tests:
 
 ```bash
+# If you have `make` installed:
 make test
+
+# If you don't have make (Windows alternative):
+uv run --group test pytest -n auto --disable-socket --allow-unix-socket tests/unit_tests
 ```
 
 There are also [integration tests and code-coverage](../testing.mdx) available.
@@ -72,7 +88,12 @@ If you are only developing `langchain_core`, you can simply install the dependen
 
 ```bash
 cd libs/core
+
+# If you have `make` installed:
 make test
+
+# If you don't have `make` (Windows alternative):
+uv run --group test pytest -n auto --disable-socket --allow-unix-socket tests/unit_tests
 ```
 
 ## Formatting and linting
@@ -86,20 +107,37 @@ Formatting for this project is done via [ruff](https://docs.astral.sh/ruff/rules
 To run formatting for docs, cookbook and templates:
 
 ```bash
+# If you have `make` installed:
 make format
+
+# If you don't have make (Windows alternative):
+uv run --all-groups ruff format .
+uv run --all-groups ruff check --fix .
 ```
 
 To run formatting for a library, run the same command from the relevant library directory:
 
 ```bash
 cd libs/{LIBRARY}
+
+# If you have `make` installed:
 make format
+
+# If you don't have make (Windows alternative):
+uv run --all-groups ruff format .
+uv run --all-groups ruff check --fix .
 ```
 
 Additionally, you can run the formatter only on the files that have been modified in your current branch as compared to the master branch using the format_diff command:
 
 ```bash
+# If you have `make` installed:
 make format_diff
+
+# If you don't have `make` (Windows alternative):
+# First, get the list of modified files:
+git diff --relative=libs/langchain --name-only --diff-filter=d master | grep -E '\.py$$|\.ipynb$$' | xargs uv run --all-groups ruff format
+git diff --relative=libs/langchain --name-only --diff-filter=d master | grep -E '\.py$$|\.ipynb$$' | xargs uv run --all-groups ruff check --fix
 ```
 
 This is especially useful when you have made changes to a subset of the project and want to ensure your changes are properly formatted without affecting the rest of the codebase.
@@ -111,20 +149,40 @@ Linting for this project is done via a combination of [ruff](https://docs.astral
 To run linting for docs, cookbook and templates:
 
 ```bash
+# If you have `make` installed:
 make lint
+
+# If you don't have `make` (Windows alternative):
+uv run --all-groups ruff check .
+uv run --all-groups ruff format . --diff
+uv run --all-groups mypy . --cache-dir .mypy_cache
 ```
 
 To run linting for a library, run the same command from the relevant library directory:
 
 ```bash
 cd libs/{LIBRARY}
+
+# If you have `make` installed:
 make lint
+
+# If you don't have `make` (Windows alternative):
+uv run --all-groups ruff check .
+uv run --all-groups ruff format . --diff
+uv run --all-groups mypy . --cache-dir .mypy_cache
 ```
 
 In addition, you can run the linter only on the files that have been modified in your current branch as compared to the master branch using the lint_diff command:
 
 ```bash
+# If you have `make` installed:
 make lint_diff
+
+# If you don't have `make` (Windows alternative):
+# First, get the list of modified files:
+git diff --relative=libs/langchain --name-only --diff-filter=d master | grep -E '\.py$$|\.ipynb$$' | xargs uv run --all-groups ruff check
+git diff --relative=libs/langchain --name-only --diff-filter=d master | grep -E '\.py$$|\.ipynb$$' | xargs uv run --all-groups ruff format --diff
+git diff --relative=libs/langchain --name-only --diff-filter=d master | grep -E '\.py$$|\.ipynb$$' | xargs uv run --all-groups mypy --cache-dir .mypy_cache
 ```
 
 This can be very helpful when you've made changes to only certain parts of the project and want to ensure your changes meet the linting standards without having to check the entire codebase.
@@ -139,13 +197,21 @@ Note that `codespell` finds common typos, so it could have false-positive (corre
 To check spelling for this project:
 
 ```bash
+# If you have `make` installed:
 make spell_check
+
+# If you don't have `make` (Windows alternative):
+uv run --all-groups codespell --toml pyproject.toml
 ```
 
 To fix spelling in place:
 
 ```bash
+# If you have `make` installed:
 make spell_fix
+
+# If you don't have `make` (Windows alternative):
+uv run --all-groups codespell --toml pyproject.toml -w
 ```
 
 If codespell is incorrectly flagging a word, you can skip spellcheck for that word by adding it to the codespell config in the `pyproject.toml` file.
````
@@ -124,6 +124,47 @@ start "" htmlcov/index.html || open htmlcov/index.html
|
||||
|
||||
```
|
||||
|
||||
## Snapshot Testing
|
||||
|
||||
Some tests use [syrupy](https://github.com/tophat/syrupy) for snapshot testing, which captures the output of functions and compares them to stored snapshots. This is particularly useful for testing JSON schema generation and other structured outputs.

### Updating Snapshots

To update snapshots when the expected output has legitimately changed:

```bash
uv run --group test pytest path/to/test.py --snapshot-update
```

### Pydantic Version Compatibility Issues

Pydantic generates different JSON schemas across versions, which can cause snapshot test failures in CI when tests run with different Pydantic versions than what was used to generate the snapshots.

**Symptoms:**
- CI fails with snapshot mismatches showing differences like missing or extra fields
- Tests pass locally but fail in CI with different Pydantic versions

**Solution:**
Locally update snapshots using the same Pydantic version that CI uses:

1. **Identify the failing Pydantic version** from CI logs (e.g., `2.7.0`, `2.8.0`, `2.9.0`)

2. **Update snapshots with that version:**

   ```bash
   uv run --with "pydantic==2.9.0" --group test pytest tests/unit_tests/path/to/test.py::test_name --snapshot-update
   ```

3. **Verify compatibility across supported versions:**

   ```bash
   # Test with the version you used to update
   uv run --with "pydantic==2.9.0" --group test pytest tests/unit_tests/path/to/test.py::test_name

   # Test with other supported versions
   uv run --with "pydantic==2.8.0" --group test pytest tests/unit_tests/path/to/test.py::test_name
   ```

**Note:** Some tests use `@pytest.mark.skipif` decorators to run only with specific Pydantic version ranges (e.g., `PYDANTIC_VERSION_AT_LEAST_210`). Make sure to understand these constraints when updating snapshots.
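
As a rough sketch of that version-gating pattern (how `PYDANTIC_VERSION_AT_LEAST_210` is computed here is an assumption for illustration; check how the repo actually defines it):

```python
# Illustrative version gate; the repo's real PYDANTIC_VERSION_AT_LEAST_210
# constant may be defined differently.
import pytest
from pydantic import VERSION as PYDANTIC_VERSION

PYDANTIC_VERSION_AT_LEAST_210 = tuple(
    int(part) for part in PYDANTIC_VERSION.split(".")[:2]
) >= (2, 10)


@pytest.mark.skipif(
    not PYDANTIC_VERSION_AT_LEAST_210,
    reason="Snapshot was generated with pydantic>=2.10",
)
def test_schema_snapshot(snapshot) -> None:
    # This test (and its snapshot) only runs on Pydantic versions that
    # produce the schema shape the snapshot was recorded with.
    ...
```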

## Coverage

Code coverage (i.e. the amount of code that is covered by unit tests) helps identify areas of the code that are potentially more or less brittle.

@@ -24,7 +24,7 @@
"\n",
":::tip\n",
"\n",
"The **default** implementation does **not** provide support for token-by-token streaming, but it ensures that the the model can be swapped in for any other model as it supports the same standard interface.\n",
"The **default** implementation does **not** provide support for token-by-token streaming, but it ensures that the model can be swapped in for any other model as it supports the same standard interface.\n",
"\n",
":::\n",
"\n",
@@ -323,7 +323,7 @@
"source": [
"## RAG based approach\n",
"\n",
"Another simple idea is to chunk up the text, but instead of extracting information from every chunk, just focus on the the most relevant chunks.\n",
"Another simple idea is to chunk up the text, but instead of extracting information from every chunk, just focus on the most relevant chunks.\n",
"\n",
":::caution\n",
"It can be difficult to identify which chunks are relevant.\n",
@@ -104,7 +104,7 @@
"source": [
"## Chaining\n",
"\n",
"`filter_messages` can be used in an imperatively (like above) or declaratively, making it easy to compose with other components in a chain:"
"`filter_messages` can be used imperatively (like above) or declaratively, making it easy to compose with other components in a chain:"
]
},
{
@@ -122,13 +122,13 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain_experimental.graph_transformers import LLMGraphTransformer\n",
"# from langchain_experimental.graph_transformers import LLMGraphTransformer\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0, model_name=\"gpt-4-turbo\")\n",
@@ -199,7 +199,7 @@
"outputs": [],
"source": [
"def _clear():\n",
" \"\"\"Hacky helper method to clear content. See the `full` mode section to to understand why it works.\"\"\"\n",
" \"\"\"Hacky helper method to clear content. See the `full` mode section to understand why it works.\"\"\"\n",
" index([], record_manager, vectorstore, cleanup=\"full\", source_id_key=\"source\")"
]
},
@@ -88,7 +88,7 @@
"source": [
"## Chaining\n",
"\n",
"`merge_message_runs` can be used in an imperatively (like above) or declaratively, making it easy to compose with other components in a chain:"
"`merge_message_runs` can be used imperatively (like above) or declaratively, making it easy to compose with other components in a chain:"
]
},
{
@@ -74,12 +74,12 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "a88ff70c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_experimental.text_splitter import SemanticChunker\n",
"# from langchain_experimental.text_splitter import SemanticChunker\n",
"from langchain_openai.embeddings import OpenAIEmbeddings\n",
"\n",
"text_splitter = SemanticChunker(OpenAIEmbeddings())"
@@ -612,56 +612,11 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": null,
"id": "35ea904e-795f-411b-bef8-6484dbb6e35c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `python_repl_ast` with `{'query': \"df[['Age', 'Fare']].corr().iloc[0,1]\"}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m0.11232863699941621\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `python_repl_ast` with `{'query': \"df[['Fare', 'Survived']].corr().iloc[0,1]\"}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m0.2561785496289603\u001b[0m\u001b[32;1m\u001b[1;3mThe correlation between Age and Fare is approximately 0.112, and the correlation between Fare and Survival is approximately 0.256.\n",
"\n",
"Therefore, the correlation between Fare and Survival (0.256) is greater than the correlation between Age and Fare (0.112).\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': \"What's the correlation between age and fare? is that greater than the correlation between fare and survival?\",\n",
" 'output': 'The correlation between Age and Fare is approximately 0.112, and the correlation between Fare and Survival is approximately 0.256.\\n\\nTherefore, the correlation between Fare and Survival (0.256) is greater than the correlation between Age and Fare (0.112).'}"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_experimental.agents import create_pandas_dataframe_agent\n",
"\n",
"agent = create_pandas_dataframe_agent(\n",
"    llm, df, agent_type=\"openai-tools\", verbose=True, allow_dangerous_code=True\n",
")\n",
"agent.invoke(\n",
"    {\n",
"        \"input\": \"What's the correlation between age and fare? is that greater than the correlation between fare and survival?\"\n",
"    }\n",
")"
]
"outputs": [],
"source": "from langchain_experimental.agents import create_pandas_dataframe_agent\n\nagent = create_pandas_dataframe_agent(\n    llm, df, agent_type=\"openai-tools\", verbose=True, allow_dangerous_code=True\n)\nagent.invoke(\n    {\n        \"input\": \"What's the correlation between age and fare? is that greater than the correlation between fare and survival?\"\n    }\n)"
},
{
"cell_type": "markdown",
@@ -786,4 +741,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}
@@ -45,7 +45,7 @@
"\n",
"### Credentials\n",
"\n",
"To use the integration you must:\n",
"To use the integration you must either:\n",
"- Have credentials configured for your environment (gcloud, workload identity, etc...)\n",
"- Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable\n",
"\n",
298 docs/docs/integrations/chat/gradientai.ipynb Normal file
@@ -0,0 +1,298 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: DigitalOcean Gradient\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatGradient\n",
"\n",
"This will help you get started with DigitalOcean Gradient Chat Models.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: |\n",
"| [DigitalOcean Gradient](https://python.langchain.com/docs/api_reference/llms/langchain_gradient.llms.LangchainGradient/) | [langchain-gradient](https://python.langchain.com/docs/api_reference/langchain-gradient_api_reference/) |  |  |\n",
"\n",
"\n",
"## Setup\n",
"\n",
"langchain-gradient uses the DigitalOcean Gradient Platform. \n",
"\n",
"Create an account on DigitalOcean, acquire a `DIGITALOCEAN_INFERENCE_KEY` API key from the Gradient Platform, and install the `langchain-gradient` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [DigitalOcean Login](https://cloud.digitalocean.com/login) \n",
"\n",
"1. Sign up/Login to DigitalOcean Cloud Console\n",
"2. Go to the Gradient Platform and navigate to Serverless Inference.\n",
"3. Click on Create model access key, enter a name, and create the key.\n",
"\n",
"Once you've done this, set the `DIGITALOCEAN_INFERENCE_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"DIGITALOCEAN_INFERENCE_KEY\"):\n",
" os.environ[\"DIGITALOCEAN_INFERENCE_KEY\"] = getpass.getpass(\n",
" \"Enter your DIGITALOCEAN_INFERENCE_KEY API key: \"\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The DigitalOcean Gradient integration lives in the `langchain-gradient` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.0\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m25.1.1\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip3.12 install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-gradient"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_gradient import ChatGradient\n",
"\n",
"llm = ChatGradient(\n",
" model=\"llama3.3-70b-instruct\",\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"...that had been hidden away for centuries, nestled amongst the twisted roots of an ancient tree. As soon as Mira's fingers made contact with the stone, she felt an sudden surge of energy course through her veins, like a river bursting its banks. The stone, which had been dull and lifeless just moments before, now pulsed with a soft, ethereal light, as if it had been awakened by Mira's touch.\\n\\nIntrigued, Mira turned the stone over in her hand, studying it from every angle. The light emanating from it cast eerie shadows on the trees around her, making her feel as though she was standing at the threshold of a secret world. As she gazed deeper into the stone, she began to notice that the glow was not just a random color, but a deep, rich blue that seemed to be calling to her.\\n\\nWithout thinking, Mira felt an overwhelming urge to follow the stone's gentle glow, which seemed to be leading her deeper into the mysterious forest. The trees loomed above her, their branches creaking and swaying in the wind, as if they too were urging her onward. The air was filled with the sweet scent of wildflowers and the soft hooting of owls, creating a sense of enchantment that was both exhilarating and unsettling.\\n\\nAs Mira wandered deeper into the forest, the stone's light grew brighter, illuminating a winding path that was all but invisible in the fading light of day. The trees grew taller and closer together here, forming a tunnel of foliage that seemed to be guiding her towards a hidden destination. Mira's heart pounded with excitement and a hint of fear, as she realized that she was being drawn into a world that was both magical and unknown.\\n\\nSuddenly, the trees parted, and Mira found herself standing at the edge of a clearing, surrounded by a ring of towering mushrooms that glowed with a soft, luminescent light. The air was filled with a faint humming noise, like the buzzing of a thousand bees, and the stone in her hand pulsed with an otherworldly energy. In the center of the clearing stood an enormous tree, its trunk twisted and gnarled with age, its branches reaching up towards the stars like a Nature's own cathedral.\\n\\nMira felt a sense of awe wash over her, as she approached the tree, the stone still clutched in her hand. She could feel the magic of the forest pulsing through her, calling to her, drawing her closer to the heart of the mystery. And as she reached out to touch the trunk of the tree, the stone's glow surged to a brilliant intensity, illuminating a doorway that had been hidden in the trunk all along...\", additional_kwargs={}, response_metadata={'finish_reason': 'stop'}, id='run--593a6940-4c76-413b-bed9-1fd94f91c6c1-0', usage_metadata={'input_tokens': 82, 'output_tokens': 555, 'total_tokens': 637})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a creative storyteller. Continue any story prompt you receive in an engaging and imaginative way.\",\n",
" ),\n",
" (\n",
" \"human\",\n",
" \"Once upon a time, in a village at the edge of a mysterious forest, a young girl named Mira found a glowing stone...\",\n",
" ),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"...that had been hidden away for centuries, nestled amongst the twisted roots of an ancient tree. As soon as Mira's fingers made contact with the stone, she felt an sudden surge of energy course through her veins, like a river bursting its banks. The stone, which had been dull and lifeless just moments before, now pulsed with a soft, ethereal light, as if it had been awakened by Mira's touch.\n",
"\n",
"Intrigued, Mira turned the stone over in her hand, studying it from every angle. The light emanating from it cast eerie shadows on the trees around her, making her feel as though she was standing at the threshold of a secret world. As she gazed deeper into the stone, she began to notice that the glow was not just a random color, but a deep, rich blue that seemed to be calling to her.\n",
"\n",
"Without thinking, Mira felt an overwhelming urge to follow the stone's gentle glow, which seemed to be leading her deeper into the mysterious forest. The trees loomed above her, their branches creaking and swaying in the wind, as if they too were urging her onward. The air was filled with the sweet scent of wildflowers and the soft hooting of owls, creating a sense of enchantment that was both exhilarating and unsettling.\n",
"\n",
"As Mira wandered deeper into the forest, the stone's light grew brighter, illuminating a winding path that was all but invisible in the fading light of day. The trees grew taller and closer together here, forming a tunnel of foliage that seemed to be guiding her towards a hidden destination. Mira's heart pounded with excitement and a hint of fear, as she realized that she was being drawn into a world that was both magical and unknown.\n",
"\n",
"Suddenly, the trees parted, and Mira found herself standing at the edge of a clearing, surrounded by a ring of towering mushrooms that glowed with a soft, luminescent light. The air was filled with a faint humming noise, like the buzzing of a thousand bees, and the stone in her hand pulsed with an otherworldly energy. In the center of the clearing stood an enormous tree, its trunk twisted and gnarled with age, its branches reaching up towards the stars like a Nature's own cathedral.\n",
"\n",
"Mira felt a sense of awe wash over her, as she approached the tree, the stone still clutched in her hand. She could feel the magic of the forest pulsing through her, calling to her, drawing her closer to the heart of the mystery. And as she reached out to touch the trunk of the tree, the stone's glow surged to a brilliant intensity, illuminating a doorway that had been hidden in the trunk all along...\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can chain our model with a prompt template like so:\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The Eiffel Tower was designed by Gustave Eiffel\\'s engineering company and was completed in 1889. (Sentence: \"It was designed by Gustave Eiffel\\'s engineering company. The tower is one of the most recognizable structures in the world. ... The Eiffel Tower is located in Paris and was completed in 1889.\")', additional_kwargs={}, response_metadata={'finish_reason': 'stop'}, id='run--c23ffab6-06ae-4130-87b1-d5b2e7744906-0', usage_metadata={'input_tokens': 153, 'output_tokens': 74, 'total_tokens': 227})"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" 'You are a knowledgeable assistant. Carefully read the provided context and answer the user\\'s question. If the answer is present in the context, cite the relevant sentence. If not, reply with \"Not found in context.\"',\n",
" ),\n",
" (\"human\", \"Context: {context}\\nQuestion: {question}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"context\": (\n",
" \"The Eiffel Tower is located in Paris and was completed in 1889. \"\n",
" \"It was designed by Gustave Eiffel's engineering company. \"\n",
" \"The tower is one of the most recognizable structures in the world. \"\n",
" \"The Statue of Liberty was a gift from France to the United States.\"\n",
" ),\n",
" \"question\": \"Who designed the Eiffel Tower and when was it completed?\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "8a6660e4",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatGradient features and configurations head to the API reference."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -52,7 +52,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatHuggingFace](https://python.langchain.com/api_reference/huggingface/chat_models/langchain_huggingface.chat_models.huggingface.ChatHuggingFace.html) | [langchain_huggingface](https://python.langchain.com/api_reference/huggingface/index.html) | ✅ | ❌ | ❌ |  |  |\n",
"| [ChatHuggingFace](https://python.langchain.com/api_reference/huggingface/chat_models/langchain_huggingface.chat_models.huggingface.ChatHuggingFace.html) | [langchain-huggingface](https://python.langchain.com/api_reference/huggingface/index.html) | ✅ | ❌ | ❌ |  |  |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
@@ -61,7 +61,7 @@
"\n",
"## Setup\n",
"\n",
"To access `langchain_huggingface` models you'll need to create a/an `Hugging Face` account, get an API key, and install the `langchain_huggingface` integration package.\n",
"To access `langchain_huggingface` models you'll need to create a `Hugging Face` account, get an API key, and install the `langchain-huggingface` integration package.\n",
"\n",
"### Credentials\n",
"\n",
@@ -24,7 +24,7 @@
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/mistral) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatMistralAI](https://python.langchain.com/api_reference/mistralai/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html) | [langchain_mistralai](https://python.langchain.com/api_reference/mistralai/index.html) | ❌ | beta | ✅ |  |  |\n",
"| [ChatMistralAI](https://python.langchain.com/api_reference/mistralai/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html) | [langchain-mistralai](https://python.langchain.com/api_reference/mistralai/index.html) | ❌ | beta | ✅ |  |  |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
@@ -34,7 +34,7 @@
"## Setup\n",
"\n",
"\n",
"To access `ChatMistralAI` models you'll need to create a Mistral account, get an API key, and install the `langchain_mistralai` integration package.\n",
"To access `ChatMistralAI` models you'll need to create a Mistral account, get an API key, and install the `langchain-mistralai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
@@ -80,7 +80,7 @@
"source": [
"### Installation\n",
"\n",
"The LangChain Mistral integration lives in the `langchain_mistralai` package:"
"The LangChain Mistral integration lives in the `langchain-mistralai` package:"
]
},
{
@@ -90,7 +90,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_mistralai"
"%pip install -qU langchain-mistralai"
]
},
{
File diff suppressed because it is too large
@@ -41,7 +41,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatNVIDIA](https://python.langchain.com/api_reference/nvidia_ai_endpoints/chat_models/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html) | [langchain_nvidia_ai_endpoints](https://python.langchain.com/api_reference/nvidia_ai_endpoints/index.html) | ✅ | beta | ❌ |  |  |\n",
"| [ChatNVIDIA](https://python.langchain.com/api_reference/nvidia_ai_endpoints/chat_models/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html) | [langchain-nvidia-ai-endpoints](https://python.langchain.com/api_reference/nvidia_ai_endpoints/index.html) | ✅ | beta | ❌ |  |  |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
@@ -102,7 +102,7 @@
"source": [
"### Installation\n",
"\n",
"The LangChain NVIDIA AI Endpoints integration lives in the `langchain_nvidia_ai_endpoints` package:"
"The LangChain NVIDIA AI Endpoints integration lives in the `langchain-nvidia-ai-endpoints` package:"
]
},
{
@@ -447,6 +447,163 @@
")"
]
},
{
"cell_type": "markdown",
"id": "c5d9d19d-8ab1-4d9d-b3a0-56ee4e89c528",
"metadata": {},
"source": [
"### Custom tools\n",
"\n",
":::info Requires ``langchain-openai>=0.3.29``\n",
"\n",
":::\n",
"\n",
"[Custom tools](https://platform.openai.com/docs/guides/function-calling#custom-tools) support tools with arbitrary string inputs. They can be particularly useful when you expect your string arguments to be long or complex."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a47c809b-852f-46bd-8b9e-d9534c17213d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Use the tool to calculate 3^3.\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"[{'id': 'rs_6894ff5747c0819d9b02fc5645b0be9c000169fd9fb68d99', 'summary': [], 'type': 'reasoning'}, {'call_id': 'call_7SYwMSQPbbEqFcKlKOpXeEux', 'input': 'print(3**3)', 'name': 'execute_code', 'type': 'custom_tool_call', 'id': 'ctc_6894ff5b9f54819d8155a63638d34103000169fd9fb68d99', 'status': 'completed'}]\n",
"Tool Calls:\n",
" execute_code (call_7SYwMSQPbbEqFcKlKOpXeEux)\n",
" Call ID: call_7SYwMSQPbbEqFcKlKOpXeEux\n",
" Args:\n",
" __arg1: print(3**3)\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: execute_code\n",
"\n",
"[{'type': 'custom_tool_call_output', 'output': '27'}]\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"[{'type': 'text', 'text': '27', 'annotations': [], 'id': 'msg_6894ff5db3b8819d9159b3a370a25843000169fd9fb68d99'}]\n"
]
}
],
"source": [
"from langchain_openai import ChatOpenAI, custom_tool\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"\n",
"@custom_tool\n",
"def execute_code(code: str) -> str:\n",
" \"\"\"Execute python code.\"\"\"\n",
" return \"27\"\n",
"\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-5\", output_version=\"responses/v1\")\n",
"\n",
"agent = create_react_agent(llm, [execute_code])\n",
"\n",
"input_message = {\"role\": \"user\", \"content\": \"Use the tool to calculate 3^3.\"}\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"id": "5ef93be6-6d4c-4eea-acfd-248774074082",
"metadata": {},
"source": [
"<details>\n",
"<summary>Context-free grammars</summary>\n",
"\n",
"OpenAI supports the specification of a [context-free grammar](https://platform.openai.com/docs/guides/function-calling#context-free-grammars) for custom tool inputs in `lark` or `regex` format. See [OpenAI docs](https://platform.openai.com/docs/guides/function-calling#context-free-grammars) for details. The `format` parameter can be passed into `@custom_tool` as shown below:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "2ae04586-be33-49c6-8947-7867801d868f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Use the tool to calculate 3^3.\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"[{'id': 'rs_689500828a8481a297ff0f98e328689c0681550c89797f43', 'summary': [], 'type': 'reasoning'}, {'call_id': 'call_jzH01RVhu6EFz7yUrOFXX55s', 'input': '3 * 3 * 3', 'name': 'do_math', 'type': 'custom_tool_call', 'id': 'ctc_6895008d57bc81a2b84d0993517a66b90681550c89797f43', 'status': 'completed'}]\n",
"Tool Calls:\n",
" do_math (call_jzH01RVhu6EFz7yUrOFXX55s)\n",
" Call ID: call_jzH01RVhu6EFz7yUrOFXX55s\n",
" Args:\n",
" __arg1: 3 * 3 * 3\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: do_math\n",
"\n",
"[{'type': 'custom_tool_call_output', 'output': '27'}]\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"[{'type': 'text', 'text': '27', 'annotations': [], 'id': 'msg_6895009776b881a2a25f0be8507d08f20681550c89797f43'}]\n"
]
}
],
"source": [
"from langchain_openai import ChatOpenAI, custom_tool\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"grammar = \"\"\"\n",
"start: expr\n",
"expr: term (SP ADD SP term)* -> add\n",
"| term\n",
"term: factor (SP MUL SP factor)* -> mul\n",
"| factor\n",
"factor: INT\n",
"SP: \" \"\n",
"ADD: \"+\"\n",
"MUL: \"*\"\n",
"%import common.INT\n",
"\"\"\"\n",
"\n",
"format_ = {\"type\": \"grammar\", \"syntax\": \"lark\", \"definition\": grammar}\n",
"\n",
"\n",
"# highlight-next-line\n",
"@custom_tool(format=format_)\n",
"def do_math(input_string: str) -> str:\n",
" \"\"\"Do a mathematical operation.\"\"\"\n",
" return \"27\"\n",
"\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-5\", output_version=\"responses/v1\")\n",
"\n",
"agent = create_react_agent(llm, [do_math])\n",
"\n",
"input_message = {\"role\": \"user\", \"content\": \"Use the tool to calculate 3^3.\"}\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"id": "c63430c9-c7b0-4e92-a491-3f165dddeb8f",
"metadata": {},
"source": [
"</details>"
]
},
{
"cell_type": "markdown",
"id": "84833dd0-17e9-4269-82ed-550639d65751",
@@ -69,7 +69,7 @@
},
"outputs": [],
"source": [
"%pip install -qU langchain_agentql"
"%pip install -qU langchain-agentql"
]
},
{
@@ -310,7 +310,7 @@
"from langchain_openai import OpenAI\n",
"\n",
"chain = load_qa_chain(llm=OpenAI(), chain_type=\"map_reduce\")\n",
"query = [\"Who are the autors?\"]\n",
"query = [\"Who are the authors?\"]\n",
"\n",
"chain.run(input_documents=documents, question=query)"
]
@@ -25,7 +25,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet azureml-fsspec, azure-ai-generative"
"%pip install --upgrade --quiet azureml-fsspec azure-ai-generative"
]
},
{
@@ -16,7 +16,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [BSHTMLLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.html_bs.BSHTMLLoader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | 
"| [BSHTMLLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.html_bs.BSHTMLLoader.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | 
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | 
@@ -52,7 +52,7 @@
"source": [
"### Installation\n",
"\n",
"Install **langchain_community** and **bs4**."
"Install **langchain-community** and **bs4**."
]
},
{
@@ -61,7 +61,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_community bs4"
"%pip install -qU langchain-community bs4"
]
},
{
@@ -245,7 +245,7 @@
}
],
"source": [
"%pip install -q --progress-bar off --no-warn-conflicts langchain-core langchain-huggingface langchain_milvus langchain python-dotenv"
"%pip install -q --progress-bar off --no-warn-conflicts langchain-core langchain-huggingface langchain-milvus langchain python-dotenv"
]
},
{
@@ -15,7 +15,7 @@
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/document_loaders/web_loaders/firecrawl/)|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [FireCrawlLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.firecrawl.FireCrawlLoader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ✅ | 
"| [FireCrawlLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.firecrawl.FireCrawlLoader.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ✅ | 
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | 
@@ -15,7 +15,7 @@
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/document_loaders/file_loaders/json/)|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [JSONLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.json_loader.JSONLoader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ✅ | 
"| [JSONLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.json_loader.JSONLoader.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ✅ | 
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | 
@@ -51,7 +51,7 @@
"source": [
"### Installation\n",
"\n",
"Install **langchain_community** and **jq**:"
"Install **langchain-community** and **jq**:"
]
},
{
@@ -60,7 +60,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_community jq "
"%pip install -qU langchain-community jq "
]
},
{
@@ -13,7 +13,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [MathPixPDFLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.MathpixPDFLoader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | 
"| [MathPixPDFLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.MathpixPDFLoader.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | 
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | 
@@ -60,7 +60,7 @@
"source": [
"### Installation\n",
"\n",
"Install **langchain_community**."
"Install **langchain-community**."
]
},
{
@@ -69,7 +69,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_community"
"%pip install -qU langchain-community"
]
},
{
@@ -15,7 +15,7 @@
"source": [
"[Socrata](https://dev.socrata.com/foundry/data.sfgov.org/vw6y-z8j6) provides an API for city open data. \n",
"\n",
"For a dataset such as [SF crime](https://data.sfgov.org/Public-Safety/Police-Department-Incident-Reports-Historical-2003/tmnf-yvry), to to the `API` tab on top right. \n",
"For a dataset such as [SF crime](https://data.sfgov.org/Public-Safety/Police-Department-Incident-Reports-Historical-2003/tmnf-yvry), see the `API` tab on top right. \n",
"\n",
"That provides you with the `dataset identifier`.\n",
"\n",
@@ -15,7 +15,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support|\n",
"|:-----------------------------------------------------------------------------------------------------------------------------------------------------| :--- | :---: | :---: | :---: |\n",
"| [PDFMinerLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PDFMinerLoader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ |\n",
"| [PDFMinerLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PDFMinerLoader.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ |\n",
"\n",
"--------- \n",
"\n",
@@ -60,7 +60,7 @@
"source": [
"### Installation\n",
"\n",
"Install **langchain_community** and **pdfminer**."
"Install **langchain-community** and **pdfminer**."
]
},
{
@@ -82,7 +82,7 @@
}
],
"source": [
"%pip install -qU langchain_community pdfminer.six"
"%pip install -qU langchain-community pdfminer.six"
]
},
{
@@ -938,7 +938,7 @@
}
],
"source": [
"%pip install -qU langchain_openai"
"%pip install -qU langchain-openai"
]
},
{
@@ -13,7 +13,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [PDFPlumberLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PDFPlumberLoader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | 
"| [PDFPlumberLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PDFPlumberLoader.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | 
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | 
@@ -47,7 +47,7 @@
"source": [
"### Installation\n",
"\n",
"Install **langchain_community**."
"Install **langchain-community**."
]
},
{
@@ -56,7 +56,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_community"
"%pip install -qU langchain-community"
]
},
{
@@ -117,7 +117,7 @@
"metadata": {},
"source": [
"The fields:\n",
" - `es_host_url` is the endpoint to to MetadataIQ Elasticsearch database\n",
" - `es_host_url` is the endpoint to MetadataIQ Elasticsearch database\n",
" - `es_index_index` is the name of the index where PowerScale writes it file system metadata\n",
" - `es_api_key` is the **encoded** version of your elasticsearch API key\n",
" - `folder_path` is the path on PowerScale to be queried for changes"
@@ -15,7 +15,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [PyMuPDFLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyMuPDFLoader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | 
"| [PyMuPDFLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyMuPDFLoader.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | 
"\n",
"--------- \n",
"\n",
@@ -60,7 +60,7 @@
"source": [
"### Installation\n",
"\n",
"Install **langchain_community** and **pymupdf**."
"Install **langchain-community** and **pymupdf**."
]
},
{
@@ -71,7 +71,7 @@
"start_time": "2025-01-16T09:48:33.057015Z"
}
},
"source": "%pip install -qU langchain_community pymupdf",
"source": "%pip install -qU langchain-community pymupdf",
"outputs": [
{
"name": "stdout",
@@ -569,7 +569,7 @@
}
},
"source": [
"%pip install -qU langchain_openai"
"%pip install -qU langchain-openai"
],
"outputs": [
{
@@ -23,7 +23,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support |\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [PyMuPDF4LLMLoader](https://github.com/lakinduboteju/langchain-pymupdf4llm) | [langchain_pymupdf4llm](https://pypi.org/project/langchain-pymupdf4llm) | ✅ | ❌ | ❌ |\n",
"| [PyMuPDF4LLMLoader](https://github.com/lakinduboteju/langchain-pymupdf4llm) | [langchain-pymupdf4llm](https://pypi.org/project/langchain-pymupdf4llm) | ✅ | ❌ | ❌ |\n",
"\n",
"### Loader features\n",
"\n",
@@ -61,7 +61,7 @@
"source": [
"### Installation\n",
"\n",
"Install **langchain_community** and **langchain-pymupdf4llm**."
"Install **langchain-community** and **langchain-pymupdf4llm**."
]
},
{
@@ -78,7 +78,7 @@
}
],
"source": [
"%pip install -qU langchain_community langchain-pymupdf4llm"
"%pip install -qU langchain-community langchain-pymupdf4llm"
]
},
{
@@ -554,7 +554,7 @@
}
],
"source": [
"%pip install -qU langchain_openai"
"%pip install -qU langchain-openai"
]
},
{
@@ -14,7 +14,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [PyPDFDirectoryLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFDirectoryLoader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | 
"| [PyPDFDirectoryLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFDirectoryLoader.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | 
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | 
@@ -53,7 +53,7 @@
"source": [
"### Installation\n",
"\n",
"Install **langchain_community**."
"Install **langchain-community**."
]
},
{
@@ -74,7 +74,7 @@
]
}
],
"source": "%pip install -qU langchain_community pypdf pillow"
"source": "%pip install -qU langchain-community pypdf pillow"
},
{
"cell_type": "markdown",
@@ -15,7 +15,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [PyPDFLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFLoader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | 
"| [PyPDFLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFLoader.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | 
" \n",
"--------- \n",
"\n",
@@ -60,7 +60,7 @@
"source": [
"### Installation\n",
"\n",
"Install **langchain_community** and **pypdf**."
"Install **langchain-community** and **pypdf**."
]
},
{
@@ -81,7 +81,7 @@
]
}
],
"source": "%pip install -qU langchain_community pypdfium2"
"source": "%pip install -qU langchain-community pypdfium2"
},
{
"cell_type": "markdown",
@@ -802,7 +802,7 @@
}
],
"source": [
"%pip install -qU langchain_openai"
"%pip install -qU langchain-openai"
]
},
{
@@ -15,7 +15,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [PyPDFLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFLoader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | 
"| [PyPDFLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFLoader.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | 
" \n",
"--------- \n",
"\n",
@@ -60,7 +60,7 @@
"source": [
"### Installation\n",
"\n",
"Install **langchain_community** and **pypdf**."
"Install **langchain-community** and **pypdf**."
]
},
{
@@ -82,7 +82,7 @@
}
],
"source": [
"%pip install -qU langchain_community pypdf"
"%pip install -qU langchain-community pypdf"
]
},
{
@@ -818,7 +818,7 @@
}
],
"source": [
"%pip install -qU langchain_openai"
"%pip install -qU langchain-openai"
]
},
{
@@ -14,7 +14,7 @@
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/document_loaders/web_loaders/recursive_url_loader/)|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [RecursiveUrlLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.recursive_url_loader.RecursiveUrlLoader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ✅ | 
"| [RecursiveUrlLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.recursive_url_loader.RecursiveUrlLoader.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ✅ | 
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | 
@@ -11,7 +11,7 @@
"\n",
"This loader fetches the text from the Posts of Subreddits or Reddit users, using the `praw` Python package.\n",
"\n",
"Make a [Reddit Application](https://www.reddit.com/prefs/apps/) and initialize the loader with with your Reddit API credentials."
"Make a [Reddit Application](https://www.reddit.com/prefs/apps/) and initialize the loader with your Reddit API credentials."
]
},
{
@@ -15,7 +15,7 @@
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/document_loaders/web_loaders/sitemap/)|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [SiteMapLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.sitemap.SitemapLoader.html#langchain_community.document_loaders.sitemap.SitemapLoader) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ✅ | 
"| [SiteMapLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.sitemap.SitemapLoader.html#langchain_community.document_loaders.sitemap.SitemapLoader) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ✅ | 
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | 
@@ -51,7 +51,7 @@
"source": [
"### Installation\n",
"\n",
"Install **langchain_community**."
"Install **langchain-community**."
]
},
{
@@ -16,7 +16,7 @@
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/document_loaders/file_loaders/unstructured/)|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [UnstructuredLoader](https://python.langchain.com/api_reference/unstructured/document_loaders/langchain_unstructured.document_loaders.UnstructuredLoader.html) | [langchain_unstructured](https://python.langchain.com/api_reference/unstructured/index.html) | ✅ | ❌ | ✅ | 
"| [UnstructuredLoader](https://python.langchain.com/api_reference/unstructured/document_loaders/langchain_unstructured.document_loaders.UnstructuredLoader.html) | [langchain-unstructured](https://python.langchain.com/api_reference/unstructured/index.html) | ✅ | ❌ | ✅ | 
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | 
@@ -151,10 +151,10 @@
"Red arrow magic !\n",
"Something white\n",
"Something Red\n",
"This a a completly useless diagramm, cool !!\n",
"This a completely useless diagram, cool !!\n",
"\n",
"But this is for example !\n",
"This diagramm is a base of many pages in this file. But it is editable in file \\"BG WITH CONTENT\\"\n",
"This diagram is a base of many pages in this file. But it is editable in file \\"BG WITH CONTENT\\"\n",
"This is a page with something...\n",
"\n",
"WAW I have learned something !\n",
@@ -183,10 +183,10 @@
"This is a title\n",
"Something white\n",
"Something Red\n",
"This a a completly useless diagramm, cool !!\n",
"This a completely useless diagram, cool !!\n",
"\n",
"But this is for example !\n",
"This diagramm is a base of many pages in this file. But it is editable in file \\"BG WITH CONTENT\\"\n",
"This diagram is a base of many pages in this file. But it is editable in file \\"BG WITH CONTENT\\"\n",
"Another RED arrow wow\n",
"Arrow with point but red\n",
"Green line\n",
@@ -219,10 +219,10 @@
"Red arrow magic !\n",
"Something white\n",
"Something Red\n",
"This a a completly useless diagramm, cool !!\n",
"This a completely useless diagram, cool !!\n",
"\n",
"But this is for example !\n",
"This diagramm is a base of many pages in this file. But it is editable in file \\"BG WITH CONTENT\\"\n",
"This diagram is a base of many pages in this file. But it is editable in file \\"BG WITH CONTENT\\"\n",
"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor\n",
"\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0-\\u00a0incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in\n",
"\n",
@@ -252,10 +252,10 @@
"This is a title\n",
"Something white\n",
"Something Red\n",
"This a a completly useless diagramm, cool !!\n",
"This a completely useless diagram, cool !!\n",
"\n",
"But this is for example !\n",
"This diagramm is a base of many pages in this file. But it is editable in file \\"BG WITH CONTENT\\"\n",
"This diagram is a base of many pages in this file. But it is editable in file \\"BG WITH CONTENT\\"\n",
"\n",
"------ Page 7 ------\n",
"Title page : Useful ↔ Useless page\n",
@@ -276,10 +276,10 @@
"This is a title\n",
"Something white\n",
"Something Red\n",
"This a a completly useless diagramm, cool !!\n",
"This a completely useless diagram, cool !!\n",
"\n",
"But this is for example !\n",
"This diagramm is a base of many pages in this file. But it is editable in file \\"BG WITH CONTENT\\"\n",
"This diagram is a base of many pages in this file. But it is editable in file \\"BG WITH CONTENT\\"\n",
"Title of this document : BLABLABLA\n",
"\n",
"------ Page 8 ------\n",
@@ -359,10 +359,10 @@
"Red arrow magic !\n",
"Something white\n",
"Something Red\n",
"This a a completly useless diagramm, cool !!\n",
"This a completely useless diagram, cool !!\n",
"\n",
"But this is for example !\n",
"This diagramm is a base of many pages in this file. But it is editable in file \\"BG WITH CONTENT\\"\n",
"This diagram is a base of many pages in this file. But it is editable in file \\"BG WITH CONTENT\\"\n",
"Useful\\u2194 Useless page\\u00a0\n",
"\n",
"Tests of some exotics characters :\\u00a0\\u00e3\\u00e4\\u00e5\\u0101\\u0103 \\u00fc\\u2554\\u00a0\\u00a0\\u00bc \\u00c7 \\u25d8\\u25cb\\u2642\\u266b\\u2640\\u00ee\\u2665\n",
@@ -444,10 +444,10 @@
"Red arrow magic !\n",
"Something white\n",
"Something Red\n",
"This a a completly useless diagramm, cool !!\n",
"This a completely useless diagram, cool !!\n",
"\n",
"But this is for example !\n",
"This diagramm is a base of many pages in this file. But it is editable in file \\"BG WITH CONTENT\\"\n",
"This diagram is a base of many pages in this file. But it is editable in file \\"BG WITH CONTENT\\"\n",
"Only connectors on this page. This is the CoNNeCtor page\n"
]
}
@@ -20,7 +20,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
-"| [WebBaseLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | \n",
+"| [WebBaseLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | \n",
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | \n",
@@ -44,7 +44,7 @@
"metadata": {},
"outputs": [],
"source": [
-"%pip install -qU langchain_community beautifulsoup4"
+"%pip install -qU langchain-community beautifulsoup4"
]
},
{
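
For orientation, a minimal sketch of the `WebBaseLoader` usage this notebook documents; the URL is a placeholder and network access is assumed:

```python
# Sketch only: fetches and parses a single page with BeautifulSoup.
from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://example.com")
docs = loader.load()
print(docs[0].metadata)  # e.g. {'source': 'https://example.com', ...}
```
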
@@ -132,12 +132,13 @@
},
{
"cell_type": "code",
-"execution_count": 6,
+"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.documents import Document\n",
-"from langchain_experimental.graph_transformers import LLMGraphTransformer\n",
"\n",
+"# from langchain_experimental.graph_transformers import LLMGraphTransformer\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Define the LLMGraphTransformer\n",
@@ -548,12 +548,12 @@
},
{
"cell_type": "code",
-"execution_count": 14,
+"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.documents import Document\n",
-"from langchain_experimental.graph_transformers import LLMGraphTransformer"
+"# from langchain_experimental.graph_transformers import LLMGraphTransformer"
]
},
{
@@ -261,7 +261,7 @@
"metadata": {},
"outputs": [],
"source": [
-"%pip install -qU upstash_redis"
+"%pip install -qU upstash-redis"
]
},
{
@@ -1543,7 +1543,7 @@
}
],
"source": [
-"%pip install -qU langchain_astradb\n",
+"%pip install -qU langchain-astradb\n",
"\n",
"import getpass\n",
"\n",
@@ -2683,7 +2683,7 @@
"metadata": {},
"outputs": [],
"source": [
-"%pip install -qU langchain_couchbase"
+"%pip install -qU langchain-couchbase"
]
},
{
@@ -34,7 +34,7 @@
"metadata": {},
"outputs": [],
"source": [
-"%pip install --upgrade --quiet langchain_aws"
+"%pip install --upgrade --quiet langchain-aws"
]
},
{
@@ -22,7 +22,7 @@
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/llms/cohere/) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
-"| [Cohere](https://python.langchain.com/api_reference/community/llms/langchain_community.llms.cohere.Cohere.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ❌ | beta | ✅ |  |  |\n"
+"| [Cohere](https://python.langchain.com/api_reference/community/llms/langchain_community.llms.cohere.Cohere.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | beta | ✅ |  |  |\n"
]
},
{
@@ -22,7 +22,7 @@
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.1/docs/integrations/llms/fireworks/) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
-"| [Fireworks](https://python.langchain.com/api_reference/fireworks/llms/langchain_fireworks.llms.Fireworks.html#langchain_fireworks.llms.Fireworks) | [langchain_fireworks](https://python.langchain.com/api_reference/fireworks/index.html) | ❌ | ❌ | ✅ |  |  |"
+"| [Fireworks](https://python.langchain.com/api_reference/fireworks/llms/langchain_fireworks.llms.Fireworks.html#langchain_fireworks.llms.Fireworks) | [langchain-fireworks](https://python.langchain.com/api_reference/fireworks/index.html) | ❌ | ❌ | ✅ |  |  |"
]
},
{
@@ -59,7 +59,7 @@
"source": [
"### Installation\n",
"\n",
-"You need to install the `langchain_fireworks` python package for the rest of the notebook to work."
+"You need to install the `langchain-fireworks` python package for the rest of the notebook to work."
]
},
{
@@ -29,7 +29,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
-"| [NVIDIA](https://python.langchain.com/api_reference/nvidia_ai_endpoints/llms/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html) | [langchain_nvidia_ai_endpoints](https://python.langchain.com/api_reference/nvidia_ai_endpoints/index.html) | ✅ | beta | ❌ |  |  |\n",
+"| [NVIDIA](https://python.langchain.com/api_reference/nvidia_ai_endpoints/llms/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html) | [langchain-nvidia-ai-endpoints](https://python.langchain.com/api_reference/nvidia_ai_endpoints/index.html) | ✅ | beta | ❌ |  |  |\n",
"\n",
"### Model features\n",
"| JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
@@ -71,7 +71,7 @@
"source": [
"### Installation\n",
"\n",
-"The LangChain NVIDIA AI Endpoints integration lives in the `langchain_nvidia_ai_endpoints` package:"
+"The LangChain NVIDIA AI Endpoints integration lives in the `langchain-nvidia-ai-endpoints` package:"
]
},
{
docs/docs/integrations/providers/gradientai.mdx (new file, 95 lines)
@@ -0,0 +1,95 @@
# ChatGradient

This will help you get started with DigitalOcean Gradient [chat models](/docs/concepts/chat_models).

## Overview
### Integration details

| Class | Package | Package downloads | Package latest |
| :--- | :--- | :---: | :---: |
| [ChatGradient](https://python.langchain.com/api_reference/langchain-gradient/chat_models/langchain_gradient.chat_models.ChatGradient.html) | [langchain-gradient](https://python.langchain.com/api_reference/langchain-gradient/) |  |  |


## Setup

langchain-gradient uses the DigitalOcean Gradient Platform.

Create an account on DigitalOcean, acquire a `DIGITALOCEAN_INFERENCE_KEY` API key from the Gradient Platform, and install the `langchain-gradient` integration package.

### Credentials

Head to [DigitalOcean Gradient](https://www.digitalocean.com/products/gradient)

1. Sign up/Login to DigitalOcean Cloud Console
2. Go to the Gradient Platform and navigate to Serverless Inference.
3. Click on Create model access key, enter a name, and create the key.

Once you've done this, set the `DIGITALOCEAN_INFERENCE_KEY` environment variable:

```python
import os
os.environ["DIGITALOCEAN_INFERENCE_KEY"] = "your-api-key"
```

### Installation

The LangChain Gradient integration is in the `langchain-gradient` package:

```bash
pip install -qU langchain-gradient
```

## Instantiation

```python
from langchain_gradient import ChatGradient

llm = ChatGradient(
    model="llama3.3-70b-instruct",
    api_key=os.environ.get("DIGITALOCEAN_INFERENCE_KEY")
)
```

## Invocation

```python
messages = [
    (
        "system",
        "You are a creative storyteller. Continue any story prompt you receive in an engaging and imaginative way.",
    ),
    ("human", "Once upon a time, in a village at the edge of a mysterious forest, a young girl named Mira found a glowing stone..."),
]
ai_msg = llm.invoke(messages)
ai_msg
print(ai_msg.content)
```

## Chaining

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a knowledgeable assistant. Carefully read the provided context and answer the user's question. If the answer is present in the context, cite the relevant sentence. If not, reply with \"Not found in context.\"",
        ),
        ("human", "Context: {context}\nQuestion: {question}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "context": (
            "The Eiffel Tower is located in Paris and was completed in 1889. "
            "It was designed by Gustave Eiffel's engineering company. "
            "The tower is one of the most recognizable structures in the world. "
            "The Statue of Liberty was a gift from France to the United States."
        ),
        "question": "Who designed the Eiffel Tower and when was it completed?"
    }
)
```
@@ -235,7 +235,7 @@
"\n",
"**Retries**\n",
"\n",
-"Automatically reprocess any unsuccessful API requests **`upto 5`** times. Uses an **`exponential backoff`** strategy, which spaces out retry attempts to prevent network overload.[Docs](https://portkey.ai/docs/product/ai-gateway-streamline-llm-integrations)\n",
+"Automatically reprocess any unsuccessful API requests **`upto 5`** times. Uses an **`exponential backoff`** strategy, which spaces out retry attempts to prevent network overload. [Docs](https://portkey.ai/docs/product/ai-gateway-streamline-llm-integrations)\n",
"\n",
"**Tagging**\n",
"\n",
@@ -67,7 +67,7 @@
},
"outputs": [],
"source": [
-"%pip install -qU langchain_community wikipedia"
+"%pip install -qU langchain-community wikipedia"
]
},
{
@@ -31,7 +31,7 @@
"\n",
"| Class | Package | Local | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
-"| [AstraDBByteStore](https://python.langchain.com/api_reference/astradb/storage/langchain_astradb.storage.AstraDBByteStore.html) | [langchain_astradb](https://python.langchain.com/api_reference/astradb/index.html) | ❌ | ❌ |  |  |\n",
+"| [AstraDBByteStore](https://python.langchain.com/api_reference/astradb/storage/langchain_astradb.storage.AstraDBByteStore.html) | [langchain-astradb](https://python.langchain.com/api_reference/astradb/index.html) | ❌ | ❌ |  |  |\n",
"\n",
"## Setup\n",
"\n",
@@ -60,7 +60,7 @@
"source": [
"### Installation\n",
"\n",
-"The LangChain AstraDB integration lives in the `langchain_astradb` package:"
+"The LangChain AstraDB integration lives in the `langchain-astradb` package:"
]
},
{
@@ -69,7 +69,7 @@
"metadata": {},
"outputs": [],
"source": [
-"%pip install -qU langchain_astradb"
+"%pip install -qU langchain-astradb"
]
},
{
@@ -29,7 +29,7 @@
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/docs/integrations/stores/cassandra_storage) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
-"| [CassandraByteStore](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.cassandra.CassandraByteStore.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ✅ |  |  |\n",
+"| [CassandraByteStore](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.cassandra.CassandraByteStore.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ✅ |  |  |\n",
"\n",
"## Setup\n",
"\n",
@@ -44,7 +44,7 @@
"source": [
"### Installation\n",
"\n",
-"The LangChain `CassandraByteStore` integration lives in the `langchain_community` package. You'll also need to install the `cassio` package or the `cassandra-driver` package as a peer dependency depending on which initialization method you're using:"
+"The LangChain `CassandraByteStore` integration lives in the `langchain-community` package. You'll also need to install the `cassio` package or the `cassandra-driver` package as a peer dependency depending on which initialization method you're using:"
]
},
{
@@ -53,7 +53,7 @@
"metadata": {},
"outputs": [],
"source": [
-"%pip install -qU langchain_community\n",
+"%pip install -qU langchain-community\n",
"%pip install -qU cassandra-driver\n",
"%pip install -qU cassio"
]
@@ -29,7 +29,7 @@
"\n",
"| Class | Package | Local | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
-"| [ElasticsearchEmbeddingsCache](https://python.langchain.com/api_reference/elasticsearch/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html) | [langchain_elasticsearch](https://python.langchain.com/api_reference/elasticsearch/index.html) | ✅ | ❌ |  |  |\n",
+"| [ElasticsearchEmbeddingsCache](https://python.langchain.com/api_reference/elasticsearch/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html) | [langchain-elasticsearch](https://python.langchain.com/api_reference/elasticsearch/index.html) | ✅ | ❌ |  |  |\n",
"\n",
"## Setup\n",
"\n",
@@ -42,7 +42,7 @@
"source": [
"### Installation\n",
"\n",
-"The LangChain `ElasticsearchEmbeddingsCache` integration lives in the `__package_name__` package:"
+"The LangChain `ElasticsearchEmbeddingsCache` integration lives in the `langchain-elasticsearch` package:"
]
},
{
@@ -51,7 +51,7 @@
"metadata": {},
"outputs": [],
"source": [
-"%pip install -qU langchain_elasticsearch"
+"%pip install -qU langchain-elasticsearch"
]
},
{
@@ -29,7 +29,7 @@
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/docs/integrations/stores/in_memory/) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
-"| [InMemoryByteStore](https://python.langchain.com/api_reference/core/stores/langchain_core.stores.InMemoryByteStore.html) | [langchain_core](https://python.langchain.com/api_reference/core/index.html) | ✅ | ✅ |  |  |"
+"| [InMemoryByteStore](https://python.langchain.com/api_reference/core/stores/langchain_core.stores.InMemoryByteStore.html) | [langchain-core](https://python.langchain.com/api_reference/core/index.html) | ✅ | ✅ |  |  |"
]
},
{
@@ -38,7 +38,7 @@
"source": [
"### Installation\n",
"\n",
-"The LangChain `InMemoryByteStore` integration lives in the `langchain_core` package:"
+"The LangChain `InMemoryByteStore` integration lives in the `langchain-core` package:"
]
},
{
@@ -47,7 +47,7 @@
"metadata": {},
"outputs": [],
"source": [
-"%pip install -qU langchain_core"
+"%pip install -qU langchain-core"
]
},
{
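
As a quick aside, a minimal sketch of the shared byte-store interface (`mset`/`mget`/`mdelete`/`yield_keys`) that the `InMemoryByteStore` documented above implements; the keys and values here are arbitrary placeholders:

```python
from langchain_core.stores import InMemoryByteStore

store = InMemoryByteStore()
store.mset([("k1", b"v1"), ("k2", b"v2")])  # write key/value pairs
print(store.mget(["k1", "k2"]))             # [b'v1', b'v2']
store.mdelete(["k1"])                       # remove a key
print(list(store.yield_keys()))             # ['k2']
```
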
@@ -29,7 +29,7 @@
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/docs/integrations/stores/ioredis_storage) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
-"| [RedisStore](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.redis.RedisStore.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ✅ |  |  |\n",
+"| [RedisStore](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.redis.RedisStore.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ✅ |  |  |\n",
"\n",
"## Setup\n",
"\n",
@@ -42,7 +42,7 @@
"source": [
"### Installation\n",
"\n",
-"The LangChain `RedisStore` integration lives in the `langchain_community` package:"
+"The LangChain `RedisStore` integration lives in the `langchain-community` package:"
]
},
{
@@ -51,7 +51,7 @@
"metadata": {},
"outputs": [],
"source": [
-"%pip install -qU langchain_community redis"
+"%pip install -qU langchain-community redis"
]
},
{
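
For orientation, a hedged sketch of constructing the `RedisStore` documented above; it assumes a Redis server reachable at the placeholder URL:

```python
# Sketch only: "redis://localhost:6379" is a placeholder connection URL.
from langchain_community.storage import RedisStore

store = RedisStore(redis_url="redis://localhost:6379")
store.mset([("greeting", b"hello")])
print(store.mget(["greeting"]))  # [b'hello']
```
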
@@ -31,7 +31,7 @@
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/docs/integrations/stores/upstash_redis_storage) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
-"| [UpstashRedisByteStore](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ✅ |  |  |\n",
+"| [UpstashRedisByteStore](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ✅ |  |  |\n",
"\n",
"## Setup\n",
"\n",
@@ -60,7 +60,7 @@
"source": [
"### Installation\n",
"\n",
-"The LangChain Upstash integration lives in the `langchain_community` package. You'll also need to install the `upstash-redis` package as a peer dependency:"
+"The LangChain Upstash integration lives in the `langchain-community` package. You'll also need to install the `upstash-redis` package as a peer dependency:"
]
},
{
@@ -69,7 +69,7 @@
"metadata": {},
"outputs": [],
"source": [
-"%pip install -qU langchain_community upstash-redis"
+"%pip install -qU langchain-community upstash-redis"
]
},
{
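
A hedged sketch of wiring the `UpstashRedisByteStore` above to its `upstash-redis` peer dependency; the URL and token are placeholders for your Upstash credentials:

```python
# Sketch only: replace the URL and token with real Upstash credentials.
from upstash_redis import Redis

from langchain_community.storage import UpstashRedisByteStore

redis_client = Redis(url="<UPSTASH_REDIS_REST_URL>", token="<UPSTASH_REDIS_REST_TOKEN>")
store = UpstashRedisByteStore(client=redis_client, ttl=300)  # entries expire after 300s
store.mset([("k", b"v")])
print(store.mget(["k"]))
```
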
@@ -411,7 +411,7 @@
},
"outputs": [],
"source": [
-"%pip install --upgrade --quiet langchain faiss-cpu tiktoken langchain_community\n",
+"%pip install --upgrade --quiet langchain faiss-cpu tiktoken langchain-community\n",
"\n",
"from operator import itemgetter\n",
"\n",
@@ -55,7 +55,7 @@
},
"outputs": [],
"source": [
-"%pip install --quiet -U langchain_agentql"
+"%pip install --quiet -U langchain-agentql"
]
},
{
@@ -9,7 +9,9 @@
"source": [
"# Alpha Vantage\n",
"\n",
-">[Alpha Vantage](https://www.alphavantage.co) Alpha Vantage provides realtime and historical financial market data through a set of powerful and developer-friendly data APIs and spreadsheets. \n",
+">[Alpha Vantage](https://www.alphavantage.co) Alpha Vantage provides realtime and historical financial market data through a set of powerful and developer-friendly data APIs and spreadsheets.\n",
+"\n",
+"Generate the `ALPHAVANTAGE_API_KEY` [at their website](https://www.alphavantage.co/support/#api-key).\n",
"\n",
"Use the ``AlphaVantageAPIWrapper`` to get currency exchange rates."
]
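
A hedged sketch of a plausible exchange-rate call with the wrapper named above; the key is a placeholder, and the `_get_exchange_rate` helper is an assumption about the wrapper's interface rather than something shown in this diff:

```python
import os

from langchain_community.utilities.alpha_vantage import AlphaVantageAPIWrapper

os.environ["ALPHAVANTAGE_API_KEY"] = "your-api-key"  # placeholder key

alpha_vantage = AlphaVantageAPIWrapper()
# Assumed helper: fetches the USD -> JPY exchange rate as a dict.
print(alpha_vantage._get_exchange_rate("USD", "JPY"))
```
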
@@ -85,7 +85,7 @@
"Install the following Python modules:\n",
"\n",
"```bash\n",
-"pip install ipykernel python-dotenv cassio langchain_openai langchain langchain-community langchainhub\n",
+"pip install ipykernel python-dotenv cassio langchain-openai langchain langchain-community langchainhub\n",
"```\n",
"\n",
"### .env file\n",
@@ -25,7 +25,7 @@
"- [VertexAIImageEditorChat](#image-editing) : Edit an entire uploaded or generated image with a text prompt.\n",
"- [VertexAIImageCaptioning](#image-captioning) : Get text descriptions of images with visual captioning.\n",
"- [VertexAIVisualQnAChat](#visual-question-answering-vqa) : Get answers to a question about an image with Visual Question Answering (VQA).\n",
-" * NOTE : Currently we support only only single-turn chat for Visual QnA (VQA)"
+" * NOTE : Currently we support only single-turn chat for Visual QnA (VQA)"
]
},
{
@@ -48,11 +48,11 @@
},
{
"cell_type": "code",
-"execution_count": 3,
+"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
-"# Create Image Gentation model Object\n",
+"# Create Image Generation model Object\n",
"generator = VertexAIImageGeneratorChat()"
]
},
@@ -140,11 +140,11 @@
},
{
"cell_type": "code",
-"execution_count": 11,
+"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
-"# Create Image Gentation model Object\n",
+"# Create Image Generation model Object\n",
"generator = VertexAIImageGeneratorChat()\n",
"\n",
"# Provide a text input for image\n",
@@ -244,7 +244,7 @@
},
{
"cell_type": "code",
-"execution_count": 19,
+"execution_count": null,
"metadata": {},
"outputs": [
{
@@ -268,10 +268,10 @@
}
],
"source": [
-"# use image egenarted in Image Generation Section\n",
+"# use image generated in Image Generation Section\n",
"img_base64 = generated_image[\"image_url\"][\"url\"]\n",
"response = model.invoke(img_base64)\n",
-"print(f\"Generated Cpation : {response}\")\n",
+"print(f\"Generated Caption : {response}\")\n",
"\n",
"# Convert base64 string to Image\n",
"img = Image.open(\n",
@@ -51,7 +51,7 @@
},
"outputs": [],
"source": [
-"%pip install -qU langchain-community langchain_openai"
+"%pip install -qU langchain-community langchain-openai"
]
},
{
@@ -19,7 +19,7 @@
"\n",
"> [Firestore](https://cloud.google.com/firestore) is a serverless document-oriented database that scales to meet any demand. Extend your database application to build AI-powered experiences leveraging Firestore's Langchain integrations.\n",
"\n",
-"This notebook goes over how to use [Firestore](https://cloud.google.com/firestore) to to store vectors and query them using the `FirestoreVectorStore` class.\n",
+"This notebook goes over how to use [Firestore](https://cloud.google.com/firestore) to store vectors and query them using the `FirestoreVectorStore` class.\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-firestore-python/blob/main/docs/vectorstores.ipynb)"
]
@@ -29,8 +29,8 @@
" Please refer to the instructions in:\n",
" [www.jaguardb.com](http://www.jaguardb.com)\n",
" For quick setup in docker environment:\n",
-" docker pull jaguardb/jaguardb_with_http\n",
-" docker run -d -p 8888:8888 -p 8080:8080 --name jaguardb_with_http jaguardb/jaguardb_with_http\n",
+" docker pull jaguardb/jaguardb\n",
+" docker run -d -p 8888:8888 -p 8080:8080 --name jaguardb jaguardb/jaguardb\n",
"\n",
"2. You must install the http client package for JaguarDB:\n",
" ```\n",
@@ -36,7 +36,7 @@
}
],
"source": [
-"pip install -qU langchain_milvus"
+"pip install -qU langchain-milvus"
]
},
{
@@ -9,13 +9,13 @@
"\n",
"> An implementation of LangChain vectorstore abstraction using `postgres` as the backend and utilizing the `pgvector` extension.\n",
"\n",
-"The code lives in an integration package called: [langchain_postgres](https://github.com/langchain-ai/langchain-postgres/).\n",
+"The code lives in an integration package called: [langchain-postgres](https://github.com/langchain-ai/langchain-postgres/).\n",
"\n",
"## Status\n",
"\n",
-"This code has been ported over from `langchain_community` into a dedicated package called `langchain-postgres`. The following changes have been made:\n",
+"This code has been ported over from `langchain-community` into a dedicated package called `langchain-postgres`. The following changes have been made:\n",
"\n",
-"* langchain_postgres works only with psycopg3. Please update your connnecion strings from `postgresql+psycopg2://...` to `postgresql+psycopg://langchain:langchain@...` (yes, it's the driver name is `psycopg` not `psycopg3`, but it'll use `psycopg3`.\n",
+"* `langchain-postgres` works only with psycopg3. Please update your connnecion strings from `postgresql+psycopg2://...` to `postgresql+psycopg://langchain:langchain@...` (yes, it's the driver name is `psycopg` not `psycopg3`, but it'll use `psycopg3`.\n",
"* The schema of the embedding store and collection have been changed to make add_documents work correctly with user specified ids.\n",
"* One has to pass an explicit connection object now.\n",
"\n",
@@ -35,7 +35,7 @@
"metadata": {},
"outputs": [],
"source": [
-"pip install -qU langchain_postgres"
+"pip install -qU langchain-postgres"
]
},
{
@@ -43,7 +43,7 @@
"id": "0dd87fcc",
"metadata": {},
"source": [
-"You can run the following command to spin up a a postgres container with the `pgvector` extension:"
+"You can run the following command to spin up a postgres container with the `pgvector` extension:"
]
},
{
@@ -63,7 +63,7 @@
"source": [
"### Credentials\n",
"\n",
-"There are no credentials needed to run this notebook, just make sure you downloaded the `langchain_postgres` package and correctly started the postgres container."
+"There are no credentials needed to run this notebook, just make sure you downloaded the `langchain-postgres` package and correctly started the postgres container."
]
},
{
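
To make the migration notes above concrete, a hedged sketch of the psycopg3-style connection string passed to `PGVector`; the embedding model, collection name, port, and credentials are assumptions, not values from this diff:

```python
# Sketch only: assumes a local pgvector Postgres listening on port 6024 and
# that langchain-openai is installed with OPENAI_API_KEY set.
from langchain_openai import OpenAIEmbeddings
from langchain_postgres import PGVector

# The psycopg3 driver name is "psycopg", not "psycopg2" or "psycopg3".
connection = "postgresql+psycopg://langchain:langchain@localhost:6024/langchain"

store = PGVector(
    embeddings=OpenAIEmbeddings(model="text-embedding-3-small"),
    collection_name="demo",
    connection=connection,
)
```
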
@@ -10,7 +10,7 @@
"\n",
"This notebook goes over how to use the `PGVectorStore` API.\n",
"\n",
-"The code lives in an integration package called: [langchain_postgres](https://github.com/langchain-ai/langchain-postgres/)."
+"The code lives in an integration package called: [langchain-postgres](https://github.com/langchain-ai/langchain-postgres/)."
]
},
{
@@ -61,7 +61,7 @@
"source": [
"## Credentials\n",
"\n",
-"There are no credentials needed to run this notebook, just make sure you downloaded the `langchain_sqlserver` package\n",
+"There are no credentials needed to run this notebook, just make sure you downloaded the `langchain-sqlserver` package\n",
"If you want to get best in-class automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
@@ -193,7 +193,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"You should additionally not pass `ToolMessages` back to to a model if they are not preceded by an `AIMessage` with tool calls. For example, this will fail:"
+"You should additionally not pass `ToolMessages` back to a model if they are not preceded by an `AIMessage` with tool calls. For example, this will fail:"
]
},
{
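
To make the rule above concrete, a minimal sketch of a valid ordering: each `ToolMessage` must follow an `AIMessage` whose `tool_calls` carry a matching id (the tool name and arguments here are placeholders):

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

messages = [
    HumanMessage("What is 2 + 3?"),
    AIMessage(
        content="",
        tool_calls=[{"name": "add", "args": {"a": 2, "b": 3}, "id": "call_1"}],
    ),
    # OK: this ToolMessage is preceded by an AIMessage with a matching tool call id.
    ToolMessage("5", tool_call_id="call_1"),
]
```
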
@@ -30,49 +30,50 @@ The following table shows information on all available key-value stores.

KV_STORE_FEAT_TABLE = {
    "AstraDBByteStore": {
-        "class": "[AstraDBByteStore](https://python.langchain.com/api_reference/astradb/storage/langchain_astradb.storage.AstraDBByteStore.html)",
+        "class": "[AstraDBByteStore](https://python.langchain.com/docs/integrations/stores/astradb/)",
        "local": False,
-        "package": "[langchain_astradb](https://python.langchain.com/api_reference/astradb/)",
+        "package": "[langchain-astradb](https://python.langchain.com/api_reference/astradb/storage/langchain_astradb.storage.AstraDBByteStore.html)",
        "downloads": "",
    },
    "CassandraByteStore": {
-        "class": "[CassandraByteStore](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.cassandra.CassandraByteStore.html)",
+        "class": "[CassandraByteStore](https://python.langchain.com/docs/integrations/stores/cassandra/)",
        "local": False,
-        "package": "[langchain_community](https://python.langchain.com/api_reference/community/)",
+        "package": "[langchain-community](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.cassandra.CassandraByteStore.html)",
        "downloads": "",
    },
    "ElasticsearchEmbeddingsCache": {
-        "class": "[ElasticsearchEmbeddingsCache](https://python.langchain.com/api_reference/elasticsearch/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html)",
+        "class": "[ElasticsearchEmbeddingsCache](https://python.langchain.com/docs/integrations/stores/elasticsearch/)",
        "local": True,
-        "package": "[langchain_elasticsearch](https://python.langchain.com/api_reference/elasticsearch/)",
+        "package": "[langchain-elasticsearch](https://python.langchain.com/api_reference/elasticsearch/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html)",
        "downloads": "",
    },
    "LocalFileStore": {
-        "class": "[LocalFileStore](https://python.langchain.com/api_reference/storage/langchain.storage.file_system.LocalFileStore.html)",
+        "class": "[LocalFileStore](https://python.langchain.com/docs/integrations/stores/file_system/)",
        "local": True,
-        "package": "[langchain](https://python.langchain.com/api_reference/langchain/)",
+        "package": "[langchain](https://python.langchain.com/api_reference/langchain/storage/langchain.storage.file_system.LocalFileStore.html)",
        "downloads": "",
    },
    "InMemoryByteStore": {
-        "class": "[InMemoryByteStore](https://python.langchain.com/api_reference/core/stores/langchain_core.stores.InMemoryByteStore.html)",
+        "class": "[InMemoryByteStore](https://python.langchain.com/docs/integrations/stores/in_memory/)",
        "local": True,
-        "package": "[langchain_core](https://python.langchain.com/api_reference/core/)",
+        "package": "[langchain-core](https://python.langchain.com/api_reference/core/stores/langchain_core.stores.InMemoryByteStore.html)",
        "downloads": "",
    },
    "RedisStore": {
-        "class": "[RedisStore](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.redis.RedisStore.html)",
+        "class": "[RedisStore](https://python.langchain.com/docs/integrations/stores/redis/)",
        "local": True,
-        "package": "[langchain_community](https://python.langchain.com/api_reference/community/)",
+        "package": "[langchain-community](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.redis.RedisStore.html)",
        "downloads": "",
    },
    "UpstashRedisByteStore": {
-        "class": "[UpstashRedisByteStore](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html)",
+        "class": "[UpstashRedisByteStore](https://python.langchain.com/docs/integrations/stores/upstash_redis/)",
        "local": False,
-        "package": "[langchain_community](https://python.langchain.com/api_reference/community/)",
+        "package": "[langchain-community](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html)",
        "downloads": "",
    },
}


DEPRECATED = []
@@ -321,7 +321,7 @@ const FEATURE_TABLES = {
    },
    {
        name: "VertexAILLM",
-        link: "google_vertexai",
+        link: "google_vertex_ai_palm",
        package: "langchain-google-vertexai",
        apiLink: "https://python.langchain.com/api_reference/google_vertexai/llms/langchain_google_vertexai.llms.VertexAI.html"
    },
@@ -435,7 +435,7 @@ const FEATURE_TABLES = {
        selfHost: false,
        cloudOffering: true,
        apiLink: "https://python.langchain.com/api_reference/aws/retrievers/langchain_aws.retrievers.bedrock.AmazonKnowledgeBasesRetriever.html",
-        package: "langchain_aws"
+        package: "langchain-aws"
    },
    {
        name: "AzureAISearchRetriever",
@@ -443,7 +443,7 @@ const FEATURE_TABLES = {
        selfHost: false,
        cloudOffering: true,
        apiLink: "https://python.langchain.com/api_reference/community/retrievers/langchain_community.retrievers.azure_ai_search.AzureAISearchRetriever.html",
-        package: "langchain_community"
+        package: "langchain-community"
    },
    {
        name: "ElasticsearchRetriever",
@@ -451,7 +451,7 @@ const FEATURE_TABLES = {
        selfHost: true,
        cloudOffering: true,
        apiLink: "https://python.langchain.com/api_reference/elasticsearch/retrievers/langchain_elasticsearch.retrievers.ElasticsearchRetriever.html",
-        package: "langchain_elasticsearch"
+        package: "langchain-elasticsearch"
    },
    {
        name: "VertexAISearchRetriever",
@@ -459,7 +459,7 @@ const FEATURE_TABLES = {
        selfHost: false,
        cloudOffering: true,
        apiLink: "https://python.langchain.com/api_reference/google_community/vertex_ai_search/langchain_google_community.vertex_ai_search.VertexAISearchRetriever.html",
-        package: "langchain_google_community"
+        package: "langchain-google-community"
    }
],
},
@@ -484,21 +484,21 @@
        link: "arxiv",
        source: (<>Scholarly articles on <a href="https://arxiv.org/">arxiv.org</a></>),
        apiLink: "https://python.langchain.com/api_reference/community/retrievers/langchain_community.retrievers.arxiv.ArxivRetriever.html",
-        package: "langchain_community"
+        package: "langchain-community"
    },
    {
        name: "TavilySearchAPIRetriever",
        link: "tavily",
        source: "Internet search",
        apiLink: "https://python.langchain.com/api_reference/community/retrievers/langchain_community.retrievers.tavily_search_api.TavilySearchAPIRetriever.html",
-        package: "langchain_community"
+        package: "langchain-community"
    },
    {
        name: "WikipediaRetriever",
        link: "wikipedia",
        source: (<><a href="https://www.wikipedia.org/">Wikipedia</a> articles</>),
        apiLink: "https://python.langchain.com/api_reference/community/retrievers/langchain_community.retrievers.wikipedia.WikipediaRetriever.html",
-        package: "langchain_community"
+        package: "langchain-community"
    }
]
@@ -776,7 +776,7 @@ const FEATURE_TABLES = {
    },
    {
        name: "Reddit",
-        link: "RedditPostsLoader",
+        link: "reddit",
        loaderName: "RedditPostsLoader",
        apiLink: "https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.reddit.RedditPostsLoader.html"
    },
@@ -35,6 +35,7 @@ embeddings.embed_query("What is the meaning of life?")
```

+## LLMs

`__ModuleName__LLM` class exposes LLMs from __ModuleName__.

```python
@@ -1,3 +1,3 @@
version: 0.0.1
patterns:
-  - name: github.com/getgrit/stdlib#*
+  - name: github.com/getgrit/stdlib#*
@@ -27,16 +27,16 @@ langchain app add __package_name__
```

And add the following code to your `server.py` file:

```python
__app_route_code__
```

-(Optional) Let's now configure LangSmith.
-LangSmith will help us trace, monitor and debug LangChain applications.
-You can sign up for LangSmith [here](https://smith.langchain.com/).
+(Optional) Let's now configure LangSmith.
+LangSmith will help us trace, monitor and debug LangChain applications.
+You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section

```shell
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=<your-api-key>
@@ -49,11 +49,11 @@ If you are inside this directory, then you can spin up a LangServe instance directly
langchain serve
```

-This will start the FastAPI app with a server is running locally at
+This will start the FastAPI app with a server is running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
-We can access the playground at [http://127.0.0.1:8000/__package_name__/playground](http://127.0.0.1:8000/__package_name__/playground)
+We can access the playground at [http://127.0.0.1:8000/__package_name__/playground](http://127.0.0.1:8000/__package_name__/playground)

We can access the template from code with:

@@ -61,4 +61,4 @@ We can access the template from code with:
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/__package_name__")
-```
+```
@@ -11,7 +11,7 @@ pip install -U langchain-cli

## Adding packages

```bash
-# adding packages from
+# adding packages from
# https://github.com/langchain-ai/langchain/tree/master/templates
langchain app add $PROJECT_NAME

@@ -31,10 +31,10 @@ langchain app remove my/custom/path/rag
```

## Setup LangSmith (Optional)
-LangSmith will help us trace, monitor and debug LangChain applications.
-You can sign up for LangSmith [here](https://smith.langchain.com/).
-If you don't have access, you can skip this section
+LangSmith will help us trace, monitor and debug LangChain applications.
+You can sign up for LangSmith [here](https://smith.langchain.com/).
+If you don't have access, you can skip this section

```shell
export LANGSMITH_TRACING=true
@@ -144,7 +144,7 @@ def beta(
            obj.__init__ = functools.wraps(obj.__init__)(  # type: ignore[misc]
                warn_if_direct_instance
            )
-            return cast("T", obj)
+            return obj

    elif isinstance(obj, property):
        # note(erick): this block doesn't seem to be used?
Some files were not shown because too many files have changed in this diff.