Compare commits


44 Commits

Author SHA1 Message Date
Bagatur
a58d4ba340 core[patch]: Release 0.2.31 (#25388) 2024-08-14 11:26:49 -07:00
Bagatur
d178fb9dc3 docs: fix api ref package tables (#25400) 2024-08-14 10:40:16 -07:00
Bagatur
414154fa59 experimental[patch]: refactor rl chain structure (#25398)
The API reference build can't handle a class and a function whose names
differ only by capitalization in the same file.
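(Presumably because the autosummary step writes one stub page per documented
object, so two names that differ only by case collide on case-insensitive
filesystems. A hypothetical illustration:

```python
# Both of these in one module would generate stub pages whose filenames
# differ only by case, e.g. "mod.SelectionScorer.rst" vs
# "mod.selectionscorer.rst", which collide on case-insensitive
# filesystems such as the macOS and Windows defaults.
class SelectionScorer: ...

def selectionscorer(): ...
```
)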
2024-08-14 17:09:43 +00:00
Flávio Knob
94c9cb7321 Update document_loader_custom.ipynb (#25393)
Fix typo

2024-08-14 12:33:21 -04:00
Jacob Lee
012929551c docs[patch]: Hide deprecated integration pages (#25389) 2024-08-14 09:17:39 -07:00
Bagatur
63c483ea01 standard-tests: import fix (#25395) 2024-08-14 09:13:56 -07:00
Bagatur
eec7bb4f51 anthropic[patch]: Release 0.1.23 (#25394) 2024-08-14 09:03:39 -07:00
Flávio Knob
f0f125dac7 Update document_loader_custom.ipynb (#25391)
Fix typo and some `callout` tags

2024-08-14 15:07:42 +00:00
Eugene Yurtsev
f4196f1fb8 ollama[patch]: Update extra in ollama package (#25383)
Backwards compatible change that converts pydantic extras to literals
which is consistent with pydantic 2 usage.
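A minimal sketch of the kind of conversion being described (class names
hypothetical; a pydantic 1.x import is assumed):

```python
from pydantic import BaseModel, Extra  # pydantic 1.x assumed


class Before(BaseModel):
    """Hypothetical 'before': `extra` given as the pydantic 1 enum."""

    class Config:
        extra = Extra.forbid


class After(BaseModel):
    """Hypothetical 'after': same behavior, spelled as a string literal."""

    class Config:
        extra = "forbid"  # pydantic 1 coerces the string to Extra.forbid,
        # and it matches pydantic 2's ConfigDict(extra="forbid") spelling
```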
2024-08-14 10:30:01 -04:00
Chengyu Yan
d0ad713937 core: fix issue #24660, solve ValueError error messages when using a model with history (#25183)
- **Description:**
This PR solves the `ValueError` error messages raised when using a model
with history.
Details in #24660.
#22933 caused
`langchain_core.runnables.history.RunnableWithMessageHistory._get_output_messages`
to miss the type check on `output_val` when `output_val` is `False`. After
running `RunnableWithMessageHistory._is_not_async`, `output` is `False`.

249945a572/libs/core/langchain_core/runnables/history.py (L323-L334)

15a36dd0a2/libs/core/langchain_core/runnables/history.py (L461-L471)
~~I suggest that `_get_output_messages` return an empty list when
`output_val == False`.~~
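An illustrative sketch of the falsy-value pitfall (not the actual LangChain
code; the guard shown is the one proposed and later withdrawn above):

```python
from typing import Any, List


def get_output_messages_sketch(output_val: Any) -> List:
    # `output_val` may legitimately be `False` here (a plain boolean result),
    # and code that only branches on dict/str/list values falls through and
    # raises for it unless falsy values are handled explicitly.
    if not output_val:
        return []
    if isinstance(output_val, dict):
        return list(output_val.values())
    raise ValueError(f"Unexpected output: {output_val!r}")
```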

- **Issue**:
  - #24660

- **Dependencies:** No change.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-08-14 14:26:22 +00:00
Jacob Lee
ddd7919f6a docs[patch]: Add conceptual guide links to integration index pages (#25387) 2024-08-14 07:14:24 -07:00
Bagatur
493e474063 docs: updated api reference (#25172)
- Move the API reference into the vercel build
- Update api reference organization and styling
2024-08-14 07:00:17 -07:00
Leonid Ganeline
4a812e3193 docs: integrations references update (#25217)
Added missing provider pages. Fixed formats and added descriptions and
links.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-14 13:58:38 +00:00
Eugene Yurtsev
5f5e8c9a60 huggingface[patch], pinecone[patch], fireworks[patch], mistralai[patch], voyageai[patch], togetherai[patch]: convert Pydantic extras to literals (#25384)
Backwards compatible change that converts pydantic extras to literals
which is consistent with pydantic 2 usage.

- fireworks
- voyageai
- mistralai
- togetherai
- huggingface
- pinecone
2024-08-14 09:55:30 -04:00
Eugene Yurtsev
d00176e523 openai[patch]: Update extra to match pydantic 2 (#25382)
Backwards compatible change that converts pydantic extras to literals
which is consistent with pydantic 2 usage.
2024-08-14 09:55:18 -04:00
Eugene Yurtsev
dc51cc5690 core[minor]: Prevent PydanticOutputParser from encoding schema as ASCII (#25386)
This allows users to provide parameter descriptions in the pydantic
models in other languages.

Continuing this PR: https://github.com/langchain-ai/langchain/pull/24809
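The likely mechanism (an assumption; the commit body does not show the code)
is serializing the schema without forcing ASCII escapes:

```python
import json

schema = {"description": "説明"}  # a parameter description in Japanese

json.dumps(schema)                      # '{"description": "\\u8aac\\u660e"}'
json.dumps(schema, ensure_ascii=False)  # '{"description": "説明"}'
```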
2024-08-14 13:54:31 +00:00
ccurme
27690506d0 multiple: update removal targets (#25361) 2024-08-14 09:50:39 -04:00
Ikko Eltociear Ashimine
4029f5650c docs: update clarifai.ipynb (#25373)
Intialize -> Initialize
2024-08-14 09:20:17 -04:00
Erick Friis
10e6725a7e docs: tools index table (#25370) 2024-08-14 02:38:03 +00:00
Harrison Chase
967b6f21f6 docs: improve document loaders index (#25365)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-14 01:48:48 +00:00
Erick Friis
4a78be7861 docs: remove sidebar comment (#25369) 2024-08-14 01:47:12 +00:00
Eugene Yurtsev
d6c180996f docs[patch]: Fix typo in CohereEmbeddings integration docs (#25367)
Fix typo
2024-08-14 01:18:54 +00:00
Eugene Yurtsev
93dcc47463 docs: Partial integration update for cohere embeddings (#25250)
This can be finished after the following issue is resolved:

https://github.com/langchain-ai/langchain-cohere/issues/81

Related to: https://github.com/langchain-ai/langchain/issues/24856

```json
[
  {
    "provider": "cohere",
    "js": true,
    "local": false,
    "serializable": false
  }
]
```

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
2024-08-14 00:53:13 +00:00
Eugene Yurtsev
27def6bddb docs[patch]: Update integration docs for AzureOpenAIEmbeddings (#25311)
https://github.com/langchain-ai/langchain/issues/24856

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-08-14 00:33:13 +00:00
Eugene Yurtsev
b4e3bdb714 docs: Update nomic AI embeddings integration docs (#25308)
Issue: https://github.com/langchain-ai/langchain/issues/24856

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-08-14 00:32:07 +00:00
Eugene Yurtsev
f82c3f622a docs: Update AI21Embeddings Integration docs (#25298)
Update AI21 Integration docs

Issue: https://github.com/langchain-ai/langchain/issues/24856

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-08-14 00:30:16 +00:00
Eugene Yurtsev
d55d99222b docs: update integration docs for mistral ai embedding model (#25253)
Related issue: https://github.com/langchain-ai/langchain/issues/24856

```json
[
  {
    "provider": "mistralai",
    "js": true,
    "local": false,
    "serializable": false,
    "native_async": true
  }
]
```

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-08-14 00:25:36 +00:00
Eugene Yurtsev
0f6217f507 docs: together ai embeddings integration docs (#25252)
Update together AI embedding integration docs

Related issue: https://github.com/langchain-ai/langchain/issues/24856

```json
[
  {
    "provider": "together",
    "js": true,
    "local": false,
    "serializable": false
  }
]
```

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-08-14 00:24:02 +00:00
Eugene Yurtsev
8645a49f31 docs: Update integration docs for OllamaEmbeddingsModel (#25314)
Issue: https://github.com/langchain-ai/langchain/issues/24856

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-08-14 00:23:05 +00:00
Eugene Yurtsev
a4ef830480 docs: update integration docs for openai embeddings (#25249)
Related issue: https://github.com/langchain-ai/langchain/issues/24856

```json
   {
      "provider": "openai",
      "js":  true,
      "local": false,
     "serializable": false,
"async_native": true
  }
```

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-08-14 00:21:36 +00:00
Eugene Yurtsev
b1aed44540 docs: Updating integration docs for Fireworks Embeddings (#25247)
Providers:
* fireworks

See related issue:
* https://github.com/langchain-ai/langchain/issues/24856

Features:

```json
[
  {
    "provider": "fireworks",
    "js": true,
    "local": false,
    "serializable": false
  }
]
```

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
2024-08-13 17:04:18 -07:00
Isaac Francisco
f4ffd692a3 [docs]: standardize doc loader doc strings (#25325) 2024-08-13 23:18:56 +00:00
Isaac Francisco
e0bbb81d04 [docs]: standardize tool docstrings (#25351) 2024-08-13 16:10:00 -07:00
Erick Friis
d5b548b4ce docs: index pages, sidebars (#25316) 2024-08-13 15:52:51 -07:00
Isaac Francisco
0478f7f5e4 [docs]: LLM integration pages (#25005) 2024-08-13 14:50:45 -07:00
thedavgar
9d08369442 community: fix AzureSearch vectorstore asynchronous methods (#24921)
**Description**
Fix the asynchronous methods used to retrieve documents from the AzureSearch
VectorStore. The previous changes from [this
commit](ffe6ca986e)
created similar code for the synchronous and asynchronous methods, but the
asynchronous client returns an asynchronous iterator,
`AsyncSearchItemPaged`, as noted in issue #24740.
To solve this, the synchronous iterators in the asynchronous methods
were changed to asynchronous iterators.
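A minimal sketch of that iterator change, assuming the
`azure-search-documents` async client (helper name hypothetical):

```python
from azure.search.documents.aio import SearchClient


async def _asearch(client: SearchClient, query: str) -> list:
    results = await client.search(search_text=query)  # AsyncSearchItemPaged
    docs = []
    async for result in results:  # async iteration; a plain `for` fails here
        docs.append(result)
    return docs
```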

@chrislrobert said in [this
comment](https://github.com/langchain-ai/langchain/issues/24740#issuecomment-2254168302)
that there was still a flaw due to `with` blocks that close the client
after each call. I removed these `with` blocks from the `async_client`,
following the same pattern as the sync `client`.

To close the connections, a `__del__` method is included that gently closes
the clients once the vectorstore object is destroyed.

**Issue:** #24740 and #24064
**Dependencies:** No new dependencies for this change

**Example notebook:** I created a notebook just to test that the changes work
and give the same results as the synchronous methods for vector and
hybrid search. With these changes, the asynchronous methods in the
retriever work as well.

![image](https://github.com/user-attachments/assets/697e431b-9d7f-4d0d-b205-59d051ac2b67)


**Lint and test**: Passes the tests and the linter
2024-08-13 14:20:51 -07:00
Isaac Francisco
6bc451b942 [docs]: merge tool/toolkit duplicates (#25197) 2024-08-13 12:19:17 -07:00
Fedor Nikolaev
2b15518c5f community: add args_schema to SearxSearchResults tool (#25350)
This adds an `args_schema` member to the `SearxSearchResults` tool. This
member is already present in the `SearxSearchRun` tool in the same file.

I was getting `TypeError: Type is not JSON serializable:
AsyncCallbackManagerForToolRun` thrown in the langserve playground
when using the `SearxSearchResults` tool as part of a chain there.
This fixes the issue, so the error is no longer raised.
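The fix itself is a one-line class attribute; a sketch, assuming the input
schema class mirrors the one already used by `SearxSearchRun` (schema name
hypothetical):

```python
from typing import Type

from langchain_core.pydantic_v1 import BaseModel
from langchain_core.tools import BaseTool


class SearxSearchQueryInput(BaseModel):  # hypothetical schema name
    query: str


class SearxSearchResults(BaseTool):
    ...  # existing fields unchanged
    args_schema: Type[BaseModel] = SearxSearchQueryInput  # the added member
```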

This is an example langserve app that reproduced the error; it works
properly after the proposed fix:
```python
#!/usr/bin/env python

from fastapi import FastAPI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
from langchain_community.utilities import SearxSearchWrapper
from langchain_community.tools.searx_search.tool import SearxSearchResults
from langserve import add_routes

template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI()

s = SearxSearchWrapper(searx_host="http://localhost:8080")

search = SearxSearchResults(wrapper=s)

search_chain = (
    {"context": search, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

app = FastAPI()

add_routes(
    app,
    search_chain,
    path="/chain",
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)
```
2024-08-13 18:26:09 +00:00
Matt Kandler
b6df3405fb docs: Fix broken link to Runhouse documentation (#25349)
- **Description:** Runhouse recently migrated from Read the Docs to a
self-hosted solution. This PR updates a broken link from the old docs to
www.run.house/docs. Also changed "The Runhouse" to "Runhouse" (it's
cleaner).
- **Issue:** None
- **Dependencies:** None
2024-08-13 18:18:19 +00:00
maang-h
089f5e6cad Standardize SparkLLM (#25239)
- **Description:** Standardize SparkLLM, including:
  - docs (issue #24803)
  - streaming support
  - an updated API URL
  - model init arg names (issue #20085)
2024-08-13 09:50:12 -04:00
Leonid Ganeline
35e2230f56 docs: integrations references update (#25322)
Added missing provider pages. Fixed formats and added descriptions and
links.
2024-08-13 09:29:51 -04:00
Chen Xiabin
24155aa1ac qianfan generate/agenerate with usage_metadata (#25332) 2024-08-13 09:24:41 -04:00
Christophe Bornet
ebbe609193 Add README for astradb package (#25345)
Similar to
https://github.com/langchain-ai/langchain/blob/master/libs/partners/ibm/README.md
2024-08-13 09:17:23 -04:00
Eugene Yurtsev
f679ed72ca ollama[patch]: Update API Reference for ollama embeddings (#25315)
Update API reference for OllamaEmbeddings
Issue: https://github.com/langchain-ai/langchain/issues/24856
2024-08-12 21:31:48 -04:00
362 changed files with 8579 additions and 7846 deletions

.gitignore

@@ -172,6 +172,8 @@ docs/api_reference/*/
!docs/api_reference/_static/
!docs/api_reference/templates/
!docs/api_reference/themes/
!docs/api_reference/_extensions/
!docs/api_reference/scripts/
docs/docs/build
docs/docs/node_modules
docs/docs/yarn.lock

Makefile (repo root)

@@ -31,6 +31,7 @@ docs_linkcheck:
api_docs_build:
poetry run python docs/api_reference/create_api_rst.py
cd docs/api_reference && poetry run make html
poetry run python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
API_PKG ?= text-splitters
@@ -38,12 +39,14 @@ api_docs_quick_preview:
poetry run pip install "pydantic<2"
poetry run python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && poetry run make html
open docs/api_reference/_build/html/$(shell echo $(API_PKG) | sed 's/-/_/g')_api_reference.html
poetry run python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
open docs/api_reference/_build/html/reference.html
## api_docs_clean: Clean the API Reference documentation build artifacts.
api_docs_clean:
find ./docs/api_reference -name '*_api_reference.rst' -delete
git clean -fdX ./docs/api_reference
rm docs/api_reference/index.md
## api_docs_linkcheck: Run linkchecker on the API Reference documentation.

docs/Makefile

@@ -41,12 +41,8 @@ generate-files:
cp -r $(SOURCE_DIR)/* $(INTERMEDIATE_DIR)
mkdir -p $(INTERMEDIATE_DIR)/templates
$(PYTHON) scripts/model_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/tool_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/document_loader_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/kv_store_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/partner_pkg_table.py $(INTERMEDIATE_DIR)
@@ -86,7 +82,11 @@ vercel-build: install-vercel-deps build generate-references
rm -rf docs
mv $(OUTPUT_NEW_DOCS_DIR) docs
rm -rf build
yarn run docusaurus build
mkdir static/api_reference
git clone --depth=1 https://github.com/baskaryan/langchain-api-docs-build.git
mv langchain-api-docs-build/api_reference_build/html/* static/api_reference/
rm -rf langchain-api-docs-build
NODE_OPTIONS="--max-old-space-size=5000" yarn run docusaurus build
mv build v0.2
mkdir build
mv v0.2 build

docs/api_reference/_extensions/gallery_directive.py (new file)

@@ -0,0 +1,144 @@
"""A directive to generate a gallery of images from structured data.
Generating a gallery of images that are all the same size is a common
pattern in documentation, and this can be cumbersome if the gallery is
generated programmatically. This directive wraps this particular use-case
in a helper-directive to generate it with a single YAML configuration file.
It currently exists for maintainers of the pydata-sphinx-theme,
but might be abstracted into a standalone package if it proves useful.
"""
from pathlib import Path
from typing import Any, ClassVar, Dict, List
from docutils import nodes
from docutils.parsers.rst import directives
from sphinx.application import Sphinx
from sphinx.util import logging
from sphinx.util.docutils import SphinxDirective
from yaml import safe_load
logger = logging.getLogger(__name__)
TEMPLATE_GRID = """
`````{{grid}} {columns}
{options}
{content}
`````
"""
GRID_CARD = """
````{{grid-item-card}} {title}
{options}
{content}
````
"""
class GalleryGridDirective(SphinxDirective):
"""A directive to show a gallery of images and links in a Bootstrap grid.
The grid can be generated from a YAML file that contains a list of items, or
from the content of the directive (also formatted in YAML). Use the parameter
"class-card" to add an additional CSS class to all cards. When specifying the grid
items, you can use all parameters from "grid-item-card" directive to customize
individual cards + ["image", "header", "content", "title"].
Danger:
This directive can only be used in the context of a Myst documentation page as
the templates use Markdown flavored formatting.
"""
name = "gallery-grid"
has_content = True
required_arguments = 0
optional_arguments = 1
final_argument_whitespace = True
option_spec: ClassVar[dict[str, Any]] = {
# A class to be added to the resulting container
"grid-columns": directives.unchanged,
"class-container": directives.unchanged,
"class-card": directives.unchanged,
}
def run(self) -> List[nodes.Node]:
"""Create the gallery grid."""
if self.arguments:
# If an argument is given, assume it's a path to a YAML file
# Parse it and load it into the directive content
path_data_rel = Path(self.arguments[0])
path_doc, _ = self.get_source_info()
path_doc = Path(path_doc).parent
path_data = (path_doc / path_data_rel).resolve()
if not path_data.exists():
logger.info(f"Could not find grid data at {path_data}.")
nodes.text("No grid data found at {path_data}.")
return
yaml_string = path_data.read_text()
else:
yaml_string = "\n".join(self.content)
# Use all the element with an img-bottom key as sites to show
# and generate a card item for each of them
grid_items = []
for item in safe_load(yaml_string):
# remove parameters that are not needed for the card options
title = item.pop("title", "")
# build the content of the card using some extra parameters
header = f"{item.pop('header')} \n^^^ \n" if "header" in item else ""
image = f"![image]({item.pop('image')}) \n" if "image" in item else ""
content = f"{item.pop('content')} \n" if "content" in item else ""
# optional parameter that influence all cards
if "class-card" in self.options:
item["class-card"] = self.options["class-card"]
loc_options_str = "\n".join(f":{k}: {v}" for k, v in item.items()) + " \n"
card = GRID_CARD.format(
options=loc_options_str, content=header + image + content, title=title
)
grid_items.append(card)
# Parse the template with Sphinx Design to create an output container
# Prep the options for the template grid
class_ = "gallery-directive" + f' {self.options.get("class-container", "")}'
options = {"gutter": 2, "class-container": class_}
options_str = "\n".join(f":{k}: {v}" for k, v in options.items())
# Create the directive string for the grid
grid_directive = TEMPLATE_GRID.format(
columns=self.options.get("grid-columns", "1 2 3 4"),
options=options_str,
content="\n".join(grid_items),
)
# Parse content as a directive so Sphinx Design processes it
container = nodes.container()
self.state.nested_parse([grid_directive], 0, container)
# Sphinx Design outputs a container too, so just use that
return [container.children[0]]
def setup(app: Sphinx) -> Dict[str, Any]:
"""Add custom configuration to sphinx app.
Args:
app: the Sphinx application
Returns:
the 2 parallel parameters set to ``True``.
"""
app.add_directive("gallery-grid", GalleryGridDirective)
return {
"parallel_read_safe": True,
"parallel_write_safe": True,
}

docs/api_reference/_static/css/custom.css

@@ -1,26 +1,411 @@
pre {
white-space: break-spaces;
@import url('https://fonts.googleapis.com/css2?family=Inter:wght@400;700&display=swap');
/*******************************************************************************
* master color map. Only the colors that actually differ between light and dark
* themes are specified separately.
*
* To see the full list of colors see https://www.figma.com/file/rUrrHGhUBBIAAjQ82x6pz9/PyData-Design-system---proposal-for-implementation-(2)?node-id=1234%3A765&t=ifcFT1JtnrSshGfi-1
*/
/**
* Function to get items from nested maps
*/
/* Assign base colors for the PyData theme */
:root {
--pst-teal-50: #f4fbfc;
--pst-teal-100: #e9f6f8;
--pst-teal-200: #d0ecf1;
--pst-teal-300: #abdde6;
--pst-teal-400: #3fb1c5;
--pst-teal-500: #0a7d91;
--pst-teal-600: #085d6c;
--pst-teal-700: #064752;
--pst-teal-800: #042c33;
--pst-teal-900: #021b1f;
--pst-violet-50: #f4eefb;
--pst-violet-100: #e0c7ff;
--pst-violet-200: #d5b4fd;
--pst-violet-300: #b780ff;
--pst-violet-400: #9c5ffd;
--pst-violet-500: #8045e5;
--pst-violet-600: #6432bd;
--pst-violet-700: #4b258f;
--pst-violet-800: #341a61;
--pst-violet-900: #1e0e39;
--pst-gray-50: #f9f9fa;
--pst-gray-100: #f3f4f5;
--pst-gray-200: #e5e7ea;
--pst-gray-300: #d1d5da;
--pst-gray-400: #9ca4af;
--pst-gray-500: #677384;
--pst-gray-600: #48566b;
--pst-gray-700: #29313d;
--pst-gray-800: #222832;
--pst-gray-900: #14181e;
--pst-pink-50: #fcf8fd;
--pst-pink-100: #fcf0fa;
--pst-pink-200: #f8dff5;
--pst-pink-300: #f3c7ee;
--pst-pink-400: #e47fd7;
--pst-pink-500: #c132af;
--pst-pink-600: #912583;
--pst-pink-700: #6e1c64;
--pst-pink-800: #46123f;
--pst-pink-900: #2b0b27;
--pst-foundation-white: #ffffff;
--pst-foundation-black: #14181e;
--pst-green-10: #f1fdfd;
--pst-green-50: #E0F7F6;
--pst-green-100: #B3E8E6;
--pst-green-200: #80D6D3;
--pst-green-300: #4DC4C0;
--pst-green-400: #4FB2AD;
--pst-green-500: #287977;
--pst-green-600: #246161;
--pst-green-700: #204F4F;
--pst-green-800: #1C3C3C;
--pst-green-900: #0D2427;
--pst-lilac-50: #f4eefb;
--pst-lilac-100: #DAD6FE;
--pst-lilac-200: #BCB2FD;
--pst-lilac-300: #9F8BFA;
--pst-lilac-400: #7F5CF6;
--pst-lilac-500: #6F3AED;
--pst-lilac-600: #6028D9;
--pst-lilac-700: #5021B6;
--pst-lilac-800: #431D95;
--pst-lilac-900: #1e0e39;
--pst-header-height: 2.5rem;
}
@media (min-width: 1200px) {
.container,
.container-lg,
.container-md,
.container-sm,
.container-xl {
max-width: 2560px !important;
}
html {
--pst-font-family-base: 'Inter';
--pst-font-family-heading: 'Inter Tight', sans-serif;
}
#my-component-root *,
#headlessui-portal-root * {
z-index: 10000;
/*******************************************************************************
* write the color rules for each theme (light/dark)
*/
/* NOTE:
* Mixins enable us to reuse the same definitions for the different modes
* https://sass-lang.com/documentation/at-rules/mixin
* something inserts a variable into a CSS selector or property name
* https://sass-lang.com/documentation/interpolation
*/
/* Defaults to light mode if data-theme is not set */
html:not([data-theme]) {
--pst-color-primary: #287977;
--pst-color-primary-bg: #80D6D3;
--pst-color-secondary: #6F3AED;
--pst-color-secondary-bg: #DAD6FE;
--pst-color-accent: #c132af;
--pst-color-accent-bg: #f8dff5;
--pst-color-info: #276be9;
--pst-color-info-bg: #dce7fc;
--pst-color-warning: #f66a0a;
--pst-color-warning-bg: #f8e3d0;
--pst-color-success: #00843f;
--pst-color-success-bg: #d6ece1;
--pst-color-attention: var(--pst-color-warning);
--pst-color-attention-bg: var(--pst-color-warning-bg);
--pst-color-danger: #d72d47;
--pst-color-danger-bg: #f9e1e4;
--pst-color-text-base: #222832;
--pst-color-text-muted: #48566b;
--pst-color-heading-color: #ffffff;
--pst-color-shadow: rgba(0, 0, 0, 0.1);
--pst-color-border: #d1d5da;
--pst-color-border-muted: rgba(23, 23, 26, 0.2);
--pst-color-inline-code: #912583;
--pst-color-inline-code-links: #246161;
--pst-color-target: #f3cf95;
--pst-color-background: #ffffff;
--pst-color-on-background: #F4F9F8;
--pst-color-surface: #F4F9F8;
--pst-color-on-surface: #222832;
}
html:not([data-theme]) {
--pst-color-link: var(--pst-color-primary);
--pst-color-link-hover: var(--pst-color-secondary);
}
html:not([data-theme]) .only-dark,
html:not([data-theme]) .only-dark ~ figcaption {
display: none !important;
}
table.longtable code {
white-space: normal;
/* NOTE: @each {...} is like a for-loop
* https://sass-lang.com/documentation/at-rules/control/each
*/
html[data-theme=light] {
--pst-color-primary: #287977;
--pst-color-primary-bg: #80D6D3;
--pst-color-secondary: #6F3AED;
--pst-color-secondary-bg: #DAD6FE;
--pst-color-accent: #c132af;
--pst-color-accent-bg: #f8dff5;
--pst-color-info: #276be9;
--pst-color-info-bg: #dce7fc;
--pst-color-warning: #f66a0a;
--pst-color-warning-bg: #f8e3d0;
--pst-color-success: #00843f;
--pst-color-success-bg: #d6ece1;
--pst-color-attention: var(--pst-color-warning);
--pst-color-attention-bg: var(--pst-color-warning-bg);
--pst-color-danger: #d72d47;
--pst-color-danger-bg: #f9e1e4;
--pst-color-text-base: #222832;
--pst-color-text-muted: #48566b;
--pst-color-heading-color: #ffffff;
--pst-color-shadow: rgba(0, 0, 0, 0.1);
--pst-color-border: #d1d5da;
--pst-color-border-muted: rgba(23, 23, 26, 0.2);
--pst-color-inline-code: #912583;
--pst-color-inline-code-links: #246161;
--pst-color-target: #f3cf95;
--pst-color-background: #ffffff;
--pst-color-on-background: #F4F9F8;
--pst-color-surface: #F4F9F8;
--pst-color-on-surface: #222832;
color-scheme: light;
}
html[data-theme=light] {
--pst-color-link: var(--pst-color-primary);
--pst-color-link-hover: var(--pst-color-secondary);
}
html[data-theme=light] .only-dark,
html[data-theme=light] .only-dark ~ figcaption {
display: none !important;
}
table.longtable td {
max-width: 600px;
html[data-theme=dark] {
--pst-color-primary: #4FB2AD;
--pst-color-primary-bg: #1C3C3C;
--pst-color-secondary: #7F5CF6;
--pst-color-secondary-bg: #431D95;
--pst-color-accent: #e47fd7;
--pst-color-accent-bg: #46123f;
--pst-color-info: #79a3f2;
--pst-color-info-bg: #06245d;
--pst-color-warning: #ff9245;
--pst-color-warning-bg: #652a02;
--pst-color-success: #5fb488;
--pst-color-success-bg: #002f17;
--pst-color-attention: var(--pst-color-warning);
--pst-color-attention-bg: var(--pst-color-warning-bg);
--pst-color-danger: #e78894;
--pst-color-danger-bg: #4e111b;
--pst-color-text-base: #ced6dd;
--pst-color-text-muted: #9ca4af;
--pst-color-heading-color: #14181e;
--pst-color-shadow: rgba(0, 0, 0, 0.2);
--pst-color-border: #48566b;
--pst-color-border-muted: #29313d;
--pst-color-inline-code: #f3c7ee;
--pst-color-inline-code-links: #4FB2AD;
--pst-color-target: #675c04;
--pst-color-background: #14181e;
--pst-color-on-background: #222832;
--pst-color-surface: #29313d;
--pst-color-on-surface: #f3f4f5;
/* Adjust images in dark mode (unless they have class .only-dark or
* .dark-light, in which case assume they're already optimized for dark
* mode).
*/
/* Give images a light background in dark mode in case they have
* transparency and black text (unless they have class .only-dark or .dark-light, in
* which case assume they're already optimized for dark mode).
*/
color-scheme: dark;
}
html[data-theme=dark] {
--pst-color-link: var(--pst-color-primary);
--pst-color-link-hover: var(--pst-color-secondary);
}
html[data-theme=dark] .only-light,
html[data-theme=dark] .only-light ~ figcaption {
display: none !important;
}
html[data-theme=dark] img:not(.only-dark):not(.dark-light) {
filter: brightness(0.8) contrast(1.2);
}
html[data-theme=dark] .bd-content img:not(.only-dark):not(.dark-light) {
background: rgb(255, 255, 255);
border-radius: 0.25rem;
}
html[data-theme=dark] .MathJax_SVG * {
fill: var(--pst-color-text-base);
}
.pst-color-primary {
color: var(--pst-color-primary);
}
.pst-color-secondary {
color: var(--pst-color-secondary);
}
.pst-color-accent {
color: var(--pst-color-accent);
}
.pst-color-info {
color: var(--pst-color-info);
}
.pst-color-warning {
color: var(--pst-color-warning);
}
.pst-color-success {
color: var(--pst-color-success);
}
.pst-color-attention {
color: var(--pst-color-attention);
}
.pst-color-danger {
color: var(--pst-color-danger);
}
.pst-color-text-base {
color: var(--pst-color-text-base);
}
.pst-color-text-muted {
color: var(--pst-color-text-muted);
}
.pst-color-heading-color {
color: var(--pst-color-heading-color);
}
.pst-color-shadow {
color: var(--pst-color-shadow);
}
.pst-color-border {
color: var(--pst-color-border);
}
.pst-color-border-muted {
color: var(--pst-color-border-muted);
}
.pst-color-inline-code {
color: var(--pst-color-inline-code);
}
.pst-color-inline-code-links {
color: var(--pst-color-inline-code-links);
}
.pst-color-target {
color: var(--pst-color-target);
}
.pst-color-background {
color: var(--pst-color-background);
}
.pst-color-on-background {
color: var(--pst-color-on-background);
}
.pst-color-surface {
color: var(--pst-color-surface);
}
.pst-color-on-surface {
color: var(--pst-color-on-surface);
}
/* Adjust the height of the navbar */
.bd-header .bd-header__inner{
height: 52px; /* Adjust this value as needed */
}
.navbar-nav > li > a {
line-height: 52px; /* Vertically center the navbar links */
}
/* Make sure the navbar items align properly */
.navbar-nav {
display: flex;
}
.bd-header .navbar-header-items__start{
margin-left: 0rem
}
.bd-header button.primary-toggle {
margin-right: 0rem;
}
.bd-header ul.navbar-nav .dropdown .dropdown-menu {
overflow-y: auto; /* Enable vertical scrolling */
max-height: 80vh
}
.bd-sidebar-primary {
width: 22%; /* Adjust this value to your preference */
line-height: 1.4;
}
.bd-sidebar-secondary {
line-height: 1.4;
}
.toc-entry a.nav-link, .toc-entry a>code {
background-color: transparent;
border-color: transparent;
}
.bd-sidebar-primary code{
background-color: transparent;
border-color: transparent;
}
.toctree-wrapper li[class^=toctree-l1]>a {
font-size: 1.3em
}
.toctree-wrapper li[class^=toctree-l1] {
margin-bottom: 2em;
}
.toctree-wrapper li[class^=toctree-l]>ul {
margin-top: 0.5em;
font-size: 0.9em;
}
*, :after, :before {
font-style: normal;
}
div.deprecated {
margin-top: 0.5em;
margin-bottom: 2em;
}
.admonition-beta.admonition, div.admonition-beta.admonition {
border-color: var(--pst-color-warning);
margin-top:0.5em;
margin-bottom: 2em;
}
.admonition-beta>.admonition-title, div.admonition-beta>.admonition-title {
background-color: var(--pst-color-warning-bg);
}
dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) dd {
margin-left: 1rem;
}
p {
font-size: 0.9rem;
margin-bottom: 0.5rem;
}

docs/api_reference/_static/img/brand/favicon.png: new binary image, 777 B
(path assumed from `html_favicon` in conf.py).

docs/api_reference/_static/wordmark-api-dark.svg (new file; name assumed from
conf.py's `image_dark` option and the light #F4F3FF fill):

@@ -0,0 +1,11 @@
<svg width="72" height="19" viewBox="0 0 72 19" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_4019_2020)">
<path d="M29.4038 5.84477C30.1256 6.56657 30.1256 7.74117 29.4038 8.46296L27.7869 10.0538L27.7704 9.96259C27.6524 9.30879 27.3415 8.71552 26.8723 8.24627C26.5189 7.8936 26.1012 7.63282 25.6305 7.47143C25.3383 7.76508 25.1777 8.14989 25.1777 8.55487C25.1777 8.63706 25.1851 8.72224 25.2001 8.80742C25.4593 8.90082 25.6887 9.04503 25.8815 9.23781C26.6033 9.9596 26.6033 11.1342 25.8815 11.856L24.4738 13.2637C24.1129 13.6246 23.6392 13.8047 23.1647 13.8047C22.6902 13.8047 22.2165 13.6246 21.8556 13.2637C21.1338 12.5419 21.1338 11.3673 21.8556 10.6455L23.4725 9.05549L23.489 9.14665C23.6063 9.79896 23.9171 10.3922 24.3879 10.8622C24.742 11.2164 25.1343 11.4518 25.6043 11.6124L25.691 11.5257C25.954 11.2627 26.0982 10.913 26.0982 10.5402C26.0982 10.4572 26.0907 10.3743 26.0765 10.2929C25.8053 10.2032 25.5819 10.0754 25.3786 9.87218C25.0857 9.57928 24.9034 9.20493 24.8526 8.79024C24.8489 8.76035 24.8466 8.73121 24.8437 8.70132C24.8033 8.16109 24.9983 7.63357 25.3786 7.25399L26.7864 5.84627C27.1353 5.49733 27.6001 5.30455 28.0955 5.30455C28.5909 5.30455 29.0556 5.49658 29.4046 5.84627L29.4038 5.84477ZM36.7548 9.56583C36.7548 14.7163 32.5645 18.9058 27.4148 18.9058H9.34C4.1903 18.9058 0 14.7163 0 9.56583C0 4.41538 4.1903 0.22583 9.34 0.22583H27.4148C32.5652 0.22583 36.7548 4.41613 36.7548 9.56583ZM18 14.25C18.1472 14.0714 17.4673 13.5686 17.3283 13.384C17.0459 13.0777 17.0444 12.6368 16.8538 12.2789C16.3876 11.1985 15.8518 10.1262 15.1024 9.21166C14.3104 8.21116 13.333 7.38326 12.4745 6.44403C11.8371 5.78873 11.6668 4.85548 11.1041 4.15087C10.3285 3.00541 7.87624 2.69308 7.51683 4.31077C7.51833 4.36158 7.50264 4.39371 7.45855 4.42584C7.2598 4.57005 7.08271 4.73518 6.93402 4.93468C6.57013 5.44129 6.51409 6.30057 6.96839 6.75561C6.98333 6.51576 6.99155 6.28936 7.18134 6.1175C7.53252 6.41862 8.06304 6.52547 8.47026 6.30057C9.36989 7.585 9.14573 9.36184 9.86005 10.7457C10.0573 11.0729 10.2561 11.4069 10.5094 11.6939C10.7148 12.0137 11.4247 12.391 11.4665 12.6869C11.474 13.195 11.4142 13.7502 11.7475 14.1753C11.9044 14.4936 11.5188 14.8134 11.208 14.7738C10.8045 14.8291 10.3121 14.5026 9.95868 14.7036C9.8339 14.8388 9.58957 14.6894 9.48197 14.8769C9.44461 14.9741 9.24286 15.1108 9.36316 15.2042C9.49691 15.1026 9.62095 14.9965 9.80102 15.057C9.77412 15.2035 9.88994 15.2244 9.98184 15.267C9.97886 15.3663 9.92057 15.468 9.99679 15.5524C10.0857 15.4627 10.1388 15.3357 10.28 15.2983C10.7492 15.9238 11.2267 14.6655 12.2421 15.2318C12.0359 15.2214 11.8528 15.2475 11.7139 15.4172C11.6795 15.4553 11.6503 15.5001 11.7109 15.5494C12.2586 15.196 12.2556 15.6705 12.6112 15.5248C12.8847 15.382 13.1567 15.2035 13.4817 15.2543C13.1657 15.3454 13.153 15.5995 12.9677 15.8139C12.9363 15.8468 12.9213 15.8842 12.9579 15.9387C13.614 15.8834 13.6678 15.6652 14.1975 15.3977C14.5928 15.1564 14.9866 15.7414 15.3288 15.4082C15.4043 15.3357 15.5074 15.3604 15.6008 15.3507C15.4812 14.7133 14.1669 15.4672 14.1878 14.6124C14.6107 14.3247 14.5136 13.7741 14.542 13.3295C15.0284 13.5992 15.5694 13.7561 16.0461 14.0139C16.2867 14.4025 16.6641 14.9158 17.1669 14.8822C17.1804 14.8433 17.1923 14.8089 17.2065 14.7693C17.359 14.7955 17.5547 14.8964 17.6384 14.7036C17.8663 14.9419 18.201 14.93 18.4992 14.8687C18.7196 14.6894 18.0845 14.4338 17.9993 14.2493L18 14.25ZM31.3458 7.15387C31.3458 6.28413 31.0081 5.46744 30.3946 4.85399C29.7812 4.24054 28.9645 3.9028 28.094 3.9028C27.2235 3.9028 26.4068 4.24054 25.7933 4.85399L24.3856 6.26171C24.0569 6.59048 23.8073 6.97678 23.6436 7.40941L23.6339 7.43407L23.6085 7.44154C23.0974 7.5992 22.6469 7.86969 
22.2696 8.24702L20.8618 9.65475C19.5938 10.9235 19.5938 12.9873 20.8618 14.2553C21.4753 14.8687 22.292 15.2064 23.1617 15.2064C24.0314 15.2064 24.8489 14.8687 25.4623 14.2553L26.8701 12.8475C27.1973 12.5203 27.4454 12.1355 27.609 11.7036L27.6188 11.6789L27.6442 11.6707C28.1463 11.5168 28.6095 11.2373 28.9854 10.8622L30.3931 9.4545C31.0066 8.84105 31.3443 8.02436 31.3443 7.15387H31.3458ZM12.8802 13.1972C12.7592 13.6695 12.7196 14.4742 12.1054 14.4974C12.0546 14.7701 12.2944 14.8724 12.5119 14.785C12.7278 14.6856 12.8302 14.8635 12.9026 15.0406C13.2359 15.0891 13.7291 14.9292 13.7477 14.5347C13.2501 14.2478 13.0962 13.7023 12.8795 13.1972H12.8802Z" fill="#F4F3FF"/>
<path d="M43.5142 15.2258L47.1462 3.70583H49.9702L53.6022 15.2258H51.6182L48.3222 4.88983H48.7542L45.4982 15.2258H43.5142ZM45.5382 12.7298V10.9298H51.5862V12.7298H45.5382ZM55.0486 15.2258V3.70583H59.8086C59.9206 3.70583 60.0646 3.71116 60.2406 3.72183C60.4166 3.72716 60.5792 3.74316 60.7286 3.76983C61.3952 3.87116 61.9446 4.0925 62.3766 4.43383C62.8139 4.77516 63.1366 5.20716 63.3446 5.72983C63.5579 6.24716 63.6646 6.82316 63.6646 7.45783C63.6646 8.08716 63.5579 8.66316 63.3446 9.18583C63.1312 9.70316 62.8059 10.1325 62.3686 10.4738C61.9366 10.8152 61.3899 11.0365 60.7286 11.1378C60.5792 11.1592 60.4139 11.1752 60.2326 11.1858C60.0566 11.1965 59.9152 11.2018 59.8086 11.2018H56.9766V15.2258H55.0486ZM56.9766 9.40183H59.7286C59.8352 9.40183 59.9552 9.3965 60.0886 9.38583C60.2219 9.37516 60.3446 9.35383 60.4566 9.32183C60.7766 9.24183 61.0272 9.1005 61.2086 8.89783C61.3952 8.69516 61.5259 8.46583 61.6006 8.20983C61.6806 7.95383 61.7206 7.70316 61.7206 7.45783C61.7206 7.2125 61.6806 6.96183 61.6006 6.70583C61.5259 6.4445 61.3952 6.2125 61.2086 6.00983C61.0272 5.80716 60.7766 5.66583 60.4566 5.58583C60.3446 5.55383 60.2219 5.53516 60.0886 5.52983C59.9552 5.51916 59.8352 5.51383 59.7286 5.51383H56.9766V9.40183ZM65.4273 15.2258V3.70583H67.3553V15.2258H65.4273Z" fill="#F4F3FF"/>
</g>
<defs>
<clipPath id="clip0_4019_2020">
<rect width="71.0711" height="18.68" fill="white" transform="translate(0 0.22583)"/>
</clipPath>
</defs>
</svg>

docs/api_reference/_static/wordmark-api.svg (new file; name assumed from
conf.py's `image_light` option and the dark #246161 fill):

@@ -0,0 +1,11 @@
<svg width="72" height="20" viewBox="0 0 72 20" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_4019_689)">
<path d="M29.4038 5.97905C30.1256 6.70085 30.1256 7.87545 29.4038 8.59724L27.7869 10.188L27.7704 10.0969C27.6524 9.44307 27.3415 8.84979 26.8723 8.38055C26.5189 8.02787 26.1012 7.7671 25.6305 7.60571C25.3383 7.89936 25.1777 8.28416 25.1777 8.68915C25.1777 8.77134 25.1851 8.85652 25.2001 8.9417C25.4593 9.0351 25.6887 9.17931 25.8815 9.37209C26.6033 10.0939 26.6033 11.2685 25.8815 11.9903L24.4738 13.398C24.1129 13.7589 23.6392 13.939 23.1647 13.939C22.6902 13.939 22.2165 13.7589 21.8556 13.398C21.1338 12.6762 21.1338 11.5016 21.8556 10.7798L23.4725 9.18977L23.489 9.28093C23.6063 9.93323 23.9171 10.5265 24.3879 10.9965C24.742 11.3507 25.1343 11.586 25.6043 11.7467L25.691 11.66C25.954 11.397 26.0982 11.0473 26.0982 10.6745C26.0982 10.5915 26.0907 10.5086 26.0765 10.4271C25.8053 10.3375 25.5819 10.2097 25.3786 10.0065C25.0857 9.71356 24.9034 9.33921 24.8526 8.92451C24.8489 8.89463 24.8466 8.86549 24.8437 8.8356C24.8033 8.29537 24.9983 7.76785 25.3786 7.38827L26.7864 5.98055C27.1353 5.6316 27.6001 5.43883 28.0955 5.43883C28.5909 5.43883 29.0556 5.63086 29.4046 5.98055L29.4038 5.97905ZM36.7548 9.70011C36.7548 14.8506 32.5645 19.0401 27.4148 19.0401H9.34C4.1903 19.0401 0 14.8506 0 9.70011C0 4.54966 4.1903 0.360107 9.34 0.360107H27.4148C32.5652 0.360107 36.7548 4.55041 36.7548 9.70011ZM18 14.3843C18.1472 14.2057 17.4673 13.7029 17.3283 13.5183C17.0459 13.2119 17.0444 12.7711 16.8538 12.4132C16.3876 11.3327 15.8518 10.2605 15.1024 9.34594C14.3104 8.34543 13.333 7.51754 12.4745 6.57831C11.8371 5.92301 11.6668 4.98976 11.1041 4.28515C10.3285 3.13969 7.87624 2.82736 7.51683 4.44505C7.51833 4.49586 7.50264 4.52799 7.45855 4.56012C7.2598 4.70433 7.08271 4.86946 6.93402 5.06896C6.57013 5.57556 6.51409 6.43484 6.96839 6.88989C6.98333 6.65004 6.99155 6.42364 7.18134 6.25178C7.53252 6.5529 8.06304 6.65975 8.47026 6.43484C9.36989 7.71928 9.14573 9.49612 9.86005 10.8799C10.0573 11.2072 10.2561 11.5412 10.5094 11.8281C10.7148 12.1479 11.4247 12.5253 11.4665 12.8212C11.474 13.3293 11.4142 13.8844 11.7475 14.3096C11.9044 14.6279 11.5188 14.9477 11.208 14.9081C10.8045 14.9634 10.3121 14.6369 9.95868 14.8379C9.8339 14.9731 9.58957 14.8237 9.48197 15.0112C9.44461 15.1083 9.24286 15.2451 9.36316 15.3385C9.49691 15.2369 9.62095 15.1308 9.80102 15.1913C9.77412 15.3377 9.88994 15.3587 9.98184 15.4012C9.97886 15.5006 9.92057 15.6022 9.99679 15.6867C10.0857 15.597 10.1388 15.47 10.28 15.4326C10.7492 16.058 11.2267 14.7997 12.2421 15.3661C12.0359 15.3557 11.8528 15.3818 11.7139 15.5514C11.6795 15.5895 11.6503 15.6344 11.7109 15.6837C12.2586 15.3303 12.2556 15.8047 12.6112 15.659C12.8847 15.5163 13.1567 15.3377 13.4817 15.3885C13.1657 15.4797 13.153 15.7337 12.9677 15.9482C12.9363 15.9811 12.9213 16.0184 12.9579 16.073C13.614 16.0177 13.6678 15.7995 14.1975 15.532C14.5928 15.2907 14.9866 15.8757 15.3288 15.5425C15.4043 15.47 15.5074 15.4946 15.6008 15.4849C15.4812 14.8476 14.1669 15.6015 14.1878 14.7467C14.6107 14.459 14.5136 13.9083 14.542 13.4638C15.0284 13.7335 15.5694 13.8904 16.0461 14.1482C16.2867 14.5367 16.6641 15.0501 17.1669 15.0164C17.1804 14.9776 17.1923 14.9432 17.2065 14.9036C17.359 14.9298 17.5547 15.0306 17.6384 14.8379C17.8663 15.0762 18.201 15.0643 18.4992 15.003C18.7196 14.8237 18.0845 14.5681 17.9993 14.3836L18 14.3843ZM31.3458 7.28815C31.3458 6.41841 31.0081 5.60172 30.3946 4.98826C29.7812 4.37481 28.9645 4.03708 28.094 4.03708C27.2235 4.03708 26.4068 4.37481 25.7933 4.98826L24.3856 6.39599C24.0569 6.72476 23.8073 7.11106 23.6436 7.54369L23.6339 7.56835L23.6085 7.57582C23.0974 7.73348 22.6469 8.00396 
22.2696 8.3813L20.8618 9.78902C19.5938 11.0578 19.5938 13.1215 20.8618 14.3895C21.4753 15.003 22.292 15.3407 23.1617 15.3407C24.0314 15.3407 24.8489 15.003 25.4623 14.3895L26.8701 12.9818C27.1973 12.6545 27.4454 12.2697 27.609 11.8378L27.6188 11.8132L27.6442 11.805C28.1463 11.651 28.6095 11.3716 28.9854 10.9965L30.3931 9.58878C31.0066 8.97532 31.3443 8.15863 31.3443 7.28815H31.3458ZM12.8802 13.3315C12.7592 13.8037 12.7196 14.6085 12.1054 14.6316C12.0546 14.9044 12.2944 15.0067 12.5119 14.9193C12.7278 14.8199 12.8302 14.9978 12.9026 15.1748C13.2359 15.2234 13.7291 15.0635 13.7477 14.669C13.2501 14.3821 13.0962 13.8366 12.8795 13.3315H12.8802Z" fill="#246161"/>
<path d="M43.5142 15.3601L47.1462 3.84011H49.9702L53.6022 15.3601H51.6182L48.3222 5.02411H48.7542L45.4982 15.3601H43.5142ZM45.5382 12.8641V11.0641H51.5862V12.8641H45.5382ZM55.0486 15.3601V3.84011H59.8086C59.9206 3.84011 60.0646 3.84544 60.2406 3.85611C60.4166 3.86144 60.5792 3.87744 60.7286 3.90411C61.3952 4.00544 61.9446 4.22677 62.3766 4.56811C62.8139 4.90944 63.1366 5.34144 63.3446 5.86411C63.5579 6.38144 63.6646 6.95744 63.6646 7.59211C63.6646 8.22144 63.5579 8.79744 63.3446 9.32011C63.1312 9.83744 62.8059 10.2668 62.3686 10.6081C61.9366 10.9494 61.3899 11.1708 60.7286 11.2721C60.5792 11.2934 60.4139 11.3094 60.2326 11.3201C60.0566 11.3308 59.9152 11.3361 59.8086 11.3361H56.9766V15.3601H55.0486ZM56.9766 9.53611H59.7286C59.8352 9.53611 59.9552 9.53077 60.0886 9.52011C60.2219 9.50944 60.3446 9.48811 60.4566 9.45611C60.7766 9.37611 61.0272 9.23477 61.2086 9.03211C61.3952 8.82944 61.5259 8.60011 61.6006 8.34411C61.6806 8.08811 61.7206 7.83744 61.7206 7.59211C61.7206 7.34677 61.6806 7.09611 61.6006 6.84011C61.5259 6.57877 61.3952 6.34677 61.2086 6.14411C61.0272 5.94144 60.7766 5.80011 60.4566 5.72011C60.3446 5.68811 60.2219 5.66944 60.0886 5.66411C59.9552 5.65344 59.8352 5.64811 59.7286 5.64811H56.9766V9.53611ZM65.4273 15.3601V3.84011H67.3553V15.3601H65.4273Z" fill="#246161"/>
</g>
<defs>
<clipPath id="clip0_4019_689">
<rect width="71.0711" height="18.68" fill="white" transform="translate(0 0.360107)"/>
</clipPath>
</defs>
</svg>


docs/api_reference/conf.py

@@ -62,7 +62,7 @@ class ExampleLinksDirective(SphinxDirective):
item_node.append(para_node)
list_node.append(item_node)
if list_node.children:
title_node = nodes.title()
title_node = nodes.rubric()
title_node.append(nodes.Text(f"Examples using {class_or_func_name}"))
return [title_node, list_node]
return [list_node]
@@ -75,7 +75,10 @@ class Beta(BaseAdmonition):
def run(self):
self.content = self.content or StringList(
[
"This feature is in beta. It is actively being worked on, so the API may change."
(
"This feature is in beta. It is actively being worked on, so the "
"API may change."
)
]
)
self.arguments = self.arguments or ["Beta"]
@@ -90,13 +93,10 @@ def setup(app):
# -- Project information -----------------------------------------------------
project = "🦜🔗 LangChain"
copyright = "2023, LangChain, Inc."
author = "LangChain, Inc."
copyright = "2023, LangChain Inc"
author = "LangChain, Inc"
version = data["tool"]["poetry"]["version"]
release = version
html_title = project + " " + version
html_favicon = "_static/img/brand/favicon.png"
html_last_updated_fmt = "%b %d, %Y"
@@ -112,11 +112,13 @@ extensions = [
"sphinx.ext.napoleon",
"sphinx.ext.viewcode",
"sphinxcontrib.autodoc_pydantic",
"sphinx_copybutton",
"sphinx_panels",
"IPython.sphinxext.ipython_console_highlighting",
"myst_parser",
"_extensions.gallery_directive",
"sphinx_design",
"sphinx_copybutton",
]
source_suffix = [".rst"]
source_suffix = [".rst", ".md"]
# some autodoc pydantic options are repeated in the actual template.
# potentially user error, but there may be bugs in the sphinx extension
@@ -148,23 +150,84 @@ exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = "scikit-learn-modern"
html_theme_path = ["themes"]
# The theme to use for HTML and HTML Help pages.
html_theme = "pydata_sphinx_theme"
# redirects dictionary maps from old links to new links
html_additional_pages = {}
redirects = {
"index": "langchain_api_reference",
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
# # -- General configuration ------------------------------------------------
"sidebar_includehidden": True,
"use_edit_page_button": False,
# # "analytics": {
# # "plausible_analytics_domain": "scikit-learn.org",
# # "plausible_analytics_url": "https://views.scientific-python.org/js/script.js",
# # },
# # If "prev-next" is included in article_footer_items, then setting show_prev_next
# # to True would repeat prev and next links. See
# # https://github.com/pydata/pydata-sphinx-theme/blob/b731dc230bc26a3d1d1bb039c56c977a9b3d25d8/src/pydata_sphinx_theme/theme/pydata_sphinx_theme/layout.html#L118-L129
"show_prev_next": False,
"search_bar_text": "Search",
"navigation_with_keys": True,
"collapse_navigation": True,
"navigation_depth": 3,
"show_nav_level": 1,
"show_toc_level": 3,
"navbar_align": "left",
"header_links_before_dropdown": 5,
"header_dropdown_text": "Integrations",
"logo": {
"image_light": "_static/wordmark-api.svg",
"image_dark": "_static/wordmark-api-dark.svg",
},
"surface_warnings": True,
# # -- Template placement in theme layouts ----------------------------------
"navbar_start": ["navbar-logo"],
# # Note that the alignment of navbar_center is controlled by navbar_align
"navbar_center": ["navbar-nav"],
"navbar_end": ["langchain_docs", "theme-switcher", "navbar-icon-links"],
# # navbar_persistent is persistent right (even when on mobiles)
"navbar_persistent": ["search-field"],
"article_header_start": ["breadcrumbs"],
"article_header_end": [],
"article_footer_items": [],
"content_footer_items": [],
# # Use html_sidebars that map page patterns to list of sidebar templates
# "primary_sidebar_end": [],
"footer_start": ["copyright"],
"footer_center": [],
"footer_end": [],
# # When specified as a dictionary, the keys should follow glob-style patterns, as in
# # https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-exclude_patterns
# # In particular, "**" specifies the default for all pages
# # Use :html_theme.sidebar_secondary.remove: for file-wide removal
# "secondary_sidebar_items": {"**": ["page-toc", "sourcelink"]},
# "show_version_warning_banner": True,
# "announcement": None,
"icon_links": [
{
# Label for this link
"name": "GitHub",
# URL where the link will redirect
"url": "https://github.com/langchain-ai/langchain", # required
# Icon class (if "type": "fontawesome"), or path to local image (if "type": "local")
"icon": "fa-brands fa-square-github",
# The type of image to be used (see below for details)
"type": "fontawesome",
},
{
"name": "X / Twitter",
"url": "https://twitter.com/langchainai",
"icon": "fab fa-twitter-square",
},
],
"icon_links_label": "Quick Links",
"external_links": [
{"name": "Legacy reference", "url": "https://api.python.langchain.com/"},
],
}
for old_link in redirects:
html_additional_pages[old_link] = "redirects.html"
partners_dir = Path(__file__).parent.parent.parent / "libs/partners"
partners = [
(p.name, p.name.replace("-", "_") + "_api_reference")
for p in partners_dir.iterdir()
]
partners = sorted(partners)
html_context = {
"display_github": True, # Integrate GitHub
@@ -172,8 +235,6 @@ html_context = {
"github_repo": "langchain", # Repo name
"github_version": "master", # Version
"conf_py_path": "/docs/api_reference", # Path in the checkout to the docs root
"redirects": redirects,
"partners": partners,
}
# Add any paths that contain custom static files (such as style sheets) here,
@@ -183,9 +244,7 @@ html_static_path = ["_static"]
# These paths are either relative to html_static_path
# or fully qualified paths (e.g. https://...)
html_css_files = [
"css/custom.css",
]
html_css_files = ["css/custom.css"]
html_use_index = False
myst_enable_extensions = ["colon_fence"]
@@ -202,3 +261,5 @@ html_baseurl = os.environ.get("READTHEDOCS_CANONICAL_URL", "")
# Tell Jinja2 templates the build is running on Read the Docs
if os.environ.get("READTHEDOCS", "") == "True":
html_context["READTHEDOCS"] = True
master_doc = "index"

docs/api_reference/create_api_rst.py

@@ -239,7 +239,7 @@ def _construct_doc(
package_namespace: str,
members_by_namespace: Dict[str, ModuleMembers],
package_version: str,
) -> str:
) -> List[typing.Tuple[str, str]]:
"""Construct the contents of the reference.rst file for the given package.
Args:
@@ -251,15 +251,38 @@ def _construct_doc(
Returns:
The contents of the reference.rst file.
"""
full_doc = f"""\
=======================
``{package_namespace}`` {package_version}
=======================
docs = []
index_doc = f"""\
:html_theme.sidebar_secondary.remove:
.. currentmodule:: {package_namespace}
.. _{package_namespace}:
======================================
{package_namespace.replace('_', '-')}: {package_version}
======================================
.. automodule:: {package_namespace}
:no-members:
:no-inherited-members:
.. toctree::
:hidden:
:maxdepth: 2
"""
index_autosummary = """
"""
namespaces = sorted(members_by_namespace)
for module in namespaces:
index_doc += f" {module}\n"
module_doc = f"""\
.. currentmodule:: {package_namespace}
.. _{module}:
"""
_members = members_by_namespace[module]
classes = [
el
@@ -281,9 +304,9 @@ def _construct_doc(
]
if not (classes or functions):
continue
section = f":mod:`{package_namespace}.{module}`"
section = f":mod:`{module}`"
underline = "=" * (len(section) + 1)
full_doc += f"""\
module_doc += f"""
{section}
{underline}
@@ -291,16 +314,26 @@ def _construct_doc(
:no-members:
:no-inherited-members:
"""
index_autosummary += f"""
:ref:`{module}`
{'^' * (len(module) + 5)}
"""
if classes:
full_doc += f"""\
Classes
--------------
module_doc += f"""\
**Classes**
.. currentmodule:: {package_namespace}
.. autosummary::
:toctree: {module}
"""
index_autosummary += """
**Classes**
.. autosummary::
"""
for class_ in sorted(classes, key=lambda c: c["qualified_name"]):
@@ -317,19 +350,22 @@ Classes
else:
template = "class.rst"
full_doc += f"""\
module_doc += f"""\
:template: {template}
{class_["qualified_name"]}
"""
index_autosummary += f"""
{class_['qualified_name']}
"""
if functions:
_functions = [f["qualified_name"] for f in functions]
fstring = "\n ".join(sorted(_functions))
full_doc += f"""\
Functions
--------------
module_doc += f"""\
**Functions**
.. currentmodule:: {package_namespace}
.. autosummary::
@@ -338,11 +374,18 @@ Functions
{fstring}
"""
index_autosummary += f"""
**Functions**
.. autosummary::
{fstring}
"""
if deprecated_classes:
full_doc += f"""\
Deprecated classes
--------------
module_doc += f"""\
**Deprecated classes**
.. currentmodule:: {package_namespace}
@@ -350,6 +393,12 @@ Deprecated classes
:toctree: {module}
"""
index_autosummary += """
**Deprecated classes**
.. autosummary::
"""
for class_ in sorted(deprecated_classes, key=lambda c: c["qualified_name"]):
if class_["kind"] == "TypedDict":
template = "typeddict.rst"
@@ -364,19 +413,21 @@ Deprecated classes
else:
template = "class.rst"
full_doc += f"""\
module_doc += f"""\
:template: {template}
{class_["qualified_name"]}
"""
index_autosummary += f"""
{class_['qualified_name']}
"""
if deprecated_functions:
_functions = [f["qualified_name"] for f in deprecated_functions]
fstring = "\n ".join(sorted(_functions))
full_doc += f"""\
Deprecated functions
--------------
module_doc += f"""\
**Deprecated functions**
.. currentmodule:: {package_namespace}
@@ -387,7 +438,18 @@ Deprecated functions
{fstring}
"""
return full_doc
index_autosummary += """
**Deprecated functions**
.. autosummary::
{fstring}
"""
docs.append((f"{module}.rst", module_doc))
docs.append(("index.rst", index_doc + index_autosummary))
return docs
def _build_rst_file(package_name: str = "langchain") -> None:
@@ -399,13 +461,14 @@ def _build_rst_file(package_name: str = "langchain") -> None:
package_dir = _package_dir(package_name)
package_members = _load_package_modules(package_dir)
package_version = _get_package_version(package_dir)
with open(_out_file_path(package_name), "w") as f:
f.write(
_doc_first_line(package_name)
+ _construct_doc(
_package_namespace(package_name), package_members, package_version
)
)
output_dir = _out_file_path(package_name)
os.mkdir(output_dir)
rsts = _construct_doc(
_package_namespace(package_name), package_members, package_version
)
for name, rst in rsts:
with open(output_dir / name, "w") as f:
f.write(rst)
def _package_namespace(package_name: str) -> str:
@@ -455,12 +518,117 @@ def _get_package_version(package_dir: Path) -> str:
def _out_file_path(package_name: str) -> Path:
"""Return the path to the file containing the documentation."""
return HERE / f"{package_name.replace('-', '_')}_api_reference.rst"
return HERE / f"{package_name.replace('-', '_')}"
def _doc_first_line(package_name: str) -> str:
"""Return the path to the file containing the documentation."""
return f".. {package_name.replace('-', '_')}_api_reference:\n\n"
def _build_index(dirs: List[str]) -> None:
custom_names = {
"airbyte": "Airbyte",
"aws": "AWS",
"ai21": "AI21",
}
ordered = ["core", "langchain", "text-splitters", "community", "experimental"]
main_ = [dir_ for dir_ in ordered if dir_ in dirs]
integrations = sorted(dir_ for dir_ in dirs if dir_ not in main_)
main_headers = [
" ".join(custom_names.get(x, x.title()) for x in dir_.split("-"))
for dir_ in main_
]
integration_headers = [
" ".join(
custom_names.get(x, x.title().replace("ai", "AI").replace("db", "DB"))
for x in dir_.split("-")
)
for dir_ in integrations
]
main_tree = "\n".join(
f"{header_name}<{dir_.replace('-', '_')}/index>"
for header_name, dir_ in zip(main_headers, main_)
)
main_grid = "\n".join(
f'- header: "**{header_name}**"\n content: "{_package_namespace(dir_).replace("_", "-")}: {_get_package_version(_package_dir(dir_))}"\n link: {dir_.replace("-", "_")}/index.html'
for header_name, dir_ in zip(main_headers, main_)
)
integration_tree = "\n".join(
f"{header_name}<{dir_.replace('-', '_')}/index>"
for header_name, dir_ in zip(integration_headers, integrations)
)
integration_grid = ""
integrations_to_show = [
"openai",
"anthropic",
"google-vertexai",
"aws",
"huggingface",
"mistralai",
]
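# Featured packages above sort first, in the order listed; every other
# integration sorts after them and is dropped by the trailing slice.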
for header_name, dir_ in sorted(
zip(integration_headers, integrations),
key=lambda h_d: integrations_to_show.index(h_d[1])
if h_d[1] in integrations_to_show
else len(integrations_to_show),
)[: len(integrations_to_show)]:
integration_grid += f'\n- header: "**{header_name}**"\n content: {_package_namespace(dir_).replace("_", "-")} {_get_package_version(_package_dir(dir_))}\n link: {dir_.replace("-", "_")}/index.html'
doc = f"""# LangChain Python API Reference
Welcome to the LangChain Python API reference. This is a reference for all
`langchain-x` packages.
For user guides see [https://python.langchain.com](https://python.langchain.com).
For the legacy API reference hosted on ReadTheDocs see [https://api.python.langchain.com/](https://api.python.langchain.com/).
## Base packages
```{{gallery-grid}}
:grid-columns: "1 2 2 3"
{main_grid}
```
```{{toctree}}
:maxdepth: 2
:hidden:
:caption: Base packages
{main_tree}
```
## Integrations
```{{gallery-grid}}
:grid-columns: "1 2 2 3"
{integration_grid}
```
See the full list of integrations in the Section Navigation.
```{{toctree}}
:maxdepth: 2
:hidden:
:caption: Integrations
{integration_tree}
```
"""
with open(HERE / "reference.md", "w") as f:
f.write(doc)
dummy_index = """\
# API reference
```{toctree}
:maxdepth: 3
:hidden:
Reference<reference>
```
"""
with open(HERE / "index.md", "w") as f:
f.write(dummy_index)
def main(dirs: Optional[list] = None) -> None:
@@ -488,6 +656,8 @@ def main(dirs: Optional[list] = None) -> None:
else:
print("Building package:", dir_)
_build_rst_file(package_name=dir_)
_build_index(dirs)
print("API reference files built.")

View File

@@ -1,8 +0,0 @@
=============
LangChain API
=============
.. toctree::
:maxdepth: 2
api_reference.rst

View File

@@ -1,17 +1,11 @@
-e libs/experimental
-e libs/langchain
-e libs/core
-e libs/community
pydantic<2
autodoc_pydantic==1.8.0
myst_parser
nbsphinx==0.8.9
sphinx>=5
sphinx-autobuild==2021.3.14
sphinx_rtd_theme==1.0.0
sphinx-typlog-theme==0.8.0
sphinx-panels
toml
myst_nb
sphinx_copybutton
pydata-sphinx-theme==0.13.1
autodoc_pydantic>=1,<2
sphinx<=7
myst-parser>=3
sphinx-autobuild>=2024
pydata-sphinx-theme>=0.15
toml>=0.10.2
myst-nb>=1.1.1
pyyaml
sphinx-design
sphinx-copybutton
beautifulsoup4

View File

@@ -0,0 +1,41 @@
import sys
from glob import glob
from pathlib import Path
from bs4 import BeautifulSoup
CUR_DIR = Path(__file__).parents[1]
def process_toc_h3_elements(html_content: str) -> str:
"""Update Class.method() TOC headers to just method()."""
# Create a BeautifulSoup object
soup = BeautifulSoup(html_content, "html.parser")
# Find all <li> elements with class "toc-h3"
toc_h3_elements = soup.find_all("li", class_="toc-h3")
# Process each element
for element in toc_h3_elements:
element = element.a.code.span
# Get the text content of the element
content = element.get_text()
# Keep only the method name after the last dot (Class.method -> method)
modified_content = content.split(".")[-1]
# Update the element's content
element.string = modified_content
# Return the modified HTML
return str(soup)
if __name__ == "__main__":
target_dir = sys.argv[1]  # avoid shadowing the built-in `dir`
for fn in glob(f"{target_dir.rstrip('/')}/**/*.html", recursive=True):
with open(fn, "r") as f:
html = f.read()
processed_html = process_toc_h3_elements(html)
with open(fn, "w") as f:
f.write(processed_html)
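For illustration, here is a minimal, self-contained sketch of the transformation this script applies to a single TOC entry. The sample HTML is hypothetical but mirrors the `li.toc-h3 > a > code > span` structure the script expects; only `beautifulsoup4` is assumed.

```python
from bs4 import BeautifulSoup

# Hypothetical TOC entry as emitted by the theme: "Class.method" inside a span.
sample = '<li class="toc-h3"><a href="#m"><code><span>ChatOpenAI.invoke</span></code></a></li>'
soup = BeautifulSoup(sample, "html.parser")
span = soup.find("li", class_="toc-h3").a.code.span
# Same trimming as the script above: keep only the text after the last dot.
span.string = span.get_text().split(".")[-1]
print(soup)  # the span now reads just "invoke"
```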

View File

@@ -1,4 +1,4 @@
:mod:`{{module}}`.{{objname}}
{{ objname }}
{{ underline }}==============
.. currentmodule:: {{ module }}
@@ -11,7 +11,7 @@
.. autosummary::
{% for item in attributes %}
~{{ name }}.{{ item }}
~{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
@@ -22,11 +22,11 @@
.. autosummary::
{% for item in methods %}
~{{ name }}.{{ item }}
~{{ item }}
{%- endfor %}
{% for item in methods %}
.. automethod:: {{ name }}.{{ item }}
.. automethod:: {{ item }}
{%- endfor %}
{% endif %}

View File

@@ -1,4 +1,4 @@
:mod:`{{module}}`.{{objname}}
{{ objname }}
{{ underline }}==============
.. currentmodule:: {{ module }}

View File

@@ -1,4 +1,4 @@
:mod:`{{module}}`.{{objname}}
{{ objname }}
{{ underline }}==============
.. currentmodule:: {{ module }}

View File

@@ -0,0 +1,12 @@
<!-- This will display a link to LangChain docs -->
<head>
<style>
.text-link {
text-decoration: none; /* Remove underline */
color: inherit; /* Inherit color from parent element */
}
</style>
</head>
<body>
<a href="https://python.langchain.com/" class='text-link'>Docs</a>
</body>

View File

@@ -1,4 +1,4 @@
:mod:`{{module}}`.{{objname}}
{{ objname }}
{{ underline }}==============
.. currentmodule:: {{ module }}

View File

@@ -1,21 +1,21 @@
:mod:`{{module}}`.{{objname}}
{{ objname }}
{{ underline }}==============
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃
The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
.. currentmodule:: {{ module }}
.. autoclass:: {{ objname }}
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃
The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
{% block attributes %}
{% if attributes %}
.. rubric:: {{ _('Attributes') }}
.. autosummary::
{% for item in attributes %}
~{{ name }}.{{ item }}
~{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
@@ -26,11 +26,11 @@
.. autosummary::
{% for item in methods %}
~{{ name }}.{{ item }}
~{{ item }}
{%- endfor %}
{% for item in methods %}
.. automethod:: {{ name }}.{{ item }}
.. automethod:: {{ item }}
{%- endfor %}
{% endif %}

View File

@@ -1,10 +1,6 @@
:mod:`{{module}}`.{{objname}}
{{ objname }}
{{ underline }}==============
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃
The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
.. currentmodule:: {{ module }}
.. autopydantic_model:: {{ objname }}
@@ -19,6 +15,10 @@
:member-order: groupwise
:show-inheritance: True
:special-members: __call__
:exclude-members: construct, copy, dict, from_orm, parse_file, parse_obj, parse_raw, schema, schema_json, update_forward_refs, validate, json, is_lc_serializable, to_json_not_implemented, lc_secrets, lc_attributes, lc_id, get_lc_namespace, astream_log, transform, atransform, get_output_schema, get_prompts, config_schema, map, pick, pipe, with_listeners, with_alisteners, with_config, with_fallbacks, with_types, with_retry, InputType, OutputType, config_specs, output_schema, get_input_schema, get_graph, get_name, input_schema, name, bind, assign
:exclude-members: construct, copy, dict, from_orm, parse_file, parse_obj, parse_raw, schema, schema_json, update_forward_refs, validate, json, is_lc_serializable, to_json_not_implemented, lc_secrets, lc_attributes, lc_id, get_lc_namespace, astream_log, transform, atransform, get_output_schema, get_prompts, config_schema, map, pick, pipe, with_listeners, with_alisteners, with_config, with_fallbacks, with_types, with_retry, InputType, OutputType, config_specs, output_schema, get_input_schema, get_graph, get_name, input_schema, name, bind, assign, as_tool
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃
The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
.. example_links:: {{ objname }}

View File

@@ -1,4 +1,4 @@
:mod:`{{module}}`.{{objname}}
{{ objname }}
{{ underline }}==============
.. currentmodule:: {{ module }}

View File

@@ -1,27 +0,0 @@
Copyright (c) 2007-2023 The scikit-learn developers.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -1,67 +0,0 @@
<script>
$(document).ready(function() {
/* Add a [>>>] button on the top-right corner of code samples to hide
* the >>> and ... prompts and the output and thus make the code
* copyable. */
var div = $('.highlight-python .highlight,' +
'.highlight-python3 .highlight,' +
'.highlight-pycon .highlight,' +
'.highlight-default .highlight')
var pre = div.find('pre');
// get the styles from the current theme
pre.parent().parent().css('position', 'relative');
var hide_text = 'Hide prompts and outputs';
var show_text = 'Show prompts and outputs';
// create and add the button to all the code blocks that contain >>>
div.each(function(index) {
var jthis = $(this);
if (jthis.find('.gp').length > 0) {
var button = $('<span class="copybutton">&gt;&gt;&gt;</span>');
button.attr('title', hide_text);
button.data('hidden', 'false');
jthis.prepend(button);
}
// tracebacks (.gt) contain bare text elements that need to be
// wrapped in a span to work with .nextUntil() (see later)
jthis.find('pre:has(.gt)').contents().filter(function() {
return ((this.nodeType == 3) && (this.data.trim().length > 0));
}).wrap('<span>');
});
// define the behavior of the button when it's clicked
$('.copybutton').click(function(e){
e.preventDefault();
var button = $(this);
if (button.data('hidden') === 'false') {
// hide the code output
button.parent().find('.go, .gp, .gt').hide();
button.next('pre').find('.gt').nextUntil('.gp, .go').css('visibility', 'hidden');
button.css('text-decoration', 'line-through');
button.attr('title', show_text);
button.data('hidden', 'true');
} else {
// show the code output
button.parent().find('.go, .gp, .gt').show();
button.next('pre').find('.gt').nextUntil('.gp, .go').css('visibility', 'visible');
button.css('text-decoration', 'none');
button.attr('title', hide_text);
button.data('hidden', 'false');
}
});
/*** Add permalink buttons next to glossary terms ***/
$('dl.glossary > dt[id]').append(function() {
return ('<a class="headerlink" href="#' +
this.getAttribute('id') +
'" title="Permalink to this term">¶</a>');
});
});
</script>
{%- if pagename != 'index' and pagename != 'documentation' %}
{% if theme_mathjax_path %}
<script id="MathJax-script" async src="{{ theme_mathjax_path }}"></script>
{% endif %}
{%- endif %}

View File

@@ -1,132 +0,0 @@
{# TEMPLATE VAR SETTINGS #}
{%- set url_root = pathto('', 1) %}
{%- if url_root == '#' %}{% set url_root = '' %}{% endif %}
{%- if not embedded and docstitle %}
{%- set titlesuffix = " &mdash; "|safe + docstitle|e %}
{%- else %}
{%- set titlesuffix = "" %}
{%- endif %}
{%- set lang_attr = 'en' %}
<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="{{ lang_attr }}" > <![endif]-->
<!--[if gt IE 8]><!-->
<html class="no-js" lang="{{ lang_attr }}"> <!--<![endif]-->
<head>
<meta charset="utf-8">
{{ metatags }}
<meta name="viewport" content="width=device-width, initial-scale=1.0">
{% block htmltitle %}
<title>{{ title|striptags|e }}{{ titlesuffix }}</title>
{% endblock %}
<link rel="canonical"
href="https://api.python.langchain.com/en/latest/{{ pagename }}.html"/>
{% if favicon_url %}
<link rel="shortcut icon" href="{{ favicon_url|e }}"/>
{% endif %}
<link rel="stylesheet"
href="{{ pathto('_static/css/vendor/bootstrap.min.css', 1) }}"
type="text/css"/>
{%- for css in css_files %}
{%- if css|attr("rel") %}
<link rel="{{ css.rel }}" href="{{ pathto(css.filename, 1) }}"
type="text/css"{% if css.title is not none %}
title="{{ css.title }}"{% endif %} />
{%- else %}
<link rel="stylesheet" href="{{ pathto(css, 1) }}" type="text/css"/>
{%- endif %}
{%- endfor %}
<link rel="stylesheet" href="{{ pathto('_static/' + style, 1) }}" type="text/css"/>
<script id="documentation_options" data-url_root="{{ pathto('', 1) }}"
src="{{ pathto('_static/documentation_options.js', 1) }}"></script>
<script src="{{ pathto('_static/jquery.js', 1) }}"></script>
{%- block extrahead %} {% endblock %}
</head>
<body>
{% include "nav.html" %}
{%- block content %}
<div class="d-flex" id="sk-doc-wrapper">
<input type="checkbox" name="sk-toggle-checkbox" id="sk-toggle-checkbox">
<label id="sk-sidemenu-toggle" class="sk-btn-toggle-toc btn sk-btn-primary"
for="sk-toggle-checkbox">Toggle Menu</label>
<div id="sk-sidebar-wrapper" class="border-right">
<div class="sk-sidebar-toc-wrapper">
{%- if meta and meta['parenttoc']|tobool %}
<div class="sk-sidebar-toc">
{% set nav = get_nav_object(maxdepth=3, collapse=True, numbered=True) %}
<ul>
{% for main_nav_item in nav %}
{% if main_nav_item.active %}
<li>
<a href="{{ main_nav_item.url }}"
class="sk-toc-active">{{ main_nav_item.title }}</a>
</li>
<ul>
{% for nav_item in main_nav_item.children %}
<li>
<a href="{{ nav_item.url }}"
class="{% if nav_item.active %}sk-toc-active{% endif %}">{{ nav_item.title }}</a>
{% if nav_item.children %}
<ul>
{% for inner_child in nav_item.children %}
<li class="sk-toctree-l3">
<a href="{{ inner_child.url }}">{{ inner_child.title }}</a>
</li>
{% endfor %}
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
{% endif %}
{% endfor %}
</ul>
</div>
{%- elif meta and meta['globalsidebartoc']|tobool %}
<div class="sk-sidebar-toc sk-sidebar-global-toc">
{{ toctree(maxdepth=2, titles_only=True) }}
</div>
{%- else %}
<div class="sk-sidebar-toc">
{{ toc }}
</div>
{%- endif %}
</div>
</div>
<div id="sk-page-content-wrapper">
<div class="sk-page-content container-fluid body px-md-3" role="main">
{% block body %}{% endblock %}
</div>
<div class="container">
<footer class="sk-content-footer">
{%- if pagename != 'index' %}
{%- if show_copyright %}
{%- if hasdoc('copyright') %}
{% trans path=pathto('copyright'), copyright=copyright|e %}
&copy; {{ copyright }}.{% endtrans %}
{%- else %}
{% trans copyright=copyright|e %}&copy; {{ copyright }}
.{% endtrans %}
{%- endif %}
{%- endif %}
{%- if last_updated %}
{% trans last_updated=last_updated|e %}Last updated
on {{ last_updated }}.{% endtrans %}
{%- endif %}
{%- if show_source and has_source and sourcename %}
<a href="{{ pathto('_sources/' + sourcename, true)|e }}"
rel="nofollow">{{ _('Show this page source') }}</a>
{%- endif %}
{%- endif %}
</footer>
</div>
</div>
</div>
{%- endblock %}
<script src="{{ pathto('_static/js/vendor/bootstrap.min.js', 1) }}"></script>
{% include "javascript.html" %}
</body>
</html>

View File

@@ -1,78 +0,0 @@
{%- if pagename != 'index' and pagename != 'documentation' %}
{%- set nav_bar_class = "sk-docs-navbar" %}
{%- set top_container_cls = "sk-docs-container" %}
{%- else %}
{%- set nav_bar_class = "sk-landing-navbar" %}
{%- set top_container_cls = "sk-landing-container" %}
{%- endif %}
<nav id="navbar" class="{{ nav_bar_class }} navbar navbar-expand-md navbar-light bg-light py-0">
<div class="container-fluid {{ top_container_cls }} px-0">
{%- if logo_url %}
<a class="navbar-brand py-0" href="{{ pathto('index') }}">
<img
class="sk-brand-img"
src="{{ logo_url|e }}"
alt="logo"/>
</a>
{%- endif %}
<button
id="sk-navbar-toggler"
class="navbar-toggler"
type="button"
data-toggle="collapse"
data-target="#navbarSupportedContent"
aria-controls="navbarSupportedContent"
aria-expanded="false"
aria-label="Toggle navigation"
>
<span class="navbar-toggler-icon"></span>
</button>
<div class="sk-navbar-collapse collapse navbar-collapse" id="navbarSupportedContent">
<ul class="navbar-nav mr-auto">
<li class="nav-item">
<a class="sk-nav-link nav-link" href="{{ pathto('langchain_api_reference') }}">LangChain</a>
</li>
<li class="nav-item">
<a class="sk-nav-link nav-link" href="{{ pathto('core_api_reference') }}">Core</a>
</li>
<li class="nav-item">
<a class="sk-nav-link nav-link" href="{{ pathto('community_api_reference') }}">Community</a>
</li>
<li class="nav-item">
<a class="sk-nav-link nav-link" href="{{ pathto('experimental_api_reference') }}">Experimental</a>
</li>
<li class="nav-item">
<a class="sk-nav-link nav-link" href="{{ pathto('text_splitters_api_reference') }}">Text splitters</a>
</li>
{%- for title, pathname in partners %}
<li class="nav-item">
<a class="sk-nav-link nav-link nav-more-item-mobile-items" href="{{ pathto(pathname) }}">{{ title }}</a>
</li>
{%- endfor %}
<li class="nav-item dropdown nav-more-item-dropdown">
<a class="sk-nav-link nav-link dropdown-toggle" href="#" id="navbarDropdown" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false">Partner libs</a>
<div class="dropdown-menu" aria-labelledby="navbarDropdown">
{%- for title, pathname in partners %}
<a class="sk-nav-dropdown-item dropdown-item" href="{{ pathto(pathname) }}">{{ title }}</a>
{%- endfor %}
</div>
</li>
<li class="nav-item">
<a class="sk-nav-link nav-link" target="_blank" rel="noopener noreferrer" href="https://python.langchain.com/">Docs</a>
</li>
</ul>
{%- if pagename != "search"%}
<div id="searchbox" role="search">
<div class="searchformwrapper">
<form class="search" action="{{ pathto('search') }}" method="get">
<input class="sk-search-text-input" type="text" name="q" aria-labelledby="searchlabel" />
<input class="sk-search-text-btn" type="submit" value="{{ _('Go') }}" />
</form>
</div>
</div>
{%- endif %}
</div>
</div>
</nav>

View File

@@ -1,16 +0,0 @@
{%- extends "basic/search.html" %}
{% block extrahead %}
<script type="text/javascript" src="{{ pathto('_static/underscore.js', 1) }}"></script>
<script type="text/javascript" src="{{ pathto('searchindex.js', 1) }}" defer></script>
<script type="text/javascript" src="{{ pathto('_static/doctools.js', 1) }}"></script>
<script type="text/javascript" src="{{ pathto('_static/language_data.js', 1) }}"></script>
<script type="text/javascript" src="{{ pathto('_static/searchtools.js', 1) }}"></script>
<script type="text/javascript" src="{{ pathto('_static/sphinx_highlight.js', 1) }}"></script>
<script type="text/javascript">
$(document).ready(function() {
if (!Search.out) {
Search.init();
}
});
</script>
{% endblock %}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1,8 +0,0 @@
[theme]
inherit = basic
pygments_style = default
stylesheet = css/theme.css
[options]
link_to_live_contributing_page = false
mathjax_path =

View File

@@ -63,7 +63,7 @@
"outputs": [],
"source": [
"# <!-- ruff: noqa: F821 -->\n",
"from langchain.globals import set_llm_cache"
"from langchain_core.globals import set_llm_cache"
]
},
{
@@ -103,7 +103,7 @@
],
"source": [
"%%time\n",
"from langchain.cache import InMemoryCache\n",
"from langchain_core.caches import InMemoryCache\n",
"\n",
"set_llm_cache(InMemoryCache())\n",
"\n",

View File

@@ -63,7 +63,7 @@
"* The `load` methods is a convenience method meant solely for prototyping work -- it just invokes `list(self.lazy_load())`.\n",
"* The `alazy_load` has a default implementation that will delegate to `lazy_load`. If you're using async, we recommend overriding the default implementation and providing a native async implementation.\n",
"\n",
"::: {.callout-important}\n",
":::{.callout-important}\n",
"When implementing a document loader do **NOT** provide parameters via the `lazy_load` or `alazy_load` methods.\n",
"\n",
"All configuration is expected to be passed through the initializer (__init__). This was a design choice made by LangChain to make sure that once a document loader has been instantiated it has all the information needed to load documents.\n",
@@ -235,7 +235,7 @@
"id": "56cb443e-f987-4386-b4ec-975ee129adb2",
"metadata": {},
"source": [
"::: {.callout-tip}\n",
":::{.callout-tip}\n",
"\n",
"`load()` can be helpful in an interactive environment such as a jupyter notebook.\n",
"\n",
@@ -276,7 +276,7 @@
"source": [
"## Working with Files\n",
"\n",
"Many document loaders invovle parsing files. The difference between such loaders usually stems from how the file is parsed rather than how the file is loaded. For example, you can use `open` to read the binary content of either a PDF or a markdown file, but you need different parsing logic to convert that binary data into text.\n",
"Many document loaders involve parsing files. The difference between such loaders usually stems from how the file is parsed, rather than how the file is loaded. For example, you can use `open` to read the binary content of either a PDF or a markdown file, but you need different parsing logic to convert that binary data into text.\n",
"\n",
"As a result, it can be helpful to decouple the parsing logic from the loading logic, which makes it easier to re-use a given parser regardless of how the data was loaded.\n",
"\n",
@@ -355,7 +355,7 @@
"id": "433bfb7c-7767-43bc-b71e-42413d7494a8",
"metadata": {},
"source": [
"Using the **blob** API also allows one to load content direclty from memory without having to read it from a file!"
"Using the **blob** API also allows one to load content directly from memory without having to read it from a file!"
]
},
{

View File

@@ -364,7 +364,10 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.graph_qa.cypher_utils import CypherQueryCorrector, Schema\n",
"from langchain_community.chains.graph_qa.cypher_utils import (\n",
" CypherQueryCorrector,\n",
" Schema,\n",
")\n",
"\n",
"# Cypher validation tool for relationship directions\n",
"corrector_schema = [\n",

View File

@@ -36,7 +36,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.globals import set_llm_cache\n",
"from langchain_core.globals import set_llm_cache\n",
"from langchain_openai import OpenAI\n",
"\n",
"# To make the caching really obvious, lets use a slower and older model.\n",
@@ -71,7 +71,7 @@
],
"source": [
"%%time\n",
"from langchain.cache import InMemoryCache\n",
"from langchain_core.caches import InMemoryCache\n",
"\n",
"set_llm_cache(InMemoryCache())\n",
"\n",

View File

@@ -38,8 +38,8 @@
" Operator,\n",
" StructuredQuery,\n",
")\n",
"from langchain.retrievers.self_query.chroma import ChromaTranslator\n",
"from langchain.retrievers.self_query.elasticsearch import ElasticsearchTranslator\n",
"from langchain_community.query_constructors.chroma import ChromaTranslator\n",
"from langchain_community.query_constructors.elasticsearch import ElasticsearchTranslator\n",
"from langchain_core.pydantic_v1 import BaseModel"
]
},

View File

@@ -512,7 +512,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers.self_query.chroma import ChromaTranslator\n",
"from langchain_community.query_constructors.chroma import ChromaTranslator\n",
"\n",
"retriever = SelfQueryRetriever(\n",
" query_constructor=query_constructor,\n",

View File

@@ -299,16 +299,16 @@
},
{
"cell_type": "markdown",
"id": "423c6e099e94ca69",
"metadata": {
"collapsed": false
},
"source": [
"### Gradient\n",
"\n",
"In this method, the gradient of distance is used to split chunks along with the percentile method.\n",
"This method is useful when chunks are highly correlated with each other or specific to a domain e.g. legal or medical. The idea is to apply anomaly detection on gradient array so that the distribution become wider and easy to identify boundaries in highly semantic data."
],
"metadata": {
"collapsed": false
},
"id": "423c6e099e94ca69"
]
},
{
"cell_type": "code",
@@ -325,6 +325,8 @@
{
"cell_type": "code",
"execution_count": 6,
"id": "e9f393d316ce1f6c",
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -337,13 +339,13 @@
"source": [
"docs = text_splitter.create_documents([state_of_the_union])\n",
"print(docs[0].page_content)"
],
"metadata": {},
"id": "e9f393d316ce1f6c"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "a407cd57f02a0db4",
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -355,9 +357,7 @@
],
"source": [
"print(len(docs))"
],
"metadata": {},
"id": "a407cd57f02a0db4"
]
}
],
"metadata": {

View File

@@ -0,0 +1,32 @@
---
sidebar_position: 0
sidebar_class_name: hidden
keywords: [compatibility]
---
# Chat models
[Chat models](/docs/concepts/#chat-models) are language models that use a sequence of [messages](/docs/concepts/#messages) as inputs and return messages as outputs (as opposed to using plain text). These are generally newer models.
:::info
If you'd like to write your own chat model, see [this how-to](/docs/how_to/custom_chat_model/).
If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing/integrations/).
:::
## Featured Providers
:::info
While all these LangChain classes support the indicated advanced feature, you may have
to open the provider-specific documentation to learn which hosted models or backends support
the feature.
:::
import { CategoryTable, IndexTable } from "@theme/FeatureTables";
<CategoryTable category="chat" />
## All chat models
<IndexTable />
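For orientation, a minimal sketch of the message-in/message-out contract described above, assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set (the model name is an example; any provider class from the tables works the same way):

```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

chat = ChatOpenAI(model="gpt-4o-mini")  # example model name
# Input is a sequence of messages; the output is an AIMessage, not a bare string.
result = chat.invoke([HumanMessage(content="Say hello in one word.")])
print(result.content)
```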

View File

@@ -4,9 +4,23 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# ChatLlamaCpp\n",
"# Llama.cpp\n",
"\n",
"This notebook provides a quick overview for getting started with chat model intergrated with [llama cpp python](https://github.com/abetlen/llama-cpp-python)."
">[llama.cpp python](https://github.com/abetlen/llama-cpp-python) library is a simple Python bindings for `@ggerganov`\n",
">[llama.cpp](https://github.com/ggerganov/llama.cpp).\n",
">\n",
">This package provides:\n",
">\n",
"> - Low-level access to C API via ctypes interface.\n",
"> - High-level Python API for text completion\n",
"> - `OpenAI`-like API\n",
"> - `LangChain` compatibility\n",
"> - `LlamaIndex` compatibility\n",
"> - OpenAI compatible web server\n",
"> - Local Copilot replacement\n",
"> - Function Calling support\n",
"> - Vision API support\n",
"> - Multiple Models\n"
]
},
{
@@ -212,8 +226,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.tools import tool\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"class WeatherInput(BaseModel):\n",
@@ -410,7 +424,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.8"
"version": "3.10.12"
}
},
"nbformat": 4,

View File

@@ -99,7 +99,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.10.12"
},
"vscode": {
"interpreter": {

View File

@@ -17,7 +17,7 @@
"source": [
"# ChatPerplexity\n",
"\n",
"This notebook covers how to get started with Perplexity chat models."
"This notebook covers how to get started with `Perplexity` chat models."
]
},
{
@@ -37,17 +37,31 @@
"from langchain_core.prompts import ChatPromptTemplate"
]
},
{
"cell_type": "markdown",
"id": "b26e2035-2f81-4451-ba44-fa2e2d5aeb62",
"metadata": {},
"source": [
"The code provided assumes that your PPLX_API_KEY is set in your environment variables. If you would like to manually specify your API key and also choose a different model, you can use the following code:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d986aac6-1bae-4608-8514-d3ba5b35b10e",
"metadata": {},
"outputs": [],
"source": [
"chat = ChatPerplexity(\n",
" temperature=0, pplx_api_key=\"YOUR_API_KEY\", model=\"llama-3-sonar-small-32k-online\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "97a8ce3a",
"metadata": {},
"source": [
"The code provided assumes that your PPLX_API_KEY is set in your environment variables. If you would like to manually specify your API key and also choose a different model, you can use the following code:\n",
"\n",
"```python\n",
"chat = ChatPerplexity(temperature=0, pplx_api_key=\"YOUR_API_KEY\", model=\"llama-3-sonar-small-32k-online\")\n",
"```\n",
"\n",
"You can check a list of available models [here](https://docs.perplexity.ai/docs/model-cards). For reproducibility, we can set the API key dynamically by taking it as an input in this notebook."
]
},
@@ -221,7 +235,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
"version": "3.10.12"
}
},
"nbformat": 4,

File diff suppressed because one or more lines are too long

View File

@@ -164,7 +164,7 @@
},
"outputs": [],
"source": [
"from langchain.document_loaders import GithubFileLoader"
"from langchain_community.document_loaders import GithubFileLoader"
]
},
{

View File

@@ -0,0 +1,45 @@
---
sidebar_position: 0
sidebar_class_name: hidden
---
# Document loaders
import { CategoryTable, IndexTable } from "@theme/FeatureTables";
DocumentLoaders load data into the standard LangChain Document format.
Each DocumentLoader has its own specific parameters, but they can all be invoked in the same way with the `.load` method.
An example use case is as follows:
```python
from langchain_community.document_loaders.csv_loader import CSVLoader
loader = CSVLoader(
... # <-- Integration specific parameters here
)
data = loader.load()
```
## Common File Types
The below document loaders allow you to load data from common data formats.
<CategoryTable category="common_loaders" />
## PDFs
The below document loaders allow you to load PDF documents.
<CategoryTable category="pdf_loaders" />
## Webpages
The below document loaders allow you to load webpages.
<CategoryTable category="webpage_loaders" />
## All document loaders
<IndexTable />
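Where memory use matters, the same loaders can also be consumed lazily rather than via `.load`. A minimal sketch, assuming a local `data.csv` exists (the path is a placeholder):

```python
from langchain_community.document_loaders.csv_loader import CSVLoader

loader = CSVLoader(file_path="data.csv")  # placeholder path
# lazy_load yields Documents one at a time instead of materializing a list.
for doc in loader.lazy_load():
    print(doc.metadata)
```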

View File

@@ -6,7 +6,7 @@
"source": [
"# PyPDFLoader\n",
"\n",
"This notebook provides a quick overview for getting started with `PyPDF` [document loader](https://python.langchain.com/v0.2/docs/concepts/#document-loaders). For detailed documentation of all __ModuleName__Loader features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.PyPDFLoader.html).\n",
"This notebook provides a quick overview for getting started with `PyPDF` [document loader](https://python.langchain.com/v0.2/docs/concepts/#document-loaders). For detailed documentation of all DocumentLoader features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.PyPDFLoader.html).\n",
"\n",
"\n",
"## Overview\n",
@@ -43,7 +43,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_community"
"%pip install -qU langchain_community pypdf"
]
},
{
@@ -122,7 +122,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"metadata": {},
"outputs": [
{
@@ -131,21 +131,41 @@
"6"
]
},
"execution_count": 5,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"page = []\n",
"pages = []\n",
"for doc in loader.lazy_load():\n",
" page.append(doc)\n",
" if len(page) >= 10:\n",
" pages.append(doc)\n",
" if len(pages) >= 10:\n",
" # do some paged operation, e.g.\n",
" # index.upsert(page)\n",
"\n",
" page = []\n",
"len(page)"
" pages = []\n",
"len(pages)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"LayoutParser : A Unified Toolkit for DL-Based DIA 11\n",
"focuses on precision, efficiency, and robustness. \n",
"{'source': './example_data/layout-parser-paper.pdf', 'page': 10}\n"
]
}
],
"source": [
"print(pages[0].page_content[:100])\n",
"print(pages[0].metadata)"
]
},
{
@@ -160,7 +180,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -174,9 +194,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -229,14 +229,14 @@
}
],
"source": [
"page = []\n",
"pages = []\n",
"for doc in loader.lazy_load():\n",
" page.append(doc)\n",
" if len(page) >= 10:\n",
" pages.append(doc)\n",
" if len(pages) >= 10:\n",
" # do some paged operation, e.g.\n",
" # index.upsert(page)\n",
"\n",
" page = []"
" pages = []"
]
},
{

File diff suppressed because one or more lines are too long

View File

@@ -138,6 +138,8 @@
"source": [
"## Playwright URL Loader\n",
"\n",
">[Playwright](https://github.com/microsoft/playwright) is an open-source automation tool developed by `Microsoft` that allows you to programmatically control and automate web browsers. It is designed for end-to-end testing, scraping, and automating tasks across various web browsers such as `Chromium`, `Firefox`, and `WebKit`.\n",
"\n",
"This covers how to load HTML documents from a list of URLs using the `PlaywrightURLLoader`.\n",
"\n",
"[Playwright](https://playwright.dev/) enables reliable end-to-end testing for modern web apps.\n",
@@ -224,7 +226,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.10.12"
}
},
"nbformat": 4,

File diff suppressed because one or more lines are too long

View File

@@ -15,45 +15,47 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "427d5745",
"metadata": {},
"source": "from langchain_community.document_loaders import YoutubeLoader",
"outputs": [],
"execution_count": null
"source": [
"from langchain_community.document_loaders import YoutubeLoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "34a25b57",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet youtube-transcript-api"
],
"outputs": [],
"execution_count": null
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bc8b308a",
"metadata": {},
"outputs": [],
"source": [
"loader = YoutubeLoader.from_youtube_url(\n",
" \"https://www.youtube.com/watch?v=QsYGlZkevEg\", add_video_info=False\n",
")"
],
"outputs": [],
"execution_count": null
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d073dd36",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
],
"outputs": [],
"execution_count": null
]
},
{
"attachments": {},
@@ -66,26 +68,26 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "ba28af69",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet pytube"
],
"outputs": [],
"execution_count": null
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9b8ea390",
"metadata": {},
"outputs": [],
"source": [
"loader = YoutubeLoader.from_youtube_url(\n",
" \"https://www.youtube.com/watch?v=QsYGlZkevEg\", add_video_info=True\n",
")\n",
"loader.load()"
],
"outputs": [],
"execution_count": null
]
},
{
"attachments": {},
@@ -102,8 +104,10 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "08510625",
"metadata": {},
"outputs": [],
"source": [
"loader = YoutubeLoader.from_youtube_url(\n",
" \"https://www.youtube.com/watch?v=QsYGlZkevEg\",\n",
@@ -112,13 +116,12 @@
" translation=\"en\",\n",
")\n",
"loader.load()"
],
"outputs": [],
"execution_count": null
]
},
{
"metadata": {},
"cell_type": "markdown",
"id": "69f4e399a9764d73",
"metadata": {},
"source": [
"### Get transcripts as timestamped chunks\n",
"\n",
@@ -127,12 +130,14 @@
"`transcript_format` param: One of the `langchain_community.document_loaders.youtube.TranscriptFormat` values. In this case, `TranscriptFormat.CHUNKS`.\n",
"\n",
"`chunk_size_seconds` param: An integer number of video seconds to be represented by each chunk of transcript data. Default is 120 seconds."
],
"id": "69f4e399a9764d73"
]
},
{
"metadata": {},
"cell_type": "code",
"execution_count": null,
"id": "540bbf19182f38bc",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.youtube import TranscriptFormat\n",
"\n",
@@ -143,10 +148,7 @@
" chunk_size_seconds=30,\n",
")\n",
"print(\"\\n\\n\".join(map(repr, loader.load())))"
],
"id": "540bbf19182f38bc",
"outputs": [],
"execution_count": null
]
},
{
"attachments": {},
@@ -172,8 +174,10 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "c345bc43",
"metadata": {},
"outputs": [],
"source": [
"# Init the GoogleApiClient\n",
"from pathlib import Path\n",
@@ -198,9 +202,7 @@
"\n",
"# returns a list of Documents\n",
"youtube_loader_channel.load()"
],
"outputs": [],
"execution_count": null
]
}
],
"metadata": {

View File

@@ -331,8 +331,8 @@
}
],
"source": [
"from langchain.embeddings import OpenVINOEmbeddings\n",
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.embeddings import OpenVINOEmbeddings\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",

View File

@@ -245,8 +245,8 @@
"outputs": [],
"source": [
"import boto3\n",
"from langchain.chains.graph_qa.neptune_sparql import NeptuneSparqlQAChain\n",
"from langchain_aws import ChatBedrock\n",
"from langchain_community.chains.graph_qa.neptune_sparql import NeptuneSparqlQAChain\n",
"from langchain_community.graphs import NeptuneRdfGraph\n",
"\n",
"host = \"<your host>\"\n",

View File

@@ -65,7 +65,7 @@
"outputs": [],
"source": [
"import nest_asyncio\n",
"from langchain.chains.graph_qa.gremlin import GremlinQAChain\n",
"from langchain_community.chains.graph_qa.gremlin import GremlinQAChain\n",
"from langchain_community.graphs import GremlinGraph\n",
"from langchain_community.graphs.graph_document import GraphDocument, Node, Relationship\n",
"from langchain_core.documents import Document\n",

View File

@@ -49,7 +49,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.indexes import GraphIndexCreator\n",
"from langchain_community.graphs.index_creator import GraphIndexCreator\n",
"from langchain_openai import OpenAI"
]
},
@@ -252,7 +252,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.indexes.graph import NetworkxEntityGraph"
"from langchain_community.graphs import NetworkxEntityGraph"
]
},
{

View File

@@ -23,7 +23,7 @@
"# AnthropicLLM\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of Anthropic legacy Claude 2 models as [text completion models](/docs/concepts/#llms). The latest and most popular Anthropic models are [chat completion models](/docs/concepts/#chat-models).\n",
"You are currently on a page documenting the use of Anthropic legacy Claude 2 models as [text completion models](/docs/concepts/#llms). The latest and most popular Anthropic models are [chat completion models](/docs/concepts/#chat-models), and the text completion models have been deprecated.\n",
"\n",
"You are probably looking for [this page instead](/docs/integrations/chat/anthropic/).\n",
":::\n",
@@ -115,14 +115,6 @@
"\n",
"chain.invoke({\"question\": \"What is LangChain?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a52f765c",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

View File

@@ -226,7 +226,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Intialize the parameters as dict.\n",
"# Initialize the parameters as dict.\n",
"params = dict(temperature=str(0.3), max_tokens=100)"
]
},

View File

@@ -15,7 +15,14 @@
"\n",
">[Cohere](https://cohere.ai/about) is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.\n",
"\n",
"Head to the [API reference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.cohere.Cohere.html) for detailed documentation of all attributes and methods."
"Head to the [API reference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.cohere.Cohere.html) for detailed documentation of all attributes and methods.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/llms/cohere/) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [Cohere](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.cohere.Cohere.html) | [langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n"
]
},
{
@@ -29,34 +36,43 @@
"\n",
"The integration lives in the `langchain-community` package. We also need to install the `cohere` package itself. We can install these with:\n",
"\n",
"```bash\n",
"pip install -U langchain-community langchain-cohere\n",
"```\n",
"### Credentials\n",
"\n",
"We'll also need to get a [Cohere API key](https://cohere.com/) and set the `COHERE_API_KEY` environment variable:"
"We'll need to get a [Cohere API key](https://cohere.com/) and set the `COHERE_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "3f5dc9d7-65e3-4b5b-9086-3327d016cfe0",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" ········\n"
]
}
],
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"COHERE_API_KEY\"] = getpass.getpass()"
"if \"COHERE_API_KEY\" not in os.environ:\n",
" os.environ[\"COHERE_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "ff211537",
"metadata": {},
"source": [
"### Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "318454f9",
"metadata": {},
"outputs": [],
"source": [
"pip install -U langchain-community langchain-cohere"
]
},
{
@@ -83,7 +99,7 @@
"id": "0b4e02bf-5beb-48af-a2a2-52cbcd8ebed6",
"metadata": {},
"source": [
"## Usage\n",
"## Invocation\n",
"\n",
"Cohere supports all [LLM](/docs/how_to#llms) functionality:"
]
@@ -199,6 +215,8 @@
"id": "39198f7d-6fc8-4662-954a-37ad38c4bec4",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
]
},
@@ -237,12 +255,14 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4797d719",
"cell_type": "markdown",
"id": "ac5fcbed",
"metadata": {},
"outputs": [],
"source": []
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `Cohere` llm features and configurations head to the API reference: https://api.python.langchain.com/en/latest/llms/langchain_community.llms.cohere.Cohere.html"
]
}
],
"metadata": {

View File

@@ -15,7 +15,14 @@
"\n",
">[Fireworks](https://app.fireworks.ai/) accelerates product development on generative AI by creating an innovative AI experiment and production platform. \n",
"\n",
"This example goes over how to use LangChain to interact with `Fireworks` models."
"This example goes over how to use LangChain to interact with `Fireworks` models.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.1/docs/integrations/llms/fireworks/) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [Fireworks](https://api.python.langchain.com/en/latest/llms/langchain_fireworks.llms.Fireworks.html#langchain_fireworks.llms.Fireworks) | [langchain_fireworks](https://api.python.langchain.com/en/latest/fireworks_api_reference.html) | ❌ | ❌ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_fireworks?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_fireworks?style=flat-square&label=%20) |"
]
},
{
@@ -24,29 +31,18 @@
"id": "fb345268",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-fireworks"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "60b6dbb2",
"metadata": {},
"outputs": [],
"source": [
"from langchain_fireworks import Fireworks"
]
"source": []
},
{
"cell_type": "markdown",
"id": "ccff689e",
"metadata": {},
"source": [
"# Setup\n",
"## Setup\n",
"\n",
"1. Make sure the `langchain-fireworks` package is installed in your environment.\n",
"2. Sign in to [Fireworks AI](http://fireworks.ai) for the an API Key to access our models, and make sure it is set as the `FIREWORKS_API_KEY` environment variable.\n",
"### Credentials \n",
"\n",
"Sign in to [Fireworks AI](http://fireworks.ai) for the an API Key to access our models, and make sure it is set as the `FIREWORKS_API_KEY` environment variable.\n",
"3. Set up your model using a model id. If the model is not set, the default model is fireworks-llama-v2-7b-chat. See the full, most up-to-date model list on [fireworks.ai](https://fireworks.ai)."
]
},
@@ -60,10 +56,46 @@
"import getpass\n",
"import os\n",
"\n",
"from langchain_fireworks import Fireworks\n",
"\n",
"if \"FIREWORKS_API_KEY\" not in os.environ:\n",
" os.environ[\"FIREWORKS_API_KEY\"] = getpass.getpass(\"Fireworks API Key:\")\n",
" os.environ[\"FIREWORKS_API_KEY\"] = getpass.getpass(\"Fireworks API Key:\")"
]
},
{
"cell_type": "markdown",
"id": "e42ced7e",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"You need to install the `langchain_fireworks` python package for the rest of the notebook to work."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ca824723",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-fireworks"
]
},
{
"cell_type": "markdown",
"id": "acc24d0c",
"metadata": {},
"source": [
"## Instantiation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d285fd7f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_fireworks import Fireworks\n",
"\n",
"# Initialize a Fireworks model\n",
"llm = Fireworks(\n",
@@ -74,10 +106,10 @@
},
{
"cell_type": "markdown",
"id": "acc24d0c",
"id": "a4c29f7b",
"metadata": {},
"source": [
"# Calling the Model Directly\n",
"## Invocation\n",
"\n",
"You can call the model directly with string prompts to get completions."
]
@@ -98,11 +130,18 @@
}
],
"source": [
"# Single prompt\n",
"output = llm.invoke(\"Who's the best quarterback in the NFL?\")\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"id": "b0283343",
"metadata": {},
"source": [
"### Invoking with multiple prompts"
]
},
{
"cell_type": "code",
"execution_count": 5,
@@ -128,6 +167,14 @@
"print(output.generations)"
]
},
{
"cell_type": "markdown",
"id": "f18f5717",
"metadata": {},
"source": [
"### Invoking with additional parameters"
]
},
{
"cell_type": "code",
"execution_count": 7,
@@ -158,7 +205,7 @@
"id": "137662a6",
"metadata": {},
"source": [
"# Simple Chain with Non-Chat Model"
"## Chaining"
]
},
{
@@ -206,6 +253,8 @@
"id": "d0a29826",
"metadata": {},
"source": [
"## Streaming\n",
"\n",
"You can stream the output, if you want."
]
},
@@ -233,12 +282,14 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fcc0eecb",
"cell_type": "markdown",
"id": "692c5e76",
"metadata": {},
"outputs": [],
"source": []
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `Fireworks` LLM features and configurations head to the API reference: https://api.python.langchain.com/en/latest/llms/langchain_fireworks.llms.Fireworks.html#langchain_fireworks.llms.Fireworks"
]
}
],
"metadata": {

View File

@@ -0,0 +1,30 @@
---
sidebar_position: 0
sidebar_class_name: hidden
keywords: [compatibility]
---
# LLMs
:::caution
You are currently on a page documenting the use of [text completion models](/docs/concepts/#llms). Many of the latest and most popular models are [chat completion models](/docs/concepts/#chat-models).
Unless you are specifically using more advanced prompting techniques, you are probably looking for [this page instead](/docs/integrations/chat/).
:::
[LLMs](/docs/concepts/#llms) are language models that take a string as input and return a string as output.
:::info
If you'd like to write your own LLM, see [this how-to](/docs/how_to/custom_llm/).
If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing/integrations/).
:::
import { CategoryTable, IndexTable } from "@theme/FeatureTables";
<CategoryTable category="llms" />
## All LLMs
<IndexTable />
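To make the string-in/string-out contract concrete, a minimal sketch, assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set (the model name is an example):

```python
from langchain_openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct")  # example model name
# Unlike chat models, both the input and the output are plain strings.
completion = llm.invoke("Q: What is an LLM? A:")
print(completion)
```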

View File

@@ -74,7 +74,7 @@
}
],
"source": [
"from langchain.llms import Konko\n",
"from langchain_community.llms import Konko\n",
"\n",
"llm = Konko(model=\"mistralai/mistral-7b-v0.1\", temperature=0.1, max_tokens=128)\n",
"\n",

View File

@@ -19,33 +19,86 @@
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "5d71df86-8a17-4283-83d7-4e46e7c06c44",
"metadata": {
"tags": []
},
"outputs": [],
"cell_type": "markdown",
"id": "74312161",
"metadata": {},
"source": [
"# get a token: https://platform.openai.com/account/api-keys\n",
"## Overview\n",
"\n",
"from getpass import getpass\n",
"### Integration details\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/openai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [langchain-openai](https://api.python.langchain.com/en/latest/openai_api_reference.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-openai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-openai?style=flat-square&label=%20) |\n",
"\n",
"OPENAI_API_KEY = getpass()"
"\n",
"## Setup\n",
"\n",
"To access OpenAI models you'll need to create an OpenAI account, get an API key, and install the `langchain-openai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to https://platform.openai.com to sign up to OpenAI and generate an API key. Once you've done this set the OPENAI_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "5472a7cd-af26-48ca-ae9b-5f6ae73c74d2",
"metadata": {
"tags": []
},
"outputs": [],
"execution_count": null,
"id": "efcdb2b6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Enter your OpenAI API key: ········\n"
]
}
],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY"
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Enter your OpenAI API key: \")"
]
},
{
"cell_type": "markdown",
"id": "f5d528fa",
"metadata": {},
"source": [
"If you want to get automated best in-class tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "52fa46e8",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0fad78d8",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain OpenAI integration lives in the `langchain-openai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2e300149",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai"
]
},
{
@@ -60,7 +113,11 @@
"OPENAI_ORGANIZATION = getpass()\n",
"\n",
"os.environ[\"OPENAI_ORGANIZATION\"] = OPENAI_ORGANIZATION\n",
"```"
"```\n",
"\n",
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
@@ -72,74 +129,29 @@
},
"outputs": [],
"source": [
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "035dea0f",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"template = \"\"\"Question: {question}\n",
"from langchain_openai import OpenAI\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "3f3458d9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"llm = OpenAI()"
]
},
{
"cell_type": "markdown",
"id": "4fc152cd",
"id": "464003c1",
"metadata": {},
"source": [
"If you manually want to specify your OpenAI API key and/or organization ID, you can use the following:\n",
"```python\n",
"llm = OpenAI(openai_api_key=\"YOUR_API_KEY\", openai_organization=\"YOUR_ORGANIZATION_ID\")\n",
"```\n",
"Remove the openai_organization parameter should it not apply to you."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a641dbd9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"llm_chain = prompt | llm"
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "9f844993",
"metadata": {
"tags": []
},
"id": "85b49da0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' Justin Bieber was born on March 1, 1994. The Super Bowl is typically played in late January or early February. So, we need to look at the Super Bowl from 1994. In 1994, the Super Bowl was Super Bowl XXVIII, played on January 30, 1994. The winning team of that Super Bowl was the Dallas Cowboys.'"
"\"\\n\\nI'm an AI language model created by OpenAI, so I don't have feelings or emotions. But thank you for asking! How can I assist you today?\""
]
},
"execution_count": 5,
@@ -148,9 +160,37 @@
}
],
"source": [
"question = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n",
"llm.invoke(\"Hello how are you?\")"
]
},
{
"cell_type": "markdown",
"id": "2b7e0dfc",
"metadata": {},
"source": [
"## Chaining"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a641dbd9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"llm_chain.invoke(question)"
"prompt = PromptTemplate(\"How to say {input} in {output_language}:\\n\")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
@@ -158,6 +198,8 @@
"id": "58a9ddb1",
"metadata": {},
"source": [
"## Using a proxy\n",
"\n",
"If you are behind an explicit proxy, you can specify the http_client to pass through"
]
},
@@ -168,11 +210,24 @@
"metadata": {},
"outputs": [],
"source": [
"pip install httpx\n",
"%pip install httpx\n",
"\n",
"import httpx\n",
"\n",
"openai = OpenAI(model_name=\"gpt-3.5-turbo-instruct\", http_client=httpx.Client(proxies=\"http://proxy.yourcompany.com:8080\"))"
"openai = OpenAI(\n",
" model_name=\"gpt-3.5-turbo-instruct\",\n",
" http_client=httpx.Client(proxies=\"http://proxy.yourcompany.com:8080\"),\n",
")"
]
},
{
"cell_type": "markdown",
"id": "73e207dd",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `OpenAI` llm features and configurations head to the API reference: https://api.python.langchain.com/en/latest/llms/langchain_openai.llms.base.OpenAI.html"
]
}
],
@@ -192,7 +247,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.6"
"version": "3.11.9"
},
"vscode": {
"interpreter": {

View File

@@ -7,7 +7,7 @@
"source": [
"# Runhouse\n",
"\n",
"The [Runhouse](https://github.com/run-house/runhouse) allows remote compute and data across environments and users. See the [Runhouse docs](https://runhouse-docs.readthedocs-hosted.com/en/latest/).\n",
"[Runhouse](https://github.com/run-house/runhouse) allows remote compute and data across environments and users. See the [Runhouse docs](https://www.run.house/docs).\n",
"\n",
"This example goes over how to use LangChain and [Runhouse](https://github.com/run-house/runhouse) to interact with models hosted on your own GPU, or on-demand GPUs on AWS, GCP, AWS, or Lambda.\n",
"\n",

View File

@@ -19,7 +19,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory.motorhead_memory import MotorheadMemory"
"from langchain_community.memory.motorhead_memory import MotorheadMemory"
]
},
{

View File

@@ -48,7 +48,7 @@
"from uuid import uuid4\n",
"\n",
"from langchain.agents import AgentType, initialize_agent\n",
"from langchain.memory import ZepMemory\n",
"from langchain_community.memory.zep_memory import ZepMemory\n",
"from langchain_community.retrievers import ZepRetriever\n",
"from langchain_community.utilities import WikipediaAPIWrapper\n",
"from langchain_core.messages import AIMessage, HumanMessage\n",

View File

@@ -69,11 +69,12 @@
"source": [
"from uuid import uuid4\n",
"\n",
"from langchain.agents import AgentType, Tool, initialize_agent\n",
"from langchain.agents import AgentType, initialize_agent\n",
"from langchain_community.memory.zep_cloud_memory import ZepCloudMemory\n",
"from langchain_community.retrievers import ZepCloudRetriever\n",
"from langchain_community.utilities import WikipediaAPIWrapper\n",
"from langchain_core.messages import AIMessage, HumanMessage\n",
"from langchain_core.tools import Tool\n",
"from langchain_openai import OpenAI\n",
"\n",
"session_id = str(uuid4()) # This is a unique identifier for the session"

View File

@@ -308,7 +308,7 @@ See a [usage example](/docs/integrations/graphs/amazon_neptune_open_cypher).
```python
from langchain_community.graphs import NeptuneGraph
from langchain_community.graphs import NeptuneAnalyticsGraph
from langchain.chains import NeptuneOpenCypherQAChain
from langchain_community.chains.graph_qa.neptune_cypher import NeptuneOpenCypherQAChain
```
### Amazon Neptune with SPARQL
@@ -317,7 +317,7 @@ See a [usage example](/docs/integrations/graphs/amazon_neptune_sparql).
```python
from langchain_community.graphs import NeptuneRdfGraph
from langchain.chains.graph_qa.neptune_sparql import NeptuneSparqlQAChain
from langchain_community.chains.graph_qa.neptune_sparql import NeptuneSparqlQAChain
```

View File

@@ -122,5 +122,5 @@ pip install transformers huggingface_hub
See a [usage example](/docs/integrations/tools/huggingface_tools).
```python
from langchain.agents import load_huggingface_tool
from langchain_community.agent_toolkits.load_tools import load_huggingface_tool
```
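For illustration, a minimal sketch of loading a single Hub tool (the tool repo ID below is just an example; substitute any tool you have access to):
```python
from langchain_community.agent_toolkits.load_tools import load_huggingface_tool

# Load one Hugging Face Hub tool by repo ID (illustrative example).
tool = load_huggingface_tool("lysandre/hf-model-downloads")
print(f"{tool.name}: {tool.description}")
```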

View File

@@ -237,6 +237,26 @@ See a [usage example](/docs/integrations/document_loaders/microsoft_onenote).
from langchain_community.document_loaders.onenote import OneNoteLoader
```
### Playwright URL Loader
>[Playwright](https://github.com/microsoft/playwright) is an open-source automation tool
> developed by `Microsoft` that allows you to programmatically control and automate
> web browsers. It is designed for end-to-end testing, scraping, and automating
> tasks across various web browsers such as `Chromium`, `Firefox`, and `WebKit`.
First, let's install dependencies:
```bash
pip install playwright unstructured
```
See a [usage example](/docs/integrations/document_loaders/url/#playwright-url-loader).
```python
from langchain_community.document_loaders import PlaywrightURLLoader
```
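As a minimal sketch (the URL and selectors below are placeholders):
```python
from langchain_community.document_loaders import PlaywrightURLLoader

# remove_selectors drops boilerplate page elements before text extraction.
urls = ["https://python.langchain.com/"]
loader = PlaywrightURLLoader(urls=urls, remove_selectors=["header", "footer"])
docs = loader.load()
```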
## AI Agent Memory System
[AI agents](https://learn.microsoft.com/en-us/azure/cosmos-db/ai-agents) need robust memory systems that support multi-modality, offer strong operational performance, and enable agent memory sharing as well as separation.
@@ -406,6 +426,24 @@ from langchain_community.agent_toolkits import PowerBIToolkit
from langchain_community.utilities.powerbi import PowerBIDataset
```
### PlayWright Browser Toolkit
>[Playwright](https://github.com/microsoft/playwright) is an open-source automation tool
> developed by `Microsoft` that allows you to programmatically control and automate
> web browsers. It is designed for end-to-end testing, scraping, and automating
> tasks across various web browsers such as `Chromium`, `Firefox`, and `WebKit`.
We need to install several python packages.
```bash
pip install playwright lxml
```
See a [usage example](/docs/integrations/tools/playwright).
```python
from langchain_community.agent_toolkits import PlayWrightBrowserToolkit
```
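A rough sketch of wiring the toolkit to a browser instance (run inside an async context, e.g. a notebook):
```python
from langchain_community.agent_toolkits import PlayWrightBrowserToolkit
from langchain_community.tools.playwright.utils import create_async_playwright_browser

# Create an async browser and expose its tools (navigate,
# extract text, click, etc.) for use by an agent.
async_browser = create_async_playwright_browser()
toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)
tools = toolkit.get_tools()
```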
## Graphs

View File

@@ -18,6 +18,5 @@ There are two document loaders available for GitHub.
See a [usage example](/docs/integrations/document_loaders/github).
```python
from langchain_community.document_loaders import GitHubIssuesLoader
from langchain.document_loaders import GithubFileLoader
from langchain_community.document_loaders import GitHubIssuesLoader, GithubFileLoader
```
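For example, a minimal sketch of loading issues from a repository (the environment variable name is the one the loader conventionally uses; the repo is illustrative):
```python
import os

from langchain_community.document_loaders import GitHubIssuesLoader

# A GitHub personal access token is assumed to be set in the environment.
loader = GitHubIssuesLoader(
    repo="langchain-ai/langchain",
    access_token=os.environ["GITHUB_PERSONAL_ACCESS_TOKEN"],
    include_prs=False,  # only issues, not pull requests
)
docs = loader.load()
```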

View File

@@ -0,0 +1,31 @@
# IEIT Systems
>[IEIT Systems](https://en.ieisystem.com/) is a Chinese information technology company
> established in 1999. It provides IT infrastructure products, solutions,
> and services, as well as innovative IT products and solutions across cloud computing,
> big data, and artificial intelligence.
## LLMs
See a [usage example](/docs/integrations/llms/yuan2).
```python
from langchain_community.llms.yuan2 import Yuan2
```
## Chat models
See the [installation instructions](/docs/integrations/chat/yuan2/#setting-up-your-api-server).
Yuan2.0 provides an OpenAI-compatible API, and `ChatYuan2` is integrated into LangChain by using the OpenAI client.
Therefore, ensure the `openai` package is installed.
```bash
pip install openai
```
See a [usage example](/docs/integrations/chat/yuan2).
```python
from langchain_community.chat_models import ChatYuan2
```
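A minimal sketch, assuming a self-hosted Yuan2.0 API server at a hypothetical local endpoint:
```python
from langchain_community.chat_models import ChatYuan2

# Hypothetical endpoint; point this at your own OpenAI-compatible
# Yuan2.0 API server (see the setup instructions above).
chat = ChatYuan2(yuan2_api_base="http://127.0.0.1:8001/v1")
print(chat.invoke("Hello!").content)
```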

View File

@@ -0,0 +1,38 @@
# iFlytek
>[iFlytek](https://www.iflytek.com) is a Chinese information technology company
> established in 1999. It creates voice recognition software and
> voice-based internet/mobile products covering the education, communication,
> music, and intelligent toy industries.
## Installation and Setup
- Get `SparkLLM` app_id, api_key and api_secret from [iFlyTek SparkLLM API Console](https://console.xfyun.cn/services/bm3) (for more info, see [iFlyTek SparkLLM Intro](https://xinghuo.xfyun.cn/sparkapi)).
- Install the Python package (not for the embedding models):
```bash
pip install websocket-client
```
## LLMs
See a [usage example](/docs/integrations/llms/sparkllm).
```python
from langchain_community.llms import SparkLLM
```
## Chat models
See a [usage example](/docs/integrations/chat/sparkllm).
```python
from langchain_community.chat_models import ChatSparkLLM
```
## Embedding models
```python
from langchain_community.embeddings import SparkLLMTextEmbeddings
```
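As a sketch, the embeddings client takes the three credentials from the console above (placeholders shown inline for illustration; prefer environment variables in real code):
```python
from langchain_community.embeddings import SparkLLMTextEmbeddings

# Credentials come from the iFlyTek SparkLLM API Console (see setup above).
embeddings = SparkLLMTextEmbeddings(
    spark_app_id="<app_id>",
    spark_api_key="<api_key>",
    spark_api_secret="<api_secret>",
)
vector = embeddings.embed_query("Hello, Spark!")
```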

View File

@@ -41,7 +41,7 @@ See a usage [example](/docs/integrations/llms/konko).
- **Completion with mistralai/Mistral-7B-v0.1:**
```python
from langchain.llms import Konko
from langchain_community.llms import Konko
llm = Konko(max_tokens=800, model='mistralai/Mistral-7B-v0.1')
prompt = "Generate a Product Description for Apple Iphone 15"
response = llm.invoke(prompt)

View File

@@ -1,26 +1,50 @@
# Llama.cpp
This page covers how to use [llama.cpp](https://github.com/ggerganov/llama.cpp) within LangChain.
It is broken into two parts: installation and setup, and then references to specific Llama-cpp wrappers.
>The [llama.cpp python](https://github.com/abetlen/llama-cpp-python) library provides simple Python bindings for `@ggerganov`'s
>[llama.cpp](https://github.com/ggerganov/llama.cpp).
>
>This package provides:
>
> - Low-level access to C API via ctypes interface.
> - High-level Python API for text completion
> - `OpenAI`-like API
> - `LangChain` compatibility
> - `LlamaIndex` compatibility
> - OpenAI compatible web server
> - Local Copilot replacement
> - Function Calling support
> - Vision API support
> - Multiple Models
## Installation and Setup
- Install the Python package with `pip install llama-cpp-python`
- Install the Python package
```bash
pip install llama-cpp-python
```
- Download one of the [supported models](https://github.com/ggerganov/llama.cpp#description) and convert them to the llama.cpp format per the [instructions](https://github.com/ggerganov/llama.cpp)
## Wrappers
### LLM
## Chat models
See a [usage example](/docs/integrations/chat/llamacpp).
```python
from langchain_community.chat_models import ChatLlamaCpp
```
## LLMs
See a [usage example](/docs/integrations/llms/llamacpp).
There exists a LlamaCpp LLM wrapper, which you can access with
```python
from langchain_community.llms import LlamaCpp
```
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/llamacpp)
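As a minimal sketch (the model path is a placeholder; point it at a GGUF file you have downloaded and converted per the instructions above):
```python
from langchain_community.llms import LlamaCpp

# Runs the model locally; no API key required.
llm = LlamaCpp(model_path="/path/to/model.gguf", temperature=0.75, max_tokens=200)
print(llm.invoke("Name the planets in the solar system."))
```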
### Embeddings
## Embedding models
See a [usage example](/docs/integrations/text_embedding/llamacpp).
There exists a LlamaCpp Embeddings wrapper, which you can access with
```python
from langchain_community.embeddings import LlamaCppEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/llamacpp)

View File

@@ -0,0 +1,21 @@
# MariTalk
>[MariTalk](https://www.maritaca.ai/en) is an LLM-based chatbot trained to meet the needs of Brazil.
## Installation and Setup
You need to get a MariTalk API key.
You also need to install the `httpx` Python package.
```bash
pip install httpx
```
## Chat models
See a [usage example](/docs/integrations/chat/maritalk).
```python
from langchain_community.chat_models import ChatMaritalk
```
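A minimal sketch (the model name is illustrative; check MariTalk's documentation for currently available models):
```python
from langchain_community.chat_models import ChatMaritalk

# Replace the placeholder with your MariTalk API key.
chat = ChatMaritalk(model="sabia-2-medium", api_key="<MARITALK_API_KEY>")
response = chat.invoke("Em que ano foi descoberto o Brasil?")
print(response.content)
```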

View File

@@ -167,7 +167,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",

View File

@@ -0,0 +1,34 @@
# MLX
>[MLX](https://ml-explore.github.io/mlx/build/html/index.html) is a `NumPy`-like array framework
> designed for efficient and flexible machine learning on `Apple` silicon,
> brought to you by `Apple machine learning research`.
## Installation and Setup
Install several Python packages:
```bash
pip install mlx-lm transformers huggingface_hub
```
## Chat models
See a [usage example](/docs/integrations/chat/mlx).
```python
from langchain_community.chat_models.mlx import ChatMLX
```
## LLMs
### MLX Local Pipelines
See a [usage example](/docs/integrations/llms/mlx_pipelines).
```python
from langchain_community.llms.mlx_pipeline import MLXPipeline
```
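For illustration, a minimal sketch of loading a local pipeline (the model ID is an example from the `mlx-community` Hub organization):
```python
from langchain_community.llms.mlx_pipeline import MLXPipeline

# Downloads and runs the model locally on Apple silicon.
pipe = MLXPipeline.from_model_id(
    "mlx-community/quantized-gemma-2b-it",
    pipeline_kwargs={"max_tokens": 10, "temp": 0.1},
)
print(pipe.invoke("Once upon a time"))
```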

View File

@@ -12,5 +12,5 @@ See instructions at [Motörhead](https://github.com/getmetal/motorhead) for runn
See a [usage example](/docs/integrations/memory/motorhead_memory).
```python
from langchain.memory import MotorheadMemory
from langchain_community.memory import MotorheadMemory
```
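A minimal sketch, assuming a Motörhead server running locally on the default port (run inside an async context, e.g. a notebook cell):
```python
from langchain_community.memory import MotorheadMemory

memory = MotorheadMemory(
    session_id="example-session",
    url="http://localhost:8080",
    memory_key="chat_history",
)
await memory.init()  # loads any existing state for this session
```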

View File

@@ -0,0 +1,37 @@
# OctoAI
>[OctoAI](https://docs.octoai.cloud/docs) offers easy access to efficient compute
> and enables users to integrate their choice of AI models into applications.
> The `OctoAI` compute service helps you run, tune, and scale AI applications easily.
## Installation and Setup
- Install the `openai` Python package:
```bash
pip install openai
```
- Register on `OctoAI` and get an API Token from [your OctoAI account page](https://octoai.cloud/settings).
## Chat models
See a [usage example](/docs/integrations/chat/octoai).
```python
from langchain_community.chat_models import ChatOctoAI
```
## LLMs
See a [usage example](/docs/integrations/llms/octoai).
```python
from langchain_community.llms.octoai_endpoint import OctoAIEndpoint
```
## Embedding models
```python
from langchain_community.embeddings.octoai_embeddings import OctoAIEmbeddings
```
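As a sketch, assuming the client reads the `OCTOAI_API_TOKEN` environment variable (placeholder shown inline for illustration):
```python
import os

from langchain_community.embeddings.octoai_embeddings import OctoAIEmbeddings

os.environ["OCTOAI_API_TOKEN"] = "<your-token>"
embeddings = OctoAIEmbeddings()
vector = embeddings.embed_query("Hello, OctoAI!")
```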

View File

@@ -0,0 +1,25 @@
# Perplexity
>[Perplexity](https://www.perplexity.ai/pro) is the most powerful way to search
> the internet with unlimited Pro Search, upgraded AI models, unlimited file upload,
> image generation, and API credits.
>
> You can check a [list of available models](https://docs.perplexity.ai/docs/model-cards).
## Installation and Setup
Install a Python package:
```bash
pip install openai
```
Get your API key from [here](https://docs.perplexity.ai/docs/getting-started).
## Chat models
See a [usage example](/docs/integrations/chat/perplexity).
```python
from langchain_community.chat_models import ChatPerplexity
```
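A minimal sketch (the model name is illustrative; see the model cards linked above for current options, and set `PPLX_API_KEY` in your environment):
```python
from langchain_community.chat_models import ChatPerplexity

# Reads the API key from the PPLX_API_KEY environment variable.
chat = ChatPerplexity(model="llama-3-sonar-small-32k-online", temperature=0.7)
print(chat.invoke("Give me a one-line summary of LangChain.").content)
```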

View File

@@ -108,7 +108,7 @@
},
"outputs": [],
"source": [
"from langchain.document_loaders import TextLoader\n",
"from langchain_community.document_loaders import TextLoader\n",
"\n",
"loader = TextLoader(\"state_of_the_union.txt\")\n",
"documents = loader.load()\n",

View File

@@ -86,7 +86,7 @@ See a [Perpetual Memory Example here](/docs/integrations/memory/zep_cloud_chat_m
You can use `ZepCloudMemory` together with agents that support Memory.
```python
from langchain.memory import ZepCloudMemory
from langchain_community.memory import ZepCloudMemory
```
See a [Memory RAG Example here](/docs/integrations/memory/zep_memory_cloud).
@@ -117,4 +117,4 @@ MMR search is useful for ensuring that the retrieved documents are diverse and n
from langchain_community.vectorstores import ZepCloudVectorStore
```
See a [usage example](/docs/integrations/vectorstores/zep_cloud).
See a [usage example](/docs/integrations/vectorstores/zep_cloud).

View File

@@ -0,0 +1,18 @@
# Zhipu AI
>[Zhipu AI](https://www.zhipuai.cn/en/aboutus), originating from the technological
> advancements of `Tsinghua University's Computer Science Department`,
> is an artificial intelligence company with the mission of teaching machines
> to think like humans. Its world-leading AI team has developed cutting-edge
> large language and multimodal models and built high-precision, billion-scale
> knowledge graphs; the combination uniquely empowers it to create a powerful
> data- and knowledge-driven cognitive engine moving towards artificial general intelligence.
## Chat models
See a [usage example](/docs/integrations/chat/zhipuai).
```python
from langchain_community.chat_models import ChatZhipuAI
```
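For illustration, a minimal sketch (the model name is an example; the client reads the API key from the `ZHIPUAI_API_KEY` environment variable):
```python
import os

from langchain_community.chat_models import ChatZhipuAI

os.environ["ZHIPUAI_API_KEY"] = "<your-api-key>"
chat = ChatZhipuAI(model="glm-4", temperature=0.5)
print(chat.invoke("Hello!").content)
```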

View File

@@ -1,7 +1,7 @@
{
"cells": [
{
"cell_type": "markdown",
"cell_type": "raw",
"id": "00a924a0-57e2-43fa-95dc-3ea48a56d3a5",
"metadata": {},
"source": [
@@ -17,8 +17,6 @@
"source": [
"# ArxivRetriever\n",
"\n",
"## Overview\n",
"\n",
">[arXiv](https://arxiv.org/) is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.\n",
"\n",
"This notebook shows how to retrieve scientific articles from Arxiv.org into the [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) format that is used downstream.\n",
@@ -27,9 +25,9 @@
"\n",
"### Integration details\n",
"\n",
"| Retriever | Source | Package |\n",
"| :--- | :--- | :---: |\n",
"[ArxivRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.arxiv.ArxivRetriever.html) | Scholarly articles on [arxiv.org](https://arxiv.org/) | langchain_community |\n",
"import {ItemTable} from \"@theme/FeatureTables\";\n",
"\n",
"<ItemTable category=\"external_retrievers\" item=\"ArxivRetriever\" />\n",
"\n",
"## Setup\n",
"\n",

View File

@@ -1,7 +1,7 @@
{
"cells": [
{
"cell_type": "markdown",
"cell_type": "raw",
"id": "f9a62e19-b00b-4f6c-a700-1e500e4c290a",
"metadata": {},
"source": [
@@ -17,7 +17,6 @@
"source": [
"# AzureAISearchRetriever\n",
"\n",
"## Overview\n",
"[Azure AI Search](https://learn.microsoft.com/azure/search/search-what-is-azure-search) (formerly known as `Azure Cognitive Search`) is a Microsoft cloud search service that gives developers infrastructure, APIs, and tools for information retrieval of vector, keyword, and hybrid queries at scale.\n",
"\n",
"`AzureAISearchRetriever` is an integration module that returns documents from an unstructured query. It's based on the BaseRetriever class and it targets the 2023-11-01 stable REST API version of Azure AI Search, which means it supports vector indexing and queries.\n",
@@ -28,9 +27,9 @@
"\n",
"### Integration details\n",
"\n",
"| Retriever | Self-host | Cloud offering | Package |\n",
"| :--- | :--- | :---: | :---: |\n",
"[AzureAISearchRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.azure_ai_search.AzureAISearchRetriever.html) | ❌ | ✅ | langchain_community |\n",
"import {ItemTable} from \"@theme/FeatureTables\";\n",
"\n",
"<ItemTable category=\"document_retrievers\" item=\"AzureAISearchRetriever\" />\n",
"\n",
"\n",
"## Setup\n",

View File

@@ -1,7 +1,7 @@
{
"cells": [
{
"cell_type": "markdown",
"cell_type": "raw",
"id": "b0872249-1af5-4d54-b816-1babad7a8c9e",
"metadata": {},
"source": [
@@ -17,8 +17,6 @@
"source": [
"# Bedrock (Knowledge Bases) Retriever\n",
"\n",
"## Overview\n",
"\n",
"This guide will help you getting started with the AWS Knowledge Bases [retriever](/docs/concepts/#retrievers).\n",
"\n",
"[Knowledge Bases for Amazon Bedrock](https://aws.amazon.com/bedrock/knowledge-bases/) is an Amazon Web Services (AWS) offering which lets you quickly build RAG applications by using your private data to customize FM response.\n",
@@ -29,9 +27,9 @@
"\n",
"### Integration details\n",
"\n",
"| Retriever | Self-host | Cloud offering | Package |\n",
"| :--- | :--- | :---: | :---: |\n",
"[AmazonKnowledgeBasesRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_aws.retrievers.bedrock.AmazonKnowledgeBasesRetriever.html) | ❌ | ✅ | langchain_aws |\n"
"import {ItemTable} from \"@theme/FeatureTables\";\n",
"\n",
"<ItemTable category=\"document_retrievers\" item=\"AmazonKnowledgeBasesRetriever\" />\n"
]
},
{

View File

@@ -76,7 +76,7 @@
},
"outputs": [],
"source": [
"from langchain.retrievers import DriaRetriever\n",
"from langchain_community.retrievers import DriaRetriever\n",
"\n",
"api_key = os.getenv(\"DRIA_API_KEY\")\n",
"retriever = DriaRetriever(api_key=api_key)"

View File

@@ -1,7 +1,7 @@
{
"cells": [
{
"cell_type": "markdown",
"cell_type": "raw",
"id": "41ccce84-f6d9-4ba0-8281-22cbf29f20d3",
"metadata": {},
"source": [
@@ -17,7 +17,6 @@
"source": [
"# ElasticsearchRetriever\n",
"\n",
"## Overview\n",
">[Elasticsearch](https://www.elastic.co/elasticsearch/) is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. It supports keyword search, vector search, hybrid search and complex filtering.\n",
"\n",
"The `ElasticsearchRetriever` is a generic wrapper to enable flexible access to all `Elasticsearch` features through the [Query DSL](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html). For most use cases the other classes (`ElasticsearchStore`, `ElasticsearchEmbeddings`, etc.) should suffice, but if they don't you can use `ElasticsearchRetriever`.\n",
@@ -26,9 +25,9 @@
"\n",
"### Integration details\n",
"\n",
"| Retriever | Self-host | Cloud offering | Package |\n",
"| :--- | :--- | :---: | :---: |\n",
"[ElasticsearchRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_elasticsearch.retrievers.ElasticsearchRetriever.html) | ✅ | ✅ | langchain_elasticsearch |\n",
"import {ItemTable} from \"@theme/FeatureTables\";\n",
"\n",
"<ItemTable category=\"document_retrievers\" item=\"ElasticsearchRetriever\" />\n",
"\n",
"\n",
"## Setup\n",

View File

@@ -1,7 +1,7 @@
{
"cells": [
{
"cell_type": "markdown",
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
@@ -15,8 +15,6 @@
"source": [
"# Google Vertex AI Search\n",
"\n",
"## Overview\n",
"\n",
">[Google Vertex AI Search](https://cloud.google.com/enterprise-search) (formerly known as `Enterprise Search` on `Generative AI App Builder`) is a part of the [Vertex AI](https://cloud.google.com/vertex-ai) machine learning platform offered by `Google Cloud`.\n",
">\n",
">`Vertex AI Search` lets organizations quickly build generative AI-powered search engines for customers and employees. It's underpinned by a variety of `Google Search` technologies, including semantic search, which helps deliver more relevant results than traditional keyword-based search techniques by using natural language processing and machine learning techniques to infer relationships within the content and intent from the users query input. Vertex AI Search also benefits from Googles expertise in understanding how users search and factors in content relevance to order displayed results.\n",
@@ -29,9 +27,9 @@
"\n",
"### Integration details\n",
"\n",
"| Retriever | Self-host | Cloud offering | Package |\n",
"| :--- | :--- | :---: | :---: |\n",
"[VertexAISearchRetriever](https://api.python.langchain.com/en/latest/vertex_ai_search/langchain_google_community.vertex_ai_search.VertexAISearchRetriever.html) | ❌ | ✅ | langchain_google_community |\n",
"import {ItemTable} from \"@theme/FeatureTables\";\n",
"\n",
"<ItemTable category=\"document_retrievers\" item=\"VertexAISearchRetriever\" />\n",
"\n",
"\n",
"## Setup\n",

View File

@@ -3,6 +3,8 @@ sidebar_position: 0
sidebar_class_name: hidden
---
import {CategoryTable, IndexTable} from '@theme/FeatureTables'
# Retrievers
A [retriever](/docs/concepts/#retrievers) is an interface that returns documents given an unstructured query.
@@ -22,20 +24,14 @@ This page lists custom retrievers, implemented via subclassing [BaseRetriever](/
The below retrievers allow you to index and search a custom corpus of documents.
| Retriever | Self-host | Cloud offering | Package |
|-----------|-----------|----------------|---------|
| [AmazonKnowledgeBasesRetriever](/docs/integrations/retrievers/bedrock) | ❌ | ✅ | [langchain_aws](https://api.python.langchain.com/en/latest/retrievers/langchain_aws.retrievers.bedrock.AmazonKnowledgeBasesRetriever.html) |
| [AzureAISearchRetriever](/docs/integrations/retrievers/azure_ai_search) | ❌ | ✅ | [langchain_community](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.azure_ai_search.AzureAISearchRetriever.html) |
| [ElasticsearchRetriever](/docs/integrations/retrievers/elasticsearch_retriever) | ✅ | ✅ | [langchain_elasticsearch](https://api.python.langchain.com/en/latest/retrievers/langchain_elasticsearch.retrievers.ElasticsearchRetriever.html) |
| [MilvusCollectionHybridSearchRetriever](/docs/integrations/retrievers/milvus_hybrid_search) | ✅ | ❌ | [langchain_milvus](https://api.python.langchain.com/en/latest/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html) |
| [VertexAISearchRetriever](/docs/integrations/retrievers/google_vertex_ai_search) | ❌ | ✅ | [langchain_google_community](https://api.python.langchain.com/en/latest/vertex_ai_search/langchain_google_community.vertex_ai_search.VertexAISearchRetriever.html) |
<CategoryTable category="document_retrievers" />
## External index
The below retrievers will search over an external index (e.g., constructed from Internet data or similar).
| Retriever | Source | Package |
|-----------|--------|---------|
| [ArxivRetriever](/docs/integrations/retrievers/arxiv) | Scholarly articles on [arxiv.org](https://arxiv.org/) | [langchain_community](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.arxiv.ArxivRetriever.html) |
| [TavilySearchAPIRetriever](/docs/integrations/retrievers/tavily) | Internet search | [langchain_community](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.tavily_search_api.TavilySearchAPIRetriever.html) |
| [WikipediaRetriever](/docs/integrations/retrievers/wikipedia) | [Wikipedia](https://www.wikipedia.org/) articles | [langchain_community](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.wikipedia.WikipediaRetriever.html) |
<CategoryTable category="external_retrievers" />
## All retrievers
<IndexTable />

View File

@@ -1,7 +1,7 @@
{
"cells": [
{
"cell_type": "markdown",
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
@@ -15,8 +15,6 @@
"source": [
"# Milvus Hybrid Search Retriever\n",
"\n",
"## Overview\n",
"\n",
"> [Milvus](https://milvus.io/docs) is an open-source vector database built to power embedding similarity search and AI applications. Milvus makes unstructured data search more accessible, and provides a consistent user experience regardless of the deployment environment.\n",
"\n",
"This will help you getting started with the Milvus Hybrid Search [retriever](/docs/concepts/#retrievers), which combines the strengths of both dense and sparse vector search. For detailed documentation of all `MilvusCollectionHybridSearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html).\n",
@@ -25,11 +23,9 @@
"\n",
"### Integration details\n",
"\n",
"| Retriever | Self-host | Cloud offering | Package |\n",
"| :--- | :--- | :---: | :---: |\n",
"[MilvusCollectionHybridSearchRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html) | ✅ | ❌ | langchain_milvus |\n",
"\n",
"import {ItemTable} from \"@theme/FeatureTables\";\n",
"\n",
"<ItemTable category=\"document_retrievers\" item=\"MilvusCollectionHybridSearchRetriever\" />\n",
"\n",
"## Setup\n",
"\n",

View File

@@ -379,4 +379,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
}
}

View File

@@ -85,13 +85,14 @@
"source": [
"import os\n",
"\n",
"from langchain_core.documents import Document\n",
"\n",
"os.environ[\"VECTARA_API_KEY\"] = \"<YOUR_VECTARA_API_KEY>\"\n",
"os.environ[\"VECTARA_CORPUS_ID\"] = \"<YOUR_VECTARA_CORPUS_ID>\"\n",
"os.environ[\"VECTARA_CUSTOMER_ID\"] = \"<YOUR_VECTARA_CUSTOMER_ID>\"\n",
"\n",
"from langchain.chains.query_constructor.base import AttributeInfo\n",
"from langchain.retrievers.self_query.base import SelfQueryRetriever\n",
"from langchain.schema import Document\n",
"from langchain_community.vectorstores import Vectara\n",
"from langchain_openai.chat_models import ChatOpenAI"
]

View File

@@ -1,7 +1,7 @@
{
"cells": [
{
"cell_type": "markdown",
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
@@ -15,16 +15,15 @@
"source": [
"# TavilySearchAPIRetriever\n",
"\n",
"## Overview\n",
">[Tavily's Search API](https://tavily.com) is a search engine built specifically for AI agents (LLMs), delivering real-time, accurate, and factual results at speed.\n",
"\n",
"We can use this as a [retriever](/docs/how_to#retrievers). It will show functionality specific to this integration. After going through, it may be useful to explore [relevant use-case pages](/docs/how_to#qa-with-rag) to learn how to use this vectorstore as part of a larger chain.\n",
"\n",
"### Integration details\n",
"\n",
"| Retriever | Source | Package |\n",
"| :--- | :--- | :---: |\n",
"[TavilySearchAPIRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.tavily_search_api.TavilySearchAPIRetriever.html) | Internet search | langchain_community |\n",
"import {ItemTable} from \"@theme/FeatureTables\";\n",
"\n",
"<ItemTable category=\"external_retrievers\" item=\"TavilySearchAPIRetriever\" />\n",
"\n",
"## Setup"
]

View File

@@ -23,7 +23,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers import NeuralDBRetriever\n",
"from langchain_community.retrievers import NeuralDBRetriever\n",
"\n",
"# From scratch\n",
"retriever = NeuralDBRetriever.from_scratch(thirdai_key=\"your-thirdai-key\")\n",

View File

@@ -1,7 +1,7 @@
{
"cells": [
{
"cell_type": "markdown",
"cell_type": "raw",
"id": "62727aaa-bcff-4087-891c-e539f824ee1f",
"metadata": {},
"source": [
@@ -24,9 +24,9 @@
"\n",
"### Integration details\n",
"\n",
"| Retriever | Source | Package |\n",
"| :--- | :--- | :---: |\n",
"[WikipediaRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.wikipedia.WikipediaRetriever.html) | [Wikipedia](https://www.wikipedia.org/) articles | langchain_community |"
"import {ItemTable} from \"@theme/FeatureTables\";\n",
"\n",
"<ItemTable category=\"external_retrievers\" item=\"WikipediaRetriever\" />"
]
},
{

View File

@@ -60,7 +60,7 @@
"import time\n",
"from uuid import uuid4\n",
"\n",
"from langchain.memory import ZepMemory\n",
"from langchain_community.memory.zep_memory import ZepMemory\n",
"from langchain_core.messages import AIMessage, HumanMessage\n",
"\n",
"# Set this to your Zep server URL\n",

View File

@@ -219,4 +219,4 @@
},
"nbformat": 4,
"nbformat_minor": 2
}
}

View File

@@ -2,116 +2,248 @@
"cells": [
{
"cell_type": "raw",
"id": "c2923bd1",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: AI21 Labs\n",
"sidebar_label: AI21\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "cc3c6ef6bbd57ce9",
"metadata": {
"collapsed": false
},
"id": "9a3d6f34",
"metadata": {},
"source": [
"# AI21Embeddings\n",
"\n",
"This notebook covers how to get started with AI21 embedding models.\n",
"This will help you get started with AI21 embedding models using LangChain. For detailed documentation on `AI21Embeddings` features and configuration options, please refer to the [API reference](https://api.python.langchain.com/en/latest/embeddings/langchain_ai21.embeddings.AI21Embeddings.html).\n",
"\n",
"## Installation"
"## Overview\n",
"### Integration details\n",
"\n",
"import { ItemTable } from \"@theme/FeatureTables\";\n",
"\n",
"<ItemTable category=\"text_embedding\" item=\"AI21\" />\n",
"\n",
"## Setup\n",
"\n",
"To access AI21 embedding models you'll need to create an AI21 account, get an API key, and install the `langchain-ai21` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [https://docs.ai21.com/](https://docs.ai21.com/) to sign up to AI21 and generate an API key. Once you've done this set the `AI21_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4c3bef91",
"execution_count": 2,
"id": "36521c2a",
"metadata": {},
"outputs": [],
"source": [
"!pip install -qU langchain-ai21"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Environment Setup\n",
"\n",
"We'll need to get a [AI21 API key](https://docs.ai21.com/) and set the `AI21_API_KEY` environment variable:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"from getpass import getpass\n",
"\n",
"os.environ[\"AI21_API_KEY\"] = getpass()"
"if not os.getenv(\"AI21_API_KEY\"):\n",
" os.environ[\"AI21_API_KEY\"] = getpass.getpass(\"Enter your AI21 API key: \")"
]
},
{
"cell_type": "markdown",
"id": "74ef9d8b40a1319e",
"metadata": {
"collapsed": false
},
"id": "c84fb993",
"metadata": {},
"source": [
"## Usage"
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "12fcfb4b",
"execution_count": 3,
"id": "39a4953b",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "d9664366",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain AI21 integration lives in the `langchain-ai21` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "64853226",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-ai21"
]
},
{
"cell_type": "markdown",
"id": "45dd1724",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "9ea7a09b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_ai21 import AI21Embeddings\n",
"\n",
"embeddings = AI21Embeddings()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1f2e6104",
"metadata": {},
"outputs": [],
"source": [
"embeddings.embed_query(\"My query to look up\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a3465d7e63bfb3d1",
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"embeddings.embed_documents(\n",
" [\"This is a content of the document\", \"This is another document\"]\n",
"embeddings = AI21Embeddings(\n",
" # Can optionally increase or decrease the batch_size\n",
" # to improve latency.\n",
" # Use larger batch sizes with smaller documents, and\n",
" # smaller batch sizes with larger documents.\n",
" # batch_size=256,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9d60af6d",
"cell_type": "markdown",
"id": "77d271b6",
"metadata": {},
"outputs": [],
"source": []
"source": [
"## Indexing and Retrieval\n",
"\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our RAG tutorials under the [working with external knowledge tutorials](/docs/tutorials/#working-with-external-knowledge).\n",
"\n",
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "d817716b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'LangChain is the framework for building context-aware reasoning applications'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Create a vector store with a sample text\n",
"from langchain_core.vectorstores import InMemoryVectorStore\n",
"\n",
"text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
"\n",
"vectorstore = InMemoryVectorStore.from_texts(\n",
" [text],\n",
" embedding=embeddings,\n",
")\n",
"\n",
"# Use the vectorstore as a retriever\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"# Retrieve the most similar text\n",
"retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
"\n",
"# show the retrieved document's content\n",
"retrieved_documents[0].page_content"
]
},
{
"cell_type": "markdown",
"id": "e02b9855",
"metadata": {},
"source": [
"## Direct Usage\n",
"\n",
"Under the hood, the vectorstore and retriever implementations are calling `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.\n",
"\n",
"You can directly call these methods to get embeddings for your own use cases.\n",
"\n",
"### Embed single texts\n",
"\n",
"You can embed single texts or documents with `embed_query`:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "0d2befcd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[0.01913362182676792, 0.004960147198289633, -0.01582135073840618, -0.042474791407585144, 0.040200788\n"
]
}
],
"source": [
"single_vector = embeddings.embed_query(text)\n",
"print(str(single_vector)[:100]) # Show the first 100 characters of the vector"
]
},
{
"cell_type": "markdown",
"id": "1b5a7d03",
"metadata": {},
"source": [
"### Embed multiple texts\n",
"\n",
"You can embed multiple texts with `embed_documents`:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "2f4d6e97",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[0.03029559925198555, 0.002908500377088785, -0.02700909972190857, -0.04616579785943031, 0.0382771529\n",
"[0.018214847892522812, 0.011460083536803722, -0.03329407051205635, -0.04951060563325882, 0.032756105\n"
]
}
],
"source": [
"text2 = (\n",
" \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
")\n",
"two_vectors = embeddings.embed_documents([text, text2])\n",
"for vector in two_vectors:\n",
" print(str(vector)[:100]) # Show the first 100 characters of the vector"
]
},
{
"cell_type": "markdown",
"id": "98785c12",
"metadata": {},
"source": [
"## API Reference\n",
"\n",
"For detailed documentation on `AI21Embeddings` features and configuration options, please refer to the [API reference](https://api.python.langchain.com/en/latest/embeddings/langchain_ai21.embeddings.AI21Embeddings.html).\n"
]
}
],
"metadata": {
@@ -130,7 +262,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.9.6"
}
},
"nbformat": 4,

View File

@@ -2,195 +2,261 @@
"cells": [
{
"cell_type": "raw",
"id": "0aed0743",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"keywords: [AzureOpenAIEmbeddings]\n",
"sidebar_label: AzureOpenAI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "c3852491",
"id": "9a3d6f34",
"metadata": {},
"source": [
"# Azure OpenAI\n",
"# AzureOpenAIEmbeddings\n",
"\n",
"Let's load the Azure OpenAI Embedding class with environment variables set to indicate to use Azure endpoints."
"This will help you get started with AzureOpenAI embedding models using LangChain. For detailed documentation on `AzureOpenAIEmbeddings` features and configuration options, please refer to the [API reference](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.azure.AzureOpenAIEmbeddings.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"import { ItemTable } from \"@theme/FeatureTables\";\n",
"\n",
"<ItemTable category=\"text_embedding\" item=\"AzureOpenAI\" />\n",
"\n",
"## Setup\n",
"\n",
"To access AzureOpenAI embedding models you'll need to create an Azure account, get an API key, and install the `langchain-openai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Youll need to have an Azure OpenAI instance deployed. You can deploy a version on Azure Portal following this [guide](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal).\n",
"\n",
"Once you have your instance running, make sure you have the name of your instance and key. You can find the key in the Azure Portal, under the “Keys and Endpoint” section of your instance.\n",
"\n",
"```bash\n",
"AZURE_OPENAI_ENDPOINT=<YOUR API ENDPOINT>\n",
"AZURE_OPENAI_API_KEY=<YOUR_KEY>\n",
"AZURE_OPENAI_API_VERSION=\"2024-02-01\"\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "36521c2a",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"OPENAI_API_KEY\"):\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Enter your AzureOpenAI API key: \")"
]
},
{
"cell_type": "markdown",
"id": "c84fb993",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "39a4953b",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "d9664366",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain AzureOpenAI integration lives in the `langchain-openai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "228faf0c",
"id": "64853226",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-openai"
"%pip install -qU langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "8a6ed30d-806f-4800-b5fd-d04126be9060",
"cell_type": "markdown",
"id": "45dd1724",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"## Instantiation\n",
"\n",
"os.environ[\"AZURE_OPENAI_API_KEY\"] = \"...\"\n",
"os.environ[\"AZURE_OPENAI_ENDPOINT\"] = \"https://<your-endpoint>.openai.azure.com/\""
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "20179bc7-3f71-4909-be12-d38bce009b18",
"execution_count": 11,
"id": "9ea7a09b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import AzureOpenAIEmbeddings\n",
"\n",
"embeddings = AzureOpenAIEmbeddings(\n",
" azure_deployment=\"<your-embeddings-deployment-name>\",\n",
" openai_api_version=\"2023-05-15\",\n",
" model=\"text-embedding-3-large\",\n",
" # dimensions: Optional[int] = None, # Can specify dimensions with new text-embedding-3 models\n",
" # azure_endpoint=\"https://<your-endpoint>.openai.azure.com/\", If not provided, will read env variable AZURE_OPENAI_ENDPOINT\n",
" # api_key=... # Can provide an API key directly. If missing read env variable AZURE_OPENAI_API_KEY\n",
" # openai_api_version=..., # If not provided, will read env variable AZURE_OPENAI_API_VERSION\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f8cb9dca-738b-450f-9986-5c3efd3c6eb3",
"cell_type": "markdown",
"id": "77d271b6",
"metadata": {},
"outputs": [],
"source": [
"text = \"this is a test document\""
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "0fae0295-b117-4a5a-8b98-500c79306551",
"metadata": {},
"outputs": [],
"source": [
"query_result = embeddings.embed_query(text)"
"## Indexing and Retrieval\n",
"\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our RAG tutorials under the [working with external knowledge tutorials](/docs/tutorials/#working-with-external-knowledge).\n",
"\n",
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "65a01ddd-0bbf-444f-a87f-93af25ef902c",
"metadata": {},
"outputs": [],
"source": [
"doc_result = embeddings.embed_documents([text])"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "45771052-68ca-4e03-9c4f-a0c7796d9442",
"id": "d817716b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[-0.012222584727053133,\n",
" 0.0072103982392216145,\n",
" -0.014818063280923775,\n",
" -0.026444746872933557,\n",
" -0.0034330499700826883]"
"'LangChain is the framework for building context-aware reasoning applications'"
]
},
"execution_count": 6,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"doc_result[0][:5]"
"# Create a vector store with a sample text\n",
"from langchain_core.vectorstores import InMemoryVectorStore\n",
"\n",
"text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
"\n",
"vectorstore = InMemoryVectorStore.from_texts(\n",
" [text],\n",
" embedding=embeddings,\n",
")\n",
"\n",
"# Use the vectorstore as a retriever\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"# Retrieve the most similar text\n",
"retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
"\n",
"# show the retrieved document's content\n",
"retrieved_documents[0].page_content"
]
},
{
"cell_type": "markdown",
"id": "e66ec1f2-6768-4ee5-84bf-a2d76adc20c8",
"id": "e02b9855",
"metadata": {},
"source": [
"## [Legacy] When using `openai<1`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1b40f827",
"metadata": {},
"outputs": [],
"source": [
"# set the environment variables needed for openai package to know to reach out to azure\n",
"import os\n",
"## Direct Usage\n",
"\n",
"os.environ[\"OPENAI_API_TYPE\"] = \"azure\"\n",
"os.environ[\"OPENAI_API_BASE\"] = \"https://<your-endpoint.openai.azure.com/\"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"your AzureOpenAI key\"\n",
"os.environ[\"OPENAI_API_VERSION\"] = \"2023-05-15\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bb36d16c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import OpenAIEmbeddings\n",
"Under the hood, the vectorstore and retriever implementations are calling `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.\n",
"\n",
"embeddings = OpenAIEmbeddings(deployment=\"your-embeddings-deployment-name\")"
"You can directly call these methods to get embeddings for your own use cases.\n",
"\n",
"### Embed single texts\n",
"\n",
"You can embed single texts or documents with `embed_query`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "228abcbb",
"execution_count": 6,
"id": "0d2befcd",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[-0.0011676070280373096, 0.007125577889382839, -0.014674457721412182, -0.034061674028635025, 0.01128\n"
]
}
],
"source": [
"text = \"This is a test document.\""
"single_vector = embeddings.embed_query(text)\n",
"print(str(single_vector)[:100]) # Show the first 100 characters of the vector"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "60dd7fad",
"cell_type": "markdown",
"id": "1b5a7d03",
"metadata": {},
"outputs": [],
"source": [
"query_result = embeddings.embed_query(text)"
"### Embed multiple texts\n",
"\n",
"You can embed multiple texts with `embed_documents`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "83bc1a72",
"execution_count": 7,
"id": "2f4d6e97",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[-0.0011966148158535361, 0.007160289213061333, -0.014659193344414234, -0.03403077274560928, 0.011280\n",
"[-0.005595256108790636, 0.016757294535636902, -0.011055258102715015, -0.031094247475266457, -0.00363\n"
]
}
],
"source": [
"doc_result = embeddings.embed_documents([text])"
"text2 = (\n",
" \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
")\n",
"two_vectors = embeddings.embed_documents([text, text2])\n",
"for vector in two_vectors:\n",
" print(str(vector)[:100]) # Show the first 100 characters of the vector"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "aaad49f8",
"cell_type": "markdown",
"id": "98785c12",
"metadata": {},
"outputs": [],
"source": []
"source": [
"## API Reference\n",
"\n",
"For detailed documentation on `AzureOpenAIEmbeddings` features and configuration options, please refer to the [API reference](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.azure.AzureOpenAIEmbeddings.html).\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -204,7 +270,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.9.6"
}
},
"nbformat": 4,

Some files were not shown because too many files have changed in this diff Show More