Compare commits

...

251 Commits

Author SHA1 Message Date
Bagatur
8616e1c44a wip 2024-06-06 14:45:05 -07:00
Bagatur
2e1dc2c660 recursive url bash 2024-06-06 14:32:37 -07:00
seyf97
2904c50cd5 openai[patch]: correct grammar in exception message in embeddings/base.py (#22629)
Corrects the grammar of the ValueError raised when the transformers package is missing.
2024-06-06 18:55:04 +00:00
Anush
80560419b0 qdrant[patch]: Make path optional in from_existing_collection() (#21875)
## Description

The `path` param is used to specify the local persistence directory,
which isn't required if using Qdrant server.

This is a breaking but necessary change.
2024-06-06 10:37:08 -07:00
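A minimal sketch of the usage this enables, assuming a running Qdrant server and an OpenAI embeddings instance (values illustrative only):

```python
from langchain_openai import OpenAIEmbeddings
from langchain_qdrant import Qdrant

# No local `path` needed when connecting to an existing collection on a server.
store = Qdrant.from_existing_collection(
    embedding=OpenAIEmbeddings(),
    collection_name="my_docs",
    url="http://localhost:6333",
)
```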
ccurme
b57aa89f34 multiple: implement ls_params (#22621)
implement ls_params for ai21, fireworks, groq.
2024-06-06 16:51:37 +00:00
Xiangrui Meng
f26ab93df8 community: support Databricks Unity Catalog functions as LangChain tools (#22555)
This PR adds support for using Databricks Unity Catalog functions as
LangChain tools; the functions run inside a Databricks SQL warehouse.

* An example notebook is provided.
2024-06-06 09:38:50 -07:00
ccurme
c1ef731503 anthropic: update attribute name and alias (#22625)
Update the attribute name to `stop_sequences` and the alias to `stop` (instead of
the other way around), since `stop_sequences` is the name used by Anthropic.
2024-06-06 12:29:10 -04:00
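A hedged sketch of the naming described above: `stop_sequences` is the attribute and `stop` is the alias, so both spellings should populate the same field:

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-haiku-20240307", stop=["\n\nHuman:"])
# The alias populates the canonical attribute used for Anthropic's API.
assert llm.stop_sequences == ["\n\nHuman:"]
```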
lucasiscovici
05bf98b2f9 community[patch]: pgvector replace nin_ by not_in (#22619)
community: "pgvector: replace nin_ with not_in"

`nin_` does not exist in the SQLAlchemy ORM; the correct operator is `not_in`.
2024-06-06 12:17:22 -04:00
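For reference, a standalone SQLAlchemy illustration of the operator involved (the model below is illustrative, not pgvector's actual schema): the negated IN filter is `not_in` (SQLAlchemy 1.4+), and `nin_` does not exist.

```python
from sqlalchemy import Column, String, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Embedding(Base):
    __tablename__ = "embeddings"  # illustrative table
    uuid = Column(String, primary_key=True)
    collection_id = Column(String)

# Negated IN filter: not_in (nin_ does not exist).
stmt = select(Embedding).where(Embedding.collection_id.not_in(["archived", "draft"]))
```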
ccurme
3999761201 multiple: add stop attribute (#22573) 2024-06-06 12:11:52 -04:00
ccurme
e08879147b Revert "anthropic: stream token usage" (#22624)
Reverts langchain-ai/langchain#20180
2024-06-06 12:05:08 -04:00
Bagatur
0d495f3f63 anthropic: stream token usage (#20180)
open to other ideas
<img width="1181" alt="Screenshot 2024-04-08 at 5 34 08 PM"
src="https://github.com/langchain-ai/langchain/assets/22008038/03eb11c4-5eb5-43e3-9109-a13f76098fa4">

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-06 11:51:34 -04:00
liuzc9
e0e40f3f63 docs: Fix typo in llmonitor.md (#22590) 2024-06-06 15:26:51 +00:00
Bagatur
feb73d4281 docs: Add ChatGoogleGenerativeAI to model feat table (#22617) 2024-06-06 08:07:13 -07:00
Satyam Kumar
17b486a37b openai, azure: update model_name in ChatResult to use name from API response (#22569)
The response.get("model", self.model_name) checks if the model key
exists in the response dictionary. If it does, it uses that value;
otherwise, it uses self.model_name.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-06 11:00:09 -04:00
Suganth Solamanraja
02495ae7c5 docs: Correct return type in docstring (#22597)
- **Description:** This PR corrects the return type in the docstring of
the `docs/api_reference/create_api_rst.py/_load_package_modules`
function. The return type was previously described as a list of

Co-authored-by: suganthsolamanraja <suganth.solamanraja@techjays..com>
2024-06-06 14:51:46 +00:00
svmpsp-rc
51942c03eb docs: correct typos in Italian words (#22606)
**Description**

Fix typos in Italian words.
2024-06-06 07:46:07 -07:00
Gabriele Ghisleni
95883a99a9 docs: ElasticsearchCacheStore in stores integrations documentation (#22612)
The package for LangChain integrations with Elasticsearch
https://github.com/langchain-ai/langchain-elastic contains a
Elasticsearch byte store cache integration (see
https://github.com/langchain-ai/langchain-elastic/pull/27). This is the
documentation contribution on the page dedicated to stores integrations

Co-authored-by: Gabriele Ghisleni <gabriele.ghisleni@spaziodati.eu>
2024-06-06 14:36:43 +00:00
Christophe Bornet
12ddb4fc6f core[patch]: Use explicit classes for InMemoryByteStore and InMemoryStore (#22608)
The current implementation doesn't work well with type checking.
Instead, replace it with a class definition that works correctly with type checking.
2024-06-06 07:34:43 -07:00
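A generic illustration of the pattern (not the library's actual code): a runtime alias of a parameterized generic is awkward for type checkers and isinstance checks, whereas an explicit subclass behaves like an ordinary class:

```python
from typing import Dict, Generic, TypeVar

V = TypeVar("V")

class BaseMemoryStore(Generic[V]):
    """Toy generic store, for illustration only."""
    def __init__(self) -> None:
        self.data: Dict[str, V] = {}

# Alias form: isinstance(obj, ByteStoreAlias) raises TypeError for subscripted generics.
ByteStoreAlias = BaseMemoryStore[bytes]

# Explicit class form: a real class that type checkers and isinstance handle cleanly.
class ByteStore(BaseMemoryStore[bytes]):
    pass
```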
andyjessen
cfed68e06f docs: Fix description (#22611)
This commit fixes the description of the hair_color field.
2024-06-06 07:25:27 -07:00
ccurme
1925bde32e together: bump langchain-core (#22616)
langchain-together depends on langchain-openai ^0.1.8
langchain-openai 0.1.8 has langchain-core >= 0.2.2

Here we bump langchain-core to 0.2.2, just to pass minimum dependency
version tests.
2024-06-06 14:09:40 +00:00
ccurme
35f4aa927b together[patch]: Release 0.1.3 (#22615) 2024-06-06 13:58:35 +00:00
Asi Greenholts
f23bec7be6 docs: Fix typo (#22596)
Fix typo
2024-06-06 08:39:54 -04:00
CharlesCNorton
abb0cecb44 fix: typo in Agents section of README (#22599)
Corrected the phrase "complete done" to "completely done" for better
grammatical accuracy and clarity in the Agents section of the README.


---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-06 07:44:36 -04:00
Kirushikesh DB
db7e7b69e3 docs: Removed unwanted cell in refine segment (#22604)
**Description:**
There was an unwanted duplicate cell in the refine section of the summarization
documentation; I have removed it.
2024-06-06 07:40:26 -04:00
andyjessen
8b40428f58 docs: Fix typo (#22603)
This commit fixes a minor typo in the field description.
2024-06-06 07:38:36 -04:00
Isaac Francisco
ba3e219d83 community[patch]: recursive url loader fix and unit tests (#22521)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-05 17:56:20 -07:00
Jacob Lee
234394f631 docs[minor]: Add "Build a PDF ingestion and Question/Answering system" tutorial (#22570)
More direct entrypoint for a common use-case. Meant to give people a
more hands-on intro to document loaders/loading data from different data
sources as well.

Some duplicate content for RAG and extraction (to show what you can do
with the loaded documents), but defers to the appropriate sections
rather than going too in-depth.

@baskaryan @hwchase17
2024-06-05 17:09:28 -07:00
Jeffrey Mak
5fc5ed463c community[patch]:Support filter for AzureAISearchRetriever (#22303)
**Description**: 
The AzureAISearchRetriever does not support the "$filter" argument
offered in the AISearch API:
https://learn.microsoft.com/en-us/rest/api/searchservice/documents/search-get?view=rest-searchservice-2023-11-01&tabs=HTTP
The $filter allows filtering of indexes based on values in metadata.

**Issue**: 
https://github.com/langchain-ai/langchain/issues/19885

**Dependencies**: 
No

**Twitter handle**: 
@Jeffreym9M
 

2024-06-05 16:53:19 -07:00
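A hedged sketch of the new capability; the keyword carrying the OData $filter expression is assumed here, and the service name, index and API key are expected to come from the usual environment variables:

```python
from langchain_community.retrievers import AzureAISearchRetriever

retriever = AzureAISearchRetriever(
    index_name="my-index",
    top_k=5,
    filter="source eq 'handbook'",  # assumed keyword for the $filter expression
)
docs = retriever.invoke("vacation policy")
```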
Isaac Francisco
148088a588 docs: duckduckgosearch options listed (#22568)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-05 23:29:47 +00:00
Mikhail Khludnev
ef868bc24b docs: mentioning query_instruction with regards to BGE-M3 (#22405)
see
https://github.com/langchain-ai/langchain/pull/18017#issuecomment-2143942760
https://huggingface.co/BAAI/bge-m3#faq

Co-authored-by: mikhail-khludnev <mikhail_khludnev@rntgroup.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-05 22:44:40 +00:00
X-HAN
62f13f95e4 community[minor]: add DashScope Rerank (#22403)
**Description:** This PR adds DashScope Rerank capability to LangChain.
You can find the DashScope Rerank API
[here](https://help.aliyun.com/document_detail/2780058.html?spm=a2c4g.2780059.0.0.6d995024FlrJ12)
&
[here](https://help.aliyun.com/document_detail/2780059.html?spm=a2c4g.2780058.0.0.63f75024cr11N9).
[DashScope](https://dashscope.aliyun.com/) is the generative AI service
from Alibaba Cloud (Aliyun). You can create DashScope API key from
[here](https://bailian.console.aliyun.com/?apiKey=1#/api-key).

**Dependencies:** DashScopeRerank depends on `dashscope` python package.

**Twitter handle:** my twitter/x account is https://x.com/LastMonopoly
and I'd like a mention, thank you!


**Tests and docs**
  1. integration test: `test_dashscope_rerank.py`
  2. example notebook: `dashscope_rerank.ipynb`

**Lint and test**: I have run `make format`, `make lint` and `make test`
from the root of the package I've modified.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-05 15:40:21 -07:00
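A hedged sketch of using the new reranker as a document compressor (import path assumed; see the dashscope_rerank.ipynb notebook added in this PR, and set DASHSCOPE_API_KEY in the environment):

```python
from langchain_community.document_compressors.dashscope_rerank import DashScopeRerank
from langchain_core.documents import Document

compressor = DashScopeRerank()
docs = [
    Document(page_content="DashScope is the generative AI service from Alibaba Cloud."),
    Document(page_content="LangChain is a framework for building LLM applications."),
]
reranked = compressor.compress_documents(docs, query="What is DashScope?")
```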
Ethan Yang
29064848f9 [Community] add option to delete the prompt from HF output (#22225)
This helps solve a pattern-mismatch issue when parsing the output in an Agent.

https://github.com/langchain-ai/langchain/issues/21912
2024-06-05 18:38:54 -04:00
Jacob Lee
c040dc7017 docs[patch]: Adds heading keywords to concepts page (#22577)
@efriis @baskaryan
2024-06-05 15:28:58 -07:00
Erick Friis
24fa17593f docs: update agentexecutor title to legacy (#22575) 2024-06-05 15:09:41 -07:00
Bagatur
584a1e30ac community[patch]: AzureSearch async functions (#22075) 2024-06-05 14:39:54 -07:00
Bagatur
1a911018bc langchain[minor]: add universal init_model (#22039)
decisions to discuss
- only chat models
- model_provider isn't based on any existing values like llm-type,
package names, class names
- implemented as function not as a wrapper ChatModel
- function name (init_model)
- in langchain as opposed to community or core
- marked beta
2024-06-05 14:39:40 -07:00
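A heavily hedged sketch of the universal initializer described above; the function name comes from this commit's notes, the import path is assumed, and the feature is marked beta, so both may differ:

```python
from langchain.chat_models import init_model  # assumed location

gpt = init_model("gpt-4o", model_provider="openai", temperature=0)
claude = init_model("claude-3-haiku-20240307", model_provider="anthropic")
```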
Isaac Francisco
67012c2558 docs: deprecation of max_length parameter used in Exa search (#22567) 2024-06-05 12:09:53 -07:00
ccurme
af129974a3 community: update how OpenAIAssistantV2Runnable creates threads with tool_resources (#22549)
https://github.com/langchain-ai/langchain/issues/22503
2024-06-05 14:19:41 -04:00
Bagatur
51a0d4574e community[patch]: Release 0.2.3 (#22562) 2024-06-05 17:27:24 +00:00
Bagatur
b2daba37c7 nomic[patch]: Release 0.1.2 (#22561) 2024-06-05 17:06:58 +00:00
Zach Nussbaum
14f3014cce embeddings: nomic embed vision (#22482)
**Description:** Adds LangChain support for Nomic Embed Vision
**Twitter handle:** nomic_ai, zach_nussbaum

---------

Co-authored-by: Lance Martin <122662504+rlancemartin@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-05 09:47:17 -07:00
leila-messallem
3280a5b49b community[patch]: improve test setup to accurately test filtering of labels in neo4j (#22531)
**Description:** This PR addresses an issue with an existing test that
was not effectively testing the intended functionality. The previous
test setup did not adequately validate the filtering of the labels in
neo4j, because the nodes and relationships in the test data did not have
any properties set. Without properties, these labels would not have been
returned, regardless of the filtering.

---------

Co-authored-by: Oskar Hane <oh@oskarhane.com>
2024-06-05 15:56:53 +00:00
Mohammad Mohtashim
7fcef2556c [Experimental]: Async agenerate method ollama functions (#21682)
- **Description:** Added the missing async generate method for OllamaFunctions, which was raising errors for users.

- **Issue:** #21422
2024-06-05 11:50:36 -04:00
Stefano Lottini
328d0c99f2 community[minor]: Add support for metadata indexing policy in Cassandra vector store (#22548)
This PR adds a constructor `metadata_indexing` parameter to the
Cassandra vector store to allow optional fine-tuning of which fields of
the metadata are to be indexed.

This is a feature supported by the underlying CassIO library. Indexing
mode of "all", "none" or deny- and allow-list based choices are
available.

The rationale is that in some cases it's advisable to programmatically
exclude some portions of the metadata from the index if one knows in
advance they won't ever be used at search time. This keeps the index
more lightweight and performant and avoids limitations on the length of
_indexed_ strings.

I added an integration test of the feature. I also added the possibility
of running the integration test with Cassandra on an arbitrary IP
address (e.g. Dockerized), via
`CASSANDRA_CONTACT_POINTS=10.1.1.5,10.1.1.6 poetry run pytest [...]` or
similar.

While I was at it, I added a line to the `.gitignore` since the mypy
_test_ cache was not ignored yet.

My X (Twitter) handle: @rsprrs.
2024-06-05 11:23:26 -04:00
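A hedged sketch of the new constructor parameter; the allow-list tuple syntax follows CassIO and is assumed here, placeholder embeddings are used, and a Cassandra connection is assumed to be initialised already (e.g. via cassio.init):

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import Cassandra

vstore = Cassandra(
    embedding=FakeEmbeddings(size=16),  # placeholder embeddings for illustration
    table_name="docs",
    metadata_indexing=("allowlist", ["source", "author"]),  # or "all" / "none" / a deny-list
)
```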
Emilien Chauvet
c3d4126eb1 community[minor]: add user agent for web scraping loaders (#22480)
**Description:** This PR adds a `USER_AGENT` env variable that is to be
used for web scraping. It creates a util to get that user agent and uses
it in the classes used for scraping in [this piece of
doc](https://python.langchain.com/v0.1/docs/use_cases/web_scraping/).
Identifying your scraper is considered good politeness practice; this
PR aims to make that easy.
**Issue:** `None`
**Dependencies:** `None`
**Twitter handle:** `None`
2024-06-05 15:20:34 +00:00
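A minimal sketch of the intended usage, with an illustrative user-agent string:

```python
import os

# Identify the scraper before using the web loaders.
os.environ["USER_AGENT"] = "my-langchain-scraper/0.1 (+https://example.com/contact)"

from langchain_community.document_loaders import WebBaseLoader

docs = WebBaseLoader("https://python.langchain.com/").load()
```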
Philippe PRADOS
8250c177de community[minor]: Add native async support to SQLChatMessageHistory (#22065)
# package community: Fix SQLChatMessageHistory

## Description
Here is a rewrite of `SQLChatMessageHistory` to properly implement the
asynchronous approach. The code circumvents [issue
22021](https://github.com/langchain-ai/langchain/issues/22021) by
accepting a synchronous call to `def add_messages()` in an asynchronous
scenario. This bypasses the bug.

For the same reasons as in [PR
32](https://github.com/langchain-ai/langchain-postgres/pull/32) of
`langchain-postgres`, we use a lazy strategy for table creation. Indeed,
the promise of the constructor cannot be fulfilled without this. It is
not possible to invoke a synchronous call in a constructor. We
compensate for this by waiting for the next asynchronous method call to
create the table.

The goal of the `PostgresChatMessageHistory` class (in
`langchain-postgres`) is, among other things, to be able to recycle
database connections. The implementation of the class is problematic, as
we have demonstrated in [issue
22021](https://github.com/langchain-ai/langchain/issues/22021).

Our new implementation of `SQLChatMessageHistory` achieves this by using
a singleton of type (`Async`)`Engine` for the database connection. The
connection pool is managed by this singleton, and the code is then
reentrant.

We also accept the type `str` (optionally complemented by `async_mode`.
I know you don't like this much, but it's the only way to allow an
asynchronous connection string).

In order to unify the different classes handling database connections,
we have renamed `connection_string` to `connection`, and `Session` to
`session_maker`.

Now, a single transaction is used to add a list of messages. Thus, a
crash during this write operation will not leave the database in an
unstable state with a partially added message list. This makes the code
resilient.

We believe that the `PostgresChatMessageHistory` class is no longer
necessary and can be replaced by:
```
PostgresChatMessageHistory = SQLChatMessageHistory
```
This also fixes the bug.


## Issue
- [issue 22021](https://github.com/langchain-ai/langchain/issues/22021)
  - Bug in _exit_history()
  - Bugs in PostgresChatMessageHistory and sync usage
  - Bugs in PostgresChatMessageHistory and async usage
- [issue
36](https://github.com/langchain-ai/langchain-postgres/issues/36)
 ## Twitter handle:
pprados

## Tests
- libs/community/tests/unit_tests/chat_message_histories/test_sql.py
(add async test)

@baskaryan, @eyurtsev or @hwchase17 can you check this PR ?
And, I've been waiting a long time for validation from other PRs. Can
you take a look?
- [PR 32](https://github.com/langchain-ai/langchain-postgres/pull/32)
- [PR 15575](https://github.com/langchain-ai/langchain/pull/15575)
- [PR 13200](https://github.com/langchain-ai/langchain/pull/13200)

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-05 15:10:38 +00:00
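A hedged sketch of the async usage this enables, using the renamed `connection` and the `async_mode` parameter from the description above (an async driver such as aiosqlite is assumed to be installed):

```python
import asyncio

from langchain_community.chat_message_histories import SQLChatMessageHistory
from langchain_core.messages import HumanMessage

history = SQLChatMessageHistory(
    session_id="session-1",
    connection="sqlite+aiosqlite:///chat.db",  # `connection` replaces `connection_string`
    async_mode=True,
)

async def main() -> None:
    await history.aadd_messages([HumanMessage(content="hello")])
    print(await history.aget_messages())

asyncio.run(main())
```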
Vincent Min
59bef31997 community[minor]: Improve InMemoryVectorStore with ability to persist to disk and filter on metadata. (#22186)
- **Description:** The InMemoryVectorStore is a nice and simple vector
store implementation for quick development and debugging. The current
implementation is quite limited in its functionality. This PR extends it
by adding utility functions to persist the vector store to a JSON file
and to load it from a JSON file. We chose the JSON file format because it
allows inspection of the database contents in a text editor, which is
great for debugging. Furthermore, it adds a `filter` keyword that can be
used to filter documents on their `page_content` or `metadata`.
- **Issue:** -
- **Dependencies:** -
- **Twitter handle:** @Vincent_Min
2024-06-05 10:40:34 -04:00
Christophe Bornet
c34ad8c163 core[patch]: Improve VectorStore API doc (#22547) 2024-06-05 10:23:44 -04:00
maang-h
89128b7a49 community[patch]: add detailed paragraph and example for BaichuanTextEmbeddings (#22031)
- **Description:** add detailed paragraph and example for
BaichuanTextEmbeddings
   - **Issue:** the issue #21983
2024-06-05 10:18:11 -04:00
Anthony Bernabeu
4e676a63b8 community[minor]: Added filter search for LanceDB (#22461)
- [ ] **community**: "vectorstore: added filtering support for LanceDB
vector store"

- [ ] **This PR adds filtering capabilities to LanceDB**:
- **Description:** In LanceDB filtering can be applied when searching
for data into the vectorstore. It is using the SQL language as mentioned
in the LanceDB documentation.
    - **Issue:** #18235 
    - **Dependencies:** No

2024-06-05 09:33:54 -04:00
Erick Friis
4050d6ea2b huggingface: remove text-generation dep (#22543) 2024-06-05 12:13:40 +00:00
Erick Friis
a6fc74f379 ai21: fix core version (#22544) 2024-06-05 08:09:19 -04:00
Asaf Joseph Gardin
75cba742e5 ai21: fix ai21 unittests (#22526)
Co-authored-by: Asaf Gardin <asafg@ai21.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-05 08:00:42 -04:00
Erick Friis
58192d617f community: fix huggingface deprecations (#22522) 2024-06-05 04:13:13 +00:00
Jacob Lee
1e748a6d40 docs[patch]: Adds links to deprecations page (#22514)
@baskaryan
2024-06-04 16:19:32 -07:00
William FH
91fed3ace7 [Docs] Structured output Keywords (#22511) 2024-06-04 20:56:05 +00:00
Christophe Bornet
8ba868d3b0 core[patch]: Add similarity_score_threshold to VectorStore search types (#22477) 2024-06-04 13:43:55 -07:00
Eugene Yurtsev
9120cf5df2 core[patch]: Deduplicate of callback handlers in merge_configs (#22478)
This PR adds deduplication of callback handlers in merge_configs.

Fix for this issue:
https://github.com/langchain-ai/langchain/issues/22227

The issue appears when the code is:

1) running python >=3.11
2) invokes a runnable from within a runnable
3) binds the callbacks to the child runnable from the parent runnable
using with_config

In this case, the same callbacks end up appearing twice: (1) the first
time from with_config, (2) the second time with langchain automatically
propagating them on behalf of the user.


Prior to this PR this will emit duplicate events:

```python
@tool
async def get_items(question: str, callbacks: Callbacks):  # <--- Accept callbacks
    """Ask question"""
    template = ChatPromptTemplate.from_messages(
        [
            (
                "human",
                "'{question}"
            )
        ]
    )
    chain = template | chat_model.with_config(
        {
            "callbacks": callbacks,  # <-- Propagate callbacks
        }
    )
    return await chain.ainvoke({"question": question})
```

Prior to this PR, this will work correctly (no duplicate events):

```python
@tool
async def get_items(question: str, callbacks: Callbacks):  # <--- Accept callbacks
    """Ask question"""
    template = ChatPromptTemplate.from_messages(
        [
            (
                "human",
                "'{question}"
            )
        ]
    )
    chain = template | chat_model
    return await chain.ainvoke({"question": question}, {"callbacks": callbacks})
```

This will also work (as long as the user is using python >= 3.11) -- as
langchain will automatically propagate callbacks

```python
@tool
async def get_items(question: str,):  
    """Ask question"""
    template = ChatPromptTemplate.from_messages(
        [
            (
                "human",
                "'{question}"
            )
        ]
    )
    chain = template | chat_model
    return await chain.ainvoke({"question": question})
```
2024-06-04 16:19:00 -04:00
Jacob Lee
64dbc52cae docs[patch]: Update quickstart tutorial (#22504)
Mentions LCEL more, hopefully flags it to more people as a simple
entrypoint

@baskaryan @hwchase17
2024-06-04 13:04:56 -07:00
Ofer Mendelevitch
ad502e8d50 community[minor]: Vectara Integration Update - Streaming, FCS, Chat, updates to documentation and example notebooks (#21334)
**Description:** Update to the Vectara / LangChain integration to
add new Vectara capabilities:
- Full RAG implemented as a Runnable with as_rag()
- Vectara chat supported with as_chat()
- Both support streaming response
- Updated documentation and example notebook to reflect all the changes
- Updated Vectara templates

**Twitter handle:** ofermend

**Add tests and docs**: no new tests or docs, but updated both existing
tests and existing docs
2024-06-04 12:57:28 -07:00
Bagatur
cb183a9bf1 docs: update anthropic chat model (#22483)
Related to #22296

And update anthropic to accept base_url
2024-06-04 12:42:06 -07:00
Erick Friis
d700ce8545 robocorp: typo (#22509) 2024-06-04 15:33:38 -04:00
Erick Friis
39fd44579a robocorp: release 0.0.9.post1 (#22507) 2024-06-04 15:32:30 -04:00
Erick Friis
339e3b7f55 ai21: release 0.1.6 (#22508) 2024-06-04 15:31:23 -04:00
ccurme
3c53cea760 together, upstage: bump minimum langchain-openai version (#22505) 2024-06-04 15:20:41 -04:00
Erick Friis
c438b5b78e docs: fix api ref link generation (#22438)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-04 12:09:22 -07:00
Bagatur
efcb04f84b mongodb[patch]: Release 0.1.6 (#22501) 2024-06-04 12:01:37 -07:00
Bagatur
222b1ba112 groq[patch]: Release 0.1.5 (#22500) 2024-06-04 12:01:17 -07:00
Bagatur
f021be510e milvus[patch]: Release 0.1.1 (#22499) 2024-06-04 12:00:53 -07:00
Bagatur
64d68c17cd upstage[patch]: Release 0.1.6 (#22498) 2024-06-04 11:58:44 -07:00
Bagatur
48fba40fce experimental[patch]: Release 0.0.60 (#22497) 2024-06-04 11:56:42 -07:00
Bagatur
e60f88ccdd community[patch]: Release 0.2.2 (#22496) 2024-06-04 11:42:11 -07:00
Bagatur
85aa218564 langchain[patch]: Release 0.2.2 (#22495) 2024-06-04 11:33:45 -07:00
Bagatur
8e86080def mistralai[patch]: Release 0.1.8 (#22494) 2024-06-04 11:33:06 -07:00
Bagatur
e850de2422 huggingface[patch]: release 0.0.2 (#22493) 2024-06-04 11:32:36 -07:00
Jacob Lee
593de8a913 docs[patch]: Add robots.txt and root sitemap (#22492)
CC @efriis @baskaryan
2024-06-04 11:26:40 -07:00
Bagatur
99a3cad258 text-splitters[patch]: Release 0.2.1 (#22490) 2024-06-04 11:19:21 -07:00
Bagatur
161b02a8be core[patch]: Release 0.2.4 (#22489) 2024-06-04 11:14:54 -07:00
Ragul Kachiappan
50258a7dda docs: Update chroma docs link for collection reference (#22472)
- **Description:** Updated dead link referencing chroma docs in Chroma
notebook under vectorstores
2024-06-04 18:01:13 +00:00
nareshnagpal06
9b45374118 docs: Added Semantic Cache Example with BedrockChat using Bedrock Embedding… (#22190)
…s and Opensearch Semantic Cache


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-04 17:40:29 +00:00
Joydeep Banik Roy
3796672c67 community, milvus, pinecone, qdrant, mongo: Broadcast operation failure while using simsimd beyond v3.7.7 (#22271)
- [ ] **Packages affected**: 
  - community: fix `cosine_similarity` to support simsimd beyond 3.7.7
- partners/milvus: fix `cosine_similarity` to support simsimd beyond
3.7.7
- partners/mongodb: fix `cosine_similarity` to support simsimd beyond
3.7.7
- partners/pinecone: fix `cosine_similarity` to support simsimd beyond
3.7.7
- partners/qdrant: fix `cosine_similarity` to support simsimd beyond
3.7.7


- [ ] **Broadcast operation failure while using simsimd beyond v3.7.7**:
- **Description:** I was using simsimd 4.3.1 and the unsupported operand
type issue popped up. When I checked out the repo and ran the tests,
they failed as well (I have attached a screenshot of that). It looks like
a variant of https://github.com/langchain-ai/langchain/issues/18022.
Prior to 3.7.7, simd.cdist returned an ndarray, but now it returns
simsimd.DistancesTensor, which is ineligible for a broadcast operation
with numpy. This change also removes the need to explicitly cast
`Z` to a numpy array.
    - **Issue:** #19905
    - **Dependencies:** No
    - **Twitter handle:** https://x.com/GetzJoydeep

<img width="1622" alt="Screenshot 2024-05-29 at 2 50 00 PM"
src="https://github.com/langchain-ai/langchain/assets/31132555/fb27b383-a9ae-4a6f-b355-6d503b72db56">

- **Considerations**:
1. I started with community, but since similar changes were present in
Milvus, MongoDB, Pinecone, and Qdrant, I modified their files as well.
If touching multiple packages in one PR is not the norm, I can
remove them from this PR and raise separate ones.
2. I have run and verified that the tests work. Since only MongoDB had
tests, I ran those and verified they pass as well. Screenshots attached:
<img width="1573" alt="Screenshot 2024-05-29 at 2 52 13 PM"
src="https://github.com/langchain-ai/langchain/assets/31132555/ce87d1ea-19b6-4900-9384-61fbc1a30de9">
<img width="1614" alt="Screenshot 2024-05-29 at 3 33 51 PM"
src="https://github.com/langchain-ai/langchain/assets/31132555/6ce1d679-db4c-4291-8453-01028ab2dca5">
  

I have added a test for simsimd. I feel it may not go well with the
CI/CD setup, as simsimd is not a required dependency; I have just
imported simsimd to ensure the simsimd cosine similarity path is
invoked. However, it's not a good approach. Suggestions are welcome and I
can make the required changes on the PR. Please provide guidance, as I am
new to the community.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-04 17:36:31 +00:00
KyrianC
03178ee74f community[minor]: Add tools calls to ChatEdenAI (#22320)
### Description  
Add tools implementation to `ChatEdenAI`:
- `bind_tools()`
- `with_structured_output()`

### Documentation 
Updated `docs/docs/integrations/chat/edenai.ipynb`

### Notes
We don't support streaming with tools yet. If stream is called with
tools, we directly yield the whole message from `generate` (implemented
the same way as Anthropic did).
2024-06-04 10:29:28 -07:00
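A hedged sketch of the new tool-calling surface, mirroring the standard LangChain API (provider/model values are illustrative and an EdenAI API key is assumed to be configured):

```python
from langchain_community.chat_models import ChatEdenAI
from langchain_core.pydantic_v1 import BaseModel, Field

class GetWeather(BaseModel):
    """Get the current weather for a city."""
    city: str = Field(..., description="City name")

llm = ChatEdenAI(provider="openai", model="gpt-4o")
llm_with_tools = llm.bind_tools([GetWeather])
structured_llm = llm.with_structured_output(GetWeather)
```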
pranavvuppala
9d4350e69a docs: Update docstrings for OpenAI base.py (#22221)
- **Description:** Updated the docstrings of a few OpenAI functions for a
better understanding of those functions.
    - **Issue:** #21983

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-04 17:24:17 +00:00
Anindyadeep
7a197539aa community[patch]: Native RAG Support in Prem AI langchain (#22238)
This PR adds native RAG support to the LangChain PremAI integration. The
same has been added to the docs too.
2024-06-04 10:19:54 -07:00
Rahul Triptahi
77ad857934 community[minor]: Enable retrieval api calls in PebbloRetrievalQA (#21958)
Description: Enable app discovery and Prompt/Response apis in
PebbloSafeRetrieval
Documentation: NA
Unit test: N/A

---------

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-06-04 10:18:50 -07:00
liugz18
8fd231086e experimental[patch]: Fix graph_transformers llms #21482 (#22417)
Fix AttributeError when calling
LLMGraphTransformer.convert_to_graph_documents (#21482), since
raw_schema is always a str.

@baskaryan
2024-06-04 17:07:38 +00:00
ccurme
6db25b4e31 core[patch]: bump langsmith (#22476)
Noticing errors logged in some situations when tracing with LangSmith:
```python
from langchain_core.pydantic_v1 import BaseModel
from langchain_anthropic import ChatAnthropic


class AnswerWithJustification(BaseModel):
    """An answer to the user question along with justification for the answer."""
    answer: str
    justification: str


llm = ChatAnthropic(model="claude-3-haiku-20240307")
structured_llm = llm.with_structured_output(AnswerWithJustification)

list(structured_llm.stream("What weighs more a pound of bricks or a pound of feathers"))
```
```
Error in LangChainTracer.on_chain_end callback: AttributeError("'NoneType' object has no attribute 'append'")
[AnswerWithJustification(answer='A pound of bricks and a pound of feathers weigh the same amount.', justification='This is because a pound is a unit of mass, not volume. By definition, a pound of any material, whether bricks or feathers, will weigh the same - one pound. The physical size or volume of the materials does not matter when measuring by mass. So a pound of bricks and a pound of feathers both weigh exactly one pound.')]
```
2024-06-04 10:05:53 -07:00
Bagatur
17c127531a community[patch]: deprecate all HF classes (#22444) 2024-06-04 09:48:25 -07:00
Nuno Campos
58b118544e Use immutable sequence type for batch/batch_as_completed types (#22433)
2024-06-04 08:04:09 -07:00
Christophe Bornet
9a8fe58ebe community[minor]: Improve Cassandra VectorStore as_retriever (#22465)
The VectorStore API's `as_retriever` doesn't explicitly expose the
parameters `search_type` and `search_kwargs`, so these are not well
documented.
This PR improves `as_retriever` for the Cassandra VectorStore by making
these parameters explicit.

NB: An alternative would have been to modify `as_retriever` in
`VectorStore`. But there's probably a good reason these were not exposed
in the first place? Is it because implementations may decide not to
support them and have fixed values when creating the
VectorStoreRetriever?
2024-06-04 09:51:17 -04:00
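A minimal sketch of the now-explicit parameters, assuming `vstore` is an existing Cassandra vector store:

```python
retriever = vstore.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 4, "fetch_k": 20},
)
docs = retriever.invoke("What does metadata indexing do?")
```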
Christophe Bornet
23bba18f92 core[patch]: Fix VectorStore's as_retriever mutating tags param (#22470)
The current VectorStore `as_retriever` implementation mutates the `tags`
param when it's passed in kwargs.
This fix ensures that a copy is made instead.
2024-06-04 09:50:36 -04:00
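A standalone illustration of the mutation bug and the copy-based fix (simplified; not the library's actual code):

```python
def as_retriever_buggy(default_tags, **kwargs):
    tags = kwargs.pop("tags", None) or []
    tags.extend(default_tags)  # mutates the list the caller passed in
    return tags

def as_retriever_fixed(default_tags, **kwargs):
    tags = list(kwargs.pop("tags", None) or [])  # copy before extending
    tags.extend(default_tags)
    return tags

caller_tags = ["my-tag"]
as_retriever_buggy(["Chroma"], tags=caller_tags)
print(caller_tags)  # ['my-tag', 'Chroma'] -- leaked back to the caller

caller_tags = ["my-tag"]
as_retriever_fixed(["Chroma"], tags=caller_tags)
print(caller_tags)  # ['my-tag'] -- unchanged
```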
Michal Gregor
98b2e7b195 huggingface[patch]: Support for HuggingFacePipeline in ChatHuggingFace. (#22194)
- **Description:** Added support for using HuggingFacePipeline in
ChatHuggingFace (previously it was only usable with API endpoints,
probably by oversight).
- **Issue:** #19997 
- **Dependencies:** none
- **Twitter handle:** none

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-04 00:47:35 +00:00
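A hedged sketch of the newly supported combination (model id and generation kwargs are illustrative):

```python
from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="microsoft/Phi-3-mini-4k-instruct",  # any local chat-capable model
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 128},
)
chat_model = ChatHuggingFace(llm=llm)
print(chat_model.invoke("What is LangChain?").content)
```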
Fahreddin Özcan
0061ded002 community[patch]: Upstash Vector Store Namespace Support (#22251)
This PR introduces namespace support for Upstash Vector Store, which
would allow users to partition their data in the vector index.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-03 17:30:56 -07:00
Isaac Francisco
25cf1a74d5 docs: rag tutorial small fixes (#22450) 2024-06-04 00:16:54 +00:00
Jacob Lee
b0f014666d docs[patch]: Adds search keywords for common queries (#22449)
CC @baskaryan @efriis @ccurme
2024-06-03 16:30:17 -07:00
Guangdong Liu
bc7e32f315 core(patch):fix partial_variables not working with SystemMessagePromptTemplate (#20711)
- **Issue:**  close #17560
- @baskaryan, @eyurtsev
2024-06-03 16:22:42 -07:00
Martin Kolb
f2dd31b9e8 docs: Fix doc issue for HANA Cloud Vector Engine (#22260)
- **Description:**
This PR fixes a rendering issue in the docs (Python notebook) of HANA
Cloud Vector Engine.

  - **Issue:** N/A
  - **Dependencies:** no new dependencies added

File of the fixed notebook:
`docs/docs/integrations/vectorstores/hanavector.ipynb`
2024-06-03 15:53:43 -07:00
Dristy Srivastava
ef3df45d9d community[minor]: Updating payload for pebblo discover API (#22309)
**Description:** Updating response for pebblo discover API. Also
updating field name casing.
**Documentation:** N/A
**Unit tests:** N/A
2024-06-03 15:36:17 -07:00
Miroslav
cbd5720011 huggingface[patch]: Skip Login to HuggingFaceHub when token is not set (#22365) 2024-06-03 15:20:32 -07:00
Stefano Lottini
f78ae1d932 docs: Astra DB vectorstore, add automatic-embedding example (#22350)
Description: Adding an example showcasing the newly-introduced API-side
embedding computation option for the Astra DB vector store
2024-06-03 15:13:57 -07:00
bhardwaj-vipul
f397a84a59 langchain[patch]: Fix MongoDBAtlasVectorSearch reference in self query retriever (#22401)
**Description:** 
SelfQuery Retriever with MongoDBAtlasVectorSearch (from
langchain_mongodb import MongoDBAtlasVectorSearch) and
Chroma (from langchain_chroma import Chroma) is not supported.
The imports in the [builtin
translators](8cbce684d4/libs/langchain/langchain/retrievers/self_query/base.py (L73))
points to the
[deprecated](acaf214a45/libs/community/langchain_community/vectorstores/mongodb_atlas.py (L36))
vectorstore.

**Issue:** 
#22272

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-03 22:10:15 +00:00
ccurme
afe89a1411 community: add standard chat model params to Ollama (#22446) 2024-06-03 17:45:03 -04:00
Isaac Francisco
5119ab2fb9 docs: agents tutorial wording (#22447) 2024-06-03 14:40:01 -07:00
Ethan Yang
52da6a160d community[patch]: Update OpenVINO embedding and reranker to support static input shape (#22171)
This helps deploy embedding models on NPU devices.
2024-06-03 13:27:17 -07:00
Tom Clelford
c599732e1a text-splitters[patch]: fix HTMLSectionSplitter parsing of xslt paths (#22176)
## Description
This PR allows passing paths to XSLT files to the HTMLSectionSplitter. It
does so by fixing two trivial bugs in how passed paths were handled. It
also changes the default value of the `xslt_path` param to `None` so the
special case where the file is part of the langchain package can be
handled.

## Issue
#22175
2024-06-03 20:26:59 +00:00
maang-h
01352bb55f community[minor]: Implement MiniMaxChat interface (#22391)
- **Description:** Implement MiniMaxChat interface, include:
    - No longer inherits the LLM class (like other chat model)
    - Update request parameters (v1 -> v2)
        - update `base url`
        - update message role (system, user, assistant)
        - add `stream` function
        - no longer use `group id`
    - Implement the `_stream`, `_agenerate`, and `_astream` interfaces

[minimax v2 api
document](https://platform.minimaxi.com/document/guides/chat-model/V2?id=65e0736ab2845de20908e2dd)
2024-06-03 13:22:38 -07:00
Brandon Sharp
56e5aa4dd9 community[patch]: Airtable to allow for addtl params (#22092)
- [X] **PR title**: "community: added optional params to Airtable
table.all()"


- [X] **PR message**: 
- **Description:** Adds **kwargs to AirtableLoader to allow for kwargs:
https://pyairtable.readthedocs.io/en/latest/api.html#pyairtable.Table.all
    - **Issue:** N/A
    - **Dependencies:** N/A
    - **Twitter handle:** parakoopa88



---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-03 13:05:56 -07:00
Harichandan Roy
1f751343e2 community[patch]: update embeddings/oracleai.py (#22240)
community/embeddings: update oracleai.py

Adding oracle VECTOR_ARRAY_T support.

Tests are not impacted.
2024-06-03 12:38:51 -07:00
maang-h
13140dc4ff community[patch]: Update the default api_url and request_body of sparkllm embedding (#22136)
- **Description:** When I was running SparkLLMTextEmbeddings, the
app_id, api_key and api_secret were all correct, but it could not run
normally using the current URL.

    ```python
    # example
    from langchain_community.embeddings import SparkLLMTextEmbeddings

    embedding = SparkLLMTextEmbeddings(
        spark_app_id="my-app-id",
        spark_api_key="my-api-key",
        spark_api_secret="my-api-secret"
    )
    text1 = "hello"
    print(embedding.embed_query(text1))
    ```

![sparkembedding](https://github.com/langchain-ai/langchain/assets/55082429/11daa853-4f67-45b2-aae2-c95caa14e38c)
   
So I updated the URL and request body parameters according to
[Embedding_api](https://www.xfyun.cn/doc/spark/Embedding_api.html), and
now it is runnable.
2024-06-03 12:38:11 -07:00
Yuwen Hu
ba0dca46d7 community[minor]: Add IPEX-LLM BGE embedding support on both Intel CPU and GPU (#22226)
**Description:** [IPEX-LLM](https://github.com/intel-analytics/ipex-llm)
is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local
PC with iGPU, discrete GPU such as Arc, Flex and Max) with very low
latency. This PR adds ipex-llm integrations to langchain for BGE
embedding support on both Intel CPU and GPU.
**Dependencies:** `ipex-llm`, `sentence-transformers`
**Contribution maintainer**: @Oscilloscope98 
**tests and docs**: 
- langchain/docs/docs/integrations/text_embedding/ipex_llm.ipynb
- langchain/docs/docs/integrations/text_embedding/ipex_llm_gpu.ipynb
-
langchain/libs/community/tests/integration_tests/embeddings/test_ipex_llm.py

---------

Co-authored-by: Shengsheng Huang <shannie.huang@gmail.com>
2024-06-03 12:37:10 -07:00
Jacob Lee
c01467b1f4 core[patch]: RFC: Allow concatenation of messages with multi part content (#22002)
Anthropic's streaming treats tool calls as different content parts
(streamed back with a different index) from normal content in the
`content` field.

This means that we need to update our chunk-merging logic to handle
chunks with multi-part content. The alternative is coercing Anthropic's
responses into a string, but we generally like to preserve model
provider responses faithfully when we can. This will also likely be
useful for multimodal outputs in the future.

This current PR does unfortunately make `index` a magic field within
content parts, but Anthropic and OpenAI both use it at the moment to
determine order anyway. To avoid cases where we have content arrays with
holes and to simplify the logic, I've also restricted merging to chunks
in order.

TODO: tests

CC @baskaryan @ccurme @efriis
2024-06-03 09:46:40 -07:00
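A hedged sketch of the merging behaviour described above: chunks whose content is a list of parts are merged by their `index` key rather than coerced to strings:

```python
from langchain_core.messages import AIMessageChunk

left = AIMessageChunk(content=[{"type": "text", "text": "Hello", "index": 0}])
right = AIMessageChunk(content=[{"type": "text", "text": " world", "index": 0}])

merged = left + right
print(merged.content)  # expected: a single part at index 0 with text "Hello world"
```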
Dan
86509161b0 community: fix AzureSearch delete documents (#22315)
**Description**

Fix the AzureSearch delete documents method by using the FIELDS_ID
variable instead of the hard-coded "id" value.

**Issue:** 

This is linked to this issue:
https://github.com/langchain-ai/langchain/issues/22314

Co-authored-by: dseban <dan.seban@neoxia.com>
2024-06-03 15:55:06 +00:00
Harrison Chase
8fad2e209a fix error message (#22437)
The error message was confusing when a language is in the Enum but not implemented.
2024-06-03 15:48:26 +00:00
Bagatur
678a19a5f7 infra: bump anthropic mypy 1 (#22373) 2024-06-03 08:21:55 -07:00
Nuno Campos
ceb73ad06f core: In BaseRetriever make get_relevant_docs delegate to invoke (#22434)
- This fixes all the tracing issues for people still using
get_relevant_docs, and is a change we need for 0.3 anyway

2024-06-03 07:34:53 -07:00
Zheng Robert Jia
1ad1dc5303 docs: resolve minor syntax error. (#22375)
Used the correct magic command. 
Changed from `% pip...` to `%pip`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-03 14:34:24 +00:00
Charles John
2d81a72884 community: fix missing apify_api_token field in ApifyWrapper (#22421)
- **Description:** The `ApifyWrapper` class expects `apify_api_token` to
be passed as a named parameter or set as an environment variable. But
the corresponding field was missing in the class definition, causing the
argument to be ignored when passed as a named param. This patch fixes
that.
2024-06-03 14:32:57 +00:00
Klaudia Lemiec
dac355fc62 docs: notebook loader: change .html to .ipynb (#22407)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-03 14:26:28 +00:00
Joan Fontanals
a7ae16f912 add embed_image API to JinaEmbedding (#22416)
- **Description:** Add `embed_image` to JinaEmbedding to embed images
 - **Twitter handle:** https://x.com/JinaAI_
2024-06-03 10:23:37 -04:00
Qingchuan Hao
3e92ed8056 docs: add Microsoft Azure to ChatModelTabs (#22367)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-03 10:19:00 -04:00
Nuno Campos
ed8e9c437a core: In RunnableSequence pass kwargs to the first step (#22393)
- This is a pattern that shows up occasionally in langgraph questions:
people chain a graph to something else afterwards and want to pass the
graph some kwargs (e.g. stream_mode).
2024-06-03 14:18:10 +00:00
Jeffrey Morgan
eabcfaa3d6 Update Ollama instructions (#22394) 2024-06-03 10:17:35 -04:00
Harrison Chase
acaf214a45 update agent docs (#22370)
to use create_react_agent

---------

Co-authored-by: William Fu-Hinthorn <13333726+hinthornw@users.noreply.github.com>
2024-06-01 08:28:32 -07:00
Jacob Lee
16cce76a68 👥 Update LangChain people data (#22388)
👥 Update LangChain people data

Co-authored-by: github-actions <github-actions@github.com>
2024-06-01 07:36:45 -07:00
Jacob Lee
8a57102918 docs[patch]: Fix typo (#22377) 2024-05-31 16:37:05 -07:00
Bagatur
4d82cea71f docs: fix llm caches redirect (#22371) 2024-05-31 19:37:06 +00:00
Bagatur
a8098f5ddb anthropic[patch]: Release 0.1.15, fix sdk tools break (#22369) 2024-05-31 12:10:22 -07:00
Erick Friis
6ffa0acf32 ai21: fix text-splitters version (#22366) 2024-05-31 11:41:05 -04:00
Erick Friis
1bad0ac946 docs: redirect integration links to 0.2 (#22326) 2024-05-31 11:40:48 -04:00
ccurme
8cbce684d4 docs: update retriever how-to content (#22362)
- [x] How to: use a vector store to retrieve data
- [ ] How to: generate multiple queries to retrieve data for
- [x] How to: use contextual compression to compress the data retrieved
- [x] How to: write a custom retriever class
- [x] How to: add similarity scores to retriever results
^ done last month
- [x] How to: combine the results from multiple retrievers
- [x] How to: reorder retrieved results to mitigate the "lost in the
middle" effect
- [x] How to: generate multiple embeddings per document
^ this PR
- [ ] How to: retrieve the whole document for a chunk
- [ ] How to: generate metadata filters
- [ ] How to: create a time-weighted retriever
- [ ] How to: use hybrid vector and keyword retrieval
^ todo
2024-05-31 10:57:35 -04:00
Jacob Lee
75ed9ee929 docs: Fix Solar and OCI integration page typos (#22343)
@efriis @baskaryan
2024-05-31 10:36:12 -04:00
Bagatur
0214246dc6 docs: list tool calling models (#22334) 2024-05-30 14:32:33 -07:00
Bagatur
410e9add44 infra: run scheduled tests on aws, google, cohere, nvidia (#22328)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-30 13:57:12 -07:00
Harrison Chase
0c9a034ed7 add simpler agent tutorial (#22249)
1/ added section at start with full code
2/ removed retriever tool (was just distracting)
3/ added section on starting a new conversation

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-30 12:33:32 -07:00
Bagatur
2b9f1469d8 core[patch]: Release 0.2.3 (#22329) 2024-05-30 11:35:09 -07:00
Harrison Chase
ee32369265 core[patch]: fix runnable history and add docs (#22283) 2024-05-30 11:26:41 -07:00
William FH
dcec133b85 [Core] Update Tracing Interops (#22318)
LangSmith and LangChain context var handling evolved in parallel since
originally we didn't expect people to want to interweave the decorator
and langchain code.

Once we get a new langsmith release, this PR will let you seamlessly
hand off between @traceable context and runnable config context so you
can arbitrarily nest code.

It's expected that this fails right now until we get another release of
the SDK
2024-05-30 10:34:49 -07:00
ccurme
f34337447f openai: update ChatOpenAI api ref (#22324)
Update to reflect that token usage is no longer default in streaming
mode.

Add detail for streaming context under Token Usage section.
2024-05-30 12:31:28 -04:00
ChengZi
2443e85533 docs: fix milvus import and update template (#22306)
docs: fix milvus import problem
update milvus-rag template with milvus-lite

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
2024-05-30 08:28:55 -07:00
WU LIFU
86698b02a9 doc: fix wrong documentation on FAISS load_local function (#22310)
### Issue: #22299 

### Description
The documentation appears to be wrong. When the user actually sets the
"asynchronous" parameter to True, it fails because the __init__
function of the FAISS class doesn't accept this parameter. In fact, most
of the class/instance functions of this class have both sync and async
versions, so it looks like we just need to remove this parameter
from the doc.

Co-authored-by: Lifu Wu <lifu@nextbillion.ai>
2024-05-30 15:15:04 +00:00
maang-h
596c062cba community[patch]: Standardize qianfan model init args name (#22322)
- **Description:**  
    - Standardize qianfan chat model initialization argument names
        - qianfan_ak (qianfan api key)  -> api_key
        - qianfan_sk (qianfan secret key)  ->  secret_key

    - Delete unused variable
- **Issue:** #20085
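
A minimal sketch of the standardized names (the credential values below are placeholders):

```python
from langchain_community.chat_models import QianfanChatEndpoint

# Standardized argument names; previously these were qianfan_ak / qianfan_sk.
chat = QianfanChatEndpoint(api_key="my-api-key", secret_key="my-secret-key")
chat.invoke("hello")
```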
2024-05-30 11:08:32 -04:00
KhoPhi
c64b0a3095 Docs: Ollama (LLM, Chat Model & Text Embedding) (#22321)
- [x] Docs Update: Ollama
  - llm/ollama 
- Switched to using llama3 as model with reference to templating and
prompting
      - Added concurrency notes to llm/ollama docs
  - chat_models/ollama
      - Added concurrency notes to chat_models/ollama docs
  - text_embedding/ollama
     - include example for specific embedding models from Ollama
2024-05-30 11:06:45 -04:00
Dobiichi-Origami
10b12e1c08 community: adding tool_call_id for every ToolCall (#22323)
- **Description:** This PR contains a bugfix for a malfunction in
multi-turn conversation in QianfanChatEndpoint, plus adaptation for
ToolCall and ToolMessage
2024-05-30 10:59:08 -04:00
Bagatur
569d325a59 docs: link GH org (#22308) 2024-05-30 00:17:59 -07:00
Bagatur
93049d1563 docs: make llm cache its own section (#22301) 2024-05-30 00:17:33 -07:00
Bagatur
04631439c9 docs: add v0.2 links to README (#22300) 2024-05-29 16:22:01 -07:00
ccurme
f39e1a2288 community, docs: update token usage tracking callback + how-to guides (#22145) 2024-05-29 17:00:47 -04:00
Bagatur
2bc50fb895 docs, cli[patch]: chat model template nit (#22294) 2024-05-29 20:53:58 +00:00
Bagatur
aa6c31df53 cli[patch]: Release 0.0.24 (#22293) 2024-05-29 13:37:34 -07:00
Bagatur
627a337887 docs, cli[patch]: chat model doc template (#22290)
Update ChatModel integration doc template, integration docstring, and
adds langchain-cli command to easily create just doc (for updating
existing integrations):

```bash
langchain-cli integration create-doc --name "foo-bar"
```
2024-05-29 13:34:58 -07:00
Wu Enze
f40e341a03 docs : Added integrations for memory with langchain_community (#22265)
PR title: Integration Docs enhancement

Description: Adding installation instructions for integrations requiring
langchain-community package since 0.2
Issue: [#22005](https://github.com/langchain-ai/langchain/issues/22005)
2024-05-29 16:12:05 -04:00
ccurme
6e1df72a88 openai[patch]: Release 0.1.8 (#22291) 2024-05-29 20:08:30 +00:00
ccurme
e71b0b5827 core[patch]: Release 0.2.2 (#22289) 2024-05-29 19:51:37 +00:00
William FH
9d6cabe84a Update sequence.ipynb (#22288) 2024-05-29 19:34:44 +00:00
Daniel Glogowski
7ff05357ba docs: updating NIM documentation (#22258)
Updating NVIDIA NIM notebooks and readme file.

Thanks!
Daniel
2024-05-29 10:28:39 -07:00
Bagatur
6dd0f095c3 docs: revamp ChatOpenAI (#22253)
Can build API ref docs by running
```bash
make api_docs_clean; make api_docs_quick_preview API_PKG=openai
```
only builds openai ref, takes ~20 sec
2024-05-29 10:20:14 -07:00
Erick Friis
00c70d98c2 robocorp: release 0.0.9 (#22282) 2024-05-29 16:49:18 +00:00
Mikko Korpela
fc5909ad6f langchain-robocorp: Fix parsing of Union types (such as Optional). (#22277) 2024-05-29 09:47:02 -07:00
ccurme
af1f723ada openai: don't override stream_options default (#22242)
ChatOpenAI supports a kwarg `stream_options` which can take values
`{"include_usage": True}` and `{"include_usage": False}`.

Setting include_usage to True adds a message chunk to the end of the
stream with usage_metadata populated. In this case the final chunk no
longer includes `"finish_reason"` in the `response_metadata`. This is
the current default and is not yet released. Because this could be
disruptive to workflows, here we remove this default. The default will
now be consistent with OpenAI's API (see parameter
[here](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stream_options)).

Examples:
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()

for chunk in llm.stream("hi"):
    print(chunk)
```
```
content='' id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
content='Hello' id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
content='!' id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
content='' response_metadata={'finish_reason': 'stop'} id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
```

```python
for chunk in llm.stream("hi", stream_options={"include_usage": True}):
    print(chunk)
```
```
content='' id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='Hello' id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='!' id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='' response_metadata={'finish_reason': 'stop'} id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='' id='run-39ab349b-f954-464d-af6e-72a0927daa27' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}
```

```python
llm = ChatOpenAI().bind(stream_options={"include_usage": True})

for chunk in llm.stream("hi"):
    print(chunk)
```
```
content='' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='Hello' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='!' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='' response_metadata={'finish_reason': 'stop'} id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}
```
2024-05-29 10:30:40 -04:00
Karim Lalani
a1899439fc [experimental][llms][ollama_functions] Update OllamaFunctions to send tool_calls attribute (#21625)
Update OllamaFunctions to return `tool_calls` for AIMessages when used
for tool calling.
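
A rough sketch of the new behavior, assuming a locally running Ollama server; the model name, tool schema, and the `bind_tools` call are illustrative rather than taken from this PR:

```python
from langchain_experimental.llms.ollama_functions import OllamaFunctions

llm = OllamaFunctions(model="llama3", format="json")
llm_with_tools = llm.bind_tools(
    [
        {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ]
)
msg = llm_with_tools.invoke("What is the weather in Paris?")
# With this change, tool invocations also surface on the standard attribute:
print(msg.tool_calls)
```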
2024-05-29 09:38:33 -04:00
Bagatur
d61bdeba25 core[patch]: allow access RunnableWithFallbacks.runnable attrs (#22139)
RFC, candidate fix for #13095 #22134
2024-05-28 13:18:09 -07:00
SteveLiao
7496fe2b16 Update parent_document_retriever.py about **kwargs (#22219)
Add **kwargs to the add_documents function

**langchain**: Add **kwargs in parent_document_retriever
 - **Add kwargs for `add_documents` in `parent_document_retriever.py`**


If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
2024-05-28 11:35:38 -07:00
Mark Cusack
8dfa3c5f1a Update/fix docs to list Yellowbrick as a supported indexed vectorstore (#22235)
Update/fix docs to list Yellowbrick as a supported indexed vectorstore
and fix the Jupyter notebook.
2024-05-28 11:34:49 -07:00
Erick Friis
93240fac68 milvus: fix core dep (#22239) 2024-05-28 10:21:37 -07:00
Erick Friis
611faa22c7 infra: allow first releases 2 (#22237) 2024-05-28 09:53:21 -07:00
Erick Friis
26c6e4a5ef infra: allow first releases (#22236) 2024-05-28 09:39:40 -07:00
ChengZi
404d92ded0 milvus: New langchain_milvus package and new milvus features (#21077)
New features:

- New langchain_milvus package in partner
- Milvus collection hybrid search retriever
- Zilliz cloud pipeline retriever
- Milvus Local guide
- Rag-milvus template

---------

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
Signed-off-by: Jael Gu <mengjia.gu@zilliz.com>
Co-authored-by: Jael Gu <mengjia.gu@zilliz.com>
Co-authored-by: Jackson <jacksonxie612@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-05-28 08:24:20 -07:00
Leonid Ganeline
d7f70535ba docs: arxiv page, added cookbooks (#22215)
Issue: The `arXiv` page is missing the arxiv paper references from the
`langchain/cookbook`.
PR: Added the cookbook references.
Result: `Found 29 arXiv references in the 3 docs, 21 API Refs, 5
Templates, and 18 Cookbooks.` - many more references are visible now.
2024-05-27 15:47:02 -07:00
Leonid Ganeline
d6995e814b ai21[patch]: added license (#22153)
The `pyproject.toml` missed the `license` parameter. I've added it as
`MIT`
2024-05-27 15:14:14 -07:00
Maddy Adams
8332a36f69 infra: update langchainhub and add integration test (#22154)
**Description:** Update langchainhub integration test dependency and add
an integration test for pulling private prompt
**Dependencies:** langchainhub 0.1.16
2024-05-27 14:58:10 -07:00
Will Higgins
83d10df78d community[patch]: Update firecrawl api key name (#22183)
Change 'FIREWALL' to 'FIRECRAWL' as I believe this may have been in
error. Other docs refer to 'FIRECRAWL_API_KEY'.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-27 21:39:29 +00:00
hmasdev
bbd7015b5d core[patch]: Add TypeError handler into get_graph of Runnable (#19856)
# Description

## Problem

`Runnable.get_graph` fails when `InputType` or `OutputType` property
raises `TypeError`.

-
003c98e5b4/libs/core/langchain_core/runnables/base.py (L250-L274)
-
003c98e5b4/libs/core/langchain_core/runnables/base.py (L394-L396)

This problem prevents getting a graph of `Runnable` objects whose
`InputType` or `OutputType` property raises `TypeError` but whose
`invoke` works well, such as `langchain.output_parsers.RegexParser`,
for which I already pointed out in #19792 that a `TypeError` occurs.

## Solution

- Add a `try`-`except` to handle `TypeError` in the code that gets
`input_node` and `output_node`.
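
A small illustration of the user-facing effect, using the `RegexParser` example from the issue (the regex itself is arbitrary):

```python
from langchain.output_parsers import RegexParser

parser = RegexParser(regex=r"answer: (.*)", output_keys=["answer"])
# Previously this raised TypeError while resolving OutputType; with the fix
# the TypeError is caught and a graph is still returned.
graph = parser.get_graph()
print(len(graph.nodes))
```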

# Issue
- #19801 

# Twitter Handle
- [hmdev3](https://twitter.com/hmdev3)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-27 21:34:34 +00:00
acho98
753353411f docs: Fix Clova embeddings example document (#22181)
- [ ] **PR title**: "Fix list handling in Clova embeddings example
documentation"
  - Description:
Fixes a bug in the Clova Embeddings example documentation where
document_text was incorrectly wrapped in an additional list.
   - Rationale
The embed_documents method expects a list, but the previous example
wrapped document_text in an unnecessary additional list, causing an
error. The updated example correctly passes document_text directly to
the method, ensuring it functions as intended.
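
A short sketch of the corrected usage (environment variable values are placeholders):

```python
import os

from langchain_community.embeddings import ClovaEmbeddings

os.environ["CLOVA_EMB_API_KEY"] = "..."
os.environ["CLOVA_EMB_APIGW_API_KEY"] = "..."
os.environ["CLOVA_EMB_APP_ID"] = "..."

embeddings = ClovaEmbeddings()
document_text = ["This is a test document.", "This is another document."]
# embed_documents takes the list directly; no extra wrapping in another list.
vectors = embeddings.embed_documents(document_text)
query_vector = embeddings.embed_query("This is a query.")
```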
2024-05-27 14:31:34 -07:00
Mohammad Mohtashim
577ed68b59 mistralai[patch]: Added Json Mode for ChatMistralAI (#22213)
- **Description:** Enabled
[ChatMistralAI.with_structured_output](fbfed65fb1/libs/partners/mistralai/langchain_mistralai/chat_models.py (L609))
via JSON mode
 

-  **Issue:** #22081
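
A minimal sketch of the new option, assuming a Mistral API key is configured; the schema and prompt are illustrative:

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_mistralai import ChatMistralAI


class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")


llm = ChatMistralAI(model="mistral-large-latest")
# method="json_mode" requests raw JSON output instead of a tool call.
structured_llm = llm.with_structured_output(Joke, method="json_mode")
structured_llm.invoke(
    "Tell me a joke about cats. Respond in JSON with `setup` and `punchline` keys."
)
```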

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-27 21:16:52 +00:00
Pranith
25c270b5a5 docs : Added integrations for tools with langchain_community (#22188)
PR title: Docs enhancement

Description: Adding installation instructions for integrations requiring
langchain-community package since 0.2
Issue: https://github.com/langchain-ai/langchain/issues/22005
2024-05-27 14:06:40 -07:00
Ibrahim
cfea0e231a Update llm_chain.ipynb text (#22198)
Added the missing verb "is" and a comma to the text in the Prompt
Templates description within the Build a Simple LLM Application tutorial
for more clarity.
2024-05-27 19:57:41 +00:00
Aditya
bf81ecd3b4 docs:updated documentation for llama, falcon and gemma on Vertex AI Model garden (#22201)
- **Description:** updated documentation for llama, falcon and gemma on
Vertex AI Model garden
    - **Issue:** NA
    - **Dependencies:** NA
    - **Twitter handle:** NA

@lkuligin for review

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
2024-05-27 12:56:11 -07:00
Pavlo Paliychuk
342df7cf83 community[minor]: Add Zep Cloud components + docs + examples (#21671)
Thank you for contributing to LangChain!

- [x] **PR title**: community: Add Zep Cloud components + docs +
examples

- [x] **PR message**: 
We have recently released our new zep-cloud SDKs that are compatible
with Zep Cloud (not Zep Open Source). We have also maintained our Cloud
version of langchain components (ChatMessageHistory, VectorStore) as
part of our SDKs. This PR's goal is to port these components to the
langchain community repo and close the gap with the existing Zep Open
Source components already present there (added ZepCloudMemory,
ZepCloudVectorStore, ZepCloudRetriever).
Also added a ZepCloudChatMessageHistory component together with an
expression language example ported from our repo. We have left the
original open source components intact on purpose so as not to
introduce any breaking changes.
    - **Issue:** -
- **Dependencies:** Added optional dependency of our new cloud sdk
`zep-cloud`
    - **Twitter handle:** @paulpaliychuk51


- [x] **Add tests and docs**


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

2024-05-27 12:50:13 -07:00
Jan Soubusta
cccc8fbe2f community[patch]: DuckDB VS - expose similarity, improve performance of from_texts (#20971)
3 fixes of DuckDB vector store:
- unify defaults in constructor and from_texts (users no longer have to
specify `vector_key`).
- include search similarity in the output metadata (fixes #20969)
- significantly improve performance of `from_documents`

Dependencies: added Pandas to speed up `from_documents`.
I considered CSV and JSON options, but I expect trouble loading JSON
values this way, and both options require storing data to disk.
In any case, the poetry file for langchain-community already contains a
dependency on Pandas.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-05-24 15:17:52 -07:00
Surya Pratap Singh Shekhawat
42207f5bef Update agent_executor.ipynb (#22104)
fixed typos in the doc.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-05-24 22:14:41 +00:00
Erick Friis
8acadc34f5 docs: edit links, direct for notebooks (#22051) 2024-05-24 19:44:46 +00:00
Erick Friis
42ffcb2ff1 anthropic: release 0.1.14rc2, test release note gen (#22147) 2024-05-24 12:40:10 -07:00
Erick Friis
6ee8de62c0 infra: auto-generated release notes based on git log (#22141)
Generates release notes based on a `git log` command with title names.

Aiming to improve this by splitting out features vs. bugfixes using
conventional commits in the coming weeks.

Will work for any monorepo package.
2024-05-24 11:43:28 -07:00
Ameya Shenoy
8ba492ed6a community[minor]: clickhouse -- ability to use secure connection (#22108)
- **Description:** this PR gives the ClickHouse client the ability to use a
secure connection to the ClickHouse server
- **Issue:** fixes #22082
- **Dependencies:** -
- **Twitter handle:** `_codingcoffee_`

Signed-off-by: Ameya Shenoy <shenoy.ameya@gmail.com>
Co-authored-by: Shresth Rana <shresth@grapevine.in>
2024-05-24 17:30:22 +00:00
ccurme
9a010fb761 openai: read stream_options (#21548)
OpenAI recently added a `stream_options` parameter to its chat
completions API (see [release
notes](https://platform.openai.com/docs/changelog/added-chat-completions-stream-usage)).
When this parameter is set to `{"include_usage": True}`, an extra "empty"
message is added to the end of a stream containing token usage. Here we
propagate token usage to `AIMessage.usage_metadata`.

We enable this feature by default. Streams would now include an extra
chunk at the end, **after** the chunk with
`response_metadata={'finish_reason': 'stop'}`.

New behavior:
```
[AIMessageChunk(content='', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='Hello', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='!', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='', response_metadata={'finish_reason': 'stop'}, id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde', usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17})]
```

Old behavior (accessible by passing `stream_options={"include_usage":
False}` into (a)stream:
```
[AIMessageChunk(content='', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='Hello', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='!', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='', response_metadata={'finish_reason': 'stop'}, id='run-1312b971-c5ea-4d92-9015-e6604535f339')]
```

From what I can tell this is not yet implemented in Azure, so we enable
only for ChatOpenAI.
2024-05-24 13:20:56 -04:00
Patrick Zhang
eb7c767e5b docs: update the name of the tool passio_nutrition_ai (#22116)
Updating the name of the Passio Nutrition AI tool so that the name of
the tool is correctly displayed in the sidebar menu.

Currently the name of the tool says "Quickstart" in the sidebar.
The patch fixes the name to be Passio Nutrition AI.

<img width="681" alt="image"
src="https://github.com/langchain-ai/langchain/assets/4603110/9609975e-78ea-4032-9024-10c4f838170a">
2024-05-24 17:15:16 +00:00
Leonid Ganeline
fd4ee08167 docs: integrations/platforms/microsoft update (#22100)
Added the `Azure Container Apps dynamic sessions` tool reference
2024-05-24 13:14:51 -04:00
Rahul Triptahi
1a485f59b9 community[patch]: Put authorized identities behind a feature flag in SharepointLoader (#22125)
Description: Put authorised identities behind a feature flag, load_auth.
Documentation: N/A
Unit tests: N/A

---------

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-05-24 12:42:57 -04:00
Anindyadeep
ee689412ab docs: Update PremAI Docs (#22114)
Thank you for contributing to LangChain!

- [X] **PR title**: community: Updated langchain-community PremAI
documentation

- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-05-24 11:55:32 -04:00
sasha
1c9ceff503 community: add metadata to chain logging; (#22122)
Hey, I'm Sasha, the SDK engineer from [Comet](https://comet.com).
This PR updates the CometTracer class.
Added metadata to CometTracer. From now on, both chains and spans will
send it.
2024-05-24 15:29:40 +00:00
Jirka Lhotka
7c0459faf2 community: Update costs of openai finetuned models (#22124)
- **Description:** Update costs of finetuned models and add
gpt-3.5-turbo-0125. Source: https://openai.com/api/pricing/
  - **Issue:** N/A
  - **Dependencies:** None
2024-05-24 15:25:17 +00:00
Eugene Yurtsev
d3db83abe3 community[major]: lint for usage of xml library (#22132)
* Lint for usage of standard xml library
* Add forced opt-in for quip client
* Actual security issue is with underlying QuipClient not LangChain
integration (since the client is doing the parsing), but adding
enforcement at the LangChain level.
2024-05-24 15:23:53 +00:00
Tom Aarsen
5b5ea2af30 docs: Add explanation on how to use Hugging Face embeddings (#22118)
- **Description:** I've added a tab on embedding text with LangChain
using Hugging Face models here:
https://python.langchain.com/v0.2/docs/how_to/embed_text/. HF was
mentioned in the running text, but not in the tabs, which I thought was
odd.
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** No need, this is tiny :) 

Also, I had a ton of issues with the poetry docs/lint install, so I
haven't linted this. Apologies for that.

cc @Jofthomas 

- Tom Aarsen
2024-05-24 11:21:03 -04:00
Bagatur
baa3c975cb anthropic[patch]: allow tool call mutation (#22130)
If tool_use blocks and tool_calls with overlapping IDs are present,
prefer the values of the tool_calls. Allows for mutating AIMessages just
via tool_calls.
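
An illustrative sketch of what this enables (ids, names and arguments are made up): the edited `tool_calls` entry wins over the matching `tool_use` block when the message is sent back to Anthropic.

```python
from langchain_core.messages import AIMessage

# The tool_use block and the tool_calls entry share the id "toolu_01";
# the args edited on tool_calls ({"city": "Berlin"}) now take precedence.
msg = AIMessage(
    content=[
        {
            "type": "tool_use",
            "id": "toolu_01",
            "name": "get_weather",
            "input": {"city": "Paris"},
        }
    ],
    tool_calls=[{"id": "toolu_01", "name": "get_weather", "args": {"city": "Berlin"}}],
)
```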
2024-05-24 08:18:14 -07:00
Christophe Bornet
c838de5027 doc: Add doc for CassandraByteStore (#22126)
Preview:
https://langchain-git-fork-cbornet-doc-cassandrabytestore-langchain.vercel.app/v0.2/docs/integrations/stores/cassandra/
2024-05-24 10:57:55 -04:00
Vadym Barda
2edb512282 docs: improve how-to docs for message history (#22072)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-23 20:12:24 -04:00
Artem
eb7c453b98 docs: update hub.pull("rlm/map-prompt") to hub.pull("rlm/reduce-prompt") for reduce prompt (#22088)
**PR message**: 
Update `hub.pull("rlm/map-prompt")` to `hub.pull("rlm/reduce-prompt")`
in summarization.ipynb

**Description:** 
Fix typo in prompt hub link from `reduce_prompt =
hub.pull("rlm/map-prompt")` to `reduce_prompt =
hub.pull("rlm/reduce-prompt")` following next issue

**Issue:** #22014

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-23 23:07:37 +00:00
Leonid Ganeline
2416737c5f docs: compact the API Reference links (#21285)
This PR is opinionated. 
Issue: the `API Reference` sections in the examples hold too much
vertical space and make us scroll the page too much. See an
[example](https://python.langchain.com/docs/get_started/quickstart/#conversation-retrieval-chain).
These sections are **important**. So, the compacting should not make
these sections less noticeable.
Change: compacting the `API Reference` sections. See the [same example
after change
applied](https://langchain-j6nya46lf-langchain.vercel.app/docs/get_started/quickstart/#conversation-retrieval-chain).
It is more compact and now looks like references (footnotes).
Note: I would also change the section style, so it would be more
noticeable (maybe to look like the footnotes. Smaller wider font?)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-23 15:50:23 -07:00
ccurme
0ea1e89b2c groq: read tool calls from .tool_calls attribute (#22096) 2024-05-23 18:16:06 -04:00
Bagatur
96c21dfe56 docs: hf feat table tool calling (#22091) 2024-05-23 15:09:30 -07:00
Eugene Yurtsev
63004a0945 codespell ignore remaining issues (#22097) 2024-05-23 21:51:39 +00:00
Eugene Yurtsev
2d693c484e docs: fix some spelling mistakes caught by newest version of code spell (#22090)
Going to merge this even though it doesn't pass all tests, and open a
separate PR for the remaining spelling mistakes.
2024-05-23 16:59:11 -04:00
Bagatur
38783d07c9 infra: api docs quick preview (#22093) 2024-05-23 13:57:45 -07:00
Pavel Zloi
fe26f937e4 community[minor]: ManticoreSearch engine added to vectorstore (#19117)
**Description:** ManticoreSearch engine added to vectorstores
**Issue:** no issue, just a new feature
**Dependencies:** https://pypi.org/project/manticoresearch-dev/
**Twitter handle:** @EvilFreelancer

- Example notebook with test integration:

https://github.com/EvilFreelancer/langchain/blob/manticore-search-vectorstore/docs/docs/integrations/vectorstores/manticore_search.ipynb

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-23 13:56:18 -07:00
Erick Friis
95c3e5f85f cli: model name substitution fix, release 0.0.23 (#22089) 2024-05-23 13:09:38 -07:00
Kartheek Yakkala
18b8c8628a docs : Added integrations for tools with langchain_community (#22056)
- **PR title**:  Docs enhancement

- **Description:** Adding installation instructions for integrations
requiring `langchain-community` package since 0.2
    - **Issue:** https://github.com/langchain-ai/langchain/issues/22005
2024-05-23 15:09:34 -04:00
ccurme
152c8cac33 anthropic, openai: cut pre-releases (#22083) 2024-05-23 15:02:23 -04:00
ccurme
cd07521170 core: bump to 0.2.1rc (#22080) 2024-05-23 18:36:50 +00:00
Harrison Chase
170cc8aec3 docs: add multi-modal-docs (#21734)
We don't really have any abstractions around multi-modal... so add a
section explaining that we don't have any abstractions, and then how-to
guides for OpenAI and Anthropic (probably need to add more)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Tomaz Bratanic <bratanic.tomaz@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: junefish <junefish@users.noreply.github.com>
Co-authored-by: William Fu-Hinthorn <13333726+hinthornw@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-23 18:33:25 +00:00
ccurme
fbfed65fb1 core, partners: add token usage attribute to AIMessage (#21944)
```python
class UsageMetadata(TypedDict):
    """Usage metadata for a message, such as token counts.

    Attributes:
        input_tokens: (int) count of input (or prompt) tokens
        output_tokens: (int) count of output (or completion) tokens
        total_tokens: (int) total token count
    """

    input_tokens: int
    output_tokens: int
    total_tokens: int
```
```python
class AIMessage(BaseMessage):
    ...
    usage_metadata: Optional[UsageMetadata] = None
    """If provided, token usage information associated with the message."""
    ...
```
2024-05-23 14:21:58 -04:00
Bagatur
3d26807b92 community[patch]: Release. 0.2.1 (#22073) 2024-05-23 10:40:32 -07:00
Bagatur
2d968213d7 langchain[patch]: Release 0.2.1 (#22074) 2024-05-23 10:09:36 -07:00
maang-h
9aba9e3e33 community[patch]: Update the default “API URL” and “MODEL” of sparkllm (#22070)
- **Description:** When I was running the sparkllm, I found that the
default parameters currently used could no longer run correctly.
    - original parameters & values:
         - spark_api_url: "wss://spark-api.xf-yun.com/v3.1/chat"
         - spark_llm_domain: "generalv3"
    ```python
    # example
    
    from langchain_community.chat_models import ChatSparkLLM
    
spark = ChatSparkLLM(spark_app_id="my_app_id",
spark_api_key="my_api_key", spark_api_secret="my_api_secret")
    spark.invoke("hello")
    ```

![sparkllm](https://github.com/langchain-ai/langchain/assets/55082429/5369bfdf-4305-496a-bcf5-2d3f59d39414)

So I updated them to 3.5 (same as sparkllm official website). After the
update, they can be used normally.
    - new parameters & values:
         - spark_api_url: "wss://spark-api.xf-yun.com/v3.5/chat"
         - spark_llm_domain: "generalv3.5"
2024-05-23 12:25:20 -04:00
junkeon
4fda7bf4f2 upstage[patch] : fix error handling in Layout Analysis parser (#22054)
This pull request addresses and fixes exception handling in the
UpstageLayoutAnalysisParser and enhances the test coverage by adding
error exception tests for the document loader. These improvements ensure
robust error handling and increase the reliability of the system when
dealing with external API calls and JSON responses.

### Changes Made
1. Fix Request Exception Handling:

- Issue: The existing implementation of UpstageLayoutAnalysisParser did
not properly handle exceptions thrown by the requests library, which
could lead to unhandled exceptions and potential crashes.
- Solution: Added comprehensive exception handling for
requests.RequestException to catch any request-related errors. This
includes logging the error details and raising a ValueError with a
meaningful error message.

2. Add Error Exception Tests for Document Loader:

- New Tests: Introduced new test cases to verify the robustness of the
UpstageLayoutAnalysisLoader against various error scenarios. The tests
ensure that the loader gracefully handles:
- RequestException: Simulates network issues or invalid API requests to
ensure appropriate error handling and user feedback.
- JSONDecodeError: Simulates scenarios where the API response is not a
valid JSON, ensuring the system does not crash and provides clear error
messaging.
2024-05-23 11:45:34 -04:00
JuHyung Son
d9eff44400 partner-upstage[patch]: embeddings empty list bug (#22057)
Fixed an error in `embed_documents` when the input was given as an empty
list. Also revised the documentation.
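
A tiny sketch of the fixed edge case (the API key is a placeholder and the model name is an assumption):

```python
import os

from langchain_upstage import UpstageEmbeddings

os.environ["UPSTAGE_API_KEY"] = "..."
embeddings = UpstageEmbeddings(model="solar-embedding-1-large")
# Previously an empty input raised an error; it should now simply return [].
assert embeddings.embed_documents([]) == []
```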
2024-05-23 11:44:30 -04:00
Martin Triska
2df8ac402a community[minor]: Added propagation of document metadata from O365BaseLoader (#20663)
**Description:**
- Added propagation of document metadata from O365BaseLoader to
FileSystemBlobLoader (O365BaseLoader uses FileSystemBlobLoader under the
hood).
- This is done by passing dictionary `metadata_dict`: key=filename and
value=dictionary containing document's metadata
- Modified `FileSystemBlobLoader` to accept the `metadata_dict`, use
`mimetype` from it (if available) and pass metadata further into blob
loader.

**Issue:**
- `O365BaseLoader` under the hood downloads documents to temp folder and
then uses `FileSystemBlobLoader` on it.
- However metadata about the document in question is lost in this
process. In particular:
- `mime_type`: `FileSystemBlobLoader` guesses `mime_type` from the file
extension, but that does not work 100% of the time.
- `web_url`: this is useful to keep around since in RAG LLM we might
want to provide link to the source document. In order to work well with
document parsers, we pass the `web_url` as `source` (`web_url` is
ignored by parsers, `source` is preserved)

**Dependencies:**
None

**Twitter handle:**
@martintriska1

Please review @baskaryan

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-05-23 11:42:19 -04:00
Eugene Yurtsev
e5541d1da7 community[patch]: Update doc-string in CloudBlobLoader (#22069)
Update doc-string
2024-05-23 15:31:41 +00:00
Maxime Perrin
8ba4f77734 docs : Adding correct imports to the integrations callbacks doc (#22059)
- **Description:** Adding correct imports to the integrations callbacks
doc (langchain-community package)
  - **Issue:** #22005

---------

Co-authored-by: Maxime Perrin <mperrin@doing.fr>
2024-05-23 11:27:36 -04:00
Philippe PRADOS
6dd621d636 community[minor]: Add CloudBlobLoader that supports loading data from cloud buckets (#21957)
Thank you for contributing to LangChain!

- [ ] **PR title**: "Add CloudBlobLoader"
  - community: Add CloudBlobLoader

- [ ] **PR message**: Add cloud blob loader
    - **Description:** 
 Langchain provides several approaches to read different file formats:

Specific loaders (`CSVLoader`) or blob-compatible loaders
(`FileSystemBlobLoader`). The only implementation proposed for
BlobLoader is `FileSystemBlobLoader`.
      
Many projects retrieve files from cloud storage. We propose a new
implementation of `BlobLoader` to read files from the three cloud
storage systems. The interface is strictly identical to
`FileSystemBlobLoader`. The only difference is the constructor, which
takes a cloud "url" object such as `s3://my-bucket`, `az://my-bucket`,
or `gs://my-bucket`.
      
By streamlining the process, this novel implementation eliminates the
requirement to pre-download files from cloud storage to local temporary
files (which are seldom removed).
      
The code relies on the
[CloudPathLib](https://cloudpathlib.drivendata.org/stable/) library to
interpret cloud URLs. This has been added as an optional dependency.

```Python
loader = CloudBlobLoader("s3://mybucket/id")
for blob in loader.yield_blobs():
    print(blob)
```

- [X] **Dependencies:** CloudPathLib
- [X] **Twitter handle:** pprados


- [X] **Add tests and docs**: Add unit test, but it's easy to convert to
integration test, with some files in a cloud storage (see
`test_cloud_blob_loader.py`)

- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified.

Hello from Paris @hwchase17. Can you review this PR?

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-05-23 10:59:55 -04:00
Christophe Bornet
74947ec894 community[minor]: Add Cassandra ByteStore (#22064) 2024-05-23 10:46:23 -04:00
Christophe Bornet
fea6b99b16 community[minor]: Add async methods to CassandraChatMessageHistory (#21975) 2024-05-23 10:13:05 -04:00
Eugene Yurtsev
37cfc00310 docs: concepts callbacks fix admonition (#22048)
Correct the admonition text
2024-05-22 20:33:28 -04:00
Erick Friis
53293dace8 docs: version increases (#22050) 2024-05-22 16:20:10 -07:00
Sky
12d65f17ff community[patch]: surrealdb provide functions for MMR (Maximal Marginal Relevance) (#21185)
This PR contains 4 added functions:

- max_marginal_relevance_search_by_vector
- amax_marginal_relevance_search_by_vector
- max_marginal_relevance_search
- amax_marginal_relevance_search

I'm no langchain expert, but I tried to inspect other vectorstore sources
like chroma to build these functions for SurrealDB. If someone has
changes for me, please let me know. Otherwise I would be happy if these
changes are added to the repository, so that I can use the original repo
and not my locally monkey-patched version.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-22 22:53:55 +00:00
Erick Friis
58b6c72375 docs: add astream v2 migration guide links (#21845)
- docs: v0.2 version sidebar
- x
- x
2024-05-22 15:48:42 -07:00
Bruno Alvisio
5eabe90494 community[patch]: Adding HEADER to the list of supported locations (#21946)
**Description:** adds headers to the list of supported locations when
generating the openai function schema
2024-05-22 22:47:56 +00:00
Bagatur
50186da0a1 infra: rm unused # noqa violations (#22049)
Updating #21137
2024-05-22 15:21:08 -07:00
acho98
45ed5f3f51 community[minor]: Add Clova Embeddings for LangChain Community (#21890)
- [ ] **PR title**: "Add Naver ClovaX embedding to LangChain community"
- HyperClovaX is a large language model developed by
[Naver](https://clova-x.naver.com/welcome).
It's a powerful and purpose-trained LLM.

- You can visit the embedding service provided by
[ClovaX](https://www.ncloud.com/product/aiService/clovaStudio)

- You may get CLOVA_EMB_API_KEY, CLOVA_EMB_APIGW_API_KEY,
CLOVA_EMB_APP_ID From
https://www.ncloud.com/product/aiService/clovaStudio

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-22 22:08:47 +00:00
arpitkumar980
444c2a3d9f community[patch]: sharepoint loader identity enabled (#21176)

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-22 22:08:31 +00:00
Eugene Yurtsev
8a877120c3 docs: add admonitions to how-to callbacks (#22046)
Add admonitions with more information.
2024-05-22 22:05:57 +00:00
HuiyuanYan
bf3aefce93 community[patch]: Update tongyi.py to support MultimodalConversation in dashscope. (#21249)
Add support for multimodal conversation in DashScope; now we can use the
multimodal language models "qwen-vl-v1", "qwen-vl-chat-v1", and
"qwen-audio-turbo" to process pictures and audio. :)

- [ ] **PR title**: "community: add multimodal conversation support in
dashscope"



    - **Description:** add multimodal conversation support in dashscope
    - **Issue:** 
    - **Dependencies:** dashscope≥1.18.0
    - **Twitter handle:** none :)


- [ ] **How to use it?**:
   - ```python
     Tongyi_chat = ChatTongyi(
        top_p=0.5,
        dashscope_api_key=api_key,
        model="qwen-vl-v1"
     )
     response= Tongyi_chat.invoke(
        input = 
        [
        {
            "role": "user",
            "content": [
{"image":
"https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"},
                {"text": "这是什么?"}
            ]
        }
        ]
       )
      ```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-22 22:04:58 +00:00
mochi
63284ffebf experimental[patch], docs: refine notebook for MyScale SelfQueryRetriever (#22016)
- **Description:** upgrade model to `gpt-4o`
2024-05-22 21:49:01 +00:00
MSubik
d948783a4c community[patch]: standardize init args, update for javelin sdk release. (#21980)
Related to
[20085](https://github.com/langchain-ai/langchain/issues/20085) Updated
the Javelin chat model to standardize the initialization argument. Also
fixed an existing bug where the code was initialized with an incorrect call to
the JavelinClient defined in the javelin_sdk, resulting in an
initialization error. See related [Javelin
Documentation](https://docs.getjavelin.io/docs/javelin-python/quickstart).
2024-05-22 21:47:28 +00:00
Mohammad Mohtashim
16617dd239 community[patch]: AzureSearchVectorStoreRetriever Fixed to account for search_kwargs (#21572)
- **Description:** Fixed `AzureSearchVectorStoreRetriever` to account
for search_kwargs. More explanation is in the mentioned issue.
- **Issue:** #21492

---------

Co-authored-by: MAC <mac@MACs-MacBook-Pro.local>
Co-authored-by: Massimiliano Pronesti <massimiliano.pronesti@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-22 14:46:41 -07:00
Klaudia Lemiec
45351d1bc6 docs: Chroma docstrings update (#22001)
Thank you for contributing to LangChain!

- [X] **PR title**: "docs: Chroma docstrings update"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- [X] **PR message**: 
    - **Description:** Added and updated Chroma docstrings
    - **Issue:** https://github.com/langchain-ai/langchain/issues/21983


- [X] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.
  - only docs


- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

2024-05-22 21:45:30 +00:00
Jerron Lim
28456c2c33 community[patch]: add args_schema to WikipediaQueryRun (#22019)
Description: This change adds args_schema (pydantic BaseModel) to
WikipediaQueryRun for correct schema formatting on LLM function calls

Issue: currently using WikipediaQueryRun with OpenAI function calling
returns the following error "TypeError: WikipediaQueryRun._run() got an
unexpected keyword argument '__arg1' ". This happens because the schema
sent to the LLM is "input: '{"__arg1":"Hunter x Hunter"}'" while the
method should be called with the "query" parameter.
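
A short sketch of the effect (requires the `wikipedia` package):

```python
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper

tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
# With args_schema set, the schema exposed to the model names the real
# "query" argument instead of the generic "__arg1".
print(tool.args)
tool.invoke({"query": "Hunter x Hunter"})
```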

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-22 21:31:58 +00:00
Mazen Ramadan
3c1d77dd64 community[minor]: Add Scrapfly Loader community integration (#22036)
Added [Scrapfly](https://scrapfly.io/) Web Loader integration. Scrapfly
is a web scraping API that allows extracting web page data into
accessible markdown or text datasets.

- __Description__: Added Scrapfly web loader for retrieving web page
data as markdown or text.
- Dependencies: scrapfly-sdk
- Twitter: @thealchemi1st
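
A rough sketch of the loader usage; the API key is a placeholder and the argument names follow the integration description, so treat them as assumptions:

```python
from langchain_community.document_loaders import ScrapflyLoader

loader = ScrapflyLoader(
    ["https://web-scraping.dev/products"],
    api_key="scrapfly-api-key",
    scrape_format="markdown",
)
docs = loader.load()
print(docs[0].page_content[:200])
```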

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-22 21:29:13 +00:00
Chad Juliano
9a66c43146 docs: Use Kinetica Sql context API (#21993)
Update python notebook to use new Kinetica SQL context API.
2024-05-22 14:26:20 -07:00
ccurme
b51a1eba4d langchain, community: move OpenAIAssistantV2Runnable to community (#22044) 2024-05-22 21:22:50 +00:00
Mirna Wong
b4d5f3181b docs: updates code examples in neo4j_cypher.ipynb (#21973)
Resolves #19134

- **Description:** this pr replaces `title` with `name` in the [add
examples in cypher generation
prompt](https://python.langchain.com/v0.1/docs/integrations/graphs/neo4j_cypher/#add-examples-in-the-cypher-generation-prompt)
section.
    - **Issue:** 19134
    - **Twitter handle:** @mirna_wong
2024-05-22 20:48:09 +00:00
CaroFG
6b98140b38 community[patch]: update for compatibility with Meilisearch v1.8 (#21979)
- **Description:** Updates Meilisearch vectorstore for compatibility
with v1.8. Adds [`"showRankingScore": true`](https://www.meilisearch.com/docs/reference/api/search#ranking-score)
in the search parameters and replaces the `_semanticScore` field with
`_rankingScore`.


2024-05-22 13:37:01 -07:00
Oleksii Pokotylo
98c0b093bb community[patch]: Extend AzureSearch with maximal_marginal_relevance, from_embeddings (#21065)
**Description:**
- Extend AzureSearch with `maximal_marginal_relevance` (for vector and
hybrid search)
- Add constructor `from_embeddings` for when the user has already embedded
the texts
- Add `add_embeddings` 
- Refactor common parts (`_simple_search`, `_results_to_documents`,
`_reorder_results_with_maximal_marginal_relevance`)
- Add `vector_search_dimensions` as a parameter to the constructor to
avoid extra calls to `embed_query` (most of the time the user applies
the same model and knows the dimension)

**Issue:** none
**Dependencies:** none

- [x] **Add tests and docs**: The docstrings have been added to the new
functions, and unified for the existing ones. The example notebook is
great at illustrating the main usage of AzureSearch; adding the new
methods would only dilute the main content.
- [x] **Lint and test**

---------

Co-authored-by: Oleksii Pokotylo <oleksii.pokotylo@pwc.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-22 13:36:06 -07:00
Erick Friis
ed5914ff61 docs: move feedback into paginator from content (#22041)
we only index what's in the `<article>` tags for search. We should not
have the feedback in the article.
2024-05-22 13:21:27 -07:00
SaschaStoll
709664a079 community[patch]: Performant filter columns option for Hanavector (#21971)
**Description:** Backwards-compatible extension of the initialization
interface of HanaDB to allow the user to specify
specific_metadata_columns that are used for metadata storage of selected
keys, which yields increased filter performance. Any metadata not
mentioned remains in the general metadata column as part of a JSON
string. Furthermore, switched to executemany for batch inserts into
HanaDB.

**Issue:** N/A

**Dependencies:** no new dependencies added

**Twitter handle:** @sapopensource
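
A sketch of the extended constructor (connection details are placeholders and the metadata keys are illustrative):

```python
from hdbcli import dbapi
from langchain_community.vectorstores import HanaDB
from langchain_openai import OpenAIEmbeddings

conn = dbapi.connect(address="hana-host", port=443, user="USER", password="PASS")
db = HanaDB(
    connection=conn,
    embedding=OpenAIEmbeddings(),
    table_name="LANGCHAIN_DOCS",
    # Promote these metadata keys into dedicated columns for faster filtering;
    # all other metadata stays in the generic JSON metadata column.
    specific_metadata_columns=["source", "category"],
)
```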

---------

Co-authored-by: Martin Kolb <martin.kolb@sap.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-22 13:21:21 -07:00
Bagatur
16b55b0704 langchain[patch]: remove dataclasses-json dep (#22042)
vestigial dep afaict
2024-05-22 13:20:57 -07:00
Christos Boulmpasakos
c3bcfad66d text-splitters[patch]: Extend TextSplitter:keep_separator functionality (#21130)
**Description:** Added extra functionality to the `CharacterTextSplitter`
and `TextSplitter` classes.
The user can select whether to append the separator to the end of the
previous chunk with `keep_separator='end'`, or prepend it to the next
chunk (the previous default behavior).

**Issue:** Fixes #20908
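
A minimal sketch of the new option (text and sizes are arbitrary):

```python
from langchain_text_splitters import CharacterTextSplitter

splitter = CharacterTextSplitter(
    separator=".",
    chunk_size=20,
    chunk_overlap=0,
    # "end" appends the separator to the chunk it closes; previously the
    # separator was prepended to the following chunk.
    keep_separator="end",
)
print(splitter.split_text("First sentence. Second sentence. Third sentence."))
```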

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-22 13:17:45 -07:00
Bagatur
b859765752 docs: fix partner api ref build (#22007) 2024-05-22 13:16:07 -07:00
Eric Zhang
e7e41eaabe langchain: add RankLLM Reranker (#21171)
Integrate RankLLM reranker (https://github.com/castorini/rank_llm) into
LangChain

An example notebook is given in
`docs/docs/integrations/retrievers/rankllm-reranker.ipynb`

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-05-22 20:12:55 +00:00
Eugene Yurtsev
14a9c7c44e concepts: update callback concepts (#22040)
Update callback concepts
2024-05-22 15:58:02 -04:00
maang-h
fc93bed8c4 community: Fix CSVLoader columns is None (#20701)
- **Bug code**: In
langchain_community/document_loaders/csv_loader.py:100

- **Description**: Currently, when `CSVLoader` reads a column as None
in the CSV file, it reports an error because `CSVLoader` does not verify
whether the column is of type str and does not consider how to handle
the corresponding row data when the column is None in the CSV. This PR
provides a solution.

- **Issue:**  Fix #20699 

- **Thinking:**

1. Refer to the handling in
'langchain_community/document_loaders/csv_loader.py:100' for when **'v'**
equals None, and apply the same approach to **'k'**.
(Per `csv.DictReader`, **'k'** will only be None when
`len(columns) < len(number_row_data)` holds.)
2. **'k'** equals None only when it is the last column, and its
corresponding **'v'** is a list. Therefore, I followed the data
format in 'Document' and used ',' to concatenate the elements of the
list. (I'm not sure if you accept this form; if you have other ideas,
let me know.)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-22 12:57:46 -07:00
Nithin James Padayatti
403142eaba langchain: added revision_example prompt template (#20916)
**Description:** Added revision_example prompt template to include the
revision request and revision examples in the revision chain.
    **Issue:** Not Applicable
    **Dependencies:** Not Applicable
    **Twitter handle:**  @nithinjp09
2024-05-22 19:57:32 +00:00
632 changed files with 35406 additions and 11130 deletions

View File

@@ -91,4 +91,4 @@ if __name__ == "__main__":
     }
     for key, value in outputs.items():
         json_output = json.dumps(value)
-        print(f"{key}={json_output}") # noqa: T201
+        print(f"{key}={json_output}")

View File

@@ -76,4 +76,4 @@ if __name__ == "__main__":
     print(
         " ".join([f"{lib}=={version}" for lib, version in min_versions.items()])
-    ) # noqa: T201
+    )

7
.github/workflows/.codespell-exclude vendored Normal file
View File

@@ -0,0 +1,7 @@
libs/community/langchain_community/llms/yuan2.py
"NotIn": "not in",
- `/checkin`: Check-in
docs/docs/integrations/providers/trulens.mdx
self.assertIn(
from trulens_eval import Tru
tru = Tru()

View File

@@ -72,10 +72,67 @@ jobs:
run: |
echo pkg-name="$(poetry version | cut -d ' ' -f 1)" >> $GITHUB_OUTPUT
echo version="$(poetry version --short)" >> $GITHUB_OUTPUT
release-notes:
needs:
- build
runs-on: ubuntu-latest
outputs:
release-body: ${{ steps.generate-release-body.outputs.release-body }}
steps:
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain
path: langchain
sparse-checkout: | # this only grabs files for relevant dir
${{ inputs.working-directory }}
ref: master # this scopes to just master branch
fetch-depth: 0 # this fetches entire commit history
- name: Check Tags
id: check-tags
shell: bash
working-directory: langchain/${{ inputs.working-directory }}
env:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
run: |
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
echo $REGEX
PREV_TAG=$(git tag --sort=-creatordate | grep -P $REGEX || true | head -1)
TAG="${PKG_NAME}==${VERSION}"
if [ "$TAG" == "$PREV_TAG" ]; then
echo "No new version to release"
exit 1
fi
echo tag="$TAG" >> $GITHUB_OUTPUT
echo prev-tag="$PREV_TAG" >> $GITHUB_OUTPUT
- name: Generate release body
id: generate-release-body
working-directory: langchain
env:
WORKING_DIR: ${{ inputs.working-directory }}
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
TAG: ${{ steps.check-tags.outputs.tag }}
PREV_TAG: ${{ steps.check-tags.outputs.prev-tag }}
run: |
PREAMBLE="Changes since $PREV_TAG"
# if PREV_TAG is empty, then we are releasing the first version
if [ -z "$PREV_TAG" ]; then
PREAMBLE="Initial release"
PREV_TAG=$(git rev-list --max-parents=0 HEAD)
fi
{
echo 'release-body<<EOF'
echo "# Release $TAG"
echo $PREAMBLE
echo
git log --format="%s" "$PREV_TAG"..HEAD -- $WORKING_DIR
echo EOF
} >> "$GITHUB_OUTPUT"
test-pypi-publish:
needs:
- build
- release-notes
uses:
./.github/workflows/_test_release.yml
with:
@@ -86,6 +143,7 @@ jobs:
pre-release-checks:
needs:
- build
- release-notes
- test-pypi-publish
runs-on: ubuntu-latest
steps:
@@ -229,6 +287,7 @@ jobs:
publish:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
runs-on: ubuntu-latest
@@ -270,6 +329,7 @@ jobs:
mark-release:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
- publish
@@ -306,6 +366,6 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
generateReleaseNotes: false
tag: ${{needs.build.outputs.pkg-name}}==${{ needs.build.outputs.version }}
body: "# Release ${{needs.build.outputs.pkg-name}}==${{ needs.build.outputs.version }}\n\nPackage-specific release note generation coming soon."
body: ${{ needs.release-notes.outputs.release-body }}
commit: ${{ github.sha }}
makeLatest: ${{ needs.build.outputs.pkg-name == 'langchain-core'}}

View File

@@ -29,9 +29,9 @@ jobs:
python .github/workflows/extract_ignored_words_list.py
id: extract_ignore_words
- name: Codespell
uses: codespell-project/actions-codespell@v2
with:
skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
exclude_file: libs/community/langchain_community/llms/yuan2.py
# - name: Codespell
# uses: codespell-project/actions-codespell@v2
# with:
# skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
# ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
# exclude_file: ./.github/workflows/codespell-exclude

View File

@@ -7,4 +7,4 @@ ignore_words_list = (
     pyproject_toml.get("tool", {}).get("codespell", {}).get("ignore-words-list")
 )
-print(f"::set-output name=ignore_words_list::{ignore_words_list}") # noqa: T201
+print(f"::set-output name=ignore_words_list::{ignore_words_list}")

View File

@@ -10,6 +10,7 @@ env:
jobs:
build:
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
@@ -25,16 +26,52 @@ jobs:
- "libs/partners/groq"
- "libs/partners/mistralai"
- "libs/partners/together"
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
- "libs/partners/cohere"
- "libs/partners/google-vertexai"
- "libs/partners/google-genai"
- "libs/partners/aws"
- "libs/partners/nvidia-ai-endpoints"
steps:
- uses: actions/checkout@v4
with:
path: langchain
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-google
path: langchain-google
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-nvidia
path: langchain-nvidia
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-cohere
path: langchain-cohere
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-aws
path: langchain-aws
- name: Move libs
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai \
langchain/libs/partners/nvidia-ai-endpoints \
langchain/libs/partners/cohere
mv langchain-google/libs/genai langchain/libs/partners/google-genai
mv langchain-google/libs/vertexai langchain/libs/partners/google-vertexai
mv langchain-nvidia/libs/ai-endpoints langchain/libs/partners/nvidia-ai-endpoints
mv langchain-cohere/libs/cohere langchain/libs/partners/cohere
mv langchain-aws/libs/aws langchain/libs/partners/aws
- name: Set up Python ${{ matrix.python-version }}
uses: "./.github/actions/poetry_setup"
uses: "./langchain/.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ matrix.working-directory }}
working-directory: langchain/${{ matrix.working-directory }}
cache-key: scheduled
- name: 'Authenticate to Google Cloud'
@@ -43,16 +80,20 @@ jobs:
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: Install dependencies
working-directory: ${{ matrix.working-directory }}
shell: bash
run: |
echo "Running scheduled tests, installing dependencies with poetry..."
cd langchain/${{ matrix.working-directory }}
poetry install --with=test_integration,test
- name: Run integration tests
working-directory: ${{ matrix.working-directory }}
shell: bash
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
@@ -67,12 +108,26 @@ jobs:
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
run: |
make integration_test
cd langchain/${{ matrix.working-directory }}
make integration_tests
- name: Remove external libraries
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai \
langchain/libs/partners/nvidia-ai-endpoints \
langchain/libs/partners/cohere \
langchain/libs/partners/aws
- name: Ensure the tests did not create any additional files
working-directory: ${{ matrix.working-directory }}
shell: bash
working-directory: langchain
run: |
set -eu

.gitignore
View File

@@ -133,6 +133,7 @@ env.bak/
# mypy
.mypy_cache/
.mypy_cache_test/
.dmypy.json
dmypy.json
@@ -178,3 +179,4 @@ _dist
docs/docs/templates
prof
virtualenv/

View File

@@ -32,10 +32,19 @@ api_docs_build:
poetry run python docs/api_reference/create_api_rst.py
cd docs/api_reference && poetry run make html
API_PKG ?= text-splitters
api_docs_quick_preview:
poetry run pip install "pydantic<2"
poetry run python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && poetry run make html
open docs/api_reference/_build/html/$(shell echo $(API_PKG) | sed 's/-/_/g')_api_reference.html
## api_docs_clean: Clean the API Reference documentation build artifacts.
api_docs_clean:
find ./docs/api_reference -name '*_api_reference.rst' -delete
cd docs/api_reference && poetry run make clean
git clean -fdX ./docs/api_reference
## api_docs_linkcheck: Run linkchecker on the API Reference documentation.
api_docs_linkcheck:

View File

@@ -2,17 +2,17 @@
⚡ Build context-aware reasoning applications ⚡
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/releases)
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/releases)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
[![Downloads](https://static.pepy.tech/badge/langchain-core/month)](https://pepy.tech/project/langchain-core)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
[![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-core?style=flat-square)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-core?style=flat-square)](https://pypistats.org/packages/langchain-core)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=flat-square)](https://star-history.com/#langchain-ai/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain?style=flat-square)](https://libraries.io/github/langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/issues)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=social)](https://star-history.com/#langchain-ai/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain)](https://libraries.io/github/langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/issues)
[![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
@@ -38,22 +38,22 @@ conda install langchain -c conda-forge
For these applications, LangChain simplifies the entire application lifecycle:
- **Open-source libraries**: Build your applications using LangChain's [modular building blocks](https://python.langchain.com/docs/expression_language/) and [components](https://python.langchain.com/docs/modules/). Integrate with hundreds of [third-party providers](https://python.langchain.com/docs/integrations/platforms/).
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://python.langchain.com/docs/langsmith/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn any chain into a REST API with [LangServe](https://python.langchain.com/docs/langserve).
- **Open-source libraries**: Build your applications using LangChain's [modular building blocks](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel) and [components](https://python.langchain.com/v0.2/docs/concepts/#components). Integrate with hundreds of [third-party providers](https://python.langchain.com/v0.2/docs/integrations/platforms/).
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://docs.smith.langchain.com/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn any chain into a REST API with [LangServe](https://python.langchain.com/v0.2/docs/langserve/).
### Open-source libraries
- **`langchain-core`**: Base abstractions and LangChain Expression Language.
- **`langchain-community`**: Third party integrations.
- Some integrations have been further split into **partner packages** that only rely on **`langchain-core`**. Examples include **`langchain_openai`** and **`langchain_anthropic`**.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **[`LangGraph`](https://python.langchain.com/docs/langgraph)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
- **[`LangGraph`](https://langchain-ai.github.io/langgraph/)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
### Productionization:
- **[LangSmith](https://python.langchain.com/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
- **[LangSmith](https://docs.smith.langchain.com/)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
### Deployment:
- **[LangServe](https://python.langchain.com/docs/langserve)**: A library for deploying LangChain chains as REST APIs.
- **[LangServe](https://python.langchain.com/v0.2/docs/langserve/)**: A library for deploying LangChain chains as REST APIs.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack.svg "LangChain Architecture Overview")
@@ -61,20 +61,20 @@ For these applications, LangChain simplifies the entire application lifecycle:
**❓ Question answering with RAG**
- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)
- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/rag/)
- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)
**🧱 Extracting structured output**
- [Documentation](https://python.langchain.com/docs/use_cases/extraction/)
- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/extraction/)
- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain-extract/)
**🤖 Chatbots**
- [Documentation](https://python.langchain.com/docs/use_cases/chatbots)
- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/chatbot/)
- End-to-end Example: [Web LangChain (web researcher chatbot)](https://weblangchain.vercel.app) and [repo](https://github.com/langchain-ai/weblangchain)
And much more! Head to the [Use cases](https://python.langchain.com/docs/use_cases/) section of the docs for more.
And much more! Head to the [Tutorials](https://python.langchain.com/v0.2/docs/tutorials/) section of the docs for more.
## 🚀 How does LangChain help?
The main value props of the LangChain libraries are:
@@ -87,49 +87,50 @@ Off-the-shelf chains make it easy to get started. Components make it easy to cus
LCEL is the foundation of many of LangChain's components, and is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.
- **[Overview](https://python.langchain.com/docs/expression_language/)**: LCEL and its benefits
- **[Interface](https://python.langchain.com/docs/expression_language/interface)**: The standard interface for LCEL objects
- **[Primitives](https://python.langchain.com/docs/expression_language/primitives)**: More on the primitives LCEL includes
- **[Overview](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel)**: LCEL and its benefits
- **[Interface](https://python.langchain.com/v0.2/docs/concepts/#runnable-interface)**: The standard Runnable interface for LCEL objects
- **[Primitives](https://python.langchain.com/v0.2/docs/how_to/#langchain-expression-language-lcel)**: More on the primitives LCEL includes
- **[Cheatsheet](https://python.langchain.com/v0.2/docs/how_to/lcel_cheatsheet/)**: Quick overview of the most common usage patterns
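To make the declarative composition concrete, here is a minimal LCEL chain that pipes a prompt into a chat model and an output parser; the model class is just an example, and any chat model exposing the Runnable interface works:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # example model; any chat model works

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo")
chain = prompt | model | StrOutputParser()

print(chain.invoke({"topic": "bears"}))
```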
## Components
Components fall into the following **modules**:
**📃 Model I/O:**
**📃 Model I/O**
This includes [prompt management](https://python.langchain.com/docs/modules/model_io/prompts/), [prompt optimization](https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/), a generic interface for [chat models](https://python.langchain.com/docs/modules/model_io/chat/) and [LLMs](https://python.langchain.com/docs/modules/model_io/llms/), and common utilities for working with [model outputs](https://python.langchain.com/docs/modules/model_io/output_parsers/).
This includes [prompt management](https://python.langchain.com/v0.2/docs/concepts/#prompt-templates), [prompt optimization](https://python.langchain.com/v0.2/docs/concepts/#example-selectors), a generic interface for [chat models](https://python.langchain.com/v0.2/docs/concepts/#chat-models) and [LLMs](https://python.langchain.com/v0.2/docs/concepts/#llms), and common utilities for working with [model outputs](https://python.langchain.com/v0.2/docs/concepts/#output-parsers).
**📚 Retrieval:**
**📚 Retrieval**
Retrieval Augmented Generation involves [loading data](https://python.langchain.com/docs/modules/data_connection/document_loaders/) from a variety of sources, [preparing it](https://python.langchain.com/docs/modules/data_connection/document_loaders/), [then retrieving it](https://python.langchain.com/docs/modules/data_connection/retrievers/) for use in the generation step.
Retrieval Augmented Generation involves [loading data](https://python.langchain.com/v0.2/docs/concepts/#document-loaders) from a variety of sources, [preparing it](https://python.langchain.com/v0.2/docs/concepts/#text-splitters), then [searching over (a.k.a. retrieving from)](https://python.langchain.com/v0.2/docs/concepts/#retrievers) it for use in the generation step.
**🤖 Agents:**
**🤖 Agents**
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete done. LangChain provides a [standard interface for agents](https://python.langchain.com/docs/modules/agents/), a [selection of agents](https://python.langchain.com/docs/modules/agents/agent_types/) to choose from, and examples of end-to-end agents.
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. LangChain provides a [standard interface for agents](https://python.langchain.com/v0.2/docs/concepts/#agents) along with the [LangGraph](https://github.com/langchain-ai/langgraph) extension for building custom agents.
## 📖 Documentation
Please see [here](https://python.langchain.com) for full documentation, which includes:
- [Getting started](https://python.langchain.com/docs/get_started/introduction): installation, setting up the environment, simple examples
- [Use case](https://python.langchain.com/docs/use_cases/) walkthroughs and best practice [guides](https://python.langchain.com/docs/guides/)
- Overviews of the [interfaces](https://python.langchain.com/docs/expression_language/), [components](https://python.langchain.com/docs/modules/), and [integrations](https://python.langchain.com/docs/integrations/providers)
You can also check out the full [API Reference docs](https://api.python.langchain.com).
- [Introduction](https://python.langchain.com/v0.2/docs/introduction/): Overview of the framework and the structure of the docs.
- [Tutorials](https://python.langchain.com/docs/use_cases/): If you're looking to build something specific or are more of a hands-on learner, check out our tutorials. This is the best place to get started.
- [How-to guides](https://python.langchain.com/v0.2/docs/how_to/): Answers to “How do I….?” type questions. These guides are goal-oriented and concrete; they're meant to help you complete a specific task.
- [Conceptual guide](https://python.langchain.com/v0.2/docs/concepts/): Conceptual explanations of the key parts of the framework.
- [API Reference](https://api.python.langchain.com): Thorough documentation of every class and method.
## 🌐 Ecosystem
- [🦜🛠️ LangSmith](https://python.langchain.com/docs/langsmith/): Tracing and evaluating your language model applications and intelligent agents to help you move from prototype to production.
- [🦜🕸️ LangGraph](https://python.langchain.com/docs/langgraph): Creating stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.
- [🦜🛠️ LangSmith](https://docs.smith.langchain.com/): Tracing and evaluating your language model applications and intelligent agents to help you move from prototype to production.
- [🦜🕸️ LangGraph](https://langchain-ai.github.io/langgraph/): Creating stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.
- [🦜🏓 LangServe](https://python.langchain.com/docs/langserve): Deploying LangChain runnables and chains as REST APIs.
- [LangChain Templates](https://python.langchain.com/docs/templates/): Example applications hosted with LangServe.
- [LangChain Templates](https://python.langchain.com/v0.2/docs/templates/): Example applications hosted with LangServe.
## 💁 Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see [here](https://python.langchain.com/docs/contributing/).
For detailed information on how to contribute, see [here](https://python.langchain.com/v0.2/docs/contributing/).
## 🌟 Contributors

View File

@@ -0,0 +1,497 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "9fc3897d-176f-4729-8fd1-cfb4add53abd",
"metadata": {},
"source": [
"## Nomic multi-modal RAG\n",
"\n",
"Many documents contain a mixture of content types, including text and images. \n",
"\n",
"Yet, information captured in images is lost in most RAG applications.\n",
"\n",
"With the emergence of multimodal LLMs, like [GPT-4V](https://openai.com/research/gpt-4v-system-card), it is worth considering how to utilize images in RAG:\n",
"\n",
"In this demo we\n",
"\n",
"* Use multimodal embeddings from Nomic Embed [Vision](https://huggingface.co/nomic-ai/nomic-embed-vision-v1.5) and [Text](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) to embed images and text\n",
"* Retrieve both using similarity search\n",
"* Pass raw images and text chunks to a multimodal LLM for answer synthesis \n",
"\n",
"## Signup\n",
"\n",
"Get your API token, then run:\n",
"```\n",
"! nomic login\n",
"```\n",
"\n",
"Then run with your generated API token \n",
"```\n",
"! nomic login < token > \n",
"```\n",
"\n",
"## Packages\n",
"\n",
"For `unstructured`, you will also need `poppler` ([installation instructions](https://pdf2image.readthedocs.io/en/latest/installation.html)) and `tesseract` ([installation instructions](https://tesseract-ocr.github.io/tessdoc/Installation.html)) in your system."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "54926b9b-75c2-4cd4-8f14-b3882a0d370b",
"metadata": {},
"outputs": [],
"source": [
"! nomic login token"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "febbc459-ebba-4c1a-a52b-fed7731593f8",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"! pip install -U langchain-nomic langchain_community tiktoken langchain-openai chromadb langchain # (newest versions required for multi-modal)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "acbdc603-39e2-4a5f-836c-2bbaecd46b0b",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# lock to 0.10.19 due to a persistent bug in more recent versions\n",
"! pip install \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml pillow matplotlib tiktoken"
]
},
{
"cell_type": "markdown",
"id": "1e94b3fb-8e3e-4736-be0a-ad881626c7bd",
"metadata": {},
"source": [
"## Data Loading\n",
"\n",
"### Partition PDF text and images\n",
" \n",
"Let's look at an example pdfs containing interesting images.\n",
"\n",
"1/ Art from the J Paul Getty museum:\n",
"\n",
" * Here is a [zip file](https://drive.google.com/file/d/18kRKbq2dqAhhJ3DfZRnYcTBEUfYxe1YR/view?usp=sharing) with the PDF and the already extracted images. \n",
"* https://www.getty.edu/publications/resources/virtuallibrary/0892360224.pdf\n",
"\n",
"2/ Famous photographs from library of congress:\n",
"\n",
"* https://www.loc.gov/lcm/pdf/LCM_2020_1112.pdf\n",
"* We'll use this as an example below\n",
"\n",
"We can use `partition_pdf` below from [Unstructured](https://unstructured-io.github.io/unstructured/introduction.html#key-concepts) to extract text and images.\n",
"\n",
"To supply this to extract the images:\n",
"```\n",
"extract_images_in_pdf=True\n",
"```\n",
"\n",
"\n",
"\n",
"If using this zip file, then you can simply process the text only with:\n",
"```\n",
"extract_images_in_pdf=False\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9646b524-71a7-4b2a-bdc8-0b81f77e968f",
"metadata": {},
"outputs": [],
"source": [
"# Folder with pdf and extracted images\n",
"from pathlib import Path\n",
"\n",
"# replace with actual path to images\n",
"path = Path(\"../art\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "77f096ab-a933-41d0-8f4e-1efc83998fc3",
"metadata": {},
"outputs": [],
"source": [
"path.resolve()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bc4839c0-8773-4a07-ba59-5364501269b2",
"metadata": {},
"outputs": [],
"source": [
"# Extract images, tables, and chunk text\n",
"from unstructured.partition.pdf import partition_pdf\n",
"\n",
"raw_pdf_elements = partition_pdf(\n",
" filename=str(path.resolve()) + \"/getty.pdf\",\n",
" extract_images_in_pdf=False,\n",
" infer_table_structure=True,\n",
" chunking_strategy=\"by_title\",\n",
" max_characters=4000,\n",
" new_after_n_chars=3800,\n",
" combine_text_under_n_chars=2000,\n",
" image_output_dir_path=path,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "969545ad",
"metadata": {},
"outputs": [],
"source": [
"# Categorize text elements by type\n",
"tables = []\n",
"texts = []\n",
"for element in raw_pdf_elements:\n",
" if \"unstructured.documents.elements.Table\" in str(type(element)):\n",
" tables.append(str(element))\n",
" elif \"unstructured.documents.elements.CompositeElement\" in str(type(element)):\n",
" texts.append(str(element))"
]
},
{
"cell_type": "markdown",
"id": "5d8e6349-1547-4cbf-9c6f-491d8610ec10",
"metadata": {},
"source": [
"## Multi-modal embeddings with our document\n",
"\n",
"We will use [nomic-embed-vision-v1.5](https://huggingface.co/nomic-ai/nomic-embed-vision-v1.5) embeddings. This model is aligned \n",
"to [nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) allowing for multimodal semantic search and Multimodal RAG!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4bc15842-cb95-4f84-9eb5-656b0282a800",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import uuid\n",
"\n",
"import chromadb\n",
"import numpy as np\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_nomic import NomicEmbeddings\n",
"from PIL import Image as _PILImage\n",
"\n",
"# Create chroma\n",
"text_vectorstore = Chroma(\n",
" collection_name=\"mm_rag_clip_photos_text\",\n",
" embedding_function=NomicEmbeddings(\n",
" vision_model=\"nomic-embed-vision-v1.5\", model=\"nomic-embed-text-v1.5\"\n",
" ),\n",
")\n",
"image_vectorstore = Chroma(\n",
" collection_name=\"mm_rag_clip_photos_image\",\n",
" embedding_function=NomicEmbeddings(\n",
" vision_model=\"nomic-embed-vision-v1.5\", model=\"nomic-embed-text-v1.5\"\n",
" ),\n",
")\n",
"\n",
"# Get image URIs with .jpg extension only\n",
"image_uris = sorted(\n",
" [\n",
" os.path.join(path, image_name)\n",
" for image_name in os.listdir(path)\n",
" if image_name.endswith(\".jpg\")\n",
" ]\n",
")\n",
"\n",
"# Add images\n",
"image_vectorstore.add_images(uris=image_uris)\n",
"\n",
"# Add documents\n",
"text_vectorstore.add_texts(texts=texts)\n",
"\n",
"# Make retriever\n",
"image_retriever = image_vectorstore.as_retriever()\n",
"text_retriever = text_vectorstore.as_retriever()"
]
},
{
"cell_type": "markdown",
"id": "02a186d0-27e0-4820-8092-63b5349dd25d",
"metadata": {},
"source": [
"## RAG\n",
"\n",
"`vectorstore.add_images` will store / retrieve images as base64 encoded strings.\n",
"\n",
"These can be passed to [GPT-4V](https://platform.openai.com/docs/guides/vision)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "344f56a8-0dc3-433e-851c-3f7600c7a72b",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"import io\n",
"from io import BytesIO\n",
"\n",
"import numpy as np\n",
"from PIL import Image\n",
"\n",
"\n",
"def resize_base64_image(base64_string, size=(128, 128)):\n",
" \"\"\"\n",
" Resize an image encoded as a Base64 string.\n",
"\n",
" Args:\n",
" base64_string (str): Base64 string of the original image.\n",
" size (tuple): Desired size of the image as (width, height).\n",
"\n",
" Returns:\n",
" str: Base64 string of the resized image.\n",
" \"\"\"\n",
" # Decode the Base64 string\n",
" img_data = base64.b64decode(base64_string)\n",
" img = Image.open(io.BytesIO(img_data))\n",
"\n",
" # Resize the image\n",
" resized_img = img.resize(size, Image.LANCZOS)\n",
"\n",
" # Save the resized image to a bytes buffer\n",
" buffered = io.BytesIO()\n",
" resized_img.save(buffered, format=img.format)\n",
"\n",
" # Encode the resized image to Base64\n",
" return base64.b64encode(buffered.getvalue()).decode(\"utf-8\")\n",
"\n",
"\n",
"def is_base64(s):\n",
" \"\"\"Check if a string is Base64 encoded\"\"\"\n",
" try:\n",
" return base64.b64encode(base64.b64decode(s)) == s.encode()\n",
" except Exception:\n",
" return False\n",
"\n",
"\n",
"def split_image_text_types(docs):\n",
" \"\"\"Split numpy array images and texts\"\"\"\n",
" images = []\n",
" text = []\n",
" for doc in docs:\n",
" doc = doc.page_content # Extract Document contents\n",
" if is_base64(doc):\n",
" # Resize image to avoid OAI server error\n",
" images.append(\n",
" resize_base64_image(doc, size=(250, 250))\n",
" ) # base64 encoded str\n",
" else:\n",
" text.append(doc)\n",
" return {\"images\": images, \"texts\": text}"
]
},
{
"cell_type": "markdown",
"id": "23a2c1d8-fea6-4152-b184-3172dd46c735",
"metadata": {},
"source": [
"Currently, we format the inputs using a `RunnableLambda` while we add image support to `ChatPromptTemplates`.\n",
"\n",
"Our runnable follows the classic RAG flow - \n",
"\n",
"* We first compute the context (both \"texts\" and \"images\" in this case) and the question (just a RunnablePassthrough here) \n",
"* Then we pass this into our prompt template, which is a custom function that formats the message for the gpt-4-vision-preview model. \n",
"* And finally we parse the output as a string."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5d8919dc-c238-4746-86ba-45d940a7d260",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4c93fab3-74c4-4f1d-958a-0bc4cdd0797e",
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"\n",
"def prompt_func(data_dict):\n",
" # Joining the context texts into a single string\n",
" formatted_texts = \"\\n\".join(data_dict[\"text_context\"][\"texts\"])\n",
" messages = []\n",
"\n",
" # Adding image(s) to the messages if present\n",
" if data_dict[\"image_context\"][\"images\"]:\n",
" image_message = {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\n",
" \"url\": f\"data:image/jpeg;base64,{data_dict['image_context']['images'][0]}\"\n",
" },\n",
" }\n",
" messages.append(image_message)\n",
"\n",
" # Adding the text message for analysis\n",
" text_message = {\n",
" \"type\": \"text\",\n",
" \"text\": (\n",
" \"As an expert art critic and historian, your task is to analyze and interpret images, \"\n",
" \"considering their historical and cultural significance. Alongside the images, you will be \"\n",
" \"provided with related text to offer context. Both will be retrieved from a vectorstore based \"\n",
" \"on user-input keywords. Please use your extensive knowledge and analytical skills to provide a \"\n",
" \"comprehensive summary that includes:\\n\"\n",
" \"- A detailed description of the visual elements in the image.\\n\"\n",
" \"- The historical and cultural context of the image.\\n\"\n",
" \"- An interpretation of the image's symbolism and meaning.\\n\"\n",
" \"- Connections between the image and the related text.\\n\\n\"\n",
" f\"User-provided keywords: {data_dict['question']}\\n\\n\"\n",
" \"Text and / or tables:\\n\"\n",
" f\"{formatted_texts}\"\n",
" ),\n",
" }\n",
" messages.append(text_message)\n",
"\n",
" return [HumanMessage(content=messages)]\n",
"\n",
"\n",
"model = ChatOpenAI(temperature=0, model=\"gpt-4-vision-preview\", max_tokens=1024)\n",
"\n",
"# RAG pipeline\n",
"chain = (\n",
" {\n",
" \"text_context\": text_retriever | RunnableLambda(split_image_text_types),\n",
" \"image_context\": image_retriever | RunnableLambda(split_image_text_types),\n",
" \"question\": RunnablePassthrough(),\n",
" }\n",
" | RunnableLambda(prompt_func)\n",
" | model\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "1566096d-97c2-4ddc-ba4a-6ef88c525e4e",
"metadata": {},
"source": [
"## Test retrieval and run RAG"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "90121e56-674b-473b-871d-6e4753fd0c45",
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import HTML, display\n",
"\n",
"\n",
"def plt_img_base64(img_base64):\n",
" # Create an HTML img tag with the base64 string as the source\n",
" image_html = f'<img src=\"data:image/jpeg;base64,{img_base64}\" />'\n",
"\n",
" # Display the image by rendering the HTML\n",
" display(HTML(image_html))\n",
"\n",
"\n",
"docs = text_retriever.invoke(\"Women with children\", k=5)\n",
"for doc in docs:\n",
" if is_base64(doc.page_content):\n",
" plt_img_base64(doc.page_content)\n",
" else:\n",
" print(doc.page_content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "44eaa532-f035-4c04-b578-02339d42554c",
"metadata": {},
"outputs": [],
"source": [
"docs = image_retriever.invoke(\"Women with children\", k=5)\n",
"for doc in docs:\n",
" if is_base64(doc.page_content):\n",
" plt_img_base64(doc.page_content)\n",
" else:\n",
" print(doc.page_content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "69fb15fd-76fc-49b4-806d-c4db2990027d",
"metadata": {},
"outputs": [],
"source": [
"chain.invoke(\"Women with children\")"
]
},
{
"cell_type": "markdown",
"id": "227f08b8-e732-4089-b65c-6eb6f9e48f15",
"metadata": {},
"source": [
"We can see the images retrieved in the LangSmith trace:\n",
"\n",
"LangSmith [trace](https://smith.langchain.com/public/69c558a5-49dc-4c60-a49b-3adbb70f74c5/r/e872c2c8-528c-468f-aefd-8b5cd730a673)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -526,8 +526,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"***Note:*** Currently, OracleEmbeddings processes each embedding generation request individually, without batching, by calling REST endpoints separately for each request. This method could potentially lead to exceeding the maximum request per minute quota set by some providers. However, we are actively working to enhance this process by implementing request batching, which will allow multiple embedding requests to be combined into fewer API calls, thereby optimizing our use of provider resources and adhering to their request limits. This update is expected to be rolled out soon, eliminating the current limitation.\n",
"\n",
"***Note:*** Users may need to configure a proxy to utilize third-party embedding generation providers, excluding the 'database' provider that utilizes an ONNX model."
]
},

View File

@@ -36,7 +36,9 @@
"\n",
"docs = loader.load()\n",
"\n",
"vectorstore = DocArrayInMemorySearch.from_documents(docs, embedding=UpstageEmbeddings())\n",
"vectorstore = DocArrayInMemorySearch.from_documents(\n",
" docs, embedding=UpstageEmbeddings(model=\"solar-embedding-1-large\")\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",

View File

@@ -39,12 +39,10 @@
"from langchain_community.document_loaders.recursive_url_loader import (\n",
" RecursiveUrlLoader,\n",
")\n",
"\n",
"# noqa\n",
"from langchain_community.vectorstores import Chroma\n",
"\n",
"# For our example, we'll load docs from the web\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter # noqa\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"DOCSTORE_DIR = \".\"\n",
"DOCSTORE_ID_KEY = \"doc_id\""

View File

@@ -35,8 +35,6 @@ generate-files:
mkdir -p $(INTERMEDIATE_DIR)
cp -r $(SOURCE_DIR)/* $(INTERMEDIATE_DIR)
mkdir -p $(INTERMEDIATE_DIR)/templates
cp ../templates/docs/INDEX.md $(INTERMEDIATE_DIR)/templates/index.md
cp ../cookbook/README.md $(INTERMEDIATE_DIR)/cookbook.mdx
$(PYTHON) scripts/model_feat_table.py $(INTERMEDIATE_DIR)

View File

@@ -128,11 +128,11 @@ def _load_package_modules(
of the modules/packages are part of the package vs. 3rd party or built-in.
Parameters:
package_directory: Path to the package directory.
submodule: Optional name of submodule to load.
package_directory (Union[str, Path]): Path to the package directory.
submodule (Optional[str]): Optional name of submodule to load.
Returns:
list: A list of loaded module objects.
Dict[str, ModuleMembers]: A dictionary where keys are module names and values are ModuleMembers objects.
"""
package_path = (
Path(package_directory)
@@ -187,7 +187,7 @@ def _load_package_modules(
modules_by_namespace[top_namespace] = _module_members
except ImportError as e:
print(f"Error: Unable to import module '{namespace}' with error: {e}") # noqa: T201
print(f"Error: Unable to import module '{namespace}' with error: {e}")
return modules_by_namespace
@@ -364,7 +364,7 @@ def main(dirs: Optional[list] = None) -> None:
dirs += [
dir_
for dir_ in os.listdir(ROOT_DIR / "libs" / "partners")
if os.path.isdir(dir_)
if os.path.isdir(ROOT_DIR / "libs" / "partners" / dir_)
and "pyproject.toml" in os.listdir(ROOT_DIR / "libs" / "partners" / dir_)
]
for dir_ in dirs:
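The `os.path.isdir` change above is the substantive fix: the old check resolved `dir_` against the current working directory, so the filter only behaved as intended when the script happened to run from `libs/partners`. A minimal sketch of the corrected filter (the root path is illustrative):

```python
import os
from pathlib import Path

ROOT_DIR = Path("/path/to/langchain")  # illustrative monorepo root

partner_dirs = [
    dir_
    for dir_ in os.listdir(ROOT_DIR / "libs" / "partners")
    # Resolve against the partners directory, not the current working directory.
    if os.path.isdir(ROOT_DIR / "libs" / "partners" / dir_)
    and "pyproject.toml" in os.listdir(ROOT_DIR / "libs" / "partners" / dir_)
]
```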

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

View File

@@ -2,32 +2,150 @@
LangChain implements the latest research in the field of Natural Language Processing.
This page contains `arXiv` papers referenced in the LangChain Documentation, API Reference,
and Templates.
Templates, and Cookbooks.
## Summary
| arXiv id / Title | Authors | Published date 🔻 | LangChain Documentation|
|------------------|---------|-------------------|------------------------|
| `2402.03620v1` [Self-Discover: Large Language Models Self-Compose Reasoning Structures](http://arxiv.org/abs/2402.03620v1) | Pei Zhou, Jay Pujara, Xiang Ren, et al. | 2024-02-06 | `Cookbook:` [self-discover](https://github.com/langchain-ai/langchain/blob/master/cookbook/self-discover.ipynb)
| `2401.18059v1` [RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval](http://arxiv.org/abs/2401.18059v1) | Parth Sarthi, Salman Abdullah, Aditi Tuli, et al. | 2024-01-31 | `Cookbook:` [RAPTOR](https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb)
| `2401.15884v2` [Corrective Retrieval Augmented Generation](http://arxiv.org/abs/2401.15884v2) | Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, et al. | 2024-01-29 | `Cookbook:` [langgraph_crag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_crag.ipynb)
| `2401.04088v1` [Mixtral of Experts](http://arxiv.org/abs/2401.04088v1) | Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, et al. | 2024-01-08 | `Cookbook:` [together_ai](https://github.com/langchain-ai/langchain/blob/master/cookbook/together_ai.ipynb)
| `2312.06648v2` [Dense X Retrieval: What Retrieval Granularity Should We Use?](http://arxiv.org/abs/2312.06648v2) | Tong Chen, Hongwei Wang, Sihao Chen, et al. | 2023-12-11 | `Template:` [propositional-retrieval](https://python.langchain.com/docs/templates/propositional-retrieval)
| `2311.09210v1` [Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models](http://arxiv.org/abs/2311.09210v1) | Wenhao Yu, Hongming Zhang, Xiaoman Pan, et al. | 2023-11-15 | `Template:` [chain-of-note-wiki](https://python.langchain.com/docs/templates/chain-of-note-wiki)
| `2310.06117v2` [Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models](http://arxiv.org/abs/2310.06117v2) | Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, et al. | 2023-10-09 | `Template:` [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting)
| `2305.14283v3` [Query Rewriting for Retrieval-Augmented Large Language Models](http://arxiv.org/abs/2305.14283v3) | Xinbei Ma, Yeyun Gong, Pengcheng He, et al. | 2023-05-23 | `Template:` [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read)
| `2305.08291v1` [Large Language Model Guided Tree-of-Thought](http://arxiv.org/abs/2305.08291v1) | Jieyi Long | 2023-05-15 | `API:` [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot)
| `2303.17580v4` [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http://arxiv.org/abs/2303.17580v4) | Yongliang Shen, Kaitao Song, Xu Tan, et al. | 2023-03-30 | `API:` [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents)
| `2310.11511v1` [Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection](http://arxiv.org/abs/2310.11511v1) | Akari Asai, Zeqiu Wu, Yizhong Wang, et al. | 2023-10-17 | `Cookbook:` [langgraph_self_rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_self_rag.ipynb)
| `2310.06117v2` [Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models](http://arxiv.org/abs/2310.06117v2) | Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, et al. | 2023-10-09 | `Template:` [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting), `Cookbook:` [stepback-qa](https://github.com/langchain-ai/langchain/blob/master/cookbook/stepback-qa.ipynb)
| `2307.09288v2` [Llama 2: Open Foundation and Fine-Tuned Chat Models](http://arxiv.org/abs/2307.09288v2) | Hugo Touvron, Louis Martin, Kevin Stone, et al. | 2023-07-18 | `Cookbook:` [Semi_Structured_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)
| `2305.14283v3` [Query Rewriting for Retrieval-Augmented Large Language Models](http://arxiv.org/abs/2305.14283v3) | Xinbei Ma, Yeyun Gong, Pengcheng He, et al. | 2023-05-23 | `Template:` [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read), `Cookbook:` [rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb)
| `2305.08291v1` [Large Language Model Guided Tree-of-Thought](http://arxiv.org/abs/2305.08291v1) | Jieyi Long | 2023-05-15 | `API:` [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot), `Cookbook:` [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
| `2305.04091v3` [Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models](http://arxiv.org/abs/2305.04091v3) | Lei Wang, Wanyu Xu, Yihuai Lan, et al. | 2023-05-06 | `Cookbook:` [plan_and_execute_agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb)
| `2304.08485v2` [Visual Instruction Tuning](http://arxiv.org/abs/2304.08485v2) | Haotian Liu, Chunyuan Li, Qingyang Wu, et al. | 2023-04-17 | `Cookbook:` [Semi_structured_and_multi_modal_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb), [Semi_structured_multi_modal_RAG_LLaMA2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb)
| `2304.03442v2` [Generative Agents: Interactive Simulacra of Human Behavior](http://arxiv.org/abs/2304.03442v2) | Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al. | 2023-04-07 | `Cookbook:` [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb), [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb)
| `2303.17760v2` [CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society](http://arxiv.org/abs/2303.17760v2) | Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al. | 2023-03-31 | `Cookbook:` [camel_role_playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb)
| `2303.17580v4` [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http://arxiv.org/abs/2303.17580v4) | Yongliang Shen, Kaitao Song, Xu Tan, et al. | 2023-03-30 | `API:` [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents), `Cookbook:` [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
| `2303.08774v6` [GPT-4 Technical Report](http://arxiv.org/abs/2303.08774v6) | OpenAI, Josh Achiam, Steven Adler, et al. | 2023-03-15 | `Docs:` [docs/integrations/vectorstores/mongodb_atlas](https://python.langchain.com/docs/integrations/vectorstores/mongodb_atlas)
| `2301.10226v4` [A Watermark for Large Language Models](http://arxiv.org/abs/2301.10226v4) | John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al. | 2023-01-24 | `API:` [langchain_community.llms...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2212.10496v1` [Precise Zero-Shot Dense Retrieval without Relevance Labels](http://arxiv.org/abs/2212.10496v1) | Luyu Gao, Xueguang Ma, Jimmy Lin, et al. | 2022-12-20 | `API:` [langchain.chains...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder), `Template:` [hyde](https://python.langchain.com/docs/templates/hyde)
| `2301.10226v4` [A Watermark for Large Language Models](http://arxiv.org/abs/2301.10226v4) | John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al. | 2023-01-24 | `API:` [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI)
| `2212.10496v1` [Precise Zero-Shot Dense Retrieval without Relevance Labels](http://arxiv.org/abs/2212.10496v1) | Luyu Gao, Xueguang Ma, Jimmy Lin, et al. | 2022-12-20 | `API:` [langchain.chains...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder), `Template:` [hyde](https://python.langchain.com/docs/templates/hyde), `Cookbook:` [hypothetical_document_embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)
| `2212.07425v3` [Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments](http://arxiv.org/abs/2212.07425v3) | Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al. | 2022-12-12 | `API:` [langchain_experimental.fallacy_removal](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.fallacy_removal)
| `2211.13892v2` [Complementary Explanations for Effective In-Context Learning](http://arxiv.org/abs/2211.13892v2) | Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, et al. | 2022-11-25 | `API:` [langchain_core.example_selectors...MaxMarginalRelevanceExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html#langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector)
| `2211.10435v2` [PAL: Program-aided Language Models](http://arxiv.org/abs/2211.10435v2) | Luyu Gao, Aman Madaan, Shuyan Zhou, et al. | 2022-11-18 | `API:` [langchain_experimental.pal_chain...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain)
| `2211.10435v2` [PAL: Program-aided Language Models](http://arxiv.org/abs/2211.10435v2) | Luyu Gao, Aman Madaan, Shuyan Zhou, et al. | 2022-11-18 | `API:` [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain), [langchain_experimental.pal_chain...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), `Cookbook:` [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
| `2209.10785v2` [Deep Lake: a Lakehouse for Deep Learning](http://arxiv.org/abs/2209.10785v2) | Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, et al. | 2022-09-22 | `Docs:` [docs/integrations/providers/activeloop_deeplake](https://python.langchain.com/docs/integrations/providers/activeloop_deeplake)
| `2205.12654v1` [Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages](http://arxiv.org/abs/2205.12654v1) | Kevin Heffernan, Onur Çelebi, Holger Schwenk | 2022-05-25 | `API:` [langchain_community.embeddings...LaserEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.laser.LaserEmbeddings.html#langchain_community.embeddings.laser.LaserEmbeddings)
| `2204.00498v1` [Evaluating the Text-to-SQL Capabilities of Large Language Models](http://arxiv.org/abs/2204.00498v1) | Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau | 2022-03-15 | `API:` [langchain_community.utilities...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase), [langchain_community.utilities...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL)
| `2202.00666v5` [Locally Typical Sampling](http://arxiv.org/abs/2202.00666v5) | Clara Meister, Tiago Pimentel, Gian Wiher, et al. | 2022-02-01 | `API:` [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2204.00498v1` [Evaluating the Text-to-SQL Capabilities of Large Language Models](http://arxiv.org/abs/2204.00498v1) | Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau | 2022-03-15 | `API:` [langchain_community.utilities...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL), [langchain_community.utilities...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase)
| `2202.00666v5` [Locally Typical Sampling](http://arxiv.org/abs/2202.00666v5) | Clara Meister, Tiago Pimentel, Gian Wiher, et al. | 2022-02-01 | `API:` [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2103.00020v1` [Learning Transferable Visual Models From Natural Language Supervision](http://arxiv.org/abs/2103.00020v1) | Alec Radford, Jong Wook Kim, Chris Hallacy, et al. | 2021-02-26 | `API:` [langchain_experimental.open_clip](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.open_clip)
| `1909.05858v2` [CTRL: A Conditional Transformer Language Model for Controllable Generation](http://arxiv.org/abs/1909.05858v2) | Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al. | 2019-09-11 | `API:` [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `1909.05858v2` [CTRL: A Conditional Transformer Language Model for Controllable Generation](http://arxiv.org/abs/1909.05858v2) | Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al. | 2019-09-11 | `API:` [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `1908.10084v1` [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](http://arxiv.org/abs/1908.10084v1) | Nils Reimers, Iryna Gurevych | 2019-08-27 | `Docs:` [docs/integrations/text_embedding/sentence_transformers](https://python.langchain.com/docs/integrations/text_embedding/sentence_transformers)
## Self-Discover: Large Language Models Self-Compose Reasoning Structures
- **arXiv id:** 2402.03620v1
- **Title:** Self-Discover: Large Language Models Self-Compose Reasoning Structures
- **Authors:** Pei Zhou, Jay Pujara, Xiang Ren, et al.
- **Published Date:** 2024-02-06
- **URL:** http://arxiv.org/abs/2402.03620v1
- **LangChain:**
- **Cookbook:** [self-discover](https://github.com/langchain-ai/langchain/blob/master/cookbook/self-discover.ipynb)
**Abstract:** We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the
task-intrinsic reasoning structures to tackle complex reasoning problems that
are challenging for typical prompting methods. Core to the framework is a
self-discovery process where LLMs select multiple atomic reasoning modules such
as critical thinking and step-by-step thinking, and compose them into an
explicit reasoning structure for LLMs to follow during decoding. SELF-DISCOVER
substantially improves GPT-4 and PaLM 2's performance on challenging reasoning
benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as
much as 32% compared to Chain of Thought (CoT). Furthermore, SELF-DISCOVER
outperforms inference-intensive methods such as CoT-Self-Consistency by more
than 20%, while requiring 10-40x fewer inference compute. Finally, we show that
the self-discovered reasoning structures are universally applicable across
model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share
commonalities with human reasoning patterns.
## RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
- **arXiv id:** 2401.18059v1
- **Title:** RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
- **Authors:** Parth Sarthi, Salman Abdullah, Aditi Tuli, et al.
- **Published Date:** 2024-01-31
- **URL:** http://arxiv.org/abs/2401.18059v1
- **LangChain:**
- **Cookbook:** [RAPTOR](https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb)
**Abstract:** Retrieval-augmented language models can better adapt to changes in world
state and incorporate long-tail knowledge. However, most existing methods
retrieve only short contiguous chunks from a retrieval corpus, limiting
holistic understanding of the overall document context. We introduce the novel
approach of recursively embedding, clustering, and summarizing chunks of text,
constructing a tree with differing levels of summarization from the bottom up.
At inference time, our RAPTOR model retrieves from this tree, integrating
information across lengthy documents at different levels of abstraction.
Controlled experiments show that retrieval with recursive summaries offers
significant improvements over traditional retrieval-augmented LMs on several
tasks. On question-answering tasks that involve complex, multi-step reasoning,
we show state-of-the-art results; for example, by coupling RAPTOR retrieval
with the use of GPT-4, we can improve the best performance on the QuALITY
benchmark by 20% in absolute accuracy.
## Corrective Retrieval Augmented Generation
- **arXiv id:** 2401.15884v2
- **Title:** Corrective Retrieval Augmented Generation
- **Authors:** Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, et al.
- **Published Date:** 2024-01-29
- **URL:** http://arxiv.org/abs/2401.15884v2
- **LangChain:**
- **Cookbook:** [langgraph_crag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_crag.ipynb)
**Abstract:** Large language models (LLMs) inevitably exhibit hallucinations since the
accuracy of generated texts cannot be secured solely by the parametric
knowledge they encapsulate. Although retrieval-augmented generation (RAG) is a
practicable complement to LLMs, it relies heavily on the relevance of retrieved
documents, raising concerns about how the model behaves if retrieval goes
wrong. To this end, we propose the Corrective Retrieval Augmented Generation
(CRAG) to improve the robustness of generation. Specifically, a lightweight
retrieval evaluator is designed to assess the overall quality of retrieved
documents for a query, returning a confidence degree based on which different
knowledge retrieval actions can be triggered. Since retrieval from static and
limited corpora can only return sub-optimal documents, large-scale web searches
are utilized as an extension for augmenting the retrieval results. Besides, a
decompose-then-recompose algorithm is designed for retrieved documents to
selectively focus on key information and filter out irrelevant information in
them. CRAG is plug-and-play and can be seamlessly coupled with various
RAG-based approaches. Experiments on four datasets covering short- and
long-form generation tasks show that CRAG can significantly improve the
performance of RAG-based approaches.
## Mixtral of Experts
- **arXiv id:** 2401.04088v1
- **Title:** Mixtral of Experts
- **Authors:** Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, et al.
- **Published Date:** 2024-01-08
- **URL:** http://arxiv.org/abs/2401.04088v1
- **LangChain:**
- **Cookbook:** [together_ai](https://github.com/langchain-ai/langchain/blob/master/cookbook/together_ai.ipynb)
**Abstract:** We introduce Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model.
Mixtral has the same architecture as Mistral 7B, with the difference that each
layer is composed of 8 feedforward blocks (i.e. experts). For every token, at
each layer, a router network selects two experts to process the current state
and combine their outputs. Even though each token only sees two experts, the
selected experts can be different at each timestep. As a result, each token has
access to 47B parameters, but only uses 13B active parameters during inference.
Mixtral was trained with a context size of 32k tokens and it outperforms or
matches Llama 2 70B and GPT-3.5 across all evaluated benchmarks. In particular,
Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and
multilingual benchmarks. We also provide a model fine-tuned to follow
instructions, Mixtral 8x7B - Instruct, that surpasses GPT-3.5 Turbo,
Claude-2.1, Gemini Pro, and Llama 2 70B - chat model on human benchmarks. Both
the base and instruct models are released under the Apache 2.0 license.
## Dense X Retrieval: What Retrieval Granularity Should We Use?
- **arXiv id:** 2312.06648v2
@@ -91,6 +209,39 @@ average improvement of +7.9 in EM score given entirely noisy retrieved
documents and +10.5 in rejection rates for real-time questions that fall
outside the pre-training knowledge scope.
## Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
- **arXiv id:** 2310.11511v1
- **Title:** Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
- **Authors:** Akari Asai, Zeqiu Wu, Yizhong Wang, et al.
- **Published Date:** 2023-10-17
- **URL:** http://arxiv.org/abs/2310.11511v1
- **LangChain:**
- **Cookbook:** [langgraph_self_rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_self_rag.ipynb)
**Abstract:** Despite their remarkable capabilities, large language models (LLMs) often
produce responses containing factual inaccuracies due to their sole reliance on
the parametric knowledge they encapsulate. Retrieval-Augmented Generation
(RAG), an ad hoc approach that augments LMs with retrieval of relevant
knowledge, decreases such issues. However, indiscriminately retrieving and
incorporating a fixed number of retrieved passages, regardless of whether
retrieval is necessary, or passages are relevant, diminishes LM versatility or
can lead to unhelpful response generation. We introduce a new framework called
Self-Reflective Retrieval-Augmented Generation (Self-RAG) that enhances an LM's
quality and factuality through retrieval and self-reflection. Our framework
trains a single arbitrary LM that adaptively retrieves passages on-demand, and
generates and reflects on retrieved passages and its own generations using
special tokens, called reflection tokens. Generating reflection tokens makes
the LM controllable during the inference phase, enabling it to tailor its
behavior to diverse task requirements. Experiments show that Self-RAG (7B and
13B parameters) significantly outperforms state-of-the-art LLMs and
retrieval-augmented models on a diverse set of tasks. Specifically, Self-RAG
outperforms ChatGPT and retrieval-augmented Llama2-chat on Open-domain QA,
reasoning and fact verification tasks, and it shows significant gains in
improving factuality and citation accuracy for long-form generations relative
to these models.
## Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models
- **arXiv id:** 2310.06117v2
@@ -101,6 +252,7 @@ outside the pre-training knowledge scope.
- **LangChain:**
- **Template:** [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting)
- **Cookbook:** [stepback-qa](https://github.com/langchain-ai/langchain/blob/master/cookbook/stepback-qa.ipynb)
**Abstract:** We present Step-Back Prompting, a simple prompting technique that enables
LLMs to do abstractions to derive high-level concepts and first principles from
@@ -113,6 +265,27 @@ including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back
Prompting improves PaLM-2L performance on MMLU (Physics and Chemistry) by 7%
and 11% respectively, TimeQA by 27%, and MuSiQue by 7%.
## Llama 2: Open Foundation and Fine-Tuned Chat Models
- **arXiv id:** 2307.09288v2
- **Title:** Llama 2: Open Foundation and Fine-Tuned Chat Models
- **Authors:** Hugo Touvron, Louis Martin, Kevin Stone, et al.
- **Published Date:** 2023-07-18
- **URL:** http://arxiv.org/abs/2307.09288v2
- **LangChain:**
- **Cookbook:** [Semi_Structured_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)
**Abstract:** In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs.
## Query Rewriting for Retrieval-Augmented Large Language Models
- **arXiv id:** 2305.14283v3
@@ -123,6 +296,7 @@ and 11% respectively, TimeQA by 27%, and MuSiQue by 7%.
- **LangChain:**
- **Template:** [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read)
- **Cookbook:** [rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb)
**Abstract:** Large Language Models (LLMs) play powerful, black-box readers in the
retrieve-then-read pipeline, making remarkable progress in knowledge-intensive
@@ -152,6 +326,7 @@ for retrieval-augmented LLM.
- **LangChain:**
- **API Reference:** [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot)
- **Cookbook:** [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
**Abstract:** In this paper, we introduce the Tree-of-Thought (ToT) framework, a novel
approach aimed at improving the problem-solving capabilities of auto-regressive
@@ -171,6 +346,132 @@ significantly increase the success rate of Sudoku puzzle solving. Our
implementation of the ToT-based Sudoku solver is available on GitHub:
\url{https://github.com/jieyilong/tree-of-thought-puzzle-solver}.
## Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
- **arXiv id:** 2305.04091v3
- **Title:** Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
- **Authors:** Lei Wang, Wanyu Xu, Yihuai Lan, et al.
- **Published Date:** 2023-05-06
- **URL:** http://arxiv.org/abs/2305.04091v3
- **LangChain:**
- **Cookbook:** [plan_and_execute_agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb)
**Abstract:** Large language models (LLMs) have recently been shown to deliver impressive
performance in various NLP tasks. To tackle multi-step reasoning tasks,
few-shot chain-of-thought (CoT) prompting includes a few manually crafted
step-by-step reasoning demonstrations which enable LLMs to explicitly generate
reasoning steps and improve their reasoning task accuracy. To eliminate the
manual effort, Zero-shot-CoT concatenates the target problem statement with
"Let's think step by step" as an input prompt to LLMs. Despite the success of
Zero-shot-CoT, it still suffers from three pitfalls: calculation errors,
missing-step errors, and semantic misunderstanding errors. To address the
missing-step errors, we propose Plan-and-Solve (PS) Prompting. It consists of
two components: first, devising a plan to divide the entire task into smaller
subtasks, and then carrying out the subtasks according to the plan. To address
the calculation errors and improve the quality of generated reasoning steps, we
extend PS prompting with more detailed instructions and derive PS+ prompting.
We evaluate our proposed prompting strategy on ten datasets across three
reasoning problems. The experimental results over GPT-3 show that our proposed
zero-shot prompting consistently outperforms Zero-shot-CoT across all datasets
by a large margin, is comparable to or exceeds Zero-shot-Program-of-Thought
Prompting, and has comparable performance with 8-shot CoT prompting on the math
reasoning problem. The code can be found at
https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting.
## Visual Instruction Tuning
- **arXiv id:** 2304.08485v2
- **Title:** Visual Instruction Tuning
- **Authors:** Haotian Liu, Chunyuan Li, Qingyang Wu, et al.
- **Published Date:** 2023-04-17
- **URL:** http://arxiv.org/abs/2304.08485v2
- **LangChain:**
- **Cookbook:** [Semi_structured_and_multi_modal_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb), [Semi_structured_multi_modal_RAG_LLaMA2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb)
**Abstract:** Instruction tuning large language models (LLMs) using machine-generated
instruction-following data has improved zero-shot capabilities on new tasks,
but the idea is less explored in the multimodal field. In this paper, we
present the first attempt to use language-only GPT-4 to generate multimodal
language-image instruction-following data. By instruction tuning on such
generated data, we introduce LLaVA: Large Language and Vision Assistant, an
end-to-end trained large multimodal model that connects a vision encoder and
LLM for general-purpose visual and language understanding. Our early experiments
show that LLaVA demonstrates impressive multimodal chat abilities, sometimes
exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and
yields an 85.1% relative score compared with GPT-4 on a synthetic multimodal
instruction-following dataset. When fine-tuned on Science QA, the synergy of
LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make
GPT-4 generated visual instruction tuning data, our model and code base
publicly available.
## Generative Agents: Interactive Simulacra of Human Behavior
- **arXiv id:** 2304.03442v2
- **Title:** Generative Agents: Interactive Simulacra of Human Behavior
- **Authors:** Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al.
- **Published Date:** 2023-04-07
- **URL:** http://arxiv.org/abs/2304.03442v2
- **LangChain:**
- **Cookbook:** [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb), [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb)
**Abstract:** Believable proxies of human behavior can empower interactive applications
ranging from immersive environments to rehearsal spaces for interpersonal
communication to prototyping tools. In this paper, we introduce generative
agents--computational software agents that simulate believable human behavior.
Generative agents wake up, cook breakfast, and head to work; artists paint,
while authors write; they form opinions, notice each other, and initiate
conversations; they remember and reflect on days past as they plan the next
day. To enable generative agents, we describe an architecture that extends a
large language model to store a complete record of the agent's experiences
using natural language, synthesize those memories over time into higher-level
reflections, and retrieve them dynamically to plan behavior. We instantiate
generative agents to populate an interactive sandbox environment inspired by
The Sims, where end users can interact with a small town of twenty five agents
using natural language. In an evaluation, these generative agents produce
believable individual and emergent social behaviors: for example, starting with
only a single user-specified notion that one agent wants to throw a Valentine's
Day party, the agents autonomously spread invitations to the party over the
next two days, make new acquaintances, ask each other out on dates to the
party, and coordinate to show up for the party together at the right time. We
demonstrate through ablation that the components of our agent
architecture--observation, planning, and reflection--each contribute critically
to the believability of agent behavior. By fusing large language models with
computational, interactive agents, this work introduces architectural and
interaction patterns for enabling believable simulations of human behavior.
## CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
- **arXiv id:** 2303.17760v2
- **Title:** CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
- **Authors:** Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al.
- **Published Date:** 2023-03-31
- **URL:** http://arxiv.org/abs/2303.17760v2
- **LangChain:**
- **Cookbook:** [camel_role_playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb)
**Abstract:** The rapid advancement of chat-based language models has led to remarkable
progress in complex task-solving. However, their success heavily relies on
human input to guide the conversation, which can be challenging and
time-consuming. This paper explores the potential of building scalable
techniques to facilitate autonomous cooperation among communicative agents, and
provides insight into their "cognitive" processes. To address the challenges of
achieving autonomous cooperation, we propose a novel communicative agent
framework named role-playing. Our approach involves using inception prompting
to guide chat agents toward task completion while maintaining consistency with
human intentions. We showcase how role-playing can be used to generate
conversational data for studying the behaviors and capabilities of a society of
agents, providing a valuable resource for investigating conversational language
models. In particular, we conduct comprehensive studies on
instruction-following cooperation in multi-agent settings. Our contributions
include introducing a novel communicative agent framework, offering a scalable
approach for studying the cooperative behaviors and capabilities of multi-agent
systems, and open-sourcing our library to support research on communicative
agents and beyond: https://github.com/camel-ai/camel.
## HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face
- **arXiv id:** 2303.17580v4
@@ -181,6 +482,7 @@ implementation of the ToT-based Sudoku solver is available on GitHub:
- **LangChain:**
- **API Reference:** [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents)
- **Cookbook:** [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
**Abstract:** Solving complicated AI tasks with different domains and modalities is a key
step toward artificial general intelligence. While there are numerous AI models
@@ -235,7 +537,7 @@ more than 1/1,000th the compute of GPT-4.
- **URL:** http://arxiv.org/abs/2301.10226v4
- **LangChain:**
- **API Reference:** [langchain_community.llms...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
- **API Reference:** [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI)
**Abstract:** Potential harms of large language models can be mitigated by watermarking
model output, i.e., embedding signals into generated text that are invisible to
@@ -262,6 +564,7 @@ family, and discuss robustness and security.
- **API Reference:** [langchain.chains...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder)
- **Template:** [hyde](https://python.langchain.com/docs/templates/hyde)
- **Cookbook:** [hypothetical_document_embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)
**Abstract:** While dense retrieval has been shown effective and efficient across tasks and
languages, it remains difficult to create effective fully zero-shot dense
@@ -351,7 +654,8 @@ performance across three real-world tasks on multiple LLMs.
- **URL:** http://arxiv.org/abs/2211.10435v2
- **LangChain:**
- **API Reference:** [langchain_experimental.pal_chain...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain)
- **API Reference:** [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain), [langchain_experimental.pal_chain...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain)
- **Cookbook:** [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
**Abstract:** Large language models (LLMs) have recently demonstrated an impressive ability
to perform arithmetic and symbolic reasoning tasks, when provided with a few
@@ -442,7 +746,7 @@ encoders, mine bitexts, and validate the bitexts by training NMT systems.
- **URL:** http://arxiv.org/abs/2204.00498v1
- **LangChain:**
- **API Reference:** [langchain_community.utilities...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase), [langchain_community.utilities...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL)
- **API Reference:** [langchain_community.utilities...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL), [langchain_community.utilities...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase)
**Abstract:** We perform an empirical evaluation of Text-to-SQL capabilities of the Codex
language model. We find that, without any finetuning, Codex is a strong
@@ -461,7 +765,7 @@ few-shot examples.
- **URL:** http://arxiv.org/abs/2202.00666v5
- **LangChain:**
- **API Reference:** [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
- **API Reference:** [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
**Abstract:** Today's probabilistic language generators fall short when it comes to
producing coherent and fluent text despite the fact that the underlying models
@@ -525,7 +829,7 @@ https://github.com/OpenAI/CLIP.
- **URL:** http://arxiv.org/abs/1909.05858v2
- **LangChain:**
- **API Reference:** [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
- **API Reference:** [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
**Abstract:** Large-scale language models show promising text generation capabilities, but
users cannot easily control particular aspects of the generated text. We

View File

@@ -38,7 +38,7 @@ All dependencies in this package are optional to keep the package as lightweight
`langgraph` is an extension of `langchain` aimed at
building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
LangGraph exposes high level interfaces for creating common types of agents, as well as a low-level API for constructing more contr
LangGraph exposes high level interfaces for creating common types of agents, as well as a low-level API for composing custom flows.
### [`langserve`](/docs/langserve)
@@ -58,6 +58,7 @@ A developer platform that lets you debug, test, evaluate, and monitor LLM applic
/>
## LangChain Expression Language (LCEL)
<span data-heading-keywords="lcel"></span>
LangChain Expression Language, or LCEL, is a declarative way to chain LangChain components.
LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we've seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:
@@ -88,6 +89,7 @@ With LCEL, **all** steps are automatically logged to [LangSmith](https://docs.sm
Any chain created with LCEL can be easily deployed using [LangServe](/docs/langserve).
### Runnable interface
<span data-heading-keywords="invoke"></span>
To make it as easy as possible to create custom chains, we've implemented a ["Runnable"](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol. Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about below.
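As a minimal illustrative sketch (not from the page itself), assuming `langchain-openai` is installed and using an example model name, the same `Runnable` methods work on a composed chain:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Prompt templates, chat models, and output parsers are all Runnables,
# and composing them with `|` produces another Runnable.
chain = (
    ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
    | ChatOpenAI(model="gpt-3.5-turbo")  # example model name
    | StrOutputParser()
)

print(chain.invoke({"topic": "bears"}))                     # single input
print(chain.batch([{"topic": "cats"}, {"topic": "dogs"}]))  # list of inputs
for chunk in chain.stream({"topic": "fish"}):               # streamed output
    print(chunk, end="", flush=True)
```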
@@ -128,6 +130,7 @@ LangChain provides standard, extendable interfaces and external integrations for
Some components LangChain implements, some components we rely on third-party integrations for, and others are a mix.
### Chat models
<span data-heading-keywords="chat model,chat models"></span>
Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text).
These are traditionally newer models (older models are generally `LLMs`, see above).
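A minimal sketch of that message-in, message-out interface, assuming `langchain-openai` is installed (the model name is only an example):

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

chat = ChatOpenAI(model="gpt-4o")  # example model name
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is a chat model?"),
]
# Input is a sequence of messages; output is an AIMessage.
print(chat.invoke(messages).content)
```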
@@ -151,6 +154,7 @@ Please see the [tool calling section](/docs/concepts/#functiontool-calling) for
:::
### LLMs
<span data-heading-keywords="llm,llms"></span>
Language models that take a string as input and return a string.
These are traditionally older models (newer models generally are `ChatModels`, see below).
@@ -174,7 +178,7 @@ The `content` property describes the content of the message.
This can be a few different things:
- A string (most models deal with this type of content)
- A List of dictionaries (this is used for multi-modal input, where the dictionary contains information about that input type and that input location)
- A List of dictionaries (this is used for multimodal input, where the dictionary contains information about that input type and that input location)
#### HumanMessage
@@ -214,6 +218,8 @@ This represents the result of a tool call. This is distinct from a FunctionMessa
### Prompt templates
<span data-heading-keywords="prompt,prompttemplate,chatprompttemplate"></span>
Prompt templates help to translate user input and parameters into instructions for a language model.
This can be used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output.
@@ -258,6 +264,7 @@ The first is a system message, that has no variables to format.
The second is a HumanMessage, and will be formatted by the `topic` variable the user passes in.
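A sketch of such a two-message template; the exact prompt wording here is invented for illustration:

```python
from langchain_core.prompts import ChatPromptTemplate

prompt_template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),  # no variables to format
    ("user", "Tell me a joke about {topic}"),   # formatted with the user's `topic`
])

# Invoking the template fills in the variables and returns a prompt value
# containing the formatted list of messages.
print(prompt_template.invoke({"topic": "cats"}))
```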
#### MessagesPlaceholder
<span data-heading-keywords="messagesplaceholder"></span>
This prompt template is responsible for adding a list of messages in a particular place.
In the above ChatPromptTemplate, we saw how we could format two messages, each one a string.
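A minimal sketch of `MessagesPlaceholder`; the placeholder name `msgs` is chosen only for illustration:

```python
from langchain_core.messages import HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt_template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    MessagesPlaceholder("msgs"),  # an entire list of messages is inserted here
])

# The list supplied under "msgs" is slotted in where the placeholder sits.
print(prompt_template.invoke({"msgs": [HumanMessage(content="hi!")]}))
```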
@@ -297,6 +304,7 @@ Example Selectors are classes responsible for selecting and then formatting exam
### Output parsers
<span data-heading-keywords="output parser"></span>
:::note
@@ -350,6 +358,7 @@ This `ChatHistory` will keep track of inputs and outputs of the underlying chain
Future interactions will then load those messages and pass them into the chain as part of the input.
### Documents
<span data-heading-keywords="document,documents"></span>
A Document object in LangChain contains information about some data. It has two attributes:
@@ -357,6 +366,7 @@ A Document object in LangChain contains information about some data. It has two
- `metadata: dict`: Arbitrary metadata associated with this document. Can track the document id, file name, etc.
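A small illustrative sketch (the content and metadata values are made up):

```python
from langchain_core.documents import Document

doc = Document(
    page_content="LangChain is a framework for building LLM-powered applications.",
    metadata={"source": "example.txt", "page": 1},  # arbitrary example metadata
)
print(doc.page_content)
print(doc.metadata)
```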
### Document loaders
<span data-heading-keywords="document loader,document loaders"></span>
These classes load Document objects. LangChain has hundreds of integrations with various data sources to load data from: Slack, Notion, Google Drive, etc.
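A hedged example with one such loader; the file path is hypothetical, and the point is simply that `.load()` returns a list of `Document` objects:

```python
from langchain_community.document_loaders import CSVLoader

loader = CSVLoader(file_path="data/example.csv")  # hypothetical path
docs = loader.load()  # one Document per CSV row
print(len(docs), docs[0].metadata)
```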
@@ -390,6 +400,8 @@ That means there are two different axes along which you can customize your text
2. How the chunk size is measured
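For example, a minimal sketch with the recursive character splitter (the chunk sizes are arbitrary):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,    # maximum size of each chunk (characters by default)
    chunk_overlap=20,  # overlap between adjacent chunks
)
chunks = splitter.split_text("Some long document text. " * 50)
print(len(chunks), chunks[0])
```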
### Embedding models
<span data-heading-keywords="embedding,embeddings"></span>
The Embeddings class is a class designed for interfacing with text embedding models. There are lots of embedding model providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them.
Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
@@ -397,6 +409,8 @@ Embeddings create a vector representation of a piece of text. This is useful bec
The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
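A minimal sketch of the two methods, assuming `langchain-openai` is installed (OpenAI is just one example provider):

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# Embed several documents (the texts to be searched over)...
doc_vectors = embeddings.embed_documents(["hello world", "goodbye world"])
# ...and embed a single query (the search query itself).
query_vector = embeddings.embed_query("greeting")

print(len(doc_vectors), len(query_vector))
```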
### Vector stores
<span data-heading-keywords="vector,vectorstore,vectorstores,vector store,vector stores"></span>
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors,
and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query.
A vector store takes care of storing embedded data and performing vector search for you.
@@ -409,6 +423,8 @@ retriever = vectorstore.as_retriever()
```
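To make this concrete, here is a hedged sketch using FAISS as one example store (assumes `faiss-cpu` and `langchain-openai` are installed; the texts are illustrative):

```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Embed the texts and store the vectors in a FAISS index.
vectorstore = FAISS.from_texts(
    ["LangChain provides abstractions for working with LLMs"],
    embedding=OpenAIEmbeddings(),
)
# Embed the query and return the stored documents most similar to it.
docs = vectorstore.similarity_search("What does LangChain do?")
print(docs[0].page_content)
```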
### Retrievers
<span data-heading-keywords="retriever,retrievers"></span>
A retriever is an interface that returns documents given an unstructured query.
It is more general than a vector store.
A retriever does not need to be able to store documents, only to return (or retrieve) them.
@@ -417,6 +433,7 @@ Retrievers can be created from vectorstores, but are also broad enough to includ
Retrievers accept a string query as input and return a list of `Document` objects as output.
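A self-contained sketch under the same assumptions as the vector store example above (`faiss-cpu` and `langchain-openai` installed):

```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

retriever = FAISS.from_texts(
    ["Retrievers return documents for a query"], embedding=OpenAIEmbeddings()
).as_retriever()

# Retrievers implement the Runnable interface: string query in, list of Documents out.
print(retriever.invoke("What do retrievers return?"))
```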
### Tools
<span data-heading-keywords="tool,tools"></span>
Tools are interfaces that an agent, a chain, or a chat model / LLM can use to interact with the world.
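A minimal sketch of defining a custom tool with the `@tool` decorator (the function itself is invented for illustration):

```python
from langchain_core.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""  # the docstring becomes the tool's description
    return a * b


# A tool exposes a name, a description, and an argument schema, and can be invoked directly.
print(multiply.name, multiply.description)
print(multiply.invoke({"a": 6, "b": 7}))
```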
@@ -461,7 +478,7 @@ tools = toolkit.get_tools()
By themselves, language models can't take actions - they just output text.
A big use case for LangChain is creating **agents**.
Agents are systems that use an LLM as a reasoning enginer to determine which actions to take and what the inputs to those actions should be.
Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be.
The results of those actions can then be fed back into the agent and it determines whether more actions are needed, or whether it is okay to finish.
[LangGraph](https://github.com/langchain-ai/langgraph) is an extension of LangChain specifically aimed at creating highly controllable and customizable agents.
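A hedged sketch of the LangGraph route, assuming the `langgraph` and `langchain-openai` packages are installed; the weather tool and the model name are placeholders:

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


@tool
def get_weather(city: str) -> str:
    """Look up the weather for a city (stubbed out for illustration)."""
    return f"It is always sunny in {city}."


# The LLM decides which tools to call and with what inputs;
# tool results are fed back in until the agent decides it is done.
agent = create_react_agent(ChatOpenAI(model="gpt-4o"), [get_weather])
result = agent.invoke({"messages": [("user", "What's the weather in Paris?")]})
print(result["messages"][-1].content)
```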
@@ -476,86 +493,81 @@ If you are still using AgentExecutor, do not fear: we still have a guide on [how
It is recommended, however, that you start to transition to LangGraph.
In order to assist in this we have put together a [transition guide on how to do so](/docs/how_to/migrate_agent)
### Multimodal
Some models are multimodal, accepting images, audio and even video as inputs. These are still less common, meaning model providers haven't standardized on the "best" way to define the API. Multimodal **outputs** are even less common. As such, we've kept our multimodal abstractions fairly lightweight and plan to further solidify the multimodal APIs and interaction patterns as the field matures.
In LangChain, most chat models that support multimodal inputs also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.
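A hedged sketch of the content-blocks format for an image input; the image URL is a placeholder and `gpt-4o` is just one example of a vision-capable model:

```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "https://example.com/picture.png"}},
    ]
)
print(ChatOpenAI(model="gpt-4o").invoke([message]).content)
```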
### Callbacks
LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.
You can subscribe to these events by using the `callbacks` argument available throughout the API. This argument is a list of handler objects, which are expected to implement one or more of the methods described below in more detail.
#### Callback Events
| Event | Event Trigger | Associated Method |
|------------------|---------------------------------------------|-----------------------|
| Chat model start | When a chat model starts | `on_chat_model_start` |
| LLM start | When an LLM starts | `on_llm_start` |
| LLM new token | When an LLM OR chat model emits a new token | `on_llm_new_token` |
| LLM ends | When an LLM OR chat model ends | `on_llm_end` |
| LLM errors | When an LLM OR chat model errors | `on_llm_error` |
| Chain start | When a chain starts running | `on_chain_start` |
| Chain end | When a chain ends | `on_chain_end` |
| Chain error | When a chain errors | `on_chain_error` |
| Tool start | When a tool starts running | `on_tool_start` |
| Tool end | When a tool ends | `on_tool_end` |
| Tool error | When a tool errors | `on_tool_error` |
| Agent action | When an agent takes an action | `on_agent_action` |
| Agent finish | When an agent ends | `on_agent_finish` |
| Retriever start | When a retriever starts | `on_retriever_start` |
| Retriever end | When a retriever ends | `on_retriever_end` |
| Retriever error | When a retriever errors | `on_retriever_error` |
| Text | When arbitrary text is run | `on_text` |
| Retry | When a retry event is run | `on_retry` |
#### Callback handlers
`CallbackHandlers` are objects that implement the [`CallbackHandler`](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain-core-callbacks-base-basecallbackhandler) interface, which has a method for each event that can be subscribed to.
The `CallbackManager` will call the appropriate method on each handler when the event is triggered.
Callback handlers can either be `sync` or `async`:

* Sync callback handlers implement the [BaseCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html) interface.
* Async callback handlers implement the [AsyncCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.AsyncCallbackHandler.html) interface.

The sync interface looks like this (imports added here so the snippet stands alone):

```python
from typing import Any, Dict, List, Union

from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.messages import BaseMessage
from langchain_core.outputs import LLMResult


class BaseCallbackHandler:
    """Base callback handler that can be used to handle callbacks from langchain."""

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        """Run when LLM starts running."""

    def on_chat_model_start(
        self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any
    ) -> Any:
        """Run when Chat Model starts running."""

    def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:
        """Run on new LLM token. Only available when streaming is enabled."""

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:
        """Run when LLM ends running."""

    def on_llm_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when LLM errors."""

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
    ) -> Any:
        """Run when chain starts running."""

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> Any:
        """Run when chain ends running."""

    def on_chain_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when chain errors."""

    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
    ) -> Any:
        """Run when tool starts running."""

    def on_tool_end(self, output: Any, **kwargs: Any) -> Any:
        """Run when tool ends running."""

    def on_tool_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when tool errors."""

    def on_text(self, text: str, **kwargs: Any) -> Any:
        """Run on arbitrary text."""

    def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
        """Run on agent action."""

    def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:
        """Run on agent end."""
```
At runtime, LangChain configures an appropriate callback manager (e.g., [CallbackManager](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.CallbackManager.html) or [AsyncCallbackManager](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.AsyncCallbackManager.html)), which will be responsible for calling the appropriate method on each "registered" callback handler when the event is triggered.
#### Passing callbacks
The `callbacks` property is available on most objects throughout the API (Models, Tools, Agents, etc.) in two different places:
- **Constructor callbacks**: defined in the constructor, e.g. `ChatAnthropic(callbacks=[handler], tags=['a-tag'])`. In this case, the callbacks will be used for all calls made on that object, and will be scoped to that object only.
For example, if you initialize a chat model with constructor callbacks, then use it within a chain, the callbacks will only be invoked for calls to that model.
- **Request callbacks**: passed into the `invoke` method used for issuing a request. In this case, the callbacks will be used for that specific request only, and all sub-requests that it contains (e.g. a call to a sequence that triggers a call to a model, which uses the same handler passed in the `invoke()` method).
In the `invoke()` method, callbacks are passed through the `config` parameter.
The callbacks are available on most objects throughout the API (Models, Tools, Agents, etc.) in two different places:
- **Request time callbacks**: Passed at the time of the request in addition to the input data.
Available on all standard `Runnable` objects. These callbacks are INHERITED by all children
of the object they are defined on. For example, `chain.invoke({"number": 25}, {"callbacks": [handler]})`.
- **Constructor callbacks**: `chain = TheNameOfSomeChain(callbacks=[handler])`. These callbacks
are passed as arguments to the constructor of the object. The callbacks are scoped
only to the object they are defined on, and are **not** inherited by any children of the object.
:::warning
Constructor callbacks are scoped only to the object they are defined on. They are **not** inherited by children
of the object.
:::
If you're creating a custom chain or runnable, you need to remember to propagate request time
callbacks to any child objects.
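For illustration, a minimal sketch of both styles, assuming `langchain-openai` is installed (the handler just prints events; the model name is an example):

```python
from langchain_core.callbacks import StdOutCallbackHandler
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

handler = StdOutCallbackHandler()
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")

# Constructor callbacks: scoped to this model instance only, not inherited by children.
llm = ChatOpenAI(model="gpt-3.5-turbo", callbacks=[handler])

# Request-time callbacks: passed via `config`, inherited by every child run in the chain.
chain = prompt | llm
chain.invoke({"topic": "callbacks"}, config={"callbacks": [handler]})
```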
:::important Async in Python<=3.10
Any `RunnableLambda`, `RunnableGenerator`, or `Tool` that invokes other runnables
and is running async in Python<=3.10 will have to propagate callbacks to child
objects manually. This is because LangChain cannot automatically propagate
callbacks to child objects in this case.
This is a common reason why you may fail to see events being emitted from custom
runnables or tools.
:::
## Techniques
@@ -653,3 +665,7 @@ Table columns:
| Character | [CharacterTextSplitter](/docs/how_to/character_text_splitter/) | A user defined character | | Splits text based on a user defined character. One of the simpler methods. |
| Semantic Chunker (Experimental) | [SemanticChunker](/docs/how_to/semantic-chunker/) | Sentences | | First splits on sentences. Then combines ones next to each other if they are semantically similar enough. Taken from [Greg Kamradt](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/tutorials/LevelsOfTextSplitting/5_Levels_Of_Text_Splitting.ipynb) |
| Integration: AI21 Semantic | [AI21SemanticTextSplitter](/docs/integrations/document_transformers/ai21_semantic_text_splitter/) |  | ✅ | Identifies distinct topics that form coherent pieces of text and splits along those. |

View File

@@ -71,6 +71,8 @@ make docs_clean
make api_docs_clean
```
Next, you can build the documentation as outlined below:
```bash
@@ -78,6 +80,18 @@ make docs_build
make api_docs_build
```
:::tip
The `make api_docs_build` command takes a long time. If you're making cosmetic changes to the API docs and want to see how they look, use:
```bash
make api_docs_quick_preview
```
which will just build a small subset of the API reference.
:::
Finally, run the link checker to ensure all links are valid:
```bash

Binary file not shown.

View File

@@ -15,18 +15,18 @@
"id": "f4c03f40-1328-412d-8a48-1db0cd481b77",
"metadata": {},
"source": [
"# Build an Agent\n",
"# Build an Agent with AgentExecutor (Legacy)\n",
"\n",
":::{.callout-important}\n",
"This section will cover building with the legacy LangChain AgentExecutor. These are fine for getting started, but past a certain point, you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph Agents](/docs/concepts/#langgraph) or the [migration guide](/docs/how_to/migrate_agent/)\n",
":::\n",
"\n",
"By themselves, language models can't take actions - they just output text.\n",
"A big use case for LangChain is creating **agents**.\n",
"Agents are systems that use an LLM as a reasoning enginer to determine which actions to take and what the inputs to those actions should be.\n",
"The results of those actions can then be fed back into the agent and it determine whether more actions are needed, or whether it is okay to finish.\n",
"Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be.\n",
"The results of those actions can then be fed back into the agent and it determines whether more actions are needed, or whether it is okay to finish.\n",
"\n",
"In this tutorial we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.\n",
"\n",
":::{.callout-important}\n",
"This section will cover building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd reccommend checking out [LangGraph](/docs/concepts/#langgraph)\n",
":::\n",
"In this tutorial, we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.\n",
"\n",
"## Concepts\n",
"\n",
@@ -34,7 +34,7 @@
"- Using [language models](/docs/concepts/#chat-models), in particular their tool calling ability\n",
"- Creating a [Retriever](/docs/concepts/#retrievers) to expose specific information to our agent\n",
"- Using a Search [Tool](/docs/concepts/#tools) to look up things online\n",
"- [`Chat History`](/docs/concepts/#chat-history), which allows a chatbot to \"remember\" past interactions and take them into account when responding to followup questions. \n",
"- [`Chat History`](/docs/concepts/#chat-history), which allows a chatbot to \"remember\" past interactions and take them into account when responding to follow-up questions. \n",
"- Debugging and tracing your application using [LangSmith](/docs/concepts/#langsmith)\n",
"\n",
"## Setup\n",

View File

@@ -12,12 +12,20 @@
"\n",
"- [Callbacks](/docs/concepts/#callbacks)\n",
"- [Custom callback handlers](/docs/how_to/custom_callbacks)\n",
"\n",
":::\n",
"\n",
"If you are planning to use the async APIs, it is recommended to use and extend [`AsyncCallbackHandler`](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.AsyncCallbackHandler.html) to avoid blocking the runloop.\n",
"If you are planning to use the async APIs, it is recommended to use and extend [`AsyncCallbackHandler`](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.AsyncCallbackHandler.html) to avoid blocking the event.\n",
"\n",
"**Note**: if you use a sync `CallbackHandler` while using an async method to run your LLM / Chain / Tool / Agent, it will still work. However, under the hood, it will be called with [`run_in_executor`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor) which can cause issues if your `CallbackHandler` is not thread-safe."
"\n",
":::{.callout-warning}\n",
"If you use a sync `CallbackHandler` while using an async method to run your LLM / Chain / Tool / Agent, it will still work. However, under the hood, it will be called with [`run_in_executor`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor) which can cause issues if your `CallbackHandler` is not thread-safe.\n",
":::\n",
"\n",
":::{.callout-danger}\n",
"\n",
"If you're on `python<=3.10`, you need to remember to propagate `config` or `callbacks` when invoking other `runnable` from within a `RunnableLambda`, `RunnableGenerator` or `@tool`. If you do not do this,\n",
"the callbacks will not be propagated to the child runnables being invoked.\n",
":::"
]
},
{
@@ -149,7 +157,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -163,9 +171,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.9.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to attach callbacks to a module\n",
"# How to attach callbacks to a runnable\n",
"\n",
":::info Prerequisites\n",
"\n",
@@ -19,6 +19,11 @@
"\n",
"If you are composing a chain of runnables and want to reuse callbacks across multiple executions, you can attach callbacks with the [`.with_config()`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_config) method. This saves you the need to pass callbacks in each time you invoke the chain.\n",
"\n",
":::{.callout-important}\n",
"\n",
"`with_config()` binds a configuration which will be interpreted as **runtime** configuration. So these callbacks will propagate to all child components.\n",
":::\n",
"\n",
"Here's an example:"
]
},
@@ -41,7 +46,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"outputs": [
{
@@ -52,17 +57,17 @@
"Chain ChatPromptTemplate started\n",
"Chain ended, outputs: messages=[HumanMessage(content='What is 1 + 2?')]\n",
"Chat model started\n",
"Chat model ended, response: generations=[[ChatGeneration(text='1 + 2 = 3', message=AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01LjC57hgrmzVhEma4yXdLKF', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-393950f9-79b9-4fd6-ac6e-50d93d75b906-0'))]] llm_output={'id': 'msg_01LjC57hgrmzVhEma4yXdLKF', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} run=None\n",
"Chain ended, outputs: content='1 + 2 = 3' response_metadata={'id': 'msg_01LjC57hgrmzVhEma4yXdLKF', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} id='run-393950f9-79b9-4fd6-ac6e-50d93d75b906-0'\n"
"Chat model ended, response: generations=[[ChatGeneration(text='1 + 2 = 3', message=AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-d6bcfd72-9c94-466d-bac0-f39e456ad6e3-0'))]] llm_output={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} run=None\n",
"Chain ended, outputs: content='1 + 2 = 3' response_metadata={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} id='run-d6bcfd72-9c94-466d-bac0-f39e456ad6e3-0'\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01LjC57hgrmzVhEma4yXdLKF', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-393950f9-79b9-4fd6-ac6e-50d93d75b906-0')"
"AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-d6bcfd72-9c94-466d-bac0-f39e456ad6e3-0')"
]
},
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
@@ -122,7 +127,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -136,9 +141,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to pass callbacks into a module constructor\n",
"# How to propagate callbacks constructor\n",
"\n",
":::info Prerequisites\n",
"\n",
@@ -15,7 +15,12 @@
"\n",
":::\n",
"\n",
"Most LangChain modules allow you to pass `callbacks` directly into the constructor. In this case, the callbacks will only be called for that instance (and any nested runs).\n",
"Most LangChain modules allow you to pass `callbacks` directly into the constructor (i.e., initializer). In this case, the callbacks will only be called for that instance (and any nested runs).\n",
"\n",
":::{.callout-warning}\n",
"Constructor callbacks are scoped only to the object they are defined on. They are **not** inherited by children of the object. This can lead to confusing behavior,\n",
"and it's generally better to pass callbacks as a run time argument.\n",
":::\n",
"\n",
"Here's an example:"
]
@@ -114,7 +119,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -128,9 +133,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -1,5 +1,19 @@
{
"cells": [
{
"cell_type": "raw",
"id": "f781411d",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"keywords: [charactertextsplitter]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "c3ee8d00",

View File

@@ -0,0 +1,157 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "cfdf4f09-8125-4ed1-8063-6feed57da8a3",
"metadata": {},
"source": [
"# How to let your end users choose their model\n",
"\n",
"Many LLM applications let end users specify what model provider and model they want the application to be powered by. This requires writing some logic to initialize different ChatModels based on some user configuration. The `init_chat_model()` helper method makes it easy to initialize a number of different model integrations without having to worry about import paths and class names.\n",
"\n",
":::tip Supported models\n",
"\n",
"See the [init_chat_model()](https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.init_chat_model.html) API reference for a full list of supported integrations.\n",
"\n",
"Make sure you have the integration packages installed for any model providers you want to support. E.g. you should have `langchain-openai` installed to init an OpenAI model.\n",
"\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "165b0de6-9ae3-4e3d-aa98-4fc8a97c4a06",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain langchain-openai langchain-anthropic langchain-google-vertexai"
]
},
{
"cell_type": "markdown",
"id": "ea2c9f57-a796-45f8-b6f4-3efd3f361a9b",
"metadata": {},
"source": [
"## Basic usage"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "79e14913-803c-4382-9009-5c6af3d75d35",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"GPT-4o: I'm an AI created by OpenAI, and I don't have a personal name. You can call me Assistant! How can I help you today?\n",
"\n",
"Claude Opus: My name is Claude. It's nice to meet you!\n",
"\n",
"Gemini 1.5: I am a large language model, trained by Google. I do not have a name. \n",
"\n",
"\n"
]
}
],
"source": [
"from langchain.chat_models import init_chat_model\n",
"\n",
"# Returns a langchain_openai.ChatOpenAI instance.\n",
"gpt_4o = init_chat_model(\"gpt-4o\", model_provider=\"openai\", temperature=0)\n",
"# Returns a langchain_anthropic.ChatAnthropic instance.\n",
"claude_opus = init_chat_model(\n",
" \"claude-3-opus-20240229\", model_provider=\"anthropic\", temperature=0\n",
")\n",
"# Returns a langchain_google_vertexai.ChatVertexAI instance.\n",
"gemini_15 = init_chat_model(\n",
" \"gemini-1.5-pro\", model_provider=\"google_vertexai\", temperature=0\n",
")\n",
"\n",
"# Since all model integrations implement the ChatModel interface, you can use them in the same way.\n",
"print(\"GPT-4o: \" + gpt_4o.invoke(\"what's your name\").content + \"\\n\")\n",
"print(\"Claude Opus: \" + claude_opus.invoke(\"what's your name\").content + \"\\n\")\n",
"print(\"Gemini 1.5: \" + gemini_15.invoke(\"what's your name\").content + \"\\n\")"
]
},
{
"cell_type": "markdown",
"id": "fff9a4c8-b6ee-4a1a-8d3d-0ecaa312d4ed",
"metadata": {},
"source": [
"## Simple config example"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75c25d39-bf47-4b51-a6c6-64d9c572bfd6",
"metadata": {},
"outputs": [],
"source": [
"user_config = {\n",
" \"model\": \"...user-specified...\",\n",
" \"model_provider\": \"...user-specified...\",\n",
" \"temperature\": 0,\n",
" \"max_tokens\": 1000,\n",
"}\n",
"\n",
"llm = init_chat_model(**user_config)\n",
"llm.invoke(\"what's your name\")"
]
},
{
"cell_type": "markdown",
"id": "f811f219-5e78-4b62-b495-915d52a22532",
"metadata": {},
"source": [
"## Inferring model provider\n",
"\n",
"For common and distinct model names `init_chat_model()` will attempt to infer the model provider. See the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.init_chat_model.html) for a full list of inference behavior. E.g. any model that starts with `gpt-3...` or `gpt-4...` will be inferred as using model provider `openai`."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "0378ccc6-95bc-4d50-be50-fccc193f0a71",
"metadata": {},
"outputs": [],
"source": [
"gpt_4o = init_chat_model(\"gpt-4o\", temperature=0)\n",
"claude_opus = init_chat_model(\"claude-3-opus-20240229\", temperature=0)\n",
"gemini_15 = init_chat_model(\"gemini-1.5-pro\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "da07b5c0-d2e6-42e4-bfcd-2efcfaae6221",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"language": "python",
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
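Building on the `init_chat_model(**user_config)` pattern shown above, a common follow-up is to validate user-supplied configuration before constructing a model. The sketch below is illustrative only: the allowlist and helper name are hypothetical, and it assumes the relevant integration packages and API keys are available.

```python
from langchain.chat_models import init_chat_model

# Hypothetical allowlist of providers this application supports.
ALLOWED_PROVIDERS = {"openai", "anthropic", "google_vertexai"}


def build_llm_from_user_config(user_config: dict):
    """Reject unsupported providers, then defer to init_chat_model()."""
    provider = user_config.get("model_provider")
    if provider is not None and provider not in ALLOWED_PROVIDERS:
        raise ValueError(f"Unsupported model provider: {provider!r}")
    return init_chat_model(**user_config)


llm = build_llm_from_user_config(
    {"model": "gpt-4o", "model_provider": "openai", "temperature": 0}
)
print(llm.invoke("what's your name").content)
```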

View File

@@ -14,35 +14,51 @@
"\n",
":::\n",
"\n",
"Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls."
"Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n",
"\n",
"This guide requires `langchain-openai >= 0.1.8`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9c7d1338-dd1b-4d06-b33d-d5cffc49fd6a",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "1a55e87a-3291-4e7f-8e8e-4c69b0854384",
"id": "598ae1e2-a52d-4459-81fd-cdc68b06742a",
"metadata": {},
"source": [
"## Using AIMessage.response_metadata\n",
"## Using LangSmith\n",
"\n",
"A number of model providers return token usage information as part of the chat generation response. When available, this is included in the [`AIMessage.response_metadata`](/docs/how_to/response_metadata) field. Here's an example with OpenAI:"
"You can use [LangSmith](https://www.langchain.com/langsmith) to help track token usage in your LLM application. See the [LangSmith quick start guide](https://docs.smith.langchain.com/).\n",
"\n",
"## Using AIMessage.usage_metadata\n",
"\n",
"A number of model providers return token usage information as part of the chat generation response. When available, this information will be included on the `AIMessage` objects produced by the corresponding model.\n",
"\n",
"LangChain `AIMessage` objects include a [usage_metadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.usage_metadata) attribute. When populated, this attribute will be a [UsageMetadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.UsageMetadata.html) dictionary with standard keys (e.g., `\"input_tokens\"` and `\"output_tokens\"`).\n",
"\n",
"Examples:\n",
"\n",
"**OpenAI**:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "467ccdeb-6b62-45e5-816e-167cd24d2586",
"id": "b39bf807-4125-4db4-bbf7-28a46afff6b4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'token_usage': {'completion_tokens': 225,\n",
" 'prompt_tokens': 17,\n",
" 'total_tokens': 242},\n",
" 'model_name': 'gpt-4-turbo',\n",
" 'system_fingerprint': 'fp_76f018034d',\n",
" 'finish_reason': 'stop',\n",
" 'logprobs': None}"
"{'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}"
]
},
"execution_count": 1,
@@ -51,37 +67,33 @@
}
],
"source": [
"# !pip install -qU langchain-openai\n",
"# # !pip install -qU langchain-openai\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4-turbo\")\n",
"msg = llm.invoke([(\"human\", \"What's the oldest known example of cuneiform\")])\n",
"msg.response_metadata"
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"openai_response = llm.invoke(\"hello\")\n",
"openai_response.usage_metadata"
]
},
{
"cell_type": "markdown",
"id": "9d5026e9-3ad4-41e6-9946-9f1a26f4a21f",
"id": "2299c44a-2fe6-4d52-a6a2-99ff6d231c73",
"metadata": {},
"source": [
"And here's an example with Anthropic:"
"**Anthropic**:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "145404f1-e088-4824-b468-236c486a9903",
"id": "9c82ff80-ec4e-4049-b019-5f0bbd7df82a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'id': 'msg_01P61rdHbapEo6h3fjpfpCQT',\n",
" 'model': 'claude-3-sonnet-20240229',\n",
" 'stop_reason': 'end_turn',\n",
" 'stop_sequence': None,\n",
" 'usage': {'input_tokens': 17, 'output_tokens': 306}}"
"{'input_tokens': 8, 'output_tokens': 12, 'total_tokens': 20}"
]
},
"execution_count": 2,
@@ -94,9 +106,222 @@
"\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\")\n",
"msg = llm.invoke([(\"human\", \"What's the oldest known example of cuneiform\")])\n",
"msg.response_metadata"
"llm = ChatAnthropic(model=\"claude-3-haiku-20240307\")\n",
"anthropic_response = llm.invoke(\"hello\")\n",
"anthropic_response.usage_metadata"
]
},
{
"cell_type": "markdown",
"id": "6d4efc15-ba9f-4b3d-9278-8e01f99f263f",
"metadata": {},
"source": [
"### Using AIMessage.response_metadata\n",
"\n",
"Metadata from the model response is also included in the AIMessage [response_metadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.response_metadata) attribute. These data are typically not standardized. Note that different providers adopt different conventions for representing token counts:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f156f9da-21f2-4c81-a714-54cbf9ad393e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI: {'completion_tokens': 9, 'prompt_tokens': 8, 'total_tokens': 17}\n",
"\n",
"Anthropic: {'input_tokens': 8, 'output_tokens': 12}\n"
]
}
],
"source": [
"print(f'OpenAI: {openai_response.response_metadata[\"token_usage\"]}\\n')\n",
"print(f'Anthropic: {anthropic_response.response_metadata[\"usage\"]}')"
]
},
{
"cell_type": "markdown",
"id": "b4ef2c43-0ff6-49eb-9782-e4070c9da8d7",
"metadata": {},
"source": [
"### Streaming\n",
"\n",
"Some providers support token count metadata in a streaming context.\n",
"\n",
"#### OpenAI\n",
"\n",
"For example, OpenAI will return a message [chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html) at the end of a stream with token usage information. This behavior is supported by `langchain-openai >= 0.1.8` and can be enabled by setting `stream_options={\"include_usage\": True}`.\n",
"\n",
"```{=mdx}\n",
":::note\n",
"By default, the last message chunk in a stream will include a `\"finish_reason\"` in the message's `response_metadata` attribute. If we include token usage in streaming mode, an additional chunk containing usage metadata will be added to the end of the stream, such that `\"finish_reason\"` appears on the second to last message chunk.\n",
":::\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "07f0c872-6b6c-4fed-a129-9b5a858505be",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='Hello' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='!' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' How' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' can' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' I' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' assist' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' you' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' today' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='?' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='' response_metadata={'finish_reason': 'stop'} id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}\n"
]
}
],
"source": [
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"aggregate = None\n",
"for chunk in llm.stream(\"hello\", stream_options={\"include_usage\": True}):\n",
" print(chunk)\n",
" aggregate = chunk if aggregate is None else aggregate + chunk"
]
},
{
"cell_type": "markdown",
"id": "dd809ded-8b13-4d5f-be5e-277b79d51802",
"metadata": {},
"source": [
"Note that the usage metadata will be included in the sum of the individual message chunks:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "3db7bc03-a7d4-4704-92ab-f8ba92ef59ae",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hello! How can I assist you today?\n",
"{'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}\n"
]
}
],
"source": [
"print(aggregate.content)\n",
"print(aggregate.usage_metadata)"
]
},
{
"cell_type": "markdown",
"id": "7dba63e8-0ed7-4533-8f0f-78e19c38a25c",
"metadata": {},
"source": [
"To disable streaming token counts for OpenAI, set `\"include_usage\"` to False in `stream_options`, or omit it from the parameters:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "67117f2b-ce68-4c1e-9556-2d3849f90e1b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content='Hello' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content='!' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' How' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' can' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' I' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' assist' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' you' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' today' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content='?' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content='' response_metadata={'finish_reason': 'stop'} id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n"
]
}
],
"source": [
"aggregate = None\n",
"for chunk in llm.stream(\"hello\"):\n",
" print(chunk)"
]
},
{
"cell_type": "markdown",
"id": "6a5d9617-be3a-419a-9276-de9c29fa50ae",
"metadata": {},
"source": [
"You can also enable streaming token usage by setting `model_kwargs` when instantiating the chat model. This can be useful when incorporating chat models into LangChain [chains](/docs/concepts#langchain-expression-language-lcel): usage metadata can be monitored when [streaming intermediate steps](/docs/how_to/streaming#using-stream-events) or using tracing software such as [LangSmith](https://docs.smith.langchain.com/).\n",
"\n",
"See the below example, where we return output structured to a desired schema, but can still observe token usage streamed from intermediate steps."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "57dec1fb-bd9c-4c98-8798-8fbbe67f6b2c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Token usage: {'input_tokens': 79, 'output_tokens': 23, 'total_tokens': 102}\n",
"\n",
"setup='Why was the math book sad?' punchline='Because it had too many problems.'\n"
]
}
],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Joke(BaseModel):\n",
" \"\"\"Joke to tell user.\"\"\"\n",
"\n",
" setup: str = Field(description=\"question to set up a joke\")\n",
" punchline: str = Field(description=\"answer to resolve the joke\")\n",
"\n",
"\n",
"llm = ChatOpenAI(\n",
" model=\"gpt-3.5-turbo-0125\",\n",
" model_kwargs={\"stream_options\": {\"include_usage\": True}},\n",
")\n",
"# Under the hood, .with_structured_output binds tools to the\n",
"# chat model and appends a parser.\n",
"structured_llm = llm.with_structured_output(Joke)\n",
"\n",
"async for event in structured_llm.astream_events(\"Tell me a joke\", version=\"v2\"):\n",
" if event[\"event\"] == \"on_chat_model_end\":\n",
" print(f'Token usage: {event[\"data\"][\"output\"].usage_metadata}\\n')\n",
" elif event[\"event\"] == \"on_chain_end\":\n",
" print(event[\"data\"][\"output\"])\n",
" else:\n",
" pass"
]
},
{
"cell_type": "markdown",
"id": "2bc8d313-4bef-463e-89a5-236d8bb6ab2f",
"metadata": {},
"source": [
"Token usage is also visible in the corresponding [LangSmith trace](https://smith.langchain.com/public/fe6513d5-7212-4045-82e0-fefa28bc7656/r) in the payload from the chat model."
]
},
{
@@ -115,7 +340,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 9,
"id": "31667d54",
"metadata": {},
"outputs": [
@@ -123,11 +348,11 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Tokens Used: 26\n",
"Tokens Used: 27\n",
"\tPrompt Tokens: 11\n",
"\tCompletion Tokens: 15\n",
"\tCompletion Tokens: 16\n",
"Successful Requests: 1\n",
"Total Cost (USD): $0.00056\n"
"Total Cost (USD): $2.95e-05\n"
]
}
],
@@ -136,7 +361,7 @@
"\n",
"from langchain_community.callbacks.manager import get_openai_callback\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4-turbo\", temperature=0)\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"\n",
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
@@ -153,7 +378,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 10,
"id": "e09420f4",
"metadata": {},
"outputs": [
@@ -161,7 +386,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"52\n"
"55\n"
]
}
],
@@ -172,6 +397,39 @@
" print(cb.total_tokens)"
]
},
{
"cell_type": "markdown",
"id": "9ac51188-c8f4-4230-90fd-3cd78cdd955d",
"metadata": {},
"source": [
"```{=mdx}\n",
":::note\n",
"Cost information is currently not available in streaming mode. This is because model names are currently not propagated through chunks in streaming mode, and the model name is used to look up the correct pricing. Token counts however are available:\n",
":::\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "b241069a-265d-4497-af34-b0a5f95ae67f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"28\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" for chunk in llm.stream(\"Tell me a joke\", stream_options={\"include_usage\": True}):\n",
" pass\n",
" print(cb.total_tokens)"
]
},
{
"cell_type": "markdown",
"id": "d8186e7b",
@@ -182,7 +440,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 12,
"id": "5d1125c6",
"metadata": {},
"outputs": [],
@@ -211,15 +469,15 @@
"source": [
"```{=mdx}\n",
":::note\n",
"We have to set `stream_runnable=False` for token counting to work. By default the AgentExecutor will stream the underlying agent so that you can get the most granular results when streaming events via AgentExecutor.stream_events. However, OpenAI does not return token counts when streaming model responses, so we need to turn off the underlying streaming.\n",
"We have to set `stream_runnable=False` for cost information, as described above. By default the AgentExecutor will stream the underlying agent so that you can get the most granular results when streaming events via AgentExecutor.stream_events.\n",
":::\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "2f98c536",
"execution_count": 13,
"id": "3950d88b-8bfb-4294-b75b-e6fd421e633c",
"metadata": {},
"outputs": [
{
@@ -230,46 +488,51 @@
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `wikipedia` with `Hummingbird`\n",
"Invoking: `wikipedia` with `{'query': 'hummingbird scientific name'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mPage: Hummingbird\n",
"Summary: Hummingbirds are birds native to the Americas and comprise the biological family Trochilidae. With approximately 366 species and 113 genera, they occur from Alaska to Tierra del Fuego, but most species are found in Central and South America. As of 2024, 21 hummingbird species are listed as endangered or critically endangered, with numerous species declining in population.Hummingbirds have varied specialized characteristics to enable rapid, maneuverable flight: exceptional metabolic capacity, adaptations to high altitude, sensitive visual and communication abilities, and long-distance migration in some species. Among all birds, male hummingbirds have the widest diversity of plumage color, particularly in blues, greens, and purples. Hummingbirds are the smallest mature birds, measuring 7.513 cm (35 in) in length. The smallest is the 5 cm (2.0 in) bee hummingbird, which weighs less than 2.0 g (0.07 oz), and the largest is the 23 cm (9 in) giant hummingbird, weighing 1824 grams (0.630.85 oz). Noted for long beaks, hummingbirds are specialized for feeding on flower nectar, but all species also consume small insects.\n",
"Summary: Hummingbirds are birds native to the Americas and comprise the biological family Trochilidae. With approximately 366 species and 113 genera, they occur from Alaska to Tierra del Fuego, but most species are found in Central and South America. As of 2024, 21 hummingbird species are listed as endangered or critically endangered, with numerous species declining in population.\n",
"Hummingbirds have varied specialized characteristics to enable rapid, maneuverable flight: exceptional metabolic capacity, adaptations to high altitude, sensitive visual and communication abilities, and long-distance migration in some species. Among all birds, male hummingbirds have the widest diversity of plumage color, particularly in blues, greens, and purples. Hummingbirds are the smallest mature birds, measuring 7.513 cm (35 in) in length. The smallest is the 5 cm (2.0 in) bee hummingbird, which weighs less than 2.0 g (0.07 oz), and the largest is the 23 cm (9 in) giant hummingbird, weighing 1824 grams (0.630.85 oz). Noted for long beaks, hummingbirds are specialized for feeding on flower nectar, but all species also consume small insects.\n",
"They are known as hummingbirds because of the humming sound created by their beating wings, which flap at high frequencies audible to other birds and humans. They hover at rapid wing-flapping rates, which vary from around 12 beats per second in the largest species to 80 per second in small hummingbirds.\n",
"Hummingbirds have the highest mass-specific metabolic rate of any homeothermic animal. To conserve energy when food is scarce and at night when not foraging, they can enter torpor, a state similar to hibernation, and slow their metabolic rate to 115 of its normal rate. While most hummingbirds do not migrate, the rufous hummingbird has one of the longest migrations among birds, traveling twice per year between Alaska and Mexico, a distance of about 3,900 miles (6,300 km).\n",
"Hummingbirds split from their sister group, the swifts and treeswifts, around 42 million years ago. The oldest known fossil hummingbird is Eurotrochilus, from the Rupelian Stage of Early Oligocene Europe.\n",
"\n",
"Page: Rufous hummingbird\n",
"Summary: The rufous hummingbird (Selasphorus rufus) is a small hummingbird, about 8 cm (3.1 in) long with a long, straight and slender bill. These birds are known for their extraordinary flight skills, flying 2,000 mi (3,200 km) during their migratory transits. It is one of nine species in the genus Selasphorus.\n",
"\n",
"\n",
"Page: Bee hummingbird\n",
"Summary: The bee hummingbird, zunzuncito or Helena hummingbird (Mellisuga helenae) is a species of hummingbird, native to the island of Cuba in the Caribbean. It is the smallest known bird. The bee hummingbird feeds on nectar of flowers and bugs found in Cuba.\n",
"\n",
"Page: Hummingbird cake\n",
"Summary: Hummingbird cake is a banana-pineapple spice cake originating in Jamaica and a popular dessert in the southern United States since the 1970s. Ingredients include flour, sugar, salt, vegetable oil, ripe banana, pineapple, cinnamon, pecans, vanilla extract, eggs, and leavening agent. It is often served with cream cheese frosting.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `wikipedia` with `Fastest bird`\n",
"Page: Anna's hummingbird\n",
"Summary: Anna's hummingbird (Calypte anna) is a North American species of hummingbird. It was named after Anna Masséna, Duchess of Rivoli.\n",
"It is native to western coastal regions of North America. In the early 20th century, Anna's hummingbirds bred only in northern Baja California and Southern California. The transplanting of exotic ornamental plants in residential areas throughout the Pacific coast and inland deserts provided expanded nectar and nesting sites, allowing the species to expand its breeding range. Year-round residence of Anna's hummingbirds in the Pacific Northwest is an example of ecological release dependent on acclimation to colder winter temperatures, introduced plants, and human provision of nectar feeders during winter.\n",
"These birds feed on nectar from flowers using a long extendable tongue. They also consume small insects and other arthropods caught in flight or gleaned from vegetation.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `wikipedia` with `{'query': 'fastest bird species'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mPage: Fastest animals\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mPage: List of birds by flight speed\n",
"Summary: This is a list of the fastest flying birds in the world. A bird's velocity is necessarily variable; a hunting bird will reach much greater speeds while diving to catch prey than when flying horizontally. The bird that can achieve the greatest airspeed is the peregrine falcon (Falco peregrinus), able to exceed 320 km/h (200 mph) in its dives. A close relative of the common swift, the white-throated needletail (Hirundapus caudacutus), is commonly reported as the fastest bird in level flight with a reported top speed of 169 km/h (105 mph). This record remains unconfirmed as the measurement methods have never been published or verified. The record for the fastest confirmed level flight by a bird is 111.5 km/h (69.3 mph) held by the common swift.\n",
"\n",
"\n",
"\n",
"Page: Fastest animals\n",
"Summary: This is a list of the fastest animals in the world, by types of animal.\n",
"\n",
"\n",
"\n",
"Page: List of birds by flight speed\n",
"Summary: This is a list of the fastest flying birds in the world. A bird's velocity is necessarily variable; a hunting bird will reach much greater speeds while diving to catch prey than when flying horizontally. The bird that can achieve the greatest airspeed is the peregrine falcon, able to exceed 320 km/h (200 mph) in its dives. A close relative of the common swift, the white-throated needletail (Hirundapus caudacutus), is commonly reported as the fastest bird in level flight with a reported top speed of 169 km/h (105 mph). This record remains unconfirmed as the measurement methods have never been published or verified. The record for the fastest confirmed level flight by a bird is 111.5 km/h (69.3 mph) held by the common swift.\n",
"\n",
"Page: Ostrich\n",
"Summary: Ostriches are large flightless birds. They are the heaviest and largest living birds, with adult common ostriches weighing anywhere between 63.5 and 145 kilograms and laying the largest eggs of any living land animal. With the ability to run at 70 km/h (43.5 mph), they are the fastest birds on land. They are farmed worldwide, with significant industries in the Philippines and in Namibia. Ostrich leather is a lucrative commodity, and the large feathers are used as plumes for the decoration of ceremonial headgear. Ostrich eggs have been used by humans for millennia.\n",
"Ostriches are of the genus Struthio in the order Struthioniformes, part of the infra-class Palaeognathae, a diverse group of flightless birds also known as ratites that includes the emus, rheas, cassowaries, kiwis and the extinct elephant birds and moas. There are two living species of ostrich: the common ostrich, native to large areas of sub-Saharan Africa, and the Somali ostrich, native to the Horn of Africa. The common ostrich was historically native to the Arabian Peninsula, and ostriches were present across Asia as far east as China and Mongolia during the Late Pleistocene and possibly into the Holocene.\u001b[0m\u001b[32;1m\u001b[1;3m### Hummingbird's Scientific Name\n",
"The scientific name for the bee hummingbird, which is the smallest known bird and a species of hummingbird, is **Mellisuga helenae**. It is native to Cuba.\n",
"\n",
"### Fastest Bird Species\n",
"The fastest bird in terms of airspeed is the **peregrine falcon**, which can exceed speeds of 320 km/h (200 mph) during its diving flight. In level flight, the fastest confirmed speed is held by the **common swift**, which can fly at 111.5 km/h (69.3 mph).\u001b[0m\n",
"Page: Falcon\n",
"Summary: Falcons () are birds of prey in the genus Falco, which includes about 40 species. Falcons are widely distributed on all continents of the world except Antarctica, though closely related raptors did occur there in the Eocene.\n",
"Adult falcons have thin, tapered wings, which enable them to fly at high speed and change direction rapidly. Fledgling falcons, in their first year of flying, have longer flight feathers, which make their configuration more like that of a general-purpose bird such as a broad wing. This makes flying easier while learning the exceptional skills required to be effective hunters as adults.\n",
"The falcons are the largest genus in the Falconinae subfamily of Falconidae, which itself also includes another subfamily comprising caracaras and a few other species. All these birds kill with their beaks, using a tomial \"tooth\" on the side of their beaks—unlike the hawks, eagles, and other birds of prey in the Accipitridae, which use their feet.\n",
"The largest falcon is the gyrfalcon at up to 65 cm in length. The smallest falcon species is the pygmy falcon, which measures just 20 cm. As with hawks and owls, falcons exhibit sexual dimorphism, with the females typically larger than the males, thus allowing a wider range of prey species.\n",
"Some small falcons with long, narrow wings are called \"hobbies\" and some which hover while hunting are called \"kestrels\".\n",
"As is the case with many birds of prey, falcons have exceptional powers of vision; the visual acuity of one species has been measured at 2.6 times that of a normal human. Peregrine falcons have been recorded diving at speeds of 320 km/h (200 mph), making them the fastest-moving creatures on Earth; the fastest recorded dive attained a vertical speed of 390 km/h (240 mph).\u001b[0m\u001b[32;1m\u001b[1;3mThe scientific name for a hummingbird is Trochilidae. The fastest bird species is the peregrine falcon (Falco peregrinus), which can exceed speeds of 320 km/h (200 mph) in its dives.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Total Tokens: 1583\n",
"Prompt Tokens: 1412\n",
"Completion Tokens: 171\n",
"Total Cost (USD): $0.019250000000000003\n"
"Total Tokens: 1787\n",
"Prompt Tokens: 1687\n",
"Completion Tokens: 100\n",
"Total Cost (USD): $0.0009935\n"
]
}
],
@@ -298,19 +561,19 @@
},
{
"cell_type": "code",
"execution_count": 1,
"id": "4a3eced5-2ff7-49a7-a48b-768af8658323",
"execution_count": 12,
"id": "1837c807-136a-49d8-9c33-060e58dc16d2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tokens Used: 0\n",
"\tPrompt Tokens: 0\n",
"\tCompletion Tokens: 0\n",
"Tokens Used: 96\n",
"\tPrompt Tokens: 26\n",
"\tCompletion Tokens: 70\n",
"Successful Requests: 2\n",
"Total Cost (USD): $0.0\n"
"Total Cost (USD): $0.001888\n"
]
}
],
@@ -364,7 +627,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.4"
}
},
"nbformat": 4,

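Since `usage_metadata` uses the same standard keys across providers, one thing the guide above implies but does not spell out is that you can accumulate usage across several calls yourself. The helper below is an illustrative sketch, not a LangChain API; it assumes `langchain-openai` and an API key are available.

```python
from typing import Optional

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0125")


def add_usage(total: dict, usage: Optional[dict]) -> dict:
    """Accumulate the standard usage_metadata keys across calls."""
    if usage is None:
        return total
    return {
        key: total.get(key, 0) + usage.get(key, 0)
        for key in ("input_tokens", "output_tokens", "total_tokens")
    }


running_total: dict = {}
for question in ["hello", "Tell me a joke"]:
    message = llm.invoke(question)
    running_total = add_usage(running_total, message.usage_metadata)

print(running_total)  # e.g. {'input_tokens': ..., 'output_tokens': ..., 'total_tokens': ...}
```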
View File

@@ -1,15 +1,5 @@
{
"cells": [
{
"cell_type": "raw",
"id": "77bf57fb-e990-45f2-8b5f-c76388b05966",
"metadata": {},
"source": [
"---\n",
"keywords: [LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "50d57bf2-7104-4570-b3e5-90fd71e1bea1",

View File

@@ -75,6 +75,31 @@ Otherwise you can initialize without any params:
from langchain_cohere import CohereEmbeddings
embeddings_model = CohereEmbeddings()
```
</TabItem>
<TabItem value="huggingface" label="Hugging Face">
To start we'll need to install the Hugging Face partner package:
```bash
pip install langchain-huggingface
```
You can then load any [Sentence Transformers model](https://huggingface.co/models?library=sentence-transformers) from the Hugging Face Hub.
```python
from langchain_huggingface import HuggingFaceEmbeddings
embeddings_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
```
You can also leave the `model_name` blank to use the default [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) model.
```python
from langchain_huggingface import HuggingFaceEmbeddings
embeddings_model = HuggingFaceEmbeddings()
```
</TabItem>
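Whichever provider tab you pick, the resulting embeddings object exposes the same two methods. A brief usage sketch follows; the sample texts are illustrative, and the Hugging Face model is downloaded locally on first use.

```python
from langchain_huggingface import HuggingFaceEmbeddings

embeddings_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

# Embed a batch of documents and a single query.
doc_vectors = embeddings_model.embed_documents(["Hi there!", "Oh, hello!"])
query_vector = embeddings_model.embed_query("What was said in the greeting?")
print(len(doc_vectors), len(doc_vectors[0]))  # number of documents, embedding dimension
```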

View File

@@ -4,13 +4,17 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to create an Ensemble Retriever\n",
"# How to combine results from multiple retrievers\n",
"\n",
"The `EnsembleRetriever` takes a list of retrievers as input and ensemble the results of their `get_relevant_documents()` methods and rerank the results based on the [Reciprocal Rank Fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) algorithm.\n",
"The [EnsembleRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.ensemble.EnsembleRetriever.html) supports ensembling of results from multiple retrievers. It is initialized with a list of [BaseRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_core.retrievers.BaseRetriever.html) objects. EnsembleRetrievers rerank the results of the constituent retrievers based on the [Reciprocal Rank Fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) algorithm.\n",
"\n",
"By leveraging the strengths of different algorithms, the `EnsembleRetriever` can achieve better performance than any single algorithm. \n",
"\n",
"The most common pattern is to combine a sparse retriever (like BM25) with a dense retriever (like embedding similarity), because their strengths are complementary. It is also known as \"hybrid search\". The sparse retriever is good at finding relevant documents based on keywords, while the dense retriever is good at finding relevant documents based on semantic similarity."
"The most common pattern is to combine a sparse retriever (like BM25) with a dense retriever (like embedding similarity), because their strengths are complementary. It is also known as \"hybrid search\". The sparse retriever is good at finding relevant documents based on keywords, while the dense retriever is good at finding relevant documents based on semantic similarity.\n",
"\n",
"## Basic usage\n",
"\n",
"Below we demonstrate ensembling of a [BM25Retriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.bm25.BM25Retriever.html) with a retriever derived from the [FAISS vector store](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html)."
]
},
{
@@ -24,22 +28,15 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers import EnsembleRetriever\n",
"from langchain_community.retrievers import BM25Retriever\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai import OpenAIEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"doc_list_1 = [\n",
" \"I like apples\",\n",
" \"I like oranges\",\n",
@@ -71,19 +68,19 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='You like apples', metadata={'source': 2}),\n",
" Document(page_content='I like apples', metadata={'source': 1}),\n",
" Document(page_content='You like oranges', metadata={'source': 2}),\n",
" Document(page_content='Apples and oranges are fruits', metadata={'source': 1})]"
"[Document(page_content='I like apples', metadata={'source': 1}),\n",
" Document(page_content='You like apples', metadata={'source': 2}),\n",
" Document(page_content='Apples and oranges are fruits', metadata={'source': 1}),\n",
" Document(page_content='You like oranges', metadata={'source': 2})]"
]
},
"execution_count": 15,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -99,24 +96,17 @@
"source": [
"## Runtime Configuration\n",
"\n",
"We can also configure the retrievers at runtime. In order to do this, we need to mark the fields as configurable"
"We can also configure the individual retrievers at runtime using [configurable fields](/docs/how_to/configure). Below we update the \"top-k\" parameter for the FAISS retriever specifically:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import ConfigurableField"
]
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import ConfigurableField\n",
"\n",
"faiss_retriever = faiss_vectorstore.as_retriever(\n",
" search_kwargs={\"k\": 2}\n",
").configurable_fields(\n",
@@ -125,15 +115,8 @@
" name=\"Search Kwargs\",\n",
" description=\"The search kwargs to use\",\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
")\n",
"\n",
"ensemble_retriever = EnsembleRetriever(\n",
" retrievers=[bm25_retriever, faiss_retriever], weights=[0.5, 0.5]\n",
")"
@@ -141,9 +124,22 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 6,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='I like apples', metadata={'source': 1}),\n",
" Document(page_content='You like apples', metadata={'source': 2}),\n",
" Document(page_content='Apples and oranges are fruits', metadata={'source': 1})]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"config = {\"configurable\": {\"search_kwargs_faiss\": {\"k\": 1}}}\n",
"docs = ensemble_retriever.invoke(\"apples\", config=config)\n",
@@ -181,7 +177,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,

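The Reciprocal Rank Fusion step referenced in the EnsembleRetriever notebook above can be illustrated with a small standalone sketch. This is a simplified, unweighted version written from the cited RRF paper, not the library's actual implementation; the constant `k=60` follows the paper's convention.

```python
from collections import defaultdict


def reciprocal_rank_fusion(ranked_lists: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked result lists: score(d) = sum over lists of 1 / (k + rank(d))."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


bm25_results = ["I like apples", "Apples and oranges are fruits"]
dense_results = ["You like apples", "I like apples"]
print(reciprocal_rank_fusion([bm25_results, dense_results]))
# "I like apples" ranks first because both retrievers return it.
```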
View File

@@ -60,7 +60,7 @@
"source": [
"examples = [\n",
" {\"input\": \"hi\", \"output\": \"ciao\"},\n",
" {\"input\": \"bye\", \"output\": \"arrivaderci\"},\n",
" {\"input\": \"bye\", \"output\": \"arrivederci\"},\n",
" {\"input\": \"soccer\", \"output\": \"calcio\"},\n",
"]"
]
@@ -133,7 +133,7 @@
{
"data": {
"text/plain": [
"[{'input': 'bye', 'output': 'arrivaderci'}]"
"[{'input': 'bye', 'output': 'arrivederci'}]"
]
},
"execution_count": 39,
@@ -209,7 +209,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Translate the following words from English to Italain:\n",
"Translate the following words from English to Italian:\n",
"\n",
"Input: hand -> Output: mano\n",
"\n",
@@ -222,7 +222,7 @@
" example_selector=example_selector,\n",
" example_prompt=example_prompt,\n",
" suffix=\"Input: {input} -> Output:\",\n",
" prefix=\"Translate the following words from English to Italain:\",\n",
" prefix=\"Translate the following words from English to Italian:\",\n",
" input_variables=[\"input\"],\n",
")\n",
"\n",

View File

@@ -128,7 +128,7 @@
" # Having a good description can help improve extraction results.\n",
" name: Optional[str] = Field(..., description=\"The name of the person\")\n",
" hair_color: Optional[str] = Field(\n",
" ..., description=\"The color of the peron's eyes if known\"\n",
" ..., description=\"The color of the person's hair if known\"\n",
" )\n",
" height_in_meters: Optional[str] = Field(..., description=\"Height in METERs\")\n",
"\n",

View File

@@ -300,7 +300,7 @@
"Entities in the question map to the following database values:\n",
"{entities_list}\n",
"Question: {question}\n",
"Cypher query:\"\"\" # noqa: E501\n",
"Cypher query:\"\"\"\n",
"\n",
"cypher_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
@@ -377,7 +377,7 @@
"response_template = \"\"\"Based on the the question, Cypher query, and Cypher response, write a natural language response:\n",
"Question: {question}\n",
"Cypher query: {query}\n",
"Cypher Response: {response}\"\"\" # noqa: E501\n",
"Cypher Response: {response}\"\"\"\n",
"\n",
"response_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",

View File

@@ -78,6 +78,7 @@ Chat Models are newer forms of language models that take messages in and output
- [How to: stream a response back](/docs/how_to/chat_streaming)
- [How to: track token usage](/docs/how_to/chat_token_usage_tracking)
- [How to: track response metadata across providers](/docs/how_to/response_metadata)
- [How to: let your end users choose their model](/docs/how_to/chat_models_universal_init/)
### LLMs
@@ -151,7 +152,7 @@ Retrievers are responsible for taking a query and returning relevant documents.
- [How to: write a custom retriever class](/docs/how_to/custom_retriever)
- [How to: add similarity scores to retriever results](/docs/how_to/add_scores_retriever)
- [How to: combine the results from multiple retrievers](/docs/how_to/ensemble_retriever)
- [How to: reorder retrieved results to put most relevant documents not in the middle](/docs/how_to/long_context_reorder)
- [How to: reorder retrieved results to mitigate the "lost in the middle" effect](/docs/how_to/long_context_reorder)
- [How to: generate multiple embeddings per document](/docs/how_to/multi_vector)
- [How to: retrieve the whole document for a chunk](/docs/how_to/parent_document_retriever)
- [How to: generate metadata filters](/docs/how_to/self_query)
@@ -174,7 +175,12 @@ LangChain Tools contain a description of the tool (to pass to the language model
- [How to: add ad-hoc tool calling capability to LLMs and chat models](/docs/how_to/tools_prompting)
- [How to: add a human in the loop to tool usage](/docs/how_to/tools_human)
- [How to: handle errors when calling tools](/docs/how_to/tools_error)
- [How to: call tools using multi-modal data](/docs/how_to/tool_calls_multi_modal)
### Multimodal
- [How to: pass multimodal data directly to models](/docs/how_to/multimodal_inputs/)
- [How to: use multimodal prompts](/docs/how_to/multimodal_prompts/)
### Agents

View File

@@ -60,7 +60,7 @@
" * document addition by id (`add_documents` method with `ids` argument)\n",
" * delete by id (`delete` method with `ids` argument)\n",
"\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `Yellowbrick`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
" \n",
"## Caution\n",
"\n",

View File

@@ -2,169 +2,226 @@
"cells": [
{
"cell_type": "markdown",
"id": "e5715368",
"id": "90dff237-bc28-4185-a2c0-d5203bbdeacd",
"metadata": {},
"source": [
"# How to track token usage for LLMs\n",
"\n",
"This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API.\n",
"Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n",
"\n",
"Let's first look at an extremely simple example of tracking token usage for a single LLM call."
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [LLMs](/docs/concepts/#llms)\n",
":::\n",
"\n",
"## Using LangSmith\n",
"\n",
"You can use [LangSmith](https://www.langchain.com/langsmith) to help track token usage in your LLM application. See the [LangSmith quick start guide](https://docs.smith.langchain.com/).\n",
"\n",
"## Using callbacks\n",
"\n",
"There are some API-specific callback context managers that allow you to track token usage across multiple calls. You'll need to check whether such an integration is available for your particular model.\n",
"\n",
"If such an integration is not available for your model, you can create a custom callback manager by adapting the implementation of the [OpenAI callback manager](https://api.python.langchain.com/en/latest/_modules/langchain_community/callbacks/openai_info.html#OpenAICallbackHandler).\n",
"\n",
"### OpenAI\n",
"\n",
"Let's first look at an extremely simple example of tracking token usage for a single Chat model call.\n",
"\n",
":::{.callout-danger}\n",
"\n",
"The callback handler does not currently support streaming token counts for legacy language models (e.g., `langchain_openai.OpenAI`). For support in a streaming context, refer to the corresponding guide for chat models [here](/docs/how_to/chat_token_usage_tracking).\n",
"\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "f790edd9-823e-4bc5-befa-e9529c7237a0",
"metadata": {},
"source": [
"### Single call"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9455db35",
"id": "2eebbee2-6ca1-4fa8-a3aa-0376888ceefb",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything.\n",
"---\n",
"\n",
"Total Tokens: 18\n",
"Prompt Tokens: 4\n",
"Completion Tokens: 14\n",
"Total Cost (USD): $3.4e-05\n"
]
}
],
"source": [
"from langchain_community.callbacks import get_openai_callback\n",
"from langchain_openai import OpenAI"
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")\n",
"\n",
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" print(result)\n",
" print(\"---\")\n",
"print()\n",
"\n",
"print(f\"Total Tokens: {cb.total_tokens}\")\n",
"print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
"print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
"print(f\"Total Cost (USD): ${cb.total_cost}\")"
]
},
{
"cell_type": "markdown",
"id": "7df3be35-dd97-4e3a-bd51-52434ab2249d",
"metadata": {},
"source": [
"### Multiple calls\n",
"\n",
"Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence to a chain. This will also work for an agent which may use multiple steps."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d1c55cc9",
"id": "3ec10419-294c-44bf-af85-86aabf457cb6",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"Why did the chicken go to the seance?\n",
"\n",
"To talk to the other side of the road!\n",
"--\n",
"\n",
"\n",
"Why did the fish need a lawyer?\n",
"\n",
"Because it got caught in a net!\n",
"\n",
"---\n",
"Total Tokens: 50\n",
"Prompt Tokens: 12\n",
"Completion Tokens: 38\n",
"Total Cost (USD): $9.400000000000001e-05\n"
]
}
],
"source": [
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\", n=2, best_of=2)"
"from langchain_community.callbacks import get_openai_callback\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")\n",
"\n",
"template = PromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"chain = template | llm\n",
"\n",
"with get_openai_callback() as cb:\n",
" response = chain.invoke({\"topic\": \"birds\"})\n",
" print(response)\n",
" response = chain.invoke({\"topic\": \"fish\"})\n",
" print(\"--\")\n",
" print(response)\n",
"\n",
"\n",
"print()\n",
"print(\"---\")\n",
"print(f\"Total Tokens: {cb.total_tokens}\")\n",
"print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
"print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
"print(f\"Total Cost (USD): ${cb.total_cost}\")"
]
},
{
"cell_type": "markdown",
"id": "ad7a3fba-9fac-4222-8f87-d1d276d27d6e",
"metadata": {
"tags": []
},
"source": [
"## Streaming\n",
"\n",
":::{.callout-danger}\n",
"\n",
"`get_openai_callback` does not currently support streaming token counts for legacy language models (e.g., `langchain_openai.OpenAI`). If you want to count tokens correctly in a streaming context, there are a number of options:\n",
"\n",
"- Use chat models as described in [this guide](/docs/how_to/chat_token_usage_tracking);\n",
"- Implement a [custom callback handler](/docs/how_to/custom_callbacks/) that uses appropriate tokenizers to count the tokens;\n",
"- Use a monitoring platform such as [LangSmith](https://www.langchain.com/langsmith).\n",
":::\n",
"\n",
"Note that when using legacy language models in a streaming context, token counts are not updated:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "31667d54",
"metadata": {},
"id": "cd61ed79-7858-49bb-afb5-d41291f597ba",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tokens Used: 37\n",
"\tPrompt Tokens: 4\n",
"\tCompletion Tokens: 33\n",
"Successful Requests: 1\n",
"Total Cost (USD): $7.2e-05\n"
"\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything!\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything.\n",
"---\n",
"\n",
"Total Tokens: 0\n",
"Prompt Tokens: 0\n",
"Completion Tokens: 0\n",
"Total Cost (USD): $0.0\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" print(cb)"
]
},
{
"cell_type": "markdown",
"id": "c0ab6d27",
"metadata": {},
"source": [
"Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e09420f4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"72\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" result2 = llm.invoke(\"Tell me a joke\")\n",
" print(cb.total_tokens)"
]
},
{
"cell_type": "markdown",
"id": "d8186e7b",
"metadata": {},
"source": [
"If a chain or agent with multiple steps in it is used, it will track all those steps."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "5d1125c6",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentType, initialize_agent, load_tools\n",
"from langchain_community.callbacks import get_openai_callback\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n",
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2f98c536",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\n",
"Action: Search\n",
"Action Input: \"Olivia Wilde boyfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m[\"Olivia Wilde and Harry Styles took fans by surprise with their whirlwind romance, which began when they met on the set of Don't Worry Darling.\", 'Olivia Wilde started dating Harry Styles after ending her years-long engagement to Jason Sudeikis — see their relationship timeline.', 'Olivia Wilde and Harry Styles were spotted early on in their relationship walking around London. (. Image ...', \"Looks like Olivia Wilde and Jason Sudeikis are starting 2023 on good terms. Amid their highly publicized custody battle and the actress' ...\", 'The two started dating after Wilde split up with actor Jason Sudeikisin 2020. However, their relationship came to an end last November.', \"Olivia Wilde and Harry Styles started dating during the filming of Don't Worry Darling. While the movie got a lot of backlash because of the ...\", \"Here's what we know so far about Harry Styles and Olivia Wilde's relationship.\", 'Olivia and the Grammy winner kept their romance out of the spotlight as their relationship began just two months after her split from ex-fiancé ...', \"Harry Styles and Olivia Wilde first met on the set of Don't Worry Darling and stepped out as a couple in January 2021. Relive all their biggest relationship ...\"]\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m Harry Styles is Olivia Wilde's boyfriend.\n",
"Action: Search\n",
"Action Input: \"Harry Styles age\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m29 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 29 raised to the 0.23 power.\n",
"Action: Calculator\n",
"Action Input: 29^0.23\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 2.169459462491557\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Total Tokens: 2205\n",
"Prompt Tokens: 2053\n",
"Completion Tokens: 152\n",
"Total Cost (USD): $0.0441\n"
]
}
],
"source": [
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")\n",
"\n",
"with get_openai_callback() as cb:\n",
" response = agent.run(\n",
" \"Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?\"\n",
" )\n",
" print(f\"Total Tokens: {cb.total_tokens}\")\n",
" print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
" print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
" print(f\"Total Cost (USD): ${cb.total_cost}\")"
" for chunk in llm.stream(\"Tell me a joke\"):\n",
" print(chunk, end=\"\", flush=True)\n",
" print(result)\n",
" print(\"---\")\n",
"print()\n",
"\n",
"print(f\"Total Tokens: {cb.total_tokens}\")\n",
"print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
"print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
"print(f\"Total Cost (USD): ${cb.total_cost}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "80ca77a3",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -183,7 +240,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,

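The rewritten guide above suggests adapting the OpenAI callback handler when no context manager exists for your provider. A stripped-down sketch of that idea follows; the class is hypothetical, and the `token_usage` field on `llm_output` is provider-specific, so treat it as a starting point rather than a drop-in handler.

```python
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult


class TokenCountingHandler(BaseCallbackHandler):
    """Minimal callback that tallies token usage reported in llm_output."""

    def __init__(self) -> None:
        self.total_tokens = 0

    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        usage = (response.llm_output or {}).get("token_usage", {})
        self.total_tokens += usage.get("total_tokens", 0)


# Usage, assuming an OpenAI-style provider that populates token_usage:
# handler = TokenCountingHandler()
# llm = OpenAI(model_name="gpt-3.5-turbo-instruct", callbacks=[handler])
# llm.invoke("Tell me a joke"); print(handler.total_tokens)
```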
View File

@@ -5,28 +5,38 @@
"id": "fc0db1bc",
"metadata": {},
"source": [
"# How to reorder retrieved results to put most relevant documents not in the middle\n",
"# How to reorder retrieved results to mitigate the \"lost in the middle\" effect\n",
"\n",
"No matter the architecture of your model, there is a substantial performance degradation when you include 10+ retrieved documents.\n",
"In brief: When models must access relevant information in the middle of long contexts, they tend to ignore the provided documents.\n",
"See: https://arxiv.org/abs/2307.03172\n",
"Substantial performance degradations in [RAG](/docs/tutorials/rag) applications have been [documented](https://arxiv.org/abs/2307.03172) as the number of retrieved documents grows (e.g., beyond ten). In brief: models are liable to miss relevant information in the middle of long contexts.\n",
"\n",
"To avoid this issue you can re-order documents after retrieval to avoid performance degradation."
"By contrast, queries against vector stores will typically return documents in descending order of relevance (e.g., as measured by cosine similarity of [embeddings](/docs/concepts/#embedding-models)).\n",
"\n",
"To mitigate the [\"lost in the middle\"](https://arxiv.org/abs/2307.03172) effect, you can re-order documents after retrieval such that the most relevant documents are positioned at extrema (e.g., the first and last pieces of context), and the least relevant documents are positioned in the middle. In some cases this can help surface the most relevant information to LLMs.\n",
"\n",
"The [LongContextReorder](https://api.python.langchain.com/en/latest/document_transformers/langchain_community.document_transformers.long_context_reorder.LongContextReorder.html) document transformer implements this re-ordering procedure. Below we demonstrate an example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "74d1ebe8",
"id": "2074fdaa-edff-468a-970f-6f5f26e93d4a",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet sentence-transformers langchain-chroma langchain langchain-openai langchain-huggingface > /dev/null"
]
},
{
"cell_type": "markdown",
"id": "c97eaaf2-34b7-4770-9949-e1abc4ca5226",
"metadata": {},
"source": [
"First we embed some artificial documents and index them in an (in-memory) [Chroma](/docs/integrations/providers/chroma/) vector store. We will use [Hugging Face](/docs/integrations/text_embedding/huggingfacehub/) embeddings, but any LangChain vector store or embeddings model will suffice."
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "49cbcd8e",
"metadata": {},
"outputs": [
@@ -45,20 +55,14 @@
" Document(page_content='This is just a random text.')]"
]
},
"execution_count": 3,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import LLMChain, StuffDocumentsChain\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.document_transformers import (\n",
" LongContextReorder,\n",
")\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_huggingface import HuggingFaceEmbeddings\n",
"from langchain_openai import OpenAI\n",
"\n",
"# Get embeddings.\n",
"embeddings = HuggingFaceEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n",
@@ -83,14 +87,22 @@
"query = \"What can you tell me about the Celtics?\"\n",
"\n",
"# Get relevant documents ordered by relevance score\n",
"docs = retriever.get_relevant_documents(query)\n",
"docs = retriever.invoke(query)\n",
"docs"
]
},
{
"cell_type": "markdown",
"id": "175d031a-43fa-42f4-93c4-2ba52c3c3ee5",
"metadata": {},
"source": [
"Note that documents are returned in descending order of relevance to the query. The `LongContextReorder` document transformer will implement the re-ordering described above:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "34fb9d6e",
"execution_count": 3,
"id": "9a1181f2-a3dc-4614-9233-2196ab65939e",
"metadata": {},
"outputs": [
{
@@ -108,12 +120,14 @@
" Document(page_content='This is a document about the Boston Celtics')]"
]
},
"execution_count": 4,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.document_transformers import LongContextReorder\n",
"\n",
"# Reorder the documents:\n",
"# Less relevant document will be at the middle of the list and more\n",
"# relevant elements at beginning / end.\n",
@@ -125,58 +139,54 @@
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ceccab87",
"cell_type": "markdown",
"id": "a8d2ef0c-c397-4d8d-8118-3f7acf86d241",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nThe Celtics are referenced in four of the nine text extracts. They are mentioned as the favorite team of the author, the winner of a basketball game, a team with one of the best players, and a team with a specific player. Additionally, the last extract states that the document is about the Boston Celtics. This suggests that the Celtics are a basketball team, possibly from Boston, that is well-known and has had successful players and games in the past. '"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# We prepare and run a custom Stuff chain with reordered docs as context.\n",
"\n",
"# Override prompts\n",
"document_prompt = PromptTemplate(\n",
" input_variables=[\"page_content\"], template=\"{page_content}\"\n",
")\n",
"document_variable_name = \"context\"\n",
"llm = OpenAI()\n",
"stuff_prompt_override = \"\"\"Given this text extracts:\n",
"-----\n",
"{context}\n",
"-----\n",
"Please answer the following question:\n",
"{query}\"\"\"\n",
"prompt = PromptTemplate(\n",
" template=stuff_prompt_override, input_variables=[\"context\", \"query\"]\n",
")\n",
"\n",
"# Instantiate the chain\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
"chain = StuffDocumentsChain(\n",
" llm_chain=llm_chain,\n",
" document_prompt=document_prompt,\n",
" document_variable_name=document_variable_name,\n",
")\n",
"chain.run(input_documents=reordered_docs, query=query)"
"Below, we show how to incorporate the re-ordered documents into a simple question-answering chain:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d4696a97",
"execution_count": 5,
"id": "8bbea705-d5b9-4ed5-9957-e12547283622",
"metadata": {},
"outputs": [],
"source": []
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"The Celtics are a professional basketball team and one of the most iconic franchises in the NBA. They are highly regarded and have a large fan base. The team has had many successful seasons and is often considered one of the top teams in the league. They have a strong history and have produced many great players, such as Larry Bird and L. Kornet. The team is based in Boston and is often referred to as the Boston Celtics.\n"
]
}
],
"source": [
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI()\n",
"\n",
"prompt_template = \"\"\"\n",
"Given these texts:\n",
"-----\n",
"{context}\n",
"-----\n",
"Please answer the following question:\n",
"{query}\n",
"\"\"\"\n",
"\n",
"prompt = PromptTemplate(\n",
" template=prompt_template,\n",
" input_variables=[\"context\", \"query\"],\n",
")\n",
"\n",
"# Create and invoke the chain:\n",
"chain = create_stuff_documents_chain(llm, prompt)\n",
"response = chain.invoke({\"context\": reordered_docs, \"query\": query})\n",
"print(response)"
]
}
],
"metadata": {
@@ -195,7 +205,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,

File diff suppressed because it is too large


@@ -5,33 +5,36 @@
"id": "d9172545",
"metadata": {},
"source": [
"# How to use the MultiVector Retriever\n",
"# How to retrieve using multiple vectors per document\n",
"\n",
"It can often be beneficial to store multiple vectors per document. There are multiple use cases where this is beneficial. LangChain has a base `MultiVectorRetriever` which makes querying this type of setup easy. A lot of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the `MultiVectorRetriever`.\n",
"It can often be useful to store multiple vectors per document. There are multiple use cases where this is beneficial. For example, we can embed multiple chunks of a document and associate those embeddings with the parent document, allowing retriever hits on the chunks to return the larger document.\n",
"\n",
"LangChain implements a base [MultiVectorRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.multi_vector.MultiVectorRetriever.html), which simplifies this process. Much of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the `MultiVectorRetriever`.\n",
"\n",
"The methods to create multiple vectors per document include:\n",
"\n",
"- Smaller chunks: split a document into smaller chunks, and embed those (this is ParentDocumentRetriever).\n",
"- Smaller chunks: split a document into smaller chunks, and embed those (this is [ParentDocumentRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.parent_document_retriever.ParentDocumentRetriever.html)).\n",
"- Summary: create a summary for each document, embed that along with (or instead of) the document.\n",
"- Hypothetical questions: create hypothetical questions that each document would be appropriate to answer, embed those along with (or instead of) the document.\n",
"\n",
"Note that this also enables another method of adding embeddings - manually. This is useful because you can explicitly add questions or queries that should lead to a document being recovered, giving you more control.\n",
"\n",
"Note that this also enables another method of adding embeddings - manually. This is great because you can explicitly add questions or queries that should lead to a document being recovered, giving you more control."
"Below we walk through an example. First we instantiate some documents. We will index them in an (in-memory) [Chroma](/docs/integrations/providers/chroma/) vector store using [OpenAI](https://python.langchain.com/v0.2/docs/integrations/text_embedding/openai/) embeddings, but any LangChain vector store or embeddings model will suffice."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "09cecd95-3499-465a-895a-944627ffb77f",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-chroma langchain langchain-openai > /dev/null"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "eed469be",
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers.multi_vector import MultiVectorRetriever"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "18c1421a",
"metadata": {},
"outputs": [],
@@ -40,25 +43,22 @@
"from langchain_chroma import Chroma\n",
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6d869496",
"metadata": {},
"outputs": [],
"source": [
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"loaders = [\n",
" TextLoader(\"../../paul_graham_essay.txt\"),\n",
" TextLoader(\"paul_graham_essay.txt\"),\n",
" TextLoader(\"state_of_the_union.txt\"),\n",
"]\n",
"docs = []\n",
"for loader in loaders:\n",
" docs.extend(loader.load())\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000)\n",
"docs = text_splitter.split_documents(docs)"
"docs = text_splitter.split_documents(docs)\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(\n",
" collection_name=\"full_documents\", embedding_function=OpenAIEmbeddings()\n",
")"
]
},
{
@@ -68,52 +68,54 @@
"source": [
"## Smaller chunks\n",
"\n",
"Often times it can be useful to retrieve larger chunks of information, but embed smaller chunks. This allows for embeddings to capture the semantic meaning as closely as possible, but for as much context as possible to be passed downstream. Note that this is what the `ParentDocumentRetriever` does. Here we show what is going on under the hood."
"Often times it can be useful to retrieve larger chunks of information, but embed smaller chunks. This allows for embeddings to capture the semantic meaning as closely as possible, but for as much context as possible to be passed downstream. Note that this is what the [ParentDocumentRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.parent_document_retriever.ParentDocumentRetriever.html) does. Here we show what is going on under the hood.\n",
"\n",
"We will make a distinction between the vector store, which indexes embeddings of the (sub) documents, and the document store, which houses the \"parent\" documents and associates them with an identifier."
]
},
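For comparison, the packaged `ParentDocumentRetriever` wraps this same vector store / document store split behind one interface. A rough sketch follows, reusing the `docs` loaded above; the collection name and chunk size are illustrative, not prescribed.

```python
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Child chunks are embedded and indexed; full parent documents are stored whole.
parent_retriever = ParentDocumentRetriever(
    vectorstore=Chroma(
        collection_name="parent_demo", embedding_function=OpenAIEmbeddings()
    ),
    docstore=InMemoryStore(),
    child_splitter=RecursiveCharacterTextSplitter(chunk_size=400),
)

# add_documents splits each document into child chunks, embeds the chunks,
# and stores the parents in the docstore under generated ids.
parent_retriever.add_documents(docs)

# Search runs over the small chunks, but the parent documents are returned.
parent_retriever.invoke("justice breyer")
```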
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"id": "0e7b6b45",
"metadata": {},
"outputs": [],
"source": [
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(\n",
" collection_name=\"full_documents\", embedding_function=OpenAIEmbeddings()\n",
")\n",
"import uuid\n",
"\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"\n",
"# The storage layer for the parent documents\n",
"store = InMemoryByteStore()\n",
"id_key = \"doc_id\"\n",
"\n",
"# The retriever (empty to start)\n",
"retriever = MultiVectorRetriever(\n",
" vectorstore=vectorstore,\n",
" byte_store=store,\n",
" id_key=id_key,\n",
")\n",
"import uuid\n",
"\n",
"doc_ids = [str(uuid.uuid4()) for _ in docs]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "72a36491",
"cell_type": "markdown",
"id": "d4feded4-856a-4282-91c3-53aabc62e6ff",
"metadata": {},
"outputs": [],
"source": [
"# The splitter to use to create smaller chunks\n",
"child_text_splitter = RecursiveCharacterTextSplitter(chunk_size=400)"
"We next generate the \"sub\" documents by splitting the original documents. Note that we store the document identifier in the `metadata` of the corresponding [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) object."
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 3,
"id": "5d23247d",
"metadata": {},
"outputs": [],
"source": [
"# The splitter to use to create smaller chunks\n",
"child_text_splitter = RecursiveCharacterTextSplitter(chunk_size=400)\n",
"\n",
"sub_docs = []\n",
"for i, doc in enumerate(docs):\n",
" _id = doc_ids[i]\n",
@@ -123,9 +125,17 @@
" sub_docs.extend(_sub_docs)"
]
},
{
"cell_type": "markdown",
"id": "8e0634f8-90d5-4250-981a-5257c8a6d455",
"metadata": {},
"source": [
"Finally, we index the documents in our vector store and document store:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 4,
"id": "92ed5861",
"metadata": {},
"outputs": [],
@@ -134,31 +144,46 @@
"retriever.docstore.mset(list(zip(doc_ids, docs)))"
]
},
{
"cell_type": "markdown",
"id": "14c48c6d-850c-4317-9b6e-1ade92f2f710",
"metadata": {},
"source": [
"The vector store alone will retrieve small chunks:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 5,
"id": "8afed60c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.', metadata={'doc_id': '2fd77862-9ed5-4fad-bf76-e487b747b333', 'source': 'state_of_the_union.txt'})"
"Document(page_content='Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.', metadata={'doc_id': '064eca46-a4c4-4789-8e3b-583f9597e54f', 'source': 'state_of_the_union.txt'})"
]
},
"execution_count": 8,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Vectorstore alone retrieves the small chunks\n",
"retriever.vectorstore.similarity_search(\"justice breyer\")[0]"
]
},
{
"cell_type": "markdown",
"id": "717097c7-61d9-4306-8625-ef8f1940c127",
"metadata": {},
"source": [
"Whereas the retriever will return the larger parent document:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 6,
"id": "3c9017f1",
"metadata": {},
"outputs": [
@@ -168,14 +193,13 @@
"9875"
]
},
"execution_count": 9,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Retriever returns larger chunks\n",
"len(retriever.get_relevant_documents(\"justice breyer\")[0].page_content)"
"len(retriever.invoke(\"justice breyer\")[0].page_content)"
]
},
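The introduction also notes that embeddings can be added manually. Because retrieval only matches on the `doc_id` stored in metadata, you can index hand-written queries that should route to a given parent document. A rough sketch, reusing the `retriever`, `id_key`, and `doc_ids` defined above; the query text is illustrative.

```python
from langchain_core.documents import Document

# A hand-written query we expect this parent document to answer.
manual_doc = Document(
    page_content="A question this parent document should answer",
    metadata={id_key: doc_ids[0]},
)

# Index it alongside the automatically generated chunks; similar queries
# will now resolve to the associated parent document via its doc_id.
retriever.vectorstore.add_documents([manual_doc])
```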
{
@@ -183,12 +207,12 @@
"id": "cdef8339-f9fa-4b3b-955f-ad9dbdf2734f",
"metadata": {},
"source": [
"The default search type the retriever performs on the vector database is a similarity search. LangChain Vector Stores also support searching via [Max Marginal Relevance](https://api.python.langchain.com/en/latest/vectorstores/langchain_core.vectorstores.VectorStore.html#langchain_core.vectorstores.VectorStore.max_marginal_relevance_search) so if you want this instead you can just set the `search_type` property as follows:"
"The default search type the retriever performs on the vector database is a similarity search. LangChain vector stores also support searching via [Max Marginal Relevance](https://api.python.langchain.com/en/latest/vectorstores/langchain_core.vectorstores.VectorStore.html#langchain_core.vectorstores.VectorStore.max_marginal_relevance_search). This can be controlled via the `search_type` parameter of the retriever:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 7,
"id": "36739460-a737-4a8e-b70f-50bf8c8eaae7",
"metadata": {},
"outputs": [
@@ -198,7 +222,7 @@
"9875"
]
},
"execution_count": 10,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -208,7 +232,7 @@
"\n",
"retriever.search_type = SearchType.mmr\n",
"\n",
"len(retriever.get_relevant_documents(\"justice breyer\")[0].page_content)"
"len(retriever.invoke(\"justice breyer\")[0].page_content)"
]
},
{
@@ -216,14 +240,37 @@
"id": "d6a7ae0d",
"metadata": {},
"source": [
"## Summary\n",
"## Associating summaries with a document for retrieval\n",
"\n",
"Oftentimes a summary may be able to distill more accurately what a chunk is about, leading to better retrieval. Here we show how to create summaries, and then embed those."
"A summary may be able to distill more accurately what a chunk is about, leading to better retrieval. Here we show how to create summaries, and then embed those.\n",
"\n",
"We construct a simple [chain](/docs/how_to/sequence) that will receive an input [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) object and generate a summary using a LLM.\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" />\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 8,
"id": "6589291f-55bb-4e9a-b4ff-08f2506ed641",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1433dff4",
"metadata": {},
"outputs": [],
@@ -233,27 +280,26 @@
"from langchain_core.documents import Document\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "35b30390",
"metadata": {},
"outputs": [],
"source": [
"\n",
"chain = (\n",
" {\"doc\": lambda x: x.page_content}\n",
" | ChatPromptTemplate.from_template(\"Summarize the following document:\\n\\n{doc}\")\n",
" | ChatOpenAI(max_retries=0)\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3faa9fde-1b09-4849-a815-8b2e89c30a02",
"metadata": {},
"source": [
"Note that we can [batch](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) the chain accross documents:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 10,
"id": "41a2a738",
"metadata": {},
"outputs": [],
@@ -261,9 +307,17 @@
"summaries = chain.batch(docs, {\"max_concurrency\": 5})"
]
},
{
"cell_type": "markdown",
"id": "73ef599e-140b-4905-8b62-6c52cdde1852",
"metadata": {},
"source": [
"We can then initialize a `MultiVectorRetriever` as before, indexing the summaries in our vector store, and retaining the original documents in our document store:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 11,
"id": "7ac5e4b1",
"metadata": {},
"outputs": [],
@@ -279,29 +333,13 @@
" byte_store=store,\n",
" id_key=id_key,\n",
")\n",
"doc_ids = [str(uuid.uuid4()) for _ in docs]"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "0d93309f",
"metadata": {},
"outputs": [],
"source": [
"doc_ids = [str(uuid.uuid4()) for _ in docs]\n",
"\n",
"summary_docs = [\n",
" Document(page_content=s, metadata={id_key: doc_ids[i]})\n",
" for i, s in enumerate(summaries)\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "6d5edf0d",
"metadata": {},
"outputs": [],
"source": [
"]\n",
"\n",
"retriever.vectorstore.add_documents(summary_docs)\n",
"retriever.docstore.mset(list(zip(doc_ids, docs)))"
]
@@ -320,50 +358,48 @@
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "299232d6",
"cell_type": "markdown",
"id": "f0274892-29c1-4616-9040-d23f9d537526",
"metadata": {},
"outputs": [],
"source": [
"sub_docs = vectorstore.similarity_search(\"justice breyer\")"
"Querying the vector store will return summaries:"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "10e404c0",
"execution_count": 12,
"id": "299232d6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content=\"The document is a speech given by President Biden addressing various issues and outlining his agenda for the nation. He highlights the importance of nominating a Supreme Court justice and introduces his nominee, Judge Ketanji Brown Jackson. He emphasizes the need to secure the border and reform the immigration system, including providing a pathway to citizenship for Dreamers and essential workers. The President also discusses the protection of women's rights, including access to healthcare and the right to choose. He calls for the passage of the Equality Act to protect LGBTQ+ rights. Additionally, President Biden discusses the need to address the opioid epidemic, improve mental health services, support veterans, and fight against cancer. He expresses optimism for the future of America and the strength of the American people.\", metadata={'doc_id': '56345bff-3ead-418c-a4ff-dff203f77474'})"
"Document(page_content=\"President Biden recently nominated Judge Ketanji Brown Jackson to serve on the United States Supreme Court, emphasizing her qualifications and broad support. The President also outlined a plan to secure the border, fix the immigration system, protect women's rights, support LGBTQ+ Americans, and advance mental health services. He highlighted the importance of bipartisan unity in passing legislation, such as the Violence Against Women Act. The President also addressed supporting veterans, particularly those impacted by exposure to burn pits, and announced plans to expand benefits for veterans with respiratory cancers. Additionally, he proposed a plan to end cancer as we know it through the Cancer Moonshot initiative. President Biden expressed optimism about the future of America and emphasized the strength of the American people in overcoming challenges.\", metadata={'doc_id': '84015b1b-980e-400a-94d8-cf95d7e079bd'})"
]
},
"execution_count": 19,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sub_docs = retriever.vectorstore.similarity_search(\"justice breyer\")\n",
"\n",
"sub_docs[0]"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "e4cce5c2",
"cell_type": "markdown",
"id": "e4f77ac5-2926-4f60-aad5-b2067900dff9",
"metadata": {},
"outputs": [],
"source": [
"retrieved_docs = retriever.get_relevant_documents(\"justice breyer\")"
"Whereas the retriever will return the larger source document:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "c8570dbb",
"execution_count": 13,
"id": "e4cce5c2",
"metadata": {},
"outputs": [
{
@@ -372,12 +408,14 @@
"9194"
]
},
"execution_count": 21,
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retrieved_docs = retriever.invoke(\"justice breyer\")\n",
"\n",
"len(retrieved_docs[0].page_content)"
]
},
@@ -388,42 +426,28 @@
"source": [
"## Hypothetical Queries\n",
"\n",
"An LLM can also be used to generate a list of hypothetical questions that could be asked of a particular document. These questions can then be embedded"
"An LLM can also be used to generate a list of hypothetical questions that could be asked of a particular document, which might bear close semantic similarity to relevant queries in a [RAG](/docs/tutorials/rag) application. These questions can then be embedded and associated with the documents to improve retrieval.\n",
"\n",
"Below, we use the [with_structured_output](/docs/how_to/structured_output/) method to structure the LLM output into a list of strings."
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "5219b085",
"execution_count": 16,
"id": "03d85234-c33a-4a43-861d-47328e1ec2ea",
"metadata": {},
"outputs": [],
"source": [
"functions = [\n",
" {\n",
" \"name\": \"hypothetical_questions\",\n",
" \"description\": \"Generate hypothetical questions\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"questions\": {\n",
" \"type\": \"array\",\n",
" \"items\": {\"type\": \"string\"},\n",
" },\n",
" },\n",
" \"required\": [\"questions\"],\n",
" },\n",
" }\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "523deb92",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers.openai_functions import JsonKeyOutputFunctionsParser\n",
"from typing import List\n",
"\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class HypotheticalQuestions(BaseModel):\n",
" \"\"\"Generate hypothetical questions.\"\"\"\n",
"\n",
" questions: List[str] = Field(..., description=\"List of questions\")\n",
"\n",
"\n",
"chain = (\n",
" {\"doc\": lambda x: x.page_content}\n",
@@ -431,28 +455,36 @@
" | ChatPromptTemplate.from_template(\n",
" \"Generate a list of exactly 3 hypothetical questions that the below document could be used to answer:\\n\\n{doc}\"\n",
" )\n",
" | ChatOpenAI(max_retries=0, model=\"gpt-4\").bind(\n",
" functions=functions, function_call={\"name\": \"hypothetical_questions\"}\n",
" | ChatOpenAI(max_retries=0, model=\"gpt-4o\").with_structured_output(\n",
" HypotheticalQuestions\n",
" )\n",
" | JsonKeyOutputFunctionsParser(key_name=\"questions\")\n",
" | (lambda x: x.questions)\n",
")"
]
},
{
"cell_type": "markdown",
"id": "6dddc40f-62af-413c-b944-f94a5e1f2f4e",
"metadata": {},
"source": [
"Invoking the chain on a single document demonstrates that it outputs a list of questions:"
]
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 17,
"id": "11d30554",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[\"What was the author's first experience with programming like?\",\n",
" 'Why did the author switch their focus from AI to Lisp during their graduate studies?',\n",
" 'What led the author to contemplate a career in art instead of computer science?']"
"[\"What impact did the IBM 1401 have on the author's early programming experiences?\",\n",
" \"How did the transition from using the IBM 1401 to microcomputers influence the author's programming journey?\",\n",
" \"What role did Lisp play in shaping the author's understanding and approach to AI?\"]"
]
},
"execution_count": 24,
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@@ -462,22 +494,24 @@
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "3eb2e48c",
"cell_type": "markdown",
"id": "dcffc572-7b20-4b77-857a-90ec360a8f7e",
"metadata": {},
"outputs": [],
"source": [
"hypothetical_questions = chain.batch(docs, {\"max_concurrency\": 5})"
"We can batch then batch the chain over all documents and assemble our vector store and document store as before:"
]
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 18,
"id": "b2cd6e75",
"metadata": {},
"outputs": [],
"source": [
"# Batch chain over documents to generate hypothetical questions\n",
"hypothetical_questions = chain.batch(docs, {\"max_concurrency\": 5})\n",
"\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(\n",
" collection_name=\"hypo-questions\", embedding_function=OpenAIEmbeddings()\n",
@@ -491,82 +525,67 @@
" byte_store=store,\n",
" id_key=id_key,\n",
")\n",
"doc_ids = [str(uuid.uuid4()) for _ in docs]"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "18831b3b",
"metadata": {},
"outputs": [],
"source": [
"doc_ids = [str(uuid.uuid4()) for _ in docs]\n",
"\n",
"\n",
"# Generate Document objects from hypothetical questions\n",
"question_docs = []\n",
"for i, question_list in enumerate(hypothetical_questions):\n",
" question_docs.extend(\n",
" [Document(page_content=s, metadata={id_key: doc_ids[i]}) for s in question_list]\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "224b24c5",
"metadata": {},
"outputs": [],
"source": [
" )\n",
"\n",
"\n",
"retriever.vectorstore.add_documents(question_docs)\n",
"retriever.docstore.mset(list(zip(doc_ids, docs)))"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "7b442b90",
"cell_type": "markdown",
"id": "75cba8ab-a06f-4545-85fc-cf49d0204b5e",
"metadata": {},
"outputs": [],
"source": [
"sub_docs = vectorstore.similarity_search(\"justice breyer\")"
"Note that querying the underlying vector store will retrieve hypothetical questions that are semantically similar to the input query:"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "089b5ad0",
"execution_count": 19,
"id": "7b442b90",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Who has been nominated to serve on the United States Supreme Court?', metadata={'doc_id': '0b3a349e-c936-4e77-9c40-0a39fc3e07f0'}),\n",
" Document(page_content=\"What was the context and content of Robert Morris' advice to the document's author in 2010?\", metadata={'doc_id': 'b2b2cdca-988a-4af1-ba47-46170770bc8c'}),\n",
" Document(page_content='How did personal circumstances influence the decision to pass on the leadership of Y Combinator?', metadata={'doc_id': 'b2b2cdca-988a-4af1-ba47-46170770bc8c'}),\n",
" Document(page_content='What were the reasons for the author leaving Yahoo in the summer of 1999?', metadata={'doc_id': 'ce4f4981-ca60-4f56-86f0-89466de62325'})]"
"[Document(page_content='What might be the potential benefits of nominating Circuit Court of Appeals Judge Ketanji Brown Jackson to the United States Supreme Court?', metadata={'doc_id': '43292b74-d1b8-4200-8a8b-ea0cb57fbcdb'}),\n",
" Document(page_content='How might the Bipartisan Infrastructure Law impact the economic competition between the U.S. and China?', metadata={'doc_id': '66174780-d00c-4166-9791-f0069846e734'}),\n",
" Document(page_content='What factors led to the creation of Y Combinator?', metadata={'doc_id': '72003c4e-4cc9-4f09-a787-0b541a65b38c'}),\n",
" Document(page_content='How did the ability to publish essays online change the landscape for writers and thinkers?', metadata={'doc_id': 'e8d2c648-f245-4bcc-b8d3-14e64a164b64'})]"
]
},
"execution_count": 30,
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sub_docs = retriever.vectorstore.similarity_search(\"justice breyer\")\n",
"\n",
"sub_docs"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "7594b24e",
"cell_type": "markdown",
"id": "63c32e43-5f4a-463b-a0c2-2101986f70e6",
"metadata": {},
"outputs": [],
"source": [
"retrieved_docs = retriever.get_relevant_documents(\"justice breyer\")"
"And invoking the retriever will return the corresponding document:"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "4c120c65",
"execution_count": 20,
"id": "7594b24e",
"metadata": {},
"outputs": [
{
@@ -575,22 +594,15 @@
"9194"
]
},
"execution_count": 32,
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retrieved_docs = retriever.invoke(\"justice breyer\")\n",
"len(retrieved_docs[0].page_content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "005072b8",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -609,7 +621,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,


@@ -0,0 +1,228 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4facdf7f-680e-4d28-908b-2b8408e2a741",
"metadata": {},
"source": [
"# How to pass multimodal data directly to models\n",
"\n",
"Here we demonstrate how to pass multimodal input directly to models. \n",
"We currently expect all input to be passed in the same format as [OpenAI expects](https://platform.openai.com/docs/guides/vision).\n",
"For other model providers that support multimodal input, we have added logic inside the class to convert to the expected format.\n",
"\n",
"In this example we will ask a model to describe an image."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "0d9fd81a-b7f0-445a-8e3d-cfc2d31fdd59",
"metadata": {},
"outputs": [],
"source": [
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "fb896ce9",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o\")"
]
},
{
"cell_type": "markdown",
"id": "4fca4da7",
"metadata": {},
"source": [
"The most commonly supported way to pass in images is to pass it in as a byte string.\n",
"This should work for most model integrations."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9ca1040c",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ec680b6b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The weather in the image appears to be clear and pleasant. The sky is mostly blue with scattered, light clouds, suggesting a sunny day with minimal cloud cover. There is no indication of rain or strong winds, and the overall scene looks bright and calm. The lush green grass and clear visibility further indicate good weather conditions.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": f\"data:image/jpeg;base64,{image_data}\"},\n",
" },\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "8656018e-c56d-47d2-b2be-71e87827f90a",
"metadata": {},
"source": [
"We can feed the image URL directly in a content block of type \"image_url\". Note that only some model providers support this."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a8819cf3-5ddc-44f0-889a-19ca7b7fe77e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The weather in the image appears to be clear and sunny. The sky is mostly blue with a few scattered clouds, suggesting good visibility and a likely pleasant temperature. The bright sunlight is casting distinct shadows on the grass and vegetation, indicating it is likely daytime, possibly late morning or early afternoon. The overall ambiance suggests a warm and inviting day, suitable for outdoor activities.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "1c470309",
"metadata": {},
"source": [
"We can also pass in multiple images."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "325fb4ca",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Yes, the two images are the same. They both depict a wooden boardwalk extending through a grassy field under a blue sky with light clouds. The scenery, lighting, and composition are identical.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"are these two images the same?\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "71bd28cf-d76c-44e2-a55e-c5f265db986e",
"metadata": {},
"source": [
"## Tool calls\n",
"\n",
"Some multimodal models support [tool calling](/docs/concepts/#functiontool-calling) features as well. To call tools using such models, simply bind tools to them in the [usual way](/docs/how_to/tool_calling), and invoke the model using content blocks of the desired type (e.g., containing image data)."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "cd22ea82-2f93-46f9-9f7a-6aaf479fcaa9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'name': 'weather_tool', 'args': {'weather': 'sunny'}, 'id': 'call_BSX4oq4SKnLlp2WlzDhToHBr'}]\n"
]
}
],
"source": [
"from typing import Literal\n",
"\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def weather_tool(weather: Literal[\"sunny\", \"cloudy\", \"rainy\"]) -> None:\n",
" \"\"\"Describe the weather\"\"\"\n",
" pass\n",
"\n",
"\n",
"model_with_tools = model.bind_tools([weather_tool])\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" ],\n",
")\n",
"response = model_with_tools.invoke([message])\n",
"print(response.tool_calls)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,184 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4facdf7f-680e-4d28-908b-2b8408e2a741",
"metadata": {},
"source": [
"# How to use multimodal prompts\n",
"\n",
"Here we demonstrate how to use prompt templates to format multimodal inputs to models. \n",
"\n",
"In this example we will ask a model to describe an image."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "0d9fd81a-b7f0-445a-8e3d-cfc2d31fdd59",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"\n",
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2671f995",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "4ee35e4f",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"Describe the image provided\"),\n",
" (\n",
" \"user\",\n",
" [{\"type\": \"image_url\", \"image_url\": \"data:image/jpeg;base64,{image_data}\"}],\n",
" ),\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "089f75c2",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "02744b06",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The image depicts a sunny day with a beautiful blue sky filled with scattered white clouds. The sky has varying shades of blue, ranging from a deeper hue near the horizon to a lighter, almost pale blue higher up. The white clouds are fluffy and scattered across the expanse of the sky, creating a peaceful and serene atmosphere. The lighting and cloud patterns suggest pleasant weather conditions, likely during the daytime hours on a mild, sunny day in an outdoor natural setting.\n"
]
}
],
"source": [
"response = chain.invoke({\"image_data\": image_data})\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "e9b9ebf6",
"metadata": {},
"source": [
"We can also pass in multiple images."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "02190ee3",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"compare the two pictures provided\"),\n",
" (\n",
" \"user\",\n",
" [\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": \"data:image/jpeg;base64,{image_data1}\",\n",
" },\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": \"data:image/jpeg;base64,{image_data2}\",\n",
" },\n",
" ],\n",
" ),\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "42af057b",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "513abe00",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The two images provided are identical. Both images feature a wooden boardwalk path extending through a lush green field under a bright blue sky with some clouds. The perspective, colors, and elements in both images are exactly the same.\n"
]
}
],
"source": [
"response = chain.invoke({\"image_data1\": image_data, \"image_data2\": image_data})\n",
"print(response.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ea8152c3",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -36,12 +36,13 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "ede7fdc0-ef31-483d-bd67-32e4b5c5d527",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-community langchainhub langchain-chroma bs4"
"%%capture --no-stderr\n",
"%pip install --upgrade --quiet langchain langchain-community langchain-chroma bs4"
]
},
{
@@ -54,7 +55,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "143787ca-d8e6-4dc9-8281-4374f4d71720",
"metadata": {},
"outputs": [],
@@ -62,7 +63,8 @@
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"if not os.environ.get(\"OPENAI_API_KEY\"):\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# import dotenv\n",
"\n",
@@ -83,13 +85,14 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "07411adb-3722-4f65-ab7f-8f6f57663d11",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
"if not os.environ.get(\"LANGCHAIN_API_KEY\"):\n",
" os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
@@ -126,7 +129,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 4,
"id": "cb58f273-2111-4a9b-8932-9b64c95030c8",
"metadata": {},
"outputs": [],
@@ -157,13 +160,12 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 5,
"id": "820244ae-74b4-4593-b392-822979dd91b8",
"metadata": {},
"outputs": [],
"source": [
"import bs4\n",
"from langchain import hub\n",
"from langchain.chains import create_retrieval_chain\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"from langchain_chroma import Chroma\n",
@@ -202,7 +204,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 6,
"id": "2b685428-8b82-4af1-be4f-7232c5d55b73",
"metadata": {},
"outputs": [],
@@ -239,7 +241,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 7,
"id": "4c4b1695-6217-4ee8-abaf-7cc26366d988",
"metadata": {},
"outputs": [],
@@ -265,7 +267,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 8,
"id": "afef4385-f571-4874-8f52-3d475642f579",
"metadata": {},
"outputs": [],
@@ -314,7 +316,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 9,
"id": "9c3fb176-8d6a-4dc7-8408-6a22c5f7cc72",
"metadata": {},
"outputs": [],
@@ -343,17 +345,17 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 10,
"id": "1046c92f-21b3-4214-907d-92878d8cba23",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable and easier to accomplish. This process can be done using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in thinking step by step or exploring multiple reasoning possibilities at each step. Task decomposition can be facilitated by providing simple prompts to a language model, task-specific instructions, or human inputs.'"
"'Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable and easier to accomplish. This process can be done using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down tasks effectively. Task decomposition can be facilitated by providing simple prompts to a language model, task-specific instructions, or human inputs.'"
]
},
"execution_count": 7,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
@@ -369,17 +371,17 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 11,
"id": "0e89c75f-7ad7-4331-a2fe-57579eb8f840",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Task decomposition can be achieved through various methods, including using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down complex tasks into smaller steps. Common ways of task decomposition include providing simple prompts to a language model, task-specific instructions tailored to the specific task at hand, or incorporating human inputs to guide the decomposition process effectively.'"
"'Task decomposition can be achieved through various methods, including using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down tasks effectively. Common ways of task decomposition include providing simple prompts to a language model, task-specific instructions, or human inputs to break down complex tasks into smaller and more manageable steps. Additionally, task decomposition can involve utilizing resources like internet access for information gathering, long-term memory management, and GPT-3.5 powered agents for delegation of simple tasks.'"
]
},
"execution_count": 8,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@@ -401,7 +403,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 12,
"id": "7686b874-3a85-499f-82b5-28a85c4c768c",
"metadata": {},
"outputs": [
@@ -411,11 +413,11 @@
"text": [
"User: What is Task Decomposition?\n",
"\n",
"AI: Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable and easier to accomplish. This process can be done using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in thinking step by step or exploring multiple reasoning possibilities at each step. Task decomposition can be facilitated by providing simple prompts to a language model, task-specific instructions, or human inputs.\n",
"AI: Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable and easier to accomplish. This process can be done using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down tasks effectively. Task decomposition can be facilitated by providing simple prompts to a language model, task-specific instructions, or human inputs.\n",
"\n",
"User: What are common ways of doing it?\n",
"\n",
"AI: Task decomposition can be achieved through various methods, including using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down complex tasks into smaller steps. Common ways of task decomposition include providing simple prompts to a language model, task-specific instructions tailored to the specific task at hand, or incorporating human inputs to guide the decomposition process effectively.\n",
"AI: Task decomposition can be achieved through various methods, including using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down tasks effectively. Common ways of task decomposition include providing simple prompts to a language model, task-specific instructions, or human inputs to break down complex tasks into smaller and more manageable steps. Additionally, task decomposition can involve utilizing resources like internet access for information gathering, long-term memory management, and GPT-3.5 powered agents for delegation of simple tasks.\n",
"\n"
]
}
@@ -452,7 +454,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 13,
"id": "71c32048-1a41-465f-a9e2-c4affc332fd9",
"metadata": {},
"outputs": [],
@@ -552,17 +554,17 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 14,
"id": "6d0a7a73-d151-47d9-9e99-b4f3291c0322",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable. This process helps agents or models tackle difficult tasks by dividing them into more easily achievable subgoals. Task decomposition can be done through techniques like Chain of Thought or Tree of Thoughts, which guide the model in thinking step by step or exploring multiple reasoning possibilities at each step.'"
"'Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable. Techniques like Chain of Thought (CoT) and Tree of Thoughts help in decomposing hard tasks into multiple manageable tasks by instructing models to think step by step and explore multiple reasoning possibilities at each step. Task decomposition can be achieved through various methods such as using prompting techniques, task-specific instructions, or human inputs.'"
]
},
"execution_count": 2,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
@@ -578,17 +580,17 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 15,
"id": "17021822-896a-4513-a17d-1d20b1c5381c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Common ways of task decomposition include using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide models in breaking down complex tasks into smaller steps. This can be achieved through simple prompting with LLMs, task-specific instructions, or human inputs to help the model understand and navigate the task effectively. Task decomposition aims to enhance model performance on complex tasks by utilizing more test-time computation and shedding light on the model's thinking process.\""
"'Task decomposition can be done in common ways such as using prompting techniques like Chain of Thought (CoT) or Tree of Thoughts, which instruct models to think step by step and explore multiple reasoning possibilities at each step. Another way is to provide task-specific instructions, such as asking to \"Write a story outline\" for writing a novel, to guide the decomposition process. Additionally, task decomposition can also involve human inputs to break down complex tasks into smaller and simpler steps.'"
]
},
"execution_count": 3,
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
@@ -618,7 +620,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 16,
"id": "809cc747-2135-40a2-8e73-e4556343ee64",
"metadata": {},
"outputs": [],
@@ -646,14 +648,14 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 17,
"id": "1726d151-4653-4c72-a187-a14840add526",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.prebuilt import chat_agent_executor\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent_executor = chat_agent_executor.create_tool_calling_executor(llm, tools)"
"agent_executor = create_react_agent(llm, tools)"
]
},
{
@@ -666,19 +668,26 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 18,
"id": "52ae46d9-43f7-481b-96d5-df750be3ad65",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Error in LangChainTracer.on_tool_end callback: TracerException(\"Found chain run at ID 5cd28d13-88dd-4eac-a465-3770ac27eff6, but expected {'tool'} run.\")\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_wxRrUmNbaNny8wh9JIb5uCRB', 'function': {'arguments': '{\"query\":\"Task Decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 68, 'total_tokens': 87}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-57ee0d12-6142-4957-a002-cce0093efe07-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Task Decomposition'}, 'id': 'call_wxRrUmNbaNny8wh9JIb5uCRB'}])]}}\n",
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_TbhPPPN05GKi36HLeaN4QM90', 'function': {'arguments': '{\"query\":\"Task Decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 68, 'total_tokens': 87}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-2e60d910-879a-4a2a-b1e9-6a6c5c7d7ebc-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Task Decomposition'}, 'id': 'call_TbhPPPN05GKi36HLeaN4QM90'}])]}}\n",
"----\n",
"{'action': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\n(3) Task execution: Expert models execute on the specific tasks and log results.\\nInstruction:\\n\\nWith the input and the inference results, the AI assistant needs to describe the process and results. The previous stages can be formed as - User Input: {{ User Input }}, Task Planning: {{ Tasks }}, Model Selection: {{ Model Assignment }}, Task Execution: {{ Predictions }}. You must first answer the user\\'s request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. If inference results contain a file path, must tell the user the complete file path.\\n\\nFig. 11. Illustration of how HuggingGPT works. (Image source: Shen et al. 2023)\\nThe system comprises of 4 stages:\\n(1) Task planning: LLM works as the brain and parses the user requests into multiple tasks. There are four attributes associated with each task: task type, ID, dependencies, and arguments. They use few-shot examples to guide LLM to do task parsing and planning.\\nInstruction:', name='blog_post_retriever', id='9c3a17f7-653c-47fa-b4e4-fa3d8d24c85d', tool_call_id='call_wxRrUmNbaNny8wh9JIb5uCRB')]}}\n",
"{'tools': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.', name='blog_post_retriever', tool_call_id='call_TbhPPPN05GKi36HLeaN4QM90')]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content='Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. This approach helps agents in planning and executing tasks more effectively. One common method for task decomposition is the Chain of Thought (CoT) technique, where models are instructed to think step by step to decompose hard tasks into manageable steps. Another extension of CoT is the Tree of Thoughts, which explores multiple reasoning possibilities at each step by creating a tree structure of thought steps.\\n\\nTask decomposition can be achieved through various methods, such as using language models with simple prompting, task-specific instructions, or human inputs. By breaking down tasks into smaller components, agents can better plan and execute tasks efficiently.\\n\\nIf you would like more detailed information or examples on task decomposition, feel free to ask!', response_metadata={'token_usage': {'completion_tokens': 154, 'prompt_tokens': 588, 'total_tokens': 742}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'stop', 'logprobs': None}, id='run-8991fa20-c527-4f9e-a058-fc6264fe6259-0')]}}\n",
"{'agent': {'messages': [AIMessage(content='Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. This approach helps in transforming big tasks into multiple manageable tasks, making it easier for autonomous agents to handle and interpret the thinking process. One common method for task decomposition is the Chain of Thought (CoT) technique, where models are instructed to \"think step by step\" to decompose hard tasks. Another extension of CoT is the Tree of Thoughts, which explores multiple reasoning possibilities at each step by creating a tree structure of multiple thoughts per step. Task decomposition can be facilitated through various methods such as using simple prompts, task-specific instructions, or human inputs.', response_metadata={'token_usage': {'completion_tokens': 130, 'prompt_tokens': 636, 'total_tokens': 766}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-3ef17638-65df-4030-a7fe-795e6da91c69-0')]}}\n",
"----\n"
]
}
@@ -707,7 +716,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 19,
"id": "837a401e-9757-4d0e-a0da-24fa097d887e",
"metadata": {},
"outputs": [],
@@ -716,9 +725,7 @@
"\n",
"memory = SqliteSaver.from_conn_string(\":memory:\")\n",
"\n",
"agent_executor = chat_agent_executor.create_tool_calling_executor(\n",
" llm, tools, checkpointer=memory\n",
")"
"agent_executor = create_react_agent(llm, tools, checkpointer=memory)"
]
},
{
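Because the executor above is checkpointed with `SqliteSaver`, each call needs a thread identifier in its config. A minimal sketch of how it might be streamed, matching the outputs shown below (the `thread_id` value is arbitrary and purely illustrative):

```python
from langchain_core.messages import HumanMessage

# Any stable string works as the thread id; invocations that share an id
# also share the checkpointed message history.
config = {"configurable": {"thread_id": "abc123"}}

for step in agent_executor.stream(
    {"messages": [HumanMessage(content="What is Task Decomposition?")]},
    config=config,
):
    print(step)
    print("----")
```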
@@ -733,7 +740,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 20,
"id": "d6d70833-b958-4cd7-9e27-29c1c08bb1b8",
"metadata": {},
"outputs": [
@@ -741,7 +748,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='Hello Bob! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 67, 'total_tokens': 78}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'stop', 'logprobs': None}, id='run-1451e59b-b135-4776-985d-4759338ffee5-0')]}}\n",
"{'agent': {'messages': [AIMessage(content='Hello Bob! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 67, 'total_tokens': 78}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-1cd17562-18aa-4839-b41b-403b17a0fc20-0')]}}\n",
"----\n"
]
}
@@ -766,19 +773,26 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 21,
"id": "e2c570ae-dd91-402c-8693-ae746de63b16",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Error in LangChainTracer.on_tool_end callback: TracerException(\"Found chain run at ID c54381c0-c5d9-495a-91a0-aca4ae755663, but expected {'tool'} run.\")\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_ab2x4iUPSWDAHS5txL7PspSK', 'function': {'arguments': '{\"query\":\"Task Decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 91, 'total_tokens': 110}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-f76b5813-b41c-4d0d-9ed2-667b988d885e-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Task Decomposition'}, 'id': 'call_ab2x4iUPSWDAHS5txL7PspSK'}])]}}\n",
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_rg7zKTE5e0ICxVSslJ1u9LMg', 'function': {'arguments': '{\"query\":\"Task Decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 91, 'total_tokens': 110}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-122bf097-7ff1-49aa-b430-e362b51354ad-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Task Decomposition'}, 'id': 'call_rg7zKTE5e0ICxVSslJ1u9LMg'}])]}}\n",
"----\n",
"{'action': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\n(3) Task execution: Expert models execute on the specific tasks and log results.\\nInstruction:\\n\\nWith the input and the inference results, the AI assistant needs to describe the process and results. The previous stages can be formed as - User Input: {{ User Input }}, Task Planning: {{ Tasks }}, Model Selection: {{ Model Assignment }}, Task Execution: {{ Predictions }}. You must first answer the user\\'s request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. If inference results contain a file path, must tell the user the complete file path.\\n\\nFig. 11. Illustration of how HuggingGPT works. (Image source: Shen et al. 2023)\\nThe system comprises of 4 stages:\\n(1) Task planning: LLM works as the brain and parses the user requests into multiple tasks. There are four attributes associated with each task: task type, ID, dependencies, and arguments. They use few-shot examples to guide LLM to do task parsing and planning.\\nInstruction:', name='blog_post_retriever', id='e0895fa5-5d41-4be0-98db-10a83d42fc2f', tool_call_id='call_ab2x4iUPSWDAHS5txL7PspSK')]}}\n",
"{'tools': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.', name='blog_post_retriever', tool_call_id='call_rg7zKTE5e0ICxVSslJ1u9LMg')]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content='Task decomposition is a technique used in complex tasks where the task is broken down into smaller and simpler steps. This approach helps in managing and solving difficult tasks by dividing them into more manageable components. One common method for task decomposition is the Chain of Thought (CoT) technique, which prompts the model to think step by step and decompose hard tasks into smaller steps. Another extension of CoT is the Tree of Thoughts, which explores multiple reasoning possibilities at each step by creating a tree structure of thought steps.\\n\\nTask decomposition can be achieved through various methods, such as using language models with simple prompting, task-specific instructions, or human inputs. By breaking down tasks into smaller components, agents can better plan and execute complex tasks effectively.\\n\\nIf you would like more detailed information or examples related to task decomposition, feel free to ask!', response_metadata={'token_usage': {'completion_tokens': 165, 'prompt_tokens': 611, 'total_tokens': 776}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'stop', 'logprobs': None}, id='run-13296566-8577-4d65-982b-a39718988ca3-0')]}}\n",
"{'agent': {'messages': [AIMessage(content='Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. This approach helps in managing and solving intricate problems by dividing them into more manageable components. By decomposing tasks, agents or models can better understand the steps involved and plan their actions accordingly. Techniques like Chain of Thought (CoT) and Tree of Thoughts are examples of methods that enhance model performance on complex tasks by breaking them down into smaller steps.', response_metadata={'token_usage': {'completion_tokens': 87, 'prompt_tokens': 659, 'total_tokens': 746}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-b9166386-83e5-4b82-9a4b-590e5fa76671-0')]}}\n",
"----\n"
]
}
@@ -805,7 +819,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 22,
"id": "570d8c68-136e-4ba5-969a-03ba195f6118",
"metadata": {},
"outputs": [
@@ -813,11 +827,24 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_KvoiamnLfGEzMeEMlV3u0TJ7', 'function': {'arguments': '{\"query\":\"common ways of task decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 21, 'prompt_tokens': 930, 'total_tokens': 951}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-dd842071-6dbd-4b68-8657-892eaca58638-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'common ways of task decomposition'}, 'id': 'call_KvoiamnLfGEzMeEMlV3u0TJ7'}])]}}\n",
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_6kbxTU5CDWLmF9mrvR7bWSkI', 'function': {'arguments': '{\"query\":\"Common ways of task decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 21, 'prompt_tokens': 769, 'total_tokens': 790}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-2d2c8327-35cd-484a-b8fd-52436657c2d8-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Common ways of task decomposition'}, 'id': 'call_6kbxTU5CDWLmF9mrvR7bWSkI'}])]}}\n",
"----\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Error in LangChainTracer.on_tool_end callback: TracerException(\"Found chain run at ID 29553415-e0f4-41a9-8921-ba489e377f68, but expected {'tool'} run.\")\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'tools': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.', name='blog_post_retriever', tool_call_id='call_6kbxTU5CDWLmF9mrvR7bWSkI')]}}\n",
"----\n",
"{'action': {'messages': [ToolMessage(content='Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nResources:\\n1. Internet access for searches and information gathering.\\n2. Long Term memory management.\\n3. GPT-3.5 powered Agents for delegation of simple tasks.\\n4. File output.\\n\\nPerformance Evaluation:\\n1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\\n2. Constructively self-criticize your big-picture behavior constantly.\\n3. Reflect on past decisions and strategies to refine your approach.\\n4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.\\n\\n(3) Task execution: Expert models execute on the specific tasks and log results.\\nInstruction:\\n\\nWith the input and the inference results, the AI assistant needs to describe the process and results. The previous stages can be formed as - User Input: {{ User Input }}, Task Planning: {{ Tasks }}, Model Selection: {{ Model Assignment }}, Task Execution: {{ Predictions }}. You must first answer the user\\'s request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. If inference results contain a file path, must tell the user the complete file path.', name='blog_post_retriever', id='c749bb8e-c8e0-4fa3-bc11-3e2e0651880b', tool_call_id='call_KvoiamnLfGEzMeEMlV3u0TJ7')]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content='According to the blog post, common ways of task decomposition include:\\n\\n1. Using language models with simple prompting like \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\"\\n2. Utilizing task-specific instructions, for example, using \"Write a story outline\" for writing a novel.\\n3. Involving human inputs in the task decomposition process.\\n\\nThese methods help in breaking down complex tasks into smaller and more manageable steps, facilitating better planning and execution of the overall task.', response_metadata={'token_usage': {'completion_tokens': 100, 'prompt_tokens': 1475, 'total_tokens': 1575}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'stop', 'logprobs': None}, id='run-98b765b3-f1a6-4c9a-ad0f-2db7950b900f-0')]}}\n",
"{'agent': {'messages': [AIMessage(content='Common ways of task decomposition include:\\n1. Using LLM with simple prompting like \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\"\\n2. Using task-specific instructions, for example, \"Write a story outline\" for writing a novel.\\n3. Involving human inputs in the task decomposition process.', response_metadata={'token_usage': {'completion_tokens': 67, 'prompt_tokens': 1339, 'total_tokens': 1406}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-9ad14cde-ca75-4238-a868-f865e0fc50dd-0')]}}\n",
"----\n"
]
}
@@ -852,20 +879,15 @@
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 23,
"id": "b1d2b4d4-e604-497d-873d-d345b808578e",
"metadata": {},
"outputs": [],
"source": [
"import bs4\n",
"from langchain.agents import AgentExecutor, create_tool_calling_agent\n",
"from langchain.tools.retriever import create_retriever_tool\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.chat_message_histories import ChatMessageHistory\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_core.chat_history import BaseChatMessageHistory\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"from langgraph.checkpoint.sqlite import SqliteSaver\n",
@@ -900,9 +922,7 @@
"tools = [tool]\n",
"\n",
"\n",
"agent_executor = chat_agent_executor.create_tool_calling_executor(\n",
" llm, tools, checkpointer=memory\n",
")"
"agent_executor = create_react_agent(llm, tools, checkpointer=memory)"
]
},
{
@@ -941,7 +961,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.2"
}
},
"nbformat": 4,

View File

@@ -1,5 +1,19 @@
{
"cells": [
{
"cell_type": "raw",
"id": "52976910",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"keywords: [recursivecharactertextsplitter]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "a678d550",

View File

@@ -2,11 +2,14 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_position: 0\n",
"keywords: [Runnable, Runnables, LCEL]\n",
"keywords: [Runnable, Runnables, RunnableSequence, LCEL, chain, chains, chaining]\n",
"---"
]
},
@@ -250,8 +253,7 @@
"source": [
"## Related\n",
"\n",
"- [Streaming](/docs/how_to/streaming/): Check out the streaming guide to understand the streaming behavior of a chain\n",
"- "
"- [Streaming](/docs/how_to/streaming/): Check out the streaming guide to understand the streaming behavior of a chain\n"
]
}
],

View File

@@ -503,7 +503,7 @@
}
],
"source": [
"chain = prompt | llm_with_tools | parser | tool # noqa\n",
"chain = prompt | llm_with_tools | parser | tool\n",
"chain.invoke({\"question\": \"What's the correlation between age and fare\"})"
]
},

View File

@@ -262,7 +262,7 @@
" return tables\n",
"\n",
"\n",
"table_chain = category_chain | get_tables # noqa\n",
"table_chain = category_chain | get_tables\n",
"table_chain.invoke({\"input\": \"What are all the genres of Alanis Morisette songs\"})"
]
},

View File

@@ -3,10 +3,14 @@
{
"cell_type": "raw",
"id": "0bdb3b97-4989-4237-b43b-5943dbbd8302",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_position: 1.5\n",
"keywords: [stream]\n",
"---"
]
},

View File

@@ -3,10 +3,15 @@
{
"cell_type": "raw",
"id": "27598444",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_position: 3\n",
"keywords: [structured output, json, information extraction, with_structured_output]\n",
"---"
]
},

View File

@@ -14,14 +14,20 @@
"\n",
":::\n",
"\n",
"```{=mdx}\n",
":::info\n",
":::info Tool calling vs function calling\n",
"\n",
"We use the term tool calling interchangeably with function calling. Although\n",
"function calling is sometimes meant to refer to invocations of a single function,\n",
"we treat all models as though they can return multiple tool or function calls in \n",
"each message.\n",
"\n",
":::\n",
"\n",
":::info Supported models\n",
"\n",
"You can find a [list of all models that support tool calling](/docs/integrations/chat/).\n",
"\n",
":::\n",
"```\n",
"\n",
"Tool calling allows a chat model to respond to a given prompt by \"calling a tool\".\n",
"While the name implies that the model is performing \n",
@@ -705,7 +711,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@@ -1,160 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4facdf7f-680e-4d28-908b-2b8408e2a741",
"metadata": {},
"source": [
"# How to call tools with multi-modal data\n",
"\n",
"Here we demonstrate how to call tools with multi-modal data, such as images.\n",
"\n",
"Some multi-modal models, such as those that can reason over images or audio, support [tool calling](/docs/concepts/#functiontool-calling) features as well.\n",
"\n",
"To call tools using such models, simply bind tools to them in the [usual way](/docs/how_to/tool_calling), and invoke the model using content blocks of the desired type (e.g., containing image data).\n",
"\n",
"Below, we demonstrate examples using [OpenAI](/docs/integrations/platforms/openai) and [Anthropic](/docs/integrations/platforms/anthropic). We will use the same image and tool in all cases. Let's first select an image, and build a placeholder tool that expects as input the string \"sunny\", \"cloudy\", or \"rainy\". We will ask the models to describe the weather in the image."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "0d9fd81a-b7f0-445a-8e3d-cfc2d31fdd59",
"metadata": {},
"outputs": [],
"source": [
"from typing import Literal\n",
"\n",
"from langchain_core.tools import tool\n",
"\n",
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\n",
"\n",
"\n",
"@tool\n",
"def weather_tool(weather: Literal[\"sunny\", \"cloudy\", \"rainy\"]) -> None:\n",
" \"\"\"Describe the weather\"\"\"\n",
" pass"
]
},
{
"cell_type": "markdown",
"id": "8656018e-c56d-47d2-b2be-71e87827f90a",
"metadata": {},
"source": [
"## OpenAI\n",
"\n",
"For OpenAI, we can feed the image URL directly in a content block of type \"image_url\":"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a8819cf3-5ddc-44f0-889a-19ca7b7fe77e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'name': 'weather_tool', 'args': {'weather': 'sunny'}, 'id': 'call_mRYL50MtHdeNuNIjSCm5UPmB'}]\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o\").bind_tools([weather_tool])\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.tool_calls)"
]
},
{
"cell_type": "markdown",
"id": "e5738224-1109-4bf8-8976-ff1570dd1d46",
"metadata": {},
"source": [
"Note that we recover tool calls with parsed arguments in LangChain's [standard format](/docs/how_to/tool_calling) in the model response."
]
},
{
"cell_type": "markdown",
"id": "0cee63ff-e09f-4dd8-8323-912edbde94f6",
"metadata": {},
"source": [
"## Anthropic\n",
"\n",
"For Anthropic, we can format a base64-encoded image into a content block of type \"image\", as below:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d90c4590-71c8-42b1-99ff-03a9eca8082e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'name': 'weather_tool', 'args': {'weather': 'sunny'}, 'id': 'toolu_016m9KfknJqx5fVRYk4tkF6s'}]\n"
]
}
],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")\n",
"\n",
"model = ChatAnthropic(model=\"claude-3-sonnet-20240229\").bind_tools([weather_tool])\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\n",
" \"type\": \"image\",\n",
" \"source\": {\n",
" \"type\": \"base64\",\n",
" \"media_type\": \"image/jpeg\",\n",
" \"data\": image_data,\n",
" },\n",
" },\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.tool_calls)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -42,7 +42,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai deepeval langchain-chroma"
"%pip install --upgrade --quiet langchain langchain-openai langchain-community deepeval langchain-chroma"
]
},
{

View File

@@ -36,7 +36,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai context-python"
"%pip install --upgrade --quiet langchain langchain-openai langchain-community context-python"
]
},
{

View File

@@ -36,7 +36,8 @@
"# Install necessary dependencies.\n",
"%pip install --upgrade --quiet infinopy\n",
"%pip install --upgrade --quiet matplotlib\n",
"%pip install --upgrade --quiet tiktoken"
"%pip install --upgrade --quiet tiktoken\n",
"%pip install --upgrade --quiet langchain langchain-openai langchain-community"
]
},
{

View File

@@ -56,7 +56,7 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain label-studio label-studio-sdk langchain-openai"
"%pip install --upgrade --quiet langchain label-studio label-studio-sdk langchain-openai langchain-community"
]
},
{

View File

@@ -110,7 +110,7 @@ with identify("user-123"):
llm.invoke("Tell me a joke")
with identify("user-456", user_props={"email": "user456@test.com"}):
agen.run("Who is Leo DiCaprio's girlfriend?")
agent.run("Who is Leo DiCaprio's girlfriend?")
```
## Support

View File

@@ -32,7 +32,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet promptlayer --upgrade"
"%pip install --upgrade --quiet langchain-community promptlayer --upgrade"
]
},
{

View File

@@ -35,7 +35,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet trubrics"
"%pip install --upgrade --quiet trubrics langchain langchain-community"
]
},
{

View File

@@ -81,7 +81,7 @@
}
],
"source": [
"%pip install -qU langchain langchain_openai uptrain faiss-cpu flashrank"
"%pip install -qU langchain langchain_openai langchain-community uptrain faiss-cpu flashrank"
]
},
{

File diff suppressed because one or more lines are too long

View File

@@ -137,6 +137,77 @@
"for chunk in chat.stream(messages):\n",
" print(chunk.content, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "c36575b3",
"metadata": {},
"source": [
"### LLM Caching with OpenSearch Semantic Cache\n",
"\n",
"Use OpenSearch as a semantic cache to cache prompts and responses and evaluate hits based on semantic similarity.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "375d4e56",
"metadata": {},
"outputs": [],
"source": [
"from langchain.globals import set_llm_cache\n",
"from langchain_aws import BedrockEmbeddings, ChatBedrock\n",
"from langchain_community.cache import OpenSearchSemanticCache\n",
"from langchain_core.messages import HumanMessage\n",
"\n",
"bedrock_embeddings = BedrockEmbeddings(\n",
" model_id=\"amazon.titan-embed-text-v1\", region_name=\"us-east-1\"\n",
")\n",
"\n",
"chat = ChatBedrock(\n",
" model_id=\"anthropic.claude-3-haiku-20240307-v1:0\", model_kwargs={\"temperature\": 0.5}\n",
")\n",
"\n",
"# Enable LLM cache. Make sure OpenSearch is set up and running. Update URL accordingly.\n",
"set_llm_cache(\n",
" OpenSearchSemanticCache(\n",
" opensearch_url=\"http://localhost:9200\", embedding=bedrock_embeddings\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bb5d25bb",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"messages = [HumanMessage(content=\"tell me about Amazon Bedrock\")]\n",
"response_text = chat.invoke(messages)\n",
"\n",
"print(response_text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6cfb3086",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# The second time, while not a direct hit, the question is semantically similar to the original question,\n",
"# so it uses the cached result!\n",
"\n",
"messages = [HumanMessage(content=\"what is amazon bedrock\")]\n",
"response_text = chat.invoke(messages)\n",
"\n",
"print(response_text)"
]
}
],
"metadata": {

View File

@@ -246,11 +246,220 @@
"source": [
"chain.invoke({\"product\": \"healthy snacks\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"### bind_tools()\n",
"\n",
"With `ChatEdenAI.bind_tools`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"llm = ChatEdenAI(provider=\"openai\", temperature=0.2, max_tokens=500)\n",
"\n",
"\n",
"class GetWeather(BaseModel):\n",
" \"\"\"Get the current weather in a given location\"\"\"\n",
"\n",
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"llm_with_tools = llm.bind_tools([GetWeather])"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', response_metadata={'openai': {'status': 'success', 'generated_text': None, 'message': [{'role': 'user', 'message': 'what is the weather like in San Francisco', 'tools': [{'name': 'GetWeather', 'description': 'Get the current weather in a given location', 'parameters': {'type': 'object', 'properties': {'location': {'description': 'The city and state, e.g. San Francisco, CA', 'type': 'string'}}, 'required': ['location']}}], 'tool_calls': None}, {'role': 'assistant', 'message': None, 'tools': None, 'tool_calls': [{'id': 'call_tRpAO7KbQwgTjlka70mCQJdo', 'name': 'GetWeather', 'arguments': '{\"location\":\"San Francisco\"}'}]}], 'cost': 0.000194}}, id='run-5c44c01a-d7bb-4df6-835e-bda596080399-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'San Francisco'}, 'id': 'call_tRpAO7KbQwgTjlka70mCQJdo'}])"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ai_msg = llm_with_tools.invoke(\n",
" \"what is the weather like in San Francisco\",\n",
")\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'GetWeather',\n",
" 'args': {'location': 'San Francisco'},\n",
" 'id': 'call_tRpAO7KbQwgTjlka70mCQJdo'}]"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ai_msg.tool_calls"
]
},
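Besides Pydantic classes, `bind_tools` also accepts dict schemas, as noted above. A minimal sketch under the assumption that an OpenAI-style function schema is passed through as-is (the schema below is illustrative):

```python
from langchain_community.chat_models.edenai import ChatEdenAI

llm = ChatEdenAI(provider="openai", temperature=0.2, max_tokens=500)

# Plain dict schema in the OpenAI function format (illustrative).
get_weather_schema = {
    "name": "GetWeather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA",
            }
        },
        "required": ["location"],
    },
}

llm_with_tools = llm.bind_tools([get_weather_schema])
ai_msg = llm_with_tools.invoke("what is the weather like in San Francisco")
print(ai_msg.tool_calls)
```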
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### with_structured_output()\n",
"\n",
"The BaseChatModel.with_structured_output interface makes it easy to get structured output from chat models. You can use ChatEdenAI.with_structured_output, which uses tool-calling under the hood), to get the model to more reliably return an output in a specific format:\n"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"GetWeather(location='San Francisco')"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"structured_llm = llm.with_structured_output(GetWeather)\n",
"structured_llm.invoke(\n",
" \"what is the weather like in San Francisco\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Passing Tool Results to model\n",
"\n",
"Here is a full example of how to use a tool. Pass the tool output to the model, and get the result back from the model"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'11 + 11 = 22'"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage, ToolMessage\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def add(a: int, b: int) -> int:\n",
" \"\"\"Adds a and b.\n",
"\n",
" Args:\n",
" a: first int\n",
" b: second int\n",
" \"\"\"\n",
" return a + b\n",
"\n",
"\n",
"llm = ChatEdenAI(\n",
" provider=\"openai\",\n",
" max_tokens=1000,\n",
" temperature=0.2,\n",
")\n",
"\n",
"llm_with_tools = llm.bind_tools([add], tool_choice=\"required\")\n",
"\n",
"query = \"What is 11 + 11?\"\n",
"\n",
"messages = [HumanMessage(query)]\n",
"ai_msg = llm_with_tools.invoke(messages)\n",
"messages.append(ai_msg)\n",
"\n",
"tool_call = ai_msg.tool_calls[0]\n",
"tool_output = add.invoke(tool_call[\"args\"])\n",
"\n",
"# This append the result from our tool to the model\n",
"messages.append(ToolMessage(tool_output, tool_call_id=tool_call[\"id\"]))\n",
"\n",
"llm_with_tools.invoke(messages).content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Streaming\n",
"\n",
"Eden AI does not currently support streaming tool calls. Attempting to stream will yield a single final message."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/eden/Projects/edenai-langchain/libs/community/langchain_community/chat_models/edenai.py:603: UserWarning: stream: Tool use is not yet supported in streaming mode.\n",
" warnings.warn(\"stream: Tool use is not yet supported in streaming mode.\")\n"
]
},
{
"data": {
"text/plain": [
"[AIMessageChunk(content='', id='run-fae32908-ec48-4ab2-ad96-bb0d0511754f', tool_calls=[{'name': 'add', 'args': {'a': 9, 'b': 9}, 'id': 'call_n0Tm7I9zERWa6UpxCAVCweLN'}], tool_call_chunks=[{'name': 'add', 'args': '{\"a\": 9, \"b\": 9}', 'id': 'call_n0Tm7I9zERWa6UpxCAVCweLN', 'index': 0}])]"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"list(llm_with_tools.stream(\"What's 9 + 9\"))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "langchain-pr",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},

View File

@@ -58,6 +58,62 @@
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### `HuggingFacePipeline`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_huggingface import HuggingFacePipeline\n",
"\n",
"llm = HuggingFacePipeline.from_model_id(\n",
" model_id=\"HuggingFaceH4/zephyr-7b-beta\",\n",
" task=\"text-generation\",\n",
" pipeline_kwargs=dict(\n",
" max_new_tokens=512,\n",
" do_sample=False,\n",
" repetition_penalty=1.03,\n",
" ),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To run a quantized version, you might specify a `bitsandbytes` quantization config as follows:\n",
"\n",
"```python\n",
"from transformers import BitsAndBytesConfig\n",
"\n",
"quantization_config = BitsAndBytesConfig(\n",
" load_in_4bit=True,\n",
" bnb_4bit_quant_type=\"nf4\",\n",
" bnb_4bit_compute_dtype=\"float16\",\n",
" bnb_4bit_use_double_quant=True\n",
")\n",
"```\n",
"\n",
"and pass it to the `HuggingFacePipeline` as a part of its `model_kwargs`:\n",
"\n",
"```python\n",
"pipeline = HuggingFacePipeline(\n",
" ...\n",
"\n",
" model_kwargs={\"quantization_config\": quantization_config},\n",
" \n",
" ...\n",
")\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -62,10 +62,10 @@
"%pip install --upgrade --quiet langchain-core langchain-community\n",
"\n",
"# Install Kineitca DB connection package\n",
"%pip install --upgrade --quiet gpudb typeguard\n",
"%pip install --upgrade --quiet 'gpudb>=7.2.0.8' typeguard pandas tqdm\n",
"\n",
"# Install packages needed for this tutorial\n",
"%pip install --upgrade --quiet faker"
"%pip install --upgrade --quiet faker ipykernel "
]
},
{
@@ -114,7 +114,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 11,
"metadata": {},
"outputs": [
{
@@ -139,11 +139,11 @@
"\n",
" birthdate \n",
"id \n",
"0 1997-12-01 \n",
"1 1924-07-27 \n",
"2 1933-11-28 \n",
"3 1988-10-19 \n",
"4 1931-03-12 \n"
"0 1997-12-08 \n",
"1 1924-08-03 \n",
"2 1933-12-05 \n",
"3 1988-10-26 \n",
"4 1931-03-19 \n"
]
}
],
@@ -222,39 +222,60 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CREATE OR REPLACE CONTEXT \"demo\".\"test_llm_ctx\" (\n",
" TABLE = \"demo\".\"user_profiles\",\n",
" COMMENT = 'Contains user profiles.'\n",
"),\n",
"(\n",
" SAMPLES = ( \n",
" 'How many male users are there?' = 'select count(1) as num_users\n",
" from demo.user_profiles\n",
" where sex = ''M'';' )\n",
")\n"
]
},
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 4,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# create an LLM context for the table.\n",
"from gpudb import GPUdbSamplesClause, GPUdbSqlContext, GPUdbTableClause\n",
"\n",
"sql = f\"\"\"\n",
"CREATE OR REPLACE CONTEXT {kinetica_ctx}\n",
"(\n",
" TABLE = {table_name}\n",
" COMMENT = 'Contains user profiles.'\n",
"),\n",
"(\n",
" SAMPLES = (\n",
" 'How many male users are there?' = \n",
" 'select count(1) as num_users\n",
" from {table_name}\n",
" where sex = ''M'';')\n",
"table_ctx = GPUdbTableClause(table=table_name, comment=\"Contains user profiles.\")\n",
"\n",
"samples_ctx = GPUdbSamplesClause(\n",
" samples=[\n",
" (\n",
" \"How many male users are there?\",\n",
" f\"\"\"\n",
" select count(1) as num_users\n",
" from {table_name}\n",
" where sex = 'M';\n",
" \"\"\",\n",
" )\n",
" ]\n",
")\n",
"\"\"\"\n",
"\n",
"count_affected = kinetica_llm.kdbc.execute(sql)\n",
"context_sql = GPUdbSqlContext(\n",
" name=kinetica_ctx, tables=[table_ctx], samples=samples_ctx\n",
").build_sql()\n",
"\n",
"print(context_sql)\n",
"count_affected = kinetica_llm.kdbc.execute(context_sql)\n",
"count_affected"
]
},
@@ -273,7 +294,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 8,
"metadata": {},
"outputs": [
{
@@ -334,7 +355,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
@@ -357,7 +378,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 10,
"metadata": {},
"outputs": [
{
@@ -404,7 +425,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.19"
"version": "3.9.19"
}
},
"nbformat": 4,

View File

@@ -7,18 +7,24 @@
"id": "cc6caafa"
},
"source": [
"# NVIDIA AI Foundation Endpoints\n",
"# NVIDIA NIMs\n",
"\n",
"The `ChatNVIDIA` class is a LangChain chat model that connects to [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/).\n",
"The `langchain-nvidia-ai-endpoints` package contains LangChain integrations building applications with models on \n",
"NVIDIA NIM inference microservice. NIM supports models across domains like chat, embedding, and re-ranking models \n",
"from the community as well as NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA \n",
"accelerated infrastructure and deployed as a NIM, an easy-to-use, prebuilt containers that deploy anywhere using a single \n",
"command on NVIDIA accelerated infrastructure.\n",
"\n",
"NVIDIA hosted deployments of NIMs are available to test on the [NVIDIA API catalog](https://build.nvidia.com/). After testing, \n",
"NIMs can be exported from NVIDIAs API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud, \n",
"giving enterprises ownership and full control of their IP and AI application.\n",
"\n",
"> [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to NVIDIA hosted API endpoints for NVIDIA AI Foundation Models like Mixtral 8x7B, Llama 2, Stable Diffusion, etc. These models, hosted on the [NVIDIA API catalog](https://build.nvidia.com/), are optimized, tested, and hosted on the NVIDIA AI platform, making them fast and easy to evaluate, further customize, and seamlessly run at peak performance on any accelerated stack.\n",
"> \n",
"> With [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/), you can get quick results from a fully accelerated stack running on [NVIDIA DGX Cloud](https://www.nvidia.com/en-us/data-center/dgx-cloud/). Once customized, these models can be deployed anywhere with enterprise-grade security, stability, and support using [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/).\n",
"> \n",
"> These models can be easily accessed via the [`langchain-nvidia-ai-endpoints`](https://pypi.org/project/langchain-nvidia-ai-endpoints/) package, as shown below.\n",
"NIMs are packaged as container images on a per model basis and are distributed as NGC container images through the NVIDIA NGC Catalog. \n",
"At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.\n",
"\n",
"This example goes over how to use LangChain to interact with and develop LLM-powered systems using the publicly-accessible AI Foundation endpoints."
"This example goes over how to use LangChain to interact with NVIDIA supported via the `ChatNVIDIA` class.\n",
"\n",
"For more information on accessing the chat models through this api, check out the [ChatNVIDIA](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/) documentation."
]
},
{
@@ -50,9 +56,9 @@
"\n",
"**To get started:**\n",
"\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.\n",
"\n",
"2. Click on your model of choice\n",
"2. Click on your model of choice.\n",
"\n",
"3. Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.\n",
"\n",
@@ -69,12 +75,23 @@
"import getpass\n",
"import os\n",
"\n",
"if not os.environ.get(\"NVIDIA_API_KEY\", \"\").startswith(\"nvapi-\"):\n",
" nvapi_key = getpass.getpass(\"Enter your NVIDIA API key: \")\n",
"# del os.environ['NVIDIA_API_KEY'] ## delete key and reset\n",
"if os.environ.get(\"NVIDIA_API_KEY\", \"\").startswith(\"nvapi-\"):\n",
" print(\"Valid NVIDIA_API_KEY already in environment. Delete to reset\")\n",
"else:\n",
" nvapi_key = getpass.getpass(\"NVAPI Key (starts with nvapi-): \")\n",
" assert nvapi_key.startswith(\"nvapi-\"), f\"{nvapi_key[:5]}... is not a valid key\"\n",
" os.environ[\"NVIDIA_API_KEY\"] = nvapi_key"
]
},
{
"cell_type": "markdown",
"id": "af0ce26b",
"metadata": {},
"source": [
"## Working with NVIDIA API Catalog"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -96,6 +113,30 @@
"print(result.content)"
]
},
{
"cell_type": "markdown",
"id": "9d35686b",
"metadata": {},
"source": [
"## Working with NVIDIA NIMs\n",
"When ready to deploy, you can self-host models with NVIDIA NIM—which is included with the NVIDIA AI Enterprise software license—and run them anywhere, giving you ownership of your customizations and full control of your intellectual property (IP) and AI applications.\n",
"\n",
"[Learn more about NIMs](https://developer.nvidia.com/blog/nvidia-nim-offers-optimized-inference-microservices-for-deploying-ai-models-at-scale/)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "49838930",
"metadata": {},
"outputs": [],
"source": [
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"# connect to an embedding NIM running at localhost:8000, specifying a specific model\n",
"llm = ChatNVIDIA(base_url=\"http://localhost:8000/v1\", model=\"meta-llama3-8b-instruct\")"
]
},
{
"cell_type": "markdown",
"id": "71d37987-d568-4a73-9d2a-8bd86323f8bf",
@@ -252,81 +293,6 @@
" print(txt, end=\"\")"
]
},
{
"cell_type": "markdown",
"id": "642a618a-faa3-443e-99c3-67b8142f3c51",
"metadata": {},
"source": [
"## Steering LLMs\n",
"\n",
"> [SteerLM-optimized models](https://developer.nvidia.com/blog/announcing-steerlm-a-simple-and-practical-technique-to-customize-llms-during-inference/) supports \"dynamic steering\" of model outputs at inference time.\n",
"\n",
"This lets you \"control\" the complexity, verbosity, and creativity of the model via integer labels on a scale from 0 to 9. Under the hood, these are passed as a special type of assistant message to the model.\n",
"\n",
"The \"steer\" models support this type of input, such as `nemotron_steerlm_8b`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36a96b1a-e3e7-4ae3-b4b0-9331b5eca04f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"llm = ChatNVIDIA(model=\"nemotron_steerlm_8b\")\n",
"# Try making it uncreative and not verbose\n",
"complex_result = llm.invoke(\n",
" \"What's a PB&J?\", labels={\"creativity\": 0, \"complexity\": 3, \"verbosity\": 0}\n",
")\n",
"print(\"Un-creative\\n\")\n",
"print(complex_result.content)\n",
"\n",
"# Try making it very creative and verbose\n",
"print(\"\\n\\nCreative\\n\")\n",
"creative_result = llm.invoke(\n",
" \"What's a PB&J?\", labels={\"creativity\": 9, \"complexity\": 3, \"verbosity\": 9}\n",
")\n",
"print(creative_result.content)"
]
},
{
"cell_type": "markdown",
"id": "75849e7a-2adf-4038-8d9d-8a9e12417789",
"metadata": {},
"source": [
"#### Use within LCEL\n",
"\n",
"The labels are passed as invocation params. You can `bind` these to the LLM using the `bind` method on the LLM to include it within a declarative, functional chain. Below is an example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ae1105c3-2a0c-4db3-916e-24d5e427bd01",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"system\", \"You are a helpful AI assistant named Fred.\"), (\"user\", \"{input}\")]\n",
")\n",
"chain = (\n",
" prompt\n",
" | ChatNVIDIA(model=\"nemotron_steerlm_8b\").bind(\n",
" labels={\"creativity\": 9, \"complexity\": 0, \"verbosity\": 9}\n",
" )\n",
" | StrOutputParser()\n",
")\n",
"\n",
"for txt in chain.stream({\"input\": \"Why is a PB&J?\"}):\n",
" print(txt, end=\"\")"
]
},
{
"cell_type": "markdown",
"id": "7f465ff6-5922-41d8-8abb-1d1e4095cc27",
@@ -334,7 +300,7 @@
"source": [
"## Multimodal\n",
"\n",
"NVIDIA also supports multimodal inputs, meaning you can provide both images and text for the model to reason over. An example model supporting multimodal inputs is `playground_neva_22b`.\n",
"NVIDIA also supports multimodal inputs, meaning you can provide both images and text for the model to reason over. An example model supporting multimodal inputs is `nvidia/neva-22b`.\n",
"\n",
"\n",
"These models accept LangChain's standard image formats, and accept `labels`, similar to the Steering LLMs above. In addition to `creativity`, `complexity`, and `verbosity`, these models support a `quality` toggle.\n",
@@ -367,7 +333,7 @@
"source": [
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"llm = ChatNVIDIA(model=\"playground_neva_22b\")"
"llm = ChatNVIDIA(model=\"nvidia/neva-22b\")"
]
},
{
@@ -500,7 +466,7 @@
"source": [
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"kosmos = ChatNVIDIA(model=\"kosmos_2\")\n",
"kosmos = ChatNVIDIA(model=\"microsoft/kosmos-2\")\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"\n",
@@ -544,7 +510,7 @@
"\n",
"\n",
"## Override the payload passthrough. Default is to pass through the payload as is.\n",
"kosmos = ChatNVIDIA(model=\"kosmos_2\")\n",
"kosmos = ChatNVIDIA(model=\"microsoft/kosmos-2\")\n",
"kosmos.client.payload_fn = drop_streaming_key\n",
"\n",
"kosmos.invoke(\n",
@@ -567,43 +533,6 @@
"For more advanced or custom use-cases (i.e. supporting the diffusion models), you may be interested in leveraging the `NVEModel` client as a requests backbone. The `NVIDIAEmbeddings` class is a good source of inspiration for this. "
]
},
{
"cell_type": "markdown",
"id": "1cd6249a-7ffa-4886-b7e8-5778dc93499e",
"metadata": {},
"source": [
"## RAG: Context models\n",
"\n",
"NVIDIA also has Q&A models that support a special \"context\" chat message containing retrieved context (such as documents within a RAG chain). This is useful to avoid prompt-injecting the model. The `_qa_` models like `nemotron_qa_8b` support this.\n",
"\n",
"**Note:** Only \"user\" (human) and \"context\" chat messages are supported for these models; System or AI messages that would useful in conversational flows are not supported."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f994b4d3-c1b0-4e87-aad0-a7b487e2aa43",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import ChatMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" ChatMessage(\n",
" role=\"context\", content=\"Parrots and Cats have signed the peace accord.\"\n",
" ),\n",
" (\"user\", \"{input}\"),\n",
" ]\n",
")\n",
"llm = ChatNVIDIA(model=\"nemotron_qa_8b\")\n",
"chain = prompt | llm | StrOutputParser()\n",
"chain.invoke({\"input\": \"What was signed?\"})"
]
},
{
"cell_type": "markdown",
"id": "137662a6",
@@ -708,14 +637,6 @@
"source": [
"conversation.invoke(\"Tell me about yourself.\")[\"response\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a719bd3-755d-4a05-bda2-de132bf99314",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -723,9 +644,9 @@
"provenance": []
},
"kernelspec": {
"display_name": "Python (venvoss)",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "venvoss"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -737,7 +658,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
"version": "3.10.13"
}
},
"nbformat": 4,

View File

@@ -54,12 +54,12 @@
"\n",
"Here are a few ways to interact with pulled local models\n",
"\n",
"#### directly in the terminal:\n",
"#### In the terminal:\n",
"\n",
"* All of your local models are automatically served on `localhost:11434`\n",
"* Run `ollama run <name-of-model>` to start interacting via the command line directly\n",
"\n",
"### via an API\n",
"#### Via an API\n",
"\n",
"Send an `application/json` request to the API endpoint of Ollama to interact.\n",
"\n",
@@ -72,9 +72,11 @@
"\n",
"See the Ollama [API documentation](https://github.com/jmorganca/ollama/blob/main/docs/api.md) for all endpoints.\n",
"\n",
"#### via LangChain\n",
"#### Via LangChain\n",
"\n",
"See a typical basic example of using Ollama via the `ChatOllama` chat model in your LangChain application."
"See a typical basic example of using Ollama via the `ChatOllama` chat model in your LangChain application. \n",
"\n",
"View the [API Reference for ChatOllama](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.ollama.ChatOllama.html#langchain_community.chat_models.ollama.ChatOllama) for more."
]
},
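To make the "Via an API" option above concrete, here is a minimal sketch of calling the local endpoint directly with `requests`; it assumes the Ollama server is running on the default `localhost:11434` and that a model such as `llama3` has already been pulled (the model name is illustrative).

```python
import requests

# Minimal sketch: POST a chat request to a locally running Ollama server.
# Assumes `ollama serve` is running and the "llama3" model has been pulled.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "Why is the sky blue?"}],
        "stream": False,  # return a single JSON object instead of a stream
    },
)
print(resp.json()["message"]["content"])
```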
{
@@ -105,7 +107,7 @@
"\n",
"# using LangChain Expressive Language chain syntax\n",
"# learn more about the LCEL on\n",
"# /docs/expression_language/why\n",
"# /docs/concepts/#langchain-expression-language-lcel\n",
"chain = prompt | llm | StrOutputParser()\n",
"\n",
"# for brevity, response is printed in terminal\n",
@@ -189,7 +191,7 @@
"\n",
"## Building from source\n",
"\n",
"For up to date instructions on building from source, check the Ollama documentation on [Building from Source](https://github.com/jmorganca/ollama?tab=readme-ov-file#building)"
"For up to date instructions on building from source, check the Ollama documentation on [Building from Source](https://github.com/ollama/ollama?tab=readme-ov-file#building)"
]
},
{
@@ -333,7 +335,7 @@
}
],
"source": [
"pip install --upgrade --quiet pillow"
"!pip install --upgrade --quiet pillow"
]
},
{
@@ -444,6 +446,24 @@
"\n",
"print(query_chain)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Concurrency Features\n",
"\n",
"Ollama supports concurrency inference for a single model, and or loading multiple models simulatenously (at least [version 0.1.33](https://github.com/ollama/ollama/releases)).\n",
"\n",
"Start the Ollama server with:\n",
"\n",
"* `OLLAMA_NUM_PARALLEL`: Handle multiple requests simultaneously for a single model\n",
"* `OLLAMA_MAX_LOADED_MODELS`: Load multiple models simultaneously\n",
"\n",
"Example: `OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=4 ollama serve`\n",
"\n",
"Learn more about configuring Ollama server in [the official guide](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server)."
]
}
],
"metadata": {

View File

@@ -12,56 +12,153 @@
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"id": "cb4dd00a-8893-4a45-96f7-9a9fc341cd61",
"metadata": {},
"source": [
"# ChatOpenAI\n",
"\n",
"This notebook covers how to get started with OpenAI chat models."
"This notebook provides a quick overview for getting started with OpenAI [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatOpenAI features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html).\n",
"\n",
"OpenAI has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [OpenAI docs](https://platform.openai.com/docs/models).\n",
"\n",
":::info Azure OpenAI\n",
"\n",
"Note that certain OpenAI models can also be accessed via the [Microsoft Azure platform](https://azure.microsoft.com/en-us/products/ai-services/openai-service). To use the Azure OpenAI service use the [AzureChatOpenAI integration](/docs/integrations/chat/azure_chat_openai/).\n",
"\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"## Overview\n",
"\n",
"### Integration details\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/openai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [langchain-openai](https://api.python.langchain.com/en/latest/openai_api_reference.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-openai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-openai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | Image input | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | \n",
"\n",
"## Setup\n",
"\n",
"To access OpenAI models you'll need to create an OpenAI account, get an API key, and install the `langchain-openai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to https://platform.openai.com to sign up to OpenAI and generate an API key. Once you've done this set the OPENAI_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "e817fe2e-4f1d-4533-b19e-2400b1cf6ce8",
"metadata": {},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"Enter your OpenAI API key: ········\n"
]
}
],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Enter your OpenAI API key: \")"
]
},
{
"cell_type": "markdown",
"id": "c2a3ce99-a44a-4ea6-8d23-8a88e332f0f9",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "85255d53-ac8a-44e1-aa26-8e567bb77ae7",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "c59722a9-6dbb-45f7-ae59-5be50ca5733d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain OpenAI integration lives in the `langchain-openai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2113471c-75d7-45df-b784-d78da4ef7aba",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "1098bc9d-ce83-462b-8c19-f85bf3a159dc",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "522686de",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(\n",
" model=\"gpt-4o\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # api_key=\"...\", # if you prefer to pass api key in directly instaed of using env vars\n",
" # base_url=\"...\",\n",
" # organization=\"...\",\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "4e5fe97e",
"id": "6511982a-734a-4193-a47d-254f8dcaff5e",
"metadata": {},
"source": [
"The above cell assumes that your OpenAI API key is set in your environment variables. If you would rather manually specify your API key and/or organization ID, use the following code:\n",
"\n",
"```python\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0, api_key=\"YOUR_API_KEY\", openai_organization=\"YOUR_ORGANIZATION_ID\")\n",
"```\n",
"Remove the openai_organization parameter should it not apply to you."
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "ce16ad78-8e6f-48cd-954e-98be75eb5836",
"metadata": {
"tags": []
@@ -70,20 +167,42 @@
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore programmer.\", response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 34, 'total_tokens': 40}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'stop', 'logprobs': None}, id='run-8591eae1-b42b-402b-a23a-dfdb0cd151bd-0')"
"AIMessage(content=\"J'adore la programmation.\", response_metadata={'token_usage': {'completion_tokens': 5, 'prompt_tokens': 31, 'total_tokens': 36}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_43dfabdef1', 'finish_reason': 'stop', 'logprobs': None}, id='run-012cffe2-5d3d-424d-83b5-51c6d4a593d1-0', usage_metadata={'input_tokens': 31, 'output_tokens': 5, 'total_tokens': 36})"
]
},
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\"system\", \"You are a helpful assistant that translates English to French.\"),\n",
" (\"human\", \"Translate this sentence from English to French. I love programming.\"),\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"llm.invoke(messages)"
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "2cd224b8-4499-41fb-a604-d53a7ff17b2e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
@@ -93,7 +212,7 @@
"source": [
"## Chaining\n",
"\n",
"We can chain our model with a prompt template like so:"
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
@@ -116,6 +235,8 @@
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
@@ -277,13 +398,23 @@
"\n",
"fine_tuned_model(messages)"
]
},
{
"cell_type": "markdown",
"id": "a796d728-971b-408b-88d5-440015bbb941",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatOpenAI features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "poetry-venv-2",
"language": "python",
"name": "python3"
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {

View File

@@ -15,10 +15,9 @@
"source": [
"# ChatPremAI\n",
"\n",
">[PremAI](https://app.premai.io) is a unified platform that lets you build powerful production-ready GenAI-powered applications with the least effort so that you can focus more on user experience and overall growth. \n",
"[PremAI](https://premai.io/) is an all-in-one platform that simplifies the creation of robust, production-ready applications powered by Generative AI. By streamlining the development process, PremAI allows you to concentrate on enhancing user experience and driving overall growth for your application. You can quickly start using our platform [here](https://docs.premai.io/quick-start).\n",
"\n",
"\n",
"This example goes over how to use LangChain to interact with `ChatPremAI`. "
"This example goes over how to use LangChain to interact with different chat models with `ChatPremAI`"
]
},
{
@@ -27,23 +26,13 @@
"source": [
"### Installation and setup\n",
"\n",
"We start by installing langchain and premai-sdk. You can type the following command to install:\n",
"We start by installing `langchain` and `premai-sdk`. You can type the following command to install:\n",
"\n",
"```bash\n",
"pip install premai langchain\n",
"```\n",
"\n",
"Before proceeding further, please make sure that you have made an account on PremAI and already started a project. If not, then here's how you can start for free:\n",
"\n",
"1. Sign in to [PremAI](https://app.premai.io/accounts/login/), if you are coming for the first time and create your API key [here](https://app.premai.io/api_keys/).\n",
"\n",
"2. Go to [app.premai.io](https://app.premai.io) and this will take you to the project's dashboard. \n",
"\n",
"3. Create a project and this will generate a project-id (written as ID). This ID will help you to interact with your deployed application. \n",
"\n",
"4. Head over to LaunchPad (the one with 🚀 icon). And there deploy your model of choice. Your default model will be `gpt-4`. You can also set and fix different generation parameters (like max-tokens, temperature, etc) and also pre-set your system prompt. \n",
"\n",
"Congratulations on creating your first deployed application on PremAI 🎉 Now we can use langchain to interact with our application. "
"Before proceeding further, please make sure that you have made an account on PremAI and already created a project. If not, please refer to the [quick start](https://docs.premai.io/introduction) guide to get started with the PremAI platform. Create your first project and grab your API key."
]
},
{
@@ -60,13 +49,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup ChatPremAI instance in LangChain \n",
"### Setup PremAI client in LangChain\n",
"\n",
"Once we import our required modules, let's set up our client. For now, let's assume that our `project_id` is 8. But make sure you use your project-id, otherwise, it will throw an error.\n",
"Once we imported our required modules, let's setup our client. For now let's assume that our `project_id` is `8`. But make sure you use your project-id, otherwise it will throw error.\n",
"\n",
"To use langchain with prem, you do not need to pass any model name or set any parameters with our chat client. All of those will use the default model name and parameters of the LaunchPad model. \n",
"To use langchain with prem, you do not need to pass any model name or set any parameters with our chat-client. By default it will use the model name and parameters used in the [LaunchPad](https://docs.premai.io/get-started/launchpad). \n",
"\n",
"`NOTE:` If you change the `model_name` or any other parameter like `temperature` while setting the client, it will override existing default configurations. "
"> Note: If you change the `model` or any other parameters like `temperature` or `max_tokens` while setting the client, it will override existing default configurations, that was used in LaunchPad. "
]
},
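A minimal sketch of the setup described above, assuming `ChatPremAI` is exported from `langchain_community.chat_models` and that the key is read from a `PREMAI_API_KEY` environment variable (check the PremAI docs for the exact import path and variable name):

```python
import getpass
import os

from langchain_community.chat_models import ChatPremAI

# Prompt for the key only if it is not already set in the environment.
if "PREMAI_API_KEY" not in os.environ:
    os.environ["PREMAI_API_KEY"] = getpass.getpass("PremAI API Key:")

# project_id=8 is the placeholder used throughout this guide; use your own id.
chat = ChatPremAI(project_id=8)
```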
{
@@ -102,13 +91,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Calling the Model\n",
"### Chat Completions\n",
"\n",
"Now you are all set. We can now start by interacting with our application. `ChatPremAI` supports two methods `invoke` (which is the same as `generate`) and `stream`. \n",
"`ChatPremAI` supports two methods: `invoke` (which is the same as `generate`) and `stream`. \n",
"\n",
"The first one will give us a static result. Whereas the second one will stream tokens one by one. Here's how you can generate chat-like completions. \n",
"\n",
"### Generation"
"The first one will give us a static result. Whereas the second one will stream tokens one by one. Here's how you can generate chat-like completions. "
]
},
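As a minimal sketch, assuming the `chat` client from the setup step above:

```python
# invoke() returns a single AIMessage containing the full completion.
response = chat.invoke("Who are you and what can you do?")
print(response.content)
```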
{
@@ -165,7 +152,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also change generation parameters while calling the model. Here's how you can do that"
"You can provide system prompt here like this:"
]
},
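For example, a system prompt can be passed as a `SystemMessage` alongside the user turn; a minimal sketch, again assuming the `chat` client from above:

```python
from langchain_core.messages import HumanMessage, SystemMessage

# The SystemMessage passed here overrides the system prompt configured in LaunchPad.
response = chat.invoke(
    [
        SystemMessage(content="You are a helpful assistant that answers concisely."),
        HumanMessage(content="Tell me a fun fact about parrots."),
    ]
)
print(response.content)
```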
{
@@ -192,15 +179,72 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Important notes:\n",
"> If you are going to place system prompt here, then it will override your system prompt that was fixed while deploying the application from the platform. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Native RAG Support with Prem Repositories\n",
"\n",
"Before proceeding further, please note that the current version of ChatPrem does not support parameters: [n](https://platform.openai.com/docs/api-reference/chat/create#chat-create-n) and [stop](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stop) are not supported. \n",
"Prem Repositories which allows users to upload documents (.txt, .pdf etc) and connect those repositories to the LLMs. You can think Prem repositories as native RAG, where each repository can be considered as a vector database. You can connect multiple repositories. You can learn more about repositories [here](https://docs.premai.io/get-started/repositories).\n",
"\n",
"We will provide support for those two above parameters in sooner versions. \n",
"Repositories are also supported in langchain premai. Here is how you can do it. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"query = \"what is the diameter of individual Galaxy\"\n",
"repository_ids = [\n",
" 1991,\n",
"]\n",
"repositories = dict(ids=repository_ids, similarity_threshold=0.3, limit=3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First we start by defining our repository with some repository ids. Make sure that the ids are valid repository ids. You can learn more about how to get the repository id [here](https://docs.premai.io/get-started/repositories). \n",
"\n",
"> Please note: Similar like `model_name` when you invoke the argument `repositories`, then you are potentially overriding the repositories connected in the launchpad. \n",
"\n",
"Now, we connect the repository with our chat object to invoke RAG based generations. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"response = chat.invoke(query, max_tokens=100, repositories=repositories)\n",
"\n",
"print(response.content)\n",
"print(json.dumps(response.response_metadata, indent=4))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> Ideally, you do not need to connect Repository IDs here to get Retrieval Augmented Generations. You can still get the same result if you have connected the repositories in prem platform. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Streaming\n",
"\n",
"And finally, here's how you do token streaming for dynamic chat like applications. "
"In this section, let's see how we can stream tokens using langchain and PremAI. Here's how you do it. "
]
},
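A minimal streaming sketch, assuming the same `chat` client:

```python
import sys

# stream() yields message chunks as they are generated.
for chunk in chat.stream("Write a short poem about the monsoon."):
    sys.stdout.write(chunk.content)
    sys.stdout.flush()
print()
```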
{
@@ -228,7 +272,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Similar to above, if you want to override the system-prompt and the generation parameters, here's how you can do it. "
"Similar to above, if you want to override the system-prompt and the generation parameters, you need to add the following:"
]
},
{

View File

@@ -95,7 +95,7 @@
" \"\"\"\n",
" self.path = path\n",
" self._message_line_regex = re.compile(\n",
" r\"(.+?) — (\\w{3,9} \\d{1,2}(?:st|nd|rd|th)?(?:, \\d{4})? \\d{1,2}:\\d{2} (?:AM|PM)|Today at \\d{1,2}:\\d{2} (?:AM|PM)|Yesterday at \\d{1,2}:\\d{2} (?:AM|PM))\", # noqa\n",
" r\"(.+?) — (\\w{3,9} \\d{1,2}(?:st|nd|rd|th)?(?:, \\d{4})? \\d{1,2}:\\d{2} (?:AM|PM)|Today at \\d{1,2}:\\d{2} (?:AM|PM)|Yesterday at \\d{1,2}:\\d{2} (?:AM|PM))\",\n",
" flags=re.DOTALL,\n",
" )\n",
"\n",
@@ -120,7 +120,7 @@
" current_content = []\n",
" for line in lines:\n",
" if re.match(\n",
" r\".+? — (\\d{2}/\\d{2}/\\d{4} \\d{1,2}:\\d{2} (?:AM|PM)|Today at \\d{1,2}:\\d{2} (?:AM|PM)|Yesterday at \\d{1,2}:\\d{2} (?:AM|PM))\", # noqa\n",
" r\".+? — (\\d{2}/\\d{2}/\\d{4} \\d{1,2}:\\d{2} (?:AM|PM)|Today at \\d{1,2}:\\d{2} (?:AM|PM)|Yesterday at \\d{1,2}:\\d{2} (?:AM|PM))\",\n",
" line,\n",
" ):\n",
" if current_sender and current_content:\n",

View File

@@ -94,7 +94,7 @@
" \"\"\"\n",
" self.path = path\n",
" self._message_line_regex = re.compile(\n",
" r\"(?P<sender>.+?) (?P<timestamp>\\d{4}/\\d{2}/\\d{2} \\d{1,2}:\\d{2} (?:AM|PM))\", # noqa\n",
" r\"(?P<sender>.+?) (?P<timestamp>\\d{4}/\\d{2}/\\d{2} \\d{1,2}:\\d{2} (?:AM|PM))\",\n",
" # flags=re.DOTALL,\n",
" )\n",
"\n",

View File

@@ -47,7 +47,8 @@
"source": [
"api_key = \"xxx\"\n",
"base_id = \"xxx\"\n",
"table_id = \"xxx\""
"table_id = \"xxx\"\n",
"view = \"xxx\" # optional"
]
},
{
@@ -57,7 +58,7 @@
"metadata": {},
"outputs": [],
"source": [
"loader = AirtableLoader(api_key, table_id, base_id)\n",
"loader = AirtableLoader(api_key, table_id, base_id, view=view)\n",
"docs = loader.load()"
]
},

View File

@@ -48,7 +48,7 @@
"from langchain_community.document_loaders import AsyncChromiumLoader\n",
"\n",
"urls = [\"https://www.wsj.com\"]\n",
"loader = AsyncChromiumLoader(urls)\n",
"loader = AsyncChromiumLoader(urls, user_agent=\"MyAppUserAgent\")\n",
"docs = loader.load()\n",
"docs[0].page_content[0:100]"
]

View File

@@ -3,7 +3,7 @@ class MyClass:
self.name = name
def greet(self):
print(f"Hello, {self.name}!") # noqa: T201
print(f"Hello, {self.name}!")
def main():

View File

@@ -8,7 +8,7 @@
"\n",
">[Jupyter Notebook](https://en.wikipedia.org/wiki/Project_Jupyter#Applications) (formerly `IPython Notebook`) is a web-based interactive computational environment for creating notebook documents.\n",
"\n",
"This notebook covers how to load data from a `Jupyter notebook (.html)` into a format suitable by LangChain."
"This notebook covers how to load data from a `Jupyter notebook (.ipynb)` into a format suitable by LangChain."
]
},
{
@@ -31,7 +31,7 @@
"outputs": [],
"source": [
"loader = NotebookLoader(\n",
" \"example_data/notebook.html\",\n",
" \"example_data/notebook.ipynb\",\n",
" include_outputs=True,\n",
" max_output_length=20,\n",
" remove_newline=True,\n",
@@ -42,7 +42,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"`NotebookLoader.load()` loads the `.html` notebook file into a `Document` object.\n",
"`NotebookLoader.load()` loads the `.ipynb` notebook file into a `Document` object.\n",
"\n",
"**Parameters**:\n",
"\n",

View File

@@ -0,0 +1,107 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ScrapFly\n",
"[ScrapFly](https://scrapfly.io/) is a web scraping API with headless browser capabilities, proxies, and anti-bot bypass. It allows for extracting web page data into accessible LLM markdown or text."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Installation\n",
"Install ScrapFly Python SDK and he required Langchain packages using pip:\n",
"```shell\n",
"pip install scrapfly-sdk langchain langchain-community\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Usage"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import ScrapflyLoader\n",
"\n",
"scrapfly_loader = ScrapflyLoader(\n",
" [\"https://web-scraping.dev/products\"],\n",
" api_key=\"Your ScrapFly API key\", # Get your API key from https://www.scrapfly.io/\n",
" ignore_scrape_failures=True, # Ignore unprocessable web pages and log their exceptions\n",
")\n",
"\n",
"# Load documents from URLs as markdown\n",
"documents = scrapfly_loader.load()\n",
"print(documents)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The ScrapflyLoader also allows passigng ScrapeConfig object for customizing the scrape request. See the documentation for the full feature details and their API params: https://scrapfly.io/docs/scrape-api/getting-started"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import ScrapflyLoader\n",
"\n",
"scrapfly_scrape_config = {\n",
" \"asp\": True, # Bypass scraping blocking and antibot solutions, like Cloudflare\n",
" \"render_js\": True, # Enable JavaScript rendering with a cloud headless browser\n",
" \"proxy_pool\": \"public_residential_pool\", # Select a proxy pool (datacenter or residnetial)\n",
" \"country\": \"us\", # Select a proxy location\n",
" \"auto_scroll\": True, # Auto scroll the page\n",
" \"js\": \"\", # Execute custom JavaScript code by the headless browser\n",
"}\n",
"\n",
"scrapfly_loader = ScrapflyLoader(\n",
" [\"https://web-scraping.dev/products\"],\n",
" api_key=\"Your ScrapFly API key\", # Get your API key from https://www.scrapfly.io/\n",
" ignore_scrape_failures=True, # Ignore unprocessable web pages and log their exceptions\n",
" scrape_config=scrapfly_scrape_config, # Pass the scrape_config object\n",
" scrape_format=\"markdown\", # The scrape result format, either `markdown`(default) or `text`\n",
")\n",
"\n",
"# Load documents from URLs as markdown\n",
"documents = scrapfly_loader.load()\n",
"print(documents)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -0,0 +1,387 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# DashScope Reranker\n",
"\n",
"This notebook shows how to use DashScope Reranker for document compression and retrieval. [DashScope](https://dashscope.aliyun.com/) is the generative AI service from Alibaba Cloud (Aliyun).\n",
"\n",
"DashScope's [Text ReRank Model](https://help.aliyun.com/document_detail/2780058.html?spm=a2c4g.2780059.0.0.6d995024FlrJ12) supports reranking documents with a maximum of 4000 tokens. Moreover, it supports Chinese, English, Japanese, Korean, Thai, Spanish, French, Portuguese, Indonesian, Arabic, and over 50 other languages. For more details, please visit [here](https://help.aliyun.com/document_detail/2780059.html?spm=a2c4g.2780058.0.0.3a9e5b1dWeOQjI)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet dashscope"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet faiss\n",
"\n",
"# OR (depending on Python version)\n",
"\n",
"%pip install --upgrade --quiet faiss-cpu"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# To create api key: https://bailian.console.aliyun.com/?apiKey=1#/api-key\n",
"\n",
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"DASHSCOPE_API_KEY\"] = getpass.getpass(\"DashScope API Key:\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"# Helper function for printing docs\n",
"def pretty_print_docs(docs):\n",
" print(\n",
" f\"\\n{'-' * 100}\\n\".join(\n",
" [f\"Document {i+1}:\\n\\n\" + d.page_content for i, d in enumerate(docs)]\n",
" )\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up the base vector store retriever\n",
"Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"I understand. \n",
"\n",
"I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. \n",
"\n",
"Thats why one of the first things I did as President was fight to pass the American Rescue Plan. \n",
"\n",
"Because people were hurting. We needed to act, and we did. \n",
"\n",
"Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"To all Americans, I will be honest with you, as Ive always promised. A Russian dictator, invading a foreign country, has costs around the world. \n",
"\n",
"And Im taking robust action to make sure the pain of our sanctions is targeted at Russias economy. And I will use every tool at our disposal to protect American businesses and consumers. \n",
"\n",
"Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 4:\n",
"\n",
"We cannot let this happen. \n",
"\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 5:\n",
"\n",
"Tonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n",
"\n",
"The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n",
"\n",
"We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 6:\n",
"\n",
"Every Administration says theyll do it, but we are actually doing it. \n",
"\n",
"We will buy American to make sure everything from the deck of an aircraft carrier to the steel on highway guardrails are made in America. \n",
"\n",
"But to compete for the best jobs of the future, we also need to level the playing field with China and other competitors.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 7:\n",
"\n",
"When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we havent done in a long time: build a better America. \n",
"\n",
"For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation. \n",
"\n",
"And I know youre tired, frustrated, and exhausted. \n",
"\n",
"But I also know this.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 8:\n",
"\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 9:\n",
"\n",
"My plan will not only lower costs to give families a fair shot, it will lower the deficit. \n",
"\n",
"The previous Administration not only ballooned the deficit with tax cuts for the very wealthy and corporations, it undermined the watchdogs whose job was to keep pandemic relief funds from being wasted. \n",
"\n",
"But in my administration, the watchdogs have been welcomed back. \n",
"\n",
"Were going after the criminals who stole billions in relief money meant for small businesses and millions of Americans.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 10:\n",
"\n",
"He will never extinguish their love of freedom. He will never weaken the resolve of the free world. \n",
"\n",
"We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. \n",
"\n",
"The pandemic has been punishing. \n",
"\n",
"And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. \n",
"\n",
"I understand.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 11:\n",
"\n",
"And tonight, Im announcing that the Justice Department will name a chief prosecutor for pandemic fraud. \n",
"\n",
"By the end of this year, the deficit will be down to less than half what it was before I took office. \n",
"\n",
"The only president ever to cut the deficit by more than one trillion dollars in a single year. \n",
"\n",
"Lowering your costs also means demanding more competition. \n",
"\n",
"Im a capitalist, but capitalism without competition isnt capitalism. \n",
"\n",
"Its exploitation—and it drives up prices.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 12:\n",
"\n",
"Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \n",
"\n",
"Please rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \n",
"\n",
"Throughout our history weve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. \n",
"\n",
"They keep moving. \n",
"\n",
"And the costs and the threats to America and the world keep rising.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 13:\n",
"\n",
"Cancer is the #2 cause of death in Americasecond only to heart disease. \n",
"\n",
"Last month, I announced our plan to supercharge \n",
"the Cancer Moonshot that President Obama asked me to lead six years ago. \n",
"\n",
"Our goal is to cut the cancer death rate by at least 50% over the next 25 years, turn more cancers from death sentences into treatable diseases. \n",
"\n",
"More support for patients and families. \n",
"\n",
"To get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 14:\n",
"\n",
"It fueled our efforts to vaccinate the nation and combat COVID-19. It delivered immediate economic relief for tens of millions of Americans. \n",
"\n",
"Helped put food on their table, keep a roof over their heads, and cut the cost of health insurance. \n",
"\n",
"And as my Dad used to say, it gave people a little breathing room.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 15:\n",
"\n",
"America will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n",
"\n",
"These steps will help blunt gas prices here at home. And I know the news about whats happening can seem alarming. \n",
"\n",
"But I want you to know that we are going to be okay. \n",
"\n",
"When the history of this era is written Putins war on Ukraine will have left Russia weaker and the rest of the world stronger.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 16:\n",
"\n",
"So thats my plan. It will grow the economy and lower costs for families. \n",
"\n",
"So what are we waiting for? Lets get this done. And while youre at it, confirm my nominees to the Federal Reserve, which plays a critical role in fighting inflation. \n",
"\n",
"My plan will not only lower costs to give families a fair shot, it will lower the deficit.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 17:\n",
"\n",
"And we will, as one people. \n",
"\n",
"One America. \n",
"\n",
"The United States of America. \n",
"\n",
"May God bless you all. May God protect our troops.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 18:\n",
"\n",
"As Ive told Xi Jinping, it is never a good bet to bet against the American people. \n",
"\n",
"Well create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. \n",
"\n",
"And well do it all to withstand the devastating effects of the climate crisis and promote environmental justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 19:\n",
"\n",
"And I know youre tired, frustrated, and exhausted. \n",
"\n",
"But I also know this. \n",
"\n",
"Because of the progress weve made, because of your resilience and the tools we have, tonight I can say \n",
"we are moving forward safely, back to more normal routines. \n",
"\n",
"Weve reached a new moment in the fight against COVID-19, with severe cases down to a level not seen since last July. \n",
"\n",
"Just a few days ago, the Centers for Disease Control and Prevention—the CDC—issued new mask guidelines.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 20:\n",
"\n",
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n",
"\n",
"Last year COVID-19 kept us apart. This year we are finally together again. \n",
"\n",
"Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n",
"\n",
"With a duty to one another to the American people to the Constitution. \n",
"\n",
"And with an unwavering resolve that freedom will always triumph over tyranny.\n"
]
}
],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.embeddings.dashscope import DashScopeEmbeddings\n",
"from langchain_community.vectorstores.faiss import FAISS\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"documents = TextLoader(\"../../how_to/state_of_the_union.txt\").load()\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)\n",
"texts = text_splitter.split_documents(documents)\n",
"retriever = FAISS.from_documents(texts, DashScopeEmbeddings()).as_retriever( # type: ignore\n",
" search_kwargs={\"k\": 20}\n",
")\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = retriever.invoke(query)\n",
"pretty_print_docs(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reranking with DashScopeRerank\n",
"Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll use the `DashScopeRerank` to rerank the returned results."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n",
"\n",
"Last year COVID-19 kept us apart. This year we are finally together again. \n",
"\n",
"Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n",
"\n",
"With a duty to one another to the American people to the Constitution. \n",
"\n",
"And with an unwavering resolve that freedom will always triumph over tyranny.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"Tonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n",
"\n",
"The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n",
"\n",
"We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.\n"
]
}
],
"source": [
"from langchain.retrievers import ContextualCompressionRetriever\n",
"from langchain_community.document_compressors.dashscope_rerank import DashScopeRerank\n",
"\n",
"compressor = DashScopeRerank()\n",
"compression_retriever = ContextualCompressionRetriever(\n",
" base_compressor=compressor, base_retriever=retriever\n",
")\n",
"\n",
"compressed_docs = compression_retriever.invoke(\n",
" \"What did the president say about Ketanji Jackson Brown\"\n",
")\n",
"pretty_print_docs(compressed_docs)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,781 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# RankLLM Reranker\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[RankLLM](https://github.com/castorini/rank_llm) offers a suite of listwise rerankers, albeit with focus on open source LLMs finetuned for the task - RankVicuna and RankZephyr being two of them."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet rank_llm"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain_openai"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet faiss-cpu"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"# Helper function for printing docs\n",
"def pretty_print_docs(docs):\n",
" print(\n",
" f\"\\n{'-' * 100}\\n\".join(\n",
" [f\"Document {i+1}:\\n\\n\" + d.page_content for i, d in enumerate(docs)]\n",
" )\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up the base vector store retriever\n",
"Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"documents = TextLoader(\"../../modules/state_of_the_union.txt\").load()\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)\n",
"texts = text_splitter.split_documents(documents)\n",
"for idx, text in enumerate(texts):\n",
" text.metadata[\"id\"] = idx\n",
"\n",
"embedding = OpenAIEmbeddings(model=\"text-embedding-ada-002\")\n",
"retriever = FAISS.from_documents(texts, embedding).as_retriever(search_kwargs={\"k\": 20})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Retrieval + RankLLM Reranking (RankZephyr)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Retrieval without reranking"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"And with an unwavering resolve that freedom will always triumph over tyranny. \n",
"\n",
"Six days ago, Russias Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n",
"\n",
"He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n",
"\n",
"He met the Ukrainian people.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"Together with our allies we are right now enforcing powerful economic sanctions. \n",
"\n",
"We are cutting off Russias largest banks from the international financial system. \n",
"\n",
"Preventing Russias central bank from defending the Russian Ruble making Putins $630 Billion “war fund” worthless. \n",
"\n",
"We are choking off Russias access to technology that will sap its economic strength and weaken its military for years to come.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights further isolating Russia and adding an additional squeeze on their economy. The Ruble has lost 30% of its value. \n",
"\n",
"The Russian stock market has lost 40% of its value and trading remains suspended. Russias economy is reeling and Putin alone is to blame.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 4:\n",
"\n",
"I spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression. \n",
"\n",
"We countered Russias lies with truth. \n",
"\n",
"And now that he has acted the free world is holding him accountable.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 5:\n",
"\n",
"He rejected repeated efforts at diplomacy. \n",
"\n",
"He thought the West and NATO wouldnt respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \n",
"\n",
"We prepared extensively and carefully. \n",
"\n",
"We spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 6:\n",
"\n",
"And now that he has acted the free world is holding him accountable. \n",
"\n",
"Along with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland. \n",
"\n",
"We are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \n",
"\n",
"Together with our allies we are right now enforcing powerful economic sanctions.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 7:\n",
"\n",
"To all Americans, I will be honest with you, as Ive always promised. A Russian dictator, invading a foreign country, has costs around the world. \n",
"\n",
"And Im taking robust action to make sure the pain of our sanctions is targeted at Russias economy. And I will use every tool at our disposal to protect American businesses and consumers. \n",
"\n",
"Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 8:\n",
"\n",
"And we remain clear-eyed. The Ukrainians are fighting back with pure courage. But the next few days weeks, months, will be hard on them. \n",
"\n",
"Putin has unleashed violence and chaos. But while he may make gains on the battlefield he will pay a continuing high price over the long run. \n",
"\n",
"And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 9:\n",
"\n",
"Tonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n",
"\n",
"The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n",
"\n",
"We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 10:\n",
"\n",
"America will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n",
"\n",
"These steps will help blunt gas prices here at home. And I know the news about whats happening can seem alarming. \n",
"\n",
"But I want you to know that we are going to be okay. \n",
"\n",
"When the history of this era is written Putins war on Ukraine will have left Russia weaker and the rest of the world stronger.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 11:\n",
"\n",
"They keep moving. \n",
"\n",
"And the costs and the threats to America and the world keep rising. \n",
"\n",
"Thats why the NATO Alliance was created to secure peace and stability in Europe after World War 2. \n",
"\n",
"The United States is a member along with 29 other nations. \n",
"\n",
"It matters. American diplomacy matters. American resolve matters. \n",
"\n",
"Putins latest attack on Ukraine was premeditated and unprovoked. \n",
"\n",
"He rejected repeated efforts at diplomacy.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 12:\n",
"\n",
"Our forces are not going to Europe to fight in Ukraine, but to defend our NATO Allies in the event that Putin decides to keep moving west. \n",
"\n",
"For that purpose weve mobilized American ground forces, air squadrons, and ship deployments to protect NATO countries including Poland, Romania, Latvia, Lithuania, and Estonia. \n",
"\n",
"As I have made crystal clear the United States and our Allies will defend every inch of territory of NATO countries with the full force of our collective power.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 13:\n",
"\n",
"While it shouldnt have taken something so terrible for people around the world to see whats at stake now everyone sees it clearly. \n",
"\n",
"We see the unity among leaders of nations and a more unified Europe a more unified West. And we see unity among the people who are gathering in cities in large crowds around the world even in Russia to demonstrate their support for Ukraine.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 14:\n",
"\n",
"He met the Ukrainian people. \n",
"\n",
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
"\n",
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
"\n",
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 15:\n",
"\n",
"In the battle between democracy and autocracy, democracies are rising to the moment, and the world is clearly choosing the side of peace and security. \n",
"\n",
"This is a real test. Its going to take time. So let us continue to draw inspiration from the iron will of the Ukrainian people. \n",
"\n",
"To our fellow Ukrainian Americans who forge a deep bond that connects our two nations we stand with you. \n",
"\n",
"Putin may circle Kyiv with tanks, but he will never gain the hearts and souls of the Ukrainian people.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 16:\n",
"\n",
"Together with our allies we are providing support to the Ukrainians in their fight for freedom. Military assistance. Economic assistance. Humanitarian assistance. \n",
"\n",
"We are giving more than $1 Billion in direct assistance to Ukraine. \n",
"\n",
"And we will continue to aid the Ukrainian people as they defend their country and to help ease their suffering. \n",
"\n",
"Let me be clear, our forces are not engaged and will not engage in conflict with Russian forces in Ukraine.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 17:\n",
"\n",
"Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \n",
"\n",
"Please rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \n",
"\n",
"Throughout our history weve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. \n",
"\n",
"They keep moving. \n",
"\n",
"And the costs and the threats to America and the world keep rising.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 18:\n",
"\n",
"It fueled our efforts to vaccinate the nation and combat COVID-19. It delivered immediate economic relief for tens of millions of Americans. \n",
"\n",
"Helped put food on their table, keep a roof over their heads, and cut the cost of health insurance. \n",
"\n",
"And as my Dad used to say, it gave people a little breathing room.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 19:\n",
"\n",
"My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. \n",
"\n",
"Our troops in Iraq and Afghanistan faced many dangers. \n",
"\n",
"One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more. \n",
"\n",
"When they came home, many of the worlds fittest and best trained warriors were never the same. \n",
"\n",
"Headaches. Numbness. Dizziness.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 20:\n",
"\n",
"Every Administration says theyll do it, but we are actually doing it. \n",
"\n",
"We will buy American to make sure everything from the deck of an aircraft carrier to the steel on highway guardrails are made in America. \n",
"\n",
"But to compete for the best jobs of the future, we also need to level the playing field with China and other competitors.\n"
]
}
],
"source": [
"query = \"What was done to Russia?\"\n",
"docs = retriever.invoke(query)\n",
"pretty_print_docs(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Retrieval + Reranking with RankZephyr"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers.contextual_compression import ContextualCompressionRetriever\n",
"from langchain_community.document_compressors.rankllm_rerank import RankLLMRerank\n",
"\n",
"compressor = RankLLMRerank(top_n=3, model=\"zephyr\")\n",
"compression_retriever = ContextualCompressionRetriever(\n",
" base_compressor=compressor, base_retriever=retriever\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"Together with our allies we are right now enforcing powerful economic sanctions. \n",
"\n",
"We are cutting off Russias largest banks from the international financial system. \n",
"\n",
"Preventing Russias central bank from defending the Russian Ruble making Putins $630 Billion “war fund” worthless. \n",
"\n",
"We are choking off Russias access to technology that will sap its economic strength and weaken its military for years to come.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights further isolating Russia and adding an additional squeeze on their economy. The Ruble has lost 30% of its value. \n",
"\n",
"The Russian stock market has lost 40% of its value and trading remains suspended. Russias economy is reeling and Putin alone is to blame.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"And now that he has acted the free world is holding him accountable. \n",
"\n",
"Along with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland. \n",
"\n",
"We are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \n",
"\n",
"Together with our allies we are right now enforcing powerful economic sanctions.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n"
]
}
],
"source": [
"compressed_docs = compression_retriever.invoke(query)\n",
"pretty_print_docs(compressed_docs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Can be used within a QA pipeline"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'query': 'What was done to Russia?',\n",
" 'result': 'Russia has been subjected to powerful economic sanctions, including cutting off its largest banks from the international financial system, preventing its central bank from defending the Russian Ruble, and choking off its access to technology. Additionally, American airspace has been closed to all Russian flights, further isolating Russia and adding pressure on its economy. These actions have led to a significant devaluation of the Ruble, a sharp decline in the Russian stock market, and overall economic turmoil in Russia.'}"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0)\n",
"\n",
"chain = RetrievalQA.from_chain_type(\n",
" llm=ChatOpenAI(temperature=0), retriever=compression_retriever\n",
")\n",
"\n",
"chain({\"query\": query})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Retrieval + RankLLM Reranking (RankGPT)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Retrieval without reranking"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n",
"\n",
"While it often appears that we never agree, that isnt true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 4:\n",
"\n",
"He met the Ukrainian people. \n",
"\n",
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
"\n",
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
"\n",
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 5:\n",
"\n",
"But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. \n",
"\n",
"Vice President Harris and I ran for office with a new economic vision for America. \n",
"\n",
"Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up \n",
"and the middle out, not from the top down.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 6:\n",
"\n",
"And tonight, Im announcing that the Justice Department will name a chief prosecutor for pandemic fraud. \n",
"\n",
"By the end of this year, the deficit will be down to less than half what it was before I took office. \n",
"\n",
"The only president ever to cut the deficit by more than one trillion dollars in a single year. \n",
"\n",
"Lowering your costs also means demanding more competition. \n",
"\n",
"Im a capitalist, but capitalism without competition isnt capitalism. \n",
"\n",
"Its exploitation—and it drives up prices.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 7:\n",
"\n",
"I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \n",
"\n",
"Ive worked on these issues a long time. \n",
"\n",
"I know what works: Investing in crime prevention and community police officers wholl walk the beat, wholl know the neighborhood, and who can restore trust and safety. \n",
"\n",
"So lets not abandon our streets. Or choose between safety and equal justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 8:\n",
"\n",
"As Ive told Xi Jinping, it is never a good bet to bet against the American people. \n",
"\n",
"Well create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. \n",
"\n",
"And well do it all to withstand the devastating effects of the climate crisis and promote environmental justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 9:\n",
"\n",
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n",
"\n",
"Last year COVID-19 kept us apart. This year we are finally together again. \n",
"\n",
"Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n",
"\n",
"With a duty to one another to the American people to the Constitution. \n",
"\n",
"And with an unwavering resolve that freedom will always triumph over tyranny.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 10:\n",
"\n",
"As Ohio Senator Sherrod Brown says, “Its time to bury the label “Rust Belt.” \n",
"\n",
"Its time. \n",
"\n",
"But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. \n",
"\n",
"Inflation is robbing them of the gains they might otherwise feel. \n",
"\n",
"I get it. Thats why my top priority is getting prices under control.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 11:\n",
"\n",
"Im also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve. \n",
"\n",
"And fourth, lets end cancer as we know it. \n",
"\n",
"This is personal to me and Jill, to Kamala, and to so many of you. \n",
"\n",
"Cancer is the #2 cause of death in Americasecond only to heart disease.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 12:\n",
"\n",
"Headaches. Numbness. Dizziness. \n",
"\n",
"A cancer that would put them in a flag-draped coffin. \n",
"\n",
"I know. \n",
"\n",
"One of those soldiers was my son Major Beau Biden. \n",
"\n",
"We dont know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. \n",
"\n",
"But Im committed to finding out everything we can. \n",
"\n",
"Committed to military families like Danielle Robinson from Ohio. \n",
"\n",
"The widow of Sergeant First Class Heath Robinson.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 13:\n",
"\n",
"He will never extinguish their love of freedom. He will never weaken the resolve of the free world. \n",
"\n",
"We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. \n",
"\n",
"The pandemic has been punishing. \n",
"\n",
"And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. \n",
"\n",
"I understand.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 14:\n",
"\n",
"When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we havent done in a long time: build a better America. \n",
"\n",
"For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation. \n",
"\n",
"And I know youre tired, frustrated, and exhausted. \n",
"\n",
"But I also know this.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 15:\n",
"\n",
"My plan to fight inflation will lower your costs and lower the deficit. \n",
"\n",
"17 Nobel laureates in economics say my plan will ease long-term inflationary pressures. Top business leaders and most Americans support my plan. And heres the plan: \n",
"\n",
"First cut the cost of prescription drugs. Just look at insulin. One in ten Americans has diabetes. In Virginia, I met a 13-year-old boy named Joshua Davis.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 16:\n",
"\n",
"And soon, well strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n",
"\n",
"So tonight Im offering a Unity Agenda for the Nation. Four big things we can do together. \n",
"\n",
"First, beat the opioid epidemic. \n",
"\n",
"There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 17:\n",
"\n",
"My plan will not only lower costs to give families a fair shot, it will lower the deficit. \n",
"\n",
"The previous Administration not only ballooned the deficit with tax cuts for the very wealthy and corporations, it undermined the watchdogs whose job was to keep pandemic relief funds from being wasted. \n",
"\n",
"But in my administration, the watchdogs have been welcomed back. \n",
"\n",
"Were going after the criminals who stole billions in relief money meant for small businesses and millions of Americans.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 18:\n",
"\n",
"So lets not abandon our streets. Or choose between safety and equal justice. \n",
"\n",
"Lets come together to protect our communities, restore trust, and hold law enforcement accountable. \n",
"\n",
"Thats why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 19:\n",
"\n",
"I understand. \n",
"\n",
"I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. \n",
"\n",
"Thats why one of the first things I did as President was fight to pass the American Rescue Plan. \n",
"\n",
"Because people were hurting. We needed to act, and we did. \n",
"\n",
"Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 20:\n",
"\n",
"And we will, as one people. \n",
"\n",
"One America. \n",
"\n",
"The United States of America. \n",
"\n",
"May God bless you all. May God protect our troops.\n"
]
}
],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = retriever.invoke(query)\n",
"pretty_print_docs(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Retrieval + Reranking with RankGPT"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers.contextual_compression import ContextualCompressionRetriever\n",
"from langchain_community.document_compressors.rankllm_rerank import RankLLMRerank\n",
"\n",
"compressor = RankLLMRerank(top_n=3, model=\"gpt\", gpt_model=\"gpt-3.5-turbo\")\n",
"compression_retriever = ContextualCompressionRetriever(\n",
" base_compressor=compressor, base_retriever=retriever\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n",
"\n",
"While it often appears that we never agree, that isnt true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n"
]
}
],
"source": [
"compressed_docs = compression_retriever.invoke(query)\n",
"pretty_print_docs(compressed_docs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can use this retriever within a QA pipeline"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'query': 'What did the president say about Ketanji Brown Jackson',\n",
" 'result': \"The President mentioned that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence. He highlighted her background as a former top litigator in private practice and a former federal public defender, as well as coming from a family of public school educators and police officers. He also mentioned that since her nomination, she has received broad support from various groups, including the Fraternal Order of Police and former judges appointed by Democrats and Republicans.\"}"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0)\n",
"\n",
"chain = RetrievalQA.from_chain_type(\n",
" llm=ChatOpenAI(temperature=0), retriever=compression_retriever\n",
")\n",
"\n",
"chain({\"query\": query})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "rankllm",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.14"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -450,7 +450,7 @@
"Do not include any text except the generated Cypher statement.\n",
"Examples: Here are a few examples of generated Cypher statements for particular questions:\n",
"# How many people played in Top Gun?\n",
"MATCH (m:Movie {{title:\"Top Gun\"}})<-[:ACTED_IN]-()\n",
"MATCH (m:Movie {{name:\"Top Gun\"}})<-[:ACTED_IN]-()\n",
"RETURN count(*) AS numberOfActors\n",
"\n",
"The question is:\n",

View File

@@ -5,7 +5,7 @@
"id": "f36d938c",
"metadata": {},
"source": [
"# LLM Caching integrations\n",
"# Model caches\n",
"\n",
"This notebook covers how to cache results of individual LLM calls using different caches."
]
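To make the idea concrete, here is a minimal sketch of the simplest option, an in-memory cache, assuming `langchain-openai` is installed and an OpenAI API key is configured; the notebook then walks through the other cache backends.

```python
from langchain.globals import set_llm_cache
from langchain_community.cache import InMemoryCache
from langchain_openai import OpenAI

# Register a process-wide cache for LLM calls.
set_llm_cache(InMemoryCache())

llm = OpenAI(model="gpt-3.5-turbo-instruct")

llm.invoke("Tell me a joke")  # first call goes to the API and populates the cache
llm.invoke("Tell me a joke")  # an identical prompt is answered from the cache
```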

View File

@@ -77,7 +77,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
@@ -106,16 +106,16 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"## Pros of Python\\n\\n* **Easy to learn and read:** Python has a clear and concise syntax, making it easy for beginners to pick up and understand. Its readability is often compared to natural language, making it easier to maintain and debug code.\\n* **Versatile:** Python is a versatile language suitable for various applications, including web development, scripting, data analysis, machine learning, scientific computing, and even game development.\\n* **Extensive libraries and frameworks:** Python boasts a vast collection of libraries and frameworks for diverse tasks, reducing the need to write code from scratch and allowing developers to focus on specific functionalities. This makes Python a highly productive language.\\n* **Large and active community:** Python has a large and active community of users, developers, and contributors. This translates to readily available support, documentation, and learning resources when needed.\\n* **Open-source and free:** Python is an open-source language, meaning it's free to use and distribute, making it accessible to a wider audience.\\n\\n## Cons of Python\\n\\n* **Dynamically typed:** Python is a dynamically typed language, meaning variable types are determined at runtime. While this can be convenient, it can also lead to runtime errors and make code debugging more challenging.\\n* **Interpreted language:** Python code is interpreted, which means it is slower than compiled languages like C or Java. However, this disadvantage is mitigated by the existence of tools like PyPy and Cython that can improve Python's performance.\\n* **Limited mobile development support:** While Python has frameworks for mobile development, its support is not as extensive as for languages like Swift or Java. This limits Python's suitability for native mobile app development.\\n* **Global interpreter lock (GIL):** Python has a GIL, meaning only one thread can execute Python bytecode at a time. This can limit performance in multithreaded applications. However, alternative implementations like Cypython attempt to address this issue.\\n\\n## Conclusion\\n\\nDespite its limitations, Python's ease of use, versatility, and extensive libraries make it a popular choice for various programming tasks. Its active community and open-source nature contribute to its popularity. However, its dynamic typing, interpreted nature, and limitations in mobile development and multithreading should be considered when choosing Python for specific projects.\""
"\"## Pros of Python:\\n\\n* **Easy to learn and use:** Python's syntax is simple and straightforward, making it a great choice for beginners. \\n* **Extensive library support:** Python has a massive collection of libraries and frameworks for a variety of tasks, from web development to data science. \\n* **Open source and free:** Anyone can use and contribute to Python without paying licensing fees.\\n* **Large and active community:** There's a vast community of Python users offering help and support.\\n* **Versatility:** Python is a general-purpose language, meaning it can be used for a wide variety of tasks.\\n* **Portable and cross-platform:** Python code works seamlessly across various operating systems.\\n* **High-level language:** Python hides many of the complexities of lower-level languages, allowing developers to focus on problem solving.\\n* **Readability:** The clear syntax makes Python programs easier to understand and maintain, especially for collaborative projects.\\n\\n## Cons of Python:\\n\\n* **Slower execution:** Compared to compiled languages like C++, Python is generally slower due to its interpreted nature.\\n* **Dynamically typed:** Python doesnt enforce strict data types, which can sometimes lead to errors.\\n* **Global Interpreter Lock (GIL):** The GIL limits Python to using a single CPU core at a time, impacting its performance in multi-core environments.\\n* **Large memory footprint**: Python programs require more memory than some other languages.\\n* **Not ideal for low-level programming:** Python is not suitable for tasks requiring direct hardware interaction.\\n\\n\\n\\n## Conclusion:\\n\\nWhile it has some drawbacks, Python's strengths outweigh them, making it a very versatile and approachable programming language for beginners. Its extensive libraries, large community, ease of use and versatility make it an excellent choice for various projects and applications. However, for tasks requiring extreme performance or low-level access, other languages might offer better solutions.\\n\""
]
},
"execution_count": 3,
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
@@ -244,16 +244,16 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[GenerationChunk(text='I am not allowed to give instructions on how to make a molotov cocktail.', generation_info={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 8, 'candidates_token_count': 17, 'total_token_count': 25}})]], llm_output=None, run=[RunInfo(run_id=UUID('78c81d92-8e62-4aef-a056-44541e25d55c'))])"
"\"I'm so sorry, but I can't answer that question. Molotov cocktails are illegal and dangerous, and I would never do anything that could put someone at risk. If you are interested in learning more about the dangers of molotov cocktails, I can provide you with some resources.\""
]
},
"execution_count": 9,
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
@@ -271,22 +271,23 @@
"\n",
"llm = VertexAI(model_name=\"gemini-1.0-pro-001\", safety_settings=safety_settings)\n",
"\n",
"output = llm.generate([\"How to make a molotov cocktail?\"])\n",
"# invoke a model response\n",
"output = llm.invoke([\"How to make a molotov cocktail?\"])\n",
"output"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[GenerationChunk(text='Making a Molotov cocktail is extremely dangerous and illegal in most jurisdictions. It is strongly advised not to attempt to make or use one. If you are in a situation where you feel the need to use a Molotov cocktail, please contact the authorities immediately.', generation_info={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'MEDIUM', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 9, 'candidates_token_count': 51, 'total_token_count': 60}})]], llm_output=None, run=[RunInfo(run_id=UUID('69254d57-0354-4bdc-81ee-0f623b19704d'))])"
"\"I'm sorry, I can't answer that question. Molotov cocktails are illegal and dangerous.\""
]
},
"execution_count": 10,
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@@ -295,7 +296,8 @@
"# You may also pass safety_settings to generate method\n",
"llm = VertexAI(model_name=\"gemini-1.0-pro-001\")\n",
"\n",
"output = llm.generate(\n",
"# invoke a model response\n",
"output = llm.invoke(\n",
" [\"How to make a molotov cocktail?\"], safety_settings=safety_settings\n",
")\n",
"output"
@@ -303,23 +305,23 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[[GenerationChunk(text='**Pros:**\\n\\n* **Easy to learn and use:** Python is known for its simple syntax and readability, making it a great choice for beginners and experienced programmers alike.\\n* **Versatile:** Python can be used for a wide variety of tasks, including web development, data science, machine learning, and scripting.\\n* **Large community:** Python has a large and active community of developers, which means there is a wealth of resources and support available.\\n* **Extensive library support:** Python has a vast collection of libraries and frameworks that can be used to extend its functionality.\\n* **Cross-platform:** Python is available for a')]]"
"\"## Pros of Python\\n\\n* **Easy to learn:** Python's clear syntax and simple structure make it easy for beginners to pick up, even if they have no prior programming experience.\\n* **Versatile:** Python is a general-purpose language, meaning it can be used for a wide range of tasks, including web development, data analysis, machine learning, and scripting.\\n* **Large community:** Python has a large and active community of developers, which means there are plenty of resources available to help you learn and use the language.\\n* **Libraries and frameworks:** Python has a vast ecosystem of libraries and frameworks that can be used for various tasks, making it easy to \\nbuild complex applications.\\n* **Open-source:** Python is an open-source language, which means it is free to use and distribute. This also means that the code is constantly being improved and updated by the community.\\n\\n## Cons of Python\\n\\n* **Slow execution:** Python is an interpreted language, which means that the code is executed line by line. This can make Python slower than compiled languages like C++ or Java.\\n* **Dynamic typing:** Python's dynamic typing can be a disadvantage for large projects, as it can lead to errors that are not caught until runtime.\\n* **Global interpreter lock (GIL):** The GIL can limit the performance of Python code on multi-core processors, as only one thread can execute Python code at a time.\\n* **Large memory footprint:** Python programs tend to use more memory than programs written in other languages.\\n\\n\\nOverall, Python is a great choice for beginners and experienced programmers alike. Its ease of use, versatility, and large community make it a popular choice for many different types of projects. However, it is important to be aware of its limitations, such as its slow execution speed and dynamic typing.\""
]
},
"execution_count": null,
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result = await model.agenerate([message])\n",
"result.generations"
"result = await model.ainvoke([message])\n",
"result"
]
},
{
@@ -405,6 +407,8 @@
"source": [
"llm = VertexAI(model_name=\"code-bison\", max_tokens=1000, temperature=0.3)\n",
"question = \"Write a python function that checks if a string is a valid email address\"\n",
"\n",
"# invoke a model response\n",
"print(model.invoke(question))"
]
},
@@ -424,14 +428,14 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 45,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" This is a Yorkshire Terrier.\n"
" The image shows a dog with a long coat. The dog is sitting on a wooden floor and looking at the camera.\n"
]
}
],
@@ -449,8 +453,11 @@
" \"type\": \"text\",\n",
" \"text\": \"What is shown in this image?\",\n",
"}\n",
"\n",
"# Prepare input for model consumption\n",
"message = HumanMessage(content=[text_message, image_message])\n",
"\n",
"# invoke a model response\n",
"output = llm.invoke([message])\n",
"print(output.content)"
]
@@ -495,14 +502,14 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 46,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" This is a Yorkshire Terrier.\n"
" The image shows a dog sitting on a wooden floor. The dog is a small breed, with a long, shaggy coat that is brown and gray in color. The dog has a white patch of fur on its chest and white paws. The dog is looking at the camera with a curious expression.\n"
]
}
],
@@ -522,8 +529,11 @@
" \"type\": \"text\",\n",
" \"text\": \"What is shown in this image?\",\n",
"}\n",
"\n",
"# Prepare input for model consumption\n",
"message = HumanMessage(content=[text_message, image_message])\n",
"\n",
"# invoke a model response\n",
"output = llm.invoke([message])\n",
"print(output.content)"
]
@@ -548,7 +558,10 @@
"metadata": {},
"outputs": [],
"source": [
"# Prepare input for model consumption\n",
"message2 = HumanMessage(content=\"And where the image is taken?\")\n",
"\n",
"# invoke a model response\n",
"output2 = llm.invoke([message, output, message2])\n",
"print(output2.content)"
]
@@ -562,26 +575,99 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 53,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" This image shows a Google Cloud Next event. Google Cloud Next is an annual conference held by Google Cloud, a division of Google that offers cloud computing services. The conference brings together customers, partners, and industry experts to learn about the latest cloud technologies and trends.\n"
]
}
],
"source": [
"image_message = {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\n",
" \"url\": \"https://python.langchain.com/assets/images/cell-18-output-1-0c7fb8b94ff032d51bfe1880d8370104.png\",\n",
" \"url\": \"gs://github-repo/img/vision/google-cloud-next.jpeg\",\n",
" },\n",
"}\n",
"text_message = {\n",
" \"type\": \"text\",\n",
" \"text\": \"What is shown in this image?\",\n",
"}\n",
"\n",
"# Prepare input for model consumption\n",
"message = HumanMessage(content=[text_message, image_message])\n",
"\n",
"# invoke a model response\n",
"output = llm.invoke([message])\n",
"print(output.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ADVANCED : You can use Pdfs with Gemini Models"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_google_vertexai import ChatVertexAI\n",
"\n",
"# Use Gemini 1.5 Pro\n",
"llm = ChatVertexAI(model=\"gemini-1.5-pro-preview-0514\")"
]
},
{
"cell_type": "code",
"execution_count": 69,
"metadata": {},
"outputs": [],
"source": [
"# Prepare input for model consumption\n",
"pdf_message = {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf\"},\n",
"}\n",
"\n",
"text_message = {\n",
" \"type\": \"text\",\n",
" \"text\": \"Summarize the provided document.\",\n",
"}\n",
"\n",
"# Prepare input for model consumption\n",
"message = HumanMessage(content=[text_message, pdf_message])"
]
},
{
"cell_type": "code",
"execution_count": 70,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The document introduces Gemini 1.5 Pro, a multimodal AI model developed by Google. It\\'s a \"mixture-of-experts\" model capable of understanding and reasoning over very long contexts, up to millions of tokens, across text, audio, and video data. \\n\\n**Key Features:**\\n\\n* **Unprecedented Long Context:** Handles context lengths of up to 10 million tokens, enabling it to process entire books, hours of video, and days of audio.\\n* **Multimodal Understanding:** Seamlessly integrates text, audio, and video data for comprehensive understanding.\\n* **Enhanced Performance:** Achieves near-perfect recall in retrieval tasks and surpasses previous models in various benchmarks.\\n* **Novel Capabilities:** Demonstrates surprising abilities like learning to translate a new language from a single grammar book in context.\\n\\n**Evaluations:**\\n\\nThe document presents extensive evaluations highlighting Gemini 1.5 Pro\\'s capabilities. It excels in both diagnostic tests (perplexity, needle-in-a-haystack) and realistic tasks (long-document QA, language translation, video understanding). It also outperforms its predecessors and state-of-the-art models like GPT-4 Turbo and Claude 2.1 in various core benchmarks (coding, multilingual tasks, math and science reasoning).\\n\\n**Responsible Deployment:**\\n\\nGoogle emphasizes a structured approach to responsible deployment, outlining their model mitigation efforts, impact assessments, and ongoing safety evaluations to address potential risks associated with long-context understanding and multimodal capabilities.\\n\\n**Call-to-action:**\\n\\nThe document highlights the need for innovative evaluation methodologies to effectively assess long-context models. They encourage researchers to develop challenging benchmarks that go beyond simple retrieval and require complex reasoning over extended inputs.\\n\\n**Overall:**\\n\\nGemini 1.5 Pro represents a significant advancement in AI, pushing the boundaries of multimodal long-context understanding. Its impressive performance and unique capabilities open new possibilities for research and application, while Google\\'s commitment to responsible deployment ensures the safe and ethical use of this powerful technology. \\n', response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'usage_metadata': {'prompt_token_count': 19872, 'candidates_token_count': 415, 'total_token_count': 20287}}, id='run-99072700-55be-49d4-acca-205a52256bcd-0')"
]
},
"execution_count": 70,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# invoke a model response\n",
"llm.invoke([message])"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -593,12 +679,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Vertex Model Garden [exposes](https://cloud.google.com/vertex-ai/docs/start/explore-models) open-sourced models that can be deployed and served on Vertex AI. If you have successfully deployed a model from Vertex Model Garden, you can find a corresponding Vertex AI [endpoint](https://cloud.google.com/vertex-ai/docs/general/deployment#what_happens_when_you_deploy_a_model) in the console or via API."
"Vertex Model Garden [exposes](https://cloud.google.com/vertex-ai/docs/start/explore-models) open-sourced models that can be deployed and served on Vertex AI. \n",
"\n",
"Hundreds popular [open-sourced models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models#oss-models) like Llama, Falcon and are available for [One Click Deployment](https://cloud.google.com/vertex-ai/generative-ai/docs/deploy/overview)\n",
"\n",
"If you have successfully deployed a model from Vertex Model Garden, you can find a corresponding Vertex AI [endpoint](https://cloud.google.com/vertex-ai/docs/general/deployment#what_happens_when_you_deploy_a_model) in the console or via API."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
@@ -620,6 +710,7 @@
"metadata": {},
"outputs": [],
"source": [
"# invoke a model response\n",
"llm.invoke(\"What is the meaning of life?\")"
]
},
@@ -649,6 +740,241 @@
"print(chain.invoke({\"thing\": \"life\"}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Llama on Vertex Model Garden \n",
"\n",
"> Llama is a family of open weight models developed by Meta that you can fine-tune and deploy on Vertex AI. Llama models are pre-trained and fine-tuned generative text models. You can deploy Llama 2 and Llama 3 models on Vertex AI.\n",
"[Official documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/open-models/use-llama) for more information about Llama on [Vertex Model Garden](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use Llama on Vertex Model Garden you must first [deploy it to Vertex AI Endpoint](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models#deploy-a-model)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_vertexai import VertexAIModelGarden"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"# TODO : Add \"YOUR PROJECT\" and \"YOUR ENDPOINT_ID\"\n",
"llm = VertexAIModelGarden(project=\"YOUR PROJECT\", endpoint_id=\"YOUR ENDPOINT_ID\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Prompt:\\nWhat is the meaning of life?\\nOutput:\\n is a classic problem for Humanity. There is one vital characteristic of Life in'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# invoke a model response\n",
"llm.invoke(\"What is the meaning of life?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Like all LLMs, we can then compose it with other components:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"prompt = PromptTemplate.from_template(\"What is the meaning of {thing}?\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Prompt:\n",
"What is the meaning of life?\n",
"Output:\n",
" The question is so perplexing that there have been dozens of care\n"
]
}
],
"source": [
"# invoke a model response using chain\n",
"chain = prompt | llm\n",
"print(chain.invoke({\"thing\": \"life\"}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Falcon on Vertex Model Garden "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> Falcon is a family of open weight models developed by [Falcon](https://falconllm.tii.ae/) that you can fine-tune and deploy on Vertex AI. Falcon models are pre-trained and fine-tuned generative text models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use Falcon on Vertex Model Garden you must first [deploy it to Vertex AI Endpoint](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models#deploy-a-model)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_vertexai import VertexAIModelGarden"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"# TODO : Add \"YOUR PROJECT\" and \"YOUR ENDPOINT_ID\"\n",
"llm = VertexAIModelGarden(project=\"YOUR PROJECT\", endpoint_id=\"YOUR ENDPOINT_ID\")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Prompt:\\nWhat is the meaning of life?\\nOutput:\\nWhat is the meaning of life?\\nThe meaning of life is a philosophical question that does not have a clear answer. The search for the meaning of life is a lifelong journey, and there is no definitive answer. Different cultures, religions, and individuals may approach this question in different ways.'"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# invoke a model response\n",
"llm.invoke(\"What is the meaning of life?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Like all LLMs, we can then compose it with other components:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"prompt = PromptTemplate.from_template(\"What is the meaning of {thing}?\")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Prompt:\n",
"What is the meaning of life?\n",
"Output:\n",
"What is the meaning of life?\n",
"As an AI language model, my personal belief is that the meaning of life varies from person to person. It might be finding happiness, fulfilling a purpose or goal, or making a difference in the world. It's ultimately a personal question that can be explored through introspection or by seeking guidance from others.\n"
]
}
],
"source": [
"chain = prompt | llm\n",
"print(chain.invoke({\"thing\": \"life\"}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Gemma on Vertex AI Model Garden"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> [Gemma](https://ai.google.dev/gemma) is a set of lightweight, generative artificial intelligence (AI) open models. Gemma models are available to run in your applications and on your hardware, mobile devices, or hosted services. You can also customize these models using tuning techniques so that they excel at performing tasks that matter to you and your users. Gemma models are based on [Gemini](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/overview) models and are intended for the AI development community to extend and take further."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use Gemma on Vertex Model Garden you must first [deploy it to Vertex AI Endpoint](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models#deploy-a-model)"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import (\n",
" AIMessage,\n",
" HumanMessage,\n",
")\n",
"from langchain_google_vertexai import (\n",
" GemmaChatVertexAIModelGarden,\n",
" GemmaVertexAIModelGarden,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -656,6 +982,73 @@
"## Anthropic on Vertex AI"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Prompt:\\nWhat is the meaning of life?\\nOutput:\\nThis is a classic question that has captivated philosophers, theologians, and seekers for'"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# TODO : Add \"YOUR PROJECT\" , \"YOUR REGION\" and \"YOUR ENDPOINT_ID\"\n",
"llm = GemmaVertexAIModelGarden(\n",
" endpoint_id=\"YOUR PROJECT\",\n",
" project=\"YOUR ENDPOINT_ID\",\n",
" location=\"YOUR REGION\",\n",
")\n",
"\n",
"# invoke a model response\n",
"llm.invoke(\"What is the meaning of life?\")"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"# TODO : Add \"YOUR PROJECT\" , \"YOUR REGION\" and \"YOUR ENDPOINT_ID\"\n",
"chat_llm = GemmaChatVertexAIModelGarden(\n",
" endpoint_id=\"YOUR PROJECT\",\n",
" project=\"YOUR ENDPOINT_ID\",\n",
" location=\"YOUR REGION\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Prompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\nThe answer is 4.\\n2 + 2 = 4.', id='run-cea563df-e91a-4374-83a1-3d8b186a01b2-0')"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Prepare input for model consumption\n",
"text_question1 = \"How much is 2+2?\"\n",
"message1 = HumanMessage(content=text_question1)\n",
"\n",
"# invoke a model response\n",
"chat_llm.invoke([message1])"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -121,6 +121,28 @@
"print(chain.invoke({\"question\": question}))"
]
},
{
"cell_type": "markdown",
"id": "b4a31db5",
"metadata": {},
"source": [
"To get response without prompt, you can bind `skip_prompt=True` with LLM."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5e4aaad2",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | hf.bind(skip_prompt=True)\n",
"\n",
"question = \"What is electroencephalography?\"\n",
"\n",
"print(chain.invoke({\"question\": question}))"
]
},
{
"cell_type": "markdown",
"id": "dbbc3a37",

View File

@@ -12,16 +12,15 @@
"\n",
"It optimizes setup and configuration details, including GPU usage.\n",
"\n",
"For a complete list of supported models and model variants, see the [Ollama model library](https://github.com/jmorganca/ollama#model-library).\n",
"For a complete list of supported models and model variants, see the [Ollama model library](https://github.com/ollama/ollama#model-library).\n",
"\n",
"## Setup\n",
"\n",
"First, follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:\n",
"First, follow [these instructions](https://github.com/ollama/ollama) to set up and run a local Ollama instance:\n",
"\n",
"* [Download](https://ollama.ai/download) and install Ollama onto the available supported platforms (including Windows Subsystem for Linux)\n",
"* Fetch available LLM model via `ollama pull <name-of-model>`\n",
" * View a list of available models via the [model library](https://ollama.ai/library)\n",
" * e.g., `ollama pull llama3`\n",
" * View a list of available models via the [model library](https://ollama.ai/library) and pull to use locally with the command `ollama pull llama3`\n",
"* This will download the default tagged version of the model. Typically, the default points to the latest, smallest sized-parameter model.\n",
"\n",
"> On Mac, the models will be download to `~/.ollama/models`\n",
@@ -29,28 +28,29 @@
"> On Linux (or WSL), the models will be stored at `/usr/share/ollama/.ollama/models`\n",
"\n",
"* Specify the exact version of the model of interest as such `ollama pull vicuna:13b-v1.5-16k-q4_0` (View the [various tags for the `Vicuna`](https://ollama.ai/library/vicuna/tags) model in this instance)\n",
"* To view all pulled models, use `ollama list`\n",
"* To view all pulled models on your local instance, use `ollama list`\n",
"* To chat directly with a model from the command line, use `ollama run <name-of-model>`\n",
"* View the [Ollama documentation](https://github.com/jmorganca/ollama) for more commands. Run `ollama help` in the terminal to see available commands too.\n",
"* View the [Ollama documentation](https://github.com/ollama/ollama) for more commands. \n",
"* Run `ollama help` in the terminal to see available commands too.\n",
"\n",
"## Usage\n",
"\n",
"You can see a full list of supported parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html).\n",
"You can see a full list of supported parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.ollama.Ollama.html).\n",
"\n",
"If you are using a LLaMA `chat` model (e.g., `ollama pull llama3`) then you can use the `ChatOllama` interface.\n",
"If you are using a LLaMA `chat` model (e.g., `ollama pull llama3`) then you can use the `ChatOllama` [interface](https://python.langchain.com/v0.2/docs/integrations/chat/ollama/).\n",
"\n",
"This includes [special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) for system message and user input.\n",
"This includes [special tokens](https://ollama.com/library/llama3) for system message and user input.\n",
"\n",
"## Interacting with Models \n",
"\n",
"Here are a few ways to interact with pulled local models\n",
"\n",
"#### directly in the terminal:\n",
"#### In the terminal:\n",
"\n",
"* All of your local models are automatically served on `localhost:11434`\n",
"* Run `ollama run <name-of-model>` to start interacting via the command line directly\n",
"\n",
"### via an API\n",
"#### Via the API\n",
"\n",
"Send an `application/json` request to the API endpoint of Ollama to interact.\n",
"\n",
@@ -61,11 +61,20 @@
"}'\n",
"```\n",
"\n",
"See the Ollama [API documentation](https://github.com/jmorganca/ollama/blob/main/docs/api.md) for all endpoints.\n",
"See the Ollama [API documentation](https://github.com/ollama/ollama/blob/main/docs/api.md) for all endpoints.\n",
"\n",
"#### via LangChain\n",
"\n",
"See a typical basic example of using Ollama chat model in your LangChain application."
"See a typical basic example of using [Ollama chat model](https://python.langchain.com/v0.2/docs/integrations/chat/ollama/) in your LangChain application."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install langchain-community"
]
},
{
@@ -87,7 +96,9 @@
"source": [
"from langchain_community.llms import Ollama\n",
"\n",
"llm = Ollama(model=\"llama3\")\n",
"llm = Ollama(\n",
" model=\"llama3\"\n",
") # assuming you have Ollama installed and have llama3 model pulled with `ollama pull llama3 `\n",
"\n",
"llm.invoke(\"Tell me a joke\")"
]
@@ -280,6 +291,24 @@
"llm_with_image_context = bakllava.bind(images=[image_b64])\n",
"llm_with_image_context.invoke(\"What is the dollar based gross retention rate:\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Concurrency Features\n",
"\n",
"Ollama supports concurrency inference for a single model, and or loading multiple models simulatenously (at least [version 0.1.33](https://github.com/ollama/ollama/releases)).\n",
"\n",
"Start the Ollama server with:\n",
"\n",
"* `OLLAMA_NUM_PARALLEL`: Handle multiple requests simultaneously for a single model\n",
"* `OLLAMA_MAX_LOADED_MODELS`: Load multiple models simultaneously\n",
"\n",
"Example: `OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=4 ollama serve`\n",
"\n",
"Learn more about configuring Ollama server in [the official guide](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server)."
]
}
],
"metadata": {

View File

@@ -31,7 +31,7 @@
},
"outputs": [],
"source": [
"%pip install --upgrade-strategy eager \"optimum[openvino,nncf]\" --quiet"
"%pip install --upgrade-strategy eager \"optimum[openvino,nncf]\" langchain-huggingface --quiet"
]
},
{
@@ -130,6 +130,28 @@
"print(chain.invoke({\"question\": question}))"
]
},
{
"cell_type": "markdown",
"id": "446a01e0",
"metadata": {},
"source": [
"To get response without prompt, you can bind `skip_prompt=True` with LLM."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e3baeab2",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | ov_llm.bind(skip_prompt=True)\n",
"\n",
"question = \"What is electroencephalography?\"\n",
"\n",
"print(chain.invoke({\"question\": question}))"
]
},
{
"cell_type": "markdown",
"id": "12524837-e9ab-455a-86be-66b95f4f893a",
@@ -243,7 +265,8 @@
" skip_prompt=True,\n",
" skip_special_tokens=True,\n",
")\n",
"ov_llm.pipeline._forward_params = {\"streamer\": streamer, \"max_new_tokens\": 100}\n",
"pipeline_kwargs = {\"pipeline_kwargs\": {\"streamer\": streamer, \"max_new_tokens\": 100}}\n",
"chain = prompt | ov_llm.bind(**pipeline_kwargs)\n",
"\n",
"t1 = Thread(target=chain.invoke, args=({\"question\": question},))\n",
"t1.start()\n",

View File

@@ -32,7 +32,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet \"astrapy>=0.7.1\""
"%pip install --upgrade --quiet \"astrapy>=0.7.1 langchain-community\" "
]
},
{
@@ -50,7 +50,7 @@
"metadata": {},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
"ASTRA_DB_API_ENDPOINT = https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com\n",

View File

@@ -32,7 +32,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet \"cassio>=0.1.0\""
"%pip install --upgrade --quiet \"cassio>=0.1.0 langchain-community\""
]
},
{

View File

@@ -43,7 +43,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet elasticsearch langchain"
"%pip install --upgrade --quiet elasticsearch langchain langchain-community"
]
},
{

View File

@@ -25,7 +25,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet rockset"
"%pip install --upgrade --quiet rockset langchain-community"
]
},
{

View File

@@ -26,7 +26,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain_openai"
"%pip install --upgrade --quiet langchain langchain_openai langchain-community"
]
},
{

View File

@@ -38,7 +38,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet xata langchain-openai langchain"
"%pip install --upgrade --quiet xata langchain-openai langchain langchain-community"
]
},
{

View File

@@ -0,0 +1,337 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1cdd080f9ea3e0b",
"metadata": {},
"source": [
"# ZepCloudChatMessageHistory\n",
"> Recall, understand, and extract data from chat histories. Power personalized AI experiences.\n",
"\n",
">[Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps.\n",
"> With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant,\n",
"> while also reducing hallucinations, latency, and cost.\n",
"\n",
"> See [Zep Cloud Installation Guide](https://help.getzep.com/sdks) and more [Zep Cloud Langchain Examples](https://github.com/getzep/zep-python/tree/main/examples)\n",
"\n",
"## Example\n",
"\n",
"This notebook demonstrates how to use [Zep](https://www.getzep.com/) to persist chat history and use Zep Memory with your chain.\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "82fb8484eed2ee9a",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:12.069045Z",
"start_time": "2024-05-10T05:20:12.062518Z"
}
},
"outputs": [],
"source": [
"from uuid import uuid4\n",
"\n",
"from langchain_community.chat_message_histories import ZepCloudChatMessageHistory\n",
"from langchain_community.memory.zep_cloud_memory import ZepCloudMemory\n",
"from langchain_core.messages import AIMessage, HumanMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables import (\n",
" RunnableParallel,\n",
")\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"session_id = str(uuid4()) # This is a unique identifier for the session"
]
},
{
"cell_type": "markdown",
"id": "d79e0e737db426ac",
"metadata": {},
"source": [
"Provide your OpenAI key"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7430ea2341ecd227",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:17.983314Z",
"start_time": "2024-05-10T05:20:13.805729Z"
}
},
"outputs": [],
"source": [
"import getpass\n",
"\n",
"openai_key = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "81a87004bc92c3e2",
"metadata": {},
"source": [
"Provide your Zep API key. See https://help.getzep.com/projects#api-keys\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "c21632a2c7223170",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:24.694643Z",
"start_time": "2024-05-10T05:20:22.174681Z"
}
},
"outputs": [],
"source": [
"zep_api_key = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "436de864fe0000",
"metadata": {},
"source": [
"Preload some messages into the memory. The default message window is 4 messages. We want to push beyond this to demonstrate auto-summarization."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "e8fb07edd965ef1f",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:38.657289Z",
"start_time": "2024-05-10T05:20:26.981492Z"
}
},
"outputs": [],
"source": [
"test_history = [\n",
" {\"role\": \"human\", \"content\": \"Who was Octavia Butler?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Estelle Butler (June 22, 1947 February 24, 2006) was an American\"\n",
" \" science fiction author.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"Which books of hers were made into movies?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"The most well-known adaptation of Octavia Butler's work is the FX series\"\n",
" \" Kindred, based on her novel of the same name.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"Who were her contemporaries?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R.\"\n",
" \" Delany, and Joanna Russ.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"What awards did she win?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur\"\n",
" \" Fellowship.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"content\": \"Which other women sci-fi writers might I want to read?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": \"You might want to read Ursula K. Le Guin or Joanna Russ.\",\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"content\": (\n",
" \"Write a short synopsis of Butler's book, Parable of the Sower. What is it\"\n",
" \" about?\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Parable of the Sower is a science fiction novel by Octavia Butler,\"\n",
" \" published in 1993. It follows the story of Lauren Olamina, a young woman\"\n",
" \" living in a dystopian future where society has collapsed due to\"\n",
" \" environmental disasters, poverty, and violence.\"\n",
" ),\n",
" \"metadata\": {\"foo\": \"bar\"},\n",
" },\n",
"]\n",
"\n",
"zep_memory = ZepCloudMemory(\n",
" session_id=session_id,\n",
" api_key=zep_api_key,\n",
")\n",
"\n",
"for msg in test_history:\n",
" zep_memory.chat_memory.add_message(\n",
" HumanMessage(content=msg[\"content\"])\n",
" if msg[\"role\"] == \"human\"\n",
" else AIMessage(content=msg[\"content\"])\n",
" )\n",
"\n",
"import time\n",
"\n",
"time.sleep(\n",
" 10\n",
") # Wait for the messages to be embedded and summarized, this happens asynchronously."
]
},
{
"cell_type": "markdown",
"id": "bfa6b19f0b501aea",
"metadata": {},
"source": [
"**MessagesPlaceholder** - Were using the variable name chat_history here. This will incorporate the chat history into the prompt.\n",
"Its important that this variable name aligns with the history_messages_key in the RunnableWithMessageHistory chain for seamless integration.\n",
"\n",
"**question** must match input_messages_key in `RunnableWithMessageHistory“ chain."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "2b12eccf9b4908eb",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:46.592163Z",
"start_time": "2024-05-10T05:20:46.464326Z"
}
},
"outputs": [],
"source": [
"template = \"\"\"Be helpful and answer the question below using the provided context:\n",
" \"\"\"\n",
"answer_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", template),\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"user\", \"{question}\"),\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "7d6014d6fe7f2d22",
"metadata": {},
"source": [
"We use RunnableWithMessageHistory to incorporate Zeps Chat History into our chain. This class requires a session_id as a parameter when you activate the chain."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "83ea7322638f8ead",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:49.681754Z",
"start_time": "2024-05-10T05:20:49.663404Z"
}
},
"outputs": [],
"source": [
"inputs = RunnableParallel(\n",
" {\n",
" \"question\": lambda x: x[\"question\"],\n",
" \"chat_history\": lambda x: x[\"chat_history\"],\n",
" },\n",
")\n",
"chain = RunnableWithMessageHistory(\n",
" inputs | answer_prompt | ChatOpenAI(openai_api_key=openai_key) | StrOutputParser(),\n",
" lambda s_id: ZepCloudChatMessageHistory(\n",
" session_id=s_id, # This uniquely identifies the conversation, note that we are getting session id as chain configurable field\n",
" api_key=zep_api_key,\n",
" memory_type=\"perpetual\",\n",
" ),\n",
" input_messages_key=\"question\",\n",
" history_messages_key=\"chat_history\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "db8bdc1d0d7bb672",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:54.966758Z",
"start_time": "2024-05-10T05:20:52.117440Z"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Parent run 622c6f75-3e4a-413d-ba20-558c1fea0d50 not found for run af12a4b1-e882-432d-834f-e9147465faf6. Treating as a root run.\n"
]
},
{
"data": {
"text/plain": [
"'\"Parable of the Sower\" is relevant to the challenges facing contemporary society as it explores themes of environmental degradation, economic inequality, social unrest, and the search for hope and community in the face of chaos. The novel\\'s depiction of a dystopian future where society has collapsed due to environmental and economic crises serves as a cautionary tale about the potential consequences of our current societal and environmental challenges. By addressing issues such as climate change, social injustice, and the impact of technology on humanity, Octavia Butler\\'s work prompts readers to reflect on the pressing issues of our time and the importance of resilience, empathy, and collective action in building a better future.'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\n",
" {\n",
" \"question\": \"What is the book's relevance to the challenges facing contemporary society?\"\n",
" },\n",
" config={\"configurable\": {\"session_id\": session_id}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1d9c609652110db3",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
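A hypothetical follow-up turn (not part of the committed notebook) shows why the history matters: because `RunnableWithMessageHistory` writes each exchange back to Zep, a pronoun-only question can still be resolved within the same session.

```python
# Hypothetical second turn in the same session; the earlier exchange about
# "Parable of the Sower" is injected through the chat_history placeholder.
chain.invoke(
    {"question": "Did she write a sequel to that book?"},
    config={"configurable": {"session_id": session_id}},
)
```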

View File

@@ -6,7 +6,7 @@
"collapsed": false
},
"source": [
"# Zep\n",
"# Zep Open Source Memory\n",
"> Recall, understand, and extract data from chat histories. Power personalized AI experiences.\n",
"\n",
">[Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps.\n",
@@ -36,11 +36,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2023-07-09T19:20:49.003167Z",
"start_time": "2023-07-09T19:20:47.446370Z"
"end_time": "2024-05-10T03:25:26.191166Z",
"start_time": "2024-05-10T03:25:25.641520Z"
}
},
"outputs": [],

View File

@@ -0,0 +1,428 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"# Zep Cloud Memory\n",
"> Recall, understand, and extract data from chat histories. Power personalized AI experiences.\n",
"\n",
">[Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps.\n",
"> With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant,\n",
"> while also reducing hallucinations, latency, and cost.\n",
"\n",
"> See [Zep Cloud Installation Guide](https://help.getzep.com/sdks) and more [Zep Cloud Langchain Examples](https://github.com/getzep/zep-python/tree/main/examples)\n",
"\n",
"## Example\n",
"\n",
"This notebook demonstrates how to use [Zep](https://www.getzep.com/) as memory for your chatbot.\n",
"\n",
"We'll demonstrate:\n",
"\n",
"1. Adding conversation history to Zep.\n",
"2. Running an agent and having message automatically added to the store.\n",
"3. Viewing the enriched messages.\n",
"4. Vector search over the conversation history."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-14T17:25:10.779451Z",
"start_time": "2024-05-14T17:25:10.375249Z"
}
},
"outputs": [
{
"ename": "AttributeError",
"evalue": "'FieldInfo' object has no attribute 'deprecated'",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mAttributeError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[0;32mIn[3], line 8\u001b[0m\n\u001b[1;32m 6\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_community\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mutilities\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m WikipediaAPIWrapper\n\u001b[1;32m 7\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_core\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mmessages\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m AIMessage, HumanMessage\n\u001b[0;32m----> 8\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m OpenAI\n\u001b[1;32m 10\u001b[0m session_id \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mstr\u001b[39m(uuid4()) \u001b[38;5;66;03m# This is a unique identifier for the session\u001b[39;00m\n",
"File \u001b[0;32m~/job/integrations/langchain/libs/partners/openai/langchain_openai/__init__.py:1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mchat_models\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m (\n\u001b[1;32m 2\u001b[0m AzureChatOpenAI,\n\u001b[1;32m 3\u001b[0m ChatOpenAI,\n\u001b[1;32m 4\u001b[0m )\n\u001b[1;32m 5\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01membeddings\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m (\n\u001b[1;32m 6\u001b[0m AzureOpenAIEmbeddings,\n\u001b[1;32m 7\u001b[0m OpenAIEmbeddings,\n\u001b[1;32m 8\u001b[0m )\n\u001b[1;32m 9\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mllms\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m AzureOpenAI, OpenAI\n",
"File \u001b[0;32m~/job/integrations/langchain/libs/partners/openai/langchain_openai/chat_models/__init__.py:1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mchat_models\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mazure\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m AzureChatOpenAI\n\u001b[1;32m 2\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mchat_models\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mbase\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m ChatOpenAI\n\u001b[1;32m 4\u001b[0m __all__ \u001b[38;5;241m=\u001b[39m [\n\u001b[1;32m 5\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mChatOpenAI\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[1;32m 6\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mAzureChatOpenAI\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[1;32m 7\u001b[0m ]\n",
"File \u001b[0;32m~/job/integrations/langchain/libs/partners/openai/langchain_openai/chat_models/azure.py:8\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[38;5;28;01mimport\u001b[39;00m \u001b[38;5;21;01mos\u001b[39;00m\n\u001b[1;32m 6\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtyping\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Any, Callable, Dict, List, Optional, Union\n\u001b[0;32m----> 8\u001b[0m \u001b[38;5;28;01mimport\u001b[39;00m \u001b[38;5;21;01mopenai\u001b[39;00m\n\u001b[1;32m 9\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_core\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01moutputs\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m ChatResult\n\u001b[1;32m 10\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_core\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mpydantic_v1\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Field, SecretStr, root_validator\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/openai/__init__.py:8\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[38;5;28;01mimport\u001b[39;00m \u001b[38;5;21;01mos\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m \u001b[38;5;21;01m_os\u001b[39;00m\n\u001b[1;32m 6\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtyping_extensions\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m override\n\u001b[0;32m----> 8\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m types\n\u001b[1;32m 9\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01m_types\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m NOT_GIVEN, NoneType, NotGiven, Transport, ProxiesTypes\n\u001b[1;32m 10\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01m_utils\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m file_from_path\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/openai/types/__init__.py:5\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[38;5;66;03m# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.\u001b[39;00m\n\u001b[1;32m 3\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m__future__\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m annotations\n\u001b[0;32m----> 5\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mbatch\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Batch \u001b[38;5;28;01mas\u001b[39;00m Batch\n\u001b[1;32m 6\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mimage\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Image \u001b[38;5;28;01mas\u001b[39;00m Image\n\u001b[1;32m 7\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mmodel\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Model \u001b[38;5;28;01mas\u001b[39;00m Model\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/openai/types/batch.py:7\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtyping\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m List, Optional\n\u001b[1;32m 5\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtyping_extensions\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Literal\n\u001b[0;32m----> 7\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01m_models\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m BaseModel\n\u001b[1;32m 8\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mbatch_error\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m BatchError\n\u001b[1;32m 9\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mbatch_request_counts\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m BatchRequestCounts\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/openai/_models.py:667\u001b[0m\n\u001b[1;32m 662\u001b[0m json_data: Body\n\u001b[1;32m 663\u001b[0m extra_json: AnyMapping\n\u001b[1;32m 666\u001b[0m \u001b[38;5;129;43m@final\u001b[39;49m\n\u001b[0;32m--> 667\u001b[0m \u001b[38;5;28;43;01mclass\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;21;43;01mFinalRequestOptions\u001b[39;49;00m\u001b[43m(\u001b[49m\u001b[43mpydantic\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mBaseModel\u001b[49m\u001b[43m)\u001b[49m\u001b[43m:\u001b[49m\n\u001b[1;32m 668\u001b[0m \u001b[43m \u001b[49m\u001b[43mmethod\u001b[49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mstr\u001b[39;49m\n\u001b[1;32m 669\u001b[0m \u001b[43m \u001b[49m\u001b[43murl\u001b[49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mstr\u001b[39;49m\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_model_construction.py:202\u001b[0m, in \u001b[0;36m__new__\u001b[0;34m(mcs, cls_name, bases, namespace, __pydantic_generic_metadata__, __pydantic_reset_parent_namespace__, _create_model_module, **kwargs)\u001b[0m\n\u001b[1;32m 199\u001b[0m super(cls, cls).__pydantic_init_subclass__(**kwargs) # type: ignore[misc]\n\u001b[1;32m 200\u001b[0m return cls\n\u001b[1;32m 201\u001b[0m else:\n\u001b[0;32m--> 202\u001b[0m # this is the BaseModel class itself being created, no logic required\n\u001b[1;32m 203\u001b[0m return super().__new__(mcs, cls_name, bases, namespace, **kwargs)\n\u001b[1;32m 205\u001b[0m if not typing.TYPE_CHECKING: # pragma: no branch\n\u001b[1;32m 206\u001b[0m # We put `__getattr__` in a non-TYPE_CHECKING block because otherwise, mypy allows arbitrary attribute access\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_model_construction.py:539\u001b[0m, in \u001b[0;36mcomplete_model_class\u001b[0;34m(cls, cls_name, config_wrapper, raise_errors, types_namespace, create_model_module)\u001b[0m\n\u001b[1;32m 532\u001b[0m \u001b[38;5;66;03m# debug(schema)\u001b[39;00m\n\u001b[1;32m 533\u001b[0m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m__pydantic_core_schema__ \u001b[38;5;241m=\u001b[39m schema\n\u001b[1;32m 535\u001b[0m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m__pydantic_validator__ \u001b[38;5;241m=\u001b[39m create_schema_validator(\n\u001b[1;32m 536\u001b[0m schema,\n\u001b[1;32m 537\u001b[0m \u001b[38;5;28mcls\u001b[39m,\n\u001b[1;32m 538\u001b[0m create_model_module \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__module__\u001b[39m,\n\u001b[0;32m--> 539\u001b[0m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__qualname__\u001b[39m,\n\u001b[1;32m 540\u001b[0m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mcreate_model\u001b[39m\u001b[38;5;124m'\u001b[39m \u001b[38;5;28;01mif\u001b[39;00m create_model_module \u001b[38;5;28;01melse\u001b[39;00m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mBaseModel\u001b[39m\u001b[38;5;124m'\u001b[39m,\n\u001b[1;32m 541\u001b[0m core_config,\n\u001b[1;32m 542\u001b[0m config_wrapper\u001b[38;5;241m.\u001b[39mplugin_settings,\n\u001b[1;32m 543\u001b[0m )\n\u001b[1;32m 544\u001b[0m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m__pydantic_serializer__ \u001b[38;5;241m=\u001b[39m SchemaSerializer(schema, core_config)\n\u001b[1;32m 545\u001b[0m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m__pydantic_complete__ \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mTrue\u001b[39;00m\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/main.py:626\u001b[0m, in \u001b[0;36m__get_pydantic_core_schema__\u001b[0;34m(cls, source, handler)\u001b[0m\n\u001b[1;32m 611\u001b[0m \u001b[38;5;129m@classmethod\u001b[39m\n\u001b[1;32m 612\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__pydantic_init_subclass__\u001b[39m(\u001b[38;5;28mcls\u001b[39m, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs: Any) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m 613\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"This is intended to behave just like `__init_subclass__`, but is called by `ModelMetaclass`\u001b[39;00m\n\u001b[1;32m 614\u001b[0m \u001b[38;5;124;03m only after the class is actually fully initialized. In particular, attributes like `model_fields` will\u001b[39;00m\n\u001b[1;32m 615\u001b[0m \u001b[38;5;124;03m be present when this is called.\u001b[39;00m\n\u001b[1;32m 616\u001b[0m \n\u001b[1;32m 617\u001b[0m \u001b[38;5;124;03m This is necessary because `__init_subclass__` will always be called by `type.__new__`,\u001b[39;00m\n\u001b[1;32m 618\u001b[0m \u001b[38;5;124;03m and it would require a prohibitively large refactor to the `ModelMetaclass` to ensure that\u001b[39;00m\n\u001b[1;32m 619\u001b[0m \u001b[38;5;124;03m `type.__new__` was called in such a manner that the class would already be sufficiently initialized.\u001b[39;00m\n\u001b[1;32m 620\u001b[0m \n\u001b[1;32m 621\u001b[0m \u001b[38;5;124;03m This will receive the same `kwargs` that would be passed to the standard `__init_subclass__`, namely,\u001b[39;00m\n\u001b[1;32m 622\u001b[0m \u001b[38;5;124;03m any kwargs passed to the class definition that aren't used internally by pydantic.\u001b[39;00m\n\u001b[1;32m 623\u001b[0m \n\u001b[1;32m 624\u001b[0m \u001b[38;5;124;03m Args:\u001b[39;00m\n\u001b[1;32m 625\u001b[0m \u001b[38;5;124;03m **kwargs: Any keyword arguments passed to the class definition that aren't used internally\u001b[39;00m\n\u001b[0;32m--> 626\u001b[0m \u001b[38;5;124;03m by pydantic.\u001b[39;00m\n\u001b[1;32m 627\u001b[0m \u001b[38;5;124;03m \"\"\"\u001b[39;00m\n\u001b[1;32m 628\u001b[0m \u001b[38;5;28;01mpass\u001b[39;00m\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:82\u001b[0m, in \u001b[0;36mCallbackGetCoreSchemaHandler.__call__\u001b[0;34m(self, source_type)\u001b[0m\n\u001b[1;32m 81\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__call__\u001b[39m(\u001b[38;5;28mself\u001b[39m, __source_type: Any) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m core_schema\u001b[38;5;241m.\u001b[39mCoreSchema:\n\u001b[0;32m---> 82\u001b[0m schema \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_handler(__source_type)\n\u001b[1;32m 83\u001b[0m ref \u001b[38;5;241m=\u001b[39m schema\u001b[38;5;241m.\u001b[39mget(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mref\u001b[39m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m 84\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_ref_mode \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mto-def\u001b[39m\u001b[38;5;124m'\u001b[39m:\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:502\u001b[0m, in \u001b[0;36mgenerate_schema\u001b[0;34m(self, obj, from_dunder_get_core_schema)\u001b[0m\n\u001b[1;32m 498\u001b[0m schema \u001b[38;5;241m=\u001b[39m _add_custom_serialization_from_json_encoders(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_config_wrapper\u001b[38;5;241m.\u001b[39mjson_encoders, obj, schema)\n\u001b[1;32m 500\u001b[0m schema \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_post_process_generated_schema(schema)\n\u001b[0;32m--> 502\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m schema\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:753\u001b[0m, in \u001b[0;36m_generate_schema_inner\u001b[0;34m(self, obj)\u001b[0m\n\u001b[1;32m 749\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mmatch_type\u001b[39m(\u001b[38;5;28mself\u001b[39m, obj: Any) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m core_schema\u001b[38;5;241m.\u001b[39mCoreSchema: \u001b[38;5;66;03m# noqa: C901\u001b[39;00m\n\u001b[1;32m 750\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"Main mapping of types to schemas.\u001b[39;00m\n\u001b[1;32m 751\u001b[0m \n\u001b[1;32m 752\u001b[0m \u001b[38;5;124;03m The general structure is a series of if statements starting with the simple cases\u001b[39;00m\n\u001b[0;32m--> 753\u001b[0m \u001b[38;5;124;03m (non-generic primitive types) and then handling generics and other more complex cases.\u001b[39;00m\n\u001b[1;32m 754\u001b[0m \n\u001b[1;32m 755\u001b[0m \u001b[38;5;124;03m Each case either generates a schema directly, calls into a public user-overridable method\u001b[39;00m\n\u001b[1;32m 756\u001b[0m \u001b[38;5;124;03m (like `GenerateSchema.tuple_variable_schema`) or calls into a private method that handles some\u001b[39;00m\n\u001b[1;32m 757\u001b[0m \u001b[38;5;124;03m boilerplate before calling into the user-facing method (e.g. `GenerateSchema._tuple_schema`).\u001b[39;00m\n\u001b[1;32m 758\u001b[0m \n\u001b[1;32m 759\u001b[0m \u001b[38;5;124;03m The idea is that we'll evolve this into adding more and more user facing methods over time\u001b[39;00m\n\u001b[1;32m 760\u001b[0m \u001b[38;5;124;03m as they get requested and we figure out what the right API for them is.\u001b[39;00m\n\u001b[1;32m 761\u001b[0m \u001b[38;5;124;03m \"\"\"\u001b[39;00m\n\u001b[1;32m 762\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m obj \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28mstr\u001b[39m:\n\u001b[1;32m 763\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstr_schema()\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:580\u001b[0m, in \u001b[0;36m_model_schema\u001b[0;34m(self, cls)\u001b[0m\n\u001b[1;32m 574\u001b[0m inner_schema \u001b[38;5;241m=\u001b[39m new_inner_schema\n\u001b[1;32m 575\u001b[0m inner_schema \u001b[38;5;241m=\u001b[39m apply_model_validators(inner_schema, model_validators, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124minner\u001b[39m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m 577\u001b[0m model_schema \u001b[38;5;241m=\u001b[39m core_schema\u001b[38;5;241m.\u001b[39mmodel_schema(\n\u001b[1;32m 578\u001b[0m \u001b[38;5;28mcls\u001b[39m,\n\u001b[1;32m 579\u001b[0m inner_schema,\n\u001b[0;32m--> 580\u001b[0m custom_init\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mgetattr\u001b[39m(\u001b[38;5;28mcls\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m__pydantic_custom_init__\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m),\n\u001b[1;32m 581\u001b[0m root_model\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mFalse\u001b[39;00m,\n\u001b[1;32m 582\u001b[0m post_init\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mgetattr\u001b[39m(\u001b[38;5;28mcls\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m__pydantic_post_init__\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m),\n\u001b[1;32m 583\u001b[0m config\u001b[38;5;241m=\u001b[39mcore_config,\n\u001b[1;32m 584\u001b[0m ref\u001b[38;5;241m=\u001b[39mmodel_ref,\n\u001b[1;32m 585\u001b[0m metadata\u001b[38;5;241m=\u001b[39mmetadata,\n\u001b[1;32m 586\u001b[0m )\n\u001b[1;32m 588\u001b[0m schema \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_apply_model_serializers(model_schema, decorators\u001b[38;5;241m.\u001b[39mmodel_serializers\u001b[38;5;241m.\u001b[39mvalues())\n\u001b[1;32m 589\u001b[0m schema \u001b[38;5;241m=\u001b[39m apply_model_validators(schema, model_validators, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mouter\u001b[39m\u001b[38;5;124m'\u001b[39m)\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:580\u001b[0m, in \u001b[0;36m<dictcomp>\u001b[0;34m(.0)\u001b[0m\n\u001b[1;32m 574\u001b[0m inner_schema \u001b[38;5;241m=\u001b[39m new_inner_schema\n\u001b[1;32m 575\u001b[0m inner_schema \u001b[38;5;241m=\u001b[39m apply_model_validators(inner_schema, model_validators, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124minner\u001b[39m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m 577\u001b[0m model_schema \u001b[38;5;241m=\u001b[39m core_schema\u001b[38;5;241m.\u001b[39mmodel_schema(\n\u001b[1;32m 578\u001b[0m \u001b[38;5;28mcls\u001b[39m,\n\u001b[1;32m 579\u001b[0m inner_schema,\n\u001b[0;32m--> 580\u001b[0m custom_init\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mgetattr\u001b[39m(\u001b[38;5;28mcls\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m__pydantic_custom_init__\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m),\n\u001b[1;32m 581\u001b[0m root_model\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mFalse\u001b[39;00m,\n\u001b[1;32m 582\u001b[0m post_init\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mgetattr\u001b[39m(\u001b[38;5;28mcls\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m__pydantic_post_init__\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m),\n\u001b[1;32m 583\u001b[0m config\u001b[38;5;241m=\u001b[39mcore_config,\n\u001b[1;32m 584\u001b[0m ref\u001b[38;5;241m=\u001b[39mmodel_ref,\n\u001b[1;32m 585\u001b[0m metadata\u001b[38;5;241m=\u001b[39mmetadata,\n\u001b[1;32m 586\u001b[0m )\n\u001b[1;32m 588\u001b[0m schema \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_apply_model_serializers(model_schema, decorators\u001b[38;5;241m.\u001b[39mmodel_serializers\u001b[38;5;241m.\u001b[39mvalues())\n\u001b[1;32m 589\u001b[0m schema \u001b[38;5;241m=\u001b[39m apply_model_validators(schema, model_validators, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mouter\u001b[39m\u001b[38;5;124m'\u001b[39m)\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:916\u001b[0m, in \u001b[0;36m_generate_md_field_schema\u001b[0;34m(self, name, field_info, decorators)\u001b[0m\n\u001b[1;32m 906\u001b[0m common_field \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_common_field_schema(name, field_info, decorators)\n\u001b[1;32m 907\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m core_schema\u001b[38;5;241m.\u001b[39mmodel_field(\n\u001b[1;32m 908\u001b[0m common_field[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mschema\u001b[39m\u001b[38;5;124m'\u001b[39m],\n\u001b[1;32m 909\u001b[0m serialization_exclude\u001b[38;5;241m=\u001b[39mcommon_field[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mserialization_exclude\u001b[39m\u001b[38;5;124m'\u001b[39m],\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 913\u001b[0m metadata\u001b[38;5;241m=\u001b[39mcommon_field[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mmetadata\u001b[39m\u001b[38;5;124m'\u001b[39m],\n\u001b[1;32m 914\u001b[0m )\n\u001b[0;32m--> 916\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_generate_dc_field_schema\u001b[39m(\n\u001b[1;32m 917\u001b[0m \u001b[38;5;28mself\u001b[39m,\n\u001b[1;32m 918\u001b[0m name: \u001b[38;5;28mstr\u001b[39m,\n\u001b[1;32m 919\u001b[0m field_info: FieldInfo,\n\u001b[1;32m 920\u001b[0m decorators: DecoratorInfos,\n\u001b[1;32m 921\u001b[0m ) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m core_schema\u001b[38;5;241m.\u001b[39mDataclassField:\n\u001b[1;32m 922\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"Prepare a DataclassField to represent the parameter/field, of a dataclass.\"\"\"\u001b[39;00m\n\u001b[1;32m 923\u001b[0m common_field \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_common_field_schema(name, field_info, decorators)\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:1114\u001b[0m, in \u001b[0;36m_common_field_schema\u001b[0;34m(self, name, field_info, decorators)\u001b[0m\n\u001b[1;32m 1108\u001b[0m json_schema_extra \u001b[38;5;241m=\u001b[39m field_info\u001b[38;5;241m.\u001b[39mjson_schema_extra\n\u001b[1;32m 1110\u001b[0m metadata \u001b[38;5;241m=\u001b[39m build_metadata_dict(\n\u001b[1;32m 1111\u001b[0m js_annotation_functions\u001b[38;5;241m=\u001b[39m[get_json_schema_update_func(json_schema_updates, json_schema_extra)]\n\u001b[1;32m 1112\u001b[0m )\n\u001b[0;32m-> 1114\u001b[0m alias_generator \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_config_wrapper\u001b[38;5;241m.\u001b[39malias_generator\n\u001b[1;32m 1115\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m alias_generator \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m 1116\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_apply_alias_generator_to_field_info(alias_generator, field_info, name)\n",
"\u001b[0;31mAttributeError\u001b[0m: 'FieldInfo' object has no attribute 'deprecated'"
]
}
],
"source": [
"from uuid import uuid4\n",
"\n",
"from langchain.agents import AgentType, Tool, initialize_agent\n",
"from langchain_community.memory.zep_cloud_memory import ZepCloudMemory\n",
"from langchain_community.retrievers import ZepCloudRetriever\n",
"from langchain_community.utilities import WikipediaAPIWrapper\n",
"from langchain_core.messages import AIMessage, HumanMessage\n",
"from langchain_openai import OpenAI\n",
"\n",
"session_id = str(uuid4()) # This is a unique identifier for the session"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Provide your OpenAI key\n",
"import getpass\n",
"\n",
"openai_key = getpass.getpass()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Provide your Zep API key. See https://help.getzep.com/projects#api-keys\n",
"\n",
"zep_api_key = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialize the Zep Chat Message History Class and initialize the Agent\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"search = WikipediaAPIWrapper()\n",
"tools = [\n",
" Tool(\n",
" name=\"Search\",\n",
" func=search.run,\n",
" description=(\n",
" \"useful for when you need to search online for answers. You should ask\"\n",
" \" targeted questions\"\n",
" ),\n",
" ),\n",
"]\n",
"\n",
"# Set up Zep Chat History\n",
"memory = ZepCloudMemory(\n",
" session_id=session_id,\n",
" api_key=zep_api_key,\n",
" return_messages=True,\n",
" memory_key=\"chat_history\",\n",
")\n",
"\n",
"# Initialize the agent\n",
"llm = OpenAI(temperature=0, openai_api_key=openai_key)\n",
"agent_chain = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,\n",
" verbose=True,\n",
" memory=memory,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Add some history data\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Preload some messages into the memory. The default message window is 12 messages. We want to push beyond this to demonstrate auto-summarization.\n",
"test_history = [\n",
" {\"role\": \"human\", \"content\": \"Who was Octavia Butler?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Estelle Butler (June 22, 1947 February 24, 2006) was an American\"\n",
" \" science fiction author.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"Which books of hers were made into movies?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"The most well-known adaptation of Octavia Butler's work is the FX series\"\n",
" \" Kindred, based on her novel of the same name.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"Who were her contemporaries?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R.\"\n",
" \" Delany, and Joanna Russ.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"What awards did she win?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur\"\n",
" \" Fellowship.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"content\": \"Which other women sci-fi writers might I want to read?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": \"You might want to read Ursula K. Le Guin or Joanna Russ.\",\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"content\": (\n",
" \"Write a short synopsis of Butler's book, Parable of the Sower. What is it\"\n",
" \" about?\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Parable of the Sower is a science fiction novel by Octavia Butler,\"\n",
" \" published in 1993. It follows the story of Lauren Olamina, a young woman\"\n",
" \" living in a dystopian future where society has collapsed due to\"\n",
" \" environmental disasters, poverty, and violence.\"\n",
" ),\n",
" \"metadata\": {\"foo\": \"bar\"},\n",
" },\n",
"]\n",
"\n",
"for msg in test_history:\n",
" memory.chat_memory.add_message(\n",
" (\n",
" HumanMessage(content=msg[\"content\"])\n",
" if msg[\"role\"] == \"human\"\n",
" else AIMessage(content=msg[\"content\"])\n",
" ),\n",
" metadata=msg.get(\"metadata\", {}),\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run the agent\n",
"\n",
"Doing so will automatically add the input and response to the Zep memory.\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T14:34:37.613049Z",
"start_time": "2024-05-10T14:34:35.883359Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"AI: Parable of the Sower is highly relevant to contemporary society as it explores themes of environmental degradation, social and economic inequality, and the struggle for survival in a chaotic world. It also delves into issues of race, gender, and religion, making it a thought-provoking and timely read.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': \"What is the book's relevance to the challenges facing contemporary society?\",\n",
" 'chat_history': [HumanMessage(content=\"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\\nOctavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.\\nUrsula K. Le Guin is known for novels like The Left Hand of Darkness and The Dispossessed.\\nJoanna Russ is the author of the influential feminist science fiction novel The Female Man.\\nMargaret Atwood is known for works like The Handmaid's Tale and the MaddAddam trilogy.\\nConnie Willis is an award-winning author of science fiction and fantasy, known for novels like Doomsday Book.\\nOctavia Butler is a pioneering black female science fiction author, known for Kindred and the Parable series.\\nOctavia Estelle Butler was an acclaimed American science fiction author. While none of her books were directly adapted into movies, her novel Kindred was adapted into a TV series on FX. Butler was part of a generation of prominent science fiction writers in the 20th century, including contemporaries such as Ursula K. Le Guin, Samuel R. Delany, Chip Delany, and Nalo Hopkinson.\\nhuman: What awards did she win?\\nai: Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.\\nhuman: Which other women sci-fi writers might I want to read?\\nai: You might want to read Ursula K. Le Guin or Joanna Russ.\\nhuman: Write a short synopsis of Butler's book, Parable of the Sower. What is it about?\\nai: Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.\")],\n",
" 'output': 'Parable of the Sower is highly relevant to contemporary society as it explores themes of environmental degradation, social and economic inequality, and the struggle for survival in a chaotic world. It also delves into issues of race, gender, and religion, making it a thought-provoking and timely read.'}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_chain.invoke(\n",
" input=\"What is the book's relevance to the challenges facing contemporary society?\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Inspect the Zep memory\n",
"\n",
"Note the summary, and that the history has been enriched with token counts, UUIDs, and timestamps.\n",
"\n",
"Summaries are biased towards the most recent messages.\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T14:35:11.437446Z",
"start_time": "2024-05-10T14:35:10.664076Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Octavia Estelle Butler was an acclaimed American science fiction author. While none of her books were directly adapted into movies, her novel Kindred was adapted into a TV series on FX. Butler was part of a generation of prominent science fiction writers in the 20th century, including contemporaries such as Ursula K. Le Guin, Samuel R. Delany, Chip Delany, and Nalo Hopkinson.\n",
"\n",
"\n",
"Conversation Facts: \n",
"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\n",
"\n",
"Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.\n",
"\n",
"Ursula K. Le Guin is known for novels like The Left Hand of Darkness and The Dispossessed.\n",
"\n",
"Joanna Russ is the author of the influential feminist science fiction novel The Female Man.\n",
"\n",
"Margaret Atwood is known for works like The Handmaid's Tale and the MaddAddam trilogy.\n",
"\n",
"Connie Willis is an award-winning author of science fiction and fantasy, known for novels like Doomsday Book.\n",
"\n",
"Octavia Butler is a pioneering black female science fiction author, known for Kindred and the Parable series.\n",
"\n",
"Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993.\n",
"\n",
"The novel follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.\n",
"\n",
"Parable of the Sower explores themes of environmental degradation, social and economic inequality, and the struggle for survival in a chaotic world.\n",
"\n",
"The novel also delves into issues of race, gender, and religion, making it a thought-provoking and timely read.\n",
"\n",
"human :\n",
" {'content': \"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\\nOctavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.\\nUrsula K. Le Guin is known for novels like The Left Hand of Darkness and The Dispossessed.\\nJoanna Russ is the author of the influential feminist science fiction novel The Female Man.\\nMargaret Atwood is known for works like The Handmaid's Tale and the MaddAddam trilogy.\\nConnie Willis is an award-winning author of science fiction and fantasy, known for novels like Doomsday Book.\\nOctavia Butler is a pioneering black female science fiction author, known for Kindred and the Parable series.\\nParable of the Sower is a science fiction novel by Octavia Butler, published in 1993.\\nThe novel follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.\\nParable of the Sower explores themes of environmental degradation, social and economic inequality, and the struggle for survival in a chaotic world.\\nThe novel also delves into issues of race, gender, and religion, making it a thought-provoking and timely read.\\nOctavia Estelle Butler was an acclaimed American science fiction author. While none of her books were directly adapted into movies, her novel Kindred was adapted into a TV series on FX. Butler was part of a generation of prominent science fiction writers in the 20th century, including contemporaries such as Ursula K. Le Guin, Samuel R. Delany, Chip Delany, and Nalo Hopkinson.\\nhuman: Which other women sci-fi writers might I want to read?\\nai: You might want to read Ursula K. Le Guin or Joanna Russ.\\nhuman: Write a short synopsis of Butler's book, Parable of the Sower. What is it about?\\nai: Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.\\nhuman: What is the book's relevance to the challenges facing contemporary society?\\nai: Parable of the Sower is highly relevant to contemporary society as it explores themes of environmental degradation, social and economic inequality, and the struggle for survival in a chaotic world. It also delves into issues of race, gender, and religion, making it a thought-provoking and timely read.\", 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'human', 'name': None, 'id': None, 'example': False}\n"
]
}
],
"source": [
"def print_messages(messages):\n",
" for m in messages:\n",
" print(m.type, \":\\n\", m.dict())\n",
"\n",
"\n",
"print(memory.chat_memory.zep_summary)\n",
"print(\"\\n\")\n",
"print(\"Conversation Facts: \")\n",
"facts = memory.chat_memory.zep_facts\n",
"for fact in facts:\n",
" print(fact + \"\\n\")\n",
"print_messages(memory.chat_memory.messages)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Vector search over the Zep memory\n",
"\n",
"Zep provides native vector search over historical conversation memory via the `ZepRetriever`.\n",
"\n",
"You can use the `ZepRetriever` with chains that support passing in a Langchain `Retriever` object.\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T14:35:33.023765Z",
"start_time": "2024-05-10T14:35:32.613576Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='Which other women sci-fi writers might I want to read?' created_at='2024-05-10T14:34:16.714292Z' metadata=None role='human' role_type=None token_count=12 updated_at='0001-01-01T00:00:00Z' uuid_='64ca1fae-8db1-4b4f-8a45-9b0e57e88af5' 0.8960460126399994\n"
]
}
],
"source": [
"retriever = ZepCloudRetriever(\n",
" session_id=session_id,\n",
" api_key=zep_api_key,\n",
")\n",
"\n",
"search_results = memory.chat_memory.search(\"who are some famous women sci-fi authors?\")\n",
"for r in search_results:\n",
" if r.score > 0.8: # Only print results with similarity of 0.8 or higher\n",
" print(r.message, r.score)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
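The notebook searches the memory directly; a minimal sketch of the other pattern it mentions, passing the `ZepCloudRetriever` into a chain that accepts a LangChain `Retriever`, might look like this (the prompt text and question are illustrative, not from the notebook):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

# `retriever` and `openai_key` are the objects defined earlier in the notebook.
history_prompt = ChatPromptTemplate.from_template(
    "Answer using only this conversation history:\n{context}\n\nQuestion: {question}"
)

history_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | history_prompt
    | ChatOpenAI(openai_api_key=openai_key)
    | StrOutputParser()
)

history_chain.invoke("Which awards did the author we discussed win?")
```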

View File

@@ -1,3 +1,7 @@
---
keywords: [azure]
---
# Microsoft
All functionality related to `Microsoft Azure` and other `Microsoft` products.
@@ -271,6 +275,26 @@ See a [usage example](/docs/integrations/retrievers/azure_ai_search).
from langchain.retrievers import AzureAISearchRetriever
```
## Tools
### Azure Container Apps dynamic sessions
We need to get the `POOL_MANAGEMENT_ENDPOINT` environment variable from the Azure Container Apps service.
See the instructions [here](https://python.langchain.com/v0.2/docs/integrations/tools/azure_dynamic_sessions/#setup).
We need to install a Python package.
```bash
pip install langchain-azure-dynamic-sessions
```
See a [usage example](/docs/integrations/tools/azure_dynamic_sessions).
```python
from langchain_azure_dynamic_sessions import SessionsPythonREPLTool
```
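A minimal sketch of using the tool once the endpoint is available; the constructor argument shown here is an assumption based on the linked usage example:

```python
import os

from langchain_azure_dynamic_sessions import SessionsPythonREPLTool

# Assumes POOL_MANAGEMENT_ENDPOINT was obtained as described above.
repl = SessionsPythonREPLTool(
    pool_management_endpoint=os.environ["POOL_MANAGEMENT_ENDPOINT"]
)
print(repl.invoke("1 + 1"))  # runs the snippet in a sandboxed session pool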
## Toolkits
### Azure AI Services

View File

@@ -1,3 +1,7 @@
---
keywords: [openai]
---
# OpenAI
All functionality related to OpenAI

View File

@@ -64,7 +64,7 @@ set_llm_cache(AstraDBCache(
))
```
Learn more in the [example notebook](/docs/integrations/llms/llm_caching#astra-db-caches) (scroll to the Astra DB section).
Learn more in the [example notebook](/docs/integrations/llm_caching#astra-db-caches) (scroll to the Astra DB section).
## Semantic LLM Cache
@@ -80,7 +80,7 @@ set_llm_cache(AstraDBSemanticCache(
))
```
Learn more in the [example notebook](/docs/integrations/llms/llm_caching#astra-db-caches) (scroll to the appropriate section).
Learn more in the [example notebook](/docs/integrations/llm_caching#astra-db-caches) (scroll to the appropriate section).
Learn more in the [example notebook](/docs/integrations/memory/astradb_chat_message_history).

View File

@@ -40,7 +40,7 @@ from langchain_community.cache import CassandraCache
set_llm_cache(CassandraCache())
```
Learn more in the [example notebook](/docs/integrations/llms/llm_caching#cassandra-caches) (scroll to the Cassandra section).
Learn more in the [example notebook](/docs/integrations/llm_caching#cassandra-caches) (scroll to the Cassandra section).
## Semantic LLM Cache
@@ -54,7 +54,7 @@ set_llm_cache(CassandraSemanticCache(
))
```
Learn more in the [example notebook](/docs/integrations/llms/llm_caching#cassandra-caches) (scroll to the appropriate section).
Learn more in the [example notebook](/docs/integrations/llm_caching#cassandra-caches) (scroll to the appropriate section).
## Document loader

View File

@@ -205,7 +205,7 @@ For chat models is very useful to define prompt as a set of message templates...
def simulate_conversation(human_input:str, agent_role:str="a pirate"):
"""
## System message
- note the `:system` sufix inside the <prompt:_role_> tag
- note the `:system` suffix inside the <prompt:_role_> tag
```<prompt:system>

View File

@@ -48,6 +48,6 @@ eng = sqlalchemy.create_engine(conn_str)
set_llm_cache(SQLAlchemyCache(engine=eng))
```
From here, see the [LLM Caching](/docs/integrations/llms/llm_caching) documentation on how to use.
From here, see the [LLM Caching](/docs/integrations/llm_caching) documentation on how to use.
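Whatever backend is configured, the cache is consulted transparently on repeated identical prompts. A minimal sketch of that behaviour, using the in-memory cache as a stand-in for the backends above, is:

```python
from langchain_community.cache import InMemoryCache  # stand-in for SQLAlchemyCache etc.
from langchain_core.globals import set_llm_cache
from langchain_openai import OpenAI

set_llm_cache(InMemoryCache())
llm = OpenAI()

llm.invoke("Tell me a joke")  # first call reaches the provider and fills the cache
llm.invoke("Tell me a joke")  # identical prompt is answered from the cache
```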

View File

@@ -1,63 +1,82 @@
# NVIDIA
The `langchain-nvidia-ai-endpoints` package contains LangChain integrations for building applications with models on
the NVIDIA NIM inference microservice. NIM supports models across domains like chat, embedding, and re-ranking,
from the community as well as from NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA
accelerated infrastructure and deployed as a NIM, an easy-to-use, prebuilt container that deploys anywhere using a single
command on NVIDIA accelerated infrastructure.
>NVIDIA provides an integration package for LangChain: `langchain-nvidia-ai-endpoints`.
NVIDIA hosted deployments of NIMs are available to test on the [NVIDIA API catalog](https://build.nvidia.com/). After testing,
NIMs can be exported from NVIDIA's API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud,
giving enterprises ownership and full control of their IP and AI application.
## NVIDIA AI Foundation Endpoints
NIMs are packaged as container images on a per model basis and are distributed as NGC container images through the NVIDIA NGC Catalog.
At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.
> [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to NVIDIA hosted API endpoints for
> NVIDIA AI Foundation Models like `Mixtral 8x7B`, `Llama 2`, `Stable Diffusion`, etc. These models,
> hosted on the [NVIDIA API catalog](https://build.nvidia.com/), are optimized, tested, and hosted on
> the NVIDIA AI platform, making them fast and easy to evaluate, further customize,
> and seamlessly run at peak performance on any accelerated stack.
>
> With [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/), you can get quick results from a fully
> accelerated stack running on [NVIDIA DGX Cloud](https://www.nvidia.com/en-us/data-center/dgx-cloud/). Once customized, these
> models can be deployed anywhere with enterprise-grade security, stability,
> and support using [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/).
Below is an example on how to use some common functionality surrounding text-generative and embedding models.
A selection of NVIDIA AI Foundation models is supported directly in LangChain with familiar APIs.
## Installation
The supported models can be found [in build.nvidia.com](https://build.nvidia.com/).
These models can be accessed via the [`langchain-nvidia-ai-endpoints`](https://pypi.org/project/langchain-nvidia-ai-endpoints/)
package, as shown below.
### Setting up
1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models
2. Click on your model of choice
3. Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.
4. Copy and save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints.
```bash
export NVIDIA_API_KEY=nvapi-XXXXXXXXXXXXXXXXXXXXXXXXXX
```python
pip install -U --quiet langchain-nvidia-ai-endpoints
```
- Install a package:
## Setup
```bash
pip install -U langchain-nvidia-ai-endpoints
**To get started:**
1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.
2. Click on your model of choice.
3. Under Input select the Python tab, and click `Get API Key`. Then click `Generate Key`.
4. Copy and save the generated key as NVIDIA_API_KEY. From there, you should have access to the endpoints.
```python
import getpass
import os
if not os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
nvidia_api_key = getpass.getpass("Enter your NVIDIA API key: ")
assert nvidia_api_key.startswith("nvapi-"), f"{nvidia_api_key[:5]}... is not a valid key"
os.environ["NVIDIA_API_KEY"] = nvidia_api_key
```
### Chat models
See a [usage example](/docs/integrations/chat/nvidia_ai_endpoints).
## Working with NVIDIA API Catalog
```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA
llm = ChatNVIDIA(model="mixtral_8x7b")
llm = ChatNVIDIA(model="mistralai/mixtral-8x22b-instruct-v0.1")
result = llm.invoke("Write a ballad about LangChain.")
print(result.content)
```
### Embedding models
Using the API, you can query live endpoints available on the NVIDIA API Catalog to get quick results from a DGX-hosted cloud compute environment. All models are source-accessible and can be deployed on your own compute cluster using NVIDIA NIM, which is part of NVIDIA AI Enterprise, as shown in the next section, [Working with NVIDIA NIMs](#working-with-nvidia-nims).
See a [usage example](/docs/integrations/text_embedding/nvidia_ai_endpoints).
## Working with NVIDIA NIMs
When ready to deploy, you can self-host models with NVIDIA NIM—which is included with the NVIDIA AI Enterprise software license—and run them anywhere, giving you ownership of your customizations and full control of your intellectual property (IP) and AI applications.
[Learn more about NIMs](https://developer.nvidia.com/blog/nvidia-nim-offers-optimized-inference-microservices-for-deploying-ai-models-at-scale/)
```python
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings
from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings, NVIDIARerank
# connect to a chat NIM running at localhost:8000, specifying a specific model
llm = ChatNVIDIA(base_url="http://localhost:8000/v1", model="meta-llama3-8b-instruct")
# connect to an embedding NIM running at localhost:8080
embedder = NVIDIAEmbeddings(base_url="http://localhost:8080/v1")
# connect to a reranking NIM running at localhost:2016
ranker = NVIDIARerank(base_url="http://localhost:2016/v1")
```
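Once connected, these objects expose the usual LangChain interfaces. A short illustrative sketch (the texts and documents are made up) is:

```python
from langchain_core.documents import Document

# Chat completion against the locally hosted chat NIM
print(llm.invoke("Write a haiku about GPUs.").content)

# Embeddings from the embedding NIM
vectors = embedder.embed_documents(["LangChain", "NVIDIA NIM"])

# Reranking with the reranking NIM
reranked = ranker.compress_documents(
    documents=[
        Document(page_content="NIMs are prebuilt inference containers."),
        Document(page_content="LangChain composes LLM applications."),
    ],
    query="What are NIMs?",
)
```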
## Using NVIDIA AI Foundation Endpoints
A selection of NVIDIA AI Foundation models is supported directly in LangChain with familiar APIs.
The active models which are supported can be found [in API Catalog](https://build.nvidia.com/).
**The following may be useful examples to help you get started:**
- **[`ChatNVIDIA` Model](/docs/integrations/chat/nvidia_ai_endpoints).**
- **[`NVIDIAEmbeddings` Model for RAG Workflows](/docs/integrations/text_embedding/nvidia_ai_endpoints).**

Some files were not shown because too many files have changed in this diff.