Compare commits


157 Commits

Author SHA1 Message Date
isaac hershenson
3a6cf8f893 testing roadblock 2024-06-05 14:53:22 -07:00
isaac hershenson
f59c4fac95 sitemap test draft 2024-06-05 12:39:40 -07:00
Anthony Bernabeu
4e676a63b8 community[minor]: Added filter search for LanceDB (#22461)
- [ ] **community**: "vectorstore: added filtering support for LanceDB vector store"

- [ ] **This PR adds filtering capabilities to LanceDB**:
- **Description:** In LanceDB, filtering can be applied when searching the vector store. Filters use SQL syntax, as described in the LanceDB documentation.
    - **Issue:** #18235 
    - **Dependencies:** No

2024-06-05 09:33:54 -04:00
Erick Friis
4050d6ea2b huggingface: remove text-generation dep (#22543) 2024-06-05 12:13:40 +00:00
Erick Friis
a6fc74f379 ai21: fix core version (#22544) 2024-06-05 08:09:19 -04:00
Asaf Joseph Gardin
75cba742e5 ai21: fix ai21 unittests (#22526)
Co-authored-by: Asaf Gardin <asafg@ai21.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-05 08:00:42 -04:00
Erick Friis
58192d617f community: fix huggingface deprecations (#22522) 2024-06-05 04:13:13 +00:00
Jacob Lee
1e748a6d40 docs[patch]: Adds links to deprecations page (#22514)
@baskaryan
2024-06-04 16:19:32 -07:00
William FH
91fed3ace7 [Docs] Structured output Keywords (#22511) 2024-06-04 20:56:05 +00:00
Christophe Bornet
8ba868d3b0 core[patch]: Add similarity_score_threshold to VectorStore search types (#22477) 2024-06-04 13:43:55 -07:00
Eugene Yurtsev
9120cf5df2 core[patch]: Deduplicate of callback handlers in merge_configs (#22478)
This PR adds deduplication of callback handlers in merge_configs.

Fix for this issue:
https://github.com/langchain-ai/langchain/issues/22227

The issue appears when the code:

1) runs on python >= 3.11
2) invokes a runnable from within a runnable
3) binds the callbacks to the child runnable from the parent runnable using with_config

In this case, the same callbacks end up appearing twice: (1) the first
time from with_config, (2) the second time with langchain automatically
propagating them on behalf of the user.


Prior to this PR this will emit duplicate events:

```python
from langchain_core.callbacks import Callbacks
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool

# `chat_model` is assumed to be any chat model instance defined earlier
@tool
async def get_items(question: str, callbacks: Callbacks):  # <--- Accept callbacks
    """Ask question"""
    template = ChatPromptTemplate.from_messages(
        [
            (
                "human",
                "{question}"
            )
        ]
    )
    chain = template | chat_model.with_config(
        {
            "callbacks": callbacks,  # <-- Propagate callbacks
        }
    )
    return await chain.ainvoke({"question": question})
```

Prior to this PR, this will work correctly (no duplicate events):

```python
@tool
async def get_items(question: str, callbacks: Callbacks):  # <--- Accept callbacks
    """Ask question"""
    template = ChatPromptTemplate.from_messages(
        [
            (
                "human",
                "{question}"
            )
        ]
    )
    chain = template | chat_model
    return await chain.ainvoke({"question": question}, {"callbacks": callbacks})
```

This will also work (as long as the user is using python >= 3.11) -- as
langchain will automatically propagate callbacks

```python
@tool
async def get_items(question: str):
    """Ask question"""
    template = ChatPromptTemplate.from_messages(
        [
            (
                "human",
                "{question}"
            )
        ]
    )
    chain = template | chat_model
    return await chain.ainvoke({"question": question})
```
2024-06-04 16:19:00 -04:00
Jacob Lee
64dbc52cae docs[patch]: Update quickstart tutorial (#22504)
Mentions LCEL more, hopefully flags it to more people as a simple
entrypoint

@baskaryan @hwchase17
2024-06-04 13:04:56 -07:00
Ofer Mendelevitch
ad502e8d50 community[minor]: Vectara Integration Update - Streaming, FCS, Chat, updates to documentation and example notebooks (#21334)
**Description:** update to the Vectara / LangChain integration to support new Vectara capabilities (usage sketched below):
- Full RAG implemented as a Runnable with as_rag()
- Vectara chat supported with as_chat()
- Both support streaming responses
- Updated documentation and example notebooks to reflect all the changes
- Updated Vectara templates

**Twitter handle:** ofermend

**Add tests and docs**: no new tests or docs, but updated both existing
tests and existing docs
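A minimal usage sketch of the interface described above (config fields, environment variable names, and return keys are assumptions, not verified against the release):

```python
from langchain_community.vectorstores import Vectara
from langchain_community.vectorstores.vectara import VectaraQueryConfig

# Credentials assumed to be read from the VECTARA_CUSTOMER_ID,
# VECTARA_CORPUS_ID, and VECTARA_API_KEY environment variables.
vectara = Vectara()
config = VectaraQueryConfig(k=5)

rag = vectara.as_rag(config)    # full RAG pipeline as a Runnable
print(rag.invoke("What is Vectara?")["answer"])

chat = vectara.as_chat(config)  # Vectara chat, also a Runnable
for chunk in chat.stream("Tell me more about Vectara"):
    print(chunk["answer"], end="")  # streaming is supported on both
```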
2024-06-04 12:57:28 -07:00
Bagatur
cb183a9bf1 docs: update anthropic chat model (#22483)
Related to #22296

And update anthropic to accept base_url
2024-06-04 12:42:06 -07:00
Erick Friis
d700ce8545 robocorp: typo (#22509) 2024-06-04 15:33:38 -04:00
Erick Friis
39fd44579a robocorp: release 0.0.9.post1 (#22507) 2024-06-04 15:32:30 -04:00
Erick Friis
339e3b7f55 ai21: release 0.1.6 (#22508) 2024-06-04 15:31:23 -04:00
ccurme
3c53cea760 together, upstage: bump minimum langchain-openai version (#22505) 2024-06-04 15:20:41 -04:00
Erick Friis
c438b5b78e docs: fix api ref link generation (#22438)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-04 12:09:22 -07:00
Bagatur
efcb04f84b mongodb[patch]: Release 0.1.6 (#22501) 2024-06-04 12:01:37 -07:00
Bagatur
222b1ba112 groq[patch]: Release 0.1.5 (#22500) 2024-06-04 12:01:17 -07:00
Bagatur
f021be510e milvus[patch]: Release 0.1.1 (#22499) 2024-06-04 12:00:53 -07:00
Bagatur
64d68c17cd upstage[patch]: Release 0.1.6 (#22498) 2024-06-04 11:58:44 -07:00
Bagatur
48fba40fce experimental[patch]: Release 0.0.60 (#22497) 2024-06-04 11:56:42 -07:00
Bagatur
e60f88ccdd community[patch]: Release 0.2.2 (#22496) 2024-06-04 11:42:11 -07:00
Bagatur
85aa218564 langchain[patch]: Release 0.2.2 (#22495) 2024-06-04 11:33:45 -07:00
Bagatur
8e86080def mistralai[patch]: Release 0.1.8 (#22494) 2024-06-04 11:33:06 -07:00
Bagatur
e850de2422 huggingface[patch]: release 0.0.2 (#22493) 2024-06-04 11:32:36 -07:00
Jacob Lee
593de8a913 docs[patch]: Add robots.txt and root sitemap (#22492)
CC @efriis @baskaryan
2024-06-04 11:26:40 -07:00
Bagatur
99a3cad258 text-splitters[patch]: Release 0.2.1 (#22490) 2024-06-04 11:19:21 -07:00
Bagatur
161b02a8be core[patch]: Release 0.2.4 (#22489) 2024-06-04 11:14:54 -07:00
Ragul Kachiappan
50258a7dda docs: Update chroma docs link for collection reference (#22472)
- [x] **PR message**: 
- **Description:** Updated a dead link referencing the Chroma docs in the Chroma notebook under vectorstores
2024-06-04 18:01:13 +00:00
nareshnagpal06
9b45374118 docs: Added Semantic Cache Example with BedrockChat using Bedrock Embeddings and Opensearch Semantic Cache (#22190)


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-04 17:40:29 +00:00
Joydeep Banik Roy
3796672c67 community, milvus, pinecone, qdrant, mongo: Broadcast operation failure while using simsimd beyond v3.7.7 (#22271)
- [ ] **Packages affected**: 
  - community: fix `cosine_similarity` to support simsimd beyond 3.7.7
- partners/milvus: fix `cosine_similarity` to support simsimd beyond
3.7.7
- partners/mongodb: fix `cosine_similarity` to support simsimd beyond
3.7.7
- partners/pinecone: fix `cosine_similarity` to support simsimd beyond
3.7.7
- partners/qdrant: fix `cosine_similarity` to support simsimd beyond
3.7.7


- [ ] **Broadcast operation failure while using simsimd beyond v3.7.7**:
- **Description:** I was using simsimd 4.3.1 and hit the unsupported operand type issue. When I checked out the repo and ran the tests, they failed as well (screenshot attached). It looks like a variant of https://github.com/langchain-ai/langchain/issues/18022. Prior to 3.7.7, simd.cdist returned an ndarray, but it now returns a simsimd.DistancesTensor, which is ineligible for a broadcast operation with numpy. With this change, there is also no longer a need to explicitly cast `Z` to a numpy array (see the sketch after the screenshot).
    - **Issue:** #19905
    - **Dependencies:** No
    - **Twitter handle:** https://x.com/GetzJoydeep

<img width="1622" alt="Screenshot 2024-05-29 at 2 50 00 PM"
src="https://github.com/langchain-ai/langchain/assets/31132555/fb27b383-a9ae-4a6f-b355-6d503b72db56">

- [ ] **Considerations**: 
1. I started with community, but since similar changes existed in Milvus, MongoDB, Pinecone, and Qdrant, I modified their files as well. If touching multiple packages in one PR is not the norm, I can remove them from this PR and raise separate ones.
2. I have run and verified that the tests work. Since only MongoDB had tests, I ran theirs and verified they pass as well. Screenshots attached:
<img width="1573" alt="Screenshot 2024-05-29 at 2 52 13 PM"
src="https://github.com/langchain-ai/langchain/assets/31132555/ce87d1ea-19b6-4900-9384-61fbc1a30de9">
<img width="1614" alt="Screenshot 2024-05-29 at 3 33 51 PM"
src="https://github.com/langchain-ai/langchain/assets/31132555/6ce1d679-db4c-4291-8453-01028ab2dca5">

I have added a test for simsimd. It may not sit well with the CI/CD setup, as simsimd is not a declared dependency; I have just imported simsimd to ensure the simsimd cosine similarity path is invoked. However, it's not a good approach. Suggestions are welcome, and I can make the required changes on the PR. Please provide guidance, as I am new to the community.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-04 17:36:31 +00:00
KyrianC
03178ee74f community[minor]: Add tools calls to ChatEdenAI (#22320)
### Description
Add tools implementation to `ChatEdenAI`:
- `bind_tools()`
- `with_structured_output()`

### Documentation
Updated `docs/docs/integrations/chat/edenai.ipynb`

### Notes
We don't support streaming with tools as of yet. If stream is called with tools, we directly yield the whole message from `generate` (implemented the same way Anthropic did). A usage sketch follows.
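A minimal usage sketch (tool schema and provider are illustrative assumptions):

```python
from langchain_community.chat_models import ChatEdenAI
from langchain_core.pydantic_v1 import BaseModel, Field

class GetWeather(BaseModel):
    """Get the current weather in a given location."""
    location: str = Field(..., description="City name")

llm = ChatEdenAI(provider="openai")            # EDENAI_API_KEY read from env
llm_with_tools = llm.bind_tools([GetWeather])  # new in this PR
msg = llm_with_tools.invoke("What's the weather like in Paris?")
print(msg.tool_calls)
```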
2024-06-04 10:29:28 -07:00
pranavvuppala
9d4350e69a docs : Update docstrings for OpenAI base.py (#22221)
- [x] **PR title**: Update docstrings for OpenAI base.py
- **Description:** Updated the docstrings of a few OpenAI functions for a better understanding of each function.
    - **Issue:** #21983

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-04 17:24:17 +00:00
Anindyadeep
7a197539aa community[patch]: Native RAG Support in Prem AI langchain (#22238)
This PR adds native RAG support to the langchain premai package. The docs have been updated accordingly.
2024-06-04 10:19:54 -07:00
Rahul Triptahi
77ad857934 community[minor]: Enable retrieval api calls in PebbloRetrievalQA (#21958)
Description: Enable app discovery and Prompt/Response APIs in PebbloSafeRetrieval.
Documentation: N/A
Unit tests: N/A

---------

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-06-04 10:18:50 -07:00
liugz18
8fd231086e experimental[patch]: Fix graph_transformers llms #21482 (#22417)
Fixes the AttributeError raised when calling LLMGraphTransformer.convert_to_graph_documents (#21482), since raw_schema is always a str.

@baskaryan
2024-06-04 17:07:38 +00:00
ccurme
6db25b4e31 core[patch]: bump langsmith (#22476)
We noticed errors logged in some situations when tracing with LangSmith:
```python
from langchain_core.pydantic_v1 import BaseModel
from langchain_anthropic import ChatAnthropic


class AnswerWithJustification(BaseModel):
    """An answer to the user question along with justification for the answer."""
    answer: str
    justification: str


llm = ChatAnthropic(model="claude-3-haiku-20240307")
structured_llm = llm.with_structured_output(AnswerWithJustification)

list(structured_llm.stream("What weighs more a pound of bricks or a pound of feathers"))
```
```
Error in LangChainTracer.on_chain_end callback: AttributeError("'NoneType' object has no attribute 'append'")
[AnswerWithJustification(answer='A pound of bricks and a pound of feathers weigh the same amount.', justification='This is because a pound is a unit of mass, not volume. By definition, a pound of any material, whether bricks or feathers, will weigh the same - one pound. The physical size or volume of the materials does not matter when measuring by mass. So a pound of bricks and a pound of feathers both weigh exactly one pound.')]
```
2024-06-04 10:05:53 -07:00
Bagatur
17c127531a community[patch]: deprecate all HF classes (#22444) 2024-06-04 09:48:25 -07:00
Nuno Campos
58b118544e Use immutable sequence type for batch/batch_as_completed types (#22433)
2024-06-04 08:04:09 -07:00
Christophe Bornet
9a8fe58ebe community[minor]: Improve Cassandra VectorStore as_retriever (#22465)
The VectorStore API `as_retriever` doesn't explicitly expose the parameters `search_type` and `search_kwargs`, so these are not well documented.
This PR improves `as_retriever` for the Cassandra VectorStore by making these parameters explicit (usage sketched below).

NB: An alternative would have been to modify `as_retriever` in `VectorStore`, but there's probably a good reason these were not exposed in the first place: perhaps implementations may decide not to support them and use fixed values when creating the VectorStoreRetriever?
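A minimal sketch of the now-explicit parameters (store setup elided; argument values are illustrative):

```python
# `vector_store` is an existing langchain_community Cassandra vector store.
retriever = vector_store.as_retriever(
    search_type="mmr",                      # now an explicit, documented parameter
    search_kwargs={"k": 4, "fetch_k": 20},  # likewise explicit
)
docs = retriever.invoke("What did the president say?")
```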
2024-06-04 09:51:17 -04:00
Christophe Bornet
23bba18f92 core[patch]: Fix VectorStore's as_retriever mutating tags param (#22470)
The current VectorStore `as_retriever` implementation mutates the `tags` param when it's passed in kwargs.
This fix ensures a copy is made (sketched below).
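A minimal sketch of the pattern being fixed (hypothetical helper, not the repo code):

```python
def build_retriever_tags(caller_tags, default_tags):
    tags = list(caller_tags or [])  # copy instead of mutating the caller's list
    tags.extend(default_tags)
    return tags

user_tags = ["my-run"]
tags = build_retriever_tags(user_tags, ["Chroma", "OpenAIEmbeddings"])
assert user_tags == ["my-run"]      # the caller's list is unchanged after the fix
```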
2024-06-04 09:50:36 -04:00
Michal Gregor
98b2e7b195 huggingface[patch]: Support for HuggingFacePipeline in ChatHuggingFace. (#22194)
- **Description:** Added support for using HuggingFacePipeline in ChatHuggingFace (previously it was only usable with API endpoints, probably by oversight); usage sketched below.
- **Issue:** #19997 
- **Dependencies:** none
- **Twitter handle:** none
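A minimal usage sketch of the new support (model id and generation kwargs are assumptions):

```python
from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="microsoft/Phi-3-mini-4k-instruct",  # any local chat-tuned model
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 64},
)
chat = ChatHuggingFace(llm=llm)  # previously only worked with API endpoints
print(chat.invoke("What is the capital of France?").content)
```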

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-04 00:47:35 +00:00
Fahreddin Özcan
0061ded002 community[patch]: Upstash Vector Store Namespace Support (#22251)
This PR introduces namespace support for the Upstash Vector Store, which allows users to partition their data in the vector index (usage sketched below).
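A minimal usage sketch (the `namespace` parameter name is taken from the PR description; other arguments are assumptions):

```python
from langchain_community.vectorstores.upstash import UpstashVectorStore
from langchain_openai import OpenAIEmbeddings

# Index credentials assumed to come from UPSTASH_VECTOR_REST_URL and
# UPSTASH_VECTOR_REST_TOKEN environment variables.
store = UpstashVectorStore(
    embedding=OpenAIEmbeddings(),
    namespace="customer-42",  # partitions the data within the vector index
)
store.add_texts(["hello world"])
print(store.similarity_search("hello", k=1))
```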

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-03 17:30:56 -07:00
Isaac Francisco
25cf1a74d5 docs: rag tutorial small fixes (#22450) 2024-06-04 00:16:54 +00:00
Jacob Lee
b0f014666d docs[patch]: Adds search keywords for common queries (#22449)
CC @baskaryan @efriis @ccurme
2024-06-03 16:30:17 -07:00
Guangdong Liu
bc7e32f315 core(patch):fix partial_variables not working with SystemMessagePromptTemplate (#20711)
- **Issue:**  close #17560
- @baskaryan, @eyurtsev
2024-06-03 16:22:42 -07:00
Martin Kolb
f2dd31b9e8 docs: Fix doc issue for HANA Cloud Vector Engine (#22260)
- **Description:**
This PR fixes a rendering issue in the docs (Python notebook) of HANA
Cloud Vector Engine.

  - **Issue:** N/A
  - **Dependencies:** no new dependencies added

File of the fixed notebook:
`docs/docs/integrations/vectorstores/hanavector.ipynb`
2024-06-03 15:53:43 -07:00
Dristy Srivastava
ef3df45d9d community[minor]: Updating payload for pebblo discover API (#22309)
**Description:** Updating the response for the pebblo discover API. Also updating the field name case type.
**Documentation:** N/A
**Unit tests:** N/A
2024-06-03 15:36:17 -07:00
Miroslav
cbd5720011 huggingface[patch]: Skip Login to HuggingFaceHub when token is not set (#22365) 2024-06-03 15:20:32 -07:00
Stefano Lottini
f78ae1d932 docs: Astra DB vectorstore, add automatic-embedding example (#22350)
Description: Adding an example showcasing the newly-introduced API-side
embedding computation option for the Astra DB vector store
2024-06-03 15:13:57 -07:00
bhardwaj-vipul
f397a84a59 langchain[patch]: Fix MongoDBAtlasVectorSearch reference in self query retriever (#22401)
**Description:** 
SelfQuery Retriever with MongoDBAtlasVectorSearch (from langchain_mongodb import MongoDBAtlasVectorSearch) and Chroma (from langchain_chroma import Chroma) is not supported.
The imports in the [builtin translators](8cbce684d4/libs/langchain/langchain/retrievers/self_query/base.py (L73)) point to the [deprecated](acaf214a45/libs/community/langchain_community/vectorstores/mongodb_atlas.py (L36)) vectorstore.

**Issue:** 
#22272

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-03 22:10:15 +00:00
ccurme
afe89a1411 community: add standard chat model params to Ollama (#22446) 2024-06-03 17:45:03 -04:00
Isaac Francisco
5119ab2fb9 docs: agents tutorial wording (#22447) 2024-06-03 14:40:01 -07:00
Ethan Yang
52da6a160d community[patch]: Update OpenVINO embedding and reranker to support static input shape (#22171)
It can help to deploy embedding models on NPU devices
2024-06-03 13:27:17 -07:00
Tom Clelford
c599732e1a text-splitters[patch]: fix HTMLSectionSplitter parsing of xslt paths (#22176)
## Description
This PR allows passing paths to XSLT files to the HTMLSectionSplitter (usage sketched below). It does so by fixing two trivial bugs in how passed paths were handled. It also changes the default value of the `xslt_path` param to `None`, so the special case where the file is part of the langchain package can be handled.

## Issue
#22175
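A minimal usage sketch after the fix (the file path is illustrative):

```python
from langchain_text_splitters import HTMLSectionSplitter

splitter = HTMLSectionSplitter(
    headers_to_split_on=[("h1", "Header 1"), ("h2", "Header 2")],
    xslt_path="my_transform.xslt",  # defaults to None -> the bundled file is used
)
docs = splitter.split_text("<html><body><h1>Intro</h1><p>Hello</p></body></html>")
```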
2024-06-03 20:26:59 +00:00
maang-h
01352bb55f community[minor]: Implement MiniMaxChat interface (#22391)
- **Description:** Implement the MiniMaxChat interface, including:
    - No longer inherits from the LLM class (like other chat models)
    - Updated request parameters (v1 -> v2)
        - updated `base url`
        - updated message roles (system, user, assistant)
        - added a `stream` function
        - no longer uses `group id`
    - Implemented the `_stream`, `_agenerate`, and `_astream` interfaces

[minimax v2 api
document](https://platform.minimaxi.com/document/guides/chat-model/V2?id=65e0736ab2845de20908e2dd)
2024-06-03 13:22:38 -07:00
Brandon Sharp
56e5aa4dd9 community[patch]: Airtable to allow for addtl params (#22092)
- [X] **PR title**: "community: added optional params to Airtable table.all()"

- [X] **PR message**: 
- **Description:** Adds **kwargs to AirtableLoader, forwarded to pyairtable's Table.all() (https://pyairtable.readthedocs.io/en/latest/api.html#pyairtable.Table.all); see the sketch below
    - **Issue:** N/A
    - **Dependencies:** N/A
    - **Twitter handle:** parakoopa88
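A minimal usage sketch (IDs are placeholders; the extra kwargs are forwarded to pyairtable's Table.all()):

```python
from langchain_community.document_loaders import AirtableLoader

loader = AirtableLoader(
    api_token="pat-...",
    table_id="tbl...",
    base_id="app...",
    view="Grid view",  # forwarded to Table.all()
    max_records=50,    # forwarded to Table.all()
)
docs = loader.load()
```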



---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-03 13:05:56 -07:00
Harichandan Roy
1f751343e2 community[patch]: update embeddings/oracleai.py (#22240)
"community/embeddings: update oracleai.py"

Adding Oracle VECTOR_ARRAY_T support.

Tests are not impacted; format, lint, and tests have been run.
2024-06-03 12:38:51 -07:00
maang-h
13140dc4ff community[patch]: Update the default api_url and request_body of sparkllm embedding (#22136)
- **Description:** When running SparkLLMTextEmbeddings with correct app_id, api_key, and api_secret, it cannot run normally against the current URL.

    ```python
    # example
    from langchain_community.embeddings import SparkLLMTextEmbeddings

    embedding = SparkLLMTextEmbeddings(
        spark_app_id="my-app-id",
        spark_api_key="my-api-key",
        spark_api_secret="my-api-secret"
    )
    text = "hello"
    print(embedding.embed_query(text))
    ```

![sparkembedding](https://github.com/langchain-ai/langchain/assets/55082429/11daa853-4f67-45b2-aae2-c95caa14e38c)

So I updated the URL and request body parameters according to [Embedding_api](https://www.xfyun.cn/doc/spark/Embedding_api.html); it now runs correctly.
2024-06-03 12:38:11 -07:00
Yuwen Hu
ba0dca46d7 community[minor]: Add IPEX-LLM BGE embedding support on both Intel CPU and GPU (#22226)
**Description:** [IPEX-LLM](https://github.com/intel-analytics/ipex-llm) is a PyTorch library for running LLMs on Intel CPU and GPU (e.g., a local PC with an iGPU, or a discrete GPU such as Arc, Flex and Max) with very low latency. This PR adds ipex-llm integrations to langchain for BGE embedding support on both Intel CPU and GPU.
**Dependencies:** `ipex-llm`, `sentence-transformers`
**Contribution maintainer**: @Oscilloscope98 
**tests and docs**: 
- langchain/docs/docs/integrations/text_embedding/ipex_llm.ipynb
- langchain/docs/docs/integrations/text_embedding/ipex_llm_gpu.ipynb
-
langchain/libs/community/tests/integration_tests/embeddings/test_ipex_llm.py

---------

Co-authored-by: Shengsheng Huang <shannie.huang@gmail.com>
2024-06-03 12:37:10 -07:00
Jacob Lee
c01467b1f4 core[patch]: RFC: Allow concatenation of messages with multi part content (#22002)
Anthropic's streaming treats tool calls as different content parts (streamed back with a different index) from normal content in `content`.

This means that we need to update our chunk-merging logic to handle chunks with multi-part content. The alternative is coercing Anthropic's responses into a string, but we generally like to preserve model provider responses faithfully when we can. This will also likely be useful for multimodal outputs in the future.

This PR does unfortunately make `index` a magic field within content parts, but Anthropic and OpenAI both use it at the moment to determine order anyway. To avoid cases where we have content arrays with holes and to simplify the logic, I've also restricted merging to chunks that arrive in order.

TODO: tests

CC @baskaryan @ccurme @efriis
2024-06-03 09:46:40 -07:00
Dan
86509161b0 community: fix AzureSearch delete documents (#22315)
**Description**

Fix the AzureSearch delete documents method by using the FIELDS_ID variable instead of the hard-coded "id" value.

**Issue:** 

This is linked to this issue:
https://github.com/langchain-ai/langchain/issues/22314

Co-authored-by: dseban <dan.seban@neoxia.com>
2024-06-03 15:55:06 +00:00
Harrison Chase
8fad2e209a fix error message (#22437)
The error message was confusing when a language is in the Enum but not implemented.
2024-06-03 15:48:26 +00:00
Bagatur
678a19a5f7 infra: bump anthropic mypy 1 (#22373) 2024-06-03 08:21:55 -07:00
Nuno Campos
ceb73ad06f core: In BaseRetriever make get_relevant_docs delegate to invoke (#22434)
- This fixes all the tracing issues for people still using get_relevant_documents, and is a change we need for 0.3 anyway

2024-06-03 07:34:53 -07:00
Zheng Robert Jia
1ad1dc5303 docs: resolve minor syntax error. (#22375)
Used the correct magic command. 
Changed from `% pip...` to `%pip`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-03 14:34:24 +00:00
Charles John
2d81a72884 community: fix missing apify_api_token field in ApifyWrapper (#22421)
- **Description:** The `ApifyWrapper` class expects `apify_api_token` to be passed as a named parameter or set as an environment variable. But the corresponding field was missing in the class definition, causing the argument to be ignored when passed as a named param. This patch fixes that.
2024-06-03 14:32:57 +00:00
Klaudia Lemiec
dac355fc62 docs: notebook loader: change .html to .ipynb (#22407)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-03 14:26:28 +00:00
Joan Fontanals
a7ae16f912 add embed_image API to JinaEmbedding (#22416)
- **Description:** Add `embed_image` to JinaEmbedding to embed images
 - **Twitter handle:** https://x.com/JinaAI_
2024-06-03 10:23:37 -04:00
Qingchuan Hao
3e92ed8056 docs: add Microsoft Azure to ChatModelTabs (#22367)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-03 10:19:00 -04:00
Nuno Campos
ed8e9c437a core: In RunnableSequence pass kwargs to the first step (#22393)
- This is a pattern that shows up occasionally in langgraph questions: people chain a graph to something else afterwards and want to pass the graph some kwargs (e.g. stream_mode)
2024-06-03 14:18:10 +00:00
Jeffrey Morgan
eabcfaa3d6 Update Ollama instructions (#22394) 2024-06-03 10:17:35 -04:00
Harrison Chase
acaf214a45 update agent docs (#22370)
to use create_react_agent

---------

Co-authored-by: William Fu-Hinthorn <13333726+hinthornw@users.noreply.github.com>
2024-06-01 08:28:32 -07:00
Jacob Lee
16cce76a68 👥 Update LangChain people data (#22388)

Co-authored-by: github-actions <github-actions@github.com>
2024-06-01 07:36:45 -07:00
Jacob Lee
8a57102918 docs[patch]: Fix typo (#22377) 2024-05-31 16:37:05 -07:00
Bagatur
4d82cea71f docs: fix llm caches redirect (#22371) 2024-05-31 19:37:06 +00:00
Bagatur
a8098f5ddb anthropic[patch]: Release 0.1.15, fix sdk tools break (#22369) 2024-05-31 12:10:22 -07:00
Erick Friis
6ffa0acf32 ai21: fix text-splitters version (#22366) 2024-05-31 11:41:05 -04:00
Erick Friis
1bad0ac946 docs: redirect integration links to 0.2 (#22326) 2024-05-31 11:40:48 -04:00
ccurme
8cbce684d4 docs: update retriever how-to content (#22362)
- [x] How to: use a vector store to retrieve data
- [ ] How to: generate multiple queries to retrieve data for
- [x] How to: use contextual compression to compress the data retrieved
- [x] How to: write a custom retriever class
- [x] How to: add similarity scores to retriever results
^ done last month
- [x] How to: combine the results from multiple retrievers
- [x] How to: reorder retrieved results to mitigate the "lost in the
middle" effect
- [x] How to: generate multiple embeddings per document
^ this PR
- [ ] How to: retrieve the whole document for a chunk
- [ ] How to: generate metadata filters
- [ ] How to: create a time-weighted retriever
- [ ] How to: use hybrid vector and keyword retrieval
^ todo
2024-05-31 10:57:35 -04:00
Jacob Lee
75ed9ee929 docs: Fix Solar and OCI integration page typos (#22343)
@efriis @baskaryan
2024-05-31 10:36:12 -04:00
Bagatur
0214246dc6 docs: list tool calling models (#22334) 2024-05-30 14:32:33 -07:00
Bagatur
410e9add44 infra: run scheduled tests on aws, google, cohere, nvidia (#22328)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-30 13:57:12 -07:00
Harrison Chase
0c9a034ed7 add simpler agent tutorial (#22249)
1/ added section at start with full code
2/ removed retriever tool (was just distracting)
3/ added section on starting a new conversation

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-30 12:33:32 -07:00
Bagatur
2b9f1469d8 core[patch]: Release 0.2.3 (#22329) 2024-05-30 11:35:09 -07:00
Harrison Chase
ee32369265 core[patch]: fix runnable history and add docs (#22283) 2024-05-30 11:26:41 -07:00
William FH
dcec133b85 [Core] Update Tracing Interops (#22318)
LangSmith and LangChain context var handling evolved in parallel, since originally we didn't expect people to want to interweave the decorator and langchain code.

Once we get a new langsmith release, this PR will let you seamlessly hand off between @traceable context and runnable config context so you can arbitrarily nest code.

It's expected that this fails right now until we get another release of the SDK.
2024-05-30 10:34:49 -07:00
ccurme
f34337447f openai: update ChatOpenAI api ref (#22324)
Update to reflect that token usage is no longer included by default in streaming mode.

Adds detail for the streaming context under the Token Usage section.
2024-05-30 12:31:28 -04:00
ChengZi
2443e85533 docs: fix milvus import and update template (#22306)
docs: fix milvus import problem
update milvus-rag template with milvus-lite

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
2024-05-30 08:28:55 -07:00
WU LIFU
86698b02a9 doc: fix wrong documentation on FAISS load_local function (#22310)
### Issue: #22299

### Description
The documentation appears to be wrong: when the user actually sets the "asynchronous" parameter to True, it fails, because the __init__ function of the FAISS class doesn't accept this parameter. In fact, most of the class/instance functions of this class have both sync and async versions, so it looks like all we need is to remove this parameter from the doc.

Co-authored-by: Lifu Wu <lifu@nextbillion.ai>
2024-05-30 15:15:04 +00:00
maang-h
596c062cba community[patch]: Standardize qianfan model init args name (#22322)
- **Description:**  
    - Standardize the qianfan chat model initialization argument names (sketched below)
        - qianfan_ak (qianfan api key) -> api_key
        - qianfan_sk (qianfan secret key) -> secret_key

    - Deleted an unused variable
- **Issue:** #20085
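A minimal sketch of the renamed arguments (values are placeholders):

```python
from langchain_community.chat_models import QianfanChatEndpoint

# Previously: QianfanChatEndpoint(qianfan_ak="...", qianfan_sk="...")
chat = QianfanChatEndpoint(api_key="...", secret_key="...")
```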
2024-05-30 11:08:32 -04:00
KhoPhi
c64b0a3095 Docs: Ollama (LLM, Chat Model & Text Embedding) (#22321)
- [x] Docs Update: Ollama
  - llm/ollama
    - Switched to using llama3 as the model, with reference to templating and prompting
    - Added concurrency notes to llm/ollama docs
  - chat_models/ollama
    - Added concurrency notes to chat_models/ollama docs
  - text_embedding/ollama
    - Included an example for specific embedding models from Ollama
2024-05-30 11:06:45 -04:00
Dobiichi-Origami
10b12e1c08 community: adding tool_call_id for every ToolCall (#22323)
- **Description:** This PR fixes a bug that caused multi-turn conversation to malfunction in QianfanChatEndpoint, and adapts it for ToolCall and ToolMessage
2024-05-30 10:59:08 -04:00
Bagatur
569d325a59 docs: link GH org (#22308) 2024-05-30 00:17:59 -07:00
Bagatur
93049d1563 docs: make llm cache its own section (#22301) 2024-05-30 00:17:33 -07:00
Bagatur
04631439c9 docs: add v0.2 links to README (#22300) 2024-05-29 16:22:01 -07:00
ccurme
f39e1a2288 community, docs: update token usage tracking callback + how-to guides (#22145) 2024-05-29 17:00:47 -04:00
Bagatur
2bc50fb895 docs, cli[patch]: chat model template nit (#22294) 2024-05-29 20:53:58 +00:00
Bagatur
aa6c31df53 cli[patch]: Release 0.0.24 (#22293) 2024-05-29 13:37:34 -07:00
Bagatur
627a337887 docs, cli[patch]: chat model doc template (#22290)
Update the ChatModel integration doc template and integration docstring, and add a langchain-cli command to easily create just the doc (for updating existing integrations):

```bash
langchain-cli integration create-doc --name "foo-bar"
```
2024-05-29 13:34:58 -07:00
Wu Enze
f40e341a03 docs : Added integrations for memory with langchain_community (#22265)
PR title: Integration Docs enhancement

Description: Adding installation instructions for integrations requiring the langchain-community package since 0.2
Issue: [#22005](https://github.com/langchain-ai/langchain/issues/22005)
2024-05-29 16:12:05 -04:00
ccurme
6e1df72a88 openai[patch]: Release 0.1.8 (#22291) 2024-05-29 20:08:30 +00:00
ccurme
e71b0b5827 core[patch]: Release 0.2.2 (#22289) 2024-05-29 19:51:37 +00:00
William FH
9d6cabe84a Update sequence.ipynb (#22288) 2024-05-29 19:34:44 +00:00
Daniel Glogowski
7ff05357ba docs: updating NIM documentation (#22258)
Updating NVIDIA NIM notebooks and readme file.

Thanks!
Daniel
2024-05-29 10:28:39 -07:00
Bagatur
6dd0f095c3 docs: revamp ChatOpenAI (#22253)
Can build API ref docs by running
```bash
make api_docs_clean; make api_docs_quick_preview API_PKG=openai
```
only builds openai ref, takes ~20 sec
2024-05-29 10:20:14 -07:00
Erick Friis
00c70d98c2 robocorp: release 0.0.9 (#22282) 2024-05-29 16:49:18 +00:00
Mikko Korpela
fc5909ad6f langchain-robocorp: Fix parsing of Union types (such as Optional). (#22277) 2024-05-29 09:47:02 -07:00
ccurme
af1f723ada openai: don't override stream_options default (#22242)
ChatOpenAI supports a kwarg `stream_options` which can take values
`{"include_usage": True}` and `{"include_usage": False}`.

Setting include_usage to True adds a message chunk to the end of the
stream with usage_metadata populated. In this case the final chunk no
longer includes `"finish_reason"` in the `response_metadata`. This is
the current default and is not yet released. Because this could be
disruptive to workflows, here we remove this default. The default will
now be consistent with OpenAI's API (see parameter
[here](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stream_options)).

Examples:
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()

for chunk in llm.stream("hi"):
    print(chunk)
```
```
content='' id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
content='Hello' id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
content='!' id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
content='' response_metadata={'finish_reason': 'stop'} id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
```

```python
for chunk in llm.stream("hi", stream_options={"include_usage": True}):
    print(chunk)
```
```
content='' id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='Hello' id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='!' id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='' response_metadata={'finish_reason': 'stop'} id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='' id='run-39ab349b-f954-464d-af6e-72a0927daa27' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}
```

```python
llm = ChatOpenAI().bind(stream_options={"include_usage": True})

for chunk in llm.stream("hi"):
    print(chunk)
```
```
content='' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='Hello' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='!' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='' response_metadata={'finish_reason': 'stop'} id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}
```
2024-05-29 10:30:40 -04:00
Karim Lalani
a1899439fc [experimental][llms][ollama_functions] Update OllamaFunctions to send tool_calls attribute (#21625)
Update OllamaFunctions to return `tool_calls` for AIMessages when used
for tool calling.
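A minimal usage sketch (model name and tool schema are illustrative assumptions):

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_experimental.llms.ollama_functions import OllamaFunctions

class GetWeather(BaseModel):
    """Get the current weather in a given location."""
    location: str = Field(..., description="City name")

llm = OllamaFunctions(model="llama3", format="json")
llm_with_tools = llm.bind_tools([GetWeather])
msg = llm_with_tools.invoke("What is the weather in Singapore?")
print(msg.tool_calls)  # now populated on the returned AIMessage
```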
2024-05-29 09:38:33 -04:00
Bagatur
d61bdeba25 core[patch]: allow access RunnableWithFallbacks.runnable attrs (#22139)
RFC, candidate fix for #13095 #22134
2024-05-28 13:18:09 -07:00
SteveLiao
7496fe2b16 Update parent_document_retriever.py about **kwargs (#22219)
**langchain**: Added **kwargs for `add_documents` in `parent_document_retriever.py`, so extra arguments are passed through when adding documents.


2024-05-28 11:35:38 -07:00
Mark Cusack
8dfa3c5f1a Update/fix docs to list Yellowbrick as a supported indexed vectorstore (#22235)
Update/fix docs to list Yellowbrick as a supported indexed vectorstore
and fix the Jupyter notebook.
2024-05-28 11:34:49 -07:00
Erick Friis
93240fac68 milvus: fix core dep (#22239) 2024-05-28 10:21:37 -07:00
Erick Friis
611faa22c7 infra: allow first releases 2 (#22237) 2024-05-28 09:53:21 -07:00
Erick Friis
26c6e4a5ef infra: allow first releases (#22236) 2024-05-28 09:39:40 -07:00
ChengZi
404d92ded0 milvus: New langchain_milvus package and new milvus features (#21077)
New features:

- New langchain_milvus package in partner
- Milvus collection hybrid search retriever
- Zilliz cloud pipeline retriever
- Milvus Local guide
- rag-milvus template

---------

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
Signed-off-by: Jael Gu <mengjia.gu@zilliz.com>
Co-authored-by: Jael Gu <mengjia.gu@zilliz.com>
Co-authored-by: Jackson <jacksonxie612@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-05-28 08:24:20 -07:00
Leonid Ganeline
d7f70535ba docs: arxiv page, added cookbooks (#22215)
Issue: The `arXiv` page is missing the arxiv paper references from the `langchain/cookbook`.
PR: Added the cookbook references.
Result: `Found 29 arXiv references in the 3 docs, 21 API Refs, 5 Templates, and 18 Cookbooks.` - many more references are now visible.
2024-05-27 15:47:02 -07:00
Leonid Ganeline
d6995e814b ai21[patch]: added license (#22153)
The `pyproject.toml` was missing the `license` parameter; I've added it as `MIT`.
2024-05-27 15:14:14 -07:00
Maddy Adams
8332a36f69 infra: update langchainhub and add integration test (#22154)
**Description:** Update the langchainhub integration test dependency and add an integration test for pulling a private prompt
**Dependencies:** langchainhub 0.1.16
2024-05-27 14:58:10 -07:00
Will Higgins
83d10df78d community[patch]: Update firecrawl api key name (#22183)
Changed 'FIREWALL' to 'FIRECRAWL', as I believe this was in error. Other docs refer to 'FIRECRAWL_API_KEY'.


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-27 21:39:29 +00:00
hmasdev
bbd7015b5d core[patch]: Add TypeError handler into get_graph of Runnable (#19856)
# Description

## Problem

`Runnable.get_graph` fails when `InputType` or `OutputType` property
raises `TypeError`.

-
003c98e5b4/libs/core/langchain_core/runnables/base.py (L250-L274)
-
003c98e5b4/libs/core/langchain_core/runnables/base.py (L394-L396)

This problem prevents getting a graph of `Runnable` objects whose `InputType` or `OutputType` property raises `TypeError` but whose `invoke` works well, such as `langchain.output_parsers.RegexParser`, for which I have already pointed out in #19792 that a `TypeError` occurs.

## Solution

- Add `try-except` handling of `TypeError` to the code which gets `input_node` and `output_node`.

# Issue
- #19801 

# Twitter Handle
- [hmdev3](https://twitter.com/hmdev3)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-27 21:34:34 +00:00
acho98
753353411f docs: Fix Clova embeddings example document (#22181)
- [ ] **PR title**: "Fix list handling in Clova embeddings example documentation"
  - Description:
Fixes a bug in the Clova Embeddings example documentation where document_text was incorrectly wrapped in an additional list.
  - Rationale:
The embed_documents method expects a list, but the previous example wrapped document_text in an unnecessary additional list, causing an error. The updated example correctly passes document_text directly to the method, ensuring it functions as intended (see the sketch below).
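A minimal sketch of the corrected call (class name and credential handling assumed from the docs page being fixed):

```python
from langchain_community.embeddings import ClovaEmbeddings

embeddings = ClovaEmbeddings()  # credentials assumed to come from environment variables
document_text = ["This is a test document."]
document_result = embeddings.embed_documents(document_text)  # not [document_text]
```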
2024-05-27 14:31:34 -07:00
Mohammad Mohtashim
577ed68b59 mistralai[patch]: Added Json Mode for ChatMistralAI (#22213)
- **Description:** Powered [ChatMistralAI.with_structured_output](fbfed65fb1/libs/partners/mistralai/langchain_mistralai/chat_models.py (L609)) via JSON mode (usage sketched below)

- **Issue:** #22081
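A minimal usage sketch (model name is illustrative; `method="json_mode"` per this PR):

```python
from langchain_core.pydantic_v1 import BaseModel
from langchain_mistralai import ChatMistralAI

class AnswerWithJustification(BaseModel):
    answer: str
    justification: str

llm = ChatMistralAI(model="mistral-large-latest")
structured_llm = llm.with_structured_output(AnswerWithJustification, method="json_mode")
print(structured_llm.invoke("What weighs more, a pound of bricks or a pound of feathers?"))
```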

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-27 21:16:52 +00:00
Pranith
25c270b5a5 docs : Added integrations for tools with langchain_community (#22188)
PR title: Docs enhancement

Description: Adding installation instructions for integrations requiring the langchain-community package since 0.2
Issue: https://github.com/langchain-ai/langchain/issues/22005
2024-05-27 14:06:40 -07:00
Ibrahim
cfea0e231a Update llm_chain.ipynb text (#22198)
Added the missing verb "is" and a comma to the text in the Prompt
Templates description within the Build a Simple LLM Application tutorial
for more clarity.
2024-05-27 19:57:41 +00:00
Aditya
bf81ecd3b4 docs:updated documentation for llama, falcon and gemma on Vertex AI Model garden (#22201)
- **Description:** updated documentation for llama, falcon and gemma on Vertex AI Model Garden
    - **Issue:** NA
    - **Dependencies:** NA
    - **Twitter handle:** NA

@lkuligin for review

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
2024-05-27 12:56:11 -07:00
Pavlo Paliychuk
342df7cf83 community[minor]: Add Zep Cloud components + docs + examples (#21671)
- [x] **PR title**: community: Add Zep Cloud components + docs +
examples

- [x] **PR message**: 
We have recently released our new zep-cloud SDKs that are compatible with Zep Cloud (not Zep Open Source). We have also maintained our Cloud version of langchain components (ChatMessageHistory, VectorStore) as part of our SDKs. This PR's goal is to port these components to the langchain community repo and close the gap with the existing Zep Open Source components already present in the community repo (added ZepCloudMemory, ZepCloudVectorStore, ZepCloudRetriever).
Also added a ZepCloudChatMessageHistory component, together with an expression language example ported from our repo. We have left the original open source components intact on purpose, so as not to introduce any breaking changes.
    - **Issue:** -
- **Dependencies:** Added an optional dependency on our new cloud SDK `zep-cloud`
    - **Twitter handle:** @paulpaliychuk51


- [x] **Add tests and docs**


- [x] **Lint and test**
2024-05-27 12:50:13 -07:00
Jan Soubusta
cccc8fbe2f community[patch]: DuckDB VS - expose similarity, improve performance of from_texts (#20971)
3 fixes for the DuckDB vector store:
- unify defaults in the constructor and from_texts (users no longer have to specify `vector_key`).
- include search similarity in output metadata (fixes #20969)
- significantly improve the performance of `from_documents`

Dependencies: added Pandas to speed up `from_documents`.
I was thinking about CSV and JSON options, but I expect trouble loading JSON values this way, and both the CSV and JSON options require storing data to disk.
In any case, the poetry file for langchain-community already contains a dependency on Pandas.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-05-24 15:17:52 -07:00
Surya Pratap Singh Shekhawat
42207f5bef Update agent_executor.ipynb (#22104)
fixed typos in the doc.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-05-24 22:14:41 +00:00
Erick Friis
8acadc34f5 docs: edit links, direct for notebooks (#22051) 2024-05-24 19:44:46 +00:00
Erick Friis
42ffcb2ff1 anthropic: release 0.1.14rc2, test release note gen (#22147) 2024-05-24 12:40:10 -07:00
Erick Friis
6ee8de62c0 infra: auto-generated release notes based on git log (#22141)
Generates release notes based on a `git log` command, using commit titles.

Aiming to improve this by splitting out features vs. bugfixes using conventional commits in the coming weeks.

Will work for any of the monorepo packages.
2024-05-24 11:43:28 -07:00
Ameya Shenoy
8ba492ed6a community[minor]: clickhouse -- ability to use secure connection (#22108)
- **Description:** this PR gives the clickhouse client the ability to use a secure connection to the clickhouse server
- **Issue:** fixes #22082
- **Dependencies:** -
- **Twitter handle:** `_codingcoffee_`

Signed-off-by: Ameya Shenoy <shenoy.ameya@gmail.com>
Co-authored-by: Shresth Rana <shresth@grapevine.in>
2024-05-24 17:30:22 +00:00
ccurme
9a010fb761 openai: read stream_options (#21548)
OpenAI recently added a `stream_options` parameter to its chat
completions API (see [release
notes](https://platform.openai.com/docs/changelog/added-chat-completions-stream-usage)).
When this parameter is set to `{"include_usage": True}`, an extra "empty" message is added to the end of a stream containing token usage. Here we propagate token usage to `AIMessage.usage_metadata`.

We enable this feature by default. Streams would now include an extra
chunk at the end, **after** the chunk with
`response_metadata={'finish_reason': 'stop'}`.

New behavior:
```
[AIMessageChunk(content='', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='Hello', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='!', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='', response_metadata={'finish_reason': 'stop'}, id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde', usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17})]
```

Old behavior (accessible by passing `stream_options={"include_usage":
False}` into (a)stream:
```
[AIMessageChunk(content='', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='Hello', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='!', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='', response_metadata={'finish_reason': 'stop'}, id='run-1312b971-c5ea-4d92-9015-e6604535f339')]
```

From what I can tell this is not yet implemented in Azure, so we enable
only for ChatOpenAI.
2024-05-24 13:20:56 -04:00
Patrick Zhang
eb7c767e5b docs: update the name of the tool passio_nutrition_ai (#22116)
Updating the name of the Passio Nutrition AI tool so that the name of the tool is correctly displayed in the sidebar menu.

Currently the name of the tool says "Quickstart" in the sidebar.
The patch fixes the name to be Passio Nutrition AI.

<img width="681" alt="image"
src="https://github.com/langchain-ai/langchain/assets/4603110/9609975e-78ea-4032-9024-10c4f838170a">
2024-05-24 17:15:16 +00:00
Leonid Ganeline
fd4ee08167 docs: integrations/platforms/microsoft update (#22100)
Added the `Azure Container Apps dynamic sessions` tool reference
2024-05-24 13:14:51 -04:00
Rahul Triptahi
1a485f59b9 community[patch]: Put authorized identities behind a feature flag in SharepointLoader (#22125)
Description: Put authorized identities behind a feature flag, load_auth.
Documentation: N/A
Unit tests: N/A

---------

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-05-24 12:42:57 -04:00
Anindyadeep
ee689412ab docs: Update PremAI Docs (#22114)
- [X] **PR title**: community: Updated langchain-community PremAI documentation

- [X] **Lint and test**
2024-05-24 11:55:32 -04:00
sasha
1c9ceff503 community: add metadata to chain logging; (#22122)
Hey, I'm Sasha, the SDK engineer from [Comet](https://comet.com).
This PR updates the CometTracer class.
Added metadata to CometTracer. From now on, both chains and spans will
send it.
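
A sketch of attaching metadata so the tracer can forward it (the import
path is assumed from langchain-community; the chain is a stand-in):

```python
from langchain_community.callbacks.tracers.comet import CometTracer
from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x.upper())  # stand-in for any chain

# Metadata passed via the run config is now forwarded to Comet for both
# chains and spans.
chain.invoke(
    "hello",
    config={"callbacks": [CometTracer()], "metadata": {"user_id": "123"}},
)
```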
2024-05-24 15:29:40 +00:00
Jirka Lhotka
7c0459faf2 community: Update costs of openai finetuned models (#22124)
- **Description:** Update costs of fine-tuned models and add
gpt-3.5-turbo-0125. Source: https://openai.com/api/pricing/
  - **Issue:** N/A
  - **Dependencies:** None
2024-05-24 15:25:17 +00:00
Eugene Yurtsev
d3db83abe3 community[major]: lint for usage of xml library (#22132)
* Lint for usage of standard xml library
* Add forced opt-in for quip client
* The actual security issue is with the underlying QuipClient, not the
LangChain integration (since the client is doing the parsing), but we are
adding enforcement at the LangChain level.
2024-05-24 15:23:53 +00:00
Tom Aarsen
5b5ea2af30 docs: Add explanation on how to use Hugging Face embeddings (#22118)
- **Description:** I've added a tab on embedding text with LangChain
using Hugging Face models to here:
https://python.langchain.com/v0.2/docs/how_to/embed_text/. HF was
mentioned in the running text, but not in the tabs, which I thought was
odd.
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** No need, this is tiny :) 
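
For reference, a minimal version of what the new tab covers (model name is
the usual Sentence Transformers default):

```python
from langchain_huggingface import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-mpnet-base-v2"
)
vector = embeddings.embed_query("Hello, world!")
print(len(vector))  # 768 dimensions for this model
```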

Also, I had a ton of issues with the poetry docs/lint install, so I
haven't linted this. Apologies for that.

cc @Jofthomas 

- Tom Aarsen
2024-05-24 11:21:03 -04:00
Bagatur
baa3c975cb anthropic[patch]: allow tool call mutation (#22130)
If tool_use blocks and tool_calls with overlapping IDs are present,
prefer the values of the tool_calls. Allows for mutating AIMessages just
via tool_calls.
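
A sketch of the mutation this enables (tool-call fields are illustrative):

```python
from langchain_core.messages import AIMessage

msg = AIMessage(
    content="",
    tool_calls=[
        {"name": "search", "args": {"query": "langchain"}, "id": "toolu_01"}
    ],
)
# Edit only .tool_calls; when the message is sent back, these values now
# take precedence over any tool_use blocks sharing the same ID.
msg.tool_calls[0]["args"]["query"] = "langchain anthropic"
```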
2024-05-24 08:18:14 -07:00
Christophe Bornet
c838de5027 doc: Add doc for CassandraByteStore (#22126)
Preview:
https://langchain-git-fork-cbornet-doc-cassandrabytestore-langchain.vercel.app/v0.2/docs/integrations/stores/cassandra/
2024-05-24 10:57:55 -04:00
Vadym Barda
2edb512282 docs: improve how-to docs for message history (#22072)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-23 20:12:24 -04:00
Artem
eb7c453b98 docs: update hub.pull("rlm/map-prompt") to hub.pull("rlm/reduce-prompt") for reduce prompt (#22088)
**PR message**: 
Update `hub.pull("rlm/map-prompt")` to `hub.pull("rlm/reduce-prompt")`
in summarization.ipynb

**Description:** 
Fix typo in prompt hub link from `reduce_prompt =
hub.pull("rlm/map-prompt")` to `reduce_prompt =
hub.pull("rlm/reduce-prompt")` following next issue

**Issue:** #22014

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-23 23:07:37 +00:00
Leonid Ganeline
2416737c5f docs: compact the API Reference links (#21285)
This PR is opinionated. 
Issue: the `API Reference` sections in the examples take up too much
vertical space and force too much scrolling. See an
[example](https://python.langchain.com/docs/get_started/quickstart/#conversation-retrieval-chain).
These sections are **important**, so the compacting should not make
these sections less noticeable.
Change: compacting the `API Reference` sections. See the [same example
after change
applied](https://langchain-j6nya46lf-langchain.vercel.app/docs/get_started/quickstart/#conversation-retrieval-chain).
It is more compact and now looks like references (footnotes).
Note: I would also change the section style so it is more noticeable
(maybe styled like footnotes: a smaller, wider font?)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-23 15:50:23 -07:00
ccurme
0ea1e89b2c groq: read tool calls from .tool_calls attribute (#22096) 2024-05-23 18:16:06 -04:00
Bagatur
96c21dfe56 docs: hf feat table tool calling (#22091) 2024-05-23 15:09:30 -07:00
Eugene Yurtsev
63004a0945 codespell ignore remaining issues (#22097) 2024-05-23 21:51:39 +00:00
Eugene Yurtsev
2d693c484e docs: fix some spelling mistakes caught by newest version of code spell (#22090)
Going to merge this even though it doesn't pass all tests, and open a
separate PR for the remaining spelling mistakes.
2024-05-23 16:59:11 -04:00
Bagatur
38783d07c9 infra: api docs quick preview (#22093) 2024-05-23 13:57:45 -07:00
Pavel Zloi
fe26f937e4 community[minor]: ManticoreSearch engine added to vectorstore (#19117)
**Description:** ManticoreSearch engine added to vectorstores
**Issue:** no issue, just a new feature
**Dependencies:** https://pypi.org/project/manticoresearch-dev/
**Twitter handle:** @EvilFreelancer

- Example notebook with test integration:

https://github.com/EvilFreelancer/langchain/blob/manticore-search-vectorstore/docs/docs/integrations/vectorstores/manticore_search.ipynb
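
A minimal sketch along the lines of the example notebook (a running
Manticore Search instance and default connection settings are assumed):

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import ManticoreSearch

store = ManticoreSearch.from_texts(
    ["LangChain meets Manticore"],
    embedding=FakeEmbeddings(size=384),
)
print(store.similarity_search("manticore", k=1))
```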

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-23 13:56:18 -07:00
335 changed files with 25458 additions and 6630 deletions

7
.github/workflows/.codespell-exclude vendored Normal file
View File

@@ -0,0 +1,7 @@
libs/community/langchain_community/llms/yuan2.py
"NotIn": "not in",
- `/checkin`: Check-in
docs/docs/integrations/providers/trulens.mdx
self.assertIn(
from trulens_eval import Tru
tru = Tru()

View File

@@ -72,10 +72,67 @@ jobs:
run: |
echo pkg-name="$(poetry version | cut -d ' ' -f 1)" >> $GITHUB_OUTPUT
echo version="$(poetry version --short)" >> $GITHUB_OUTPUT
release-notes:
needs:
- build
runs-on: ubuntu-latest
outputs:
release-body: ${{ steps.generate-release-body.outputs.release-body }}
steps:
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain
path: langchain
sparse-checkout: | # this only grabs files for relevant dir
${{ inputs.working-directory }}
ref: master # this scopes to just master branch
fetch-depth: 0 # this fetches entire commit history
- name: Check Tags
id: check-tags
shell: bash
working-directory: langchain/${{ inputs.working-directory }}
env:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
run: |
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
echo $REGEX
PREV_TAG=$(git tag --sort=-creatordate | grep -P "$REGEX" | head -1 || true)
TAG="${PKG_NAME}==${VERSION}"
if [ "$TAG" == "$PREV_TAG" ]; then
echo "No new version to release"
exit 1
fi
echo tag="$TAG" >> $GITHUB_OUTPUT
echo prev-tag="$PREV_TAG" >> $GITHUB_OUTPUT
- name: Generate release body
id: generate-release-body
working-directory: langchain
env:
WORKING_DIR: ${{ inputs.working-directory }}
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
TAG: ${{ steps.check-tags.outputs.tag }}
PREV_TAG: ${{ steps.check-tags.outputs.prev-tag }}
run: |
PREAMBLE="Changes since $PREV_TAG"
# if PREV_TAG is empty, then we are releasing the first version
if [ -z "$PREV_TAG" ]; then
PREAMBLE="Initial release"
PREV_TAG=$(git rev-list --max-parents=0 HEAD)
fi
{
echo 'release-body<<EOF'
echo "# Release $TAG"
echo $PREAMBLE
echo
git log --format="%s" "$PREV_TAG"..HEAD -- $WORKING_DIR
echo EOF
} >> "$GITHUB_OUTPUT"
test-pypi-publish:
needs:
- build
- release-notes
uses:
./.github/workflows/_test_release.yml
with:
@@ -86,6 +143,7 @@ jobs:
pre-release-checks:
needs:
- build
- release-notes
- test-pypi-publish
runs-on: ubuntu-latest
steps:
@@ -229,6 +287,7 @@ jobs:
publish:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
runs-on: ubuntu-latest
@@ -270,6 +329,7 @@ jobs:
mark-release:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
- publish
@@ -306,6 +366,6 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
generateReleaseNotes: false
tag: ${{needs.build.outputs.pkg-name}}==${{ needs.build.outputs.version }}
body: "# Release ${{needs.build.outputs.pkg-name}}==${{ needs.build.outputs.version }}\n\nPackage-specific release note generation coming soon."
body: ${{ needs.release-notes.outputs.release-body }}
commit: ${{ github.sha }}
makeLatest: ${{ needs.build.outputs.pkg-name == 'langchain-core'}}

View File

@@ -29,9 +29,9 @@ jobs:
python .github/workflows/extract_ignored_words_list.py
id: extract_ignore_words
- name: Codespell
uses: codespell-project/actions-codespell@v2
with:
skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
exclude_file: libs/community/langchain_community/llms/yuan2.py
# - name: Codespell
# uses: codespell-project/actions-codespell@v2
# with:
# skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
# ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
# exclude_file: ./.github/workflows/codespell-exclude

View File

@@ -10,6 +10,7 @@ env:
jobs:
build:
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
@@ -25,16 +26,52 @@ jobs:
- "libs/partners/groq"
- "libs/partners/mistralai"
- "libs/partners/together"
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
- "libs/partners/cohere"
- "libs/partners/google-vertexai"
- "libs/partners/google-genai"
- "libs/partners/aws"
- "libs/partners/nvidia-ai-endpoints"
steps:
- uses: actions/checkout@v4
with:
path: langchain
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-google
path: langchain-google
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-nvidia
path: langchain-nvidia
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-cohere
path: langchain-cohere
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-aws
path: langchain-aws
- name: Move libs
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai \
langchain/libs/partners/nvidia-ai-endpoints \
langchain/libs/partners/cohere
mv langchain-google/libs/genai langchain/libs/partners/google-genai
mv langchain-google/libs/vertexai langchain/libs/partners/google-vertexai
mv langchain-nvidia/libs/ai-endpoints langchain/libs/partners/nvidia-ai-endpoints
mv langchain-cohere/libs/cohere langchain/libs/partners/cohere
mv langchain-aws/libs/aws langchain/libs/partners/aws
- name: Set up Python ${{ matrix.python-version }}
uses: "./.github/actions/poetry_setup"
uses: "./langchain/.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ matrix.working-directory }}
working-directory: langchain/${{ matrix.working-directory }}
cache-key: scheduled
- name: 'Authenticate to Google Cloud'
@@ -43,16 +80,20 @@ jobs:
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: Install dependencies
working-directory: ${{ matrix.working-directory }}
shell: bash
run: |
echo "Running scheduled tests, installing dependencies with poetry..."
cd langchain/${{ matrix.working-directory }}
poetry install --with=test_integration,test
- name: Run integration tests
working-directory: ${{ matrix.working-directory }}
shell: bash
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
@@ -67,12 +108,26 @@ jobs:
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
run: |
make integration_test
cd langchain/${{ matrix.working-directory }}
make integration_tests
- name: Remove external libraries
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai \
langchain/libs/partners/nvidia-ai-endpoints \
langchain/libs/partners/cohere \
langchain/libs/partners/aws
- name: Ensure the tests did not create any additional files
working-directory: ${{ matrix.working-directory }}
shell: bash
working-directory: langchain
run: |
set -eu

View File

@@ -32,10 +32,19 @@ api_docs_build:
poetry run python docs/api_reference/create_api_rst.py
cd docs/api_reference && poetry run make html
API_PKG ?= text-splitters
api_docs_quick_preview:
poetry run pip install "pydantic<2"
poetry run python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && poetry run make html
open docs/api_reference/_build/html/$(shell echo $(API_PKG) | sed 's/-/_/g')_api_reference.html
## api_docs_clean: Clean the API Reference documentation build artifacts.
api_docs_clean:
find ./docs/api_reference -name '*_api_reference.rst' -delete
cd docs/api_reference && poetry run make clean
git clean -fdX ./docs/api_reference
## api_docs_linkcheck: Run linkchecker on the API Reference documentation.
api_docs_linkcheck:

View File

@@ -2,17 +2,17 @@
⚡ Build context-aware reasoning applications ⚡
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/releases)
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/releases)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
[![Downloads](https://static.pepy.tech/badge/langchain-core/month)](https://pepy.tech/project/langchain-core)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
[![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-core?style=flat-square)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-core?style=flat-square)](https://pypistats.org/packages/langchain-core)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=flat-square)](https://star-history.com/#langchain-ai/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain?style=flat-square)](https://libraries.io/github/langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/issues)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=social)](https://star-history.com/#langchain-ai/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain)](https://libraries.io/github/langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/issues)
[![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
@@ -38,22 +38,22 @@ conda install langchain -c conda-forge
For these applications, LangChain simplifies the entire application lifecycle:
- **Open-source libraries**: Build your applications using LangChain's [modular building blocks](https://python.langchain.com/docs/expression_language/) and [components](https://python.langchain.com/docs/modules/). Integrate with hundreds of [third-party providers](https://python.langchain.com/docs/integrations/platforms/).
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://python.langchain.com/docs/langsmith/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn any chain into a REST API with [LangServe](https://python.langchain.com/docs/langserve).
- **Open-source libraries**: Build your applications using LangChain's [modular building blocks](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel) and [components](https://python.langchain.com/v0.2/docs/concepts/#components). Integrate with hundreds of [third-party providers](https://python.langchain.com/v0.2/docs/integrations/platforms/).
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://docs.smith.langchain.com/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn any chain into a REST API with [LangServe](https://python.langchain.com/v0.2/docs/langserve/).
### Open-source libraries
- **`langchain-core`**: Base abstractions and LangChain Expression Language.
- **`langchain-community`**: Third party integrations.
- Some integrations have been further split into **partner packages** that only rely on **`langchain-core`**. Examples include **`langchain_openai`** and **`langchain_anthropic`**.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **[`LangGraph`](https://python.langchain.com/docs/langgraph)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
- **[`LangGraph`](https://langchain-ai.github.io/langgraph/)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
### Productionization:
- **[LangSmith](https://python.langchain.com/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
- **[LangSmith](https://docs.smith.langchain.com/)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
### Deployment:
- **[LangServe](https://python.langchain.com/docs/langserve)**: A library for deploying LangChain chains as REST APIs.
- **[LangServe](https://python.langchain.com/v0.2/docs/langserve/)**: A library for deploying LangChain chains as REST APIs.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack.svg "LangChain Architecture Overview")
@@ -61,20 +61,20 @@ For these applications, LangChain simplifies the entire application lifecycle:
**❓ Question answering with RAG**
- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)
- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/rag/)
- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)
**🧱 Extracting structured output**
- [Documentation](https://python.langchain.com/docs/use_cases/extraction/)
- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/extraction/)
- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain-extract/)
**🤖 Chatbots**
- [Documentation](https://python.langchain.com/docs/use_cases/chatbots)
- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/chatbot/)
- End-to-end Example: [Web LangChain (web researcher chatbot)](https://weblangchain.vercel.app) and [repo](https://github.com/langchain-ai/weblangchain)
And much more! Head to the [Use cases](https://python.langchain.com/docs/use_cases/) section of the docs for more.
And much more! Head to the [Tutorials](https://python.langchain.com/v0.2/docs/tutorials/) section of the docs for more.
## 🚀 How does LangChain help?
The main value props of the LangChain libraries are:
@@ -87,49 +87,50 @@ Off-the-shelf chains make it easy to get started. Components make it easy to cus
LCEL is the foundation of many of LangChain's components, and is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.
- **[Overview](https://python.langchain.com/docs/expression_language/)**: LCEL and its benefits
- **[Interface](https://python.langchain.com/docs/expression_language/interface)**: The standard interface for LCEL objects
- **[Primitives](https://python.langchain.com/docs/expression_language/primitives)**: More on the primitives LCEL includes
- **[Overview](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel)**: LCEL and its benefits
- **[Interface](https://python.langchain.com/v0.2/docs/concepts/#runnable-interface)**: The standard Runnable interface for LCEL objects
- **[Primitives](https://python.langchain.com/v0.2/docs/how_to/#langchain-expression-language-lcel)**: More on the primitives LCEL includes
- **[Cheatsheet](https://python.langchain.com/v0.2/docs/how_to/lcel_cheatsheet/)**: Quick overview of the most common usage patterns
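
A tiny example of that declarative composition (the model class is
illustrative; any chat model works):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Prompt, model, and parser are each Runnables; `|` composes them into a
# chain with invoke/stream/batch (and async variants) built in.
chain = (
    ChatPromptTemplate.from_template("Tell me a joke about {topic}")
    | ChatOpenAI()
    | StrOutputParser()
)
print(chain.invoke({"topic": "bears"}))
```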
## Components
Components fall into the following **modules**:
**📃 Model I/O:**
**📃 Model I/O**
This includes [prompt management](https://python.langchain.com/docs/modules/model_io/prompts/), [prompt optimization](https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/), a generic interface for [chat models](https://python.langchain.com/docs/modules/model_io/chat/) and [LLMs](https://python.langchain.com/docs/modules/model_io/llms/), and common utilities for working with [model outputs](https://python.langchain.com/docs/modules/model_io/output_parsers/).
This includes [prompt management](https://python.langchain.com/v0.2/docs/concepts/#prompt-templates), [prompt optimization](https://python.langchain.com/v0.2/docs/concepts/#example-selectors), a generic interface for [chat models](https://python.langchain.com/v0.2/docs/concepts/#chat-models) and [LLMs](https://python.langchain.com/v0.2/docs/concepts/#llms), and common utilities for working with [model outputs](https://python.langchain.com/v0.2/docs/concepts/#output-parsers).
**📚 Retrieval:**
**📚 Retrieval**
Retrieval Augmented Generation involves [loading data](https://python.langchain.com/docs/modules/data_connection/document_loaders/) from a variety of sources, [preparing it](https://python.langchain.com/docs/modules/data_connection/document_loaders/), [then retrieving it](https://python.langchain.com/docs/modules/data_connection/retrievers/) for use in the generation step.
Retrieval Augmented Generation involves [loading data](https://python.langchain.com/v0.2/docs/concepts/#document-loaders) from a variety of sources, [preparing it](https://python.langchain.com/v0.2/docs/concepts/#text-splitters), then [searching over (a.k.a. retrieving from)](https://python.langchain.com/v0.2/docs/concepts/#retrievers) it for use in the generation step.
**🤖 Agents:**
**🤖 Agents**
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. LangChain provides a [standard interface for agents](https://python.langchain.com/docs/modules/agents/), a [selection of agents](https://python.langchain.com/docs/modules/agents/agent_types/) to choose from, and examples of end-to-end agents.
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. LangChain provides a [standard interface for agents](https://python.langchain.com/v0.2/docs/concepts/#agents) along with the [LangGraph](https://github.com/langchain-ai/langgraph) extension for building custom agents.
## 📖 Documentation
Please see [here](https://python.langchain.com) for full documentation, which includes:
- [Getting started](https://python.langchain.com/docs/get_started/introduction): installation, setting up the environment, simple examples
- [Use case](https://python.langchain.com/docs/use_cases/) walkthroughs and best practice [guides](https://python.langchain.com/docs/guides/)
- Overviews of the [interfaces](https://python.langchain.com/docs/expression_language/), [components](https://python.langchain.com/docs/modules/), and [integrations](https://python.langchain.com/docs/integrations/providers)
You can also check out the full [API Reference docs](https://api.python.langchain.com).
- [Introduction](https://python.langchain.com/v0.2/docs/introduction/): Overview of the framework and the structure of the docs.
- [Tutorials](https://python.langchain.com/docs/use_cases/): If you're looking to build something specific or are more of a hands-on learner, check out our tutorials. This is the best place to get started.
- [How-to guides](https://python.langchain.com/v0.2/docs/how_to/): Answers to “How do I….?” type questions. These guides are goal-oriented and concrete; they're meant to help you complete a specific task.
- [Conceptual guide](https://python.langchain.com/v0.2/docs/concepts/): Conceptual explanations of the key parts of the framework.
- [API Reference](https://api.python.langchain.com): Thorough documentation of every class and method.
## 🌐 Ecosystem
- [🦜🛠️ LangSmith](https://python.langchain.com/docs/langsmith/): Tracing and evaluating your language model applications and intelligent agents to help you move from prototype to production.
- [🦜🕸️ LangGraph](https://python.langchain.com/docs/langgraph): Creating stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.
- [🦜🛠️ LangSmith](https://docs.smith.langchain.com/): Tracing and evaluating your language model applications and intelligent agents to help you move from prototype to production.
- [🦜🕸️ LangGraph](https://langchain-ai.github.io/langgraph/): Creating stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.
- [🦜🏓 LangServe](https://python.langchain.com/docs/langserve): Deploying LangChain runnables and chains as REST APIs.
- [LangChain Templates](https://python.langchain.com/docs/templates/): Example applications hosted with LangServe.
- [LangChain Templates](https://python.langchain.com/v0.2/docs/templates/): Example applications hosted with LangServe.
## 💁 Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see [here](https://python.langchain.com/docs/contributing/).
For detailed information on how to contribute, see [here](https://python.langchain.com/v0.2/docs/contributing/).
## 🌟 Contributors

View File

@@ -526,8 +526,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"***Note:*** Currently, OracleEmbeddings processes each embedding generation request individually, without batching, by calling REST endpoints separately for each request. This method could potentially lead to exceeding the maximum request per minute quota set by some providers. However, we are actively working to enhance this process by implementing request batching, which will allow multiple embedding requests to be combined into fewer API calls, thereby optimizing our use of provider resources and adhering to their request limits. This update is expected to be rolled out soon, eliminating the current limitation.\n",
"\n",
"***Note:*** Users may need to configure a proxy to utilize third-party embedding generation providers, excluding the 'database' provider that utilizes an ONNX model."
]
},

View File

@@ -35,8 +35,6 @@ generate-files:
mkdir -p $(INTERMEDIATE_DIR)
cp -r $(SOURCE_DIR)/* $(INTERMEDIATE_DIR)
mkdir -p $(INTERMEDIATE_DIR)/templates
cp ../templates/docs/INDEX.md $(INTERMEDIATE_DIR)/templates/index.md
cp ../cookbook/README.md $(INTERMEDIATE_DIR)/cookbook.mdx
$(PYTHON) scripts/model_feat_table.py $(INTERMEDIATE_DIR)

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large Load Diff

View File

@@ -2,32 +2,150 @@
LangChain implements the latest research in the field of Natural Language Processing.
This page contains `arXiv` papers referenced in the LangChain Documentation, API Reference,
and Templates.
Templates, and Cookbooks.
## Summary
| arXiv id / Title | Authors | Published date 🔻 | LangChain Documentation|
|------------------|---------|-------------------|------------------------|
| `2402.03620v1` [Self-Discover: Large Language Models Self-Compose Reasoning Structures](http://arxiv.org/abs/2402.03620v1) | Pei Zhou, Jay Pujara, Xiang Ren, et al. | 2024-02-06 | `Cookbook:` [self-discover](https://github.com/langchain-ai/langchain/blob/master/cookbook/self-discover.ipynb)
| `2401.18059v1` [RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval](http://arxiv.org/abs/2401.18059v1) | Parth Sarthi, Salman Abdullah, Aditi Tuli, et al. | 2024-01-31 | `Cookbook:` [RAPTOR](https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb)
| `2401.15884v2` [Corrective Retrieval Augmented Generation](http://arxiv.org/abs/2401.15884v2) | Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, et al. | 2024-01-29 | `Cookbook:` [langgraph_crag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_crag.ipynb)
| `2401.04088v1` [Mixtral of Experts](http://arxiv.org/abs/2401.04088v1) | Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, et al. | 2024-01-08 | `Cookbook:` [together_ai](https://github.com/langchain-ai/langchain/blob/master/cookbook/together_ai.ipynb)
| `2312.06648v2` [Dense X Retrieval: What Retrieval Granularity Should We Use?](http://arxiv.org/abs/2312.06648v2) | Tong Chen, Hongwei Wang, Sihao Chen, et al. | 2023-12-11 | `Template:` [propositional-retrieval](https://python.langchain.com/docs/templates/propositional-retrieval)
| `2311.09210v1` [Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models](http://arxiv.org/abs/2311.09210v1) | Wenhao Yu, Hongming Zhang, Xiaoman Pan, et al. | 2023-11-15 | `Template:` [chain-of-note-wiki](https://python.langchain.com/docs/templates/chain-of-note-wiki)
| `2310.06117v2` [Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models](http://arxiv.org/abs/2310.06117v2) | Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, et al. | 2023-10-09 | `Template:` [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting)
| `2305.14283v3` [Query Rewriting for Retrieval-Augmented Large Language Models](http://arxiv.org/abs/2305.14283v3) | Xinbei Ma, Yeyun Gong, Pengcheng He, et al. | 2023-05-23 | `Template:` [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read)
| `2305.08291v1` [Large Language Model Guided Tree-of-Thought](http://arxiv.org/abs/2305.08291v1) | Jieyi Long | 2023-05-15 | `API:` [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot)
| `2303.17580v4` [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http://arxiv.org/abs/2303.17580v4) | Yongliang Shen, Kaitao Song, Xu Tan, et al. | 2023-03-30 | `API:` [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents)
| `2310.11511v1` [Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection](http://arxiv.org/abs/2310.11511v1) | Akari Asai, Zeqiu Wu, Yizhong Wang, et al. | 2023-10-17 | `Cookbook:` [langgraph_self_rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_self_rag.ipynb)
| `2310.06117v2` [Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models](http://arxiv.org/abs/2310.06117v2) | Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, et al. | 2023-10-09 | `Template:` [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting), `Cookbook:` [stepback-qa](https://github.com/langchain-ai/langchain/blob/master/cookbook/stepback-qa.ipynb)
| `2307.09288v2` [Llama 2: Open Foundation and Fine-Tuned Chat Models](http://arxiv.org/abs/2307.09288v2) | Hugo Touvron, Louis Martin, Kevin Stone, et al. | 2023-07-18 | `Cookbook:` [Semi_Structured_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)
| `2305.14283v3` [Query Rewriting for Retrieval-Augmented Large Language Models](http://arxiv.org/abs/2305.14283v3) | Xinbei Ma, Yeyun Gong, Pengcheng He, et al. | 2023-05-23 | `Template:` [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read), `Cookbook:` [rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb)
| `2305.08291v1` [Large Language Model Guided Tree-of-Thought](http://arxiv.org/abs/2305.08291v1) | Jieyi Long | 2023-05-15 | `API:` [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot), `Cookbook:` [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
| `2305.04091v3` [Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models](http://arxiv.org/abs/2305.04091v3) | Lei Wang, Wanyu Xu, Yihuai Lan, et al. | 2023-05-06 | `Cookbook:` [plan_and_execute_agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb)
| `2304.08485v2` [Visual Instruction Tuning](http://arxiv.org/abs/2304.08485v2) | Haotian Liu, Chunyuan Li, Qingyang Wu, et al. | 2023-04-17 | `Cookbook:` [Semi_structured_and_multi_modal_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb), [Semi_structured_multi_modal_RAG_LLaMA2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb)
| `2304.03442v2` [Generative Agents: Interactive Simulacra of Human Behavior](http://arxiv.org/abs/2304.03442v2) | Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al. | 2023-04-07 | `Cookbook:` [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb), [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb)
| `2303.17760v2` [CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society](http://arxiv.org/abs/2303.17760v2) | Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al. | 2023-03-31 | `Cookbook:` [camel_role_playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb)
| `2303.17580v4` [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http://arxiv.org/abs/2303.17580v4) | Yongliang Shen, Kaitao Song, Xu Tan, et al. | 2023-03-30 | `API:` [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents), `Cookbook:` [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
| `2303.08774v6` [GPT-4 Technical Report](http://arxiv.org/abs/2303.08774v6) | OpenAI, Josh Achiam, Steven Adler, et al. | 2023-03-15 | `Docs:` [docs/integrations/vectorstores/mongodb_atlas](https://python.langchain.com/docs/integrations/vectorstores/mongodb_atlas)
| `2301.10226v4` [A Watermark for Large Language Models](http://arxiv.org/abs/2301.10226v4) | John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al. | 2023-01-24 | `API:` [langchain_community.llms...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2212.10496v1` [Precise Zero-Shot Dense Retrieval without Relevance Labels](http://arxiv.org/abs/2212.10496v1) | Luyu Gao, Xueguang Ma, Jimmy Lin, et al. | 2022-12-20 | `API:` [langchain.chains...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder), `Template:` [hyde](https://python.langchain.com/docs/templates/hyde)
| `2301.10226v4` [A Watermark for Large Language Models](http://arxiv.org/abs/2301.10226v4) | John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al. | 2023-01-24 | `API:` [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI)
| `2212.10496v1` [Precise Zero-Shot Dense Retrieval without Relevance Labels](http://arxiv.org/abs/2212.10496v1) | Luyu Gao, Xueguang Ma, Jimmy Lin, et al. | 2022-12-20 | `API:` [langchain.chains...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder), `Template:` [hyde](https://python.langchain.com/docs/templates/hyde), `Cookbook:` [hypothetical_document_embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)
| `2212.07425v3` [Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments](http://arxiv.org/abs/2212.07425v3) | Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al. | 2022-12-12 | `API:` [langchain_experimental.fallacy_removal](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.fallacy_removal)
| `2211.13892v2` [Complementary Explanations for Effective In-Context Learning](http://arxiv.org/abs/2211.13892v2) | Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, et al. | 2022-11-25 | `API:` [langchain_core.example_selectors...MaxMarginalRelevanceExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html#langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector)
| `2211.10435v2` [PAL: Program-aided Language Models](http://arxiv.org/abs/2211.10435v2) | Luyu Gao, Aman Madaan, Shuyan Zhou, et al. | 2022-11-18 | `API:` [langchain_experimental.pal_chain...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain)
| `2211.10435v2` [PAL: Program-aided Language Models](http://arxiv.org/abs/2211.10435v2) | Luyu Gao, Aman Madaan, Shuyan Zhou, et al. | 2022-11-18 | `API:` [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain), [langchain_experimental.pal_chain...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), `Cookbook:` [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
| `2209.10785v2` [Deep Lake: a Lakehouse for Deep Learning](http://arxiv.org/abs/2209.10785v2) | Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, et al. | 2022-09-22 | `Docs:` [docs/integrations/providers/activeloop_deeplake](https://python.langchain.com/docs/integrations/providers/activeloop_deeplake)
| `2205.12654v1` [Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages](http://arxiv.org/abs/2205.12654v1) | Kevin Heffernan, Onur Çelebi, Holger Schwenk | 2022-05-25 | `API:` [langchain_community.embeddings...LaserEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.laser.LaserEmbeddings.html#langchain_community.embeddings.laser.LaserEmbeddings)
| `2204.00498v1` [Evaluating the Text-to-SQL Capabilities of Large Language Models](http://arxiv.org/abs/2204.00498v1) | Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau | 2022-03-15 | `API:` [langchain_community.utilities...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase), [langchain_community.utilities...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL)
| `2202.00666v5` [Locally Typical Sampling](http://arxiv.org/abs/2202.00666v5) | Clara Meister, Tiago Pimentel, Gian Wiher, et al. | 2022-02-01 | `API:` [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2204.00498v1` [Evaluating the Text-to-SQL Capabilities of Large Language Models](http://arxiv.org/abs/2204.00498v1) | Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau | 2022-03-15 | `API:` [langchain_community.utilities...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL), [langchain_community.utilities...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase)
| `2202.00666v5` [Locally Typical Sampling](http://arxiv.org/abs/2202.00666v5) | Clara Meister, Tiago Pimentel, Gian Wiher, et al. | 2022-02-01 | `API:` [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2103.00020v1` [Learning Transferable Visual Models From Natural Language Supervision](http://arxiv.org/abs/2103.00020v1) | Alec Radford, Jong Wook Kim, Chris Hallacy, et al. | 2021-02-26 | `API:` [langchain_experimental.open_clip](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.open_clip)
| `1909.05858v2` [CTRL: A Conditional Transformer Language Model for Controllable Generation](http://arxiv.org/abs/1909.05858v2) | Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al. | 2019-09-11 | `API:` [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `1909.05858v2` [CTRL: A Conditional Transformer Language Model for Controllable Generation](http://arxiv.org/abs/1909.05858v2) | Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al. | 2019-09-11 | `API:` [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `1908.10084v1` [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](http://arxiv.org/abs/1908.10084v1) | Nils Reimers, Iryna Gurevych | 2019-08-27 | `Docs:` [docs/integrations/text_embedding/sentence_transformers](https://python.langchain.com/docs/integrations/text_embedding/sentence_transformers)
## Self-Discover: Large Language Models Self-Compose Reasoning Structures
- **arXiv id:** 2402.03620v1
- **Title:** Self-Discover: Large Language Models Self-Compose Reasoning Structures
- **Authors:** Pei Zhou, Jay Pujara, Xiang Ren, et al.
- **Published Date:** 2024-02-06
- **URL:** http://arxiv.org/abs/2402.03620v1
- **LangChain:**
- **Cookbook:** [self-discover](https://github.com/langchain-ai/langchain/blob/master/cookbook/self-discover.ipynb)
**Abstract:** We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the
task-intrinsic reasoning structures to tackle complex reasoning problems that
are challenging for typical prompting methods. Core to the framework is a
self-discovery process where LLMs select multiple atomic reasoning modules such
as critical thinking and step-by-step thinking, and compose them into an
explicit reasoning structure for LLMs to follow during decoding. SELF-DISCOVER
substantially improves GPT-4 and PaLM 2's performance on challenging reasoning
benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as
much as 32% compared to Chain of Thought (CoT). Furthermore, SELF-DISCOVER
outperforms inference-intensive methods such as CoT-Self-Consistency by more
than 20%, while requiring 10-40x fewer inference compute. Finally, we show that
the self-discovered reasoning structures are universally applicable across
model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share
commonalities with human reasoning patterns.
## RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
- **arXiv id:** 2401.18059v1
- **Title:** RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
- **Authors:** Parth Sarthi, Salman Abdullah, Aditi Tuli, et al.
- **Published Date:** 2024-01-31
- **URL:** http://arxiv.org/abs/2401.18059v1
- **LangChain:**
- **Cookbook:** [RAPTOR](https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb)
**Abstract:** Retrieval-augmented language models can better adapt to changes in world
state and incorporate long-tail knowledge. However, most existing methods
retrieve only short contiguous chunks from a retrieval corpus, limiting
holistic understanding of the overall document context. We introduce the novel
approach of recursively embedding, clustering, and summarizing chunks of text,
constructing a tree with differing levels of summarization from the bottom up.
At inference time, our RAPTOR model retrieves from this tree, integrating
information across lengthy documents at different levels of abstraction.
Controlled experiments show that retrieval with recursive summaries offers
significant improvements over traditional retrieval-augmented LMs on several
tasks. On question-answering tasks that involve complex, multi-step reasoning,
we show state-of-the-art results; for example, by coupling RAPTOR retrieval
with the use of GPT-4, we can improve the best performance on the QuALITY
benchmark by 20% in absolute accuracy.
## Corrective Retrieval Augmented Generation
- **arXiv id:** 2401.15884v2
- **Title:** Corrective Retrieval Augmented Generation
- **Authors:** Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, et al.
- **Published Date:** 2024-01-29
- **URL:** http://arxiv.org/abs/2401.15884v2
- **LangChain:**
- **Cookbook:** [langgraph_crag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_crag.ipynb)
**Abstract:** Large language models (LLMs) inevitably exhibit hallucinations since the
accuracy of generated texts cannot be secured solely by the parametric
knowledge they encapsulate. Although retrieval-augmented generation (RAG) is a
practicable complement to LLMs, it relies heavily on the relevance of retrieved
documents, raising concerns about how the model behaves if retrieval goes
wrong. To this end, we propose the Corrective Retrieval Augmented Generation
(CRAG) to improve the robustness of generation. Specifically, a lightweight
retrieval evaluator is designed to assess the overall quality of retrieved
documents for a query, returning a confidence degree based on which different
knowledge retrieval actions can be triggered. Since retrieval from static and
limited corpora can only return sub-optimal documents, large-scale web searches
are utilized as an extension for augmenting the retrieval results. Besides, a
decompose-then-recompose algorithm is designed for retrieved documents to
selectively focus on key information and filter out irrelevant information in
them. CRAG is plug-and-play and can be seamlessly coupled with various
RAG-based approaches. Experiments on four datasets covering short- and
long-form generation tasks show that CRAG can significantly improve the
performance of RAG-based approaches.
## Mixtral of Experts
- **arXiv id:** 2401.04088v1
- **Title:** Mixtral of Experts
- **Authors:** Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, et al.
- **Published Date:** 2024-01-08
- **URL:** http://arxiv.org/abs/2401.04088v1
- **LangChain:**
- **Cookbook:** [together_ai](https://github.com/langchain-ai/langchain/blob/master/cookbook/together_ai.ipynb)
**Abstract:** We introduce Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model.
Mixtral has the same architecture as Mistral 7B, with the difference that each
layer is composed of 8 feedforward blocks (i.e. experts). For every token, at
each layer, a router network selects two experts to process the current state
and combine their outputs. Even though each token only sees two experts, the
selected experts can be different at each timestep. As a result, each token has
access to 47B parameters, but only uses 13B active parameters during inference.
Mixtral was trained with a context size of 32k tokens and it outperforms or
matches Llama 2 70B and GPT-3.5 across all evaluated benchmarks. In particular,
Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and
multilingual benchmarks. We also provide a model fine-tuned to follow
instructions, Mixtral 8x7B - Instruct, that surpasses GPT-3.5 Turbo,
Claude-2.1, Gemini Pro, and Llama 2 70B - chat model on human benchmarks. Both
the base and instruct models are released under the Apache 2.0 license.
## Dense X Retrieval: What Retrieval Granularity Should We Use?
- **arXiv id:** 2312.06648v2
@@ -91,6 +209,39 @@ average improvement of +7.9 in EM score given entirely noisy retrieved
documents and +10.5 in rejection rates for real-time questions that fall
outside the pre-training knowledge scope.
## Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
- **arXiv id:** 2310.11511v1
- **Title:** Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
- **Authors:** Akari Asai, Zeqiu Wu, Yizhong Wang, et al.
- **Published Date:** 2023-10-17
- **URL:** http://arxiv.org/abs/2310.11511v1
- **LangChain:**
- **Cookbook:** [langgraph_self_rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_self_rag.ipynb)
**Abstract:** Despite their remarkable capabilities, large language models (LLMs) often
produce responses containing factual inaccuracies due to their sole reliance on
the parametric knowledge they encapsulate. Retrieval-Augmented Generation
(RAG), an ad hoc approach that augments LMs with retrieval of relevant
knowledge, decreases such issues. However, indiscriminately retrieving and
incorporating a fixed number of retrieved passages, regardless of whether
retrieval is necessary, or passages are relevant, diminishes LM versatility or
can lead to unhelpful response generation. We introduce a new framework called
Self-Reflective Retrieval-Augmented Generation (Self-RAG) that enhances an LM's
quality and factuality through retrieval and self-reflection. Our framework
trains a single arbitrary LM that adaptively retrieves passages on-demand, and
generates and reflects on retrieved passages and its own generations using
special tokens, called reflection tokens. Generating reflection tokens makes
the LM controllable during the inference phase, enabling it to tailor its
behavior to diverse task requirements. Experiments show that Self-RAG (7B and
13B parameters) significantly outperforms state-of-the-art LLMs and
retrieval-augmented models on a diverse set of tasks. Specifically, Self-RAG
outperforms ChatGPT and retrieval-augmented Llama2-chat on Open-domain QA,
reasoning and fact verification tasks, and it shows significant gains in
improving factuality and citation accuracy for long-form generations relative
to these models.
## Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models
- **arXiv id:** 2310.06117v2
@@ -101,6 +252,7 @@ outside the pre-training knowledge scope.
- **LangChain:**
- **Template:** [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting)
- **Cookbook:** [stepback-qa](https://github.com/langchain-ai/langchain/blob/master/cookbook/stepback-qa.ipynb)
**Abstract:** We present Step-Back Prompting, a simple prompting technique that enables
LLMs to do abstractions to derive high-level concepts and first principles from
@@ -113,6 +265,27 @@ including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back
Prompting improves PaLM-2L performance on MMLU (Physics and Chemistry) by 7%
and 11% respectively, TimeQA by 27%, and MuSiQue by 7%.
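A minimal two-stage rendering of the technique in LangChain might look like the sketch below; the model choice and prompt wording are assumptions, and the linked template and cookbook contain the worked version.

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)

# Stage 1: abstract the question into a higher-level "step-back" question.
step_back = (
    ChatPromptTemplate.from_template(
        "Rephrase this as a more generic question about the underlying principles:\n{question}"
    )
    | llm
    | StrOutputParser()
)

# Stage 2: answer the original question grounded in those principles.
answer = (
    ChatPromptTemplate.from_template(
        "Background principles:\n{background}\n\nUsing them, answer: {question}"
    )
    | llm
    | StrOutputParser()
)

question = "If the temperature of an ideal gas doubles at constant volume, what happens to its pressure?"
principles = llm.invoke(step_back.invoke({"question": question})).content
print(answer.invoke({"background": principles, "question": question}))
```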
## Llama 2: Open Foundation and Fine-Tuned Chat Models
- **arXiv id:** 2307.09288v2
- **Title:** Llama 2: Open Foundation and Fine-Tuned Chat Models
- **Authors:** Hugo Touvron, Louis Martin, Kevin Stone, et al.
- **Published Date:** 2023-07-18
- **URL:** http://arxiv.org/abs/2307.09288v2
- **LangChain:**
- **Cookbook:** [Semi_Structured_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)
**Abstract:** In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs.
## Query Rewriting for Retrieval-Augmented Large Language Models
- **arXiv id:** 2305.14283v3
@@ -123,6 +296,7 @@ and 11% respectively, TimeQA by 27%, and MuSiQue by 7%.
- **LangChain:**
- **Template:** [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read)
- **Cookbook:** [rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb)
**Abstract:** Large Language Models (LLMs) play powerful, black-box readers in the
retrieve-then-read pipeline, making remarkable progress in knowledge-intensive
@@ -152,6 +326,7 @@ for retrieval-augmented LLM.
- **LangChain:**
- **API Reference:** [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot)
- **Cookbook:** [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
**Abstract:** In this paper, we introduce the Tree-of-Thought (ToT) framework, a novel
approach aimed at improving the problem-solving capabilities of auto-regressive
@@ -171,6 +346,132 @@ significantly increase the success rate of Sudoku puzzle solving. Our
implementation of the ToT-based Sudoku solver is available on GitHub:
\url{https://github.com/jieyilong/tree-of-thought-puzzle-solver}.
## Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
- **arXiv id:** 2305.04091v3
- **Title:** Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
- **Authors:** Lei Wang, Wanyu Xu, Yihuai Lan, et al.
- **Published Date:** 2023-05-06
- **URL:** http://arxiv.org/abs/2305.04091v3
- **LangChain:**
- **Cookbook:** [plan_and_execute_agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb)
**Abstract:** Large language models (LLMs) have recently been shown to deliver impressive
performance in various NLP tasks. To tackle multi-step reasoning tasks,
few-shot chain-of-thought (CoT) prompting includes a few manually crafted
step-by-step reasoning demonstrations which enable LLMs to explicitly generate
reasoning steps and improve their reasoning task accuracy. To eliminate the
manual effort, Zero-shot-CoT concatenates the target problem statement with
"Let's think step by step" as an input prompt to LLMs. Despite the success of
Zero-shot-CoT, it still suffers from three pitfalls: calculation errors,
missing-step errors, and semantic misunderstanding errors. To address the
missing-step errors, we propose Plan-and-Solve (PS) Prompting. It consists of
two components: first, devising a plan to divide the entire task into smaller
subtasks, and then carrying out the subtasks according to the plan. To address
the calculation errors and improve the quality of generated reasoning steps, we
extend PS prompting with more detailed instructions and derive PS+ prompting.
We evaluate our proposed prompting strategy on ten datasets across three
reasoning problems. The experimental results over GPT-3 show that our proposed
zero-shot prompting consistently outperforms Zero-shot-CoT across all datasets
by a large margin, is comparable to or exceeds Zero-shot-Program-of-Thought
Prompting, and has comparable performance with 8-shot CoT prompting on the math
reasoning problem. The code can be found at
https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting.
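In zero-shot form, PS prompting amounts to swapping the trigger phrase. The sketch below paraphrases the paper's plan-then-solve instruction; the exact wording, model, and sample question are assumptions.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)

# Replace Zero-shot-CoT's "Let's think step by step" with a plan-then-solve trigger.
ps_prompt = ChatPromptTemplate.from_template(
    "Q: {question}\n"
    "A: Let's first understand the problem and devise a plan to solve it. "
    "Then let's carry out the plan and solve the problem step by step."
)

chain = ps_prompt | llm
print(chain.invoke({"question": "A baker sells 3 dozen rolls at $0.50 each. What is the revenue?"}).content)
```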
## Visual Instruction Tuning
- **arXiv id:** 2304.08485v2
- **Title:** Visual Instruction Tuning
- **Authors:** Haotian Liu, Chunyuan Li, Qingyang Wu, et al.
- **Published Date:** 2023-04-17
- **URL:** http://arxiv.org/abs/2304.08485v2
- **LangChain:**
- **Cookbook:** [Semi_structured_and_multi_modal_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb), [Semi_structured_multi_modal_RAG_LLaMA2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb)
**Abstract:** Instruction tuning large language models (LLMs) using machine-generated
instruction-following data has improved zero-shot capabilities on new tasks,
but the idea is less explored in the multimodal field. In this paper, we
present the first attempt to use language-only GPT-4 to generate multimodal
language-image instruction-following data. By instruction tuning on such
generated data, we introduce LLaVA: Large Language and Vision Assistant, an
end-to-end trained large multimodal model that connects a vision encoder and
LLM for general-purpose visual and language understanding. Our early experiments
show that LLaVA demonstrates impressive multimodal chat abilities, sometimes
exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and
yields an 85.1% relative score compared with GPT-4 on a synthetic multimodal
instruction-following dataset. When fine-tuned on Science QA, the synergy of
LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make
GPT-4 generated visual instruction tuning data, our model and code base
publicly available.
## Generative Agents: Interactive Simulacra of Human Behavior
- **arXiv id:** 2304.03442v2
- **Title:** Generative Agents: Interactive Simulacra of Human Behavior
- **Authors:** Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al.
- **Published Date:** 2023-04-07
- **URL:** http://arxiv.org/abs/2304.03442v2
- **LangChain:**
- **Cookbook:** [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb), [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb)
**Abstract:** Believable proxies of human behavior can empower interactive applications
ranging from immersive environments to rehearsal spaces for interpersonal
communication to prototyping tools. In this paper, we introduce generative
agents--computational software agents that simulate believable human behavior.
Generative agents wake up, cook breakfast, and head to work; artists paint,
while authors write; they form opinions, notice each other, and initiate
conversations; they remember and reflect on days past as they plan the next
day. To enable generative agents, we describe an architecture that extends a
large language model to store a complete record of the agent's experiences
using natural language, synthesize those memories over time into higher-level
reflections, and retrieve them dynamically to plan behavior. We instantiate
generative agents to populate an interactive sandbox environment inspired by
The Sims, where end users can interact with a small town of twenty-five agents
using natural language. In an evaluation, these generative agents produce
believable individual and emergent social behaviors: for example, starting with
only a single user-specified notion that one agent wants to throw a Valentine's
Day party, the agents autonomously spread invitations to the party over the
next two days, make new acquaintances, ask each other out on dates to the
party, and coordinate to show up for the party together at the right time. We
demonstrate through ablation that the components of our agent
architecture--observation, planning, and reflection--each contribute critically
to the believability of agent behavior. By fusing large language models with
computational, interactive agents, this work introduces architectural and
interaction patterns for enabling believable simulations of human behavior.
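The memory component can be pictured as a scored log of experiences. The toy class below is a simplified rendering of the retrieve-by-recency/importance/relevance idea; the equal-weight score and hourly decay only approximate the paper's formulation.

```python
import math
import time

class MemoryStream:
    """Toy sketch of a generative agent's memory stream."""

    def __init__(self, embed):
        self.embed = embed      # callable: str -> list[float] (any embedding model)
        self.records = []       # (timestamp, importance, text, vector)

    def add(self, text, importance):
        self.records.append((time.time(), importance, text, self.embed(text)))

    def retrieve(self, query, k=3, decay=0.995):
        qv = self.embed(query)
        now = time.time()

        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb + 1e-9)

        def score(rec):
            ts, importance, _, vec = rec
            recency = decay ** ((now - ts) / 3600)  # exponential decay per hour
            return recency + importance + cosine(qv, vec)

        return [r[2] for r in sorted(self.records, key=score, reverse=True)[:k]]
```

Retrieved memories would then be synthesized into reflections and fed back into the agent's planning prompt.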
## CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
- **arXiv id:** 2303.17760v2
- **Title:** CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
- **Authors:** Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al.
- **Published Date:** 2023-03-31
- **URL:** http://arxiv.org/abs/2303.17760v2
- **LangChain:**
- **Cookbook:** [camel_role_playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb)
**Abstract:** The rapid advancement of chat-based language models has led to remarkable
progress in complex task-solving. However, their success heavily relies on
human input to guide the conversation, which can be challenging and
time-consuming. This paper explores the potential of building scalable
techniques to facilitate autonomous cooperation among communicative agents, and
provides insight into their "cognitive" processes. To address the challenges of
achieving autonomous cooperation, we propose a novel communicative agent
framework named role-playing. Our approach involves using inception prompting
to guide chat agents toward task completion while maintaining consistency with
human intentions. We showcase how role-playing can be used to generate
conversational data for studying the behaviors and capabilities of a society of
agents, providing a valuable resource for investigating conversational language
models. In particular, we conduct comprehensive studies on
instruction-following cooperation in multi-agent settings. Our contributions
include introducing a novel communicative agent framework, offering a scalable
approach for studying the cooperative behaviors and capabilities of multi-agent
systems, and open-sourcing our library to support research on communicative
agents and beyond: https://github.com/camel-ai/camel.
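The role-playing loop itself is simple to outline. The sketch below is a bare-bones rendition with entirely hypothetical agent interfaces (the `step` method and the termination marker) standing in for the library's actual abstractions.

```python
def role_play(task, user_agent, assistant_agent, max_turns=10):
    """Bare-bones sketch of CAMEL-style role playing between two chat agents."""
    message = f"Here is the task we must complete together: {task}"
    transcript = []
    for _ in range(max_turns):
        instruction = user_agent.step(message)       # the "user" agent issues an instruction
        message = assistant_agent.step(instruction)  # the assistant carries it out
        transcript.append((instruction, message))
        if "<TASK_DONE>" in instruction:             # inception prompt asks for this marker
            break
    return transcript
```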
## HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face
- **arXiv id:** 2303.17580v4
@@ -181,6 +482,7 @@ implementation of the ToT-based Sudoku solver is available on GitHub:
- **LangChain:**
- **API Reference:** [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents)
- **Cookbook:** [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
**Abstract:** Solving complicated AI tasks with different domains and modalities is a key
step toward artificial general intelligence. While there are numerous AI models
@@ -235,7 +537,7 @@ more than 1/1,000th the compute of GPT-4.
- **URL:** http://arxiv.org/abs/2301.10226v4
- **LangChain:**
- **API Reference:** [langchain_community.llms...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
- **API Reference:** [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI)
**Abstract:** Potential harms of large language models can be mitigated by watermarking
model output, i.e., embedding signals into generated text that are invisible to
@@ -262,6 +564,7 @@ family, and discuss robustness and security.
- **API Reference:** [langchain.chains...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder)
- **Template:** [hyde](https://python.langchain.com/docs/templates/hyde)
- **Cookbook:** [hypothetical_document_embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)
**Abstract:** While dense retrieval has been shown effective and efficient across tasks and
languages, it remains difficult to create effective fully zero-shot dense
@@ -351,7 +654,8 @@ performance across three real-world tasks on multiple LLMs.
- **URL:** http://arxiv.org/abs/2211.10435v2
- **LangChain:**
- **API Reference:** [langchain_experimental.pal_chain...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain)
- **API Reference:** [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain), [langchain_experimental.pal_chain...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain)
- **Cookbook:** [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
**Abstract:** Large language models (LLMs) have recently demonstrated an impressive ability
to perform arithmetic and symbolic reasoning tasks, when provided with a few
@@ -442,7 +746,7 @@ encoders, mine bitexts, and validate the bitexts by training NMT systems.
- **URL:** http://arxiv.org/abs/2204.00498v1
- **LangChain:**
- **API Reference:** [langchain_community.utilities...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase), [langchain_community.utilities...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL)
- **API Reference:** [langchain_community.utilities...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL), [langchain_community.utilities...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase)
**Abstract:** We perform an empirical evaluation of Text-to-SQL capabilities of the Codex
language model. We find that, without any finetuning, Codex is a strong
@@ -461,7 +765,7 @@ few-shot examples.
- **URL:** http://arxiv.org/abs/2202.00666v5
- **LangChain:**
- **API Reference:** [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
- **API Reference:** [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
**Abstract:** Today's probabilistic language generators fall short when it comes to
producing coherent and fluent text despite the fact that the underlying models
@@ -525,7 +829,7 @@ https://github.com/OpenAI/CLIP.
- **URL:** http://arxiv.org/abs/1909.05858v2
- **LangChain:**
- **API Reference:** [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
- **API Reference:** [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
**Abstract:** Large-scale language models show promising text generation capabilities, but
users cannot easily control particular aspects of the generated text. We

View File

@@ -1,3 +1,7 @@
---
keywords: [prompt, documents, chatprompttemplate, prompttemplate, invoke, lcel, tool, tools, embedding, embeddings, vector, vectorstore, llm, loader, retriever, retrievers]
---
# Conceptual guide
import ThemedImage from '@theme/ThemedImage';
@@ -38,7 +42,7 @@ All dependencies in this package are optional to keep the package as lightweight
`langgraph` is an extension of `langchain` aimed at
building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
LangGraph exposes high level interfaces for creating common types of agents, as well as a low-level API for constructing more controlled flows.
LangGraph exposes high level interfaces for creating common types of agents, as well as a low-level API for composing custom flows.
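As a quick illustration of the graph model, here is a minimal sketch using LangGraph's high-level API; the single node is a stand-in for real model or tool calls:

```python
from typing import TypedDict

from langgraph.graph import END, StateGraph

class State(TypedDict):
    question: str
    answer: str

def respond(state: State) -> dict:
    # A real node would invoke a chat model or tool here.
    return {"answer": f"You asked: {state['question']}"}

workflow = StateGraph(State)
workflow.add_node("respond", respond)
workflow.set_entry_point("respond")
workflow.add_edge("respond", END)

app = workflow.compile()
print(app.invoke({"question": "hi"}))
```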
### [`langserve`](/docs/langserve)

View File

@@ -71,6 +71,8 @@ make docs_clean
make api_docs_clean
```
Next, you can build the documentation as outlined below:
```bash
@@ -78,6 +80,18 @@ make docs_build
make api_docs_build
```
:::tip
The `make api_docs_build` command takes a long time. If you're making cosmetic changes to the API docs and want to see how they look, use:
```bash
make api_docs_quick_preview
```
which will just build a small subset of the API reference.
:::
Finally, run the link checker to ensure all links are valid:
```bash

View File

@@ -19,13 +19,13 @@
"\n",
"By themselves, language models can't take actions - they just output text.\n",
"A big use case for LangChain is creating **agents**.\n",
"Agents are systems that use an LLM as a reasoning enginer to determine which actions to take and what the inputs to those actions should be.\n",
"The results of those actions can then be fed back into the agent and it determine whether more actions are needed, or whether it is okay to finish.\n",
"Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be.\n",
"The results of those actions can then be fed back into the agent and it determines whether more actions are needed, or whether it is okay to finish.\n",
"\n",
"In this tutorial we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.\n",
"In this tutorial, we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.\n",
"\n",
":::{.callout-important}\n",
"This section will cover building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd reccommend checking out [LangGraph](/docs/concepts/#langgraph)\n",
"This section will cover building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point, you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd reccommend checking out [LangGraph](/docs/concepts/#langgraph)\n",
":::\n",
"\n",
"## Concepts\n",
@@ -34,7 +34,7 @@
"- Using [language models](/docs/concepts/#chat-models), in particular their tool calling ability\n",
"- Creating a [Retriever](/docs/concepts/#retrievers) to expose specific information to our agent\n",
"- Using a Search [Tool](/docs/concepts/#tools) to look up things online\n",
"- [`Chat History`](/docs/concepts/#chat-history), which allows a chatbot to \"remember\" past interactions and take them into account when responding to followup questions. \n",
"- [`Chat History`](/docs/concepts/#chat-history), which allows a chatbot to \"remember\" past interactions and take them into account when responding to follow-up questions. \n",
"- Debugging and tracing your application using [LangSmith](/docs/concepts/#langsmith)\n",
"\n",
"## Setup\n",

View File

@@ -1,5 +1,19 @@
{
"cells": [
{
"cell_type": "raw",
"id": "f781411d",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"keywords: [charactertextsplitter]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "c3ee8d00",

View File

@@ -14,35 +14,51 @@
"\n",
":::\n",
"\n",
"Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls."
"Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n",
"\n",
"This guide requires `langchain-openai >= 0.1.8`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9c7d1338-dd1b-4d06-b33d-d5cffc49fd6a",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "1a55e87a-3291-4e7f-8e8e-4c69b0854384",
"id": "598ae1e2-a52d-4459-81fd-cdc68b06742a",
"metadata": {},
"source": [
"## Using AIMessage.response_metadata\n",
"## Using LangSmith\n",
"\n",
"A number of model providers return token usage information as part of the chat generation response. When available, this is included in the [`AIMessage.response_metadata`](/docs/how_to/response_metadata) field. Here's an example with OpenAI:"
"You can use [LangSmith](https://www.langchain.com/langsmith) to help track token usage in your LLM application. See the [LangSmith quick start guide](https://docs.smith.langchain.com/).\n",
"\n",
"## Using AIMessage.usage_metadata\n",
"\n",
"A number of model providers return token usage information as part of the chat generation response. When available, this information will be included on the `AIMessage` objects produced by the corresponding model.\n",
"\n",
"LangChain `AIMessage` objects include a [usage_metadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.usage_metadata) attribute. When populated, this attribute will be a [UsageMetadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.UsageMetadata.html) dictionary with standard keys (e.g., `\"input_tokens\"` and `\"output_tokens\"`).\n",
"\n",
"Examples:\n",
"\n",
"**OpenAI**:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "467ccdeb-6b62-45e5-816e-167cd24d2586",
"id": "b39bf807-4125-4db4-bbf7-28a46afff6b4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'token_usage': {'completion_tokens': 225,\n",
" 'prompt_tokens': 17,\n",
" 'total_tokens': 242},\n",
" 'model_name': 'gpt-4-turbo',\n",
" 'system_fingerprint': 'fp_76f018034d',\n",
" 'finish_reason': 'stop',\n",
" 'logprobs': None}"
"{'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}"
]
},
"execution_count": 1,
@@ -51,37 +67,33 @@
}
],
"source": [
"# !pip install -qU langchain-openai\n",
"# # !pip install -qU langchain-openai\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4-turbo\")\n",
"msg = llm.invoke([(\"human\", \"What's the oldest known example of cuneiform\")])\n",
"msg.response_metadata"
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"openai_response = llm.invoke(\"hello\")\n",
"openai_response.usage_metadata"
]
},
{
"cell_type": "markdown",
"id": "9d5026e9-3ad4-41e6-9946-9f1a26f4a21f",
"id": "2299c44a-2fe6-4d52-a6a2-99ff6d231c73",
"metadata": {},
"source": [
"And here's an example with Anthropic:"
"**Anthropic**:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "145404f1-e088-4824-b468-236c486a9903",
"id": "9c82ff80-ec4e-4049-b019-5f0bbd7df82a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'id': 'msg_01P61rdHbapEo6h3fjpfpCQT',\n",
" 'model': 'claude-3-sonnet-20240229',\n",
" 'stop_reason': 'end_turn',\n",
" 'stop_sequence': None,\n",
" 'usage': {'input_tokens': 17, 'output_tokens': 306}}"
"{'input_tokens': 8, 'output_tokens': 12, 'total_tokens': 20}"
]
},
"execution_count": 2,
@@ -94,9 +106,222 @@
"\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\")\n",
"msg = llm.invoke([(\"human\", \"What's the oldest known example of cuneiform\")])\n",
"msg.response_metadata"
"llm = ChatAnthropic(model=\"claude-3-haiku-20240307\")\n",
"anthropic_response = llm.invoke(\"hello\")\n",
"anthropic_response.usage_metadata"
]
},
{
"cell_type": "markdown",
"id": "6d4efc15-ba9f-4b3d-9278-8e01f99f263f",
"metadata": {},
"source": [
"### Using AIMessage.response_metadata\n",
"\n",
"Metadata from the model response is also included in the AIMessage [response_metadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.response_metadata) attribute. These data are typically not standardized. Note that different providers adopt different conventions for representing token counts:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f156f9da-21f2-4c81-a714-54cbf9ad393e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI: {'completion_tokens': 9, 'prompt_tokens': 8, 'total_tokens': 17}\n",
"\n",
"Anthropic: {'input_tokens': 8, 'output_tokens': 12}\n"
]
}
],
"source": [
"print(f'OpenAI: {openai_response.response_metadata[\"token_usage\"]}\\n')\n",
"print(f'Anthropic: {anthropic_response.response_metadata[\"usage\"]}')"
]
},
{
"cell_type": "markdown",
"id": "b4ef2c43-0ff6-49eb-9782-e4070c9da8d7",
"metadata": {},
"source": [
"### Streaming\n",
"\n",
"Some providers support token count metadata in a streaming context.\n",
"\n",
"#### OpenAI\n",
"\n",
"For example, OpenAI will return a message [chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html) at the end of a stream with token usage information. This behavior is supported by `langchain-openai >= 0.1.8` and can be enabled by setting `stream_options={\"include_usage\": True}`.\n",
"\n",
"```{=mdx}\n",
":::note\n",
"By default, the last message chunk in a stream will include a `\"finish_reason\"` in the message's `response_metadata` attribute. If we include token usage in streaming mode, an additional chunk containing usage metadata will be added to the end of the stream, such that `\"finish_reason\"` appears on the second to last message chunk.\n",
":::\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "07f0c872-6b6c-4fed-a129-9b5a858505be",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='Hello' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='!' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' How' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' can' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' I' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' assist' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' you' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' today' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='?' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='' response_metadata={'finish_reason': 'stop'} id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}\n"
]
}
],
"source": [
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"aggregate = None\n",
"for chunk in llm.stream(\"hello\", stream_options={\"include_usage\": True}):\n",
" print(chunk)\n",
" aggregate = chunk if aggregate is None else aggregate + chunk"
]
},
{
"cell_type": "markdown",
"id": "dd809ded-8b13-4d5f-be5e-277b79d51802",
"metadata": {},
"source": [
"Note that the usage metadata will be included in the sum of the individual message chunks:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "3db7bc03-a7d4-4704-92ab-f8ba92ef59ae",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hello! How can I assist you today?\n",
"{'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}\n"
]
}
],
"source": [
"print(aggregate.content)\n",
"print(aggregate.usage_metadata)"
]
},
{
"cell_type": "markdown",
"id": "7dba63e8-0ed7-4533-8f0f-78e19c38a25c",
"metadata": {},
"source": [
"To disable streaming token counts for OpenAI, set `\"include_usage\"` to False in `stream_options`, or omit it from the parameters:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "67117f2b-ce68-4c1e-9556-2d3849f90e1b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content='Hello' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content='!' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' How' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' can' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' I' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' assist' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' you' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' today' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content='?' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content='' response_metadata={'finish_reason': 'stop'} id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n"
]
}
],
"source": [
"aggregate = None\n",
"for chunk in llm.stream(\"hello\"):\n",
" print(chunk)"
]
},
{
"cell_type": "markdown",
"id": "6a5d9617-be3a-419a-9276-de9c29fa50ae",
"metadata": {},
"source": [
"You can also enable streaming token usage by setting `model_kwargs` when instantiating the chat model. This can be useful when incorporating chat models into LangChain [chains](/docs/concepts#langchain-expression-language-lcel): usage metadata can be monitored when [streaming intermediate steps](/docs/how_to/streaming#using-stream-events) or using tracing software such as [LangSmith](https://docs.smith.langchain.com/).\n",
"\n",
"See the below example, where we return output structured to a desired schema, but can still observe token usage streamed from intermediate steps."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "57dec1fb-bd9c-4c98-8798-8fbbe67f6b2c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Token usage: {'input_tokens': 79, 'output_tokens': 23, 'total_tokens': 102}\n",
"\n",
"setup='Why was the math book sad?' punchline='Because it had too many problems.'\n"
]
}
],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Joke(BaseModel):\n",
" \"\"\"Joke to tell user.\"\"\"\n",
"\n",
" setup: str = Field(description=\"question to set up a joke\")\n",
" punchline: str = Field(description=\"answer to resolve the joke\")\n",
"\n",
"\n",
"llm = ChatOpenAI(\n",
" model=\"gpt-3.5-turbo-0125\",\n",
" model_kwargs={\"stream_options\": {\"include_usage\": True}},\n",
")\n",
"# Under the hood, .with_structured_output binds tools to the\n",
"# chat model and appends a parser.\n",
"structured_llm = llm.with_structured_output(Joke)\n",
"\n",
"async for event in structured_llm.astream_events(\"Tell me a joke\", version=\"v2\"):\n",
" if event[\"event\"] == \"on_chat_model_end\":\n",
" print(f'Token usage: {event[\"data\"][\"output\"].usage_metadata}\\n')\n",
" elif event[\"event\"] == \"on_chain_end\":\n",
" print(event[\"data\"][\"output\"])\n",
" else:\n",
" pass"
]
},
{
"cell_type": "markdown",
"id": "2bc8d313-4bef-463e-89a5-236d8bb6ab2f",
"metadata": {},
"source": [
"Token usage is also visible in the corresponding [LangSmith trace](https://smith.langchain.com/public/fe6513d5-7212-4045-82e0-fefa28bc7656/r) in the payload from the chat model."
]
},
{
@@ -115,7 +340,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 9,
"id": "31667d54",
"metadata": {},
"outputs": [
@@ -123,11 +348,11 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Tokens Used: 26\n",
"Tokens Used: 27\n",
"\tPrompt Tokens: 11\n",
"\tCompletion Tokens: 15\n",
"\tCompletion Tokens: 16\n",
"Successful Requests: 1\n",
"Total Cost (USD): $0.00056\n"
"Total Cost (USD): $2.95e-05\n"
]
}
],
@@ -136,7 +361,7 @@
"\n",
"from langchain_community.callbacks.manager import get_openai_callback\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4-turbo\", temperature=0)\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"\n",
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
@@ -153,7 +378,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 10,
"id": "e09420f4",
"metadata": {},
"outputs": [
@@ -161,7 +386,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"52\n"
"55\n"
]
}
],
@@ -172,6 +397,39 @@
" print(cb.total_tokens)"
]
},
{
"cell_type": "markdown",
"id": "9ac51188-c8f4-4230-90fd-3cd78cdd955d",
"metadata": {},
"source": [
"```{=mdx}\n",
":::note\n",
"Cost information is currently not available in streaming mode. This is because model names are currently not propagated through chunks in streaming mode, and the model name is used to look up the correct pricing. Token counts however are available:\n",
":::\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "b241069a-265d-4497-af34-b0a5f95ae67f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"28\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" for chunk in llm.stream(\"Tell me a joke\", stream_options={\"include_usage\": True}):\n",
" pass\n",
" print(cb.total_tokens)"
]
},
{
"cell_type": "markdown",
"id": "d8186e7b",
@@ -182,7 +440,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 12,
"id": "5d1125c6",
"metadata": {},
"outputs": [],
@@ -211,15 +469,15 @@
"source": [
"```{=mdx}\n",
":::note\n",
"We have to set `stream_runnable=False` for token counting to work. By default the AgentExecutor will stream the underlying agent so that you can get the most granular results when streaming events via AgentExecutor.stream_events. However, OpenAI does not return token counts when streaming model responses, so we need to turn off the underlying streaming.\n",
"We have to set `stream_runnable=False` for cost information, as described above. By default the AgentExecutor will stream the underlying agent so that you can get the most granular results when streaming events via AgentExecutor.stream_events.\n",
":::\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "2f98c536",
"execution_count": 13,
"id": "3950d88b-8bfb-4294-b75b-e6fd421e633c",
"metadata": {},
"outputs": [
{
@@ -230,46 +488,51 @@
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `wikipedia` with `Hummingbird`\n",
"Invoking: `wikipedia` with `{'query': 'hummingbird scientific name'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mPage: Hummingbird\n",
"Summary: Hummingbirds are birds native to the Americas and comprise the biological family Trochilidae. With approximately 366 species and 113 genera, they occur from Alaska to Tierra del Fuego, but most species are found in Central and South America. As of 2024, 21 hummingbird species are listed as endangered or critically endangered, with numerous species declining in population.Hummingbirds have varied specialized characteristics to enable rapid, maneuverable flight: exceptional metabolic capacity, adaptations to high altitude, sensitive visual and communication abilities, and long-distance migration in some species. Among all birds, male hummingbirds have the widest diversity of plumage color, particularly in blues, greens, and purples. Hummingbirds are the smallest mature birds, measuring 7.513 cm (35 in) in length. The smallest is the 5 cm (2.0 in) bee hummingbird, which weighs less than 2.0 g (0.07 oz), and the largest is the 23 cm (9 in) giant hummingbird, weighing 1824 grams (0.630.85 oz). Noted for long beaks, hummingbirds are specialized for feeding on flower nectar, but all species also consume small insects.\n",
"Summary: Hummingbirds are birds native to the Americas and comprise the biological family Trochilidae. With approximately 366 species and 113 genera, they occur from Alaska to Tierra del Fuego, but most species are found in Central and South America. As of 2024, 21 hummingbird species are listed as endangered or critically endangered, with numerous species declining in population.\n",
"Hummingbirds have varied specialized characteristics to enable rapid, maneuverable flight: exceptional metabolic capacity, adaptations to high altitude, sensitive visual and communication abilities, and long-distance migration in some species. Among all birds, male hummingbirds have the widest diversity of plumage color, particularly in blues, greens, and purples. Hummingbirds are the smallest mature birds, measuring 7.513 cm (35 in) in length. The smallest is the 5 cm (2.0 in) bee hummingbird, which weighs less than 2.0 g (0.07 oz), and the largest is the 23 cm (9 in) giant hummingbird, weighing 1824 grams (0.630.85 oz). Noted for long beaks, hummingbirds are specialized for feeding on flower nectar, but all species also consume small insects.\n",
"They are known as hummingbirds because of the humming sound created by their beating wings, which flap at high frequencies audible to other birds and humans. They hover at rapid wing-flapping rates, which vary from around 12 beats per second in the largest species to 80 per second in small hummingbirds.\n",
"Hummingbirds have the highest mass-specific metabolic rate of any homeothermic animal. To conserve energy when food is scarce and at night when not foraging, they can enter torpor, a state similar to hibernation, and slow their metabolic rate to 115 of its normal rate. While most hummingbirds do not migrate, the rufous hummingbird has one of the longest migrations among birds, traveling twice per year between Alaska and Mexico, a distance of about 3,900 miles (6,300 km).\n",
"Hummingbirds split from their sister group, the swifts and treeswifts, around 42 million years ago. The oldest known fossil hummingbird is Eurotrochilus, from the Rupelian Stage of Early Oligocene Europe.\n",
"\n",
"Page: Rufous hummingbird\n",
"Summary: The rufous hummingbird (Selasphorus rufus) is a small hummingbird, about 8 cm (3.1 in) long with a long, straight and slender bill. These birds are known for their extraordinary flight skills, flying 2,000 mi (3,200 km) during their migratory transits. It is one of nine species in the genus Selasphorus.\n",
"\n",
"\n",
"Page: Bee hummingbird\n",
"Summary: The bee hummingbird, zunzuncito or Helena hummingbird (Mellisuga helenae) is a species of hummingbird, native to the island of Cuba in the Caribbean. It is the smallest known bird. The bee hummingbird feeds on nectar of flowers and bugs found in Cuba.\n",
"\n",
"Page: Hummingbird cake\n",
"Summary: Hummingbird cake is a banana-pineapple spice cake originating in Jamaica and a popular dessert in the southern United States since the 1970s. Ingredients include flour, sugar, salt, vegetable oil, ripe banana, pineapple, cinnamon, pecans, vanilla extract, eggs, and leavening agent. It is often served with cream cheese frosting.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `wikipedia` with `Fastest bird`\n",
"Page: Anna's hummingbird\n",
"Summary: Anna's hummingbird (Calypte anna) is a North American species of hummingbird. It was named after Anna Masséna, Duchess of Rivoli.\n",
"It is native to western coastal regions of North America. In the early 20th century, Anna's hummingbirds bred only in northern Baja California and Southern California. The transplanting of exotic ornamental plants in residential areas throughout the Pacific coast and inland deserts provided expanded nectar and nesting sites, allowing the species to expand its breeding range. Year-round residence of Anna's hummingbirds in the Pacific Northwest is an example of ecological release dependent on acclimation to colder winter temperatures, introduced plants, and human provision of nectar feeders during winter.\n",
"These birds feed on nectar from flowers using a long extendable tongue. They also consume small insects and other arthropods caught in flight or gleaned from vegetation.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `wikipedia` with `{'query': 'fastest bird species'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mPage: Fastest animals\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mPage: List of birds by flight speed\n",
"Summary: This is a list of the fastest flying birds in the world. A bird's velocity is necessarily variable; a hunting bird will reach much greater speeds while diving to catch prey than when flying horizontally. The bird that can achieve the greatest airspeed is the peregrine falcon (Falco peregrinus), able to exceed 320 km/h (200 mph) in its dives. A close relative of the common swift, the white-throated needletail (Hirundapus caudacutus), is commonly reported as the fastest bird in level flight with a reported top speed of 169 km/h (105 mph). This record remains unconfirmed as the measurement methods have never been published or verified. The record for the fastest confirmed level flight by a bird is 111.5 km/h (69.3 mph) held by the common swift.\n",
"\n",
"\n",
"\n",
"Page: Fastest animals\n",
"Summary: This is a list of the fastest animals in the world, by types of animal.\n",
"\n",
"\n",
"\n",
"Page: List of birds by flight speed\n",
"Summary: This is a list of the fastest flying birds in the world. A bird's velocity is necessarily variable; a hunting bird will reach much greater speeds while diving to catch prey than when flying horizontally. The bird that can achieve the greatest airspeed is the peregrine falcon, able to exceed 320 km/h (200 mph) in its dives. A close relative of the common swift, the white-throated needletail (Hirundapus caudacutus), is commonly reported as the fastest bird in level flight with a reported top speed of 169 km/h (105 mph). This record remains unconfirmed as the measurement methods have never been published or verified. The record for the fastest confirmed level flight by a bird is 111.5 km/h (69.3 mph) held by the common swift.\n",
"\n",
"Page: Ostrich\n",
"Summary: Ostriches are large flightless birds. They are the heaviest and largest living birds, with adult common ostriches weighing anywhere between 63.5 and 145 kilograms and laying the largest eggs of any living land animal. With the ability to run at 70 km/h (43.5 mph), they are the fastest birds on land. They are farmed worldwide, with significant industries in the Philippines and in Namibia. Ostrich leather is a lucrative commodity, and the large feathers are used as plumes for the decoration of ceremonial headgear. Ostrich eggs have been used by humans for millennia.\n",
"Ostriches are of the genus Struthio in the order Struthioniformes, part of the infra-class Palaeognathae, a diverse group of flightless birds also known as ratites that includes the emus, rheas, cassowaries, kiwis and the extinct elephant birds and moas. There are two living species of ostrich: the common ostrich, native to large areas of sub-Saharan Africa, and the Somali ostrich, native to the Horn of Africa. The common ostrich was historically native to the Arabian Peninsula, and ostriches were present across Asia as far east as China and Mongolia during the Late Pleistocene and possibly into the Holocene.\u001b[0m\u001b[32;1m\u001b[1;3m### Hummingbird's Scientific Name\n",
"The scientific name for the bee hummingbird, which is the smallest known bird and a species of hummingbird, is **Mellisuga helenae**. It is native to Cuba.\n",
"\n",
"### Fastest Bird Species\n",
"The fastest bird in terms of airspeed is the **peregrine falcon**, which can exceed speeds of 320 km/h (200 mph) during its diving flight. In level flight, the fastest confirmed speed is held by the **common swift**, which can fly at 111.5 km/h (69.3 mph).\u001b[0m\n",
"Page: Falcon\n",
"Summary: Falcons () are birds of prey in the genus Falco, which includes about 40 species. Falcons are widely distributed on all continents of the world except Antarctica, though closely related raptors did occur there in the Eocene.\n",
"Adult falcons have thin, tapered wings, which enable them to fly at high speed and change direction rapidly. Fledgling falcons, in their first year of flying, have longer flight feathers, which make their configuration more like that of a general-purpose bird such as a broad wing. This makes flying easier while learning the exceptional skills required to be effective hunters as adults.\n",
"The falcons are the largest genus in the Falconinae subfamily of Falconidae, which itself also includes another subfamily comprising caracaras and a few other species. All these birds kill with their beaks, using a tomial \"tooth\" on the side of their beaks—unlike the hawks, eagles, and other birds of prey in the Accipitridae, which use their feet.\n",
"The largest falcon is the gyrfalcon at up to 65 cm in length. The smallest falcon species is the pygmy falcon, which measures just 20 cm. As with hawks and owls, falcons exhibit sexual dimorphism, with the females typically larger than the males, thus allowing a wider range of prey species.\n",
"Some small falcons with long, narrow wings are called \"hobbies\" and some which hover while hunting are called \"kestrels\".\n",
"As is the case with many birds of prey, falcons have exceptional powers of vision; the visual acuity of one species has been measured at 2.6 times that of a normal human. Peregrine falcons have been recorded diving at speeds of 320 km/h (200 mph), making them the fastest-moving creatures on Earth; the fastest recorded dive attained a vertical speed of 390 km/h (240 mph).\u001b[0m\u001b[32;1m\u001b[1;3mThe scientific name for a hummingbird is Trochilidae. The fastest bird species is the peregrine falcon (Falco peregrinus), which can exceed speeds of 320 km/h (200 mph) in its dives.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Total Tokens: 1583\n",
"Prompt Tokens: 1412\n",
"Completion Tokens: 171\n",
"Total Cost (USD): $0.019250000000000003\n"
"Total Tokens: 1787\n",
"Prompt Tokens: 1687\n",
"Completion Tokens: 100\n",
"Total Cost (USD): $0.0009935\n"
]
}
],
@@ -298,19 +561,19 @@
},
{
"cell_type": "code",
"execution_count": 1,
"id": "4a3eced5-2ff7-49a7-a48b-768af8658323",
"execution_count": 12,
"id": "1837c807-136a-49d8-9c33-060e58dc16d2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tokens Used: 0\n",
"\tPrompt Tokens: 0\n",
"\tCompletion Tokens: 0\n",
"Tokens Used: 96\n",
"\tPrompt Tokens: 26\n",
"\tCompletion Tokens: 70\n",
"Successful Requests: 2\n",
"Total Cost (USD): $0.0\n"
"Total Cost (USD): $0.001888\n"
]
}
],
@@ -364,7 +627,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -1,15 +1,5 @@
{
"cells": [
{
"cell_type": "raw",
"id": "77bf57fb-e990-45f2-8b5f-c76388b05966",
"metadata": {},
"source": [
"---\n",
"keywords: [LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "50d57bf2-7104-4570-b3e5-90fd71e1bea1",

View File

@@ -75,6 +75,31 @@ Otherwise you can initialize without any params:
from langchain_cohere import CohereEmbeddings
embeddings_model = CohereEmbeddings()
```
</TabItem>
<TabItem value="huggingface" label="Hugging Face">
To start we'll need to install the Hugging Face partner package:
```bash
pip install langchain-huggingface
```
You can then load any [Sentence Transformers model](https://huggingface.co/models?library=sentence-transformers) from the Hugging Face Hub.
```python
from langchain_huggingface import HuggingFaceEmbeddings
embeddings_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
```
You can also leave the `model_name` blank to use the default [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) model.
```python
from langchain_huggingface import HuggingFaceEmbeddings
embeddings_model = HuggingFaceEmbeddings()
```
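Either way, the model exposes the standard embedding interface. A quick sketch (the printed dimensions depend on the chosen model):
```python
# Embed a single query and a batch of documents; each vector is a list of floats.
query_vector = embeddings_model.embed_query("What is LangChain?")
doc_vectors = embeddings_model.embed_documents(["LangChain is a framework for LLM apps."])
print(len(query_vector), len(doc_vectors[0]))
```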
</TabItem>

View File

@@ -4,13 +4,17 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to create an Ensemble Retriever\n",
"# How to combine results from multiple retrievers\n",
"\n",
"The `EnsembleRetriever` takes a list of retrievers as input and ensemble the results of their `get_relevant_documents()` methods and rerank the results based on the [Reciprocal Rank Fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) algorithm.\n",
"The [EnsembleRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.ensemble.EnsembleRetriever.html) supports ensembling of results from multiple retrievers. It is initialized with a list of [BaseRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_core.retrievers.BaseRetriever.html) objects. EnsembleRetrievers rerank the results of the constituent retrievers based on the [Reciprocal Rank Fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) algorithm.\n",
"\n",
"By leveraging the strengths of different algorithms, the `EnsembleRetriever` can achieve better performance than any single algorithm. \n",
"\n",
"The most common pattern is to combine a sparse retriever (like BM25) with a dense retriever (like embedding similarity), because their strengths are complementary. It is also known as \"hybrid search\". The sparse retriever is good at finding relevant documents based on keywords, while the dense retriever is good at finding relevant documents based on semantic similarity."
"The most common pattern is to combine a sparse retriever (like BM25) with a dense retriever (like embedding similarity), because their strengths are complementary. It is also known as \"hybrid search\". The sparse retriever is good at finding relevant documents based on keywords, while the dense retriever is good at finding relevant documents based on semantic similarity.\n",
"\n",
"## Basic usage\n",
"\n",
"Below we demonstrate ensembling of a [BM25Retriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.bm25.BM25Retriever.html) with a retriever derived from the [FAISS vector store](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html)."
]
},
{
@@ -24,22 +28,15 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers import EnsembleRetriever\n",
"from langchain_community.retrievers import BM25Retriever\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai import OpenAIEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"doc_list_1 = [\n",
" \"I like apples\",\n",
" \"I like oranges\",\n",
@@ -71,19 +68,19 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='You like apples', metadata={'source': 2}),\n",
" Document(page_content='I like apples', metadata={'source': 1}),\n",
" Document(page_content='You like oranges', metadata={'source': 2}),\n",
" Document(page_content='Apples and oranges are fruits', metadata={'source': 1})]"
"[Document(page_content='I like apples', metadata={'source': 1}),\n",
" Document(page_content='You like apples', metadata={'source': 2}),\n",
" Document(page_content='Apples and oranges are fruits', metadata={'source': 1}),\n",
" Document(page_content='You like oranges', metadata={'source': 2})]"
]
},
"execution_count": 15,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -99,24 +96,17 @@
"source": [
"## Runtime Configuration\n",
"\n",
"We can also configure the retrievers at runtime. In order to do this, we need to mark the fields as configurable"
"We can also configure the individual retrievers at runtime using [configurable fields](/docs/how_to/configure). Below we update the \"top-k\" parameter for the FAISS retriever specifically:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import ConfigurableField"
]
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import ConfigurableField\n",
"\n",
"faiss_retriever = faiss_vectorstore.as_retriever(\n",
" search_kwargs={\"k\": 2}\n",
").configurable_fields(\n",
@@ -125,15 +115,8 @@
" name=\"Search Kwargs\",\n",
" description=\"The search kwargs to use\",\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
")\n",
"\n",
"ensemble_retriever = EnsembleRetriever(\n",
" retrievers=[bm25_retriever, faiss_retriever], weights=[0.5, 0.5]\n",
")"
@@ -141,9 +124,22 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 6,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='I like apples', metadata={'source': 1}),\n",
" Document(page_content='You like apples', metadata={'source': 2}),\n",
" Document(page_content='Apples and oranges are fruits', metadata={'source': 1})]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"config = {\"configurable\": {\"search_kwargs_faiss\": {\"k\": 1}}}\n",
"docs = ensemble_retriever.invoke(\"apples\", config=config)\n",
@@ -181,7 +177,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,


@@ -151,7 +151,7 @@ Retrievers are responsible for taking a query and returning relevant documents.
- [How to: write a custom retriever class](/docs/how_to/custom_retriever)
- [How to: add similarity scores to retriever results](/docs/how_to/add_scores_retriever)
- [How to: combine the results from multiple retrievers](/docs/how_to/ensemble_retriever)
- [How to: reorder retrieved results to put most relevant documents not in the middle](/docs/how_to/long_context_reorder)
- [How to: reorder retrieved results to mitigate the "lost in the middle" effect](/docs/how_to/long_context_reorder)
- [How to: generate multiple embeddings per document](/docs/how_to/multi_vector)
- [How to: retrieve the whole document for a chunk](/docs/how_to/parent_document_retriever)
- [How to: generate metadata filters](/docs/how_to/self_query)


@@ -60,7 +60,7 @@
" * document addition by id (`add_documents` method with `ids` argument)\n",
" * delete by id (`delete` method with `ids` argument)\n",
"\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `Yellowbrick`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
" \n",
"## Caution\n",
"\n",


@@ -2,169 +2,226 @@
"cells": [
{
"cell_type": "markdown",
"id": "e5715368",
"id": "90dff237-bc28-4185-a2c0-d5203bbdeacd",
"metadata": {},
"source": [
"# How to track token usage for LLMs\n",
"\n",
"This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API.\n",
"Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n",
"\n",
"Let's first look at an extremely simple example of tracking token usage for a single LLM call."
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [LLMs](/docs/concepts/#llms)\n",
":::\n",
"\n",
"## Using LangSmith\n",
"\n",
"You can use [LangSmith](https://www.langchain.com/langsmith) to help track token usage in your LLM application. See the [LangSmith quick start guide](https://docs.smith.langchain.com/).\n",
"\n",
"## Using callbacks\n",
"\n",
"There are some API-specific callback context managers that allow you to track token usage across multiple calls. You'll need to check whether such an integration is available for your particular model.\n",
"\n",
"If such an integration is not available for your model, you can create a custom callback manager by adapting the implementation of the [OpenAI callback manager](https://api.python.langchain.com/en/latest/_modules/langchain_community/callbacks/openai_info.html#OpenAICallbackHandler).\n",
"\n",
"### OpenAI\n",
"\n",
"Let's first look at an extremely simple example of tracking token usage for a single Chat model call.\n",
"\n",
":::{.callout-danger}\n",
"\n",
"The callback handler does not currently support streaming token counts for legacy language models (e.g., `langchain_openai.OpenAI`). For support in a streaming context, refer to the corresponding guide for chat models [here](/docs/how_to/chat_token_usage_tracking).\n",
"\n",
":::"
]
},
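As a rough sketch of such an adaptation, the handler below accumulates the `token_usage` counts that many providers report in `llm_output`; the class name is hypothetical and cost calculation is omitted:

```python
# Hypothetical minimal token-usage handler, adapted from the pattern in
# OpenAICallbackHandler. Field names follow common provider conventions.
from typing import Any

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult


class TokenUsageHandler(BaseCallbackHandler):
    def __init__(self) -> None:
        self.prompt_tokens = 0
        self.completion_tokens = 0
        self.total_tokens = 0

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        # llm_output is provider-specific; guard against it being None.
        usage = (response.llm_output or {}).get("token_usage", {})
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)
        self.total_tokens += usage.get("total_tokens", 0)


# Usage: llm.invoke("Tell me a joke", config={"callbacks": [TokenUsageHandler()]})
```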
{
"cell_type": "markdown",
"id": "f790edd9-823e-4bc5-befa-e9529c7237a0",
"metadata": {},
"source": [
"### Single call"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9455db35",
"id": "2eebbee2-6ca1-4fa8-a3aa-0376888ceefb",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything.\n",
"---\n",
"\n",
"Total Tokens: 18\n",
"Prompt Tokens: 4\n",
"Completion Tokens: 14\n",
"Total Cost (USD): $3.4e-05\n"
]
}
],
"source": [
"from langchain_community.callbacks import get_openai_callback\n",
"from langchain_openai import OpenAI"
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")\n",
"\n",
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" print(result)\n",
" print(\"---\")\n",
"print()\n",
"\n",
"print(f\"Total Tokens: {cb.total_tokens}\")\n",
"print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
"print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
"print(f\"Total Cost (USD): ${cb.total_cost}\")"
]
},
{
"cell_type": "markdown",
"id": "7df3be35-dd97-4e3a-bd51-52434ab2249d",
"metadata": {},
"source": [
"### Multiple calls\n",
"\n",
"Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence to a chain. This will also work for an agent which may use multiple steps."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d1c55cc9",
"id": "3ec10419-294c-44bf-af85-86aabf457cb6",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"Why did the chicken go to the seance?\n",
"\n",
"To talk to the other side of the road!\n",
"--\n",
"\n",
"\n",
"Why did the fish need a lawyer?\n",
"\n",
"Because it got caught in a net!\n",
"\n",
"---\n",
"Total Tokens: 50\n",
"Prompt Tokens: 12\n",
"Completion Tokens: 38\n",
"Total Cost (USD): $9.400000000000001e-05\n"
]
}
],
"source": [
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\", n=2, best_of=2)"
"from langchain_community.callbacks import get_openai_callback\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")\n",
"\n",
"template = PromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"chain = template | llm\n",
"\n",
"with get_openai_callback() as cb:\n",
" response = chain.invoke({\"topic\": \"birds\"})\n",
" print(response)\n",
" response = chain.invoke({\"topic\": \"fish\"})\n",
" print(\"--\")\n",
" print(response)\n",
"\n",
"\n",
"print()\n",
"print(\"---\")\n",
"print(f\"Total Tokens: {cb.total_tokens}\")\n",
"print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
"print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
"print(f\"Total Cost (USD): ${cb.total_cost}\")"
]
},
{
"cell_type": "markdown",
"id": "ad7a3fba-9fac-4222-8f87-d1d276d27d6e",
"metadata": {
"tags": []
},
"source": [
"## Streaming\n",
"\n",
":::{.callout-danger}\n",
"\n",
"`get_openai_callback` does not currently support streaming token counts for legacy language models (e.g., `langchain_openai.OpenAI`). If you want to count tokens correctly in a streaming context, there are a number of options:\n",
"\n",
"- Use chat models as described in [this guide](/docs/how_to/chat_token_usage_tracking);\n",
"- Implement a [custom callback handler](/docs/how_to/custom_callbacks/) that uses appropriate tokenizers to count the tokens;\n",
"- Use a monitoring platform such as [LangSmith](https://www.langchain.com/langsmith).\n",
":::\n",
"\n",
"Note that when using legacy language models in a streaming context, token counts are not updated:"
]
},
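As a sketch of the tokenizer option, one could count streamed tokens with `tiktoken` (assumed installed), reusing the `llm` defined above. Counts are approximate, since chunk boundaries need not align with token boundaries:

```python
# Approximate completion-token count for a streamed legacy LLM response.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo-instruct")

completion_tokens = 0
for chunk in llm.stream("Tell me a joke"):
    completion_tokens += len(enc.encode(chunk))
print(f"Completion tokens (approx.): {completion_tokens}")
```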
{
"cell_type": "code",
"execution_count": 3,
"id": "31667d54",
"metadata": {},
"id": "cd61ed79-7858-49bb-afb5-d41291f597ba",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tokens Used: 37\n",
"\tPrompt Tokens: 4\n",
"\tCompletion Tokens: 33\n",
"Successful Requests: 1\n",
"Total Cost (USD): $7.2e-05\n"
"\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything!\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything.\n",
"---\n",
"\n",
"Total Tokens: 0\n",
"Prompt Tokens: 0\n",
"Completion Tokens: 0\n",
"Total Cost (USD): $0.0\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" print(cb)"
]
},
{
"cell_type": "markdown",
"id": "c0ab6d27",
"metadata": {},
"source": [
"Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e09420f4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"72\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" result2 = llm.invoke(\"Tell me a joke\")\n",
" print(cb.total_tokens)"
]
},
{
"cell_type": "markdown",
"id": "d8186e7b",
"metadata": {},
"source": [
"If a chain or agent with multiple steps in it is used, it will track all those steps."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "5d1125c6",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentType, initialize_agent, load_tools\n",
"from langchain_community.callbacks import get_openai_callback\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n",
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2f98c536",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\n",
"Action: Search\n",
"Action Input: \"Olivia Wilde boyfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m[\"Olivia Wilde and Harry Styles took fans by surprise with their whirlwind romance, which began when they met on the set of Don't Worry Darling.\", 'Olivia Wilde started dating Harry Styles after ending her years-long engagement to Jason Sudeikis — see their relationship timeline.', 'Olivia Wilde and Harry Styles were spotted early on in their relationship walking around London. (. Image ...', \"Looks like Olivia Wilde and Jason Sudeikis are starting 2023 on good terms. Amid their highly publicized custody battle and the actress' ...\", 'The two started dating after Wilde split up with actor Jason Sudeikisin 2020. However, their relationship came to an end last November.', \"Olivia Wilde and Harry Styles started dating during the filming of Don't Worry Darling. While the movie got a lot of backlash because of the ...\", \"Here's what we know so far about Harry Styles and Olivia Wilde's relationship.\", 'Olivia and the Grammy winner kept their romance out of the spotlight as their relationship began just two months after her split from ex-fiancé ...', \"Harry Styles and Olivia Wilde first met on the set of Don't Worry Darling and stepped out as a couple in January 2021. Relive all their biggest relationship ...\"]\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m Harry Styles is Olivia Wilde's boyfriend.\n",
"Action: Search\n",
"Action Input: \"Harry Styles age\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m29 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 29 raised to the 0.23 power.\n",
"Action: Calculator\n",
"Action Input: 29^0.23\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 2.169459462491557\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Total Tokens: 2205\n",
"Prompt Tokens: 2053\n",
"Completion Tokens: 152\n",
"Total Cost (USD): $0.0441\n"
]
}
],
"source": [
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")\n",
"\n",
"with get_openai_callback() as cb:\n",
" response = agent.run(\n",
" \"Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?\"\n",
" )\n",
" print(f\"Total Tokens: {cb.total_tokens}\")\n",
" print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
" print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
" print(f\"Total Cost (USD): ${cb.total_cost}\")"
" for chunk in llm.stream(\"Tell me a joke\"):\n",
" print(chunk, end=\"\", flush=True)\n",
" print(result)\n",
" print(\"---\")\n",
"print()\n",
"\n",
"print(f\"Total Tokens: {cb.total_tokens}\")\n",
"print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
"print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
"print(f\"Total Cost (USD): ${cb.total_cost}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "80ca77a3",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -183,7 +240,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,


@@ -5,28 +5,38 @@
"id": "fc0db1bc",
"metadata": {},
"source": [
"# How to reorder retrieved results to put most relevant documents not in the middle\n",
"# How to reorder retrieved results to mitigate the \"lost in the middle\" effect\n",
"\n",
"No matter the architecture of your model, there is a substantial performance degradation when you include 10+ retrieved documents.\n",
"In brief: When models must access relevant information in the middle of long contexts, they tend to ignore the provided documents.\n",
"See: https://arxiv.org/abs/2307.03172\n",
"Substantial performance degradations in [RAG](/docs/tutorials/rag) applications have been [documented](https://arxiv.org/abs/2307.03172) as the number of retrieved documents grows (e.g., beyond ten). In brief: models are liable to miss relevant information in the middle of long contexts.\n",
"\n",
"To avoid this issue you can re-order documents after retrieval to avoid performance degradation."
"By contrast, queries against vector stores will typically return documents in descending order of relevance (e.g., as measured by cosine similarity of [embeddings](/docs/concepts/#embedding-models)).\n",
"\n",
"To mitigate the [\"lost in the middle\"](https://arxiv.org/abs/2307.03172) effect, you can re-order documents after retrieval such that the most relevant documents are positioned at extrema (e.g., the first and last pieces of context), and the least relevant documents are positioned in the middle. In some cases this can help surface the most relevant information to LLMs.\n",
"\n",
"The [LongContextReorder](https://api.python.langchain.com/en/latest/document_transformers/langchain_community.document_transformers.long_context_reorder.LongContextReorder.html) document transformer implements this re-ordering procedure. Below we demonstrate an example."
]
},
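For intuition, here is a simplified sketch of the re-ordering: walk the documents from least to most relevant, alternately placing them at the front and back, so the most relevant items end up at the extrema. This mirrors the documented behavior, not necessarily the library's exact source:

```python
def reorder_lost_in_the_middle(docs_by_relevance):
    """Input: most relevant first. Output: most relevant at the ends."""
    reordered = []
    for i, doc in enumerate(reversed(docs_by_relevance)):
        if i % 2 == 0:
            reordered.insert(0, doc)  # even steps go to the front
        else:
            reordered.append(doc)  # odd steps go to the back
    return reordered


print(reorder_lost_in_the_middle([1, 2, 3, 4, 5]))  # -> [1, 3, 5, 4, 2]
```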
{
"cell_type": "code",
"execution_count": null,
"id": "74d1ebe8",
"id": "2074fdaa-edff-468a-970f-6f5f26e93d4a",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet sentence-transformers langchain-chroma langchain langchain-openai langchain-huggingface > /dev/null"
]
},
{
"cell_type": "markdown",
"id": "c97eaaf2-34b7-4770-9949-e1abc4ca5226",
"metadata": {},
"source": [
"First we embed some artificial documents and index them in an (in-memory) [Chroma](/docs/integrations/providers/chroma/) vector store. We will use [Hugging Face](/docs/integrations/text_embedding/huggingfacehub/) embeddings, but any LangChain vector store or embeddings model will suffice."
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "49cbcd8e",
"metadata": {},
"outputs": [
@@ -45,20 +55,14 @@
" Document(page_content='This is just a random text.')]"
]
},
"execution_count": 3,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import LLMChain, StuffDocumentsChain\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.document_transformers import (\n",
" LongContextReorder,\n",
")\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_huggingface import HuggingFaceEmbeddings\n",
"from langchain_openai import OpenAI\n",
"\n",
"# Get embeddings.\n",
"embeddings = HuggingFaceEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n",
@@ -83,14 +87,22 @@
"query = \"What can you tell me about the Celtics?\"\n",
"\n",
"# Get relevant documents ordered by relevance score\n",
"docs = retriever.get_relevant_documents(query)\n",
"docs = retriever.invoke(query)\n",
"docs"
]
},
{
"cell_type": "markdown",
"id": "175d031a-43fa-42f4-93c4-2ba52c3c3ee5",
"metadata": {},
"source": [
"Note that documents are returned in descending order of relevance to the query. The `LongContextReorder` document transformer will implement the re-ordering described above:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "34fb9d6e",
"execution_count": 3,
"id": "9a1181f2-a3dc-4614-9233-2196ab65939e",
"metadata": {},
"outputs": [
{
@@ -108,12 +120,14 @@
" Document(page_content='This is a document about the Boston Celtics')]"
]
},
"execution_count": 4,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.document_transformers import LongContextReorder\n",
"\n",
"# Reorder the documents:\n",
"# Less relevant document will be at the middle of the list and more\n",
"# relevant elements at beginning / end.\n",
@@ -125,58 +139,54 @@
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ceccab87",
"cell_type": "markdown",
"id": "a8d2ef0c-c397-4d8d-8118-3f7acf86d241",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nThe Celtics are referenced in four of the nine text extracts. They are mentioned as the favorite team of the author, the winner of a basketball game, a team with one of the best players, and a team with a specific player. Additionally, the last extract states that the document is about the Boston Celtics. This suggests that the Celtics are a basketball team, possibly from Boston, that is well-known and has had successful players and games in the past. '"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# We prepare and run a custom Stuff chain with reordered docs as context.\n",
"\n",
"# Override prompts\n",
"document_prompt = PromptTemplate(\n",
" input_variables=[\"page_content\"], template=\"{page_content}\"\n",
")\n",
"document_variable_name = \"context\"\n",
"llm = OpenAI()\n",
"stuff_prompt_override = \"\"\"Given this text extracts:\n",
"-----\n",
"{context}\n",
"-----\n",
"Please answer the following question:\n",
"{query}\"\"\"\n",
"prompt = PromptTemplate(\n",
" template=stuff_prompt_override, input_variables=[\"context\", \"query\"]\n",
")\n",
"\n",
"# Instantiate the chain\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
"chain = StuffDocumentsChain(\n",
" llm_chain=llm_chain,\n",
" document_prompt=document_prompt,\n",
" document_variable_name=document_variable_name,\n",
")\n",
"chain.run(input_documents=reordered_docs, query=query)"
"Below, we show how to incorporate the re-ordered documents into a simple question-answering chain:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d4696a97",
"execution_count": 5,
"id": "8bbea705-d5b9-4ed5-9957-e12547283622",
"metadata": {},
"outputs": [],
"source": []
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"The Celtics are a professional basketball team and one of the most iconic franchises in the NBA. They are highly regarded and have a large fan base. The team has had many successful seasons and is often considered one of the top teams in the league. They have a strong history and have produced many great players, such as Larry Bird and L. Kornet. The team is based in Boston and is often referred to as the Boston Celtics.\n"
]
}
],
"source": [
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI()\n",
"\n",
"prompt_template = \"\"\"\n",
"Given these texts:\n",
"-----\n",
"{context}\n",
"-----\n",
"Please answer the following question:\n",
"{query}\n",
"\"\"\"\n",
"\n",
"prompt = PromptTemplate(\n",
" template=prompt_template,\n",
" input_variables=[\"context\", \"query\"],\n",
")\n",
"\n",
"# Create and invoke the chain:\n",
"chain = create_stuff_documents_chain(llm, prompt)\n",
"response = chain.invoke({\"context\": reordered_docs, \"query\": query})\n",
"print(response)"
]
}
],
"metadata": {
@@ -195,7 +205,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,

File diff suppressed because it is too large


@@ -5,33 +5,36 @@
"id": "d9172545",
"metadata": {},
"source": [
"# How to use the MultiVector Retriever\n",
"# How to retrieve using multiple vectors per document\n",
"\n",
"It can often be beneficial to store multiple vectors per document. There are multiple use cases where this is beneficial. LangChain has a base `MultiVectorRetriever` which makes querying this type of setup easy. A lot of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the `MultiVectorRetriever`.\n",
"It can often be useful to store multiple vectors per document. There are multiple use cases where this is beneficial. For example, we can embed multiple chunks of a document and associate those embeddings with the parent document, allowing retriever hits on the chunks to return the larger document.\n",
"\n",
"LangChain implements a base [MultiVectorRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.multi_vector.MultiVectorRetriever.html), which simplifies this process. Much of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the `MultiVectorRetriever`.\n",
"\n",
"The methods to create multiple vectors per document include:\n",
"\n",
"- Smaller chunks: split a document into smaller chunks, and embed those (this is ParentDocumentRetriever).\n",
"- Smaller chunks: split a document into smaller chunks, and embed those (this is [ParentDocumentRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.parent_document_retriever.ParentDocumentRetriever.html)).\n",
"- Summary: create a summary for each document, embed that along with (or instead of) the document.\n",
"- Hypothetical questions: create hypothetical questions that each document would be appropriate to answer, embed those along with (or instead of) the document.\n",
"\n",
"Note that this also enables another method of adding embeddings - manually. This is useful because you can explicitly add questions or queries that should lead to a document being recovered, giving you more control.\n",
"\n",
"Note that this also enables another method of adding embeddings - manually. This is great because you can explicitly add questions or queries that should lead to a document being recovered, giving you more control."
"Below we walk through an example. First we instantiate some documents. We will index them in an (in-memory) [Chroma](/docs/integrations/providers/chroma/) vector store using [OpenAI](https://python.langchain.com/v0.2/docs/integrations/text_embedding/openai/) embeddings, but any LangChain vector store or embeddings model will suffice."
]
},
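Roughly, retrieval resolves small indexed documents back to their parents by reading an identifier from each hit's metadata and looking it up in the document store. A simplified sketch (not the library source; the `id_key` default is illustrative):

```python
# Simplified sketch of MultiVectorRetriever-style parent lookup.
def retrieve_parents(vectorstore, docstore, query, id_key="doc_id"):
    sub_docs = vectorstore.similarity_search(query)
    parent_ids = []
    for d in sub_docs:
        pid = d.metadata[id_key]
        if pid not in parent_ids:  # keep order, drop duplicate parents
            parent_ids.append(pid)
    # mget returns the parent documents stored under those ids.
    return docstore.mget(parent_ids)
```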
{
"cell_type": "code",
"execution_count": null,
"id": "09cecd95-3499-465a-895a-944627ffb77f",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-chroma langchain langchain-openai > /dev/null"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "eed469be",
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers.multi_vector import MultiVectorRetriever"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "18c1421a",
"metadata": {},
"outputs": [],
@@ -40,25 +43,22 @@
"from langchain_chroma import Chroma\n",
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6d869496",
"metadata": {},
"outputs": [],
"source": [
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"loaders = [\n",
" TextLoader(\"../../paul_graham_essay.txt\"),\n",
" TextLoader(\"paul_graham_essay.txt\"),\n",
" TextLoader(\"state_of_the_union.txt\"),\n",
"]\n",
"docs = []\n",
"for loader in loaders:\n",
" docs.extend(loader.load())\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000)\n",
"docs = text_splitter.split_documents(docs)"
"docs = text_splitter.split_documents(docs)\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(\n",
" collection_name=\"full_documents\", embedding_function=OpenAIEmbeddings()\n",
")"
]
},
{
@@ -68,52 +68,54 @@
"source": [
"## Smaller chunks\n",
"\n",
"Often times it can be useful to retrieve larger chunks of information, but embed smaller chunks. This allows for embeddings to capture the semantic meaning as closely as possible, but for as much context as possible to be passed downstream. Note that this is what the `ParentDocumentRetriever` does. Here we show what is going on under the hood."
"Often times it can be useful to retrieve larger chunks of information, but embed smaller chunks. This allows for embeddings to capture the semantic meaning as closely as possible, but for as much context as possible to be passed downstream. Note that this is what the [ParentDocumentRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.parent_document_retriever.ParentDocumentRetriever.html) does. Here we show what is going on under the hood.\n",
"\n",
"We will make a distinction between the vector store, which indexes embeddings of the (sub) documents, and the document store, which houses the \"parent\" documents and associates them with an identifier."
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"id": "0e7b6b45",
"metadata": {},
"outputs": [],
"source": [
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(\n",
" collection_name=\"full_documents\", embedding_function=OpenAIEmbeddings()\n",
")\n",
"import uuid\n",
"\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"\n",
"# The storage layer for the parent documents\n",
"store = InMemoryByteStore()\n",
"id_key = \"doc_id\"\n",
"\n",
"# The retriever (empty to start)\n",
"retriever = MultiVectorRetriever(\n",
" vectorstore=vectorstore,\n",
" byte_store=store,\n",
" id_key=id_key,\n",
")\n",
"import uuid\n",
"\n",
"doc_ids = [str(uuid.uuid4()) for _ in docs]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "72a36491",
"cell_type": "markdown",
"id": "d4feded4-856a-4282-91c3-53aabc62e6ff",
"metadata": {},
"outputs": [],
"source": [
"# The splitter to use to create smaller chunks\n",
"child_text_splitter = RecursiveCharacterTextSplitter(chunk_size=400)"
"We next generate the \"sub\" documents by splitting the original documents. Note that we store the document identifier in the `metadata` of the corresponding [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) object."
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 3,
"id": "5d23247d",
"metadata": {},
"outputs": [],
"source": [
"# The splitter to use to create smaller chunks\n",
"child_text_splitter = RecursiveCharacterTextSplitter(chunk_size=400)\n",
"\n",
"sub_docs = []\n",
"for i, doc in enumerate(docs):\n",
" _id = doc_ids[i]\n",
@@ -123,9 +125,17 @@
" sub_docs.extend(_sub_docs)"
]
},
{
"cell_type": "markdown",
"id": "8e0634f8-90d5-4250-981a-5257c8a6d455",
"metadata": {},
"source": [
"Finally, we index the documents in our vector store and document store:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 4,
"id": "92ed5861",
"metadata": {},
"outputs": [],
@@ -134,31 +144,46 @@
"retriever.docstore.mset(list(zip(doc_ids, docs)))"
]
},
{
"cell_type": "markdown",
"id": "14c48c6d-850c-4317-9b6e-1ade92f2f710",
"metadata": {},
"source": [
"The vector store alone will retrieve small chunks:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 5,
"id": "8afed60c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.', metadata={'doc_id': '2fd77862-9ed5-4fad-bf76-e487b747b333', 'source': 'state_of_the_union.txt'})"
"Document(page_content='Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.', metadata={'doc_id': '064eca46-a4c4-4789-8e3b-583f9597e54f', 'source': 'state_of_the_union.txt'})"
]
},
"execution_count": 8,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Vectorstore alone retrieves the small chunks\n",
"retriever.vectorstore.similarity_search(\"justice breyer\")[0]"
]
},
{
"cell_type": "markdown",
"id": "717097c7-61d9-4306-8625-ef8f1940c127",
"metadata": {},
"source": [
"Whereas the retriever will return the larger parent document:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 6,
"id": "3c9017f1",
"metadata": {},
"outputs": [
@@ -168,14 +193,13 @@
"9875"
]
},
"execution_count": 9,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Retriever returns larger chunks\n",
"len(retriever.get_relevant_documents(\"justice breyer\")[0].page_content)"
"len(retriever.invoke(\"justice breyer\")[0].page_content)"
]
},
{
@@ -183,12 +207,12 @@
"id": "cdef8339-f9fa-4b3b-955f-ad9dbdf2734f",
"metadata": {},
"source": [
"The default search type the retriever performs on the vector database is a similarity search. LangChain Vector Stores also support searching via [Max Marginal Relevance](https://api.python.langchain.com/en/latest/vectorstores/langchain_core.vectorstores.VectorStore.html#langchain_core.vectorstores.VectorStore.max_marginal_relevance_search) so if you want this instead you can just set the `search_type` property as follows:"
"The default search type the retriever performs on the vector database is a similarity search. LangChain vector stores also support searching via [Max Marginal Relevance](https://api.python.langchain.com/en/latest/vectorstores/langchain_core.vectorstores.VectorStore.html#langchain_core.vectorstores.VectorStore.max_marginal_relevance_search). This can be controlled via the `search_type` parameter of the retriever:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 7,
"id": "36739460-a737-4a8e-b70f-50bf8c8eaae7",
"metadata": {},
"outputs": [
@@ -198,7 +222,7 @@
"9875"
]
},
"execution_count": 10,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -208,7 +232,7 @@
"\n",
"retriever.search_type = SearchType.mmr\n",
"\n",
"len(retriever.get_relevant_documents(\"justice breyer\")[0].page_content)"
"len(retriever.invoke(\"justice breyer\")[0].page_content)"
]
},
{
@@ -216,14 +240,37 @@
"id": "d6a7ae0d",
"metadata": {},
"source": [
"## Summary\n",
"## Associating summaries with a document for retrieval\n",
"\n",
"Oftentimes a summary may be able to distill more accurately what a chunk is about, leading to better retrieval. Here we show how to create summaries, and then embed those."
"A summary may be able to distill more accurately what a chunk is about, leading to better retrieval. Here we show how to create summaries, and then embed those.\n",
"\n",
"We construct a simple [chain](/docs/how_to/sequence) that will receive an input [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) object and generate a summary using a LLM.\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" />\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 8,
"id": "6589291f-55bb-4e9a-b4ff-08f2506ed641",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1433dff4",
"metadata": {},
"outputs": [],
@@ -233,27 +280,26 @@
"from langchain_core.documents import Document\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "35b30390",
"metadata": {},
"outputs": [],
"source": [
"\n",
"chain = (\n",
" {\"doc\": lambda x: x.page_content}\n",
" | ChatPromptTemplate.from_template(\"Summarize the following document:\\n\\n{doc}\")\n",
" | ChatOpenAI(max_retries=0)\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3faa9fde-1b09-4849-a815-8b2e89c30a02",
"metadata": {},
"source": [
"Note that we can [batch](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) the chain accross documents:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 10,
"id": "41a2a738",
"metadata": {},
"outputs": [],
@@ -261,9 +307,17 @@
"summaries = chain.batch(docs, {\"max_concurrency\": 5})"
]
},
{
"cell_type": "markdown",
"id": "73ef599e-140b-4905-8b62-6c52cdde1852",
"metadata": {},
"source": [
"We can then initialize a `MultiVectorRetriever` as before, indexing the summaries in our vector store, and retaining the original documents in our document store:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 11,
"id": "7ac5e4b1",
"metadata": {},
"outputs": [],
@@ -279,29 +333,13 @@
" byte_store=store,\n",
" id_key=id_key,\n",
")\n",
"doc_ids = [str(uuid.uuid4()) for _ in docs]"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "0d93309f",
"metadata": {},
"outputs": [],
"source": [
"doc_ids = [str(uuid.uuid4()) for _ in docs]\n",
"\n",
"summary_docs = [\n",
" Document(page_content=s, metadata={id_key: doc_ids[i]})\n",
" for i, s in enumerate(summaries)\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "6d5edf0d",
"metadata": {},
"outputs": [],
"source": [
"]\n",
"\n",
"retriever.vectorstore.add_documents(summary_docs)\n",
"retriever.docstore.mset(list(zip(doc_ids, docs)))"
]
@@ -320,50 +358,48 @@
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "299232d6",
"cell_type": "markdown",
"id": "f0274892-29c1-4616-9040-d23f9d537526",
"metadata": {},
"outputs": [],
"source": [
"sub_docs = vectorstore.similarity_search(\"justice breyer\")"
"Querying the vector store will return summaries:"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "10e404c0",
"execution_count": 12,
"id": "299232d6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content=\"The document is a speech given by President Biden addressing various issues and outlining his agenda for the nation. He highlights the importance of nominating a Supreme Court justice and introduces his nominee, Judge Ketanji Brown Jackson. He emphasizes the need to secure the border and reform the immigration system, including providing a pathway to citizenship for Dreamers and essential workers. The President also discusses the protection of women's rights, including access to healthcare and the right to choose. He calls for the passage of the Equality Act to protect LGBTQ+ rights. Additionally, President Biden discusses the need to address the opioid epidemic, improve mental health services, support veterans, and fight against cancer. He expresses optimism for the future of America and the strength of the American people.\", metadata={'doc_id': '56345bff-3ead-418c-a4ff-dff203f77474'})"
"Document(page_content=\"President Biden recently nominated Judge Ketanji Brown Jackson to serve on the United States Supreme Court, emphasizing her qualifications and broad support. The President also outlined a plan to secure the border, fix the immigration system, protect women's rights, support LGBTQ+ Americans, and advance mental health services. He highlighted the importance of bipartisan unity in passing legislation, such as the Violence Against Women Act. The President also addressed supporting veterans, particularly those impacted by exposure to burn pits, and announced plans to expand benefits for veterans with respiratory cancers. Additionally, he proposed a plan to end cancer as we know it through the Cancer Moonshot initiative. President Biden expressed optimism about the future of America and emphasized the strength of the American people in overcoming challenges.\", metadata={'doc_id': '84015b1b-980e-400a-94d8-cf95d7e079bd'})"
]
},
"execution_count": 19,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sub_docs = retriever.vectorstore.similarity_search(\"justice breyer\")\n",
"\n",
"sub_docs[0]"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "e4cce5c2",
"cell_type": "markdown",
"id": "e4f77ac5-2926-4f60-aad5-b2067900dff9",
"metadata": {},
"outputs": [],
"source": [
"retrieved_docs = retriever.get_relevant_documents(\"justice breyer\")"
"Whereas the retriever will return the larger source document:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "c8570dbb",
"execution_count": 13,
"id": "e4cce5c2",
"metadata": {},
"outputs": [
{
@@ -372,12 +408,14 @@
"9194"
]
},
"execution_count": 21,
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retrieved_docs = retriever.invoke(\"justice breyer\")\n",
"\n",
"len(retrieved_docs[0].page_content)"
]
},
@@ -388,42 +426,28 @@
"source": [
"## Hypothetical Queries\n",
"\n",
"An LLM can also be used to generate a list of hypothetical questions that could be asked of a particular document. These questions can then be embedded"
"An LLM can also be used to generate a list of hypothetical questions that could be asked of a particular document, which might bear close semantic similarity to relevant queries in a [RAG](/docs/tutorials/rag) application. These questions can then be embedded and associated with the documents to improve retrieval.\n",
"\n",
"Below, we use the [with_structured_output](/docs/how_to/structured_output/) method to structure the LLM output into a list of strings."
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "5219b085",
"execution_count": 16,
"id": "03d85234-c33a-4a43-861d-47328e1ec2ea",
"metadata": {},
"outputs": [],
"source": [
"functions = [\n",
" {\n",
" \"name\": \"hypothetical_questions\",\n",
" \"description\": \"Generate hypothetical questions\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"questions\": {\n",
" \"type\": \"array\",\n",
" \"items\": {\"type\": \"string\"},\n",
" },\n",
" },\n",
" \"required\": [\"questions\"],\n",
" },\n",
" }\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "523deb92",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers.openai_functions import JsonKeyOutputFunctionsParser\n",
"from typing import List\n",
"\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class HypotheticalQuestions(BaseModel):\n",
" \"\"\"Generate hypothetical questions.\"\"\"\n",
"\n",
" questions: List[str] = Field(..., description=\"List of questions\")\n",
"\n",
"\n",
"chain = (\n",
" {\"doc\": lambda x: x.page_content}\n",
@@ -431,28 +455,36 @@
" | ChatPromptTemplate.from_template(\n",
" \"Generate a list of exactly 3 hypothetical questions that the below document could be used to answer:\\n\\n{doc}\"\n",
" )\n",
" | ChatOpenAI(max_retries=0, model=\"gpt-4\").bind(\n",
" functions=functions, function_call={\"name\": \"hypothetical_questions\"}\n",
" | ChatOpenAI(max_retries=0, model=\"gpt-4o\").with_structured_output(\n",
" HypotheticalQuestions\n",
" )\n",
" | JsonKeyOutputFunctionsParser(key_name=\"questions\")\n",
" | (lambda x: x.questions)\n",
")"
]
},
{
"cell_type": "markdown",
"id": "6dddc40f-62af-413c-b944-f94a5e1f2f4e",
"metadata": {},
"source": [
"Invoking the chain on a single document demonstrates that it outputs a list of questions:"
]
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 17,
"id": "11d30554",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[\"What was the author's first experience with programming like?\",\n",
" 'Why did the author switch their focus from AI to Lisp during their graduate studies?',\n",
" 'What led the author to contemplate a career in art instead of computer science?']"
"[\"What impact did the IBM 1401 have on the author's early programming experiences?\",\n",
" \"How did the transition from using the IBM 1401 to microcomputers influence the author's programming journey?\",\n",
" \"What role did Lisp play in shaping the author's understanding and approach to AI?\"]"
]
},
"execution_count": 24,
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@@ -462,22 +494,24 @@
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "3eb2e48c",
"cell_type": "markdown",
"id": "dcffc572-7b20-4b77-857a-90ec360a8f7e",
"metadata": {},
"outputs": [],
"source": [
"hypothetical_questions = chain.batch(docs, {\"max_concurrency\": 5})"
"We can batch then batch the chain over all documents and assemble our vector store and document store as before:"
]
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 18,
"id": "b2cd6e75",
"metadata": {},
"outputs": [],
"source": [
"# Batch chain over documents to generate hypothetical questions\n",
"hypothetical_questions = chain.batch(docs, {\"max_concurrency\": 5})\n",
"\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(\n",
" collection_name=\"hypo-questions\", embedding_function=OpenAIEmbeddings()\n",
@@ -491,82 +525,67 @@
" byte_store=store,\n",
" id_key=id_key,\n",
")\n",
"doc_ids = [str(uuid.uuid4()) for _ in docs]"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "18831b3b",
"metadata": {},
"outputs": [],
"source": [
"doc_ids = [str(uuid.uuid4()) for _ in docs]\n",
"\n",
"\n",
"# Generate Document objects from hypothetical questions\n",
"question_docs = []\n",
"for i, question_list in enumerate(hypothetical_questions):\n",
" question_docs.extend(\n",
" [Document(page_content=s, metadata={id_key: doc_ids[i]}) for s in question_list]\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "224b24c5",
"metadata": {},
"outputs": [],
"source": [
" )\n",
"\n",
"\n",
"retriever.vectorstore.add_documents(question_docs)\n",
"retriever.docstore.mset(list(zip(doc_ids, docs)))"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "7b442b90",
"cell_type": "markdown",
"id": "75cba8ab-a06f-4545-85fc-cf49d0204b5e",
"metadata": {},
"outputs": [],
"source": [
"sub_docs = vectorstore.similarity_search(\"justice breyer\")"
"Note that querying the underlying vector store will retrieve hypothetical questions that are semantically similar to the input query:"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "089b5ad0",
"execution_count": 19,
"id": "7b442b90",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Who has been nominated to serve on the United States Supreme Court?', metadata={'doc_id': '0b3a349e-c936-4e77-9c40-0a39fc3e07f0'}),\n",
" Document(page_content=\"What was the context and content of Robert Morris' advice to the document's author in 2010?\", metadata={'doc_id': 'b2b2cdca-988a-4af1-ba47-46170770bc8c'}),\n",
" Document(page_content='How did personal circumstances influence the decision to pass on the leadership of Y Combinator?', metadata={'doc_id': 'b2b2cdca-988a-4af1-ba47-46170770bc8c'}),\n",
" Document(page_content='What were the reasons for the author leaving Yahoo in the summer of 1999?', metadata={'doc_id': 'ce4f4981-ca60-4f56-86f0-89466de62325'})]"
"[Document(page_content='What might be the potential benefits of nominating Circuit Court of Appeals Judge Ketanji Brown Jackson to the United States Supreme Court?', metadata={'doc_id': '43292b74-d1b8-4200-8a8b-ea0cb57fbcdb'}),\n",
" Document(page_content='How might the Bipartisan Infrastructure Law impact the economic competition between the U.S. and China?', metadata={'doc_id': '66174780-d00c-4166-9791-f0069846e734'}),\n",
" Document(page_content='What factors led to the creation of Y Combinator?', metadata={'doc_id': '72003c4e-4cc9-4f09-a787-0b541a65b38c'}),\n",
" Document(page_content='How did the ability to publish essays online change the landscape for writers and thinkers?', metadata={'doc_id': 'e8d2c648-f245-4bcc-b8d3-14e64a164b64'})]"
]
},
"execution_count": 30,
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sub_docs = retriever.vectorstore.similarity_search(\"justice breyer\")\n",
"\n",
"sub_docs"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "7594b24e",
"cell_type": "markdown",
"id": "63c32e43-5f4a-463b-a0c2-2101986f70e6",
"metadata": {},
"outputs": [],
"source": [
"retrieved_docs = retriever.get_relevant_documents(\"justice breyer\")"
"And invoking the retriever will return the corresponding document:"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "4c120c65",
"execution_count": 20,
"id": "7594b24e",
"metadata": {},
"outputs": [
{
@@ -575,22 +594,15 @@
"9194"
]
},
"execution_count": 32,
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retrieved_docs = retriever.invoke(\"justice breyer\")\n",
"len(retrieved_docs[0].page_content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "005072b8",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -609,7 +621,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,


@@ -220,7 +220,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.9.1"
}
},
"nbformat": 4,


@@ -36,12 +36,13 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "ede7fdc0-ef31-483d-bd67-32e4b5c5d527",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-community langchainhub langchain-chroma bs4"
"%%capture --no-stderr\n",
"%pip install --upgrade --quiet langchain langchain-community langchain-chroma bs4"
]
},
{
@@ -54,7 +55,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "143787ca-d8e6-4dc9-8281-4374f4d71720",
"metadata": {},
"outputs": [],
@@ -62,7 +63,8 @@
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"if not os.environ.get(\"OPENAI_API_KEY\"):\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# import dotenv\n",
"\n",
@@ -83,13 +85,14 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "07411adb-3722-4f65-ab7f-8f6f57663d11",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
"if not os.environ.get(\"LANGCHAIN_API_KEY\"):\n",
" os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
@@ -126,7 +129,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 4,
"id": "cb58f273-2111-4a9b-8932-9b64c95030c8",
"metadata": {},
"outputs": [],
@@ -157,13 +160,12 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 5,
"id": "820244ae-74b4-4593-b392-822979dd91b8",
"metadata": {},
"outputs": [],
"source": [
"import bs4\n",
"from langchain import hub\n",
"from langchain.chains import create_retrieval_chain\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"from langchain_chroma import Chroma\n",
@@ -202,7 +204,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 6,
"id": "2b685428-8b82-4af1-be4f-7232c5d55b73",
"metadata": {},
"outputs": [],
@@ -239,7 +241,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 7,
"id": "4c4b1695-6217-4ee8-abaf-7cc26366d988",
"metadata": {},
"outputs": [],
@@ -265,7 +267,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 8,
"id": "afef4385-f571-4874-8f52-3d475642f579",
"metadata": {},
"outputs": [],
@@ -314,7 +316,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 9,
"id": "9c3fb176-8d6a-4dc7-8408-6a22c5f7cc72",
"metadata": {},
"outputs": [],
@@ -343,17 +345,17 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 10,
"id": "1046c92f-21b3-4214-907d-92878d8cba23",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable and easier to accomplish. This process can be done using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in thinking step by step or exploring multiple reasoning possibilities at each step. Task decomposition can be facilitated by providing simple prompts to a language model, task-specific instructions, or human inputs.'"
"'Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable and easier to accomplish. This process can be done using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down tasks effectively. Task decomposition can be facilitated by providing simple prompts to a language model, task-specific instructions, or human inputs.'"
]
},
"execution_count": 7,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
@@ -369,17 +371,17 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 11,
"id": "0e89c75f-7ad7-4331-a2fe-57579eb8f840",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Task decomposition can be achieved through various methods, including using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down complex tasks into smaller steps. Common ways of task decomposition include providing simple prompts to a language model, task-specific instructions tailored to the specific task at hand, or incorporating human inputs to guide the decomposition process effectively.'"
"'Task decomposition can be achieved through various methods, including using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down tasks effectively. Common ways of task decomposition include providing simple prompts to a language model, task-specific instructions, or human inputs to break down complex tasks into smaller and more manageable steps. Additionally, task decomposition can involve utilizing resources like internet access for information gathering, long-term memory management, and GPT-3.5 powered agents for delegation of simple tasks.'"
]
},
"execution_count": 8,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@@ -401,7 +403,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 12,
"id": "7686b874-3a85-499f-82b5-28a85c4c768c",
"metadata": {},
"outputs": [
@@ -411,11 +413,11 @@
"text": [
"User: What is Task Decomposition?\n",
"\n",
"AI: Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable and easier to accomplish. This process can be done using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in thinking step by step or exploring multiple reasoning possibilities at each step. Task decomposition can be facilitated by providing simple prompts to a language model, task-specific instructions, or human inputs.\n",
"AI: Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable and easier to accomplish. This process can be done using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down tasks effectively. Task decomposition can be facilitated by providing simple prompts to a language model, task-specific instructions, or human inputs.\n",
"\n",
"User: What are common ways of doing it?\n",
"\n",
"AI: Task decomposition can be achieved through various methods, including using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down complex tasks into smaller steps. Common ways of task decomposition include providing simple prompts to a language model, task-specific instructions tailored to the specific task at hand, or incorporating human inputs to guide the decomposition process effectively.\n",
"AI: Task decomposition can be achieved through various methods, including using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down tasks effectively. Common ways of task decomposition include providing simple prompts to a language model, task-specific instructions, or human inputs to break down complex tasks into smaller and more manageable steps. Additionally, task decomposition can involve utilizing resources like internet access for information gathering, long-term memory management, and GPT-3.5 powered agents for delegation of simple tasks.\n",
"\n"
]
}
@@ -452,7 +454,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 13,
"id": "71c32048-1a41-465f-a9e2-c4affc332fd9",
"metadata": {},
"outputs": [],
@@ -552,17 +554,17 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 14,
"id": "6d0a7a73-d151-47d9-9e99-b4f3291c0322",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable. This process helps agents or models tackle difficult tasks by dividing them into more easily achievable subgoals. Task decomposition can be done through techniques like Chain of Thought or Tree of Thoughts, which guide the model in thinking step by step or exploring multiple reasoning possibilities at each step.'"
"'Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable. Techniques like Chain of Thought (CoT) and Tree of Thoughts help in decomposing hard tasks into multiple manageable tasks by instructing models to think step by step and explore multiple reasoning possibilities at each step. Task decomposition can be achieved through various methods such as using prompting techniques, task-specific instructions, or human inputs.'"
]
},
"execution_count": 2,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
@@ -578,17 +580,17 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 15,
"id": "17021822-896a-4513-a17d-1d20b1c5381c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Common ways of task decomposition include using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide models in breaking down complex tasks into smaller steps. This can be achieved through simple prompting with LLMs, task-specific instructions, or human inputs to help the model understand and navigate the task effectively. Task decomposition aims to enhance model performance on complex tasks by utilizing more test-time computation and shedding light on the model's thinking process.\""
"'Task decomposition can be done in common ways such as using prompting techniques like Chain of Thought (CoT) or Tree of Thoughts, which instruct models to think step by step and explore multiple reasoning possibilities at each step. Another way is to provide task-specific instructions, such as asking to \"Write a story outline\" for writing a novel, to guide the decomposition process. Additionally, task decomposition can also involve human inputs to break down complex tasks into smaller and simpler steps.'"
]
},
"execution_count": 3,
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
@@ -618,7 +620,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 16,
"id": "809cc747-2135-40a2-8e73-e4556343ee64",
"metadata": {},
"outputs": [],
@@ -646,14 +648,14 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 17,
"id": "1726d151-4653-4c72-a187-a14840add526",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.prebuilt import chat_agent_executor\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent_executor = chat_agent_executor.create_tool_calling_executor(llm, tools)"
"agent_executor = create_react_agent(llm, tools)"
]
},
{
@@ -666,19 +668,26 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 18,
"id": "52ae46d9-43f7-481b-96d5-df750be3ad65",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Error in LangChainTracer.on_tool_end callback: TracerException(\"Found chain run at ID 5cd28d13-88dd-4eac-a465-3770ac27eff6, but expected {'tool'} run.\")\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_wxRrUmNbaNny8wh9JIb5uCRB', 'function': {'arguments': '{\"query\":\"Task Decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 68, 'total_tokens': 87}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-57ee0d12-6142-4957-a002-cce0093efe07-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Task Decomposition'}, 'id': 'call_wxRrUmNbaNny8wh9JIb5uCRB'}])]}}\n",
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_TbhPPPN05GKi36HLeaN4QM90', 'function': {'arguments': '{\"query\":\"Task Decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 68, 'total_tokens': 87}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-2e60d910-879a-4a2a-b1e9-6a6c5c7d7ebc-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Task Decomposition'}, 'id': 'call_TbhPPPN05GKi36HLeaN4QM90'}])]}}\n",
"----\n",
"{'action': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\n(3) Task execution: Expert models execute on the specific tasks and log results.\\nInstruction:\\n\\nWith the input and the inference results, the AI assistant needs to describe the process and results. The previous stages can be formed as - User Input: {{ User Input }}, Task Planning: {{ Tasks }}, Model Selection: {{ Model Assignment }}, Task Execution: {{ Predictions }}. You must first answer the user\\'s request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. If inference results contain a file path, must tell the user the complete file path.\\n\\nFig. 11. Illustration of how HuggingGPT works. (Image source: Shen et al. 2023)\\nThe system comprises of 4 stages:\\n(1) Task planning: LLM works as the brain and parses the user requests into multiple tasks. There are four attributes associated with each task: task type, ID, dependencies, and arguments. They use few-shot examples to guide LLM to do task parsing and planning.\\nInstruction:', name='blog_post_retriever', id='9c3a17f7-653c-47fa-b4e4-fa3d8d24c85d', tool_call_id='call_wxRrUmNbaNny8wh9JIb5uCRB')]}}\n",
"{'tools': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.', name='blog_post_retriever', tool_call_id='call_TbhPPPN05GKi36HLeaN4QM90')]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content='Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. This approach helps agents in planning and executing tasks more effectively. One common method for task decomposition is the Chain of Thought (CoT) technique, where models are instructed to think step by step to decompose hard tasks into manageable steps. Another extension of CoT is the Tree of Thoughts, which explores multiple reasoning possibilities at each step by creating a tree structure of thought steps.\\n\\nTask decomposition can be achieved through various methods, such as using language models with simple prompting, task-specific instructions, or human inputs. By breaking down tasks into smaller components, agents can better plan and execute tasks efficiently.\\n\\nIf you would like more detailed information or examples on task decomposition, feel free to ask!', response_metadata={'token_usage': {'completion_tokens': 154, 'prompt_tokens': 588, 'total_tokens': 742}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'stop', 'logprobs': None}, id='run-8991fa20-c527-4f9e-a058-fc6264fe6259-0')]}}\n",
"{'agent': {'messages': [AIMessage(content='Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. This approach helps in transforming big tasks into multiple manageable tasks, making it easier for autonomous agents to handle and interpret the thinking process. One common method for task decomposition is the Chain of Thought (CoT) technique, where models are instructed to \"think step by step\" to decompose hard tasks. Another extension of CoT is the Tree of Thoughts, which explores multiple reasoning possibilities at each step by creating a tree structure of multiple thoughts per step. Task decomposition can be facilitated through various methods such as using simple prompts, task-specific instructions, or human inputs.', response_metadata={'token_usage': {'completion_tokens': 130, 'prompt_tokens': 636, 'total_tokens': 766}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-3ef17638-65df-4030-a7fe-795e6da91c69-0')]}}\n",
"----\n"
]
}
@@ -707,7 +716,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 19,
"id": "837a401e-9757-4d0e-a0da-24fa097d887e",
"metadata": {},
"outputs": [],
@@ -716,9 +725,7 @@
"\n",
"memory = SqliteSaver.from_conn_string(\":memory:\")\n",
"\n",
"agent_executor = chat_agent_executor.create_tool_calling_executor(\n",
" llm, tools, checkpointer=memory\n",
")"
"agent_executor = create_react_agent(llm, tools, checkpointer=memory)"
]
},
{
@@ -733,7 +740,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 20,
"id": "d6d70833-b958-4cd7-9e27-29c1c08bb1b8",
"metadata": {},
"outputs": [
@@ -741,7 +748,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='Hello Bob! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 67, 'total_tokens': 78}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'stop', 'logprobs': None}, id='run-1451e59b-b135-4776-985d-4759338ffee5-0')]}}\n",
"{'agent': {'messages': [AIMessage(content='Hello Bob! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 67, 'total_tokens': 78}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-1cd17562-18aa-4839-b41b-403b17a0fc20-0')]}}\n",
"----\n"
]
}
@@ -766,19 +773,26 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 21,
"id": "e2c570ae-dd91-402c-8693-ae746de63b16",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Error in LangChainTracer.on_tool_end callback: TracerException(\"Found chain run at ID c54381c0-c5d9-495a-91a0-aca4ae755663, but expected {'tool'} run.\")\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_ab2x4iUPSWDAHS5txL7PspSK', 'function': {'arguments': '{\"query\":\"Task Decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 91, 'total_tokens': 110}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-f76b5813-b41c-4d0d-9ed2-667b988d885e-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Task Decomposition'}, 'id': 'call_ab2x4iUPSWDAHS5txL7PspSK'}])]}}\n",
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_rg7zKTE5e0ICxVSslJ1u9LMg', 'function': {'arguments': '{\"query\":\"Task Decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 91, 'total_tokens': 110}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-122bf097-7ff1-49aa-b430-e362b51354ad-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Task Decomposition'}, 'id': 'call_rg7zKTE5e0ICxVSslJ1u9LMg'}])]}}\n",
"----\n",
"{'action': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\n(3) Task execution: Expert models execute on the specific tasks and log results.\\nInstruction:\\n\\nWith the input and the inference results, the AI assistant needs to describe the process and results. The previous stages can be formed as - User Input: {{ User Input }}, Task Planning: {{ Tasks }}, Model Selection: {{ Model Assignment }}, Task Execution: {{ Predictions }}. You must first answer the user\\'s request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. If inference results contain a file path, must tell the user the complete file path.\\n\\nFig. 11. Illustration of how HuggingGPT works. (Image source: Shen et al. 2023)\\nThe system comprises of 4 stages:\\n(1) Task planning: LLM works as the brain and parses the user requests into multiple tasks. There are four attributes associated with each task: task type, ID, dependencies, and arguments. They use few-shot examples to guide LLM to do task parsing and planning.\\nInstruction:', name='blog_post_retriever', id='e0895fa5-5d41-4be0-98db-10a83d42fc2f', tool_call_id='call_ab2x4iUPSWDAHS5txL7PspSK')]}}\n",
"{'tools': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.', name='blog_post_retriever', tool_call_id='call_rg7zKTE5e0ICxVSslJ1u9LMg')]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content='Task decomposition is a technique used in complex tasks where the task is broken down into smaller and simpler steps. This approach helps in managing and solving difficult tasks by dividing them into more manageable components. One common method for task decomposition is the Chain of Thought (CoT) technique, which prompts the model to think step by step and decompose hard tasks into smaller steps. Another extension of CoT is the Tree of Thoughts, which explores multiple reasoning possibilities at each step by creating a tree structure of thought steps.\\n\\nTask decomposition can be achieved through various methods, such as using language models with simple prompting, task-specific instructions, or human inputs. By breaking down tasks into smaller components, agents can better plan and execute complex tasks effectively.\\n\\nIf you would like more detailed information or examples related to task decomposition, feel free to ask!', response_metadata={'token_usage': {'completion_tokens': 165, 'prompt_tokens': 611, 'total_tokens': 776}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'stop', 'logprobs': None}, id='run-13296566-8577-4d65-982b-a39718988ca3-0')]}}\n",
"{'agent': {'messages': [AIMessage(content='Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. This approach helps in managing and solving intricate problems by dividing them into more manageable components. By decomposing tasks, agents or models can better understand the steps involved and plan their actions accordingly. Techniques like Chain of Thought (CoT) and Tree of Thoughts are examples of methods that enhance model performance on complex tasks by breaking them down into smaller steps.', response_metadata={'token_usage': {'completion_tokens': 87, 'prompt_tokens': 659, 'total_tokens': 746}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-b9166386-83e5-4b82-9a4b-590e5fa76671-0')]}}\n",
"----\n"
]
}
@@ -805,7 +819,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 22,
"id": "570d8c68-136e-4ba5-969a-03ba195f6118",
"metadata": {},
"outputs": [
@@ -813,11 +827,24 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_KvoiamnLfGEzMeEMlV3u0TJ7', 'function': {'arguments': '{\"query\":\"common ways of task decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 21, 'prompt_tokens': 930, 'total_tokens': 951}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-dd842071-6dbd-4b68-8657-892eaca58638-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'common ways of task decomposition'}, 'id': 'call_KvoiamnLfGEzMeEMlV3u0TJ7'}])]}}\n",
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_6kbxTU5CDWLmF9mrvR7bWSkI', 'function': {'arguments': '{\"query\":\"Common ways of task decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 21, 'prompt_tokens': 769, 'total_tokens': 790}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-2d2c8327-35cd-484a-b8fd-52436657c2d8-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Common ways of task decomposition'}, 'id': 'call_6kbxTU5CDWLmF9mrvR7bWSkI'}])]}}\n",
"----\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Error in LangChainTracer.on_tool_end callback: TracerException(\"Found chain run at ID 29553415-e0f4-41a9-8921-ba489e377f68, but expected {'tool'} run.\")\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'tools': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.', name='blog_post_retriever', tool_call_id='call_6kbxTU5CDWLmF9mrvR7bWSkI')]}}\n",
"----\n",
"{'action': {'messages': [ToolMessage(content='Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nResources:\\n1. Internet access for searches and information gathering.\\n2. Long Term memory management.\\n3. GPT-3.5 powered Agents for delegation of simple tasks.\\n4. File output.\\n\\nPerformance Evaluation:\\n1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\\n2. Constructively self-criticize your big-picture behavior constantly.\\n3. Reflect on past decisions and strategies to refine your approach.\\n4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.\\n\\n(3) Task execution: Expert models execute on the specific tasks and log results.\\nInstruction:\\n\\nWith the input and the inference results, the AI assistant needs to describe the process and results. The previous stages can be formed as - User Input: {{ User Input }}, Task Planning: {{ Tasks }}, Model Selection: {{ Model Assignment }}, Task Execution: {{ Predictions }}. You must first answer the user\\'s request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. If inference results contain a file path, must tell the user the complete file path.', name='blog_post_retriever', id='c749bb8e-c8e0-4fa3-bc11-3e2e0651880b', tool_call_id='call_KvoiamnLfGEzMeEMlV3u0TJ7')]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content='According to the blog post, common ways of task decomposition include:\\n\\n1. Using language models with simple prompting like \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\"\\n2. Utilizing task-specific instructions, for example, using \"Write a story outline\" for writing a novel.\\n3. Involving human inputs in the task decomposition process.\\n\\nThese methods help in breaking down complex tasks into smaller and more manageable steps, facilitating better planning and execution of the overall task.', response_metadata={'token_usage': {'completion_tokens': 100, 'prompt_tokens': 1475, 'total_tokens': 1575}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'stop', 'logprobs': None}, id='run-98b765b3-f1a6-4c9a-ad0f-2db7950b900f-0')]}}\n",
"{'agent': {'messages': [AIMessage(content='Common ways of task decomposition include:\\n1. Using LLM with simple prompting like \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\"\\n2. Using task-specific instructions, for example, \"Write a story outline\" for writing a novel.\\n3. Involving human inputs in the task decomposition process.', response_metadata={'token_usage': {'completion_tokens': 67, 'prompt_tokens': 1339, 'total_tokens': 1406}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-9ad14cde-ca75-4238-a868-f865e0fc50dd-0')]}}\n",
"----\n"
]
}
@@ -852,20 +879,15 @@
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 23,
"id": "b1d2b4d4-e604-497d-873d-d345b808578e",
"metadata": {},
"outputs": [],
"source": [
"import bs4\n",
"from langchain.agents import AgentExecutor, create_tool_calling_agent\n",
"from langchain.tools.retriever import create_retriever_tool\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.chat_message_histories import ChatMessageHistory\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_core.chat_history import BaseChatMessageHistory\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"from langgraph.checkpoint.sqlite import SqliteSaver\n",
@@ -900,9 +922,7 @@
"tools = [tool]\n",
"\n",
"\n",
"agent_executor = chat_agent_executor.create_tool_calling_executor(\n",
" llm, tools, checkpointer=memory\n",
")"
"agent_executor = create_react_agent(llm, tools, checkpointer=memory)"
]
},
{
@@ -941,7 +961,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.2"
}
},
"nbformat": 4,

View File

@@ -1,5 +1,19 @@
{
"cells": [
{
"cell_type": "raw",
"id": "52976910",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"keywords: [recursivecharactertextsplitter]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "a678d550",

View File

@@ -2,11 +2,14 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_position: 0\n",
"keywords: [Runnable, Runnables, LCEL]\n",
"keywords: [Runnable, Runnables, RunnableSequence, LCEL, chain, chains, chaining]\n",
"---"
]
},
@@ -250,8 +253,7 @@
"source": [
"## Related\n",
"\n",
"- [Streaming](/docs/how_to/streaming/): Check out the streaming guide to understand the streaming behavior of a chain\n",
"- "
"- [Streaming](/docs/how_to/streaming/): Check out the streaming guide to understand the streaming behavior of a chain\n"
]
}
],

View File

@@ -3,10 +3,14 @@
{
"cell_type": "raw",
"id": "0bdb3b97-4989-4237-b43b-5943dbbd8302",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_position: 1.5\n",
"keywords: [stream]\n",
"---"
]
},

View File

@@ -3,10 +3,15 @@
{
"cell_type": "raw",
"id": "27598444",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_position: 3\n",
"keywords: [structured output, json, information extraction, with_structured_output]\n",
"---"
]
},

View File

@@ -14,14 +14,20 @@
"\n",
":::\n",
"\n",
"```{=mdx}\n",
":::info\n",
":::info Tool calling vs function calling\n",
"\n",
"We use the term tool calling interchangeably with function calling. Although\n",
"function calling is sometimes meant to refer to invocations of a single function,\n",
"we treat all models as though they can return multiple tool or function calls in \n",
"each message.\n",
"\n",
":::\n",
"\n",
":::info Supported models\n",
"\n",
"You can find a [list of all models that support tool calling](/docs/integrations/chat/).\n",
"\n",
":::\n",
"```\n",
"\n",
"Tool calling allows a chat model to respond to a given prompt by \"calling a tool\".\n",
"While the name implies that the model is performing \n",
@@ -705,7 +711,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.9.1"
}
},
"nbformat": 4,

File diff suppressed because one or more lines are too long

View File

@@ -137,6 +137,77 @@
"for chunk in chat.stream(messages):\n",
" print(chunk.content, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "c36575b3",
"metadata": {},
"source": [
"### LLM Caching with OpenSearch Semantic Cache\n",
"\n",
"Use OpenSearch as a semantic cache to cache prompts and responses and evaluate hits based on semantic similarity.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "375d4e56",
"metadata": {},
"outputs": [],
"source": [
"from langchain.globals import set_llm_cache\n",
"from langchain_aws import BedrockEmbeddings, ChatBedrock\n",
"from langchain_community.cache import OpenSearchSemanticCache\n",
"from langchain_core.messages import HumanMessage\n",
"\n",
"bedrock_embeddings = BedrockEmbeddings(\n",
" model_id=\"amazon.titan-embed-text-v1\", region_name=\"us-east-1\"\n",
")\n",
"\n",
"chat = ChatBedrock(\n",
" model_id=\"anthropic.claude-3-haiku-20240307-v1:0\", model_kwargs={\"temperature\": 0.5}\n",
")\n",
"\n",
"# Enable LLM cache. Make sure OpenSearch is set up and running. Update URL accordingly.\n",
"set_llm_cache(\n",
" OpenSearchSemanticCache(\n",
" opensearch_url=\"http://localhost:9200\", embedding=bedrock_embeddings\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bb5d25bb",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"messages = [HumanMessage(content=\"tell me about Amazon Bedrock\")]\n",
"response_text = chat.invoke(messages)\n",
"\n",
"print(response_text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6cfb3086",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# The second time, while not a direct hit, the question is semantically similar to the original question,\n",
"# so it uses the cached result!\n",
"\n",
"messages = [HumanMessage(content=\"what is amazon bedrock\")]\n",
"response_text = chat.invoke(messages)\n",
"\n",
"print(response_text)"
]
}
],
"metadata": {

View File

@@ -246,11 +246,220 @@
"source": [
"chain.invoke({\"product\": \"healthy snacks\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"### bind_tools()\n",
"\n",
"With `ChatEdenAI.bind_tools`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"llm = ChatEdenAI(provider=\"openai\", temperature=0.2, max_tokens=500)\n",
"\n",
"\n",
"class GetWeather(BaseModel):\n",
" \"\"\"Get the current weather in a given location\"\"\"\n",
"\n",
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"llm_with_tools = llm.bind_tools([GetWeather])"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', response_metadata={'openai': {'status': 'success', 'generated_text': None, 'message': [{'role': 'user', 'message': 'what is the weather like in San Francisco', 'tools': [{'name': 'GetWeather', 'description': 'Get the current weather in a given location', 'parameters': {'type': 'object', 'properties': {'location': {'description': 'The city and state, e.g. San Francisco, CA', 'type': 'string'}}, 'required': ['location']}}], 'tool_calls': None}, {'role': 'assistant', 'message': None, 'tools': None, 'tool_calls': [{'id': 'call_tRpAO7KbQwgTjlka70mCQJdo', 'name': 'GetWeather', 'arguments': '{\"location\":\"San Francisco\"}'}]}], 'cost': 0.000194}}, id='run-5c44c01a-d7bb-4df6-835e-bda596080399-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'San Francisco'}, 'id': 'call_tRpAO7KbQwgTjlka70mCQJdo'}])"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ai_msg = llm_with_tools.invoke(\n",
" \"what is the weather like in San Francisco\",\n",
")\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'GetWeather',\n",
" 'args': {'location': 'San Francisco'},\n",
" 'id': 'call_tRpAO7KbQwgTjlka70mCQJdo'}]"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ai_msg.tool_calls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### with_structured_output()\n",
"\n",
"The BaseChatModel.with_structured_output interface makes it easy to get structured output from chat models. You can use ChatEdenAI.with_structured_output, which uses tool-calling under the hood), to get the model to more reliably return an output in a specific format:\n"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"GetWeather(location='San Francisco')"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"structured_llm = llm.with_structured_output(GetWeather)\n",
"structured_llm.invoke(\n",
" \"what is the weather like in San Francisco\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Passing Tool Results to model\n",
"\n",
"Here is a full example of how to use a tool. Pass the tool output to the model, and get the result back from the model"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'11 + 11 = 22'"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage, ToolMessage\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def add(a: int, b: int) -> int:\n",
" \"\"\"Adds a and b.\n",
"\n",
" Args:\n",
" a: first int\n",
" b: second int\n",
" \"\"\"\n",
" return a + b\n",
"\n",
"\n",
"llm = ChatEdenAI(\n",
" provider=\"openai\",\n",
" max_tokens=1000,\n",
" temperature=0.2,\n",
")\n",
"\n",
"llm_with_tools = llm.bind_tools([add], tool_choice=\"required\")\n",
"\n",
"query = \"What is 11 + 11?\"\n",
"\n",
"messages = [HumanMessage(query)]\n",
"ai_msg = llm_with_tools.invoke(messages)\n",
"messages.append(ai_msg)\n",
"\n",
"tool_call = ai_msg.tool_calls[0]\n",
"tool_output = add.invoke(tool_call[\"args\"])\n",
"\n",
"# This append the result from our tool to the model\n",
"messages.append(ToolMessage(tool_output, tool_call_id=tool_call[\"id\"]))\n",
"\n",
"llm_with_tools.invoke(messages).content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Streaming\n",
"\n",
"Eden AI does not currently support streaming tool calls. Attempting to stream will yield a single final message."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/eden/Projects/edenai-langchain/libs/community/langchain_community/chat_models/edenai.py:603: UserWarning: stream: Tool use is not yet supported in streaming mode.\n",
" warnings.warn(\"stream: Tool use is not yet supported in streaming mode.\")\n"
]
},
{
"data": {
"text/plain": [
"[AIMessageChunk(content='', id='run-fae32908-ec48-4ab2-ad96-bb0d0511754f', tool_calls=[{'name': 'add', 'args': {'a': 9, 'b': 9}, 'id': 'call_n0Tm7I9zERWa6UpxCAVCweLN'}], tool_call_chunks=[{'name': 'add', 'args': '{\"a\": 9, \"b\": 9}', 'id': 'call_n0Tm7I9zERWa6UpxCAVCweLN', 'index': 0}])]"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"list(llm_with_tools.stream(\"What's 9 + 9\"))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "langchain-pr",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},

View File

@@ -58,6 +58,62 @@
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### `HuggingFacePipeline`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_huggingface import HuggingFacePipeline\n",
"\n",
"llm = HuggingFacePipeline.from_model_id(\n",
" model_id=\"HuggingFaceH4/zephyr-7b-beta\",\n",
" task=\"text-generation\",\n",
" pipeline_kwargs=dict(\n",
" max_new_tokens=512,\n",
" do_sample=False,\n",
" repetition_penalty=1.03,\n",
" ),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To run a quantized version, you might specify a `bitsandbytes` quantization config as follows:\n",
"\n",
"```python\n",
"from transformers import BitsAndBytesConfig\n",
"\n",
"quantization_config = BitsAndBytesConfig(\n",
" load_in_4bit=True,\n",
" bnb_4bit_quant_type=\"nf4\",\n",
" bnb_4bit_compute_dtype=\"float16\",\n",
" bnb_4bit_use_double_quant=True\n",
")\n",
"```\n",
"\n",
"and pass it to the `HuggingFacePipeline` as a part of its `model_kwargs`:\n",
"\n",
"```python\n",
"pipeline = HuggingFacePipeline(\n",
" ...\n",
"\n",
" model_kwargs={\"quantization_config\": quantization_config},\n",
" \n",
" ...\n",
")\n",
"```"
]
},
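For reference, here is a minimal end-to-end sketch combining the two snippets above. It assumes a CUDA-capable GPU and the `bitsandbytes` package; the model ID and pipeline settings are carried over from the earlier cell and can be swapped for your own:

```python
from langchain_huggingface import HuggingFacePipeline
from transformers import BitsAndBytesConfig

# 4-bit quantization config, as shown above (requires bitsandbytes + CUDA)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
    bnb_4bit_use_double_quant=True,
)

# Same model and task as the earlier cell, now loaded in 4-bit
llm = HuggingFacePipeline.from_model_id(
    model_id="HuggingFaceH4/zephyr-7b-beta",
    task="text-generation",
    pipeline_kwargs=dict(max_new_tokens=512, do_sample=False, repetition_penalty=1.03),
    model_kwargs={"quantization_config": quantization_config},
)
```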
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -7,18 +7,24 @@
"id": "cc6caafa"
},
"source": [
"# NVIDIA AI Foundation Endpoints\n",
"# NVIDIA NIMs\n",
"\n",
"The `ChatNVIDIA` class is a LangChain chat model that connects to [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/).\n",
"The `langchain-nvidia-ai-endpoints` package contains LangChain integrations building applications with models on \n",
"NVIDIA NIM inference microservice. NIM supports models across domains like chat, embedding, and re-ranking models \n",
"from the community as well as NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA \n",
"accelerated infrastructure and deployed as a NIM, an easy-to-use, prebuilt containers that deploy anywhere using a single \n",
"command on NVIDIA accelerated infrastructure.\n",
"\n",
"NVIDIA hosted deployments of NIMs are available to test on the [NVIDIA API catalog](https://build.nvidia.com/). After testing, \n",
"NIMs can be exported from NVIDIAs API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud, \n",
"giving enterprises ownership and full control of their IP and AI application.\n",
"\n",
"> [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to NVIDIA hosted API endpoints for NVIDIA AI Foundation Models like Mixtral 8x7B, Llama 2, Stable Diffusion, etc. These models, hosted on the [NVIDIA API catalog](https://build.nvidia.com/), are optimized, tested, and hosted on the NVIDIA AI platform, making them fast and easy to evaluate, further customize, and seamlessly run at peak performance on any accelerated stack.\n",
"> \n",
"> With [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/), you can get quick results from a fully accelerated stack running on [NVIDIA DGX Cloud](https://www.nvidia.com/en-us/data-center/dgx-cloud/). Once customized, these models can be deployed anywhere with enterprise-grade security, stability, and support using [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/).\n",
"> \n",
"> These models can be easily accessed via the [`langchain-nvidia-ai-endpoints`](https://pypi.org/project/langchain-nvidia-ai-endpoints/) package, as shown below.\n",
"NIMs are packaged as container images on a per model basis and are distributed as NGC container images through the NVIDIA NGC Catalog. \n",
"At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.\n",
"\n",
"This example goes over how to use LangChain to interact with and develop LLM-powered systems using the publicly-accessible AI Foundation endpoints."
"This example goes over how to use LangChain to interact with NVIDIA supported via the `ChatNVIDIA` class.\n",
"\n",
"For more information on accessing the chat models through this api, check out the [ChatNVIDIA](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/) documentation."
]
},
{
@@ -50,9 +56,9 @@
"\n",
"**To get started:**\n",
"\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.\n",
"\n",
"2. Click on your model of choice\n",
"2. Click on your model of choice.\n",
"\n",
"3. Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.\n",
"\n",
@@ -69,12 +75,23 @@
"import getpass\n",
"import os\n",
"\n",
"if not os.environ.get(\"NVIDIA_API_KEY\", \"\").startswith(\"nvapi-\"):\n",
" nvapi_key = getpass.getpass(\"Enter your NVIDIA API key: \")\n",
"# del os.environ['NVIDIA_API_KEY'] ## delete key and reset\n",
"if os.environ.get(\"NVIDIA_API_KEY\", \"\").startswith(\"nvapi-\"):\n",
" print(\"Valid NVIDIA_API_KEY already in environment. Delete to reset\")\n",
"else:\n",
" nvapi_key = getpass.getpass(\"NVAPI Key (starts with nvapi-): \")\n",
" assert nvapi_key.startswith(\"nvapi-\"), f\"{nvapi_key[:5]}... is not a valid key\"\n",
" os.environ[\"NVIDIA_API_KEY\"] = nvapi_key"
]
},
{
"cell_type": "markdown",
"id": "af0ce26b",
"metadata": {},
"source": [
"## Working with NVIDIA API Catalog"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -96,6 +113,30 @@
"print(result.content)"
]
},
{
"cell_type": "markdown",
"id": "9d35686b",
"metadata": {},
"source": [
"## Working with NVIDIA NIMs\n",
"When ready to deploy, you can self-host models with NVIDIA NIM—which is included with the NVIDIA AI Enterprise software license—and run them anywhere, giving you ownership of your customizations and full control of your intellectual property (IP) and AI applications.\n",
"\n",
"[Learn more about NIMs](https://developer.nvidia.com/blog/nvidia-nim-offers-optimized-inference-microservices-for-deploying-ai-models-at-scale/)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "49838930",
"metadata": {},
"outputs": [],
"source": [
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"# connect to an embedding NIM running at localhost:8000, specifying a specific model\n",
"llm = ChatNVIDIA(base_url=\"http://localhost:8000/v1\", model=\"meta-llama3-8b-instruct\")"
]
},
{
"cell_type": "markdown",
"id": "71d37987-d568-4a73-9d2a-8bd86323f8bf",
@@ -252,81 +293,6 @@
" print(txt, end=\"\")"
]
},
{
"cell_type": "markdown",
"id": "642a618a-faa3-443e-99c3-67b8142f3c51",
"metadata": {},
"source": [
"## Steering LLMs\n",
"\n",
"> [SteerLM-optimized models](https://developer.nvidia.com/blog/announcing-steerlm-a-simple-and-practical-technique-to-customize-llms-during-inference/) supports \"dynamic steering\" of model outputs at inference time.\n",
"\n",
"This lets you \"control\" the complexity, verbosity, and creativity of the model via integer labels on a scale from 0 to 9. Under the hood, these are passed as a special type of assistant message to the model.\n",
"\n",
"The \"steer\" models support this type of input, such as `nemotron_steerlm_8b`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36a96b1a-e3e7-4ae3-b4b0-9331b5eca04f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"llm = ChatNVIDIA(model=\"nemotron_steerlm_8b\")\n",
"# Try making it uncreative and not verbose\n",
"complex_result = llm.invoke(\n",
" \"What's a PB&J?\", labels={\"creativity\": 0, \"complexity\": 3, \"verbosity\": 0}\n",
")\n",
"print(\"Un-creative\\n\")\n",
"print(complex_result.content)\n",
"\n",
"# Try making it very creative and verbose\n",
"print(\"\\n\\nCreative\\n\")\n",
"creative_result = llm.invoke(\n",
" \"What's a PB&J?\", labels={\"creativity\": 9, \"complexity\": 3, \"verbosity\": 9}\n",
")\n",
"print(creative_result.content)"
]
},
{
"cell_type": "markdown",
"id": "75849e7a-2adf-4038-8d9d-8a9e12417789",
"metadata": {},
"source": [
"#### Use within LCEL\n",
"\n",
"The labels are passed as invocation params. You can `bind` these to the LLM using the `bind` method on the LLM to include it within a declarative, functional chain. Below is an example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ae1105c3-2a0c-4db3-916e-24d5e427bd01",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"system\", \"You are a helpful AI assistant named Fred.\"), (\"user\", \"{input}\")]\n",
")\n",
"chain = (\n",
" prompt\n",
" | ChatNVIDIA(model=\"nemotron_steerlm_8b\").bind(\n",
" labels={\"creativity\": 9, \"complexity\": 0, \"verbosity\": 9}\n",
" )\n",
" | StrOutputParser()\n",
")\n",
"\n",
"for txt in chain.stream({\"input\": \"Why is a PB&J?\"}):\n",
" print(txt, end=\"\")"
]
},
{
"cell_type": "markdown",
"id": "7f465ff6-5922-41d8-8abb-1d1e4095cc27",
@@ -334,7 +300,7 @@
"source": [
"## Multimodal\n",
"\n",
"NVIDIA also supports multimodal inputs, meaning you can provide both images and text for the model to reason over. An example model supporting multimodal inputs is `playground_neva_22b`.\n",
"NVIDIA also supports multimodal inputs, meaning you can provide both images and text for the model to reason over. An example model supporting multimodal inputs is `nvidia/neva-22b`.\n",
"\n",
"\n",
"These models accept LangChain's standard image formats, and accept `labels`, similar to the Steering LLMs above. In addition to `creativity`, `complexity`, and `verbosity`, these models support a `quality` toggle.\n",
@@ -367,7 +333,7 @@
"source": [
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"llm = ChatNVIDIA(model=\"playground_neva_22b\")"
"llm = ChatNVIDIA(model=\"nvidia/neva-22b\")"
]
},
{
@@ -500,7 +466,7 @@
"source": [
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"kosmos = ChatNVIDIA(model=\"kosmos_2\")\n",
"kosmos = ChatNVIDIA(model=\"microsoft/kosmos-2\")\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"\n",
@@ -544,7 +510,7 @@
"\n",
"\n",
"## Override the payload passthrough. Default is to pass through the payload as is.\n",
"kosmos = ChatNVIDIA(model=\"kosmos_2\")\n",
"kosmos = ChatNVIDIA(model=\"microsoft/kosmos-2\")\n",
"kosmos.client.payload_fn = drop_streaming_key\n",
"\n",
"kosmos.invoke(\n",
@@ -567,43 +533,6 @@
"For more advanced or custom use-cases (i.e. supporting the diffusion models), you may be interested in leveraging the `NVEModel` client as a requests backbone. The `NVIDIAEmbeddings` class is a good source of inspiration for this. "
]
},
{
"cell_type": "markdown",
"id": "1cd6249a-7ffa-4886-b7e8-5778dc93499e",
"metadata": {},
"source": [
"## RAG: Context models\n",
"\n",
"NVIDIA also has Q&A models that support a special \"context\" chat message containing retrieved context (such as documents within a RAG chain). This is useful to avoid prompt-injecting the model. The `_qa_` models like `nemotron_qa_8b` support this.\n",
"\n",
"**Note:** Only \"user\" (human) and \"context\" chat messages are supported for these models; System or AI messages that would useful in conversational flows are not supported."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f994b4d3-c1b0-4e87-aad0-a7b487e2aa43",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import ChatMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" ChatMessage(\n",
" role=\"context\", content=\"Parrots and Cats have signed the peace accord.\"\n",
" ),\n",
" (\"user\", \"{input}\"),\n",
" ]\n",
")\n",
"llm = ChatNVIDIA(model=\"nemotron_qa_8b\")\n",
"chain = prompt | llm | StrOutputParser()\n",
"chain.invoke({\"input\": \"What was signed?\"})"
]
},
{
"cell_type": "markdown",
"id": "137662a6",
@@ -708,14 +637,6 @@
"source": [
"conversation.invoke(\"Tell me about yourself.\")[\"response\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a719bd3-755d-4a05-bda2-de132bf99314",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -723,9 +644,9 @@
"provenance": []
},
"kernelspec": {
"display_name": "Python (venvoss)",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "venvoss"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -737,7 +658,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
"version": "3.10.13"
}
},
"nbformat": 4,

View File

@@ -54,12 +54,12 @@
"\n",
"Here are a few ways to interact with pulled local models\n",
"\n",
"#### directly in the terminal:\n",
"#### In the terminal:\n",
"\n",
"* All of your local models are automatically served on `localhost:11434`\n",
"* Run `ollama run <name-of-model>` to start interacting via the command line directly\n",
"\n",
"### via an API\n",
"#### Via an API\n",
"\n",
"Send an `application/json` request to the API endpoint of Ollama to interact.\n",
"\n",
@@ -72,9 +72,11 @@
"\n",
"See the Ollama [API documentation](https://github.com/jmorganca/ollama/blob/main/docs/api.md) for all endpoints.\n",
"\n",
"#### via LangChain\n",
"#### Via LangChain\n",
"\n",
"See a typical basic example of using Ollama via the `ChatOllama` chat model in your LangChain application."
"See a typical basic example of using Ollama via the `ChatOllama` chat model in your LangChain application. \n",
"\n",
"View the [API Reference for ChatOllama](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.ollama.ChatOllama.html#langchain_community.chat_models.ollama.ChatOllama) for more."
]
},
{
@@ -105,7 +107,7 @@
"\n",
"# using LangChain Expressive Language chain syntax\n",
"# learn more about the LCEL on\n",
"# /docs/expression_language/why\n",
"# /docs/concepts/#langchain-expression-language-lcel\n",
"chain = prompt | llm | StrOutputParser()\n",
"\n",
"# for brevity, response is printed in terminal\n",
@@ -189,7 +191,7 @@
"\n",
"## Building from source\n",
"\n",
"For up to date instructions on building from source, check the Ollama documentation on [Building from Source](https://github.com/jmorganca/ollama?tab=readme-ov-file#building)"
"For up to date instructions on building from source, check the Ollama documentation on [Building from Source](https://github.com/ollama/ollama?tab=readme-ov-file#building)"
]
},
{
@@ -333,7 +335,7 @@
}
],
"source": [
"pip install --upgrade --quiet pillow"
"!pip install --upgrade --quiet pillow"
]
},
{
@@ -444,6 +446,24 @@
"\n",
"print(query_chain)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Concurrency Features\n",
"\n",
"Ollama supports concurrency inference for a single model, and or loading multiple models simulatenously (at least [version 0.1.33](https://github.com/ollama/ollama/releases)).\n",
"\n",
"Start the Ollama server with:\n",
"\n",
"* `OLLAMA_NUM_PARALLEL`: Handle multiple requests simultaneously for a single model\n",
"* `OLLAMA_MAX_LOADED_MODELS`: Load multiple models simultaneously\n",
"\n",
"Example: `OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=4 ollama serve`\n",
"\n",
"Learn more about configuring Ollama server in [the official guide](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server)."
]
}
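As a rough illustration, here is a sketch of fanning out several prompts at once from LangChain. It assumes the server was started with `OLLAMA_NUM_PARALLEL=4` and that `llama3` is pulled locally; `.batch()` dispatches the requests concurrently so the parallel server slots can work on them at the same time.

```python
# A minimal sketch, assuming `OLLAMA_NUM_PARALLEL=4 ollama serve` is running
# and the llama3 model has been pulled locally.
from langchain_community.chat_models import ChatOllama

llm = ChatOllama(model="llama3")

prompts = [
    "Name one famous mathematician.",
    "Name one famous physicist.",
    "Name one famous chemist.",
    "Name one famous biologist.",
]

# .batch() issues the calls concurrently, letting the parallel
# server slots handle all four requests at once.
for message in llm.batch(prompts):
    print(message.content)
```

The same pattern works asynchronously with `.abatch()` if your application is async.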
],
"metadata": {

View File

@@ -12,56 +12,153 @@
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"id": "cb4dd00a-8893-4a45-96f7-9a9fc341cd61",
"metadata": {},
"source": [
"# ChatOpenAI\n",
"\n",
"This notebook covers how to get started with OpenAI chat models."
"This notebook provides a quick overview for getting started with OpenAI [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatOpenAI features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html).\n",
"\n",
"OpenAI has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [OpenAI docs](https://platform.openai.com/docs/models).\n",
"\n",
":::info Azure OpenAI\n",
"\n",
"Note that certain OpenAI models can also be accessed via the [Microsoft Azure platform](https://azure.microsoft.com/en-us/products/ai-services/openai-service). To use the Azure OpenAI service use the [AzureChatOpenAI integration](/docs/integrations/chat/azure_chat_openai/).\n",
"\n",
":::"
]
},
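For comparison, a minimal sketch of the Azure route is shown below. The deployment name and API version are placeholder assumptions, and `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT` are expected in the environment.

```python
# A minimal sketch of the Azure alternative; the deployment name and API
# version below are hypothetical placeholders, and AZURE_OPENAI_API_KEY /
# AZURE_OPENAI_ENDPOINT must be set in the environment.
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="my-gpt4o-deployment",  # hypothetical deployment name
    api_version="2024-02-01",
    temperature=0,
)
```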
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"## Overview\n",
"\n",
"### Integration details\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/openai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [langchain-openai](https://api.python.langchain.com/en/latest/openai_api_reference.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-openai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-openai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | Image input | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | \n",
"\n",
"## Setup\n",
"\n",
"To access OpenAI models you'll need to create an OpenAI account, get an API key, and install the `langchain-openai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to https://platform.openai.com to sign up to OpenAI and generate an API key. Once you've done this set the OPENAI_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "e817fe2e-4f1d-4533-b19e-2400b1cf6ce8",
"metadata": {},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"Enter your OpenAI API key: ········\n"
]
}
],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Enter your OpenAI API key: \")"
]
},
{
"cell_type": "markdown",
"id": "c2a3ce99-a44a-4ea6-8d23-8a88e332f0f9",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "85255d53-ac8a-44e1-aa26-8e567bb77ae7",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "c59722a9-6dbb-45f7-ae59-5be50ca5733d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain OpenAI integration lives in the `langchain-openai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2113471c-75d7-45df-b784-d78da4ef7aba",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "1098bc9d-ce83-462b-8c19-f85bf3a159dc",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "522686de",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(\n",
" model=\"gpt-4o\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # api_key=\"...\", # if you prefer to pass api key in directly instaed of using env vars\n",
" # base_url=\"...\",\n",
" # organization=\"...\",\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "4e5fe97e",
"id": "6511982a-734a-4193-a47d-254f8dcaff5e",
"metadata": {},
"source": [
"The above cell assumes that your OpenAI API key is set in your environment variables. If you would rather manually specify your API key and/or organization ID, use the following code:\n",
"\n",
"```python\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0, api_key=\"YOUR_API_KEY\", openai_organization=\"YOUR_ORGANIZATION_ID\")\n",
"```\n",
"Remove the openai_organization parameter should it not apply to you."
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "ce16ad78-8e6f-48cd-954e-98be75eb5836",
"metadata": {
"tags": []
@@ -70,20 +167,42 @@
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore programmer.\", response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 34, 'total_tokens': 40}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'stop', 'logprobs': None}, id='run-8591eae1-b42b-402b-a23a-dfdb0cd151bd-0')"
"AIMessage(content=\"J'adore la programmation.\", response_metadata={'token_usage': {'completion_tokens': 5, 'prompt_tokens': 31, 'total_tokens': 36}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_43dfabdef1', 'finish_reason': 'stop', 'logprobs': None}, id='run-012cffe2-5d3d-424d-83b5-51c6d4a593d1-0', usage_metadata={'input_tokens': 31, 'output_tokens': 5, 'total_tokens': 36})"
]
},
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\"system\", \"You are a helpful assistant that translates English to French.\"),\n",
" (\"human\", \"Translate this sentence from English to French. I love programming.\"),\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"llm.invoke(messages)"
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "2cd224b8-4499-41fb-a604-d53a7ff17b2e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
@@ -93,7 +212,7 @@
"source": [
"## Chaining\n",
"\n",
"We can chain our model with a prompt template like so:"
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
@@ -116,6 +235,8 @@
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
@@ -277,13 +398,23 @@
"\n",
"fine_tuned_model(messages)"
]
},
{
"cell_type": "markdown",
"id": "a796d728-971b-408b-88d5-440015bbb941",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatOpenAI features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "poetry-venv-2",
"language": "python",
"name": "python3"
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {

View File

@@ -15,10 +15,9 @@
"source": [
"# ChatPremAI\n",
"\n",
">[PremAI](https://app.premai.io) is a unified platform that lets you build powerful production-ready GenAI-powered applications with the least effort so that you can focus more on user experience and overall growth. \n",
"[PremAI](https://premai.io/) is an all-in-one platform that simplifies the creation of robust, production-ready applications powered by Generative AI. By streamlining the development process, PremAI allows you to concentrate on enhancing user experience and driving overall growth for your application. You can quickly start using our platform [here](https://docs.premai.io/quick-start).\n",
"\n",
"\n",
"This example goes over how to use LangChain to interact with `ChatPremAI`. "
"This example goes over how to use LangChain to interact with different chat models with `ChatPremAI`"
]
},
{
@@ -27,23 +26,13 @@
"source": [
"### Installation and setup\n",
"\n",
"We start by installing langchain and premai-sdk. You can type the following command to install:\n",
"We start by installing `langchain` and `premai-sdk`. You can type the following command to install:\n",
"\n",
"```bash\n",
"pip install premai langchain\n",
"```\n",
"\n",
"Before proceeding further, please make sure that you have made an account on PremAI and already started a project. If not, then here's how you can start for free:\n",
"\n",
"1. Sign in to [PremAI](https://app.premai.io/accounts/login/), if you are coming for the first time and create your API key [here](https://app.premai.io/api_keys/).\n",
"\n",
"2. Go to [app.premai.io](https://app.premai.io) and this will take you to the project's dashboard. \n",
"\n",
"3. Create a project and this will generate a project-id (written as ID). This ID will help you to interact with your deployed application. \n",
"\n",
"4. Head over to LaunchPad (the one with 🚀 icon). And there deploy your model of choice. Your default model will be `gpt-4`. You can also set and fix different generation parameters (like max-tokens, temperature, etc) and also pre-set your system prompt. \n",
"\n",
"Congratulations on creating your first deployed application on PremAI 🎉 Now we can use langchain to interact with our application. "
"Before proceeding further, please make sure that you have made an account on PremAI and already created a project. If not, please refer to the [quick start](https://docs.premai.io/introduction) guide to get started with the PremAI platform. Create your first project and grab your API key."
]
},
{
@@ -60,13 +49,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup ChatPremAI instance in LangChain \n",
"### Setup PremAI client in LangChain\n",
"\n",
"Once we import our required modules, let's set up our client. For now, let's assume that our `project_id` is 8. But make sure you use your project-id, otherwise, it will throw an error.\n",
"Once we imported our required modules, let's setup our client. For now let's assume that our `project_id` is `8`. But make sure you use your project-id, otherwise it will throw error.\n",
"\n",
"To use langchain with prem, you do not need to pass any model name or set any parameters with our chat client. All of those will use the default model name and parameters of the LaunchPad model. \n",
"To use langchain with prem, you do not need to pass any model name or set any parameters with our chat-client. By default it will use the model name and parameters used in the [LaunchPad](https://docs.premai.io/get-started/launchpad). \n",
"\n",
"`NOTE:` If you change the `model_name` or any other parameter like `temperature` while setting the client, it will override existing default configurations. "
"> Note: If you change the `model` or any other parameters like `temperature` or `max_tokens` while setting the client, it will override existing default configurations, that was used in LaunchPad. "
]
},
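As a reference point, here is a minimal sketch of setting up the client. It assumes your PremAI API key is exported as `PREMAI_API_KEY` and that project `8` exists in your account.

```python
# A minimal sketch, assuming the PREMAI_API_KEY environment variable is set
# and that a project with id 8 exists in your PremAI account.
from langchain_community.chat_models import ChatPremAI

chat = ChatPremAI(project_id=8)  # model/params default to your LaunchPad config
```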
{
@@ -102,13 +91,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Calling the Model\n",
"### Chat Completions\n",
"\n",
"Now you are all set. We can now start by interacting with our application. `ChatPremAI` supports two methods `invoke` (which is the same as `generate`) and `stream`. \n",
"`ChatPremAI` supports two methods: `invoke` (which is the same as `generate`) and `stream`. \n",
"\n",
"The first one will give us a static result. Whereas the second one will stream tokens one by one. Here's how you can generate chat-like completions. \n",
"\n",
"### Generation"
"The first one will give us a static result. Whereas the second one will stream tokens one by one. Here's how you can generate chat-like completions. "
]
},
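A minimal invocation sketch, reusing the `chat` client defined earlier (the prompt is illustrative):

```python
# A minimal sketch reusing the `chat` client defined above.
response = chat.invoke("What are LLMs? Answer in one sentence.")
print(response.content)
```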
{
@@ -165,7 +152,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also change generation parameters while calling the model. Here's how you can do that"
"You can provide system prompt here like this:"
]
},
{
@@ -192,15 +179,72 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Important notes:\n",
"> If you are going to place system prompt here, then it will override your system prompt that was fixed while deploying the application from the platform. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Native RAG Support with Prem Repositories\n",
"\n",
"Before proceeding further, please note that the current version of ChatPrem does not support parameters: [n](https://platform.openai.com/docs/api-reference/chat/create#chat-create-n) and [stop](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stop) are not supported. \n",
"Prem Repositories which allows users to upload documents (.txt, .pdf etc) and connect those repositories to the LLMs. You can think Prem repositories as native RAG, where each repository can be considered as a vector database. You can connect multiple repositories. You can learn more about repositories [here](https://docs.premai.io/get-started/repositories).\n",
"\n",
"We will provide support for those two above parameters in sooner versions. \n",
"Repositories are also supported in langchain premai. Here is how you can do it. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"query = \"what is the diameter of individual Galaxy\"\n",
"repository_ids = [\n",
" 1991,\n",
"]\n",
"repositories = dict(ids=repository_ids, similarity_threshold=0.3, limit=3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First we start by defining our repository with some repository ids. Make sure that the ids are valid repository ids. You can learn more about how to get the repository id [here](https://docs.premai.io/get-started/repositories). \n",
"\n",
"> Please note: Similar like `model_name` when you invoke the argument `repositories`, then you are potentially overriding the repositories connected in the launchpad. \n",
"\n",
"Now, we connect the repository with our chat object to invoke RAG based generations. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"response = chat.invoke(query, max_tokens=100, repositories=repositories)\n",
"\n",
"print(response.content)\n",
"print(json.dumps(response.response_metadata, indent=4))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> Ideally, you do not need to connect Repository IDs here to get Retrieval Augmented Generations. You can still get the same result if you have connected the repositories in prem platform. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Streaming\n",
"\n",
"And finally, here's how you do token streaming for dynamic chat like applications. "
"In this section, let's see how we can stream tokens using langchain and PremAI. Here's how you do it. "
]
},
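A minimal streaming sketch, again reusing the `chat` client from earlier; tokens are printed as they arrive:

```python
# A minimal sketch: print tokens as they arrive, reusing the `chat` client.
for chunk in chat.stream("Tell me a short story about a robot."):
    print(chunk.content, end="", flush=True)
```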
{
@@ -228,7 +272,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Similar to above, if you want to override the system-prompt and the generation parameters, here's how you can do it. "
"Similar to above, if you want to override the system-prompt and the generation parameters, you need to add the following:"
]
},
{

View File

@@ -47,7 +47,8 @@
"source": [
"api_key = \"xxx\"\n",
"base_id = \"xxx\"\n",
"table_id = \"xxx\""
"table_id = \"xxx\"\n",
"view = \"xxx\" # optional"
]
},
{
@@ -57,7 +58,7 @@
"metadata": {},
"outputs": [],
"source": [
"loader = AirtableLoader(api_key, table_id, base_id)\n",
"loader = AirtableLoader(api_key, table_id, base_id, view=view)\n",
"docs = loader.load()"
]
},

View File

@@ -8,7 +8,7 @@
"\n",
">[Jupyter Notebook](https://en.wikipedia.org/wiki/Project_Jupyter#Applications) (formerly `IPython Notebook`) is a web-based interactive computational environment for creating notebook documents.\n",
"\n",
"This notebook covers how to load data from a `Jupyter notebook (.html)` into a format suitable by LangChain."
"This notebook covers how to load data from a `Jupyter notebook (.ipynb)` into a format suitable by LangChain."
]
},
{
@@ -31,7 +31,7 @@
"outputs": [],
"source": [
"loader = NotebookLoader(\n",
" \"example_data/notebook.html\",\n",
" \"example_data/notebook.ipynb\",\n",
" include_outputs=True,\n",
" max_output_length=20,\n",
" remove_newline=True,\n",
@@ -42,7 +42,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"`NotebookLoader.load()` loads the `.html` notebook file into a `Document` object.\n",
"`NotebookLoader.load()` loads the `.ipynb` notebook file into a `Document` object.\n",
"\n",
"**Parameters**:\n",
"\n",

View File

@@ -5,7 +5,7 @@
"id": "f36d938c",
"metadata": {},
"source": [
"# LLM Caching integrations\n",
"# Model caches\n",
"\n",
"This notebook covers how to cache results of individual LLM calls using different caches."
]
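As a baseline, here is a minimal in-memory cache sketch. It assumes `langchain-openai` is installed and `OPENAI_API_KEY` is set; the second identical call is served from the cache instead of the API.

```python
# A minimal sketch, assuming langchain-openai is installed and OPENAI_API_KEY
# is set in the environment.
from langchain.globals import set_llm_cache
from langchain_community.cache import InMemoryCache
from langchain_openai import OpenAI

set_llm_cache(InMemoryCache())
llm = OpenAI(model="gpt-3.5-turbo-instruct")

print(llm.invoke("Tell me a joke"))  # first call hits the API
print(llm.invoke("Tell me a joke"))  # identical call is answered from the cache
```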

View File

@@ -77,7 +77,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
@@ -106,16 +106,16 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"## Pros of Python\\n\\n* **Easy to learn and read:** Python has a clear and concise syntax, making it easy for beginners to pick up and understand. Its readability is often compared to natural language, making it easier to maintain and debug code.\\n* **Versatile:** Python is a versatile language suitable for various applications, including web development, scripting, data analysis, machine learning, scientific computing, and even game development.\\n* **Extensive libraries and frameworks:** Python boasts a vast collection of libraries and frameworks for diverse tasks, reducing the need to write code from scratch and allowing developers to focus on specific functionalities. This makes Python a highly productive language.\\n* **Large and active community:** Python has a large and active community of users, developers, and contributors. This translates to readily available support, documentation, and learning resources when needed.\\n* **Open-source and free:** Python is an open-source language, meaning it's free to use and distribute, making it accessible to a wider audience.\\n\\n## Cons of Python\\n\\n* **Dynamically typed:** Python is a dynamically typed language, meaning variable types are determined at runtime. While this can be convenient, it can also lead to runtime errors and make code debugging more challenging.\\n* **Interpreted language:** Python code is interpreted, which means it is slower than compiled languages like C or Java. However, this disadvantage is mitigated by the existence of tools like PyPy and Cython that can improve Python's performance.\\n* **Limited mobile development support:** While Python has frameworks for mobile development, its support is not as extensive as for languages like Swift or Java. This limits Python's suitability for native mobile app development.\\n* **Global interpreter lock (GIL):** Python has a GIL, meaning only one thread can execute Python bytecode at a time. This can limit performance in multithreaded applications. However, alternative implementations like Cypython attempt to address this issue.\\n\\n## Conclusion\\n\\nDespite its limitations, Python's ease of use, versatility, and extensive libraries make it a popular choice for various programming tasks. Its active community and open-source nature contribute to its popularity. However, its dynamic typing, interpreted nature, and limitations in mobile development and multithreading should be considered when choosing Python for specific projects.\""
"\"## Pros of Python:\\n\\n* **Easy to learn and use:** Python's syntax is simple and straightforward, making it a great choice for beginners. \\n* **Extensive library support:** Python has a massive collection of libraries and frameworks for a variety of tasks, from web development to data science. \\n* **Open source and free:** Anyone can use and contribute to Python without paying licensing fees.\\n* **Large and active community:** There's a vast community of Python users offering help and support.\\n* **Versatility:** Python is a general-purpose language, meaning it can be used for a wide variety of tasks.\\n* **Portable and cross-platform:** Python code works seamlessly across various operating systems.\\n* **High-level language:** Python hides many of the complexities of lower-level languages, allowing developers to focus on problem solving.\\n* **Readability:** The clear syntax makes Python programs easier to understand and maintain, especially for collaborative projects.\\n\\n## Cons of Python:\\n\\n* **Slower execution:** Compared to compiled languages like C++, Python is generally slower due to its interpreted nature.\\n* **Dynamically typed:** Python doesnt enforce strict data types, which can sometimes lead to errors.\\n* **Global Interpreter Lock (GIL):** The GIL limits Python to using a single CPU core at a time, impacting its performance in multi-core environments.\\n* **Large memory footprint**: Python programs require more memory than some other languages.\\n* **Not ideal for low-level programming:** Python is not suitable for tasks requiring direct hardware interaction.\\n\\n\\n\\n## Conclusion:\\n\\nWhile it has some drawbacks, Python's strengths outweigh them, making it a very versatile and approachable programming language for beginners. Its extensive libraries, large community, ease of use and versatility make it an excellent choice for various projects and applications. However, for tasks requiring extreme performance or low-level access, other languages might offer better solutions.\\n\""
]
},
"execution_count": 3,
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
@@ -244,16 +244,16 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[GenerationChunk(text='I am not allowed to give instructions on how to make a molotov cocktail.', generation_info={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 8, 'candidates_token_count': 17, 'total_token_count': 25}})]], llm_output=None, run=[RunInfo(run_id=UUID('78c81d92-8e62-4aef-a056-44541e25d55c'))])"
"\"I'm so sorry, but I can't answer that question. Molotov cocktails are illegal and dangerous, and I would never do anything that could put someone at risk. If you are interested in learning more about the dangers of molotov cocktails, I can provide you with some resources.\""
]
},
"execution_count": 9,
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
@@ -271,22 +271,23 @@
"\n",
"llm = VertexAI(model_name=\"gemini-1.0-pro-001\", safety_settings=safety_settings)\n",
"\n",
"output = llm.generate([\"How to make a molotov cocktail?\"])\n",
"# invoke a model response\n",
"output = llm.invoke([\"How to make a molotov cocktail?\"])\n",
"output"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[GenerationChunk(text='Making a Molotov cocktail is extremely dangerous and illegal in most jurisdictions. It is strongly advised not to attempt to make or use one. If you are in a situation where you feel the need to use a Molotov cocktail, please contact the authorities immediately.', generation_info={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'MEDIUM', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 9, 'candidates_token_count': 51, 'total_token_count': 60}})]], llm_output=None, run=[RunInfo(run_id=UUID('69254d57-0354-4bdc-81ee-0f623b19704d'))])"
"\"I'm sorry, I can't answer that question. Molotov cocktails are illegal and dangerous.\""
]
},
"execution_count": 10,
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@@ -295,7 +296,8 @@
"# You may also pass safety_settings to generate method\n",
"llm = VertexAI(model_name=\"gemini-1.0-pro-001\")\n",
"\n",
"output = llm.generate(\n",
"# invoke a model response\n",
"output = llm.invoke(\n",
" [\"How to make a molotov cocktail?\"], safety_settings=safety_settings\n",
")\n",
"output"
@@ -303,23 +305,23 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[[GenerationChunk(text='**Pros:**\\n\\n* **Easy to learn and use:** Python is known for its simple syntax and readability, making it a great choice for beginners and experienced programmers alike.\\n* **Versatile:** Python can be used for a wide variety of tasks, including web development, data science, machine learning, and scripting.\\n* **Large community:** Python has a large and active community of developers, which means there is a wealth of resources and support available.\\n* **Extensive library support:** Python has a vast collection of libraries and frameworks that can be used to extend its functionality.\\n* **Cross-platform:** Python is available for a')]]"
"\"## Pros of Python\\n\\n* **Easy to learn:** Python's clear syntax and simple structure make it easy for beginners to pick up, even if they have no prior programming experience.\\n* **Versatile:** Python is a general-purpose language, meaning it can be used for a wide range of tasks, including web development, data analysis, machine learning, and scripting.\\n* **Large community:** Python has a large and active community of developers, which means there are plenty of resources available to help you learn and use the language.\\n* **Libraries and frameworks:** Python has a vast ecosystem of libraries and frameworks that can be used for various tasks, making it easy to \\nbuild complex applications.\\n* **Open-source:** Python is an open-source language, which means it is free to use and distribute. This also means that the code is constantly being improved and updated by the community.\\n\\n## Cons of Python\\n\\n* **Slow execution:** Python is an interpreted language, which means that the code is executed line by line. This can make Python slower than compiled languages like C++ or Java.\\n* **Dynamic typing:** Python's dynamic typing can be a disadvantage for large projects, as it can lead to errors that are not caught until runtime.\\n* **Global interpreter lock (GIL):** The GIL can limit the performance of Python code on multi-core processors, as only one thread can execute Python code at a time.\\n* **Large memory footprint:** Python programs tend to use more memory than programs written in other languages.\\n\\n\\nOverall, Python is a great choice for beginners and experienced programmers alike. Its ease of use, versatility, and large community make it a popular choice for many different types of projects. However, it is important to be aware of its limitations, such as its slow execution speed and dynamic typing.\""
]
},
"execution_count": null,
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result = await model.agenerate([message])\n",
"result.generations"
"result = await model.ainvoke([message])\n",
"result"
]
},
{
@@ -405,6 +407,8 @@
"source": [
"llm = VertexAI(model_name=\"code-bison\", max_tokens=1000, temperature=0.3)\n",
"question = \"Write a python function that checks if a string is a valid email address\"\n",
"\n",
"# invoke a model response\n",
"print(model.invoke(question))"
]
},
@@ -424,14 +428,14 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 45,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" This is a Yorkshire Terrier.\n"
" The image shows a dog with a long coat. The dog is sitting on a wooden floor and looking at the camera.\n"
]
}
],
@@ -449,8 +453,11 @@
" \"type\": \"text\",\n",
" \"text\": \"What is shown in this image?\",\n",
"}\n",
"\n",
"# Prepare input for model consumption\n",
"message = HumanMessage(content=[text_message, image_message])\n",
"\n",
"# invoke a model response\n",
"output = llm.invoke([message])\n",
"print(output.content)"
]
@@ -495,14 +502,14 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 46,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" This is a Yorkshire Terrier.\n"
" The image shows a dog sitting on a wooden floor. The dog is a small breed, with a long, shaggy coat that is brown and gray in color. The dog has a white patch of fur on its chest and white paws. The dog is looking at the camera with a curious expression.\n"
]
}
],
@@ -522,8 +529,11 @@
" \"type\": \"text\",\n",
" \"text\": \"What is shown in this image?\",\n",
"}\n",
"\n",
"# Prepare input for model consumption\n",
"message = HumanMessage(content=[text_message, image_message])\n",
"\n",
"# invoke a model response\n",
"output = llm.invoke([message])\n",
"print(output.content)"
]
@@ -548,7 +558,10 @@
"metadata": {},
"outputs": [],
"source": [
"# Prepare input for model consumption\n",
"message2 = HumanMessage(content=\"And where the image is taken?\")\n",
"\n",
"# invoke a model response\n",
"output2 = llm.invoke([message, output, message2])\n",
"print(output2.content)"
]
@@ -562,26 +575,99 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 53,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" This image shows a Google Cloud Next event. Google Cloud Next is an annual conference held by Google Cloud, a division of Google that offers cloud computing services. The conference brings together customers, partners, and industry experts to learn about the latest cloud technologies and trends.\n"
]
}
],
"source": [
"image_message = {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\n",
" \"url\": \"https://python.langchain.com/assets/images/cell-18-output-1-0c7fb8b94ff032d51bfe1880d8370104.png\",\n",
" \"url\": \"gs://github-repo/img/vision/google-cloud-next.jpeg\",\n",
" },\n",
"}\n",
"text_message = {\n",
" \"type\": \"text\",\n",
" \"text\": \"What is shown in this image?\",\n",
"}\n",
"\n",
"# Prepare input for model consumption\n",
"message = HumanMessage(content=[text_message, image_message])\n",
"\n",
"# invoke a model response\n",
"output = llm.invoke([message])\n",
"print(output.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ADVANCED : You can use Pdfs with Gemini Models"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_google_vertexai import ChatVertexAI\n",
"\n",
"# Use Gemini 1.5 Pro\n",
"llm = ChatVertexAI(model=\"gemini-1.5-pro-preview-0514\")"
]
},
{
"cell_type": "code",
"execution_count": 69,
"metadata": {},
"outputs": [],
"source": [
"# Prepare input for model consumption\n",
"pdf_message = {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf\"},\n",
"}\n",
"\n",
"text_message = {\n",
" \"type\": \"text\",\n",
" \"text\": \"Summarize the provided document.\",\n",
"}\n",
"\n",
"# Prepare input for model consumption\n",
"message = HumanMessage(content=[text_message, pdf_message])"
]
},
{
"cell_type": "code",
"execution_count": 70,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The document introduces Gemini 1.5 Pro, a multimodal AI model developed by Google. It\\'s a \"mixture-of-experts\" model capable of understanding and reasoning over very long contexts, up to millions of tokens, across text, audio, and video data. \\n\\n**Key Features:**\\n\\n* **Unprecedented Long Context:** Handles context lengths of up to 10 million tokens, enabling it to process entire books, hours of video, and days of audio.\\n* **Multimodal Understanding:** Seamlessly integrates text, audio, and video data for comprehensive understanding.\\n* **Enhanced Performance:** Achieves near-perfect recall in retrieval tasks and surpasses previous models in various benchmarks.\\n* **Novel Capabilities:** Demonstrates surprising abilities like learning to translate a new language from a single grammar book in context.\\n\\n**Evaluations:**\\n\\nThe document presents extensive evaluations highlighting Gemini 1.5 Pro\\'s capabilities. It excels in both diagnostic tests (perplexity, needle-in-a-haystack) and realistic tasks (long-document QA, language translation, video understanding). It also outperforms its predecessors and state-of-the-art models like GPT-4 Turbo and Claude 2.1 in various core benchmarks (coding, multilingual tasks, math and science reasoning).\\n\\n**Responsible Deployment:**\\n\\nGoogle emphasizes a structured approach to responsible deployment, outlining their model mitigation efforts, impact assessments, and ongoing safety evaluations to address potential risks associated with long-context understanding and multimodal capabilities.\\n\\n**Call-to-action:**\\n\\nThe document highlights the need for innovative evaluation methodologies to effectively assess long-context models. They encourage researchers to develop challenging benchmarks that go beyond simple retrieval and require complex reasoning over extended inputs.\\n\\n**Overall:**\\n\\nGemini 1.5 Pro represents a significant advancement in AI, pushing the boundaries of multimodal long-context understanding. Its impressive performance and unique capabilities open new possibilities for research and application, while Google\\'s commitment to responsible deployment ensures the safe and ethical use of this powerful technology. \\n', response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'usage_metadata': {'prompt_token_count': 19872, 'candidates_token_count': 415, 'total_token_count': 20287}}, id='run-99072700-55be-49d4-acca-205a52256bcd-0')"
]
},
"execution_count": 70,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# invoke a model response\n",
"llm.invoke([message])"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -593,12 +679,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Vertex Model Garden [exposes](https://cloud.google.com/vertex-ai/docs/start/explore-models) open-sourced models that can be deployed and served on Vertex AI. If you have successfully deployed a model from Vertex Model Garden, you can find a corresponding Vertex AI [endpoint](https://cloud.google.com/vertex-ai/docs/general/deployment#what_happens_when_you_deploy_a_model) in the console or via API."
"Vertex Model Garden [exposes](https://cloud.google.com/vertex-ai/docs/start/explore-models) open-sourced models that can be deployed and served on Vertex AI. \n",
"\n",
"Hundreds popular [open-sourced models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models#oss-models) like Llama, Falcon and are available for [One Click Deployment](https://cloud.google.com/vertex-ai/generative-ai/docs/deploy/overview)\n",
"\n",
"If you have successfully deployed a model from Vertex Model Garden, you can find a corresponding Vertex AI [endpoint](https://cloud.google.com/vertex-ai/docs/general/deployment#what_happens_when_you_deploy_a_model) in the console or via API."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
@@ -620,6 +710,7 @@
"metadata": {},
"outputs": [],
"source": [
"# invoke a model response\n",
"llm.invoke(\"What is the meaning of life?\")"
]
},
@@ -649,6 +740,241 @@
"print(chain.invoke({\"thing\": \"life\"}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Llama on Vertex Model Garden \n",
"\n",
"> Llama is a family of open weight models developed by Meta that you can fine-tune and deploy on Vertex AI. Llama models are pre-trained and fine-tuned generative text models. You can deploy Llama 2 and Llama 3 models on Vertex AI.\n",
"[Official documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/open-models/use-llama) for more information about Llama on [Vertex Model Garden](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use Llama on Vertex Model Garden you must first [deploy it to Vertex AI Endpoint](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models#deploy-a-model)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_vertexai import VertexAIModelGarden"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"# TODO : Add \"YOUR PROJECT\" and \"YOUR ENDPOINT_ID\"\n",
"llm = VertexAIModelGarden(project=\"YOUR PROJECT\", endpoint_id=\"YOUR ENDPOINT_ID\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Prompt:\\nWhat is the meaning of life?\\nOutput:\\n is a classic problem for Humanity. There is one vital characteristic of Life in'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# invoke a model response\n",
"llm.invoke(\"What is the meaning of life?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Like all LLMs, we can then compose it with other components:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"prompt = PromptTemplate.from_template(\"What is the meaning of {thing}?\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Prompt:\n",
"What is the meaning of life?\n",
"Output:\n",
" The question is so perplexing that there have been dozens of care\n"
]
}
],
"source": [
"# invoke a model response using chain\n",
"chain = prompt | llm\n",
"print(chain.invoke({\"thing\": \"life\"}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Falcon on Vertex Model Garden "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> Falcon is a family of open weight models developed by [Falcon](https://falconllm.tii.ae/) that you can fine-tune and deploy on Vertex AI. Falcon models are pre-trained and fine-tuned generative text models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use Falcon on Vertex Model Garden you must first [deploy it to Vertex AI Endpoint](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models#deploy-a-model)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_vertexai import VertexAIModelGarden"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"# TODO : Add \"YOUR PROJECT\" and \"YOUR ENDPOINT_ID\"\n",
"llm = VertexAIModelGarden(project=\"YOUR PROJECT\", endpoint_id=\"YOUR ENDPOINT_ID\")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Prompt:\\nWhat is the meaning of life?\\nOutput:\\nWhat is the meaning of life?\\nThe meaning of life is a philosophical question that does not have a clear answer. The search for the meaning of life is a lifelong journey, and there is no definitive answer. Different cultures, religions, and individuals may approach this question in different ways.'"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# invoke a model response\n",
"llm.invoke(\"What is the meaning of life?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Like all LLMs, we can then compose it with other components:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"prompt = PromptTemplate.from_template(\"What is the meaning of {thing}?\")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Prompt:\n",
"What is the meaning of life?\n",
"Output:\n",
"What is the meaning of life?\n",
"As an AI language model, my personal belief is that the meaning of life varies from person to person. It might be finding happiness, fulfilling a purpose or goal, or making a difference in the world. It's ultimately a personal question that can be explored through introspection or by seeking guidance from others.\n"
]
}
],
"source": [
"chain = prompt | llm\n",
"print(chain.invoke({\"thing\": \"life\"}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Gemma on Vertex AI Model Garden"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> [Gemma](https://ai.google.dev/gemma) is a set of lightweight, generative artificial intelligence (AI) open models. Gemma models are available to run in your applications and on your hardware, mobile devices, or hosted services. You can also customize these models using tuning techniques so that they excel at performing tasks that matter to you and your users. Gemma models are based on [Gemini](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/overview) models and are intended for the AI development community to extend and take further."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use Gemma on Vertex Model Garden you must first [deploy it to Vertex AI Endpoint](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models#deploy-a-model)"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import (\n",
" AIMessage,\n",
" HumanMessage,\n",
")\n",
"from langchain_google_vertexai import (\n",
" GemmaChatVertexAIModelGarden,\n",
" GemmaVertexAIModelGarden,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -656,6 +982,73 @@
"## Anthropic on Vertex AI"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Prompt:\\nWhat is the meaning of life?\\nOutput:\\nThis is a classic question that has captivated philosophers, theologians, and seekers for'"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# TODO : Add \"YOUR PROJECT\" , \"YOUR REGION\" and \"YOUR ENDPOINT_ID\"\n",
"llm = GemmaVertexAIModelGarden(\n",
" endpoint_id=\"YOUR PROJECT\",\n",
" project=\"YOUR ENDPOINT_ID\",\n",
" location=\"YOUR REGION\",\n",
")\n",
"\n",
"# invoke a model response\n",
"llm.invoke(\"What is the meaning of life?\")"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"# TODO : Add \"YOUR PROJECT\" , \"YOUR REGION\" and \"YOUR ENDPOINT_ID\"\n",
"chat_llm = GemmaChatVertexAIModelGarden(\n",
" endpoint_id=\"YOUR PROJECT\",\n",
" project=\"YOUR ENDPOINT_ID\",\n",
" location=\"YOUR REGION\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Prompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\nThe answer is 4.\\n2 + 2 = 4.', id='run-cea563df-e91a-4374-83a1-3d8b186a01b2-0')"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Prepare input for model consumption\n",
"text_question1 = \"How much is 2+2?\"\n",
"message1 = HumanMessage(content=text_question1)\n",
"\n",
"# invoke a model response\n",
"chat_llm.invoke([message1])"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -12,16 +12,15 @@
"\n",
"It optimizes setup and configuration details, including GPU usage.\n",
"\n",
"For a complete list of supported models and model variants, see the [Ollama model library](https://github.com/jmorganca/ollama#model-library).\n",
"For a complete list of supported models and model variants, see the [Ollama model library](https://github.com/ollama/ollama#model-library).\n",
"\n",
"## Setup\n",
"\n",
"First, follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:\n",
"First, follow [these instructions](https://github.com/ollama/ollama) to set up and run a local Ollama instance:\n",
"\n",
"* [Download](https://ollama.ai/download) and install Ollama onto the available supported platforms (including Windows Subsystem for Linux)\n",
"* Fetch available LLM model via `ollama pull <name-of-model>`\n",
" * View a list of available models via the [model library](https://ollama.ai/library)\n",
" * e.g., `ollama pull llama3`\n",
" * View a list of available models via the [model library](https://ollama.ai/library) and pull to use locally with the command `ollama pull llama3`\n",
"* This will download the default tagged version of the model. Typically, the default points to the latest, smallest sized-parameter model.\n",
"\n",
"> On Mac, the models will be download to `~/.ollama/models`\n",
@@ -29,28 +28,29 @@
"> On Linux (or WSL), the models will be stored at `/usr/share/ollama/.ollama/models`\n",
"\n",
"* Specify the exact version of the model of interest as such `ollama pull vicuna:13b-v1.5-16k-q4_0` (View the [various tags for the `Vicuna`](https://ollama.ai/library/vicuna/tags) model in this instance)\n",
"* To view all pulled models, use `ollama list`\n",
"* To view all pulled models on your local instance, use `ollama list`\n",
"* To chat directly with a model from the command line, use `ollama run <name-of-model>`\n",
"* View the [Ollama documentation](https://github.com/jmorganca/ollama) for more commands. Run `ollama help` in the terminal to see available commands too.\n",
"* View the [Ollama documentation](https://github.com/ollama/ollama) for more commands. \n",
"* Run `ollama help` in the terminal to see available commands too.\n",
"\n",
"## Usage\n",
"\n",
"You can see a full list of supported parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html).\n",
"You can see a full list of supported parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.ollama.Ollama.html).\n",
"\n",
"If you are using a LLaMA `chat` model (e.g., `ollama pull llama3`) then you can use the `ChatOllama` interface.\n",
"If you are using a LLaMA `chat` model (e.g., `ollama pull llama3`) then you can use the `ChatOllama` [interface](https://python.langchain.com/v0.2/docs/integrations/chat/ollama/).\n",
"\n",
"This includes [special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) for system message and user input.\n",
"This includes [special tokens](https://ollama.com/library/llama3) for system message and user input.\n",
"\n",
"## Interacting with Models \n",
"\n",
"Here are a few ways to interact with pulled local models\n",
"\n",
"#### directly in the terminal:\n",
"#### In the terminal:\n",
"\n",
"* All of your local models are automatically served on `localhost:11434`\n",
"* Run `ollama run <name-of-model>` to start interacting via the command line directly\n",
"\n",
"### via an API\n",
"#### Via the API\n",
"\n",
"Send an `application/json` request to the API endpoint of Ollama to interact.\n",
"\n",
@@ -61,11 +61,20 @@
"}'\n",
"```\n",
"\n",
"See the Ollama [API documentation](https://github.com/jmorganca/ollama/blob/main/docs/api.md) for all endpoints.\n",
"See the Ollama [API documentation](https://github.com/ollama/ollama/blob/main/docs/api.md) for all endpoints.\n",
"\n",
"#### via LangChain\n",
"\n",
"See a typical basic example of using Ollama chat model in your LangChain application."
"See a typical basic example of using [Ollama chat model](https://python.langchain.com/v0.2/docs/integrations/chat/ollama/) in your LangChain application."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install langchain-community"
]
},
{
@@ -87,7 +96,9 @@
"source": [
"from langchain_community.llms import Ollama\n",
"\n",
"llm = Ollama(model=\"llama3\")\n",
"llm = Ollama(\n",
" model=\"llama3\"\n",
") # assuming you have Ollama installed and have llama3 model pulled with `ollama pull llama3 `\n",
"\n",
"llm.invoke(\"Tell me a joke\")"
]
@@ -280,6 +291,24 @@
"llm_with_image_context = bakllava.bind(images=[image_b64])\n",
"llm_with_image_context.invoke(\"What is the dollar based gross retention rate:\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Concurrency Features\n",
"\n",
"Ollama supports concurrency inference for a single model, and or loading multiple models simulatenously (at least [version 0.1.33](https://github.com/ollama/ollama/releases)).\n",
"\n",
"Start the Ollama server with:\n",
"\n",
"* `OLLAMA_NUM_PARALLEL`: Handle multiple requests simultaneously for a single model\n",
"* `OLLAMA_MAX_LOADED_MODELS`: Load multiple models simultaneously\n",
"\n",
"Example: `OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=4 ollama serve`\n",
"\n",
"Learn more about configuring Ollama server in [the official guide](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server)."
]
}
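As an async variant, here is a sketch that queries two models at once. It assumes the server was started with `OLLAMA_MAX_LOADED_MODELS=2` and that both models have been pulled; the `mistral` model name is an assumption.

```python
# A minimal async sketch, assuming the server was started with
# OLLAMA_MAX_LOADED_MODELS=2 and that both models were pulled locally
# (the `mistral` model name is an assumption).
import asyncio

from langchain_community.llms import Ollama

llama = Ollama(model="llama3")
mistral = Ollama(model="mistral")


async def main() -> None:
    # With two models loaded, both requests can be served at the same time.
    answers = await asyncio.gather(
        llama.ainvoke("Give one benefit of parallel inference."),
        mistral.ainvoke("Give one benefit of parallel inference."),
    )
    for answer in answers:
        print(answer)


asyncio.run(main())
```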
],
"metadata": {

View File

@@ -32,7 +32,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet \"astrapy>=0.7.1\""
"%pip install --upgrade --quiet \"astrapy>=0.7.1 langchain-community\" "
]
},
{
@@ -50,7 +50,7 @@
"metadata": {},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
"ASTRA_DB_API_ENDPOINT = https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com\n",

View File

@@ -32,7 +32,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet \"cassio>=0.1.0\""
"%pip install --upgrade --quiet \"cassio>=0.1.0 langchain-community\""
]
},
{

View File

@@ -43,7 +43,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet elasticsearch langchain"
"%pip install --upgrade --quiet elasticsearch langchain langchain-community"
]
},
{

View File

@@ -25,7 +25,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet rockset"
"%pip install --upgrade --quiet rockset langchain-community"
]
},
{

View File

@@ -26,7 +26,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain_openai"
"%pip install --upgrade --quiet langchain langchain_openai langchain-community"
]
},
{

View File

@@ -38,7 +38,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet xata langchain-openai langchain"
"%pip install --upgrade --quiet xata langchain-openai langchain langchain-community"
]
},
{

View File

@@ -0,0 +1,337 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1cdd080f9ea3e0b",
"metadata": {},
"source": [
"# ZepCloudChatMessageHistory\n",
"> Recall, understand, and extract data from chat histories. Power personalized AI experiences.\n",
"\n",
">[Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps.\n",
"> With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant,\n",
"> while also reducing hallucinations, latency, and cost.\n",
"\n",
"> See [Zep Cloud Installation Guide](https://help.getzep.com/sdks) and more [Zep Cloud Langchain Examples](https://github.com/getzep/zep-python/tree/main/examples)\n",
"\n",
"## Example\n",
"\n",
"This notebook demonstrates how to use [Zep](https://www.getzep.com/) to persist chat history and use Zep Memory with your chain.\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "82fb8484eed2ee9a",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:12.069045Z",
"start_time": "2024-05-10T05:20:12.062518Z"
}
},
"outputs": [],
"source": [
"from uuid import uuid4\n",
"\n",
"from langchain_community.chat_message_histories import ZepCloudChatMessageHistory\n",
"from langchain_community.memory.zep_cloud_memory import ZepCloudMemory\n",
"from langchain_core.messages import AIMessage, HumanMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables import (\n",
" RunnableParallel,\n",
")\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"session_id = str(uuid4()) # This is a unique identifier for the session"
]
},
{
"cell_type": "markdown",
"id": "d79e0e737db426ac",
"metadata": {},
"source": [
"Provide your OpenAI key"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7430ea2341ecd227",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:17.983314Z",
"start_time": "2024-05-10T05:20:13.805729Z"
}
},
"outputs": [],
"source": [
"import getpass\n",
"\n",
"openai_key = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "81a87004bc92c3e2",
"metadata": {},
"source": [
"Provide your Zep API key. See https://help.getzep.com/projects#api-keys\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "c21632a2c7223170",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:24.694643Z",
"start_time": "2024-05-10T05:20:22.174681Z"
}
},
"outputs": [],
"source": [
"zep_api_key = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "436de864fe0000",
"metadata": {},
"source": [
"Preload some messages into the memory. The default message window is 4 messages. We want to push beyond this to demonstrate auto-summarization."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "e8fb07edd965ef1f",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:38.657289Z",
"start_time": "2024-05-10T05:20:26.981492Z"
}
},
"outputs": [],
"source": [
"test_history = [\n",
" {\"role\": \"human\", \"content\": \"Who was Octavia Butler?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Estelle Butler (June 22, 1947 February 24, 2006) was an American\"\n",
" \" science fiction author.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"Which books of hers were made into movies?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"The most well-known adaptation of Octavia Butler's work is the FX series\"\n",
" \" Kindred, based on her novel of the same name.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"Who were her contemporaries?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R.\"\n",
" \" Delany, and Joanna Russ.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"What awards did she win?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur\"\n",
" \" Fellowship.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"content\": \"Which other women sci-fi writers might I want to read?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": \"You might want to read Ursula K. Le Guin or Joanna Russ.\",\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"content\": (\n",
" \"Write a short synopsis of Butler's book, Parable of the Sower. What is it\"\n",
" \" about?\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Parable of the Sower is a science fiction novel by Octavia Butler,\"\n",
" \" published in 1993. It follows the story of Lauren Olamina, a young woman\"\n",
" \" living in a dystopian future where society has collapsed due to\"\n",
" \" environmental disasters, poverty, and violence.\"\n",
" ),\n",
" \"metadata\": {\"foo\": \"bar\"},\n",
" },\n",
"]\n",
"\n",
"zep_memory = ZepCloudMemory(\n",
" session_id=session_id,\n",
" api_key=zep_api_key,\n",
")\n",
"\n",
"for msg in test_history:\n",
" zep_memory.chat_memory.add_message(\n",
" HumanMessage(content=msg[\"content\"])\n",
" if msg[\"role\"] == \"human\"\n",
" else AIMessage(content=msg[\"content\"])\n",
" )\n",
"\n",
"import time\n",
"\n",
"time.sleep(\n",
" 10\n",
") # Wait for the messages to be embedded and summarized, this happens asynchronously."
]
},
{
"cell_type": "markdown",
"id": "bfa6b19f0b501aea",
"metadata": {},
"source": [
"**MessagesPlaceholder** - Were using the variable name chat_history here. This will incorporate the chat history into the prompt.\n",
"Its important that this variable name aligns with the history_messages_key in the RunnableWithMessageHistory chain for seamless integration.\n",
"\n",
"**question** must match input_messages_key in `RunnableWithMessageHistory“ chain."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "2b12eccf9b4908eb",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:46.592163Z",
"start_time": "2024-05-10T05:20:46.464326Z"
}
},
"outputs": [],
"source": [
"template = \"\"\"Be helpful and answer the question below using the provided context:\n",
" \"\"\"\n",
"answer_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", template),\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"user\", \"{question}\"),\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "7d6014d6fe7f2d22",
"metadata": {},
"source": [
"We use RunnableWithMessageHistory to incorporate Zeps Chat History into our chain. This class requires a session_id as a parameter when you activate the chain."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "83ea7322638f8ead",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:49.681754Z",
"start_time": "2024-05-10T05:20:49.663404Z"
}
},
"outputs": [],
"source": [
"inputs = RunnableParallel(\n",
" {\n",
" \"question\": lambda x: x[\"question\"],\n",
" \"chat_history\": lambda x: x[\"chat_history\"],\n",
" },\n",
")\n",
"chain = RunnableWithMessageHistory(\n",
" inputs | answer_prompt | ChatOpenAI(openai_api_key=openai_key) | StrOutputParser(),\n",
" lambda s_id: ZepCloudChatMessageHistory(\n",
" session_id=s_id, # This uniquely identifies the conversation, note that we are getting session id as chain configurable field\n",
" api_key=zep_api_key,\n",
" memory_type=\"perpetual\",\n",
" ),\n",
" input_messages_key=\"question\",\n",
" history_messages_key=\"chat_history\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "db8bdc1d0d7bb672",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:54.966758Z",
"start_time": "2024-05-10T05:20:52.117440Z"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Parent run 622c6f75-3e4a-413d-ba20-558c1fea0d50 not found for run af12a4b1-e882-432d-834f-e9147465faf6. Treating as a root run.\n"
]
},
{
"data": {
"text/plain": [
"'\"Parable of the Sower\" is relevant to the challenges facing contemporary society as it explores themes of environmental degradation, economic inequality, social unrest, and the search for hope and community in the face of chaos. The novel\\'s depiction of a dystopian future where society has collapsed due to environmental and economic crises serves as a cautionary tale about the potential consequences of our current societal and environmental challenges. By addressing issues such as climate change, social injustice, and the impact of technology on humanity, Octavia Butler\\'s work prompts readers to reflect on the pressing issues of our time and the importance of resilience, empathy, and collective action in building a better future.'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\n",
" {\n",
" \"question\": \"What is the book's relevance to the challenges facing contemporary society?\"\n",
" },\n",
" config={\"configurable\": {\"session_id\": session_id}},\n",
")"
]
},
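{
"cell_type": "markdown",
"id": "streaming-note",
"metadata": {},
"source": [
"As a follow-up sketch (not part of the original notebook), the same chain can also stream tokens via `.stream`, reusing the identical `session_id` config:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "streaming-example",
"metadata": {},
"outputs": [],
"source": [
"# Chunks are plain strings because the chain ends with StrOutputParser.\n",
"for chunk in chain.stream(\n",
"    {\"question\": \"Summarize our conversation so far.\"},\n",
"    config={\"configurable\": {\"session_id\": session_id}},\n",
"):\n",
"    print(chunk, end=\"\", flush=True)"
]
},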
{
"cell_type": "code",
"execution_count": null,
"id": "1d9c609652110db3",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -6,7 +6,7 @@
"collapsed": false
},
"source": [
"# Zep\n",
"# Zep Open Source Memory\n",
"> Recall, understand, and extract data from chat histories. Power personalized AI experiences.\n",
"\n",
">[Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps.\n",
@@ -36,11 +36,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2023-07-09T19:20:49.003167Z",
"start_time": "2023-07-09T19:20:47.446370Z"
"end_time": "2024-05-10T03:25:26.191166Z",
"start_time": "2024-05-10T03:25:25.641520Z"
}
},
"outputs": [],

View File

@@ -0,0 +1,428 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"# Zep Cloud Memory\n",
"> Recall, understand, and extract data from chat histories. Power personalized AI experiences.\n",
"\n",
">[Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps.\n",
"> With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant,\n",
"> while also reducing hallucinations, latency, and cost.\n",
"\n",
"> See [Zep Cloud Installation Guide](https://help.getzep.com/sdks) and more [Zep Cloud Langchain Examples](https://github.com/getzep/zep-python/tree/main/examples)\n",
"\n",
"## Example\n",
"\n",
"This notebook demonstrates how to use [Zep](https://www.getzep.com/) as memory for your chatbot.\n",
"\n",
"We'll demonstrate:\n",
"\n",
"1. Adding conversation history to Zep.\n",
"2. Running an agent and having message automatically added to the store.\n",
"3. Viewing the enriched messages.\n",
"4. Vector search over the conversation history."
]
},
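{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before running, install the required packages. A minimal setup sketch; the package list is an assumption inferred from the imports below (`wikipedia` backs `WikipediaAPIWrapper`, and the Zep Cloud integrations depend on the `zep-cloud` SDK)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Assumed prerequisites (not in the original notebook); adjust as needed.\n",
"%pip install --upgrade --quiet langchain langchain-community langchain-openai zep-cloud wikipedia"
]
},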
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-14T17:25:10.779451Z",
"start_time": "2024-05-14T17:25:10.375249Z"
}
},
"outputs": [
{
"ename": "AttributeError",
"evalue": "'FieldInfo' object has no attribute 'deprecated'",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mAttributeError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[0;32mIn[3], line 8\u001b[0m\n\u001b[1;32m 6\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_community\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mutilities\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m WikipediaAPIWrapper\n\u001b[1;32m 7\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_core\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mmessages\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m AIMessage, HumanMessage\n\u001b[0;32m----> 8\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m OpenAI\n\u001b[1;32m 10\u001b[0m session_id \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mstr\u001b[39m(uuid4()) \u001b[38;5;66;03m# This is a unique identifier for the session\u001b[39;00m\n",
"File \u001b[0;32m~/job/integrations/langchain/libs/partners/openai/langchain_openai/__init__.py:1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mchat_models\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m (\n\u001b[1;32m 2\u001b[0m AzureChatOpenAI,\n\u001b[1;32m 3\u001b[0m ChatOpenAI,\n\u001b[1;32m 4\u001b[0m )\n\u001b[1;32m 5\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01membeddings\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m (\n\u001b[1;32m 6\u001b[0m AzureOpenAIEmbeddings,\n\u001b[1;32m 7\u001b[0m OpenAIEmbeddings,\n\u001b[1;32m 8\u001b[0m )\n\u001b[1;32m 9\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mllms\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m AzureOpenAI, OpenAI\n",
"File \u001b[0;32m~/job/integrations/langchain/libs/partners/openai/langchain_openai/chat_models/__init__.py:1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mchat_models\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mazure\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m AzureChatOpenAI\n\u001b[1;32m 2\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mchat_models\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mbase\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m ChatOpenAI\n\u001b[1;32m 4\u001b[0m __all__ \u001b[38;5;241m=\u001b[39m [\n\u001b[1;32m 5\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mChatOpenAI\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[1;32m 6\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mAzureChatOpenAI\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[1;32m 7\u001b[0m ]\n",
"File \u001b[0;32m~/job/integrations/langchain/libs/partners/openai/langchain_openai/chat_models/azure.py:8\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[38;5;28;01mimport\u001b[39;00m \u001b[38;5;21;01mos\u001b[39;00m\n\u001b[1;32m 6\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtyping\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Any, Callable, Dict, List, Optional, Union\n\u001b[0;32m----> 8\u001b[0m \u001b[38;5;28;01mimport\u001b[39;00m \u001b[38;5;21;01mopenai\u001b[39;00m\n\u001b[1;32m 9\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_core\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01moutputs\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m ChatResult\n\u001b[1;32m 10\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_core\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mpydantic_v1\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Field, SecretStr, root_validator\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/openai/__init__.py:8\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[38;5;28;01mimport\u001b[39;00m \u001b[38;5;21;01mos\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m \u001b[38;5;21;01m_os\u001b[39;00m\n\u001b[1;32m 6\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtyping_extensions\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m override\n\u001b[0;32m----> 8\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m types\n\u001b[1;32m 9\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01m_types\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m NOT_GIVEN, NoneType, NotGiven, Transport, ProxiesTypes\n\u001b[1;32m 10\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01m_utils\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m file_from_path\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/openai/types/__init__.py:5\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[38;5;66;03m# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.\u001b[39;00m\n\u001b[1;32m 3\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m__future__\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m annotations\n\u001b[0;32m----> 5\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mbatch\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Batch \u001b[38;5;28;01mas\u001b[39;00m Batch\n\u001b[1;32m 6\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mimage\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Image \u001b[38;5;28;01mas\u001b[39;00m Image\n\u001b[1;32m 7\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mmodel\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Model \u001b[38;5;28;01mas\u001b[39;00m Model\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/openai/types/batch.py:7\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtyping\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m List, Optional\n\u001b[1;32m 5\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtyping_extensions\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Literal\n\u001b[0;32m----> 7\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01m_models\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m BaseModel\n\u001b[1;32m 8\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mbatch_error\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m BatchError\n\u001b[1;32m 9\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mbatch_request_counts\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m BatchRequestCounts\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/openai/_models.py:667\u001b[0m\n\u001b[1;32m 662\u001b[0m json_data: Body\n\u001b[1;32m 663\u001b[0m extra_json: AnyMapping\n\u001b[1;32m 666\u001b[0m \u001b[38;5;129;43m@final\u001b[39;49m\n\u001b[0;32m--> 667\u001b[0m \u001b[38;5;28;43;01mclass\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;21;43;01mFinalRequestOptions\u001b[39;49;00m\u001b[43m(\u001b[49m\u001b[43mpydantic\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mBaseModel\u001b[49m\u001b[43m)\u001b[49m\u001b[43m:\u001b[49m\n\u001b[1;32m 668\u001b[0m \u001b[43m \u001b[49m\u001b[43mmethod\u001b[49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mstr\u001b[39;49m\n\u001b[1;32m 669\u001b[0m \u001b[43m \u001b[49m\u001b[43murl\u001b[49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mstr\u001b[39;49m\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_model_construction.py:202\u001b[0m, in \u001b[0;36m__new__\u001b[0;34m(mcs, cls_name, bases, namespace, __pydantic_generic_metadata__, __pydantic_reset_parent_namespace__, _create_model_module, **kwargs)\u001b[0m\n\u001b[1;32m 199\u001b[0m super(cls, cls).__pydantic_init_subclass__(**kwargs) # type: ignore[misc]\n\u001b[1;32m 200\u001b[0m return cls\n\u001b[1;32m 201\u001b[0m else:\n\u001b[0;32m--> 202\u001b[0m # this is the BaseModel class itself being created, no logic required\n\u001b[1;32m 203\u001b[0m return super().__new__(mcs, cls_name, bases, namespace, **kwargs)\n\u001b[1;32m 205\u001b[0m if not typing.TYPE_CHECKING: # pragma: no branch\n\u001b[1;32m 206\u001b[0m # We put `__getattr__` in a non-TYPE_CHECKING block because otherwise, mypy allows arbitrary attribute access\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_model_construction.py:539\u001b[0m, in \u001b[0;36mcomplete_model_class\u001b[0;34m(cls, cls_name, config_wrapper, raise_errors, types_namespace, create_model_module)\u001b[0m\n\u001b[1;32m 532\u001b[0m \u001b[38;5;66;03m# debug(schema)\u001b[39;00m\n\u001b[1;32m 533\u001b[0m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m__pydantic_core_schema__ \u001b[38;5;241m=\u001b[39m schema\n\u001b[1;32m 535\u001b[0m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m__pydantic_validator__ \u001b[38;5;241m=\u001b[39m create_schema_validator(\n\u001b[1;32m 536\u001b[0m schema,\n\u001b[1;32m 537\u001b[0m \u001b[38;5;28mcls\u001b[39m,\n\u001b[1;32m 538\u001b[0m create_model_module \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__module__\u001b[39m,\n\u001b[0;32m--> 539\u001b[0m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__qualname__\u001b[39m,\n\u001b[1;32m 540\u001b[0m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mcreate_model\u001b[39m\u001b[38;5;124m'\u001b[39m \u001b[38;5;28;01mif\u001b[39;00m create_model_module \u001b[38;5;28;01melse\u001b[39;00m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mBaseModel\u001b[39m\u001b[38;5;124m'\u001b[39m,\n\u001b[1;32m 541\u001b[0m core_config,\n\u001b[1;32m 542\u001b[0m config_wrapper\u001b[38;5;241m.\u001b[39mplugin_settings,\n\u001b[1;32m 543\u001b[0m )\n\u001b[1;32m 544\u001b[0m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m__pydantic_serializer__ \u001b[38;5;241m=\u001b[39m SchemaSerializer(schema, core_config)\n\u001b[1;32m 545\u001b[0m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m__pydantic_complete__ \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mTrue\u001b[39;00m\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/main.py:626\u001b[0m, in \u001b[0;36m__get_pydantic_core_schema__\u001b[0;34m(cls, source, handler)\u001b[0m\n\u001b[1;32m 611\u001b[0m \u001b[38;5;129m@classmethod\u001b[39m\n\u001b[1;32m 612\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__pydantic_init_subclass__\u001b[39m(\u001b[38;5;28mcls\u001b[39m, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs: Any) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m 613\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"This is intended to behave just like `__init_subclass__`, but is called by `ModelMetaclass`\u001b[39;00m\n\u001b[1;32m 614\u001b[0m \u001b[38;5;124;03m only after the class is actually fully initialized. In particular, attributes like `model_fields` will\u001b[39;00m\n\u001b[1;32m 615\u001b[0m \u001b[38;5;124;03m be present when this is called.\u001b[39;00m\n\u001b[1;32m 616\u001b[0m \n\u001b[1;32m 617\u001b[0m \u001b[38;5;124;03m This is necessary because `__init_subclass__` will always be called by `type.__new__`,\u001b[39;00m\n\u001b[1;32m 618\u001b[0m \u001b[38;5;124;03m and it would require a prohibitively large refactor to the `ModelMetaclass` to ensure that\u001b[39;00m\n\u001b[1;32m 619\u001b[0m \u001b[38;5;124;03m `type.__new__` was called in such a manner that the class would already be sufficiently initialized.\u001b[39;00m\n\u001b[1;32m 620\u001b[0m \n\u001b[1;32m 621\u001b[0m \u001b[38;5;124;03m This will receive the same `kwargs` that would be passed to the standard `__init_subclass__`, namely,\u001b[39;00m\n\u001b[1;32m 622\u001b[0m \u001b[38;5;124;03m any kwargs passed to the class definition that aren't used internally by pydantic.\u001b[39;00m\n\u001b[1;32m 623\u001b[0m \n\u001b[1;32m 624\u001b[0m \u001b[38;5;124;03m Args:\u001b[39;00m\n\u001b[1;32m 625\u001b[0m \u001b[38;5;124;03m **kwargs: Any keyword arguments passed to the class definition that aren't used internally\u001b[39;00m\n\u001b[0;32m--> 626\u001b[0m \u001b[38;5;124;03m by pydantic.\u001b[39;00m\n\u001b[1;32m 627\u001b[0m \u001b[38;5;124;03m \"\"\"\u001b[39;00m\n\u001b[1;32m 628\u001b[0m \u001b[38;5;28;01mpass\u001b[39;00m\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:82\u001b[0m, in \u001b[0;36mCallbackGetCoreSchemaHandler.__call__\u001b[0;34m(self, source_type)\u001b[0m\n\u001b[1;32m 81\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__call__\u001b[39m(\u001b[38;5;28mself\u001b[39m, __source_type: Any) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m core_schema\u001b[38;5;241m.\u001b[39mCoreSchema:\n\u001b[0;32m---> 82\u001b[0m schema \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_handler(__source_type)\n\u001b[1;32m 83\u001b[0m ref \u001b[38;5;241m=\u001b[39m schema\u001b[38;5;241m.\u001b[39mget(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mref\u001b[39m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m 84\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_ref_mode \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mto-def\u001b[39m\u001b[38;5;124m'\u001b[39m:\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:502\u001b[0m, in \u001b[0;36mgenerate_schema\u001b[0;34m(self, obj, from_dunder_get_core_schema)\u001b[0m\n\u001b[1;32m 498\u001b[0m schema \u001b[38;5;241m=\u001b[39m _add_custom_serialization_from_json_encoders(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_config_wrapper\u001b[38;5;241m.\u001b[39mjson_encoders, obj, schema)\n\u001b[1;32m 500\u001b[0m schema \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_post_process_generated_schema(schema)\n\u001b[0;32m--> 502\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m schema\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:753\u001b[0m, in \u001b[0;36m_generate_schema_inner\u001b[0;34m(self, obj)\u001b[0m\n\u001b[1;32m 749\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mmatch_type\u001b[39m(\u001b[38;5;28mself\u001b[39m, obj: Any) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m core_schema\u001b[38;5;241m.\u001b[39mCoreSchema: \u001b[38;5;66;03m# noqa: C901\u001b[39;00m\n\u001b[1;32m 750\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"Main mapping of types to schemas.\u001b[39;00m\n\u001b[1;32m 751\u001b[0m \n\u001b[1;32m 752\u001b[0m \u001b[38;5;124;03m The general structure is a series of if statements starting with the simple cases\u001b[39;00m\n\u001b[0;32m--> 753\u001b[0m \u001b[38;5;124;03m (non-generic primitive types) and then handling generics and other more complex cases.\u001b[39;00m\n\u001b[1;32m 754\u001b[0m \n\u001b[1;32m 755\u001b[0m \u001b[38;5;124;03m Each case either generates a schema directly, calls into a public user-overridable method\u001b[39;00m\n\u001b[1;32m 756\u001b[0m \u001b[38;5;124;03m (like `GenerateSchema.tuple_variable_schema`) or calls into a private method that handles some\u001b[39;00m\n\u001b[1;32m 757\u001b[0m \u001b[38;5;124;03m boilerplate before calling into the user-facing method (e.g. `GenerateSchema._tuple_schema`).\u001b[39;00m\n\u001b[1;32m 758\u001b[0m \n\u001b[1;32m 759\u001b[0m \u001b[38;5;124;03m The idea is that we'll evolve this into adding more and more user facing methods over time\u001b[39;00m\n\u001b[1;32m 760\u001b[0m \u001b[38;5;124;03m as they get requested and we figure out what the right API for them is.\u001b[39;00m\n\u001b[1;32m 761\u001b[0m \u001b[38;5;124;03m \"\"\"\u001b[39;00m\n\u001b[1;32m 762\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m obj \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28mstr\u001b[39m:\n\u001b[1;32m 763\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstr_schema()\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:580\u001b[0m, in \u001b[0;36m_model_schema\u001b[0;34m(self, cls)\u001b[0m\n\u001b[1;32m 574\u001b[0m inner_schema \u001b[38;5;241m=\u001b[39m new_inner_schema\n\u001b[1;32m 575\u001b[0m inner_schema \u001b[38;5;241m=\u001b[39m apply_model_validators(inner_schema, model_validators, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124minner\u001b[39m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m 577\u001b[0m model_schema \u001b[38;5;241m=\u001b[39m core_schema\u001b[38;5;241m.\u001b[39mmodel_schema(\n\u001b[1;32m 578\u001b[0m \u001b[38;5;28mcls\u001b[39m,\n\u001b[1;32m 579\u001b[0m inner_schema,\n\u001b[0;32m--> 580\u001b[0m custom_init\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mgetattr\u001b[39m(\u001b[38;5;28mcls\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m__pydantic_custom_init__\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m),\n\u001b[1;32m 581\u001b[0m root_model\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mFalse\u001b[39;00m,\n\u001b[1;32m 582\u001b[0m post_init\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mgetattr\u001b[39m(\u001b[38;5;28mcls\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m__pydantic_post_init__\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m),\n\u001b[1;32m 583\u001b[0m config\u001b[38;5;241m=\u001b[39mcore_config,\n\u001b[1;32m 584\u001b[0m ref\u001b[38;5;241m=\u001b[39mmodel_ref,\n\u001b[1;32m 585\u001b[0m metadata\u001b[38;5;241m=\u001b[39mmetadata,\n\u001b[1;32m 586\u001b[0m )\n\u001b[1;32m 588\u001b[0m schema \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_apply_model_serializers(model_schema, decorators\u001b[38;5;241m.\u001b[39mmodel_serializers\u001b[38;5;241m.\u001b[39mvalues())\n\u001b[1;32m 589\u001b[0m schema \u001b[38;5;241m=\u001b[39m apply_model_validators(schema, model_validators, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mouter\u001b[39m\u001b[38;5;124m'\u001b[39m)\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:580\u001b[0m, in \u001b[0;36m<dictcomp>\u001b[0;34m(.0)\u001b[0m\n\u001b[1;32m 574\u001b[0m inner_schema \u001b[38;5;241m=\u001b[39m new_inner_schema\n\u001b[1;32m 575\u001b[0m inner_schema \u001b[38;5;241m=\u001b[39m apply_model_validators(inner_schema, model_validators, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124minner\u001b[39m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m 577\u001b[0m model_schema \u001b[38;5;241m=\u001b[39m core_schema\u001b[38;5;241m.\u001b[39mmodel_schema(\n\u001b[1;32m 578\u001b[0m \u001b[38;5;28mcls\u001b[39m,\n\u001b[1;32m 579\u001b[0m inner_schema,\n\u001b[0;32m--> 580\u001b[0m custom_init\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mgetattr\u001b[39m(\u001b[38;5;28mcls\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m__pydantic_custom_init__\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m),\n\u001b[1;32m 581\u001b[0m root_model\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mFalse\u001b[39;00m,\n\u001b[1;32m 582\u001b[0m post_init\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mgetattr\u001b[39m(\u001b[38;5;28mcls\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m__pydantic_post_init__\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m),\n\u001b[1;32m 583\u001b[0m config\u001b[38;5;241m=\u001b[39mcore_config,\n\u001b[1;32m 584\u001b[0m ref\u001b[38;5;241m=\u001b[39mmodel_ref,\n\u001b[1;32m 585\u001b[0m metadata\u001b[38;5;241m=\u001b[39mmetadata,\n\u001b[1;32m 586\u001b[0m )\n\u001b[1;32m 588\u001b[0m schema \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_apply_model_serializers(model_schema, decorators\u001b[38;5;241m.\u001b[39mmodel_serializers\u001b[38;5;241m.\u001b[39mvalues())\n\u001b[1;32m 589\u001b[0m schema \u001b[38;5;241m=\u001b[39m apply_model_validators(schema, model_validators, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mouter\u001b[39m\u001b[38;5;124m'\u001b[39m)\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:916\u001b[0m, in \u001b[0;36m_generate_md_field_schema\u001b[0;34m(self, name, field_info, decorators)\u001b[0m\n\u001b[1;32m 906\u001b[0m common_field \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_common_field_schema(name, field_info, decorators)\n\u001b[1;32m 907\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m core_schema\u001b[38;5;241m.\u001b[39mmodel_field(\n\u001b[1;32m 908\u001b[0m common_field[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mschema\u001b[39m\u001b[38;5;124m'\u001b[39m],\n\u001b[1;32m 909\u001b[0m serialization_exclude\u001b[38;5;241m=\u001b[39mcommon_field[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mserialization_exclude\u001b[39m\u001b[38;5;124m'\u001b[39m],\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 913\u001b[0m metadata\u001b[38;5;241m=\u001b[39mcommon_field[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mmetadata\u001b[39m\u001b[38;5;124m'\u001b[39m],\n\u001b[1;32m 914\u001b[0m )\n\u001b[0;32m--> 916\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_generate_dc_field_schema\u001b[39m(\n\u001b[1;32m 917\u001b[0m \u001b[38;5;28mself\u001b[39m,\n\u001b[1;32m 918\u001b[0m name: \u001b[38;5;28mstr\u001b[39m,\n\u001b[1;32m 919\u001b[0m field_info: FieldInfo,\n\u001b[1;32m 920\u001b[0m decorators: DecoratorInfos,\n\u001b[1;32m 921\u001b[0m ) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m core_schema\u001b[38;5;241m.\u001b[39mDataclassField:\n\u001b[1;32m 922\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"Prepare a DataclassField to represent the parameter/field, of a dataclass.\"\"\"\u001b[39;00m\n\u001b[1;32m 923\u001b[0m common_field \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_common_field_schema(name, field_info, decorators)\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:1114\u001b[0m, in \u001b[0;36m_common_field_schema\u001b[0;34m(self, name, field_info, decorators)\u001b[0m\n\u001b[1;32m 1108\u001b[0m json_schema_extra \u001b[38;5;241m=\u001b[39m field_info\u001b[38;5;241m.\u001b[39mjson_schema_extra\n\u001b[1;32m 1110\u001b[0m metadata \u001b[38;5;241m=\u001b[39m build_metadata_dict(\n\u001b[1;32m 1111\u001b[0m js_annotation_functions\u001b[38;5;241m=\u001b[39m[get_json_schema_update_func(json_schema_updates, json_schema_extra)]\n\u001b[1;32m 1112\u001b[0m )\n\u001b[0;32m-> 1114\u001b[0m alias_generator \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_config_wrapper\u001b[38;5;241m.\u001b[39malias_generator\n\u001b[1;32m 1115\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m alias_generator \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m 1116\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_apply_alias_generator_to_field_info(alias_generator, field_info, name)\n",
"\u001b[0;31mAttributeError\u001b[0m: 'FieldInfo' object has no attribute 'deprecated'"
]
}
],
"source": [
"from uuid import uuid4\n",
"\n",
"from langchain.agents import AgentType, Tool, initialize_agent\n",
"from langchain_community.memory.zep_cloud_memory import ZepCloudMemory\n",
"from langchain_community.retrievers import ZepCloudRetriever\n",
"from langchain_community.utilities import WikipediaAPIWrapper\n",
"from langchain_core.messages import AIMessage, HumanMessage\n",
"from langchain_openai import OpenAI\n",
"\n",
"session_id = str(uuid4()) # This is a unique identifier for the session"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Provide your OpenAI key\n",
"import getpass\n",
"\n",
"openai_key = getpass.getpass()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Provide your Zep API key. See https://help.getzep.com/projects#api-keys\n",
"\n",
"zep_api_key = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialize the Zep Chat Message History Class and initialize the Agent\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"search = WikipediaAPIWrapper()\n",
"tools = [\n",
" Tool(\n",
" name=\"Search\",\n",
" func=search.run,\n",
" description=(\n",
" \"useful for when you need to search online for answers. You should ask\"\n",
" \" targeted questions\"\n",
" ),\n",
" ),\n",
"]\n",
"\n",
"# Set up Zep Chat History\n",
"memory = ZepCloudMemory(\n",
" session_id=session_id,\n",
" api_key=zep_api_key,\n",
" return_messages=True,\n",
" memory_key=\"chat_history\",\n",
")\n",
"\n",
"# Initialize the agent\n",
"llm = OpenAI(temperature=0, openai_api_key=openai_key)\n",
"agent_chain = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,\n",
" verbose=True,\n",
" memory=memory,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Add some history data\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Preload some messages into the memory. The default message window is 12 messages. We want to push beyond this to demonstrate auto-summarization.\n",
"test_history = [\n",
" {\"role\": \"human\", \"content\": \"Who was Octavia Butler?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Estelle Butler (June 22, 1947 February 24, 2006) was an American\"\n",
" \" science fiction author.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"Which books of hers were made into movies?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"The most well-known adaptation of Octavia Butler's work is the FX series\"\n",
" \" Kindred, based on her novel of the same name.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"Who were her contemporaries?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R.\"\n",
" \" Delany, and Joanna Russ.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"What awards did she win?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur\"\n",
" \" Fellowship.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"content\": \"Which other women sci-fi writers might I want to read?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": \"You might want to read Ursula K. Le Guin or Joanna Russ.\",\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"content\": (\n",
" \"Write a short synopsis of Butler's book, Parable of the Sower. What is it\"\n",
" \" about?\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Parable of the Sower is a science fiction novel by Octavia Butler,\"\n",
" \" published in 1993. It follows the story of Lauren Olamina, a young woman\"\n",
" \" living in a dystopian future where society has collapsed due to\"\n",
" \" environmental disasters, poverty, and violence.\"\n",
" ),\n",
" \"metadata\": {\"foo\": \"bar\"},\n",
" },\n",
"]\n",
"\n",
"for msg in test_history:\n",
" memory.chat_memory.add_message(\n",
" (\n",
" HumanMessage(content=msg[\"content\"])\n",
" if msg[\"role\"] == \"human\"\n",
" else AIMessage(content=msg[\"content\"])\n",
" ),\n",
" metadata=msg.get(\"metadata\", {}),\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run the agent\n",
"\n",
"Doing so will automatically add the input and response to the Zep memory.\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T14:34:37.613049Z",
"start_time": "2024-05-10T14:34:35.883359Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"AI: Parable of the Sower is highly relevant to contemporary society as it explores themes of environmental degradation, social and economic inequality, and the struggle for survival in a chaotic world. It also delves into issues of race, gender, and religion, making it a thought-provoking and timely read.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': \"What is the book's relevance to the challenges facing contemporary society?\",\n",
" 'chat_history': [HumanMessage(content=\"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\\nOctavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.\\nUrsula K. Le Guin is known for novels like The Left Hand of Darkness and The Dispossessed.\\nJoanna Russ is the author of the influential feminist science fiction novel The Female Man.\\nMargaret Atwood is known for works like The Handmaid's Tale and the MaddAddam trilogy.\\nConnie Willis is an award-winning author of science fiction and fantasy, known for novels like Doomsday Book.\\nOctavia Butler is a pioneering black female science fiction author, known for Kindred and the Parable series.\\nOctavia Estelle Butler was an acclaimed American science fiction author. While none of her books were directly adapted into movies, her novel Kindred was adapted into a TV series on FX. Butler was part of a generation of prominent science fiction writers in the 20th century, including contemporaries such as Ursula K. Le Guin, Samuel R. Delany, Chip Delany, and Nalo Hopkinson.\\nhuman: What awards did she win?\\nai: Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.\\nhuman: Which other women sci-fi writers might I want to read?\\nai: You might want to read Ursula K. Le Guin or Joanna Russ.\\nhuman: Write a short synopsis of Butler's book, Parable of the Sower. What is it about?\\nai: Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.\")],\n",
" 'output': 'Parable of the Sower is highly relevant to contemporary society as it explores themes of environmental degradation, social and economic inequality, and the struggle for survival in a chaotic world. It also delves into issues of race, gender, and religion, making it a thought-provoking and timely read.'}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_chain.invoke(\n",
" input=\"What is the book's relevance to the challenges facing contemporary society?\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Inspect the Zep memory\n",
"\n",
"Note the summary, and that the history has been enriched with token counts, UUIDs, and timestamps.\n",
"\n",
"Summaries are biased towards the most recent messages.\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T14:35:11.437446Z",
"start_time": "2024-05-10T14:35:10.664076Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Octavia Estelle Butler was an acclaimed American science fiction author. While none of her books were directly adapted into movies, her novel Kindred was adapted into a TV series on FX. Butler was part of a generation of prominent science fiction writers in the 20th century, including contemporaries such as Ursula K. Le Guin, Samuel R. Delany, Chip Delany, and Nalo Hopkinson.\n",
"\n",
"\n",
"Conversation Facts: \n",
"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\n",
"\n",
"Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.\n",
"\n",
"Ursula K. Le Guin is known for novels like The Left Hand of Darkness and The Dispossessed.\n",
"\n",
"Joanna Russ is the author of the influential feminist science fiction novel The Female Man.\n",
"\n",
"Margaret Atwood is known for works like The Handmaid's Tale and the MaddAddam trilogy.\n",
"\n",
"Connie Willis is an award-winning author of science fiction and fantasy, known for novels like Doomsday Book.\n",
"\n",
"Octavia Butler is a pioneering black female science fiction author, known for Kindred and the Parable series.\n",
"\n",
"Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993.\n",
"\n",
"The novel follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.\n",
"\n",
"Parable of the Sower explores themes of environmental degradation, social and economic inequality, and the struggle for survival in a chaotic world.\n",
"\n",
"The novel also delves into issues of race, gender, and religion, making it a thought-provoking and timely read.\n",
"\n",
"human :\n",
" {'content': \"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\\nOctavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.\\nUrsula K. Le Guin is known for novels like The Left Hand of Darkness and The Dispossessed.\\nJoanna Russ is the author of the influential feminist science fiction novel The Female Man.\\nMargaret Atwood is known for works like The Handmaid's Tale and the MaddAddam trilogy.\\nConnie Willis is an award-winning author of science fiction and fantasy, known for novels like Doomsday Book.\\nOctavia Butler is a pioneering black female science fiction author, known for Kindred and the Parable series.\\nParable of the Sower is a science fiction novel by Octavia Butler, published in 1993.\\nThe novel follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.\\nParable of the Sower explores themes of environmental degradation, social and economic inequality, and the struggle for survival in a chaotic world.\\nThe novel also delves into issues of race, gender, and religion, making it a thought-provoking and timely read.\\nOctavia Estelle Butler was an acclaimed American science fiction author. While none of her books were directly adapted into movies, her novel Kindred was adapted into a TV series on FX. Butler was part of a generation of prominent science fiction writers in the 20th century, including contemporaries such as Ursula K. Le Guin, Samuel R. Delany, Chip Delany, and Nalo Hopkinson.\\nhuman: Which other women sci-fi writers might I want to read?\\nai: You might want to read Ursula K. Le Guin or Joanna Russ.\\nhuman: Write a short synopsis of Butler's book, Parable of the Sower. What is it about?\\nai: Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.\\nhuman: What is the book's relevance to the challenges facing contemporary society?\\nai: Parable of the Sower is highly relevant to contemporary society as it explores themes of environmental degradation, social and economic inequality, and the struggle for survival in a chaotic world. It also delves into issues of race, gender, and religion, making it a thought-provoking and timely read.\", 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'human', 'name': None, 'id': None, 'example': False}\n"
]
}
],
"source": [
"def print_messages(messages):\n",
" for m in messages:\n",
" print(m.type, \":\\n\", m.dict())\n",
"\n",
"\n",
"print(memory.chat_memory.zep_summary)\n",
"print(\"\\n\")\n",
"print(\"Conversation Facts: \")\n",
"facts = memory.chat_memory.zep_facts\n",
"for fact in facts:\n",
" print(fact + \"\\n\")\n",
"print_messages(memory.chat_memory.messages)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Vector search over the Zep memory\n",
"\n",
"Zep provides native vector search over historical conversation memory via the `ZepRetriever`.\n",
"\n",
"You can use the `ZepRetriever` with chains that support passing in a Langchain `Retriever` object.\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T14:35:33.023765Z",
"start_time": "2024-05-10T14:35:32.613576Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='Which other women sci-fi writers might I want to read?' created_at='2024-05-10T14:34:16.714292Z' metadata=None role='human' role_type=None token_count=12 updated_at='0001-01-01T00:00:00Z' uuid_='64ca1fae-8db1-4b4f-8a45-9b0e57e88af5' 0.8960460126399994\n"
]
}
],
"source": [
"retriever = ZepCloudRetriever(\n",
" session_id=session_id,\n",
" api_key=zep_api_key,\n",
")\n",
"\n",
"search_results = memory.chat_memory.search(\"who are some famous women sci-fi authors?\")\n",
"for r in search_results:\n",
" if r.score > 0.8: # Only print results with similarity of 0.8 or higher\n",
" print(r.message, r.score)"
]
},
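{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a hedged sketch of passing the `ZepCloudRetriever` into a small LCEL chain, as mentioned above. The prompt wording and the `format_docs` helper are illustrative assumptions, not part of the Zep API."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch (not in the original notebook): wiring the retriever into\n",
"# a simple RAG-style chain. The prompt and helper below are illustrative.\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"\n",
"def format_docs(docs):\n",
"    # Concatenate the retrieved conversation snippets into one context string.\n",
"    return \"\\n\".join(d.page_content for d in docs)\n",
"\n",
"\n",
"rag_prompt = ChatPromptTemplate.from_template(\n",
"    \"Answer using only this conversation history:\\n{context}\\n\\nQuestion: {question}\"\n",
")\n",
"\n",
"rag_chain = (\n",
"    {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
"    | rag_prompt\n",
"    | ChatOpenAI(openai_api_key=openai_key)\n",
"    | StrOutputParser()\n",
")\n",
"\n",
"rag_chain.invoke(\"What awards did Octavia Butler win?\")"
]
},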
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -1,3 +1,7 @@
---
keywords: [azure]
---
# Microsoft
All functionality related to `Microsoft Azure` and other `Microsoft` products.
@@ -271,6 +275,26 @@ See a [usage example](/docs/integrations/retrievers/azure_ai_search).
from langchain.retrievers import AzureAISearchRetriever
```
## Tools
### Azure Container Apps dynamic sessions
We need to get the `POOL_MANAGEMENT_ENDPOINT` environment variable from the Azure Container Apps service.
See the instructions [here](https://python.langchain.com/v0.2/docs/integrations/tools/azure_dynamic_sessions/#setup).
We need to install a Python package.
```bash
pip install langchain-azure-dynamic-sessions
```
See a [usage example](/docs/integrations/tools/azure_dynamic_sessions).
```python
from langchain_azure_dynamic_sessions import SessionsPythonREPLTool
```
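As a minimal usage sketch (an assumption, not from the original page), the tool runs Python code in a sandboxed session once `POOL_MANAGEMENT_ENDPOINT` is set as described above:
```python
import os

from langchain_azure_dynamic_sessions import SessionsPythonREPLTool

# Assumes POOL_MANAGEMENT_ENDPOINT was set as described in the setup link above.
tool = SessionsPythonREPLTool(
    pool_management_endpoint=os.environ["POOL_MANAGEMENT_ENDPOINT"]
)
print(tool.invoke("print(1 + 1)"))
```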
## Toolkits
### Azure AI Services

View File

@@ -1,3 +1,7 @@
---
keywords: [openai]
---
# OpenAI
All functionality related to OpenAI

View File

@@ -64,7 +64,7 @@ set_llm_cache(AstraDBCache(
))
```
Learn more in the [example notebook](/docs/integrations/llms/llm_caching#astra-db-caches) (scroll to the Astra DB section).
Learn more in the [example notebook](/docs/integrations/llm_caching#astra-db-caches) (scroll to the Astra DB section).
## Semantic LLM Cache
@@ -80,7 +80,7 @@ set_llm_cache(AstraDBSemanticCache(
))
```
Learn more in the [example notebook](/docs/integrations/llms/llm_caching#astra-db-caches) (scroll to the appropriate section).
Learn more in the [example notebook](/docs/integrations/llm_caching#astra-db-caches) (scroll to the appropriate section).
Learn more in the [example notebook](/docs/integrations/memory/astradb_chat_message_history).

View File

@@ -40,7 +40,7 @@ from langchain_community.cache import CassandraCache
set_llm_cache(CassandraCache())
```
Learn more in the [example notebook](/docs/integrations/llms/llm_caching#cassandra-caches) (scroll to the Cassandra section).
Learn more in the [example notebook](/docs/integrations/llm_caching#cassandra-caches) (scroll to the Cassandra section).
## Semantic LLM Cache
@@ -54,7 +54,7 @@ set_llm_cache(CassandraSemanticCache(
))
```
Learn more in the [example notebook](/docs/integrations/llms/llm_caching#cassandra-caches) (scroll to the appropriate section).
Learn more in the [example notebook](/docs/integrations/llm_caching#cassandra-caches) (scroll to the appropriate section).
## Document loader

View File

@@ -205,7 +205,7 @@ For chat models is very useful to define prompt as a set of message templates...
def simulate_conversation(human_input:str, agent_role:str="a pirate"):
"""
## System message
- note the `:system` sufix inside the <prompt:_role_> tag
- note the `:system` suffix inside the <prompt:_role_> tag
```<prompt:system>

View File

@@ -48,6 +48,6 @@ eng = sqlalchemy.create_engine(conn_str)
set_llm_cache(SQLAlchemyCache(engine=eng))
```
From here, see the [LLM Caching](/docs/integrations/llms/llm_caching) documentation on how to use.
From here, see the [LLM Caching](/docs/integrations/llm_caching) documentation on how to use.

View File

@@ -1,63 +1,82 @@
# NVIDIA
The `langchain-nvidia-ai-endpoints` package contains LangChain integrations for building applications with models on
the NVIDIA NIM inference microservice. NIM supports chat, embedding, and re-ranking models
from the community as well as NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA
accelerated infrastructure and deployed as NIMs: easy-to-use, prebuilt containers that deploy anywhere with a single
command on NVIDIA accelerated infrastructure.
>NVIDIA provides an integration package for LangChain: `langchain-nvidia-ai-endpoints`.
NVIDIA hosted deployments of NIMs are available to test on the [NVIDIA API catalog](https://build.nvidia.com/). After testing,
NIMs can be exported from NVIDIA's API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud,
giving enterprises ownership and full control of their IP and AI application.
## NVIDIA AI Foundation Endpoints
NIMs are packaged as container images on a per-model basis and are distributed as NGC container images through the NVIDIA NGC Catalog.
At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.
> [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to NVIDIA hosted API endpoints for
> NVIDIA AI Foundation Models like `Mixtral 8x7B`, `Llama 2`, `Stable Diffusion`, etc. These models,
> hosted on the [NVIDIA API catalog](https://build.nvidia.com/), are optimized, tested, and hosted on
> the NVIDIA AI platform, making them fast and easy to evaluate, further customize,
> and seamlessly run at peak performance on any accelerated stack.
>
> With [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/), you can get quick results from a fully
> accelerated stack running on [NVIDIA DGX Cloud](https://www.nvidia.com/en-us/data-center/dgx-cloud/). Once customized, these
> models can be deployed anywhere with enterprise-grade security, stability,
> and support using [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/).
Below is an example on how to use some common functionality surrounding text-generative and embedding models.
A selection of NVIDIA AI Foundation models is supported directly in LangChain with familiar APIs.
## Installation
The supported models can be found [on build.nvidia.com](https://build.nvidia.com/).
These models can be accessed via the [`langchain-nvidia-ai-endpoints`](https://pypi.org/project/langchain-nvidia-ai-endpoints/)
package, as shown below.
### Setting up
1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models
2. Click on your model of choice
3. Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.
4. Copy and save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints.
```bash
export NVIDIA_API_KEY=nvapi-XXXXXXXXXXXXXXXXXXXXXXXXXX
```
```python
pip install -U --quiet langchain-nvidia-ai-endpoints
```
- Install a package:
## Setup
```bash
pip install -U langchain-nvidia-ai-endpoints
```
**To get started:**
1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.
2. Click on your model of choice.
3. Under Input select the Python tab, and click `Get API Key`. Then click `Generate Key`.
4. Copy and save the generated key as NVIDIA_API_KEY. From there, you should have access to the endpoints.
```python
import getpass
import os
if not os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
nvidia_api_key = getpass.getpass("Enter your NVIDIA API key: ")
assert nvidia_api_key.startswith("nvapi-"), f"{nvidia_api_key[:5]}... is not a valid key"
os.environ["NVIDIA_API_KEY"] = nvidia_api_key
```
### Chat models
See a [usage example](/docs/integrations/chat/nvidia_ai_endpoints).
## Working with NVIDIA API Catalog
```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA
llm = ChatNVIDIA(model="mixtral_8x7b")
llm = ChatNVIDIA(model="mistralai/mixtral-8x22b-instruct-v0.1")
result = llm.invoke("Write a ballad about LangChain.")
print(result.content)
```
### Embedding models
Using the API, you can query live endpoints available on the NVIDIA API Catalog to get quick results from a DGX-hosted cloud compute environment. All models are source-accessible and can be deployed on your own compute cluster using NVIDIA NIM, which is part of NVIDIA AI Enterprise, shown in the next section [Working with NVIDIA NIMs](#working-with-nvidia-nims).
See a [usage example](/docs/integrations/text_embedding/nvidia_ai_endpoints).
## Working with NVIDIA NIMs
When ready to deploy, you can self-host models with NVIDIA NIM—which is included with the NVIDIA AI Enterprise software license—and run them anywhere, giving you ownership of your customizations and full control of your intellectual property (IP) and AI applications.
[Learn more about NIMs](https://developer.nvidia.com/blog/nvidia-nim-offers-optimized-inference-microservices-for-deploying-ai-models-at-scale/)
```python
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings
from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings, NVIDIARerank
# connect to a chat NIM running at localhost:8000, specifying a specific model
llm = ChatNVIDIA(base_url="http://localhost:8000/v1", model="meta-llama3-8b-instruct")
# connect to an embedding NIM running at localhost:8080
embedder = NVIDIAEmbeddings(base_url="http://localhost:8080/v1")
# connect to a reranking NIM running at localhost:2016
ranker = NVIDIARerank(base_url="http://localhost:2016/v1")
```
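As a hedged follow-up (not from the original page), once the NIMs above are actually running at those ports, the clients behave like any other LangChain chat or embedding component:
```python
# Illustrative calls against the local NIMs configured above; assumes those
# services are running and reachable at the given ports.
print(llm.invoke("Write a haiku about GPUs.").content)
print(len(embedder.embed_query("What is a NIM?")))
```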
## Using NVIDIA AI Foundation Endpoints
A selection of NVIDIA AI Foundation models is supported directly in LangChain with familiar APIs.
The actively supported models can be found [in the API Catalog](https://build.nvidia.com/).
**The following may be useful examples to help you get started:**
- **[`ChatNVIDIA` Model](/docs/integrations/chat/nvidia_ai_endpoints).**
- **[`NVIDIAEmbeddings` Model for RAG Workflows](/docs/integrations/text_embedding/nvidia_ai_endpoints).**

View File

@@ -1,6 +1,6 @@
# Ollama
>[Ollama](https://ollama.ai/) is a python library. It allows you to run open-source large language models,
>[Ollama](https://ollama.com/) allows you to run open-source large language models,
> such as LLaMA2, locally.
>
>`Ollama` bundles model weights, configuration, and data into a single package, defined by a Modelfile.
@@ -12,11 +12,8 @@ on how to use `Ollama` with LangChain.
## Installation and Setup
Follow [these instructions](https://github.com/jmorganca/ollama?tab=readme-ov-file#ollama)
Follow [these instructions](https://github.com/ollama/ollama?tab=readme-ov-file#ollama)
to set up and run a local Ollama instance.
To use, you should set up the environment variables `ANYSCALE_API_BASE` and
`ANYSCALE_API_KEY`.
## LLM

View File

@@ -1,3 +1,7 @@
---
keywords: [pinecone]
---
# Pinecone
>[Pinecone](https://docs.pinecone.io/docs/overview) is a vector database with broad functionality.

View File

@@ -1,6 +1,6 @@
# PremAI
[PremAI](https://premai.io/) is an all-in-one platform that simplifies the creation of robust, production-ready applications powered by Generative AI. By streamlining the development process, PremAI allows you to concentrate on enhancing user experience and driving overall growth for your application. You can quickly start using our platform [here](https://docs.premai.io/quick-start).
## ChatPremAI
@@ -9,36 +9,27 @@ This example goes over how to use LangChain to interact with different chat mode
### Installation and setup
We start by installing `langchain` and `premai-sdk`. You can type the following command to install:
```bash
pip install premai langchain
```
Before proceeding further, please make sure that you have made an account on PremAI and already created a project. If not, please refer to the [quick start](https://docs.premai.io/introduction) guide to get started with the PremAI platform. Create your first project and grab your API key.
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_community.chat_models import ChatPremAI
```
### Setup PremAI client in LangChain
Once we import our required modules, let's set up our client. For now, let's assume that our `project_id` is `8`. But make sure you use your project-id, otherwise it will throw an error.
To use LangChain with Prem, you do not need to pass any model name or set any parameters with our chat client. By default it will use the model name and parameters set in the [LaunchPad](https://docs.premai.io/get-started/launchpad).
> Note: If you change the `model` or any other parameters like `temperature` or `max_tokens` while setting the client, it will override the existing default configurations that were used in LaunchPad.
```python
import os
import getpass

if "PREMAI_API_KEY" not in os.environ:
    os.environ["PREMAI_API_KEY"] = getpass.getpass("PremAI API Key:")

chat = ChatPremAI(project_id=8)
```
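If you do want to override the LaunchPad defaults, you can pass generation parameters directly when constructing the client; a sketch with illustrative values:
```python
# overrides the LaunchPad defaults for this client only (illustrative values)
chat = ChatPremAI(project_id=8, temperature=0.7, max_tokens=100)
```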
### Chat Completions
`ChatPremAI` supports two methods: `invoke` (which is the same as `generate`) and `stream`.
The first one gives us a static result, whereas the second one streams tokens one by one. Here's how you can generate chat-like completions.
### Generation
```python
human_message = HumanMessage(content="Who are you?")
chat.invoke([human_message])
```
You can also provide a system prompt here, like this:
```python
system_message = SystemMessage(content="You are a friendly assistant.")
chat.invoke([system_message, human_message])
```
> If you place a system prompt here, it will override the system prompt that was fixed while deploying the application from the platform.
> You can find all the optional parameters [here](https://docs.premai.io/get-started/sdk#optional-parameters). Any parameters other than [these supported parameters](https://docs.premai.io/get-started/sdk#optional-parameters) will be automatically removed before calling the model.
### Native RAG Support with Prem Repositories
Prem Repositories allow users to upload documents (.txt, .pdf, etc.) and connect those repositories to the LLMs. You can think of Prem repositories as native RAG, where each repository can be considered a vector database. You can connect multiple repositories. You can learn more about repositories [here](https://docs.premai.io/get-started/repositories).
Repositories are also supported in LangChain's PremAI integration. Here is how you can use them.
```python
query = "what is the diameter of individual Galaxy"
repository_ids = [1991, ]
repositories = dict(
    ids=repository_ids,
    similarity_threshold=0.3,
    limit=3,
)
```
First we start by defining our repository with some repository ids. Make sure that the ids are valid repository ids. You can learn more about how to get the repository id [here](https://docs.premai.io/get-started/repositories).
> Please note: Similar to `model_name`, when you pass the `repositories` argument, you are potentially overriding the repositories connected in the LaunchPad.
Now, we connect the repository with our chat object to invoke RAG-based generations.
```python
import json

response = chat.invoke(query, max_tokens=100, repositories=repositories)
print(response.content)
print(json.dumps(response.response_metadata, indent=4))
```
This is how the output looks:
```bash
The diameters of individual galaxies range from 80,000-150,000 light-years.
{
"document_chunks": [
{
"repository_id": 1991,
"document_id": 1307,
"chunk_id": 173926,
"document_name": "Kegy 202 Chapter 2",
"similarity_score": 0.586126983165741,
"content": "n thousands\n of light-years. The diameters of individual\n galaxies range from 80,000-150,000 light\n "
},
{
"repository_id": 1991,
"document_id": 1307,
"chunk_id": 173925,
"document_name": "Kegy 202 Chapter 2",
"similarity_score": 0.4815782308578491,
"content": " for development of galaxies. A galaxy contains\n a large number of stars. Galaxies spread over\n vast distances that are measured in thousands\n "
},
{
"repository_id": 1991,
"document_id": 1307,
"chunk_id": 173916,
"document_name": "Kegy 202 Chapter 2",
"similarity_score": 0.38112708926200867,
"content": " was separated from the from each other as the balloon expands.\n solar surface. As the passing star moved away, Similarly, the distance between the galaxies is\n the material separated from the solar surface\n continued to revolve around the sun and it\n slowly condensed into planets. Sir James Jeans\n and later Sir Harold Jeffrey supported thisnot to be republishedalso found to be increasing and thereby, the\n universe is"
}
]
}
```
So, this also means that you do not need to build your own RAG pipeline when using the Prem Platform. Prem uses its own RAG technology to deliver best-in-class performance for Retrieval Augmented Generation.
> Ideally, you do not need to connect repository IDs here to get Retrieval Augmented Generation. You can still get the same result if you have connected the repositories in the Prem platform.
### Streaming
In this section, let's see how we can stream tokens using LangChain and PremAI. Here's how you do it.
```python
import sys

for chunk in chat.stream("hello how are you"):
    sys.stdout.write(chunk.content)
    sys.stdout.flush()
```
Similar to above, if you want to override the system-prompt and the generation parameters, you need to add the following:
```python
import sys

# the system prompt and generation parameters below are illustrative overrides
for chunk in chat.stream(
    "hello how are you",
    system_prompt="You are a friendly assistant.",
    temperature=0.7,
):
    sys.stdout.write(chunk.content)
    sys.stdout.flush()
```
This will stream tokens one after the other.
> Please note: As of now, RAG with streaming is not supported. However, we still support it with our API. You can learn more about that [here](https://docs.premai.io/get-started/chat-completion-sse).
## PremEmbeddings
In this section, we are going to discuss how we can get access to different embedding models using `PremEmbeddings` with LangChain. Let's start by importing our modules and setting our API key.
```python
from langchain_community.embeddings import PremEmbeddings
```
Once we import our required modules, let's set up our client. For now, let's assume that our `project_id` is 8. But make sure you use your project-id, otherwise, it will throw an error.
```python
import os
import getpass
from langchain_community.embeddings import PremEmbeddings
if os.environ.get("PREMAI_API_KEY") is None:
os.environ["PREMAI_API_KEY"] = getpass.getpass("PremAI API Key:")
# Define a model as a required parameter here since there is no default embedding model
```
We support lots of state-of-the-art embedding models. You can view our list of supported LLMs and embedding models [here](https://docs.premai.io/get-started/supported-models). For now, let's go with the `text-embedding-3-large` model for this example.
```python
model = "text-embedding-3-large"
embedder = PremEmbeddings(project_id=8, model=model)
```
We have defined our embedding model. Here is a table of the embedding models we support.
| Provider | Slug | Context Tokens |
|-------------|------------------------------------------|----------------|
| cohere | embed-english-v3.0 | N/A |
| openai | text-embedding-3-small | 8191 |
| openai | text-embedding-3-large | 8191 |
| openai | text-embedding-ada-002 | 8191 |
| replicate | replicate/all-mpnet-base-v2 | N/A |
| together | togethercomputer/Llama-2-7B-32K-Instruct | N/A |
| mistralai | mistral-embed | 4096 |
To change the model, you simply need to copy the `slug` and access your embedding model. Now let's start using our embedding model with a single query, followed by multiple queries (which is also called a document).
```python
query = "Hello, this is a test query"
query_result = embedder.embed_query(query)
print(query_result[:5])
```
<Note>
Setting the `model` argument is mandatory for `PremEmbeddings`, unlike chat.
</Note>
Finally, let's embed some sample documents:
```python
documents = [
    "This is document one",  # illustrative sample documents
    "This is document two",
]
doc_result = embedder.embed_documents(documents)

# Similar to above, let's print the first five elements
# of the first document vector
print(doc_result[0][:5])
```
```python
print(f"Dimension of embeddings: {len(query_result)}")
```
Dimension of embeddings: 3072
```python
doc_result[0][:5]
```
>Result:
>
>[-0.02129288576543331,
> 0.0008162345038726926,
> -0.004556538071483374,
> 0.02918623760342598,
> -0.02547479420900345]

View File

@@ -61,6 +61,22 @@ store = UpstashVectorStore(
See [Upstash Vector documentation](https://upstash.com/docs/vector/features/embeddingmodels)
for more detail on embedding models.
## Namespaces
You can use namespaces to partition the data in your index. Namespaces are useful when you want to query over a huge amount of data and want to partition it to make queries faster. When you use namespaces, there is no post-filtering on the results, which makes the query results more precise.
```python
from langchain_community.vectorstores.upstash import UpstashVectorStore
import os
os.environ["UPSTASH_VECTOR_REST_URL"] = "<UPSTASH_VECTOR_REST_URL>"
os.environ["UPSTASH_VECTOR_REST_TOKEN"] = "<UPSTASH_VECTOR_REST_TOKEN>"
store = UpstashVectorStore(
    embedding=embeddings,
    namespace="my_namespace",
)
```
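Once the store is created with a namespace, reads and writes are scoped to it; for example (query text is illustrative):
```python
# this query only searches vectors stored under "my_namespace"
results = store.similarity_search("What is Upstash Vector?", k=3)
```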
### Inserting Vectors
```python

View File

@@ -1,28 +1,38 @@
# Vectara
>[Vectara](https://vectara.com/) provides a Trusted Generative AI platform, allowing organizations to rapidly create a ChatGPT-like experience (an AI assistant)
> which is grounded in the data, documents, and knowledge that they have (technically, it is Retrieval-Augmented-Generation-as-a-service).
`Vectara` is RAG-as-a-service, providing all the components of RAG behind an easy-to-use API, including:
1. A way to extract text from files (PDF, PPT, DOCX, etc)
2. ML-based chunking that provides state of the art performance.
3. The [Boomerang](https://vectara.com/how-boomerang-takes-retrieval-augmented-generation-to-the-next-level-via-grounded-generation/) embeddings model.
4. Its own internal vector database where text chunks and embedding vectors are stored.
5. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments
(including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) and
[MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/))
6. An LLM for creating a [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents (context), including citations.
For more information:
- [Documentation](https://docs.vectara.com/docs/)
- [API Playground](https://docs.vectara.com/docs/rest-api/)
- [Quickstart](https://docs.vectara.com/docs/quickstart)
## Installation and Setup
To use `Vectara` with LangChain, no special installation steps are required.
To get started, [sign up](https://vectara.com/integrations/langchain) for a free Vectara account (if you don't already have one),
and follow the [quickstart](https://docs.vectara.com/docs/quickstart) guide to create a corpus and an API key.
Once you have these, you can provide them as arguments to the Vectara `vectorstore`, or you can set them as environment variables.
- export `VECTARA_CUSTOMER_ID`="your_customer_id"
- export `VECTARA_CORPUS_ID`="your_corpus_id"
- export `VECTARA_API_KEY`="your-vectara-api-key"
## Vectara as a Vector Store
There exists a wrapper around the Vectara platform, allowing you to use it as a `vectorstore` in LangChain:
To import this vectorstore:
```python
vectara = Vectara(
    vectara_customer_id=customer_id,
    vectara_corpus_id=corpus_id,
    vectara_api_key=api_key,
)
```
The `customer_id`, `corpus_id` and `api_key` are optional, and if they are not supplied will be read from
the environment variables `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID` and `VECTARA_API_KEY`, respectively.
### Adding Texts or Files
After you have the vectorstore, you can `add_texts` or `add_documents` as per the standard `VectorStore` interface, for example:
```python
vectara.add_texts(["to be or not to be", "that is the question"])
```
Since Vectara supports file-upload in the platform, we also added the ability to upload files (PDF, TXT, HTML, PPT, DOC, etc) directly.
When using this method, each file is uploaded directly to the Vectara backend, processed and chunked optimally there, so you don't have to use the LangChain document loader or chunking mechanism.
As an example:
```python
vectara.add_files(["path/to/file1.pdf", "path/to/file2.pdf",...])
```
Of course you do not have to add any data, and instead just connect to an existing Vectara corpus where data may already be indexed.
### Querying the VectorStore
To query the Vectara vectorstore, you can use the `similarity_search` method (or `similarity_search_with_score`), which takes a query string and returns a list of results:
```python
results = vectara.similarity_search_with_score("what is LangChain?")
```
The results are returned as a list of relevant documents, along with a relevance score for each document.
@@ -65,28 +82,101 @@ In this case, we used the default retrieval parameters, but you can also specify
- `lambda_val`: the [lexical matching](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) factor for hybrid search (defaults to 0.025)
- `filter`: a [filter](https://docs.vectara.com/docs/common-use-cases/filtering-by-metadata/filter-overview) to apply to the results (default None)
- `n_sentence_context`: number of sentences to include before/after the actual matching segment when returning results. This defaults to 2.
- `rerank_config`: can be used to specify a reranker for the results
  - `reranker`: mmr, rerank_multilingual_v1 or none. Note that "rerank_multilingual_v1" is a Scale-only feature
  - `rerank_k`: number of results to use for reranking
  - `mmr_diversity_bias`: 0 = no diversity, 1 = full diversity. This is the lambda parameter in the MMR formula and is in the range 0...1
To get results without the relevance score, you can simply use the `similarity_search` method:
```python
results = vectara.similarity_search("what is LangChain?")
```
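These parameters can be passed as keyword arguments; a sketch (the filter expression is an illustrative metadata filter):
```python
results = vectara.similarity_search(
    "what is LangChain?",
    k=5,
    n_sentence_context=2,
    filter="doc.source = 'langchain'",  # illustrative filter expression
)
```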
## Vectara for Retrieval Augmented Generation (RAG)
Vectara provides a full RAG pipeline, including generative summarization. To use it as a complete RAG solution, you can use the `as_rag` method.
There are a few additional parameters that can be specified in the `VectaraQueryConfig` object to control retrieval and summarization:
* k: number of results to return
* lambda_val: the lexical matching factor for hybrid search
* summary_config (optional): can be used to request an LLM summary in RAG
  - is_enabled: True or False
  - max_results: number of results to use for summary generation
  - response_lang: language of the response summary, in ISO 639-2 format (e.g. 'en', 'fr', 'de', etc)
* rerank_config (optional): can be used to specify the Vectara reranker for the results
  - reranker: mmr, rerank_multilingual_v1 or none
  - rerank_k: number of results to use for reranking
  - mmr_diversity_bias: 0 = no diversity, 1 = full diversity. This is the lambda parameter in the MMR formula and is in the range 0...1
For example:
```python
from langchain_community.vectorstores.vectara import (
    RerankConfig,
    SummaryConfig,
    VectaraQueryConfig,
)

summary_config = SummaryConfig(is_enabled=True, max_results=7, response_lang='eng')
rerank_config = RerankConfig(reranker="mmr", rerank_k=50, mmr_diversity_bias=0.2)
config = VectaraQueryConfig(k=10, lambda_val=0.005, rerank_config=rerank_config, summary_config=summary_config)
```
Then you can use the `as_rag` method to create a RAG pipeline:
```python
query_str = "what did Biden say?"
rag = vectara.as_rag(config)
rag.invoke(query_str)['answer']
```
The `as_rag` method returns a `VectaraRAG` object, which behaves just like any LangChain Runnable, including the `invoke` or `stream` methods.
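For example, streaming the answer looks like this (a sketch; chunks are dicts keyed like the `invoke` output):
```python
# stream the generative answer token by token
for chunk in rag.stream(query_str):
    print(chunk.get("answer", ""), end="", flush=True)
```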
## Vectara Chat
The RAG functionality can be used to create a chatbot. For example, you can create a simple chatbot that responds to user input:
```python
summary_config = SummaryConfig(is_enabled=True, max_results=7, response_lang='eng')
rerank_config = RerankConfig(reranker="mmr", rerank_k=50, mmr_diversity_bias=0.2)
config = VectaraQueryConfig(k=10, lambda_val=0.005, rerank_config=rerank_config, summary_config=summary_config)
query_str = "what did Biden say?"
bot = vectara.as_chat(config)
bot.invoke(query_str)['answer']
```
The main difference is the following: with `as_chat` Vectara internally tracks the chat history and conditions each response on the full chat history.
There is no need to keep that history locally in LangChain, as Vectara will manage it internally.
## Vectara as a LangChain retriever only
If you want to use Vectara as a retriever only, you can use the `as_retriever` method, which returns a `VectaraRetriever` object.
```python
retriever = vectara.as_retriever(config=config)
retriever.invoke(query_str)
```
Like with `as_rag`, you provide a `VectaraQueryConfig` object to control the retrieval parameters.
In most cases you would not enable the `summary_config`, but it is left as an option for backwards compatibility.
If no summary is requested, the response will be a list of relevant documents, each with a relevance score.
If a summary is requested, the response will be a list of relevant documents as before, plus an additional document that includes the generative summary.
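A sketch of inspecting that extra summary document:
```python
docs = retriever.invoke(query_str)
# when summary_config.is_enabled, the last document carries the generative summary
print(docs[-1].page_content)
```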
## Hallucination Detection score
Vectara created [HHEM](https://huggingface.co/vectara/hallucination_evaluation_model) - an open source model that can be used to evaluate RAG responses for factual consistency.
As part of the Vectara RAG, the "Factual Consistency Score" (or FCS), an improved version of the open-source HHEM, is made available via the API.
This is automatically included in the output of the RAG pipeline:
```python
summary_config = SummaryConfig(is_enabled=True, max_results=7, response_lang='eng')
rerank_config = RerankConfig(reranker="mmr", rerank_k=50, mmr_diversity_bias=0.2)
config = VectaraQueryConfig(k=10, lambda_val=0.005, rerank_config=rerank_config, summary_config=summary_config)
rag = vectara.as_rag(config)
resp = rag.invoke(query_str)
print(resp['answer'])
print(f"Vectara FCS = {resp['fcs']}")
```
## Example Notebooks
For more detailed examples of using Vectara with LangChain, see the following example notebooks:
* [this notebook](/docs/integrations/vectorstores/vectara) shows how to use Vectara: with full RAG or just as a retriever.
* [this notebook](/docs/integrations/retrievers/self_query/vectara_self_query) shows the self-query capability with Vectara.
* [this notebook](/docs/integrations/providers/vectara/vectara_chat) shows how to build a chatbot with LangChain and Vectara

View File

@@ -5,7 +5,21 @@
"id": "134a0785",
"metadata": {},
"source": [
"# Chat Over Documents with Vectara"
"# Vectara Chat\n",
"\n",
"[Vectara](https://vectara.com/) provides a Trusted Generative AI platform, allowing organizations to rapidly create a ChatGPT-like experience (an AI assistant) which is grounded in the data, documents, and knowledge that they have (technically, it is Retrieval-Augmented-Generation-as-a-service). \n",
"\n",
"Vectara serverless RAG-as-a-service provides all the components of RAG behind an easy-to-use API, including:\n",
"1. A way to extract text from files (PDF, PPT, DOCX, etc)\n",
"2. ML-based chunking that provides state of the art performance.\n",
"3. The [Boomerang](https://vectara.com/how-boomerang-takes-retrieval-augmented-generation-to-the-next-level-via-grounded-generation/) embeddings model.\n",
"4. Its own internal vector database where text chunks and embedding vectors are stored.\n",
"5. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments (including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) and [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/))\n",
"7. An LLM to for creating a [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents (context), including citations.\n",
"\n",
"See the [Vectara API documentation](https://docs.vectara.com/docs/) for more information on how to use the API.\n",
"\n",
"This notebook shows how to use Vectara's [Chat](https://docs.vectara.com/docs/api-reference/chat-apis/chat-apis-overview) functionality."
]
},
{
@@ -13,19 +27,19 @@
"id": "56372c5b",
"metadata": {},
"source": [
"# Setup\n",
"# Getting Started\n",
"\n",
"You will need a Vectara account to use Vectara with LangChain. To get started, use the following steps:\n",
"1. [Sign up](https://www.vectara.com/integrations/langchain) for a Vectara account if you don't already have one. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.\n",
"To get started, use the following steps:\n",
"1. If you don't already have one, [Sign up](https://www.vectara.com/integrations/langchain) for your free Vectara account. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.\n",
"2. Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingest from input documents. To create a corpus, use the **\"Create Corpus\"** button. You then provide a name to your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top.\n",
"3. Next you'll need to create API keys to access the corpus. Click on the **\"Authorization\"** tab in the corpus view and then the **\"Create API Key\"** button. Give your key a name, and choose whether you want query only or query+index for your key. Click \"Create\" and you now have an active API key. Keep this key confidential. \n",
"3. Next you'll need to create API keys to access the corpus. Click on the **\"Access Control\"** tab in the corpus view and then the **\"Create API Key\"** button. Give your key a name, and choose whether you want query-only or query+index for your key. Click \"Create\" and you now have an active API key. Keep this key confidential. \n",
"\n",
"To use LangChain with Vectara, you'll need to have these three values: customer ID, corpus ID and api_key.\n",
"To use LangChain with Vectara, you'll need to have these three values: `customer ID`, `corpus ID` and `api_key`.\n",
"You can provide those to LangChain in two ways:\n",
"\n",
"1. Include in your environment these three variables: `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID` and `VECTARA_API_KEY`.\n",
"\n",
"> For example, you can set these variables using os.environ and getpass as follows:\n",
" For example, you can set these variables using os.environ and getpass as follows:\n",
"\n",
"```python\n",
"import os\n",
@@ -36,20 +50,21 @@
"os.environ[\"VECTARA_API_KEY\"] = getpass.getpass(\"Vectara API Key:\")\n",
"```\n",
"\n",
"2. Add them to the Vectara vectorstore constructor:\n",
"2. Add them to the `Vectara` vectorstore constructor:\n",
"\n",
"```python\n",
"vectorstore = Vectara(\n",
"vectara = Vectara(\n",
" vectara_customer_id=vectara_customer_id,\n",
" vectara_corpus_id=vectara_corpus_id,\n",
" vectara_api_key=vectara_api_key\n",
" )\n",
"```"
"```\n",
"In this notebook we assume they are provided in the environment."
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "70c4e529",
"metadata": {
"tags": []
@@ -58,9 +73,16 @@
"source": [
"import os\n",
"\n",
"from langchain.chains import ConversationalRetrievalChain\n",
"os.environ[\"VECTARA_API_KEY\"] = \"<YOUR_VECTARA_API_KEY>\"\n",
"os.environ[\"VECTARA_CORPUS_ID\"] = \"<YOUR_VECTARA_CORPUS_ID>\"\n",
"os.environ[\"VECTARA_CUSTOMER_ID\"] = \"<YOUR_VECTARA_CUSTOMER_ID>\"\n",
"\n",
"from langchain_community.vectorstores import Vectara\n",
"from langchain_openai import OpenAI"
"from langchain_community.vectorstores.vectara import (\n",
" RerankConfig,\n",
" SummaryConfig,\n",
" VectaraQueryConfig,\n",
")"
]
},
{
@@ -68,62 +90,30 @@
"id": "cdff94be",
"metadata": {},
"source": [
"Load in documents. You can replace this with a loader for whatever type of data you want"
"## Vectara Chat Explained\n",
"\n",
"In most uses of LangChain to create chatbots, one must integrate a special `memory` component that maintains the history of chat sessions and then uses that history to ensure the chatbot is aware of conversation history.\n",
"\n",
"With Vectara Chat - all of that is performed in the backend by Vectara automatically. You can look at the [Chat](https://docs.vectara.com/docs/api-reference/chat-apis/chat-apis-overview) documentation for the details, to learn more about the internals of how this is implemented, but with LangChain all you have to do is turn that feature on in the Vectara vectorstore.\n",
"\n",
"Let's see an example. First we load the SOTU document (remember, text extraction and chunking all occurs automatically on the Vectara platform):"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "01c46e92",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain.document_loaders import TextLoader\n",
"\n",
"loader = TextLoader(\"state_of_the_union.txt\")\n",
"documents = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "239475d2",
"metadata": {},
"source": [
"Since we're using Vectara, there's no need to chunk the documents, as that is done automatically in the Vectara platform backend. We just use `from_document()` to upload the text loaded from the file, and directly ingest it into Vectara:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a8930cf7",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"vectara = Vectara.from_documents(documents, embedding=None)"
]
},
{
"cell_type": "markdown",
"id": "898b574b",
"metadata": {},
"source": [
"We can now create a memory object, which is neccessary to track the inputs/outputs and hold a conversation."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "af803fee",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationBufferMemory\n",
"documents = loader.load()\n",
"\n",
"memory = ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True)"
"vectara = Vectara.from_documents(documents, embedding=None)"
]
},
{
@@ -131,139 +121,25 @@
"id": "3c96b118",
"metadata": {},
"source": [
"We now initialize the `ConversationalRetrievalChain`:"
"And now we create a Chat Runnable using the `as_chat` method:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 3,
"id": "1b41a10b-bf68-4689-8f00-9aed7675e2ab",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"bot = ConversationalRetrievalChain.from_llm(\n",
" OpenAI(temperature=0), vectara.as_retriever()\n",
")"
"summary_config = SummaryConfig(is_enabled=True, max_results=7, response_lang=\"eng\")\n",
"rerank_config = RerankConfig(reranker=\"mmr\", rerank_k=50, mmr_diversity_bias=0.2)\n",
"config = VectaraQueryConfig(\n",
" k=10, lambda_val=0.005, rerank_config=rerank_config, summary_config=summary_config\n",
")\n",
"\n",
"bot = vectara.as_chat(config)"
]
},
{
@@ -276,39 +152,25 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 4,
"id": "bc672290-8a8b-4828-a90c-f1bbdd6b3920",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = bot.invoke({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "6b62d758-c069-4062-88f0-21e7ea4710bf",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice, and that she will continue Justice Breyer's legacy of excellence.\""
"'The President expressed gratitude to Justice Breyer and highlighted the significance of nominating Ketanji Brown Jackson to the Supreme Court, praising her legal expertise and commitment to upholding excellence [1]. The President also reassured the public about the situation with gas prices and the conflict in Ukraine, emphasizing unity with allies and the belief that the world will emerge stronger from these challenges [2][4]. Additionally, the President shared personal experiences related to economic struggles and the importance of passing the American Rescue Plan to support those in need [3]. The focus was also on job creation and economic growth, acknowledging the impact of inflation on families [5]. While addressing cancer as a significant issue, the President discussed plans to enhance cancer research and support for patients and families [7].'"
]
},
"execution_count": 14,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result[\"answer\"]"
"bot.invoke(\"What did the president say about Ketanji Brown Jackson?\")[\"answer\"]"
]
},
{
@@ -321,256 +183,25 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 5,
"id": "9c95460b-7116-4155-a9d2-c0fb027ee592",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = [(query, result[\"answer\"])]\n",
"query = \"Did he mention who she suceeded\"\n",
"result = bot.invoke({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "698ac00c-cadc-407f-9423-226b2d9258d0",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"' Ketanji Brown Jackson succeeded Justice Breyer on the United States Supreme Court.'"
"\"In his remarks, the President specified that Ketanji Brown Jackson is succeeding Justice Breyer on the United States Supreme Court[1]. The President praised Jackson as a top legal mind who will continue Justice Breyer's legacy of excellence. The nomination of Jackson was highlighted as a significant constitutional responsibility of the President[1]. The President emphasized the importance of this nomination and the qualities that Jackson brings to the role. The focus was on the transition from Justice Breyer to Judge Ketanji Brown Jackson on the Supreme Court[1].\""
]
},
"execution_count": 16,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result[\"answer\"]"
]
},
{
"cell_type": "markdown",
"id": "0eaadf0f",
"metadata": {},
"source": [
"## Return Source Documents\n",
"You can also easily return source documents from the ConversationalRetrievalChain. This is useful for when you want to inspect what documents were returned."
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "562769c6",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"bot = ConversationalRetrievalChain.from_llm(\n",
" llm, vectara.as_retriever(), return_source_documents=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "ea478300",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = bot.invoke({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "4cb75b4e",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence. A former top litigator in private practice.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '29486', 'len': '97'})"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result[\"source_documents\"][0]"
]
},
{
"cell_type": "markdown",
"id": "99b96dae",
"metadata": {},
"source": [
"## ConversationalRetrievalChain with `map_reduce`\n",
"LangChain supports different types of ways to combine document chains with the ConversationalRetrievalChain chain."
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "e53a9d66",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT\n",
"from langchain.chains.question_answering import load_qa_chain"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "bf205e35",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)\n",
"doc_chain = load_qa_chain(llm, chain_type=\"map_reduce\")\n",
"\n",
"chain = ConversationalRetrievalChain(\n",
" retriever=vectara.as_retriever(),\n",
" question_generator=question_generator,\n",
" combine_docs_chain=doc_chain,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "78155887",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = chain({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "e54b5fa2",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"\" The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, who is one of the nation's top legal minds and a former top litigator in private practice.\""
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result[\"answer\"]"
]
},
{
"cell_type": "markdown",
"id": "a2fe6b14",
"metadata": {},
"source": [
"## ConversationalRetrievalChain with Question Answering with sources\n",
"\n",
"You can also use this chain with the question answering with sources chain."
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "d1058fd2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains.qa_with_sources import load_qa_with_sources_chain"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "a6594482",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)\n",
"doc_chain = load_qa_with_sources_chain(llm, chain_type=\"map_reduce\")\n",
"\n",
"chain = ConversationalRetrievalChain(\n",
" retriever=vectara.as_retriever(),\n",
" question_generator=question_generator,\n",
" combine_docs_chain=doc_chain,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "e2badd21",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = chain({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "edb31fe5",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice.\\nSOURCES: langchain\""
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result[\"answer\"]"
"bot.invoke(\"Did he mention who she suceeded?\")[\"answer\"]"
]
},
{
@@ -578,157 +209,40 @@
"id": "2324cdc6-98bf-4708-b8cd-02a98b1e5b67",
"metadata": {},
"source": [
"## ConversationalRetrievalChain with streaming to `stdout`\n",
"## Chat with streaming\n",
"\n",
"Output from the chain will be streamed to `stdout` token by token in this example."
"Of course the chatbot interface also supports streaming.\n",
"Instead of the `invoke` method you simply use `stream`:"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "2efacec3-2690-4b05-8de3-a32fd2ac3911",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains.conversational_retrieval.prompts import (\n",
" CONDENSE_QUESTION_PROMPT,\n",
" QA_PROMPT,\n",
")\n",
"from langchain.chains.llm import LLMChain\n",
"from langchain.chains.question_answering import load_qa_chain\n",
"from langchain_core.callbacks import StreamingStdOutCallbackHandler\n",
"\n",
"# Construct a ConversationalRetrievalChain with a streaming llm for combine docs\n",
"# and a separate, non-streaming llm for question generation\n",
"llm = OpenAI(temperature=0, openai_api_key=openai_api_key)\n",
"streaming_llm = OpenAI(\n",
" streaming=True,\n",
" callbacks=[StreamingStdOutCallbackHandler()],\n",
" temperature=0,\n",
" openai_api_key=openai_api_key,\n",
")\n",
"\n",
"question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)\n",
"doc_chain = load_qa_chain(streaming_llm, chain_type=\"stuff\", prompt=QA_PROMPT)\n",
"\n",
"bot = ConversationalRetrievalChain(\n",
" retriever=vectara.as_retriever(),\n",
" combine_docs_chain=doc_chain,\n",
" question_generator=question_generator,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "fd6d43f4-7428-44a4-81bc-26fe88a98762",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice, and that she will continue Justice Breyer's legacy of excellence."
]
}
],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = bot.invoke({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "5ab38978-f3e8-4fa7-808c-c79dec48379a",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Ketanji Brown Jackson succeeded Justice Breyer on the United States Supreme Court."
]
}
],
"source": [
"chat_history = [(query, result[\"answer\"])]\n",
"query = \"Did he mention who she suceeded\"\n",
"result = bot.invoke({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "markdown",
"id": "f793d56b",
"metadata": {},
"source": [
"## get_chat_history Function\n",
"You can also specify a `get_chat_history` function, which can be used to format the chat_history string."
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "a7ba9d8c",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"def get_chat_history(inputs) -> str:\n",
" res = []\n",
" for human, ai in inputs:\n",
" res.append(f\"Human:{human}\\nAI:{ai}\")\n",
" return \"\\n\".join(res)\n",
"\n",
"\n",
"bot = ConversationalRetrievalChain.from_llm(\n",
" llm, vectara.as_retriever(), get_chat_history=get_chat_history\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "a3e33c0d",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = bot.invoke({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 33,
"execution_count": 6,
"id": "936dc62f",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice, and that she will continue Justice Breyer's legacy of excellence.\""
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
"name": "stdout",
"output_type": "stream",
"text": [
"Judge Ketanji Brown Jackson is a nominee for the United States Supreme Court, known for her legal expertise and experience as a former litigator. She is praised for her potential to continue the legacy of excellence on the Court[1]. While the search results provide information on various topics like innovation, economic growth, and healthcare initiatives, they do not directly address Judge Ketanji Brown Jackson's specific accomplishments. Therefore, I do not have enough information to answer this question."
]
}
],
"source": [
"result[\"answer\"]"
"output = {}\n",
"curr_key = None\n",
"for chunk in bot.stream(\"what about her accopmlishments?\"):\n",
" for key in chunk:\n",
" if key not in output:\n",
" output[key] = chunk[key]\n",
" else:\n",
" output[key] += chunk[key]\n",
" if key == \"answer\":\n",
" print(chunk[key], end=\"\", flush=True)\n",
" curr_key = key"
]
}
],
@@ -748,7 +262,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
"version": "3.11.8"
}
},
"nbformat": 4,

View File

@@ -1,311 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "559f8e0e",
"metadata": {},
"source": [
"# Vectara\n",
"\n",
">[Vectara](https://vectara.com/) is the trusted GenAI platform that provides an easy-to-use API for document indexing and querying. \n",
"\n",
"Vectara provides an end-to-end managed service for Retrieval Augmented Generation or [RAG](https://vectara.com/grounded-generation/), which includes:\n",
"\n",
"1. A way to extract text from document files and chunk them into sentences.\n",
"\n",
"2. The state-of-the-art [Boomerang](https://vectara.com/how-boomerang-takes-retrieval-augmented-generation-to-the-next-level-via-grounded-generation/) embeddings model. Each text chunk is encoded into a vector embedding using Boomerang, and stored in the Vectara internal knowledge (vector+text) store\n",
"\n",
"3. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments (including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) and [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/))\n",
"\n",
"4. An option to create [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents, including citations.\n",
"\n",
"See the [Vectara API documentation](https://docs.vectara.com/docs/) for more information on how to use the API.\n",
"\n",
"This notebook shows how to use functionality related to the `Vectara`'s integration with langchain.\n",
"Specificaly we will demonstrate how to use chaining with [LangChain's Expression Language](/docs/concepts#langchain-expression-language) and using Vectara's integrated summarization capability."
]
},
{
"cell_type": "markdown",
"id": "e97dcf11",
"metadata": {},
"source": [
"# Setup\n",
"\n",
"You will need a Vectara account to use Vectara with LangChain. To get started, use the following steps:\n",
"\n",
"1. [Sign up](https://www.vectara.com/integrations/langchain) for a Vectara account if you don't already have one. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.\n",
"\n",
"2. Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingest from input documents. To create a corpus, use the **\"Create Corpus\"** button. You then provide a name to your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top.\n",
"\n",
"3. Next you'll need to create API keys to access the corpus. Click on the **\"Authorization\"** tab in the corpus view and then the **\"Create API Key\"** button. Give your key a name, and choose whether you want query only or query+index for your key. Click \"Create\" and you now have an active API key. Keep this key confidential. \n",
"\n",
"To use LangChain with Vectara, you'll need to have these three values: customer ID, corpus ID and api_key.\n",
"You can provide those to LangChain in two ways:\n",
"\n",
"1. Include in your environment these three variables: `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID` and `VECTARA_API_KEY`.\n",
"\n",
"> For example, you can set these variables using os.environ and getpass as follows:\n",
"\n",
"```python\n",
"import os\n",
"import getpass\n",
"\n",
"os.environ[\"VECTARA_CUSTOMER_ID\"] = getpass.getpass(\"Vectara Customer ID:\")\n",
"os.environ[\"VECTARA_CORPUS_ID\"] = getpass.getpass(\"Vectara Corpus ID:\")\n",
"os.environ[\"VECTARA_API_KEY\"] = getpass.getpass(\"Vectara API Key:\")\n",
"```\n",
"\n",
"2. Add them to the Vectara vectorstore constructor:\n",
"\n",
"```python\n",
"vectorstore = Vectara(\n",
" vectara_customer_id=vectara_customer_id,\n",
" vectara_corpus_id=vectara_corpus_id,\n",
" vectara_api_key=vectara_api_key\n",
" )\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "aac7a9a6",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings import FakeEmbeddings\n",
"from langchain_community.vectorstores import Vectara\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough"
]
},
{
"cell_type": "markdown",
"id": "875ffb7e",
"metadata": {},
"source": [
"First we load the state-of-the-union text into Vectara. Note that we use the `from_files` interface which does not require any local processing or chunking - Vectara receives the file content and performs all the necessary pre-processing, chunking and embedding of the file into its knowledge store."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "be0a4973",
"metadata": {},
"outputs": [],
"source": [
"vectara = Vectara.from_files([\"state_of_the_union.txt\"])"
]
},
{
"cell_type": "markdown",
"id": "22a6b953",
"metadata": {},
"source": [
"We now create a Vectara retriever and specify that:\n",
"* It should return only the 3 top Document matches\n",
"* For summary, it should use the top 5 results and respond in English"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "19cd2f86",
"metadata": {},
"outputs": [],
"source": [
"summary_config = {\"is_enabled\": True, \"max_results\": 5, \"response_lang\": \"eng\"}\n",
"retriever = vectara.as_retriever(\n",
" search_kwargs={\"k\": 3, \"summary_config\": summary_config}\n",
")"
]
},
{
"cell_type": "markdown",
"id": "c49284ed",
"metadata": {},
"source": [
"When using summarization with Vectara, the retriever responds with a list of `Document` objects:\n",
"1. The first `k` documents are the ones that match the query (as we are used to with a standard vector store)\n",
"2. With summary enabled, an additional `Document` object is apended, which includes the summary text. This Document has the metadata field `summary` set as True.\n",
"\n",
"Let's define two utility functions to split those out:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e5100654",
"metadata": {},
"outputs": [],
"source": [
"def get_sources(documents):\n",
" return documents[:-1]\n",
"\n",
"\n",
"def get_summary(documents):\n",
" return documents[-1].page_content\n",
"\n",
"\n",
"query_str = \"what did Biden say?\""
]
},
{
"cell_type": "markdown",
"id": "f2a74368",
"metadata": {},
"source": [
"Now we can try a summary response for the query:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ee4759c4",
"metadata": {
"scrolled": false
},
"outputs": [
{
"data": {
"text/plain": [
"'The returned results did not contain sufficient information to be summarized into a useful answer for your query. Please try a different search or restate your query differently.'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"(retriever | get_summary).invoke(query_str)"
]
},
{
"cell_type": "markdown",
"id": "dd7c4593",
"metadata": {},
"source": [
"And if we would like to see the sources retrieved from Vectara that were used in this summary (the citations):"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "0eb66034",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='When they came home, many of the worlds fittest and best trained warriors were never the same. Dizziness. \\n\\nA cancer that would put them in a flag-draped coffin. I know. \\n\\nOne of those soldiers was my son Major Beau Biden. We dont know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. But Im committed to finding out everything we can.', metadata={'lang': 'eng', 'section': '1', 'offset': '34652', 'len': '60', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'}),\n",
" Document(page_content='The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains. And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights further isolating Russia and adding an additional squeeze on their economy. The Ruble has lost 30% of its value.', metadata={'lang': 'eng', 'section': '1', 'offset': '3807', 'len': '42', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'}),\n",
" Document(page_content='He rejected repeated efforts at diplomacy. He thought the West and NATO wouldnt respond. And he thought he could divide us at home. We were ready. Here is what we did. We prepared extensively and carefully.', metadata={'lang': 'eng', 'section': '1', 'offset': '2100', 'len': '42', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'})]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"(retriever | get_sources).invoke(query_str)"
]
},
{
"cell_type": "markdown",
"id": "8f16bf8d",
"metadata": {},
"source": [
"Vectara's \"RAG as a service\" does a lot of the heavy lifting in creating question answering or chatbot chains. The integration with LangChain provides the option to use additional capabilities such as query pre-processing like `SelfQueryRetriever` or `MultiQueryRetriever`. Let's look at an example of using the [MultiQueryRetriever](/docs/how_to/MultiQueryRetriever).\n",
"\n",
"Since MQR uses an LLM we have to set that up - here we choose `ChatOpenAI`:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e14325b9",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"President Biden has made several notable quotes and comments. He expressed a commitment to investigate the potential impact of burn pits on soldiers' health, referencing his son's brain cancer [1]. He emphasized the importance of unity among Americans, urging us to see each other as fellow citizens rather than enemies [2]. Biden also highlighted the need for schools to use funds from the American Rescue Plan to hire teachers and address learning loss, while encouraging community involvement in supporting education [3].\""
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.retrievers.multi_query import MultiQueryRetriever\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0)\n",
"mqr = MultiQueryRetriever.from_llm(retriever=retriever, llm=llm)\n",
"\n",
"(mqr | get_summary).invoke(query_str)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "fa14f923",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='When they came home, many of the worlds fittest and best trained warriors were never the same. Dizziness. \\n\\nA cancer that would put them in a flag-draped coffin. I know. \\n\\nOne of those soldiers was my son Major Beau Biden. We dont know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. But Im committed to finding out everything we can.', metadata={'lang': 'eng', 'section': '1', 'offset': '34652', 'len': '60', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'}),\n",
" Document(page_content='The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains. And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights further isolating Russia and adding an additional squeeze on their economy. The Ruble has lost 30% of its value.', metadata={'lang': 'eng', 'section': '1', 'offset': '3807', 'len': '42', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'}),\n",
" Document(page_content='And, if Congress provides the funds we need, well have new stockpiles of tests, masks, and pills ready if needed. I cannot promise a new variant wont come. But I can promise you well do everything within our power to be ready if it does. Third we can end the shutdown of schools and businesses. We have the tools we need.', metadata={'lang': 'eng', 'section': '1', 'offset': '24753', 'len': '82', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'}),\n",
" Document(page_content='The returned results did not contain sufficient information to be summarized into a useful answer for your query. Please try a different search or restate your query differently.', metadata={'summary': True}),\n",
" Document(page_content='Danielle says Heath was a fighter to the very end. He didnt know how to stop fighting, and neither did she. Through her pain she found purpose to demand we do better. Tonight, Danielle—we are. The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits.', metadata={'lang': 'eng', 'section': '1', 'offset': '35502', 'len': '58', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'}),\n",
" Document(page_content='Lets stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. We cant change how divided weve been. But we can change how we move forward—on COVID-19 and other issues we must face together. I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun.', metadata={'lang': 'eng', 'section': '1', 'offset': '26312', 'len': '89', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'}),\n",
" Document(page_content='The American Rescue Plan gave schools money to hire teachers and help students make up for lost learning. I urge every parent to make sure your school does just that. And we can all play a part—sign up to be a tutor or a mentor. Children were also struggling before the pandemic. Bullying, violence, trauma, and the harms of social media.', metadata={'lang': 'eng', 'section': '1', 'offset': '33227', 'len': '61', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'})]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"(mqr | get_sources).invoke(query_str)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "16853820",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -28,38 +28,68 @@ In addition to Zep Open Source's memory management features, Zep Cloud offers:
- **Dialog Classification**: Instantly and accurately classify chat dialog. Understand user intent and emotion, segment users, and more. Route chains based on semantic context, and trigger events.
- **Structured Data Extraction**: Quickly extract business data from chat conversations using a schema you define. Understand what your Assistant should ask for next in order to complete its task.
> Interested in Zep Cloud? See [Zep Cloud Installation Guide](https://help.getzep.com/sdks), [Zep Cloud Message History Example](https://help.getzep.com/langchain/examples/messagehistory-example), [Zep Cloud Vector Store Example](https://help.getzep.com/langchain/examples/vectorstore-example)
## Open Source Installation and Setup
> Zep Open Source project: [https://github.com/getzep/zep](https://github.com/getzep/zep)
>
> Zep Open Source Docs: [https://docs.getzep.com/](https://docs.getzep.com/)
## Zep Open Source
Zep offers an open source version with a self-hosted option.
Please refer to the [Zep Open Source](https://github.com/getzep/zep) repo for more information.
You can also find Zep Open Source compatible [Retriever](/docs/integrations/retrievers/zep_memorystore), [Vector Store](/docs/integrations/vectorstores/zep) and [Memory](/docs/integrations/memory/zep_memory) examples
1. Install the Zep service. See the [Zep Quick Start Guide](https://docs.getzep.com/deployment/quickstart/).
## Zep Cloud Installation and Setup
2. Install the Zep Python SDK:
[Zep Cloud Docs](https://help.getzep.com)
1. Install the Zep Cloud SDK:
```bash
pip install zep_python
pip install zep_cloud
```
or
```bash
poetry add zep_cloud
```
## Memory
Zep's [Memory API](https://docs.getzep.com/sdk/chat_history/) persists your app's chat history and metadata to a Session, enriches the memory, automatically generates summaries, and enables vector similarity search over historical chat messages and summaries.
Zep's Memory API persists your users' chat history and metadata to a [Session](https://help.getzep.com/chat-history-memory/sessions), enriches the memory, and
enables vector similarity search over historical chat messages and dialog summaries.
There are two approaches to populating your prompt with chat history:
Zep offers several approaches to populating prompts with context from historical conversations.
1. Retrieve the most recent N messages (and potentially a summary) from a Session and use them to construct your prompt.
2. Search over the Session's chat history for messages that are relevant and use them to construct your prompt.
### Perpetual Memory
This is the default memory type.
Salient facts from the dialog are extracted and stored in a Fact Table.
This is updated in real-time as new messages are added to the Session.
Every time you call the Memory API to get a Memory, Zep returns the Fact Table, the most recent messages (per your Message Window setting), and a summary of the most recent messages prior to the Message Window.
The combination of the Fact Table, summary, and the most recent messages in a prompt provides both factual context and nuance to the LLM.
Both of these approaches may be useful, with the first providing the LLM with context as to the most recent interactions with a human. The second approach enables you to look back further in the chat history and retrieve messages that are relevant to the current conversation in a token-efficient manner.
### Summary Retriever Memory
Returns the most recent messages and a summary of past messages relevant to the current conversation,
enabling you to provide your Assistant with helpful context from past conversations.
### Message Window Buffer Memory
Returns the most recent N messages from the current conversation.
Additionally, Zep enables vector similarity searches for Messages or Summaries stored within its system.
This feature lets you populate prompts with past conversations that are contextually similar to a specific query,
organizing the results by a similarity Score.
`ZepCloudChatMessageHistory` and `ZepCloudMemory` classes can be imported to interact with Zep Cloud APIs.
`ZepCloudChatMessageHistory` is compatible with `RunnableWithMessageHistory`.
```python
from langchain.memory import ZepMemory
from langchain_community.chat_message_histories import ZepCloudChatMessageHistory
```
See a [RAG App Example here](/docs/integrations/memory/zep_memory).
See a [Perpetual Memory Example here](/docs/integrations/memory/zep_cloud_chat_message_history).
You can use `ZepCloudMemory` together with agents that support Memory.
```python
from langchain.memory import ZepCloudMemory
```
See a [Memory RAG Example here](/docs/integrations/memory/zep_memory_cloud).
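Since `ZepCloudChatMessageHistory` is compatible with `RunnableWithMessageHistory`, you can wire Zep-backed history into any runnable. A minimal sketch, assuming the constructor accepts `session_id` and `api_key`, and that `chain` is any runnable such as `prompt | llm`:
```python
from langchain_community.chat_message_histories import ZepCloudChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

# `chain` is any runnable, e.g. prompt | llm (defined elsewhere)
chain_with_history = RunnableWithMessageHistory(
    chain,
    # Zep persists and enriches the chat history for each session
    lambda session_id: ZepCloudChatMessageHistory(
        session_id=session_id, api_key="<ZEP_API_KEY>"
    ),
    input_messages_key="question",
    history_messages_key="history",
)
```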
## Retriever
@@ -67,24 +97,24 @@ Zep's Memory Retriever is a LangChain Retriever that enables you to retrieve mes
The Retriever supports searching over both individual messages and summaries of conversations. The latter is useful for providing rich, but succinct context to the LLM as to relevant past conversations.
Zep's Memory Retriever supports both similarity search and [Maximum Marginal Relevance (MMR) reranking](https://docs.getzep.com/sdk/search_query/). MMR search is useful for ensuring that the retrieved messages are diverse and not too similar to each other.
Zep's Memory Retriever supports both similarity search and [Maximum Marginal Relevance (MMR) reranking](https://help.getzep.com/working-with-search#how-zeps-mmr-re-ranking-works). MMR search is useful for ensuring that the retrieved messages are diverse and not too similar to each other.
See a [usage example](/docs/integrations/retrievers/zep_memorystore).
See a [usage example](/docs/integrations/retrievers/zep_cloud_memorystore).
```python
from langchain_community.retrievers import ZepRetriever
from langchain_community.retrievers import ZepCloudRetriever
```
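As a rough sketch of retrieving relevant messages from a session (the `session_id` and `top_k` parameter names are assumptions; see the usage example above for the authoritative API):
```python
from langchain_community.retrievers import ZepCloudRetriever

# Retrieval is session/user specific, so a session_id is required
retriever = ZepCloudRetriever(
    api_key="<ZEP_API_KEY>",
    session_id="<SESSION_ID>",
    top_k=5,
)
docs = retriever.invoke("What did we discuss about the product launch?")
```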
## Vector store
Zep's [Document VectorStore API](https://docs.getzep.com/sdk/documents/) enables you to store and retrieve documents using vector similarity search. Zep doesn't require you to understand
Zep's [Document VectorStore API](https://help.getzep.com/document-collections) enables you to store and retrieve documents using vector similarity search. Zep doesn't require you to understand
distance functions, types of embeddings, or indexing best practices. You just pass in your chunked documents, and Zep handles the rest.
Zep supports both similarity search and [Maximum Marginal Relevance (MMR) reranking](https://docs.getzep.com/sdk/search_query/).
Zep supports both similarity search and [Maximum Marginal Relevance (MMR) reranking](https://help.getzep.com/working-with-search#how-zeps-mmr-re-ranking-works).
MMR search is useful for ensuring that the retrieved documents are diverse and not too similar to each other.
```python
from langchain_community.vectorstores import ZepVectorStore
from langchain_community.vectorstores import ZepCloudVectorStore
```
See a [usage example](/docs/integrations/vectorstores/zep).
See a [usage example](/docs/integrations/vectorstores/zep_cloud).
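As a rough sketch of storing chunked documents and searching them (the `collection_name` parameter name is an assumption for illustration):
```python
from langchain_community.vectorstores import ZepCloudVectorStore
from langchain_core.documents import Document

vs = ZepCloudVectorStore(
    collection_name="docs",  # assumed parameter name
    api_key="<ZEP_API_KEY>",
)
vs.add_documents([Document(page_content="Zep is a long-term memory service for AI assistants.")])
results = vs.similarity_search("What is Zep?", k=3)
```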


@@ -0,0 +1,636 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"# Milvus Hybrid Search\n",
"\n",
"> [Milvus](https://milvus.io/docs) is an open-source vector database built to power embedding similarity search and AI applications. Milvus makes unstructured data search more accessible, and provides a consistent user experience regardless of the deployment environment.\n",
"\n",
"This notebook goes over how to use the Milvus Hybrid Search retriever, which combines the strengths of both dense and sparse vector search.\n",
"\n",
"For more reference please go to [Milvus Multi-Vector Search](https://milvus.io/docs/multi-vector-search.md)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## Prerequisites\n",
"### Install dependencies\n",
"You need to prepare to install the following dependencies\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet pymilvus[model] langchain-milvus langchain-openai"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"Import necessary modules and classes"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"from pymilvus import (\n",
" Collection,\n",
" CollectionSchema,\n",
" DataType,\n",
" FieldSchema,\n",
" WeightedRanker,\n",
" connections,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_milvus.retrievers import MilvusCollectionHybridSearchRetriever\n",
"from langchain_milvus.utils.sparse import BM25SparseEmbedding\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### Start the Milvus service\n",
"\n",
"Please refer to the [Milvus documentation](https://milvus.io/docs/install_standalone-docker.md) to start the Milvus service.\n",
"\n",
"After starting milvus, you need to specify your milvus connection URI.\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"CONNECTION_URI = \"http://localhost:19530\""
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### Prepare OpenAI API Key\n",
"\n",
"Please refer to the [OpenAI documentation](https://platform.openai.com/account/api-keys) to obtain your OpenAI API key, and set it as an environment variable.\n",
"\n",
"```shell\n",
"export OPENAI_API_KEY=<your_api_key>\n",
"```\n"
]
},
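{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternatively, you can set the key from within the notebook using `getpass`, mirroring the pattern used elsewhere in these docs (a sketch, not part of the original notebook):\n",
"\n",
"```python\n",
"import getpass\n",
"import os\n",
"\n",
"# Prompt for the key instead of hard-coding it\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")\n",
"```"
]
},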
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Prepare data and Load\n",
"### Prepare dense and sparse embedding functions\n",
"\n",
" Let us fictionalize 10 fake descriptions of novels. In actual production, it may be a large amount of text data."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"texts = [\n",
" \"In 'The Whispering Walls' by Ava Moreno, a young journalist named Sophia uncovers a decades-old conspiracy hidden within the crumbling walls of an ancient mansion, where the whispers of the past threaten to destroy her own sanity.\",\n",
" \"In 'The Last Refuge' by Ethan Blackwood, a group of survivors must band together to escape a post-apocalyptic wasteland, where the last remnants of humanity cling to life in a desperate bid for survival.\",\n",
" \"In 'The Memory Thief' by Lila Rose, a charismatic thief with the ability to steal and manipulate memories is hired by a mysterious client to pull off a daring heist, but soon finds themselves trapped in a web of deceit and betrayal.\",\n",
" \"In 'The City of Echoes' by Julian Saint Clair, a brilliant detective must navigate a labyrinthine metropolis where time is currency, and the rich can live forever, but at a terrible cost to the poor.\",\n",
" \"In 'The Starlight Serenade' by Ruby Flynn, a shy astronomer discovers a mysterious melody emanating from a distant star, which leads her on a journey to uncover the secrets of the universe and her own heart.\",\n",
" \"In 'The Shadow Weaver' by Piper Redding, a young orphan discovers she has the ability to weave powerful illusions, but soon finds herself at the center of a deadly game of cat and mouse between rival factions vying for control of the mystical arts.\",\n",
" \"In 'The Lost Expedition' by Caspian Grey, a team of explorers ventures into the heart of the Amazon rainforest in search of a lost city, but soon finds themselves hunted by a ruthless treasure hunter and the treacherous jungle itself.\",\n",
" \"In 'The Clockwork Kingdom' by Augusta Wynter, a brilliant inventor discovers a hidden world of clockwork machines and ancient magic, where a rebellion is brewing against the tyrannical ruler of the land.\",\n",
" \"In 'The Phantom Pilgrim' by Rowan Welles, a charismatic smuggler is hired by a mysterious organization to transport a valuable artifact across a war-torn continent, but soon finds themselves pursued by deadly assassins and rival factions.\",\n",
" \"In 'The Dreamwalker's Journey' by Lyra Snow, a young dreamwalker discovers she has the ability to enter people's dreams, but soon finds herself trapped in a surreal world of nightmares and illusions, where the boundaries between reality and fantasy blur.\",\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will use the [OpenAI Embedding](https://platform.openai.com/docs/guides/embeddings) to generate dense vectors, and the [BM25 algorithm](https://en.wikipedia.org/wiki/Okapi_BM25) to generate sparse vectors.\n",
"\n",
"Initialize dense embedding function and get dimension"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1536"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dense_embedding_func = OpenAIEmbeddings()\n",
"dense_dim = len(dense_embedding_func.embed_query(texts[1]))\n",
"dense_dim"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Initialize sparse embedding function.\n",
"\n",
"Note that the output of sparse embedding is a set of sparse vectors, which represents the index and weight of the keywords of the input text."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{0: 0.4270424944042204,\n",
" 21: 1.845826690498331,\n",
" 22: 1.845826690498331,\n",
" 23: 1.845826690498331,\n",
" 24: 1.845826690498331,\n",
" 25: 1.845826690498331,\n",
" 26: 1.845826690498331,\n",
" 27: 1.2237754316221157,\n",
" 28: 1.845826690498331,\n",
" 29: 1.845826690498331,\n",
" 30: 1.845826690498331,\n",
" 31: 1.845826690498331,\n",
" 32: 1.845826690498331,\n",
" 33: 1.845826690498331,\n",
" 34: 1.845826690498331,\n",
" 35: 1.845826690498331,\n",
" 36: 1.845826690498331,\n",
" 37: 1.845826690498331,\n",
" 38: 1.845826690498331,\n",
" 39: 1.845826690498331}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sparse_embedding_func = BM25SparseEmbedding(corpus=texts)\n",
"sparse_embedding_func.embed_query(texts[1])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Milvus Collection and load data\n",
"\n",
"Initialize connection URI and establish connection"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"connections.connect(uri=CONNECTION_URI)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Define field names and their data types"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"pk_field = \"doc_id\"\n",
"dense_field = \"dense_vector\"\n",
"sparse_field = \"sparse_vector\"\n",
"text_field = \"text\"\n",
"fields = [\n",
" FieldSchema(\n",
" name=pk_field,\n",
" dtype=DataType.VARCHAR,\n",
" is_primary=True,\n",
" auto_id=True,\n",
" max_length=100,\n",
" ),\n",
" FieldSchema(name=dense_field, dtype=DataType.FLOAT_VECTOR, dim=dense_dim),\n",
" FieldSchema(name=sparse_field, dtype=DataType.SPARSE_FLOAT_VECTOR),\n",
" FieldSchema(name=text_field, dtype=DataType.VARCHAR, max_length=65_535),\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a collection with the defined schema"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"schema = CollectionSchema(fields=fields, enable_dynamic_field=False)\n",
"collection = Collection(\n",
" name=\"IntroductionToTheNovels\", schema=schema, consistency_level=\"Strong\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Define index for dense and sparse vectors"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"dense_index = {\"index_type\": \"FLAT\", \"metric_type\": \"IP\"}\n",
"collection.create_index(\"dense_vector\", dense_index)\n",
"sparse_index = {\"index_type\": \"SPARSE_INVERTED_INDEX\", \"metric_type\": \"IP\"}\n",
"collection.create_index(\"sparse_vector\", sparse_index)\n",
"collection.flush()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Insert entities into the collection and load the collection"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"entities = []\n",
"for text in texts:\n",
" entity = {\n",
" dense_field: dense_embedding_func.embed_documents([text])[0],\n",
" sparse_field: sparse_embedding_func.embed_documents([text])[0],\n",
" text_field: text,\n",
" }\n",
" entities.append(entity)\n",
"collection.insert(entities)\n",
"collection.load()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build RAG chain with Retriever\n",
"### Create the Retriever\n",
"\n",
"Define search parameters for sparse and dense fields, and create a retriever"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"sparse_search_params = {\"metric_type\": \"IP\"}\n",
"dense_search_params = {\"metric_type\": \"IP\", \"params\": {}}\n",
"retriever = MilvusCollectionHybridSearchRetriever(\n",
" collection=collection,\n",
" rerank=WeightedRanker(0.5, 0.5),\n",
" anns_fields=[dense_field, sparse_field],\n",
" field_embeddings=[dense_embedding_func, sparse_embedding_func],\n",
" field_search_params=[dense_search_params, sparse_search_params],\n",
" top_k=3,\n",
" text_field=text_field,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"In the input parameters of this Retriever, we use a dense embedding and a sparse embedding to perform hybrid search on the two fields of this Collection, and use WeightedRanker for reranking. Finally, 3 top-K Documents will be returned."
]
},
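{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to bias the hybrid search toward one field, you can adjust the `WeightedRanker` weights. A sketch reusing the same constructor arguments as above (the weight order is assumed to match the order of `anns_fields`):\n",
"\n",
"```python\n",
"# Favor dense (semantic) matches over sparse (keyword) matches\n",
"dense_heavy_retriever = MilvusCollectionHybridSearchRetriever(\n",
"    collection=collection,\n",
"    rerank=WeightedRanker(0.9, 0.1),  # dense weight, sparse weight\n",
"    anns_fields=[dense_field, sparse_field],\n",
"    field_embeddings=[dense_embedding_func, sparse_embedding_func],\n",
"    field_search_params=[dense_search_params, sparse_search_params],\n",
"    top_k=3,\n",
"    text_field=text_field,\n",
")\n",
"```"
]
},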
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content=\"In 'The Lost Expedition' by Caspian Grey, a team of explorers ventures into the heart of the Amazon rainforest in search of a lost city, but soon finds themselves hunted by a ruthless treasure hunter and the treacherous jungle itself.\", metadata={'doc_id': '449281835035545843'}),\n",
" Document(page_content=\"In 'The Phantom Pilgrim' by Rowan Welles, a charismatic smuggler is hired by a mysterious organization to transport a valuable artifact across a war-torn continent, but soon finds themselves pursued by deadly assassins and rival factions.\", metadata={'doc_id': '449281835035545845'}),\n",
" Document(page_content=\"In 'The Dreamwalker's Journey' by Lyra Snow, a young dreamwalker discovers she has the ability to enter people's dreams, but soon finds herself trapped in a surreal world of nightmares and illusions, where the boundaries between reality and fantasy blur.\", metadata={'doc_id': '449281835035545846'})]"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.invoke(\"What are the story about ventures?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Build the RAG chain\n",
"\n",
"Initialize ChatOpenAI and define a prompt template"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"llm = ChatOpenAI()\n",
"\n",
"PROMPT_TEMPLATE = \"\"\"\n",
"Human: You are an AI assistant, and provides answers to questions by using fact based and statistical information when possible.\n",
"Use the following pieces of information to provide a concise answer to the question enclosed in <question> tags.\n",
"\n",
"<context>\n",
"{context}\n",
"</context>\n",
"\n",
"<question>\n",
"{question}\n",
"</question>\n",
"\n",
"Assistant:\"\"\"\n",
"\n",
"prompt = PromptTemplate(\n",
" template=PROMPT_TEMPLATE, input_variables=[\"context\", \"question\"]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"Define a function for formatting documents"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"Define a chain using the retriever and other components"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"rag_chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"Perform a query using the defined chain"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"data": {
"text/plain": [
"\"Lila Rose has written 'The Memory Thief,' which follows a charismatic thief with the ability to steal and manipulate memories as they navigate a daring heist and a web of deceit and betrayal.\""
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rag_chain.invoke(\"What novels has Lila written and what are their contents?\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"Drop the collection"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"collection.drop()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
}


@@ -20,7 +20,7 @@
"\n",
"I have used the cloud version of Milvus, thus I need `uri` and `token` as well.\n",
"\n",
"NOTE: The self-query retriever requires you to have `lark` installed (`pip install lark`). We also need the `pymilvus` package."
"NOTE: The self-query retriever requires you to have `lark` installed (`pip install lark`). We also need the `langchain_milvus` package."
]
},
{
@@ -29,16 +29,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet lark"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet pymilvus"
"%pip install --upgrade --quiet lark langchain_milvus"
]
},
{
@@ -67,8 +58,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import Milvus\n",
"from langchain_core.documents import Document\n",
"from langchain_milvus.vectorstores import Milvus\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings()"
@@ -388,4 +379,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
}
}


@@ -5,15 +5,17 @@
"id": "13afcae7",
"metadata": {},
"source": [
"# Vectara \n",
"# Vectara self-querying \n",
"\n",
">[Vectara](https://vectara.com/) is the trusted GenAI platform that provides an easy-to-use API for document indexing and querying. \n",
">\n",
">`Vectara` provides an end-to-end managed service for `Retrieval Augmented Generation` or [RAG](https://vectara.com/grounded-generation/), which includes:\n",
">1. A way to `extract text` from document files and `chunk` them into sentences.\n",
">2. The state-of-the-art [Boomerang](https://vectara.com/how-boomerang-takes-retrieval-augmented-generation-to-the-next-level-via-grounded-generation/) embeddings model. Each text chunk is encoded into a vector embedding using `Boomerang`, and stored in the Vectara internal knowledge (vector+text) store\n",
">3. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments (including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) and [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/))\n",
">4. An option to create [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents, including citations.\n",
"[Vectara](https://vectara.com/) provides a Trusted Generative AI platform, allowing organizations to rapidly create a ChatGPT-like experience (an AI assistant) which is grounded in the data, documents, and knowledge that they have (technically, it is Retrieval-Augmented-Generation-as-a-service). \n",
"\n",
"Vectara serverless RAG-as-a-service provides all the components of RAG behind an easy-to-use API, including:\n",
"1. A way to extract text from files (PDF, PPT, DOCX, etc)\n",
"2. ML-based chunking that provides state of the art performance.\n",
"3. The [Boomerang](https://vectara.com/how-boomerang-takes-retrieval-augmented-generation-to-the-next-level-via-grounded-generation/) embeddings model.\n",
"4. Its own internal vector database where text chunks and embedding vectors are stored.\n",
"5. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments (including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) and [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/))\n",
"7. An LLM to for creating a [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents (context), including citations.\n",
"\n",
"See the [Vectara API documentation](https://docs.vectara.com/docs/) for more information on how to use the API.\n",
"\n",
@@ -25,19 +27,19 @@
"id": "68e75fb9",
"metadata": {},
"source": [
"# Setup\n",
"# Getting Started\n",
"\n",
"You will need a `Vectara` account to use `Vectara` with `LangChain`. To get started, use the following steps (see our [quickstart](https://docs.vectara.com/docs/quickstart) guide):\n",
"1. [Sign up](https://console.vectara.com/signup) for a `Vectara` account if you don't already have one. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.\n",
"2. Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingesting from input documents. To create a corpus, use the **\"Create Corpus\"** button. You then provide a name to your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top.\n",
"3. Next you'll need to create API keys to access the corpus. Click on the **\"Authorization\"** tab in the corpus view and then the **\"Create API Key\"** button. Give your key a name, and choose whether you want query only or query+index for your key. Click \"Create\" and you now have an active API key. Keep this key confidential. \n",
"To get started, use the following steps:\n",
"1. If you don't already have one, [Sign up](https://www.vectara.com/integrations/langchain) for your free Vectara account. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.\n",
"2. Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingest from input documents. To create a corpus, use the **\"Create Corpus\"** button. You then provide a name to your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top.\n",
"3. Next you'll need to create API keys to access the corpus. Click on the **\"Access Control\"** tab in the corpus view and then the **\"Create API Key\"** button. Give your key a name, and choose whether you want query-only or query+index for your key. Click \"Create\" and you now have an active API key. Keep this key confidential. \n",
"\n",
"To use LangChain with Vectara, you need three values: customer ID, corpus ID and api_key.\n",
"To use LangChain with Vectara, you'll need to have these three values: `customer ID`, `corpus ID` and `api_key`.\n",
"You can provide those to LangChain in two ways:\n",
"\n",
"1. Include in your environment these three variables: `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID` and `VECTARA_API_KEY`.\n",
"\n",
"> For example, you can set these variables using `os.environ` and `getpass` as follows:\n",
" For example, you can set these variables using os.environ and getpass as follows:\n",
"\n",
"```python\n",
"import os\n",
@@ -48,17 +50,18 @@
"os.environ[\"VECTARA_API_KEY\"] = getpass.getpass(\"Vectara API Key:\")\n",
"```\n",
"\n",
"1. Provide them as arguments when creating the `Vectara` vectorstore object:\n",
"2. Add them to the `Vectara` vectorstore constructor:\n",
"\n",
"```python\n",
"vectorstore = Vectara(\n",
"vectara = Vectara(\n",
" vectara_customer_id=vectara_customer_id,\n",
" vectara_corpus_id=vectara_corpus_id,\n",
" vectara_api_key=vectara_api_key\n",
" )\n",
"```\n",
"In this notebook we assume they are provided in the environment.\n",
"\n",
"**Note:** The self-query retriever requires you to have `lark` installed (`pip install lark`). "
"**Notes:** The self-query retriever requires you to have `lark` installed (`pip install lark`). "
]
},
{
@@ -68,34 +71,44 @@
"source": [
"## Connecting to Vectara from LangChain\n",
"\n",
"In this example, we assume that you've created an account and a corpus, and added your VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY (created with permissions for both indexing and query) as environment variables.\n",
"In this example, we assume that you've created an account and a corpus, and added your `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID` and `VECTARA_API_KEY` (created with permissions for both indexing and query) as environment variables.\n",
"\n",
"The corpus has 4 fields defined as metadata for filtering: year, director, rating, and genre\n"
"We further assume the corpus has 4 fields defined as filterable metadata attributes: `year`, `director`, `rating`, and `genre`"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9d3aa44f",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"VECTARA_API_KEY\"] = \"<YOUR_VECTARA_API_KEY>\"\n",
"os.environ[\"VECTARA_CORPUS_ID\"] = \"<YOUR_VECTARA_CORPUS_ID>\"\n",
"os.environ[\"VECTARA_CUSTOMER_ID\"] = \"<YOUR_VECTARA_CUSTOMER_ID>\"\n",
"\n",
"from langchain.chains.query_constructor.base import AttributeInfo\n",
"from langchain.retrievers.self_query.base import SelfQueryRetriever\n",
"from langchain.schema import Document\n",
"from langchain_community.vectorstores import Vectara\n",
"from langchain_openai.chat_models import ChatOpenAI"
]
},
{
"cell_type": "markdown",
"id": "13a6be33-de3c-4628-acc8-b94102c275b7",
"metadata": {},
"source": [
"## Dataset\n",
"\n",
"We first define an example dataset of movie, and upload those to the corpus, along with the metadata:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "cb4a5787",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain.chains.query_constructor.base import AttributeInfo\n",
"from langchain.retrievers.self_query.base import SelfQueryRetriever\n",
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.embeddings import FakeEmbeddings\n",
"from langchain_community.vectorstores import Vectara\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAI\n",
"from langchain_text_splitters import CharacterTextSplitter"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "bcbe04d9",
"metadata": {
"tags": []
@@ -136,11 +149,7 @@
"\n",
"vectara = Vectara()\n",
"for doc in docs:\n",
" vectara.add_texts(\n",
" [doc.page_content],\n",
" embedding=FakeEmbeddings(size=768),\n",
" doc_metadata=doc.metadata,\n",
" )"
" vectara.add_texts([doc.page_content], doc_metadata=doc.metadata)"
]
},
{
@@ -148,23 +157,21 @@
"id": "5ecaab6d",
"metadata": {},
"source": [
"## Creating our self-querying retriever\n",
"Now we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents."
"## Creating the self-querying retriever\n",
"Now we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.\n",
"\n",
"We then provide an llm (in this case OpenAI) and the `vectara` vectorstore as arguments:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"id": "86e34dbf",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains.query_constructor.base import AttributeInfo\n",
"from langchain.retrievers.self_query.base import SelfQueryRetriever\n",
"from langchain_openai import OpenAI\n",
"\n",
"metadata_field_info = [\n",
" AttributeInfo(\n",
" name=\"genre\",\n",
@@ -186,7 +193,7 @@
" ),\n",
"]\n",
"document_content_description = \"Brief summary of a movie\"\n",
"llm = OpenAI(temperature=0)\n",
"llm = ChatOpenAI(temperature=0, model=\"gpt-4o\", max_tokens=4069)\n",
"retriever = SelfQueryRetriever.from_llm(\n",
" llm, vectara, document_content_description, metadata_field_info, verbose=True\n",
")"
@@ -197,13 +204,13 @@
"id": "ea9df8d4",
"metadata": {},
"source": [
"## Testing it out\n",
"## Self-retrieval Queries\n",
"And now we can try actually using our retriever!"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"id": "38a126e9",
"metadata": {},
"outputs": [
@@ -211,26 +218,26 @@
"data": {
"text/plain": [
"[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'lang': 'eng', 'offset': '0', 'len': '66', 'year': '1993', 'rating': '7.7', 'genre': 'science fiction', 'source': 'langchain'}),\n",
" Document(page_content='Toys come alive and have a blast doing so', metadata={'lang': 'eng', 'offset': '0', 'len': '41', 'year': '1995', 'genre': 'animated', 'source': 'langchain'}),\n",
" Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'lang': 'eng', 'offset': '0', 'len': '116', 'year': '2006', 'director': 'Satoshi Kon', 'rating': '8.6', 'source': 'langchain'}),\n",
" Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'lang': 'eng', 'offset': '0', 'len': '76', 'year': '2010', 'director': 'Christopher Nolan', 'rating': '8.2', 'source': 'langchain'}),\n",
" Document(page_content='Toys come alive and have a blast doing so', metadata={'lang': 'eng', 'offset': '0', 'len': '41', 'year': '1995', 'genre': 'animated', 'source': 'langchain'}),\n",
" Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'}),\n",
" Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'lang': 'eng', 'offset': '0', 'len': '82', 'year': '2019', 'director': 'Greta Gerwig', 'rating': '8.3', 'source': 'langchain'}),\n",
" Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'})]"
" Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'lang': 'eng', 'offset': '0', 'len': '76', 'year': '2010', 'director': 'Christopher Nolan', 'rating': '8.2', 'source': 'langchain'})]"
]
},
"execution_count": 5,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.invoke(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are movies about scientists\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 5,
"id": "fc3f1e6e",
"metadata": {},
"outputs": [
@@ -241,7 +248,7 @@
" Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'})]"
]
},
"execution_count": 6,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -253,7 +260,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 6,
"id": "b19d4da0",
"metadata": {},
"outputs": [
@@ -263,7 +270,7 @@
"[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'lang': 'eng', 'offset': '0', 'len': '82', 'year': '2019', 'director': 'Greta Gerwig', 'rating': '8.3', 'source': 'langchain'})]"
]
},
"execution_count": 7,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@@ -275,17 +282,18 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 7,
"id": "f900e40e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'})]"
"[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'lang': 'eng', 'offset': '0', 'len': '116', 'year': '2006', 'director': 'Satoshi Kon', 'rating': '8.6', 'source': 'langchain'}),\n",
" Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'})]"
]
},
"execution_count": 8,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -297,17 +305,18 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 8,
"id": "12a51522",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Toys come alive and have a blast doing so', metadata={'lang': 'eng', 'offset': '0', 'len': '41', 'year': '1995', 'genre': 'animated', 'source': 'langchain'})]"
"[Document(page_content='Toys come alive and have a blast doing so', metadata={'lang': 'eng', 'offset': '0', 'len': '41', 'year': '1995', 'genre': 'animated', 'source': 'langchain'}),\n",
" Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'lang': 'eng', 'offset': '0', 'len': '66', 'year': '1993', 'rating': '7.7', 'genre': 'science fiction', 'source': 'langchain'})]"
]
},
"execution_count": 9,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@@ -333,7 +342,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 9,
"id": "bff36b88-b506-4877-9c63-e5a1a8d78e64",
"metadata": {
"tags": []
@@ -350,9 +359,17 @@
")"
]
},
{
"cell_type": "markdown",
"id": "00e8baad-a9d7-4498-bd8d-ca41d0691386",
"metadata": {},
"source": [
"This is cool, we can include the number of results we would like to see in the query and the self retriever would correctly understand it. For example, let's look for "
]
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 10,
"id": "2758d229-4f97-499c-819f-888acaf8ee10",
"metadata": {
"tags": []
@@ -361,19 +378,27 @@
{
"data": {
"text/plain": [
"[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'lang': 'eng', 'offset': '0', 'len': '66', 'year': '1993', 'rating': '7.7', 'genre': 'science fiction', 'source': 'langchain'}),\n",
" Document(page_content='Toys come alive and have a blast doing so', metadata={'lang': 'eng', 'offset': '0', 'len': '41', 'year': '1995', 'genre': 'animated', 'source': 'langchain'})]"
"[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'lang': 'eng', 'offset': '0', 'len': '116', 'year': '2006', 'director': 'Satoshi Kon', 'rating': '8.6', 'source': 'langchain'}),\n",
" Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'})]"
]
},
"execution_count": 11,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.invoke(\"what are two movies about dinosaurs\")"
"retriever.invoke(\"what are two movies with a rating above 8.5\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ed4b9dbc-e3cd-442d-b108-705295f51fa1",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -392,7 +417,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.11.8"
}
},
"nbformat": 4,


@@ -0,0 +1,470 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"# Zep Cloud\n",
"## Retriever Example for [Zep Cloud](https://docs.getzep.com/)\n",
"\n",
"> Recall, understand, and extract data from chat histories. Power personalized AI experiences.\n",
"\n",
"> [Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps.\n",
"> With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant,\n",
"> while also reducing hallucinations, latency, and cost.\n",
"\n",
"> See [Zep Cloud Installation Guide](https://help.getzep.com/sdks) and more [Zep Cloud Langchain Examples](https://github.com/getzep/zep-python/tree/main/examples)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Retriever Example\n",
"\n",
"This notebook demonstrates how to search historical chat message histories using the [Zep Long-term Memory Store](https://www.getzep.com/).\n",
"\n",
"We'll demonstrate:\n",
"\n",
"1. Adding conversation history to the Zep memory store.\n",
"2. Vector search over the conversation history: \n",
" 1. With a similarity search over chat messages\n",
" 2. Using maximal marginal relevance re-ranking of a chat message search\n",
" 3. Filtering a search using metadata filters\n",
" 4. A similarity search over summaries of the chat messages\n",
" 5. Using maximal marginal relevance re-ranking of a summary search\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import getpass\n",
"import time\n",
"from uuid import uuid4\n",
"\n",
"from langchain_community.memory.zep_cloud_memory import ZepCloudMemory\n",
"from langchain_community.retrievers import ZepCloudRetriever\n",
"from langchain_core.messages import AIMessage, HumanMessage\n",
"\n",
"# Provide your Zep API key.\n",
"zep_api_key = getpass.getpass()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialize the Zep Chat Message History Class and add a chat message history to the memory store\n",
"\n",
"**NOTE:** Unlike other Retrievers, the content returned by the Zep Retriever is session/user specific. A `session_id` is required when instantiating the Retriever."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"session_id = str(uuid4()) # This is a unique identifier for the user/session\n",
"\n",
"# Initialize the Zep Memory Class\n",
"zep_memory = ZepCloudMemory(session_id=session_id, api_key=zep_api_key)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Preload some messages into the memory. The default message window is 4 messages. We want to push beyond this to demonstrate auto-summarization.\n",
"test_history = [\n",
" {\"role\": \"human\", \"role_type\": \"user\", \"content\": \"Who was Octavia Butler?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"Octavia Estelle Butler (June 22, 1947 February 24, 2006) was an American\"\n",
" \" science fiction author.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"role_type\": \"user\",\n",
" \"content\": \"Which books of hers were made into movies?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"The most well-known adaptation of Octavia Butler's work is the FX series\"\n",
" \" Kindred, based on her novel of the same name.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"role_type\": \"user\", \"content\": \"Who were her contemporaries?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R.\"\n",
" \" Delany, and Joanna Russ.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"role_type\": \"user\", \"content\": \"What awards did she win?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur\"\n",
" \" Fellowship.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"role_type\": \"user\",\n",
" \"content\": \"Which other women sci-fi writers might I want to read?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": \"You might want to read Ursula K. Le Guin or Joanna Russ.\",\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"role_type\": \"user\",\n",
" \"content\": (\n",
" \"Write a short synopsis of Butler's book, Parable of the Sower. What is it\"\n",
" \" about?\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"Parable of the Sower is a science fiction novel by Octavia Butler,\"\n",
" \" published in 1993. It follows the story of Lauren Olamina, a young woman\"\n",
" \" living in a dystopian future where society has collapsed due to\"\n",
" \" environmental disasters, poverty, and violence.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"role_type\": \"user\",\n",
" \"content\": \"What is the setting of the book?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"The book is set in a dystopian future in the 2020s, where society has\"\n",
" \" collapsed due to climate change and economic crises.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"role_type\": \"user\", \"content\": \"Who is the protagonist?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"The protagonist of the book is Lauren Olamina, a young woman who possesses\"\n",
" \" 'hyperempathy', the ability to feel pain and other sensations she\"\n",
" \" witnesses.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"role_type\": \"user\",\n",
" \"content\": \"What is the main theme of the book?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"The main theme of the book is survival in the face of drastic societal\"\n",
" \" change and collapse. It also explores themes of adaptability, community,\"\n",
" \" and the human capacity for change.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"role_type\": \"user\",\n",
" \"content\": \"What is the 'Parable of the Sower'?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"The 'Parable of the Sower' is a biblical parable that Butler uses as a\"\n",
" \" metaphor in the book. In the parable, a sower scatters seeds, some of\"\n",
" \" which fall on fertile ground and grow, while others fall on rocky ground\"\n",
" \" or among thorns and fail to grow. The parable is used to illustrate the\"\n",
" \" importance of receptivity and preparedness in the face of change.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"role_type\": \"user\",\n",
" \"content\": \"What is Butler's writing style like?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"Butler's writing style is known for its clarity, directness, and\"\n",
" \" psychological insight. Her narratives often involve complex, diverse\"\n",
" \" characters and explore themes of race, gender, and power.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"role_type\": \"user\",\n",
" \"content\": \"What other books has she written?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"In addition to 'Parable of the Sower', Butler has written several other\"\n",
" \" notable works, including 'Kindred', 'Dawn', and 'Parable of the Talents'.\"\n",
" ),\n",
" },\n",
"]\n",
"\n",
"for msg in test_history:\n",
" zep_memory.chat_memory.add_message(\n",
" HumanMessage(content=msg[\"content\"])\n",
" if msg[\"role\"] == \"human\"\n",
" else AIMessage(content=msg[\"content\"])\n",
" )\n",
"\n",
"time.sleep(\n",
" 10\n",
") # Wait for the messages to be embedded and summarized, this happens asynchronously."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Use the Zep Retriever to vector search over the Zep memory\n",
"\n",
"Zep provides native vector search over historical conversation memory. Embedding happens automatically.\n",
"\n",
"NOTE: Embedding of messages occurs asynchronously, so the first query may not return results. Subsequent queries will return results as the embeddings are generated."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T14:32:06.613100Z",
"start_time": "2024-05-10T14:32:06.369301Z"
},
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content=\"What is the 'Parable of the Sower'?\", metadata={'score': 0.9333381652832031, 'uuid': 'bebc441c-a32d-44a1-ae61-968e7b3d4956', 'created_at': '2024-05-10T05:02:01.857627Z', 'token_count': 11, 'role': 'human'}),\n",
" Document(page_content=\"The 'Parable of the Sower' is a biblical parable that Butler uses as a metaphor in the book. In the parable, a sower scatters seeds, some of which fall on fertile ground and grow, while others fall on rocky ground or among thorns and fail to grow. The parable is used to illustrate the importance of receptivity and preparedness in the face of change.\", metadata={'score': 0.8757256865501404, 'uuid': '193c60d8-2b7b-4eb1-a4be-c2d8afd92991', 'created_at': '2024-05-10T05:02:01.97174Z', 'token_count': 82, 'role': 'ai'}),\n",
" Document(page_content=\"Write a short synopsis of Butler's book, Parable of the Sower. What is it about?\", metadata={'score': 0.8641344904899597, 'uuid': 'fc78901d-a625-4530-ba63-1ae3e3b11683', 'created_at': '2024-05-10T05:02:00.942994Z', 'token_count': 21, 'role': 'human'}),\n",
" Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', metadata={'score': 0.8581685125827789, 'uuid': '91f2cda4-276e-446d-96bf-07d34e5af616', 'created_at': '2024-05-10T05:02:01.05577Z', 'token_count': 54, 'role': 'ai'}),\n",
" Document(page_content=\"In addition to 'Parable of the Sower', Butler has written several other notable works, including 'Kindred', 'Dawn', and 'Parable of the Talents'.\", metadata={'score': 0.8076582252979279, 'uuid': 'e3994519-9a90-410c-b14c-2c652f6d184f', 'created_at': '2024-05-10T05:02:02.401682Z', 'token_count': 37, 'role': 'ai'})]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"zep_retriever = ZepCloudRetriever(\n",
" api_key=zep_api_key,\n",
" session_id=session_id, # Ensure that you provide the session_id when instantiating the Retriever\n",
" top_k=5,\n",
")\n",
"\n",
"await zep_retriever.ainvoke(\"Who wrote Parable of the Sower?\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also use the Zep sync API to retrieve results:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T14:31:37.611570Z",
"start_time": "2024-05-10T14:31:37.298903Z"
},
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler set in a dystopian future in the 2020s. The story follows Lauren Olamina, a young woman living in a society that has collapsed due to environmental disasters, poverty, and violence. The novel explores themes of societal breakdown, the struggle for survival, and the search for a better future.', metadata={'score': 0.8473024368286133, 'uuid': 'e4689f8e-33be-4a59-a9c2-e5ef5dd70f74', 'created_at': '2024-05-10T05:02:02.713123Z', 'token_count': 76})]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"zep_retriever.invoke(\"Who wrote Parable of the Sower?\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Reranking using MMR (Maximal Marginal Relevance)\n",
"\n",
"Zep has native, SIMD-accelerated support for reranking results using MMR. This is useful for removing redundancy in results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"zep_retriever = ZepCloudRetriever(\n",
" api_key=zep_api_key,\n",
" session_id=session_id, # Ensure that you provide the session_id when instantiating the Retriever\n",
" top_k=5,\n",
" search_type=\"mmr\",\n",
" mmr_lambda=0.5,\n",
")\n",
"\n",
"await zep_retriever.ainvoke(\"Who wrote Parable of the Sower?\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using metadata filters to refine search results\n",
"\n",
"Zep supports filtering results by metadata. This is useful for filtering results by entity type, or other metadata.\n",
"\n",
"More information here: https://help.getzep.com/document-collections#searching-a-collection-with-hybrid-vector-search"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"filter = {\"where\": {\"jsonpath\": '$[*] ? (@.baz == \"qux\")'}}\n",
"\n",
"await zep_retriever.ainvoke(\n",
" \"Who wrote Parable of the Sower?\", config={\"metadata\": filter}\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Searching over Summaries with MMR Reranking\n",
"\n",
"Zep automatically generates summaries of chat messages. These summaries can be searched over using the Zep Retriever. Since a summary is a distillation of a conversation, they're more likely to match your search query and offer rich, succinct context to the LLM.\n",
"\n",
"Successive summaries may include similar content, with Zep's similarity search returning the highest matching results but with little diversity.\n",
"MMR re-ranks the results to ensure that the summaries you populate into your prompt are both relevant and each offers additional information to the LLM."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T14:32:56.877960Z",
"start_time": "2024-05-10T14:32:56.517360Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler set in a dystopian future in the 2020s. The story follows Lauren Olamina, a young woman living in a society that has collapsed due to environmental disasters, poverty, and violence. The novel explores themes of societal breakdown, the struggle for survival, and the search for a better future.', metadata={'score': 0.8473024368286133, 'uuid': 'e4689f8e-33be-4a59-a9c2-e5ef5dd70f74', 'created_at': '2024-05-10T05:02:02.713123Z', 'token_count': 76}),\n",
" Document(page_content='The \\'Parable of the Sower\\' refers to a new religious belief system that the protagonist, Lauren Olamina, develops over the course of the novel. As her community disintegrates due to climate change, economic collapse, and social unrest, Lauren comes to believe that humanity must adapt and \"shape God\" in order to survive. The \\'Parable of the Sower\\' is the foundational text of this new religion, which Lauren calls \"Earthseed\", that emphasizes the inevitability of change and the need for humanity to take an active role in shaping its own future. This parable is a central thematic element of the novel, representing the protagonist\\'s search for meaning and purpose in the face of societal upheaval.', metadata={'score': 0.8466987311840057, 'uuid': '1f1a44eb-ebd8-4617-ac14-0281099bd770', 'created_at': '2024-05-10T05:02:07.541073Z', 'token_count': 146}),\n",
" Document(page_content='The dialog discusses the central themes of Octavia Butler\\'s acclaimed science fiction novel \"Parable of the Sower.\" The main theme is survival in the face of drastic societal collapse, and the importance of adaptability, community, and the human capacity for change. The \"Parable of the Sower,\" a biblical parable, serves as a metaphorical framework for the novel, illustrating the need for receptivity and preparedness when confronting transformative upheaval.', metadata={'score': 0.8283970355987549, 'uuid': '4158a750-3ccd-45ce-ab88-fed5ba68b755', 'created_at': '2024-05-10T05:02:06.510068Z', 'token_count': 91})]"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"zep_retriever = ZepCloudRetriever(\n",
" api_key=zep_api_key,\n",
" session_id=session_id, # Ensure that you provide the session_id when instantiating the Retriever\n",
" top_k=3,\n",
" search_scope=\"summary\",\n",
" search_type=\"mmr\",\n",
" mmr_lambda=0.5,\n",
")\n",
"\n",
"await zep_retriever.ainvoke(\"Who wrote Parable of the Sower?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -6,7 +6,7 @@
"collapsed": false
},
"source": [
"# Zep\n",
"# Zep Open Source\n",
"## Retriever Example for [Zep](https://docs.getzep.com/)\n",
"\n",
"> Recall, understand, and extract data from chat histories. Power personalized AI experiences.\n",

View File

@@ -0,0 +1,222 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Zilliz Cloud Pipeline\n",
"\n",
"> [Zilliz Cloud Pipelines](https://docs.zilliz.com/docs/pipelines) transform your unstructured data to a searchable vector collection, chaining up the embedding, ingestion, search, and deletion of your data.\n",
"> \n",
 Zilliz Cloud Pipelines are available">
"> Zilliz Cloud Pipelines are available in the Zilliz Cloud Console and via RESTful APIs.\n",
"\n",
"This notebook demonstrates how to prepare Zilliz Cloud Pipelines and use them via a LangChain Retriever."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prepare Zilliz Cloud Pipelines\n",
"\n",
"To get pipelines ready for LangChain Retriever, you need to create and configure the services in Zilliz Cloud."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**1. Set up Database**\n",
"\n",
"- [Register with Zilliz Cloud](https://docs.zilliz.com/docs/register-with-zilliz-cloud)\n",
"- [Create a cluster](https://docs.zilliz.com/docs/create-cluster)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**2. Create Pipelines**\n",
"\n",
"- [Document ingestion, search, deletion](https://docs.zilliz.com/docs/pipelines-doc-data)\n",
"- [Text ingestion, search, deletion](https://docs.zilliz.com/docs/pipelines-text-data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use LangChain Retriever"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-milvus"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain_milvus import ZillizCloudPipelineRetriever\n",
"\n",
"retriever = ZillizCloudPipelineRetriever(\n",
" pipeline_ids={\n",
" \"ingestion\": \"<YOUR_INGESTION_PIPELINE_ID>\", # skip this line if you do NOT need to add documents\n",
" \"search\": \"<YOUR_SEARCH_PIPELINE_ID>\", # skip this line if you do NOT need to get relevant documents\n",
" \"deletion\": \"<YOUR_DELETION_PIPELINE_ID>\", # skip this line if you do NOT need to delete documents\n",
" },\n",
" token=\"<YOUR_ZILLIZ_CLOUD_API_KEY>\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Add documents\n",
"\n",
"To add documents, you can use the method `add_texts` or `add_doc_url`, which inserts documents from a list of texts or a presigned/public url with corresponding metadata into the store."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- if using a **text ingestion pipeline**, you can use the method `add_texts`, which inserts a batch of texts with the corresponding metadata into the Zilliz Cloud storage.\n",
"\n",
" **Arguments:**\n",
" - `texts`: A list of text strings.\n",
" - `metadata`: A key-value dictionary of metadata will be inserted as preserved fields required by ingestion pipeline. Defaults to None.\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# retriever.add_texts(\n",
"# texts = [\"example text 1e\", \"example text 2\"],\n",
"# metadata={\"<FIELD_NAME>\": \"<FIELD_VALUE>\"} # skip this line if no preserved field is required by the ingestion pipeline\n",
"# )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- if using a **document ingestion pipeline**, you can use the method `add_doc_url`, which inserts a document from url with the corresponding metadata into the Zilliz Cloud storage.\n",
"\n",
" **Arguments:**\n",
" - `doc_url`: A document url.\n",
" - `metadata`: A key-value dictionary of metadata will be inserted as preserved fields required by ingestion pipeline. Defaults to None.\n",
"\n",
"The following example works with a document ingestion pipeline, which requires milvus version as metadata. We will use an [example document](https://publicdataset.zillizcloud.com/milvus_doc.md) describing how to delete entities in Milvus v2.3.x. "
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'token_usage': 1247, 'doc_name': 'milvus_doc.md', 'num_chunks': 6}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.add_doc_url(\n",
" doc_url=\"https://publicdataset.zillizcloud.com/milvus_doc.md\",\n",
" metadata={\"version\": \"v2.3.x\"},\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get relevant documents\n",
"\n",
"To query the retriever, you can use the method `get_relevant_documents`, which returns a list of LangChain Document objects.\n",
"\n",
"**Arguments:**\n",
"- `query`: String to find relevant documents for.\n",
"- `top_k`: The number of results. Defaults to 10.\n",
"- `offset`: The number of records to skip in the search result. Defaults to 0.\n",
"- `output_fields`: The extra fields to present in output.\n",
"- `filter`: The Milvus expression to filter search results. Defaults to \"\".\n",
"- `run_manager`: The callbacks handler to use."
]
},
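{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of the optional arguments (the values below are illustrative placeholders, and the `filter` expression assumes `version` was configured as a preserved field in your ingestion pipeline):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# retriever.get_relevant_documents(\n",
"#     \"Can users delete entities by complex boolean expressions?\",\n",
"#     top_k=3,  # return at most 3 results\n",
"#     offset=0,  # do not skip any records in the search result\n",
"#     filter='version == \"v2.3.x\"',  # hypothetical Milvus filter expression\n",
"# )"
]
},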
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='# Delete Entities\\nThis topic describes how to delete entities in Milvus. \\nMilvus supports deleting entities by primary key or complex boolean expressions. Deleting entities by primary key is much faster and lighter than deleting them by complex boolean expressions. This is because Milvus executes queries first when deleting data by complex boolean expressions. \\nDeleted entities can still be retrieved immediately after the deletion if the consistency level is set lower than Strong.\\nEntities deleted beyond the pre-specified span of time for Time Travel cannot be retrieved again.\\nFrequent deletion operations will impact the system performance. \\nBefore deleting entities by comlpex boolean expressions, make sure the collection has been loaded.\\nDeleting entities by complex boolean expressions is not an atomic operation. Therefore, if it fails halfway through, some data may still be deleted.\\nDeleting entities by complex boolean expressions is supported only when the consistency is set to Bounded. For details, see Consistency.\\\\\\n\\\\\\n# Delete Entities\\n## Prepare boolean expression\\nPrepare the boolean expression that filters the entities to delete. \\nMilvus supports deleting entities by primary key or complex boolean expressions. For more information on expression rules and supported operators, see Boolean Expression Rules.', metadata={'id': 448986959321277978, 'distance': 0.7871403694152832}),\n",
" Document(page_content='# Delete Entities\\n## Prepare boolean expression\\n### Simple boolean expression\\nUse a simple expression to filter data with primary key values of 0 and 1: \\n```python\\nexpr = \"book_id in [0,1]\"\\n```\\\\\\n\\\\\\n# Delete Entities\\n## Prepare boolean expression\\n### Complex boolean expression\\nTo filter entities that meet specific conditions, define complex boolean expressions. \\nFilter entities whose word_count is greater than or equal to 11000: \\n```python\\nexpr = \"word_count >= 11000\"\\n``` \\nFilter entities whose book_name is not Unknown: \\n```python\\nexpr = \"book_name != Unknown\"\\n``` \\nFilter entities whose primary key values are greater than 5 and word_count is smaller than or equal to 9999: \\n```python\\nexpr = \"book_id > 5 && word_count <= 9999\"\\n```', metadata={'id': 448986959321277979, 'distance': 0.7775762677192688}),\n",
" Document(page_content='# Delete Entities\\n## Delete entities\\nDelete the entities with the boolean expression you created. Milvus returns the ID list of the deleted entities.\\n```python\\nfrom pymilvus import Collection\\ncollection = Collection(\"book\") # Get an existing collection.\\ncollection.delete(expr)\\n``` \\nParameter\\tDescription\\nexpr\\tBoolean expression that specifies the entities to delete.\\npartition_name (optional)\\tName of the partition to delete entities from.\\\\\\n\\\\\\n# Upsert Entities\\nThis topic describes how to upsert entities in Milvus. \\nUpserting is a combination of insert and delete operations. In the context of a Milvus vector database, an upsert is a data-level operation that will overwrite an existing entity if a specified field already exists in a collection, and insert a new entity if the specified value doesnt already exist. \\nThe following example upserts 3,000 rows of randomly generated data as the example data. When performing upsert operations, it\\'s important to note that the operation may compromise performance. This is because the operation involves deleting data during execution.', metadata={'id': 448986959321277980, 'distance': 0.680284857749939}),\n",
" Document(page_content='# Upsert Entities\\n## Flush data\\nWhen data is upserted into Milvus it is updated and inserted into segments. Segments have to reach a certain size to be sealed and indexed. Unsealed segments will be searched brute force. In order to avoid this with any remainder data, it is best to call flush(). The flush() call will seal any remaining segments and send them for indexing. It is important to only call this method at the end of an upsert session. Calling it too often will cause fragmented data that will need to be cleaned later on.\\\\\\n\\\\\\n# Upsert Entities\\n## Limits\\nUpdating primary key fields is not supported by upsert().\\nupsert() is not applicable and an error can occur if autoID is set to True for primary key fields.', metadata={'id': 448986959321277983, 'distance': 0.5672488212585449}),\n",
" Document(page_content='# Upsert Entities\\n## Prepare data\\nFirst, prepare the data to upsert. The type of data to upsert must match the schema of the collection, otherwise Milvus will raise an exception. \\nMilvus supports default values for scalar fields, excluding a primary key field. This indicates that some fields can be left empty during data inserts or upserts. For more information, refer to Create a Collection. \\n```python\\n# Generate data to upsert\\n\\nimport random\\nnb = 3000\\ndim = 8\\nvectors = [[random.random() for _ in range(dim)] for _ in range(nb)]\\ndata = [\\n[i for i in range(nb)],\\n[str(i) for i in range(nb)],\\n[i for i in range(10000, 10000+nb)],\\nvectors,\\n[str(\"dy\"*i) for i in range(nb)]\\n]\\n```', metadata={'id': 448986959321277981, 'distance': 0.5107149481773376}),\n",
" Document(page_content='# Upsert Entities\\n## Upsert data\\nUpsert the data to the collection. \\n```python\\nfrom pymilvus import Collection\\ncollection = Collection(\"book\") # Get an existing collection.\\nmr = collection.upsert(data)\\n``` \\nParameter\\tDescription\\ndata\\tData to upsert into Milvus.\\npartition_name (optional)\\tName of the partition to upsert data into.\\ntimeout (optional)\\tAn optional duration of time in seconds to allow for the RPC. If it is set to None, the client keeps waiting until the server responds or error occurs.\\nAfter upserting entities into a collection that has previously been indexed, you do not need to re-index the collection, as Milvus will automatically create an index for the newly upserted data. For more information, refer to Can indexes be created after inserting vectors?', metadata={'id': 448986959321277982, 'distance': 0.4341375529766083})]"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.get_relevant_documents(\n",
" \"Can users delete entities by complex boolean expressions?\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "develop",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.18"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,228 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Cassandra\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Cassandra\n",
"\n",
"[Cassandra](https://cassandra.apache.org/) is a NoSQL, row-oriented, highly scalable and highly available database.\n",
"\n",
"`CassandraByteStore` needs the `cassio` package to be installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet cassio"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Store takes the following parameters:\n",
"\n",
"* table: The table in which to store the data.\n",
"* session: (Optional) The cassandra driver session. If not provided, the cassio resolved session will be used.\n",
"* keyspace: (Optional) The keyspace of the table. If not provided, the cassio resolved keyspace will be used.\n",
"* setup_mode: (Optional) The mode used to create the Cassandra table (SYNC, ASYNC or OFF). Defaults to SYNC."
]
},
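{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make these parameters concrete, here is a minimal sketch (it assumes a reachable Cassandra instance, an existing keyspace named `my_keyspace`, and the `SetupMode` enum from `langchain_community.utilities.cassandra`):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from cassandra.cluster import Cluster\n",
"from langchain_community.storage import CassandraByteStore\n",
"from langchain_community.utilities.cassandra import SetupMode\n",
"\n",
"session = Cluster().connect()  # an explicit cassandra driver session\n",
"\n",
"store = CassandraByteStore(\n",
"    table=\"my_store\",\n",
"    session=session,  # optional: falls back to the cassio-resolved session\n",
"    keyspace=\"my_keyspace\",  # optional: falls back to the cassio-resolved keyspace\n",
"    setup_mode=SetupMode.SYNC,  # or SetupMode.ASYNC / SetupMode.OFF\n",
")"
]
},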
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## CassandraByteStore\n",
"\n",
"The `CassandraByteStore` is an implementation of `ByteStore` that stores the data in your Cassandra instance.\n",
"The store keys must be strings and will be mapped to the `row_id` column of the Cassandra table.\n",
"The store `bytes` values are mapped to the `body_blob` column of the Cassandra table."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.storage import CassandraByteStore"
]
},
{
"cell_type": "markdown",
"source": [
"### Init from a cassandra driver Session\n",
"\n",
"You need to create a `cassandra.cluster.Session` object, as described in the [Cassandra driver documentation](https://docs.datastax.com/en/developer/python-driver/latest/api/cassandra/cluster/#module-cassandra.cluster). The details vary (e.g. with network settings and authentication), but this might be something like:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from cassandra.cluster import Cluster\n",
"\n",
"cluster = Cluster()\n",
"session = cluster.connect()"
]
},
{
"cell_type": "markdown",
"source": [
"You need to provide the name of an existing keyspace of the Cassandra instance:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"CASSANDRA_KEYSPACE = input(\"CASSANDRA_KEYSPACE = \")"
]
},
{
"cell_type": "markdown",
"source": [
"Creating the store:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[b'v1', b'v2']\n"
]
}
],
"source": [
"store = CassandraByteStore(\n",
" table=\"my_store\",\n",
" session=session,\n",
" keyspace=CASSANDRA_KEYSPACE,\n",
")\n",
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
]
},
{
"cell_type": "markdown",
"source": [
"### Init from cassio\n",
"\n",
"It's also possible to use cassio to configure the session and keyspace."
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"import cassio\n",
"\n",
"cassio.init(contact_points=\"127.0.0.1\", keyspace=CASSANDRA_KEYSPACE)\n",
"\n",
"store = CassandraByteStore(\n",
" table=\"my_store\",\n",
")\n",
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"### Usage with CacheBackedEmbeddings\n",
"\n",
"You may use the `CassandraByteStore` in conjunction with a [`CacheBackedEmbeddings`](/docs/how_to/caching_embeddings) to cache the result of embeddings computations.\n"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"from langchain.embeddings import CacheBackedEmbeddings\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"cassio.init(contact_points=\"127.0.0.1\", keyspace=CASSANDRA_KEYSPACE)\n",
"\n",
"store = CassandraByteStore(\n",
" table=\"my_store\",\n",
")\n",
"\n",
"embeddings = CacheBackedEmbeddings.from_bytes_store(\n",
" underlying_embeddings=OpenAIEmbeddings(), document_embedding_cache=store\n",
")"
],
"metadata": {
"collapsed": false
}
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -58,7 +58,7 @@
"outputs": [],
"source": [
"document_text = [\"This is a test doc1.\", \"This is a test doc2.\"]\n",
"document_result = embeddings.embed_documents([document_text])"
"document_result = embeddings.embed_documents(document_text)"
]
}
],

View File

@@ -0,0 +1,101 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Local BGE Embeddings with IPEX-LLM on Intel CPU\n",
"\n",
"> [IPEX-LLM](https://github.com/intel-analytics/ipex-llm) is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max) with very low latency.\n",
"\n",
"This example goes over how to use LangChain to conduct embedding tasks with `ipex-llm` optimizations on Intel CPU. This would be helpful in applications such as RAG, document QA, etc.\n",
"\n",
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Install IPEX-LLM for optimizations on Intel CPU, as well as `sentence-transformers`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu\n",
"%pip install sentence-transformers"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Note**\n",
">\n",
"> For Windows users, the `--extra-index-url https://download.pytorch.org/whl/cpu` option is not required when installing `ipex-llm`.\n",
"\n",
"## Basic Usage"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings import IpexLLMBgeEmbeddings\n",
"\n",
"embedding_model = IpexLLMBgeEmbeddings(\n",
" model_name=\"BAAI/bge-large-en-v1.5\",\n",
" model_kwargs={},\n",
" encode_kwargs={\"normalize_embeddings\": True},\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"API Reference\n",
"- [IpexLLMBgeEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.ipex_llm.IpexLLMBgeEmbeddings.html)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sentence = \"IPEX-LLM is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max) with very low latency.\"\n",
"query = \"What is IPEX-LLM?\"\n",
"\n",
"text_embeddings = embedding_model.embed_documents([sentence, query])\n",
"print(f\"text_embeddings[0][:10]: {text_embeddings[0][:10]}\")\n",
"print(f\"text_embeddings[1][:10]: {text_embeddings[1][:10]}\")\n",
"\n",
"query_embedding = embedding_model.embed_query(query)\n",
"print(f\"query_embedding[:10]: {query_embedding[:10]}\")"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,164 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Local BGE Embeddings with IPEX-LLM on Intel GPU\n",
"\n",
"> [IPEX-LLM](https://github.com/intel-analytics/ipex-llm) is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max) with very low latency.\n",
"\n",
"This example goes over how to use LangChain to conduct embedding tasks with `ipex-llm` optimizations on Intel GPU. This would be helpful in applications such as RAG, document QA, etc.\n",
"\n",
"> **Note**\n",
">\n",
"> It is recommended that only Windows users with an Intel Arc A-Series GPU (except for the Intel Arc A300-Series or Pro A60) run this Jupyter notebook directly. For other cases (e.g. Linux users, Intel iGPU, etc.), it is recommended to run the code with Python scripts in a terminal for the best experience.\n",
"\n",
"## Install Prerequisites\n",
"To benefit from IPEX-LLM on Intel GPUs, there are several prerequisite steps for tools installation and environment preparation.\n",
"\n",
"If you are a Windows user, visit the [Install IPEX-LLM on Windows with Intel GPU Guide](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_windows_gpu.html), and follow [Install Prerequisites](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_windows_gpu.html#install-prerequisites) to update GPU driver (optional) and install Conda.\n",
"\n",
"If you are a Linux user, visit the [Install IPEX-LLM on Linux with Intel GPU](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_linux_gpu.html), and follow [**Install Prerequisites**](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_linux_gpu.html#install-prerequisites) to install GPU driver, Intel® oneAPI Base Toolkit 2024.0, and Conda.\n",
"\n",
"## Setup\n",
"\n",
"After the prerequisites installation, you should have created a conda environment with all prerequisites installed. **Start the jupyter service in this conda environment**:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Install IPEX-LLM for optimizations on Intel GPU, as well as `sentence-transformers`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/\n",
"%pip install sentence-transformers"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Note**\n",
">\n",
"> You can also use `https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/` as the extra-index-url.\n",
"\n",
"## Runtime Configuration\n",
"\n",
"For optimal performance, it is recommended to set several environment variables based on your device:\n",
"\n",
"### For Windows Users with Intel Core Ultra integrated GPU"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"SYCL_CACHE_PERSISTENT\"] = \"1\"\n",
"os.environ[\"BIGDL_LLM_XMX_DISABLED\"] = \"1\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### For Windows Users with Intel Arc A-Series GPU"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"SYCL_CACHE_PERSISTENT\"] = \"1\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Note**\n",
">\n",
"> The first time a model runs on an Intel iGPU, Intel Arc A300-Series, or Pro A60, it may take several minutes to compile.\n",
">\n",
"> For other GPU types, please refer to [here](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#runtime-configuration) for Windows users, and [here](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#id5) for Linux users.\n",
"\n",
"\n",
"## Basic Usage\n",
"\n",
"Setting `device` to `\"xpu\"` in `model_kwargs` when initializing `IpexLLMBgeEmbeddings` will put the embedding model on Intel GPU and benefit from IPEX-LLM optimizations:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings import IpexLLMBgeEmbeddings\n",
"\n",
"embedding_model = IpexLLMBgeEmbeddings(\n",
" model_name=\"BAAI/bge-large-en-v1.5\",\n",
" model_kwargs={\"device\": \"xpu\"},\n",
" encode_kwargs={\"normalize_embeddings\": True},\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"API Reference\n",
"- [IpexLLMBgeEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.ipex_llm.IpexLLMBgeEmbeddings.html)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sentence = \"IPEX-LLM is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max) with very low latency.\"\n",
"query = \"What is IPEX-LLM?\"\n",
"\n",
"text_embeddings = embedding_model.embed_documents([sentence, query])\n",
"print(f\"text_embeddings[0][:10]: {text_embeddings[0][:10]}\")\n",
"print(f\"text_embeddings[1][:10]: {text_embeddings[1][:10]}\")\n",
"\n",
"query_embedding = embedding_model.embed_query(query)\n",
"print(f\"query_embedding[:10]: {query_embedding[:10]}\")"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -6,17 +6,24 @@
"id": "GDDVue_1cq6d"
},
"source": [
"# NVIDIA AI Foundation Endpoints \n",
"# NVIDIA NIMs \n",
"\n",
"> [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to NVIDIA hosted API endpoints for NVIDIA AI Foundation Models like Mixtral 8x7B, Llama 2, Stable Diffusion, etc. These models, hosted on the [NVIDIA API catalog](https://build.nvidia.com/), are optimized, tested, and hosted on the NVIDIA AI platform, making them fast and easy to evaluate, further customize, and seamlessly run at peak performance on any accelerated stack.\n",
"> \n",
"> With [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/), you can get quick results from a fully accelerated stack running on [NVIDIA DGX Cloud](https://www.nvidia.com/en-us/data-center/dgx-cloud/). Once customized, these models can be deployed anywhere with enterprise-grade security, stability, and support using [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/).\n",
"> \n",
"> These models can be easily accessed via the [`langchain-nvidia-ai-endpoints`](https://pypi.org/project/langchain-nvidia-ai-endpoints/) package, as shown below.\n",
"The `langchain-nvidia-ai-endpoints` package contains LangChain integrations building applications with models on \n",
"NVIDIA NIM inference microservice. NIM supports models across domains like chat, embedding, and re-ranking models \n",
"from the community as well as NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA \n",
"accelerated infrastructure and deployed as a NIM, an easy-to-use, prebuilt containers that deploy anywhere using a single \n",
"command on NVIDIA accelerated infrastructure.\n",
"\n",
"NVIDIA hosted deployments of NIMs are available to test on the [NVIDIA API catalog](https://build.nvidia.com/). After testing, \n",
"NIMs can be exported from NVIDIAs API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud, \n",
"giving enterprises ownership and full control of their IP and AI application.\n",
"\n",
"NIMs are packaged as container images on a per model basis and are distributed as NGC container images through the NVIDIA NGC Catalog. \n",
"At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.\n",
"\n",
"This example goes over how to use LangChain to interact with the supported [NVIDIA Retrieval QA Embedding Model](https://build.nvidia.com/nvidia/embed-qa-4) for [retrieval-augmented generation](https://developer.nvidia.com/blog/build-enterprise-retrieval-augmented-generation-apps-with-nvidia-retrieval-qa-embedding-model/) via the `NVIDIAEmbeddings` class.\n",
"\n",
"For more information on accessing the chat models through this api, check out the [ChatNVIDIA](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/) documentation."
"For more information on accessing the chat models through this API, check out the [ChatNVIDIA](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/) documentation."
]
},
{
@@ -45,9 +52,9 @@
"\n",
"**To get started:**\n",
"\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.\n",
"\n",
"2. Select the `Retrieval` tab, then select your model of choice\n",
"2. Select the `Retrieval` tab, then select your model of choice.\n",
"\n",
"3. Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.\n",
"\n",
@@ -84,16 +91,16 @@
"id": "l185et2kc8pS"
},
"source": [
"We should be able to see an embedding model among that list which can be used in conjunction with an LLM for effective RAG solutions. We can interface with this model pretty easily with the help of the `NVIDIAEmbeddings` model."
"We should be able to see an embedding model among that list which can be used in conjunction with an LLM for effective RAG solutions. We can interface with this model as well as other embedding models supported by NIM through the `NVIDIAEmbeddings` class."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"## Working with NIMs on the NVIDIA API Catalog\n",
"\n",
"When initializing an embedding model you can select a model by passing it, e.g. `ai-embed-qa-4` below, or use the default by not passing any arguments."
"When initializing an embedding model you can select a model by passing it, e.g. `NV-Embed-QA` below, or use the default by not passing any arguments."
]
},
{
@@ -106,7 +113,7 @@
"source": [
"from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings\n",
"\n",
"embedder = NVIDIAEmbeddings(model=\"ai-embed-qa-4\")"
"embedder = NVIDIAEmbeddings(model=\"NV-Embed-QA\")"
]
},
{
@@ -121,7 +128,29 @@
"\n",
"- `embed_documents`: Generate passage embeddings for a list of documents which you would like to search over.\n",
"\n",
"- `aembed_quey`/`embed_documents`: Asynchronous versions of the above."
"- `aembed_query`/`aembed_documents`: Asynchronous versions of the above."
]
},
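{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of these methods (assuming `embedder` was initialized as above and your NVIDIA API key is set; the texts are placeholders):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Embed a single search query\n",
"query_embedding = embedder.embed_query(\"What is the capital of France?\")\n",
"\n",
"# Embed a batch of passages to search over\n",
"passage_embeddings = embedder.embed_documents(\n",
"    [\"Paris is the capital of France.\", \"Berlin is the capital of Germany.\"]\n",
")\n",
"\n",
"# Embedding dimension and number of passage embeddings\n",
"print(len(query_embedding), len(passage_embeddings))"
]
},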
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Working with self-hosted NVIDIA NIMs\n",
"When ready to deploy, you can self-host models with NVIDIA NIM—which is included with the NVIDIA AI Enterprise software license—and run them anywhere, giving you ownership of your customizations and full control of your intellectual property (IP) and AI applications.\n",
"\n",
"[Learn more about NIMs](https://developer.nvidia.com/blog/nvidia-nim-offers-optimized-inference-microservices-for-deploying-ai-models-at-scale/)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings\n",
"\n",
"# connect to an embedding NIM running at localhost:8080\n",
"embedder = NVIDIAEmbeddings(base_url=\"http://localhost:8080/v1\")"
]
},
{
@@ -382,7 +411,7 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain faiss-cpu tiktoken\n",
"%pip install --upgrade --quiet langchain faiss-cpu tiktoken langchain_community\n",
"\n",
"from operator import itemgetter\n",
"\n",
@@ -408,7 +437,7 @@
"source": [
"vectorstore = FAISS.from_texts(\n",
" [\"harrison worked at kensho\"],\n",
" embedding=NVIDIAEmbeddings(model=\"ai-embed-qa-4\"),\n",
" embedding=NVIDIAEmbeddings(model=\"NV-Embed-QA\"),\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
@@ -478,9 +507,9 @@
"provenance": []
},
"kernelspec": {
"display_name": "Python (venvoss)",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "venvoss"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -492,7 +521,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
"version": "3.10.13"
}
},
"nbformat": 4,

View File

@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Oracle Cloud Infrastructure Generative AI"
"# Oracle Cloud Infrastructure Generative AI"
]
},
{

View File

@@ -7,7 +7,27 @@
"source": [
"# Ollama\n",
"\n",
"Let's load the Ollama Embeddings class."
"\"Ollama supports embedding models, making it possible to build retrieval augmented generation (RAG) applications that combine text prompts with existing documents or other data.\" Learn more about the introduction to [Ollama Embeddings](https://ollama.com/blog/embedding-models) in the blog post.\n",
"\n",
"To use Ollama Embeddings, first, install [LangChain Community](https://pypi.org/project/langchain-community/) package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "854d6a2e",
"metadata": {},
"outputs": [],
"source": [
"!pip install langchain-community"
]
},
{
"cell_type": "markdown",
"id": "54fbb4cd",
"metadata": {},
"source": [
"Load the Ollama Embeddings class:"
]
},
{
@@ -17,26 +37,12 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings import OllamaEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "2c66e5da",
"metadata": {},
"outputs": [],
"source": [
"embeddings = OllamaEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "01370375",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings import OllamaEmbeddings\n",
"\n",
"embeddings = (\n",
" OllamaEmbeddings()\n",
") # by default, uses llama2. Run `ollama pull llama2` to pull down the model\n",
"\n",
"text = \"This is a test document.\""
]
},
@@ -105,7 +111,13 @@
"id": "bb61bbeb",
"metadata": {},
"source": [
"Let's load the Ollama Embeddings class with smaller model (e.g. llama:7b). Note: See other supported models [https://ollama.ai/library](https://ollama.ai/library)"
"### Embedding Models\n",
"\n",
"Ollama has embedding models, that are lightweight enough for use in embeddings, with the smallest about the size of 25Mb. See some of the available [embedding models from Ollama](https://ollama.com/blog/embedding-models).\n",
"\n",
"Let's load the Ollama Embeddings class with smaller model (e.g. `mxbai-embed-large`). \n",
"\n",
"> Note: See other supported models [https://ollama.ai/library](https://ollama.ai/library)"
]
},
{
@@ -115,26 +127,8 @@
"metadata": {},
"outputs": [],
"source": [
"embeddings = OllamaEmbeddings(model=\"llama2:7b\")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "14aefb64",
"metadata": {},
"outputs": [],
"source": [
"text = \"This is a test document.\""
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "3c39ed33",
"metadata": {},
"outputs": [],
"source": [
"embeddings = OllamaEmbeddings(model=\"mxbai-embed-large\")\n",
"text = \"This is a test document.\"\n",
"query_result = embeddings.embed_query(text)"
]
},

View File

@@ -1,5 +1,19 @@
{
"cells": [
{
"cell_type": "raw",
"id": "ae8077b8",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"keywords: [openaiembeddings]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "278b6c63",

View File

@@ -193,13 +193,6 @@
"Oracle AI Vector Search provides multiple methods for generating embeddings, utilizing either locally hosted ONNX models or third-party APIs. For comprehensive instructions on configuring these alternatives, please refer to the [Oracle AI Vector Search Guide](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_vector_chain1.html#GUID-C6439E94-4E86-4ECD-954E-4B73D53579DE)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"***Note:*** Currently, OracleEmbeddings processes each embedding generation request individually, without batching, by calling REST endpoints separately for each request. This method could potentially lead to exceeding the maximum request per minute quota set by some providers. However, we are actively working to enhance this process by implementing request batching, which will allow multiple embedding requests to be combined into fewer API calls, thereby optimizing our use of provider resources and adhering to their request limits. This update is expected to be rolled out soon, eliminating the current limitation."
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -6,25 +6,26 @@
"source": [
"# PremAI\n",
"\n",
">[PremAI](https://app.premai.io) is an unified platform that let's you build powerful production-ready GenAI powered applications with least effort, so that you can focus more on user experience and overall growth. In this section we are going to dicuss how we can get access to different embedding model using `PremAIEmbeddings`\n",
"[PremAI](https://premai.io/) is an all-in-one platform that simplifies the creation of robust, production-ready applications powered by Generative AI. By streamlining the development process, PremAI allows you to concentrate on enhancing user experience and driving overall growth for your application. You can quickly start using our platform [here](https://docs.premai.io/quick-start).\n",
"\n",
"## Installation and Setup\n",
"### Installation and setup\n",
"\n",
"We start by installing langchain and premai-sdk. You can type the following command to install:\n",
"We start by installing `langchain` and `premai-sdk`. You can type the following command to install:\n",
"\n",
"```bash\n",
"pip install premai langchain\n",
"```\n",
"\n",
"Before proceeding further, please make sure that you have made an account on Prem and already started a project. If not, then here's how you can start for free:\n",
"Before proceeding further, please make sure that you have made an account on PremAI and already created a project. If not, please refer to the [quick start](https://docs.premai.io/introduction) guide to get started with the PremAI platform. Create your first project and grab your API key."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## PremEmbeddings\n",
"\n",
"1. Sign in to [PremAI](https://app.premai.io/accounts/login/), if you are coming for the first time and create your API key [here](https://app.premai.io/api_keys/).\n",
"\n",
"2. Go to [app.premai.io](https://app.premai.io) and this will take you to the project's dashboard. \n",
"\n",
"3. Create a project and this will generate a project-id (written as ID). This ID will help you to interact with your deployed application. \n",
"\n",
"Congratulations on creating your first deployed application on Prem 🎉 Now we can use langchain to interact with our application. "
"In this section we are going to dicuss how we can get access to different embedding model using `PremEmbeddings` with LangChain. Lets start by importing our modules and setting our API Key. "
]
},
{
@@ -42,7 +43,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Once we imported our required modules, let's setup our client. For now let's assume that our `project_id` is 8. But make sure you use your project-id, otherwise it will throw error.\n"
"Once we imported our required modules, let's setup our client. For now let's assume that our `project_id` is `8`. But make sure you use your project-id, otherwise it will throw error.\n",
"\n",
"> Note: Setting `model_name` argument in mandatory for PremAIEmbeddings unlike [ChatPremAI.](https://python.langchain.com/v0.1/docs/integrations/chat/premai/)"
]
},
{
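As a concrete illustration of the setup the cell above describes, here is a minimal sketch. It assumes the API key is supplied via the `PREMAI_API_KEY` environment variable; whether the model argument is spelled `model` or `model_name` may differ by `langchain_community` version, so verify against the installed package.

```python
import getpass
import os

from langchain_community.embeddings import PremAIEmbeddings

# Prompt for the key only if it is not already set in the environment.
if os.environ.get("PREMAI_API_KEY") is None:
    os.environ["PREMAI_API_KEY"] = getpass.getpass("PremAI API Key:")

# project_id=8 is the placeholder from the text above; use your own project id.
# The model argument name (`model` here) is an assumption; see the note above.
embedder = PremAIEmbeddings(project_id=8, model="text-embedding-3-large")
```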
@@ -72,20 +75,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We have defined our embedding model. We support a lot of embedding models. Here is a table that shows the number of embedding models we support. \n",
"\n",
"\n",
"| Provider | Slug | Context Tokens |\n",
"|-------------|------------------------------------------|----------------|\n",
"| cohere | embed-english-v3.0 | N/A |\n",
"| openai | text-embedding-3-small | 8191 |\n",
"| openai | text-embedding-3-large | 8191 |\n",
"| openai | text-embedding-ada-002 | 8191 |\n",
"| replicate | replicate/all-mpnet-base-v2 | N/A |\n",
"| together | togethercomputer/Llama-2-7B-32K-Instruct | N/A |\n",
"| mistralai | mistral-embed | 4096 |\n",
"\n",
"To change the model, you simply need to copy the `slug` and access your embedding model. Now let's start using our embedding model with a single query followed by multiple queries (which is also called as a document)"
"We support lots of state of the art embedding models. You can view our list of supported LLMs and embedding models [here](https://docs.premai.io/get-started/supported-models). For now let's go for `text-embedding-3-large` model for this example."
]
},
{
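Once the client is configured, usage follows the standard LangChain `Embeddings` interface: `embed_query` for a single string and `embed_documents` for a list of texts. A short sketch, reusing the `embedder` assumed in the setup sketch above:

```python
# Single query -> one embedding vector.
query = "Hello, this is a test query"
query_result = embedder.embed_query(query)
print(len(query_result))  # vector dimensionality

# Multiple texts (a "document") -> one vector per text.
documents = ["This is document 1", "This is document 2", "This is document 3"]
doc_results = embedder.embed_documents(documents)
print(len(doc_results), len(doc_results[0]))
```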

View File

@@ -5,7 +5,7 @@
"id": "0f1199c1-f885-4290-b5e7-d1defd49abe1",
"metadata": {},
"source": [
"# Soalr\n",
"# Solar\n",
"\n",
"[Solar](https://console.upstage.ai/services/embedding) offers an embeddings service.\n",
"\n",

View File

@@ -27,7 +27,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-cloud-text-to-speech"
"%pip install --upgrade --quiet google-cloud-text-to-speech langchain-community"
]
},
{

View File

@@ -30,7 +30,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-api-python-client google-auth-httplib2 google-auth-oauthlib"
"%pip install --upgrade --quiet google-api-python-client google-auth-httplib2 google-auth-oauthlib langchain-community"
]
},
{

View File

@@ -32,7 +32,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-search-results"
"%pip install --upgrade --quiet google-search-results langchain-community"
]
},
{

View File

@@ -59,7 +59,7 @@
}
],
"source": [
"%pip install --upgrade --quiet google-search-results"
"%pip install --upgrade --quiet google-search-results langchain-community"
]
},
{

View File

@@ -39,7 +39,7 @@
}
],
"source": [
"%pip install --upgrade --quiet requests"
"%pip install --upgrade --quiet requests langchain-community"
]
},
{

View File

@@ -17,7 +17,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet googlemaps"
"%pip install --upgrade --quiet googlemaps langchain-community"
]
},
{

View File

@@ -28,7 +28,7 @@
}
],
"source": [
"%pip install --upgrade --quiet google-search-results"
"%pip install --upgrade --quiet google-search-results langchain-community"
]
},
{

View File

@@ -14,6 +14,16 @@
"Then we will need to set some environment variables."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a2998f9c",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 1,

View File

@@ -10,6 +10,16 @@
"This notebook goes over how to use the `Google Serper` component to search the web. First you need to sign up for a free account at [serper.dev](https://serper.dev) and get your api key."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ac0b9ce6",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 11,

View File

@@ -40,7 +40,7 @@
}
],
"source": [
"%pip install --upgrade --quiet google-search-results"
"%pip install --upgrade --quiet google-search-results langchain_community"
]
},
{

View File

@@ -21,7 +21,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet gradio_tools"
"%pip install --upgrade --quiet gradio_tools langchain-community"
]
},
{

View File

@@ -30,6 +30,19 @@
"pip install httpx gql > /dev/null"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -22,6 +22,16 @@
"%pip install --upgrade --quiet transformers huggingface_hub > /dev/null"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e5b9279f",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 1,

View File

@@ -10,6 +10,15 @@
"when it is confused."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 1,

Some files were not shown because too many files have changed in this diff.