Compare commits

...

72 Commits

Author SHA1 Message Date
ChengZi
2443e85533 docs: fix milvus import and update template (#22306)
docs: fix milvus import problem
update milvus-rag template with milvus-lite

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
2024-05-30 08:28:55 -07:00
WU LIFU
86698b02a9 doc: fix wrong documentation on FAISS load_local function (#22310)
### Issue: #22299 

### Description
The documentation appears to be wrong: when the user actually sets the
"asynchronous" parameter to True, the call fails because the `__init__`
function of the FAISS class does not accept this parameter. In fact, most
of the class/instance methods already have both sync and async versions,
so the fix is simply to remove this parameter from the doc.
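
A minimal sketch of the corrected usage (index folder and embedding model are placeholders, not from this PR): the sync method takes no `asynchronous` flag, and async callers use the `a`-prefixed counterparts instead.

```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
db = FAISS.load_local(
    "faiss_index",  # placeholder folder with a previously saved index
    embeddings,
    allow_dangerous_deserialization=True,
)

docs = db.similarity_search("my query")          # sync call, no `asynchronous` flag
# docs = await db.asimilarity_search("my query") # async counterpart
```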


Co-authored-by: Lifu Wu <lifu@nextbillion.ai>
2024-05-30 15:15:04 +00:00
maang-h
596c062cba community[patch]: Standardize qianfan model init args name (#22322)
- **Description:**
    - Standardize Qianfan chat model initialization argument names (usage sketch below):
        - qianfan_ak (Qianfan API key) -> api_key
        - qianfan_sk (Qianfan secret key) -> secret_key
    - Delete unused variables
- **Issue:** #20085
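
A hedged sketch of the renamed arguments (assuming the old names remain as aliases; key values are placeholders):

```python
from langchain_community.chat_models import QianfanChatEndpoint

chat = QianfanChatEndpoint(
    api_key="your-qianfan-api-key",        # formerly qianfan_ak
    secret_key="your-qianfan-secret-key",  # formerly qianfan_sk
)
print(chat.invoke("hello").content)
```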
2024-05-30 11:08:32 -04:00
KhoPhi
c64b0a3095 Docs: Ollama (LLM, Chat Model & Text Embedding) (#22321)
- [x] Docs update: Ollama (usage sketch below)
  - llm/ollama
    - Switched to llama3 as the model, with reference to templating and prompting
    - Added concurrency notes to the llm/ollama docs
  - chat_models/ollama
    - Added concurrency notes to the llm/ollama docs
  - text_embedding/ollama
    - Included an example for specific embedding models from Ollama
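
A minimal sketch of the kind of example the updated pages show (model names are illustrative and assume a locally running Ollama server):

```python
from langchain_community.llms import Ollama
from langchain_community.embeddings import OllamaEmbeddings

llm = Ollama(model="llama3")
print(llm.invoke("Tell me a short joke"))

embedder = OllamaEmbeddings(model="nomic-embed-text")
vector = embedder.embed_query("LangChain and Ollama")
print(len(vector))
```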
2024-05-30 11:06:45 -04:00
Dobiichi-Origami
10b12e1c08 community: adding tool_call_id for every ToolCall (#22323)
- **Description:** This PR fixes a bug that caused multi-turn conversation
in QianfanChatEndpoint to malfunction, and adapts it to ToolCall and
ToolMessage (sketch below)
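
A hedged sketch of the multi-turn message shape this enables (tool name, arguments, and IDs are illustrative):

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

messages = [
    HumanMessage(content="What's the weather in Beijing?"),
    AIMessage(
        content="",
        tool_calls=[{"name": "get_weather", "args": {"city": "Beijing"}, "id": "call_1"}],
    ),
    # The ToolMessage carries the matching tool_call_id, so the next
    # QianfanChatEndpoint turn can pair the result with the original call.
    ToolMessage(content="Sunny, 25°C", tool_call_id="call_1"),
]
```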
2024-05-30 10:59:08 -04:00
Bagatur
569d325a59 docs: link GH org (#22308) 2024-05-30 00:17:59 -07:00
Bagatur
93049d1563 docs: make llm cache its own section (#22301) 2024-05-30 00:17:33 -07:00
Bagatur
04631439c9 docs: add v0.2 links to README (#22300) 2024-05-29 16:22:01 -07:00
ccurme
f39e1a2288 community, docs: update token usage tracking callback + how-to guides (#22145) 2024-05-29 17:00:47 -04:00
Bagatur
2bc50fb895 docs, cli[patch]: chat model template nit (#22294) 2024-05-29 20:53:58 +00:00
Bagatur
aa6c31df53 cli[patch]: Release 0.0.24 (#22293) 2024-05-29 13:37:34 -07:00
Bagatur
627a337887 docs, cli[patch]: chat model doc template (#22290)
Updates the ChatModel integration doc template and integration docstring, and
adds a langchain-cli command to easily create just the doc (for updating
existing integrations):

```bash
langchain-cli integration create-doc --name "foo-bar"
```
2024-05-29 13:34:58 -07:00
Wu Enze
f40e341a03 docs : Added integrations for memory with langchain_community (#22265)
PR title: Integration Docs enhancement

Description: Adding installation instructions for integrations requiring
the langchain-community package since 0.2
Issue: [#22005](https://github.com/langchain-ai/langchain/issues/22005)
2024-05-29 16:12:05 -04:00
ccurme
6e1df72a88 openai[patch]: Release 0.1.8 (#22291) 2024-05-29 20:08:30 +00:00
ccurme
e71b0b5827 core[patch]: Release 0.2.2 (#22289) 2024-05-29 19:51:37 +00:00
William FH
9d6cabe84a Update sequence.ipynb (#22288) 2024-05-29 19:34:44 +00:00
Daniel Glogowski
7ff05357ba docs: updating NIM documentation (#22258)
Updating NVIDIA NIM notebooks and readme file.

Thanks!
Daniel
2024-05-29 10:28:39 -07:00
Bagatur
6dd0f095c3 docs: revamp ChatOpenAI (#22253)
Can build API ref docs by running
```bash
make api_docs_clean; make api_docs_quick_preview API_PKG=openai
```
only builds openai ref, takes ~20 sec
2024-05-29 10:20:14 -07:00
Erick Friis
00c70d98c2 robocorp: release 0.0.9 (#22282) 2024-05-29 16:49:18 +00:00
Mikko Korpela
fc5909ad6f langchain-robocorp: Fix parsing of Union types (such as Optional). (#22277) 2024-05-29 09:47:02 -07:00
ccurme
af1f723ada openai: don't override stream_options default (#22242)
ChatOpenAI supports a kwarg `stream_options` which can take values
`{"include_usage": True}` and `{"include_usage": False}`.

Setting include_usage to True adds a message chunk to the end of the
stream with usage_metadata populated. In this case the final chunk no
longer includes `"finish_reason"` in the `response_metadata`. This is
the current default and is not yet released. Because this could be
disruptive to workflows, here we remove this default. The default will
now be consistent with OpenAI's API (see parameter
[here](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stream_options)).

Examples:
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()

for chunk in llm.stream("hi"):
    print(chunk)
```
```
content='' id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
content='Hello' id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
content='!' id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
content='' response_metadata={'finish_reason': 'stop'} id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
```

```python
for chunk in llm.stream("hi", stream_options={"include_usage": True}):
    print(chunk)
```
```
content='' id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='Hello' id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='!' id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='' response_metadata={'finish_reason': 'stop'} id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='' id='run-39ab349b-f954-464d-af6e-72a0927daa27' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}
```

```python
llm = ChatOpenAI().bind(stream_options={"include_usage": True})

for chunk in llm.stream("hi"):
    print(chunk)
```
```
content='' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='Hello' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='!' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='' response_metadata={'finish_reason': 'stop'} id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}
```
2024-05-29 10:30:40 -04:00
Karim Lalani
a1899439fc [experimental][llms][ollama_functions] Update OllamaFunctions to send tool_calls attribute (#21625)
Update OllamaFunctions to return `tool_calls` for AIMessages when used
for tool calling.
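
A hedged sketch of reading the new attribute (model and tool schema are illustrative, and the standard `bind_tools` interface is assumed):

```python
from langchain_experimental.llms.ollama_functions import OllamaFunctions

llm = OllamaFunctions(model="llama3", format="json")
llm_with_tools = llm.bind_tools(
    [
        {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ]
)

msg = llm_with_tools.invoke("What is the weather in Paris?")
print(msg.tool_calls)  # AIMessage.tool_calls is now populated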
2024-05-29 09:38:33 -04:00
Bagatur
d61bdeba25 core[patch]: allow access RunnableWithFallbacks.runnable attrs (#22139)
RFC, candidate fix for #13095 #22134
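
For context, a sketch of the attribute access involved (models are illustrative; the PR proposes making such attributes reachable through the wrapper rather than only via the explicit `.runnable` field):

```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

llm = ChatOpenAI(model="gpt-4o").with_fallbacks(
    [ChatAnthropic(model="claude-3-haiku-20240307")]
)

print(llm.runnable.model_name)  # explicit access to the primary runnable's attrs
print(len(llm.fallbacks))       # and to the configured fallbacks
```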
2024-05-28 13:18:09 -07:00
SteveLiao
7496fe2b16 Update parent_document_retriever.py about **kwargs (#22219)
Add **kwargs to the add_documents function.

**langchain**: add **kwargs in parent_document_retriever
 - **Add **kwargs for `add_documents` in `parent_document_retriever.py`** (usage sketch below)
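
A hedged usage sketch (store and splitter choices are illustrative); with this change, extra keyword arguments to `add_documents` are forwarded to the underlying vectorstore:

```python
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain_core.documents import Document
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

retriever = ParentDocumentRetriever(
    vectorstore=Chroma(embedding_function=OpenAIEmbeddings()),
    docstore=InMemoryStore(),
    child_splitter=RecursiveCharacterTextSplitter(chunk_size=200),
)

# Extra kwargs (beyond ids/add_to_docstore) now flow through to the vectorstore.
retriever.add_documents([Document(page_content="A parent document...")], ids=None)
```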


2024-05-28 11:35:38 -07:00
Mark Cusack
8dfa3c5f1a Update/fix docs to list Yellowbrick as a supported indexed vectorstore (#22235)
Update/fix docs to list Yellowbrick as a supported indexed vectorstore
and fix the Jupyter notebook.
2024-05-28 11:34:49 -07:00
Erick Friis
93240fac68 milvus: fix core dep (#22239) 2024-05-28 10:21:37 -07:00
Erick Friis
611faa22c7 infra: allow first releases 2 (#22237) 2024-05-28 09:53:21 -07:00
Erick Friis
26c6e4a5ef infra: allow first releases (#22236) 2024-05-28 09:39:40 -07:00
ChengZi
404d92ded0 milvus: New langchain_milvus package and new milvus features (#21077)
New features:

- New langchain_milvus partner package
- Milvus collection hybrid search retriever
- Zilliz Cloud pipeline retriever
- Milvus local guide
- rag-milvus template

---------

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
Signed-off-by: Jael Gu <mengjia.gu@zilliz.com>
Co-authored-by: Jael Gu <mengjia.gu@zilliz.com>
Co-authored-by: Jackson <jacksonxie612@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-05-28 08:24:20 -07:00
Leonid Ganeline
d7f70535ba docs: arxiv page, added cookbooks (#22215)
Issue: the `arXiv` page was missing the arXiv paper references from
`langchain/cookbook`.
PR: added the cookbook references.
Result: `Found 29 arXiv references in the 3 docs, 21 API Refs, 5
Templates, and 18 Cookbooks.` - many more references are visible now.
2024-05-27 15:47:02 -07:00
Leonid Ganeline
d6995e814b ai21[patch]: added license (#22153)
The `pyproject.toml` was missing the `license` parameter; I've added it as
`MIT`.
2024-05-27 15:14:14 -07:00
Maddy Adams
8332a36f69 infra: update langchainhub and add integration test (#22154)
**Description:** Update langchainhub integration test dependency and add
an integration test for pulling private prompt
**Dependencies:** langchainhub 0.1.16
2024-05-27 14:58:10 -07:00
Will Higgins
83d10df78d community[patch]: Update firecrawl api key name (#22183)
Change 'FIREWALL' to 'FIRECRAWL' as I believe this may have been in
error. Other docs refer to 'FIRECRAWL_API_KEY'.


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-27 21:39:29 +00:00
hmasdev
bbd7015b5d core[patch]: Add TypeError handler into get_graph of Runnable (#19856)
# Description

## Problem

`Runnable.get_graph` fails when `InputType` or `OutputType` property
raises `TypeError`.

-
003c98e5b4/libs/core/langchain_core/runnables/base.py (L250-L274)
-
003c98e5b4/libs/core/langchain_core/runnables/base.py (L394-L396)

This problem prevents getting a graph of `Runnable` objects whose
`InputType` or `OutputType` property raises `TypeError` but whose
`invoke` works well, such as `langchain.output_parsers.RegexParser`, for
which I already pointed out in #19792 that a `TypeError` occurs.

## Solution

- Add `try-except` blocks to handle `TypeError` in the code that gets
`input_node` and `output_node`.

# Issue
- #19801 

# Twitter Handle
- [hmdev3](https://twitter.com/hmdev3)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-27 21:34:34 +00:00
acho98
753353411f docs: Fix Clova embeddings example document (#22181)
- **PR title**: "Fix list handling in Clova embeddings example documentation"
- **Description:** Fixes a bug in the Clova Embeddings example documentation
where document_text was incorrectly wrapped in an additional list.
- **Rationale:** The embed_documents method expects a flat list, but the
previous example wrapped document_text in an unnecessary additional list,
causing an error. The updated example correctly passes document_text directly
to the method, ensuring it functions as intended (corrected call sketched below).
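
A sketch of the corrected call (credentials are assumed to come from environment variables; the texts are placeholders):

```python
from langchain_community.embeddings import ClovaEmbeddings

embeddings = ClovaEmbeddings()  # reads Clova credentials from env vars in practice
document_text = ["This is a test document.", "This is another document."]

vectors = embeddings.embed_documents(document_text)      # correct: pass the list itself
# vectors = embeddings.embed_documents([document_text])  # wrong: extra nesting caused the error
```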
2024-05-27 14:31:34 -07:00
Mohammad Mohtashim
577ed68b59 mistralai[patch]: Added Json Mode for ChatMistralAI (#22213)
- **Description:** Powers
[ChatMistralAI.with_structured_output](fbfed65fb1/libs/partners/mistralai/langchain_mistralai/chat_models.py (L609))
via JSON mode (sketch below)
- **Issue:** #22081
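
A hedged sketch of what this enables (assuming the standard `method="json_mode"` switch; the schema and prompt are illustrative):

```python
from langchain_core.pydantic_v1 import BaseModel
from langchain_mistralai import ChatMistralAI

class Person(BaseModel):
    name: str
    age: int

llm = ChatMistralAI(model="mistral-large-latest")
structured = llm.with_structured_output(Person, method="json_mode")

# With JSON mode, the prompt should describe the expected keys explicitly.
person = structured.invoke("Alice is 30 years old. Reply as JSON with keys `name` and `age`.")
```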

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-27 21:16:52 +00:00
Pranith
25c270b5a5 docs : Added integrations for tools with langchain_community (#22188)
PR title: Docs enhancement

Description: Adding installation instructions for integrations requiring
the langchain-community package since 0.2
Issue: https://github.com/langchain-ai/langchain/issues/22005
2024-05-27 14:06:40 -07:00
Ibrahim
cfea0e231a Update llm_chain.ipynb text (#22198)
Added the missing verb "is" and a comma to the text in the Prompt
Templates description within the Build a Simple LLM Application tutorial
for more clarity.
2024-05-27 19:57:41 +00:00
Aditya
bf81ecd3b4 docs:updated documentation for llama, falcon and gemma on Vertex AI Model garden (#22201)
- **Description:** updated documentation for llama, falcon, and gemma on
Vertex AI Model Garden
    - **Issue:** NA
    - **Dependencies:** NA
    - **Twitter handle:** NA

@lkuligin for review

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
2024-05-27 12:56:11 -07:00
Pavlo Paliychuk
342df7cf83 community[minor]: Add Zep Cloud components + docs + examples (#21671)
Thank you for contributing to LangChain!

- [x] **PR title**: community: Add Zep Cloud components + docs +
examples

- [x] **PR message**: 
We have recently released our new zep-cloud SDKs, which are compatible
with Zep Cloud (not Zep Open Source). We have also maintained Cloud
versions of our LangChain components (ChatMessageHistory, VectorStore) as
part of our SDKs. This PR's goal is to port these components to the
langchain community repo and close the gap with the existing Zep Open
Source components already present there (added
ZepCloudMemory, ZepCloudVectorStore, ZepCloudRetriever).
Also added a ZepCloudChatMessageHistory component together with an
expression language example ported from our repo (a short usage sketch
follows below). We have left the original open source components intact
on purpose so as not to introduce any breaking changes.
    - **Issue:** -
- **Dependencies:** Added optional dependency of our new cloud sdk
`zep-cloud`
    - **Twitter handle:** @paulpaliychuk51


- [x] **Add tests and docs**


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

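A minimal usage sketch of one of the new components (session id and key are placeholders; the exact constructor arguments may differ slightly in the released package):

```python
from langchain_community.chat_message_histories import ZepCloudChatMessageHistory

history = ZepCloudChatMessageHistory(session_id="my-session", api_key="zep-cloud-api-key")
history.add_user_message("Hello from LangChain!")
print(history.messages)
```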
2024-05-27 12:50:13 -07:00
Jan Soubusta
cccc8fbe2f community[patch]: DuckDB VS - expose similarity, improve performance of from_texts (#20971)
Three fixes for the DuckDB vector store (see the sketch below):
- Unify defaults in the constructor and `from_texts` (users no longer have to
specify `vector_key`).
- Include search similarity in the output metadata (fixes #20969).
- Significantly improve the performance of `from_documents`.

Dependencies: added Pandas to speed up `from_documents`.
I considered CSV and JSON options, but I expect trouble loading JSON values
that way, and both options would require storing data to disk.
In any case, the poetry file for langchain-community already contains a
dependency on Pandas.
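
A sketch of the simplified construction and the exposed similarity (embedding model is illustrative):

```python
from langchain_community.vectorstores import DuckDB
from langchain_openai import OpenAIEmbeddings

store = DuckDB.from_texts(
    ["hello world", "goodbye world"],
    embedding=OpenAIEmbeddings(),  # no need to specify vector_key anymore
)

docs = store.similarity_search("hello", k=1)
print(docs[0].metadata)  # now includes the search similarity
```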

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-05-24 15:17:52 -07:00
Surya Pratap Singh Shekhawat
42207f5bef Update agent_executor.ipynb (#22104)
fixed typos in the doc.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-05-24 22:14:41 +00:00
Erick Friis
8acadc34f5 docs: edit links, direct for notebooks (#22051) 2024-05-24 19:44:46 +00:00
Erick Friis
42ffcb2ff1 anthropic: release 0.1.14rc2, test release note gen (#22147) 2024-05-24 12:40:10 -07:00
Erick Friis
6ee8de62c0 infra: auto-generated release notes based on git log (#22141)
Generates release notes based on a `git log` command, using commit titles.

Aiming to improve this by splitting out features vs. bugfixes using
conventional commits in the coming weeks.

Will work for any monorepo package.
2024-05-24 11:43:28 -07:00
Ameya Shenoy
8ba492ed6a community[minor]: clickhouse -- ability to use secure connection (#22108)
- **Description:** this PR gives the ClickHouse client the ability to use a
secure connection to the ClickHouse server (hedged sketch below)
- **Issue:** fixes #22082
- **Dependencies:** -
- **Twitter handle:** `_codingcoffee_`
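
A hedged sketch of the intended usage (the exact flag name on `ClickhouseSettings` is assumed from this description; host, port, and embedding model are placeholders):

```python
from langchain_community.vectorstores import Clickhouse, ClickhouseSettings
from langchain_openai import OpenAIEmbeddings

settings = ClickhouseSettings(
    host="my-clickhouse-host",
    port=8443,
    secure=True,  # assumed flag enabling a TLS connection to the server
)
store = Clickhouse(embedding=OpenAIEmbeddings(), config=settings)
```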

Signed-off-by: Ameya Shenoy <shenoy.ameya@gmail.com>
Co-authored-by: Shresth Rana <shresth@grapevine.in>
2024-05-24 17:30:22 +00:00
ccurme
9a010fb761 openai: read stream_options (#21548)
OpenAI recently added a `stream_options` parameter to its chat
completions API (see [release
notes](https://platform.openai.com/docs/changelog/added-chat-completions-stream-usage)).
When this parameter is set to `{"include_usage": True}`, an extra "empty"
message is added to the end of a stream containing token usage. Here we
propagate token usage to `AIMessage.usage_metadata`.

We enable this feature by default. Streams would now include an extra
chunk at the end, **after** the chunk with
`response_metadata={'finish_reason': 'stop'}`.

New behavior:
```
[AIMessageChunk(content='', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='Hello', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='!', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='', response_metadata={'finish_reason': 'stop'}, id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde', usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17})]
```

Old behavior (accessible by passing `stream_options={"include_usage":
False}` into (a)stream:
```
[AIMessageChunk(content='', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='Hello', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='!', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='', response_metadata={'finish_reason': 'stop'}, id='run-1312b971-c5ea-4d92-9015-e6604535f339')]
```

From what I can tell this is not yet implemented in Azure, so we enable
only for ChatOpenAI.
2024-05-24 13:20:56 -04:00
Patrick Zhang
eb7c767e5b docs: update the name of the tool passio_nutrition_ai (#22116)
Updating the name of the Passio Nutrition AI tool so that it is correctly
displayed in the sidebar menu.

Currently the tool's name shows as "Quickstart" in the sidebar.
The patch fixes the name to be Passio Nutrition AI.

<img width="681" alt="image"
src="https://github.com/langchain-ai/langchain/assets/4603110/9609975e-78ea-4032-9024-10c4f838170a">
2024-05-24 17:15:16 +00:00
Leonid Ganeline
fd4ee08167 docs: integrations/platforms/microsoft update (#22100)
Added the `Azure Container Apps dynamic sessions` tool reference
2024-05-24 13:14:51 -04:00
Rahul Triptahi
1a485f59b9 community[patch]: Put authorized identities behind a feature flag in SharepointLoader (#22125)
Description: Put authorised identities behind a feature flag, load_auth.
Documentation: N/A
Unit tests: N/A

---------

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-05-24 12:42:57 -04:00
Anindyadeep
ee689412ab docs: Update PremAI Docs (#22114)
Thank you for contributing to LangChain!

- [X] **PR title**: community: Updated langchain-community PremAI
documentation

- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-05-24 11:55:32 -04:00
sasha
1c9ceff503 community: add metadata to chain logging; (#22122)
Hey, I'm Sasha, the SDK engineer from [Comet](https://comet.com).
This PR updates the CometTracer class.
Metadata has been added to CometTracer; from now on, both chains and spans
will send it.
2024-05-24 15:29:40 +00:00
Jirka Lhotka
7c0459faf2 community: Update costs of openai finetuned models (#22124)
- **Description:** Update costs of fine-tuned models and add
gpt-3.5-turbo-0125. Source: https://openai.com/api/pricing/
  - **Issue:** N/A
  - **Dependencies:** None
2024-05-24 15:25:17 +00:00
Eugene Yurtsev
d3db83abe3 community[major]: lint for usage of xml library (#22132)
* Lint for usage of the standard xml library
* Add forced opt-in for the Quip client
* The actual security issue is with the underlying QuipClient, not the
LangChain integration (since the client does the parsing), but enforcement
is added at the LangChain level.
2024-05-24 15:23:53 +00:00
Tom Aarsen
5b5ea2af30 docs: Add explanation on how to use Hugging Face embeddings (#22118)
- **Description:** I've added a tab on embedding text with LangChain
using Hugging Face models here:
https://python.langchain.com/v0.2/docs/how_to/embed_text/. HF was
mentioned in the running text, but not in the tabs, which I thought was
odd (a minimal sketch of the new tab follows below).
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** No need, this is tiny :) 

Also, I had a ton of issues with the poetry docs/lint install, so I
haven't linted this. Apologies for that.

cc @Jofthomas 

- Tom Aarsen
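
For reference, a minimal sketch of the kind of snippet the new tab shows (the package and model name here are illustrative; the docs tab may use different ones):

```python
from langchain_community.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
vector = embeddings.embed_query("LangChain supports Hugging Face embedding models.")
print(len(vector))
```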
2024-05-24 11:21:03 -04:00
Bagatur
baa3c975cb anthropic[patch]: allow tool call mutation (#22130)
If tool_use blocks and tool_calls with overlapping IDs are present,
prefer the values of the tool_calls. Allows for mutating AIMessages just
via tool_calls.
2024-05-24 08:18:14 -07:00
Christophe Bornet
c838de5027 doc: Add doc for CassandraByteStore (#22126)
Preview:
https://langchain-git-fork-cbornet-doc-cassandrabytestore-langchain.vercel.app/v0.2/docs/integrations/stores/cassandra/
2024-05-24 10:57:55 -04:00
Vadym Barda
2edb512282 docs: improve how-to docs for message history (#22072)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-23 20:12:24 -04:00
Artem
eb7c453b98 docs: update hub.pull("rlm/map-prompt") to hub.pull("rlm/reduce-prompt") for reduce prompt (#22088)
**PR message**: 
Update `hub.pull("rlm/map-prompt")` to `hub.pull("rlm/reduce-prompt")`
in summarization.ipynb

**Description:** 
Fix the typo in the prompt hub link from `reduce_prompt =
hub.pull("rlm/map-prompt")` to `reduce_prompt =
hub.pull("rlm/reduce-prompt")`; see the issue below.

**Issue:** #22014

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-23 23:07:37 +00:00
Leonid Ganeline
2416737c5f docs: compact the API Reference links (#21285)
This PR is opinionated. 
Issue: the `API Reference` sections in the examples take up too much
vertical space and force too much scrolling. See an
[example](https://python.langchain.com/docs/get_started/quickstart/#conversation-retrieval-chain).
These sections are **important**, so the compacting should not make them
less noticeable.
Change: compact the `API Reference` sections. See the [same example with
the change
applied](https://langchain-j6nya46lf-langchain.vercel.app/docs/get_started/quickstart/#conversation-retrieval-chain).
It is more compact and now reads like references (footnotes).
Note: I would also change the section style to make it more noticeable
(perhaps footnote-like: a smaller, wider font?).

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-23 15:50:23 -07:00
ccurme
0ea1e89b2c groq: read tool calls from .tool_calls attribute (#22096) 2024-05-23 18:16:06 -04:00
Bagatur
96c21dfe56 docs: hf feat table tool calling (#22091) 2024-05-23 15:09:30 -07:00
Eugene Yurtsev
63004a0945 codespell ignore remaining issues (#22097) 2024-05-23 21:51:39 +00:00
Eugene Yurtsev
2d693c484e docs: fix some spelling mistakes caught by newest version of code spell (#22090)
Going to merge this even though it doesn't pass all tests, and open a
separate PR for the remaining spelling mistakes.
2024-05-23 16:59:11 -04:00
Bagatur
38783d07c9 infra: api docs quick preview (#22093) 2024-05-23 13:57:45 -07:00
Pavel Zloi
fe26f937e4 community[minor]: ManticoreSearch engine added to vectorstore (#19117)
**Description:** ManticoreSearch engine added to vectorstores
**Issue:** no issue, just a new feature
**Dependencies:** https://pypi.org/project/manticoresearch-dev/
**Twitter handle:** @EvilFreelancer

- Example notebook with test integration:

https://github.com/EvilFreelancer/langchain/blob/manticore-search-vectorstore/docs/docs/integrations/vectorstores/manticore_search.ipynb

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-23 13:56:18 -07:00
Erick Friis
95c3e5f85f cli: model name substitution fix, release 0.0.23 (#22089) 2024-05-23 13:09:38 -07:00
Kartheek Yakkala
18b8c8628a docs : Added integrations for tools with langchain_community (#22056)
- **PR title**:  Docs enhancement

- **Description:** Adding installation instructions for integrations
requiring the `langchain-community` package since 0.2
    - **Issue:** https://github.com/langchain-ai/langchain/issues/22005
2024-05-23 15:09:34 -04:00
ccurme
152c8cac33 anthropic, openai: cut pre-releases (#22083) 2024-05-23 15:02:23 -04:00
ccurme
cd07521170 core: bump to 0.2.1rc (#22080) 2024-05-23 18:36:50 +00:00
Harrison Chase
170cc8aec3 docs: add multi-modal-docs (#21734)
We don't really have any abstractions around multi-modal, so this adds a
section explaining that we don't have any abstractions, and then how-to
guides for OpenAI and Anthropic (we probably need to add more). A sketch of
the pattern follows below.
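
A sketch of the pattern the new how-to guides cover for OpenAI (the image URL is a placeholder; the Anthropic guide uses the same content-block shape):

```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
    ]
)
response = llm.invoke([message])
print(response.content)
```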

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Tomaz Bratanic <bratanic.tomaz@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: junefish <junefish@users.noreply.github.com>
Co-authored-by: William Fu-Hinthorn <13333726+hinthornw@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-23 18:33:25 +00:00
ccurme
fbfed65fb1 core, partners: add token usage attribute to AIMessage (#21944)
```python
class UsageMetadata(TypedDict):
    """Usage metadata for a message, such as token counts.

    Attributes:
        input_tokens: (int) count of input (or prompt) tokens
        output_tokens: (int) count of output (or completion) tokens
        total_tokens: (int) total token count
    """

    input_tokens: int
    output_tokens: int
    total_tokens: int
```
```python
class AIMessage(BaseMessage):
    ...
    usage_metadata: Optional[UsageMetadata] = None
    """If provided, token usage information associated with the message."""
    ...
```
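A brief usage sketch (model name is illustrative): after this change, results from the updated partner packages expose the new attribute directly.

```python
from langchain_openai import ChatOpenAI

msg = ChatOpenAI(model="gpt-4o").invoke("hello")
print(msg.usage_metadata)
# e.g. {'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}
```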
2024-05-23 14:21:58 -04:00
223 changed files with 18018 additions and 1626 deletions

7
.github/workflows/.codespell-exclude vendored Normal file
View File

@@ -0,0 +1,7 @@
libs/community/langchain_community/llms/yuan2.py
"NotIn": "not in",
- `/checkin`: Check-in
docs/docs/integrations/providers/trulens.mdx
self.assertIn(
from trulens_eval import Tru
tru = Tru()

View File

@@ -72,10 +72,67 @@ jobs:
run: |
echo pkg-name="$(poetry version | cut -d ' ' -f 1)" >> $GITHUB_OUTPUT
echo version="$(poetry version --short)" >> $GITHUB_OUTPUT
release-notes:
needs:
- build
runs-on: ubuntu-latest
outputs:
release-body: ${{ steps.generate-release-body.outputs.release-body }}
steps:
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain
path: langchain
sparse-checkout: | # this only grabs files for relevant dir
${{ inputs.working-directory }}
ref: master # this scopes to just master branch
fetch-depth: 0 # this fetches entire commit history
- name: Check Tags
id: check-tags
shell: bash
working-directory: langchain/${{ inputs.working-directory }}
env:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
run: |
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
echo $REGEX
PREV_TAG=$(git tag --sort=-creatordate | grep -P $REGEX || true | head -1)
TAG="${PKG_NAME}==${VERSION}"
if [ "$TAG" == "$PREV_TAG" ]; then
echo "No new version to release"
exit 1
fi
echo tag="$TAG" >> $GITHUB_OUTPUT
echo prev-tag="$PREV_TAG" >> $GITHUB_OUTPUT
- name: Generate release body
id: generate-release-body
working-directory: langchain
env:
WORKING_DIR: ${{ inputs.working-directory }}
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
TAG: ${{ steps.check-tags.outputs.tag }}
PREV_TAG: ${{ steps.check-tags.outputs.prev-tag }}
run: |
PREAMBLE="Changes since $PREV_TAG"
# if PREV_TAG is empty, then we are releasing the first version
if [ -z "$PREV_TAG" ]; then
PREAMBLE="Initial release"
PREV_TAG=$(git rev-list --max-parents=0 HEAD)
fi
{
echo 'release-body<<EOF'
echo "# Release $TAG"
echo $PREAMBLE
echo
git log --format="%s" "$PREV_TAG"..HEAD -- $WORKING_DIR
echo EOF
} >> "$GITHUB_OUTPUT"
test-pypi-publish:
needs:
- build
- release-notes
uses:
./.github/workflows/_test_release.yml
with:
@@ -86,6 +143,7 @@ jobs:
pre-release-checks:
needs:
- build
- release-notes
- test-pypi-publish
runs-on: ubuntu-latest
steps:
@@ -229,6 +287,7 @@ jobs:
publish:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
runs-on: ubuntu-latest
@@ -270,6 +329,7 @@ jobs:
mark-release:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
- publish
@@ -306,6 +366,6 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
generateReleaseNotes: false
tag: ${{needs.build.outputs.pkg-name}}==${{ needs.build.outputs.version }}
body: "# Release ${{needs.build.outputs.pkg-name}}==${{ needs.build.outputs.version }}\n\nPackage-specific release note generation coming soon."
body: ${{ needs.release-notes.outputs.release-body }}
commit: ${{ github.sha }}
makeLatest: ${{ needs.build.outputs.pkg-name == 'langchain-core'}}

View File

@@ -29,9 +29,9 @@ jobs:
python .github/workflows/extract_ignored_words_list.py
id: extract_ignore_words
- name: Codespell
uses: codespell-project/actions-codespell@v2
with:
skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
exclude_file: libs/community/langchain_community/llms/yuan2.py
# - name: Codespell
# uses: codespell-project/actions-codespell@v2
# with:
# skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
# ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
# exclude_file: ./.github/workflows/codespell-exclude

1
.gitignore vendored
View File

@@ -178,3 +178,4 @@ _dist
docs/docs/templates
prof
virtualenv/

View File

@@ -32,10 +32,19 @@ api_docs_build:
poetry run python docs/api_reference/create_api_rst.py
cd docs/api_reference && poetry run make html
API_PKG ?= text-splitters
api_docs_quick_preview:
poetry run pip install "pydantic<2"
poetry run python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && poetry run make html
open docs/api_reference/_build/html/$(shell echo $(API_PKG) | sed 's/-/_/g')_api_reference.html
## api_docs_clean: Clean the API Reference documentation build artifacts.
api_docs_clean:
find ./docs/api_reference -name '*_api_reference.rst' -delete
cd docs/api_reference && poetry run make clean
git clean -fdX ./docs/api_reference
## api_docs_linkcheck: Run linkchecker on the API Reference documentation.
api_docs_linkcheck:

View File

@@ -2,17 +2,17 @@
⚡ Build context-aware reasoning applications ⚡
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/releases)
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/releases)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
[![Downloads](https://static.pepy.tech/badge/langchain-core/month)](https://pepy.tech/project/langchain-core)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
[![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-core?style=flat-square)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-core?style=flat-square)](https://pypistats.org/packages/langchain-core)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=flat-square)](https://star-history.com/#langchain-ai/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain?style=flat-square)](https://libraries.io/github/langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/issues)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=social)](https://star-history.com/#langchain-ai/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain)](https://libraries.io/github/langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/issues)
[![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
@@ -38,22 +38,22 @@ conda install langchain -c conda-forge
For these applications, LangChain simplifies the entire application lifecycle:
- **Open-source libraries**: Build your applications using LangChain's [modular building blocks](https://python.langchain.com/docs/expression_language/) and [components](https://python.langchain.com/docs/modules/). Integrate with hundreds of [third-party providers](https://python.langchain.com/docs/integrations/platforms/).
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://python.langchain.com/docs/langsmith/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn any chain into a REST API with [LangServe](https://python.langchain.com/docs/langserve).
- **Open-source libraries**: Build your applications using LangChain's [modular building blocks](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel) and [components](https://python.langchain.com/v0.2/docs/concepts/#components). Integrate with hundreds of [third-party providers](https://python.langchain.com/v0.2/docs/integrations/platforms/).
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://docs.smith.langchain.com/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn any chain into a REST API with [LangServe](https://python.langchain.com/v0.2/docs/langserve/).
### Open-source libraries
- **`langchain-core`**: Base abstractions and LangChain Expression Language.
- **`langchain-community`**: Third party integrations.
- Some integrations have been further split into **partner packages** that only rely on **`langchain-core`**. Examples include **`langchain_openai`** and **`langchain_anthropic`**.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **[`LangGraph`](https://python.langchain.com/docs/langgraph)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
- **[`LangGraph`](https://langchain-ai.github.io/langgraph/)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
### Productionization:
- **[LangSmith](https://python.langchain.com/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
- **[LangSmith](https://docs.smith.langchain.com/)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
### Deployment:
- **[LangServe](https://python.langchain.com/docs/langserve)**: A library for deploying LangChain chains as REST APIs.
- **[LangServe](https://python.langchain.com/v0.2/docs/langserve/)**: A library for deploying LangChain chains as REST APIs.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack.svg "LangChain Architecture Overview")
@@ -61,20 +61,20 @@ For these applications, LangChain simplifies the entire application lifecycle:
**❓ Question answering with RAG**
- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)
- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/rag/)
- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)
**🧱 Extracting structured output**
- [Documentation](https://python.langchain.com/docs/use_cases/extraction/)
- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/extraction/)
- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain-extract/)
**🤖 Chatbots**
- [Documentation](https://python.langchain.com/docs/use_cases/chatbots)
- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/chatbot/)
- End-to-end Example: [Web LangChain (web researcher chatbot)](https://weblangchain.vercel.app) and [repo](https://github.com/langchain-ai/weblangchain)
And much more! Head to the [Use cases](https://python.langchain.com/docs/use_cases/) section of the docs for more.
And much more! Head to the [Tutorials](https://python.langchain.com/v0.2/docs/tutorials/) section of the docs for more.
## 🚀 How does LangChain help?
The main value props of the LangChain libraries are:
@@ -87,49 +87,50 @@ Off-the-shelf chains make it easy to get started. Components make it easy to cus
LCEL is the foundation of many of LangChain's components, and is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.
- **[Overview](https://python.langchain.com/docs/expression_language/)**: LCEL and its benefits
- **[Interface](https://python.langchain.com/docs/expression_language/interface)**: The standard interface for LCEL objects
- **[Primitives](https://python.langchain.com/docs/expression_language/primitives)**: More on the primitives LCEL includes
- **[Overview](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel)**: LCEL and its benefits
- **[Interface](https://python.langchain.com/v0.2/docs/concepts/#runnable-interface)**: The standard Runnable interface for LCEL objects
- **[Primitives](https://python.langchain.com/v0.2/docs/how_to/#langchain-expression-language-lcel)**: More on the primitives LCEL includes
- **[Cheatsheet](https://python.langchain.com/v0.2/docs/how_to/lcel_cheatsheet/)**: Quick overview of the most common usage patterns
## Components
Components fall into the following **modules**:
**📃 Model I/O:**
**📃 Model I/O**
This includes [prompt management](https://python.langchain.com/docs/modules/model_io/prompts/), [prompt optimization](https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/), a generic interface for [chat models](https://python.langchain.com/docs/modules/model_io/chat/) and [LLMs](https://python.langchain.com/docs/modules/model_io/llms/), and common utilities for working with [model outputs](https://python.langchain.com/docs/modules/model_io/output_parsers/).
This includes [prompt management](https://python.langchain.com/v0.2/docs/concepts/#prompt-templates), [prompt optimization](https://python.langchain.com/v0.2/docs/concepts/#example-selectors), a generic interface for [chat models](https://python.langchain.com/v0.2/docs/concepts/#chat-models) and [LLMs](https://python.langchain.com/v0.2/docs/concepts/#llms), and common utilities for working with [model outputs](https://python.langchain.com/v0.2/docs/concepts/#output-parsers).
**📚 Retrieval:**
**📚 Retrieval**
Retrieval Augmented Generation involves [loading data](https://python.langchain.com/docs/modules/data_connection/document_loaders/) from a variety of sources, [preparing it](https://python.langchain.com/docs/modules/data_connection/document_loaders/), [then retrieving it](https://python.langchain.com/docs/modules/data_connection/retrievers/) for use in the generation step.
Retrieval Augmented Generation involves [loading data](https://python.langchain.com/v0.2/docs/concepts/#document-loaders) from a variety of sources, [preparing it](https://python.langchain.com/v0.2/docs/concepts/#text-splitters), then [searching over (a.k.a. retrieving from)](https://python.langchain.com/v0.2/docs/concepts/#retrievers) it for use in the generation step.
**🤖 Agents:**
**🤖 Agents**
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete done. LangChain provides a [standard interface for agents](https://python.langchain.com/docs/modules/agents/), a [selection of agents](https://python.langchain.com/docs/modules/agents/agent_types/) to choose from, and examples of end-to-end agents.
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete done. LangChain provides a [standard interface for agents](https://python.langchain.com/v0.2/docs/concepts/#agents) along with the [LangGraph](https://github.com/langchain-ai/langgraph) extension for building custom agents.
## 📖 Documentation
Please see [here](https://python.langchain.com) for full documentation, which includes:
- [Getting started](https://python.langchain.com/docs/get_started/introduction): installation, setting up the environment, simple examples
- [Use case](https://python.langchain.com/docs/use_cases/) walkthroughs and best practice [guides](https://python.langchain.com/docs/guides/)
- Overviews of the [interfaces](https://python.langchain.com/docs/expression_language/), [components](https://python.langchain.com/docs/modules/), and [integrations](https://python.langchain.com/docs/integrations/providers)
You can also check out the full [API Reference docs](https://api.python.langchain.com).
- [Introduction](https://python.langchain.com/v0.2/docs/introduction/): Overview of the framework and the structure of the docs.
- [Tutorials](https://python.langchain.com/docs/use_cases/): If you're looking to build something specific or are more of a hands-on learner, check out our tutorials. This is the best place to get started.
- [How-to guides](https://python.langchain.com/v0.2/docs/how_to/): Answers to “How do I….?” type questions. These guides are goal-oriented and concrete; they're meant to help you complete a specific task.
- [Conceptual guide](https://python.langchain.com/v0.2/docs/concepts/): Conceptual explanations of the key parts of the framework.
- [API Reference](https://api.python.langchain.com): Thorough documentation of every class and method.
## 🌐 Ecosystem
- [🦜🛠️ LangSmith](https://python.langchain.com/docs/langsmith/): Tracing and evaluating your language model applications and intelligent agents to help you move from prototype to production.
- [🦜🕸️ LangGraph](https://python.langchain.com/docs/langgraph): Creating stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.
- [🦜🛠️ LangSmith](https://docs.smith.langchain.com/): Tracing and evaluating your language model applications and intelligent agents to help you move from prototype to production.
- [🦜🕸️ LangGraph](https://langchain-ai.github.io/langgraph/): Creating stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.
- [🦜🏓 LangServe](https://python.langchain.com/docs/langserve): Deploying LangChain runnables and chains as REST APIs.
- [LangChain Templates](https://python.langchain.com/docs/templates/): Example applications hosted with LangServe.
- [LangChain Templates](https://python.langchain.com/v0.2/docs/templates/): Example applications hosted with LangServe.
## 💁 Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see [here](https://python.langchain.com/docs/contributing/).
For detailed information on how to contribute, see [here](https://python.langchain.com/v0.2/docs/contributing/).
## 🌟 Contributors

View File

@@ -35,8 +35,6 @@ generate-files:
mkdir -p $(INTERMEDIATE_DIR)
cp -r $(SOURCE_DIR)/* $(INTERMEDIATE_DIR)
mkdir -p $(INTERMEDIATE_DIR)/templates
cp ../templates/docs/INDEX.md $(INTERMEDIATE_DIR)/templates/index.md
cp ../cookbook/README.md $(INTERMEDIATE_DIR)/cookbook.mdx
$(PYTHON) scripts/model_feat_table.py $(INTERMEDIATE_DIR)

View File

@@ -2,32 +2,150 @@
LangChain implements the latest research in the field of Natural Language Processing.
This page contains `arXiv` papers referenced in the LangChain Documentation, API Reference,
and Templates.
Templates, and Cookbooks.
## Summary
| arXiv id / Title | Authors | Published date 🔻 | LangChain Documentation|
|------------------|---------|-------------------|------------------------|
| `2402.03620v1` [Self-Discover: Large Language Models Self-Compose Reasoning Structures](http://arxiv.org/abs/2402.03620v1) | Pei Zhou, Jay Pujara, Xiang Ren, et al. | 2024-02-06 | `Cookbook:` [self-discover](https://github.com/langchain-ai/langchain/blob/master/cookbook/self-discover.ipynb)
| `2401.18059v1` [RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval](http://arxiv.org/abs/2401.18059v1) | Parth Sarthi, Salman Abdullah, Aditi Tuli, et al. | 2024-01-31 | `Cookbook:` [RAPTOR](https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb)
| `2401.15884v2` [Corrective Retrieval Augmented Generation](http://arxiv.org/abs/2401.15884v2) | Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, et al. | 2024-01-29 | `Cookbook:` [langgraph_crag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_crag.ipynb)
| `2401.04088v1` [Mixtral of Experts](http://arxiv.org/abs/2401.04088v1) | Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, et al. | 2024-01-08 | `Cookbook:` [together_ai](https://github.com/langchain-ai/langchain/blob/master/cookbook/together_ai.ipynb)
| `2312.06648v2` [Dense X Retrieval: What Retrieval Granularity Should We Use?](http://arxiv.org/abs/2312.06648v2) | Tong Chen, Hongwei Wang, Sihao Chen, et al. | 2023-12-11 | `Template:` [propositional-retrieval](https://python.langchain.com/docs/templates/propositional-retrieval)
| `2311.09210v1` [Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models](http://arxiv.org/abs/2311.09210v1) | Wenhao Yu, Hongming Zhang, Xiaoman Pan, et al. | 2023-11-15 | `Template:` [chain-of-note-wiki](https://python.langchain.com/docs/templates/chain-of-note-wiki)
| `2310.06117v2` [Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models](http://arxiv.org/abs/2310.06117v2) | Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, et al. | 2023-10-09 | `Template:` [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting)
| `2305.14283v3` [Query Rewriting for Retrieval-Augmented Large Language Models](http://arxiv.org/abs/2305.14283v3) | Xinbei Ma, Yeyun Gong, Pengcheng He, et al. | 2023-05-23 | `Template:` [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read)
| `2305.08291v1` [Large Language Model Guided Tree-of-Thought](http://arxiv.org/abs/2305.08291v1) | Jieyi Long | 2023-05-15 | `API:` [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot)
| `2303.17580v4` [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http://arxiv.org/abs/2303.17580v4) | Yongliang Shen, Kaitao Song, Xu Tan, et al. | 2023-03-30 | `API:` [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents)
| `2310.11511v1` [Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection](http://arxiv.org/abs/2310.11511v1) | Akari Asai, Zeqiu Wu, Yizhong Wang, et al. | 2023-10-17 | `Cookbook:` [langgraph_self_rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_self_rag.ipynb)
| `2310.06117v2` [Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models](http://arxiv.org/abs/2310.06117v2) | Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, et al. | 2023-10-09 | `Template:` [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting), `Cookbook:` [stepback-qa](https://github.com/langchain-ai/langchain/blob/master/cookbook/stepback-qa.ipynb)
| `2307.09288v2` [Llama 2: Open Foundation and Fine-Tuned Chat Models](http://arxiv.org/abs/2307.09288v2) | Hugo Touvron, Louis Martin, Kevin Stone, et al. | 2023-07-18 | `Cookbook:` [Semi_Structured_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)
| `2305.14283v3` [Query Rewriting for Retrieval-Augmented Large Language Models](http://arxiv.org/abs/2305.14283v3) | Xinbei Ma, Yeyun Gong, Pengcheng He, et al. | 2023-05-23 | `Template:` [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read), `Cookbook:` [rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb)
| `2305.08291v1` [Large Language Model Guided Tree-of-Thought](http://arxiv.org/abs/2305.08291v1) | Jieyi Long | 2023-05-15 | `API:` [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot), `Cookbook:` [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
| `2305.04091v3` [Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models](http://arxiv.org/abs/2305.04091v3) | Lei Wang, Wanyu Xu, Yihuai Lan, et al. | 2023-05-06 | `Cookbook:` [plan_and_execute_agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb)
| `2304.08485v2` [Visual Instruction Tuning](http://arxiv.org/abs/2304.08485v2) | Haotian Liu, Chunyuan Li, Qingyang Wu, et al. | 2023-04-17 | `Cookbook:` [Semi_structured_and_multi_modal_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb), [Semi_structured_multi_modal_RAG_LLaMA2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb)
| `2304.03442v2` [Generative Agents: Interactive Simulacra of Human Behavior](http://arxiv.org/abs/2304.03442v2) | Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al. | 2023-04-07 | `Cookbook:` [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb), [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb)
| `2303.17760v2` [CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society](http://arxiv.org/abs/2303.17760v2) | Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al. | 2023-03-31 | `Cookbook:` [camel_role_playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb)
| `2303.17580v4` [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http://arxiv.org/abs/2303.17580v4) | Yongliang Shen, Kaitao Song, Xu Tan, et al. | 2023-03-30 | `API:` [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents), `Cookbook:` [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
| `2303.08774v6` [GPT-4 Technical Report](http://arxiv.org/abs/2303.08774v6) | OpenAI, Josh Achiam, Steven Adler, et al. | 2023-03-15 | `Docs:` [docs/integrations/vectorstores/mongodb_atlas](https://python.langchain.com/docs/integrations/vectorstores/mongodb_atlas)
| `2301.10226v4` [A Watermark for Large Language Models](http://arxiv.org/abs/2301.10226v4) | John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al. | 2023-01-24 | `API:` [langchain_community.llms...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2212.10496v1` [Precise Zero-Shot Dense Retrieval without Relevance Labels](http://arxiv.org/abs/2212.10496v1) | Luyu Gao, Xueguang Ma, Jimmy Lin, et al. | 2022-12-20 | `API:` [langchain.chains...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder), `Template:` [hyde](https://python.langchain.com/docs/templates/hyde)
| `2301.10226v4` [A Watermark for Large Language Models](http://arxiv.org/abs/2301.10226v4) | John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al. | 2023-01-24 | `API:` [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI)
| `2212.10496v1` [Precise Zero-Shot Dense Retrieval without Relevance Labels](http://arxiv.org/abs/2212.10496v1) | Luyu Gao, Xueguang Ma, Jimmy Lin, et al. | 2022-12-20 | `API:` [langchain.chains...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder), `Template:` [hyde](https://python.langchain.com/docs/templates/hyde), `Cookbook:` [hypothetical_document_embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)
| `2212.07425v3` [Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments](http://arxiv.org/abs/2212.07425v3) | Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al. | 2022-12-12 | `API:` [langchain_experimental.fallacy_removal](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.fallacy_removal)
| `2211.13892v2` [Complementary Explanations for Effective In-Context Learning](http://arxiv.org/abs/2211.13892v2) | Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, et al. | 2022-11-25 | `API:` [langchain_core.example_selectors...MaxMarginalRelevanceExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html#langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector)
| `2211.10435v2` [PAL: Program-aided Language Models](http://arxiv.org/abs/2211.10435v2) | Luyu Gao, Aman Madaan, Shuyan Zhou, et al. | 2022-11-18 | `API:` [langchain_experimental.pal_chain...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain)
| `2211.10435v2` [PAL: Program-aided Language Models](http://arxiv.org/abs/2211.10435v2) | Luyu Gao, Aman Madaan, Shuyan Zhou, et al. | 2022-11-18 | `API:` [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain), [langchain_experimental.pal_chain...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), `Cookbook:` [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
| `2209.10785v2` [Deep Lake: a Lakehouse for Deep Learning](http://arxiv.org/abs/2209.10785v2) | Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, et al. | 2022-09-22 | `Docs:` [docs/integrations/providers/activeloop_deeplake](https://python.langchain.com/docs/integrations/providers/activeloop_deeplake)
| `2205.12654v1` [Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages](http://arxiv.org/abs/2205.12654v1) | Kevin Heffernan, Onur Çelebi, Holger Schwenk | 2022-05-25 | `API:` [langchain_community.embeddings...LaserEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.laser.LaserEmbeddings.html#langchain_community.embeddings.laser.LaserEmbeddings)
| `2204.00498v1` [Evaluating the Text-to-SQL Capabilities of Large Language Models](http://arxiv.org/abs/2204.00498v1) | Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau | 2022-03-15 | `API:` [langchain_community.utilities...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase), [langchain_community.utilities...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL)
| `2202.00666v5` [Locally Typical Sampling](http://arxiv.org/abs/2202.00666v5) | Clara Meister, Tiago Pimentel, Gian Wiher, et al. | 2022-02-01 | `API:` [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2204.00498v1` [Evaluating the Text-to-SQL Capabilities of Large Language Models](http://arxiv.org/abs/2204.00498v1) | Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau | 2022-03-15 | `API:` [langchain_community.utilities...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL), [langchain_community.utilities...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase)
| `2202.00666v5` [Locally Typical Sampling](http://arxiv.org/abs/2202.00666v5) | Clara Meister, Tiago Pimentel, Gian Wiher, et al. | 2022-02-01 | `API:` [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2103.00020v1` [Learning Transferable Visual Models From Natural Language Supervision](http://arxiv.org/abs/2103.00020v1) | Alec Radford, Jong Wook Kim, Chris Hallacy, et al. | 2021-02-26 | `API:` [langchain_experimental.open_clip](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.open_clip)
| `1909.05858v2` [CTRL: A Conditional Transformer Language Model for Controllable Generation](http://arxiv.org/abs/1909.05858v2) | Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al. | 2019-09-11 | `API:` [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `1909.05858v2` [CTRL: A Conditional Transformer Language Model for Controllable Generation](http://arxiv.org/abs/1909.05858v2) | Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al. | 2019-09-11 | `API:` [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `1908.10084v1` [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](http://arxiv.org/abs/1908.10084v1) | Nils Reimers, Iryna Gurevych | 2019-08-27 | `Docs:` [docs/integrations/text_embedding/sentence_transformers](https://python.langchain.com/docs/integrations/text_embedding/sentence_transformers)
## Self-Discover: Large Language Models Self-Compose Reasoning Structures
- **arXiv id:** 2402.03620v1
- **Title:** Self-Discover: Large Language Models Self-Compose Reasoning Structures
- **Authors:** Pei Zhou, Jay Pujara, Xiang Ren, et al.
- **Published Date:** 2024-02-06
- **URL:** http://arxiv.org/abs/2402.03620v1
- **LangChain:**
- **Cookbook:** [self-discover](https://github.com/langchain-ai/langchain/blob/master/cookbook/self-discover.ipynb)
**Abstract:** We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the
task-intrinsic reasoning structures to tackle complex reasoning problems that
are challenging for typical prompting methods. Core to the framework is a
self-discovery process where LLMs select multiple atomic reasoning modules such
as critical thinking and step-by-step thinking, and compose them into an
explicit reasoning structure for LLMs to follow during decoding. SELF-DISCOVER
substantially improves GPT-4 and PaLM 2's performance on challenging reasoning
benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as
much as 32% compared to Chain of Thought (CoT). Furthermore, SELF-DISCOVER
outperforms inference-intensive methods such as CoT-Self-Consistency by more
than 20%, while requiring 10-40x less inference compute. Finally, we show that
the self-discovered reasoning structures are universally applicable across
model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share
commonalities with human reasoning patterns.
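As a loose sketch of that select-and-compose flow (the module list, prompts, and `complete` helper below are placeholders, not the paper's or LangChain's actual API):

```python
# Minimal sketch of a SELF-DISCOVER-style pipeline. `complete` stands in for any LLM call.
REASONING_MODULES = [
    "Break the problem into smaller sub-problems.",
    "Use critical thinking to question assumptions.",
    "Think step by step and verify each step.",
]


def complete(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM call here")


def self_discover(task: str) -> str:
    # SELECT: pick the atomic reasoning modules relevant to this task.
    selected = complete(
        f"Task: {task}\nSelect the most useful reasoning modules from:\n" + "\n".join(REASONING_MODULES)
    )
    # ADAPT + IMPLEMENT: compose them into an explicit, task-specific reasoning structure.
    structure = complete(
        f"Task: {task}\nTurn these modules into a step-by-step reasoning structure:\n{selected}"
    )
    # SOLVE: follow the self-discovered structure while answering.
    return complete(f"Follow this reasoning structure to solve the task.\n{structure}\nTask: {task}")
```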
## RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
- **arXiv id:** 2401.18059v1
- **Title:** RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
- **Authors:** Parth Sarthi, Salman Abdullah, Aditi Tuli, et al.
- **Published Date:** 2024-01-31
- **URL:** http://arxiv.org/abs/2401.18059v1
- **LangChain:**
- **Cookbook:** [RAPTOR](https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb)
**Abstract:** Retrieval-augmented language models can better adapt to changes in world
state and incorporate long-tail knowledge. However, most existing methods
retrieve only short contiguous chunks from a retrieval corpus, limiting
holistic understanding of the overall document context. We introduce the novel
approach of recursively embedding, clustering, and summarizing chunks of text,
constructing a tree with differing levels of summarization from the bottom up.
At inference time, our RAPTOR model retrieves from this tree, integrating
information across lengthy documents at different levels of abstraction.
Controlled experiments show that retrieval with recursive summaries offers
significant improvements over traditional retrieval-augmented LMs on several
tasks. On question-answering tasks that involve complex, multi-step reasoning,
we show state-of-the-art results; for example, by coupling RAPTOR retrieval
with the use of GPT-4, we can improve the best performance on the QuALITY
benchmark by 20% in absolute accuracy.
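A compressed sketch of that bottom-up construction (the embedding, clustering, and summarization callables are placeholders for whichever backends one chooses):

```python
# Sketch of RAPTOR-style tree building: embed leaf chunks, cluster them, summarize each
# cluster, then recurse on the summaries until a single root summary remains.
from typing import Callable, List


def build_tree(
    chunks: List[str],
    embed: Callable[[List[str]], List[List[float]]],
    cluster: Callable[[List[List[float]]], List[List[int]]],  # returns groups of chunk indices
    summarize: Callable[[List[str]], str],
) -> List[List[str]]:
    levels = [chunks]
    while len(levels[-1]) > 1:
        texts = levels[-1]
        groups = cluster(embed(texts))
        levels.append([summarize([texts[i] for i in group]) for group in groups])
    return levels  # retrieval then searches across every level, not just the leaves
```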
## Corrective Retrieval Augmented Generation
- **arXiv id:** 2401.15884v2
- **Title:** Corrective Retrieval Augmented Generation
- **Authors:** Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, et al.
- **Published Date:** 2024-01-29
- **URL:** http://arxiv.org/abs/2401.15884v2
- **LangChain:**
- **Cookbook:** [langgraph_crag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_crag.ipynb)
**Abstract:** Large language models (LLMs) inevitably exhibit hallucinations since the
accuracy of generated texts cannot be secured solely by the parametric
knowledge they encapsulate. Although retrieval-augmented generation (RAG) is a
practicable complement to LLMs, it relies heavily on the relevance of retrieved
documents, raising concerns about how the model behaves if retrieval goes
wrong. To this end, we propose the Corrective Retrieval Augmented Generation
(CRAG) to improve the robustness of generation. Specifically, a lightweight
retrieval evaluator is designed to assess the overall quality of retrieved
documents for a query, returning a confidence degree based on which different
knowledge retrieval actions can be triggered. Since retrieval from static and
limited corpora can only return sub-optimal documents, large-scale web searches
are utilized as an extension for augmenting the retrieval results. Besides, a
decompose-then-recompose algorithm is designed for retrieved documents to
selectively focus on key information and filter out irrelevant information in
them. CRAG is plug-and-play and can be seamlessly coupled with various
RAG-based approaches. Experiments on four datasets covering short- and
long-form generation tasks show that CRAG can significantly improve the
performance of RAG-based approaches.
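The corrective step can be sketched as a confidence-gated branch; the thresholds and evaluator below are illustrative assumptions, and the linked langgraph_crag cookbook expresses a similar flow as a LangGraph graph:

```python
# Sketch of CRAG's corrective branch: grade the retrieved documents, then use them, fall
# back to web search, or combine both, depending on the evaluator's confidence.
from typing import Callable, List


def corrective_rag(
    question: str,
    docs: List[str],
    evaluate: Callable[[str, str], float],   # lightweight retrieval evaluator -> confidence in [0, 1]
    web_search: Callable[[str], List[str]],  # fallback when the static corpus is insufficient
    generate: Callable[[str, List[str]], str],
    upper: float = 0.7,
    lower: float = 0.3,
) -> str:
    scores = [evaluate(question, d) for d in docs]
    best = max(scores, default=0.0)
    if best >= upper:    # "correct": keep only the strongly supported documents
        context = [d for d, s in zip(docs, scores) if s >= upper]
    elif best <= lower:  # "incorrect": discard the corpus results and search the web instead
        context = web_search(question)
    else:                # "ambiguous": combine both sources
        context = docs + web_search(question)
    return generate(question, context)
```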
## Mixtral of Experts
- **arXiv id:** 2401.04088v1
- **Title:** Mixtral of Experts
- **Authors:** Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, et al.
- **Published Date:** 2024-01-08
- **URL:** http://arxiv.org/abs/2401.04088v1
- **LangChain:**
- **Cookbook:** [together_ai](https://github.com/langchain-ai/langchain/blob/master/cookbook/together_ai.ipynb)
**Abstract:** We introduce Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model.
Mixtral has the same architecture as Mistral 7B, with the difference that each
layer is composed of 8 feedforward blocks (i.e. experts). For every token, at
each layer, a router network selects two experts to process the current state
and combine their outputs. Even though each token only sees two experts, the
selected experts can be different at each timestep. As a result, each token has
access to 47B parameters, but only uses 13B active parameters during inference.
Mixtral was trained with a context size of 32k tokens and it outperforms or
matches Llama 2 70B and GPT-3.5 across all evaluated benchmarks. In particular,
Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and
multilingual benchmarks. We also provide a model fine-tuned to follow
instructions, Mixtral 8x7B - Instruct, that surpasses GPT-3.5 Turbo,
Claude-2.1, Gemini Pro, and Llama 2 70B - chat model on human benchmarks. Both
the base and instruct models are released under the Apache 2.0 license.
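As a toy illustration of the routing described above (the shapes and experts are made up, and a real SMoE layer adds load balancing and much more):

```python
# Toy top-2 expert routing for a single token: the router scores all 8 experts, but only
# the two best are evaluated, and their outputs are mixed by the softmaxed router weights.
import numpy as np

rng = np.random.default_rng(0)
hidden, num_experts = 16, 8
experts = [(lambda W: (lambda v: W @ v))(rng.normal(size=(hidden, hidden))) for _ in range(num_experts)]
router = rng.normal(size=(num_experts, hidden))


def moe_layer(x: np.ndarray) -> np.ndarray:
    logits = router @ x                 # (num_experts,) routing scores
    top2 = np.argsort(logits)[-2:]      # indices of the two highest-scoring experts
    weights = np.exp(logits[top2]) / np.exp(logits[top2]).sum()
    return sum(w * experts[i](x) for w, i in zip(weights, top2))


token = rng.normal(size=hidden)
print(moe_layer(token).shape)  # (16,): same hidden size out, but only 2 of the 8 experts ran
```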
## Dense X Retrieval: What Retrieval Granularity Should We Use?
- **arXiv id:** 2312.06648v2
@@ -91,6 +209,39 @@ average improvement of +7.9 in EM score given entirely noisy retrieved
documents and +10.5 in rejection rates for real-time questions that fall
outside the pre-training knowledge scope.
## Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
- **arXiv id:** 2310.11511v1
- **Title:** Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
- **Authors:** Akari Asai, Zeqiu Wu, Yizhong Wang, et al.
- **Published Date:** 2023-10-17
- **URL:** http://arxiv.org/abs/2310.11511v1
- **LangChain:**
- **Cookbook:** [langgraph_self_rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_self_rag.ipynb)
**Abstract:** Despite their remarkable capabilities, large language models (LLMs) often
produce responses containing factual inaccuracies due to their sole reliance on
the parametric knowledge they encapsulate. Retrieval-Augmented Generation
(RAG), an ad hoc approach that augments LMs with retrieval of relevant
knowledge, decreases such issues. However, indiscriminately retrieving and
incorporating a fixed number of retrieved passages, regardless of whether
retrieval is necessary, or passages are relevant, diminishes LM versatility or
can lead to unhelpful response generation. We introduce a new framework called
Self-Reflective Retrieval-Augmented Generation (Self-RAG) that enhances an LM's
quality and factuality through retrieval and self-reflection. Our framework
trains a single arbitrary LM that adaptively retrieves passages on-demand, and
generates and reflects on retrieved passages and its own generations using
special tokens, called reflection tokens. Generating reflection tokens makes
the LM controllable during the inference phase, enabling it to tailor its
behavior to diverse task requirements. Experiments show that Self-RAG (7B and
13B parameters) significantly outperforms state-of-the-art LLMs and
retrieval-augmented models on a diverse set of tasks. Specifically, Self-RAG
outperforms ChatGPT and retrieval-augmented Llama2-chat on Open-domain QA,
reasoning and fact verification tasks, and it shows significant gains in
improving factuality and citation accuracy for long-form generations relative
to these models.
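A stripped-down sketch of that reflective loop, with plain callables standing in for the reflection tokens and the underlying LM (the linked langgraph_self_rag cookbook models similar decisions as graph nodes):

```python
# Sketch of a Self-RAG-style loop: retrieve only when the model asks for it, then keep the
# candidate answer whose evidence the critic scores highest.
from typing import Callable, List


def self_rag(
    question: str,
    needs_retrieval: Callable[[str], bool],      # stands in for the retrieve/no-retrieve reflection token
    retrieve: Callable[[str], List[str]],
    generate: Callable[[str, str], str],
    critique: Callable[[str, str, str], float],  # stands in for relevance/support/usefulness tokens
) -> str:
    if not needs_retrieval(question):
        return generate(question, "")
    candidates = []
    for passage in retrieve(question):
        answer = generate(question, passage)
        candidates.append((critique(question, passage, answer), answer))
    if not candidates:
        return generate(question, "")
    return max(candidates)[1]
```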
## Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models
- **arXiv id:** 2310.06117v2
@@ -101,6 +252,7 @@ outside the pre-training knowledge scope.
- **LangChain:**
- **Template:** [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting)
- **Cookbook:** [stepback-qa](https://github.com/langchain-ai/langchain/blob/master/cookbook/stepback-qa.ipynb)
**Abstract:** We present Step-Back Prompting, a simple prompting technique that enables
LLMs to do abstractions to derive high-level concepts and first principles from
@@ -113,6 +265,27 @@ including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back
Prompting improves PaLM-2L performance on MMLU (Physics and Chemistry) by 7%
and 11% respectively, TimeQA by 27%, and MuSiQue by 7%.
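In code, the technique reduces to two extra model calls; a rough sketch, with paraphrased prompts and a placeholder `complete` function:

```python
# Sketch of Step-Back Prompting: ask an abstracted question first, answer it to surface the
# relevant principles, then answer the original question grounded in that abstraction.
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM call here")


def step_back_answer(question: str) -> str:
    step_back_question = complete(
        f"Rephrase the following as a more generic, higher-level question:\n{question}"
    )
    principles = complete(step_back_question)
    return complete(
        f"Background principles:\n{principles}\n\nUsing them, answer the original question:\n{question}"
    )
```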
## Llama 2: Open Foundation and Fine-Tuned Chat Models
- **arXiv id:** 2307.09288v2
- **Title:** Llama 2: Open Foundation and Fine-Tuned Chat Models
- **Authors:** Hugo Touvron, Louis Martin, Kevin Stone, et al.
- **Published Date:** 2023-07-18
- **URL:** http://arxiv.org/abs/2307.09288v2
- **LangChain:**
- **Cookbook:** [Semi_Structured_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)
**Abstract:** In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs.
## Query Rewriting for Retrieval-Augmented Large Language Models
- **arXiv id:** 2305.14283v3
@@ -123,6 +296,7 @@ and 11% respectively, TimeQA by 27%, and MuSiQue by 7%.
- **LangChain:**
- **Template:** [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read)
- **Cookbook:** [rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb)
**Abstract:** Large Language Models (LLMs) play powerful, black-box readers in the
retrieve-then-read pipeline, making remarkable progress in knowledge-intensive
@@ -152,6 +326,7 @@ for retrieval-augmented LLM.
- **LangChain:**
- **API Reference:** [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot)
- **Cookbook:** [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
**Abstract:** In this paper, we introduce the Tree-of-Thought (ToT) framework, a novel
approach aimed at improving the problem-solving capabilities of auto-regressive
@@ -171,6 +346,132 @@ significantly increase the success rate of Sudoku puzzle solving. Our
implementation of the ToT-based Sudoku solver is available on GitHub:
\url{https://github.com/jieyilong/tree-of-thought-puzzle-solver}.
## Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
- **arXiv id:** 2305.04091v3
- **Title:** Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
- **Authors:** Lei Wang, Wanyu Xu, Yihuai Lan, et al.
- **Published Date:** 2023-05-06
- **URL:** http://arxiv.org/abs/2305.04091v3
- **LangChain:**
- **Cookbook:** [plan_and_execute_agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb)
**Abstract:** Large language models (LLMs) have recently been shown to deliver impressive
performance in various NLP tasks. To tackle multi-step reasoning tasks,
few-shot chain-of-thought (CoT) prompting includes a few manually crafted
step-by-step reasoning demonstrations which enable LLMs to explicitly generate
reasoning steps and improve their reasoning task accuracy. To eliminate the
manual effort, Zero-shot-CoT concatenates the target problem statement with
"Let's think step by step" as an input prompt to LLMs. Despite the success of
Zero-shot-CoT, it still suffers from three pitfalls: calculation errors,
missing-step errors, and semantic misunderstanding errors. To address the
missing-step errors, we propose Plan-and-Solve (PS) Prompting. It consists of
two components: first, devising a plan to divide the entire task into smaller
subtasks, and then carrying out the subtasks according to the plan. To address
the calculation errors and improve the quality of generated reasoning steps, we
extend PS prompting with more detailed instructions and derive PS+ prompting.
We evaluate our proposed prompting strategy on ten datasets across three
reasoning problems. The experimental results over GPT-3 show that our proposed
zero-shot prompting consistently outperforms Zero-shot-CoT across all datasets
by a large margin, is comparable to or exceeds Zero-shot-Program-of-Thought
Prompting, and has comparable performance with 8-shot CoT prompting on the math
reasoning problem. The code can be found at
https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting.
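Because PS prompting is ultimately a zero-shot trigger phrase, a sketch is short (the trigger below paraphrases the paper's wording, and `complete` is a placeholder for an LLM call):

```python
# Sketch of Plan-and-Solve prompting: a zero-shot trigger that asks the model to plan first
# and then execute the plan, instead of the plain "Let's think step by step".
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM call here")


PS_TRIGGER = (
    "Let's first understand the problem and devise a plan to solve it. "
    "Then, let's carry out the plan and solve the problem step by step."
)


def plan_and_solve(problem: str) -> str:
    return complete(f"Q: {problem}\nA: {PS_TRIGGER}")
```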
## Visual Instruction Tuning
- **arXiv id:** 2304.08485v2
- **Title:** Visual Instruction Tuning
- **Authors:** Haotian Liu, Chunyuan Li, Qingyang Wu, et al.
- **Published Date:** 2023-04-17
- **URL:** http://arxiv.org/abs/2304.08485v2
- **LangChain:**
- **Cookbook:** [Semi_structured_and_multi_modal_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb), [Semi_structured_multi_modal_RAG_LLaMA2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb)
**Abstract:** Instruction tuning large language models (LLMs) using machine-generated
instruction-following data has improved zero-shot capabilities on new tasks,
but the idea is less explored in the multimodal field. In this paper, we
present the first attempt to use language-only GPT-4 to generate multimodal
language-image instruction-following data. By instruction tuning on such
generated data, we introduce LLaVA: Large Language and Vision Assistant, an
end-to-end trained large multimodal model that connects a vision encoder and
LLM for general-purpose visual and language understanding. Our early experiments
show that LLaVA demonstrates impressive multimodal chat abilities, sometimes
exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and
yields an 85.1% relative score compared with GPT-4 on a synthetic multimodal
instruction-following dataset. When fine-tuned on Science QA, the synergy of
LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make
GPT-4 generated visual instruction tuning data, our model and code base
publicly available.
## Generative Agents: Interactive Simulacra of Human Behavior
- **arXiv id:** 2304.03442v2
- **Title:** Generative Agents: Interactive Simulacra of Human Behavior
- **Authors:** Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al.
- **Published Date:** 2023-04-07
- **URL:** http://arxiv.org/abs/2304.03442v2
- **LangChain:**
- **Cookbook:** [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb), [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb)
**Abstract:** Believable proxies of human behavior can empower interactive applications
ranging from immersive environments to rehearsal spaces for interpersonal
communication to prototyping tools. In this paper, we introduce generative
agents--computational software agents that simulate believable human behavior.
Generative agents wake up, cook breakfast, and head to work; artists paint,
while authors write; they form opinions, notice each other, and initiate
conversations; they remember and reflect on days past as they plan the next
day. To enable generative agents, we describe an architecture that extends a
large language model to store a complete record of the agent's experiences
using natural language, synthesize those memories over time into higher-level
reflections, and retrieve them dynamically to plan behavior. We instantiate
generative agents to populate an interactive sandbox environment inspired by
The Sims, where end users can interact with a small town of twenty-five agents
using natural language. In an evaluation, these generative agents produce
believable individual and emergent social behaviors: for example, starting with
only a single user-specified notion that one agent wants to throw a Valentine's
Day party, the agents autonomously spread invitations to the party over the
next two days, make new acquaintances, ask each other out on dates to the
party, and coordinate to show up for the party together at the right time. We
demonstrate through ablation that the components of our agent
architecture--observation, planning, and reflection--each contribute critically
to the believability of agent behavior. By fusing large language models with
computational, interactive agents, this work introduces architectural and
interaction patterns for enabling believable simulations of human behavior.
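One way to sketch the memory retrieval this architecture depends on (the equal weighting, decay rate, and relevance inputs are illustrative assumptions):

```python
# Sketch of generative-agent memory retrieval: each stored memory is scored by a mix of
# recency, LLM-rated importance, and relevance to the current situation.
import time
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Memory:
    text: str
    importance: float  # e.g., rated 1-10 by an LLM when the memory is stored
    created: float = field(default_factory=time.time)


def score(memory: Memory, relevance: float, now: float, decay: float = 0.995) -> float:
    hours_old = (now - memory.created) / 3600
    recency = decay ** hours_old
    return recency + memory.importance / 10 + relevance  # equal weighting, as a simplification


def retrieve(memories: List[Memory], relevances: List[float], k: int = 3) -> List[str]:
    now = time.time()
    ranked: List[Tuple[Memory, float]] = sorted(
        zip(memories, relevances), key=lambda pair: score(pair[0], pair[1], now), reverse=True
    )
    return [m.text for m, _ in ranked[:k]]
```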
## CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
- **arXiv id:** 2303.17760v2
- **Title:** CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
- **Authors:** Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al.
- **Published Date:** 2023-03-31
- **URL:** http://arxiv.org/abs/2303.17760v2
- **LangChain:**
- **Cookbook:** [camel_role_playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb)
**Abstract:** The rapid advancement of chat-based language models has led to remarkable
progress in complex task-solving. However, their success heavily relies on
human input to guide the conversation, which can be challenging and
time-consuming. This paper explores the potential of building scalable
techniques to facilitate autonomous cooperation among communicative agents, and
provides insight into their "cognitive" processes. To address the challenges of
achieving autonomous cooperation, we propose a novel communicative agent
framework named role-playing. Our approach involves using inception prompting
to guide chat agents toward task completion while maintaining consistency with
human intentions. We showcase how role-playing can be used to generate
conversational data for studying the behaviors and capabilities of a society of
agents, providing a valuable resource for investigating conversational language
models. In particular, we conduct comprehensive studies on
instruction-following cooperation in multi-agent settings. Our contributions
include introducing a novel communicative agent framework, offering a scalable
approach for studying the cooperative behaviors and capabilities of multi-agent
systems, and open-sourcing our library to support research on communicative
agents and beyond: https://github.com/camel-ai/camel.
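A bare-bones sketch of the role-playing loop (the message format, TASK_DONE convention, and `complete` helper are all assumptions; the linked camel_role_playing cookbook wires up a similar pattern with LangChain chat models):

```python
# Bare-bones CAMEL-style role playing: a user agent instructs, an assistant agent acts,
# and the loop stops when the user agent signals completion.
from typing import List, Tuple


def complete(system: str, history: List[Tuple[str, str]]) -> str:
    raise NotImplementedError("plug in an LLM chat call here")


def role_play(task: str, assistant_role: str, user_role: str, max_turns: int = 10) -> List[Tuple[str, str]]:
    assistant_sys = f"You are a {assistant_role}. Never forget the task: {task}. Follow my instructions."
    user_sys = f"You are a {user_role}. Instruct the assistant, one step at a time, to complete: {task}."
    transcript: List[Tuple[str, str]] = []
    last_reply = f"Here is the task: {task}"
    for _ in range(max_turns):
        instruction = complete(user_sys, transcript + [("assistant", last_reply)])
        if "TASK_DONE" in instruction:  # assumed convention for ending the session
            break
        last_reply = complete(assistant_sys, transcript + [("user", instruction)])
        transcript += [("user", instruction), ("assistant", last_reply)]
    return transcript
```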
## HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face
- **arXiv id:** 2303.17580v4
@@ -181,6 +482,7 @@ implementation of the ToT-based Sudoku solver is available on GitHub:
- **LangChain:**
- **API Reference:** [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents)
- **Cookbook:** [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
**Abstract:** Solving complicated AI tasks with different domains and modalities is a key
step toward artificial general intelligence. While there are numerous AI models
@@ -235,7 +537,7 @@ more than 1/1,000th the compute of GPT-4.
- **URL:** http://arxiv.org/abs/2301.10226v4
- **LangChain:**
- **API Reference:** [langchain_community.llms...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
- **API Reference:** [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI)
**Abstract:** Potential harms of large language models can be mitigated by watermarking
model output, i.e., embedding signals into generated text that are invisible to
@@ -262,6 +564,7 @@ family, and discuss robustness and security.
- **API Reference:** [langchain.chains...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder)
- **Template:** [hyde](https://python.langchain.com/docs/templates/hyde)
- **Cookbook:** [hypothetical_document_embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)
**Abstract:** While dense retrieval has been shown effective and efficient across tasks and
languages, it remains difficult to create effective fully zero-shot dense
@@ -351,7 +654,8 @@ performance across three real-world tasks on multiple LLMs.
- **URL:** http://arxiv.org/abs/2211.10435v2
- **LangChain:**
- **API Reference:** [langchain_experimental.pal_chain...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain)
- **API Reference:** [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain), [langchain_experimental.pal_chain...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain)
- **Cookbook:** [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
**Abstract:** Large language models (LLMs) have recently demonstrated an impressive ability
to perform arithmetic and symbolic reasoning tasks, when provided with a few
@@ -442,7 +746,7 @@ encoders, mine bitexts, and validate the bitexts by training NMT systems.
- **URL:** http://arxiv.org/abs/2204.00498v1
- **LangChain:**
- **API Reference:** [langchain_community.utilities...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase), [langchain_community.utilities...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL)
- **API Reference:** [langchain_community.utilities...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL), [langchain_community.utilities...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase)
**Abstract:** We perform an empirical evaluation of Text-to-SQL capabilities of the Codex
language model. We find that, without any finetuning, Codex is a strong
@@ -461,7 +765,7 @@ few-shot examples.
- **URL:** http://arxiv.org/abs/2202.00666v5
- **LangChain:**
- **API Reference:** [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
- **API Reference:** [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
**Abstract:** Today's probabilistic language generators fall short when it comes to
producing coherent and fluent text despite the fact that the underlying models
@@ -525,7 +829,7 @@ https://github.com/OpenAI/CLIP.
- **URL:** http://arxiv.org/abs/1909.05858v2
- **LangChain:**
- **API Reference:** [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
- **API Reference:** [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
**Abstract:** Large-scale language models show promising text generation capabilities, but
users cannot easily control particular aspects of the generated text. We

View File

@@ -174,7 +174,7 @@ The `content` property describes the content of the message.
This can be a few different things:
- A string (most models deal this type of content)
- A List of dictionaries (this is used for multi-modal input, where the dictionary contains information about that input type and that input location)
- A List of dictionaries (this is used for multimodal input, where the dictionary contains information about that input type and that input location)
#### HumanMessage
@@ -476,6 +476,12 @@ If you are still using AgentExecutor, do not fear: we still have a guide on [how
It is recommended, however, that you start to transition to LangGraph.
In order to assist in this we have put together a [transition guide on how to do so](/docs/how_to/migrate_agent)
### Multimodal
Some models are multimodal, accepting images, audio and even video as inputs. These are still less common, meaning model providers haven't standardized on the "best" way to define the API. Multimodal **outputs** are even less common. As such, we've kept our multimodal abstractions fairly lightweight and plan to further solidify the multimodal APIs and interaction patterns as the field matures.
In LangChain, most chat models that support multimodal inputs also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.
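For example, an image can be passed to a supporting chat model using OpenAI-style content blocks (the model choice and image URL below are placeholders):

```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")  # any multimodal-capable chat model

message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe what is in this image."},
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
    ]
)
print(llm.invoke([message]).content)
```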
### Callbacks
LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.
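As a small illustration, a custom handler can subscribe to events such as newly streamed tokens (the handler below is a made-up example, not a built-in):

```python
from langchain_core.callbacks import BaseCallbackHandler


class TokenPrinter(BaseCallbackHandler):
    """Print each token as the model streams it."""

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(token, end="", flush=True)


# Hypothetical usage with any chat model that supports streaming:
# llm.invoke("Tell me a joke", config={"callbacks": [TokenPrinter()]})
```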
@@ -642,3 +648,7 @@ Table columns:
| Character | [CharacterTextSplitter](/docs/how_to/character_text_splitter/) | A user defined character | | Splits text based on a user defined character. One of the simpler methods. |
| Semantic Chunker (Experimental) | [SemanticChunker](/docs/how_to/semantic-chunker/) | Sentences | | First splits on sentences. Then combines ones next to each other if they are semantically similar enough. Taken from [Greg Kamradt](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/tutorials/LevelsOfTextSplitting/5_Levels_Of_Text_Splitting.ipynb) |
| Integration: AI21 Semantic | [AI21SemanticTextSplitter](/docs/integrations/document_transformers/ai21_semantic_text_splitter/) | ✅ | Identifies distinct topics that form coherent pieces of text and splits along those. |

View File

@@ -71,6 +71,8 @@ make docs_clean
make api_docs_clean
```
Next, you can build the documentation as outlined below:
```bash
@@ -78,6 +80,18 @@ make docs_build
make api_docs_build
```
:::tip
The `make api_docs_build` command takes a long time. If you're making cosmetic changes to the API docs and want to see how they look, use:
```bash
make api_docs_quick_preview
```
which will just build a small subset of the API reference.
:::
Finally, run the link checker to ensure all links are valid:
```bash

View File

@@ -19,13 +19,13 @@
"\n",
"By themselves, language models can't take actions - they just output text.\n",
"A big use case for LangChain is creating **agents**.\n",
"Agents are systems that use an LLM as a reasoning enginer to determine which actions to take and what the inputs to those actions should be.\n",
"The results of those actions can then be fed back into the agent and it determine whether more actions are needed, or whether it is okay to finish.\n",
"Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be.\n",
"The results of those actions can then be fed back into the agent and it determines whether more actions are needed, or whether it is okay to finish.\n",
"\n",
"In this tutorial we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.\n",
"In this tutorial, we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.\n",
"\n",
":::{.callout-important}\n",
"This section will cover building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd reccommend checking out [LangGraph](/docs/concepts/#langgraph)\n",
"This section will cover building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point, you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd reccommend checking out [LangGraph](/docs/concepts/#langgraph)\n",
":::\n",
"\n",
"## Concepts\n",
@@ -34,7 +34,7 @@
"- Using [language models](/docs/concepts/#chat-models), in particular their tool calling ability\n",
"- Creating a [Retriever](/docs/concepts/#retrievers) to expose specific information to our agent\n",
"- Using a Search [Tool](/docs/concepts/#tools) to look up things online\n",
"- [`Chat History`](/docs/concepts/#chat-history), which allows a chatbot to \"remember\" past interactions and take them into account when responding to followup questions. \n",
"- [`Chat History`](/docs/concepts/#chat-history), which allows a chatbot to \"remember\" past interactions and take them into account when responding to follow-up questions. \n",
"- Debugging and tracing your application using [LangSmith](/docs/concepts/#langsmith)\n",
"\n",
"## Setup\n",

View File

@@ -14,35 +14,51 @@
"\n",
":::\n",
"\n",
"Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls."
"Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n",
"\n",
"This guide requires `langchain-openai >= 0.1.8`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9c7d1338-dd1b-4d06-b33d-d5cffc49fd6a",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "1a55e87a-3291-4e7f-8e8e-4c69b0854384",
"id": "598ae1e2-a52d-4459-81fd-cdc68b06742a",
"metadata": {},
"source": [
"## Using AIMessage.response_metadata\n",
"## Using LangSmith\n",
"\n",
"A number of model providers return token usage information as part of the chat generation response. When available, this is included in the [`AIMessage.response_metadata`](/docs/how_to/response_metadata) field. Here's an example with OpenAI:"
"You can use [LangSmith](https://www.langchain.com/langsmith) to help track token usage in your LLM application. See the [LangSmith quick start guide](https://docs.smith.langchain.com/).\n",
"\n",
"## Using AIMessage.usage_metadata\n",
"\n",
"A number of model providers return token usage information as part of the chat generation response. When available, this information will be included on the `AIMessage` objects produced by the corresponding model.\n",
"\n",
"LangChain `AIMessage` objects include a [usage_metadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.usage_metadata) attribute. When populated, this attribute will be a [UsageMetadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.UsageMetadata.html) dictionary with standard keys (e.g., `\"input_tokens\"` and `\"output_tokens\"`).\n",
"\n",
"Examples:\n",
"\n",
"**OpenAI**:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "467ccdeb-6b62-45e5-816e-167cd24d2586",
"id": "b39bf807-4125-4db4-bbf7-28a46afff6b4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'token_usage': {'completion_tokens': 225,\n",
" 'prompt_tokens': 17,\n",
" 'total_tokens': 242},\n",
" 'model_name': 'gpt-4-turbo',\n",
" 'system_fingerprint': 'fp_76f018034d',\n",
" 'finish_reason': 'stop',\n",
" 'logprobs': None}"
"{'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}"
]
},
"execution_count": 1,
@@ -51,37 +67,33 @@
}
],
"source": [
"# !pip install -qU langchain-openai\n",
"# # !pip install -qU langchain-openai\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4-turbo\")\n",
"msg = llm.invoke([(\"human\", \"What's the oldest known example of cuneiform\")])\n",
"msg.response_metadata"
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"openai_response = llm.invoke(\"hello\")\n",
"openai_response.usage_metadata"
]
},
{
"cell_type": "markdown",
"id": "9d5026e9-3ad4-41e6-9946-9f1a26f4a21f",
"id": "2299c44a-2fe6-4d52-a6a2-99ff6d231c73",
"metadata": {},
"source": [
"And here's an example with Anthropic:"
"**Anthropic**:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "145404f1-e088-4824-b468-236c486a9903",
"id": "9c82ff80-ec4e-4049-b019-5f0bbd7df82a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'id': 'msg_01P61rdHbapEo6h3fjpfpCQT',\n",
" 'model': 'claude-3-sonnet-20240229',\n",
" 'stop_reason': 'end_turn',\n",
" 'stop_sequence': None,\n",
" 'usage': {'input_tokens': 17, 'output_tokens': 306}}"
"{'input_tokens': 8, 'output_tokens': 12, 'total_tokens': 20}"
]
},
"execution_count": 2,
@@ -94,9 +106,222 @@
"\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\")\n",
"msg = llm.invoke([(\"human\", \"What's the oldest known example of cuneiform\")])\n",
"msg.response_metadata"
"llm = ChatAnthropic(model=\"claude-3-haiku-20240307\")\n",
"anthropic_response = llm.invoke(\"hello\")\n",
"anthropic_response.usage_metadata"
]
},
{
"cell_type": "markdown",
"id": "6d4efc15-ba9f-4b3d-9278-8e01f99f263f",
"metadata": {},
"source": [
"### Using AIMessage.response_metadata\n",
"\n",
"Metadata from the model response is also included in the AIMessage [response_metadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.response_metadata) attribute. These data are typically not standardized. Note that different providers adopt different conventions for representing token counts:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f156f9da-21f2-4c81-a714-54cbf9ad393e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI: {'completion_tokens': 9, 'prompt_tokens': 8, 'total_tokens': 17}\n",
"\n",
"Anthropic: {'input_tokens': 8, 'output_tokens': 12}\n"
]
}
],
"source": [
"print(f'OpenAI: {openai_response.response_metadata[\"token_usage\"]}\\n')\n",
"print(f'Anthropic: {anthropic_response.response_metadata[\"usage\"]}')"
]
},
{
"cell_type": "markdown",
"id": "b4ef2c43-0ff6-49eb-9782-e4070c9da8d7",
"metadata": {},
"source": [
"### Streaming\n",
"\n",
"Some providers support token count metadata in a streaming context.\n",
"\n",
"#### OpenAI\n",
"\n",
"For example, OpenAI will return a message [chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html) at the end of a stream with token usage information. This behavior is supported by `langchain-openai >= 0.1.8` and can be enabled by setting `stream_options={\"include_usage\": True}`.\n",
"\n",
"```{=mdx}\n",
":::note\n",
"By default, the last message chunk in a stream will include a `\"finish_reason\"` in the message's `response_metadata` attribute. If we include token usage in streaming mode, an additional chunk containing usage metadata will be added to the end of the stream, such that `\"finish_reason\"` appears on the second to last message chunk.\n",
":::\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "07f0c872-6b6c-4fed-a129-9b5a858505be",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='Hello' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='!' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' How' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' can' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' I' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' assist' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' you' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' today' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='?' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='' response_metadata={'finish_reason': 'stop'} id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}\n"
]
}
],
"source": [
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"aggregate = None\n",
"for chunk in llm.stream(\"hello\", stream_options={\"include_usage\": True}):\n",
" print(chunk)\n",
" aggregate = chunk if aggregate is None else aggregate + chunk"
]
},
{
"cell_type": "markdown",
"id": "dd809ded-8b13-4d5f-be5e-277b79d51802",
"metadata": {},
"source": [
"Note that the usage metadata will be included in the sum of the individual message chunks:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "3db7bc03-a7d4-4704-92ab-f8ba92ef59ae",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hello! How can I assist you today?\n",
"{'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}\n"
]
}
],
"source": [
"print(aggregate.content)\n",
"print(aggregate.usage_metadata)"
]
},
{
"cell_type": "markdown",
"id": "7dba63e8-0ed7-4533-8f0f-78e19c38a25c",
"metadata": {},
"source": [
"To disable streaming token counts for OpenAI, set `\"include_usage\"` to False in `stream_options`, or omit it from the parameters:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "67117f2b-ce68-4c1e-9556-2d3849f90e1b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content='Hello' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content='!' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' How' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' can' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' I' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' assist' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' you' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' today' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content='?' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content='' response_metadata={'finish_reason': 'stop'} id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n"
]
}
],
"source": [
"aggregate = None\n",
"for chunk in llm.stream(\"hello\"):\n",
" print(chunk)"
]
},
{
"cell_type": "markdown",
"id": "6a5d9617-be3a-419a-9276-de9c29fa50ae",
"metadata": {},
"source": [
"You can also enable streaming token usage by setting `model_kwargs` when instantiating the chat model. This can be useful when incorporating chat models into LangChain [chains](/docs/concepts#langchain-expression-language-lcel): usage metadata can be monitored when [streaming intermediate steps](/docs/how_to/streaming#using-stream-events) or using tracing software such as [LangSmith](https://docs.smith.langchain.com/).\n",
"\n",
"See the below example, where we return output structured to a desired schema, but can still observe token usage streamed from intermediate steps."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "57dec1fb-bd9c-4c98-8798-8fbbe67f6b2c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Token usage: {'input_tokens': 79, 'output_tokens': 23, 'total_tokens': 102}\n",
"\n",
"setup='Why was the math book sad?' punchline='Because it had too many problems.'\n"
]
}
],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Joke(BaseModel):\n",
" \"\"\"Joke to tell user.\"\"\"\n",
"\n",
" setup: str = Field(description=\"question to set up a joke\")\n",
" punchline: str = Field(description=\"answer to resolve the joke\")\n",
"\n",
"\n",
"llm = ChatOpenAI(\n",
" model=\"gpt-3.5-turbo-0125\",\n",
" model_kwargs={\"stream_options\": {\"include_usage\": True}},\n",
")\n",
"# Under the hood, .with_structured_output binds tools to the\n",
"# chat model and appends a parser.\n",
"structured_llm = llm.with_structured_output(Joke)\n",
"\n",
"async for event in structured_llm.astream_events(\"Tell me a joke\", version=\"v2\"):\n",
" if event[\"event\"] == \"on_chat_model_end\":\n",
" print(f'Token usage: {event[\"data\"][\"output\"].usage_metadata}\\n')\n",
" elif event[\"event\"] == \"on_chain_end\":\n",
" print(event[\"data\"][\"output\"])\n",
" else:\n",
" pass"
]
},
{
"cell_type": "markdown",
"id": "2bc8d313-4bef-463e-89a5-236d8bb6ab2f",
"metadata": {},
"source": [
"Token usage is also visible in the corresponding [LangSmith trace](https://smith.langchain.com/public/fe6513d5-7212-4045-82e0-fefa28bc7656/r) in the payload from the chat model."
]
},
{
@@ -115,7 +340,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 9,
"id": "31667d54",
"metadata": {},
"outputs": [
@@ -123,11 +348,11 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Tokens Used: 26\n",
"Tokens Used: 27\n",
"\tPrompt Tokens: 11\n",
"\tCompletion Tokens: 15\n",
"\tCompletion Tokens: 16\n",
"Successful Requests: 1\n",
"Total Cost (USD): $0.00056\n"
"Total Cost (USD): $2.95e-05\n"
]
}
],
@@ -136,7 +361,7 @@
"\n",
"from langchain_community.callbacks.manager import get_openai_callback\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4-turbo\", temperature=0)\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"\n",
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
@@ -153,7 +378,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 10,
"id": "e09420f4",
"metadata": {},
"outputs": [
@@ -161,7 +386,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"52\n"
"55\n"
]
}
],
@@ -172,6 +397,39 @@
" print(cb.total_tokens)"
]
},
{
"cell_type": "markdown",
"id": "9ac51188-c8f4-4230-90fd-3cd78cdd955d",
"metadata": {},
"source": [
"```{=mdx}\n",
":::note\n",
"Cost information is currently not available in streaming mode. This is because model names are currently not propagated through chunks in streaming mode, and the model name is used to look up the correct pricing. Token counts however are available:\n",
":::\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "b241069a-265d-4497-af34-b0a5f95ae67f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"28\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" for chunk in llm.stream(\"Tell me a joke\", stream_options={\"include_usage\": True}):\n",
" pass\n",
" print(cb.total_tokens)"
]
},
{
"cell_type": "markdown",
"id": "d8186e7b",
@@ -182,7 +440,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 12,
"id": "5d1125c6",
"metadata": {},
"outputs": [],
@@ -211,15 +469,15 @@
"source": [
"```{=mdx}\n",
":::note\n",
"We have to set `stream_runnable=False` for token counting to work. By default the AgentExecutor will stream the underlying agent so that you can get the most granular results when streaming events via AgentExecutor.stream_events. However, OpenAI does not return token counts when streaming model responses, so we need to turn off the underlying streaming.\n",
"We have to set `stream_runnable=False` for cost information, as described above. By default the AgentExecutor will stream the underlying agent so that you can get the most granular results when streaming events via AgentExecutor.stream_events.\n",
":::\n",
"```"
]
},
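For reference, a minimal sketch of the kind of setup the following cell relies on is shown here. The model, prompt, and tool are assumptions reconstructed from the output further down (a `wikipedia` tool and `gpt-3.5-turbo-0125`), not the exact hidden cell:

```python
# Hedged sketch: a tool-calling agent with streaming disabled so that
# get_openai_callback can report token counts and cost.
# Assumes the `wikipedia` package is installed; names below are illustrative.
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_community.callbacks.manager import get_openai_callback
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
tools = [WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())]
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You're a helpful assistant"),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    stream_runnable=False,  # disable streaming so token counts are reported
)

with get_openai_callback() as cb:
    agent_executor.invoke(
        {"input": "What's a hummingbird's scientific name and what's the fastest bird species?"}
    )
    print(f"Total Tokens: {cb.total_tokens}")
    print(f"Total Cost (USD): ${cb.total_cost}")
```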
{
"cell_type": "code",
"execution_count": 18,
"id": "2f98c536",
"execution_count": 13,
"id": "3950d88b-8bfb-4294-b75b-e6fd421e633c",
"metadata": {},
"outputs": [
{
@@ -230,46 +488,51 @@
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `wikipedia` with `Hummingbird`\n",
"Invoking: `wikipedia` with `{'query': 'hummingbird scientific name'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mPage: Hummingbird\n",
"Summary: Hummingbirds are birds native to the Americas and comprise the biological family Trochilidae. With approximately 366 species and 113 genera, they occur from Alaska to Tierra del Fuego, but most species are found in Central and South America. As of 2024, 21 hummingbird species are listed as endangered or critically endangered, with numerous species declining in population.Hummingbirds have varied specialized characteristics to enable rapid, maneuverable flight: exceptional metabolic capacity, adaptations to high altitude, sensitive visual and communication abilities, and long-distance migration in some species. Among all birds, male hummingbirds have the widest diversity of plumage color, particularly in blues, greens, and purples. Hummingbirds are the smallest mature birds, measuring 7.513 cm (35 in) in length. The smallest is the 5 cm (2.0 in) bee hummingbird, which weighs less than 2.0 g (0.07 oz), and the largest is the 23 cm (9 in) giant hummingbird, weighing 1824 grams (0.630.85 oz). Noted for long beaks, hummingbirds are specialized for feeding on flower nectar, but all species also consume small insects.\n",
"Summary: Hummingbirds are birds native to the Americas and comprise the biological family Trochilidae. With approximately 366 species and 113 genera, they occur from Alaska to Tierra del Fuego, but most species are found in Central and South America. As of 2024, 21 hummingbird species are listed as endangered or critically endangered, with numerous species declining in population.\n",
"Hummingbirds have varied specialized characteristics to enable rapid, maneuverable flight: exceptional metabolic capacity, adaptations to high altitude, sensitive visual and communication abilities, and long-distance migration in some species. Among all birds, male hummingbirds have the widest diversity of plumage color, particularly in blues, greens, and purples. Hummingbirds are the smallest mature birds, measuring 7.513 cm (35 in) in length. The smallest is the 5 cm (2.0 in) bee hummingbird, which weighs less than 2.0 g (0.07 oz), and the largest is the 23 cm (9 in) giant hummingbird, weighing 1824 grams (0.630.85 oz). Noted for long beaks, hummingbirds are specialized for feeding on flower nectar, but all species also consume small insects.\n",
"They are known as hummingbirds because of the humming sound created by their beating wings, which flap at high frequencies audible to other birds and humans. They hover at rapid wing-flapping rates, which vary from around 12 beats per second in the largest species to 80 per second in small hummingbirds.\n",
"Hummingbirds have the highest mass-specific metabolic rate of any homeothermic animal. To conserve energy when food is scarce and at night when not foraging, they can enter torpor, a state similar to hibernation, and slow their metabolic rate to 115 of its normal rate. While most hummingbirds do not migrate, the rufous hummingbird has one of the longest migrations among birds, traveling twice per year between Alaska and Mexico, a distance of about 3,900 miles (6,300 km).\n",
"Hummingbirds split from their sister group, the swifts and treeswifts, around 42 million years ago. The oldest known fossil hummingbird is Eurotrochilus, from the Rupelian Stage of Early Oligocene Europe.\n",
"\n",
"Page: Rufous hummingbird\n",
"Summary: The rufous hummingbird (Selasphorus rufus) is a small hummingbird, about 8 cm (3.1 in) long with a long, straight and slender bill. These birds are known for their extraordinary flight skills, flying 2,000 mi (3,200 km) during their migratory transits. It is one of nine species in the genus Selasphorus.\n",
"\n",
"\n",
"Page: Bee hummingbird\n",
"Summary: The bee hummingbird, zunzuncito or Helena hummingbird (Mellisuga helenae) is a species of hummingbird, native to the island of Cuba in the Caribbean. It is the smallest known bird. The bee hummingbird feeds on nectar of flowers and bugs found in Cuba.\n",
"\n",
"Page: Hummingbird cake\n",
"Summary: Hummingbird cake is a banana-pineapple spice cake originating in Jamaica and a popular dessert in the southern United States since the 1970s. Ingredients include flour, sugar, salt, vegetable oil, ripe banana, pineapple, cinnamon, pecans, vanilla extract, eggs, and leavening agent. It is often served with cream cheese frosting.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `wikipedia` with `Fastest bird`\n",
"Page: Anna's hummingbird\n",
"Summary: Anna's hummingbird (Calypte anna) is a North American species of hummingbird. It was named after Anna Masséna, Duchess of Rivoli.\n",
"It is native to western coastal regions of North America. In the early 20th century, Anna's hummingbirds bred only in northern Baja California and Southern California. The transplanting of exotic ornamental plants in residential areas throughout the Pacific coast and inland deserts provided expanded nectar and nesting sites, allowing the species to expand its breeding range. Year-round residence of Anna's hummingbirds in the Pacific Northwest is an example of ecological release dependent on acclimation to colder winter temperatures, introduced plants, and human provision of nectar feeders during winter.\n",
"These birds feed on nectar from flowers using a long extendable tongue. They also consume small insects and other arthropods caught in flight or gleaned from vegetation.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `wikipedia` with `{'query': 'fastest bird species'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mPage: Fastest animals\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mPage: List of birds by flight speed\n",
"Summary: This is a list of the fastest flying birds in the world. A bird's velocity is necessarily variable; a hunting bird will reach much greater speeds while diving to catch prey than when flying horizontally. The bird that can achieve the greatest airspeed is the peregrine falcon (Falco peregrinus), able to exceed 320 km/h (200 mph) in its dives. A close relative of the common swift, the white-throated needletail (Hirundapus caudacutus), is commonly reported as the fastest bird in level flight with a reported top speed of 169 km/h (105 mph). This record remains unconfirmed as the measurement methods have never been published or verified. The record for the fastest confirmed level flight by a bird is 111.5 km/h (69.3 mph) held by the common swift.\n",
"\n",
"\n",
"\n",
"Page: Fastest animals\n",
"Summary: This is a list of the fastest animals in the world, by types of animal.\n",
"\n",
"\n",
"\n",
"Page: List of birds by flight speed\n",
"Summary: This is a list of the fastest flying birds in the world. A bird's velocity is necessarily variable; a hunting bird will reach much greater speeds while diving to catch prey than when flying horizontally. The bird that can achieve the greatest airspeed is the peregrine falcon, able to exceed 320 km/h (200 mph) in its dives. A close relative of the common swift, the white-throated needletail (Hirundapus caudacutus), is commonly reported as the fastest bird in level flight with a reported top speed of 169 km/h (105 mph). This record remains unconfirmed as the measurement methods have never been published or verified. The record for the fastest confirmed level flight by a bird is 111.5 km/h (69.3 mph) held by the common swift.\n",
"\n",
"Page: Ostrich\n",
"Summary: Ostriches are large flightless birds. They are the heaviest and largest living birds, with adult common ostriches weighing anywhere between 63.5 and 145 kilograms and laying the largest eggs of any living land animal. With the ability to run at 70 km/h (43.5 mph), they are the fastest birds on land. They are farmed worldwide, with significant industries in the Philippines and in Namibia. Ostrich leather is a lucrative commodity, and the large feathers are used as plumes for the decoration of ceremonial headgear. Ostrich eggs have been used by humans for millennia.\n",
"Ostriches are of the genus Struthio in the order Struthioniformes, part of the infra-class Palaeognathae, a diverse group of flightless birds also known as ratites that includes the emus, rheas, cassowaries, kiwis and the extinct elephant birds and moas. There are two living species of ostrich: the common ostrich, native to large areas of sub-Saharan Africa, and the Somali ostrich, native to the Horn of Africa. The common ostrich was historically native to the Arabian Peninsula, and ostriches were present across Asia as far east as China and Mongolia during the Late Pleistocene and possibly into the Holocene.\u001b[0m\u001b[32;1m\u001b[1;3m### Hummingbird's Scientific Name\n",
"The scientific name for the bee hummingbird, which is the smallest known bird and a species of hummingbird, is **Mellisuga helenae**. It is native to Cuba.\n",
"\n",
"### Fastest Bird Species\n",
"The fastest bird in terms of airspeed is the **peregrine falcon**, which can exceed speeds of 320 km/h (200 mph) during its diving flight. In level flight, the fastest confirmed speed is held by the **common swift**, which can fly at 111.5 km/h (69.3 mph).\u001b[0m\n",
"Page: Falcon\n",
"Summary: Falcons () are birds of prey in the genus Falco, which includes about 40 species. Falcons are widely distributed on all continents of the world except Antarctica, though closely related raptors did occur there in the Eocene.\n",
"Adult falcons have thin, tapered wings, which enable them to fly at high speed and change direction rapidly. Fledgling falcons, in their first year of flying, have longer flight feathers, which make their configuration more like that of a general-purpose bird such as a broad wing. This makes flying easier while learning the exceptional skills required to be effective hunters as adults.\n",
"The falcons are the largest genus in the Falconinae subfamily of Falconidae, which itself also includes another subfamily comprising caracaras and a few other species. All these birds kill with their beaks, using a tomial \"tooth\" on the side of their beaks—unlike the hawks, eagles, and other birds of prey in the Accipitridae, which use their feet.\n",
"The largest falcon is the gyrfalcon at up to 65 cm in length. The smallest falcon species is the pygmy falcon, which measures just 20 cm. As with hawks and owls, falcons exhibit sexual dimorphism, with the females typically larger than the males, thus allowing a wider range of prey species.\n",
"Some small falcons with long, narrow wings are called \"hobbies\" and some which hover while hunting are called \"kestrels\".\n",
"As is the case with many birds of prey, falcons have exceptional powers of vision; the visual acuity of one species has been measured at 2.6 times that of a normal human. Peregrine falcons have been recorded diving at speeds of 320 km/h (200 mph), making them the fastest-moving creatures on Earth; the fastest recorded dive attained a vertical speed of 390 km/h (240 mph).\u001b[0m\u001b[32;1m\u001b[1;3mThe scientific name for a hummingbird is Trochilidae. The fastest bird species is the peregrine falcon (Falco peregrinus), which can exceed speeds of 320 km/h (200 mph) in its dives.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Total Tokens: 1583\n",
"Prompt Tokens: 1412\n",
"Completion Tokens: 171\n",
"Total Cost (USD): $0.019250000000000003\n"
"Total Tokens: 1787\n",
"Prompt Tokens: 1687\n",
"Completion Tokens: 100\n",
"Total Cost (USD): $0.0009935\n"
]
}
],
@@ -298,19 +561,19 @@
},
{
"cell_type": "code",
"execution_count": 1,
"id": "4a3eced5-2ff7-49a7-a48b-768af8658323",
"execution_count": 12,
"id": "1837c807-136a-49d8-9c33-060e58dc16d2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tokens Used: 0\n",
"\tPrompt Tokens: 0\n",
"\tCompletion Tokens: 0\n",
"Tokens Used: 96\n",
"\tPrompt Tokens: 26\n",
"\tCompletion Tokens: 70\n",
"Successful Requests: 2\n",
"Total Cost (USD): $0.0\n"
"Total Cost (USD): $0.001888\n"
]
}
],
@@ -364,7 +627,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -75,6 +75,31 @@ Otherwise you can initialize without any params:
from langchain_cohere import CohereEmbeddings
embeddings_model = CohereEmbeddings()
```
</TabItem>
<TabItem value="huggingface" label="Hugging Face">
To start we'll need to install the Hugging Face partner package:
```bash
pip install langchain-huggingface
```
You can then load any [Sentence Transformers model](https://huggingface.co/models?library=sentence-transformers) from the Hugging Face Hub.
```python
from langchain_huggingface import HuggingFaceEmbeddings
embeddings_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
```
You can also leave the `model_name` blank to use the default [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) model.
```python
from langchain_huggingface import HuggingFaceEmbeddings
embeddings_model = HuggingFaceEmbeddings()
```
</TabItem>

View File

@@ -174,7 +174,12 @@ LangChain Tools contain a description of the tool (to pass to the language model
- [How to: add ad-hoc tool calling capability to LLMs and chat models](/docs/how_to/tools_prompting)
- [How to: add a human in the loop to tool usage](/docs/how_to/tools_human)
- [How to: handle errors when calling tools](/docs/how_to/tools_error)
- [How to: call tools using multi-modal data](/docs/how_to/tool_calls_multi_modal)
### Multimodal
- [How to: pass multimodal data directly to models](/docs/how_to/multimodal_inputs/)
- [How to: use multimodal prompts](/docs/how_to/multimodal_prompts/)
### Agents

View File

@@ -60,7 +60,7 @@
" * document addition by id (`add_documents` method with `ids` argument)\n",
" * delete by id (`delete` method with `ids` argument)\n",
"\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `Yellowbrick`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
" \n",
"## Caution\n",
"\n",

View File

@@ -2,169 +2,226 @@
"cells": [
{
"cell_type": "markdown",
"id": "e5715368",
"id": "90dff237-bc28-4185-a2c0-d5203bbdeacd",
"metadata": {},
"source": [
"# How to track token usage for LLMs\n",
"\n",
"This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API.\n",
"Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n",
"\n",
"Let's first look at an extremely simple example of tracking token usage for a single LLM call."
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [LLMs](/docs/concepts/#llms)\n",
":::\n",
"\n",
"## Using LangSmith\n",
"\n",
"You can use [LangSmith](https://www.langchain.com/langsmith) to help track token usage in your LLM application. See the [LangSmith quick start guide](https://docs.smith.langchain.com/).\n",
"\n",
"## Using callbacks\n",
"\n",
"There are some API-specific callback context managers that allow you to track token usage across multiple calls. You'll need to check whether such an integration is available for your particular model.\n",
"\n",
"If such an integration is not available for your model, you can create a custom callback manager by adapting the implementation of the [OpenAI callback manager](https://api.python.langchain.com/en/latest/_modules/langchain_community/callbacks/openai_info.html#OpenAICallbackHandler).\n",
"\n",
"### OpenAI\n",
"\n",
"Let's first look at an extremely simple example of tracking token usage for a single Chat model call.\n",
"\n",
":::{.callout-danger}\n",
"\n",
"The callback handler does not currently support streaming token counts for legacy language models (e.g., `langchain_openai.OpenAI`). For support in a streaming context, refer to the corresponding guide for chat models [here](/docs/how_to/chat_token_usage_tracking).\n",
"\n",
":::"
]
},
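If no such context manager exists for your provider, a custom handler along the following lines can be adapted. This is only a rough sketch, not the OpenAI handler itself, and it assumes the provider reports usage under `llm_output["token_usage"]` with OpenAI-style keys:

```python
# Rough sketch of a custom usage-tracking handler. Assumes the provider
# populates llm_output["token_usage"]; adapt the key names to your API.
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult


class UsageTrackingHandler(BaseCallbackHandler):
    def __init__(self) -> None:
        self.prompt_tokens = 0
        self.completion_tokens = 0
        self.total_tokens = 0

    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        # Accumulate usage reported by the provider for each completed call.
        usage = (response.llm_output or {}).get("token_usage", {})
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)
        self.total_tokens += usage.get("total_tokens", 0)


# Example usage (handler name and wiring are illustrative):
# handler = UsageTrackingHandler()
# llm.invoke("Tell me a joke", config={"callbacks": [handler]})
# print(handler.total_tokens)
```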
{
"cell_type": "markdown",
"id": "f790edd9-823e-4bc5-befa-e9529c7237a0",
"metadata": {},
"source": [
"### Single call"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9455db35",
"id": "2eebbee2-6ca1-4fa8-a3aa-0376888ceefb",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything.\n",
"---\n",
"\n",
"Total Tokens: 18\n",
"Prompt Tokens: 4\n",
"Completion Tokens: 14\n",
"Total Cost (USD): $3.4e-05\n"
]
}
],
"source": [
"from langchain_community.callbacks import get_openai_callback\n",
"from langchain_openai import OpenAI"
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")\n",
"\n",
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" print(result)\n",
" print(\"---\")\n",
"print()\n",
"\n",
"print(f\"Total Tokens: {cb.total_tokens}\")\n",
"print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
"print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
"print(f\"Total Cost (USD): ${cb.total_cost}\")"
]
},
{
"cell_type": "markdown",
"id": "7df3be35-dd97-4e3a-bd51-52434ab2249d",
"metadata": {},
"source": [
"### Multiple calls\n",
"\n",
"Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence to a chain. This will also work for an agent which may use multiple steps."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d1c55cc9",
"id": "3ec10419-294c-44bf-af85-86aabf457cb6",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"Why did the chicken go to the seance?\n",
"\n",
"To talk to the other side of the road!\n",
"--\n",
"\n",
"\n",
"Why did the fish need a lawyer?\n",
"\n",
"Because it got caught in a net!\n",
"\n",
"---\n",
"Total Tokens: 50\n",
"Prompt Tokens: 12\n",
"Completion Tokens: 38\n",
"Total Cost (USD): $9.400000000000001e-05\n"
]
}
],
"source": [
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\", n=2, best_of=2)"
"from langchain_community.callbacks import get_openai_callback\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")\n",
"\n",
"template = PromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"chain = template | llm\n",
"\n",
"with get_openai_callback() as cb:\n",
" response = chain.invoke({\"topic\": \"birds\"})\n",
" print(response)\n",
" response = chain.invoke({\"topic\": \"fish\"})\n",
" print(\"--\")\n",
" print(response)\n",
"\n",
"\n",
"print()\n",
"print(\"---\")\n",
"print(f\"Total Tokens: {cb.total_tokens}\")\n",
"print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
"print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
"print(f\"Total Cost (USD): ${cb.total_cost}\")"
]
},
{
"cell_type": "markdown",
"id": "ad7a3fba-9fac-4222-8f87-d1d276d27d6e",
"metadata": {
"tags": []
},
"source": [
"## Streaming\n",
"\n",
":::{.callout-danger}\n",
"\n",
"`get_openai_callback` does not currently support streaming token counts for legacy language models (e.g., `langchain_openai.OpenAI`). If you want to count tokens correctly in a streaming context, there are a number of options:\n",
"\n",
"- Use chat models as described in [this guide](/docs/how_to/chat_token_usage_tracking);\n",
"- Implement a [custom callback handler](/docs/how_to/custom_callbacks/) that uses appropriate tokenizers to count the tokens;\n",
"- Use a monitoring platform such as [LangSmith](https://www.langchain.com/langsmith).\n",
":::\n",
"\n",
"Note that when using legacy language models in a streaming context, token counts are not updated:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "31667d54",
"metadata": {},
"id": "cd61ed79-7858-49bb-afb5-d41291f597ba",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tokens Used: 37\n",
"\tPrompt Tokens: 4\n",
"\tCompletion Tokens: 33\n",
"Successful Requests: 1\n",
"Total Cost (USD): $7.2e-05\n"
"\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything!\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything.\n",
"---\n",
"\n",
"Total Tokens: 0\n",
"Prompt Tokens: 0\n",
"Completion Tokens: 0\n",
"Total Cost (USD): $0.0\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" print(cb)"
]
},
{
"cell_type": "markdown",
"id": "c0ab6d27",
"metadata": {},
"source": [
"Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e09420f4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"72\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" result2 = llm.invoke(\"Tell me a joke\")\n",
" print(cb.total_tokens)"
]
},
{
"cell_type": "markdown",
"id": "d8186e7b",
"metadata": {},
"source": [
"If a chain or agent with multiple steps in it is used, it will track all those steps."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "5d1125c6",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentType, initialize_agent, load_tools\n",
"from langchain_community.callbacks import get_openai_callback\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n",
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2f98c536",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\n",
"Action: Search\n",
"Action Input: \"Olivia Wilde boyfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m[\"Olivia Wilde and Harry Styles took fans by surprise with their whirlwind romance, which began when they met on the set of Don't Worry Darling.\", 'Olivia Wilde started dating Harry Styles after ending her years-long engagement to Jason Sudeikis — see their relationship timeline.', 'Olivia Wilde and Harry Styles were spotted early on in their relationship walking around London. (. Image ...', \"Looks like Olivia Wilde and Jason Sudeikis are starting 2023 on good terms. Amid their highly publicized custody battle and the actress' ...\", 'The two started dating after Wilde split up with actor Jason Sudeikisin 2020. However, their relationship came to an end last November.', \"Olivia Wilde and Harry Styles started dating during the filming of Don't Worry Darling. While the movie got a lot of backlash because of the ...\", \"Here's what we know so far about Harry Styles and Olivia Wilde's relationship.\", 'Olivia and the Grammy winner kept their romance out of the spotlight as their relationship began just two months after her split from ex-fiancé ...', \"Harry Styles and Olivia Wilde first met on the set of Don't Worry Darling and stepped out as a couple in January 2021. Relive all their biggest relationship ...\"]\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m Harry Styles is Olivia Wilde's boyfriend.\n",
"Action: Search\n",
"Action Input: \"Harry Styles age\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m29 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 29 raised to the 0.23 power.\n",
"Action: Calculator\n",
"Action Input: 29^0.23\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 2.169459462491557\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Total Tokens: 2205\n",
"Prompt Tokens: 2053\n",
"Completion Tokens: 152\n",
"Total Cost (USD): $0.0441\n"
]
}
],
"source": [
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")\n",
"\n",
"with get_openai_callback() as cb:\n",
" response = agent.run(\n",
" \"Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?\"\n",
" )\n",
" print(f\"Total Tokens: {cb.total_tokens}\")\n",
" print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
" print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
" print(f\"Total Cost (USD): ${cb.total_cost}\")"
" for chunk in llm.stream(\"Tell me a joke\"):\n",
" print(chunk, end=\"\", flush=True)\n",
" print(result)\n",
" print(\"---\")\n",
"print()\n",
"\n",
"print(f\"Total Tokens: {cb.total_tokens}\")\n",
"print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
"print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
"print(f\"Total Cost (USD): ${cb.total_cost}\")"
]
},
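As noted above, one workaround for streaming with legacy LLMs is a custom callback handler that counts tokens with a tokenizer. A rough sketch using `tiktoken` follows; the `cl100k_base` encoding is an assumption, and re-tokenizing chunk by chunk only approximates the true completion token count:

```python
# Rough sketch: count streamed completion tokens with a tokenizer.
# Assumes tiktoken's cl100k_base encoding; pick the encoding matching your model.
import tiktoken
from langchain_core.callbacks import BaseCallbackHandler


class StreamedTokenCounter(BaseCallbackHandler):
    def __init__(self, encoding_name: str = "cl100k_base") -> None:
        self.encoding = tiktoken.get_encoding(encoding_name)
        self.completion_tokens = 0

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Each streamed chunk is tokenized separately, so boundaries may differ
        # slightly from tokenizing the full completion at once.
        self.completion_tokens += len(self.encoding.encode(token))


# Example usage (illustrative):
# counter = StreamedTokenCounter()
# for chunk in llm.stream("Tell me a joke", config={"callbacks": [counter]}):
#     pass
# print(counter.completion_tokens)
```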
{
"cell_type": "code",
"execution_count": null,
"id": "80ca77a3",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -183,7 +240,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -73,7 +73,6 @@
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_openai.chat_models import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
@@ -147,8 +146,18 @@
"id": "01acb505-3fd3-4ab4-9f04-5ea07e81542e",
"metadata": {},
"source": [
":::info\n",
"\n",
"Note that we've specified `input_messages_key` (the key to be treated as the latest input message) and `history_messages_key` (the key to add historical messages to).\n",
"\n",
":::"
]
},
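For reference, a minimal sketch of how those keys are typically wired up is shown below; the in-memory `ChatMessageHistory` store and the prompt/model construction mirror the cells earlier in this notebook but are reproduced here as assumptions rather than the exact code:

```python
# Minimal sketch (assumed wiring, mirroring the earlier cells): the new user
# turn arrives under "input" and prior turns are injected under "history".
from langchain_anthropic import ChatAnthropic
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You're an assistant who's good at {ability}"),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{input}"),
    ]
)
runnable = prompt | ChatAnthropic(model="claude-3-haiku-20240307")

store = {}  # session_id -> chat history


def get_session_history(session_id: str) -> BaseChatMessageHistory:
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]


with_message_history = RunnableWithMessageHistory(
    runnable,
    get_session_history,
    input_messages_key="input",  # key treated as the latest input message
    history_messages_key="history",  # key the historical messages are added to
)
```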
{
"cell_type": "markdown",
"id": "35222c30",
"metadata": {},
"source": [
"When invoking this new runnable, we specify the corresponding chat history via a configuration parameter:"
]
},
@@ -161,7 +170,7 @@
{
"data": {
"text/plain": [
"AIMessage(content='Cosine is a trigonometric function that represents the ratio of the adjacent side to the hypotenuse of a right triangle.', response_metadata={'id': 'msg_017rAM9qrBTSdJ5i1rwhB7bT', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 32, 'output_tokens': 31}}, id='run-65e94a5e-a804-40de-ba88-d01b6cd06864-0')"
"AIMessage(content='Cosine is a trigonometric function that represents the ratio of the adjacent side to the hypotenuse of a right triangle.', response_metadata={'id': 'msg_01DH8iRBELVbF3sqM8U5sk8A', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 32, 'output_tokens': 31}}, id='run-e07fc012-a4f6-4e47-8ef8-250f296eba5b-0')"
]
},
"execution_count": 4,
@@ -185,7 +194,7 @@
{
"data": {
"text/plain": [
"AIMessage(content='Cosine is a trigonometric function that represents the ratio of the adjacent side to the hypotenuse of a right triangle.', response_metadata={'id': 'msg_017hK1Q63ganeQZ9wdeqruLP', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 68, 'output_tokens': 31}}, id='run-a42177ef-b04a-4968-8606-446fb465b943-0')"
"AIMessage(content='The inverse of the cosine function is called the arccosine or inverse cosine.', response_metadata={'id': 'msg_015TeeRQBvTvc7XG1JxYqZyq', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 72, 'output_tokens': 22}}, id='run-32ae22ea-3b2f-4d38-8c8a-cb8702e2f3e7-0')"
]
},
"execution_count": 5,
@@ -196,11 +205,31 @@
"source": [
"# Remembers\n",
"with_message_history.invoke(\n",
" {\"ability\": \"math\", \"input\": \"What?\"},\n",
" {\"ability\": \"math\", \"input\": \"What is its inverse called?\"},\n",
" config={\"configurable\": {\"session_id\": \"abc123\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e0c651e5",
"metadata": {},
"source": [
":::info\n",
"\n",
"Note that in this case the context is preserved via the chat history for the provided `session_id`, so the model knows that \"it\" refers to \"cosine\" in this case.\n",
"\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "a44f8d5f",
"metadata": {},
"source": [
"Now let's try a different `session_id`"
]
},
{
"cell_type": "code",
"execution_count": 6,
@@ -210,7 +239,7 @@
{
"data": {
"text/plain": [
"AIMessage(content=\"I'm an AI assistant skilled in mathematics. How can I help you with a math-related task?\", response_metadata={'id': 'msg_01AYwfQ6SH5qz8ZQMW3nYtGU', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 28, 'output_tokens': 24}}, id='run-c57d93e3-305f-4c0e-bdb9-ef82f5b49f61-0')"
"AIMessage(content='The inverse of a function is the function that undoes the original function.', response_metadata={'id': 'msg_01M8WbHWg2sjWTz3m3NKqZuF', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 32, 'output_tokens': 18}}, id='run-b64c73d6-03ee-4b0a-85e0-34beb45408d4-0')"
]
},
"execution_count": 6,
@@ -221,11 +250,27 @@
"source": [
"# New session_id --> does not remember.\n",
"with_message_history.invoke(\n",
" {\"ability\": \"math\", \"input\": \"What?\"},\n",
" {\"ability\": \"math\", \"input\": \"What is its inverse called?\"},\n",
" config={\"configurable\": {\"session_id\": \"def234\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "5416e195",
"metadata": {},
"source": [
"When we pass a different `session_id`, we start a new chat history, so the model does not know what \"it\" refers to."
]
},
{
"cell_type": "markdown",
"id": "a6710e65",
"metadata": {},
"source": [
"### Customization"
]
},
{
"cell_type": "markdown",
"id": "d29497be-3366-408d-bbb9-d4a8bf4ef37c",
@@ -243,7 +288,7 @@
{
"data": {
"text/plain": [
"AIMessage(content='Hello! How can I assist you with math today?', response_metadata={'id': 'msg_01UdhnwghuSE7oRM57STFhHL', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 27, 'output_tokens': 14}}, id='run-3d53f67a-4ea7-4d78-8e67-37db43d4af5d-0')"
"AIMessage(content=\"Why can't a bicycle stand up on its own? It's two-tired!\", response_metadata={'id': 'msg_011qHi8pvbNkKhRb9XYRm2kc', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 30, 'output_tokens': 20}}, id='run-5d1d5b5a-ccec-4c2c-b11a-f1953dbe85a3-0')"
]
},
"execution_count": 7,
@@ -289,11 +334,69 @@
")\n",
"\n",
"with_message_history.invoke(\n",
" {\"ability\": \"math\", \"input\": \"Hello\"},\n",
" {\"ability\": \"jokes\", \"input\": \"Tell me a joke\"},\n",
" config={\"configurable\": {\"user_id\": \"123\", \"conversation_id\": \"1\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "4f282883",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The joke was about a bicycle not being able to stand up on its own because it\\'s \"two-tired\" (too tired).', response_metadata={'id': 'msg_01LbrkfidZgseBMxxRjQXJQH', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 59, 'output_tokens': 30}}, id='run-8b2ca810-77d7-44b8-b27b-677e0062b19a-0')"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# remembers\n",
"with_message_history.invoke(\n",
" {\"ability\": \"jokes\", \"input\": \"What was the joke about?\"},\n",
" config={\"configurable\": {\"user_id\": \"123\", \"conversation_id\": \"1\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "fc122c18",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"I'm afraid I don't have enough context to provide a relevant joke. As an AI assistant, I don't actually have pre-programmed jokes. I'd be happy to try generating a humorous response if you provide more details about the context.\", response_metadata={'id': 'msg_01PgSp46hNJnKyNfNKPDauQ9', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 32, 'output_tokens': 54}}, id='run-ed202892-27e4-4da9-a26d-e0dc16b10940-0')"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# New user_id --> does not remember\n",
"with_message_history.invoke(\n",
" {\"ability\": \"jokes\", \"input\": \"What was the joke about?\"},\n",
" config={\"configurable\": {\"user_id\": \"456\", \"conversation_id\": \"1\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3ce37565",
"metadata": {},
"source": [
"Note that in this case the context was preserved for the same `user_id`, but once we changed it, the new chat history was started, even though the `conversation_id` was the same."
]
},
{
"cell_type": "markdown",
"id": "18f1a459-3f88-4ee6-8542-76a907070dd6",
@@ -314,17 +417,17 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 10,
"id": "17733d4f-3a32-4055-9d44-5d58b9446a26",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output_message': AIMessage(content='Simone de Beauvoir was a prominent French existentialist philosopher who had some key beliefs about free will:\\n\\n1. Radical Freedom: De Beauvoir believed that humans have radical freedom - the ability to choose and define themselves through their actions. She rejected determinism and believed that we are not simply products of our biology, upbringing, or social circumstances.\\n\\n2. Ambiguity of the Human Condition: However, de Beauvoir also recognized the ambiguity of the human condition. While we have radical freedom, we are also situated beings constrained by our facticity (our given circumstances and limitations). This creates a tension and anguish in the human experience.\\n\\n3. Responsibility and Bad Faith: With this radical freedom comes great responsibility. De Beauvoir criticized \"bad faith\" - the tendency of people to deny their freedom and responsibility by making excuses or hiding behind social roles and norms.\\n\\n4. Ethical Engagement: For de Beauvoir, true freedom and authenticity required ethical engagement with the world and with others. We must take responsibility for our choices and their impact on others.\\n\\nOverall, de Beauvoir saw free will as a core aspect of the human condition, but one that is fraught with difficulty and ambiguity. Her philosophy emphasized the importance of owning our freedom and using it to ethically shape our lives and world.', response_metadata={'id': 'msg_01A78LdxxsCm6uR8vcAdMQBt', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 20, 'output_tokens': 293}}, id='run-9447a229-5d17-4b20-a48b-7507b78b225a-0')}"
"{'output_message': AIMessage(content='Simone de Beauvoir was a prominent French existentialist philosopher who had some key beliefs about free will:\\n\\n1. Radical Freedom: De Beauvoir believed that humans have radical freedom - the ability to choose and define themselves through their actions. She rejected determinism and believed that we are not simply products of our biology, upbringing, or social circumstances.\\n\\n2. Ambiguity of the Human Condition: However, de Beauvoir also recognized the ambiguity of the human condition. While we have radical freedom, we are also situated beings constrained by our facticity (our given circumstances and limitations). This creates a tension and anguish in the human experience.\\n\\n3. Responsibility and Bad Faith: With radical freedom comes great responsibility. De Beauvoir criticized \"bad faith\" - the denial or avoidance of this responsibility by making excuses or pretending we lack free will. She believed we must courageously embrace our freedom and the burdens it entails.\\n\\n4. Ethical Engagement: For de Beauvoir, freedom is not just an abstract philosophical concept, but something that must be exercised through ethical engagement with the world and others. Our choices and actions have moral implications that we must grapple with.\\n\\nOverall, de Beauvoir\\'s perspective on free will was grounded in existentialist principles - the belief that we are fundamentally free, yet this freedom is fraught with difficulty and responsibility. Her views emphasized the centrality of human agency and the ethical dimensions of our choices.', response_metadata={'id': 'msg_01QFXHx74GSzcMWnWc8YxYSJ', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 20, 'output_tokens': 324}}, id='run-752513bc-2b4f-4cad-87f0-b96fee6ebe43-0')}"
]
},
"execution_count": 9,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
@@ -356,17 +459,17 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 11,
"id": "efb57ef5-91f9-426b-84b9-b77f071a9dd7",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output_message': AIMessage(content=\"Simone de Beauvoir's views on free will were quite similar, but not identical, to those of her long-time partner Jean-Paul Sartre, another prominent existentialist philosopher.\\n\\nKey similarities:\\n\\n1. Radical Freedom: Both de Beauvoir and Sartre believed that humans have radical, unconditioned freedom to choose and define themselves.\\n\\n2. Rejection of Determinism: They both rejected deterministic views that see humans as products of their circumstances or biology.\\n\\n3. Emphasis on Responsibility: They agreed that with radical freedom comes great responsibility for one's choices and their consequences.\\n\\nKey differences:\\n\\n1. Ambiguity of the Human Condition: While Sartre emphasized the pure, unconditioned nature of human freedom, de Beauvoir recognized the ambiguity of the human condition - our freedom is constrained by our facticity (circumstances).\\n\\n2. Ethical Engagement: De Beauvoir placed more emphasis on the importance of ethical engagement with the world and others, whereas Sartre's focus was more on the individual's freedom.\\n\\n3. Gendered Perspectives: As a woman, de Beauvoir's perspective was more attuned to issues of gender and the lived experience of women, which shaped her views on freedom and ethics.\\n\\nSo in summary, while Sartre and de Beauvoir shared a core existentialist philosophy centered on radical human freedom, de Beauvoir's thought incorporated a greater recognition of the ambiguity and ethical dimensions of the human condition. This reflected her distinct feminist and phenomenological approach.\", response_metadata={'id': 'msg_01U6X3KNPufVg3zFvnx24eKq', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 324, 'output_tokens': 338}}, id='run-c4a984bd-33c6-4e26-a4d1-d58b666d065c-0')}"
"{'output_message': AIMessage(content='Simone de Beauvoir\\'s views on free will were quite similar to those of her long-time partner and fellow existentialist philosopher, Jean-Paul Sartre. There are some key parallels and differences:\\n\\nSimilarities:\\n\\n1. Radical Freedom: Both de Beauvoir and Sartre believed that humans have radical, unconditioned freedom to choose and define themselves.\\n\\n2. Rejection of Determinism: They both rejected deterministic views that see humans as products of their circumstances or nature.\\n\\n3. Emphasis on Responsibility: They agreed that with radical freedom comes great responsibility for one\\'s choices and actions.\\n\\n4. Critique of \"Bad Faith\": Both philosophers criticized the tendency of people to deny or avoid their freedom through self-deception and making excuses.\\n\\nDifferences:\\n\\n1. Gendered Perspectives: While Sartre developed a more gender-neutral existentialist philosophy, de Beauvoir brought a distinctly feminist lens, exploring the unique challenges and experiences of women\\'s freedom.\\n\\n2. Ethical Engagement: De Beauvoir placed more emphasis on the importance of ethical engagement with the world and others, whereas Sartre\\'s focus was more individualistic.\\n\\n3. Ambiguity of the Human Condition: De Beauvoir was more attuned to the ambiguity and tensions inherent in the human condition, whereas Sartre\\'s views were sometimes seen as more absolutist.\\n\\n4. Influence of Phenomenology: De Beauvoir was more influenced by the phenomenological tradition, which shaped her understanding of embodied, situated freedom.\\n\\nOverall, while Sartre and de Beauvoir shared a core existentialist framework, de Beauvoir\\'s unique feminist perspective and emphasis on ethical engagement with others distinguished her views on free will and the human condition.', response_metadata={'id': 'msg_01BEANW4VX6cUWYjkv3CanLz', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 355, 'output_tokens': 388}}, id='run-e786ab3a-1a42-45f3-94a3-f0c591430df3-0')}"
]
},
"execution_count": 10,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@@ -396,7 +499,7 @@
"data": {
"text/plain": [
"RunnableWithMessageHistory(bound=RunnableBinding(bound=RunnableBinding(bound=RunnableLambda(_enter_history), config={'run_name': 'load_history'})\n",
"| RunnableBinding(bound=ChatAnthropic(model='claude-3-haiku-20240307', temperature=0.0, anthropic_api_url='https://api.anthropic.com', anthropic_api_key=SecretStr('**********'), _client=<anthropic.Anthropic object at 0x1077ff5b0>, _async_client=<anthropic.AsyncAnthropic object at 0x1321c71f0>), config_factories=[<function Runnable.with_listeners.<locals>.<lambda> at 0x1473dd000>]), config={'run_name': 'RunnableWithMessageHistory'}), get_session_history=<function get_session_history at 0x1374c7be0>, history_factory_config=[ConfigurableFieldSpec(id='session_id', annotation=<class 'str'>, name='Session ID', description='Unique identifier for a session.', default='', is_shared=True, dependencies=None)])"
"| RunnableBinding(bound=ChatAnthropic(model='claude-3-haiku-20240307', temperature=0.0, anthropic_api_url='https://api.anthropic.com', anthropic_api_key=SecretStr('**********'), _client=<anthropic.Anthropic object at 0x105682720>, _async_client=<anthropic.AsyncAnthropic object at 0x106a08fe0>), config_factories=[<function Runnable.with_listeners.<locals>.<lambda> at 0x106aeef20>]), config={'run_name': 'RunnableWithMessageHistory'}), get_session_history=<function get_session_history at 0x106aee520>, history_factory_config=[ConfigurableFieldSpec(id='session_id', annotation=<class 'str'>, name='Session ID', description='Unique identifier for a session.', default='', is_shared=True, dependencies=None)])"
]
},
"execution_count": 12,
@@ -432,7 +535,7 @@
" input_messages: RunnableBinding(bound=RunnableLambda(_enter_history), config={'run_name': 'load_history'})\n",
"}), config={'run_name': 'insert_history'})\n",
"| RunnableBinding(bound=RunnableLambda(itemgetter('input_messages'))\n",
" | ChatAnthropic(model='claude-3-haiku-20240307', temperature=0.0, anthropic_api_url='https://api.anthropic.com', anthropic_api_key=SecretStr('**********'), _client=<anthropic.Anthropic object at 0x1077ff5b0>, _async_client=<anthropic.AsyncAnthropic object at 0x1321c71f0>), config_factories=[<function Runnable.with_listeners.<locals>.<lambda> at 0x1473df6d0>]), config={'run_name': 'RunnableWithMessageHistory'}), get_session_history=<function get_session_history at 0x1374c7be0>, input_messages_key='input_messages', history_factory_config=[ConfigurableFieldSpec(id='session_id', annotation=<class 'str'>, name='Session ID', description='Unique identifier for a session.', default='', is_shared=True, dependencies=None)])"
" | ChatAnthropic(model='claude-3-haiku-20240307', temperature=0.0, anthropic_api_url='https://api.anthropic.com', anthropic_api_key=SecretStr('**********'), _client=<anthropic.Anthropic object at 0x105682720>, _async_client=<anthropic.AsyncAnthropic object at 0x106a08fe0>), config_factories=[<function Runnable.with_listeners.<locals>.<lambda> at 0x106aef560>]), config={'run_name': 'RunnableWithMessageHistory'}), get_session_history=<function get_session_history at 0x106aee520>, input_messages_key='input_messages', history_factory_config=[ConfigurableFieldSpec(id='session_id', annotation=<class 'str'>, name='Session ID', description='Unique identifier for a session.', default='', is_shared=True, dependencies=None)])"
]
},
"execution_count": 13,
@@ -478,7 +581,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 14,
"id": "477d04b3-c2b6-4ba5-962f-492c0d625cd5",
"metadata": {},
"outputs": [],
@@ -499,7 +602,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 15,
"id": "cd6a250e-17fe-4368-a39d-1fe6b2cbde68",
"metadata": {},
"outputs": [],
@@ -522,7 +625,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 16,
"id": "2afc1556-8da1-4499-ba11-983b66c58b18",
"metadata": {},
"outputs": [],
@@ -541,7 +644,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 17,
"id": "ca7c64d8-e138-4ef8-9734-f82076c47d80",
"metadata": {},
"outputs": [],
@@ -571,17 +674,17 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 18,
"id": "a85bcc22-ca4c-4ad5-9440-f94be7318f3e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Cosine is a trigonometric function that represents the ratio of the adjacent side to the hypotenuse in a right triangle.')"
"AIMessage(content='Cosine is a trigonometric function that represents the ratio of the adjacent side to the hypotenuse of a right triangle.', response_metadata={'id': 'msg_01DwU2BD8KPLoXeZ6bZPqxxJ', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 164, 'output_tokens': 31}}, id='run-c2a443c4-79b1-4b07-bb42-5e9112e5bbfc-0')"
]
},
"execution_count": 11,
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
@@ -595,17 +698,17 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 19,
"id": "ab29abd3-751f-41ce-a1b0-53f6b565e79d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The inverse of cosine is the arccosine function, denoted as acos or cos^-1, which gives the angle corresponding to a given cosine value.')"
"AIMessage(content='The inverse of cosine is called arccosine or inverse cosine.', response_metadata={'id': 'msg_01XYH5iCUokxV1UDhUa8xzna', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 202, 'output_tokens': 19}}, id='run-97dda3a2-01e3-42e5-8241-f948e7535ffc-0')"
]
},
"execution_count": 12,
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
@@ -622,7 +725,7 @@
"id": "da3d1feb-b4bb-4624-961c-7db2e1180df7",
"metadata": {},
"source": [
":::{.callout-tip}\n",
":::tip\n",
"\n",
"[Langsmith trace](https://smith.langchain.com/public/bd73e122-6ec1-48b2-82df-e6483dc9cb63/r)\n",
"\n",
@@ -666,7 +769,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.12.3"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,228 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4facdf7f-680e-4d28-908b-2b8408e2a741",
"metadata": {},
"source": [
"# How to pass multimodal data directly to models\n",
"\n",
"Here we demonstrate how to pass multimodal input directly to models. \n",
"We currently expect all input to be passed in the same format as [OpenAI expects](https://platform.openai.com/docs/guides/vision).\n",
"For other model providers that support multimodal input, we have added logic inside the class to convert to the expected format.\n",
"\n",
"In this example we will ask a model to describe an image."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "0d9fd81a-b7f0-445a-8e3d-cfc2d31fdd59",
"metadata": {},
"outputs": [],
"source": [
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "fb896ce9",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o\")"
]
},
{
"cell_type": "markdown",
"id": "4fca4da7",
"metadata": {},
"source": [
"The most commonly supported way to pass in images is to pass it in as a byte string.\n",
"This should work for most model integrations."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9ca1040c",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ec680b6b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The weather in the image appears to be clear and pleasant. The sky is mostly blue with scattered, light clouds, suggesting a sunny day with minimal cloud cover. There is no indication of rain or strong winds, and the overall scene looks bright and calm. The lush green grass and clear visibility further indicate good weather conditions.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": f\"data:image/jpeg;base64,{image_data}\"},\n",
" },\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "8656018e-c56d-47d2-b2be-71e87827f90a",
"metadata": {},
"source": [
"We can feed the image URL directly in a content block of type \"image_url\". Note that only some model providers support this."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a8819cf3-5ddc-44f0-889a-19ca7b7fe77e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The weather in the image appears to be clear and sunny. The sky is mostly blue with a few scattered clouds, suggesting good visibility and a likely pleasant temperature. The bright sunlight is casting distinct shadows on the grass and vegetation, indicating it is likely daytime, possibly late morning or early afternoon. The overall ambiance suggests a warm and inviting day, suitable for outdoor activities.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "1c470309",
"metadata": {},
"source": [
"We can also pass in multiple images."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "325fb4ca",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Yes, the two images are the same. They both depict a wooden boardwalk extending through a grassy field under a blue sky with light clouds. The scenery, lighting, and composition are identical.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"are these two images the same?\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "71bd28cf-d76c-44e2-a55e-c5f265db986e",
"metadata": {},
"source": [
"## Tool calls\n",
"\n",
"Some multimodal models support [tool calling](/docs/concepts/#functiontool-calling) features as well. To call tools using such models, simply bind tools to them in the [usual way](/docs/how_to/tool_calling), and invoke the model using content blocks of the desired type (e.g., containing image data)."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "cd22ea82-2f93-46f9-9f7a-6aaf479fcaa9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'name': 'weather_tool', 'args': {'weather': 'sunny'}, 'id': 'call_BSX4oq4SKnLlp2WlzDhToHBr'}]\n"
]
}
],
"source": [
"from typing import Literal\n",
"\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def weather_tool(weather: Literal[\"sunny\", \"cloudy\", \"rainy\"]) -> None:\n",
" \"\"\"Describe the weather\"\"\"\n",
" pass\n",
"\n",
"\n",
"model_with_tools = model.bind_tools([weather_tool])\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" ],\n",
")\n",
"response = model_with_tools.invoke([message])\n",
"print(response.tool_calls)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,184 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4facdf7f-680e-4d28-908b-2b8408e2a741",
"metadata": {},
"source": [
"# How to use multimodal prompts\n",
"\n",
"Here we demonstrate how to use prompt templates to format multimodal inputs to models. \n",
"\n",
"In this example we will ask a model to describe an image."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "0d9fd81a-b7f0-445a-8e3d-cfc2d31fdd59",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"\n",
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2671f995",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "4ee35e4f",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"Describe the image provided\"),\n",
" (\n",
" \"user\",\n",
" [{\"type\": \"image_url\", \"image_url\": \"data:image/jpeg;base64,{image_data}\"}],\n",
" ),\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "089f75c2",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "02744b06",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The image depicts a sunny day with a beautiful blue sky filled with scattered white clouds. The sky has varying shades of blue, ranging from a deeper hue near the horizon to a lighter, almost pale blue higher up. The white clouds are fluffy and scattered across the expanse of the sky, creating a peaceful and serene atmosphere. The lighting and cloud patterns suggest pleasant weather conditions, likely during the daytime hours on a mild, sunny day in an outdoor natural setting.\n"
]
}
],
"source": [
"response = chain.invoke({\"image_data\": image_data})\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "e9b9ebf6",
"metadata": {},
"source": [
"We can also pass in multiple images."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "02190ee3",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"compare the two pictures provided\"),\n",
" (\n",
" \"user\",\n",
" [\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": \"data:image/jpeg;base64,{image_data1}\",\n",
" },\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": \"data:image/jpeg;base64,{image_data2}\",\n",
" },\n",
" ],\n",
" ),\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "42af057b",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "513abe00",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The two images provided are identical. Both images feature a wooden boardwalk path extending through a lush green field under a bright blue sky with some clouds. The perspective, colors, and elements in both images are exactly the same.\n"
]
}
],
"source": [
"response = chain.invoke({\"image_data1\": image_data, \"image_data2\": image_data})\n",
"print(response.content)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -250,8 +250,7 @@
"source": [
"## Related\n",
"\n",
"- [Streaming](/docs/how_to/streaming/): Check out the streaming guide to understand the streaming behavior of a chain\n",
"- "
"- [Streaming](/docs/how_to/streaming/): Check out the streaming guide to understand the streaming behavior of a chain\n"
]
}
],

View File

@@ -1,160 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4facdf7f-680e-4d28-908b-2b8408e2a741",
"metadata": {},
"source": [
"# How to call tools with multi-modal data\n",
"\n",
"Here we demonstrate how to call tools with multi-modal data, such as images.\n",
"\n",
"Some multi-modal models, such as those that can reason over images or audio, support [tool calling](/docs/concepts/#functiontool-calling) features as well.\n",
"\n",
"To call tools using such models, simply bind tools to them in the [usual way](/docs/how_to/tool_calling), and invoke the model using content blocks of the desired type (e.g., containing image data).\n",
"\n",
"Below, we demonstrate examples using [OpenAI](/docs/integrations/platforms/openai) and [Anthropic](/docs/integrations/platforms/anthropic). We will use the same image and tool in all cases. Let's first select an image, and build a placeholder tool that expects as input the string \"sunny\", \"cloudy\", or \"rainy\". We will ask the models to describe the weather in the image."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "0d9fd81a-b7f0-445a-8e3d-cfc2d31fdd59",
"metadata": {},
"outputs": [],
"source": [
"from typing import Literal\n",
"\n",
"from langchain_core.tools import tool\n",
"\n",
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\n",
"\n",
"\n",
"@tool\n",
"def weather_tool(weather: Literal[\"sunny\", \"cloudy\", \"rainy\"]) -> None:\n",
" \"\"\"Describe the weather\"\"\"\n",
" pass"
]
},
{
"cell_type": "markdown",
"id": "8656018e-c56d-47d2-b2be-71e87827f90a",
"metadata": {},
"source": [
"## OpenAI\n",
"\n",
"For OpenAI, we can feed the image URL directly in a content block of type \"image_url\":"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a8819cf3-5ddc-44f0-889a-19ca7b7fe77e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'name': 'weather_tool', 'args': {'weather': 'sunny'}, 'id': 'call_mRYL50MtHdeNuNIjSCm5UPmB'}]\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o\").bind_tools([weather_tool])\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.tool_calls)"
]
},
{
"cell_type": "markdown",
"id": "e5738224-1109-4bf8-8976-ff1570dd1d46",
"metadata": {},
"source": [
"Note that we recover tool calls with parsed arguments in LangChain's [standard format](/docs/how_to/tool_calling) in the model response."
]
},
{
"cell_type": "markdown",
"id": "0cee63ff-e09f-4dd8-8323-912edbde94f6",
"metadata": {},
"source": [
"## Anthropic\n",
"\n",
"For Anthropic, we can format a base64-encoded image into a content block of type \"image\", as below:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d90c4590-71c8-42b1-99ff-03a9eca8082e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'name': 'weather_tool', 'args': {'weather': 'sunny'}, 'id': 'toolu_016m9KfknJqx5fVRYk4tkF6s'}]\n"
]
}
],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")\n",
"\n",
"model = ChatAnthropic(model=\"claude-3-sonnet-20240229\").bind_tools([weather_tool])\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\n",
" \"type\": \"image\",\n",
" \"source\": {\n",
" \"type\": \"base64\",\n",
" \"media_type\": \"image/jpeg\",\n",
" \"data\": image_data,\n",
" },\n",
" },\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.tool_calls)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -7,18 +7,24 @@
"id": "cc6caafa"
},
"source": [
"# NVIDIA AI Foundation Endpoints\n",
"# NVIDIA NIMs\n",
"\n",
"The `ChatNVIDIA` class is a LangChain chat model that connects to [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/).\n",
"The `langchain-nvidia-ai-endpoints` package contains LangChain integrations building applications with models on \n",
"NVIDIA NIM inference microservice. NIM supports models across domains like chat, embedding, and re-ranking models \n",
"from the community as well as NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA \n",
"accelerated infrastructure and deployed as a NIM, an easy-to-use, prebuilt containers that deploy anywhere using a single \n",
"command on NVIDIA accelerated infrastructure.\n",
"\n",
"NVIDIA hosted deployments of NIMs are available to test on the [NVIDIA API catalog](https://build.nvidia.com/). After testing, \n",
"NIMs can be exported from NVIDIAs API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud, \n",
"giving enterprises ownership and full control of their IP and AI application.\n",
"\n",
"> [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to NVIDIA hosted API endpoints for NVIDIA AI Foundation Models like Mixtral 8x7B, Llama 2, Stable Diffusion, etc. These models, hosted on the [NVIDIA API catalog](https://build.nvidia.com/), are optimized, tested, and hosted on the NVIDIA AI platform, making them fast and easy to evaluate, further customize, and seamlessly run at peak performance on any accelerated stack.\n",
"> \n",
"> With [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/), you can get quick results from a fully accelerated stack running on [NVIDIA DGX Cloud](https://www.nvidia.com/en-us/data-center/dgx-cloud/). Once customized, these models can be deployed anywhere with enterprise-grade security, stability, and support using [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/).\n",
"> \n",
"> These models can be easily accessed via the [`langchain-nvidia-ai-endpoints`](https://pypi.org/project/langchain-nvidia-ai-endpoints/) package, as shown below.\n",
"NIMs are packaged as container images on a per model basis and are distributed as NGC container images through the NVIDIA NGC Catalog. \n",
"At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.\n",
"\n",
"This example goes over how to use LangChain to interact with and develop LLM-powered systems using the publicly-accessible AI Foundation endpoints."
"This example goes over how to use LangChain to interact with NVIDIA supported via the `ChatNVIDIA` class.\n",
"\n",
"For more information on accessing the chat models through this api, check out the [ChatNVIDIA](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/) documentation."
]
},
{
@@ -50,9 +56,9 @@
"\n",
"**To get started:**\n",
"\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.\n",
"\n",
"2. Click on your model of choice\n",
"2. Click on your model of choice.\n",
"\n",
"3. Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.\n",
"\n",
@@ -69,12 +75,23 @@
"import getpass\n",
"import os\n",
"\n",
"if not os.environ.get(\"NVIDIA_API_KEY\", \"\").startswith(\"nvapi-\"):\n",
" nvapi_key = getpass.getpass(\"Enter your NVIDIA API key: \")\n",
"# del os.environ['NVIDIA_API_KEY'] ## delete key and reset\n",
"if os.environ.get(\"NVIDIA_API_KEY\", \"\").startswith(\"nvapi-\"):\n",
" print(\"Valid NVIDIA_API_KEY already in environment. Delete to reset\")\n",
"else:\n",
" nvapi_key = getpass.getpass(\"NVAPI Key (starts with nvapi-): \")\n",
" assert nvapi_key.startswith(\"nvapi-\"), f\"{nvapi_key[:5]}... is not a valid key\"\n",
" os.environ[\"NVIDIA_API_KEY\"] = nvapi_key"
]
},
{
"cell_type": "markdown",
"id": "af0ce26b",
"metadata": {},
"source": [
"## Working with NVIDIA API Catalog"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -96,6 +113,30 @@
"print(result.content)"
]
},
{
"cell_type": "markdown",
"id": "9d35686b",
"metadata": {},
"source": [
"## Working with NVIDIA NIMs\n",
"When ready to deploy, you can self-host models with NVIDIA NIM—which is included with the NVIDIA AI Enterprise software license—and run them anywhere, giving you ownership of your customizations and full control of your intellectual property (IP) and AI applications.\n",
"\n",
"[Learn more about NIMs](https://developer.nvidia.com/blog/nvidia-nim-offers-optimized-inference-microservices-for-deploying-ai-models-at-scale/)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "49838930",
"metadata": {},
"outputs": [],
"source": [
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"# connect to an embedding NIM running at localhost:8000, specifying a specific model\n",
"llm = ChatNVIDIA(base_url=\"http://localhost:8000/v1\", model=\"meta-llama3-8b-instruct\")"
]
},
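{
"cell_type": "markdown",
"id": "b0f4dd46",
"metadata": {},
"source": [
"Once connected, the locally hosted NIM can be used like any other chat model. Below is a minimal sketch, assuming the NIM configured above is running and serving the requested model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d1a8e2c7",
"metadata": {},
"outputs": [],
"source": [
"# assumes the local NIM above is up and reachable at localhost:8000\n",
"result = llm.invoke(\"Write a short poem about GPUs.\")\n",
"print(result.content)"
]
},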
{
"cell_type": "markdown",
"id": "71d37987-d568-4a73-9d2a-8bd86323f8bf",
@@ -252,81 +293,6 @@
" print(txt, end=\"\")"
]
},
{
"cell_type": "markdown",
"id": "642a618a-faa3-443e-99c3-67b8142f3c51",
"metadata": {},
"source": [
"## Steering LLMs\n",
"\n",
"> [SteerLM-optimized models](https://developer.nvidia.com/blog/announcing-steerlm-a-simple-and-practical-technique-to-customize-llms-during-inference/) supports \"dynamic steering\" of model outputs at inference time.\n",
"\n",
"This lets you \"control\" the complexity, verbosity, and creativity of the model via integer labels on a scale from 0 to 9. Under the hood, these are passed as a special type of assistant message to the model.\n",
"\n",
"The \"steer\" models support this type of input, such as `nemotron_steerlm_8b`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36a96b1a-e3e7-4ae3-b4b0-9331b5eca04f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"llm = ChatNVIDIA(model=\"nemotron_steerlm_8b\")\n",
"# Try making it uncreative and not verbose\n",
"complex_result = llm.invoke(\n",
" \"What's a PB&J?\", labels={\"creativity\": 0, \"complexity\": 3, \"verbosity\": 0}\n",
")\n",
"print(\"Un-creative\\n\")\n",
"print(complex_result.content)\n",
"\n",
"# Try making it very creative and verbose\n",
"print(\"\\n\\nCreative\\n\")\n",
"creative_result = llm.invoke(\n",
" \"What's a PB&J?\", labels={\"creativity\": 9, \"complexity\": 3, \"verbosity\": 9}\n",
")\n",
"print(creative_result.content)"
]
},
{
"cell_type": "markdown",
"id": "75849e7a-2adf-4038-8d9d-8a9e12417789",
"metadata": {},
"source": [
"#### Use within LCEL\n",
"\n",
"The labels are passed as invocation params. You can `bind` these to the LLM using the `bind` method on the LLM to include it within a declarative, functional chain. Below is an example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ae1105c3-2a0c-4db3-916e-24d5e427bd01",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"system\", \"You are a helpful AI assistant named Fred.\"), (\"user\", \"{input}\")]\n",
")\n",
"chain = (\n",
" prompt\n",
" | ChatNVIDIA(model=\"nemotron_steerlm_8b\").bind(\n",
" labels={\"creativity\": 9, \"complexity\": 0, \"verbosity\": 9}\n",
" )\n",
" | StrOutputParser()\n",
")\n",
"\n",
"for txt in chain.stream({\"input\": \"Why is a PB&J?\"}):\n",
" print(txt, end=\"\")"
]
},
{
"cell_type": "markdown",
"id": "7f465ff6-5922-41d8-8abb-1d1e4095cc27",
@@ -334,7 +300,7 @@
"source": [
"## Multimodal\n",
"\n",
"NVIDIA also supports multimodal inputs, meaning you can provide both images and text for the model to reason over. An example model supporting multimodal inputs is `playground_neva_22b`.\n",
"NVIDIA also supports multimodal inputs, meaning you can provide both images and text for the model to reason over. An example model supporting multimodal inputs is `nvidia/neva-22b`.\n",
"\n",
"\n",
"These models accept LangChain's standard image formats, and accept `labels`, similar to the Steering LLMs above. In addition to `creativity`, `complexity`, and `verbosity`, these models support a `quality` toggle.\n",
@@ -367,7 +333,7 @@
"source": [
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"llm = ChatNVIDIA(model=\"playground_neva_22b\")"
"llm = ChatNVIDIA(model=\"nvidia/neva-22b\")"
]
},
{
@@ -500,7 +466,7 @@
"source": [
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"kosmos = ChatNVIDIA(model=\"kosmos_2\")\n",
"kosmos = ChatNVIDIA(model=\"microsoft/kosmos-2\")\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"\n",
@@ -544,7 +510,7 @@
"\n",
"\n",
"## Override the payload passthrough. Default is to pass through the payload as is.\n",
"kosmos = ChatNVIDIA(model=\"kosmos_2\")\n",
"kosmos = ChatNVIDIA(model=\"microsoft/kosmos-2\")\n",
"kosmos.client.payload_fn = drop_streaming_key\n",
"\n",
"kosmos.invoke(\n",
@@ -567,43 +533,6 @@
"For more advanced or custom use-cases (i.e. supporting the diffusion models), you may be interested in leveraging the `NVEModel` client as a requests backbone. The `NVIDIAEmbeddings` class is a good source of inspiration for this. "
]
},
{
"cell_type": "markdown",
"id": "1cd6249a-7ffa-4886-b7e8-5778dc93499e",
"metadata": {},
"source": [
"## RAG: Context models\n",
"\n",
"NVIDIA also has Q&A models that support a special \"context\" chat message containing retrieved context (such as documents within a RAG chain). This is useful to avoid prompt-injecting the model. The `_qa_` models like `nemotron_qa_8b` support this.\n",
"\n",
"**Note:** Only \"user\" (human) and \"context\" chat messages are supported for these models; System or AI messages that would useful in conversational flows are not supported."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f994b4d3-c1b0-4e87-aad0-a7b487e2aa43",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import ChatMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" ChatMessage(\n",
" role=\"context\", content=\"Parrots and Cats have signed the peace accord.\"\n",
" ),\n",
" (\"user\", \"{input}\"),\n",
" ]\n",
")\n",
"llm = ChatNVIDIA(model=\"nemotron_qa_8b\")\n",
"chain = prompt | llm | StrOutputParser()\n",
"chain.invoke({\"input\": \"What was signed?\"})"
]
},
{
"cell_type": "markdown",
"id": "137662a6",
@@ -708,14 +637,6 @@
"source": [
"conversation.invoke(\"Tell me about yourself.\")[\"response\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a719bd3-755d-4a05-bda2-de132bf99314",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -723,9 +644,9 @@
"provenance": []
},
"kernelspec": {
"display_name": "Python (venvoss)",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "venvoss"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -737,7 +658,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
"version": "3.10.13"
}
},
"nbformat": 4,

View File

@@ -54,12 +54,12 @@
"\n",
"Here are a few ways to interact with pulled local models\n",
"\n",
"#### directly in the terminal:\n",
"#### In the terminal:\n",
"\n",
"* All of your local models are automatically served on `localhost:11434`\n",
"* Run `ollama run <name-of-model>` to start interacting via the command line directly\n",
"\n",
"### via an API\n",
"#### Via an API\n",
"\n",
"Send an `application/json` request to the API endpoint of Ollama to interact.\n",
"\n",
@@ -72,9 +72,11 @@
"\n",
"See the Ollama [API documentation](https://github.com/jmorganca/ollama/blob/main/docs/api.md) for all endpoints.\n",
"\n",
"#### via LangChain\n",
"#### Via LangChain\n",
"\n",
"See a typical basic example of using Ollama via the `ChatOllama` chat model in your LangChain application."
"See a typical basic example of using Ollama via the `ChatOllama` chat model in your LangChain application. \n",
"\n",
"View the [API Reference for ChatOllama](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.ollama.ChatOllama.html#langchain_community.chat_models.ollama.ChatOllama) for more."
]
},
{
@@ -105,7 +107,7 @@
"\n",
"# using LangChain Expressive Language chain syntax\n",
"# learn more about the LCEL on\n",
"# /docs/expression_language/why\n",
"# /docs/concepts/#langchain-expression-language-lcel\n",
"chain = prompt | llm | StrOutputParser()\n",
"\n",
"# for brevity, response is printed in terminal\n",
@@ -189,7 +191,7 @@
"\n",
"## Building from source\n",
"\n",
"For up to date instructions on building from source, check the Ollama documentation on [Building from Source](https://github.com/jmorganca/ollama?tab=readme-ov-file#building)"
"For up to date instructions on building from source, check the Ollama documentation on [Building from Source](https://github.com/ollama/ollama?tab=readme-ov-file#building)"
]
},
{
@@ -333,7 +335,7 @@
}
],
"source": [
"pip install --upgrade --quiet pillow"
"!pip install --upgrade --quiet pillow"
]
},
{
@@ -444,6 +446,24 @@
"\n",
"print(query_chain)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Concurrency Features\n",
"\n",
"Ollama supports concurrency inference for a single model, and or loading multiple models simulatenously (at least [version 0.1.33](https://github.com/ollama/ollama/releases)).\n",
"\n",
"Start the Ollama server with:\n",
"\n",
"* `OLLAMA_NUM_PARALLEL`: Handle multiple requests simultaneously for a single model\n",
"* `OLLAMA_MAX_LOADED_MODELS`: Load multiple models simultaneously\n",
"\n",
"Example: `OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=4 ollama serve`\n",
"\n",
"Learn more about configuring Ollama server in [the official guide](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server)."
]
}
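,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a minimal sketch of what concurrent requests could look like from LangChain once the server is started with the variables above. It assumes the `ChatOllama` chat model referenced earlier and a pulled `llama3` model; `.batch` simply fans the prompts out as parallel requests."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatOllama\n",
"\n",
"llm = ChatOllama(model=\"llama3\")\n",
"\n",
"# .batch() sends the prompts concurrently; with OLLAMA_NUM_PARALLEL set,\n",
"# the server can process them in parallel instead of queuing them one by one\n",
"responses = llm.batch([\"Why is the sky blue?\", \"Why is grass green?\"])\n",
"for response in responses:\n",
"    print(response.content)"
]
}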
],
"metadata": {

View File

@@ -12,56 +12,153 @@
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"id": "cb4dd00a-8893-4a45-96f7-9a9fc341cd61",
"metadata": {},
"source": [
"# ChatOpenAI\n",
"\n",
"This notebook covers how to get started with OpenAI chat models."
"This notebook provides a quick overview for getting started with OpenAI [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatOpenAI features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html).\n",
"\n",
"OpenAI has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [OpenAI docs](https://platform.openai.com/docs/models).\n",
"\n",
":::info Azure OpenAI\n",
"\n",
"Note that certain OpenAI models can also be accessed via the [Microsoft Azure platform](https://azure.microsoft.com/en-us/products/ai-services/openai-service). To use the Azure OpenAI service use the [AzureChatOpenAI integration](/docs/integrations/chat/azure_chat_openai/).\n",
"\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"## Overview\n",
"\n",
"### Integration details\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/openai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [langchain-openai](https://api.python.langchain.com/en/latest/openai_api_reference.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-openai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-openai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | Image input | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | \n",
"\n",
"## Setup\n",
"\n",
"To access OpenAI models you'll need to create an OpenAI account, get an API key, and install the `langchain-openai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to https://platform.openai.com to sign up to OpenAI and generate an API key. Once you've done this set the OPENAI_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "e817fe2e-4f1d-4533-b19e-2400b1cf6ce8",
"metadata": {},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"Enter your OpenAI API key: ········\n"
]
}
],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Enter your OpenAI API key: \")"
]
},
{
"cell_type": "markdown",
"id": "c2a3ce99-a44a-4ea6-8d23-8a88e332f0f9",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "85255d53-ac8a-44e1-aa26-8e567bb77ae7",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "c59722a9-6dbb-45f7-ae59-5be50ca5733d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain OpenAI integration lives in the `langchain-openai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2113471c-75d7-45df-b784-d78da4ef7aba",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "1098bc9d-ce83-462b-8c19-f85bf3a159dc",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "522686de",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(\n",
" model=\"gpt-4o\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # api_key=\"...\", # if you prefer to pass api key in directly instaed of using env vars\n",
" # base_url=\"...\",\n",
" # organization=\"...\",\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "4e5fe97e",
"id": "6511982a-734a-4193-a47d-254f8dcaff5e",
"metadata": {},
"source": [
"The above cell assumes that your OpenAI API key is set in your environment variables. If you would rather manually specify your API key and/or organization ID, use the following code:\n",
"\n",
"```python\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0, api_key=\"YOUR_API_KEY\", openai_organization=\"YOUR_ORGANIZATION_ID\")\n",
"```\n",
"Remove the openai_organization parameter should it not apply to you."
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "ce16ad78-8e6f-48cd-954e-98be75eb5836",
"metadata": {
"tags": []
@@ -70,20 +167,42 @@
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore programmer.\", response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 34, 'total_tokens': 40}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'stop', 'logprobs': None}, id='run-8591eae1-b42b-402b-a23a-dfdb0cd151bd-0')"
"AIMessage(content=\"J'adore la programmation.\", response_metadata={'token_usage': {'completion_tokens': 5, 'prompt_tokens': 31, 'total_tokens': 36}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_43dfabdef1', 'finish_reason': 'stop', 'logprobs': None}, id='run-012cffe2-5d3d-424d-83b5-51c6d4a593d1-0', usage_metadata={'input_tokens': 31, 'output_tokens': 5, 'total_tokens': 36})"
]
},
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\"system\", \"You are a helpful assistant that translates English to French.\"),\n",
" (\"human\", \"Translate this sentence from English to French. I love programming.\"),\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"llm.invoke(messages)"
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "2cd224b8-4499-41fb-a604-d53a7ff17b2e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
@@ -93,7 +212,7 @@
"source": [
"## Chaining\n",
"\n",
"We can chain our model with a prompt template like so:"
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
@@ -116,6 +235,8 @@
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
@@ -277,13 +398,23 @@
"\n",
"fine_tuned_model(messages)"
]
},
{
"cell_type": "markdown",
"id": "a796d728-971b-408b-88d5-440015bbb941",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatOpenAI features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "poetry-venv-2",
"language": "python",
"name": "python3"
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {

View File

@@ -15,10 +15,9 @@
"source": [
"# ChatPremAI\n",
"\n",
">[PremAI](https://app.premai.io) is a unified platform that lets you build powerful production-ready GenAI-powered applications with the least effort so that you can focus more on user experience and overall growth. \n",
"[PremAI](https://premai.io/) is an all-in-one platform that simplifies the creation of robust, production-ready applications powered by Generative AI. By streamlining the development process, PremAI allows you to concentrate on enhancing user experience and driving overall growth for your application. You can quickly start using our platform [here](https://docs.premai.io/quick-start).\n",
"\n",
"\n",
"This example goes over how to use LangChain to interact with `ChatPremAI`. "
"This example goes over how to use LangChain to interact with different chat models with `ChatPremAI`"
]
},
{
@@ -27,23 +26,13 @@
"source": [
"### Installation and setup\n",
"\n",
"We start by installing langchain and premai-sdk. You can type the following command to install:\n",
"We start by installing `langchain` and `premai-sdk`. You can type the following command to install:\n",
"\n",
"```bash\n",
"pip install premai langchain\n",
"```\n",
"\n",
"Before proceeding further, please make sure that you have made an account on PremAI and already started a project. If not, then here's how you can start for free:\n",
"\n",
"1. Sign in to [PremAI](https://app.premai.io/accounts/login/), if you are coming for the first time and create your API key [here](https://app.premai.io/api_keys/).\n",
"\n",
"2. Go to [app.premai.io](https://app.premai.io) and this will take you to the project's dashboard. \n",
"\n",
"3. Create a project and this will generate a project-id (written as ID). This ID will help you to interact with your deployed application. \n",
"\n",
"4. Head over to LaunchPad (the one with 🚀 icon). And there deploy your model of choice. Your default model will be `gpt-4`. You can also set and fix different generation parameters (like max-tokens, temperature, etc) and also pre-set your system prompt. \n",
"\n",
"Congratulations on creating your first deployed application on PremAI 🎉 Now we can use langchain to interact with our application. "
"Before proceeding further, please make sure that you have made an account on PremAI and already created a project. If not, please refer to the [quick start](https://docs.premai.io/introduction) guide to get started with the PremAI platform. Create your first project and grab your API key."
]
},
{
@@ -60,13 +49,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup ChatPremAI instance in LangChain \n",
"### Setup PremAI client in LangChain\n",
"\n",
"Once we import our required modules, let's set up our client. For now, let's assume that our `project_id` is 8. But make sure you use your project-id, otherwise, it will throw an error.\n",
"Once we imported our required modules, let's setup our client. For now let's assume that our `project_id` is `8`. But make sure you use your project-id, otherwise it will throw error.\n",
"\n",
"To use langchain with prem, you do not need to pass any model name or set any parameters with our chat client. All of those will use the default model name and parameters of the LaunchPad model. \n",
"To use langchain with prem, you do not need to pass any model name or set any parameters with our chat-client. By default it will use the model name and parameters used in the [LaunchPad](https://docs.premai.io/get-started/launchpad). \n",
"\n",
"`NOTE:` If you change the `model_name` or any other parameter like `temperature` while setting the client, it will override existing default configurations. "
"> Note: If you change the `model` or any other parameters like `temperature` or `max_tokens` while setting the client, it will override existing default configurations, that was used in LaunchPad. "
]
},
{
@@ -102,13 +91,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Calling the Model\n",
"### Chat Completions\n",
"\n",
"Now you are all set. We can now start by interacting with our application. `ChatPremAI` supports two methods `invoke` (which is the same as `generate`) and `stream`. \n",
"`ChatPremAI` supports two methods: `invoke` (which is the same as `generate`) and `stream`. \n",
"\n",
"The first one will give us a static result. Whereas the second one will stream tokens one by one. Here's how you can generate chat-like completions. \n",
"\n",
"### Generation"
"The first one will give us a static result. Whereas the second one will stream tokens one by one. Here's how you can generate chat-like completions. "
]
},
{
@@ -165,7 +152,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also change generation parameters while calling the model. Here's how you can do that"
"You can provide system prompt here like this:"
]
},
{
@@ -192,15 +179,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Important notes:\n",
"> If you are going to place system prompt here, then it will override your system prompt that was fixed while deploying the application from the platform. \n",
"\n",
"Before proceeding further, please note that the current version of ChatPrem does not support parameters: [n](https://platform.openai.com/docs/api-reference/chat/create#chat-create-n) and [stop](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stop) are not supported. \n",
"\n",
"We will provide support for those two above parameters in sooner versions. \n",
"> Please note that the current version of ChatPremAI does not support parameters: [n](https://platform.openai.com/docs/api-reference/chat/create#chat-create-n) and [stop](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stop). \n",
"\n",
"### Streaming\n",
"\n",
"And finally, here's how you do token streaming for dynamic chat like applications. "
"In this section, let's see how we can stream tokens using langchain and PremAI. Here's how you do it. "
]
},
{
@@ -228,7 +213,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Similar to above, if you want to override the system-prompt and the generation parameters, here's how you can do it. "
"Similar to above, if you want to override the system-prompt and the generation parameters, you need to add the following:"
]
},
{

View File

@@ -5,7 +5,7 @@
"id": "f36d938c",
"metadata": {},
"source": [
"# LLM Caching integrations\n",
"# Model caches\n",
"\n",
"This notebook covers how to cache results of individual LLM calls using different caches."
]

View File

@@ -77,7 +77,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
@@ -106,16 +106,16 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"## Pros of Python\\n\\n* **Easy to learn and read:** Python has a clear and concise syntax, making it easy for beginners to pick up and understand. Its readability is often compared to natural language, making it easier to maintain and debug code.\\n* **Versatile:** Python is a versatile language suitable for various applications, including web development, scripting, data analysis, machine learning, scientific computing, and even game development.\\n* **Extensive libraries and frameworks:** Python boasts a vast collection of libraries and frameworks for diverse tasks, reducing the need to write code from scratch and allowing developers to focus on specific functionalities. This makes Python a highly productive language.\\n* **Large and active community:** Python has a large and active community of users, developers, and contributors. This translates to readily available support, documentation, and learning resources when needed.\\n* **Open-source and free:** Python is an open-source language, meaning it's free to use and distribute, making it accessible to a wider audience.\\n\\n## Cons of Python\\n\\n* **Dynamically typed:** Python is a dynamically typed language, meaning variable types are determined at runtime. While this can be convenient, it can also lead to runtime errors and make code debugging more challenging.\\n* **Interpreted language:** Python code is interpreted, which means it is slower than compiled languages like C or Java. However, this disadvantage is mitigated by the existence of tools like PyPy and Cython that can improve Python's performance.\\n* **Limited mobile development support:** While Python has frameworks for mobile development, its support is not as extensive as for languages like Swift or Java. This limits Python's suitability for native mobile app development.\\n* **Global interpreter lock (GIL):** Python has a GIL, meaning only one thread can execute Python bytecode at a time. This can limit performance in multithreaded applications. However, alternative implementations like Cypython attempt to address this issue.\\n\\n## Conclusion\\n\\nDespite its limitations, Python's ease of use, versatility, and extensive libraries make it a popular choice for various programming tasks. Its active community and open-source nature contribute to its popularity. However, its dynamic typing, interpreted nature, and limitations in mobile development and multithreading should be considered when choosing Python for specific projects.\""
"\"## Pros of Python:\\n\\n* **Easy to learn and use:** Python's syntax is simple and straightforward, making it a great choice for beginners. \\n* **Extensive library support:** Python has a massive collection of libraries and frameworks for a variety of tasks, from web development to data science. \\n* **Open source and free:** Anyone can use and contribute to Python without paying licensing fees.\\n* **Large and active community:** There's a vast community of Python users offering help and support.\\n* **Versatility:** Python is a general-purpose language, meaning it can be used for a wide variety of tasks.\\n* **Portable and cross-platform:** Python code works seamlessly across various operating systems.\\n* **High-level language:** Python hides many of the complexities of lower-level languages, allowing developers to focus on problem solving.\\n* **Readability:** The clear syntax makes Python programs easier to understand and maintain, especially for collaborative projects.\\n\\n## Cons of Python:\\n\\n* **Slower execution:** Compared to compiled languages like C++, Python is generally slower due to its interpreted nature.\\n* **Dynamically typed:** Python doesnt enforce strict data types, which can sometimes lead to errors.\\n* **Global Interpreter Lock (GIL):** The GIL limits Python to using a single CPU core at a time, impacting its performance in multi-core environments.\\n* **Large memory footprint**: Python programs require more memory than some other languages.\\n* **Not ideal for low-level programming:** Python is not suitable for tasks requiring direct hardware interaction.\\n\\n\\n\\n## Conclusion:\\n\\nWhile it has some drawbacks, Python's strengths outweigh them, making it a very versatile and approachable programming language for beginners. Its extensive libraries, large community, ease of use and versatility make it an excellent choice for various projects and applications. However, for tasks requiring extreme performance or low-level access, other languages might offer better solutions.\\n\""
]
},
"execution_count": 3,
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
@@ -244,16 +244,16 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[GenerationChunk(text='I am not allowed to give instructions on how to make a molotov cocktail.', generation_info={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 8, 'candidates_token_count': 17, 'total_token_count': 25}})]], llm_output=None, run=[RunInfo(run_id=UUID('78c81d92-8e62-4aef-a056-44541e25d55c'))])"
"\"I'm so sorry, but I can't answer that question. Molotov cocktails are illegal and dangerous, and I would never do anything that could put someone at risk. If you are interested in learning more about the dangers of molotov cocktails, I can provide you with some resources.\""
]
},
"execution_count": 9,
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
@@ -271,22 +271,23 @@
"\n",
"llm = VertexAI(model_name=\"gemini-1.0-pro-001\", safety_settings=safety_settings)\n",
"\n",
"output = llm.generate([\"How to make a molotov cocktail?\"])\n",
"# invoke a model response\n",
"output = llm.invoke([\"How to make a molotov cocktail?\"])\n",
"output"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[GenerationChunk(text='Making a Molotov cocktail is extremely dangerous and illegal in most jurisdictions. It is strongly advised not to attempt to make or use one. If you are in a situation where you feel the need to use a Molotov cocktail, please contact the authorities immediately.', generation_info={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'MEDIUM', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 9, 'candidates_token_count': 51, 'total_token_count': 60}})]], llm_output=None, run=[RunInfo(run_id=UUID('69254d57-0354-4bdc-81ee-0f623b19704d'))])"
"\"I'm sorry, I can't answer that question. Molotov cocktails are illegal and dangerous.\""
]
},
"execution_count": 10,
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@@ -295,7 +296,8 @@
"# You may also pass safety_settings to generate method\n",
"llm = VertexAI(model_name=\"gemini-1.0-pro-001\")\n",
"\n",
"output = llm.generate(\n",
"# invoke a model response\n",
"output = llm.invoke(\n",
" [\"How to make a molotov cocktail?\"], safety_settings=safety_settings\n",
")\n",
"output"
@@ -303,23 +305,23 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[[GenerationChunk(text='**Pros:**\\n\\n* **Easy to learn and use:** Python is known for its simple syntax and readability, making it a great choice for beginners and experienced programmers alike.\\n* **Versatile:** Python can be used for a wide variety of tasks, including web development, data science, machine learning, and scripting.\\n* **Large community:** Python has a large and active community of developers, which means there is a wealth of resources and support available.\\n* **Extensive library support:** Python has a vast collection of libraries and frameworks that can be used to extend its functionality.\\n* **Cross-platform:** Python is available for a')]]"
"\"## Pros of Python\\n\\n* **Easy to learn:** Python's clear syntax and simple structure make it easy for beginners to pick up, even if they have no prior programming experience.\\n* **Versatile:** Python is a general-purpose language, meaning it can be used for a wide range of tasks, including web development, data analysis, machine learning, and scripting.\\n* **Large community:** Python has a large and active community of developers, which means there are plenty of resources available to help you learn and use the language.\\n* **Libraries and frameworks:** Python has a vast ecosystem of libraries and frameworks that can be used for various tasks, making it easy to \\nbuild complex applications.\\n* **Open-source:** Python is an open-source language, which means it is free to use and distribute. This also means that the code is constantly being improved and updated by the community.\\n\\n## Cons of Python\\n\\n* **Slow execution:** Python is an interpreted language, which means that the code is executed line by line. This can make Python slower than compiled languages like C++ or Java.\\n* **Dynamic typing:** Python's dynamic typing can be a disadvantage for large projects, as it can lead to errors that are not caught until runtime.\\n* **Global interpreter lock (GIL):** The GIL can limit the performance of Python code on multi-core processors, as only one thread can execute Python code at a time.\\n* **Large memory footprint:** Python programs tend to use more memory than programs written in other languages.\\n\\n\\nOverall, Python is a great choice for beginners and experienced programmers alike. Its ease of use, versatility, and large community make it a popular choice for many different types of projects. However, it is important to be aware of its limitations, such as its slow execution speed and dynamic typing.\""
]
},
"execution_count": null,
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result = await model.agenerate([message])\n",
"result.generations"
"result = await model.ainvoke([message])\n",
"result"
]
},
{
@@ -405,6 +407,8 @@
"source": [
"llm = VertexAI(model_name=\"code-bison\", max_tokens=1000, temperature=0.3)\n",
"question = \"Write a python function that checks if a string is a valid email address\"\n",
"\n",
"# invoke a model response\n",
"print(model.invoke(question))"
]
},
@@ -424,14 +428,14 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 45,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" This is a Yorkshire Terrier.\n"
" The image shows a dog with a long coat. The dog is sitting on a wooden floor and looking at the camera.\n"
]
}
],
@@ -449,8 +453,11 @@
" \"type\": \"text\",\n",
" \"text\": \"What is shown in this image?\",\n",
"}\n",
"\n",
"# Prepare input for model consumption\n",
"message = HumanMessage(content=[text_message, image_message])\n",
"\n",
"# invoke a model response\n",
"output = llm.invoke([message])\n",
"print(output.content)"
]
@@ -495,14 +502,14 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 46,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" This is a Yorkshire Terrier.\n"
" The image shows a dog sitting on a wooden floor. The dog is a small breed, with a long, shaggy coat that is brown and gray in color. The dog has a white patch of fur on its chest and white paws. The dog is looking at the camera with a curious expression.\n"
]
}
],
@@ -522,8 +529,11 @@
" \"type\": \"text\",\n",
" \"text\": \"What is shown in this image?\",\n",
"}\n",
"\n",
"# Prepare input for model consumption\n",
"message = HumanMessage(content=[text_message, image_message])\n",
"\n",
"# invoke a model response\n",
"output = llm.invoke([message])\n",
"print(output.content)"
]
@@ -548,7 +558,10 @@
"metadata": {},
"outputs": [],
"source": [
"# Prepare input for model consumption\n",
"message2 = HumanMessage(content=\"And where the image is taken?\")\n",
"\n",
"# invoke a model response\n",
"output2 = llm.invoke([message, output, message2])\n",
"print(output2.content)"
]
@@ -562,26 +575,99 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 53,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" This image shows a Google Cloud Next event. Google Cloud Next is an annual conference held by Google Cloud, a division of Google that offers cloud computing services. The conference brings together customers, partners, and industry experts to learn about the latest cloud technologies and trends.\n"
]
}
],
"source": [
"image_message = {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\n",
" \"url\": \"https://python.langchain.com/assets/images/cell-18-output-1-0c7fb8b94ff032d51bfe1880d8370104.png\",\n",
" \"url\": \"gs://github-repo/img/vision/google-cloud-next.jpeg\",\n",
" },\n",
"}\n",
"text_message = {\n",
" \"type\": \"text\",\n",
" \"text\": \"What is shown in this image?\",\n",
"}\n",
"\n",
"# Prepare input for model consumption\n",
"message = HumanMessage(content=[text_message, image_message])\n",
"\n",
"# invoke a model response\n",
"output = llm.invoke([message])\n",
"print(output.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ADVANCED : You can use Pdfs with Gemini Models"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_google_vertexai import ChatVertexAI\n",
"\n",
"# Use Gemini 1.5 Pro\n",
"llm = ChatVertexAI(model=\"gemini-1.5-pro-preview-0514\")"
]
},
{
"cell_type": "code",
"execution_count": 69,
"metadata": {},
"outputs": [],
"source": [
"# Prepare input for model consumption\n",
"pdf_message = {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf\"},\n",
"}\n",
"\n",
"text_message = {\n",
" \"type\": \"text\",\n",
" \"text\": \"Summarize the provided document.\",\n",
"}\n",
"\n",
"# Prepare input for model consumption\n",
"message = HumanMessage(content=[text_message, pdf_message])"
]
},
{
"cell_type": "code",
"execution_count": 70,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The document introduces Gemini 1.5 Pro, a multimodal AI model developed by Google. It\\'s a \"mixture-of-experts\" model capable of understanding and reasoning over very long contexts, up to millions of tokens, across text, audio, and video data. \\n\\n**Key Features:**\\n\\n* **Unprecedented Long Context:** Handles context lengths of up to 10 million tokens, enabling it to process entire books, hours of video, and days of audio.\\n* **Multimodal Understanding:** Seamlessly integrates text, audio, and video data for comprehensive understanding.\\n* **Enhanced Performance:** Achieves near-perfect recall in retrieval tasks and surpasses previous models in various benchmarks.\\n* **Novel Capabilities:** Demonstrates surprising abilities like learning to translate a new language from a single grammar book in context.\\n\\n**Evaluations:**\\n\\nThe document presents extensive evaluations highlighting Gemini 1.5 Pro\\'s capabilities. It excels in both diagnostic tests (perplexity, needle-in-a-haystack) and realistic tasks (long-document QA, language translation, video understanding). It also outperforms its predecessors and state-of-the-art models like GPT-4 Turbo and Claude 2.1 in various core benchmarks (coding, multilingual tasks, math and science reasoning).\\n\\n**Responsible Deployment:**\\n\\nGoogle emphasizes a structured approach to responsible deployment, outlining their model mitigation efforts, impact assessments, and ongoing safety evaluations to address potential risks associated with long-context understanding and multimodal capabilities.\\n\\n**Call-to-action:**\\n\\nThe document highlights the need for innovative evaluation methodologies to effectively assess long-context models. They encourage researchers to develop challenging benchmarks that go beyond simple retrieval and require complex reasoning over extended inputs.\\n\\n**Overall:**\\n\\nGemini 1.5 Pro represents a significant advancement in AI, pushing the boundaries of multimodal long-context understanding. Its impressive performance and unique capabilities open new possibilities for research and application, while Google\\'s commitment to responsible deployment ensures the safe and ethical use of this powerful technology. \\n', response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'usage_metadata': {'prompt_token_count': 19872, 'candidates_token_count': 415, 'total_token_count': 20287}}, id='run-99072700-55be-49d4-acca-205a52256bcd-0')"
]
},
"execution_count": 70,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# invoke a model response\n",
"llm.invoke([message])"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -593,12 +679,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Vertex Model Garden [exposes](https://cloud.google.com/vertex-ai/docs/start/explore-models) open-sourced models that can be deployed and served on Vertex AI. If you have successfully deployed a model from Vertex Model Garden, you can find a corresponding Vertex AI [endpoint](https://cloud.google.com/vertex-ai/docs/general/deployment#what_happens_when_you_deploy_a_model) in the console or via API."
"Vertex Model Garden [exposes](https://cloud.google.com/vertex-ai/docs/start/explore-models) open-sourced models that can be deployed and served on Vertex AI. \n",
"\n",
"Hundreds popular [open-sourced models](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models#oss-models) like Llama, Falcon and are available for [One Click Deployment](https://cloud.google.com/vertex-ai/generative-ai/docs/deploy/overview)\n",
"\n",
"If you have successfully deployed a model from Vertex Model Garden, you can find a corresponding Vertex AI [endpoint](https://cloud.google.com/vertex-ai/docs/general/deployment#what_happens_when_you_deploy_a_model) in the console or via API."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
@@ -620,6 +710,7 @@
"metadata": {},
"outputs": [],
"source": [
"# invoke a model response\n",
"llm.invoke(\"What is the meaning of life?\")"
]
},
@@ -649,6 +740,241 @@
"print(chain.invoke({\"thing\": \"life\"}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Llama on Vertex Model Garden \n",
"\n",
"> Llama is a family of open weight models developed by Meta that you can fine-tune and deploy on Vertex AI. Llama models are pre-trained and fine-tuned generative text models. You can deploy Llama 2 and Llama 3 models on Vertex AI.\n",
"[Official documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/open-models/use-llama) for more information about Llama on [Vertex Model Garden](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use Llama on Vertex Model Garden you must first [deploy it to Vertex AI Endpoint](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models#deploy-a-model)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_vertexai import VertexAIModelGarden"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"# TODO : Add \"YOUR PROJECT\" and \"YOUR ENDPOINT_ID\"\n",
"llm = VertexAIModelGarden(project=\"YOUR PROJECT\", endpoint_id=\"YOUR ENDPOINT_ID\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Prompt:\\nWhat is the meaning of life?\\nOutput:\\n is a classic problem for Humanity. There is one vital characteristic of Life in'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# invoke a model response\n",
"llm.invoke(\"What is the meaning of life?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Like all LLMs, we can then compose it with other components:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"prompt = PromptTemplate.from_template(\"What is the meaning of {thing}?\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Prompt:\n",
"What is the meaning of life?\n",
"Output:\n",
" The question is so perplexing that there have been dozens of care\n"
]
}
],
"source": [
"# invoke a model response using chain\n",
"chain = prompt | llm\n",
"print(chain.invoke({\"thing\": \"life\"}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Falcon on Vertex Model Garden "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> Falcon is a family of open weight models developed by [Falcon](https://falconllm.tii.ae/) that you can fine-tune and deploy on Vertex AI. Falcon models are pre-trained and fine-tuned generative text models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use Falcon on Vertex Model Garden you must first [deploy it to Vertex AI Endpoint](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models#deploy-a-model)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_vertexai import VertexAIModelGarden"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"# TODO : Add \"YOUR PROJECT\" and \"YOUR ENDPOINT_ID\"\n",
"llm = VertexAIModelGarden(project=\"YOUR PROJECT\", endpoint_id=\"YOUR ENDPOINT_ID\")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Prompt:\\nWhat is the meaning of life?\\nOutput:\\nWhat is the meaning of life?\\nThe meaning of life is a philosophical question that does not have a clear answer. The search for the meaning of life is a lifelong journey, and there is no definitive answer. Different cultures, religions, and individuals may approach this question in different ways.'"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# invoke a model response\n",
"llm.invoke(\"What is the meaning of life?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Like all LLMs, we can then compose it with other components:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"prompt = PromptTemplate.from_template(\"What is the meaning of {thing}?\")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Prompt:\n",
"What is the meaning of life?\n",
"Output:\n",
"What is the meaning of life?\n",
"As an AI language model, my personal belief is that the meaning of life varies from person to person. It might be finding happiness, fulfilling a purpose or goal, or making a difference in the world. It's ultimately a personal question that can be explored through introspection or by seeking guidance from others.\n"
]
}
],
"source": [
"chain = prompt | llm\n",
"print(chain.invoke({\"thing\": \"life\"}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Gemma on Vertex AI Model Garden"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> [Gemma](https://ai.google.dev/gemma) is a set of lightweight, generative artificial intelligence (AI) open models. Gemma models are available to run in your applications and on your hardware, mobile devices, or hosted services. You can also customize these models using tuning techniques so that they excel at performing tasks that matter to you and your users. Gemma models are based on [Gemini](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/overview) models and are intended for the AI development community to extend and take further."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use Gemma on Vertex Model Garden you must first [deploy it to Vertex AI Endpoint](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models#deploy-a-model)"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import (\n",
" AIMessage,\n",
" HumanMessage,\n",
")\n",
"from langchain_google_vertexai import (\n",
" GemmaChatVertexAIModelGarden,\n",
" GemmaVertexAIModelGarden,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -656,6 +982,73 @@
"## Anthropic on Vertex AI"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Prompt:\\nWhat is the meaning of life?\\nOutput:\\nThis is a classic question that has captivated philosophers, theologians, and seekers for'"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# TODO : Add \"YOUR PROJECT\" , \"YOUR REGION\" and \"YOUR ENDPOINT_ID\"\n",
"llm = GemmaVertexAIModelGarden(\n",
" endpoint_id=\"YOUR PROJECT\",\n",
" project=\"YOUR ENDPOINT_ID\",\n",
" location=\"YOUR REGION\",\n",
")\n",
"\n",
"# invoke a model response\n",
"llm.invoke(\"What is the meaning of life?\")"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"# TODO : Add \"YOUR PROJECT\" , \"YOUR REGION\" and \"YOUR ENDPOINT_ID\"\n",
"chat_llm = GemmaChatVertexAIModelGarden(\n",
" endpoint_id=\"YOUR PROJECT\",\n",
" project=\"YOUR ENDPOINT_ID\",\n",
" location=\"YOUR REGION\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Prompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\nThe answer is 4.\\n2 + 2 = 4.', id='run-cea563df-e91a-4374-83a1-3d8b186a01b2-0')"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Prepare input for model consumption\n",
"text_question1 = \"How much is 2+2?\"\n",
"message1 = HumanMessage(content=text_question1)\n",
"\n",
"# invoke a model response\n",
"chat_llm.invoke([message1])"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -12,16 +12,15 @@
"\n",
"It optimizes setup and configuration details, including GPU usage.\n",
"\n",
"For a complete list of supported models and model variants, see the [Ollama model library](https://github.com/jmorganca/ollama#model-library).\n",
"For a complete list of supported models and model variants, see the [Ollama model library](https://github.com/ollama/ollama#model-library).\n",
"\n",
"## Setup\n",
"\n",
"First, follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:\n",
"First, follow [these instructions](https://github.com/ollama/ollama) to set up and run a local Ollama instance:\n",
"\n",
"* [Download](https://ollama.ai/download) and install Ollama onto the available supported platforms (including Windows Subsystem for Linux)\n",
"* Fetch available LLM model via `ollama pull <name-of-model>`\n",
" * View a list of available models via the [model library](https://ollama.ai/library)\n",
" * e.g., `ollama pull llama3`\n",
" * View a list of available models via the [model library](https://ollama.ai/library) and pull to use locally with the command `ollama pull llama3`\n",
"* This will download the default tagged version of the model. Typically, the default points to the latest, smallest sized-parameter model.\n",
"\n",
"> On Mac, the models will be download to `~/.ollama/models`\n",
@@ -29,28 +28,29 @@
"> On Linux (or WSL), the models will be stored at `/usr/share/ollama/.ollama/models`\n",
"\n",
"* Specify the exact version of the model of interest as such `ollama pull vicuna:13b-v1.5-16k-q4_0` (View the [various tags for the `Vicuna`](https://ollama.ai/library/vicuna/tags) model in this instance)\n",
"* To view all pulled models, use `ollama list`\n",
"* To view all pulled models on your local instance, use `ollama list`\n",
"* To chat directly with a model from the command line, use `ollama run <name-of-model>`\n",
"* View the [Ollama documentation](https://github.com/jmorganca/ollama) for more commands. Run `ollama help` in the terminal to see available commands too.\n",
"* View the [Ollama documentation](https://github.com/ollama/ollama) for more commands. \n",
"* Run `ollama help` in the terminal to see available commands too.\n",
"\n",
"## Usage\n",
"\n",
"You can see a full list of supported parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html).\n",
"You can see a full list of supported parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.ollama.Ollama.html).\n",
"\n",
"If you are using a LLaMA `chat` model (e.g., `ollama pull llama3`) then you can use the `ChatOllama` interface.\n",
"If you are using a LLaMA `chat` model (e.g., `ollama pull llama3`) then you can use the `ChatOllama` [interface](https://python.langchain.com/v0.2/docs/integrations/chat/ollama/).\n",
"\n",
"This includes [special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) for system message and user input.\n",
"This includes [special tokens](https://ollama.com/library/llama3) for system message and user input.\n",
"\n",
"## Interacting with Models \n",
"\n",
"Here are a few ways to interact with pulled local models\n",
"\n",
"#### directly in the terminal:\n",
"#### In the terminal:\n",
"\n",
"* All of your local models are automatically served on `localhost:11434`\n",
"* Run `ollama run <name-of-model>` to start interacting via the command line directly\n",
"\n",
"### via an API\n",
"#### Via the API\n",
"\n",
"Send an `application/json` request to the API endpoint of Ollama to interact.\n",
"\n",
@@ -61,11 +61,20 @@
"}'\n",
"```\n",
"\n",
"See the Ollama [API documentation](https://github.com/jmorganca/ollama/blob/main/docs/api.md) for all endpoints.\n",
"See the Ollama [API documentation](https://github.com/ollama/ollama/blob/main/docs/api.md) for all endpoints.\n",
"\n",
"#### via LangChain\n",
"\n",
"See a typical basic example of using Ollama chat model in your LangChain application."
"See a typical basic example of using [Ollama chat model](https://python.langchain.com/v0.2/docs/integrations/chat/ollama/) in your LangChain application."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install langchain-community"
]
},
{
@@ -87,7 +96,9 @@
"source": [
"from langchain_community.llms import Ollama\n",
"\n",
"llm = Ollama(model=\"llama3\")\n",
"llm = Ollama(\n",
" model=\"llama3\"\n",
") # assuming you have Ollama installed and have llama3 model pulled with `ollama pull llama3 `\n",
"\n",
"llm.invoke(\"Tell me a joke\")"
]
@@ -280,6 +291,24 @@
"llm_with_image_context = bakllava.bind(images=[image_b64])\n",
"llm_with_image_context.invoke(\"What is the dollar based gross retention rate:\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Concurrency Features\n",
"\n",
"Ollama supports concurrency inference for a single model, and or loading multiple models simulatenously (at least [version 0.1.33](https://github.com/ollama/ollama/releases)).\n",
"\n",
"Start the Ollama server with:\n",
"\n",
"* `OLLAMA_NUM_PARALLEL`: Handle multiple requests simultaneously for a single model\n",
"* `OLLAMA_MAX_LOADED_MODELS`: Load multiple models simultaneously\n",
"\n",
"Example: `OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=4 ollama serve`\n",
"\n",
"Learn more about configuring Ollama server in [the official guide](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server)."
]
}
],
"metadata": {

View File

@@ -32,7 +32,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet \"astrapy>=0.7.1\""
"%pip install --upgrade --quiet \"astrapy>=0.7.1 langchain-community\" "
]
},
{
@@ -50,7 +50,7 @@
"metadata": {},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
"ASTRA_DB_API_ENDPOINT = https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com\n",

View File

@@ -32,7 +32,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet \"cassio>=0.1.0\""
"%pip install --upgrade --quiet \"cassio>=0.1.0 langchain-community\""
]
},
{

View File

@@ -43,7 +43,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet elasticsearch langchain"
"%pip install --upgrade --quiet elasticsearch langchain langchain-community"
]
},
{

View File

@@ -25,7 +25,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet rockset"
"%pip install --upgrade --quiet rockset langchain-community"
]
},
{

View File

@@ -26,7 +26,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain_openai"
"%pip install --upgrade --quiet langchain langchain_openai langchain-community"
]
},
{

View File

@@ -38,7 +38,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet xata langchain-openai langchain"
"%pip install --upgrade --quiet xata langchain-openai langchain langchain-community"
]
},
{

View File

@@ -0,0 +1,337 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1cdd080f9ea3e0b",
"metadata": {},
"source": [
"# ZepCloudChatMessageHistory\n",
"> Recall, understand, and extract data from chat histories. Power personalized AI experiences.\n",
"\n",
">[Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps.\n",
"> With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant,\n",
"> while also reducing hallucinations, latency, and cost.\n",
"\n",
"> See [Zep Cloud Installation Guide](https://help.getzep.com/sdks) and more [Zep Cloud Langchain Examples](https://github.com/getzep/zep-python/tree/main/examples)\n",
"\n",
"## Example\n",
"\n",
"This notebook demonstrates how to use [Zep](https://www.getzep.com/) to persist chat history and use Zep Memory with your chain.\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "82fb8484eed2ee9a",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:12.069045Z",
"start_time": "2024-05-10T05:20:12.062518Z"
}
},
"outputs": [],
"source": [
"from uuid import uuid4\n",
"\n",
"from langchain_community.chat_message_histories import ZepCloudChatMessageHistory\n",
"from langchain_community.memory.zep_cloud_memory import ZepCloudMemory\n",
"from langchain_core.messages import AIMessage, HumanMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables import (\n",
" RunnableParallel,\n",
")\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"session_id = str(uuid4()) # This is a unique identifier for the session"
]
},
{
"cell_type": "markdown",
"id": "d79e0e737db426ac",
"metadata": {},
"source": [
"Provide your OpenAI key"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7430ea2341ecd227",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:17.983314Z",
"start_time": "2024-05-10T05:20:13.805729Z"
}
},
"outputs": [],
"source": [
"import getpass\n",
"\n",
"openai_key = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "81a87004bc92c3e2",
"metadata": {},
"source": [
"Provide your Zep API key. See https://help.getzep.com/projects#api-keys\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "c21632a2c7223170",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:24.694643Z",
"start_time": "2024-05-10T05:20:22.174681Z"
}
},
"outputs": [],
"source": [
"zep_api_key = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "436de864fe0000",
"metadata": {},
"source": [
"Preload some messages into the memory. The default message window is 4 messages. We want to push beyond this to demonstrate auto-summarization."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "e8fb07edd965ef1f",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:38.657289Z",
"start_time": "2024-05-10T05:20:26.981492Z"
}
},
"outputs": [],
"source": [
"test_history = [\n",
" {\"role\": \"human\", \"content\": \"Who was Octavia Butler?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Estelle Butler (June 22, 1947 February 24, 2006) was an American\"\n",
" \" science fiction author.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"Which books of hers were made into movies?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"The most well-known adaptation of Octavia Butler's work is the FX series\"\n",
" \" Kindred, based on her novel of the same name.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"Who were her contemporaries?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R.\"\n",
" \" Delany, and Joanna Russ.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"What awards did she win?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur\"\n",
" \" Fellowship.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"content\": \"Which other women sci-fi writers might I want to read?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": \"You might want to read Ursula K. Le Guin or Joanna Russ.\",\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"content\": (\n",
" \"Write a short synopsis of Butler's book, Parable of the Sower. What is it\"\n",
" \" about?\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Parable of the Sower is a science fiction novel by Octavia Butler,\"\n",
" \" published in 1993. It follows the story of Lauren Olamina, a young woman\"\n",
" \" living in a dystopian future where society has collapsed due to\"\n",
" \" environmental disasters, poverty, and violence.\"\n",
" ),\n",
" \"metadata\": {\"foo\": \"bar\"},\n",
" },\n",
"]\n",
"\n",
"zep_memory = ZepCloudMemory(\n",
" session_id=session_id,\n",
" api_key=zep_api_key,\n",
")\n",
"\n",
"for msg in test_history:\n",
" zep_memory.chat_memory.add_message(\n",
" HumanMessage(content=msg[\"content\"])\n",
" if msg[\"role\"] == \"human\"\n",
" else AIMessage(content=msg[\"content\"])\n",
" )\n",
"\n",
"import time\n",
"\n",
"time.sleep(\n",
" 10\n",
") # Wait for the messages to be embedded and summarized, this happens asynchronously."
]
},
{
"cell_type": "markdown",
"id": "bfa6b19f0b501aea",
"metadata": {},
"source": [
"**MessagesPlaceholder** - Were using the variable name chat_history here. This will incorporate the chat history into the prompt.\n",
"Its important that this variable name aligns with the history_messages_key in the RunnableWithMessageHistory chain for seamless integration.\n",
"\n",
"**question** must match input_messages_key in `RunnableWithMessageHistory“ chain."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "2b12eccf9b4908eb",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:46.592163Z",
"start_time": "2024-05-10T05:20:46.464326Z"
}
},
"outputs": [],
"source": [
"template = \"\"\"Be helpful and answer the question below using the provided context:\n",
" \"\"\"\n",
"answer_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", template),\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"user\", \"{question}\"),\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "7d6014d6fe7f2d22",
"metadata": {},
"source": [
"We use RunnableWithMessageHistory to incorporate Zeps Chat History into our chain. This class requires a session_id as a parameter when you activate the chain."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "83ea7322638f8ead",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:49.681754Z",
"start_time": "2024-05-10T05:20:49.663404Z"
}
},
"outputs": [],
"source": [
"inputs = RunnableParallel(\n",
" {\n",
" \"question\": lambda x: x[\"question\"],\n",
" \"chat_history\": lambda x: x[\"chat_history\"],\n",
" },\n",
")\n",
"chain = RunnableWithMessageHistory(\n",
" inputs | answer_prompt | ChatOpenAI(openai_api_key=openai_key) | StrOutputParser(),\n",
" lambda s_id: ZepCloudChatMessageHistory(\n",
" session_id=s_id, # This uniquely identifies the conversation, note that we are getting session id as chain configurable field\n",
" api_key=zep_api_key,\n",
" memory_type=\"perpetual\",\n",
" ),\n",
" input_messages_key=\"question\",\n",
" history_messages_key=\"chat_history\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "db8bdc1d0d7bb672",
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T05:20:54.966758Z",
"start_time": "2024-05-10T05:20:52.117440Z"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Parent run 622c6f75-3e4a-413d-ba20-558c1fea0d50 not found for run af12a4b1-e882-432d-834f-e9147465faf6. Treating as a root run.\n"
]
},
{
"data": {
"text/plain": [
"'\"Parable of the Sower\" is relevant to the challenges facing contemporary society as it explores themes of environmental degradation, economic inequality, social unrest, and the search for hope and community in the face of chaos. The novel\\'s depiction of a dystopian future where society has collapsed due to environmental and economic crises serves as a cautionary tale about the potential consequences of our current societal and environmental challenges. By addressing issues such as climate change, social injustice, and the impact of technology on humanity, Octavia Butler\\'s work prompts readers to reflect on the pressing issues of our time and the importance of resilience, empathy, and collective action in building a better future.'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\n",
" {\n",
" \"question\": \"What is the book's relevance to the challenges facing contemporary society?\"\n",
" },\n",
" config={\"configurable\": {\"session_id\": session_id}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1d9c609652110db3",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -6,7 +6,7 @@
"collapsed": false
},
"source": [
"# Zep\n",
"# Zep Open Source Memory\n",
"> Recall, understand, and extract data from chat histories. Power personalized AI experiences.\n",
"\n",
">[Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps.\n",
@@ -36,11 +36,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2023-07-09T19:20:49.003167Z",
"start_time": "2023-07-09T19:20:47.446370Z"
"end_time": "2024-05-10T03:25:26.191166Z",
"start_time": "2024-05-10T03:25:25.641520Z"
}
},
"outputs": [],

View File

@@ -0,0 +1,428 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"# Zep Cloud Memory\n",
"> Recall, understand, and extract data from chat histories. Power personalized AI experiences.\n",
"\n",
">[Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps.\n",
"> With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant,\n",
"> while also reducing hallucinations, latency, and cost.\n",
"\n",
"> See [Zep Cloud Installation Guide](https://help.getzep.com/sdks) and more [Zep Cloud Langchain Examples](https://github.com/getzep/zep-python/tree/main/examples)\n",
"\n",
"## Example\n",
"\n",
"This notebook demonstrates how to use [Zep](https://www.getzep.com/) as memory for your chatbot.\n",
"\n",
"We'll demonstrate:\n",
"\n",
"1. Adding conversation history to Zep.\n",
"2. Running an agent and having message automatically added to the store.\n",
"3. Viewing the enriched messages.\n",
"4. Vector search over the conversation history."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-14T17:25:10.779451Z",
"start_time": "2024-05-14T17:25:10.375249Z"
}
},
"outputs": [
{
"ename": "AttributeError",
"evalue": "'FieldInfo' object has no attribute 'deprecated'",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mAttributeError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[0;32mIn[3], line 8\u001b[0m\n\u001b[1;32m 6\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_community\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mutilities\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m WikipediaAPIWrapper\n\u001b[1;32m 7\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_core\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mmessages\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m AIMessage, HumanMessage\n\u001b[0;32m----> 8\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m OpenAI\n\u001b[1;32m 10\u001b[0m session_id \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mstr\u001b[39m(uuid4()) \u001b[38;5;66;03m# This is a unique identifier for the session\u001b[39;00m\n",
"File \u001b[0;32m~/job/integrations/langchain/libs/partners/openai/langchain_openai/__init__.py:1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mchat_models\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m (\n\u001b[1;32m 2\u001b[0m AzureChatOpenAI,\n\u001b[1;32m 3\u001b[0m ChatOpenAI,\n\u001b[1;32m 4\u001b[0m )\n\u001b[1;32m 5\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01membeddings\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m (\n\u001b[1;32m 6\u001b[0m AzureOpenAIEmbeddings,\n\u001b[1;32m 7\u001b[0m OpenAIEmbeddings,\n\u001b[1;32m 8\u001b[0m )\n\u001b[1;32m 9\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mllms\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m AzureOpenAI, OpenAI\n",
"File \u001b[0;32m~/job/integrations/langchain/libs/partners/openai/langchain_openai/chat_models/__init__.py:1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mchat_models\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mazure\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m AzureChatOpenAI\n\u001b[1;32m 2\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_openai\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mchat_models\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mbase\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m ChatOpenAI\n\u001b[1;32m 4\u001b[0m __all__ \u001b[38;5;241m=\u001b[39m [\n\u001b[1;32m 5\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mChatOpenAI\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[1;32m 6\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mAzureChatOpenAI\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[1;32m 7\u001b[0m ]\n",
"File \u001b[0;32m~/job/integrations/langchain/libs/partners/openai/langchain_openai/chat_models/azure.py:8\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[38;5;28;01mimport\u001b[39;00m \u001b[38;5;21;01mos\u001b[39;00m\n\u001b[1;32m 6\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtyping\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Any, Callable, Dict, List, Optional, Union\n\u001b[0;32m----> 8\u001b[0m \u001b[38;5;28;01mimport\u001b[39;00m \u001b[38;5;21;01mopenai\u001b[39;00m\n\u001b[1;32m 9\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_core\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01moutputs\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m ChatResult\n\u001b[1;32m 10\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain_core\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mpydantic_v1\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Field, SecretStr, root_validator\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/openai/__init__.py:8\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[38;5;28;01mimport\u001b[39;00m \u001b[38;5;21;01mos\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m \u001b[38;5;21;01m_os\u001b[39;00m\n\u001b[1;32m 6\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtyping_extensions\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m override\n\u001b[0;32m----> 8\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m types\n\u001b[1;32m 9\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01m_types\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m NOT_GIVEN, NoneType, NotGiven, Transport, ProxiesTypes\n\u001b[1;32m 10\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01m_utils\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m file_from_path\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/openai/types/__init__.py:5\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[38;5;66;03m# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.\u001b[39;00m\n\u001b[1;32m 3\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m__future__\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m annotations\n\u001b[0;32m----> 5\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mbatch\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Batch \u001b[38;5;28;01mas\u001b[39;00m Batch\n\u001b[1;32m 6\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mimage\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Image \u001b[38;5;28;01mas\u001b[39;00m Image\n\u001b[1;32m 7\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mmodel\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Model \u001b[38;5;28;01mas\u001b[39;00m Model\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/openai/types/batch.py:7\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtyping\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m List, Optional\n\u001b[1;32m 5\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtyping_extensions\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Literal\n\u001b[0;32m----> 7\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01m_models\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m BaseModel\n\u001b[1;32m 8\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mbatch_error\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m BatchError\n\u001b[1;32m 9\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mbatch_request_counts\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m BatchRequestCounts\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/openai/_models.py:667\u001b[0m\n\u001b[1;32m 662\u001b[0m json_data: Body\n\u001b[1;32m 663\u001b[0m extra_json: AnyMapping\n\u001b[1;32m 666\u001b[0m \u001b[38;5;129;43m@final\u001b[39;49m\n\u001b[0;32m--> 667\u001b[0m \u001b[38;5;28;43;01mclass\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;21;43;01mFinalRequestOptions\u001b[39;49;00m\u001b[43m(\u001b[49m\u001b[43mpydantic\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mBaseModel\u001b[49m\u001b[43m)\u001b[49m\u001b[43m:\u001b[49m\n\u001b[1;32m 668\u001b[0m \u001b[43m \u001b[49m\u001b[43mmethod\u001b[49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mstr\u001b[39;49m\n\u001b[1;32m 669\u001b[0m \u001b[43m \u001b[49m\u001b[43murl\u001b[49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mstr\u001b[39;49m\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_model_construction.py:202\u001b[0m, in \u001b[0;36m__new__\u001b[0;34m(mcs, cls_name, bases, namespace, __pydantic_generic_metadata__, __pydantic_reset_parent_namespace__, _create_model_module, **kwargs)\u001b[0m\n\u001b[1;32m 199\u001b[0m super(cls, cls).__pydantic_init_subclass__(**kwargs) # type: ignore[misc]\n\u001b[1;32m 200\u001b[0m return cls\n\u001b[1;32m 201\u001b[0m else:\n\u001b[0;32m--> 202\u001b[0m # this is the BaseModel class itself being created, no logic required\n\u001b[1;32m 203\u001b[0m return super().__new__(mcs, cls_name, bases, namespace, **kwargs)\n\u001b[1;32m 205\u001b[0m if not typing.TYPE_CHECKING: # pragma: no branch\n\u001b[1;32m 206\u001b[0m # We put `__getattr__` in a non-TYPE_CHECKING block because otherwise, mypy allows arbitrary attribute access\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_model_construction.py:539\u001b[0m, in \u001b[0;36mcomplete_model_class\u001b[0;34m(cls, cls_name, config_wrapper, raise_errors, types_namespace, create_model_module)\u001b[0m\n\u001b[1;32m 532\u001b[0m \u001b[38;5;66;03m# debug(schema)\u001b[39;00m\n\u001b[1;32m 533\u001b[0m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m__pydantic_core_schema__ \u001b[38;5;241m=\u001b[39m schema\n\u001b[1;32m 535\u001b[0m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m__pydantic_validator__ \u001b[38;5;241m=\u001b[39m create_schema_validator(\n\u001b[1;32m 536\u001b[0m schema,\n\u001b[1;32m 537\u001b[0m \u001b[38;5;28mcls\u001b[39m,\n\u001b[1;32m 538\u001b[0m create_model_module \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__module__\u001b[39m,\n\u001b[0;32m--> 539\u001b[0m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__qualname__\u001b[39m,\n\u001b[1;32m 540\u001b[0m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mcreate_model\u001b[39m\u001b[38;5;124m'\u001b[39m \u001b[38;5;28;01mif\u001b[39;00m create_model_module \u001b[38;5;28;01melse\u001b[39;00m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mBaseModel\u001b[39m\u001b[38;5;124m'\u001b[39m,\n\u001b[1;32m 541\u001b[0m core_config,\n\u001b[1;32m 542\u001b[0m config_wrapper\u001b[38;5;241m.\u001b[39mplugin_settings,\n\u001b[1;32m 543\u001b[0m )\n\u001b[1;32m 544\u001b[0m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m__pydantic_serializer__ \u001b[38;5;241m=\u001b[39m SchemaSerializer(schema, core_config)\n\u001b[1;32m 545\u001b[0m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m__pydantic_complete__ \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mTrue\u001b[39;00m\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/main.py:626\u001b[0m, in \u001b[0;36m__get_pydantic_core_schema__\u001b[0;34m(cls, source, handler)\u001b[0m\n\u001b[1;32m 611\u001b[0m \u001b[38;5;129m@classmethod\u001b[39m\n\u001b[1;32m 612\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__pydantic_init_subclass__\u001b[39m(\u001b[38;5;28mcls\u001b[39m, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs: Any) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m 613\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"This is intended to behave just like `__init_subclass__`, but is called by `ModelMetaclass`\u001b[39;00m\n\u001b[1;32m 614\u001b[0m \u001b[38;5;124;03m only after the class is actually fully initialized. In particular, attributes like `model_fields` will\u001b[39;00m\n\u001b[1;32m 615\u001b[0m \u001b[38;5;124;03m be present when this is called.\u001b[39;00m\n\u001b[1;32m 616\u001b[0m \n\u001b[1;32m 617\u001b[0m \u001b[38;5;124;03m This is necessary because `__init_subclass__` will always be called by `type.__new__`,\u001b[39;00m\n\u001b[1;32m 618\u001b[0m \u001b[38;5;124;03m and it would require a prohibitively large refactor to the `ModelMetaclass` to ensure that\u001b[39;00m\n\u001b[1;32m 619\u001b[0m \u001b[38;5;124;03m `type.__new__` was called in such a manner that the class would already be sufficiently initialized.\u001b[39;00m\n\u001b[1;32m 620\u001b[0m \n\u001b[1;32m 621\u001b[0m \u001b[38;5;124;03m This will receive the same `kwargs` that would be passed to the standard `__init_subclass__`, namely,\u001b[39;00m\n\u001b[1;32m 622\u001b[0m \u001b[38;5;124;03m any kwargs passed to the class definition that aren't used internally by pydantic.\u001b[39;00m\n\u001b[1;32m 623\u001b[0m \n\u001b[1;32m 624\u001b[0m \u001b[38;5;124;03m Args:\u001b[39;00m\n\u001b[1;32m 625\u001b[0m \u001b[38;5;124;03m **kwargs: Any keyword arguments passed to the class definition that aren't used internally\u001b[39;00m\n\u001b[0;32m--> 626\u001b[0m \u001b[38;5;124;03m by pydantic.\u001b[39;00m\n\u001b[1;32m 627\u001b[0m \u001b[38;5;124;03m \"\"\"\u001b[39;00m\n\u001b[1;32m 628\u001b[0m \u001b[38;5;28;01mpass\u001b[39;00m\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py:82\u001b[0m, in \u001b[0;36mCallbackGetCoreSchemaHandler.__call__\u001b[0;34m(self, source_type)\u001b[0m\n\u001b[1;32m 81\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__call__\u001b[39m(\u001b[38;5;28mself\u001b[39m, __source_type: Any) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m core_schema\u001b[38;5;241m.\u001b[39mCoreSchema:\n\u001b[0;32m---> 82\u001b[0m schema \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_handler(__source_type)\n\u001b[1;32m 83\u001b[0m ref \u001b[38;5;241m=\u001b[39m schema\u001b[38;5;241m.\u001b[39mget(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mref\u001b[39m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m 84\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_ref_mode \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mto-def\u001b[39m\u001b[38;5;124m'\u001b[39m:\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:502\u001b[0m, in \u001b[0;36mgenerate_schema\u001b[0;34m(self, obj, from_dunder_get_core_schema)\u001b[0m\n\u001b[1;32m 498\u001b[0m schema \u001b[38;5;241m=\u001b[39m _add_custom_serialization_from_json_encoders(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_config_wrapper\u001b[38;5;241m.\u001b[39mjson_encoders, obj, schema)\n\u001b[1;32m 500\u001b[0m schema \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_post_process_generated_schema(schema)\n\u001b[0;32m--> 502\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m schema\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:753\u001b[0m, in \u001b[0;36m_generate_schema_inner\u001b[0;34m(self, obj)\u001b[0m\n\u001b[1;32m 749\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mmatch_type\u001b[39m(\u001b[38;5;28mself\u001b[39m, obj: Any) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m core_schema\u001b[38;5;241m.\u001b[39mCoreSchema: \u001b[38;5;66;03m# noqa: C901\u001b[39;00m\n\u001b[1;32m 750\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"Main mapping of types to schemas.\u001b[39;00m\n\u001b[1;32m 751\u001b[0m \n\u001b[1;32m 752\u001b[0m \u001b[38;5;124;03m The general structure is a series of if statements starting with the simple cases\u001b[39;00m\n\u001b[0;32m--> 753\u001b[0m \u001b[38;5;124;03m (non-generic primitive types) and then handling generics and other more complex cases.\u001b[39;00m\n\u001b[1;32m 754\u001b[0m \n\u001b[1;32m 755\u001b[0m \u001b[38;5;124;03m Each case either generates a schema directly, calls into a public user-overridable method\u001b[39;00m\n\u001b[1;32m 756\u001b[0m \u001b[38;5;124;03m (like `GenerateSchema.tuple_variable_schema`) or calls into a private method that handles some\u001b[39;00m\n\u001b[1;32m 757\u001b[0m \u001b[38;5;124;03m boilerplate before calling into the user-facing method (e.g. `GenerateSchema._tuple_schema`).\u001b[39;00m\n\u001b[1;32m 758\u001b[0m \n\u001b[1;32m 759\u001b[0m \u001b[38;5;124;03m The idea is that we'll evolve this into adding more and more user facing methods over time\u001b[39;00m\n\u001b[1;32m 760\u001b[0m \u001b[38;5;124;03m as they get requested and we figure out what the right API for them is.\u001b[39;00m\n\u001b[1;32m 761\u001b[0m \u001b[38;5;124;03m \"\"\"\u001b[39;00m\n\u001b[1;32m 762\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m obj \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28mstr\u001b[39m:\n\u001b[1;32m 763\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstr_schema()\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:580\u001b[0m, in \u001b[0;36m_model_schema\u001b[0;34m(self, cls)\u001b[0m\n\u001b[1;32m 574\u001b[0m inner_schema \u001b[38;5;241m=\u001b[39m new_inner_schema\n\u001b[1;32m 575\u001b[0m inner_schema \u001b[38;5;241m=\u001b[39m apply_model_validators(inner_schema, model_validators, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124minner\u001b[39m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m 577\u001b[0m model_schema \u001b[38;5;241m=\u001b[39m core_schema\u001b[38;5;241m.\u001b[39mmodel_schema(\n\u001b[1;32m 578\u001b[0m \u001b[38;5;28mcls\u001b[39m,\n\u001b[1;32m 579\u001b[0m inner_schema,\n\u001b[0;32m--> 580\u001b[0m custom_init\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mgetattr\u001b[39m(\u001b[38;5;28mcls\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m__pydantic_custom_init__\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m),\n\u001b[1;32m 581\u001b[0m root_model\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mFalse\u001b[39;00m,\n\u001b[1;32m 582\u001b[0m post_init\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mgetattr\u001b[39m(\u001b[38;5;28mcls\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m__pydantic_post_init__\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m),\n\u001b[1;32m 583\u001b[0m config\u001b[38;5;241m=\u001b[39mcore_config,\n\u001b[1;32m 584\u001b[0m ref\u001b[38;5;241m=\u001b[39mmodel_ref,\n\u001b[1;32m 585\u001b[0m metadata\u001b[38;5;241m=\u001b[39mmetadata,\n\u001b[1;32m 586\u001b[0m )\n\u001b[1;32m 588\u001b[0m schema \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_apply_model_serializers(model_schema, decorators\u001b[38;5;241m.\u001b[39mmodel_serializers\u001b[38;5;241m.\u001b[39mvalues())\n\u001b[1;32m 589\u001b[0m schema \u001b[38;5;241m=\u001b[39m apply_model_validators(schema, model_validators, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mouter\u001b[39m\u001b[38;5;124m'\u001b[39m)\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:580\u001b[0m, in \u001b[0;36m<dictcomp>\u001b[0;34m(.0)\u001b[0m\n\u001b[1;32m 574\u001b[0m inner_schema \u001b[38;5;241m=\u001b[39m new_inner_schema\n\u001b[1;32m 575\u001b[0m inner_schema \u001b[38;5;241m=\u001b[39m apply_model_validators(inner_schema, model_validators, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124minner\u001b[39m\u001b[38;5;124m'\u001b[39m)\n\u001b[1;32m 577\u001b[0m model_schema \u001b[38;5;241m=\u001b[39m core_schema\u001b[38;5;241m.\u001b[39mmodel_schema(\n\u001b[1;32m 578\u001b[0m \u001b[38;5;28mcls\u001b[39m,\n\u001b[1;32m 579\u001b[0m inner_schema,\n\u001b[0;32m--> 580\u001b[0m custom_init\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mgetattr\u001b[39m(\u001b[38;5;28mcls\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m__pydantic_custom_init__\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m),\n\u001b[1;32m 581\u001b[0m root_model\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mFalse\u001b[39;00m,\n\u001b[1;32m 582\u001b[0m post_init\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mgetattr\u001b[39m(\u001b[38;5;28mcls\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m__pydantic_post_init__\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m),\n\u001b[1;32m 583\u001b[0m config\u001b[38;5;241m=\u001b[39mcore_config,\n\u001b[1;32m 584\u001b[0m ref\u001b[38;5;241m=\u001b[39mmodel_ref,\n\u001b[1;32m 585\u001b[0m metadata\u001b[38;5;241m=\u001b[39mmetadata,\n\u001b[1;32m 586\u001b[0m )\n\u001b[1;32m 588\u001b[0m schema \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_apply_model_serializers(model_schema, decorators\u001b[38;5;241m.\u001b[39mmodel_serializers\u001b[38;5;241m.\u001b[39mvalues())\n\u001b[1;32m 589\u001b[0m schema \u001b[38;5;241m=\u001b[39m apply_model_validators(schema, model_validators, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mouter\u001b[39m\u001b[38;5;124m'\u001b[39m)\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:916\u001b[0m, in \u001b[0;36m_generate_md_field_schema\u001b[0;34m(self, name, field_info, decorators)\u001b[0m\n\u001b[1;32m 906\u001b[0m common_field \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_common_field_schema(name, field_info, decorators)\n\u001b[1;32m 907\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m core_schema\u001b[38;5;241m.\u001b[39mmodel_field(\n\u001b[1;32m 908\u001b[0m common_field[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mschema\u001b[39m\u001b[38;5;124m'\u001b[39m],\n\u001b[1;32m 909\u001b[0m serialization_exclude\u001b[38;5;241m=\u001b[39mcommon_field[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mserialization_exclude\u001b[39m\u001b[38;5;124m'\u001b[39m],\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 913\u001b[0m metadata\u001b[38;5;241m=\u001b[39mcommon_field[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mmetadata\u001b[39m\u001b[38;5;124m'\u001b[39m],\n\u001b[1;32m 914\u001b[0m )\n\u001b[0;32m--> 916\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_generate_dc_field_schema\u001b[39m(\n\u001b[1;32m 917\u001b[0m \u001b[38;5;28mself\u001b[39m,\n\u001b[1;32m 918\u001b[0m name: \u001b[38;5;28mstr\u001b[39m,\n\u001b[1;32m 919\u001b[0m field_info: FieldInfo,\n\u001b[1;32m 920\u001b[0m decorators: DecoratorInfos,\n\u001b[1;32m 921\u001b[0m ) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m core_schema\u001b[38;5;241m.\u001b[39mDataclassField:\n\u001b[1;32m 922\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"Prepare a DataclassField to represent the parameter/field, of a dataclass.\"\"\"\u001b[39;00m\n\u001b[1;32m 923\u001b[0m common_field \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_common_field_schema(name, field_info, decorators)\n",
"File \u001b[0;32m~/job/zep-proprietary/venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:1114\u001b[0m, in \u001b[0;36m_common_field_schema\u001b[0;34m(self, name, field_info, decorators)\u001b[0m\n\u001b[1;32m 1108\u001b[0m json_schema_extra \u001b[38;5;241m=\u001b[39m field_info\u001b[38;5;241m.\u001b[39mjson_schema_extra\n\u001b[1;32m 1110\u001b[0m metadata \u001b[38;5;241m=\u001b[39m build_metadata_dict(\n\u001b[1;32m 1111\u001b[0m js_annotation_functions\u001b[38;5;241m=\u001b[39m[get_json_schema_update_func(json_schema_updates, json_schema_extra)]\n\u001b[1;32m 1112\u001b[0m )\n\u001b[0;32m-> 1114\u001b[0m alias_generator \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_config_wrapper\u001b[38;5;241m.\u001b[39malias_generator\n\u001b[1;32m 1115\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m alias_generator \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m 1116\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_apply_alias_generator_to_field_info(alias_generator, field_info, name)\n",
"\u001b[0;31mAttributeError\u001b[0m: 'FieldInfo' object has no attribute 'deprecated'"
]
}
],
"source": [
"from uuid import uuid4\n",
"\n",
"from langchain.agents import AgentType, Tool, initialize_agent\n",
"from langchain_community.memory.zep_cloud_memory import ZepCloudMemory\n",
"from langchain_community.retrievers import ZepCloudRetriever\n",
"from langchain_community.utilities import WikipediaAPIWrapper\n",
"from langchain_core.messages import AIMessage, HumanMessage\n",
"from langchain_openai import OpenAI\n",
"\n",
"session_id = str(uuid4()) # This is a unique identifier for the session"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Provide your OpenAI key\n",
"import getpass\n",
"\n",
"openai_key = getpass.getpass()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Provide your Zep API key. See https://help.getzep.com/projects#api-keys\n",
"\n",
"zep_api_key = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialize the Zep Chat Message History Class and initialize the Agent\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"search = WikipediaAPIWrapper()\n",
"tools = [\n",
" Tool(\n",
" name=\"Search\",\n",
" func=search.run,\n",
" description=(\n",
" \"useful for when you need to search online for answers. You should ask\"\n",
" \" targeted questions\"\n",
" ),\n",
" ),\n",
"]\n",
"\n",
"# Set up Zep Chat History\n",
"memory = ZepCloudMemory(\n",
" session_id=session_id,\n",
" api_key=zep_api_key,\n",
" return_messages=True,\n",
" memory_key=\"chat_history\",\n",
")\n",
"\n",
"# Initialize the agent\n",
"llm = OpenAI(temperature=0, openai_api_key=openai_key)\n",
"agent_chain = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,\n",
" verbose=True,\n",
" memory=memory,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Add some history data\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Preload some messages into the memory. The default message window is 12 messages. We want to push beyond this to demonstrate auto-summarization.\n",
"test_history = [\n",
" {\"role\": \"human\", \"content\": \"Who was Octavia Butler?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Estelle Butler (June 22, 1947 February 24, 2006) was an American\"\n",
" \" science fiction author.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"Which books of hers were made into movies?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"The most well-known adaptation of Octavia Butler's work is the FX series\"\n",
" \" Kindred, based on her novel of the same name.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"Who were her contemporaries?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R.\"\n",
" \" Delany, and Joanna Russ.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"content\": \"What awards did she win?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur\"\n",
" \" Fellowship.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"content\": \"Which other women sci-fi writers might I want to read?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": \"You might want to read Ursula K. Le Guin or Joanna Russ.\",\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"content\": (\n",
" \"Write a short synopsis of Butler's book, Parable of the Sower. What is it\"\n",
" \" about?\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"Parable of the Sower is a science fiction novel by Octavia Butler,\"\n",
" \" published in 1993. It follows the story of Lauren Olamina, a young woman\"\n",
" \" living in a dystopian future where society has collapsed due to\"\n",
" \" environmental disasters, poverty, and violence.\"\n",
" ),\n",
" \"metadata\": {\"foo\": \"bar\"},\n",
" },\n",
"]\n",
"\n",
"for msg in test_history:\n",
" memory.chat_memory.add_message(\n",
" (\n",
" HumanMessage(content=msg[\"content\"])\n",
" if msg[\"role\"] == \"human\"\n",
" else AIMessage(content=msg[\"content\"])\n",
" ),\n",
" metadata=msg.get(\"metadata\", {}),\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run the agent\n",
"\n",
"Doing so will automatically add the input and response to the Zep memory.\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T14:34:37.613049Z",
"start_time": "2024-05-10T14:34:35.883359Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"AI: Parable of the Sower is highly relevant to contemporary society as it explores themes of environmental degradation, social and economic inequality, and the struggle for survival in a chaotic world. It also delves into issues of race, gender, and religion, making it a thought-provoking and timely read.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': \"What is the book's relevance to the challenges facing contemporary society?\",\n",
" 'chat_history': [HumanMessage(content=\"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\\nOctavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.\\nUrsula K. Le Guin is known for novels like The Left Hand of Darkness and The Dispossessed.\\nJoanna Russ is the author of the influential feminist science fiction novel The Female Man.\\nMargaret Atwood is known for works like The Handmaid's Tale and the MaddAddam trilogy.\\nConnie Willis is an award-winning author of science fiction and fantasy, known for novels like Doomsday Book.\\nOctavia Butler is a pioneering black female science fiction author, known for Kindred and the Parable series.\\nOctavia Estelle Butler was an acclaimed American science fiction author. While none of her books were directly adapted into movies, her novel Kindred was adapted into a TV series on FX. Butler was part of a generation of prominent science fiction writers in the 20th century, including contemporaries such as Ursula K. Le Guin, Samuel R. Delany, Chip Delany, and Nalo Hopkinson.\\nhuman: What awards did she win?\\nai: Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.\\nhuman: Which other women sci-fi writers might I want to read?\\nai: You might want to read Ursula K. Le Guin or Joanna Russ.\\nhuman: Write a short synopsis of Butler's book, Parable of the Sower. What is it about?\\nai: Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.\")],\n",
" 'output': 'Parable of the Sower is highly relevant to contemporary society as it explores themes of environmental degradation, social and economic inequality, and the struggle for survival in a chaotic world. It also delves into issues of race, gender, and religion, making it a thought-provoking and timely read.'}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_chain.invoke(\n",
" input=\"What is the book's relevance to the challenges facing contemporary society?\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Inspect the Zep memory\n",
"\n",
"Note the summary, and that the history has been enriched with token counts, UUIDs, and timestamps.\n",
"\n",
"Summaries are biased towards the most recent messages.\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T14:35:11.437446Z",
"start_time": "2024-05-10T14:35:10.664076Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Octavia Estelle Butler was an acclaimed American science fiction author. While none of her books were directly adapted into movies, her novel Kindred was adapted into a TV series on FX. Butler was part of a generation of prominent science fiction writers in the 20th century, including contemporaries such as Ursula K. Le Guin, Samuel R. Delany, Chip Delany, and Nalo Hopkinson.\n",
"\n",
"\n",
"Conversation Facts: \n",
"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\n",
"\n",
"Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.\n",
"\n",
"Ursula K. Le Guin is known for novels like The Left Hand of Darkness and The Dispossessed.\n",
"\n",
"Joanna Russ is the author of the influential feminist science fiction novel The Female Man.\n",
"\n",
"Margaret Atwood is known for works like The Handmaid's Tale and the MaddAddam trilogy.\n",
"\n",
"Connie Willis is an award-winning author of science fiction and fantasy, known for novels like Doomsday Book.\n",
"\n",
"Octavia Butler is a pioneering black female science fiction author, known for Kindred and the Parable series.\n",
"\n",
"Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993.\n",
"\n",
"The novel follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.\n",
"\n",
"Parable of the Sower explores themes of environmental degradation, social and economic inequality, and the struggle for survival in a chaotic world.\n",
"\n",
"The novel also delves into issues of race, gender, and religion, making it a thought-provoking and timely read.\n",
"\n",
"human :\n",
" {'content': \"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\\nOctavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.\\nUrsula K. Le Guin is known for novels like The Left Hand of Darkness and The Dispossessed.\\nJoanna Russ is the author of the influential feminist science fiction novel The Female Man.\\nMargaret Atwood is known for works like The Handmaid's Tale and the MaddAddam trilogy.\\nConnie Willis is an award-winning author of science fiction and fantasy, known for novels like Doomsday Book.\\nOctavia Butler is a pioneering black female science fiction author, known for Kindred and the Parable series.\\nParable of the Sower is a science fiction novel by Octavia Butler, published in 1993.\\nThe novel follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.\\nParable of the Sower explores themes of environmental degradation, social and economic inequality, and the struggle for survival in a chaotic world.\\nThe novel also delves into issues of race, gender, and religion, making it a thought-provoking and timely read.\\nOctavia Estelle Butler was an acclaimed American science fiction author. While none of her books were directly adapted into movies, her novel Kindred was adapted into a TV series on FX. Butler was part of a generation of prominent science fiction writers in the 20th century, including contemporaries such as Ursula K. Le Guin, Samuel R. Delany, Chip Delany, and Nalo Hopkinson.\\nhuman: Which other women sci-fi writers might I want to read?\\nai: You might want to read Ursula K. Le Guin or Joanna Russ.\\nhuman: Write a short synopsis of Butler's book, Parable of the Sower. What is it about?\\nai: Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.\\nhuman: What is the book's relevance to the challenges facing contemporary society?\\nai: Parable of the Sower is highly relevant to contemporary society as it explores themes of environmental degradation, social and economic inequality, and the struggle for survival in a chaotic world. It also delves into issues of race, gender, and religion, making it a thought-provoking and timely read.\", 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'human', 'name': None, 'id': None, 'example': False}\n"
]
}
],
"source": [
"def print_messages(messages):\n",
" for m in messages:\n",
" print(m.type, \":\\n\", m.dict())\n",
"\n",
"\n",
"print(memory.chat_memory.zep_summary)\n",
"print(\"\\n\")\n",
"print(\"Conversation Facts: \")\n",
"facts = memory.chat_memory.zep_facts\n",
"for fact in facts:\n",
" print(fact + \"\\n\")\n",
"print_messages(memory.chat_memory.messages)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Vector search over the Zep memory\n",
"\n",
"Zep provides native vector search over historical conversation memory via the `ZepRetriever`.\n",
"\n",
"You can use the `ZepRetriever` with chains that support passing in a Langchain `Retriever` object.\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T14:35:33.023765Z",
"start_time": "2024-05-10T14:35:32.613576Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='Which other women sci-fi writers might I want to read?' created_at='2024-05-10T14:34:16.714292Z' metadata=None role='human' role_type=None token_count=12 updated_at='0001-01-01T00:00:00Z' uuid_='64ca1fae-8db1-4b4f-8a45-9b0e57e88af5' 0.8960460126399994\n"
]
}
],
"source": [
"retriever = ZepCloudRetriever(\n",
" session_id=session_id,\n",
" api_key=zep_api_key,\n",
")\n",
"\n",
"search_results = memory.chat_memory.search(\"who are some famous women sci-fi authors?\")\n",
"for r in search_results:\n",
" if r.score > 0.8: # Only print results with similarity of 0.8 or higher\n",
" print(r.message, r.score)"
]
},
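{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a minimal, illustrative sketch of passing the `ZepCloudRetriever` into an LCEL chain. It assumes the `retriever` and `openai_key` defined above; the prompt wording is an assumption, not part of the Zep API.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"qa_prompt = ChatPromptTemplate.from_template(\n",
"    \"Answer using only this chat history:\\n{context}\\n\\nQuestion: {question}\"\n",
")\n",
"\n",
"\n",
"def format_memories(docs):\n",
"    # Each result is a Document whose page_content holds the message text\n",
"    return \"\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"history_chain = (\n",
"    {\"context\": retriever | format_memories, \"question\": RunnablePassthrough()}\n",
"    | qa_prompt\n",
"    | ChatOpenAI(temperature=0, openai_api_key=openai_key)\n",
"    | StrOutputParser()\n",
")\n",
"\n",
"# history_chain.invoke(\"What awards did Octavia Butler win?\")"
]
},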
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -271,6 +271,26 @@ See a [usage example](/docs/integrations/retrievers/azure_ai_search).
from langchain.retrievers import AzureAISearchRetriever
```
## Tools
### Azure Container Apps dynamic sessions
We need to get the `POOL_MANAGEMENT_ENDPOINT` environment variable from the Azure Container Apps service.
See the instructions [here](https://python.langchain.com/v0.2/docs/integrations/tools/azure_dynamic_sessions/#setup).
We need to install a python package.
```bash
pip install langchain-azure-dynamic-sessions
```
See a [usage example](/docs/integrations/tools/azure_dynamic_sessions).
```python
from langchain_azure_dynamic_sessions import SessionsPythonREPLTool
```
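A minimal usage sketch (assuming the tool accepts the endpoint via a `pool_management_endpoint` argument and that the environment variable above is set):
```python
import os

from langchain_azure_dynamic_sessions import SessionsPythonREPLTool

# Run Python code in a sandboxed dynamic session
tool = SessionsPythonREPLTool(
    pool_management_endpoint=os.environ["POOL_MANAGEMENT_ENDPOINT"]
)
print(tool.invoke("6 * 7"))
```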
## Toolkits
### Azure AI Services

View File

@@ -64,7 +64,7 @@ set_llm_cache(AstraDBCache(
))
```
Learn more in the [example notebook](/docs/integrations/llms/llm_caching#astra-db-caches) (scroll to the Astra DB section).
Learn more in the [example notebook](/docs/integrations/llm_caching#astra-db-caches) (scroll to the Astra DB section).
## Semantic LLM Cache
@@ -80,7 +80,7 @@ set_llm_cache(AstraDBSemanticCache(
))
```
Learn more in the [example notebook](/docs/integrations/llms/llm_caching#astra-db-caches) (scroll to the appropriate section).
Learn more in the [example notebook](/docs/integrations/llm_caching#astra-db-caches) (scroll to the appropriate section).
Learn more in the [example notebook](/docs/integrations/memory/astradb_chat_message_history).

View File

@@ -40,7 +40,7 @@ from langchain_community.cache import CassandraCache
set_llm_cache(CassandraCache())
```
Learn more in the [example notebook](/docs/integrations/llms/llm_caching#cassandra-caches) (scroll to the Cassandra section).
Learn more in the [example notebook](/docs/integrations/llm_caching#cassandra-caches) (scroll to the Cassandra section).
## Semantic LLM Cache
@@ -54,7 +54,7 @@ set_llm_cache(CassandraSemanticCache(
))
```
Learn more in the [example notebook](/docs/integrations/llms/llm_caching#cassandra-caches) (scroll to the appropriate section).
Learn more in the [example notebook](/docs/integrations/llm_caching#cassandra-caches) (scroll to the appropriate section).
## Document loader

View File

@@ -205,7 +205,7 @@ For chat models is very useful to define prompt as a set of message templates...
def simulate_conversation(human_input:str, agent_role:str="a pirate"):
"""
## System message
- note the `:system` sufix inside the <prompt:_role_> tag
- note the `:system` suffix inside the <prompt:_role_> tag
```<prompt:system>

View File

@@ -48,6 +48,6 @@ eng = sqlalchemy.create_engine(conn_str)
set_llm_cache(SQLAlchemyCache(engine=eng))
```
From here, see the [LLM Caching](/docs/integrations/llms/llm_caching) documentation on how to use.
From here, see the [LLM Caching](/docs/integrations/llm_caching) documentation on how to use.

View File

@@ -1,63 +1,82 @@
# NVIDIA
The `langchain-nvidia-ai-endpoints` package contains LangChain integrations for building applications with models on
the NVIDIA NIM inference microservice. NIM supports models across domains like chat, embedding, and re-ranking,
from the community as well as from NVIDIA. These models are optimized by NVIDIA to deliver the best performance on
NVIDIA accelerated infrastructure and are deployed as NIMs: easy-to-use, prebuilt containers that deploy anywhere
with a single command on NVIDIA accelerated infrastructure.
>NVIDIA provides an integration package for LangChain: `langchain-nvidia-ai-endpoints`.
NVIDIA hosted deployments of NIMs are available to test on the [NVIDIA API catalog](https://build.nvidia.com/). After testing,
NIMs can be exported from NVIDIA's API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud,
giving enterprises ownership and full control of their IP and AI application.
## NVIDIA AI Foundation Endpoints
NIMs are packaged as container images on a per model basis and are distributed as NGC container images through the NVIDIA NGC Catalog.
At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.
> [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to NVIDIA hosted API endpoints for
> NVIDIA AI Foundation Models like `Mixtral 8x7B`, `Llama 2`, `Stable Diffusion`, etc. These models,
> hosted on the [NVIDIA API catalog](https://build.nvidia.com/), are optimized, tested, and hosted on
> the NVIDIA AI platform, making them fast and easy to evaluate, further customize,
> and seamlessly run at peak performance on any accelerated stack.
>
> With [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/), you can get quick results from a fully
> accelerated stack running on [NVIDIA DGX Cloud](https://www.nvidia.com/en-us/data-center/dgx-cloud/). Once customized, these
> models can be deployed anywhere with enterprise-grade security, stability,
> and support using [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/).
Below is an example on how to use some common functionality surrounding text-generative and embedding models.
A selection of NVIDIA AI Foundation models is supported directly in LangChain with familiar APIs.
## Installation
The supported models can be found [in build.nvidia.com](https://build.nvidia.com/).
These models can be accessed via the [`langchain-nvidia-ai-endpoints`](https://pypi.org/project/langchain-nvidia-ai-endpoints/)
package, as shown below.
### Setting up
1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models
2. Click on your model of choice
3. Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.
4. Copy and save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints.
```bash
export NVIDIA_API_KEY=nvapi-XXXXXXXXXXXXXXXXXXXXXXXXXX
```python
pip install -U --quiet langchain-nvidia-ai-endpoints
```
- Install a package:
## Setup
```bash
pip install -U langchain-nvidia-ai-endpoints
**To get started:**
1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.
2. Click on your model of choice.
3. Under Input select the Python tab, and click `Get API Key`. Then click `Generate Key`.
4. Copy and save the generated key as NVIDIA_API_KEY. From there, you should have access to the endpoints.
```python
import getpass
import os
if not os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
    nvidia_api_key = getpass.getpass("Enter your NVIDIA API key: ")
    assert nvidia_api_key.startswith("nvapi-"), f"{nvidia_api_key[:5]}... is not a valid key"
    os.environ["NVIDIA_API_KEY"] = nvidia_api_key
```
### Chat models
See a [usage example](/docs/integrations/chat/nvidia_ai_endpoints).
## Working with NVIDIA API Catalog
```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA
llm = ChatNVIDIA(model="mixtral_8x7b")
llm = ChatNVIDIA(model="mistralai/mixtral-8x22b-instruct-v0.1")
result = llm.invoke("Write a ballad about LangChain.")
print(result.content)
```
### Embedding models
Using the API, you can query live endpoints available on the NVIDIA API Catalog to get quick results from a DGX-hosted cloud compute environment. All models are source-accessible and can be deployed on your own compute cluster using NVIDIA NIM, which is part of NVIDIA AI Enterprise, as shown in the next section, [Working with NVIDIA NIMs](#working-with-nvidia-nims).
See a [usage example](/docs/integrations/text_embedding/nvidia_ai_endpoints).
## Working with NVIDIA NIMs
When ready to deploy, you can self-host models with NVIDIA NIM—which is included with the NVIDIA AI Enterprise software license—and run them anywhere, giving you ownership of your customizations and full control of your intellectual property (IP) and AI applications.
[Learn more about NIMs](https://developer.nvidia.com/blog/nvidia-nim-offers-optimized-inference-microservices-for-deploying-ai-models-at-scale/)
```python
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings
from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings, NVIDIARerank
# connect to a chat NIM running at localhost:8000, specifying a specific model
llm = ChatNVIDIA(base_url="http://localhost:8000/v1", model="meta-llama3-8b-instruct")
# connect to an embedding NIM running at localhost:8080
embedder = NVIDIAEmbeddings(base_url="http://localhost:8080/v1")
# connect to a reranking NIM running at localhost:2016
ranker = NVIDIARerank(base_url="http://localhost:2016/v1")
```
## Using NVIDIA AI Foundation Endpoints
A selection of NVIDIA AI Foundation models are supported directly in LangChain with familiar APIs.
The active models which are supported can be found [in API Catalog](https://build.nvidia.com/).
**The following may be useful examples to help you get started:**
- **[`ChatNVIDIA` Model](/docs/integrations/chat/nvidia_ai_endpoints).**
- **[`NVIDIAEmbeddings` Model for RAG Workflows](/docs/integrations/text_embedding/nvidia_ai_endpoints).**
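As a quick sketch of the embeddings path (the model name below is illustrative; check the API Catalog for the embedding models currently available):
```python
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

# Illustrative model name; any embedding model from the API Catalog works here
embedder = NVIDIAEmbeddings(model="NV-Embed-QA")
vector = embedder.embed_query("What is a NIM?")
print(len(vector))
```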

View File

@@ -1,6 +1,6 @@
# PremAI
>[PremAI](https://app.premai.io) is a unified platform that lets you build powerful production-ready GenAI-powered applications with the least effort so that you can focus more on user experience and overall growth.
[PremAI](https://premai.io/) is an all-in-one platform that simplifies the creation of robust, production-ready applications powered by Generative AI. By streamlining the development process, PremAI allows you to concentrate on enhancing user experience and driving overall growth for your application. You can quickly start using our platform [here](https://docs.premai.io/quick-start).
## ChatPremAI
@@ -9,36 +9,27 @@ This example goes over how to use LangChain to interact with different chat mode
### Installation and setup
We start by installing langchain and premai-sdk. You can type the following command to install:
We start by installing `langchain` and `premai-sdk`. You can type the following command to install:
```bash
pip install premai langchain
```
Before proceeding further, please make sure that you have made an account on PremAI and already started a project. If not, then here's how you can start for free:
1. Sign in to [PremAI](https://app.premai.io/accounts/login/), if you are coming for the first time and create your API key [here](https://app.premai.io/api_keys/).
2. Go to [app.premai.io](https://app.premai.io) and this will take you to the project's dashboard.
3. Create a project and this will generate a project-id (written as ID). This ID will help you to interact with your deployed application.
4. Head over to LaunchPad (the one with 🚀 icon). And there deploy your model of choice. Your default model will be `gpt-4`. You can also set and fix different generation parameters (like max-tokens, temperature, etc) and also pre-set your system prompt.
Congratulations on creating your first deployed application on PremAI 🎉 Now we can use langchain to interact with our application.
Before proceeding further, please make sure that you have made an account on PremAI and already created a project. If not, please refer to the [quick start](https://docs.premai.io/introduction) guide to get started with the PremAI platform. Create your first project and grab your API key.
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_community.chat_models import ChatPremAI
```
### Setup ChatPrem instance in LangChain
### Setup PremAI client in LangChain
Once we import our required modules, let's set up our client. For now, let's assume that our `project_id` is 8. But make sure you use your project-id, otherwise, it will throw an error.
Once we have imported our required modules, let's set up our client. For now, let's assume that our `project_id` is `8`. But make sure you use your own project-id, otherwise it will throw an error.
To use langchain with prem, you do not need to pass any model name or set any parameters with our chat client. All of those will use the default model name and parameters of the LaunchPad model.
To use langchain with prem, you do not need to pass any model name or set any parameters with our chat-client. By default it will use the model name and parameters used in the [LaunchPad](https://docs.premai.io/get-started/launchpad).
> Note: If you change the `model` or any other parameters like `temperature` or `max_tokens` while setting the client, it will override existing default configurations, that was used in LaunchPad.
`NOTE:` If you change the `model_name` or any other parameter like `temperature` while setting the client, it will override existing default configurations.
```python
import os
@@ -50,21 +41,19 @@ if "PREMAI_API_KEY" not in os.environ:
chat = ChatPremAI(project_id=8)
```
### Calling the Model
### Chat Completions
Now you are all set. We can now start by interacting with our application. `ChatPremAI` supports two methods `invoke` (which is the same as `generate`) and `stream`.
`ChatPremAI` supports two methods: `invoke` (which is the same as `generate`) and `stream`.
The first one will give us a static result. Whereas the second one will stream tokens one by one. Here's how you can generate chat-like completions.
### Generation
```python
human_message = HumanMessage(content="Who are you?")
chat.invoke([human_message])
```
The above looks interesting, right? I set my default launchpad system-prompt as: `Always sound like a pirate` You can also, override the default system prompt if you need to. Here's how you can do it.
You can provide a system prompt like this:
```python
system_message = SystemMessage(content="You are a friendly assistant.")
@@ -82,16 +71,13 @@ chat.invoke(
)
```
> If you are going to place system prompt here, then it will override your system prompt that was fixed while deploying the application from the platform.
### Important notes:
Before proceeding further, please note that the current version of ChatPrem does not support parameters: [n](https://platform.openai.com/docs/api-reference/chat/create#chat-create-n) and [stop](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stop) are not supported.
We will provide support for those two above parameters in later versions.
> Please note that the current version of ChatPremAI does not support parameters: [n](https://platform.openai.com/docs/api-reference/chat/create#chat-create-n) and [stop](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stop).
### Streaming
And finally, here's how you do token streaming for dynamic chat-like applications.
In this section, let's see how we can stream tokens using langchain and PremAI. Here's how you do it.
```python
import sys
@@ -101,7 +87,7 @@ for chunk in chat.stream("hello how are you"):
sys.stdout.flush()
```
Similar to above, if you want to override the system-prompt and the generation parameters, here's how you can do it.
Similar to above, if you want to override the system-prompt and the generation parameters, you need to add the following:
```python
import sys
@@ -114,47 +100,30 @@ for chunk in chat.stream(
sys.stdout.flush()
```
## Embedding
This will stream tokens one after the other.
In this section, we are going to discuss how we can get access to different embedding models using `PremEmbeddings`. Let's start by doing some imports and defining our embedding object
## PremEmbeddings
In this section, we are going to discuss how we can get access to different embedding models using `PremEmbeddings` with LangChain. Let's start by importing our modules and setting our API key.
```python
from langchain_community.embeddings import PremEmbeddings
```
Once we import our required modules, let's set up our client. For now, let's assume that our `project_id` is 8. But make sure you use your project-id, otherwise, it will throw an error.
```python
import os
import getpass
from langchain_community.embeddings import PremEmbeddings
if os.environ.get("PREMAI_API_KEY") is None:
os.environ["PREMAI_API_KEY"] = getpass.getpass("PremAI API Key:")
# Define a model as a required parameter here since there is no default embedding model
```
We support lots of state-of-the-art embedding models. You can view our list of supported LLMs and embedding models [here](https://docs.premai.io/get-started/supported-models). For now, let's go with the `text-embedding-3-large` model for this example.
```python
model = "text-embedding-3-large"
embedder = PremEmbeddings(project_id=8, model=model)
```
We have defined our embedding model. We support a lot of embedding models. Here is a table that shows the number of embedding models we support.
| Provider | Slug | Context Tokens |
|-------------|------------------------------------------|----------------|
| cohere | embed-english-v3.0 | N/A |
| openai | text-embedding-3-small | 8191 |
| openai | text-embedding-3-large | 8191 |
| openai | text-embedding-ada-002 | 8191 |
| replicate | replicate/all-mpnet-base-v2 | N/A |
| together | togethercomputer/Llama-2-7B-32K-Instruct | N/A |
| mistralai | mistral-embed | 4096 |
To change the model, you simply need to copy the `slug` and access your embedding model. Now let's start using our embedding model with a single query followed by multiple queries (which is also called as a document)
```python
query = "Hello, this is a test query"
query_result = embedder.embed_query(query)
@@ -162,8 +131,11 @@ query_result = embedder.embed_query(query)
print(query_result[:5])
```
<Note>
Setting the `model_name` argument is mandatory for PremAIEmbeddings, unlike chat.
</Note>
Finally, let's embed a document
Finally, let's embed some sample documents
```python
documents = [
@@ -178,4 +150,20 @@ doc_result = embedder.embed_documents(documents)
# of the first document vector
print(doc_result[0][:5])
```
```
```python
print(f"Dimension of embeddings: {len(query_result)}")
```
Dimension of embeddings: 3072
```python
doc_result[:5]
```
>Result:
>
>[-0.02129288576543331,
0.0008162345038726926,
-0.004556538071483374,
0.02918623760342598,
-0.02547479420900345]

View File

@@ -28,38 +28,68 @@ In addition to Zep Open Source's memory management features, Zep Cloud offers:
- **Dialog Classification**: Instantly and accurately classify chat dialog. Understand user intent and emotion, segment users, and more. Route chains based on semantic context, and trigger events.
- **Structured Data Extraction**: Quickly extract business data from chat conversations using a schema you define. Understand what your Assistant should ask for next in order to complete its task.
> Interested in Zep Cloud? See [Zep Cloud Installation Guide](https://help.getzep.com/sdks), [Zep Cloud Message History Example](https://help.getzep.com/langchain/examples/messagehistory-example), [Zep Cloud Vector Store Example](https://help.getzep.com/langchain/examples/vectorstore-example)
## Open Source Installation and Setup
> Zep Open Source project: [https://github.com/getzep/zep](https://github.com/getzep/zep)
>
> Zep Open Source Docs: [https://docs.getzep.com/](https://docs.getzep.com/)
## Zep Open Source
Zep offers an open source version with a self-hosted option.
Please refer to the [Zep Open Source](https://github.com/getzep/zep) repo for more information.
You can also find Zep Open Source compatible [Retriever](/docs/integrations/retrievers/zep_memorystore), [Vector Store](/docs/integrations/vectorstores/zep) and [Memory](/docs/integrations/memory/zep_memory) examples
1. Install the Zep service. See the [Zep Quick Start Guide](https://docs.getzep.com/deployment/quickstart/).
## Zep Cloud Installation and Setup
2. Install the Zep Python SDK:
[Zep Cloud Docs](https://help.getzep.com)
1. Install the Zep Cloud SDK:
```bash
pip install zep_python
pip install zep_cloud
```
or
```bash
poetry add zep_cloud
```
## Memory
Zep's [Memory API](https://docs.getzep.com/sdk/chat_history/) persists your app's chat history and metadata to a Session, enriches the memory, automatically generates summaries, and enables vector similarity search over historical chat messages and summaries.
Zep's Memory API persists your users' chat history and metadata to a [Session](https://help.getzep.com/chat-history-memory/sessions), enriches the memory, and
enables vector similarity search over historical chat messages and dialog summaries.
There are two approaches to populating your prompt with chat history:
Zep offers several approaches to populating prompts with context from historical conversations.
1. Retrieve the most recent N messages (and potentially a summary) from a Session and use them to construct your prompt.
2. Search over the Session's chat history for messages that are relevant and use them to construct your prompt.
### Perpetual Memory
This is the default memory type.
Salient facts from the dialog are extracted and stored in a Fact Table.
This is updated in real-time as new messages are added to the Session.
Every time you call the Memory API to get a Memory, Zep returns the Fact Table, the most recent messages (per your Message Window setting), and a summary of the most recent messages prior to the Message Window.
The combination of the Fact Table, summary, and the most recent messages in a prompt provides both factual context and nuance to the LLM.
Both of these approaches may be useful, with the first providing the LLM with context as to the most recent interactions with a human. The second approach enables you to look back further in the chat history and retrieve messages that are relevant to the current conversation in a token-efficient manner.
### Summary Retriever Memory
Returns the most recent messages and a summary of past messages relevant to the current conversation,
enabling you to provide your Assistant with helpful context from past conversations.
### Message Window Buffer Memory
Returns the most recent N messages from the current conversation.
Additionally, Zep enables vector similarity searches for Messages or Summaries stored within its system.
This feature lets you populate prompts with past conversations that are contextually similar to a specific query,
organizing the results by a similarity Score.
`ZepCloudChatMessageHistory` and `ZepCloudMemory` classes can be imported to interact with Zep Cloud APIs.
`ZepCloudChatMessageHistory` is compatible with `RunnableWithMessageHistory`.
```python
from langchain.memory import ZepMemory
from langchain_community.chat_message_histories import ZepCloudChatMessageHistory
```
See a [RAG App Example here](/docs/integrations/memory/zep_memory).
See a [Perpetual Memory Example here](/docs/integrations/memory/zep_cloud_chat_message_history).
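For example, here is a minimal sketch of the `RunnableWithMessageHistory` wiring mentioned above (`chain` stands in for any Runnable you have built, and `zep_api_key` for your Zep Cloud API key):
```python
from langchain_community.chat_message_histories import ZepCloudChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

chain_with_history = RunnableWithMessageHistory(
    chain,  # placeholder: a Runnable that takes "question" and "chat_history"
    lambda session_id: ZepCloudChatMessageHistory(
        session_id=session_id, api_key=zep_api_key
    ),
    input_messages_key="question",
    history_messages_key="chat_history",
)
```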
You can use `ZepCloudMemory` together with agents that support Memory.
```python
from langchain.memory import ZepCloudMemory
```
See a [Memory RAG Example here](/docs/integrations/memory/zep_memory_cloud).
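A compact sketch of `ZepCloudMemory` usage (the arguments mirror the memory example notebook; `session_id` and `zep_api_key` are placeholders for your own values):
```python
from langchain.memory import ZepCloudMemory

memory = ZepCloudMemory(
    session_id=session_id,  # placeholder: unique id for this user/session
    api_key=zep_api_key,  # placeholder: your Zep Cloud API key
    memory_key="chat_history",
    return_messages=True,
)

# Zep enriches the session asynchronously; the summary and fact table
# become available on the chat history object.
print(memory.chat_memory.zep_summary)
for fact in memory.chat_memory.zep_facts:
    print(fact)
```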
## Retriever
@@ -67,24 +97,24 @@ Zep's Memory Retriever is a LangChain Retriever that enables you to retrieve mes
The Retriever supports searching over both individual messages and summaries of conversations. The latter is useful for providing rich, but succinct context to the LLM as to relevant past conversations.
Zep's Memory Retriever supports both similarity search and [Maximum Marginal Relevance (MMR) reranking](https://docs.getzep.com/sdk/search_query/). MMR search is useful for ensuring that the retrieved messages are diverse and not too similar to each other
Zep's Memory Retriever supports both similarity search and [Maximum Marginal Relevance (MMR) reranking](https://help.getzep.com/working-with-search#how-zeps-mmr-re-ranking-works). MMR search is useful for ensuring that the retrieved messages are diverse and not too similar to each other
See a [usage example](/docs/integrations/retrievers/zep_memorystore).
See a [usage example](/docs/integrations/retrievers/zep_cloud_memorystore).
```python
from langchain_community.retrievers import ZepRetriever
from langchain_community.retrievers import ZepCloudRetriever
```
## Vector store
Zep's [Document VectorStore API](https://docs.getzep.com/sdk/documents/) enables you to store and retrieve documents using vector similarity search. Zep doesn't require you to understand
Zep's [Document VectorStore API](https://help.getzep.com/document-collections) enables you to store and retrieve documents using vector similarity search. Zep doesn't require you to understand
distance functions, types of embeddings, or indexing best practices. You just pass in your chunked documents, and Zep handles the rest.
Zep supports both similarity search and [Maximum Marginal Relevance (MMR) reranking](https://docs.getzep.com/sdk/search_query/).
Zep supports both similarity search and [Maximum Marginal Relevance (MMR) reranking](https://help.getzep.com/working-with-search#how-zeps-mmr-re-ranking-works).
MMR search is useful for ensuring that the retrieved documents are diverse and not too similar to each other.
```python
from langchain_community.vectorstores import ZepVectorStore
from langchain_community.vectorstores import ZepCloudVectorStore
```
See a [usage example](/docs/integrations/vectorstores/zep).
See a [usage example](/docs/integrations/vectorstores/zep_cloud).

View File

@@ -0,0 +1,636 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"# Milvus Hybrid Search\n",
"\n",
"> [Milvus](https://milvus.io/docs) is an open-source vector database built to power embedding similarity search and AI applications. Milvus makes unstructured data search more accessible, and provides a consistent user experience regardless of the deployment environment.\n",
"\n",
"This notebook goes over how to use the Milvus Hybrid Search retriever, which combines the strengths of both dense and sparse vector search.\n",
"\n",
"For more reference please go to [Milvus Multi-Vector Search](https://milvus.io/docs/multi-vector-search.md)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## Prerequisites\n",
"### Install dependencies\n",
"You need to prepare to install the following dependencies\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet pymilvus[model] langchain-milvus langchain-openai"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"Import necessary modules and classes"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"from pymilvus import (\n",
" Collection,\n",
" CollectionSchema,\n",
" DataType,\n",
" FieldSchema,\n",
" WeightedRanker,\n",
" connections,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_milvus.retrievers import MilvusCollectionHybridSearchRetriever\n",
"from langchain_milvus.utils.sparse import BM25SparseEmbedding\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### Start the Milvus service\n",
"\n",
"Please refer to the [Milvus documentation](https://milvus.io/docs/install_standalone-docker.md) to start the Milvus service.\n",
"\n",
"After starting milvus, you need to specify your milvus connection URI.\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"CONNECTION_URI = \"http://localhost:19530\""
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### Prepare OpenAI API Key\n",
"\n",
"Please refer to the [OpenAI documentation](https://platform.openai.com/account/api-keys) to obtain your OpenAI API key, and set it as an environment variable.\n",
"\n",
"```shell\n",
"export OPENAI_API_KEY=<your_api_key>\n",
"```\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Prepare data and Load\n",
"### Prepare dense and sparse embedding functions\n",
"\n",
" Let us fictionalize 10 fake descriptions of novels. In actual production, it may be a large amount of text data."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"texts = [\n",
" \"In 'The Whispering Walls' by Ava Moreno, a young journalist named Sophia uncovers a decades-old conspiracy hidden within the crumbling walls of an ancient mansion, where the whispers of the past threaten to destroy her own sanity.\",\n",
" \"In 'The Last Refuge' by Ethan Blackwood, a group of survivors must band together to escape a post-apocalyptic wasteland, where the last remnants of humanity cling to life in a desperate bid for survival.\",\n",
" \"In 'The Memory Thief' by Lila Rose, a charismatic thief with the ability to steal and manipulate memories is hired by a mysterious client to pull off a daring heist, but soon finds themselves trapped in a web of deceit and betrayal.\",\n",
" \"In 'The City of Echoes' by Julian Saint Clair, a brilliant detective must navigate a labyrinthine metropolis where time is currency, and the rich can live forever, but at a terrible cost to the poor.\",\n",
" \"In 'The Starlight Serenade' by Ruby Flynn, a shy astronomer discovers a mysterious melody emanating from a distant star, which leads her on a journey to uncover the secrets of the universe and her own heart.\",\n",
" \"In 'The Shadow Weaver' by Piper Redding, a young orphan discovers she has the ability to weave powerful illusions, but soon finds herself at the center of a deadly game of cat and mouse between rival factions vying for control of the mystical arts.\",\n",
" \"In 'The Lost Expedition' by Caspian Grey, a team of explorers ventures into the heart of the Amazon rainforest in search of a lost city, but soon finds themselves hunted by a ruthless treasure hunter and the treacherous jungle itself.\",\n",
" \"In 'The Clockwork Kingdom' by Augusta Wynter, a brilliant inventor discovers a hidden world of clockwork machines and ancient magic, where a rebellion is brewing against the tyrannical ruler of the land.\",\n",
" \"In 'The Phantom Pilgrim' by Rowan Welles, a charismatic smuggler is hired by a mysterious organization to transport a valuable artifact across a war-torn continent, but soon finds themselves pursued by deadly assassins and rival factions.\",\n",
" \"In 'The Dreamwalker's Journey' by Lyra Snow, a young dreamwalker discovers she has the ability to enter people's dreams, but soon finds herself trapped in a surreal world of nightmares and illusions, where the boundaries between reality and fantasy blur.\",\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will use the [OpenAI Embedding](https://platform.openai.com/docs/guides/embeddings) to generate dense vectors, and the [BM25 algorithm](https://en.wikipedia.org/wiki/Okapi_BM25) to generate sparse vectors.\n",
"\n",
"Initialize dense embedding function and get dimension"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1536"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dense_embedding_func = OpenAIEmbeddings()\n",
"dense_dim = len(dense_embedding_func.embed_query(texts[1]))\n",
"dense_dim"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Initialize sparse embedding function.\n",
"\n",
"Note that the output of sparse embedding is a set of sparse vectors, which represents the index and weight of the keywords of the input text."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{0: 0.4270424944042204,\n",
" 21: 1.845826690498331,\n",
" 22: 1.845826690498331,\n",
" 23: 1.845826690498331,\n",
" 24: 1.845826690498331,\n",
" 25: 1.845826690498331,\n",
" 26: 1.845826690498331,\n",
" 27: 1.2237754316221157,\n",
" 28: 1.845826690498331,\n",
" 29: 1.845826690498331,\n",
" 30: 1.845826690498331,\n",
" 31: 1.845826690498331,\n",
" 32: 1.845826690498331,\n",
" 33: 1.845826690498331,\n",
" 34: 1.845826690498331,\n",
" 35: 1.845826690498331,\n",
" 36: 1.845826690498331,\n",
" 37: 1.845826690498331,\n",
" 38: 1.845826690498331,\n",
" 39: 1.845826690498331}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sparse_embedding_func = BM25SparseEmbedding(corpus=texts)\n",
"sparse_embedding_func.embed_query(texts[1])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Milvus Collection and load data\n",
"\n",
"Initialize connection URI and establish connection"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"connections.connect(uri=CONNECTION_URI)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Define field names and their data types"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"pk_field = \"doc_id\"\n",
"dense_field = \"dense_vector\"\n",
"sparse_field = \"sparse_vector\"\n",
"text_field = \"text\"\n",
"fields = [\n",
" FieldSchema(\n",
" name=pk_field,\n",
" dtype=DataType.VARCHAR,\n",
" is_primary=True,\n",
" auto_id=True,\n",
" max_length=100,\n",
" ),\n",
" FieldSchema(name=dense_field, dtype=DataType.FLOAT_VECTOR, dim=dense_dim),\n",
" FieldSchema(name=sparse_field, dtype=DataType.SPARSE_FLOAT_VECTOR),\n",
" FieldSchema(name=text_field, dtype=DataType.VARCHAR, max_length=65_535),\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a collection with the defined schema"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"schema = CollectionSchema(fields=fields, enable_dynamic_field=False)\n",
"collection = Collection(\n",
" name=\"IntroductionToTheNovels\", schema=schema, consistency_level=\"Strong\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Define index for dense and sparse vectors"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"dense_index = {\"index_type\": \"FLAT\", \"metric_type\": \"IP\"}\n",
"collection.create_index(\"dense_vector\", dense_index)\n",
"sparse_index = {\"index_type\": \"SPARSE_INVERTED_INDEX\", \"metric_type\": \"IP\"}\n",
"collection.create_index(\"sparse_vector\", sparse_index)\n",
"collection.flush()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Insert entities into the collection and load the collection"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"entities = []\n",
"for text in texts:\n",
" entity = {\n",
" dense_field: dense_embedding_func.embed_documents([text])[0],\n",
" sparse_field: sparse_embedding_func.embed_documents([text])[0],\n",
" text_field: text,\n",
" }\n",
" entities.append(entity)\n",
"collection.insert(entities)\n",
"collection.load()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build RAG chain with Retriever\n",
"### Create the Retriever\n",
"\n",
"Define search parameters for sparse and dense fields, and create a retriever"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"sparse_search_params = {\"metric_type\": \"IP\"}\n",
"dense_search_params = {\"metric_type\": \"IP\", \"params\": {}}\n",
"retriever = MilvusCollectionHybridSearchRetriever(\n",
" collection=collection,\n",
" rerank=WeightedRanker(0.5, 0.5),\n",
" anns_fields=[dense_field, sparse_field],\n",
" field_embeddings=[dense_embedding_func, sparse_embedding_func],\n",
" field_search_params=[dense_search_params, sparse_search_params],\n",
" top_k=3,\n",
" text_field=text_field,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"In the input parameters of this Retriever, we use a dense embedding and a sparse embedding to perform hybrid search on the two fields of this Collection, and use WeightedRanker for reranking. Finally, 3 top-K Documents will be returned."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content=\"In 'The Lost Expedition' by Caspian Grey, a team of explorers ventures into the heart of the Amazon rainforest in search of a lost city, but soon finds themselves hunted by a ruthless treasure hunter and the treacherous jungle itself.\", metadata={'doc_id': '449281835035545843'}),\n",
" Document(page_content=\"In 'The Phantom Pilgrim' by Rowan Welles, a charismatic smuggler is hired by a mysterious organization to transport a valuable artifact across a war-torn continent, but soon finds themselves pursued by deadly assassins and rival factions.\", metadata={'doc_id': '449281835035545845'}),\n",
" Document(page_content=\"In 'The Dreamwalker's Journey' by Lyra Snow, a young dreamwalker discovers she has the ability to enter people's dreams, but soon finds herself trapped in a surreal world of nightmares and illusions, where the boundaries between reality and fantasy blur.\", metadata={'doc_id': '449281835035545846'})]"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.invoke(\"What are the story about ventures?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Build the RAG chain\n",
"\n",
"Initialize ChatOpenAI and define a prompt template"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"llm = ChatOpenAI()\n",
"\n",
"PROMPT_TEMPLATE = \"\"\"\n",
"Human: You are an AI assistant, and provides answers to questions by using fact based and statistical information when possible.\n",
"Use the following pieces of information to provide a concise answer to the question enclosed in <question> tags.\n",
"\n",
"<context>\n",
"{context}\n",
"</context>\n",
"\n",
"<question>\n",
"{question}\n",
"</question>\n",
"\n",
"Assistant:\"\"\"\n",
"\n",
"prompt = PromptTemplate(\n",
" template=PROMPT_TEMPLATE, input_variables=[\"context\", \"question\"]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"Define a function for formatting documents"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"Define a chain using the retriever and other components"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"rag_chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"Perform a query using the defined chain"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"data": {
"text/plain": [
"\"Lila Rose has written 'The Memory Thief,' which follows a charismatic thief with the ability to steal and manipulate memories as they navigate a daring heist and a web of deceit and betrayal.\""
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rag_chain.invoke(\"What novels has Lila written and what are their contents?\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"Drop the collection"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"collection.drop()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -20,7 +20,7 @@
"\n",
"I have used the cloud version of Milvus, thus I need `uri` and `token` as well.\n",
"\n",
"NOTE: The self-query retriever requires you to have `lark` installed (`pip install lark`). We also need the `pymilvus` package."
"NOTE: The self-query retriever requires you to have `lark` installed (`pip install lark`). We also need the `langchain_milvus` package."
]
},
{
@@ -29,16 +29,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet lark"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet pymilvus"
"%pip install --upgrade --quiet lark langchain_milvus"
]
},
{
@@ -67,8 +58,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import Milvus\n",
"from langchain_core.documents import Document\n",
"from langchain_milvus.vectorstores import Milvus\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings()"
@@ -388,4 +379,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
}
}

View File

@@ -0,0 +1,470 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"# Zep Cloud\n",
"## Retriever Example for [Zep Cloud](https://docs.getzep.com/)\n",
"\n",
"> Recall, understand, and extract data from chat histories. Power personalized AI experiences.\n",
"\n",
"> [Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps.\n",
"> With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant,\n",
"> while also reducing hallucinations, latency, and cost.\n",
"\n",
"> See [Zep Cloud Installation Guide](https://help.getzep.com/sdks) and more [Zep Cloud Langchain Examples](https://github.com/getzep/zep-python/tree/main/examples)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Retriever Example\n",
"\n",
"This notebook demonstrates how to search historical chat message histories using the [Zep Long-term Memory Store](https://www.getzep.com/).\n",
"\n",
"We'll demonstrate:\n",
"\n",
"1. Adding conversation history to the Zep memory store.\n",
"2. Vector search over the conversation history: \n",
" 1. With a similarity search over chat messages\n",
" 2. Using maximal marginal relevance re-ranking of a chat message search\n",
" 3. Filtering a search using metadata filters\n",
" 4. A similarity search over summaries of the chat messages\n",
" 5. Using maximal marginal relevance re-ranking of a summary search\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import getpass\n",
"import time\n",
"from uuid import uuid4\n",
"\n",
"from langchain_community.memory.zep_cloud_memory import ZepCloudMemory\n",
"from langchain_community.retrievers import ZepCloudRetriever\n",
"from langchain_core.messages import AIMessage, HumanMessage\n",
"\n",
"# Provide your Zep API key.\n",
"zep_api_key = getpass.getpass()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialize the Zep Chat Message History Class and add a chat message history to the memory store\n",
"\n",
"**NOTE:** Unlike other Retrievers, the content returned by the Zep Retriever is session/user specific. A `session_id` is required when instantiating the Retriever."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"session_id = str(uuid4()) # This is a unique identifier for the user/session\n",
"\n",
"# Initialize the Zep Memory Class\n",
"zep_memory = ZepCloudMemory(session_id=session_id, api_key=zep_api_key)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Preload some messages into the memory. The default message window is 4 messages. We want to push beyond this to demonstrate auto-summarization.\n",
"test_history = [\n",
" {\"role\": \"human\", \"role_type\": \"user\", \"content\": \"Who was Octavia Butler?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"Octavia Estelle Butler (June 22, 1947 February 24, 2006) was an American\"\n",
" \" science fiction author.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"role_type\": \"user\",\n",
" \"content\": \"Which books of hers were made into movies?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"The most well-known adaptation of Octavia Butler's work is the FX series\"\n",
" \" Kindred, based on her novel of the same name.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"role_type\": \"user\", \"content\": \"Who were her contemporaries?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R.\"\n",
" \" Delany, and Joanna Russ.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"role_type\": \"user\", \"content\": \"What awards did she win?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur\"\n",
" \" Fellowship.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"role_type\": \"user\",\n",
" \"content\": \"Which other women sci-fi writers might I want to read?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": \"You might want to read Ursula K. Le Guin or Joanna Russ.\",\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"role_type\": \"user\",\n",
" \"content\": (\n",
" \"Write a short synopsis of Butler's book, Parable of the Sower. What is it\"\n",
" \" about?\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"Parable of the Sower is a science fiction novel by Octavia Butler,\"\n",
" \" published in 1993. It follows the story of Lauren Olamina, a young woman\"\n",
" \" living in a dystopian future where society has collapsed due to\"\n",
" \" environmental disasters, poverty, and violence.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"role_type\": \"user\",\n",
" \"content\": \"What is the setting of the book?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"The book is set in a dystopian future in the 2020s, where society has\"\n",
" \" collapsed due to climate change and economic crises.\"\n",
" ),\n",
" },\n",
" {\"role\": \"human\", \"role_type\": \"user\", \"content\": \"Who is the protagonist?\"},\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"The protagonist of the book is Lauren Olamina, a young woman who possesses\"\n",
" \" 'hyperempathy', the ability to feel pain and other sensations she\"\n",
" \" witnesses.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"role_type\": \"user\",\n",
" \"content\": \"What is the main theme of the book?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"The main theme of the book is survival in the face of drastic societal\"\n",
" \" change and collapse. It also explores themes of adaptability, community,\"\n",
" \" and the human capacity for change.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"role_type\": \"user\",\n",
" \"content\": \"What is the 'Parable of the Sower'?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"The 'Parable of the Sower' is a biblical parable that Butler uses as a\"\n",
" \" metaphor in the book. In the parable, a sower scatters seeds, some of\"\n",
" \" which fall on fertile ground and grow, while others fall on rocky ground\"\n",
" \" or among thorns and fail to grow. The parable is used to illustrate the\"\n",
" \" importance of receptivity and preparedness in the face of change.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"role_type\": \"user\",\n",
" \"content\": \"What is Butler's writing style like?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"role_type\": \"assistant\",\n",
" \"content\": (\n",
" \"Butler's writing style is known for its clarity, directness, and\"\n",
" \" psychological insight. Her narratives often involve complex, diverse\"\n",
" \" characters and explore themes of race, gender, and power.\"\n",
" ),\n",
" },\n",
" {\n",
" \"role\": \"human\",\n",
" \"role_type\": \"user\",\n",
" \"content\": \"What other books has she written?\",\n",
" },\n",
" {\n",
" \"role\": \"ai\",\n",
" \"content\": (\n",
" \"In addition to 'Parable of the Sower', Butler has written several other\"\n",
" \" notable works, including 'Kindred', 'Dawn', and 'Parable of the Talents'.\"\n",
" ),\n",
" },\n",
"]\n",
"\n",
"for msg in test_history:\n",
" zep_memory.chat_memory.add_message(\n",
" HumanMessage(content=msg[\"content\"])\n",
" if msg[\"role\"] == \"human\"\n",
" else AIMessage(content=msg[\"content\"])\n",
" )\n",
"\n",
"time.sleep(\n",
" 10\n",
") # Wait for the messages to be embedded and summarized, this happens asynchronously."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Use the Zep Retriever to vector search over the Zep memory\n",
"\n",
"Zep provides native vector search over historical conversation memory. Embedding happens automatically.\n",
"\n",
"NOTE: Embedding of messages occurs asynchronously, so the first query may not return results. Subsequent queries will return results as the embeddings are generated."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T14:32:06.613100Z",
"start_time": "2024-05-10T14:32:06.369301Z"
},
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content=\"What is the 'Parable of the Sower'?\", metadata={'score': 0.9333381652832031, 'uuid': 'bebc441c-a32d-44a1-ae61-968e7b3d4956', 'created_at': '2024-05-10T05:02:01.857627Z', 'token_count': 11, 'role': 'human'}),\n",
" Document(page_content=\"The 'Parable of the Sower' is a biblical parable that Butler uses as a metaphor in the book. In the parable, a sower scatters seeds, some of which fall on fertile ground and grow, while others fall on rocky ground or among thorns and fail to grow. The parable is used to illustrate the importance of receptivity and preparedness in the face of change.\", metadata={'score': 0.8757256865501404, 'uuid': '193c60d8-2b7b-4eb1-a4be-c2d8afd92991', 'created_at': '2024-05-10T05:02:01.97174Z', 'token_count': 82, 'role': 'ai'}),\n",
" Document(page_content=\"Write a short synopsis of Butler's book, Parable of the Sower. What is it about?\", metadata={'score': 0.8641344904899597, 'uuid': 'fc78901d-a625-4530-ba63-1ae3e3b11683', 'created_at': '2024-05-10T05:02:00.942994Z', 'token_count': 21, 'role': 'human'}),\n",
" Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', metadata={'score': 0.8581685125827789, 'uuid': '91f2cda4-276e-446d-96bf-07d34e5af616', 'created_at': '2024-05-10T05:02:01.05577Z', 'token_count': 54, 'role': 'ai'}),\n",
" Document(page_content=\"In addition to 'Parable of the Sower', Butler has written several other notable works, including 'Kindred', 'Dawn', and 'Parable of the Talents'.\", metadata={'score': 0.8076582252979279, 'uuid': 'e3994519-9a90-410c-b14c-2c652f6d184f', 'created_at': '2024-05-10T05:02:02.401682Z', 'token_count': 37, 'role': 'ai'})]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"zep_retriever = ZepCloudRetriever(\n",
" api_key=zep_api_key,\n",
" session_id=session_id, # Ensure that you provide the session_id when instantiating the Retriever\n",
" top_k=5,\n",
")\n",
"\n",
"await zep_retriever.ainvoke(\"Who wrote Parable of the Sower?\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also use the Zep sync API to retrieve results:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T14:31:37.611570Z",
"start_time": "2024-05-10T14:31:37.298903Z"
},
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler set in a dystopian future in the 2020s. The story follows Lauren Olamina, a young woman living in a society that has collapsed due to environmental disasters, poverty, and violence. The novel explores themes of societal breakdown, the struggle for survival, and the search for a better future.', metadata={'score': 0.8473024368286133, 'uuid': 'e4689f8e-33be-4a59-a9c2-e5ef5dd70f74', 'created_at': '2024-05-10T05:02:02.713123Z', 'token_count': 76})]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"zep_retriever.invoke(\"Who wrote Parable of the Sower?\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Reranking using MMR (Maximal Marginal Relevance)\n",
"\n",
"Zep has native, SIMD-accelerated support for reranking results using MMR. This is useful for removing redundancy in results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"zep_retriever = ZepCloudRetriever(\n",
" api_key=zep_api_key,\n",
" session_id=session_id, # Ensure that you provide the session_id when instantiating the Retriever\n",
" top_k=5,\n",
" search_type=\"mmr\",\n",
" mmr_lambda=0.5,\n",
")\n",
"\n",
"await zep_retriever.ainvoke(\"Who wrote Parable of the Sower?\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using metadata filters to refine search results\n",
"\n",
"Zep supports filtering results by metadata. This is useful for filtering results by entity type, or other metadata.\n",
"\n",
"More information here: https://help.getzep.com/document-collections#searching-a-collection-with-hybrid-vector-search"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"filter = {\"where\": {\"jsonpath\": '$[*] ? (@.baz == \"qux\")'}}\n",
"\n",
"await zep_retriever.ainvoke(\n",
" \"Who wrote Parable of the Sower?\", config={\"metadata\": filter}\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Searching over Summaries with MMR Reranking\n",
"\n",
"Zep automatically generates summaries of chat messages. These summaries can be searched over using the Zep Retriever. Since a summary is a distillation of a conversation, they're more likely to match your search query and offer rich, succinct context to the LLM.\n",
"\n",
"Successive summaries may include similar content, with Zep's similarity search returning the highest matching results but with little diversity.\n",
"MMR re-ranks the results to ensure that the summaries you populate into your prompt are both relevant and each offers additional information to the LLM."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"ExecuteTime": {
"end_time": "2024-05-10T14:32:56.877960Z",
"start_time": "2024-05-10T14:32:56.517360Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler set in a dystopian future in the 2020s. The story follows Lauren Olamina, a young woman living in a society that has collapsed due to environmental disasters, poverty, and violence. The novel explores themes of societal breakdown, the struggle for survival, and the search for a better future.', metadata={'score': 0.8473024368286133, 'uuid': 'e4689f8e-33be-4a59-a9c2-e5ef5dd70f74', 'created_at': '2024-05-10T05:02:02.713123Z', 'token_count': 76}),\n",
" Document(page_content='The \\'Parable of the Sower\\' refers to a new religious belief system that the protagonist, Lauren Olamina, develops over the course of the novel. As her community disintegrates due to climate change, economic collapse, and social unrest, Lauren comes to believe that humanity must adapt and \"shape God\" in order to survive. The \\'Parable of the Sower\\' is the foundational text of this new religion, which Lauren calls \"Earthseed\", that emphasizes the inevitability of change and the need for humanity to take an active role in shaping its own future. This parable is a central thematic element of the novel, representing the protagonist\\'s search for meaning and purpose in the face of societal upheaval.', metadata={'score': 0.8466987311840057, 'uuid': '1f1a44eb-ebd8-4617-ac14-0281099bd770', 'created_at': '2024-05-10T05:02:07.541073Z', 'token_count': 146}),\n",
" Document(page_content='The dialog discusses the central themes of Octavia Butler\\'s acclaimed science fiction novel \"Parable of the Sower.\" The main theme is survival in the face of drastic societal collapse, and the importance of adaptability, community, and the human capacity for change. The \"Parable of the Sower,\" a biblical parable, serves as a metaphorical framework for the novel, illustrating the need for receptivity and preparedness when confronting transformative upheaval.', metadata={'score': 0.8283970355987549, 'uuid': '4158a750-3ccd-45ce-ab88-fed5ba68b755', 'created_at': '2024-05-10T05:02:06.510068Z', 'token_count': 91})]"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"zep_retriever = ZepCloudRetriever(\n",
" api_key=zep_api_key,\n",
" session_id=session_id, # Ensure that you provide the session_id when instantiating the Retriever\n",
" top_k=3,\n",
" search_scope=\"summary\",\n",
" search_type=\"mmr\",\n",
" mmr_lambda=0.5,\n",
")\n",
"\n",
"await zep_retriever.ainvoke(\"Who wrote Parable of the Sower?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -6,7 +6,7 @@
"collapsed": false
},
"source": [
"# Zep\n",
"# Zep Open Source\n",
"## Retriever Example for [Zep](https://docs.getzep.com/)\n",
"\n",
"> Recall, understand, and extract data from chat histories. Power personalized AI experiences.\n",

View File

@@ -0,0 +1,222 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Zilliz Cloud Pipeline\n",
"\n",
"> [Zilliz Cloud Pipelines](https://docs.zilliz.com/docs/pipelines) transform your unstructured data to a searchable vector collection, chaining up the embedding, ingestion, search, and deletion of your data.\n",
"> \n",
"> Zilliz Cloud Pipelines are available in the Zilliz Cloud Console and via RestFul APIs.\n",
"\n",
"This notebook demonstrates how to prepare Zilliz Cloud Pipelines and use the them via a LangChain Retriever."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prepare Zilliz Cloud Pipelines\n",
"\n",
"To get pipelines ready for LangChain Retriever, you need to create and configure the services in Zilliz Cloud."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**1. Set up Database**\n",
"\n",
"- [Register with Zilliz Cloud](https://docs.zilliz.com/docs/register-with-zilliz-cloud)\n",
"- [Create a cluster](https://docs.zilliz.com/docs/create-cluster)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**2. Create Pipelines**\n",
"\n",
"- [Document ingestion, search, deletion](https://docs.zilliz.com/docs/pipelines-doc-data)\n",
"- [Text ingestion, search, deletion](https://docs.zilliz.com/docs/pipelines-text-data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use LangChain Retriever"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-milvus"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain_milvus import ZillizCloudPipelineRetriever\n",
"\n",
"retriever = ZillizCloudPipelineRetriever(\n",
" pipeline_ids={\n",
" \"ingestion\": \"<YOUR_INGESTION_PIPELINE_ID>\", # skip this line if you do NOT need to add documents\n",
" \"search\": \"<YOUR_SEARCH_PIPELINE_ID>\", # skip this line if you do NOT need to get relevant documents\n",
" \"deletion\": \"<YOUR_DELETION_PIPELINE_ID>\", # skip this line if you do NOT need to delete documents\n",
" },\n",
" token=\"<YOUR_ZILLIZ_CLOUD_API_KEY>\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Add documents\n",
"\n",
"To add documents, you can use the method `add_texts` or `add_doc_url`, which inserts documents from a list of texts or a presigned/public url with corresponding metadata into the store."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- if using a **text ingestion pipeline**, you can use the method `add_texts`, which inserts a batch of texts with the corresponding metadata into the Zilliz Cloud storage.\n",
"\n",
" **Arguments:**\n",
" - `texts`: A list of text strings.\n",
" - `metadata`: A key-value dictionary of metadata will be inserted as preserved fields required by ingestion pipeline. Defaults to None.\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# retriever.add_texts(\n",
"# texts = [\"example text 1e\", \"example text 2\"],\n",
"# metadata={\"<FIELD_NAME>\": \"<FIELD_VALUE>\"} # skip this line if no preserved field is required by the ingestion pipeline\n",
"# )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- if using a **document ingestion pipeline**, you can use the method `add_doc_url`, which inserts a document from url with the corresponding metadata into the Zilliz Cloud storage.\n",
"\n",
" **Arguments:**\n",
" - `doc_url`: A document url.\n",
" - `metadata`: A key-value dictionary of metadata will be inserted as preserved fields required by ingestion pipeline. Defaults to None.\n",
"\n",
"The following example works with a document ingestion pipeline, which requires milvus version as metadata. We will use an [example document](https://publicdataset.zillizcloud.com/milvus_doc.md) describing how to delete entities in Milvus v2.3.x. "
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'token_usage': 1247, 'doc_name': 'milvus_doc.md', 'num_chunks': 6}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.add_doc_url(\n",
" doc_url=\"https://publicdataset.zillizcloud.com/milvus_doc.md\",\n",
" metadata={\"version\": \"v2.3.x\"},\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get relevant documents\n",
"\n",
"To query the retriever, you can use the method `get_relevant_documents`, which returns a list of LangChain Document objects.\n",
"\n",
"**Arguments:**\n",
"- `query`: String to find relevant documents for.\n",
"- `top_k`: The number of results. Defaults to 10.\n",
"- `offset`: The number of records to skip in the search result. Defaults to 0.\n",
"- `output_fields`: The extra fields to present in output.\n",
"- `filter`: The Milvus expression to filter search results. Defaults to \"\".\n",
"- `run_manager`: The callbacks handler to use."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='# Delete Entities\\nThis topic describes how to delete entities in Milvus. \\nMilvus supports deleting entities by primary key or complex boolean expressions. Deleting entities by primary key is much faster and lighter than deleting them by complex boolean expressions. This is because Milvus executes queries first when deleting data by complex boolean expressions. \\nDeleted entities can still be retrieved immediately after the deletion if the consistency level is set lower than Strong.\\nEntities deleted beyond the pre-specified span of time for Time Travel cannot be retrieved again.\\nFrequent deletion operations will impact the system performance. \\nBefore deleting entities by comlpex boolean expressions, make sure the collection has been loaded.\\nDeleting entities by complex boolean expressions is not an atomic operation. Therefore, if it fails halfway through, some data may still be deleted.\\nDeleting entities by complex boolean expressions is supported only when the consistency is set to Bounded. For details, see Consistency.\\\\\\n\\\\\\n# Delete Entities\\n## Prepare boolean expression\\nPrepare the boolean expression that filters the entities to delete. \\nMilvus supports deleting entities by primary key or complex boolean expressions. For more information on expression rules and supported operators, see Boolean Expression Rules.', metadata={'id': 448986959321277978, 'distance': 0.7871403694152832}),\n",
" Document(page_content='# Delete Entities\\n## Prepare boolean expression\\n### Simple boolean expression\\nUse a simple expression to filter data with primary key values of 0 and 1: \\n```python\\nexpr = \"book_id in [0,1]\"\\n```\\\\\\n\\\\\\n# Delete Entities\\n## Prepare boolean expression\\n### Complex boolean expression\\nTo filter entities that meet specific conditions, define complex boolean expressions. \\nFilter entities whose word_count is greater than or equal to 11000: \\n```python\\nexpr = \"word_count >= 11000\"\\n``` \\nFilter entities whose book_name is not Unknown: \\n```python\\nexpr = \"book_name != Unknown\"\\n``` \\nFilter entities whose primary key values are greater than 5 and word_count is smaller than or equal to 9999: \\n```python\\nexpr = \"book_id > 5 && word_count <= 9999\"\\n```', metadata={'id': 448986959321277979, 'distance': 0.7775762677192688}),\n",
" Document(page_content='# Delete Entities\\n## Delete entities\\nDelete the entities with the boolean expression you created. Milvus returns the ID list of the deleted entities.\\n```python\\nfrom pymilvus import Collection\\ncollection = Collection(\"book\") # Get an existing collection.\\ncollection.delete(expr)\\n``` \\nParameter\\tDescription\\nexpr\\tBoolean expression that specifies the entities to delete.\\npartition_name (optional)\\tName of the partition to delete entities from.\\\\\\n\\\\\\n# Upsert Entities\\nThis topic describes how to upsert entities in Milvus. \\nUpserting is a combination of insert and delete operations. In the context of a Milvus vector database, an upsert is a data-level operation that will overwrite an existing entity if a specified field already exists in a collection, and insert a new entity if the specified value doesnt already exist. \\nThe following example upserts 3,000 rows of randomly generated data as the example data. When performing upsert operations, it\\'s important to note that the operation may compromise performance. This is because the operation involves deleting data during execution.', metadata={'id': 448986959321277980, 'distance': 0.680284857749939}),\n",
" Document(page_content='# Upsert Entities\\n## Flush data\\nWhen data is upserted into Milvus it is updated and inserted into segments. Segments have to reach a certain size to be sealed and indexed. Unsealed segments will be searched brute force. In order to avoid this with any remainder data, it is best to call flush(). The flush() call will seal any remaining segments and send them for indexing. It is important to only call this method at the end of an upsert session. Calling it too often will cause fragmented data that will need to be cleaned later on.\\\\\\n\\\\\\n# Upsert Entities\\n## Limits\\nUpdating primary key fields is not supported by upsert().\\nupsert() is not applicable and an error can occur if autoID is set to True for primary key fields.', metadata={'id': 448986959321277983, 'distance': 0.5672488212585449}),\n",
" Document(page_content='# Upsert Entities\\n## Prepare data\\nFirst, prepare the data to upsert. The type of data to upsert must match the schema of the collection, otherwise Milvus will raise an exception. \\nMilvus supports default values for scalar fields, excluding a primary key field. This indicates that some fields can be left empty during data inserts or upserts. For more information, refer to Create a Collection. \\n```python\\n# Generate data to upsert\\n\\nimport random\\nnb = 3000\\ndim = 8\\nvectors = [[random.random() for _ in range(dim)] for _ in range(nb)]\\ndata = [\\n[i for i in range(nb)],\\n[str(i) for i in range(nb)],\\n[i for i in range(10000, 10000+nb)],\\nvectors,\\n[str(\"dy\"*i) for i in range(nb)]\\n]\\n```', metadata={'id': 448986959321277981, 'distance': 0.5107149481773376}),\n",
" Document(page_content='# Upsert Entities\\n## Upsert data\\nUpsert the data to the collection. \\n```python\\nfrom pymilvus import Collection\\ncollection = Collection(\"book\") # Get an existing collection.\\nmr = collection.upsert(data)\\n``` \\nParameter\\tDescription\\ndata\\tData to upsert into Milvus.\\npartition_name (optional)\\tName of the partition to upsert data into.\\ntimeout (optional)\\tAn optional duration of time in seconds to allow for the RPC. If it is set to None, the client keeps waiting until the server responds or error occurs.\\nAfter upserting entities into a collection that has previously been indexed, you do not need to re-index the collection, as Milvus will automatically create an index for the newly upserted data. For more information, refer to Can indexes be created after inserting vectors?', metadata={'id': 448986959321277982, 'distance': 0.4341375529766083})]"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.get_relevant_documents(\n",
" \"Can users delete entities by complex boolean expressions?\"\n",
")"
]
},
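{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sketch, the same call can also pass `top_k` and a Milvus `filter` expression. The `version` field below is usable only because it was supplied as metadata in `add_doc_url` above; the exact expression is illustrative, so adjust the field name and value to whatever your ingestion pipeline preserves."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical filtered query reusing the retriever configured above\n",
"retriever.get_relevant_documents(\n",
" \"Can users delete entities by complex boolean expressions?\",\n",
" top_k=3,\n",
" filter=\"version == 'v2.3.x'\",\n",
")"
]
},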
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "develop",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.18"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,228 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Cassandra\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Cassandra\n",
"\n",
"[Cassandra](https://cassandra.apache.org/) is a NoSQL, row-oriented, highly scalable and highly available database.\n",
"\n",
"`CassandraByteStore` needs the `cassio` package to be installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet cassio"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Store takes the following parameters:\n",
"\n",
"* table: The table where to store the data.\n",
"* session: (Optional) The cassandra driver session. If not provided, the cassio resolved session will be used.\n",
"* keyspace: (Optional) The keyspace of the table. If not provided, the cassio resolved keyspace will be used.\n",
"* setup_mode: (Optional) The mode used to create the Cassandra table (SYNC, ASYNC or OFF). Defaults to SYNC."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## CassandraByteStore\n",
"\n",
"The `CassandraByteStore` is an implementation of `ByteStore` that stores the data in your Cassandra instance.\n",
"The store keys must be strings and will be mapped to the `row_id` column of the Cassandra table.\n",
"The store `bytes` values are mapped to the `body_blob` column of the Cassandra table."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.storage import CassandraByteStore"
]
},
{
"cell_type": "markdown",
"source": [
"### Init from a cassandra driver Session\n",
"\n",
"You need to create a `cassandra.cluster.Session` object, as described in the [Cassandra driver documentation](https://docs.datastax.com/en/developer/python-driver/latest/api/cassandra/cluster/#module-cassandra.cluster). The details vary (e.g. with network settings and authentication), but this might be something like:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from cassandra.cluster import Cluster\n",
"\n",
"cluster = Cluster()\n",
"session = cluster.connect()"
]
},
{
"cell_type": "markdown",
"source": [
"You need to provide the name of an existing keyspace of the Cassandra instance:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"CASSANDRA_KEYSPACE = input(\"CASSANDRA_KEYSPACE = \")"
]
},
{
"cell_type": "markdown",
"source": [
"Creating the store:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[b'v1', b'v2']\n"
]
}
],
"source": [
"store = CassandraByteStore(\n",
" table=\"my_store\",\n",
" session=session,\n",
" keyspace=CASSANDRA_KEYSPACE,\n",
")\n",
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
]
},
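{
"cell_type": "markdown",
"source": [
"The store also supports the rest of the standard `ByteStore` interface. A minimal sketch, reusing the `store` created above, of listing and deleting keys:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# List the keys currently held in the table, then remove them\n",
"print(list(store.yield_keys()))\n",
"store.mdelete([\"k1\", \"k2\"])\n",
"print(store.mget([\"k1\", \"k2\"]))  # [None, None] once the keys are gone"
]
},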
{
"cell_type": "markdown",
"source": [
"### Init from cassio\n",
"\n",
"It's also possible to use cassio to configure the session and keyspace."
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"import cassio\n",
"\n",
"cassio.init(contact_points=\"127.0.0.1\", keyspace=CASSANDRA_KEYSPACE)\n",
"\n",
"store = CassandraByteStore(\n",
" table=\"my_store\",\n",
")\n",
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"### Usage with CacheBackedEmbeddings\n",
"\n",
"You may use the `CassandraByteStore` in conjunction with a [`CacheBackedEmbeddings`](/docs/how_to/caching_embeddings) to cache the result of embeddings computations.\n"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"from langchain.embeddings import CacheBackedEmbeddings\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"cassio.init(contact_points=\"127.0.0.1\", keyspace=CASSANDRA_KEYSPACE)\n",
"\n",
"store = CassandraByteStore(\n",
" table=\"my_store\",\n",
")\n",
"\n",
"embeddings = CacheBackedEmbeddings.from_bytes_store(\n",
" underlying_embeddings=OpenAIEmbeddings(), document_embedding_cache=store\n",
")"
],
"metadata": {
"collapsed": false
}
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -58,7 +58,7 @@
"outputs": [],
"source": [
"document_text = [\"This is a test doc1.\", \"This is a test doc2.\"]\n",
"document_result = embeddings.embed_documents([document_text])"
"document_result = embeddings.embed_documents(document_text)"
]
}
],

View File

@@ -6,17 +6,24 @@
"id": "GDDVue_1cq6d"
},
"source": [
"# NVIDIA AI Foundation Endpoints \n",
"# NVIDIA NIMs \n",
"\n",
"> [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to NVIDIA hosted API endpoints for NVIDIA AI Foundation Models like Mixtral 8x7B, Llama 2, Stable Diffusion, etc. These models, hosted on the [NVIDIA API catalog](https://build.nvidia.com/), are optimized, tested, and hosted on the NVIDIA AI platform, making them fast and easy to evaluate, further customize, and seamlessly run at peak performance on any accelerated stack.\n",
"> \n",
"> With [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/), you can get quick results from a fully accelerated stack running on [NVIDIA DGX Cloud](https://www.nvidia.com/en-us/data-center/dgx-cloud/). Once customized, these models can be deployed anywhere with enterprise-grade security, stability, and support using [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/).\n",
"> \n",
"> These models can be easily accessed via the [`langchain-nvidia-ai-endpoints`](https://pypi.org/project/langchain-nvidia-ai-endpoints/) package, as shown below.\n",
"The `langchain-nvidia-ai-endpoints` package contains LangChain integrations building applications with models on \n",
"NVIDIA NIM inference microservice. NIM supports models across domains like chat, embedding, and re-ranking models \n",
"from the community as well as NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA \n",
"accelerated infrastructure and deployed as a NIM, an easy-to-use, prebuilt containers that deploy anywhere using a single \n",
"command on NVIDIA accelerated infrastructure.\n",
"\n",
"NVIDIA hosted deployments of NIMs are available to test on the [NVIDIA API catalog](https://build.nvidia.com/). After testing, \n",
"NIMs can be exported from NVIDIAs API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud, \n",
"giving enterprises ownership and full control of their IP and AI application.\n",
"\n",
"NIMs are packaged as container images on a per model basis and are distributed as NGC container images through the NVIDIA NGC Catalog. \n",
"At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.\n",
"\n",
"This example goes over how to use LangChain to interact with the supported [NVIDIA Retrieval QA Embedding Model](https://build.nvidia.com/nvidia/embed-qa-4) for [retrieval-augmented generation](https://developer.nvidia.com/blog/build-enterprise-retrieval-augmented-generation-apps-with-nvidia-retrieval-qa-embedding-model/) via the `NVIDIAEmbeddings` class.\n",
"\n",
"For more information on accessing the chat models through this api, check out the [ChatNVIDIA](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/) documentation."
"For more information on accessing the chat models through this API, check out the [ChatNVIDIA](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/) documentation."
]
},
{
@@ -45,9 +52,9 @@
"\n",
"**To get started:**\n",
"\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.\n",
"\n",
"2. Select the `Retrieval` tab, then select your model of choice\n",
"2. Select the `Retrieval` tab, then select your model of choice.\n",
"\n",
"3. Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.\n",
"\n",
@@ -84,16 +91,16 @@
"id": "l185et2kc8pS"
},
"source": [
"We should be able to see an embedding model among that list which can be used in conjunction with an LLM for effective RAG solutions. We can interface with this model pretty easily with the help of the `NVIDIAEmbeddings` model."
"We should be able to see an embedding model among that list which can be used in conjunction with an LLM for effective RAG solutions. We can interface with this model as well as other embedding models supported by NIM through the `NVIDIAEmbeddings` class."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"## Working with NIMs on the NVIDIA API Catalog\n",
"\n",
"When initializing an embedding model you can select a model by passing it, e.g. `ai-embed-qa-4` below, or use the default by not passing any arguments."
"When initializing an embedding model you can select a model by passing it, e.g. `NV-Embed-QA` below, or use the default by not passing any arguments."
]
},
{
@@ -106,7 +113,7 @@
"source": [
"from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings\n",
"\n",
"embedder = NVIDIAEmbeddings(model=\"ai-embed-qa-4\")"
"embedder = NVIDIAEmbeddings(model=\"NV-Embed-QA\")"
]
},
{
@@ -121,7 +128,29 @@
"\n",
"- `embed_documents`: Generate passage embeddings for a list of documents which you would like to search over.\n",
"\n",
"- `aembed_quey`/`embed_documents`: Asynchronous versions of the above."
"- `aembed_query`/`aembed_documents`: Asynchronous versions of the above."
]
},
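{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of these calls, reusing the `embedder` created above (the sample strings are illustrative only):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Embed a single query string and a small batch of passages\n",
"query_vector = embedder.embed_query(\"What is the capital of France?\")\n",
"doc_vectors = embedder.embed_documents(\n",
" [\"Paris is the capital of France.\", \"Berlin is the capital of Germany.\"]\n",
")\n",
"print(len(query_vector), len(doc_vectors))"
]
},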
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Working with self-hosted NVIDIA NIMs\n",
"When ready to deploy, you can self-host models with NVIDIA NIM—which is included with the NVIDIA AI Enterprise software license—and run them anywhere, giving you ownership of your customizations and full control of your intellectual property (IP) and AI applications.\n",
"\n",
"[Learn more about NIMs](https://developer.nvidia.com/blog/nvidia-nim-offers-optimized-inference-microservices-for-deploying-ai-models-at-scale/)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings\n",
"\n",
"# connect to an embedding NIM running at localhost:8080\n",
"embedder = NVIDIAEmbeddings(base_url=\"http://localhost:8080/v1\")"
]
},
{
@@ -382,7 +411,7 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain faiss-cpu tiktoken\n",
"%pip install --upgrade --quiet langchain faiss-cpu tiktoken langchain_community\n",
"\n",
"from operator import itemgetter\n",
"\n",
@@ -408,7 +437,7 @@
"source": [
"vectorstore = FAISS.from_texts(\n",
" [\"harrison worked at kensho\"],\n",
" embedding=NVIDIAEmbeddings(model=\"ai-embed-qa-4\"),\n",
" embedding=NVIDIAEmbeddings(model=\"NV-Embed-QA\"),\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
@@ -478,9 +507,9 @@
"provenance": []
},
"kernelspec": {
"display_name": "Python (venvoss)",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "venvoss"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -492,7 +521,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
"version": "3.10.13"
}
},
"nbformat": 4,

View File

@@ -7,7 +7,27 @@
"source": [
"# Ollama\n",
"\n",
"Let's load the Ollama Embeddings class."
"\"Ollama supports embedding models, making it possible to build retrieval augmented generation (RAG) applications that combine text prompts with existing documents or other data.\" Learn more about the introduction to [Ollama Embeddings](https://ollama.com/blog/embedding-models) in the blog post.\n",
"\n",
"To use Ollama Embeddings, first, install [LangChain Community](https://pypi.org/project/langchain-community/) package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "854d6a2e",
"metadata": {},
"outputs": [],
"source": [
"!pip install langchain-community"
]
},
{
"cell_type": "markdown",
"id": "54fbb4cd",
"metadata": {},
"source": [
"Load the Ollama Embeddings class:"
]
},
{
@@ -17,26 +37,12 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings import OllamaEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "2c66e5da",
"metadata": {},
"outputs": [],
"source": [
"embeddings = OllamaEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "01370375",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings import OllamaEmbeddings\n",
"\n",
"embeddings = (\n",
" OllamaEmbeddings()\n",
") # by default, uses llama2. Run `ollama pull llama2` to pull down the model\n",
"\n",
"text = \"This is a test document.\""
]
},
@@ -105,7 +111,13 @@
"id": "bb61bbeb",
"metadata": {},
"source": [
"Let's load the Ollama Embeddings class with smaller model (e.g. llama:7b). Note: See other supported models [https://ollama.ai/library](https://ollama.ai/library)"
"### Embedding Models\n",
"\n",
"Ollama has embedding models, that are lightweight enough for use in embeddings, with the smallest about the size of 25Mb. See some of the available [embedding models from Ollama](https://ollama.com/blog/embedding-models).\n",
"\n",
"Let's load the Ollama Embeddings class with smaller model (e.g. `mxbai-embed-large`). \n",
"\n",
"> Note: See other supported models [https://ollama.ai/library](https://ollama.ai/library)"
]
},
{
@@ -115,26 +127,8 @@
"metadata": {},
"outputs": [],
"source": [
"embeddings = OllamaEmbeddings(model=\"llama2:7b\")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "14aefb64",
"metadata": {},
"outputs": [],
"source": [
"text = \"This is a test document.\""
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "3c39ed33",
"metadata": {},
"outputs": [],
"source": [
"embeddings = OllamaEmbeddings(model=\"mxbai-embed-large\")\n",
"text = \"This is a test document.\"\n",
"query_result = embeddings.embed_query(text)"
]
},

View File

@@ -6,25 +6,26 @@
"source": [
"# PremAI\n",
"\n",
">[PremAI](https://app.premai.io) is an unified platform that let's you build powerful production-ready GenAI powered applications with least effort, so that you can focus more on user experience and overall growth. In this section we are going to dicuss how we can get access to different embedding model using `PremAIEmbeddings`\n",
"[PremAI](https://premai.io/) is an all-in-one platform that simplifies the creation of robust, production-ready applications powered by Generative AI. By streamlining the development process, PremAI allows you to concentrate on enhancing user experience and driving overall growth for your application. You can quickly start using our platform [here](https://docs.premai.io/quick-start).\n",
"\n",
"## Installation and Setup\n",
"### Installation and setup\n",
"\n",
"We start by installing langchain and premai-sdk. You can type the following command to install:\n",
"We start by installing `langchain` and `premai-sdk`. You can type the following command to install:\n",
"\n",
"```bash\n",
"pip install premai langchain\n",
"```\n",
"\n",
"Before proceeding further, please make sure that you have made an account on Prem and already started a project. If not, then here's how you can start for free:\n",
"Before proceeding further, please make sure that you have made an account on PremAI and already created a project. If not, please refer to the [quick start](https://docs.premai.io/introduction) guide to get started with the PremAI platform. Create your first project and grab your API key."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## PremEmbeddings\n",
"\n",
"1. Sign in to [PremAI](https://app.premai.io/accounts/login/), if you are coming for the first time and create your API key [here](https://app.premai.io/api_keys/).\n",
"\n",
"2. Go to [app.premai.io](https://app.premai.io) and this will take you to the project's dashboard. \n",
"\n",
"3. Create a project and this will generate a project-id (written as ID). This ID will help you to interact with your deployed application. \n",
"\n",
"Congratulations on creating your first deployed application on Prem 🎉 Now we can use langchain to interact with our application. "
"In this section we are going to dicuss how we can get access to different embedding model using `PremEmbeddings` with LangChain. Lets start by importing our modules and setting our API Key. "
]
},
{
@@ -42,7 +43,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Once we imported our required modules, let's setup our client. For now let's assume that our `project_id` is 8. But make sure you use your project-id, otherwise it will throw error.\n"
"Once we imported our required modules, let's setup our client. For now let's assume that our `project_id` is `8`. But make sure you use your project-id, otherwise it will throw error.\n",
"\n",
"> Note: Setting `model_name` argument in mandatory for PremAIEmbeddings unlike [ChatPremAI.](https://python.langchain.com/v0.1/docs/integrations/chat/premai/)"
]
},
{
@@ -72,20 +75,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We have defined our embedding model. We support a lot of embedding models. Here is a table that shows the number of embedding models we support. \n",
"\n",
"\n",
"| Provider | Slug | Context Tokens |\n",
"|-------------|------------------------------------------|----------------|\n",
"| cohere | embed-english-v3.0 | N/A |\n",
"| openai | text-embedding-3-small | 8191 |\n",
"| openai | text-embedding-3-large | 8191 |\n",
"| openai | text-embedding-ada-002 | 8191 |\n",
"| replicate | replicate/all-mpnet-base-v2 | N/A |\n",
"| together | togethercomputer/Llama-2-7B-32K-Instruct | N/A |\n",
"| mistralai | mistral-embed | 4096 |\n",
"\n",
"To change the model, you simply need to copy the `slug` and access your embedding model. Now let's start using our embedding model with a single query followed by multiple queries (which is also called as a document)"
"We support lots of state of the art embedding models. You can view our list of supported LLMs and embedding models [here](https://docs.premai.io/get-started/supported-models). For now let's go for `text-embedding-3-large` model for this example."
]
},
{

View File

@@ -24,7 +24,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet apify-client langchain-openai langchain"
"%pip install --upgrade --quiet apify-client langchain-community langchain-openai langchain"
]
},
{

View File

@@ -24,7 +24,7 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet arxiv"
"%pip install --upgrade --quiet langchain-community arxiv"
]
},
{

View File

@@ -32,7 +32,8 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet boto3 > /dev/null"
"%pip install --upgrade --quiet boto3 > /dev/null\n",
"%pip install --upgrade --quiet langchain-community"
]
},
{

File diff suppressed because one or more lines are too long

View File

@@ -16,7 +16,17 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "a83d2ea9",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f7b3767b",
"metadata": {
"tags": []

View File

@@ -12,6 +12,16 @@
"Get your api key here: https://bearly.ai/dashboard/developers"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8265cf7f",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "markdown",
"id": "3f99f7c9",

View File

@@ -18,6 +18,15 @@
"Then we will need to set some environment variables."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 20,

View File

@@ -11,6 +11,16 @@
"Go to the [Brave Website](https://brave.com/search/api/) to sign up for a free account and get an API key."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2d7e7b3d",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -14,6 +14,16 @@
"Note 2: There are almost certainly other ways to do this, this is just a first pass. If you have better ideas, please open a PR!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "70d493c8",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -40,6 +40,15 @@
"Here, we use the ID of the **Send email** action from the [Gmail](https://github.com/connery-io/gmail) plugin."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -upgrade --quiet langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 11,

View File

@@ -19,7 +19,7 @@
"outputs": [],
"source": [
"# Needed if you would like to display images in the notebook\n",
"%pip install --upgrade --quiet opencv-python scikit-image"
"%pip install --upgrade --quiet opencv-python scikit-image langchain-community"
]
},
{

View File

@@ -13,6 +13,15 @@
"This notebook demonstrates how to use the [DataForSeo API](https://dataforseo.com/apis) to obtain search engine results. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -31,7 +31,8 @@
},
"outputs": [],
"source": [
"pip install dataherald"
"pip install dataherald\n",
"%pip install --upgrade --quiet langchain-community"
]
},
{
@@ -114,4 +115,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}

View File

@@ -17,7 +17,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet duckduckgo-search"
"%pip install --upgrade --quiet duckduckgo-search langchain-community"
]
},
{

View File

@@ -46,7 +46,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain e2b"
"%pip install --upgrade --quiet langchain e2b langchain-community"
]
},
{

View File

@@ -42,6 +42,15 @@
"Once we have a key we'll want to set it as the environment variable ``EDENAI_API_KEY`` or you can pass the key in directly via the edenai_api_key named parameter when initiating the EdenAI tools, e.g. ``EdenAiTextModerationTool(edenai_api_key=\"...\")``"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 1,

View File

@@ -25,7 +25,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet elevenlabs"
"%pip install --upgrade --quiet elevenlabs langchain-community"
]
},
{

View File

@@ -50,10 +50,10 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-exa\n",
"%pip install --upgrade --quiet langchain-exa \n",
"\n",
"# and some deps for this notebook\n",
"%pip install --upgrade --quiet langchain langchain-openai"
"%pip install --upgrade --quiet langchain langchain-openai langchain-community"
]
},
{

View File

@@ -27,7 +27,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-cloud-text-to-speech"
"%pip install --upgrade --quiet google-cloud-text-to-speech langchain-community"
]
},
{

View File

@@ -30,7 +30,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-api-python-client google-auth-httplib2 google-auth-oauthlib"
"%pip install --upgrade --quiet google-api-python-client google-auth-httplib2 google-auth-oauthlib langchain-community"
]
},
{

View File

@@ -32,7 +32,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-search-results"
"%pip install --upgrade --quiet google-search-results langchain-community"
]
},
{

View File

@@ -59,7 +59,7 @@
}
],
"source": [
"%pip install --upgrade --quiet google-search-results"
"%pip install --upgrade --quiet google-search-results langchain-community"
]
},
{

View File

@@ -39,7 +39,7 @@
}
],
"source": [
"%pip install --upgrade --quiet requests"
"%pip install --upgrade --quiet requests langchain-community"
]
},
{

View File

@@ -17,7 +17,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet googlemaps"
"%pip install --upgrade --quiet googlemaps langchain-community"
]
},
{

View File

@@ -28,7 +28,7 @@
}
],
"source": [
"%pip install --upgrade --quiet google-search-results"
"%pip install --upgrade --quiet google-search-results langchain-community"
]
},
{

View File

@@ -14,6 +14,16 @@
"Then we will need to set some environment variables."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a2998f9c",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 1,

View File

@@ -10,6 +10,16 @@
"This notebook goes over how to use the `Google Serper` component to search the web. First you need to sign up for a free account at [serper.dev](https://serper.dev) and get your api key."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ac0b9ce6",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 11,

View File

@@ -40,7 +40,7 @@
}
],
"source": [
"%pip install --upgrade --quiet google-search-results"
"%pip install --upgrade --quiet google-search-results langchain_community"
]
},
{

View File

@@ -21,7 +21,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet gradio_tools"
"%pip install --upgrade --quiet gradio_tools langchain-community"
]
},
{

View File

@@ -30,6 +30,19 @@
"pip install httpx gql > /dev/null"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -22,6 +22,16 @@
"%pip install --upgrade --quiet transformers huggingface_hub > /dev/null"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e5b9279f",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 1,

View File

@@ -10,6 +10,15 @@
"when it is confused."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 1,

View File

@@ -44,6 +44,16 @@
"https://maker.ifttt.com/use/YOUR_IFTTT_KEY. Grab the YOUR_IFTTT_KEY value.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d356bc92",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 1,

View File

@@ -5,7 +5,7 @@
"id": "f4c03f40-1328-412d-8a48-1db0cd481b77",
"metadata": {},
"source": [
"# Quickstart\n",
"# Passio NutritionAI\n",
"\n",
"To best understand how NutritionAI can give your agents super food-nutrition powers, let's build an agent that can find that information via Passio NutritionAI.\n",
"\n",

View File

@@ -14,7 +14,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install duckdb langchain-community"
"! pip install duckdb langchain langchain-community langchain-openai"
]
},
{
@@ -86,7 +86,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -100,9 +100,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.2"
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -0,0 +1,443 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "bf48a5c8c3d125e1",
"metadata": {
"collapsed": false
},
"source": [
"# ManticoreSearch VectorStore\n",
"\n",
"[ManticoreSearch](https://manticoresearch.com/) is an open-source search engine that offers fast, scalable, and user-friendly capabilities. Originating as a fork of [Sphinx Search](http://sphinxsearch.com/), it has evolved to incorporate modern search engine features and improvements. ManticoreSearch distinguishes itself with its robust performance and ease of integration into various applications.\n",
"\n",
"ManticoreSearch has recently introduced [vector search capabilities](https://manual.manticoresearch.com/dev/Searching/KNN), starting with search engine version 6.2 and only with [manticore-columnar-lib](https://github.com/manticoresoftware/columnar) package installed. This feature is a considerable advancement, allowing for the execution of searches based on vector similarity.\n",
"\n",
"As of now, the vector search functionality is only accessible in the developmental (dev) versions of the search engine. Consequently, it is imperative to employ a developmental [manticoresearch-dev](https://pypi.org/project/manticoresearch-dev/) Python client for utilizing this feature effectively."
]
},
{
"cell_type": "markdown",
"id": "d5050b607ca217ad",
"metadata": {
"collapsed": false
},
"source": [
"## Setting up environments"
]
},
{
"cell_type": "markdown",
"id": "b26c5ab7f89a61fc",
"metadata": {
"collapsed": false
},
"source": [
"Starting Docker-container with ManticoreSearch and installing manticore-columnar-lib package (optional)"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "initial_id",
"metadata": {
"ExecuteTime": {
"end_time": "2024-03-03T11:28:37.177840Z",
"start_time": "2024-03-03T11:28:26.863511Z"
},
"collapsed": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Get:1 http://repo.manticoresearch.com/repository/manticoresearch_jammy_dev jammy InRelease [3525 kB]\r\n",
"Get:2 http://archive.ubuntu.com/ubuntu jammy InRelease [270 kB] \r\n",
"Get:3 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB] \r\n",
"Get:4 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB] \r\n",
"Get:5 http://security.ubuntu.com/ubuntu jammy-security/universe amd64 Packages [1074 kB]\r\n",
"Get:6 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [109 kB] \r\n",
"Get:7 http://archive.ubuntu.com/ubuntu jammy/universe amd64 Packages [17.5 MB] \r\n",
"Get:8 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages [1517 kB]\r\n",
"Get:9 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages [1889 kB]\r\n",
"Get:10 http://security.ubuntu.com/ubuntu jammy-security/multiverse amd64 Packages [44.6 kB]\r\n",
"Get:11 http://archive.ubuntu.com/ubuntu jammy/restricted amd64 Packages [164 kB]\r\n",
"Get:12 http://archive.ubuntu.com/ubuntu jammy/multiverse amd64 Packages [266 kB]\r\n",
"Get:13 http://archive.ubuntu.com/ubuntu jammy/main amd64 Packages [1792 kB] \r\n",
"Get:14 http://archive.ubuntu.com/ubuntu jammy-updates/multiverse amd64 Packages [50.4 kB]\r\n",
"Get:15 http://archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 Packages [1927 kB]\r\n",
"Get:16 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [1346 kB]\r\n",
"Get:17 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [1796 kB]\r\n",
"Get:18 http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 Packages [28.1 kB]\r\n",
"Get:19 http://archive.ubuntu.com/ubuntu jammy-backports/main amd64 Packages [50.4 kB]\r\n",
"Get:20 http://repo.manticoresearch.com/repository/manticoresearch_jammy_dev jammy/main amd64 Packages [5020 kB]\r\n",
"Fetched 38.6 MB in 7s (5847 kB/s) \r\n",
"Reading package lists... Done\r\n",
"Reading package lists... Done\r\n",
"Building dependency tree... Done\r\n",
"Reading state information... Done\r\n",
"The following NEW packages will be installed:\r\n",
" manticore-columnar-lib\r\n",
"0 upgraded, 1 newly installed, 0 to remove and 21 not upgraded.\r\n",
"Need to get 1990 kB of archives.\r\n",
"After this operation, 10.0 MB of additional disk space will be used.\r\n",
"Get:1 http://repo.manticoresearch.com/repository/manticoresearch_jammy_dev jammy/main amd64 manticore-columnar-lib amd64 2.2.5-240217-a5342a1 [1990 kB]\r\n",
"Fetched 1990 kB in 1s (1505 kB/s) \r\n",
"debconf: delaying package configuration, since apt-utils is not installed\r\n",
"Selecting previously unselected package manticore-columnar-lib.\r\n",
"(Reading database ... 12260 files and directories currently installed.)\r\n",
"Preparing to unpack .../manticore-columnar-lib_2.2.5-240217-a5342a1_amd64.deb ...\r\n",
"Unpacking manticore-columnar-lib (2.2.5-240217-a5342a1) ...\r\n",
"Setting up manticore-columnar-lib (2.2.5-240217-a5342a1) ...\r\n",
"a546aec22291\r\n"
]
}
],
"source": [
"import time\n",
"\n",
"# Start container\n",
"containers = !docker ps --filter \"name=langchain-manticoresearch-server\" -q\n",
"if len(containers) == 0:\n",
" !docker run -d -p 9308:9308 --name langchain-manticoresearch-server manticoresearch/manticore:dev\n",
" time.sleep(20) # Wait for the container to start up\n",
"\n",
"# Get ID of container\n",
"container_id = containers[0]\n",
"\n",
"# Install manticore-columnar-lib package as root user\n",
"!docker exec -it --user 0 {container_id} apt-get update\n",
"!docker exec -it --user 0 {container_id} apt-get install -y manticore-columnar-lib\n",
"\n",
"# Restart container\n",
"!docker restart {container_id}"
]
},
{
"cell_type": "markdown",
"id": "42284e4c8fd0aeb4",
"metadata": {
"collapsed": false
},
"source": [
"Installing ManticoreSearch python client"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "bc7bd70a63cc8d90",
"metadata": {
"ExecuteTime": {
"end_time": "2024-03-03T11:28:38.544198Z",
"start_time": "2024-03-03T11:28:37.178755Z"
},
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\r\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\r\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\r\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet manticoresearch-dev"
]
},
{
"cell_type": "markdown",
"id": "f90b4793255edcb1",
"metadata": {
"collapsed": false
},
"source": [
"We want to use OpenAIEmbeddings so we have to get the OpenAI API Key."
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "a303c63186fd8abd",
"metadata": {
"ExecuteTime": {
"end_time": "2024-03-03T11:28:38.546877Z",
"start_time": "2024-03-03T11:28:38.544907Z"
},
"collapsed": false
},
"outputs": [],
"source": [
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain_community.embeddings import GPT4AllEmbeddings\n",
"from langchain_community.vectorstores import ManticoreSearch, ManticoreSearchSettings"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "46ad30f36815ed15",
"metadata": {
"ExecuteTime": {
"end_time": "2024-03-03T11:28:38.991083Z",
"start_time": "2024-03-03T11:28:38.547705Z"
},
"collapsed": false
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Created a chunk of size 338, which is longer than the specified 100\n",
"Created a chunk of size 508, which is longer than the specified 100\n",
"Created a chunk of size 277, which is longer than the specified 100\n",
"Created a chunk of size 777, which is longer than the specified 100\n",
"Created a chunk of size 247, which is longer than the specified 100\n",
"Created a chunk of size 228, which is longer than the specified 100\n",
"Created a chunk of size 557, which is longer than the specified 100\n",
"Created a chunk of size 587, which is longer than the specified 100\n",
"Created a chunk of size 173, which is longer than the specified 100\n",
"Created a chunk of size 622, which is longer than the specified 100\n",
"Created a chunk of size 775, which is longer than the specified 100\n",
"Created a chunk of size 292, which is longer than the specified 100\n",
"Created a chunk of size 456, which is longer than the specified 100\n",
"Created a chunk of size 291, which is longer than the specified 100\n",
"Created a chunk of size 367, which is longer than the specified 100\n",
"Created a chunk of size 604, which is longer than the specified 100\n",
"Created a chunk of size 618, which is longer than the specified 100\n",
"Created a chunk of size 340, which is longer than the specified 100\n",
"Created a chunk of size 395, which is longer than the specified 100\n",
"Created a chunk of size 321, which is longer than the specified 100\n",
"Created a chunk of size 453, which is longer than the specified 100\n",
"Created a chunk of size 354, which is longer than the specified 100\n",
"Created a chunk of size 481, which is longer than the specified 100\n",
"Created a chunk of size 233, which is longer than the specified 100\n",
"Created a chunk of size 270, which is longer than the specified 100\n",
"Created a chunk of size 305, which is longer than the specified 100\n",
"Created a chunk of size 520, which is longer than the specified 100\n",
"Created a chunk of size 289, which is longer than the specified 100\n",
"Created a chunk of size 280, which is longer than the specified 100\n",
"Created a chunk of size 417, which is longer than the specified 100\n",
"Created a chunk of size 495, which is longer than the specified 100\n",
"Created a chunk of size 602, which is longer than the specified 100\n",
"Created a chunk of size 1004, which is longer than the specified 100\n",
"Created a chunk of size 272, which is longer than the specified 100\n",
"Created a chunk of size 1203, which is longer than the specified 100\n",
"Created a chunk of size 844, which is longer than the specified 100\n",
"Created a chunk of size 135, which is longer than the specified 100\n",
"Created a chunk of size 306, which is longer than the specified 100\n",
"Created a chunk of size 407, which is longer than the specified 100\n",
"Created a chunk of size 910, which is longer than the specified 100\n",
"Created a chunk of size 398, which is longer than the specified 100\n",
"Created a chunk of size 674, which is longer than the specified 100\n",
"Created a chunk of size 356, which is longer than the specified 100\n",
"Created a chunk of size 474, which is longer than the specified 100\n",
"Created a chunk of size 814, which is longer than the specified 100\n",
"Created a chunk of size 530, which is longer than the specified 100\n",
"Created a chunk of size 469, which is longer than the specified 100\n",
"Created a chunk of size 489, which is longer than the specified 100\n",
"Created a chunk of size 433, which is longer than the specified 100\n",
"Created a chunk of size 603, which is longer than the specified 100\n",
"Created a chunk of size 380, which is longer than the specified 100\n",
"Created a chunk of size 354, which is longer than the specified 100\n",
"Created a chunk of size 391, which is longer than the specified 100\n",
"Created a chunk of size 772, which is longer than the specified 100\n",
"Created a chunk of size 267, which is longer than the specified 100\n",
"Created a chunk of size 571, which is longer than the specified 100\n",
"Created a chunk of size 594, which is longer than the specified 100\n",
"Created a chunk of size 458, which is longer than the specified 100\n",
"Created a chunk of size 386, which is longer than the specified 100\n",
"Created a chunk of size 417, which is longer than the specified 100\n",
"Created a chunk of size 370, which is longer than the specified 100\n",
"Created a chunk of size 402, which is longer than the specified 100\n",
"Created a chunk of size 306, which is longer than the specified 100\n",
"Created a chunk of size 173, which is longer than the specified 100\n",
"Created a chunk of size 628, which is longer than the specified 100\n",
"Created a chunk of size 321, which is longer than the specified 100\n",
"Created a chunk of size 294, which is longer than the specified 100\n",
"Created a chunk of size 689, which is longer than the specified 100\n",
"Created a chunk of size 641, which is longer than the specified 100\n",
"Created a chunk of size 473, which is longer than the specified 100\n",
"Created a chunk of size 414, which is longer than the specified 100\n",
"Created a chunk of size 585, which is longer than the specified 100\n",
"Created a chunk of size 764, which is longer than the specified 100\n",
"Created a chunk of size 502, which is longer than the specified 100\n",
"Created a chunk of size 640, which is longer than the specified 100\n",
"Created a chunk of size 507, which is longer than the specified 100\n",
"Created a chunk of size 564, which is longer than the specified 100\n",
"Created a chunk of size 707, which is longer than the specified 100\n",
"Created a chunk of size 380, which is longer than the specified 100\n",
"Created a chunk of size 615, which is longer than the specified 100\n",
"Created a chunk of size 733, which is longer than the specified 100\n",
"Created a chunk of size 277, which is longer than the specified 100\n",
"Created a chunk of size 497, which is longer than the specified 100\n",
"Created a chunk of size 625, which is longer than the specified 100\n",
"Created a chunk of size 468, which is longer than the specified 100\n",
"Created a chunk of size 289, which is longer than the specified 100\n",
"Created a chunk of size 576, which is longer than the specified 100\n",
"Created a chunk of size 297, which is longer than the specified 100\n",
"Created a chunk of size 534, which is longer than the specified 100\n",
"Created a chunk of size 427, which is longer than the specified 100\n",
"Created a chunk of size 412, which is longer than the specified 100\n",
"Created a chunk of size 381, which is longer than the specified 100\n",
"Created a chunk of size 417, which is longer than the specified 100\n",
"Created a chunk of size 244, which is longer than the specified 100\n",
"Created a chunk of size 307, which is longer than the specified 100\n",
"Created a chunk of size 528, which is longer than the specified 100\n",
"Created a chunk of size 565, which is longer than the specified 100\n",
"Created a chunk of size 487, which is longer than the specified 100\n",
"Created a chunk of size 470, which is longer than the specified 100\n",
"Created a chunk of size 332, which is longer than the specified 100\n",
"Created a chunk of size 552, which is longer than the specified 100\n",
"Created a chunk of size 427, which is longer than the specified 100\n",
"Created a chunk of size 596, which is longer than the specified 100\n",
"Created a chunk of size 192, which is longer than the specified 100\n",
"Created a chunk of size 403, which is longer than the specified 100\n",
"Created a chunk of size 255, which is longer than the specified 100\n",
"Created a chunk of size 1025, which is longer than the specified 100\n",
"Created a chunk of size 438, which is longer than the specified 100\n",
"Created a chunk of size 900, which is longer than the specified 100\n",
"Created a chunk of size 250, which is longer than the specified 100\n",
"Created a chunk of size 614, which is longer than the specified 100\n",
"Created a chunk of size 635, which is longer than the specified 100\n",
"Created a chunk of size 443, which is longer than the specified 100\n",
"Created a chunk of size 478, which is longer than the specified 100\n",
"Created a chunk of size 473, which is longer than the specified 100\n",
"Created a chunk of size 302, which is longer than the specified 100\n",
"Created a chunk of size 549, which is longer than the specified 100\n",
"Created a chunk of size 644, which is longer than the specified 100\n",
"Created a chunk of size 402, which is longer than the specified 100\n",
"Created a chunk of size 489, which is longer than the specified 100\n",
"Created a chunk of size 551, which is longer than the specified 100\n",
"Created a chunk of size 527, which is longer than the specified 100\n",
"Created a chunk of size 563, which is longer than the specified 100\n",
"Created a chunk of size 472, which is longer than the specified 100\n",
"Created a chunk of size 511, which is longer than the specified 100\n",
"Created a chunk of size 419, which is longer than the specified 100\n",
"Created a chunk of size 245, which is longer than the specified 100\n",
"Created a chunk of size 371, which is longer than the specified 100\n",
"Created a chunk of size 484, which is longer than the specified 100\n",
"Created a chunk of size 306, which is longer than the specified 100\n",
"Created a chunk of size 190, which is longer than the specified 100\n",
"Created a chunk of size 499, which is longer than the specified 100\n",
"Created a chunk of size 480, which is longer than the specified 100\n",
"Created a chunk of size 634, which is longer than the specified 100\n",
"Created a chunk of size 611, which is longer than the specified 100\n",
"Created a chunk of size 356, which is longer than the specified 100\n",
"Created a chunk of size 478, which is longer than the specified 100\n",
"Created a chunk of size 369, which is longer than the specified 100\n",
"Created a chunk of size 526, which is longer than the specified 100\n",
"Created a chunk of size 311, which is longer than the specified 100\n",
"Created a chunk of size 181, which is longer than the specified 100\n",
"Created a chunk of size 637, which is longer than the specified 100\n",
"Created a chunk of size 219, which is longer than the specified 100\n",
"Created a chunk of size 305, which is longer than the specified 100\n",
"Created a chunk of size 409, which is longer than the specified 100\n",
"Created a chunk of size 235, which is longer than the specified 100\n",
"Created a chunk of size 302, which is longer than the specified 100\n",
"Created a chunk of size 236, which is longer than the specified 100\n",
"Created a chunk of size 209, which is longer than the specified 100\n",
"Created a chunk of size 366, which is longer than the specified 100\n",
"Created a chunk of size 277, which is longer than the specified 100\n",
"Created a chunk of size 591, which is longer than the specified 100\n",
"Created a chunk of size 232, which is longer than the specified 100\n",
"Created a chunk of size 543, which is longer than the specified 100\n",
"Created a chunk of size 199, which is longer than the specified 100\n",
"Created a chunk of size 214, which is longer than the specified 100\n",
"Created a chunk of size 263, which is longer than the specified 100\n",
"Created a chunk of size 375, which is longer than the specified 100\n",
"Created a chunk of size 221, which is longer than the specified 100\n",
"Created a chunk of size 261, which is longer than the specified 100\n",
"Created a chunk of size 203, which is longer than the specified 100\n",
"Created a chunk of size 758, which is longer than the specified 100\n",
"Created a chunk of size 271, which is longer than the specified 100\n",
"Created a chunk of size 323, which is longer than the specified 100\n",
"Created a chunk of size 275, which is longer than the specified 100\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"bert_load_from_file: gguf version = 2\n",
"bert_load_from_file: gguf alignment = 32\n",
"bert_load_from_file: gguf data offset = 695552\n",
"bert_load_from_file: model name = BERT\n",
"bert_load_from_file: model architecture = bert\n",
"bert_load_from_file: model file type = 1\n",
"bert_load_from_file: bert tokenizer vocab = 30522\n"
]
}
],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"\n",
"loader = TextLoader(\"../../modules/paul_graham_essay.txt\")\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = GPT4AllEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "a06370cae96cbaef",
"metadata": {
"ExecuteTime": {
"end_time": "2024-03-03T11:28:42.366398Z",
"start_time": "2024-03-03T11:28:38.991827Z"
},
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[Document(page_content='Computer Science is an uneasy alliance between two halves, theory and systems. The theory people prove things, and the systems people build things. I wanted to build things. I had plenty of respect for theory — indeed, a sneaking suspicion that it was the more admirable of the two halves — but building things seemed so much more exciting.', metadata={'some': 'metadata'}), Document(page_content=\"I applied to 3 grad schools: MIT and Yale, which were renowned for AI at the time, and Harvard, which I'd visited because Rich Draves went there, and was also home to Bill Woods, who'd invented the type of parser I used in my SHRDLU clone. Only Harvard accepted me, so that was where I went.\", metadata={'some': 'metadata'}), Document(page_content='For my undergraduate thesis, I reverse-engineered SHRDLU. My God did I love working on that program. It was a pleasing bit of code, but what made it even more exciting was my belief — hard to imagine now, but not unique in 1985 — that it was already climbing the lower slopes of intelligence.', metadata={'some': 'metadata'}), Document(page_content=\"The problem with systems work, though, was that it didn't last. Any program you wrote today, no matter how good, would be obsolete in a couple decades at best. People might mention your software in footnotes, but no one would actually use it. And indeed, it would seem very feeble work. Only people with a sense of the history of the field would even realize that, in its time, it had been good.\", metadata={'some': 'metadata'})]\n"
]
}
],
"source": [
"for d in docs:\n",
" d.metadata = {\"some\": \"metadata\"}\n",
"settings = ManticoreSearchSettings(table=\"manticoresearch_vector_search_example\")\n",
"docsearch = ManticoreSearch.from_documents(docs, embeddings, config=settings)\n",
"\n",
"query = \"Robert Morris is\"\n",
"docs = docsearch.similarity_search(query)\n",
"print(docs)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
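
For quick reference, the end-to-end flow from the ManticoreSearch notebook above condenses to the sketch below. This is not part of the committed notebook: it assumes a ManticoreSearch dev container is already running on localhost:9308 (as set up by the Docker cell), that `manticoresearch-dev` and `gpt4all` are installed, and it reuses the class and parameter names shown above (`ManticoreSearchSettings`, `config=`).

```python
# Hedged sketch of the notebook flow above; assumes a ManticoreSearch dev
# server is already reachable on localhost:9308 and the text file exists.
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings import GPT4AllEmbeddings
from langchain_community.vectorstores import ManticoreSearch, ManticoreSearchSettings

# Load and split the essay into small chunks, as in the notebook.
docs = CharacterTextSplitter(chunk_size=100, chunk_overlap=0).split_documents(
    TextLoader("../../modules/paul_graham_essay.txt").load()
)
embeddings = GPT4AllEmbeddings()

# Index the chunks and run a similarity search.
settings = ManticoreSearchSettings(table="manticoresearch_vector_search_example")
docsearch = ManticoreSearch.from_documents(docs, embeddings, config=settings)
print(docsearch.similarity_search("Robert Morris is"))
```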

View File

@@ -11,9 +11,7 @@
"\n",
"This notebook shows how to use functionality related to the Milvus vector database.\n",
"\n",
"You'll need to install `langchain-community` with `pip install -qU langchain-community` to use this integration\n",
"\n",
"To run, you should have a [Milvus instance up and running](https://milvus.io/docs/install_standalone-docker.md)."
"You'll need to install `langchain-milvus` with `pip install -qU langchain-milvus` to use this integration\n"
]
},
{
@@ -25,7 +23,15 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet pymilvus"
"%pip install --upgrade --quiet langchain_milvus"
]
},
{
"cell_type": "markdown",
"id": "633addc3",
"metadata": {},
"source": [
"The latest version of pymilvus comes with a local vector database Milvus Lite, good for prototyping. If you have large scale of data such as more than a million docs, we recommend setting up a more performant Milvus server on [docker or kubernetes](https://milvus.io/docs/install_standalone-docker.md#Start-Milvus)."
]
},
{
@@ -43,15 +49,7 @@
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key:········\n"
]
}
],
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
@@ -69,7 +67,7 @@
"outputs": [],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores import Milvus\n",
"from langchain_milvus.vectorstores import Milvus\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter"
]
@@ -83,8 +81,6 @@
},
"outputs": [],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"\n",
"loader = TextLoader(\"../../how_to/state_of_the_union.txt\")\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
@@ -102,10 +98,14 @@
},
"outputs": [],
"source": [
"# The easiest way is to use Milvus Lite where everything is stored in a local file.\n",
"# If you have a Milvus server you can use the server URI such as \"http://localhost:19530\".\n",
"URI = \"./milvus_demo.db\"\n",
"\n",
"vector_db = Milvus.from_documents(\n",
" docs,\n",
" embeddings,\n",
" connection_args={\"host\": \"127.0.0.1\", \"port\": \"19530\"},\n",
" connection_args={\"uri\": URI},\n",
")"
]
},
@@ -170,7 +170,7 @@
" docs,\n",
" embeddings,\n",
" collection_name=\"collection_1\",\n",
" connection_args={\"host\": \"127.0.0.1\", \"port\": \"19530\"},\n",
" connection_args={\"uri\": URI},\n",
")"
]
},
@@ -191,7 +191,7 @@
"source": [
"vector_db = Milvus(\n",
" embeddings,\n",
" connection_args={\"host\": \"127.0.0.1\", \"port\": \"19530\"},\n",
" connection_args={\"uri\": URI},\n",
" collection_name=\"collection_1\",\n",
")"
]
@@ -208,7 +208,6 @@
"cell_type": "markdown",
"id": "7fb27b941602401d91542211134fc71a",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
@@ -218,7 +217,8 @@
"\n",
"When building a retrieval app, you often have to build it with multiple users in mind. This means that you may be storing data not just for one user, but for many different users, and they should not be able to see eachothers data.\n",
"\n",
"Milvus recommends using [partition_key](https://milvus.io/docs/multi_tenancy.md#Partition-key-based-multi-tenancy) to implement multi-tenancy, here is an example."
"Milvus recommends using [partition_key](https://milvus.io/docs/multi_tenancy.md#Partition-key-based-multi-tenancy) to implement multi-tenancy, here is an example.\n",
"> The feature of Partition key is now not available in Milvus Lite, if you want to use it, you need to start Milvus server from [docker or kubernetes](https://milvus.io/docs/install_standalone-docker.md#Start-Milvus)."
]
},
{
@@ -226,7 +226,6 @@
"execution_count": 2,
"id": "acae54e37e7d407bbb7b55eff062a284",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
@@ -242,7 +241,7 @@
"vectorstore = Milvus.from_documents(\n",
" docs,\n",
" embeddings,\n",
" connection_args={\"host\": \"127.0.0.1\", \"port\": \"19530\"},\n",
" connection_args={\"uri\": URI},\n",
" drop_old=True,\n",
" partition_key_field=\"namespace\", # Use the \"namespace\" field as the partition key\n",
")"
@@ -252,7 +251,6 @@
"cell_type": "markdown",
"id": "9a63283cbaf04dbcab1f6479b197f3a8",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
@@ -274,7 +272,6 @@
"execution_count": 3,
"id": "8dd0d8092fe74a7c96281538738b07e2",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
@@ -303,7 +300,6 @@
"execution_count": 4,
"id": "72eea5119410473aa328ad9291626812",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
@@ -332,7 +328,7 @@
"id": "89756e9e",
"metadata": {},
"source": [
"**To delete or upsert (update/insert) one or more entities:**"
"### To delete or upsert (update/insert) one or more entities"
]
},
{
@@ -353,7 +349,7 @@
"vector_db = Milvus.from_documents(\n",
" docs,\n",
" embeddings,\n",
" connection_args={\"host\": \"127.0.0.1\", \"port\": \"19530\"},\n",
" connection_args={\"uri\": URI},\n",
")\n",
"\n",
"# Search pks (primary keys) using expression\n",
@@ -389,9 +385,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
}
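
To make the connection changes in this Milvus diff easier to apply, here is a short sketch (not part of the diff) combining them. It assumes `langchain-milvus`, `langchain-openai`, and `langchain-text-splitters` are installed and reuses the parameter names from the notebook (`uri`, `drop_old`, `partition_key_field`); the tenant value and the `http://localhost:19530` server URI are illustrative, and, per the note above, the partition-key variant requires a full Milvus server rather than Milvus Lite.

```python
from langchain_community.document_loaders import TextLoader
from langchain_milvus.vectorstores import Milvus
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter

# Load and split the same document the notebook uses.
docs = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(
    TextLoader("../../how_to/state_of_the_union.txt").load()
)
embeddings = OpenAIEmbeddings()

# Milvus Lite: everything is stored in a single local file.
vector_db = Milvus.from_documents(
    docs, embeddings, connection_args={"uri": "./milvus_demo.db"}
)

# Partition-key multi-tenancy needs a full Milvus server (not Milvus Lite) and
# every document must carry the partition field in its metadata.
for doc in docs:
    doc.metadata["namespace"] = "tenant-1"  # illustrative tenant value

vectorstore = Milvus.from_documents(
    docs,
    embeddings,
    connection_args={"uri": "http://localhost:19530"},  # illustrative server URI
    drop_old=True,
    partition_key_field="namespace",  # use the "namespace" field as the partition key
)
```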

View File

@@ -324,7 +324,7 @@
"vector_store = Yellowbrick.from_documents(\n",
" documents=split_docs,\n",
" embedding=embeddings,\n",
" connection_info=yellowbrick_connection_string,\n",
" connection_string=yellowbrick_connection_string,\n",
" table=embedding_table,\n",
")\n",
"\n",

File diff suppressed because one or more lines are too long

View File

@@ -289,7 +289,7 @@
"source": [
"## Prompt Templates\n",
"\n",
"Right now we are passing a list of messages directly into the language model. Where does this list of messages come from? Usually it constructed from a combination of user input and application logic. This application logic usually takes the raw user input and transforms it into a list of messages ready to pass to the language model. Common transformations include adding a system message or formatting a template with the user input.\n",
"Right now we are passing a list of messages directly into the language model. Where does this list of messages come from? Usually, it is constructed from a combination of user input and application logic. This application logic usually takes the raw user input and transforms it into a list of messages ready to pass to the language model. Common transformations include adding a system message or formatting a template with the user input.\n",
"\n",
"PromptTemplates are a concept in LangChain designed to assist with this transformation. They take in raw user input and return data (a prompt) that is ready to pass into a language model. \n",
"\n",

View File

@@ -374,7 +374,7 @@
"outputs": [],
"source": [
"# Note we can also get this from the prompt hub, as noted above\n",
"reduce_prompt = hub.pull(\"rlm/map-prompt\")"
"reduce_prompt = hub.pull(\"rlm/reduce-prompt\")"
]
},
{

View File

@@ -83,6 +83,7 @@ const config = {
/** @type {import('@docusaurus/preset-classic').Options} */
({
docs: {
editUrl: "https://github.com/langchain-ai/langchain/edit/master/docs/",
sidebarPath: require.resolve("./sidebars.js"),
remarkPlugins: [
[require("@docusaurus/remark-plugin-npm2yarn"), { sync: true }],
@@ -291,6 +292,10 @@ const config = {
{
title: "GitHub",
items: [
{
label: "Organization",
href: "https://github.com/langchain-ai",
},
{
label: "Python",
href: "https://github.com/langchain-ai/langchain",

View File

@@ -7,7 +7,7 @@ import os
import re
from dataclasses import dataclass
from pathlib import Path
from typing import Any, Dict, List, Set
from typing import Any, Dict
from pydantic.v1 import BaseModel, root_validator
@@ -17,6 +17,7 @@ _ROOT_DIR = Path(os.path.abspath(__file__)).parents[2]
DOCS_DIR = _ROOT_DIR / "docs" / "docs"
CODE_DIR = _ROOT_DIR / "libs"
TEMPLATES_DIR = _ROOT_DIR / "templates"
COOKBOOKS_DIR = _ROOT_DIR / "cookbook"
ARXIV_ID_PATTERN = r"https://arxiv\.org/(abs|pdf)/(\d+\.\d+)"
LANGCHAIN_PYTHON_URL = "python.langchain.com"
@@ -29,6 +30,7 @@ class ArxivPaper:
referencing_doc2url: dict[str, str]
referencing_api_ref2url: dict[str, str]
referencing_template2url: dict[str, str]
referencing_cookbook2url: dict[str, str]
title: str
authors: list[str]
abstract: str
@@ -50,7 +52,6 @@ def search_documentation_for_arxiv_references(docs_dir: Path) -> dict[str, set[s
arxiv_url_pattern = re.compile(ARXIV_ID_PATTERN)
exclude_strings = {"file_path", "metadata", "link", "loader", "PyPDFLoader"}
# loop all the files (ipynb, mdx, md) in the docs folder
files = (
p.resolve()
for p in Path(docs_dir).glob("**/*")
@@ -76,39 +77,6 @@ def search_documentation_for_arxiv_references(docs_dir: Path) -> dict[str, set[s
return arxiv_id2file_names
def convert_module_name_and_members_to_urls(
arxiv_id2module_name_and_members: dict[str, set[str]],
) -> dict[str, set[str]]:
arxiv_id2urls = {}
for arxiv_id, module_name_and_members in arxiv_id2module_name_and_members.items():
urls = set()
for module_name_and_member in module_name_and_members:
module_name, type_and_member = module_name_and_member.split(":")
if "$" in type_and_member:
type, member = type_and_member.split("$")
else:
type = type_and_member
member = ""
_namespace_parts = module_name.split(".")
if type == "module":
first_namespace_part = _namespace_parts[0]
if first_namespace_part.startswith("langchain_"):
first_namespace_part = first_namespace_part.replace(
"langchain_", ""
)
url = f"{first_namespace_part}_api_reference.html#module-{module_name}"
elif type in ["class", "function"]:
second_namespace_part = _namespace_parts[1]
url = f"{second_namespace_part}/{module_name}.{member}.html#{module_name}.{member}"
else:
raise ValueError(
f"Unknown type: {type} in the {module_name_and_member}."
)
urls.add(url)
arxiv_id2urls[arxiv_id] = urls
return arxiv_id2urls
def search_code_for_arxiv_references(code_dir: Path) -> dict[str, set[str]]:
"""Search the code for arXiv references.
@@ -220,7 +188,6 @@ def search_code_for_arxiv_references(code_dir: Path) -> dict[str, set[str]]:
def search_templates_for_arxiv_references(templates_dir: Path) -> dict[str, set[str]]:
arxiv_url_pattern = re.compile(ARXIV_ID_PATTERN)
# exclude_strings = {"file_path", "metadata", "link", "loader", "PyPDFLoader"}
# loop all the Readme.md files since they are parsed into LangChain documentation
# exclude the Readme.md in the root folder
@@ -234,8 +201,6 @@ def search_templates_for_arxiv_references(templates_dir: Path) -> dict[str, set[
with open(file, "r", encoding="utf-8") as f:
lines = f.readlines()
for line in lines:
# if any(exclude_string in line for exclude_string in exclude_strings):
# continue
matches = arxiv_url_pattern.search(line)
if matches:
arxiv_id = matches.group(2)
@@ -247,6 +212,58 @@ def search_templates_for_arxiv_references(templates_dir: Path) -> dict[str, set[
return arxiv_id2template_names
def search_cookbooks_for_arxiv_references(cookbooks_dir: Path) -> dict[str, set[str]]:
arxiv_url_pattern = re.compile(ARXIV_ID_PATTERN)
files = (p.resolve() for p in Path(cookbooks_dir).glob("**/*.ipynb"))
arxiv_id2cookbook_names: dict[str, set[str]] = {}
for file in files:
with open(file, "r", encoding="utf-8") as f:
lines = f.readlines()
for line in lines:
matches = arxiv_url_pattern.search(line)
if matches:
arxiv_id = matches.group(2)
cookbook_name = file.stem
if arxiv_id not in arxiv_id2cookbook_names:
arxiv_id2cookbook_names[arxiv_id] = {cookbook_name}
else:
arxiv_id2cookbook_names[arxiv_id].add(cookbook_name)
return arxiv_id2cookbook_names
def convert_module_name_and_members_to_urls(
arxiv_id2module_name_and_members: dict[str, set[str]],
) -> dict[str, set[str]]:
arxiv_id2urls = {}
for arxiv_id, module_name_and_members in arxiv_id2module_name_and_members.items():
urls = set()
for module_name_and_member in module_name_and_members:
module_name, type_and_member = module_name_and_member.split(":")
if "$" in type_and_member:
type_, member = type_and_member.split("$")
else:
type_ = type_and_member
member = ""
_namespace_parts = module_name.split(".")
if type_ == "module":
first_namespace_part = _namespace_parts[0]
if first_namespace_part.startswith("langchain_"):
first_namespace_part = first_namespace_part.replace(
"langchain_", ""
)
url = f"{first_namespace_part}_api_reference.html#module-{module_name}"
elif type_ in ["class", "function"]:
second_namespace_part = _namespace_parts[1]
url = f"{second_namespace_part}/{module_name}.{member}.html#{module_name}.{member}"
else:
raise ValueError(
f"Unknown type: {type_} in the {module_name_and_member}."
)
urls.add(url)
arxiv_id2urls[arxiv_id] = urls
return arxiv_id2urls
def _get_doc_path(file_parts: tuple[str, ...], file_extension) -> str:
"""Get the relative path to the documentation page
from the absolute path of the file.
@@ -285,60 +302,6 @@ def _get_module_name(file_parts: tuple[str, ...]) -> str:
return ".".join(ns_parts)
def compound_urls(
arxiv_id2file_names: dict[str, set[str]],
arxiv_id2code_urls: dict[str, set[str]],
arxiv_id2templates: dict[str, set[str]],
) -> dict[str, dict[str, set[str]]]:
# format urls and verify that the urls are correct
arxiv_id2file_names_new = {}
for arxiv_id, file_names in arxiv_id2file_names.items():
key2urls = {
key: _format_doc_url(key)
for key in file_names
if _is_url_ok(_format_doc_url(key))
}
if key2urls:
arxiv_id2file_names_new[arxiv_id] = key2urls
arxiv_id2code_urls_new = {}
for arxiv_id, code_urls in arxiv_id2code_urls.items():
key2urls = {
key: _format_api_ref_url(key)
for key in code_urls
if _is_url_ok(_format_api_ref_url(key))
}
if key2urls:
arxiv_id2code_urls_new[arxiv_id] = key2urls
arxiv_id2templates_new = {}
for arxiv_id, templates in arxiv_id2templates.items():
key2urls = {
key: _format_template_url(key)
for key in templates
if _is_url_ok(_format_template_url(key))
}
if key2urls:
arxiv_id2templates_new[arxiv_id] = key2urls
arxiv_id2type2key2urls = dict.fromkeys(
arxiv_id2file_names_new | arxiv_id2code_urls_new | arxiv_id2templates_new
)
arxiv_id2type2key2urls = {k: {} for k in arxiv_id2type2key2urls}
for arxiv_id, key2urls in arxiv_id2file_names_new.items():
arxiv_id2type2key2urls[arxiv_id]["docs"] = key2urls
for arxiv_id, key2urls in arxiv_id2code_urls_new.items():
arxiv_id2type2key2urls[arxiv_id]["apis"] = key2urls
for arxiv_id, key2urls in arxiv_id2templates_new.items():
arxiv_id2type2key2urls[arxiv_id]["templates"] = key2urls
# reverse sort by the arxiv_id (the newest papers first)
ret = dict(
sorted(arxiv_id2type2key2urls.items(), key=lambda item: item[0], reverse=True)
)
return ret
def _is_url_ok(url: str) -> bool:
"""Check if the url page is open without error."""
import requests
@@ -424,6 +387,9 @@ class ArxivAPIWrapper(BaseModel):
referencing_template2url=type2key2urls["templates"]
if "templates" in type2key2urls
else {},
referencing_cookbook2url=type2key2urls["cookbooks"]
if "cookbooks" in type2key2urls
else {},
)
for result, type2key2urls in zip(results, arxiv_id2type2key2urls.values())
]
@@ -443,6 +409,10 @@ def _format_template_url(template_name: str) -> str:
return f"https://{LANGCHAIN_PYTHON_URL}/docs/templates/{template_name}"
def _format_cookbook_url(cookbook_name: str) -> str:
return f"https://github.com/langchain-ai/langchain/blob/master/cookbook/{cookbook_name}.ipynb"
def _compact_module_full_name(doc_path: str) -> str:
# agents/langchain_core.agents.AgentAction.html#langchain_core.agents.AgentAction
module = doc_path.split("#")[1].replace("module-", "")
@@ -454,9 +424,79 @@ def _compact_module_full_name(doc_path: str) -> str:
return module
def compound_urls(
arxiv_id2file_names: dict[str, set[str]],
arxiv_id2code_urls: dict[str, set[str]],
arxiv_id2templates: dict[str, set[str]],
arxiv_id2cookbooks: dict[str, set[str]],
) -> dict[str, dict[str, set[str]]]:
# format urls and verify that the urls are correct
arxiv_id2file_names_new = {}
for arxiv_id, file_names in arxiv_id2file_names.items():
key2urls = {
key: _format_doc_url(key)
for key in file_names
if _is_url_ok(_format_doc_url(key))
}
if key2urls:
arxiv_id2file_names_new[arxiv_id] = key2urls
arxiv_id2code_urls_new = {}
for arxiv_id, code_urls in arxiv_id2code_urls.items():
key2urls = {
key: _format_api_ref_url(key)
for key in code_urls
if _is_url_ok(_format_api_ref_url(key))
}
if key2urls:
arxiv_id2code_urls_new[arxiv_id] = key2urls
arxiv_id2templates_new = {}
for arxiv_id, templates in arxiv_id2templates.items():
key2urls = {
key: _format_template_url(key)
for key in templates
if _is_url_ok(_format_template_url(key))
}
if key2urls:
arxiv_id2templates_new[arxiv_id] = key2urls
arxiv_id2cookbooks_new = {}
for arxiv_id, cookbooks in arxiv_id2cookbooks.items():
key2urls = {
key: _format_cookbook_url(key)
for key in cookbooks
if _is_url_ok(_format_cookbook_url(key))
}
if key2urls:
arxiv_id2cookbooks_new[arxiv_id] = key2urls
arxiv_id2type2key2urls = dict.fromkeys(
arxiv_id2file_names_new
| arxiv_id2code_urls_new
| arxiv_id2templates_new
| arxiv_id2cookbooks_new
)
arxiv_id2type2key2urls = {k: {} for k in arxiv_id2type2key2urls}
for arxiv_id, key2urls in arxiv_id2file_names_new.items():
arxiv_id2type2key2urls[arxiv_id]["docs"] = key2urls
for arxiv_id, key2urls in arxiv_id2code_urls_new.items():
arxiv_id2type2key2urls[arxiv_id]["apis"] = key2urls
for arxiv_id, key2urls in arxiv_id2templates_new.items():
arxiv_id2type2key2urls[arxiv_id]["templates"] = key2urls
for arxiv_id, key2urls in arxiv_id2cookbooks_new.items():
arxiv_id2type2key2urls[arxiv_id]["cookbooks"] = key2urls
# reverse sort by the arxiv_id (the newest papers first)
ret = dict(
sorted(arxiv_id2type2key2urls.items(), key=lambda item: item[0], reverse=True)
)
return ret
def log_results(arxiv_id2type2key2urls):
arxiv_ids = arxiv_id2type2key2urls.keys()
doc_number, api_number, templates_number = 0, 0, 0
doc_number, api_number, templates_number, cookbooks_number = 0, 0, 0, 0
for type2key2url in arxiv_id2type2key2urls.values():
if "docs" in type2key2url:
doc_number += len(type2key2url["docs"])
@@ -464,9 +504,11 @@ def log_results(arxiv_id2type2key2urls):
api_number += len(type2key2url["apis"])
if "templates" in type2key2url:
templates_number += len(type2key2url["templates"])
if "cookbooks" in type2key2url:
cookbooks_number += len(type2key2url["cookbooks"])
logger.warning(
f"Found {len(arxiv_ids)} arXiv references in the {doc_number} docs, {api_number} API Refs,"
f" and {templates_number} Templates."
f" {templates_number} Templates, and {cookbooks_number} Cookbooks."
)
@@ -477,7 +519,7 @@ def generate_arxiv_references_page(file_name: Path, papers: list[ArxivPaper]) ->
LangChain implements the latest research in the field of Natural Language Processing.
This page contains `arXiv` papers referenced in the LangChain Documentation, API Reference,
and Templates.
Templates, and Cookbooks.
## Summary
@@ -510,6 +552,14 @@ and Templates.
for key, url in paper.referencing_template2url.items()
)
]
if paper.referencing_cookbook2url:
refs += [
"`Cookbook:` "
+ ", ".join(
f"[{key}]({url})"
for key, url in paper.referencing_cookbook2url.items()
)
]
refs_str = ", ".join(refs)
title_link = f"[{paper.title}]({paper.url})"
@@ -533,8 +583,17 @@ and Templates.
if paper.referencing_template2url
else ""
)
cookbook_refs = (
f" - **Cookbook:** {', '.join(f'[{key}]({url})' for key, url in paper.referencing_cookbook2url.items())}"
if paper.referencing_cookbook2url
else ""
)
refs = "\n".join(
[el for el in [docs_refs, api_ref_refs, template_refs] if el]
[
el
for el in [docs_refs, api_ref_refs, template_refs, cookbook_refs]
if el
]
)
f.write(f"""
## {paper.title}
@@ -562,8 +621,9 @@ def main():
)
arxiv_id2file_names = search_documentation_for_arxiv_references(DOCS_DIR)
arxiv_id2templates = search_templates_for_arxiv_references(TEMPLATES_DIR)
arxiv_id2cookbooks = search_cookbooks_for_arxiv_references(COOKBOOKS_DIR)
arxiv_id2type2key2urls = compound_urls(
arxiv_id2file_names, arxiv_id2code_urls, arxiv_id2templates
arxiv_id2file_names, arxiv_id2code_urls, arxiv_id2templates, arxiv_id2cookbooks
)
log_results(arxiv_id2type2key2urls)

View File

@@ -27,6 +27,7 @@ if __name__ == "__main__":
sidebar_hidden = """---
sidebar_class_name: hidden
custom_edit_url:
---
"""

View File

@@ -24,6 +24,7 @@ CHAT_MODEL_FEAT_TABLE = {
"ChatMistralAI": {
"tool_calling": True,
"structured_output": True,
"json_model": True,
"package": "langchain-mistralai",
"link": "/docs/integrations/chat/mistralai/",
},
@@ -80,6 +81,7 @@ CHAT_MODEL_FEAT_TABLE = {
"link": "/docs/integrations/chat/bedrock/",
},
"ChatHuggingFace": {
"tool_calling": True,
"local": True,
"package": "langchain-huggingface",
"link": "/docs/integrations/chat/huggingface/",
@@ -102,6 +104,7 @@ LLM_TEMPLATE = """\
sidebar_position: 1
sidebar_class_name: hidden
keywords: [compatibility]
custom_edit_url:
---
# LLMs
@@ -123,6 +126,7 @@ CHAT_MODEL_TEMPLATE = """\
sidebar_position: 0
sidebar_class_name: hidden
keywords: [compatibility, bind_tools, tool calling, function calling, structured output, with_structured_output, json mode, local model]
custom_edit_url:
---
# Chat models

Some files were not shown because too many files have changed in this diff.