Compare commits

..

243 Commits

Author SHA1 Message Date
Bagatur
8e2316b8c2 community[patch]: Release 0.2.11 (#24989) 2024-08-02 20:08:44 +00:00
ccurme
c2538e7834 experimental[patch]: bump min versions of core and community (#24996)
Ollama functions unit test broken with min version of community.
2024-08-02 19:58:55 +00:00
ccurme
acba38a18e docs: update toolkit guides (#24992) 2024-08-02 15:51:05 -04:00
ccurme
22c1a4041b community[patch]: support named arguments in github toolkit (#24986)
Parameters may be passed in by name if generated from tool calls.
2024-08-02 18:27:32 +00:00
ccurme
4797b806c2 experimental[patch]: release 0.0.64 (#24990) 2024-08-02 18:00:57 +00:00
Tomaz Bratanic
7061869aec Add relik graph transformer (#24982)
Relik is a new library for graph extraction that offers smaller and
cheaper models for graph construction
2024-08-02 13:55:41 -04:00
Erick Friis
98c22e9082 docs: feature table component (#24985) 2024-08-02 17:41:47 +00:00
ccurme
c04d95b962 standard-tests: set integration test parameters independent of unit test (#24979)
This ends up getting set in integration tests.
2024-08-02 10:40:11 -07:00
gbaian10
54e9ea433a fix: Modify the order of init_chat_model import ollama package. (#24977) 2024-08-02 08:32:56 -07:00
David Gao
fe1820cdaf docs: add wikipedia integration docs (#24932)
Dear langchain maintainers,

I added the wikipedia integration docs according to the [web
docs](https://python.langchain.com/v0.2/docs/integrations/retrievers/wikipedia/),
following the format of the [tavily
example](https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/retrievers/tavily.ipynb)
and the [retriever
template](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/retrievers.ipynb).
This is my first time contributing to a large repo, so please let me know
if I'm doing anything wrong. Thank you!

Topic related: #24908

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-02 10:12:04 -04:00
ZhangShenao
71c0564c9f community[patch]: Add test case for MoonshotChat (#24960)
Add test case for `MoonshotChat`.
2024-08-02 09:37:31 -04:00
ZhangShenao
c65e48996c patch[partners] Fix check_imports bugs in pinecone and milvus (#24971)
Fix incorrectly declared variables in `check_imports` for pinecone and milvus
2024-08-02 09:27:11 -04:00
Isaac Francisco
d7688a4328 community[patch]: adding artifact to Tavily search (#24376)
This allows you to get raw content as well as the answer, instead of
just getting the results.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-01 21:12:11 -07:00
Bagatur
7b08de8909 langchain[patch]: Release 0.2.12 (#24954) 2024-08-02 04:04:49 +00:00
Bagatur
245cb5a252 core[patch]: Release 0.2.27 (#24952) 2024-08-02 01:43:24 +00:00
Bagatur
199e9c5ae0 core[patch]: Fix tool args schema inherited field parsing (#24936)
Fix #24925
2024-08-01 18:36:33 -07:00
Bagatur
fba65ba04f infra: test core on py 3.9, 10, 11 (#24951) 2024-08-01 18:23:37 -07:00
Leonid Ganeline
4092876863 core: docstrings `BaseCallbackHandler update (#24948)
Added missing docstrings
2024-08-01 20:46:53 -04:00
ccurme
6e45dba471 docs: fix redirect (#24950) 2024-08-01 20:45:54 -04:00
WU LIFU
ad16eed119 core[patch]: runnable config ensure_config deep copy from var_child_runnable… (#24862)
**issue**: #24660 
RunnableWithMessageHistory.stream results in an error because the
[evaluation](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/runnables/branch.py#L220)
of the branch
[condition](99eb31ec41/libs/core/langchain_core/runnables/history.py (L328C1-L329C1))
unexpectedly triggers the
"[on_end](99eb31ec41/libs/core/langchain_core/runnables/history.py (L332))"
(exit_history) callback of the default branch.


**description**
After a lot of investigation I'm convinced that the root cause is:
1. During execution of the runnable, the
[var_child_runnable_config](99eb31ec41/libs/core/langchain_core/runnables/config.py (L122))
is shared between the branch
[condition](99eb31ec41/libs/core/langchain_core/runnables/history.py (L328C1-L329C1))
runnable and the [default branch
runnable](99eb31ec41/libs/core/langchain_core/runnables/history.py (L332))
within the same context.
2. When the default branch runnable runs, it gets the
[var_child_runnable_config](99eb31ec41/libs/core/langchain_core/runnables/config.py (L163))
and may unintentionally [add more handlers
](99eb31ec41/libs/core/langchain_core/runnables/config.py (L325))to
the callback manager of this config.
3. When it is again the turn for the
[condition](99eb31ec41/libs/core/langchain_core/runnables/history.py (L328C1-L329C1))
to run, it gets the `var_child_runnable_config` whose callback manager
now has the handlers added by the default branch. When it runs that
handler (`exit_history`), it leads to the error.

On the assumption that the `ensure_config` function actually does
want to create an immutable copy from `var_child_runnable_config`, because
it starts with an [`empty` variable
](99eb31ec41/libs/core/langchain_core/runnables/config.py (L156)),
I went ahead and did a deepcopy to ensure that future modifications to the
returned value won't affect the `var_child_runnable_config` variable.

Having said that, I
1. don't know if this is a proper fix,
2. don't know whether it will lead to other unintended consequences, and
3. don't know why only "stream" runs into this issue while "invoke" runs
without problems.

So @nfcampos @hwchase17 please help review, thanks!

---------

Co-authored-by: Lifu Wu <lifu@nextbillion.ai>
Co-authored-by: Nuno Campos <nuno@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-01 17:30:32 -07:00
Jacob Lee
3ab09d87d6 docs[patch]: Adds components for prereqs, compatibility, fix chat model tab issue (#24585)
Added to `docs/how_to/tools_runtime` as a proof of concept, will apply
everywhere if we like.

A bit more compact than the default callouts, will help standardize the
layout of our pages since we frequently use these boxes.

<img width="1088" alt="Screenshot 2024-07-23 at 4 49 02 PM"
src="https://github.com/user-attachments/assets/7380801c-e092-4d31-bcd8-3652ee05f29e">
2024-08-01 15:04:13 -07:00
ccurme
9cb69a8746 docs: update retriever template, add arxiv retriever (#24947) 2024-08-01 16:53:18 -04:00
Casey Clements
db3ceb4d0a partners/mongodb: Improved search index commands (#24745)
Hardens index commands with try/except for free clusters and optional
waits for syncing and tests.

[efriis](https://github.com/efriis) These are the upgrades to the search
index commands (CRUD) that I mentioned.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-01 20:16:32 +00:00
ccurme
db42576b09 docs: delete old migration guide (#24881)
Redirects to
https://python.langchain.com/v0.2/docs/versions/migrating_chains/
2024-08-01 16:11:47 -04:00
Ikko Eltociear Ashimine
be5294e35d docs: update agents.ipynb (#24945)
initalize -> initialize
2024-08-01 14:37:37 -04:00
ccurme
41ed23a050 docs: update retriever integration pages (#24931) 2024-08-01 14:37:07 -04:00
maang-h
ea505985c4 docs: Standardize ZhipuAIEmbeddings docstrings (#24933)
- **Description:** Standardize ZhipuAIEmbeddings rich docstrings.
- **Issue:** the issue #24856
2024-08-01 14:06:53 -04:00
ccurme
02db66d764 docs: fix kv store column headers (#24941)
![Screenshot 2024-08-01 at 12 32 19
PM](https://github.com/user-attachments/assets/888056b7-3065-4be0-a6b8-bcab5b729c2c)
2024-08-01 09:49:36 -07:00
Anneli Samuel
2204d8cb7d community[patch]: Invoke on_llm_new_token callback before yielding chunk (#24938)
**Description**: Invoke on_llm_new_token callback before yielding chunk
in streaming mode
**Issue**:
[#16913](https://github.com/langchain-ai/langchain/issues/16913)
2024-08-01 16:39:04 +00:00
John
ff6274d32d docs: update langchain-unstructured docs (#24935)
- **Description:** The UnstructuredClient will have a breaking change in
the near future. Add a note in the docs that the examples here may not
use the latest version and users should refer to the SDK docs for the
latest info.
2024-08-01 16:27:40 +00:00
ccurme
c72f0d2f20 docs: update toolkit integration pages (#24887)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-01 12:13:08 -04:00
Eugene Yurtsev
75776e4a54 core[patch]: In unit tests, use _schema() instead of BaseModel.schema() (#24930)
This PR introduces a module with some helper utilities for the pydantic
1 -> 2 migration.

They're meant to be used in the following way:

1) Use the utility code to get unit tests pass without requiring
modification to the unit tests
2) (If desired) upgrade the unit tests to match pydantic 2 output
3) (If desired) stop using the utility code

Currently, this module contains a way to map `schema()` generated by
pydantic 2 to (mostly) match the output from pydantic v1.
2024-08-01 11:59:04 -04:00
Serena Ruan
1827bb4042 community[patch]: support bind_tools for ChatMlflow (#24547)
- **Description:** 
Support ChatMlflow.bind_tools method
Tested in Databricks:
<img width="836" alt="image"
src="https://github.com/user-attachments/assets/fa28ef50-0110-4698-8eda-4faf6f0b9ef8">




---------

Signed-off-by: Serena Ruan <serena.rxy@gmail.com>
2024-08-01 08:43:07 -07:00
Michal Gregor
769c3bb838 huggingface: Added a missing argument to a ChatHuggingFace doc notebook. (#24929)
- **Description:** When adding docs for constructing ChatHuggingFace
using a HuggingFacePipeline, I forgot to add `return_full_text=False` as
an argument. In this setup, the chat response would incorrectly contain
all the input text. I am fixing that here by adding that line to the
offending notebook.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-01 15:42:35 +00:00
BottlePumpkin
bfc59c1d26 community: Fix KeyError in NotionDB loader when 'name' is missing (#24224)



**Description:** This PR fixes a KeyError in NotionDBLoader when the
"name" key is missing in the "people" property.

**Issue:** Fixes #24223 

**Dependencies:** None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-01 13:55:40 +00:00
alexqiao
8eb0bdead3 community[patch]: Invoke callback prior to yielding token (#24917)
**Description:** Invoke the callback prior to yielding the token in the stream
method for chat models.
**Issue:** https://github.com/langchain-ai/langchain/issues/16913 (#16913)
2024-08-01 13:19:55 +00:00
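A minimal sketch of the pattern these callback-ordering fixes describe, using a hypothetical custom chat model (everything below is illustrative, not the actual community code): the `on_llm_new_token` callback is fired before the chunk is yielded.

```python
from typing import Any, Iterator, List, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import AIMessageChunk, BaseMessage
from langchain_core.outputs import ChatGenerationChunk, ChatResult


class EchoChatModel(BaseChatModel):
    """Toy model that streams the last message back, token by token."""

    @property
    def _llm_type(self) -> str:
        return "echo-chat-model"

    def _generate(self, messages, stop=None, run_manager=None, **kwargs) -> ChatResult:
        raise NotImplementedError  # only streaming is shown in this sketch

    def _stream(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> Iterator[ChatGenerationChunk]:
        for token in str(messages[-1].content).split():
            chunk = ChatGenerationChunk(message=AIMessageChunk(content=token))
            # Fire the callback *before* yielding, per the fix above.
            if run_manager:
                run_manager.on_llm_new_token(token, chunk=chunk)
            yield chunk
```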
ZhangShenao
b2dd9ffaaf patch[cli] Fix bug in check_imports.py (#24918)
The variable `has_failure` in check_imports.py is declared incorrectly;
it actually refers to a different variable.
2024-08-01 09:08:12 -04:00
Jacob Lee
f14121faaf docs[patch]: Update local RAG tutorial (#24909) 2024-07-31 19:19:23 -07:00
Bagatur
b7abac9f92 infra: poetry lock root (#24913) 2024-08-01 01:19:34 +00:00
Jacob Lee
42c686bc28 docs[patch]: Update local model how-to guide (#24911)
Updates to use `langchain_ollama`, new models, chat model example
2024-07-31 18:01:55 -07:00
Erick Friis
600fc233ef partners/ollama: release 0.1.1 (#24910) 2024-07-31 17:31:29 -07:00
Bagatur
25b93cc4c0 core[patch]: stringify tool non-content blocks (#24626)
Slightly breaking bugfix. Shouldn't cause too many issues since no
models would be able to handle non-content block ToolMessage.content
anyways.
2024-07-31 16:42:38 -07:00
Bagatur
492df75937 docs: chat model table nit (#24907) 2024-07-31 15:14:27 -07:00
Bagatur
a24c445e02 docs: cleanup readme (#24905) 2024-07-31 15:03:28 -07:00
Jacob Lee
5098f9dc79 infra: related section in docs (#24829)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-31 14:25:58 -07:00
Nikita Pakunov
c776471ac6 community: fix AttributeError: 'YandexGPT' object has no attribute '_grpc_metadata' (#24432)
Fixes #24049

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-31 21:18:33 +00:00
Bagatur
752a71b688 integrations[patch]: release model packages (#24900) 2024-07-31 20:48:20 +00:00
Jacob Lee
1213a59f87 docs[patch]: Update kv store docs pages (#24848) 2024-07-31 13:23:24 -07:00
Erick Friis
17a06cb7a6 infra: check templates based on integration (#24857)
Instead of hardcoding a linter for each, iterate through the lines of
the template notebook, find lines that start with `##` (including lower
headings), and enforce that those headings are present in newly
contributed docs.
2024-07-31 13:19:50 -07:00
Erick Friis
a7380dd531 cli: release 0.0.28 (#24852) 2024-07-31 13:03:24 -07:00
Erick Friis
e98e4be0f7 cli: register new integration doc templates (#24854)
- wait to merge for retriever.ipynb merge #24836
2024-07-31 13:03:05 -07:00
Eugene Yurtsev
210623b409 core[minor]: Add support for pydantic 2 to utility to get fields (#24899)
Add compatibility for pydantic 2 for a utility function.

This will help push some small changes to master, so they don't have to
be kept track of on a separate branch.
2024-07-31 19:11:07 +00:00
Bagatur
7d1694040d core[patch]: Release 0.2.26 (#24898) 2024-07-31 19:00:50 +00:00
Eugene Yurtsev
add16111b9 community[patch]: Make the pydantic linter stricter (#24897)
Stricter linting of deprecated pydantic features.
2024-07-31 18:57:37 +00:00
Eugene Yurtsev
a4a444f73d community[patch]: Fix arcee llm usage of root_validator(pre=False) (#24896)
Should be pre=True
2024-07-31 18:49:20 +00:00
Eugene Yurtsev
69c656aa5f langchain[minor]: Upgrade ambiguous root_validator to @pre_init (#24895)
The @pre_init validator is a temporary solution for base models. It has
similar (but not identical) semantics to @root_validator(), but it works
strictly as a pre-init validator.

It'll work as expected as long as the pydantic model type hints were
correct.
2024-07-31 18:46:47 +00:00
Eugene Yurtsev
5099a9c9b4 core[patch]: Update unit tests with a workaround for using AnyID in pydantic 2 (#24892)
Pydantic 2 ignores __eq__ overload for subclasses of strings.
2024-07-31 14:42:12 -04:00
Bagatur
8461934c2b core[patch], integrations[patch]: convert TypedDict to tool schema support (#24641)
Supports the following UX:

```python
    class SubTool(TypedDict):
        """Subtool docstring"""

        args: Annotated[Dict[str, Any], {}, "this does bar"]

    class Tool(TypedDict):
        """Docstring
        Args:
            arg1: foo
        """

        arg1: str
        arg2: Union[int, str]
        arg3: Optional[List[SubTool]]
        arg4: Annotated[Literal["bar", "baz"], ..., "this does foo"]
        arg5: Annotated[Optional[float], None]
```

- can parse google style docstring
- can use Annotated to specify default value (second arg)
- can use Annotated to specify arg description (third arg)
- can have nested complex types
2024-07-31 18:27:24 +00:00
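A hedged usage sketch of the feature described above: passing a TypedDict class straight to `bind_tools`. The model class and model name are assumptions; any tool-calling chat model should behave the same way.

```python
from typing import Annotated

from typing_extensions import TypedDict

from langchain_openai import ChatOpenAI  # any tool-calling model; an assumption here


class Multiply(TypedDict):
    """Multiply two numbers.

    Args:
        a: first factor
    """

    a: int
    b: Annotated[int, ..., "second factor"]  # Annotated: (type, default, description)


llm = ChatOpenAI(model="gpt-4o-mini")        # model name is illustrative
llm_with_tools = llm.bind_tools([Multiply])  # TypedDict classes are converted to tool schemas
ai_msg = llm_with_tools.invoke("What is 6 times 7?")
print(ai_msg.tool_calls)
```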
Eugene Yurtsev
d24b82357f community[patch]: Add missing annotations (#24890)
This PR adds annotations in the community package.

Annotations are only strictly needed in subclasses of BaseModel for
pydantic 2 compatibility.

This PR adds some unnecessary annotations, but they're not bad to have
regardless for documentation pages.
2024-07-31 18:13:44 +00:00
Eugene Yurtsev
7720483432 langchain[patch]: Update unit tests to workaround a pydantic 2 issue (#24886)
This will allow our unit tests to pass when using AnyID() with our pydantic models.
2024-07-31 14:09:40 -04:00
Eugene Yurtsev
2019e31bc5 langchain[patch]: Add missing type annotations (#24889)
Adds missing type annotations in preparation for pydantic 2 upgrade.
2024-07-31 14:09:22 -04:00
ccurme
30f18c7b02 docs: add retriever integrations template (#24836) 2024-07-31 13:50:44 -04:00
Anirudh31415926535
4da3d4b18e docs: Minor corrections and updates to Cohere docs (#22726)
- **Description:** Update the Cohere's provider and RagRetriever
documentations with latest updates.
    - **Twitter handle:** Anirudh1810
2024-07-31 10:16:26 -07:00
ccurme
40b4a3de6e docs: update chat model integration pages (#24882)
to conform with template
2024-07-31 11:26:52 -04:00
Nishan Jain
b00c0fc558 [Community][minor]: Added prompt governance in pebblo_retrieval (#24874)
Title: [pebblo_retrieval] Identifying entities in prompts given in
PebbloRetrievalQA leading to prompt governance
Description: Implemented identification of entities in the prompt using
Pebblo prompt governance API.
Issue: NA
Dependencies: NA
Add tests and docs: NA
2024-07-31 13:14:51 +00:00
Rajendra Kadam
a6add89bd4 community[minor]: [PebbloSafeLoader] Implement content-size-based batching (#24871)
- **Title:** [PebbloSafeLoader] Implement content-size-based batching in
the classification flow(loader/doc API)
- **Description:** 
- Implemented content-size-based batching in the loader/doc API, set to
100KB with no external configuration option, intentionally hard-coded to
prevent timeouts.
    - Remove unused field(pb_id) from doc_metadata
- **Issue:** NA
- **Dependencies:** NA
- **Add tests and docs:** Updated
2024-07-31 09:10:28 -04:00
TrumanYan
096b66db4a community: replace it with Tencent Cloud SDK (#24172)
Description: The old method will be discontinued; use the official SDK
for more model options.
Issue: None
Dependencies: None
Twitter handle: None

Co-authored-by: trumanyan <trumanyan@tencent.com>
2024-07-31 09:05:38 -04:00
Erick Friis
99eb31ec41 cli: embed docstring template (#24855) 2024-07-31 02:16:40 +00:00
Noah Peterson
4b2a8ce6c7 docs: Shorten unreasonably long OllamaEmbeddings page (#24850)
This change removes excessive embeddings output in the Jupyter Notebook
on the [Ollama text embedding
page](https://python.langchain.com/v0.2/docs/integrations/text_embedding/ollama/)
2024-07-31 01:57:04 +00:00
Erick Friis
3999e9035c cli/docs: embedding template standardization (#24849) 2024-07-30 18:54:03 -07:00
Bagatur
1181c10c65 docs: reorder integrations sidebar (#24847) 2024-07-30 16:58:26 -07:00
Bagatur
943126c5fd docs: chat model pkg links (#24845) 2024-07-30 16:26:06 -07:00
Erick Friis
1f5444817a community: deprecate BedrockEmbeddings in favor of langchain-aws (#24846) 2024-07-30 23:13:17 +00:00
Jacob Lee
21eb4c9e5d docs[patch]: Adds first kv store doc matching new template (#24844) 2024-07-30 15:58:51 -07:00
Bagatur
a4e940550a docs: integrations custom callout (#24843) 2024-07-30 22:48:18 +00:00
Bagatur
61ecb10a77 docs: partner pkg table (#24840) 2024-07-30 15:28:10 -07:00
Erick Friis
b099cc3507 cli: release 0.0.27 (#24842) 2024-07-30 22:07:50 +00:00
Bagatur
419f2c2585 cli[patch]: tool integration templates (#24837)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-30 14:59:33 -07:00
mschoenb97IL
19b127f640 langchain: Update Langchain -> Langgraph migration docs for the deprecation of the messages_modifier parameter. (#24839)
**Description:** Updated the Langgraph migration docs to use
`state_modifier` rather than `messages_modifier`
**Issue:** N/A
**Dependencies:** N/A


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-30 21:28:32 +00:00
ccurme
c123cb2b30 docs: update migration guide (#24835)
Move to its own section in the sidebar.
2024-07-30 20:17:12 +00:00
Erick Friis
957b05b8d5 infra: py3.11 for community integration test compiling (#24834)
e.g.
https://github.com/langchain-ai/langchain/actions/runs/10167754785/job/28120861343?pr=24833
2024-07-30 18:43:10 +00:00
Erick Friis
88418af3f5 core: release 0.2.25 (#24833) 2024-07-30 18:41:09 +00:00
Bagatur
37b060112a langchain[patch]: fix ollama in init_chat_model (#24832) 2024-07-30 18:38:53 +00:00
Jerron Lim
d8f3ea82db langchain[patch]: init_chat_model() to import ChatOllama from langchain-ollama and fallback on langchain-community (#24821)
Description: init_chat_model() should import ChatOllama from
`langchain-ollama`. If that fails, fallback to `langchain-community`
2024-07-30 11:16:10 -07:00
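A minimal sketch of the fallback described in this commit, written as a hypothetical helper (not the actual `init_chat_model()` internals): prefer `langchain-ollama`, and fall back to `langchain-community` if it isn't installed.

```python
def _get_chat_ollama_cls():
    """Hypothetical helper: prefer langchain-ollama, fall back to community."""
    try:
        from langchain_ollama import ChatOllama  # preferred dedicated package
    except ImportError:
        from langchain_community.chat_models import ChatOllama  # legacy fallback
    return ChatOllama
```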
Eugene Yurtsev
3a7f3d46c3 docs: Add pydantic compatibility to side bar (#24826)
Add pydantic compatibility to side bar
2024-07-30 14:10:48 -04:00
Isaac Francisco
511242280b [docs]: standardize vectorstores (#24797) 2024-07-30 10:38:04 -07:00
Jacob Lee
ac649800df docs[patch]: Adds kv store integration docs template (#24804) 2024-07-30 10:07:57 -07:00
cffranco94
b01d938997 experimental: Add config to convert_to_graph_documents (#24012)
PR title: Experimental: Add config to convert_to_graph_documents

Description: In order to use langfuse, I need to pass the langfuse
configuration when invoking the chain. langchain_experimental does not
allow adding any parameters (besides the documents) to the
convert_to_graph_documents method, so I cannot monitor the chain
in langfuse.


---------

Co-authored-by: Catarina Franco <catarina.franco@criticalsoftware.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-30 17:01:06 +00:00
Shailendra Mishra
f2d810b3c0 clob_bugfix... (#24813)
2024-07-30 12:44:04 -04:00
Anush
51b15448cc community: Fix FastEmbedEmbeddings (#24462)
## Description

This PR:
- Fixes the validation error in `FastEmbedEmbeddings`.
- Adds support for `batch_size`, `parallel` params.
- Removes support for very old FastEmbed versions.
- Updates the FastEmbed doc with the new params.

Associated Issues:
- Resolves #24039
- Resolves #https://github.com/qdrant/fastembed/issues/296
2024-07-30 12:42:46 -04:00
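A hedged usage sketch of the new `batch_size` and `parallel` params added in this PR; the model name and values are illustrative.

```python
from langchain_community.embeddings import FastEmbedEmbeddings

embeddings = FastEmbedEmbeddings(
    model_name="BAAI/bge-small-en-v1.5",
    batch_size=256,  # documents encoded per batch (added in this PR)
    parallel=2,      # worker processes for large datasets (added in this PR)
)
vectors = embeddings.embed_documents(["hello world", "goodbye world"])
```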
ccurme
73ec24fc56 docs[patch]: add toolkit template (#24791) 2024-07-30 12:36:09 -04:00
Tamir Zitman
b3e1378f2b langchain : text_splitters Added PowerShell (#24582)
- **Description:** Added PowerShell support for text splitters language
include docs relevant update
  - **Issue:** None
  - **Dependencies:** None

---------

Co-authored-by: tzitman <tamir.zitman@intel.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-30 16:13:52 +00:00
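A hedged sketch of how the new PowerShell support would be used via `RecursiveCharacterTextSplitter.from_language`; the `Language.POWERSHELL` member name and the chunk sizes are assumptions based on this PR's description.

```python
from langchain_text_splitters import Language, RecursiveCharacterTextSplitter

ps_script = """
function Get-Greeting {
    param([string]$Name)
    Write-Output "Hello, $Name"
}
Get-Greeting -Name "LangChain"
"""

# Language.POWERSHELL is assumed to be the enum member added by this PR.
splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.POWERSHELL, chunk_size=60, chunk_overlap=0
)
print(splitter.split_text(ps_script))
```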
ccurme
187ee96f7a docs: update chat model feature table (#24822) 2024-07-30 09:06:42 -07:00
Nuno Campos
68ecebf1ec core: Fix implementation of trim_first_node/trim_last_node to use exact same definition of first/last node as in the getter methods (#24802) 2024-07-30 08:44:27 -07:00
Igor Drozdov
c2706cfb9e feat(community): add tools support for litellm (#23906)
I used the following example to validate the behavior

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField
from langchain_anthropic import ChatAnthropic
from langchain_community.chat_models import ChatLiteLLM
from langchain_core.tools import tool
from langchain.agents import create_tool_calling_agent, AgentExecutor

@tool
def multiply(x: float, y: float) -> float:
    """Multiply 'x' times 'y'."""
    return x * y

@tool
def exponentiate(x: float, y: float) -> float:
    """Raise 'x' to the 'y'."""
    return x**y

@tool
def add(x: float, y: float) -> float:
    """Add 'x' and 'y'."""
    return x + y

prompt = ChatPromptTemplate.from_messages([
    ("system", "you're a helpful assistant"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

tools = [multiply, exponentiate, add]

llm = ChatAnthropic(model="claude-3-sonnet-20240229", temperature=0)
# llm = ChatLiteLLM(model="claude-3-sonnet-20240229", temperature=0)

agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241", })
```

`ChatAnthropic` version works:

```
> Entering new AgentExecutor chain...

Invoking: `exponentiate` with `{'x': 5, 'y': 2.743}`
responded: [{'text': 'To calculate 3 + 5^2.743, we can use the "exponentiate" and "add" tools:', 'type': 'text', 'index': 0}, {'id': 'toolu_01Gf54DFTkfLMJQX3TXffmxe', 'input': {}, 'name': 'exponentiate', 'type': 'tool_use', 'index': 1, 'partial_json': '{"x": 5, "y": 2.743}'}]

82.65606421491815
Invoking: `add` with `{'x': 3, 'y': 82.65606421491815}`
responded: [{'id': 'toolu_01XUq9S56GT3Yv2N1KmNmmWp', 'input': {}, 'name': 'add', 'type': 'tool_use', 'index': 0, 'partial_json': '{"x": 3, "y": 82.65606421491815}'}]

85.65606421491815
Invoking: `add` with `{'x': 17.24, 'y': -918.1241}`
responded: [{'text': '\n\nSo 3 + 5^2.743 = 85.66\n\nTo calculate 17.24 - 918.1241, we can use:', 'type': 'text', 'index': 0}, {'id': 'toolu_01BkXTwP7ec9JKYtZPy5JKjm', 'input': {}, 'name': 'add', 'type': 'tool_use', 'index': 1, 'partial_json': '{"x": 17.24, "y": -918.1241}'}]

-900.8841[{'text': '\n\nTherefore, 17.24 - 918.1241 = -900.88', 'type': 'text', 'index': 0}]

> Finished chain.
```

While `ChatLiteLLM` version doesn't.

But with the changes in this PR, along with:

- https://github.com/langchain-ai/langchain/pull/23823
- https://github.com/BerriAI/litellm/pull/4554

The result is _almost_ the same:

```
> Entering new AgentExecutor chain...

Invoking: `exponentiate` with `{'x': 5, 'y': 2.743}`
responded: To calculate 3 + 5^2.743, we can use the "exponentiate" and "add" tools:

82.65606421491815
Invoking: `add` with `{'x': 3, 'y': 82.65606421491815}`


85.65606421491815
Invoking: `add` with `{'x': 17.24, 'y': -918.1241}`
responded:

So 3 + 5^2.743 = 85.66

To calculate 17.24 - 918.1241, we can use:

-900.8841

Therefore, 17.24 - 918.1241 = -900.88

> Finished chain.
```


Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-30 15:39:34 +00:00
David Robertson
bfb7f8d40a Brave Search: Enhance search result details with extra snippets (#19209)
**Description:** 

This update significantly improves the Brave Search Tool's utility
within the LangChain library by enriching the search results it returns.
The tool previously returned title, link, and snippet, with the snippet
being a truncated 140-character description from the search engine. To
make the search results more informative, this update enables
extra_snippets by default and introduces additional result fields:
title, link, description (enhancing and renaming the former snippet
field), age, and snippets. The snippets field provides a list of strings
summarizing the webpage, utilizing Brave's capability for more detailed
search insights. This enhancement aims to make the search tool far more
informative and beneficial for users.

**Issue:** N/A

**Dependencies:** No additional dependencies introduced.

**Twitter handle:** @davidalexr987

**Code Changes Summary:**

- Changed the default setting to include extra_snippets in search
results.
- Renamed the snippet field to description to accurately reflect its
content and included an age field for search results.
- Introduced a snippets field that lists webpage summaries, providing
users with comprehensive search result insights.

**Backward Compatibility Note:**

The renaming of snippet to description improves the accuracy of the
returned data field but may impact existing users who have developed
integrations or analyses based on the snippet field. I believe this
change is essential for clarity and utility, and it aligns better with
the data provided by Brave Search.

**Additional Notes:**

This proposal focuses exclusively on the Brave Search package, without
affecting other LangChain packages or introducing new dependencies.
2024-07-30 15:29:38 +00:00
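A hedged usage sketch assuming the existing `BraveSearch.from_api_key` constructor; per this PR, extra snippets are requested by default and each result carries the richer fields.

```python
from langchain_community.tools import BraveSearch

# search_kwargs are passed through to the Brave API; "count" limits results.
tool = BraveSearch.from_api_key(api_key="BRAVE_API_KEY", search_kwargs={"count": 3})
results = tool.run("LangChain Brave Search integration")
# Per this PR, each result now includes title, link, description, age and snippets.
print(results)
```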
Eugene Yurtsev
873f64751e docs: Remove danger on how to migrate to astream events v2 (#24825)
Users should migrate to v2 now
2024-07-30 15:28:07 +00:00
Ben Chambers
435771fe74 [community]: Fix package name mismatch (#24824)
- **Description:** fix a mismatch in pypi package names
2024-07-30 11:21:39 -04:00
ccurme
b7bbfc7c67 langchain: revert "init_chat_model() to support ChatOllama from langchain-ollama" (#24819)
Reverts langchain-ai/langchain#24818

Overlooked discussion in
https://github.com/langchain-ai/langchain/pull/24801.
2024-07-30 14:23:36 +00:00
Jerron Lim
5abfc85fec langchain: init_chat_model() to support ChatOllama from langchain-ollama (#24818)
Description: Since moving away from `langchain-community` is
recommended, `init_chat_models()` should import ChatOllama from
`langchain-ollama` instead.
2024-07-30 10:17:38 -04:00
Eugene Yurtsev
4fab8996cf docs: Update pydantic compatibility (#24625)
Update pydantic compatibility. This will only be true after we release
the partner packages.
2024-07-29 22:19:00 -04:00
Jacob Lee
d6ca1474e0 docs[patch]: Adds key-value store to conceptual guide (#24798) 2024-07-29 18:45:16 -07:00
Erick Friis
cdaea17b3e cli/docs: llm integration template standardization (#24795) 2024-07-29 17:47:13 -07:00
Bagatur
a6d1fb4275 core[patch]: introduce ToolMessage.status (#24628)
Anthropic models (including via Bedrock and other cloud platforms)
accept a status/is_error attribute on tool messages/results
(specifically in `tool_result` content blocks for Anthropic API). Adding
a ToolMessage.status attribute so that users can set this attribute when
using those models
2024-07-29 14:01:53 -07:00
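A short sketch of the new attribute: per this commit, `ToolMessage.status` defaults to `"success"` and can be set to `"error"` to flag a failed tool run (the values below are illustrative).

```python
from langchain_core.messages import ToolMessage

error_result = ToolMessage(
    content="division by zero",
    tool_call_id="toolu_abc123",  # illustrative id
    status="error",               # surfaced to providers such as Anthropic as is_error
)
```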
Isaac Francisco
78d97b49d9 [partner]: ollama llm fix (#24790) 2024-07-29 13:00:02 -07:00
maang-h
4bb1a11e02 community: Add MiniMaxChat bind_tools and structured output (#24310)
- **Description:** 
  - Add `bind_tools` method to support tool calling 
  - Add `with_structured_output` method to support structured output
2024-07-29 15:51:52 -04:00
John
0a2ff40fcc partners/unstructured: fix client api_url (#24680)
**Description:** Add empty string default for api_key and change
`server_url` to `url` to match existing loaders.

2024-07-29 11:16:41 -07:00
maang-h
bf685c242f docs: Standardize QianfanEmbeddingsEndpoint (#24786)
- **Description:** Standardize QianfanEmbeddingsEndpoint, including:
  - docstrings (issue #21983)
  - model init arg names (issue #20085)
2024-07-29 13:19:24 -04:00
ccurme
9998e55936 core[patch]: support tool calls with non-pickleable args in tools (#24741)
Deepcopy raises with non-pickleable args.
2024-07-29 13:18:39 -04:00
Erick Friis
df78608741 mongodb: bson optional import (#24685) 2024-07-29 09:54:01 -07:00
M. Ali
c086410677 fix docs typos (#23668)

Co-authored-by: mohblnk <mohamed.ali@blnk.ai>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-29 16:10:55 +00:00
Pere Pasamonte
98175860ad community: Fix AWS DocumentDB similarity_search when filter is None (#24777)
**Description**

Fixes DocumentDBVectorSearch similarity_search when no filter is used;
it defaults to None, but $match does not accept None, so the default is
changed to an empty {} before the pipeline is created.

**Issue**

AWS DocumentDB similarity search does not work when no filter is used.
Error msg: "the match filter must be an expression in an object" #24775

**Dependencies**

No dependencies

**Twitter handle**

https://x.com/perepasamonte

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-29 15:32:05 +00:00
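A hypothetical sketch of the defaulting logic this fix describes (not the actual DocumentDBVectorSearch code): `$match` rejects `None`, so fall back to an empty dict before building the pipeline.

```python
from typing import Any, Dict, List, Optional


def _build_pipeline(
    query_vector: List[float], k: int, filter: Optional[Dict[str, Any]] = None
) -> List[Dict[str, Any]]:
    """Hypothetical helper: $match rejects None, so default the filter to {}."""
    match_filter = filter if filter is not None else {}
    return [
        {"$match": match_filter},
        # ...remaining vector-search stages elided; shapes vary by setup...
    ]
```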
Lennart J. Kurzweg
7da0597ecb partners[ollama]: Support seed parameter for ChatOllama (#24782)
## Description

Adds seed parameter to ChatOllama

## Resolves Issues
- #24703

## Dependency Changes
None

Co-authored-by: Lennart J. Kurzweg (Nx2) <git@nx2.site>
2024-07-29 15:15:20 +00:00
ccurme
e264ccf484 standard-tests[patch]: update groq and structured output test (#24781)
- Mixtral with Groq has started consistently failing tool calling tests.
Here we restrict testing to llama 3.1.
- `.schema` is deprecated in pydantic proper in favor of
`.model_json_schema`.
2024-07-29 11:10:01 -04:00
ZhangShenao
4a05679fdb patch[experimental] Fix prompt in GenerativeAgentMemory (#24771)
There is an issue with the prompt format in `GenerativeAgentMemory`;
this fixes it. The prompt is the same as the one in the
`_score_memory_importance` method.
2024-07-29 07:02:31 -04:00
WU LIFU
2ba8393182 graph_transformers: bug fix for create_simple_model not passing in ll… (#24643)
issue: #24615 

description: The _Graph pydantic model generated from
create_simple_model (which LLMGraphTransformer uses when allowed nodes
and relationships are provided) does not constrain the relationships
(source and target types, relationship type), or the node and
relationship properties with enums, when using ChatOpenAI.
The issue is that when calling optional_enum_field throughout
create_simple_model, the llm_type parameter is not passed in except
when creating the node type. Passing it into each call fixes the issue.

Co-authored-by: Lifu Wu <lifu@nextbillion.ai>
2024-07-29 07:00:56 -04:00
William FH
01ab2918a2 core[patch]: Respect injected in bound fns (#24733)
Since right now you can't use the nice injected-arg syntax directly with
model.bind_tools().
2024-07-28 15:45:19 -07:00
Pavel
7fcfe7c1f4 openai[patch]: openai proxy added to base embeddings (#24539)
- [ ] **PR title**: "langchain-openai: openai proxy added to base
embeddings"

- **Description:**
Dear langchain developers, you've already supported a proxy for the
ChatOpenAI implementation in your package. At the same time, if somebody
needs to use a proxy for chat, it may also be necessary to use one for
OpenAIEmbeddings. That's why I think it's important to add proxy support
for OpenAI embeddings, and that's what I've done in this PR.

@baskaryan

---------

Co-authored-by: karpov <karpov@dohod.ru>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-07-28 20:54:13 +00:00
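A hedged usage sketch of the added option; the proxy URL and model name are illustrative.

```python
from langchain_openai import OpenAIEmbeddings

# openai_proxy routes embedding requests through the given HTTP(S) proxy,
# mirroring the option already available on ChatOpenAI.
embeddings = OpenAIEmbeddings(
    model="text-embedding-3-small",
    openai_proxy="http://my-corporate-proxy:8080",
)
```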
Lakshmi Peri
821196c4ee langchain-aws InMemoryVectorStore documentation updates (#24347)
- **PR title:** "Add documentation on InMemoryVectorStore driver for
MemoryDB to langchain-aws"
- **Description:** Added documentation on the InMemoryVectorStore driver to
aws.mdx and a usage example on a MemoryDB cluster.
- **Add tests and docs:** Added a memorydb notebook to the
docs/docs/integrations/ folder.
2024-07-28 15:09:51 -04:00
Chuck Wooters
56c2a7f6d4 partners: add missing key name to Field() for ChatFireworks model (#24721)
**Description:** 

In the `ChatFireworks` class definition, the Field() call for the "stop"
("stop_sequences") parameter is missing the "default" keyword.

**Issue:**

Type checker reports "stop_sequences" as a missing arg (not recognizing
the default value is None)

**Dependencies:**

None

**Twitter handle:**

None
2024-07-28 18:40:21 +00:00
AmosDinh
c113682328 community:Add support for specifying document_loaders.firecrawl api url. (#24747)
Add support for specifying the document_loaders.firecrawl api url.
This is mainly to support the
[self-hosting](https://github.com/mendableai/firecrawl/blob/main/SELF_HOST.md)
option firecrawl provides, e.g. now I can specify localhost:....

The corresponding firecrawl class already provides functionality to pass
the argument. See here:
4c9d62f6d3/apps/python-sdk/firecrawl/firecrawl.py (L29)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-28 14:30:36 -04:00
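A hedged usage sketch of pointing the loader at a self-hosted Firecrawl instance via the new `api_url` argument; the localhost port, mode, and key handling are assumptions.

```python
from langchain_community.document_loaders import FireCrawlLoader

loader = FireCrawlLoader(
    url="https://example.com",
    api_url="http://localhost:3002",  # self-hosted Firecrawl endpoint (added in this PR)
    api_key="placeholder-key",        # self-hosted instances may not require a real key
    mode="scrape",
)
docs = loader.load()
```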
Jerron Lim
df37c0d086 partners[ollama]: Support base_url for ChatOllama (#24719)
Add a class attribute `base_url` for ChatOllama to allow users to choose
a different URL to connect to.

Fixes #24555
2024-07-28 14:25:58 -04:00
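A hedged usage sketch combining the new `base_url` argument with the `seed` parameter from the ChatOllama seed PR above; the host, model, and seed values are illustrative.

```python
from langchain_ollama import ChatOllama

llm = ChatOllama(
    model="llama3.1",
    base_url="http://my-ollama-host:11434",  # non-default Ollama server (this PR)
    seed=42,                                 # reproducible sampling (#24782)
)
print(llm.invoke("Say hi in one word.").content)
```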
Bagatur
8964f8a710 core: use mypy<1.11 (#24749)
Bug in mypy 1.11.0 blocking CI, see example:
https://github.com/langchain-ai/langchain/actions/runs/10127096903/job/28004492692?pr=24641
2024-07-27 16:37:02 -07:00
Moritz
b81fbc962c docs: fix typo in DSPy docs (#24748)
**Description:** Just a missing "r" in metric
**Dependencies:** N/A
2024-07-27 23:34:39 +00:00
Isaac Francisco
152427eca1 make image inputs compatible with langchain_ollama (#24619) 2024-07-26 17:39:57 -07:00
William FH
0535d72927 Add type() in error msg (#24723) 2024-07-26 16:48:45 -07:00
Eugene Yurtsev
9be6b5a20f core[patch]: Correct doc-string for InMemoryRateLimiter (#24730)
Correct the documentation string.
2024-07-26 22:17:22 +00:00
Erick Friis
d5b4b7e05c infra: langchain max python 3.11 for resolution (#24729) 2024-07-26 21:17:11 +00:00
Erick Friis
3c3d3e9579 infra: community max python 3.11 for resolution (#24728) 2024-07-26 21:10:14 +00:00
Cristi Burcă
174e7d2ab2 langchain: Make OutputFixingParser.from_llm() create a useable retry chain (#24687)
Description: OutputFixingParser.from_llm() creates a retry chain that
returns a Generation instance, when it should actually just return a
string.
Issue: https://github.com/langchain-ai/langchain/issues/24600
Twitter handle: scribu

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-07-26 13:55:47 -07:00
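A hedged usage sketch of the parser after this fix; the chat model, Pydantic schema, and malformed payload are illustrative, and a configured model is required for the retry to actually run.

```python
from langchain.output_parsers import OutputFixingParser
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.pydantic_v1 import BaseModel
from langchain_openai import ChatOpenAI


class Person(BaseModel):
    name: str
    age: int


base_parser = PydanticOutputParser(pydantic_object=Person)
fixing_parser = OutputFixingParser.from_llm(llm=ChatOpenAI(), parser=base_parser)

# The retry chain now returns a plain string, so re-parsing the fixed output works.
person = fixing_parser.parse('{"name": "Ada", "age": "thirty-six"}')
```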
Bagatur
b3a23ddf93 integration releases (#24725)
Release anthropic, openai, groq, mistralai, robocorp
2024-07-26 12:30:10 -07:00
Bagatur
315223ce26 core[patch]: Release 0.2.24 (#24722) 2024-07-26 18:55:32 +00:00
Hayden Wolff
0345990a42 docs: Add NVIDIA NIMs to Model Tab and Feature Table (#24146)
**Description:** Add NVIDIA NIMs to Model Tab and LLM Feature Table

---------

Co-authored-by: Hayden Wolff <hwolff@nvidia.com>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-26 18:20:52 +00:00
Haijian Wang
cda3025ee1 Integrating the Yi family of models. (#24491)
- **PR title:** "community: add Yi LLM", "docs: add Yi Documentation"
- **Description:** This PR adds support for the Yi model to LangChain.
- **Dependencies:**
[langchain_core,requests,contextlib,typing,logging,json,langchain_community]
- **Twitter handle:** 01.AI
- **Add tests and docs:** I've added the corresponding documentation to the
relevant paths

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-07-26 10:57:33 -07:00
Bagatur
ad7581751f core[patch]: ChatPromptTemplate.init same as ChatPromptTemplate.from_… (#24486) 2024-07-26 10:48:39 -07:00
Marc Gibbons
cc451effd1 community[patch]: langchain_community.vectorstores.azuresearch Raise LangChainException instead of bare Exception (#23935)
Raise `LangChainException` instead of `Exception`. This alleviates the
need for library users to use bare try/except to handle exceptions
raised by `AzureSearch`.

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-26 15:59:06 +00:00
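A hedged sketch of what the change enables for callers; `vector_store` is assumed to be an already-configured AzureSearch instance.

```python
from langchain_core.exceptions import LangChainException

# `vector_store` is assumed to be a configured
# langchain_community.vectorstores.azuresearch.AzureSearch instance.
try:
    docs = vector_store.similarity_search("quarterly revenue", k=4)
except LangChainException as exc:
    # Previously this required a bare `except Exception`.
    print(f"Azure Search request failed: {exc}")
```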
Jacob Lee
3d16dcd88d docs[patch]: Hide deprecated ChatGPT plugins page (#24704) 2024-07-26 08:24:33 -07:00
Eugene Yurtsev
3a5365a33e ai21: apply rate limiter in integration tests (#24717)
Apply rate limiter in integration tests
2024-07-26 11:15:36 -04:00
Eugene Yurtsev
03d62a737a together: Add rate limiter to integration tests (#24714)
Rate limit the integration tests to avoid getting 429s.
2024-07-26 10:59:33 -04:00
Eugene Yurtsev
e00cc74926 docs[minor]: Add how to guide for rate limiting a chat model (#24686)
Add how-to guide for rate limiting a chat model.
2024-07-26 14:29:06 +00:00
Diverrez morgan
c4d2a53f18 community: creation score_threshold in flashrank_rerank.py (#24016)
Description:
Adds an optional relevance score threshold to select only coherent
documents; it complements top_n.

Discussion:
add relevance score threshold in flashrank_rerank document compressors
#24013

Dependencies:
 no dependencies

---------

Co-authored-by: Benjamin BERNARD <benjamin.bernard@openpathview.fr>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-26 13:34:39 +00:00
Cong Peng
190988d93e community: Add parameter allow_dangerous_requests to WebResearchRetriever.from_llm construct (#24712)
**Description:** Avoids a ValueError when constructing the retriever via the
`from_llm()` method.
2024-07-26 06:24:58 -07:00
monysun
5f593c172a community: fix dashcope embeddings embed_query func post too much req to api (#24707)
The `embed_query` function of the DashScope embeddings sends a `str`
parameter, and the `embed_with_retry` function then sends malformed content
to the API.
2024-07-26 12:44:07 +00:00
yonarw
b65ac8d39c community[minor]: Self query retriever for HANA Cloud Vector Engine (#24494)
Description:

- This PR adds a self query retriever implementation for SAP HANA Cloud
Vector Engine. The retriever supports all operators except for contains.
- Issue: N/A
- Dependencies: no new dependencies added

**Add tests and docs:**
Added integration tests to:
libs/community/tests/unit_tests/query_constructors/test_hanavector.py

**Documentation for self query retriever:**
/docs/integrations/retrievers/self_query/hanavector_self_query.ipynb

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-07-26 06:56:51 +00:00
nobbbbby
4f3b4fc7fe community[patch]: Extend Baichuan model with tool support (#24529)
**Description:** Expanded the chat model functionality to support tools
in the 'baichuan.py' file. Updated module imports and added tool object
handling in message conversions. Additional changes include the
implementation of tool binding and related unit tests. The alterations
offer enhanced model capabilities by enabling interaction with tool-like
objects.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-25 23:20:44 -07:00
Rave Harpaz
ee399e3ec5 community[patch]: Add OCI Generative AI tool and structured output support (#24693)
- [x] **PR title**: 
  community: Add OCI Generative AI tool and structured output support


- [x] **PR message**: 
- **Description:** adding tool calling and structured output support for
chat models offered by OCI Generative AI services. This is an update to
our last PR 22880 with changes in
/langchain_community/chat_models/oci_generative_ai.py
    - **Issue:** NA
    - **Dependencies:** NA
    - **Twitter handle:** NA


- [x] **Add tests and docs**: 
  1. we have updated our unit tests
2. we have updated our documentation under
/docs/docs/integrations/chat/oci_generative_ai.ipynb


- [x] **Lint and test**: `make format`, `make lint` and `make test` we
run successfully

---------

Co-authored-by: RHARPAZ <RHARPAZ@RHARPAZ-5750.us.oracle.com>
Co-authored-by: Arthur Cheng <arthur.cheng@oracle.com>
2024-07-25 23:19:00 -07:00
Yuki Watanabe
2b6a262f84 community[patch]: Replace filters argument to filter in DatabricksVectorSearch (#24530)
The
[DatabricksVectorSearch](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/databricks_vector_search.py#L21)
class exposes similarity search APIs with the argument `filters`, which is
inconsistent with other VS classes that use `filter` (singular). This PR
updates the argument and adds an alias for backward compatibility.

---------

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
2024-07-25 21:20:18 -07:00
Leonid Ganeline
148766ddc1 docs: integrations missed links (#24681)
Added missing links and a missing provider page
2024-07-25 20:38:25 -07:00
Sunish Sheth
59880a9147 community[patch]: mlflow handle empty chunk(#24689) 2024-07-25 20:36:29 -07:00
Eugene Yurtsev
20690db482 core[minor]: Add BaseModel.rate_limiter, RateLimiter abstraction and in-memory implementation (#24669)
This PR proposes to create a rate limiter in the chat model directly,
and would replace: https://github.com/langchain-ai/langchain/pull/21992

It resolves most of the constraints that the Runnable rate limiter
introduced:

1. It's not annoying to apply the rate limiter to existing code; i.e., 
possible to roll out the change at the location where the model is
instantiated,
rather than at every location where the model is used! (Which is
necessary
   if the model is used in different ways in a given application.)
2. batch rate limiting is enforced properly
3. the rate limiter works correctly with streaming
4. the rate limiter is aware of the cache
5. The rate limiter can take into account information about the inputs
into the
model (we can add optional inputs to it down-the road together with
outputs!)

The only downside is that information will not be properly reflected in
tracing, as we don't have any metadata events about a rate limiter. So the
total time spent on a model invocation will be:

* time spent waiting for the rate limiter
* time spend on the actual model request

## Example

```python
from langchain_core.rate_limiters import InMemoryRateLimiter
from langchain_groq import ChatGroq

groq = ChatGroq(rate_limiter=InMemoryRateLimiter(check_every_n_seconds=1))
groq.invoke('hello')
```
2024-07-26 03:03:34 +00:00
Eugene Yurtsev
c623ae6661 experimental[patch]: Fix import test (#24672)
Import test was misconfigured, the glob wasn't returning any file paths
2024-07-25 22:14:40 -04:00
Chaunte W. Lacewell
69eacaa887 Community[minor]: Update VDMS vectorstore (#23729)
**Description:** 
- This PR exposes some functions in VDMS vectorstore, updates VDMS
related notebooks, updates tests, and upgrade version of VDMS (>=0.0.20)

**Issue:** N/A

**Dependencies:** 
- Update vdms>=0.0.20
2024-07-25 22:13:04 -04:00
sykp241095
703491e824 docs: update another TiDB Cloud link as it is already public beta (#24694)
2024-07-25 18:39:55 -07:00
Nuno Campos
8734cabc09 core: Don't draw None edge labels (#24690)
2024-07-25 22:12:39 +00:00
Jacob Lee
ce067c19e9 docs[patch]: Simplify tool calling guide, improve tool calling conceptual guide (#24637)
Lots of duplicated content from concepts, missing pointers to the second
half of the tool calling loop

Simpler + more focused + a more prominent link to the second half of the
loop was what I was aiming for, but down to be more conservative and
just more prominently link the "passing tools back to the model" guide.

I have also moved the tool calling conceptual guide out from under
`Structured Output` (while leaving a small section for structured
output-specific information) and added more content. The existing
`#functiontool-calling` link will go to this new section.
2024-07-25 14:39:14 -07:00
Bagatur
4840db6892 docs: standardize groq chat model docs (#24616)
part of #22296

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-07-25 14:10:49 -07:00
Isaac Francisco
218c554c4f [docs]: add doctoring to ChatTogether (#24636) 2024-07-25 14:10:41 -07:00
Bagatur
0fe29b4343 docs: standardize Together docs (#24617)
Part of #22296

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-07-25 14:10:31 -07:00
Isaac Francisco
5c7e589aaf deprecating ollama_functions (#24632) 2024-07-25 13:50:04 -07:00
KyrianC
0fdbaf4a8d community: fix ChatEdenAI + EdenAI Tools (#23715)
Fixes for Eden AI Custom tools and ChatEdenAI:
- add missing import in __init__ of chat_models
- add `args_schema` to custom tools. otherwise '__arg1' would sometimes
be passed to the `run` method
- fix IndexError when no human msg is added in ChatEdenAI
2024-07-25 15:19:14 -04:00
Daniel Campos
871bf5a841 docs: Update snowflake.mdx for arctic-m-v1.5 (#24678)
2024-07-25 17:48:54 +00:00
Leonid Ganeline
8b7cffc363 docs: integrations missed references (#24631)
**Issue:** Several packages are not referenced in the `providers` pages.

**Fix:** Added the missed references. Fixed the notebook formatting.
2024-07-25 13:26:46 -04:00
ccurme
58dd69f7f2 core[patch]: fix mutating tool calls (#24677)
In some cases tool calls are mutated when passed through a tool.
2024-07-25 16:46:36 +00:00
ccurme
dfbd12b384 mistral[patch]: translate tool call IDs to mistral compatible format (#24668)
Mistral appears to have added validation for the format of its tool call
IDs:

`{"object":"error","message":"Tool call id was abc123 but must be a-z,
A-Z, 0-9, with a length of
9.","type":"invalid_request_error","param":null,"code":null}`

This breaks compatibility of messages from other providers. Here we add
a function that converts any string to a Mistral-valid tool call ID, and
apply it to incoming messages.
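
For illustration, a hypothetical sketch of such a conversion (not the package's actual implementation): deterministically hash the incoming ID down to nine characters from [a-zA-Z0-9].

```python
import hashlib
import string

_ALPHABET = string.ascii_letters + string.digits  # a-z, A-Z, 0-9


def to_mistral_tool_call_id(tool_call_id: str) -> str:
    """Deterministically map an arbitrary id to 9 valid characters (sketch only)."""
    digest = hashlib.sha256(tool_call_id.encode()).digest()
    return "".join(_ALPHABET[b % len(_ALPHABET)] for b in digest[:9])


print(to_mistral_tool_call_id("abc123"))  # same 9-character id every time
```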
2024-07-25 12:39:32 -04:00
maang-h
38d30e285a docs: Standardize BaichuanTextEmbeddings docstrings (#24674)
- **Description:** Standardize BaichuanTextEmbeddings docstrings.
- **Issue:** the issue #21983
2024-07-25 12:12:00 -04:00
Eugene Yurtsev
89bcca3542 experimental[patch]: Bump core (#24671) 2024-07-25 09:05:43 -07:00
rick-SOPTIM
cd563fb628 community[minor]: passthrough auth parameter on requests to Ollama-LLMs (#24068)
Thank you for contributing to LangChain!

**Description:**
This PR allows users of `langchain_community.llms.ollama.Ollama` to
specify the `auth` parameter, which is then forwarded to all internal
calls of `requests.request`. This works in the same way as the existing
`headers` parameter. The `auth` parameter enables use of the class with
Ollama instances that are secured by more complex authentication
mechanisms which do not rely solely on static headers. An example is an
AWS API Gateway secured by the IAM authorizer, which expects signatures
dynamically calculated for the specific HTTP request.
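
A minimal usage sketch, assuming a basic-auth-protected endpoint (the URL, model name, and credentials are placeholders; any object accepted by the `auth=` argument of `requests` should work):

```python
from requests.auth import HTTPBasicAuth

from langchain_community.llms.ollama import Ollama

llm = Ollama(
    base_url="https://ollama.example.com",  # placeholder for a secured Ollama endpoint
    model="llama3",
    auth=HTTPBasicAuth("user", "password"),  # forwarded to requests.request(..., auth=...)
)
print(llm.invoke("Hello"))
```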

**Issue:**

Integrating a remote LLM running through Ollama using
`langchain_community.llms.ollama.Ollama` only allows setting static HTTP
headers with the `headers` parameter. This does not work if the given
Ollama instance is secured with an authentication mechanism that relies
on dynamically created HTTP headers, which may, for example, depend on
the content of a given request.

**Dependencies:**

None

**Twitter handle:**

None

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-25 15:48:35 +00:00
남광우
256bad3251 core[minor]: Support asynchronous in InMemoryVectorStore (#24472)
### Description

* Support asynchronous operations in InMemoryVectorStore
* Since embeddings may be called asynchronously, ensure that both the
asynchronous and synchronous functions operate correctly.
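
A minimal sketch of the async usage (the fake embedding model is used only to keep the example self-contained):

```python
import asyncio

from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.vectorstores import InMemoryVectorStore


async def main() -> None:
    store = InMemoryVectorStore(embedding=DeterministicFakeEmbedding(size=8))
    await store.aadd_texts(["hello world", "goodbye world"])
    docs = await store.asimilarity_search("hello", k=1)
    print(docs[0].page_content)


asyncio.run(main())
```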
2024-07-25 11:36:55 -04:00
Luca Dorigo
5fdbdd6bec community[patch]: Fix invalid iohttp verify parameter (#24655)
Should fix https://github.com/langchain-ai/langchain/issues/24654
2024-07-25 11:09:21 -04:00
Daniel Glogowski
221486687a docs: updated CHATNVIDIA notebooks (#24584)
Updated notebook for tool calling support in chat models
2024-07-25 09:22:53 -04:00
Ken Jenney
d6631919f4 docs: tool calling is enabled in ChatOllama (#24665)
Description: According to this page:
https://python.langchain.com/v0.2/docs/integrations/chat/ollama_functions/
ChatOllama does support Tool Calling.
Issue: The documentation is incorrect
Dependencies: None
Twitter handle: NA
2024-07-25 13:21:30 +00:00
sykp241095
235eb38d3e docs: update TiDB Cloud links as vector search feature becomes public beta (#24667)
2024-07-25 13:20:02 +00:00
Eugene Yurtsev
7dd6b32991 core[minor]: Add InMemoryRateLimiter (#21992)
This PR introduces the following Runnables:

1. BaseRateLimiter: an abstraction for specifying a time based rate
limiter as a Runnable
2. InMemoryRateLimiter: Provides an in-memory implementation of a rate
limiter

## Example

```python

from langchain_core.runnables import InMemoryRateLimiter, RunnableLambda
from datetime import datetime

foo = InMemoryRateLimiter(requests_per_second=0.5)

def meow(x):
    print(datetime.now().strftime("%H:%M:%S.%f"))
    return x

chain = foo | meow

for _ in range(10):
    print(chain.invoke('hello'))
```

Produces:

```
17:12:07.530151
hello
17:12:09.537932
hello
17:12:11.548375
hello
17:12:13.558383
hello
17:12:15.568348
hello
17:12:17.578171
hello
17:12:19.587508
hello
17:12:21.597877
hello
17:12:23.607707
hello
17:12:25.617978
hello
```


![image](https://github.com/user-attachments/assets/283af59f-e1e1-408b-8e75-d3910c3c44cc)


## Interface

The rate limiter uses the following interface for acquiring a token:

```python
class BaseRateLimiter(Runnable[Input, Output], abc.ABC):
    @abc.abstractmethod
    def acquire(self, *, blocking: bool = True) -> bool:
        """Attempt to acquire the necessary tokens for the rate limiter."""
```

The flag `blocking` has been added to the abstraction to allow
supporting streaming (which is easier if blocking=False).

## Limitations

- The rate limiter is not designed to work across different processes.
It is an in-memory rate limiter, but it is thread safe.
- The rate limiter only supports time-based rate limiting. It does not
take into account the size of the request or any other factors.
- The current implementation does not handle streaming inputs well and
will consume all inputs even if the rate limit has been reached. Better
support for streaming inputs will be added in the future.
- When the rate limiter is combined with another runnable via a
RunnableSequence, usage of .batch() or .abatch() will only respect the
average rate limit. There will be bursty behavior as .batch() and
.abatch() wait for each step to complete before starting the next step.
One way to mitigate this is to use batch_as_completed() or
abatch_as_completed().

## Bursty behavior in `batch` and `abatch`

When the rate limiter is combined with another runnable via a
RunnableSequence, usage of .batch() or .abatch() will only respect the
average rate limit. There will be bursty behavior as .batch() and
.abatch() wait for each step to complete before starting the next step.

This becomes a problem if users are using `batch` and `abatch` with many
inputs (e.g., 100). In this case, there will be a burst of 100 inputs
into the batch of the rate limited runnable.

1. Using a RunnableBinding

The API would look like:

```python
from langchain_core.runnables import InMemoryRateLimiter, RunnableLambda

rate_limiter = InMemoryRateLimiter(requests_per_second=0.5)

def meow(x):
    return x

rate_limited_meow = RunnableLambda(meow).with_rate_limiter(rate_limiter)
```

2. Another option is to add some init option to RunnableSequence that
changes `.batch()` to be depth first (e.g., by delegating to
`batch_as_completed`)

```python
RunnableSequence(first=rate_limiter, last=model, how='batch-depth-first')
```

Pros: Does not require Runnable Binding
Cons: Feels over-complicated
2024-07-25 01:34:03 +00:00
Oleg Kulyk
4b1b7959a2 community[minor]: Add ScrapingAnt Loader Community Integration (#24514)
Added [ScrapingAnt](https://scrapingant.com/) Web Loader integration.
ScrapingAnt is a web scraping API that allows extracting web page data
into accessible and well-formatted markdown.

Description: Added ScrapingAnt web loader for retrieving web page data
as markdown
Dependencies: scrapingant-client
Twitter: @WeRunTheWorld3
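
A hypothetical usage sketch (the loader name and constructor arguments are assumptions, not taken from the PR text):

```python
from langchain_community.document_loaders import ScrapingAntLoader  # assumed import path

loader = ScrapingAntLoader(
    ["https://scrapingant.com/"],          # pages to fetch
    api_key="<your-scrapingant-api-key>",  # placeholder credential
)
docs = loader.load()  # page content is returned as markdown
```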

---------

Co-authored-by: Oleg Kulyk <oleg@scrapingant.com>
2024-07-24 21:11:43 -04:00
Jacob Lee
afee851645 docs[patch]: Fix image caption document loader page and typo on custom tools page (#24635) 2024-07-24 17:16:18 -07:00
Jacob Lee
a73e2222d4 docs[patch]: Updates LLM caching, HF sentence transformers, and DDG pages (#24633) 2024-07-24 16:58:05 -07:00
Erick Friis
e160b669c8 infra: add unstructured api key to release (#24638) 2024-07-24 16:47:24 -07:00
John
d59c656ea5 unstructured, community, initialize langchain-unstructured package (#22779)
#### Update (2): 
A single `UnstructuredLoader` is added to handle both local and API
partitioning. This loader also handles single or multiple documents.
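
A minimal sketch of the unified loader; the file path and the `partition_via_api` flag are assumptions for illustration:

```python
from langchain_unstructured import UnstructuredLoader

# Local partitioning of a single file
loader = UnstructuredLoader(file_path="example.pdf")
docs = loader.load()

# API partitioning with the same loader (assumes UNSTRUCTURED_API_KEY is set)
api_loader = UnstructuredLoader(file_path="example.pdf", partition_via_api=True)
api_docs = api_loader.load()
```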

#### Changes in `community`:
Changes here do not affect users. In the initial process of using the
SDK for the API Loaders, the Loaders in community were refactored.
Other changes include:
The `UnstructuredBaseLoader` has a new check to see whether both
`mode="paged"` and `chunking_strategy="by_page"` are set. It also now
adds `Element.element_id` to the `Document.metadata`.
`UnstructuredAPIFileLoader` and `UnstructuredAPIFileIOLoader` were
refactored as well: both now directly inherit from
`UnstructuredBaseLoader`, initialize their `file_path`/`file` attributes
respectively, and implement their own `_post_process_elements` methods.

--------
#### Update:
New SDK Loaders in a [partner
package](https://python.langchain.com/v0.1/docs/contributing/integrations/#partner-package-in-langchain-repo)
are introduced to prevent breaking changes for users (see discussion
below).

##### TODO:
- [x] Test docstring examples
--------
- **Description:** UnstructuredAPIFileIOLoader and
UnstructuredAPIFileLoader calls to the unstructured api are now made
using the unstructured-client sdk.
- **New Dependencies:** unstructured-client

- [x] **Add tests and docs**: If you're adding a new integration, please
include
- [x] a test for the integration, preferably unit tests that do not rely
on network access,
- [x] update the description in
`docs/docs/integrations/providers/unstructured.mdx`
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


TODO:
- [x] Update
https://python.langchain.com/v0.1/docs/integrations/document_loaders/unstructured_file/#unstructured-api
-
`langchain/docs/docs/integrations/document_loaders/unstructured_file.ipynb`
- The description here needs to indicate that users should install
`unstructured-client` instead of `unstructured`. Read over closely to
look for any other changes that need to be made.
- [x] Update the `lazy_load` method in `UnstructuredBaseLoader` to
handle json responses from the API instead of just lists of elements.
- This method may need to be overwritten by the API loaders instead of
changing it in the `UnstructuredBaseLoader`.
- [x] Update the documentation links in the class docstrings (the
Unstructured documents have moved)
- [x] Update Document.metadata to include `element_id` (see thread
[here](https://unstructuredw-kbe4326.slack.com/archives/C044N0YV08G/p1718187499818419))

---------

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: ChengZi <chen.zhang@zilliz.com>
2024-07-24 23:21:20 +00:00
Leonid Ganeline
2394807033 docs: fix ChatGooglePalm fix (#24629)
**Issue:** now the
[ChatGooglePalm](https://python.langchain.com/v0.2/docs/integrations/vectorstores/scann/#retrievalqa-demo)
class is not parsed and is not presented in the "API Reference:" line.

**PR:** [Fixed
it](https://langchain-7n5k5wkfs-langchain.vercel.app/v0.2/docs/integrations/vectorstores/scann/#retrievalqa-demo)
by properly importing.
2024-07-24 18:09:08 -04:00
Joel Akeret
acfce30017 Adding compatibility for OllamaFunctions with ImagePromptTemplate (#24499)
- [ ] **PR title**: "experimental: Adding compatibility for
OllamaFunctions with ImagePromptTemplate"

- [ ] **PR message**: 
- **Description:** Removes the outdated
`_convert_messages_to_ollama_messages` method override in the
`OllamaFunctions` class to ensure that ollama multimodal models can be
invoked with an image.
    - **Issue:** #24174

---------

Co-authored-by: Joel Akeret <joel.akeret@ti&m.com>
Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-07-24 14:57:05 -07:00
Erick Friis
8f3c052db1 cli: release 0.0.26 (#24623)
- **cli: remove snapshot flag from pytest defaults**
2024-07-24 13:13:58 -07:00
ChengZi
29a3b3a711 partners[milvus]: add dynamic field (#24544)
add dynamic field feature to langchain_milvus
more unit tests, more robust

plan to deprecate `metadata_field` in the future, because its function
is the same as `enable_dynamic_field`, but the latter is a more advanced
concept in Milvus
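
A minimal sketch of the new option (the Milvus Lite URI and fake embedding are assumptions to keep the example self-contained):

```python
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_milvus import Milvus

vector_store = Milvus(
    embedding_function=DeterministicFakeEmbedding(size=8),
    connection_args={"uri": "./milvus_demo.db"},  # Milvus Lite file
    enable_dynamic_field=True,  # metadata keys no longer need a fixed schema
    auto_id=True,
)
vector_store.add_texts(["hello"], metadatas=[{"source": "demo", "page": 1}])
```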

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-24 20:01:58 +00:00
Erick Friis
20fe4deea0 milvus: release 0.1.3 (#24624) 2024-07-24 13:01:27 -07:00
Erick Friis
3a55f4bfe9 cli: remove snapshot flag from pytest defaults (#24622) 2024-07-24 19:41:01 +00:00
Isaac Francisco
fea9ff3831 docs: add tables for search and code interpreter tools (#24586) 2024-07-24 10:51:39 -07:00
Eugene Yurtsev
b55f6105c6 community[patch]: Add linter to prevent further usage of root_validator and validator (#24613)
This linter is meant to move development to use __init__ instead of
root_validator and validator.

We need to investigate whether we need to lint some of the functionality
of Field (e.g., `lt` and `gt`, `alias`)

`alias` is the one that's most popular:

```
(community) ➜ community git:(eugene/add_linter_to_community) ✗ git grep " Field(" | grep "alias=" | wc -l
144

(community) ➜ community git:(eugene/add_linter_to_community) ✗ git grep " Field(" | grep "ge=" | wc -l
10

(community) ➜ community git:(eugene/add_linter_to_community) ✗ git grep " Field(" | grep "gt=" | wc -l
4
```
2024-07-24 12:35:21 -04:00
Anush
4585eaef1b qdrant: Fix vectors_config access (#24606)
## Description

Fixes #24558 by accessing `vectors_config` after asserting it to be a
dict.
2024-07-24 10:54:33 -04:00
ccurme
f337f3ed36 docs: update chain migration guide (#24501)
- Update `ConversationChain` example to show use without session IDs;
- Fix a minor bug (specify history_messages_key).
2024-07-24 10:45:00 -04:00
maang-h
22175738ac docs: Add MongoDBChatMessageHistory docstrings (#24608)
- **Description:** Add MongoDBChatMessageHistory rich docstrings.
- **Issue:** the issue #21983
2024-07-24 10:12:44 -04:00
Anindyadeep
12c3454fd9 [Community] PremAI Tool Calling Functionality (#23931)
This PR is under WIP and adds the following functionalities:

- [X] Supports tool calling across the langchain ecosystem. (However
streaming is not supported)
- [X] Update documentation
2024-07-24 09:53:58 -04:00
Vishnu Nandakumar
e271965d1e community: retrievers: added capability for using Product Quantization as one of the retriever. (#22424)
- [ ] **Community**: "Retrievers: Product Quantization"
- [X] This PR adds the Product Quantization feature to the retrievers in
the LangChain Community package. PQ is one of the fastest retrieval
methods if the embeddings are rich enough in context, thanks to
quantization and representation through centroids (see the sketch after
this list)
    - **Description:** Adding PQ as one of the retrievers
    - **Dependencies:** using the package nanopq for this PR
    - **Twitter handle:** vishnunkumar_


- [X] **Add tests and docs**: If you're adding a new integration, please
include
   - [X] Added unit tests for the same in the retrievers.
   - [] Will add an example notebook subsequently

- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/ -
done the same
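
As referenced above, a conceptual sketch of Product Quantization using the `nanopq` dependency (synthetic data; not the retriever's actual code):

```python
import nanopq
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 128)).astype(np.float32)  # document embeddings
query = rng.standard_normal(128).astype(np.float32)      # query embedding

pq = nanopq.PQ(M=8, Ks=256)             # 8 sub-spaces, 256 centroids each
pq.fit(X)                               # learn the codebooks
codes = pq.encode(X)                    # compress vectors to 8-byte codes
dists = pq.dtable(query).adist(codes)   # asymmetric distances to all documents
print(int(dists.argmin()))              # index of the closest document
```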

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-24 13:52:15 +00:00
stydxm
b9bea36dd4 community: fix typo in warning message (#24597)
- **Description:** 
  This PR fixes a small typo in a warning message
- **Issue:**

![](https://github.com/user-attachments/assets/5aa57724-26c5-49f6-8bc1-5a54bb67ed49)
There were double `Use` and double `instead`
2024-07-24 13:19:07 +00:00
cüre
da06d4d7af community: update finetuned model cost for 4o-mini (#24605)
- **Description:** adds the fine-tuned model price for gpt-4o-mini. Reference:
https://openai.com/api/pricing/
- **Issue:** -
- **Dependencies:** -
- **Twitter handle:** cureef
2024-07-24 13:17:26 +00:00
Philippe PRADOS
5f73c836a6 openai[small]: Add the new model: gpt-4o-mini (#24594) 2024-07-24 09:14:48 -04:00
Mateusz Szewczyk
597be7d501 docs: Update IBM docs about information to pass client into WatsonxLLM and WatsonxEmbeddings object. (#24602)
Thank you for contributing to LangChain!

- [x] **PR title**: Update IBM docs about information to pass client
into WatsonxLLM and WatsonxEmbeddings object.


- [x] **PR message**: 
- **Description:** Update IBM docs with information on how to pass a
client into the WatsonxLLM and WatsonxEmbeddings objects.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-07-24 09:12:13 -04:00
Jacob Lee
379803751e docs[patch]: Remove very old document comparison notebook (#24587) 2024-07-23 22:25:35 -07:00
ZhangShenao
ad18afc3ec community[patch]: Fix param spelling error in ElasticsearchChatMessageHistory (#24589)
Fix param spelling error in `ElasticsearchChatMessageHistory`
2024-07-23 19:29:42 -07:00
Isaac Francisco
464a525a5a [partner]: minor change to embeddings for Ollama (#24521) 2024-07-24 00:00:13 +00:00
Aayush Kataria
0f45ac4088 LangChain Community: VectorStores: Azure Cosmos DB Filtered Vector Search (#24087)

- This PR adds vector search filtering for Azure Cosmos DB Mongo vCore
and NoSQL.


2024-07-23 16:59:23 -07:00
Gareth
ac41c97d21 pinecone: Add embedding Inference Support (#24515)
**Description**

Add support for Pinecone hosted embedding models as
`PineconeEmbeddings`. Replacement for #22890

**Dependencies**
Add `aiohttp` to support async embedding calls against the REST API directly
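
A minimal usage sketch (assumes `PINECONE_API_KEY` is set; the model name is an assumption for illustration):

```python
from langchain_pinecone import PineconeEmbeddings

embeddings = PineconeEmbeddings(model="multilingual-e5-large")
vector = embeddings.embed_query("Hello, world!")
print(len(vector))
```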

- [x] **Add tests and docs**: If you're adding a new integration, please
include

Added `docs/docs/integrations/text_embedding/pinecone.ipynb`


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Twitter: `gdjdg17`

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-23 22:50:28 +00:00
ccurme
aaf788b7cb docs[patch]: fix chat model tabs in runnable-as-tool guide (#24580) 2024-07-23 18:36:01 -04:00
Bagatur
47ae06698f docs: update ChatModelTabs defaults (#24583) 2024-07-23 21:56:30 +00:00
Erick Friis
03881c6743 docs: fix hf embeddings install (#24577) 2024-07-23 21:03:30 +00:00
ccurme
2d6b0bf3e3 core[patch]: add to RunnableLambda docstring (#24575)
Explain behavior when function returns a runnable.
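
A small sketch of the behavior being documented, as I understand it: if the wrapped function returns a Runnable, that Runnable is itself invoked on the input.

```python
from langchain_core.runnables import RunnableLambda

add_one = RunnableLambda(lambda x: x + 1)


def route(x: int):
    # Returning a Runnable here means it gets invoked with `x` as well.
    return add_one


runnable = RunnableLambda(route)
print(runnable.invoke(1))  # -> 2
```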
2024-07-23 20:46:44 +00:00
Erick Friis
ee3955c68c docs: add tool calling for ollama (#24574) 2024-07-23 20:33:23 +00:00
Carlos André Antunes
325068bb53 community: Fix azure_openai.py (#24572)
In some lines the code tries to read a key that does not exist yet. In
these cases I changed the direct access to the dict.get() method
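
For illustration, a generic sketch of the pattern (names hypothetical):

```python
response_metadata = {}  # hypothetical dict that may not contain "model" yet

# Direct access raises KeyError when the key is missing:
# model_name = response_metadata["model"]

# dict.get() returns a default instead:
model_name = response_metadata.get("model", "")
print(repr(model_name))
```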


- [ x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-07-23 16:22:21 -04:00
Bagatur
bff6ca78a2 docs: duplicate how to link (#24569) 2024-07-23 18:52:05 +00:00
Nik Jmaeff
6878bc39b5 langchain: fix TrajectoryEvalChain.prep_inputs (#19959)
The previous implementation would never be called.


---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-23 18:37:39 +00:00
Bagatur
55e66aa40c langchain[patch]: init_chat_model support ChatBedrockConverse (#24564) 2024-07-23 11:07:38 -07:00
Bagatur
9b7db08184 experimental[patch]: Release 0.0.63 (#24563) 2024-07-23 16:28:37 +00:00
Bagatur
8691a5a37f community[patch]: Release 0.2.10 (#24560) 2024-07-23 09:24:57 -07:00
Bagatur
4919d5d6df langchain[patch]: Release 0.2.11 (#24559) 2024-07-23 09:18:44 -07:00
Bagatur
918e1c8a93 core[patch]: Release 0.2.23 (#24557) 2024-07-23 09:01:18 -07:00
Lance Martin
58def6e34d Add tool calling example to Ollama ntbk (#24522) 2024-07-23 15:58:54 +00:00
Leonid Ganeline
e787532479 langchain: globals fix (#21281)
Issue: functions from `globals`, like `get_debug`, are placed in the
__init__.py file. As a result, they aren't listed in the API Reference docs.
[See
this](https://langchain-9jq1kef7i-langchain.vercel.app/v0.2/docs/how_to/debugging/#set_debugtrue)
and [this broken
link](https://api.python.langchain.com/en/latest/globals/langchain.globals.set_debug.html).
Change: moved the code from __init__.py into the `globals.py` file and
removed the `globals` directory. Similar to #21266.
Note that `globals` in core is likewise implemented as a single file
rather than a folder.
2024-07-23 11:23:18 -04:00
Ben Chambers
e80b0932ee community[patch]: small fixes to link extractors (#24528)
- **Description:** small fixes to imports / types in the link extraction
work

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-23 14:28:06 +00:00
Morteza Hosseini
9e06991aae community[patch]: Update URL to the 2markdown API (#24546)
Update the URL to Markdown endpoint.

API information is available here: https://2markdown.com/docs#url2md
2024-07-23 14:27:55 +00:00
ZhangShenao
a14e02ab33 core[patch]: Fix word spelling error in globals.py (#24532)
Fix word spelling error in `globals.py`

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-23 14:27:16 +00:00
maang-h
378db2e1a5 docs: Add RedisChatMessageHistory docstrings (#24548)
- **Description:** Add `RedisChatMessageHistory ` rich docstrings.
- **Issue:** the issue #21983

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-23 14:23:46 +00:00
ccurme
a197a8e184 openai[patch]: move test (#24552)
No-override tests (https://github.com/langchain-ai/langchain/pull/24407)
include a condition that integrations do not implement additional tests.
2024-07-23 10:22:22 -04:00
Eugene Yurtsev
0bb54ab9f0 CI: Temporarily disable min version checking on pull request (#24551)
Short term to fix CI
2024-07-23 14:12:08 +00:00
Eugene Yurtsev
f47b4edcc2 standard-test: Fix typo in skipif for chat model integration tests (#24553) 2024-07-23 10:11:01 -04:00
Jesse Wright
837a3d400b chore(docs): SQARQL -> SPARQL typo fix (#24536)
nit picky typo fix
2024-07-23 13:39:34 +00:00
Eugene Yurtsev
20b72a044c standard-tests: Add BaseModel variations tests to with_structured_output (#24527)
After this standard tests will test with the following combinations:

1. pydantic.BaseModel
2. pydantic.v1.BaseModel

If run within a matrix, it'll cover both the pydantic.BaseModel
originating from pydantic 1 and the one defined in pydantic 2.
2024-07-23 09:01:26 -04:00
Bagatur
70c71efcab core[patch]: merge_content fix (#24526) 2024-07-22 22:20:22 -07:00
Ben Chambers
a5a3d28776 community[patch]: Remove targets_table from C* GraphVectorStore (#24502)
- **Description:** Remove the unnecessary `targets_table` parameter
2024-07-22 22:09:36 -04:00
Alexander Golodkov
2a70a07aad community[minor]: added new document loaders based on dedoc library (#24303)
### Description
This pull request added new document loaders to load documents of
various formats using [Dedoc](https://github.com/ispras/dedoc):
  - `DedocFileLoader` (determine file types automatically and parse)
  - `DedocPDFLoader` (for `PDF` and images parsing)
- `DedocAPIFileLoader` (determine file types automatically and parse
using Dedoc API without library installation)

[Dedoc](https://dedoc.readthedocs.io) is an open-source library/service
that extracts texts, tables, attached files and document structure
(e.g., titles, list items, etc.) from files of various formats. The
library is actively developed and maintained by a group of developers.

`Dedoc` supports `DOCX`, `XLSX`, `PPTX`, `EML`, `HTML`, `PDF`, images
and more.
Full list of supported formats can be found
[here](https://dedoc.readthedocs.io/en/latest/#id1).
For `PDF` documents, `Dedoc` can determine textual layer correctness
and split the document into paragraphs.
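
A minimal usage sketch, assuming the optional `dedoc` dependency is installed and a local file is available:

```python
from langchain_community.document_loaders import DedocFileLoader

loader = DedocFileLoader("example.docx")  # file type is detected automatically
docs = loader.load()
print(docs[0].metadata)
```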


### Issue
This pull request extends the variety of document loaders supported by
`langchain_community`, allowing users to choose the most suitable option
for raw document parsing.

### Dependencies
The PR added a new (optional) dependency `dedoc>=2.2.5` ([library
documentation](https://dedoc.readthedocs.io)) to the
`extended_testing_deps.txt`

### Twitter handle
None

### Add tests and docs
1. Test for the integration:
`libs/community/tests/integration_tests/document_loaders/test_dedoc.py`
2. Example notebook:
`docs/docs/integrations/document_loaders/dedoc.ipynb`
3. Information about the library:
`docs/docs/integrations/providers/dedoc.mdx`

### Lint and test

Done locally:

  - `make format`
  - `make lint`
  - `make integration_tests`
  - `make docs_build` (from the project root)

---------

Co-authored-by: Nasty <bogatenkova.anastasiya@mail.ru>
2024-07-23 02:04:53 +00:00
Ben Chambers
5ac936a284 community[minor]: add document transformer for extracting links (#24186)
- **Description:** Add a DocumentTransformer for executing one or more
`LinkExtractor`s and adding the extracted links to each document.
- **Issue:** n/a
- **Dependencies:** none

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-07-22 22:01:21 -04:00
Jacob Lee
3c4652c906 docs[patch]: Hide OllamaFunctions now that Ollama supports tool calling (#24523) 2024-07-22 17:56:51 -07:00
Erick Friis
2c6b9e8771 standard-tests: add override check (#24407) 2024-07-22 23:38:01 +00:00
Nithish Raghunandanan
1639ccfd15 couchbase: [patch] Return chat message history in order (#24498)
**Description:** Fixes an issue where the chat message history was not
returned in order. It is now returned in order based on timestamps.

- [x] **Add tests and docs**: Updated the tests to check the order
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Nithish Raghunandanan <nithishr@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-22 23:30:29 +00:00
C K Ashby
ab036c1a4c docs: Update .run() to .invoke() (#24520) 2024-07-22 14:21:33 -07:00
Erick Friis
3dce2e1d35 all: add release notes to pypi (#24519) 2024-07-22 13:59:13 -07:00
Bagatur
c48e99e7f2 docs: fix sql db note (#24505) 2024-07-22 13:30:29 -07:00
Bagatur
8a140ee77c core[patch]: don't serialize BasePromptTemplate.input_types (#24516)
Candidate fix for #24513
2024-07-22 13:30:16 -07:00
MarkYQJ
df357f82ca ignore the first turn to apply "history" mechanism (#14118)
This will generate a meaningless string "system: " when building the
condense question; this increases the probability of producing an
improper condense question and misunderstanding the user's question.
Below is a case:
- Original Question: Can you explain the arguments of Meilisearch?
- Condense Question
  - What are the benefits of using Meilisearch? (by CodeLlama)
  - What are the reasons for using Meilisearch? (by GPT-4)

The condense questions (no matter whether from CodeLlama or GPT-4) are
different from the original one.

By checking the content of each dialogue turn, the history string is
generated only when the dialogue content is not empty.
Since there is nothing before the first turn, the "history" mechanism
will be ignored at the very first turn.

Doing so, the condense question will be "What are the arguments for
using Meilisearch?".
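
A hypothetical sketch of the idea (not the chain's actual code): build the history string only from turns with non-empty content, so the first turn contributes no placeholder.

```python
def build_history(turns: list[tuple[str, str]]) -> str:
    lines = []
    for role, content in turns:
        if not content:  # skip empty turns, e.g. the system slot before the first exchange
            continue
        lines.append(f"{role}: {content}")
    return "\n".join(lines)


# Before the fix, the empty first turn produced a dangling "system: " line.
print(repr(build_history([("system", ""), ("human", "Can you explain the arguments of Meilisearch?")])))
```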


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-22 20:11:17 +00:00
Bagatur
236e957abb core,groq,openai,mistralai,robocorp,fireworks,anthropic[patch]: Update BaseModel subclass and instance checks to handle both v1 and proper namespaces (#24417)
After this PR chat models will correctly handle pydantic 2 with
bind_tools and with_structured_output.


```python
import pydantic
print(pydantic.__version__)
```
2.8.2

```python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class Add(BaseModel):
    x: int
    y: int

model = ChatOpenAI().bind_tools([Add])
print(model.invoke('2 + 5').tool_calls)

model = ChatOpenAI().with_structured_output(Add)
print(type(model.invoke('2 + 5')))
```

```
[{'name': 'Add', 'args': {'x': 2, 'y': 5}, 'id': 'call_PNUFa4pdfNOYXxIMHc6ps2Do', 'type': 'tool_call'}]
<class '__main__.Add'>
```


```python
from langchain_openai import ChatOpenAI
from pydantic.v1 import BaseModel, Field

class Add(BaseModel):
    x: int
    y: int

model = ChatOpenAI().bind_tools([Add])
print(model.invoke('2 + 5').tool_calls)

model = ChatOpenAI().with_structured_output(Add)
print(type(model.invoke('2 + 5')))
```

```python
[{'name': 'Add', 'args': {'x': 2, 'y': 5}, 'id': 'call_hhiHYP441cp14TtrHKx3Upg0', 'type': 'tool_call'}]
<class '__main__.Add'>
```

Addresses issues: https://github.com/langchain-ai/langchain/issues/22782

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-22 20:07:39 +00:00
C K Ashby
199e64d372 Please spell Lex's name correctly Fridman (#24517)
https://www.youtube.com/watch?v=ZIyB9e_7a4c

2024-07-22 19:38:32 +00:00
Erick Friis
1f01c0fd98 infra: remove core from min version pr testing (#24507)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-22 17:46:15 +00:00
Naka Masato
884f76e05a fix: load google credentials properly in GoogleDriveLoader (#12871)
- **Description:** 
- Fix #12870: set the scope in the `default` func (ref:
https://google-auth.readthedocs.io/en/master/reference/google.auth.html);
see the sketch after this list
- Moved the code that loads default credentials to the bottom for
clarity of the logic
- Added a docstring and comment for each credential-loading path
- **Issue:** https://github.com/langchain-ai/langchain/issues/12870
- **Dependencies:** no dependencies change
- **Tag maintainer:** for a quicker response, tag the relevant
maintainer (see below),
- **Twitter handle:** @gymnstcs
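
As referenced above, a sketch of the scope fix (the scope value is an assumption for illustration):

```python
import google.auth

# Pass scopes when falling back to Application Default Credentials,
# instead of calling google.auth.default() without any scope.
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/drive.readonly"]
)
```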


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-22 17:43:33 +00:00
Erick Friis
a45337ea07 ollama: release 0.1.0 (#24510) 2024-07-22 10:35:26 -07:00
Isaac Francisco
1318d534af [docs]: minor react change (#24509) 2024-07-22 10:25:01 -07:00
Jorge Piedrahita Ortiz
10e3982b59 community: sambanova integration minor changes (#24503)
- Minor changes in the SambaNova LLM integration
  - default API
  - docstrings
- minor changes in docs
2024-07-22 17:06:35 +00:00
515 changed files with 35919 additions and 19617 deletions

View File

@@ -1,7 +1,6 @@
import glob
import json
import os
import re
import sys
import tomllib
from collections import defaultdict
@@ -86,6 +85,11 @@ def add_dependents(dirs_to_eval: Set[str], dependents: dict) -> List[str]:
def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
if dir_ == "libs/core":
return [
{"working-directory": dir_, "python-version": f"3.{v}"}
for v in range(8, 13)
]
min_python = "3.8"
max_python = "3.12"
@@ -95,6 +99,15 @@ def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
# declare deps in funny way
max_python = "3.11"
if dir_ in ["libs/community", "libs/langchain"] and job == "extended-tests":
# community extended test resolution in 3.12 is slow
# even in uv
max_python = "3.11"
if dir_ == "libs/community" and job == "compile-integration-tests":
# community integration deps are slow in 3.12
max_python = "3.11"
return [
{"working-directory": dir_, "python-version": min_python},
{"working-directory": dir_, "python-version": max_python},

View File

@@ -17,6 +17,8 @@ MIN_VERSION_LIBS = [
"SQLAlchemy",
]
SKIP_IF_PULL_REQUEST = ["langchain-core"]
def get_min_version(version: str) -> str:
# base regex for x.x.x with cases for rc/post/etc
@@ -43,7 +45,7 @@ def get_min_version(version: str) -> str:
raise ValueError(f"Unrecognized version format: {version}")
def get_min_version_from_toml(toml_path: str):
def get_min_version_from_toml(toml_path: str, versions_for: str):
# Parse the TOML file
with open(toml_path, "rb") as file:
toml_data = tomllib.load(file)
@@ -56,6 +58,10 @@ def get_min_version_from_toml(toml_path: str):
# Iterate over the libs in MIN_VERSION_LIBS
for lib in MIN_VERSION_LIBS:
if versions_for == "pull_request" and lib in SKIP_IF_PULL_REQUEST:
# some libs only get checked on release because of simultaneous
# changes
continue
# Check if the lib is present in the dependencies
if lib in dependencies:
# Get the version string
@@ -76,8 +82,10 @@ def get_min_version_from_toml(toml_path: str):
if __name__ == "__main__":
# Get the TOML file path from the command line argument
toml_file = sys.argv[1]
versions_for = sys.argv[2]
assert versions_for in ["release", "pull_request"]
# Call the function to get the minimum versions
min_versions = get_min_version_from_toml(toml_file)
min_versions = get_min_version_from_toml(toml_file, versions_for)
print(" ".join([f"{lib}=={version}" for lib, version in min_versions.items()]))

View File

@@ -231,7 +231,7 @@ jobs:
id: min-version
run: |
poetry run pip install packaging
min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml)"
min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml release)"
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
@@ -290,6 +290,7 @@ jobs:
VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
UNSTRUCTURED_API_KEY: ${{ secrets.UNSTRUCTURED_API_KEY }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}

View File

@@ -71,15 +71,16 @@ jobs:
id: min-version
run: |
poetry run pip install packaging tomli
min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml)"
min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml pull_request)"
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
- name: Run unit tests with minimum dependency versions
if: ${{ steps.min-version.outputs.min-versions != '' }}
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
poetry run pip install --force-reinstall $MIN_VERSIONS --editable .
make tests
working-directory: ${{ inputs.working-directory }}
# Temporarily disabled until we can get the minimum versions working
# - name: Run unit tests with minimum dependency versions
# if: ${{ steps.min-version.outputs.min-versions != '' }}
# env:
# MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
# run: |
# poetry run pip install --force-reinstall $MIN_VERSIONS --editable .
# make tests
# working-directory: ${{ inputs.working-directory }}

View File

@@ -7,7 +7,6 @@
[![PyPI - License](https://img.shields.io/pypi/l/langchain-core?style=flat-square)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-core?style=flat-square)](https://pypistats.org/packages/langchain-core)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=flat-square)](https://star-history.com/#langchain-ai/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain?style=flat-square)](https://libraries.io/github/langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/issues)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)

View File

@@ -36,6 +36,7 @@ Notebook | Description
[llm_symbolic_math.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_symbolic_math.ipynb) | Solve algebraic equations with the help of llms (language learning models) and sympy, a python library for symbolic mathematics.
[meta_prompt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/meta_prompt.ipynb) | Implement the meta-prompt concept, which is a method for building self-improving agents that reflect on their own performance and modify their instructions accordingly.
[multi_modal_output_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multi_modal_output_agent.ipynb) | Generate multi-modal outputs, specifically images and text.
[multi_modal_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multi_modal_RAG_vdms.ipynb) | Perform retrieval-augmented generation (rag) on documents including text and images, using unstructured for parsing, Intel's Visual Data Management System (VDMS) as the vectorstore, and chains.
[multi_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multi_player_dnd.ipynb) | Simulate multi-player dungeons & dragons games, with a custom function determining the speaking schedule of the agents.
[multiagent_authoritarian.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_authoritarian.ipynb) | Implement a multi-agent simulation where a privileged agent controls the conversation, including deciding who speaks and when the conversation ends, in the context of a simulated news network.
[multiagent_bidding.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_bidding.ipynb) | Implement a multi-agent simulation where agents bid to speak, with the highest bidder speaking next, demonstrated through a fictitious presidential debate example.

View File

@@ -18,26 +18,7 @@
"* Use of multimodal embeddings (such as [CLIP](https://openai.com/research/clip)) to embed images and text\n",
"* Use of [VDMS](https://github.com/IntelLabs/vdms/blob/master/README.md) as a vector store with support for multi-modal\n",
"* Retrieval of both images and text using similarity search\n",
"* Passing raw images and text chunks to a multimodal LLM for answer synthesis \n",
"\n",
"\n",
"## Packages\n",
"\n",
"For `unstructured`, you will also need `poppler` ([installation instructions](https://pdf2image.readthedocs.io/en/latest/installation.html)) and `tesseract` ([installation instructions](https://tesseract-ocr.github.io/tessdoc/Installation.html)) in your system."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "febbc459-ebba-4c1a-a52b-fed7731593f8",
"metadata": {},
"outputs": [],
"source": [
"# (newest versions required for multi-modal)\n",
"! pip install --quiet -U vdms langchain-experimental\n",
"\n",
"# lock to 0.10.19 due to a persistent bug in more recent versions\n",
"! pip install --quiet pdf2image \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml open_clip_torch"
"* Passing raw images and text chunks to a multimodal LLM for answer synthesis "
]
},
{
@@ -53,7 +34,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 1,
"id": "5f483872",
"metadata": {},
"outputs": [
@@ -61,8 +42,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"docker: Error response from daemon: Conflict. The container name \"/vdms_rag_nb\" is already in use by container \"0c19ed281463ac10d7efe07eb815643e3e534ddf24844357039453ad2b0c27e8\". You have to remove (or rename) that container to be able to reuse that name.\n",
"See 'docker run --help'.\n"
"a1b9206b08ef626e15b356bf9e031171f7c7eb8f956a2733f196f0109246fe2b\n"
]
}
],
@@ -75,9 +55,32 @@
"vdms_client = VDMS_Client(port=55559)"
]
},
{
"cell_type": "markdown",
"id": "2498a0a1",
"metadata": {},
"source": [
"## Packages\n",
"\n",
"For `unstructured`, you will also need `poppler` ([installation instructions](https://pdf2image.readthedocs.io/en/latest/installation.html)) and `tesseract` ([installation instructions](https://tesseract-ocr.github.io/tessdoc/Installation.html)) in your system."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "febbc459-ebba-4c1a-a52b-fed7731593f8",
"metadata": {},
"outputs": [],
"source": [
"! pip install --quiet -U vdms langchain-experimental\n",
"\n",
"# lock to 0.10.19 due to a persistent bug in more recent versions\n",
"! pip install --quiet pdf2image \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml open_clip_torch"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "78ac6543",
"metadata": {},
"outputs": [],
@@ -95,14 +98,9 @@
"\n",
"### Partition PDF text and images\n",
" \n",
"Let's look at an example pdf containing interesting images.\n",
"Let's use famous photographs from the PDF version of Library of Congress Magazine in this example.\n",
"\n",
"Famous photographs from library of congress:\n",
"\n",
"* https://www.loc.gov/lcm/pdf/LCM_2020_1112.pdf\n",
"* We'll use this as an example below\n",
"\n",
"We can use `partition_pdf` below from [Unstructured](https://unstructured-io.github.io/unstructured/introduction.html#key-concepts) to extract text and images."
"We can use `partition_pdf` from [Unstructured](https://unstructured-io.github.io/unstructured/introduction.html#key-concepts) to extract text and images."
]
},
{
@@ -116,8 +114,8 @@
"\n",
"import requests\n",
"\n",
"# Folder with pdf and extracted images\n",
"datapath = Path(\"./multimodal_files\").resolve()\n",
"# Folder to store pdf and extracted images\n",
"datapath = Path(\"./data/multimodal_files\").resolve()\n",
"datapath.mkdir(parents=True, exist_ok=True)\n",
"\n",
"pdf_url = \"https://www.loc.gov/lcm/pdf/LCM_2020_1112.pdf\"\n",
@@ -174,14 +172,8 @@
"source": [
"## Multi-modal embeddings with our document\n",
"\n",
"We will use [OpenClip multimodal embeddings](https://python.langchain.com/docs/integrations/text_embedding/open_clip).\n",
"\n",
"We use a larger model for better performance (set in `langchain_experimental.open_clip.py`).\n",
"\n",
"```\n",
"model_name = \"ViT-g-14\"\n",
"checkpoint = \"laion2b_s34b_b88k\"\n",
"```"
"In this section, we initialize the VDMS vector store for both text and images. For better performance, we use model `ViT-g-14` from [OpenClip multimodal embeddings](https://python.langchain.com/docs/integrations/text_embedding/open_clip).\n",
"The images are stored as base64 encoded strings with `vectorstore.add_images`.\n"
]
},
{
@@ -200,9 +192,7 @@
"vectorstore = VDMS(\n",
" client=vdms_client,\n",
" collection_name=\"mm_rag_clip_photos\",\n",
" embedding_function=OpenCLIPEmbeddings(\n",
" model_name=\"ViT-g-14\", checkpoint=\"laion2b_s34b_b88k\"\n",
" ),\n",
" embedding=OpenCLIPEmbeddings(model_name=\"ViT-g-14\", checkpoint=\"laion2b_s34b_b88k\"),\n",
")\n",
"\n",
"# Get image URIs with .jpg extension only\n",
@@ -233,7 +223,7 @@
"source": [
"## RAG\n",
"\n",
"`vectorstore.add_images` will store / retrieve images as base64 encoded strings."
"Here we define helper functions for image results."
]
},
{
@@ -392,7 +382,8 @@
"id": "1566096d-97c2-4ddc-ba4a-6ef88c525e4e",
"metadata": {},
"source": [
"## Test retrieval and run RAG"
"## Test retrieval and run RAG\n",
"Now let's query for a `woman with children` and retrieve the top results."
]
},
{
@@ -452,6 +443,14 @@
" print(doc.page_content)"
]
},
{
"cell_type": "markdown",
"id": "15e9b54d",
"metadata": {},
"source": [
"Now let's use the `multi_modal_rag_chain` to process the same query and display the response."
]
},
{
"cell_type": "code",
"execution_count": 11,
@@ -462,10 +461,10 @@
"name": "stdout",
"output_type": "stream",
"text": [
"1. Detailed description of the visual elements in the image: The image features a woman with children, likely a mother and her family, standing together outside. They appear to be poor or struggling financially, as indicated by their attire and surroundings.\n",
"2. Historical and cultural context of the image: The photo was taken in 1936 during the Great Depression, when many families struggled to make ends meet. Dorothea Lange, a renowned American photographer, took this iconic photograph that became an emblem of poverty and hardship experienced by many Americans at that time.\n",
"3. Interpretation of the image's symbolism and meaning: The image conveys a sense of unity and resilience despite adversity. The woman and her children are standing together, displaying their strength as a family unit in the face of economic challenges. The photograph also serves as a reminder of the importance of empathy and support for those who are struggling.\n",
"4. Connections between the image and the related text: The text provided offers additional context about the woman in the photo, her background, and her feelings towards the photograph. It highlights the historical backdrop of the Great Depression and emphasizes the significance of this particular image as a representation of that time period.\n"
" The image depicts a woman with several children. The woman appears to be of Cherokee heritage, as suggested by the text provided. The image is described as having been initially regretted by the subject, Florence Owens Thompson, due to her feeling that it did not accurately represent her leadership qualities.\n",
"The historical and cultural context of the image is tied to the Great Depression and the Dust Bowl, both of which affected the Cherokee people in Oklahoma. The photograph was taken during this period, and its subject, Florence Owens Thompson, was a leader within her community who worked tirelessly to help those affected by these crises.\n",
"The image's symbolism and meaning can be interpreted as a representation of resilience and strength in the face of adversity. The woman is depicted with multiple children, which could signify her role as a caregiver and protector during difficult times.\n",
"Connections between the image and the related text include Florence Owens Thompson's leadership qualities and her regretted feelings about the photograph. Additionally, the mention of Dorothea Lange, the photographer who took this photo, ties the image to its historical context and the broader narrative of the Great Depression and Dust Bowl in Oklahoma. \n"
]
}
],
@@ -492,14 +491,6 @@
"source": [
"! docker kill vdms_rag_nb"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8ba652da",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -518,7 +509,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -38,8 +38,14 @@ generate-files:
$(PYTHON) scripts/model_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/tool_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/document_loader_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/kv_store_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/partner_pkg_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/copy_templates.py $(INTERMEDIATE_DIR)
wget -q https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O $(INTERMEDIATE_DIR)/langserve.md
@@ -63,10 +69,13 @@ render:
md-sync:
rsync -avm --include="*/" --include="*.mdx" --include="*.md" --include="*.png" --include="*/_category_.yml" --exclude="*" $(INTERMEDIATE_DIR)/ $(OUTPUT_NEW_DOCS_DIR)
append-related:
$(PYTHON) scripts/append_related_links.py $(OUTPUT_NEW_DOCS_DIR)
generate-references:
$(PYTHON) scripts/generate_api_reference_links.py --docs_dir $(OUTPUT_NEW_DOCS_DIR)
build: install-py-deps generate-files copy-infra render md-sync
build: install-py-deps generate-files copy-infra render md-sync append-related
vercel-build: install-vercel-deps build generate-references
rm -rf docs

View File

@@ -90,7 +90,7 @@ LCEL aims to provide consistency around behavior and customization over legacy s
`ConversationalRetrievalChain`. Many of these legacy chains hide important details like prompts, and as a wider variety
of viable models emerge, customization has become more and more important.
If you are currently using one of these legacy chains, please see [this guide for guidance on how to migrate](/docs/how_to/migrate_chains/).
If you are currently using one of these legacy chains, please see [this guide for guidance on how to migrate](/docs/versions/migrating_chains).
For guides on how to do specific tasks with LCEL, check out [the relevant how-to guides](/docs/how_to/#langchain-expression-language-lcel).
@@ -165,7 +165,7 @@ Some important things to note:
ChatModels also accept other parameters that are specific to that integration. To find all the parameters supported by a ChatModel head to the API reference for that model.
:::important
**Tool Calling** Some chat models have been fine-tuned for tool calling and provide a dedicated API for tool calling.
Some chat models have been fine-tuned for **tool calling** and provide a dedicated API for it.
Generally, such models are better at tool calling than non-fine-tuned models, and are recommended for use cases that require tool calling.
Please see the [tool calling section](/docs/concepts/#functiontool-calling) for more information.
:::
@@ -255,7 +255,7 @@ This represents the result of a tool call. In addition to `role` and `content`,
#### (Legacy) FunctionMessage
This is a legacy message type, corresponding to OpenAI's legacy function-calling API. ToolMessage should be used instead to correspond to the updated tool-calling API.
This is a legacy message type, corresponding to OpenAI's legacy function-calling API. `ToolMessage` should be used instead to correspond to the updated tool-calling API.
This represents the result of a function call. In addition to `role` and `content`, this message has a `name` parameter which conveys the name of the function that was called to produce this result.
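When constructing these results yourself, a minimal sketch of the recommended `ToolMessage` looks like the following (the `tool_call_id` value is illustrative and would normally come from the model's tool call):

```python
from langchain_core.messages import ToolMessage

# The id ties the result back to the tool call that produced it;
# "call_abc123" is a placeholder for a real id emitted by the model.
tool_message = ToolMessage(content="42", tool_call_id="call_abc123")
```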
@@ -498,6 +498,30 @@ Retrievers accept a string query as input and return a list of Document's as out
For specifics on how to use retrievers, see the [relevant how-to guides here](/docs/how_to/#retrievers).
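To make that contract (string in, list of `Document`s out) concrete, here is a minimal sketch of a toy retriever built by subclassing `BaseRetriever`; the class and its substring-matching logic are purely illustrative:

```python
from typing import List

from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever


class ToyRetriever(BaseRetriever):
    """Illustrative retriever that returns documents containing the query string."""

    documents: List[Document]

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        return [d for d in self.documents if query.lower() in d.page_content.lower()]


retriever = ToyRetriever(documents=[Document(page_content="Retrievers return documents")])
retriever.invoke("documents")  # -> [Document(page_content='Retrievers return documents')]
```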
### Key-value stores
For some techniques, such as [indexing and retrieval with multiple vectors per document](/docs/how_to/multi_vector/) or
[caching embeddings](/docs/how_to/caching_embeddings/), having a form of key-value (KV) storage is helpful.
LangChain includes a [`BaseStore`](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.BaseStore.html) interface,
which allows for storage of arbitrary data. However, LangChain components that require KV-storage accept a
more specific `BaseStore[str, bytes]` instance that stores binary data (referred to as a `ByteStore`), and internally take care of
encoding and decoding data for their specific needs.
This means that as a user, you only need to think about one type of store rather than different ones for different types of data.
#### Interface
All [`BaseStores`](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.BaseStore.html) support the following interface. Note that the interface allows
for modifying **multiple** key-value pairs at once:
- `mget(key: Sequence[str]) -> List[Optional[bytes]]`: get the contents of multiple keys, returning `None` if the key does not exist
- `mset(key_value_pairs: Sequence[Tuple[str, bytes]]) -> None`: set the contents of multiple keys
- `mdelete(key: Sequence[str]) -> None`: delete multiple keys
- `yield_keys(prefix: Optional[str] = None) -> Iterator[str]`: yield all keys in the store, optionally filtering by a prefix
For key-value store implementations, see [this section](/docs/integrations/stores/).
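A minimal sketch of this interface, assuming the in-memory implementation `InMemoryByteStore` from `langchain_core.stores`:

```python
from langchain_core.stores import InMemoryByteStore

store = InMemoryByteStore()

# The interface operates on multiple key-value pairs at once.
store.mset([("k1", b"v1"), ("k2", b"v2")])
store.mget(["k1", "k2", "missing"])  # -> [b'v1', b'v2', None]
list(store.yield_keys(prefix="k"))   # -> ['k1', 'k2']
store.mdelete(["k1"])
```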
### Tools
<span data-heading-keywords="tool,tools"></span>
@@ -826,6 +850,61 @@ units (like words or subwords) that carry meaning, rather than individual charac
to learn and understand the structure of the language, including grammar and context.
Furthermore, using tokens can also improve efficiency, since the model processes fewer units of text compared to character-level processing.
### Function/tool calling
:::info
We use the term tool calling interchangeably with function calling. Although
function calling is sometimes meant to refer to invocations of a single function,
we treat all models as though they can return multiple tool or function calls in
each message.
:::
Tool calling allows a [chat model](/docs/concepts/#chat-models) to respond to a given prompt by generating output that
matches a user-defined schema.
While the name implies that the model is performing
some action, this is actually not the case! The model only generates the arguments to a tool, and actually running the tool (or not) is up to the user.
One common example where you **wouldn't** want to call a function with the generated arguments
is if you want to [extract structured output matching some schema](/docs/concepts/#structured-output)
from unstructured text. You would give the model an "extraction" tool that takes
parameters matching the desired schema, then treat the generated output as your final
result.
![Diagram of a tool call by a chat model](/img/tool_call.png)
Tool calling is not universal, but is supported by many popular LLM providers, including [Anthropic](/docs/integrations/chat/anthropic/),
[Cohere](/docs/integrations/chat/cohere/), [Google](/docs/integrations/chat/google_vertex_ai_palm/),
[Mistral](/docs/integrations/chat/mistralai/), [OpenAI](/docs/integrations/chat/openai/), and even for locally-running models via [Ollama](/docs/integrations/chat/ollama/).
LangChain provides a standardized interface for tool calling that is consistent across different models.
The standard interface consists of:
* `ChatModel.bind_tools()`: a method for specifying which tools are available for a model to call. This method accepts [LangChain tools](/docs/concepts/#tools) as well as [Pydantic](https://pydantic.dev/) objects.
* `AIMessage.tool_calls`: an attribute on the `AIMessage` returned from the model for accessing the tool calls requested by the model.
#### Tool usage
After the model generates tool calls, you can invoke the corresponding tools with the generated arguments, then pass the results back to the model.
LangChain provides the [`Tool`](/docs/concepts/#tools) abstraction to help you handle this.
The general flow is this:
1. Generate tool calls with a chat model in response to a query.
2. Invoke the appropriate tools using the generated tool call as arguments.
3. Format the result of the tool invocations as [`ToolMessages`](/docs/concepts/#toolmessage).
4. Pass the entire list of messages back to the model so that it can generate a final answer (or call more tools).
![Diagram of a complete tool calling flow](/img/tool_calling_flow.png)
This is how tool calling [agents](/docs/concepts/#agents) perform tasks and answer queries.
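Putting the four steps together, here is a minimal sketch; the `multiply` tool and the model name are illustrative, and an OpenAI API key is assumed to be configured:

```python
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([multiply])

messages = [HumanMessage("What is 6 times 7?")]
ai_msg = llm.invoke(messages)        # 1. the model emits tool calls
messages.append(ai_msg)

for tool_call in ai_msg.tool_calls:  # 2./3. run each tool; on recent langchain-core
    messages.append(multiply.invoke(tool_call))  # versions this returns a ToolMessage

final_answer = llm.invoke(messages)  # 4. the model answers using the tool results
```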
Check out some more focused guides below:
- [How to use chat models to call tools](/docs/how_to/tool_calling/)
- [How to pass tool outputs to chat models](/docs/how_to/tool_results_pass_to_model/)
- [Building an agent with LangGraph](https://langchain-ai.github.io/langgraph/tutorials/introduction/)
### Structured output
LLMs are capable of generating arbitrary text. This enables the model to respond appropriately to a wide
@@ -958,48 +1037,48 @@ chain.invoke({ "question": "What is the powerhouse of the cell?" })
For a full list of model providers that support JSON mode, see [this table](/docs/integrations/chat/#advanced-features).
#### Function/tool calling
#### Tool calling {#structured-output-tool-calling}
:::info
We use the term tool calling interchangeably with function calling. Although
function calling is sometimes meant to refer to invocations of a single function,
we treat all models as though they can return multiple tool or function calls in
each message
:::
For models that support it, [tool calling](/docs/concepts/#functiontool-calling) can be very convenient for structured output. It removes the
guesswork around how best to prompt schemas in favor of a built-in model feature.
Tool calling allows a model to respond to a given prompt by generating output that
matches a user-defined schema. While the name implies that the model is performing
some action, this is actually not the case! The model is coming up with the
arguments to a tool, and actually running the tool (or not) is up to the user -
for example, if you want to [extract output matching some schema](/docs/tutorials/extraction)
from unstructured text, you could give the model an "extraction" tool that takes
parameters matching the desired schema, then treat the generated output as your final
result.
It works by first binding the desired schema either directly or via a [LangChain tool](/docs/concepts/#tools) to a
[chat model](/docs/concepts/#chat-models) using the `.bind_tools()` method. The model will then generate an `AIMessage` containing
a `tool_calls` field containing `args` that match the desired shape.
For models that support it, tool calling can be very convenient. It removes the
guesswork around how best to prompt schemas in favor of a built-in model feature. It can also
more naturally support agentic flows, since you can just pass multiple tool schemas instead
of fiddling with enums or unions.
There are several acceptable formats you can use to bind tools to a model in LangChain. Here's one example:
Many LLM providers, including [Anthropic](https://www.anthropic.com/),
[Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai),
[Mistral](https://mistral.ai/), [OpenAI](https://openai.com/), and others,
support variants of a tool calling feature. These features typically allow requests
to the LLM to include available tools and their schemas, and for responses to include
calls to these tools. For instance, given a search engine tool, an LLM might handle a
query by first issuing a call to the search engine. The system calling the LLM can
receive the tool call, execute it, and return the output to the LLM to inform its
response. LangChain includes a suite of [built-in tools](/docs/integrations/tools/)
and supports several methods for defining your own [custom tools](/docs/how_to/custom_tools).
```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
LangChain provides a standardized interface for tool calling that is consistent across different models.
class ResponseFormatter(BaseModel):
"""Always use this tool to structure your response to the user."""
The standard interface consists of:
answer: str = Field(description="The answer to the user's question")
followup_question: str = Field(description="A followup question the user could ask")
* `ChatModel.bind_tools()`: a method for specifying which tools are available for a model to call. This method accepts [LangChain tools](/docs/concepts/#tools) here.
* `AIMessage.tool_calls`: an attribute on the `AIMessage` returned from the model for accessing the tool calls requested by the model.
model = ChatOpenAI(
model="gpt-4o",
temperature=0,
)
The following how-to guides are good practical resources for using function/tool calling:
model_with_tools = model.bind_tools([ResponseFormatter])
ai_msg = model_with_tools.invoke("What is the powerhouse of the cell?")
ai_msg.tool_calls[0]["args"]
```
```
{'answer': "The powerhouse of the cell is the mitochondrion. It generates most of the cell's supply of adenosine triphosphate (ATP), which is used as a source of chemical energy.",
'followup_question': 'How do mitochondria generate ATP?'}
```
Tool calling is a generally consistent way to get a model to generate structured output, and is the default technique
used for the [`.with_structured_output()`](/docs/concepts/#with_structured_output) method when a model supports it.
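As a short sketch reusing the `ResponseFormatter` schema and `model` from the example above, the same schema can be passed to `.with_structured_output()` to get a parsed object back directly:

```python
# Reuses `model` and `ResponseFormatter` defined in the previous example.
structured_llm = model.with_structured_output(ResponseFormatter)

structured_llm.invoke("What is the powerhouse of the cell?")
# -> ResponseFormatter(answer='...', followup_question='...')
```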
The following how-to guides are good practical resources for using function/tool calling for structured output:
- [How to return structured data from an LLM](/docs/how_to/structured_output/)
- [How to use a model to call tools](/docs/how_to/tool_calling)

Binary file not shown.

View File

@@ -0,0 +1,146 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "dcf87b32",
"metadata": {},
"source": [
"# How to handle rate limits\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [LLMs](/docs/concepts/#llms)\n",
":::\n",
"\n",
"\n",
"You may find yourself in a situation where you are getting rate limited by the model provider API because you're making too many requests.\n",
"\n",
"For example, this might happen if you are running many parallel queries to benchmark the chat model on a test dataset.\n",
"\n",
"If you are facing such a situation, you can use a rate limiter to help match the rate at which you're making request to the rate allowed\n",
"by the API.\n",
"\n",
":::info Requires ``langchain-core >= 0.2.24``\n",
"\n",
"This functionality was added in ``langchain-core == 0.2.24``. Please make sure your package is up to date.\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "cbc3c873-6109-4e03-b775-b73c1003faea",
"metadata": {},
"source": [
"## Initialize a rate limiter\n",
"\n",
"Langchain comes with a built-in in memory rate limiter. This rate limiter is thread safe and can be shared by multiple threads in the same process.\n",
"\n",
"The provided rate limiter can only limit the number of requests per unit time. It will not help if you need to also limited based on the size\n",
"of the requests."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "aa9c3c8c-0464-4190-a8c5-d69d173505a6",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.rate_limiters import InMemoryRateLimiter\n",
"\n",
"rate_limiter = InMemoryRateLimiter(\n",
" requests_per_second=0.1, # <-- Super slow! We can only make a request once every 10 seconds!!\n",
" check_every_n_seconds=0.1, # Wake up every 100 ms to check whether allowed to make a request,\n",
" max_bucket_size=10, # Controls the maximum burst size.\n",
")"
]
},
{
"cell_type": "markdown",
"id": "8e058bde-9413-4b08-8cc6-0c9cb638f19f",
"metadata": {},
"source": [
"## Choose a model\n",
"\n",
"Choose any model and pass to it the rate_limiter via the `rate_limiter` attribute."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0f880a3a-c047-4e94-a323-fff2a4c0e96d",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import time\n",
"from getpass import getpass\n",
"\n",
"if \"ANTHROPIC_API_KEY\" not in os.environ:\n",
" os.environ[\"ANTHROPIC_API_KEY\"] = getpass()\n",
"\n",
"\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"model = ChatAnthropic(model_name=\"claude-3-opus-20240229\", rate_limiter=rate_limiter)"
]
},
{
"cell_type": "markdown",
"id": "80c9ab3a-299a-460f-985c-90280a046f52",
"metadata": {},
"source": [
"Let's confirm that the rate limiter works. We should only be able to invoke the model once per 10 seconds."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d074265c-9f32-4c5f-b914-944148993c4d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"11.599073648452759\n",
"10.7502121925354\n",
"10.244257926940918\n",
"8.83088755607605\n",
"11.645203590393066\n"
]
}
],
"source": [
"for _ in range(5):\n",
" tic = time.time()\n",
" model.invoke(\"hello\")\n",
" toc = time.time()\n",
" print(toc - tic)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -54,7 +54,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "9e4144de-d925-4d4c-91c3-685ef8baa57c",
"id": "2bb9c73f-9d00-4a19-a81f-cab2f0fd921a",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 4,
"id": "a9e37aa1",
"metadata": {},
"outputs": [],
@@ -718,8 +718,44 @@
"php_splitter = RecursiveCharacterTextSplitter.from_language(\n",
" language=Language.PHP, chunk_size=50, chunk_overlap=0\n",
")\n",
"haskell_docs = php_splitter.create_documents([PHP_CODE])\n",
"haskell_docs"
"php_docs = php_splitter.create_documents([PHP_CODE])\n",
"php_docs"
]
},
{
"cell_type": "markdown",
"id": "e9fa62c1",
"metadata": {},
"source": [
"## PowerShell\n",
"Here's an example using the PowerShell text splitter:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7e6893ad",
"metadata": {},
"outputs": [],
"source": [
"POWERSHELL_CODE = \"\"\"\n",
"$directoryPath = Get-Location\n",
"\n",
"$items = Get-ChildItem -Path $directoryPath\n",
"\n",
"$files = $items | Where-Object { -not $_.PSIsContainer }\n",
"\n",
"$sortedFiles = $files | Sort-Object LastWriteTime\n",
"\n",
"foreach ($file in $sortedFiles) {\n",
" Write-Output (\"Name: \" + $file.Name + \" | Last Write Time: \" + $file.LastWriteTime)\n",
"}\n",
"\"\"\"\n",
"powershell_splitter = RecursiveCharacterTextSplitter.from_language(\n",
" language=Language.POWERSHELL, chunk_size=100, chunk_overlap=0\n",
")\n",
"powershell_docs = powershell_splitter.create_documents([POWERSHELL_CODE])\n",
"powershell_docs"
]
}
],
@@ -739,7 +775,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -267,9 +267,9 @@
"We first instantiate a chat model that supports [tool calling](/docs/how_to/tool_calling/):\n",
"\n",
"```{=mdx}\n",
"<ChatModelTabs\n",
" customVarName=\"llm\"\n",
"/>\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" />\n",
"```"
]
},
@@ -541,7 +541,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -270,7 +270,7 @@
"source": [
"### StructuredTool\n",
"\n",
"The `StrurcturedTool.from_function` class method provides a bit more configurability than the `@tool` decorator, without requiring much additional code."
"The `StructuredTool.from_function` class method provides a bit more configurability than the `@tool` decorator, without requiring much additional code."
]
},
{

View File

@@ -31,6 +31,8 @@ This highlights functionality that is core to using LangChain.
[**LCEL cheatsheet**](/docs/how_to/lcel_cheatsheet/): For a quick overview of how to use the main LCEL primitives.
[**Migration guide**](/docs/versions/migrating_chains): For migrating legacy chain abstractions to LCEL.
- [How to: chain runnables](/docs/how_to/sequence)
- [How to: stream runnables](/docs/how_to/streaming)
- [How to: invoke runnables in parallel](/docs/how_to/parallel/)
@@ -43,7 +45,6 @@ This highlights functionality that is core to using LangChain.
- [How to: create a dynamic (self-constructing) chain](/docs/how_to/dynamic_chain/)
- [How to: inspect runnables](/docs/how_to/inspect)
- [How to: add fallbacks to a runnable](/docs/how_to/fallbacks)
- [How to: migrate chains to LCEL](/docs/how_to/migrate_chains)
- [How to: pass runtime secrets to a runnable](/docs/how_to/runnable_runtime_secrets)
## Components
@@ -81,12 +82,13 @@ These are the core building blocks you can use when building applications.
- [How to: stream a response back](/docs/how_to/chat_streaming)
- [How to: track token usage](/docs/how_to/chat_token_usage_tracking)
- [How to: track response metadata across providers](/docs/how_to/response_metadata)
- [How to: let your end users choose their model](/docs/how_to/chat_models_universal_init/)
- [How to: use chat model to call tools](/docs/how_to/tool_calling)
- [How to: stream tool calls](/docs/how_to/tool_streaming)
- [How to: handle rate limits](/docs/how_to/chat_model_rate_limiting)
- [How to: few shot prompt tool behavior](/docs/how_to/tools_few_shot)
- [How to: bind model-specific formatted tools](/docs/how_to/tools_model_specific)
- [How to: force a specific tool call](/docs/how_to/tool_choice)
- [How to: work with local models](/docs/how_to/local_llms)
- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
### Messages
@@ -105,7 +107,7 @@ What LangChain calls [LLMs](/docs/concepts/#llms) are older forms of language mo
- [How to: create a custom LLM class](/docs/how_to/custom_llm)
- [How to: stream a response back](/docs/how_to/streaming_llm)
- [How to: track token usage](/docs/how_to/llm_token_usage_tracking)
- [How to: work with local LLMs](/docs/how_to/local_llms)
- [How to: work with local models](/docs/how_to/local_llms)
### Output parsers

View File

@@ -15,7 +15,23 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "25b0b0fa",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_openai langchain_community\n",
"\n",
"import os\n",
"from getpass import getpass\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"# Please manually enter OpenAI Key"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0aa6d335",
"metadata": {},
"outputs": [],
@@ -23,13 +39,14 @@
"from langchain.globals import set_llm_cache\n",
"from langchain_openai import OpenAI\n",
"\n",
"# To make the caching really obvious, lets use a slower model.\n",
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\", n=2, best_of=2)"
"# To make the caching really obvious, lets use a slower and older model.\n",
"# Caching supports newer chat models as well.\n",
"llm = OpenAI(model=\"gpt-3.5-turbo-instruct\", n=2, best_of=2)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 3,
"id": "f168ff0d",
"metadata": {},
"outputs": [
@@ -37,17 +54,17 @@
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 13.7 ms, sys: 6.54 ms, total: 20.2 ms\n",
"Wall time: 330 ms\n"
"CPU times: user 546 ms, sys: 379 ms, total: 925 ms\n",
"Wall time: 1.11 s\n"
]
},
{
"data": {
"text/plain": [
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!\""
"\"\\nWhy don't scientists trust atoms?\\n\\nBecause they make up everything!\""
]
},
"execution_count": 12,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -59,12 +76,12 @@
"set_llm_cache(InMemoryCache())\n",
"\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm.predict(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 4,
"id": "ce7620fb",
"metadata": {},
"outputs": [
@@ -72,17 +89,17 @@
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 436 µs, sys: 921 µs, total: 1.36 ms\n",
"Wall time: 1.36 ms\n"
"CPU times: user 192 µs, sys: 77 µs, total: 269 µs\n",
"Wall time: 270 µs\n"
]
},
{
"data": {
"text/plain": [
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!\""
"\"\\nWhy don't scientists trust atoms?\\n\\nBecause they make up everything!\""
]
},
"execution_count": 13,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -90,7 +107,7 @@
"source": [
"%%time\n",
"# The second time it is, so it goes faster\n",
"llm.predict(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -103,7 +120,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 5,
"id": "2e65de83",
"metadata": {},
"outputs": [],
@@ -113,7 +130,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 6,
"id": "0be83715",
"metadata": {},
"outputs": [],
@@ -126,7 +143,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 7,
"id": "9b427ce7",
"metadata": {},
"outputs": [
@@ -134,17 +151,17 @@
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 29.3 ms, sys: 17.3 ms, total: 46.7 ms\n",
"Wall time: 364 ms\n"
"CPU times: user 10.6 ms, sys: 4.21 ms, total: 14.8 ms\n",
"Wall time: 851 ms\n"
]
},
{
"data": {
"text/plain": [
"'\\n\\nWhy did the tomato turn red?\\n\\nBecause it saw the salad dressing!'"
"\"\\n\\nWhy don't scientists trust atoms?\\n\\nBecause they make up everything!\""
]
},
"execution_count": 10,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -152,12 +169,12 @@
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm.predict(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 8,
"id": "87f52611",
"metadata": {},
"outputs": [
@@ -165,17 +182,17 @@
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 4.58 ms, sys: 2.23 ms, total: 6.8 ms\n",
"Wall time: 4.68 ms\n"
"CPU times: user 59.7 ms, sys: 63.6 ms, total: 123 ms\n",
"Wall time: 134 ms\n"
]
},
{
"data": {
"text/plain": [
"'\\n\\nWhy did the tomato turn red?\\n\\nBecause it saw the salad dressing!'"
"\"\\n\\nWhy don't scientists trust atoms?\\n\\nBecause they make up everything!\""
]
},
"execution_count": 11,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@@ -183,7 +200,7 @@
"source": [
"%%time\n",
"# The second time it is, so it goes faster\n",
"llm.predict(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -211,7 +228,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -5,11 +5,11 @@
"id": "b8982428",
"metadata": {},
"source": [
"# Run LLMs locally\n",
"# Run models locally\n",
"\n",
"## Use case\n",
"\n",
"The popularity of projects like [PrivateGPT](https://github.com/imartinez/privateGPT), [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://github.com/ollama/ollama), [GPT4All](https://github.com/nomic-ai/gpt4all), [llamafile](https://github.com/Mozilla-Ocho/llamafile), and others underscore the demand to run LLMs locally (on your own device).\n",
"The popularity of projects like [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://github.com/ollama/ollama), [GPT4All](https://github.com/nomic-ai/gpt4all), [llamafile](https://github.com/Mozilla-Ocho/llamafile), and others underscore the demand to run LLMs locally (on your own device).\n",
"\n",
"This has at least two important benefits:\n",
"\n",
@@ -66,6 +66,12 @@
"\n",
"![Image description](../../static/img/llama_t_put.png)\n",
"\n",
"### Formatting prompts\n",
"\n",
"Some providers have [chat model](/docs/concepts/#chat-models) wrappers that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models with a [text-in/text-out LLM](/docs/concepts/#llms) wrapper, you may need to use a prompt tailed for your specific model.\n",
"\n",
"This can [require the inclusion of special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). [Here's an example for LLaMA 2](https://smith.langchain.com/hub/rlm/rag-prompt-llama).\n",
"\n",
"## Quickstart\n",
"\n",
"[`Ollama`](https://ollama.ai/) is one way to easily run inference on macOS.\n",
@@ -73,10 +79,20 @@
"The instructions [here](https://github.com/jmorganca/ollama?tab=readme-ov-file#ollama) provide details, which we summarize:\n",
" \n",
"* [Download and run](https://ollama.ai/download) the app\n",
"* From command line, fetch a model from this [list of options](https://github.com/jmorganca/ollama): e.g., `ollama pull llama2`\n",
"* From command line, fetch a model from this [list of options](https://github.com/jmorganca/ollama): e.g., `ollama pull llama3.1:8b`\n",
"* When the app is running, all models are automatically served on `localhost:11434`\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "29450fc9",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_ollama"
]
},
{
"cell_type": "code",
"execution_count": 2,
@@ -86,7 +102,7 @@
{
"data": {
"text/plain": [
"' The first man on the moon was Neil Armstrong, who landed on the moon on July 20, 1969 as part of the Apollo 11 mission. obviously.'"
"'...Neil Armstrong!\\n\\nOn July 20, 1969, Neil Armstrong became the first person to set foot on the lunar surface, famously declaring \"That\\'s one small step for man, one giant leap for mankind\" as he stepped off the lunar module Eagle onto the Moon\\'s surface.\\n\\nWould you like to know more about the Apollo 11 mission or Neil Armstrong\\'s achievements?'"
]
},
"execution_count": 2,
@@ -95,51 +111,78 @@
}
],
"source": [
"from langchain_community.llms import Ollama\n",
"from langchain_ollama import OllamaLLM\n",
"\n",
"llm = OllamaLLM(model=\"llama3.1:8b\")\n",
"\n",
"llm = Ollama(model=\"llama2\")\n",
"llm.invoke(\"The first man on the moon was ...\")"
]
},
{
"cell_type": "markdown",
"id": "343ab645",
"id": "674cc672",
"metadata": {},
"source": [
"Stream tokens as they are being generated."
"Stream tokens as they are being generated:"
]
},
{
"cell_type": "code",
"execution_count": 40,
"id": "9cd83603",
"execution_count": 3,
"id": "1386a852",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" The first man to walk on the moon was Neil Armstrong, an American astronaut who was part of the Apollo 11 mission in 1969. февруари 20, 1969, Armstrong stepped out of the lunar module Eagle and onto the moon's surface, famously declaring \"That's one small step for man, one giant leap for mankind\" as he took his first steps. He was followed by fellow astronaut Edwin \"Buzz\" Aldrin, who also walked on the moon during the mission."
"...|"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Neil| Armstrong|,| an| American| astronaut|.| He| stepped| out| of| the| lunar| module| Eagle| and| onto| the| surface| of| the| Moon| on| July| |20|,| |196|9|,| famously| declaring|:| \"|That|'s| one| small| step| for| man|,| one| giant| leap| for| mankind|.\"||"
]
}
],
"source": [
"for chunk in llm.stream(\"The first man on the moon was ...\"):\n",
" print(chunk, end=\"|\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "e5731060",
"metadata": {},
"source": [
"Ollama also includes a chat model wrapper that handles formatting conversation turns:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f14a778a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' The first man to walk on the moon was Neil Armstrong, an American astronaut who was part of the Apollo 11 mission in 1969. февруари 20, 1969, Armstrong stepped out of the lunar module Eagle and onto the moon\\'s surface, famously declaring \"That\\'s one small step for man, one giant leap for mankind\" as he took his first steps. He was followed by fellow astronaut Edwin \"Buzz\" Aldrin, who also walked on the moon during the mission.'"
"AIMessage(content='The answer is a historic one!\\n\\nThe first man to walk on the Moon was Neil Armstrong, an American astronaut and commander of the Apollo 11 mission. On July 20, 1969, Armstrong stepped out of the lunar module Eagle onto the surface of the Moon, famously declaring:\\n\\n\"That\\'s one small step for man, one giant leap for mankind.\"\\n\\nArmstrong was followed by fellow astronaut Edwin \"Buzz\" Aldrin, who also walked on the Moon during the mission. Michael Collins remained in orbit around the Moon in the command module Columbia.\\n\\nNeil Armstrong passed away on August 25, 2012, but his legacy as a pioneering astronaut and engineer continues to inspire people around the world!', response_metadata={'model': 'llama3.1:8b', 'created_at': '2024-08-01T00:38:29.176717Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 10681861417, 'load_duration': 34270292, 'prompt_eval_count': 19, 'prompt_eval_duration': 6209448000, 'eval_count': 141, 'eval_duration': 4432022000}, id='run-7bed57c5-7f54-4092-912c-ae49073dcd48-0', usage_metadata={'input_tokens': 19, 'output_tokens': 141, 'total_tokens': 160})"
]
},
"execution_count": 40,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler\n",
"from langchain_ollama import ChatOllama\n",
"\n",
"llm = Ollama(\n",
" model=\"llama2\", callback_manager=CallbackManager([StreamingStdOutCallbackHandler()])\n",
")\n",
"llm.invoke(\"The first man on the moon was ...\")"
"chat_model = ChatOllama(model=\"llama3.1:8b\")\n",
"\n",
"chat_model.invoke(\"Who was the first man on the moon?\")"
]
},
{
@@ -199,7 +242,7 @@
"\n",
"With [Ollama](https://github.com/jmorganca/ollama), fetch a model via `ollama pull <model family>:<tag>`:\n",
"\n",
"* E.g., for Llama-7b: `ollama pull llama2` will download the most basic version of the model (e.g., smallest # parameters and 4 bit quantization)\n",
"* E.g., for Llama 2 7b: `ollama pull llama2` will download the most basic version of the model (e.g., smallest # parameters and 4 bit quantization)\n",
"* We can also specify a particular version from the [model list](https://github.com/jmorganca/ollama?tab=readme-ov-file#model-library), e.g., `ollama pull llama2:13b`\n",
"* See the full set of parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.ollama.Ollama.html)"
]
@@ -222,9 +265,7 @@
}
],
"source": [
"from langchain_community.llms import Ollama\n",
"\n",
"llm = Ollama(model=\"llama2:13b\")\n",
"llm = OllamaLLM(model=\"llama2:13b\")\n",
"llm.invoke(\"The first man on the moon was ... think step by step\")"
]
},
@@ -268,11 +309,7 @@
"cell_type": "code",
"execution_count": null,
"id": "5eba38dc",
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"metadata": {},
"outputs": [],
"source": [
"%env CMAKE_ARGS=\"-DLLAMA_METAL=on\"\n",
@@ -542,7 +579,6 @@
}
],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.chains.prompt_selector import ConditionalPromptSelector\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
@@ -613,9 +649,9 @@
],
"source": [
"# Chain\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"chain = prompt | llm\n",
"question = \"What NFL team won the Super Bowl in the year that Justin Bieber was born?\"\n",
"llm_chain.run({\"question\": question})"
"chain.invoke({\"question\": question})"
]
},
{
@@ -666,7 +702,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -41,7 +41,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "662fac50",
"metadata": {},
"outputs": [],
@@ -50,6 +50,26 @@
"%pip install -U langgraph langchain langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "6f8ec38f",
"metadata": {},
"source": [
"Then, set your OpenAI API key."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "5fca87ef",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"sk-...\""
]
},
{
"cell_type": "markdown",
"id": "8e50635c-1671-46e6-be65-ce95f8167c2f",
@@ -62,7 +82,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"id": "1e425fea-2796-4b99-bee6-9a6ffe73f756",
"metadata": {},
"outputs": [],
@@ -95,7 +115,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "03ea357c-9c36-4464-b2cc-27bd150e1554",
"metadata": {},
"outputs": [
@@ -106,7 +126,7 @@
" 'output': 'The value of `magic_function(3)` is 5.'}"
]
},
"execution_count": 2,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -142,7 +162,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "53a3737a-d167-4255-89bf-20ac37f89a3e",
"metadata": {},
"outputs": [
@@ -153,7 +173,7 @@
" 'output': 'The value of `magic_function(3)` is 5.'}"
]
},
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -173,7 +193,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"id": "74ecebe3-512e-409c-a661-bdd5b0a2b782",
"metadata": {},
"outputs": [
@@ -181,10 +201,10 @@
"data": {
"text/plain": [
"{'input': 'Pardon?',\n",
" 'output': 'The result of applying `magic_function` to the input 3 is 5.'}"
" 'output': 'The value you get when you apply `magic_function` to the input 3 is 5.'}"
]
},
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -223,7 +243,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "a9a11ccd-75e2-4c11-844d-a34870b0ff91",
"metadata": {},
"outputs": [
@@ -234,7 +254,7 @@
" 'output': 'El valor de `magic_function(3)` es 5.'}"
]
},
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@@ -263,19 +283,19 @@
"source": [
"Now, let's pass a custom system message to [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent).\n",
"\n",
"LangGraph's prebuilt `create_react_agent` does not take a prompt template directly as a parameter, but instead takes a [`messages_modifier`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) parameter. This modifies messages before they are passed into the model, and can be one of four values:\n",
"LangGraph's prebuilt `create_react_agent` does not take a prompt template directly as a parameter, but instead takes a [`state_modifier`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) parameter. This modifies the graph state before the llm is called, and can be one of four values:\n",
"\n",
"- A `SystemMessage`, which is added to the beginning of the list of messages.\n",
"- A `string`, which is converted to a `SystemMessage` and added to the beginning of the list of messages.\n",
"- A `Callable`, which should take in a list of messages. The output is then passed to the language model.\n",
"- Or a [`Runnable`](/docs/concepts/#langchain-expression-language-lcel), which should should take in a list of messages. The output is then passed to the language model.\n",
"- A `Callable`, which should take in full graph state. The output is then passed to the language model.\n",
"- Or a [`Runnable`](/docs/concepts/#langchain-expression-language-lcel), which should take in full graph state. The output is then passed to the language model.\n",
"\n",
"Here's how it looks in action:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"id": "a9486805-676a-4d19-a5c4-08b41b172989",
"metadata": {},
"outputs": [],
@@ -287,7 +307,7 @@
"# This could also be a SystemMessage object\n",
"# system_message = SystemMessage(content=\"You are a helpful assistant. Respond only in Spanish.\")\n",
"\n",
"app = create_react_agent(model, tools, messages_modifier=system_message)\n",
"app = create_react_agent(model, tools, state_modifier=system_message)\n",
"\n",
"\n",
"messages = app.invoke({\"messages\": [(\"user\", query)]})"
@@ -304,7 +324,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"id": "d369ab45-0c82-45f4-9d3e-8efb8dd47e2c",
"metadata": {},
"outputs": [
@@ -317,8 +337,8 @@
}
],
"source": [
"from langchain_core.messages import AnyMessage\n",
"from langgraph.prebuilt import create_react_agent\n",
"from langgraph.prebuilt.chat_agent_executor import AgentState\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
@@ -328,13 +348,13 @@
")\n",
"\n",
"\n",
"def _modify_messages(messages: list[AnyMessage]):\n",
" return prompt.invoke({\"messages\": messages}).to_messages() + [\n",
"def _modify_state_messages(state: AgentState):\n",
" return prompt.invoke({\"messages\": state[\"messages\"]}).to_messages() + [\n",
" (\"user\", \"Also say 'Pandamonium!' after the answer.\")\n",
" ]\n",
"\n",
"\n",
"app = create_react_agent(model, tools, messages_modifier=_modify_messages)\n",
"app = create_react_agent(model, tools, state_modifier=_modify_state_messages)\n",
"\n",
"\n",
"messages = app.invoke({\"messages\": [(\"human\", query)]})\n",
@@ -366,8 +386,8 @@
},
{
"cell_type": "code",
"execution_count": 8,
"id": "1fb52a2c",
"execution_count": 9,
"id": "b97beba5-8f74-430c-9399-91b77c8fa15c",
"metadata": {},
"outputs": [
{
@@ -376,7 +396,7 @@
"text": [
"Hi Polly! The output of the magic function for the input 3 is 5.\n",
"---\n",
"Yes, I remember your name, Polly! How can I assist you further?\n",
"Yes, your name is Polly!\n",
"---\n",
"The output of the magic function for the input 3 is 5.\n"
]
@@ -384,14 +404,14 @@
],
"source": [
"from langchain.agents import AgentExecutor, create_tool_calling_agent\n",
"from langchain_community.chat_message_histories import ChatMessageHistory\n",
"from langchain_core.chat_history import InMemoryChatMessageHistory\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_core.tools import tool\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o\")\n",
"memory = ChatMessageHistory(session_id=\"test-session\")\n",
"memory = InMemoryChatMessageHistory(session_id=\"test-session\")\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are a helpful assistant.\"),\n",
@@ -456,24 +476,23 @@
},
{
"cell_type": "code",
"execution_count": 9,
"id": "035e1253",
"execution_count": 10,
"id": "baca3dc6-678b-4509-9275-2fd653102898",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hi Polly! The output of the magic_function for the input 3 is 5.\n",
"Hi Polly! The output of the magic_function for the input of 3 is 5.\n",
"---\n",
"Yes, your name is Polly!\n",
"---\n",
"The output of the magic_function for the input 3 was 5.\n"
"The output of the magic_function for the input of 3 was 5.\n"
]
}
],
"source": [
"from langchain_core.messages import SystemMessage\n",
"from langgraph.checkpoint import MemorySaver # an in-memory checkpointer\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
@@ -483,7 +502,7 @@
"\n",
"memory = MemorySaver()\n",
"app = create_react_agent(\n",
" model, tools, messages_modifier=system_message, checkpointer=memory\n",
" model, tools, state_modifier=system_message, checkpointer=memory\n",
")\n",
"\n",
"config = {\"configurable\": {\"thread_id\": \"test-thread\"}}\n",
@@ -525,16 +544,16 @@
},
{
"cell_type": "code",
"execution_count": 10,
"id": "d640feb3",
"execution_count": 11,
"id": "e62843c4-1107-41f0-a50b-aea256e28053",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'actions': [ToolAgentAction(tool='magic_function', tool_input={'input': 3}, log=\"\\nInvoking: `magic_function` with `{'input': 3}`\\n\\n\\n\", message_log=[AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_q9MgGFjqJbV2xSUX93WqxmOt', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls'}, id='run-c68fd76f-a3c3-4c3c-bfd7-748c171ed4b8', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_q9MgGFjqJbV2xSUX93WqxmOt'}], tool_call_chunks=[{'name': 'magic_function', 'args': '{\"input\":3}', 'id': 'call_q9MgGFjqJbV2xSUX93WqxmOt', 'index': 0}])], tool_call_id='call_q9MgGFjqJbV2xSUX93WqxmOt')], 'messages': [AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_q9MgGFjqJbV2xSUX93WqxmOt', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls'}, id='run-c68fd76f-a3c3-4c3c-bfd7-748c171ed4b8', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_q9MgGFjqJbV2xSUX93WqxmOt'}], tool_call_chunks=[{'name': 'magic_function', 'args': '{\"input\":3}', 'id': 'call_q9MgGFjqJbV2xSUX93WqxmOt', 'index': 0}])]}\n",
"{'steps': [AgentStep(action=ToolAgentAction(tool='magic_function', tool_input={'input': 3}, log=\"\\nInvoking: `magic_function` with `{'input': 3}`\\n\\n\\n\", message_log=[AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_q9MgGFjqJbV2xSUX93WqxmOt', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls'}, id='run-c68fd76f-a3c3-4c3c-bfd7-748c171ed4b8', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_q9MgGFjqJbV2xSUX93WqxmOt'}], tool_call_chunks=[{'name': 'magic_function', 'args': '{\"input\":3}', 'id': 'call_q9MgGFjqJbV2xSUX93WqxmOt', 'index': 0}])], tool_call_id='call_q9MgGFjqJbV2xSUX93WqxmOt'), observation=5)], 'messages': [FunctionMessage(content='5', name='magic_function')]}\n",
"{'actions': [ToolAgentAction(tool='magic_function', tool_input={'input': 3}, log=\"\\nInvoking: `magic_function` with `{'input': 3}`\\n\\n\\n\", message_log=[AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_1exy0rScfPmo4fy27FbQ5qJ2', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518'}, id='run-5664e138-7085-4da7-a49e-5656a87b8d78', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_1exy0rScfPmo4fy27FbQ5qJ2', 'type': 'tool_call'}], tool_call_chunks=[{'name': 'magic_function', 'args': '{\"input\":3}', 'id': 'call_1exy0rScfPmo4fy27FbQ5qJ2', 'index': 0, 'type': 'tool_call_chunk'}])], tool_call_id='call_1exy0rScfPmo4fy27FbQ5qJ2')], 'messages': [AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_1exy0rScfPmo4fy27FbQ5qJ2', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518'}, id='run-5664e138-7085-4da7-a49e-5656a87b8d78', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_1exy0rScfPmo4fy27FbQ5qJ2', 'type': 'tool_call'}], tool_call_chunks=[{'name': 'magic_function', 'args': '{\"input\":3}', 'id': 'call_1exy0rScfPmo4fy27FbQ5qJ2', 'index': 0, 'type': 'tool_call_chunk'}])]}\n",
"{'steps': [AgentStep(action=ToolAgentAction(tool='magic_function', tool_input={'input': 3}, log=\"\\nInvoking: `magic_function` with `{'input': 3}`\\n\\n\\n\", message_log=[AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_1exy0rScfPmo4fy27FbQ5qJ2', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518'}, id='run-5664e138-7085-4da7-a49e-5656a87b8d78', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_1exy0rScfPmo4fy27FbQ5qJ2', 'type': 'tool_call'}], tool_call_chunks=[{'name': 'magic_function', 'args': '{\"input\":3}', 'id': 'call_1exy0rScfPmo4fy27FbQ5qJ2', 'index': 0, 'type': 'tool_call_chunk'}])], tool_call_id='call_1exy0rScfPmo4fy27FbQ5qJ2'), observation=5)], 'messages': [FunctionMessage(content='5', name='magic_function')]}\n",
"{'output': 'The value of `magic_function(3)` is 5.', 'messages': [AIMessage(content='The value of `magic_function(3)` is 5.')]}\n"
]
}
@@ -585,23 +604,23 @@
},
{
"cell_type": "code",
"execution_count": 11,
"id": "86abbe07",
"execution_count": 12,
"id": "076ebc85-f804-4093-a25a-a16334c9898e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_yTjXXibj76tyFyPRa1soLo0S', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 70, 'total_tokens': 84}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_729ea513f7', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-b275f314-c42e-4e77-9dec-5c23f7dbd53b-0', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_yTjXXibj76tyFyPRa1soLo0S'}])]}}\n",
"{'tools': {'messages': [ToolMessage(content='5', name='magic_function', id='41c5f227-528d-4483-a313-b03b23b1d327', tool_call_id='call_yTjXXibj76tyFyPRa1soLo0S')]}}\n",
"{'agent': {'messages': [AIMessage(content='The value of `magic_function(3)` is 5.', response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 93, 'total_tokens': 107}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_729ea513f7', 'finish_reason': 'stop', 'logprobs': None}, id='run-0ef12b6e-415d-4758-9b62-5e5e1b350072-0')]}}\n"
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_my9rzFSKR4T1yYKwCsfbZB8A', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 61, 'total_tokens': 75}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_bc2a86f5f5', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-dd705555-8fae-4fb1-a033-5d99a23e3c22-0', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_my9rzFSKR4T1yYKwCsfbZB8A', 'type': 'tool_call'}], usage_metadata={'input_tokens': 61, 'output_tokens': 14, 'total_tokens': 75})]}}\n",
"{'tools': {'messages': [ToolMessage(content='5', name='magic_function', tool_call_id='call_my9rzFSKR4T1yYKwCsfbZB8A')]}}\n",
"{'agent': {'messages': [AIMessage(content='The value of `magic_function(3)` is 5.', response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 84, 'total_tokens': 98}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518', 'finish_reason': 'stop', 'logprobs': None}, id='run-698cad05-8cb2-4d08-8c2a-881e354f6cc7-0', usage_metadata={'input_tokens': 84, 'output_tokens': 14, 'total_tokens': 98})]}}\n"
]
}
],
"source": [
"from langchain_core.messages import AnyMessage\n",
"from langgraph.prebuilt import create_react_agent\n",
"from langgraph.prebuilt.chat_agent_executor import AgentState\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
@@ -611,12 +630,11 @@
")\n",
"\n",
"\n",
"def _modify_messages(messages: list[AnyMessage]):\n",
" return prompt.invoke({\"messages\": messages}).to_messages()\n",
"def _modify_state_messages(state: AgentState):\n",
" return prompt.invoke({\"messages\": state[\"messages\"]}).to_messages()\n",
"\n",
"\n",
"app = create_react_agent(model, tools, messages_modifier=_modify_messages)\n",
"\n",
"app = create_react_agent(model, tools, state_modifier=_modify_state_messages)\n",
"\n",
"for step in app.stream({\"messages\": [(\"human\", query)]}, stream_mode=\"updates\"):\n",
" print(step)"
@@ -637,14 +655,14 @@
{
"cell_type": "code",
"execution_count": 12,
"id": "4eff44bc-a620-4c8a-97b1-268692a842bb",
"id": "a2f720f3-c121-4be2-b498-92c16bb44b0a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[(ToolAgentAction(tool='magic_function', tool_input={'input': 3}, log=\"\\nInvoking: `magic_function` with `{'input': 3}`\\n\\n\\n\", message_log=[AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_ABI4hftfEdnVgKyfF6OzZbca', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls'}, id='run-837e794f-cfd8-40e0-8abc-4d98ced11b75', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_ABI4hftfEdnVgKyfF6OzZbca'}], tool_call_chunks=[{'name': 'magic_function', 'args': '{\"input\":3}', 'id': 'call_ABI4hftfEdnVgKyfF6OzZbca', 'index': 0}])], tool_call_id='call_ABI4hftfEdnVgKyfF6OzZbca'), 5)]\n"
"[(ToolAgentAction(tool='magic_function', tool_input={'input': 3}, log=\"\\nInvoking: `magic_function` with `{'input': 3}`\\n\\n\\n\", message_log=[AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_uPZ2D1Bo5mdED3gwgaeWURrf', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518'}, id='run-a792db4a-278d-4090-82ae-904a30eada93', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_uPZ2D1Bo5mdED3gwgaeWURrf', 'type': 'tool_call'}], tool_call_chunks=[{'name': 'magic_function', 'args': '{\"input\":3}', 'id': 'call_uPZ2D1Bo5mdED3gwgaeWURrf', 'index': 0, 'type': 'tool_call_chunk'}])], tool_call_id='call_uPZ2D1Bo5mdED3gwgaeWURrf'), 5)]\n"
]
}
],
@@ -667,16 +685,16 @@
{
"cell_type": "code",
"execution_count": 13,
"id": "4f4364ea-dffe-4d25-bdce-ef7d0020b880",
"id": "ef23117a-5ccb-42ce-80c3-ea49a9d3a942",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content='what is the value of magic_function(3)?', id='0f63e437-c4d8-4da9-b6f5-b293ebfe4a64'),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_S96v28LlI6hNkQrNnIio0JPh', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 64, 'total_tokens': 78}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_729ea513f7', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-ffef7898-14b1-4537-ad90-7c000a8a5d25-0', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_S96v28LlI6hNkQrNnIio0JPh'}]),\n",
" ToolMessage(content='5', name='magic_function', id='fbd9df4e-1dda-4d3e-9044-b001f7875476', tool_call_id='call_S96v28LlI6hNkQrNnIio0JPh'),\n",
" AIMessage(content='The value of `magic_function(3)` is 5.', response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 87, 'total_tokens': 101}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_729ea513f7', 'finish_reason': 'stop', 'logprobs': None}, id='run-e5d94c54-d9f4-45cd-be8e-a9101a8d88d6-0')]}"
"{'messages': [HumanMessage(content='what is the value of magic_function(3)?', id='cd7d0f49-a0e0-425a-b2b0-603a716058ed'),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_VfZ9287DuybOSrBsQH5X12xf', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 55, 'total_tokens': 69}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-a1e965cd-bf61-44f9-aec1-8aaecb80955f-0', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_VfZ9287DuybOSrBsQH5X12xf', 'type': 'tool_call'}], usage_metadata={'input_tokens': 55, 'output_tokens': 14, 'total_tokens': 69}),\n",
" ToolMessage(content='5', name='magic_function', id='20d5c2fe-a5d8-47fa-9e04-5282642e2039', tool_call_id='call_VfZ9287DuybOSrBsQH5X12xf'),\n",
" AIMessage(content='The value of `magic_function(3)` is 5.', response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 78, 'total_tokens': 92}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518', 'finish_reason': 'stop', 'logprobs': None}, id='run-abf9341c-ef41-4157-935d-a3be5dfa2f41-0', usage_metadata={'input_tokens': 78, 'output_tokens': 14, 'total_tokens': 92})]}"
]
},
"execution_count": 13,
@@ -708,7 +726,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 16,
"id": "16f189a7-fc78-4cb5-aa16-a94ca06401a6",
"metadata": {},
"outputs": [],
@@ -724,7 +742,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 17,
"id": "c96aefd7-6f6e-4670-aca6-1ac3d4e7871f",
"metadata": {},
"outputs": [
@@ -739,11 +757,7 @@
"Invoking: `magic_function` with `{'input': '3'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mSorry, there was an error. Please try again.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `magic_function` with `{'input': '3'}`\n",
"responded: Parece que hubo un error al intentar obtener el valor de `magic_function(3)`. Permíteme intentarlo de nuevo.\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mSorry, there was an error. Please try again.\u001b[0m\u001b[32;1m\u001b[1;3mAún no puedo obtener el valor de `magic_function(3)`. ¿Hay algo más en lo que pueda ayudarte?\u001b[0m\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mSorry, there was an error. Please try again.\u001b[0m\u001b[32;1m\u001b[1;3mParece que hubo un error al intentar calcular el valor de la función mágica. ¿Te gustaría que lo intente de nuevo?\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -752,10 +766,10 @@
"data": {
"text/plain": [
"{'input': 'what is the value of magic_function(3)?',\n",
" 'output': 'Aún no puedo obtener el valor de `magic_function(3)`. ¿Hay algo más en lo que pueda ayudarte?'}"
" 'output': 'Parece que hubo un error al intentar calcular el valor de la función mágica. ¿Te gustaría que lo intente de nuevo?'}"
]
},
"execution_count": 15,
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@@ -797,7 +811,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 18,
"id": "b974a91f-6ae8-4644-83d9-73666258a6db",
"metadata": {},
"outputs": [
@@ -805,12 +819,12 @@
"name": "stdout",
"output_type": "stream",
"text": [
"('human', 'what is the value of magic_function(3)?')\n",
"content='' additional_kwargs={'tool_calls': [{'id': 'call_pFdKcCu5taDTtOOfX14vEDRp', 'function': {'arguments': '{\"input\":\"3\"}', 'name': 'magic_function'}, 'type': 'function'}]} response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 64, 'total_tokens': 78}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_729ea513f7', 'finish_reason': 'tool_calls', 'logprobs': None} id='run-25836468-ba7e-43be-a7cf-76bba06a2a08-0' tool_calls=[{'name': 'magic_function', 'args': {'input': '3'}, 'id': 'call_pFdKcCu5taDTtOOfX14vEDRp'}]\n",
"content='Sorry, there was an error. Please try again.' name='magic_function' id='1a08b883-9c7b-4969-9e9b-67ce64cdcb5f' tool_call_id='call_pFdKcCu5taDTtOOfX14vEDRp'\n",
"content='It seems there was an error when trying to apply the magic function. Let me try again.' additional_kwargs={'tool_calls': [{'id': 'call_DA0lpDIkBFg2GHy4WsEcZG4K', 'function': {'arguments': '{\"input\":\"3\"}', 'name': 'magic_function'}, 'type': 'function'}]} response_metadata={'token_usage': {'completion_tokens': 34, 'prompt_tokens': 97, 'total_tokens': 131}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_729ea513f7', 'finish_reason': 'tool_calls', 'logprobs': None} id='run-d571b774-0ea3-4e35-8b7d-f32932c3f3cc-0' tool_calls=[{'name': 'magic_function', 'args': {'input': '3'}, 'id': 'call_DA0lpDIkBFg2GHy4WsEcZG4K'}]\n",
"content='Sorry, there was an error. Please try again.' name='magic_function' id='0b45787b-c82a-487f-9a5a-de129c30460f' tool_call_id='call_DA0lpDIkBFg2GHy4WsEcZG4K'\n",
"content='It appears that there is a consistent issue when trying to apply the magic function to the input \"3.\" This could be due to various reasons, such as the input not being in the correct format or an internal error.\\n\\nIf you have any other questions or if there\\'s something else you\\'d like to try, please let me know!' response_metadata={'token_usage': {'completion_tokens': 66, 'prompt_tokens': 153, 'total_tokens': 219}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_729ea513f7', 'finish_reason': 'stop', 'logprobs': None} id='run-50a962e6-21b7-4327-8dea-8e2304062627-0'\n"
"content='what is the value of magic_function(3)?' id='74e2d5e8-2b59-4820-979c-8d11ecfc14c2'\n",
"content='' additional_kwargs={'tool_calls': [{'id': 'call_ihtrH6IG95pDXpKluIwAgi3J', 'function': {'arguments': '{\"input\":\"3\"}', 'name': 'magic_function'}, 'type': 'function'}]} response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 55, 'total_tokens': 69}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518', 'finish_reason': 'tool_calls', 'logprobs': None} id='run-5a35e465-8a08-43dd-ac8b-4a76dcace305-0' tool_calls=[{'name': 'magic_function', 'args': {'input': '3'}, 'id': 'call_ihtrH6IG95pDXpKluIwAgi3J', 'type': 'tool_call'}] usage_metadata={'input_tokens': 55, 'output_tokens': 14, 'total_tokens': 69}\n",
"content='Sorry, there was an error. Please try again.' name='magic_function' id='8c37c19b-3586-46b1-aab9-a045786801a2' tool_call_id='call_ihtrH6IG95pDXpKluIwAgi3J'\n",
"content='It seems there was an error in processing the request. Let me try again.' additional_kwargs={'tool_calls': [{'id': 'call_iF0vYWAd6rfely0cXSqdMOnF', 'function': {'arguments': '{\"input\":\"3\"}', 'name': 'magic_function'}, 'type': 'function'}]} response_metadata={'token_usage': {'completion_tokens': 31, 'prompt_tokens': 88, 'total_tokens': 119}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518', 'finish_reason': 'tool_calls', 'logprobs': None} id='run-eb88ec77-d492-43a5-a5dd-4cefef9a6920-0' tool_calls=[{'name': 'magic_function', 'args': {'input': '3'}, 'id': 'call_iF0vYWAd6rfely0cXSqdMOnF', 'type': 'tool_call'}] usage_metadata={'input_tokens': 88, 'output_tokens': 31, 'total_tokens': 119}\n",
"content='Sorry, there was an error. Please try again.' name='magic_function' id='c9ff261f-a0f1-4c92-a9f2-cd749f62d911' tool_call_id='call_iF0vYWAd6rfely0cXSqdMOnF'\n",
"content='I am currently unable to process the request with the input \"3\" for the `magic_function`. If you have any other questions or need assistance with something else, please let me know!' response_metadata={'token_usage': {'completion_tokens': 39, 'prompt_tokens': 141, 'total_tokens': 180}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518', 'finish_reason': 'stop', 'logprobs': None} id='run-d42508aa-f286-4b57-80fb-f8a76736d470-0' usage_metadata={'input_tokens': 141, 'output_tokens': 39, 'total_tokens': 180}\n"
]
}
],
@@ -847,7 +861,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 19,
"id": "4b8498fc-a7af-4164-a401-d8714f082306",
"metadata": {},
"outputs": [
@@ -874,7 +888,7 @@
" 'output': 'Agent stopped due to max iterations.'}"
]
},
"execution_count": 17,
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
@@ -917,7 +931,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 20,
"id": "a2b29113-e6be-4f91-aa4c-5c63dea3e423",
"metadata": {},
"outputs": [
@@ -925,7 +939,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_HaQkeCwD5QskzJzFixCBacZ4', 'function': {'arguments': '{\"input\":\"3\"}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 64, 'total_tokens': 78}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_729ea513f7', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-596c9200-771f-436d-8576-72fcb81620f1-0', tool_calls=[{'name': 'magic_function', 'args': {'input': '3'}, 'id': 'call_HaQkeCwD5QskzJzFixCBacZ4'}])]}}\n",
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_FKiTkTd0Ffd4rkYSzERprf1M', 'function': {'arguments': '{\"input\":\"3\"}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 55, 'total_tokens': 69}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-b842f7b6-ec10-40f8-8c0e-baa220b77e91-0', tool_calls=[{'name': 'magic_function', 'args': {'input': '3'}, 'id': 'call_FKiTkTd0Ffd4rkYSzERprf1M', 'type': 'tool_call'}], usage_metadata={'input_tokens': 55, 'output_tokens': 14, 'total_tokens': 69})]}}\n",
"------\n",
"{'input': 'what is the value of magic_function(3)?', 'output': 'Agent stopped due to max iterations.'}\n"
]
@@ -956,7 +970,7 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 21,
"id": "e9eb55f4-a321-4bac-b52d-9e43b411cf92",
"metadata": {},
"outputs": [
@@ -964,7 +978,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_4agJXUHtmHrOOMogjF6ZuzAv', 'function': {'arguments': '{\"input\":\"3\"}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 64, 'total_tokens': 78}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_729ea513f7', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-a1c77db7-405f-43d9-8d57-751f2ca1a58c-0', tool_calls=[{'name': 'magic_function', 'args': {'input': '3'}, 'id': 'call_4agJXUHtmHrOOMogjF6ZuzAv'}])]}}\n",
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_WoOB8juagB08xrP38twYlYKR', 'function': {'arguments': '{\"input\":\"3\"}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 55, 'total_tokens': 69}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-73dee47e-30ab-42c9-bb0c-6f227cac96cd-0', tool_calls=[{'name': 'magic_function', 'args': {'input': '3'}, 'id': 'call_WoOB8juagB08xrP38twYlYKR', 'type': 'tool_call'}], usage_metadata={'input_tokens': 55, 'output_tokens': 14, 'total_tokens': 69})]}}\n",
"------\n",
"Task Cancelled.\n"
]
@@ -1005,7 +1019,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 22,
"id": "3f6e2cf2",
"metadata": {},
"outputs": [
@@ -1067,7 +1081,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 23,
"id": "73cabbc4",
"metadata": {},
"outputs": [
@@ -1075,10 +1089,10 @@
"name": "stdout",
"output_type": "stream",
"text": [
"('human', 'what is the value of magic_function(3)?')\n",
"content='' additional_kwargs={'tool_calls': [{'id': 'call_bTURmOn9C8zslmn0kMFeykIn', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]} response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 64, 'total_tokens': 78}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_729ea513f7', 'finish_reason': 'tool_calls', 'logprobs': None} id='run-0844a504-7e6b-4ea6-a069-7017e38121ee-0' tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_bTURmOn9C8zslmn0kMFeykIn'}]\n",
"content='Sorry there was an error, please try again.' name='magic_function' id='00d5386f-eb23-4628-9a29-d9ce6a7098cc' tool_call_id='call_bTURmOn9C8zslmn0kMFeykIn'\n",
"content='' additional_kwargs={'tool_calls': [{'id': 'call_JYqvvvWmXow2u012DuPoDHFV', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]} response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 96, 'total_tokens': 110}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_729ea513f7', 'finish_reason': 'tool_calls', 'logprobs': None} id='run-b73b1b1c-c829-4348-98cd-60b315c85448-0' tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_JYqvvvWmXow2u012DuPoDHFV'}]\n",
"content='what is the value of magic_function(3)?' id='4fa7fbe5-758c-47a3-9268-717665d10680'\n",
"content='' additional_kwargs={'tool_calls': [{'id': 'call_ujE0IQBbIQnxcF9gsZXQfdhF', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]} response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 55, 'total_tokens': 69}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518', 'finish_reason': 'tool_calls', 'logprobs': None} id='run-65d689aa-baee-4342-a5d2-048feefab418-0' tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_ujE0IQBbIQnxcF9gsZXQfdhF', 'type': 'tool_call'}] usage_metadata={'input_tokens': 55, 'output_tokens': 14, 'total_tokens': 69}\n",
"content='Sorry there was an error, please try again.' name='magic_function' id='ef8ddf1d-9ad7-4ac0-b784-b673c4d94bbd' tool_call_id='call_ujE0IQBbIQnxcF9gsZXQfdhF'\n",
"content='It seems there was an issue with the previous attempt. Let me try that again.' additional_kwargs={'tool_calls': [{'id': 'call_GcsAfCFUHJ50BN2IOWnwTbQ7', 'function': {'arguments': '{\"input\":3}', 'name': 'magic_function'}, 'type': 'function'}]} response_metadata={'token_usage': {'completion_tokens': 32, 'prompt_tokens': 87, 'total_tokens': 119}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518', 'finish_reason': 'tool_calls', 'logprobs': None} id='run-54527c4b-8ff0-4ee8-8abf-224886bd222e-0' tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_GcsAfCFUHJ50BN2IOWnwTbQ7', 'type': 'tool_call'}] usage_metadata={'input_tokens': 87, 'output_tokens': 32, 'total_tokens': 119}\n",
"{'input': 'what is the value of magic_function(3)?', 'output': 'Agent stopped due to max iterations.'}\n"
]
}
@@ -1118,7 +1132,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 24,
"id": "b94bb169",
"metadata": {},
"outputs": [
@@ -1216,12 +1230,12 @@
"source": [
"### In LangGraph\n",
"\n",
"We can use the [`messages_modifier`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) just as before when passing in [prompt templates](#prompt-templates)."
"We can use the [`state_modifier`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) just as before when passing in [prompt templates](#prompt-templates)."
]
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 25,
"id": "b309ba9a",
"metadata": {},
"outputs": [
@@ -1246,9 +1260,9 @@
}
],
"source": [
"from langchain_core.messages import AnyMessage\n",
"from langgraph.errors import GraphRecursionError\n",
"from langgraph.prebuilt import create_react_agent\n",
"from langgraph.prebuilt.chat_agent_executor import AgentState\n",
"\n",
"magic_step_num = 1\n",
"\n",
@@ -1265,12 +1279,12 @@
"tools = [magic_function]\n",
"\n",
"\n",
"def _modify_messages(messages: list[AnyMessage]):\n",
"def _modify_state_messages(state: AgentState):\n",
" # Give the agent amnesia, only keeping the original user query\n",
" return [(\"system\", \"You are a helpful assistant\"), messages[0]]\n",
" return [(\"system\", \"You are a helpful assistant\"), state[\"messages\"][0]]\n",
"\n",
"\n",
"app = create_react_agent(model, tools, messages_modifier=_modify_messages)\n",
"app = create_react_agent(model, tools, state_modifier=_modify_state_messages)\n",
"\n",
"try:\n",
" for step in app.stream({\"messages\": [(\"human\", query)]}, stream_mode=\"updates\"):\n",
@@ -1308,7 +1322,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
"version": "3.10.4"
}
},
"nbformat": 4,


@@ -1,798 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f331037f-be3f-4782-856f-d55dab952488",
"metadata": {},
"source": [
"# How to migrate chains to LCEL\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [LangChain Expression Language](/docs/concepts#langchain-expression-language-lcel)\n",
"\n",
":::\n",
"\n",
"LCEL is designed to streamline the process of building useful apps with LLMs and combining related components. It does this by providing:\n",
"\n",
"1. **A unified interface**: Every LCEL object implements the `Runnable` interface, which defines a common set of invocation methods (`invoke`, `batch`, `stream`, `ainvoke`, ...). This makes it possible to also automatically and consistently support useful operations like streaming of intermediate steps and batching, since every chain composed of LCEL objects is itself an LCEL object.\n",
"2. **Composition primitives**: LCEL provides a number of primitives that make it easy to compose chains, parallelize components, add fallbacks, dynamically configure chain internals, and more.\n",
"\n",
"LangChain maintains a number of legacy abstractions. Many of these can be reimplemented via short combinations of LCEL primitives. Doing so confers some general advantages:\n",
"\n",
"- The resulting chains typically implement the full `Runnable` interface, including streaming and asynchronous support where appropriate;\n",
"- The chains may be more easily extended or modified;\n",
"- The parameters of the chain are typically surfaced for easier customization (e.g., prompts) over previous versions, which tended to be subclasses and had opaque parameters and internals.\n",
"\n",
"The LCEL implementations can be slightly more verbose, but there are significant benefits in transparency and customizability.\n",
"\n",
"In this guide we review LCEL implementations of common legacy abstractions. Where appropriate, we link out to separate guides with more detail."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b99b47ec",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community langchain langchain-openai faiss-cpu"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "717c8673",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
"cell_type": "markdown",
"id": "e3621b62-a037-42b8-8faa-59575608bb8b",
"metadata": {},
"source": [
"## `LLMChain`\n",
"<span data-heading-keywords=\"llmchain\"></span>\n",
"\n",
"[`LLMChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm.LLMChain.html) combined a prompt template, LLM, and output parser into a class.\n",
"\n",
"Some advantages of switching to the LCEL implementation are:\n",
"\n",
"- Clarity around contents and parameters. The legacy `LLMChain` contains a default output parser and other options.\n",
"- Easier streaming. `LLMChain` only supports streaming via callbacks.\n",
"- Easier access to raw message outputs if desired. `LLMChain` only exposes these via a parameter or via callback.\n",
"\n",
"import { ColumnContainer, Column } from \"@theme/Columns\";\n",
"\n",
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"#### Legacy\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "e628905c-430e-4e4a-9d7c-c91d2f42052e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'adjective': 'funny',\n",
" 'text': \"Why couldn't the bicycle find its way home?\\n\\nBecause it lost its bearings!\"}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"user\", \"Tell me a {adjective} joke\")],\n",
")\n",
"\n",
"chain = LLMChain(llm=ChatOpenAI(), prompt=prompt)\n",
"\n",
"chain({\"adjective\": \"funny\"})"
]
},
{
"cell_type": "markdown",
"id": "cdc3b527-c09e-4c77-9711-c3cc4506cd95",
"metadata": {},
"source": [
"\n",
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "0d2a7cf8-1bc7-405c-bb0d-f2ab2ba3b6ab",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Why couldn't the bicycle stand up by itself?\\n\\nBecause it was two tired!\""
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"user\", \"Tell me a {adjective} joke\")],\n",
")\n",
"\n",
"chain = prompt | ChatOpenAI() | StrOutputParser()\n",
"\n",
"chain.invoke({\"adjective\": \"funny\"})"
]
},
{
"cell_type": "markdown",
"id": "3c0b0513-77b8-4371-a20e-3e487cec7e7f",
"metadata": {},
"source": [
"\n",
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"Note that `LLMChain` by default returns a `dict` containing both the input and the output. If this behavior is desired, we can replicate it using another LCEL primitive, [`RunnablePassthrough`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html):"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "529206c5-abbe-4213-9e6c-3b8586c8000d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'adjective': 'funny',\n",
" 'text': \"Why couldn't the bicycle stand up by itself?\\n\\nBecause it was two tired!\"}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"outer_chain = RunnablePassthrough().assign(text=chain)\n",
"\n",
"outer_chain.invoke({\"adjective\": \"funny\"})"
]
},
{
"cell_type": "markdown",
"id": "29d2e26c-2854-4971-9c2b-613450993921",
"metadata": {},
"source": [
"See [this tutorial](/docs/tutorials/llm_chain) for more detail on building with prompt templates, LLMs, and output parsers."
]
},
{
"cell_type": "markdown",
"id": "00df631d-5121-4918-94aa-b88acce9b769",
"metadata": {},
"source": [
"## `ConversationChain`\n",
"<span data-heading-keywords=\"conversationchain\"></span>\n",
"\n",
"[`ConversationChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html) incorporates a memory of previous messages to sustain a stateful conversation.\n",
"\n",
"Some advantages of switching to the LCEL implementation are:\n",
"\n",
"- Innate support for threads/separate sessions. To make this work with `ConversationChain`, you'd need to instantiate a separate memory class outside the chain.\n",
"- More explicit parameters. `ConversationChain` contains a hidden default prompt, which can cause confusion.\n",
"- Streaming support. `ConversationChain` only supports streaming via callbacks.\n",
"\n",
"`RunnableWithMessageHistory` implements sessions via configuration parameters. It should be instantiated with a callable that returns a [chat message history](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.BaseChatMessageHistory.html). By default, it expects this function to take a single argument `session_id`.\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"\n",
"#### Legacy\n"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "4f2cc6dc-d70a-4c13-9258-452f14290da6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input': 'how are you?',\n",
" 'history': '',\n",
" 'response': \"Arrr, I be doin' well, me matey! Just sailin' the high seas in search of treasure and adventure. How can I assist ye today?\"}"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import ConversationChain\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"template = \"\"\"\n",
"You are a pirate. Answer the following questions as best you can.\n",
"Chat history: {history}\n",
"Question: {input}\n",
"\"\"\"\n",
"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"memory = ConversationBufferMemory()\n",
"\n",
"chain = ConversationChain(\n",
" llm=ChatOpenAI(),\n",
" memory=memory,\n",
" prompt=prompt,\n",
")\n",
"\n",
"chain({\"input\": \"how are you?\"})"
]
},
{
"cell_type": "markdown",
"id": "f8e36b0e-c7dc-4130-a51b-189d4b756c7f",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "173e1a9c-2a18-4669-b0de-136f39197786",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Arr, matey! I be sailin' the high seas with me crew, searchin' for buried treasure and adventure! How be ye doin' on this fine day?\""
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.chat_history import InMemoryChatMessageHistory\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are a pirate. Answer the following questions as best you can.\"),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"history = InMemoryChatMessageHistory()\n",
"\n",
"chain = prompt | ChatOpenAI() | StrOutputParser()\n",
"\n",
"wrapped_chain = RunnableWithMessageHistory(chain, lambda x: history)\n",
"\n",
"wrapped_chain.invoke(\n",
" {\"input\": \"how are you?\"},\n",
" config={\"configurable\": {\"session_id\": \"42\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "6b386ce6-895e-442c-88f3-7bec0ab9f401",
"metadata": {},
"source": [
"\n",
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"The above example uses the same `history` for all sessions. The example below shows how to use a different chat history for each session."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "4e05994f-1fbc-4699-bf2e-62cb0e4deeb8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Ahoy there! What be ye wantin' from this old pirate?\", response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 29, 'total_tokens': 44}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-1846d5f5-0dda-43b6-bb49-864e541f9c29-0', usage_metadata={'input_tokens': 29, 'output_tokens': 15, 'total_tokens': 44})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.chat_history import BaseChatMessageHistory\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"\n",
"store = {}\n",
"\n",
"\n",
"def get_session_history(session_id: str) -> BaseChatMessageHistory:\n",
" if session_id not in store:\n",
" store[session_id] = InMemoryChatMessageHistory()\n",
" return store[session_id]\n",
"\n",
"\n",
"chain = prompt | ChatOpenAI() | StrOutputParser()\n",
"\n",
"wrapped_chain = RunnableWithMessageHistory(chain, get_session_history)\n",
"\n",
"wrapped_chain.invoke(\"Hello!\", config={\"configurable\": {\"session_id\": \"abc123\"}})"
]
},
{
"cell_type": "markdown",
"id": "c36ebecb",
"metadata": {},
"source": [
"See [this tutorial](/docs/tutorials/chatbot) for a more end-to-end guide on building with [`RunnableWithMessageHistory`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html).\n",
"\n",
"## `RetrievalQA`\n",
"<span data-heading-keywords=\"retrievalqa\"></span>\n",
"\n",
"The [`RetrievalQA`](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html) chain performed natural-language question answering over a data source using retrieval-augmented generation.\n",
"\n",
"Some advantages of switching to the LCEL implementation are:\n",
"\n",
"- Easier customizability. Details such as the prompt and how documents are formatted are only configurable via specific parameters in the `RetrievalQA` chain.\n",
"- More easily return source documents.\n",
"- Support for runnable methods like streaming and async operations.\n",
"\n",
"Now let's look at them side-by-side. We'll use the same ingestion code to load a [blog post by Lilian Weng](https://lilianweng.github.io/posts/2023-06-23-agent/) on autonomous agents into a local vector store:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "1efbe16e",
"metadata": {},
"outputs": [],
"source": [
"# Load docs\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai.chat_models import ChatOpenAI\n",
"from langchain_openai.embeddings import OpenAIEmbeddings\n",
"\n",
"loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n",
"data = loader.load()\n",
"\n",
"# Split\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)\n",
"\n",
"# Store splits\n",
"vectorstore = FAISS.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())\n",
"\n",
"# LLM\n",
"llm = ChatOpenAI()"
]
},
{
"cell_type": "markdown",
"id": "c7e16438",
"metadata": {},
"source": [
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"#### Legacy"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "43bf55a0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'query': 'What are autonomous agents?',\n",
" 'result': 'Autonomous agents are LLM-empowered agents that handle autonomous design, planning, and performance of complex tasks, such as scientific experiments. These agents can browse the Internet, read documentation, execute code, call robotics experimentation APIs, and leverage other LLMs. They are capable of reasoning and planning ahead for complicated tasks by breaking them down into smaller steps.'}"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import hub\n",
"from langchain.chains import RetrievalQA\n",
"\n",
"# See full prompt at https://smith.langchain.com/hub/rlm/rag-prompt\n",
"prompt = hub.pull(\"rlm/rag-prompt\")\n",
"\n",
"qa_chain = RetrievalQA.from_llm(\n",
" llm, retriever=vectorstore.as_retriever(), prompt=prompt\n",
")\n",
"\n",
"qa_chain(\"What are autonomous agents?\")"
]
},
{
"cell_type": "markdown",
"id": "081948e5",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "9efcc931",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Autonomous agents are agents that can handle autonomous design, planning, and performance of complex tasks, such as scientific experiments. They can browse the Internet, read documentation, execute code, call robotics experimentation APIs, and leverage other language model models. These agents use reasoning steps to develop solutions to specific tasks, like creating a novel anticancer drug.'"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import hub\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"# See full prompt at https://smith.langchain.com/hub/rlm/rag-prompt\n",
"prompt = hub.pull(\"rlm/rag-prompt\")\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"qa_chain = (\n",
" {\n",
" \"context\": vectorstore.as_retriever() | format_docs,\n",
" \"question\": RunnablePassthrough(),\n",
" }\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")\n",
"\n",
"qa_chain.invoke(\"What are autonomous agents?\")"
]
},
{
"cell_type": "markdown",
"id": "d6f44fe8",
"metadata": {},
"source": [
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"The LCEL implementation exposes the internals of what's happening around retrieving, formatting documents, and passing them through a prompt to the LLM, but it is more verbose. You can customize and wrap this composition logic in a helper function, or use the higher-level [`create_retrieval_chain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html) and [`create_stuff_documents_chain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) helper method:"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "5fe42761",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input': 'What are autonomous agents?',\n",
" 'context': [Document(page_content='Boiko et al. (2023) also looked into LLM-empowered agents for scientific discovery, to handle autonomous design, planning, and performance of complex scientific experiments. This agent can use tools to browse the Internet, read documentation, execute code, call robotics experimentation APIs and leverage other LLMs.\\nFor example, when requested to \"develop a novel anticancer drug\", the model came up with the following reasoning steps:', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content='Weng, Lilian. (Jun 2023). “LLM-powered Autonomous Agents”. LilLog. https://lilianweng.github.io/posts/2023-06-23-agent/.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content=\"LLM Powered Autonomous Agents | Lil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nLil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nPosts\\n\\n\\n\\n\\nArchive\\n\\n\\n\\n\\nSearch\\n\\n\\n\\n\\nTags\\n\\n\\n\\n\\nFAQ\\n\\n\\n\\n\\nemojisearch.app\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n LLM Powered Autonomous Agents\\n \\nDate: June 23, 2023 | Estimated Reading Time: 31 min | Author: Lilian Weng\\n\\n\\n \\n\\n\\nTable of Contents\\n\\n\\n\\nAgent System Overview\\n\\nComponent One: Planning\\n\\nTask Decomposition\\n\\nSelf-Reflection\\n\\n\\nComponent Two: Memory\\n\\nTypes of Memory\\n\\nMaximum Inner Product Search (MIPS)\", metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'})],\n",
" 'answer': 'Autonomous agents are entities that can operate independently, making decisions and taking actions without direct human intervention. These agents can perform tasks such as planning, executing complex experiments, and leveraging various tools and resources to achieve objectives. In the context provided, LLM-powered autonomous agents are specifically designed for scientific discovery, capable of handling tasks like designing novel anticancer drugs through reasoning steps.'}"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import hub\n",
"from langchain.chains import create_retrieval_chain\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"\n",
"# See full prompt at https://smith.langchain.com/hub/langchain-ai/retrieval-qa-chat\n",
"retrieval_qa_chat_prompt = hub.pull(\"langchain-ai/retrieval-qa-chat\")\n",
"\n",
"combine_docs_chain = create_stuff_documents_chain(llm, retrieval_qa_chat_prompt)\n",
"rag_chain = create_retrieval_chain(vectorstore.as_retriever(), combine_docs_chain)\n",
"\n",
"rag_chain.invoke({\"input\": \"What are autonomous agents?\"})"
]
},
{
"cell_type": "markdown",
"id": "2772f4e9",
"metadata": {},
"source": [
"## `ConversationalRetrievalChain`\n",
"<span data-heading-keywords=\"conversationalretrievalchain\"></span>\n",
"\n",
"The [`ConversationalRetrievalChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain.html) was an all-in one way that combined retrieval-augmented generation with chat history, allowing you to \"chat with\" your documents.\n",
"\n",
"Advantages of switching to the LCEL implementation are similar to the `RetrievalQA` section above:\n",
"\n",
"- Clearer internals. The `ConversationalRetrievalChain` chain hides an entire question rephrasing step which dereferences the initial query against the chat history.\n",
" - This means the class contains two sets of configurable prompts, LLMs, etc.\n",
"- More easily return source documents.\n",
"- Support for runnable methods like streaming and async operations.\n",
"\n",
"Here are side-by-side implementations with custom prompts. We'll reuse the loaded documents and vector store from the previous section:"
]
},
{
"cell_type": "markdown",
"id": "8bc06416",
"metadata": {},
"source": [
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"#### Legacy"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "54eb9576",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'question': 'What are autonomous agents?',\n",
" 'chat_history': '',\n",
" 'answer': 'Autonomous agents are powered by Large Language Models (LLMs) to handle tasks like scientific discovery and complex experiments autonomously. These agents can browse the internet, read documentation, execute code, and leverage other LLMs to perform tasks. They can reason and plan ahead to decompose complicated tasks into manageable steps.'}"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"\n",
"condense_question_template = \"\"\"\n",
"Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.\n",
"\n",
"Chat History:\n",
"{chat_history}\n",
"Follow Up Input: {question}\n",
"Standalone question:\"\"\"\n",
"\n",
"condense_question_prompt = ChatPromptTemplate.from_template(condense_question_template)\n",
"\n",
"qa_template = \"\"\"\n",
"You are an assistant for question-answering tasks.\n",
"Use the following pieces of retrieved context to answer\n",
"the question. If you don't know the answer, say that you\n",
"don't know. Use three sentences maximum and keep the\n",
"answer concise.\n",
"\n",
"Chat History:\n",
"{chat_history}\n",
"\n",
"Other context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"\n",
"qa_prompt = ChatPromptTemplate.from_template(qa_template)\n",
"\n",
"convo_qa_chain = ConversationalRetrievalChain.from_llm(\n",
" llm,\n",
" vectorstore.as_retriever(),\n",
" condense_question_prompt=condense_question_prompt,\n",
" combine_docs_chain_kwargs={\n",
" \"prompt\": qa_prompt,\n",
" },\n",
")\n",
"\n",
"convo_qa_chain(\n",
" {\n",
" \"question\": \"What are autonomous agents?\",\n",
" \"chat_history\": \"\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "43a8a23c",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "c884b138",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input': 'What are autonomous agents?',\n",
" 'chat_history': [],\n",
" 'context': [Document(page_content='Boiko et al. (2023) also looked into LLM-empowered agents for scientific discovery, to handle autonomous design, planning, and performance of complex scientific experiments. This agent can use tools to browse the Internet, read documentation, execute code, call robotics experimentation APIs and leverage other LLMs.\\nFor example, when requested to \"develop a novel anticancer drug\", the model came up with the following reasoning steps:', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content='Weng, Lilian. (Jun 2023). “LLM-powered Autonomous Agents”. LilLog. https://lilianweng.github.io/posts/2023-06-23-agent/.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content='Or\\n@article{weng2023agent,\\n title = \"LLM-powered Autonomous Agents\",\\n author = \"Weng, Lilian\",\\n journal = \"lilianweng.github.io\",\\n year = \"2023\",\\n month = \"Jun\",\\n url = \"https://lilianweng.github.io/posts/2023-06-23-agent/\"\\n}\\nReferences#\\n[1] Wei et al. “Chain of thought prompting elicits reasoning in large language models.” NeurIPS 2022\\n[2] Yao et al. “Tree of Thoughts: Dliberate Problem Solving with Large Language Models.” arXiv preprint arXiv:2305.10601 (2023).', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'})],\n",
" 'answer': 'Autonomous agents are entities capable of acting independently, making decisions, and performing tasks without direct human intervention. These agents can interact with their environment, perceive information, and take actions based on their goals or objectives. They often use artificial intelligence techniques to navigate and accomplish tasks in complex or dynamic environments.'}"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import create_history_aware_retriever, create_retrieval_chain\n",
"\n",
"condense_question_system_template = (\n",
" \"Given a chat history and the latest user question \"\n",
" \"which might reference context in the chat history, \"\n",
" \"formulate a standalone question which can be understood \"\n",
" \"without the chat history. Do NOT answer the question, \"\n",
" \"just reformulate it if needed and otherwise return it as is.\"\n",
")\n",
"\n",
"condense_question_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", condense_question_system_template),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"history_aware_retriever = create_history_aware_retriever(\n",
" llm, vectorstore.as_retriever(), condense_question_prompt\n",
")\n",
"\n",
"system_prompt = (\n",
" \"You are an assistant for question-answering tasks. \"\n",
" \"Use the following pieces of retrieved context to answer \"\n",
" \"the question. If you don't know the answer, say that you \"\n",
" \"don't know. Use three sentences maximum and keep the \"\n",
" \"answer concise.\"\n",
" \"\\n\\n\"\n",
" \"{context}\"\n",
")\n",
"\n",
"qa_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system_prompt),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"qa_chain = create_stuff_documents_chain(llm, qa_prompt)\n",
"\n",
"convo_qa_chain = create_retrieval_chain(history_aware_retriever, qa_chain)\n",
"\n",
"convo_qa_chain.invoke(\n",
" {\n",
" \"input\": \"What are autonomous agents?\",\n",
" \"chat_history\": [],\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "b2717810",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"</ColumnContainer>\n",
"\n",
"## Next steps\n",
"\n",
"You've now seen how to migrate existing usage of some legacy chains to LCEL.\n",
"\n",
"Next, check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -1,27 +1,97 @@
# How to use LangChain with different Pydantic versions
- Pydantic v2 was released in June, 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/)
- v2 contains a number of breaking changes (https://docs.pydantic.dev/2.0/migration/)
- Pydantic v2 and v1 are under the same package name, so both versions cannot be installed at the same time
- Pydantic v2 was released in June, 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/).
- v2 contains a number of breaking changes (https://docs.pydantic.dev/2.0/migration/).
- Pydantic 1 End of Life was in June 2024. LangChain will be dropping support for Pydantic 1 in the near future,
and likely migrating internally to Pydantic 2. The timeline is tentatively September. This change will be accompanied by a minor version bump in the main langchain packages to version 0.3.x.
## LangChain Pydantic migration plan
As of `langchain>=0.0.267`, LangChain allows users to install either Pydantic V1 or V2.
As of `langchain>=0.0.267`, LangChain will allow users to install either Pydantic V1 or V2.
* Internally LangChain will continue to [use V1](https://docs.pydantic.dev/latest/migration/#continue-using-pydantic-v1-features).
* During this time, users can pin their pydantic version to v1 to avoid breaking changes, or start a partial
migration using pydantic v2 throughout their code, but avoiding mixing v1 and v2 code for LangChain (see below).
Internally, LangChain continues to use the [Pydantic V1](https://docs.pydantic.dev/latest/migration/#continue-using-pydantic-v1-features) via
the v1 namespace of Pydantic 2.
Users can either pin to pydantic v1 and upgrade their code in one go once LangChain has migrated to v2 internally, or they can start a partial migration to v2, but must avoid mixing v1 and v2 code for LangChain.
Because Pydantic does not support mixing .v1 and .v2 objects, users should be aware of a number of issues
when using LangChain with Pydantic.
## 1. Passing Pydantic objects to LangChain APIs
Most LangChain APIs that accept Pydantic objects have been updated to accept both Pydantic v1 and v2 objects.
* Pydantic v1 objects correspond to subclasses of `pydantic.BaseModel` if `pydantic 1` is installed or subclasses of `pydantic.v1.BaseModel` if `pydantic 2` is installed.
* Pydantic v2 objects correspond to subclasses of `pydantic.BaseModel` if `pydantic 2` is installed.
| API | Pydantic 1 | Pydantic 2 |
|----------------------------------------|------------|----------------------------------------------------------------|
| `BaseChatModel.bind_tools` | Yes | langchain-core>=0.2.23, appropriate version of partner package |
| `BaseChatModel.with_structured_output` | Yes | langchain-core>=0.2.23, appropriate version of partner package |
| `Tool.from_function` | Yes | langchain-core>=0.2.23 |
| `StructuredTool.from_function` | Yes | langchain-core>=0.2.23 |
Partner packages that accept pydantic v2 objects via `bind_tools` or `with_structured_output` APIs:
| Package Name | pydantic v1 | pydantic v2 |
|---------------------|-------------|-------------|
| langchain-mistralai | Yes | >=0.1.11 |
| langchain-anthropic | Yes | >=0.1.21 |
| langchain-robocorp | Yes | >=0.0.10 |
| langchain-openai | Yes | >=0.1.19 |
| langchain-fireworks | Yes | >=0.1.5 |
Additional partner packages will be updated to accept Pydantic v2 objects in the future.
If you are still seeing issues with these APIs or other APIs that accept Pydantic objects, please open an issue, and we'll
address it.
Example:
With `langchain-core<0.2.23`, use Pydantic v1 objects when passing to LangChain APIs.
```python
from langchain_openai import ChatOpenAI
from pydantic.v1 import BaseModel # <-- Note v1 namespace
class Person(BaseModel):
"""Personal information"""
name: str
model = ChatOpenAI()
model = model.with_structured_output(Person)
model.invoke('Bob is a person.')
```
With `langchain-core>=0.2.23`, use either Pydantic v1 or v2 objects when passing to LangChain APIs.
```python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel
class Person(BaseModel):
"""Personal information"""
name: str
model = ChatOpenAI()
model = model.with_structured_output(Person)
model.invoke('Bob is a person.')
```
## 2. Sub-classing LangChain models
Because LangChain internally uses Pydantic v1, if you are sub-classing LangChain models, you should use Pydantic v1
primitives.
Below are two examples showing how to avoid mixing pydantic v1 and v2 code in the case of inheritance and in the case of passing objects to LangChain.
**Example 1: Extending via inheritance**
**YES**
```python
from pydantic.v1 import root_validator, validator
from pydantic.v1 import validator
from langchain_core.tools import BaseTool
class CustomTool(BaseTool): # BaseTool is v1 code
@@ -70,38 +140,33 @@ CustomTool(
)
```
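For reference, a minimal self-contained sketch of this pattern, sub-classing `BaseTool` while staying entirely within Pydantic v1 primitives. The extra field `x`, its validator, and the `_run` body are illustrative, not the guide's exact code:
```python
from langchain_core.tools import BaseTool
from pydantic.v1 import validator  # v1 namespace, matching BaseTool


class CustomTool(BaseTool):  # BaseTool is v1 code
    name: str = "custom_tool"
    description: str = "A tool that always returns 'hello'."
    x: int = 1  # illustrative extra field, not part of BaseTool

    @validator("x")  # v1 validator, so it matches BaseTool's Pydantic version
    def validate_x(cls, v: int) -> int:
        # arbitrary illustrative check
        if v < 0:
            raise ValueError("x must be non-negative")
        return v

    def _run(self, query: str) -> str:
        # the tool's behavior is irrelevant to the v1/v2 point
        return "hello"


CustomTool(
    name="custom_tool",
    description="A tool that always returns 'hello'.",
    x=1,
)
```
Because `BaseTool` is a v1 model, the validator decorator comes from `pydantic.v1`; pulling a validator from the top-level `pydantic` (v2) namespace here is precisely the kind of v1/v2 mixing this guide warns against.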
**Example 2: Passing objects to LangChain**
**YES**
```python
from langchain_core.tools import Tool
from pydantic.v1 import BaseModel, Field # <-- Uses v1 namespace
class CalculatorInput(BaseModel):
    question: str = Field()
Tool.from_function( # <-- tool uses v1 namespace
    func=lambda question: 'hello',
    name="Calculator",
    description="useful for when you need to answer questions about math",
    args_schema=CalculatorInput
)
```
## 3. Disable run-time validation for LangChain objects used inside Pydantic v2 models
e.g.,
```python
from typing import Annotated
from langchain_openai import ChatOpenAI # <-- ChatOpenAI uses pydantic v1
from pydantic import BaseModel, SkipValidation
class Foo(BaseModel): # <-- BaseModel is from Pydantic v2
    model: Annotated[ChatOpenAI, SkipValidation()]
Foo(model=ChatOpenAI(api_key="hello"))
```
**NO**
```python
from langchain_core.tools import Tool
from pydantic import BaseModel, Field # <-- Uses v2 namespace
class CalculatorInput(BaseModel):
    question: str = Field()
Tool.from_function( # <-- tool uses v1 namespace
    func=lambda question: 'hello',
    name="Calculator",
    description="useful for when you need to answer questions about math",
    args_schema=CalculatorInput
)
```
## 4: LangServe cannot generate OpenAPI docs if running Pydantic 2
If you are using Pydantic 2, you will not be able to generate OpenAPI docs using LangServe.
If you need OpenAPI docs, your options are to either install Pydantic 1:
`pip install pydantic==1.10.17`
or else to use the `APIHandler` object in LangChain to manually create the routes for your API.
See: https://python.langchain.com/v0.2/docs/langserve/#pydantic


@@ -43,7 +43,7 @@
"\n",
"This is the easiest and most reliable way to get structured outputs. `with_structured_output()` is implemented for models that provide native APIs for structuring outputs, like tool/function calling or JSON mode, and makes use of these capabilities under the hood.\n",
"\n",
"This method takes a schema as input which specifies the names, types, and descriptions of the desired output attributes. The method returns a model-like Runnable, except that instead of outputting strings or Messages it outputs objects corresponding to the given schema. The schema can be specified as a [JSON Schema](https://json-schema.org/) or a Pydantic class. If JSON Schema is used then a dictionary will be returned by the Runnable, and if a Pydantic class is used then Pydantic objects will be returned.\n",
"This method takes a schema as input which specifies the names, types, and descriptions of the desired output attributes. The method returns a model-like Runnable, except that instead of outputting strings or Messages it outputs objects corresponding to the given schema. The schema can be specified as a TypedDict class, [JSON Schema](https://json-schema.org/) or a Pydantic class. If TypedDict or JSON Schema are used then a dictionary will be returned by the Runnable, and if a Pydantic class is used then a Pydantic object will be returned.\n",
"\n",
"As an example, let's get a model to generate a joke and separate the setup from the punchline:\n",
"\n",
@@ -58,7 +58,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "6d55008f",
"metadata": {},
"outputs": [],
@@ -68,7 +68,7 @@
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4-0125-preview\", temperature=0)"
"llm = ChatOpenAI(model=\"gpt-4o\", temperature=0)"
]
},
{
@@ -76,22 +76,24 @@
"id": "a808a401-be1f-49f9-ad13-58dd68f7db5f",
"metadata": {},
"source": [
"If we want the model to return a Pydantic object, we just need to pass in the desired Pydantic class:"
"### Pydantic class\n",
"\n",
"If we want the model to return a Pydantic object, we just need to pass in the desired Pydantic class. The key advantage of using Pydantic is that the model-generated output will be validated. Pydantic will raise an error if any required fields are missing or if any fields are of the wrong type."
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "070bf702",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!', rating=8)"
"Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!', rating=7)"
]
},
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -102,12 +104,15 @@
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"# Pydantic\n",
"class Joke(BaseModel):\n",
" \"\"\"Joke to tell user.\"\"\"\n",
"\n",
" setup: str = Field(description=\"The setup of the joke\")\n",
" punchline: str = Field(description=\"The punchline to the joke\")\n",
" rating: Optional[int] = Field(description=\"How funny the joke is, from 1 to 10\")\n",
" rating: Optional[int] = Field(\n",
" default=None, description=\"How funny the joke is, from 1 to 10\"\n",
" )\n",
"\n",
"\n",
"structured_llm = llm.with_structured_output(Joke)\n",
@@ -130,12 +135,73 @@
"id": "deddb6d3",
"metadata": {},
"source": [
"We can also pass in a [JSON Schema](https://json-schema.org/) dict if you prefer not to use Pydantic. In this case, the response is also a dict:"
"### TypedDict or JSON Schema\n",
"\n",
"If you don't want to use Pydantic, explicitly don't want validation of the arguments, or want to be able to stream the model outputs, you can define your schema using a TypedDict class. We can optionally use a special `Annotated` syntax supported by LangChain that allows you to specify the default value and description of a field. Note, the default value is *not* filled in automatically if the model doesn't generate it, it is only used in defining the schema that is passed to the model.\n",
"\n",
":::info Requirements\n",
"\n",
"- Core: `langchain-core>=0.2.26`\n",
"- Typing extensions: It is highly recommended to import `Annotated` and `TypedDict` from `typing_extensions` instead of `typing` to ensure consistent behavior across Python versions.\n",
"\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "70d82891-42e8-424a-919e-07d83bcfec61",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'setup': 'Why was the cat sitting on the computer?',\n",
" 'punchline': 'Because it wanted to keep an eye on the mouse!',\n",
" 'rating': 7}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing_extensions import Annotated, TypedDict\n",
"\n",
"\n",
"# TypedDict\n",
"class Joke(TypedDict):\n",
" \"\"\"Joke to tell user.\"\"\"\n",
"\n",
" setup: Annotated[str, ..., \"The setup of the joke\"]\n",
"\n",
" # Alternatively, we could have specified setup as:\n",
"\n",
" # setup: str # no default, no description\n",
" # setup: Annotated[str, ...] # no default, no description\n",
" # setup: Annotated[str, \"foo\"] # default, no description\n",
"\n",
" punchline: Annotated[str, ..., \"The punchline of the joke\"]\n",
" rating: Annotated[Optional[int], None, \"How funny the joke is, from 1 to 10\"]\n",
"\n",
"\n",
"structured_llm = llm.with_structured_output(Joke)\n",
"\n",
"structured_llm.invoke(\"Tell me a joke about cats\")"
]
},
{
"cell_type": "markdown",
"id": "e4d7b4dc-f617-4ea8-aa58-847c228791b4",
"metadata": {},
"source": [
"Equivalently, we can pass in a [JSON Schema](https://json-schema.org/) dict. This requires no imports or classes and makes it very clear exactly how each parameter is documented, at the cost of being a bit more verbose."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "6700994a",
"metadata": {},
"outputs": [
@@ -144,10 +210,10 @@
"text/plain": [
"{'setup': 'Why was the cat sitting on the computer?',\n",
" 'punchline': 'Because it wanted to keep an eye on the mouse!',\n",
" 'rating': 8}"
" 'rating': 7}"
]
},
"execution_count": 8,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@@ -169,6 +235,7 @@
" \"rating\": {\n",
" \"type\": \"integer\",\n",
" \"description\": \"How funny the joke is, from 1 to 10\",\n",
" \"default\": None,\n",
" },\n",
" },\n",
" \"required\": [\"setup\", \"punchline\"],\n",
@@ -185,7 +252,7 @@
"source": [
"### Choosing between multiple schemas\n",
"\n",
"The simplest way to let the model choose from multiple schemas is to create a parent Pydantic class that has a Union-typed attribute:"
"The simplest way to let the model choose from multiple schemas is to create a parent schema that has a Union-typed attribute:"
]
},
{
@@ -209,6 +276,17 @@
"from typing import Union\n",
"\n",
"\n",
"# Pydantic\n",
"class Joke(BaseModel):\n",
" \"\"\"Joke to tell user.\"\"\"\n",
"\n",
" setup: str = Field(description=\"The setup of the joke\")\n",
" punchline: str = Field(description=\"The punchline to the joke\")\n",
" rating: Optional[int] = Field(\n",
" default=None, description=\"How funny the joke is, from 1 to 10\"\n",
" )\n",
"\n",
"\n",
"class ConversationalResponse(BaseModel):\n",
" \"\"\"Respond in a conversational manner. Be kind and helpful.\"\"\"\n",
"\n",
@@ -260,7 +338,7 @@
"source": [
"### Streaming\n",
"\n",
"We can stream outputs from our structured model when the output type is a dict (i.e., when the schema is specified as a JSON Schema dict). \n",
"We can stream outputs from our structured model when the output type is a dict (i.e., when the schema is specified as a TypedDict class or JSON Schema dict). \n",
"\n",
":::info\n",
"\n",
@@ -271,7 +349,7 @@
},
{
"cell_type": "code",
"execution_count": 43,
"execution_count": 9,
"id": "aff89877-28a3-472f-a1aa-eff893fe7736",
"metadata": {},
"outputs": [
@@ -302,12 +380,24 @@
"{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the'}\n",
"{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse'}\n",
"{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse!'}\n",
"{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse!', 'rating': 8}\n"
"{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse!', 'rating': 7}\n"
]
}
],
"source": [
"structured_llm = llm.with_structured_output(json_schema)\n",
"from typing_extensions import Annotated, TypedDict\n",
"\n",
"\n",
"# TypedDict\n",
"class Joke(TypedDict):\n",
" \"\"\"Joke to tell user.\"\"\"\n",
"\n",
" setup: Annotated[str, ..., \"The setup of the joke\"]\n",
" punchline: Annotated[str, ..., \"The punchline of the joke\"]\n",
" rating: Annotated[Optional[int], None, \"How funny the joke is, from 1 to 10\"]\n",
"\n",
"\n",
"structured_llm = llm.with_structured_output(Joke)\n",
"\n",
"for chunk in structured_llm.stream(\"Tell me a joke about cats\"):\n",
" print(chunk)"
@@ -327,7 +417,7 @@
},
{
"cell_type": "code",
"execution_count": 47,
"execution_count": 11,
"id": "283ba784-2072-47ee-9b2c-1119e3c69e8e",
"metadata": {},
"outputs": [
@@ -335,11 +425,11 @@
"data": {
"text/plain": [
"{'setup': 'Woodpecker',\n",
" 'punchline': \"Woodpecker goes 'knock knock', but don't worry, they never expect you to answer the door!\",\n",
" 'rating': 8}"
" 'punchline': \"Woodpecker who? Woodpecker who can't find a tree is just a bird with a headache!\",\n",
" 'rating': 7}"
]
},
"execution_count": 47,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@@ -377,7 +467,7 @@
},
{
"cell_type": "code",
"execution_count": 46,
"execution_count": 12,
"id": "d7381cb0-b2c3-4302-a319-ed72d0b9e43f",
"metadata": {},
"outputs": [
@@ -385,11 +475,11 @@
"data": {
"text/plain": [
"{'setup': 'Crocodile',\n",
" 'punchline': \"Crocodile 'see you later', but in a while, it becomes an alligator!\",\n",
" 'punchline': 'Crocodile be seeing you later, alligator!',\n",
" 'rating': 7}"
]
},
"execution_count": 46,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@@ -491,23 +581,24 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 15,
"id": "df0370e3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!', rating=None)"
"{'setup': 'Why was the cat sitting on the computer?',\n",
" 'punchline': 'Because it wanted to keep an eye on the mouse!'}"
]
},
"execution_count": 6,
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"structured_llm = llm.with_structured_output(Joke, method=\"json_mode\")\n",
"structured_llm = llm.with_structured_output(None, method=\"json_mode\")\n",
"\n",
"structured_llm.invoke(\n",
" \"Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys\"\n",
@@ -526,19 +617,21 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 17,
"id": "10ed2842",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_ASK4EmZeZ69Fi3p554Mb4rWy', 'function': {'arguments': '{\"setup\":\"Why was the cat sitting on the computer?\",\"punchline\":\"Because it wanted to keep an eye on the mouse!\"}', 'name': 'Joke'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 36, 'prompt_tokens': 107, 'total_tokens': 143}, 'model_name': 'gpt-4-0125-preview', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-6491d35b-9164-4656-b75c-d7882cfb76cb-0', tool_calls=[{'name': 'Joke', 'args': {'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse!'}, 'id': 'call_ASK4EmZeZ69Fi3p554Mb4rWy'}], usage_metadata={'input_tokens': 107, 'output_tokens': 36, 'total_tokens': 143}),\n",
" 'parsed': Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!', rating=None),\n",
"{'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_f25ZRmh8u5vHlOWfTUw8sJFZ', 'function': {'arguments': '{\"setup\":\"Why was the cat sitting on the computer?\",\"punchline\":\"Because it wanted to keep an eye on the mouse!\",\"rating\":7}', 'name': 'Joke'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 33, 'prompt_tokens': 93, 'total_tokens': 126}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_4e2b2da518', 'finish_reason': 'stop', 'logprobs': None}, id='run-d880d7e2-df08-4e9e-ad92-dfc29f2fd52f-0', tool_calls=[{'name': 'Joke', 'args': {'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse!', 'rating': 7}, 'id': 'call_f25ZRmh8u5vHlOWfTUw8sJFZ', 'type': 'tool_call'}], usage_metadata={'input_tokens': 93, 'output_tokens': 33, 'total_tokens': 126}),\n",
" 'parsed': {'setup': 'Why was the cat sitting on the computer?',\n",
" 'punchline': 'Because it wanted to keep an eye on the mouse!',\n",
" 'rating': 7},\n",
" 'parsing_error': None}"
]
},
"execution_count": 5,
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@@ -546,9 +639,7 @@
"source": [
"structured_llm = llm.with_structured_output(Joke, include_raw=True)\n",
"\n",
"structured_llm.invoke(\n",
" \"Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys\"\n",
")"
"structured_llm.invoke(\"Tell me a joke about cats\")"
]
},
{
@@ -824,7 +915,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -838,7 +929,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -22,71 +22,86 @@
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [LangChain Tools](/docs/concepts/#tools)\n",
"- [Tool calling](/docs/concepts/#functiontool-calling)\n",
"- [Tools](/docs/concepts/#tools)\n",
"- [Output parsers](/docs/concepts/#output-parsers)\n",
":::\n",
"\n",
"[Tool calling](/docs/concepts/#functiontool-calling) allows a chat model to respond to a given prompt by \"calling a tool\".\n",
"\n",
"Remember, while the name \"tool calling\" implies that the model is directly performing some action, this is actually not the case! The model only generates the arguments to a tool, and actually running the tool (or not) is up to the user.\n",
"\n",
"Tool calling is a general technique that generates structured output from a model, and you can use it even when you don't intend to invoke any tools. An example use-case of that is [extraction from unstructured text](/docs/tutorials/extraction/).\n",
"\n",
"![Diagram of calling a tool](/img/tool_call.png)\n",
"\n",
"If you want to see how to use the model-generated tool call to actually run a tool [check out this guide](/docs/how_to/tool_results_pass_to_model/).\n",
"\n",
":::note Supported models\n",
"\n",
"Tool calling is not universal, but is supported by many popular LLM providers. You can find a [list of all models that support tool calling here](/docs/integrations/chat/).\n",
"\n",
":::\n",
"\n",
":::info Tool calling vs function calling\n",
"\n",
"We use the term tool calling interchangeably with function calling. Although\n",
"function calling is sometimes meant to refer to invocations of a single function,\n",
"we treat all models as though they can return multiple tool or function calls in \n",
"each message.\n",
"\n",
":::\n",
"\n",
":::info Supported models\n",
"\n",
"You can find a [list of all models that support tool calling](/docs/integrations/chat/).\n",
"\n",
":::\n",
"\n",
"Tool calling allows a chat model to respond to a given prompt by \"calling a tool\".\n",
"While the name implies that the model is performing \n",
"some action, this is actually not the case! The model generates the \n",
"arguments to a tool, and actually running the tool (or not) is up to the user.\n",
"For example, if you want to [extract output matching some schema](/docs/how_to/structured_output/) \n",
"from unstructured text, you could give the model an \"extraction\" tool that takes \n",
"parameters matching the desired schema, then treat the generated output as your final \n",
"result.\n",
"\n",
":::note\n",
"\n",
"If you only need formatted values, try the [.with_structured_output()](/docs/how_to/structured_output/#the-with_structured_output-method) chat model method as a simpler entrypoint.\n",
"\n",
":::\n",
"\n",
"However, tool calling goes beyond [structured output](/docs/how_to/structured_output/)\n",
"since you can pass responses from called tools back to the model to create longer interactions.\n",
"For instance, given a search engine tool, an LLM might handle a \n",
"query by first issuing a call to the search engine with arguments. The system calling the LLM can \n",
"receive the tool call, execute it, and return the output to the LLM to inform its \n",
"response. LangChain includes a suite of [built-in tools](/docs/integrations/tools/) \n",
"and supports several methods for defining your own [custom tools](/docs/how_to/custom_tools). \n",
"\n",
"Tool calling is not universal, but many popular LLM providers, including [Anthropic](https://www.anthropic.com/), \n",
"[Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai), \n",
"[Mistral](https://mistral.ai/), [OpenAI](https://openai.com/), and others, \n",
"support variants of a tool calling feature.\n",
"\n",
"LangChain implements standard interfaces for defining tools, passing them to LLMs, \n",
"and representing tool calls. This guide and the other How-to pages in the Tool section will show you how to use tools with LangChain."
"LangChain implements standard interfaces for defining tools, passing them to LLMs, and representing tool calls.\n",
"This guide will cover how to bind tools to an LLM, then invoke the LLM to generate these arguments."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Passing tools to chat models\n",
"## Defining tool schemas\n",
"\n",
"Chat models that support tool calling features implement a `.bind_tools` method, which \n",
"receives a list of functions, Pydantic models, or LangChain [tool objects](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html#langchain_core.tools.BaseTool) \n",
"and binds them to the chat model in its expected format. Subsequent invocations of the \n",
"chat model will include tool schemas in its calls to the LLM.\n",
"For a model to be able to call tools, we need to pass in tool schemas that describe what the tool does and what it's arguments are. Chat models that support tool calling features implement a `.bind_tools()` method for passing tool schemas to the model. Tool schemas can be passed in as Python functions (with typehints and docstrings), Pydantic models, TypedDict classes, or LangChain [Tool objects](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html#langchain_core.tools.BaseTool). Subsequent invocations of the model will pass in these tool schemas along with the prompt.\n",
"\n",
"For example, below we implement simple tools for arithmetic:"
"### Python functions\n",
"Our tool schemas can be Python functions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# The function name, type hints, and docstring are all part of the tool\n",
"# schema that's passed to the model. Defining good, descriptive schemas\n",
"# is an extension of prompt engineering and is an important part of\n",
"# getting models to perform well.\n",
"def add(a: int, b: int) -> int:\n",
" \"\"\"Add two integers.\n",
"\n",
" Args:\n",
" a: First integer\n",
" b: Second integer\n",
" \"\"\"\n",
" return a + b\n",
"\n",
"\n",
"def multiply(a: int, b: int) -> int:\n",
" \"\"\"Multiply two integers.\n",
"\n",
" Args:\n",
" a: First integer\n",
" b: Second integer\n",
" \"\"\"\n",
" return a * b"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### LangChain Tool\n",
"\n",
"LangChain also implements a `@tool` decorator that allows for further control of the tool schema, such as tool names and argument descriptions. See the how-to guide [here](/docs/how_to/custom_tools/#creating-tools-from-functions) for details.\n",
"\n",
"### Pydantic class\n",
"\n",
"You can equivalently define the schemas without the accompanying functions using [Pydantic](https://docs.pydantic.dev):"
]
},
{
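The `@tool` decorator mentioned above is not exercised in this notebook; a short sketch of how it controls the schema (the custom tool name here is illustrative):

```python
from langchain_core.tools import tool


@tool("multiply-integers")  # custom tool name; the docstring becomes the description
def multiply(a: int, b: int) -> int:
    """Multiply two integers.

    Args:
        a: First integer
        b: Second integer
    """
    return a * b


print(multiply.name)         # multiply-integers
print(multiply.description)  # Multiply two integers. ...
print(multiply.args)         # JSON-schema style argument descriptions
```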
@@ -95,14 +110,57 @@
"metadata": {},
"outputs": [],
"source": [
"def add(a: int, b: int) -> int:\n",
" \"\"\"Adds a and b.\"\"\"\n",
" return a + b\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"def multiply(a: int, b: int) -> int:\n",
" \"\"\"Multiplies a and b.\"\"\"\n",
" return a * b\n",
"class add(BaseModel):\n",
" \"\"\"Add two integers.\"\"\"\n",
"\n",
" a: int = Field(..., description=\"First integer\")\n",
" b: int = Field(..., description=\"Second integer\")\n",
"\n",
"\n",
"class multiply(BaseModel):\n",
" \"\"\"Multiply two integers.\"\"\"\n",
"\n",
" a: int = Field(..., description=\"First integer\")\n",
" b: int = Field(..., description=\"Second integer\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### TypedDict class\n",
"\n",
":::info Requires `langchain-core>=0.2.25`\n",
":::\n",
"\n",
"Or using TypedDicts and annotations:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from typing_extensions import Annotated, TypedDict\n",
"\n",
"\n",
"class add(TypedDict):\n",
" \"\"\"Add two integers.\"\"\"\n",
"\n",
" # Annotations must have the type and can optionally include a default value and description (in that order).\n",
" a: Annotated[int, ..., \"First integer\"]\n",
" b: Annotated[int, ..., \"Second integer\"]\n",
"\n",
"\n",
"class multiply(BaseModel):\n",
" \"\"\"Multiply two integers.\"\"\"\n",
"\n",
" a: Annotated[int, ..., \"First integer\"]\n",
" b: Annotated[int, ..., \"Second integer\"]\n",
"\n",
"\n",
"tools = [add, multiply]"
@@ -112,44 +170,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"LangChain also implements a `@tool` decorator that allows for further control of the tool schema, such as tool names and argument descriptions. See the how-to guide [here](/docs/how_to/custom_tools/#creating-tools-from-functions) for detail.\n",
"\n",
"We can also define the schema using [Pydantic](https://docs.pydantic.dev):"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"# Note that the docstrings here are crucial, as they will be passed along\n",
"# to the model along with the class name.\n",
"class Add(BaseModel):\n",
" \"\"\"Add two integers together.\"\"\"\n",
"\n",
" a: int = Field(..., description=\"First integer\")\n",
" b: int = Field(..., description=\"Second integer\")\n",
"\n",
"\n",
"class Multiply(BaseModel):\n",
" \"\"\"Multiply two integers together.\"\"\"\n",
"\n",
" a: int = Field(..., description=\"First integer\")\n",
" b: int = Field(..., description=\"Second integer\")\n",
"\n",
"\n",
"tools = [Add, Multiply]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can bind them to chat models as follows:\n",
"To actually bind those schemas to a chat model, we'll use the `.bind_tools()` method. This handles converting\n",
"the `add` and `multiply` schemas to the proper format for the model. The tool schema will then be passed it in each time the model is invoked.\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
@@ -158,11 +180,7 @@
" customVarName=\"llm\"\n",
" fireworksParams={`model=\"accounts/fireworks/models/firefunction-v1\", temperature=0`}\n",
"/>\n",
"```\n",
"\n",
"We'll use the `.bind_tools()` method to handle converting\n",
"`Multiply` to the proper format for the model, then and bind it (i.e.,\n",
"passing it in each time the model is invoked)."
"```"
]
},
{
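The cell that performs the binding falls outside this hunk; the step described above amounts to something like the following sketch, assuming the `tools` list defined earlier and any tool-calling chat model as `llm`:

```python
# Convert the add/multiply schemas to the provider's format and attach them,
# so every subsequent invocation sends them alongside the prompt.
llm_with_tools = llm.bind_tools(tools)

query = "What is 3 * 12?"
ai_msg = llm_with_tools.invoke(query)
print(ai_msg.tool_calls)  # e.g. [{'name': 'multiply', 'args': {'a': 3, 'b': 12}, ...}]
```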
@@ -183,21 +201,21 @@
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_g4RuAijtDcSeM96jXyCuiLSN', 'function': {'arguments': '{\"a\":3,\"b\":12}', 'name': 'Multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 95, 'total_tokens': 113}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-5157d15a-7e0e-4ab1-af48-3d98010cd152-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_g4RuAijtDcSeM96jXyCuiLSN'}], usage_metadata={'input_tokens': 95, 'output_tokens': 18, 'total_tokens': 113})"
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_BwYJ4UgU5pRVCBOUmiu7NhF9', 'function': {'arguments': '{\"a\":3,\"b\":12}', 'name': 'multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 80, 'total_tokens': 97}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_ba606877f9', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-7f05e19e-4561-40e2-a2d0-8f4e28e9a00f-0', tool_calls=[{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_BwYJ4UgU5pRVCBOUmiu7NhF9', 'type': 'tool_call'}], usage_metadata={'input_tokens': 80, 'output_tokens': 17, 'total_tokens': 97})"
]
},
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -214,7 +232,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"As we can see, even though the prompt didn't really suggest a tool call, our LLM made one since it was forced to do so. You can look at the docs for [bind_tools()](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.BaseChatOpenAI.html#langchain_openai.chat_models.base.BaseChatOpenAI.bind_tools) to learn about all the ways to customize how your LLM selects tools."
"As we can see our LLM generated arguments to a tool! You can look at the docs for [bind_tools()](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.BaseChatOpenAI.html#langchain_openai.chat_models.base.BaseChatOpenAI.bind_tools) to learn about all the ways to customize how your LLM selects tools, as well as [this guide on how to force the LLM to call a tool](/docs/how_to/tool_choice/) rather than letting it decide."
]
},
{
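For the forced-tool-call case linked above, providers such as OpenAI expose a `tool_choice` option through `bind_tools`; a hedged sketch (parameter support varies by provider):

```python
# Force the model to call `multiply` regardless of the prompt.
llm_forced = llm.bind_tools([add, multiply], tool_choice="multiply")

llm_forced.invoke("Hello!").tool_calls
```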
@@ -238,21 +256,23 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'Multiply',\n",
"[{'name': 'multiply',\n",
" 'args': {'a': 3, 'b': 12},\n",
" 'id': 'call_TnadLbWJu9HwDULRb51RNSMw'},\n",
" {'name': 'Add',\n",
" 'id': 'call_rcdMie7E89Xx06lEKKxJyB5N',\n",
" 'type': 'tool_call'},\n",
" {'name': 'add',\n",
" 'args': {'a': 11, 'b': 49},\n",
" 'id': 'call_Q9vt1up05sOQScXvUYWzSpCg'}]"
" 'id': 'call_nheGN8yfvSJsnIuGZaXihou3',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@@ -274,31 +294,49 @@
"are populated in the `.invalid_tool_calls` attribute. An `InvalidToolCall` can have \n",
"a name, string arguments, identifier, and error message.\n",
"\n",
"If desired, [output parsers](/docs/how_to#output-parsers) can further \n",
"process the output. For example, we can convert existing values populated on the `.tool_calls` attribute back to the original Pydantic class using the\n",
"\n",
"## Parsing\n",
"\n",
"If desired, [output parsers](/docs/how_to#output-parsers) can further process the output. For example, we can convert existing values populated on the `.tool_calls` to Pydantic objects using the\n",
"[PydanticToolsParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.openai_tools.PydanticToolsParser.html):"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Multiply(a=3, b=12), Add(a=11, b=49)]"
"[multiply(a=3, b=12), add(a=11, b=49)]"
]
},
"execution_count": 6,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.output_parsers import PydanticToolsParser\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"chain = llm_with_tools | PydanticToolsParser(tools=[Multiply, Add])\n",
"\n",
"class add(BaseModel):\n",
" \"\"\"Add two integers.\"\"\"\n",
"\n",
" a: int = Field(..., description=\"First integer\")\n",
" b: int = Field(..., description=\"Second integer\")\n",
"\n",
"\n",
"class multiply(BaseModel):\n",
" \"\"\"Multiply two integers.\"\"\"\n",
"\n",
" a: int = Field(..., description=\"First integer\")\n",
" b: int = Field(..., description=\"Second integer\")\n",
"\n",
"\n",
"chain = llm_with_tools | PydanticToolsParser(tools=[add, multiply])\n",
"chain.invoke(query)"
]
},
@@ -308,26 +346,26 @@
"source": [
"## Next steps\n",
"\n",
"Now you've learned how to bind tool schemas to a chat model and to call those tools. Next, you can learn more about how to use tools:\n",
"Now you've learned how to bind tool schemas to a chat model and have the model call the tool.\n",
"\n",
"Next, check out this guide on actually using the tool by invoking the function and passing the results back to the model:\n",
"\n",
"- Few shot promting [with tools](/docs/how_to/tools_few_shot/)\n",
"- Stream [tool calls](/docs/how_to/tool_streaming/)\n",
"- Bind [model-specific tools](/docs/how_to/tools_model_specific/)\n",
"- Pass [runtime values to tools](/docs/how_to/tool_runtime)\n",
"- Pass [tool results back to model](/docs/how_to/tool_results_pass_to_model)\n",
"\n",
"You can also check out some more specific uses of tool calling:\n",
"\n",
"- Building [tool-using chains and agents](/docs/how_to#tools)\n",
"- Getting [structured outputs](/docs/how_to/structured_output/) from models"
"- Getting [structured outputs](/docs/how_to/structured_output/) from models\n",
"- Few shot prompting [with tools](/docs/how_to/tools_few_shot/)\n",
"- Stream [tool calls](/docs/how_to/tool_streaming/)\n",
"- Pass [runtime values to tools](/docs/how_to/tool_runtime)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "poetry-venv-311",
"language": "python",
"name": "python3"
"name": "poetry-venv-311"
},
"language_info": {
"codemirror_mode": {
@@ -339,7 +377,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -9,12 +9,34 @@
":::info Prerequisites\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Tools](/docs/concepts/#tools)\n",
"- [LangChain Tools](/docs/concepts/#tools)\n",
"- [Function/tool calling](/docs/concepts/#functiontool-calling)\n",
"- [Using chat models to call tools](/docs/how_to/tool_calling)\n",
"- [Defining custom tools](/docs/how_to/custom_tools/)\n",
"\n",
":::\n",
"\n",
"If we're using the model-generated tool invocations to actually call tools and want to pass the tool results back to the model, we can do so using `ToolMessage`s and `ToolCall`s. First, let's define our tools and our model."
"Some models are capable of [**tool calling**](/docs/concepts/#functiontool-calling) - generating arguments that conform to a specific user-provided schema. This guide will demonstrate how to use those tool cals to actually call a function and properly pass the results back to the model.\n",
"\n",
"![Diagram of a tool call invocation](/img/tool_invocation.png)\n",
"\n",
"![Diagram of a tool call result](/img/tool_results.png)\n",
"\n",
"First, let's define our tools and our model:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs\n",
" customVarName=\"llm\"\n",
" fireworksParams={`model=\"accounts/fireworks/models/firefunction-v1\", temperature=0`}\n",
"/>\n",
"```"
]
},
{
@@ -22,6 +44,25 @@
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"import os\n",
"from getpass import getpass\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import tool\n",
"\n",
@@ -38,23 +79,8 @@
" return a * b\n",
"\n",
"\n",
"tools = [add, multiply]"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"tools = [add, multiply]\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"llm_with_tools = llm.bind_tools(tools)"
]
},
@@ -62,15 +88,88 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The nice thing about Tools is that if we invoke them with a ToolCall, we'll automatically get back a ToolMessage that can be fed back to the model: \n",
"Now, let's get the model to call a tool. We'll add it to a list of messages that we'll treat as conversation history:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_GPGPE943GORirhIAYnWv00rK', 'type': 'tool_call'}, {'name': 'add', 'args': {'a': 11, 'b': 49}, 'id': 'call_dm8o64ZrY3WFZHAvCh1bEJ6i', 'type': 'tool_call'}]\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
":::info Requires ``langchain-core >= 0.2.19``\n",
"query = \"What is 3 * 12? Also, what is 11 + 49?\"\n",
"\n",
"This functionality was added in ``langchain-core == 0.2.19``. Please make sure your package is up to date.\n",
"messages = [HumanMessage(query)]\n",
"\n",
"ai_msg = llm_with_tools.invoke(messages)\n",
"\n",
"print(ai_msg.tool_calls)\n",
"\n",
"messages.append(ai_msg)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next let's invoke the tool functions using the args the model populated!\n",
"\n",
"Conveniently, if we invoke a LangChain `Tool` with a `ToolCall`, we'll automatically get back a `ToolMessage` that can be fed back to the model:\n",
"\n",
":::caution Compatibility\n",
"\n",
"This functionality was added in `langchain-core == 0.2.19`. Please make sure your package is up to date.\n",
"\n",
"If you are on earlier versions of `langchain-core`, you will need to extract the `args` field from the tool and construct a `ToolMessage` manually.\n",
"\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='What is 3 * 12? Also, what is 11 + 49?'),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_loT2pliJwJe3p7nkgXYF48A1', 'function': {'arguments': '{\"a\": 3, \"b\": 12}', 'name': 'multiply'}, 'type': 'function'}, {'id': 'call_bG9tYZCXOeYDZf3W46TceoV4', 'function': {'arguments': '{\"a\": 11, \"b\": 49}', 'name': 'add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 50, 'prompt_tokens': 87, 'total_tokens': 137}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_661538dc1f', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-e3db3c46-bf9e-478e-abc1-dc9a264f4afe-0', tool_calls=[{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_loT2pliJwJe3p7nkgXYF48A1', 'type': 'tool_call'}, {'name': 'add', 'args': {'a': 11, 'b': 49}, 'id': 'call_bG9tYZCXOeYDZf3W46TceoV4', 'type': 'tool_call'}], usage_metadata={'input_tokens': 87, 'output_tokens': 50, 'total_tokens': 137}),\n",
" ToolMessage(content='36', name='multiply', tool_call_id='call_loT2pliJwJe3p7nkgXYF48A1'),\n",
" ToolMessage(content='60', name='add', tool_call_id='call_bG9tYZCXOeYDZf3W46TceoV4')]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"for tool_call in ai_msg.tool_calls:\n",
" selected_tool = {\"add\": add, \"multiply\": multiply}[tool_call[\"name\"].lower()]\n",
" tool_msg = selected_tool.invoke(tool_call)\n",
" messages.append(tool_msg)\n",
"\n",
"messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And finally, we'll invoke the model with the tool results. The model will use this information to generate a final answer to our original query:"
]
},
{
"cell_type": "code",
"execution_count": 5,
@@ -79,10 +178,7 @@
{
"data": {
"text/plain": [
"[HumanMessage(content='What is 3 * 12? Also, what is 11 + 49?'),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Smg3NHJNxrKfAmd4f9GkaYn3', 'function': {'arguments': '{\"a\": 3, \"b\": 12}', 'name': 'multiply'}, 'type': 'function'}, {'id': 'call_55K1C0DmH6U5qh810gW34xZ0', 'function': {'arguments': '{\"a\": 11, \"b\": 49}', 'name': 'add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 49, 'prompt_tokens': 88, 'total_tokens': 137}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-56657feb-96dd-456c-ab8e-1857eab2ade0-0', tool_calls=[{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_Smg3NHJNxrKfAmd4f9GkaYn3', 'type': 'tool_call'}, {'name': 'add', 'args': {'a': 11, 'b': 49}, 'id': 'call_55K1C0DmH6U5qh810gW34xZ0', 'type': 'tool_call'}], usage_metadata={'input_tokens': 88, 'output_tokens': 49, 'total_tokens': 137}),\n",
" ToolMessage(content='36', name='multiply', tool_call_id='call_Smg3NHJNxrKfAmd4f9GkaYn3'),\n",
" ToolMessage(content='60', name='add', tool_call_id='call_55K1C0DmH6U5qh810gW34xZ0')]"
"AIMessage(content='The result of \\\\(3 \\\\times 12\\\\) is 36, and the result of \\\\(11 + 49\\\\) is 60.', response_metadata={'token_usage': {'completion_tokens': 31, 'prompt_tokens': 153, 'total_tokens': 184}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_661538dc1f', 'finish_reason': 'stop', 'logprobs': None}, id='run-87d1ef0a-1223-4bb3-9310-7b591789323d-0', usage_metadata={'input_tokens': 153, 'output_tokens': 31, 'total_tokens': 184})"
]
},
"execution_count": 5,
@@ -90,37 +186,6 @@
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage, ToolMessage\n",
"\n",
"query = \"What is 3 * 12? Also, what is 11 + 49?\"\n",
"\n",
"messages = [HumanMessage(query)]\n",
"ai_msg = llm_with_tools.invoke(messages)\n",
"messages.append(ai_msg)\n",
"for tool_call in ai_msg.tool_calls:\n",
" selected_tool = {\"add\": add, \"multiply\": multiply}[tool_call[\"name\"].lower()]\n",
" tool_msg = selected_tool.invoke(tool_call)\n",
" messages.append(tool_msg)\n",
"messages"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='3 * 12 is 36 and 11 + 49 is 60.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 153, 'total_tokens': 171}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-ba5032f0-f773-406d-a408-8314e66511d0-0', usage_metadata={'input_tokens': 153, 'output_tokens': 18, 'total_tokens': 171})"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_with_tools.invoke(messages)"
]
@@ -129,15 +194,25 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that we pass back the same `id` in the `ToolMessage` as the what we receive from the model in order to help the model match tool responses with tool calls."
"Note that each `ToolMessage` must include a `tool_call_id` that matches an `id` in the original tool calls that the model generates. This helps the model match tool responses with tool calls.\n",
"\n",
"Tool calling agents, like those in [LangGraph](https://langchain-ai.github.io/langgraph/tutorials/introduction/), use this basic flow to answer queries and solve tasks.\n",
"\n",
"## Related\n",
"\n",
"- [LangGraph quickstart](https://langchain-ai.github.io/langgraph/tutorials/introduction/)\n",
"- Few shot prompting [with tools](/docs/how_to/tools_few_shot/)\n",
"- Stream [tool calls](/docs/how_to/tool_streaming/)\n",
"- Pass [runtime values to tools](/docs/how_to/tool_runtime)\n",
"- Getting [structured outputs](/docs/how_to/structured_output/) from models"
]
}
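On versions of `langchain-core` older than 0.2.19, the convenience shown above is unavailable; a sketch of the manual equivalent, reusing the `ai_msg`, `messages`, and tool objects from this conversation:

```python
from langchain_core.messages import ToolMessage

for tool_call in ai_msg.tool_calls:
    selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
    result = selected_tool.invoke(tool_call["args"])  # run the tool on the raw args
    # Echo back the call's id so the model can pair the result with its request.
    messages.append(ToolMessage(content=str(result), tool_call_id=tool_call["id"]))
```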
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-311",
"display_name": "Python 3",
"language": "python",
"name": "poetry-venv-311"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -149,7 +224,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -6,26 +6,20 @@
"source": [
"# How to pass run time values to tools\n",
"\n",
":::info Prerequisites\n",
"import Prerequisites from \"@theme/Prerequisites\";\n",
"import Compatibility from \"@theme/Compatibility\";\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [LangChain Tools](/docs/concepts/#tools)\n",
"- [How to create tools](/docs/how_to/custom_tools)\n",
"- [How to use a model to call tools](/docs/how_to/tool_calling)\n",
":::\n",
"<Prerequisites titlesAndLinks={[\n",
" [\"Chat models\", \"/docs/concepts/#chat-models\"],\n",
" [\"LangChain Tools\", \"/docs/concepts/#tools\"],\n",
" [\"How to create tools\", \"/docs/how_to/custom_tools\"],\n",
" [\"How to use a model to call tools\", \"/docs/how_to/tool_calling\"],\n",
"]} />\n",
"\n",
":::info Using with LangGraph\n",
"\n",
"If you're using LangGraph, please refer to [this how-to guide](https://langchain-ai.github.io/langgraph/how-tos/pass-run-time-values-to-tools/)\n",
"which shows how to create an agent that keeps track of a given user's favorite pets.\n",
":::\n",
"\n",
":::caution Added in `langchain-core==0.2.21`\n",
"\n",
"Must have `langchain-core>=0.2.21` to use this functionality.\n",
"\n",
":::\n",
"<Compatibility packagesAndVersions={[\n",
" [\"langchain-core\", \"0.2.21\"],\n",
"]} />\n",
"\n",
"You may need to bind values to a tool that are only known at runtime. For example, the tool logic may require using the ID of the user who made the request.\n",
"\n",
@@ -33,7 +27,13 @@
"\n",
"Instead, the LLM should only control the parameters of the tool that are meant to be controlled by the LLM, while other parameters (such as user ID) should be fixed by the application logic.\n",
"\n",
"This how-to guide shows you how to prevent the model from generating certain tool arguments and injecting them in directly at runtime."
"This how-to guide shows you how to prevent the model from generating certain tool arguments and injecting them in directly at runtime.\n",
"\n",
":::info Using with LangGraph\n",
"\n",
"If you're using LangGraph, please refer to [this how-to guide](https://langchain-ai.github.io/langgraph/how-tos/pass-run-time-values-to-tools/)\n",
"which shows how to create an agent that keeps track of a given user's favorite pets.\n",
":::"
]
},
{
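The mechanism this guide builds on (in `langchain-core>=0.2.21`) is annotating the runtime-only parameter so it is stripped from the model-facing schema; a sketch, with the pet-tracking names borrowed from the linked LangGraph example and used illustratively:

```python
from typing_extensions import Annotated

from langchain_core.tools import InjectedToolArg, tool

# Illustrative application-side store.
user_to_pets: dict[str, list[str]] = {}


@tool
def update_favorite_pets(
    pets: list[str], user_id: Annotated[str, InjectedToolArg]
) -> None:
    """Add the list of favorite pets for a user."""
    # `user_id` never appears in the schema sent to the model; the
    # application injects it when the tool is actually invoked.
    user_to_pets[user_id] = pets


update_favorite_pets.invoke({"pets": ["cats", "dogs"], "user_id": "123"})
print(user_to_pets)  # {'123': ['cats', 'dogs']}
```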
@@ -597,9 +597,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-311",
"display_name": "Python 3",
"language": "python",
"name": "poetry-venv-311"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -611,7 +611,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -17,26 +17,25 @@
"source": [
"# ChatAI21\n",
"\n",
"## Overview\n",
"\n",
"This notebook covers how to get started with AI21 chat models.\n",
"Note that different chat models support different parameters. See the ",
"[AI21 documentation](https://docs.ai21.com/reference) to learn more about the parameters in your chosen model.\n",
"Note that different chat models support different parameters. See the [AI21 documentation](https://docs.ai21.com/reference) to learn more about the parameters in your chosen model.\n",
"[See all AI21's LangChain components.](https://pypi.org/project/langchain-ai21/) \n",
"## Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4c3bef91",
"metadata": {
"ExecuteTime": {
"end_time": "2024-02-15T06:50:44.929635Z",
"start_time": "2024-02-15T06:50:41.209704Z"
}
},
"outputs": [],
"source": [
"!pip install -qU langchain-ai21"
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/__package_name_short_snake__) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatAI21](https://api.python.langchain.com/en/latest/chat_models/langchain_ai21.chat_models.ChatAI21.html#langchain_ai21.chat_models.ChatAI21) | [langchain-ai21](https://api.python.langchain.com/en/latest/ai21_api_reference.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-ai21?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-ai21?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"\n",
"## Setup"
]
},
{
@@ -44,10 +43,9 @@
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Environment Setup\n",
"### Credentials\n",
"\n",
"We'll need to get an [AI21 API key](https://docs.ai21.com/) and set the ",
"`AI21_API_KEY` environment variable:\n"
"We'll need to get an [AI21 API key](https://docs.ai21.com/) and set the `AI21_API_KEY` environment variable:\n"
]
},
{
@@ -67,48 +65,166 @@
},
{
"cell_type": "markdown",
"id": "4828829d3da430ce",
"metadata": {
"collapsed": false
},
"id": "f6844fff-3702-4489-ab74-732f69f3b9d7",
"metadata": {},
"source": [
"## Usage"
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "39353473fce5dd2e",
"execution_count": null,
"id": "7c2e19d3-7c58-4470-9e1a-718b27a32056",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "98e22f31-8acc-42d6-916d-415d1263c56e",
"metadata": {},
"source": [
"### Installation"
]
},
{
"cell_type": "markdown",
"id": "f9699cd9-58f2-450e-aa64-799e66906c0f",
"metadata": {},
"source": [
"!pip install -qU langchain-ai21"
]
},
{
"cell_type": "markdown",
"id": "4828829d3da430ce",
"metadata": {
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c40756fb-cbf8-4d44-a293-3989d707237e",
"metadata": {},
"outputs": [],
"source": [
"from langchain_ai21 import ChatAI21\n",
"\n",
"llm = ChatAI21(model=\"jamba-instruct\", temperature=0)"
]
},
{
"cell_type": "markdown",
"id": "2bdc5d68-2a19-495e-8c04-d11adc86d3ae",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "46b982dc-5d8a-46da-a711-81c03ccd6adc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Bonjour, comment vas-tu?')"
"AIMessage(content=\"J'adore programmer.\", id='run-2e8d16d6-a06e-45cb-8d0c-1c8208645033-0')"
]
},
"execution_count": 1,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "markdown",
"id": "10a30f84-b531-4fd5-8b5b-91512fbdc75b",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "39353473fce5dd2e",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', id='run-e1bd82dc-1a7e-4b2e-bde9-ac995929ac0f-0')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_ai21 import ChatAI21\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"chat = ChatAI21(model=\"jamba-instruct\")\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\"system\", \"You are a helpful assistant that translates English to French.\"),\n",
" (\"human\", \"Translate this sentence from English to French. {english_text}.\"),\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke({\"english_text\": \"Hello, how are you?\"})"
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e79de691-9dd6-4697-b57e-59a4a3cc073a",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatAI21 features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_ai21.chat_models.ChatAI21.html"
]
}
],
@@ -128,7 +244,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -115,7 +115,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
@@ -123,8 +123,8 @@
"from langchain_openai import AzureChatOpenAI\n",
"\n",
"llm = AzureChatOpenAI(\n",
" azure_deployment=\"YOUR-DEPLOYMENT\",\n",
" api_version=\"2024-05-01-preview\",\n",
" azure_deployment=\"gpt-35-turbo\", # or your deployment\n",
" api_version=\"2023-06-01-preview\", # or your api version\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
@@ -143,7 +143,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"id": "62e0dbc3",
"metadata": {
"tags": []
@@ -152,10 +152,10 @@
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", response_metadata={'token_usage': {'completion_tokens': 8, 'prompt_tokens': 31, 'total_tokens': 39}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='run-a6a732c2-cb02-4e50-9a9c-ab30eab034fc-0', usage_metadata={'input_tokens': 31, 'output_tokens': 8, 'total_tokens': 39})"
"AIMessage(content=\"J'adore la programmation.\", response_metadata={'token_usage': {'completion_tokens': 8, 'prompt_tokens': 31, 'total_tokens': 39}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='run-bea4b46c-e3e1-4495-9d3a-698370ad963d-0', usage_metadata={'input_tokens': 31, 'output_tokens': 8, 'total_tokens': 39})"
]
},
"execution_count": 4,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -174,7 +174,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 4,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
@@ -202,17 +202,17 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 5,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 26, 'total_tokens': 32}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='run-084967d7-06f2-441f-b5c1-477e2a9e9d03-0', usage_metadata={'input_tokens': 26, 'output_tokens': 6, 'total_tokens': 32})"
"AIMessage(content='Ich liebe das Programmieren.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 26, 'total_tokens': 32}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='run-cbc44038-09d3-40d4-9da2-c5910ee636ca-0', usage_metadata={'input_tokens': 26, 'output_tokens': 6, 'total_tokens': 32})"
]
},
"execution_count": 12,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -264,8 +264,8 @@
},
{
"cell_type": "code",
"execution_count": 5,
"id": "84c411b0-1790-4798-8bb7-47d8ece4c2dc",
"execution_count": 6,
"id": "2ca02d23-60d0-43eb-8d04-070f61f8fefd",
"metadata": {},
"outputs": [
{
@@ -288,22 +288,22 @@
},
{
"cell_type": "code",
"execution_count": 6,
"id": "21234693-d92b-4d69-8a7f-55aa062084bf",
"execution_count": 7,
"id": "e1b07ae2-3de7-44bd-bfdc-b76f4ba45a35",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total Cost (USD): $0.000078\n"
"Total Cost (USD): $0.000074\n"
]
}
],
"source": [
"llm_0301 = AzureChatOpenAI(\n",
" azure_deployment=\"YOUR-DEPLOYMENT\",\n",
" api_version=\"2024-05-01-preview\",\n",
" azure_deployment=\"gpt-35-turbo\", # or your deployment\n",
" api_version=\"2023-06-01-preview\", # or your api version\n",
" model_version=\"0301\",\n",
")\n",
"with get_openai_callback() as cb:\n",
@@ -338,7 +338,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"id": "53fbf15f",
"metadata": {},
"source": [
"---\n",
@@ -12,129 +12,103 @@
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"id": "bf733a38-db84-4363-89e2-de6735c37230",
"metadata": {},
"source": [
"# ChatCohere\n",
"# Cohere\n",
"\n",
"This doc will help you get started with Cohere [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatCohere features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_cohere.chat_models.ChatCohere.html).\n",
"\n",
"For an overview of all Cohere models head to the [Cohere docs](https://docs.cohere.com/docs/models).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/cohere) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatCohere](https://api.python.langchain.com/en/latest/chat_models/langchain_cohere.chat_models.ChatCohere.html) | [langchain-cohere](https://api.python.langchain.com/en/latest/cohere_api_reference.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-cohere?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-cohere?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | \n",
"This notebook covers how to get started with [Cohere chat models](https://cohere.com/chat).\n",
"\n",
"Head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.cohere.ChatCohere.html) for detailed documentation of all attributes and methods."
]
},
{
"cell_type": "markdown",
"id": "3607d67e-e56c-4102-bbba-df2edc0e109e",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"To access Cohere models you'll need to create a Cohere account, get an API key, and install the `langchain-cohere` integration package.\n",
"The integration lives in the `langchain-cohere` package. We can install these with:\n",
"\n",
"### Credentials\n",
"```bash\n",
"pip install -U langchain-cohere\n",
"```\n",
"\n",
"Head to https://dashboard.cohere.com/welcome/login to sign up to Cohere and generate an API key. Once you've done this set the COHERE_API_KEY environment variable:"
"We'll also need to get a [Cohere API key](https://cohere.com/) and set the `COHERE_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"execution_count": 11,
"id": "2108b517-1e8d-473d-92fa-4f930e8072a7",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"COHERE_API_KEY\"] = getpass.getpass(\"Enter your Cohere API key: \")"
"os.environ[\"COHERE_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"id": "cf690fbb",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
"It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"execution_count": 12,
"id": "7f11de02",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"id": "4c26754b-b3c9-4d93-8f36-43049bd943bf",
"metadata": {},
"source": [
"### Installation\n",
"## Usage\n",
"\n",
"The LangChain Cohere integration lives in the `langchain-cohere` package:"
"ChatCohere supports all [ChatModel](/docs/how_to#chat-models) functionality:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-cohere"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"execution_count": 5,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_cohere import ChatCohere\n",
"\n",
"llm = ChatCohere(\n",
" model=\"command-r-plus\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
"from langchain_core.messages import HumanMessage"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "62e0dbc3",
"execution_count": 13,
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat = ChatCohere()"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"metadata": {
"tags": []
},
@@ -142,110 +116,223 @@
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore programmer.\", additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'd84f80f3-4611-46e6-aed0-9d8665a20a11', 'token_count': {'input_tokens': 89, 'output_tokens': 5}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'd84f80f3-4611-46e6-aed0-9d8665a20a11', 'token_count': {'input_tokens': 89, 'output_tokens': 5}}, id='run-514ab516-ed7e-48ac-b132-2598fb80ebef-0')"
"AIMessage(content='4 && 5 \\n6 || 7 \\n\\nWould you like to play a game of odds and evens?', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '2076b614-52b3-4082-a259-cc92cd3d9fea', 'token_count': {'prompt_tokens': 68, 'response_tokens': 23, 'total_tokens': 91, 'billed_tokens': 77}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '2076b614-52b3-4082-a259-cc92cd3d9fea', 'token_count': {'prompt_tokens': 68, 'response_tokens': 23, 'total_tokens': 91, 'billed_tokens': 77}}, id='run-3475e0c8-c89b-4937-9300-e07d652455e1-0')"
]
},
"execution_count": 2,
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
"messages = [HumanMessage(content=\"1\"), HumanMessage(content=\"2 3\")]\n",
"chat.invoke(messages)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"execution_count": 16,
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='4 && 5', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'f0708a92-f874-46ee-9b93-334d616ad92e', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'f0708a92-f874-46ee-9b93-334d616ad92e', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, id='run-1635e63e-2994-4e7f-986e-152ddfc95777-0')"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chat.ainvoke(messages)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore programmer.\n"
"4 && 5"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
"for chunk in chat.stream(messages):\n",
" print(chunk.content, end=\"\", flush=True)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"execution_count": 18,
"id": "064288e4-f184-4496-9427-bcf148fa055e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmierung.', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '053bebde-4e1d-4d06-8ee6-3446e7afa25e', 'token_count': {'input_tokens': 84, 'output_tokens': 6}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '053bebde-4e1d-4d06-8ee6-3446e7afa25e', 'token_count': {'input_tokens': 84, 'output_tokens': 6}}, id='run-53700708-b7fb-417b-af36-1a6fcde38e7d-0')"
"[AIMessage(content='4 && 5', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6770ca86-f6c3-4ba3-a285-c4772160612f', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6770ca86-f6c3-4ba3-a285-c4772160612f', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, id='run-8d6fade2-1b39-4e31-ab23-4be622dd0027-0')]"
]
},
"execution_count": 4,
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
"chat.batch([messages])"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"id": "f1c56460",
"metadata": {},
"source": [
"## API reference\n",
"## Chaining\n",
"\n",
"For detailed documentation of all ChatCohere features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_cohere.chat_models.ChatCohere.html"
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "0851b103",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"chain = prompt | chat"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "ae950c0f-1691-47f1-b609-273033cae707",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='What color socks do bears wear?\\n\\nThey dont wear socks, they have bear feet. \\n\\nHope you laughed! If not, maybe this will help: laughter is the best medicine, and a good sense of humor is infectious!', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6edccf44-9bc8-4139-b30e-13b368f3563c', 'token_count': {'prompt_tokens': 68, 'response_tokens': 51, 'total_tokens': 119, 'billed_tokens': 108}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6edccf44-9bc8-4139-b30e-13b368f3563c', 'token_count': {'prompt_tokens': 68, 'response_tokens': 51, 'total_tokens': 119, 'billed_tokens': 108}}, id='run-ef7f9789-0d4d-43bf-a4f7-f2a0e27a5320-0')"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"topic\": \"bears\"})"
]
},
{
"cell_type": "markdown",
"id": "12db8d69",
"metadata": {},
"source": [
"## Tool calling\n",
"\n",
"Cohere supports tool calling functionalities!"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "337e24af",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import (\n",
" HumanMessage,\n",
" ToolMessage,\n",
")\n",
"from langchain_core.tools import tool"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "74d292e7",
"metadata": {},
"outputs": [],
"source": [
"@tool\n",
"def magic_function(number: int) -> int:\n",
" \"\"\"Applies a magic operation to an integer\n",
" Args:\n",
" number: Number to have magic operation performed on\n",
" \"\"\"\n",
" return number + 10\n",
"\n",
"\n",
"def invoke_tools(tool_calls, messages):\n",
" for tool_call in tool_calls:\n",
" selected_tool = {\"magic_function\": magic_function}[tool_call[\"name\"].lower()]\n",
" tool_output = selected_tool.invoke(tool_call[\"args\"])\n",
" messages.append(ToolMessage(tool_output, tool_call_id=tool_call[\"id\"]))\n",
" return messages\n",
"\n",
"\n",
"tools = [magic_function]"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "ecafcbc6",
"metadata": {},
"outputs": [],
"source": [
"llm_with_tools = chat.bind_tools(tools=tools)\n",
"messages = [HumanMessage(content=\"What is the value of magic_function(2)?\")]"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "aa34fc39",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The value of magic_function(2) is 12.', additional_kwargs={'documents': [{'id': 'magic_function:0:2:0', 'output': '12', 'tool_name': 'magic_function'}], 'citations': [ChatCitation(start=34, end=36, text='12', document_ids=['magic_function:0:2:0'])], 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '96a55791-0c58-4e2e-bc2a-8550e137c46d', 'token_count': {'input_tokens': 998, 'output_tokens': 59}}, response_metadata={'documents': [{'id': 'magic_function:0:2:0', 'output': '12', 'tool_name': 'magic_function'}], 'citations': [ChatCitation(start=34, end=36, text='12', document_ids=['magic_function:0:2:0'])], 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '96a55791-0c58-4e2e-bc2a-8550e137c46d', 'token_count': {'input_tokens': 998, 'output_tokens': 59}}, id='run-f318a9cf-55c8-44f4-91d1-27cf46c6a465-0')"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"res = llm_with_tools.invoke(messages)\n",
"while res.tool_calls:\n",
" messages.append(res)\n",
" messages = invoke_tools(res.tool_calls, messages)\n",
" res = llm_with_tools.invoke(messages)\n",
"\n",
"res"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "poetry-venv-2"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -257,7 +344,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.9.6"
}
},
"nbformat": 4,

View File

@@ -36,7 +36,7 @@
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"| | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"### Supported Methods\n",
"\n",
@@ -395,6 +395,66 @@
"chat_model_external.invoke(\"How to use Databricks?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Function calling on Databricks"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Databricks Function Calling is OpenAI-compatible and is only available during model serving as part of Foundation Model APIs.\n",
"\n",
"See [Databricks function calling introduction](https://docs.databricks.com/en/machine-learning/model-serving/function-calling.html#supported-models) for supported models."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.databricks import ChatDatabricks\n",
"\n",
"llm = ChatDatabricks(endpoint=\"databricks-meta-llama-3-70b-instruct\")\n",
"tools = [\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"get_current_weather\",\n",
" \"description\": \"Get the current weather in a given location\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"location\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city and state, e.g. San Francisco, CA\",\n",
" },\n",
" \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"]},\n",
" },\n",
" },\n",
" },\n",
" }\n",
"]\n",
"\n",
"# supported tool_choice values: \"auto\", \"required\", \"none\", function name in string format,\n",
"# or a dictionary as {\"type\": \"function\", \"function\": {\"name\": <<tool_name>>}}\n",
"model = llm.bind_tools(tools, tool_choice=\"auto\")\n",
"\n",
"messages = [{\"role\": \"user\", \"content\": \"What is the current temperature of Chicago?\"}]\n",
"print(model.invoke(messages))"
]
},
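For a sense of what comes back, the requested tool call can be read off the returned message as structured data; a minimal sketch reusing the `model` and `messages` objects defined above (assuming the model decides to call `get_current_weather`):

```python
# Sketch: inspect the structured tool calls on the returned AIMessage.
response = model.invoke(messages)
for tool_call in response.tool_calls:
    # Each entry is a dict with "name", "args", and "id" keys.
    print(tool_call["name"], tool_call["args"])
```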
{
"cell_type": "markdown",
"metadata": {},
"source": [
"See [Databricks Unity Catalog](docs/integrations/tools/databricks.ipynb) about how to use UC functions in chains."
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -2,298 +2,259 @@
"cells": [
{
"cell_type": "raw",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Groq\n",
"keywords: [chatgroq]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# Groq\n",
"# ChatGroq\n",
"\n",
"LangChain supports integration with [Groq](https://groq.com/) chat models. Groq specializes in fast AI inference.\n",
"This will help you getting started with Groq [chat models](../../concepts.mdx#chat-models). For detailed documentation of all ChatGroq features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_groq.chat_models.ChatGroq.html). For a list of all Groq models, visit this [link](https://console.groq.com/docs/models).\n",
"\n",
"To get started, you'll first need to install the langchain-groq package:"
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/groq) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatGroq](https://api.python.langchain.com/en/latest/chat_models/langchain_groq.chat_models.ChatGroq.html) | [langchain-groq](https://api.python.langchain.com/en/latest/groq_api_reference.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-groq?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-groq?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | \n",
"\n",
"## Setup\n",
"\n",
"To access Groq models you'll need to create a Groq account, get an API key, and install the `langchain-groq` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to the [Groq console](https://console.groq.com/keys) to sign up to Groq and generate an API key. Once you've done this set the GROQ_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-groq"
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"GROQ_API_KEY\"] = getpass.getpass(\"Enter your Groq API key: \")"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"Request an [API key](https://wow.groq.com) and set it as an environment variable:\n",
"\n",
"```bash\n",
"export GROQ_API_KEY=<YOUR API KEY>\n",
"```\n",
"\n",
"Alternatively, you may configure the API key when you initialize ChatGroq.\n",
"\n",
"Here's an example of it in action:"
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 2,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Low latency is crucial for Large Language Models (LLMs) because it directly impacts the user experience, model performance, and overall efficiency. Here are some reasons why low latency is essential for LLMs:\\n\\n1. **Real-time Interaction**: LLMs are often used in applications that require real-time interaction, such as chatbots, virtual assistants, and language translation. Low latency ensures that the model responds quickly to user input, providing a seamless and engaging experience.\\n2. **Conversational Flow**: In conversational AI, latency can disrupt the natural flow of conversation. Low latency helps maintain a smooth conversation, allowing users to respond quickly and naturally, without feeling like they're waiting for the model to catch up.\\n3. **Model Performance**: High latency can lead to increased error rates, as the model may struggle to keep up with the input pace. Low latency enables the model to process information more efficiently, resulting in better accuracy and performance.\\n4. **Scalability**: As the number of users and requests increases, low latency becomes even more critical. It allows the model to handle a higher volume of requests without sacrificing performance, making it more scalable and efficient.\\n5. **Resource Utilization**: Low latency can reduce the computational resources required to process requests. By minimizing latency, you can optimize resource allocation, reduce costs, and improve overall system efficiency.\\n6. **User Experience**: High latency can lead to frustration, abandonment, and a poor user experience. Low latency ensures that users receive timely responses, which is essential for building trust and satisfaction.\\n7. **Competitive Advantage**: In applications like customer service or language translation, low latency can be a key differentiator. It can provide a competitive advantage by offering a faster and more responsive experience, setting your application apart from others.\\n8. **Edge Computing**: With the increasing adoption of edge computing, low latency is critical for processing data closer to the user. This reduces latency even further, enabling real-time processing and analysis of data.\\n9. **Real-time Analytics**: Low latency enables real-time analytics and insights, which are essential for applications like sentiment analysis, trend detection, and anomaly detection.\\n10. **Future-Proofing**: As LLMs continue to evolve and become more complex, low latency will become even more critical. By prioritizing low latency now, you'll be better prepared to handle the demands of future LLM applications.\\n\\nIn summary, low latency is vital for LLMs because it ensures a seamless user experience, improves model performance, and enables efficient resource utilization. By prioritizing low latency, you can build more effective, scalable, and efficient LLM applications that meet the demands of real-time interaction and processing.\", response_metadata={'token_usage': {'completion_tokens': 541, 'prompt_tokens': 33, 'total_tokens': 574, 'completion_time': 1.499777658, 'prompt_time': 0.008344704, 'queue_time': None, 'total_time': 1.508122362}, 'model_name': 'llama3-70b-8192', 'system_fingerprint': 'fp_87cbfbbc4d', 'finish_reason': 'stop', 'logprobs': None}, id='run-49dad960-ace8-4cd7-90b3-2db99ecbfa44-0')"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_groq import ChatGroq\n",
"\n",
"chat = ChatGroq(\n",
" temperature=0,\n",
" model=\"llama3-70b-8192\",\n",
" # api_key=\"\" # Optional if not set as an environment variable\n",
")\n",
"\n",
"system = \"You are a helpful assistant.\"\n",
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke({\"text\": \"Explain the importance of low latency for LLMs.\"})"
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"You can view the available models [here](https://console.groq.com/docs/models).\n",
"### Installation\n",
"\n",
"## Tool calling\n",
"\n",
"Groq chat models support [tool calling](/docs/how_to/tool_calling) to generate output matching a specific schema. The model may choose to call multiple tools or the same tool multiple times if appropriate.\n",
"\n",
"Here's an example:"
"The LangChain Groq integration lives in the `langchain-groq` package:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'get_current_weather',\n",
" 'args': {'location': 'San Francisco', 'unit': 'Celsius'},\n",
" 'id': 'call_pydj'},\n",
" {'name': 'get_current_weather',\n",
" 'args': {'location': 'Tokyo', 'unit': 'Celsius'},\n",
" 'id': 'call_jgq3'}]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import Optional\n",
"\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def get_current_weather(location: str, unit: Optional[str]):\n",
" \"\"\"Get the current weather in a given location\"\"\"\n",
" return \"Cloudy with a chance of rain.\"\n",
"\n",
"\n",
"tool_model = chat.bind_tools([get_current_weather], tool_choice=\"auto\")\n",
"\n",
"res = tool_model.invoke(\"What is the weather like in San Francisco and Tokyo?\")\n",
"\n",
"res.tool_calls"
]
},
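To close the loop, each requested call can be executed and handed back to the model as a `ToolMessage`; a minimal sketch reusing `tool_model`, `get_current_weather`, and the `res` message from above:

```python
from langchain_core.messages import HumanMessage, ToolMessage

# Re-run the tool for every call the model requested and feed the results back.
messages = [HumanMessage("What is the weather like in San Francisco and Tokyo?"), res]
for tool_call in res.tool_calls:
    output = get_current_weather.invoke(tool_call["args"])
    messages.append(ToolMessage(output, tool_call_id=tool_call["id"]))

final_response = tool_model.invoke(messages)
print(final_response.content)
```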
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### `.with_structured_output()`\n",
"\n",
"You can also use the convenience [`.with_structured_output()`](/docs/how_to/structured_output/#the-with_structured_output-method) method to coerce `ChatGroq` into returning a structured output.\n",
"Here is an example:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Joke(setup='Why did the cat join a band?', punchline='Because it wanted to be the purr-cussionist!', rating=None)"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Joke(BaseModel):\n",
" \"\"\"Joke to tell user.\"\"\"\n",
"\n",
" setup: str = Field(description=\"The setup of the joke\")\n",
" punchline: str = Field(description=\"The punchline to the joke\")\n",
" rating: Optional[int] = Field(description=\"How funny the joke is, from 1 to 10\")\n",
"\n",
"\n",
"structured_llm = chat.with_structured_output(Joke)\n",
"\n",
"structured_llm.invoke(\"Tell me a joke about cats\")"
]
},
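If the raw model message is also needed (for example, to inspect token usage or tool-call metadata), `with_structured_output` accepts an `include_raw` flag; a small sketch reusing the `chat` model and `Joke` schema above:

```python
# With include_raw=True the result is a dict rather than just the parsed object.
structured_llm = chat.with_structured_output(Joke, include_raw=True)
result = structured_llm.invoke("Tell me a joke about cats")

result["parsed"]         # the parsed Joke instance (or None if parsing failed)
result["raw"]            # the underlying AIMessage
result["parsing_error"]  # the exception raised during parsing, if any
```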
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Behind the scenes, this takes advantage of the above tool calling functionality.\n",
"\n",
"## Async"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Here is a limerick about the sun:\\n\\nThere once was a sun in the sky,\\nWhose warmth and light caught the eye,\\nIt shone bright and bold,\\nWith a fiery gold,\\nAnd brought life to all, as it flew by.', response_metadata={'token_usage': {'completion_tokens': 51, 'prompt_tokens': 18, 'total_tokens': 69, 'completion_time': 0.144614022, 'prompt_time': 0.00585394, 'queue_time': None, 'total_time': 0.150467962}, 'model_name': 'llama3-70b-8192', 'system_fingerprint': 'fp_2f30b0b571', 'finish_reason': 'stop', 'logprobs': None}, id='run-e42340ba-f0ad-4b54-af61-8308d8ec8256-0')"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat = ChatGroq(temperature=0, model=\"llama3-70b-8192\")\n",
"prompt = ChatPromptTemplate.from_messages([(\"human\", \"Write a Limerick about {topic}\")])\n",
"chain = prompt | chat\n",
"await chain.ainvoke({\"topic\": \"The Sun\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Streaming"
]
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 3,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Silvery glow bright\n",
"Luna's gentle light shines down\n",
"Midnight's gentle queen"
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.0\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.1.2\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"chat = ChatGroq(temperature=0, model=\"llama3-70b-8192\")\n",
"prompt = ChatPromptTemplate.from_messages([(\"human\", \"Write a haiku about {topic}\")])\n",
"chain = prompt | chat\n",
"for chunk in chain.stream({\"topic\": \"The Moon\"}):\n",
" print(chunk.content, end=\"\", flush=True)"
"%pip install -qU langchain-groq"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Passing custom parameters\n",
"## Instantiation\n",
"\n",
"You can pass other Groq-specific parameters using the `model_kwargs` argument on initialization. Here's an example of enabling JSON mode:"
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 4,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_groq import ChatGroq\n",
"\n",
"llm = ChatGroq(\n",
" model=\"mixtral-8x7b-32768\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='{ \"response\": \"That\\'s a tough question! There are eight species of bears found in the world, and each one is unique and amazing in its own way. However, if I had to pick one, I\\'d say the giant panda is a popular favorite among many people. Who can resist those adorable black and white markings?\", \"followup_question\": \"Would you like to know more about the giant panda\\'s habitat and diet?\" }', response_metadata={'token_usage': {'completion_tokens': 89, 'prompt_tokens': 50, 'total_tokens': 139, 'completion_time': 0.249032839, 'prompt_time': 0.011134497, 'queue_time': None, 'total_time': 0.260167336}, 'model_name': 'llama3-70b-8192', 'system_fingerprint': 'fp_2f30b0b571', 'finish_reason': 'stop', 'logprobs': None}, id='run-558ce67e-8c63-43fe-a48f-6ecf181bc922-0')"
"AIMessage(content='I enjoy programming. (The French translation is: \"J\\'aime programmer.\")\\n\\nNote: I chose to translate \"I love programming\" as \"J\\'aime programmer\" instead of \"Je suis amoureux de programmer\" because the latter has a romantic connotation that is not present in the original English sentence.', response_metadata={'token_usage': {'completion_tokens': 73, 'prompt_tokens': 31, 'total_tokens': 104, 'completion_time': 0.1140625, 'prompt_time': 0.003352463, 'queue_time': None, 'total_time': 0.117414963}, 'model_name': 'mixtral-8x7b-32768', 'system_fingerprint': 'fp_c5f20b5bb1', 'finish_reason': 'stop', 'logprobs': None}, id='run-64433c19-eadf-42fc-801e-3071e3c40160-0', usage_metadata={'input_tokens': 31, 'output_tokens': 73, 'total_tokens': 104})"
]
},
"execution_count": 15,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat = ChatGroq(\n",
" model=\"llama3-70b-8192\", model_kwargs={\"response_format\": {\"type\": \"json_object\"}}\n",
")\n",
"\n",
"system = \"\"\"\n",
"You are a helpful assistant.\n",
"Always respond with a JSON object with two string keys: \"response\" and \"followup_question\".\n",
"\"\"\"\n",
"human = \"{question}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chain = prompt | chat\n",
"\n",
"chain.invoke({\"question\": \"what bear is best?\"})"
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [],
"source": []
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"I enjoy programming. (The French translation is: \"J'aime programmer.\")\n",
"\n",
"Note: I chose to translate \"I love programming\" as \"J'aime programmer\" instead of \"Je suis amoureux de programmer\" because the latter has a romantic connotation that is not present in the original English sentence.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](../../how_to/sequence.ipynb) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='That\\'s great! I can help you translate English phrases related to programming into German.\\n\\n\"I love programming\" can be translated as \"Ich liebe Programmieren\" in German.\\n\\nHere are some more programming-related phrases translated into German:\\n\\n* \"Programming language\" = \"Programmiersprache\"\\n* \"Code\" = \"Code\"\\n* \"Variable\" = \"Variable\"\\n* \"Function\" = \"Funktion\"\\n* \"Array\" = \"Array\"\\n* \"Object-oriented programming\" = \"Objektorientierte Programmierung\"\\n* \"Algorithm\" = \"Algorithmus\"\\n* \"Data structure\" = \"Datenstruktur\"\\n* \"Debugging\" = \"Fehlersuche\"\\n* \"Compile\" = \"Kompilieren\"\\n* \"Link\" = \"Verknüpfen\"\\n* \"Run\" = \"Ausführen\"\\n* \"Test\" = \"Testen\"\\n* \"Deploy\" = \"Bereitstellen\"\\n* \"Version control\" = \"Versionskontrolle\"\\n* \"Open source\" = \"Open Source\"\\n* \"Software development\" = \"Softwareentwicklung\"\\n* \"Agile methodology\" = \"Agile Methodik\"\\n* \"DevOps\" = \"DevOps\"\\n* \"Cloud computing\" = \"Cloud Computing\"\\n\\nI hope this helps! Let me know if you have any other questions or if you need further translations.', response_metadata={'token_usage': {'completion_tokens': 331, 'prompt_tokens': 25, 'total_tokens': 356, 'completion_time': 0.520006542, 'prompt_time': 0.00250165, 'queue_time': None, 'total_time': 0.522508192}, 'model_name': 'mixtral-8x7b-32768', 'system_fingerprint': 'fp_c5f20b5bb1', 'finish_reason': 'stop', 'logprobs': None}, id='run-74207fb7-85d3-417d-b2b9-621116b75d41-0', usage_metadata={'input_tokens': 25, 'output_tokens': 331, 'total_tokens': 356})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatGroq features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_groq.chat_models.ChatGroq.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -307,9 +268,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 5
}

View File

@@ -4,18 +4,67 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Hugging Face\n",
"---\n",
"sidebar_label: Hugging Face\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ChatHuggingFace\n",
"\n",
"This notebook shows how to get started using `Hugging Face` LLM's as chat models.\n",
"## Overview\n",
"\n",
"This notebook shows how to get started using Hugging Face LLMs as chat models.\n",
"\n",
"In particular, we will:\n",
"1. Utilize the [HuggingFaceEndpoint](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_endpoint.py) integrations to instantiate an `LLM`.\n",
"1. Utilize the [HuggingFaceEndpoint](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_endpoint.py) integrations to instantiate an LLM.\n",
"2. Utilize the `ChatHuggingFace` class to enable any of these LLMs to interface with LangChain's [Chat Messages](/docs/concepts/#message-types) abstraction.\n",
"3. Explore tool calling with the `ChatHuggingFace`.\n",
"4. Demonstrate how to use an open-source LLM to power an `ChatAgent` pipeline\n",
"\n",
"### Integration details\n",
"\n",
"> Note: To get started, you'll need to have a [Hugging Face Access Token](https://huggingface.co/docs/hub/security-tokens) saved as an environment variable: `HUGGINGFACEHUB_API_TOKEN`."
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatHuggingFace](https://api.python.langchain.com/en/latest/chat_models/langchain_huggingface.chat_models.huggingface.ChatHuggingFace.html) | [langchain-huggingface](https://api.python.langchain.com/en/latest/huggingface_api_reference.html) | ✅ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_huggingface?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_huggingface?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access Hugging Face models you'll need to create a Hugging Face account, get an API key, and install the `langchain-huggingface` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Generate a [Hugging Face Access Token](https://huggingface.co/docs/hub/security-tokens) and store it as an environment variable: `HUGGINGFACEHUB_API_TOKEN`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"HUGGINGFACEHUB_API_TOKEN\"):\n",
" os.environ[\"HUGGINGFACEHUB_API_TOKEN\"] = getpass.getpass(\"Enter your token: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"Below we install additional packages as well for demonstration purposes:"
]
},
{
@@ -31,7 +80,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Instantiate an LLM"
"## Instantiation"
]
},
{
@@ -80,6 +129,7 @@
" max_new_tokens=512,\n",
" do_sample=False,\n",
" repetition_penalty=1.03,\n",
" return_full_text=False,\n",
" ),\n",
")"
]
@@ -118,7 +168,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Instantiate the `ChatHuggingFace` to apply chat templates"
"## Invocation"
]
},
{
@@ -249,7 +299,44 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Explore the tool calling with `ChatHuggingFace`\n",
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tool calling with `ChatHuggingFace`\n",
"\n",
"`text-generation-inference` supports tool with open source LLMs starting from v2.0.1"
]
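As a rough sketch of the pattern (illustrative only: `chat_model` is assumed to be the `ChatHuggingFace` instance built earlier in the notebook, and the `Calculator` schema is made up for this example):

```python
from langchain_core.pydantic_v1 import BaseModel, Field


class Calculator(BaseModel):
    """Multiply two integers."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")


llm_with_tools = chat_model.bind_tools([Calculator])
ai_msg = llm_with_tools.invoke("What is 3 multiplied by 12?")
# The requested call is surfaced as structured data rather than free text.
print(ai_msg.tool_calls)
```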
@@ -313,7 +400,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. Take it for a spin as an agent!\n",
"## Use with agents\n",
"\n",
"Here we'll test out `Zephyr-7B-beta` as a zero-shot `ReAct` Agent. \n",
"\n",
@@ -458,6 +545,15 @@
"\n",
"It's exciting to see how far open-source LLM's can go as general purpose reasoning agents. Give it a try yourself!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatHuggingFace features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_huggingface.chat_models.huggingface.ChatHuggingFace.html"
]
}
],
"metadata": {
@@ -476,7 +572,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -12,43 +12,87 @@
},
{
"cell_type": "markdown",
"id": "bf733a38-db84-4363-89e2-de6735c37230",
"id": "a14c83bf-af26-4f22-8c1a-d632c5795ecf",
"metadata": {},
"source": [
"# MistralAI\n",
"\n",
"This notebook covers how to get started with MistralAI chat models, via their [API](https://docs.mistral.ai/api/).\n",
"This will help you getting started with Mistral [chat models](/docs/concepts/#chat-models), accessed via their [API](https://docs.mistral.ai/api/). For detailed documentation of all ChatMistralAI features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html).\n",
"\n",
"A valid [API key](https://console.mistral.ai/users/api-keys/) is needed to communicate with the API.\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/mistral) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatMistralAI](https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html) | [langchain_mistralai](https://api.python.langchain.com/en/latest/mistralai_api_reference.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_mistralai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_mistralai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"Head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html) for detailed documentation of all attributes and methods."
]
},
{
"cell_type": "markdown",
"id": "cc686b8f",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"You will need the `langchain-core` and `langchain-mistralai` package to use the API. You can install these with:\n",
"To access Mistral models you'll need to create a Mistral account, get an API key, and install the `langchain-mistralai` integration package.\n",
"\n",
"```bash\n",
"pip install -U langchain-core langchain-mistralai\n",
"### Credentials\n",
"\n",
"We'll also need to get a [Mistral API key](https://console.mistral.ai/users/api-keys/)"
"A valid [API key](https://console.mistral.ai/users/api-keys/) is needed to communicate with the API. Once you've obtained an API key, store it in the `MISTRAL_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "c3fd4184",
"execution_count": null,
"id": "9acd8340-09d4-4ece-871a-a35b0732c7d8",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"api_key = getpass.getpass()"
"if not os.getenv(\"__MODULE_NAME___API_KEY\"):\n",
" os.environ[\"__MODULE_NAME___API_KEY\"] = getpass.getpass(\n",
" \"Enter your __ModuleName__ API key: \"\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "42c979b1-df49-4f6c-9fe6-d9dbf3ea8c2a",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cc4f11ec-5cb3-4caf-b3cd-7a20c41b0cfe",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0fc42221-97b2-466b-95db-10368e17ca56",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain MistralAI integration lives in the `langchain-mistralai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "85cb1ab8-9f2c-4b93-8415-ad65819dcb38",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-mistralai"
]
},
{
@@ -56,57 +100,76 @@
"id": "502127fd",
"metadata": {},
"source": [
"## Usage"
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"tags": []
},
"execution_count": 1,
"id": "2dfa801a-d040-4c09-9634-58604e8eaf16",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_mistralai.chat_models import ChatMistralAI"
"from langchain_mistralai.chat_models import ChatMistralAI\n",
"\n",
"llm = ChatMistralAI(model=\"mistral-large-latest\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
"metadata": {
"tags": []
},
"outputs": [],
"cell_type": "markdown",
"id": "f668acff-eb14-4b3a-959a-df5bfc02968b",
"metadata": {},
"source": [
"# If api_key is not passed, default behavior is to use the `MISTRAL_API_KEY` environment variable.\n",
"chat = ChatMistralAI(api_key=api_key)"
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"metadata": {
"tags": []
},
"execution_count": 2,
"id": "86e3f9e6-67ec-4fbf-8ff1-85331200f412",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Who's there? I was just about to ask the same thing! How can I assist you today?\")"
"AIMessage(content=\"J'adore la programmation.\", response_metadata={'token_usage': {'prompt_tokens': 27, 'total_tokens': 36, 'completion_tokens': 9}, 'model': 'mistral-large-latest', 'finish_reason': 'stop'}, id='run-d6196c33-9410-413b-b454-4ed0bec1f0c7-0', usage_metadata={'input_tokens': 27, 'output_tokens': 9, 'total_tokens': 36})"
]
},
"execution_count": 9,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [HumanMessage(content=\"knock knock\")]\n",
"chat.invoke(messages)"
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8f8a24bc-b7f0-4d3a-b310-8a4e0ba125dd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
@@ -119,7 +182,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 4,
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"metadata": {
"tags": []
@@ -128,16 +191,16 @@
{
"data": {
"text/plain": [
"AIMessage(content='Who\\'s there?\\n\\n(You can then continue the \"knock knock\" joke by saying the name of the person or character who should be responding. For example, if I say \"Banana,\" you could respond with \"Banana who?\" and I would say \"Banana bunch! Get it? Because a group of bananas is called a \\'bunch\\'!\" and then we would both laugh and have a great time. But really, you can put anything you want in the spot where I put \"Banana\" and it will still technically be a \"knock knock\" joke. The possibilities are endless!)')"
"AIMessage(content=\"J'aime programmer.\", response_metadata={'token_usage': {'prompt_tokens': 27, 'total_tokens': 34, 'completion_tokens': 7}, 'model': 'mistral-large-latest', 'finish_reason': 'stop'}, id='run-1873888a-186f-49a8-ab81-24335bd3099b-0', usage_metadata={'input_tokens': 27, 'output_tokens': 7, 'total_tokens': 34})"
]
},
"execution_count": 10,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chat.ainvoke(messages)"
"await llm.ainvoke(messages)"
]
},
{
@@ -150,7 +213,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 5,
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
"metadata": {
"tags": []
@@ -160,32 +223,12 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Who's there?\n",
"\n",
"(After this, the conversation can continue as a call and response \"who's there\" joke. Here is an example of how it could go:\n",
"\n",
"You say: Orange.\n",
"I say: Orange who?\n",
"You say: Orange you glad I didn't say banana!?)\n",
"\n",
"But since you asked for a knock knock joke specifically, here's one for you:\n",
"\n",
"Knock knock.\n",
"\n",
"Me: Who's there?\n",
"\n",
"You: Lettuce.\n",
"\n",
"Me: Lettuce who?\n",
"\n",
"You: Lettuce in, it's too cold out here!\n",
"\n",
"I hope this brings a smile to your face! Do you have a favorite knock knock joke you'd like to share? I'd love to hear it."
"J'adore programmer."
]
}
],
"source": [
"for chunk in chat.stream(messages):\n",
"for chunk in llm.stream(messages):\n",
" print(chunk.content, end=\"\")"
]
},
@@ -199,23 +242,23 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 6,
"id": "e63aebcb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[AIMessage(content=\"Who's there? I was just about to ask the same thing! Go ahead and tell me who's there. I love a good knock-knock joke.\")]"
"[AIMessage(content=\"J'adore la programmation.\", response_metadata={'token_usage': {'prompt_tokens': 27, 'total_tokens': 36, 'completion_tokens': 9}, 'model': 'mistral-large-latest', 'finish_reason': 'stop'}, id='run-2aa2a189-c405-4cf5-bd31-e9025e4c8536-0', usage_metadata={'input_tokens': 27, 'output_tokens': 9, 'total_tokens': 36})]"
]
},
"execution_count": 12,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat.batch([messages])"
"llm.batch([messages])"
]
},
{
@@ -230,36 +273,52 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 7,
"id": "ee43a1ae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"chain = prompt | chat"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "0dc49212",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Why do bears hate shoes so much? They like to run around in their bear feet.')"
"AIMessage(content='Ich liebe Programmieren.', response_metadata={'token_usage': {'prompt_tokens': 21, 'total_tokens': 28, 'completion_tokens': 7}, 'model': 'mistral-large-latest', 'finish_reason': 'stop'}, id='run-409ebc9a-b4a0-4734-ab6f-e11f6b4f808f-0', usage_metadata={'input_tokens': 21, 'output_tokens': 7, 'total_tokens': 28})"
]
},
"execution_count": 14,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"topic\": \"bears\"})"
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "eb7e01fb-a433-48b1-a4c2-e6009523a896",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatMistralAI features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html"
]
}
],
@@ -279,7 +338,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -2,13 +2,24 @@
"cells": [
{
"cell_type": "markdown",
"id": "cc6caafa",
"metadata": {
"id": "cc6caafa"
},
"id": "1f666798-8635-4bc0-a515-04d318588d67",
"metadata": {},
"source": [
"# NVIDIA NIMs\n",
"---\n",
"sidebar_label: NVIDIA AI Endpoints\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "fa8eb20e-4db8-45e3-9e79-c595f4f274da",
"metadata": {},
"source": [
"# ChatNVIDIA\n",
"\n",
"This will help you getting started with NVIDIA [chat models](/docs/concepts/#chat-models). For detailed documentation of all `ChatNVIDIA` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html).\n",
"\n",
"## Overview\n",
"The `langchain-nvidia-ai-endpoints` package contains LangChain integrations building applications with models on \n",
"NVIDIA NIM inference microservice. NIM supports models across domains like chat, embedding, and re-ranking models \n",
"from the community as well as NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA \n",
@@ -24,7 +35,66 @@
"\n",
"This example goes over how to use LangChain to interact with NVIDIA supported via the `ChatNVIDIA` class.\n",
"\n",
"For more information on accessing the chat models through this api, check out the [ChatNVIDIA](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/) documentation."
"For more information on accessing the chat models through this api, check out the [ChatNVIDIA](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/) documentation.\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatNVIDIA](https://api.python.langchain.com/en/latest/chat_models/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html) | [langchain_nvidia_ai_endpoints](https://api.python.langchain.com/en/latest/nvidia_ai_endpoints_api_reference.html) | ✅ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_nvidia_ai_endpoints?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_nvidia_ai_endpoints?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"**To get started:**\n",
"\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.\n",
"\n",
"2. Click on your model of choice.\n",
"\n",
"3. Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.\n",
"\n",
"4. Copy and save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints.\n",
"\n",
"### Credentials\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "208b72da-1535-4249-bbd3-2500028e25e9",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"NVIDIA_API_KEY\"):\n",
" # Note: the API key should start with \"nvapi-\"\n",
" os.environ[\"NVIDIA_API_KEY\"] = getpass.getpass(\"Enter your NVIDIA API key: \")"
]
},
{
"cell_type": "markdown",
"id": "52dc8dcb-0a48-4a4e-9947-764116d2ffd4",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2cd9cb12-6ca5-432a-9e42-8a57da073c7e",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
@@ -32,7 +102,9 @@
"id": "f2be90a9",
"metadata": {},
"source": [
"## Installation"
"### Installation\n",
"\n",
"The LangChain NVIDIA AI Endpoints integration lives in the `langchain_nvidia_ai_endpoints` package:"
]
},
{
@@ -45,51 +117,14 @@
"%pip install --upgrade --quiet langchain-nvidia-ai-endpoints"
]
},
{
"cell_type": "markdown",
"id": "ccff689e",
"metadata": {
"id": "ccff689e"
},
"source": [
"## Setup\n",
"\n",
"**To get started:**\n",
"\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.\n",
"\n",
"2. Click on your model of choice.\n",
"\n",
"3. Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.\n",
"\n",
"4. Copy and save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "686c4d2f",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# del os.environ['NVIDIA_API_KEY'] ## delete key and reset\n",
"if os.environ.get(\"NVIDIA_API_KEY\", \"\").startswith(\"nvapi-\"):\n",
" print(\"Valid NVIDIA_API_KEY already in environment. Delete to reset\")\n",
"else:\n",
" nvapi_key = getpass.getpass(\"NVAPI Key (starts with nvapi-): \")\n",
" assert nvapi_key.startswith(\"nvapi-\"), f\"{nvapi_key[:5]}... is not a valid key\"\n",
" os.environ[\"NVIDIA_API_KEY\"] = nvapi_key"
]
},
{
"cell_type": "markdown",
"id": "af0ce26b",
"metadata": {},
"source": [
"## Working with NVIDIA API Catalog"
"## Instantiation\n",
"\n",
"Now we can access models in the NVIDIA API Catalog:"
]
},
{
@@ -108,7 +143,24 @@
"## Core LC Chat Interface\n",
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"llm = ChatNVIDIA(model=\"mistralai/mixtral-8x7b-instruct-v0.1\")\n",
"llm = ChatNVIDIA(model=\"mistralai/mixtral-8x7b-instruct-v0.1\")"
]
},
{
"cell_type": "markdown",
"id": "469c8c7f-de62-457f-a30f-674763a8b717",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9512c81b-1f3a-4eca-9470-f52cedff5c74",
"metadata": {},
"outputs": [],
"source": [
"result = llm.invoke(\"Write a ballad about LangChain.\")\n",
"print(result.content)"
]
@@ -302,9 +354,6 @@
"\n",
"NVIDIA also supports multimodal inputs, meaning you can provide both images and text for the model to reason over. An example model supporting multimodal inputs is `nvidia/neva-22b`.\n",
"\n",
"\n",
"These models accept LangChain's standard image formats, and accept `labels`, similar to the Steering LLMs above. In addition to `creativity`, `complexity`, and `verbosity`, these models support a `quality` toggle.\n",
"\n",
"Below is an example use:"
]
},
@@ -447,92 +496,6 @@
"llm.invoke(f'What\\'s in this image?\\n<img src=\"{base64_with_mime_type}\" />')"
]
},
{
"cell_type": "markdown",
"id": "3e61d868",
"metadata": {},
"source": [
"#### **Advanced Use Case:** Forcing Payload \n",
"\n",
"You may notice that some newer models may have strong parameter expectations that the LangChain connector may not support by default. For example, we cannot invoke the [Kosmos](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/kosmos-2) model at the time of this notebook's latest release due to the lack of a streaming argument on the server side: "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d143e0d6",
"metadata": {},
"outputs": [],
"source": [
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"kosmos = ChatNVIDIA(model=\"microsoft/kosmos-2\")\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"\n",
"# kosmos.invoke(\n",
"# [\n",
"# HumanMessage(\n",
"# content=[\n",
"# {\"type\": \"text\", \"text\": \"Describe this image:\"},\n",
"# {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
"# ]\n",
"# )\n",
"# ]\n",
"# )\n",
"\n",
"# Exception: [422] Unprocessable Entity\n",
"# body -> stream\n",
"# Extra inputs are not permitted (type=extra_forbidden)\n",
"# RequestID: 35538c9a-4b45-4616-8b75-7ef816fccf38"
]
},
{
"cell_type": "markdown",
"id": "1e230b70",
"metadata": {},
"source": [
"For a simple use case like this, we can actually try to force the payload argument of our underlying client by specifying the `payload_fn` function as follows: "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0925b2b1",
"metadata": {},
"outputs": [],
"source": [
"def drop_streaming_key(d):\n",
" \"\"\"Takes in payload dictionary, outputs new payload dictionary\"\"\"\n",
" if \"stream\" in d:\n",
" d.pop(\"stream\")\n",
" return d\n",
"\n",
"\n",
"## Override the payload passthrough. Default is to pass through the payload as is.\n",
"kosmos = ChatNVIDIA(model=\"microsoft/kosmos-2\")\n",
"kosmos.client.payload_fn = drop_streaming_key\n",
"\n",
"kosmos.invoke(\n",
" [\n",
" HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"Describe this image:\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" ]\n",
" )\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "fe6e1758",
"metadata": {},
"source": [
"For more advanced or custom use-cases (i.e. supporting the diffusion models), you may be interested in leveraging the `NVEModel` client as a requests backbone. The `NVIDIAEmbeddings` class is a good source of inspiration for this. "
]
},
{
"cell_type": "markdown",
"id": "137662a6",
@@ -540,7 +503,7 @@
"id": "137662a6"
},
"source": [
"## Example usage within RunnableWithMessageHistory "
"## Example usage within a RunnableWithMessageHistory"
]
},
{
@@ -630,14 +593,14 @@
{
"cell_type": "code",
"execution_count": null,
"id": "uHIMZxVSVNBC",
"id": "LyD1xVKmVSs4",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 284
"height": 350
},
"id": "uHIMZxVSVNBC",
"outputId": "79acc89d-a820-4f2c-bac2-afe99da95580"
"id": "LyD1xVKmVSs4",
"outputId": "a1714513-a8fd-4d14-f974-233e39d5c4f5"
},
"outputs": [],
"source": [
@@ -646,6 +609,128 @@
" config=config,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "f3cbbba0",
"metadata": {},
"source": [
"## Tool calling\n",
"\n",
"Starting in v0.2, `ChatNVIDIA` supports [bind_tools](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.chat_models.BaseChatModel.html#langchain_core.language_models.chat_models.BaseChatModel.bind_tools).\n",
"\n",
"`ChatNVIDIA` provides integration with the variety of models on [build.nvidia.com](https://build.nvidia.com) as well as local NIMs. Not all these models are trained for tool calling. Be sure to select a model that does have tool calling for your experimention and applications."
]
},
{
"cell_type": "markdown",
"id": "6f7b535e",
"metadata": {},
"source": [
"You can get a list of models that are known to support tool calling with,"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e36c8911",
"metadata": {},
"outputs": [],
"source": [
"tool_models = [\n",
" model for model in ChatNVIDIA.get_available_models() if model.supports_tools\n",
"]\n",
"tool_models"
]
},
{
"cell_type": "markdown",
"id": "b01d75a7",
"metadata": {},
"source": [
"With a tool capable model,"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bd54f174",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.pydantic_v1 import Field\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def get_current_weather(\n",
" location: str = Field(..., description=\"The location to get the weather for.\"),\n",
"):\n",
" \"\"\"Get the current weather for a location.\"\"\"\n",
" ...\n",
"\n",
"\n",
"llm = ChatNVIDIA(model=tool_models[0].id).bind_tools(tools=[get_current_weather])\n",
"response = llm.invoke(\"What is the weather in Boston?\")\n",
"response.tool_calls"
]
},
{
"cell_type": "markdown",
"id": "e08df68c",
"metadata": {},
"source": [
"See [How to use chat models to call tools](https://python.langchain.com/v0.2/docs/how_to/tool_calling/) for additional examples."
]
},
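
The preceding cells show `bind_tools` and inspect `response.tool_calls`, but stop before completing the round trip. A minimal sketch of that last step (not from the original notebook) follows; since `get_current_weather` above is only a stub, the "15 C and sunny" result is a made-up stand-in:

```python
from langchain_core.messages import HumanMessage, ToolMessage

messages = [HumanMessage("What is the weather in Boston?")]
ai_msg = llm.invoke(messages)
messages.append(ai_msg)

# Report each requested tool's output back to the model.
for tool_call in ai_msg.tool_calls:
    # get_current_weather above is a stub, so a dummy result stands in here.
    messages.append(
        ToolMessage(content="15 C and sunny", tool_call_id=tool_call["id"])
    )

# With the tool output in context, the model can produce a final answer.
print(llm.invoke(messages).content)
```
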
{
"cell_type": "markdown",
"id": "a9a3c438-121d-46eb-8fb5-b8d5a13cd4a4",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "af585c6b-fe0a-4833-9860-a4209a71b3c6",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "f2f25dd3-0b4a-465f-a53e-95521cdc253c",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `ChatNVIDIA` features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html"
]
}
],
"metadata": {
@@ -667,7 +752,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.2"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -33,7 +33,7 @@
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| | | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | | ❌ | \n",
"| | | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | | ❌ | \n",
"\n",
"## Setup\n",
"\n",

View File

@@ -35,7 +35,7 @@
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | \n",
"| | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
@@ -110,7 +110,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
@@ -134,18 +134,21 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"AIMessage(content='Je adore le programmation.\\n\\n(Note: \"programmation\" is the feminine form of the noun in French, but if you want to use the masculine form, it would be \"le programme\" instead.)' response_metadata={'model': 'llama3', 'created_at': '2024-07-04T04:20:28.138164Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 1943337750, 'load_duration': 1128875, 'prompt_eval_count': 33, 'prompt_eval_duration': 322813000, 'eval_count': 43, 'eval_duration': 1618213000} id='run-ed8c17ab-7fc2-4c90-a88a-f6273b49bc78-0')\n"
]
"data": {
"text/plain": [
"AIMessage(content='Je adore le programmation.\\n\\n(Note: \"programmation\" is not commonly used in French, but I translated it as \"le programmation\" to maintain the same grammatical structure and meaning as the original English sentence.)', response_metadata={'model': 'llama3', 'created_at': '2024-07-22T17:43:54.731273Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 11094839375, 'load_duration': 10121854667, 'prompt_eval_count': 36, 'prompt_eval_duration': 146569000, 'eval_count': 46, 'eval_duration': 816593000}, id='run-befccbdc-e1f9-42a9-85cf-e69b926d6b8b-0', usage_metadata={'input_tokens': 36, 'output_tokens': 46, 'total_tokens': 82})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
@@ -164,7 +167,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 5,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
@@ -174,7 +177,7 @@
"text": [
"Je adore le programmation.\n",
"\n",
"(Note: \"programmation\" is the feminine form of the noun in French, but if you want to use the masculine form, it would be \"le programme\" instead.)\n"
"(Note: \"programmation\" is not commonly used in French, but I translated it as \"le programmation\" to maintain the same grammatical structure and meaning as the original English sentence.)\n"
]
}
],
@@ -232,6 +235,86 @@
")"
]
},
{
"cell_type": "markdown",
"id": "0f51345d-0a9d-43f1-8fca-d0662cb8e21b",
"metadata": {},
"source": [
"## Tool calling\n",
"\n",
"We can use [tool calling](https://blog.langchain.dev/improving-core-tool-interfaces-and-docs-in-langchain/) with an LLM [that has been fine-tuned for tool use](https://ollama.com/library/llama3-groq-tool-use): \n",
"\n",
"```\n",
"ollama pull llama3-groq-tool-use\n",
"```\n",
"\n",
"We can just pass normal Python functions directly as tools."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "5250bceb-1029-41ff-b447-983518704d88",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'validate_user',\n",
" 'args': {'addresses': ['123 Fake St, Boston MA',\n",
" '234 Pretend Boulevard, Houston TX'],\n",
" 'user_id': 123},\n",
" 'id': 'fe2148d3-95fb-48e9-845a-4bfecc1f1f96',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import List\n",
"\n",
"from langchain_ollama import ChatOllama\n",
"from typing_extensions import TypedDict\n",
"\n",
"\n",
"def validate_user(user_id: int, addresses: List) -> bool:\n",
" \"\"\"Validate user using historical addresses.\n",
"\n",
" Args:\n",
" user_id: (int) the user ID.\n",
" addresses: Previous addresses.\n",
" \"\"\"\n",
" return True\n",
"\n",
"\n",
"llm = ChatOllama(\n",
" model=\"llama3-groq-tool-use\",\n",
" temperature=0,\n",
").bind_tools([validate_user])\n",
"\n",
"result = llm.invoke(\n",
" \"Could you validate user 123? They previously lived at \"\n",
" \"123 Fake St in Boston MA and 234 Pretend Boulevard in \"\n",
" \"Houston TX.\"\n",
")\n",
"result.tool_calls"
]
},
{
"cell_type": "markdown",
"id": "2bb034ff-218f-4865-afea-3f5e57d3bdee",
"metadata": {},
"source": [
"We look at the LangSmith trace to see that the tool call was performed: \n",
"\n",
"https://smith.langchain.com/public/4169348a-d6be-45df-a7cf-032f6baa4697/r\n",
"\n",
"In particular, the trace shows how the tool schema was populated."
]
},
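
If you prefer to inspect that schema locally rather than in the LangSmith trace, a minimal sketch using `convert_to_openai_tool` from `langchain_core` (which derives the schema from the function's signature and docstring) would be:

```python
import json

from langchain_core.utils.function_calling import convert_to_openai_tool

# Derive the tool schema from the plain function's signature and docstring;
# this is the same structure that bind_tools sends and the trace displays.
print(json.dumps(convert_to_openai_tool(validate_user), indent=2))
```
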
{
"cell_type": "markdown",
"id": "4c5e0197",
@@ -384,7 +467,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
"version": "3.11.8"
}
},
"nbformat": 4,

View File

@@ -6,6 +6,7 @@
"source": [
"---\n",
"sidebar_label: Ollama Functions\n",
"sidebar_class_name: hidden\n",
"---"
]
},
@@ -15,16 +16,16 @@
"source": [
"# OllamaFunctions\n",
"\n",
":::warning\n",
"\n",
"This was an experimental wrapper that attempts to bolt-on tool calling support to models that do not natively support it. The [primary Ollama integration](/docs/integrations/chat/ollama/) now supports tool calling, and should be used instead.\n",
"\n",
":::\n",
"This notebook shows how to use an experimental wrapper around Ollama that gives it [tool calling capabilities](https://python.langchain.com/v0.2/docs/concepts/#functiontool-calling).\n",
"\n",
"Note that more powerful and capable models will perform better with complex schema and/or multiple functions. The examples below use llama3 and phi3 models.\n",
"For a complete list of supported models and model variants, see the [Ollama model library](https://ollama.ai/library).\n",
"\n",
":::warning\n",
"\n",
"This is an experimental wrapper that attempts to bolt-on tool calling support to models that do not natively support it. Use with caution.\n",
"\n",
":::\n",
"## Overview\n",
"\n",
"### Integration details\n",
@@ -283,7 +284,9 @@
{
"cell_type": "markdown",
"metadata": {},
"source": "For more on binding tools and tool call outputs, head to the [tool calling](docs/how_to/function_calling) docs."
"source": [
"For more on binding tools and tool call outputs, head to the [tool calling](../../how_to/function_calling.ipynb) docs."
]
},
{
"cell_type": "markdown",

View File

@@ -82,9 +82,9 @@
"outputs": [],
"source": [
"# By default it will use the model which was deployed through the platform\n",
"# in my case it will is \"claude-3-haiku\"\n",
"# in my case it will is \"gpt-4o\"\n",
"\n",
"chat = ChatPremAI(project_id=8)"
"chat = ChatPremAI(project_id=1234, model_name=\"gpt-4o\")"
]
},
{
@@ -107,7 +107,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"I am an artificial intelligence created by Anthropic. I'm here to help with a wide variety of tasks, from research and analysis to creative projects and open-ended conversation. I have general knowledge and capabilities, but I'm not a real person - I'm an AI assistant. Please let me know if you have any other questions!\n"
"I am an AI language model created by OpenAI, designed to assist with answering questions and providing information based on the context provided. How can I help you today?\n"
]
}
],
@@ -133,7 +133,7 @@
{
"data": {
"text/plain": [
"AIMessage(content=\"I am an artificial intelligence created by Anthropic. My purpose is to assist and converse with humans in a friendly and helpful way. I have a broad knowledge base that I can use to provide information, answer questions, and engage in discussions on a wide range of topics. Please let me know if you have any other questions - I'm here to help!\")"
"AIMessage(content=\"I'm your friendly assistant! How can I help you today?\", response_metadata={'document_chunks': [{'repository_id': 1985, 'document_id': 1306, 'chunk_id': 173899, 'document_name': '[D] Difference between sparse and dense informati…', 'similarity_score': 0.3209080100059509, 'content': \"with the difference or anywhere\\nwhere I can read about it?\\n\\n\\n 17 9\\n\\n\\n u/ScotiabankCanada • Promoted\\n\\n\\n Accelerate your study permit process\\n with Scotiabank's Student GIC\\n Program. We're here to help you tur…\\n\\n\\n startright.scotiabank.com Learn More\\n\\n\\n Add a Comment\\n\\n\\nSort by: Best\\n\\n\\n DinosParkour • 1y ago\\n\\n\\n Dense Retrieval (DR) m\"}]}, id='run-510bbd0e-3f8f-4095-9b1f-c2d29fd89719-0')"
]
},
"execution_count": 5,
@@ -160,10 +160,18 @@
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/anindya/prem/langchain/libs/community/langchain_community/chat_models/premai.py:355: UserWarning: WARNING: Parameter top_p is not supported in kwargs.\n",
" warnings.warn(f\"WARNING: Parameter {key} is not supported in kwargs.\")\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content='I am an artificial intelligence created by Anthropic')"
"AIMessage(content=\"Hello! I'm your friendly assistant. How can I\", response_metadata={'document_chunks': [{'repository_id': 1985, 'document_id': 1306, 'chunk_id': 173899, 'document_name': '[D] Difference between sparse and dense informati…', 'similarity_score': 0.3209080100059509, 'content': \"with the difference or anywhere\\nwhere I can read about it?\\n\\n\\n 17 9\\n\\n\\n u/ScotiabankCanada • Promoted\\n\\n\\n Accelerate your study permit process\\n with Scotiabank's Student GIC\\n Program. We're here to help you tur…\\n\\n\\n startright.scotiabank.com Learn More\\n\\n\\n Add a Comment\\n\\n\\nSort by: Best\\n\\n\\n DinosParkour • 1y ago\\n\\n\\n Dense Retrieval (DR) m\"}]}, id='run-c4b06b98-4161-4cca-8495-fd2fc98fa8f8-0')"
]
},
"execution_count": 6,
@@ -195,13 +203,13 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"query = \"what is the diameter of individual Galaxy\"\n",
"query = \"Which models are used for dense retrieval\"\n",
"repository_ids = [\n",
" 1991,\n",
" 1985,\n",
"]\n",
"repositories = dict(ids=repository_ids, similarity_threshold=0.3, limit=3)"
]
@@ -219,9 +227,34 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 8,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Dense retrieval models typically include:\n",
"\n",
"1. **BERT-based Models**: Such as DPR (Dense Passage Retrieval) which uses BERT for encoding queries and passages.\n",
"2. **ColBERT**: A model that combines BERT with late interaction mechanisms.\n",
"3. **ANCE (Approximate Nearest Neighbor Negative Contrastive Estimation)**: Uses BERT and focuses on efficient retrieval.\n",
"4. **TCT-ColBERT**: A variant of ColBERT that uses a two-tower\n",
"{\n",
" \"document_chunks\": [\n",
" {\n",
" \"repository_id\": 1985,\n",
" \"document_id\": 1306,\n",
" \"chunk_id\": 173899,\n",
" \"document_name\": \"[D] Difference between sparse and dense informati\\u2026\",\n",
" \"similarity_score\": 0.3209080100059509,\n",
" \"content\": \"with the difference or anywhere\\nwhere I can read about it?\\n\\n\\n 17 9\\n\\n\\n u/ScotiabankCanada \\u2022 Promoted\\n\\n\\n Accelerate your study permit process\\n with Scotiabank's Student GIC\\n Program. We're here to help you tur\\u2026\\n\\n\\n startright.scotiabank.com Learn More\\n\\n\\n Add a Comment\\n\\n\\nSort by: Best\\n\\n\\n DinosParkour \\u2022 1y ago\\n\\n\\n Dense Retrieval (DR) m\"\n",
" }\n",
" ]\n",
"}\n"
]
}
],
"source": [
"import json\n",
"\n",
@@ -262,7 +295,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
@@ -288,7 +321,7 @@
"outputs": [],
"source": [
"template_id = \"78069ce8-xxxxx-xxxxx-xxxx-xxx\"\n",
"response = chat.invoke([human_message], template_id=template_id)\n",
"response = chat.invoke([human_messages], template_id=template_id)\n",
"print(response.content)"
]
},
@@ -310,14 +343,14 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hello! As an AI language model, I don't have feelings or a physical state, but I'm functioning properly and ready to assist you with any questions or tasks you might have. How can I help you today?"
"It looks like your message got cut off. If you need information about Dense Retrieval (DR) or any other topic, please provide more details or clarify your question."
]
}
],
@@ -338,14 +371,14 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hello! As an AI language model, I don't have feelings or a physical form, but I'm functioning properly and ready to assist you. How can I help you today?"
"Woof! 🐾 How can I help you today? Want to play fetch or maybe go for a walk 🐶🦴"
]
}
],
@@ -365,6 +398,275 @@
" sys.stdout.write(chunk.content)\n",
" sys.stdout.flush()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Tool/Function Calling\n",
"\n",
"LangChain PremAI supports tool/function calling. Tool/function calling allows a model to respond to a given prompt by generating output that matches a user-defined schema. \n",
"\n",
"- You can learn all about tool calling in details [in our documentation here](https://docs.premai.io/get-started/function-calling).\n",
"- You can learn more about langchain tool calling in [this part of the docs](https://python.langchain.com/v0.1/docs/modules/model_io/chat/function_calling).\n",
"\n",
"**NOTE:**\n",
"The current version of LangChain ChatPremAI do not support function/tool calling with streaming support. Streaming support along with function calling will come soon. \n",
"\n",
"#### Passing tools to model\n",
"\n",
"In order to pass tools and let the LLM choose the tool it needs to call, we need to pass a tool schema. A tool schema is the function definition along with proper docstring on what does the function do, what each argument of the function is etc. Below are some simple arithmetic functions with their schema. \n",
"\n",
"**NOTE:** When defining function/tool schema, do not forget to add information around the function arguments, otherwise it would throw error."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"# Define the schema for function arguments\n",
"class OperationInput(BaseModel):\n",
" a: int = Field(description=\"First number\")\n",
" b: int = Field(description=\"Second number\")\n",
"\n",
"\n",
"# Now define the function where schema for argument will be OperationInput\n",
"@tool(\"add\", args_schema=OperationInput, return_direct=True)\n",
"def add(a: int, b: int) -> int:\n",
" \"\"\"Adds a and b.\n",
"\n",
" Args:\n",
" a: first int\n",
" b: second int\n",
" \"\"\"\n",
" return a + b\n",
"\n",
"\n",
"@tool(\"multiply\", args_schema=OperationInput, return_direct=True)\n",
"def multiply(a: int, b: int) -> int:\n",
" \"\"\"Multiplies a and b.\n",
"\n",
" Args:\n",
" a: first int\n",
" b: second int\n",
" \"\"\"\n",
" return a * b"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Binding tool schemas with our LLM\n",
"\n",
"We will now use the `bind_tools` method to convert our above functions to a \"tool\" and binding it with the model. This means we are going to pass these tool informations everytime we invoke the model. "
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"tools = [add, multiply]\n",
"llm_with_tools = chat.bind_tools(tools)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After this, we get the response from the model which is now binded with the tools. "
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"query = \"What is 3 * 12? Also, what is 11 + 49?\"\n",
"\n",
"messages = [HumanMessage(query)]\n",
"ai_msg = llm_with_tools.invoke(messages)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we can see, when our chat model is binded with tools, then based on the given prompt, it calls the correct set of the tools and sequentially. "
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'multiply',\n",
" 'args': {'a': 3, 'b': 12},\n",
" 'id': 'call_A9FL20u12lz6TpOLaiS6rFa8'},\n",
" {'name': 'add',\n",
" 'args': {'a': 11, 'b': 49},\n",
" 'id': 'call_MPKYGLHbf39csJIyb5BZ9xIk'}]"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ai_msg.tool_calls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We append this message shown above to the LLM which acts as a context and makes the LLM aware that what all functions it has called. "
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"messages.append(ai_msg)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since tool calling happens into two phases, where:\n",
"\n",
"1. in our first call, we gathered all the tools that the LLM decided to tool, so that it can get the result as an added context to give more accurate and hallucination free result. \n",
"\n",
"2. in our second call, we will parse those set of tools decided by LLM and run them (in our case it will be the functions we defined, with the LLM's extracted arguments) and pass this result to the LLM"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import ToolMessage\n",
"\n",
"for tool_call in ai_msg.tool_calls:\n",
" selected_tool = {\"add\": add, \"multiply\": multiply}[tool_call[\"name\"].lower()]\n",
" tool_output = selected_tool.invoke(tool_call[\"args\"])\n",
" messages.append(ToolMessage(tool_output, tool_call_id=tool_call[\"id\"]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we call the LLM (binded with the tools) with the function response added in it's context. "
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The final answers are:\n",
"\n",
"- 3 * 12 = 36\n",
"- 11 + 49 = 60\n"
]
}
],
"source": [
"response = llm_with_tools.invoke(messages)\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Defining tool schemas: Pydantic class\n",
"\n",
"Above we have shown how to define schema using `tool` decorator, however we can equivalently define the schema using Pydantic. Pydantic is useful when your tool inputs are more complex:"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers.openai_tools import PydanticToolsParser\n",
"\n",
"\n",
"class add(BaseModel):\n",
" \"\"\"Add two integers together.\"\"\"\n",
"\n",
" a: int = Field(..., description=\"First integer\")\n",
" b: int = Field(..., description=\"Second integer\")\n",
"\n",
"\n",
"class multiply(BaseModel):\n",
" \"\"\"Multiply two integers together.\"\"\"\n",
"\n",
" a: int = Field(..., description=\"First integer\")\n",
" b: int = Field(..., description=\"Second integer\")\n",
"\n",
"\n",
"tools = [add, multiply]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we can bind them to chat models and directly get the result:"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[multiply(a=3, b=12), add(a=11, b=49)]"
]
},
"execution_count": 30,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain = llm_with_tools | PydanticToolsParser(tools=[multiply, add])\n",
"chain.invoke(query)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, as done above, we parse this and run this functions and call the LLM once again to get the result."
]
}
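
The notebook ends before showing that final round trip. A hedged sketch of what it could look like follows; because the Pydantic classes above only describe schemas and are not executable, small lambda implementations are supplied here purely as stand-ins:

```python
from langchain_core.messages import HumanMessage, ToolMessage

messages = [HumanMessage(query)]
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)

# The Pydantic classes only define schemas, so plain implementations stand in.
implementations = {"add": lambda a, b: a + b, "multiply": lambda a, b: a * b}

for tool_call in ai_msg.tool_calls:
    result = implementations[tool_call["name"].lower()](**tool_call["args"])
    messages.append(ToolMessage(content=str(result), tool_call_id=tool_call["id"]))

# Call the model once more with the tool results in context.
print(llm_with_tools.invoke(messages).content)
```
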
],
"metadata": {
@@ -383,7 +685,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.9.19"
}
},
"nbformat": 4,

View File

@@ -1,103 +1,263 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2970dd75-8ebf-4b51-8282-9b454b8f356d",
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"# Together AI\n",
"\n",
"[Together AI](https://www.together.ai/) offers an API to query [50+ leading open-source models](https://docs.together.ai/docs/inference-models) in a couple lines of code.\n",
"\n",
"This example goes over how to use LangChain to interact with Together AI models."
"---\n",
"sidebar_label: Together\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "1c47fc36",
"id": "e49f1e0d",
"metadata": {},
"source": [
"## Installation"
"# ChatTogether\n",
"\n",
"\n",
"This page will help you get started with Together AI [chat models](../../concepts.mdx#chat-models). For detailed documentation of all ChatTogether features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_together.chat_models.ChatTogether.html).\n",
"\n",
"[Together AI](https://www.together.ai/) offers an API to query [50+ leading open-source models](https://docs.together.ai/docs/chat-models)\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/togetherai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatTogether](https://api.python.langchain.com/en/latest/chat_models/langchain_together.chat_models.ChatTogether.html) | [langchain-together](https://api.python.langchain.com/en/latest/together_api_reference.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-together?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-together?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | \n",
"\n",
"## Setup\n",
"\n",
"To access Together models you'll need to create a/an Together account, get an API key, and install the `langchain-together` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [this page](https://api.together.ai) to sign up to Together and generate an API key. Once you've done this set the TOGETHER_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1ecdb29d",
"execution_count": 1,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade langchain-together"
]
},
{
"cell_type": "markdown",
"id": "89883202",
"metadata": {},
"source": [
"## Environment\n",
"import getpass\n",
"import os\n",
"\n",
"To use Together AI, you'll need an API key which you can find here:\n",
"https://api.together.ai/settings/api-keys. This can be passed in as an init param\n",
"``together_api_key`` or set as environment variable ``TOGETHER_API_KEY``.\n"
"os.environ[\"TOGETHER_API_KEY\"] = getpass.getpass(\"Enter your Together API key: \")"
]
},
{
"cell_type": "markdown",
"id": "8304b4d9",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"## Example"
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "637bb53f",
"execution_count": 2,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# Querying chat models with Together AI\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Together integration lives in the `langchain-together` package:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.0\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.1.2\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-together"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:\n",
"\n",
"- TODO: Update model instantiation with relevant params."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_together import ChatTogether\n",
"\n",
"# choose from our 50+ models here: https://docs.together.ai/docs/inference-models\n",
"chat = ChatTogether(\n",
" # together_api_key=\"YOUR_API_KEY\",\n",
"llm = ChatTogether(\n",
" model=\"meta-llama/Llama-3-70b-chat-hf\",\n",
")\n",
"\n",
"# stream the response back from the model\n",
"for m in chat.stream(\"Tell me fun things to do in NYC\"):\n",
" print(m.content, end=\"\", flush=True)\n",
"\n",
"# if you don't want to do streaming, you can use the invoke method\n",
"# chat.invoke(\"Tell me fun things to do in NYC\")"
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e7b7170d-d7c5-4890-9714-a37238343805",
"metadata": {},
"outputs": [],
"execution_count": 6,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 35, 'total_tokens': 44}, 'model_name': 'meta-llama/Llama-3-70b-chat-hf', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-79efa49b-dbaf-4ef8-9dce-958533823ef6-0', usage_metadata={'input_tokens': 35, 'output_tokens': 9, 'total_tokens': 44})"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Querying code and language models with Together AI\n",
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"from langchain_together import Together\n",
"We can [chain](../../how_to/sequence.ipynb) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', response_metadata={'token_usage': {'completion_tokens': 7, 'prompt_tokens': 30, 'total_tokens': 37}, 'model_name': 'meta-llama/Llama-3-70b-chat-hf', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-80bba5fa-1723-4242-8d5a-c09b76b8350b-0', usage_metadata={'input_tokens': 30, 'output_tokens': 7, 'total_tokens': 37})"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"llm = Together(\n",
" model=\"codellama/CodeLlama-70b-Python-hf\",\n",
" # together_api_key=\"...\"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"print(llm.invoke(\"def bubble_sort(): \"))"
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatTogether features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_together.chat_models.ChatTogether.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -111,7 +271,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -12,14 +12,83 @@
},
{
"cell_type": "markdown",
"id": "eb7e5679-aa06-47e4-a1a3-b6b70e604017",
"id": "8f82e243-f4ee-44e2-b417-099b6401ae3e",
"metadata": {},
"source": [
"# vLLM Chat\n",
"\n",
"vLLM can be deployed as a server that mimics the OpenAI API protocol. This allows vLLM to be used as a drop-in replacement for applications using OpenAI API. This server can be queried in the same format as OpenAI API.\n",
"\n",
"This notebook covers how to get started with vLLM chat models using langchain's `ChatOpenAI` **as it is**."
"## Overview\n",
"This will help you getting started with vLLM [chat models](/docs/concepts/#chat-models), which leverage the `langchain-openai` package. For detailed documentation of all `ChatOpenAI` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html).\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [langchain_openai](https://api.python.langchain.com/en/latest/langchain_openai.html) | ✅ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_openai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_openai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"Specific model features-- such as tool calling, support for multi-modal inputs, support for token-level streaming, etc.-- will depend on the hosted model.\n",
"\n",
"## Setup\n",
"\n",
"See the vLLM docs [here](https://docs.vllm.ai/en/latest/).\n",
"\n",
"To access vLLM models through LangChain, you'll need to install the `langchain-openai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Authentication will depend on specifics of the inference server."
]
},
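
For example, if the vLLM server was launched with an API key requirement (the exact flag depends on your vLLM version; `--api-key` is a common option), you could capture that key interactively. The `VLLM_API_KEY` name below is only an illustrative convention, not something vLLM or LangChain requires, and the value would then be passed as `openai_api_key` in place of the `"EMPTY"` placeholder used later in this notebook:

```python
import getpass
import os

# Hypothetical variable name; only needed if the server enforces an API key.
if not os.getenv("VLLM_API_KEY"):
    os.environ["VLLM_API_KEY"] = getpass.getpass("Enter the vLLM server API key: ")
```
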
{
"cell_type": "markdown",
"id": "c3b1707a-cf2c-4367-94e3-436c43402503",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1e40bd5e-cbaa-41ef-aaf9-0858eb207184",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0739b647-609b-46d3-bdd3-e86fe4463288",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain vLLM integration can be accessed via the `langchain-openai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7afcfbdc-56aa-4529-825a-8acbe7aa5241",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "2cf576d6-7b67-4937-bf99-39071e85720c",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
@@ -51,7 +120,7 @@
"source": [
"inference_server_url = \"http://localhost:8000/v1\"\n",
"\n",
"chat = ChatOpenAI(\n",
"llm = ChatOpenAI(\n",
" model=\"mosaicml/mpt-7b\",\n",
" openai_api_key=\"EMPTY\",\n",
" openai_api_base=inference_server_url,\n",
@@ -60,6 +129,14 @@
")"
]
},
{
"cell_type": "markdown",
"id": "34b18328-5e8b-4ff2-9b89-6fbb76b5c7f0",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 15,
@@ -88,82 +165,66 @@
" content=\"Translate the following sentence from English to Italian: I love programming.\"\n",
" ),\n",
"]\n",
"chat(messages)"
"llm.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "55fc7046-a6dc-4720-8c0c-24a6db76a4f4",
"id": "a580a1e4-11a3-4277-bfba-bfb414ac7201",
"metadata": {},
"source": [
"You can make use of templating by using a `MessagePromptTemplate`. You can build a `ChatPromptTemplate` from one or more `MessagePromptTemplates`. You can use ChatPromptTemplate's format_prompt -- this returns a `PromptValue`, which you can convert to a string or `Message` object, depending on whether you want to use the formatted value as input to an llm or chat model.\n",
"## Chaining\n",
"\n",
"For convenience, there is a `from_template` method exposed on the template. If you were to use this template, this is what it would look like:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "123980e9-0dee-4ce5-bde6-d964dd90129c",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"template = (\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
")\n",
"system_message_prompt = SystemMessagePromptTemplate.from_template(template)\n",
"human_template = \"{text}\"\n",
"human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "b2fb8c59-8892-4270-85a2-4f8ab276b75d",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' I love programming too.', additional_kwargs={}, example=False)"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat_prompt = ChatPromptTemplate.from_messages(\n",
" [system_message_prompt, human_message_prompt]\n",
")\n",
"\n",
"# get a chat completion from the formatted messages\n",
"chat(\n",
" chat_prompt.format_prompt(\n",
" input_language=\"English\", output_language=\"Italian\", text=\"I love programming.\"\n",
" ).to_messages()\n",
")"
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0bbd9861-2b94-4920-8708-b690004f4c4d",
"id": "dd0f4043-48bd-4245-8bdb-e7669666a277",
"metadata": {},
"outputs": [],
"source": []
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "265f5d51-0a76-4808-8d13-ef598ee6e366",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all features and configurations exposed via `langchain-openai`, head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html\n",
"\n",
"Refer to the vLLM [documentation](https://docs.vllm.ai/en/latest/) as well."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "conda_pytorch_p310",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "conda_pytorch_p310"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -175,7 +236,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,228 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ChatYI\n",
"\n",
"This will help you getting started with Yi [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatYi features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/lanchain_community.chat_models.yi.ChatYi.html).\n",
"\n",
"[01.AI](https://www.lingyiwanwu.com/en), founded by Dr. Kai-Fu Lee, is a global company at the forefront of AI 2.0. They offer cutting-edge large language models, including the Yi series, which range from 6B to hundreds of billions of parameters. 01.AI also provides multimodal models, an open API platform, and open-source options like Yi-34B/9B/6B and Yi-VL.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatYi](https://api.python.langchain.com/en/latest/chat_models/lanchain_community.chat_models.yi.ChatYi.html) | [langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ✅ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access ChatYi models you'll need to create a/an 01.AI account, get an API key, and install the `langchain_community` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [01.AI](https://platform.01.ai) to sign up to 01.AI and generate an API key. Once you've done this set the `YI_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"YI_API_KEY\"] = getpass.getpass(\"Enter your Yi API key: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain __ModuleName__ integration lives in the `langchain_community` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:\n",
"\n",
"- TODO: Update model instantiation with relevant params."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.yi import ChatYi\n",
"\n",
"llm = ChatYi(\n",
" model=\"yi-large\",\n",
" temperature=0,\n",
" timeout=60,\n",
" yi_api_base=\"https://api.01.ai/v1/chat/completions\",\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Invocation\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Large Language Models (LLMs) have the potential to significantly impact healthcare by enhancing various aspects of patient care, research, and administrative processes. Here are some potential applications:\\n\\n1. **Clinical Documentation and Reporting**: LLMs can assist in generating patient reports and documentation by understanding and summarizing clinical notes, making the process more efficient and reducing the administrative burden on healthcare professionals.\\n\\n2. **Medical Coding and Billing**: These models can help in automating the coding process for medical billing by accurately translating clinical notes into standardized codes, reducing errors and improving billing efficiency.\\n\\n3. **Clinical Decision Support**: LLMs can analyze patient data and medical literature to provide evidence-based recommendations to healthcare providers, aiding in diagnosis and treatment planning.\\n\\n4. **Patient Education and Communication**: By simplifying medical jargon, LLMs can help in educating patients about their conditions, treatment options, and preventive care, improving patient engagement and health literacy.\\n\\n5. **Natural Language Processing (NLP) for EHRs**: LLMs can enhance NLP capabilities in Electronic Health Records (EHRs) systems, enabling better extraction of information from unstructured data, such as clinical notes, to support data-driven decision-making.\\n\\n6. **Drug Discovery and Development**: LLMs can analyze biomedical literature and clinical trial data to identify new drug candidates, predict drug interactions, and support the development of personalized medicine.\\n\\n7. **Telemedicine and Virtual Health Assistants**: Integrated into telemedicine platforms, LLMs can provide preliminary assessments and triage, offering patients basic health advice and determining the urgency of their needs, thus optimizing the utilization of healthcare resources.\\n\\n8. **Research and Literature Review**: LLMs can expedite the process of reviewing medical literature by quickly identifying relevant studies and summarizing findings, accelerating research and evidence-based practice.\\n\\n9. **Personalized Medicine**: By analyzing a patient's genetic information and medical history, LLMs can help in tailoring treatment plans and medication dosages, contributing to the advancement of personalized medicine.\\n\\n10. **Quality Improvement and Risk Assessment**: LLMs can analyze healthcare data to identify patterns that may indicate areas for quality improvement or potential risks, such as hospital-acquired infections or adverse drug events.\\n\\n11. **Mental Health Support**: LLMs can provide mental health support by offering coping strategies, mindfulness exercises, and preliminary assessments, serving as a complement to professional mental health services.\\n\\n12. **Continuing Medical Education (CME)**: LLMs can personalize CME by recommending educational content based on a healthcare provider's practice area, patient demographics, and emerging medical literature, ensuring that professionals stay updated with the latest advancements.\\n\\nWhile the applications of LLMs in healthcare are promising, it's crucial to address challenges such as data privacy, model bias, and the need for regulatory approval to ensure that these technologies are implemented safely and ethically.\", response_metadata={'token_usage': {'completion_tokens': 656, 'prompt_tokens': 40, 'total_tokens': 696}, 'model': 'yi-large'}, id='run-870850bd-e4bf-4265-8730-1736409c0acf-0')"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"\n",
"messages = [\n",
" SystemMessage(content=\"You are an AI assistant specializing in technology trends.\"),\n",
" HumanMessage(\n",
" content=\"What are the potential applications of large language models in healthcare?\"\n",
" ),\n",
"]\n",
"\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', response_metadata={'token_usage': {'completion_tokens': 8, 'prompt_tokens': 33, 'total_tokens': 41}, 'model': 'yi-large'}, id='run-daa3bc58-8289-4d72-a24e-80622fa90d6d-0')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
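{
"cell_type": "markdown",
"metadata": {},
"source": [
"The chain above returns an `AIMessage`. As a minimal sketch (assuming the same `prompt` and `llm` objects defined above), you can also append a `StrOutputParser` so that the chain returns the translated text as a plain string:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"\n",
"# Appending an output parser makes the chain return a plain string instead of an AIMessage\n",
"string_chain = prompt | llm | StrOutputParser()\n",
"\n",
"string_chain.invoke(\n",
"    {\n",
"        \"input_language\": \"English\",\n",
"        \"output_language\": \"German\",\n",
"        \"input\": \"I love programming.\",\n",
"    }\n",
")"
]
},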
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatYi features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.yi.ChatYi.html"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 0
}


@@ -0,0 +1,484 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "6b74f73d-1763-42d0-9c24-8f65f445bb72",
"metadata": {},
"source": [
"# Dedoc\n",
"\n",
"This sample demonstrates the use of `Dedoc` in combination with `LangChain` as a `DocumentLoader`.\n",
"\n",
"## Overview\n",
"\n",
"[Dedoc](https://dedoc.readthedocs.io) is an [open-source](https://github.com/ispras/dedoc)\n",
"library/service that extracts texts, tables, attached files and document structure\n",
"(e.g., titles, list items, etc.) from files of various formats.\n",
"\n",
"`Dedoc` supports `DOCX`, `XLSX`, `PPTX`, `EML`, `HTML`, `PDF`, images and more.\n",
"Full list of supported formats can be found [here](https://dedoc.readthedocs.io/en/latest/#id1).\n",
"\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support |\n",
"|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|:-----:|:------------:|:----------:|\n",
"| [DedocFileLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.dedoc.DedocFileLoader.html) | [langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ❌ | beta | ❌ |\n",
"| [DedocPDFLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.DedocPDFLoader.html) | [langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ❌ | beta | ❌ | \n",
"| [DedocAPIFileLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.dedoc.DedocAPIFileLoader.html) | [langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ❌ | beta | ❌ | \n",
"\n",
"\n",
"### Loader features\n",
"\n",
"Methods for lazy loading and async loading are available, but in fact, document loading is executed synchronously.\n",
"\n",
"| Source | Document Lazy Loading | Async Support |\n",
"|:------------------:|:---------------------:|:-------------:| \n",
"| DedocFileLoader | ❌ | ❌ |\n",
"| DedocPDFLoader | ❌ | ❌ | \n",
"| DedocAPIFileLoader | ❌ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"* To access `DedocFileLoader` and `DedocPDFLoader` document loaders, you'll need to install the `dedoc` integration package.\n",
"* To access `DedocAPIFileLoader`, you'll need to run the `Dedoc` service, e.g. `Docker` container (please see [the documentation](https://dedoc.readthedocs.io/en/latest/getting_started/installation.html#install-and-run-dedoc-using-docker) \n",
"for more details):\n",
"\n",
"```bash\n",
"docker pull dedocproject/dedoc\n",
"docker run -p 1231:1231\n",
"```\n",
"\n",
"`Dedoc` installation instruction is given [here](https://dedoc.readthedocs.io/en/latest/getting_started/installation.html)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "511c109d-a5c3-42ba-914e-5d1b385bc40f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"# Install package\n",
"%pip install --quiet \"dedoc[torch]\""
]
},
{
"cell_type": "markdown",
"id": "6820c0e9-d56d-4899-b8c8-374760360e2b",
"metadata": {},
"source": [
"## Instantiation"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c1f98cae-71ec-4d60-87fb-96c1a76851d8",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import DedocFileLoader\n",
"\n",
"loader = DedocFileLoader(\"./example_data/state_of_the_union.txt\")"
]
},
{
"cell_type": "markdown",
"id": "5d7bc2b3-73a0-4cd6-8014-cc7184aa9d4a",
"metadata": {},
"source": [
"## Load"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "b9097c14-6168-4726-819e-24abb9a63b13",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\nMadam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and t'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs = loader.load()\n",
"docs[0].page_content[:100]"
]
},
{
"cell_type": "markdown",
"id": "9ed8bd46-0047-4ccc-b2d6-beb7761f7312",
"metadata": {},
"source": [
"## Lazy Load"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "6ae12d7e-8105-4bbe-9031-0e968475f6bf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and t\n"
]
}
],
"source": [
"docs = loader.lazy_load()\n",
"\n",
"for doc in docs:\n",
" print(doc.page_content[:100])\n",
" break"
]
},
{
"cell_type": "markdown",
"id": "8772ae40-6239-4751-bb2d-b4a9415c1ad1",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed information on configuring and calling `Dedoc` loaders, please see the API references: \n",
"\n",
"* https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.dedoc.DedocFileLoader.html\n",
"* https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.DedocPDFLoader.html\n",
"* https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.dedoc.DedocAPIFileLoader.html"
]
},
{
"cell_type": "markdown",
"id": "c4d5e702-0e21-4cad-a4c3-b9b3bff77203",
"metadata": {},
"source": [
"## Loading any file\n",
"\n",
"For automatic handling of any file in a [supported format](https://dedoc.readthedocs.io/en/latest/#id1),\n",
"`DedocFileLoader` can be useful.\n",
"The file loader automatically detects the file type with a correct extension.\n",
"\n",
"File parsing process can be configured through `dedoc_kwargs` during the `DedocFileLoader` class initialization.\n",
"Here the basic examples of some options usage are given, \n",
"please see the documentation of `DedocFileLoader` and \n",
"[dedoc documentation](https://dedoc.readthedocs.io/en/latest/parameters/parameters.html) \n",
"to get more details about configuration parameters."
]
},
{
"cell_type": "markdown",
"id": "de97d0ed-d6b1-44e0-b392-1f3d89c762f9",
"metadata": {},
"source": [
"### Basic example"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "50ffeeee-db12-4801-b208-7e32ea3d72ad",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\nMadam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \\n\\n\\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\n\\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\n\\n\\nWith a duty to one another to the American people to '"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.document_loaders import DedocFileLoader\n",
"\n",
"loader = DedocFileLoader(\"./example_data/state_of_the_union.txt\")\n",
"\n",
"docs = loader.load()\n",
"\n",
"docs[0].page_content[:400]"
]
},
{
"cell_type": "markdown",
"id": "457e5d4c-a4ee-4f31-ae74-3f75a1bbd0af",
"metadata": {},
"source": [
"### Modes of split\n",
"\n",
"`DedocFileLoader` supports different types of document splitting into parts (each part is returned separately).\n",
"For this purpose, `split` parameter is used with the following options:\n",
"* `document` (default value): document text is returned as a single langchain `Document` object (don't split);\n",
"* `page`: split document text into pages (works for `PDF`, `DJVU`, `PPTX`, `PPT`, `ODP`);\n",
"* `node`: split document text into `Dedoc` tree nodes (title nodes, list item nodes, raw text nodes);\n",
"* `line`: split document text into textual lines."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "eec54d31-ae7a-4a3c-aa10-4ae276b1e4c4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"2"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader = DedocFileLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" split=\"page\",\n",
" pages=\":2\",\n",
")\n",
"\n",
"docs = loader.load()\n",
"\n",
"len(docs)"
]
},
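{
"cell_type": "markdown",
"metadata": {},
"source": [
"The other split modes work in the same way. As a minimal sketch (assuming the same example files; the exact number of returned documents depends on the file content), splitting into textual lines looks like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Split the document into textual lines: one langchain Document per line\n",
"loader = DedocFileLoader(\n",
"    \"./example_data/state_of_the_union.txt\",\n",
"    split=\"line\",\n",
")\n",
"\n",
"docs = loader.load()\n",
"\n",
"len(docs), docs[0].page_content[:100]"
]
},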
{
"cell_type": "markdown",
"id": "61e11769-4780-4f77-b10e-27db6936f226",
"metadata": {},
"source": [
"### Handling tables\n",
"\n",
"`DedocFileLoader` supports tables handling when `with_tables` parameter is \n",
"set to `True` during loader initialization (`with_tables=True` by default). \n",
"\n",
"Tables are not split - each table corresponds to one langchain `Document` object.\n",
"For tables, `Document` object has additional `metadata` fields `type=\"table\"` \n",
"and `text_as_html` with table `HTML` representation."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "bbeb2f8a-ac5e-4b59-8026-7ea3fc14c928",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"('table',\n",
" '<table border=\"1\" style=\"border-collapse: collapse; width: 100%;\">\\n<tbody>\\n<tr>\\n<td colspan=\"1\" rowspan=\"1\">Team</td>\\n<td colspan=\"1\" rowspan=\"1\"> &quot;Payroll (millions)&quot;</td>\\n<td colspan=\"1\" r')"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader = DedocFileLoader(\"./example_data/mlb_teams_2012.csv\")\n",
"\n",
"docs = loader.load()\n",
"\n",
"docs[1].metadata[\"type\"], docs[1].metadata[\"text_as_html\"][:200]"
]
},
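{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since `text_as_html` contains a complete `HTML` table, it can be passed to any HTML-aware tool. As a minimal sketch (assuming `pandas` and an HTML parser such as `lxml` are installed, and `docs` was loaded as above), the table can be rebuilt as a DataFrame:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from io import StringIO\n",
"\n",
"import pandas as pd\n",
"\n",
"# Parse the HTML representation of the table back into a DataFrame\n",
"table_html = docs[1].metadata[\"text_as_html\"]\n",
"df = pd.read_html(StringIO(table_html))[0]\n",
"\n",
"df.head()"
]
},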
{
"cell_type": "markdown",
"id": "b4a2b872-2aba-4e4c-8b2f-83a5a81ee1da",
"metadata": {},
"source": [
"### Handling attached files\n",
"\n",
"`DedocFileLoader` supports attached files handling when `with_attachments` is set \n",
"to `True` during loader initialization (`with_attachments=False` by default). \n",
"\n",
"Attachments are split according to the `split` parameter.\n",
"For attachments, langchain `Document` object has an additional metadata \n",
"field `type=\"attachment\"`."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "bb9d6c1c-e24c-4979-88a0-38d54abd6332",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"('attachment',\n",
" '\\nContent-Type\\nmultipart/mixed; boundary=\"0000000000005d654405f082adb7\"\\nDate\\nFri, 23 Dec 2022 12:08:48 -0600\\nFrom\\nMallori Harrell <mallori@unstructured.io>\\nMIME-Version\\n1.0\\nMessage-ID\\n<CAPgNNXSzLVJ-d1OCX_TjFgJU7ugtQrjFybPtAMmmYZzphxNFYg@mail.gmail.com>\\nSubject\\nFake email with attachment\\nTo\\nMallori Harrell <mallori@unstructured.io>')"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader = DedocFileLoader(\n",
" \"./example_data/fake-email-attachment.eml\",\n",
" with_attachments=True,\n",
")\n",
"\n",
"docs = loader.load()\n",
"\n",
"docs[1].metadata[\"type\"], docs[1].page_content"
]
},
{
"cell_type": "markdown",
"id": "d435c3f6-703a-4064-8307-ace140de967a",
"metadata": {},
"source": [
"## Loading PDF file\n",
"\n",
"If you want to handle only `PDF` documents, you can use `DedocPDFLoader` with only `PDF` support.\n",
"The loader supports the same parameters for document split, tables and attachments extraction.\n",
"\n",
"`Dedoc` can extract `PDF` with or without a textual layer, \n",
"as well as automatically detect its presence and correctness.\n",
"Several `PDF` handlers are available, you can use `pdf_with_text_layer` \n",
"parameter to choose one of them.\n",
"Please see [parameters description](https://dedoc.readthedocs.io/en/latest/parameters/pdf_handling.html) \n",
"to get more details.\n",
"\n",
"For `PDF` without a textual layer, `Tesseract OCR` and its language packages should be installed.\n",
"In this case, [the instruction](https://dedoc.readthedocs.io/en/latest/tutorials/add_new_language.html) can be useful."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "0103a7f3-6b5e-4444-8f4d-83dd3724a9af",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n2\\n\\nZ. Shen et al.\\n\\n37], layout detection [38, 22], table detection [26], and scene text detection [4].\\n\\nA generalized learning-based framework dramatically reduces the need for the\\n\\nmanual specification of complicated rules, which is the status quo with traditional\\n\\nmethods. DL has the potential to transform DIA pipelines and benefit a broad\\n\\nspectrum of large-scale document digitization projects.\\n'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.document_loaders import DedocPDFLoader\n",
"\n",
"loader = DedocPDFLoader(\n",
" \"./example_data/layout-parser-paper.pdf\", pdf_with_text_layer=\"true\", pages=\"2:2\"\n",
")\n",
"\n",
"docs = loader.load()\n",
"\n",
"docs[0].page_content[:400]"
]
},
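{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you are not sure whether the `PDF` has a correct textual layer, you can let `Dedoc` detect it automatically. A minimal sketch, assuming the `auto` handler described in the [parameters description](https://dedoc.readthedocs.io/en/latest/parameters/pdf_handling.html) linked above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Let Dedoc detect the presence and correctness of the textual layer\n",
"loader = DedocPDFLoader(\n",
"    \"./example_data/layout-parser-paper.pdf\",\n",
"    pdf_with_text_layer=\"auto\",\n",
"    pages=\":2\",\n",
")\n",
"\n",
"docs = loader.load()\n",
"\n",
"docs[0].page_content[:200]"
]
},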
{
"cell_type": "markdown",
"id": "13061995-1805-40c2-a77a-a6cd80999e20",
"metadata": {},
"source": [
"## Dedoc API\n",
"\n",
"If you want to get up and running with less set up, you can use `Dedoc` as a service.\n",
"**`DedocAPIFileLoader` can be used without installation of `dedoc` library.**\n",
"The loader supports the same parameters as `DedocFileLoader` and\n",
"also automatically detects input file types.\n",
"\n",
"To use `DedocAPIFileLoader`, you should run the `Dedoc` service, e.g. `Docker` container (please see [the documentation](https://dedoc.readthedocs.io/en/latest/getting_started/installation.html#install-and-run-dedoc-using-docker) \n",
"for more details):\n",
"\n",
"```bash\n",
"docker pull dedocproject/dedoc\n",
"docker run -p 1231:1231\n",
"```\n",
"\n",
"Please do not use our demo URL `https://dedoc-readme.hf.space` in your code."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "211fc0b5-6080-4974-a6c1-f982bafd87d6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\nMadam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \\n\\n\\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\n\\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\n\\n\\nWith a duty to one another to the American people to '"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.document_loaders import DedocAPIFileLoader\n",
"\n",
"loader = DedocAPIFileLoader(\n",
" \"./example_data/state_of_the_union.txt\",\n",
" url=\"https://dedoc-readme.hf.space\",\n",
")\n",
"\n",
"docs = loader.load()\n",
"\n",
"docs[0].page_content[:400]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "faaff475-5209-436f-bcde-97d58daed05c",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.19"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -5,7 +5,7 @@
"id": "20deed05",
"metadata": {},
"source": [
"# Unstructured File\n",
"# Unstructured\n",
"\n",
"This notebook covers how to use `Unstructured` package to load files of many types. `Unstructured` currently supports loading of text files, powerpoints, html, pdfs, images, and more.\n",
"\n",
@@ -14,79 +14,70 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "2886982e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.0\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.1.1\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"# # Install package\n",
"%pip install --upgrade --quiet \"unstructured[all-docs]\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "54d62efd",
"metadata": {},
"outputs": [],
"source": [
"# # Install other dependencies\n",
"# # https://github.com/Unstructured-IO/unstructured/blob/main/docs/source/installing.rst\n",
"# !brew install libmagic\n",
"# !brew install poppler\n",
"# !brew install tesseract\n",
"# # If parsing xml / html documents:\n",
"# !brew install libxml2\n",
"# !brew install libxslt"
"# Install package, compatible with API partitioning\n",
"%pip install --upgrade --quiet \"langchain-unstructured\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "af6a64f5",
"cell_type": "markdown",
"id": "e75e2a6d",
"metadata": {},
"outputs": [],
"source": [
"# import nltk\n",
"# nltk.download('punkt')"
"### Local Partitioning (Optional)\n",
"\n",
"By default, `langchain-unstructured` installs a smaller footprint that requires\n",
"offloading of the partitioning logic to the Unstructured API, which requires an `api_key`. For\n",
"partitioning using the API, refer to the Unstructured API section below.\n",
"\n",
"If you would like to run the partitioning logic locally, you will need to install\n",
"a combination of system dependencies, as outlined in the \n",
"[Unstructured documentation here](https://docs.unstructured.io/open-source/installation/full-installation).\n",
"\n",
"For example, on Macs you can install the required dependencies with:\n",
"\n",
"```bash\n",
"# base dependencies\n",
"brew install libmagic poppler tesseract\n",
"\n",
"# If parsing xml / html documents:\n",
"brew install libxml2 libxslt\n",
"```\n",
"\n",
"You can install the required `pip` dependencies with:\n",
"\n",
"```bash\n",
"pip install \"langchain-unstructured[local]\"\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "a9c1c775",
"metadata": {},
"source": [
"### Quickstart\n",
"\n",
"To simply load a file as a document, you can use the LangChain `DocumentLoader.load` \n",
"interface:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "79d3e549",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.\\n\\nLast year COVID-19 kept us apart. This year we are finally together again.\\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.\\n\\nWith a duty to one another to the American people to the Constit'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"from langchain_community.document_loaders import UnstructuredFileLoader\n",
"from langchain_unstructured import UnstructuredLoader\n",
"\n",
"loader = UnstructuredFileLoader(\"./example_data/state_of_the_union.txt\")\n",
"loader = UnstructuredLoader(\"./example_data/state_of_the_union.txt\")\n",
"\n",
"docs = loader.load()\n",
"\n",
"docs[0].page_content[:400]"
"docs = loader.load()"
]
},
{
@@ -99,113 +90,31 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"id": "092d9a0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'1/22/23, 6:30 PM - User 1: Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!\\n\\n1/22/23, 8:24 PM - User 2: Goodmorning! $50 is too low.\\n\\n1/23/23, 2:59 AM - User 1: How much do you want?\\n\\n1/23/23, 3:00 AM - User 2: Online is at least $100\\n\\n1/23/23, 3:01 AM - User 2: Here is $129\\n\\n1/23/23, 3:01 AM - User 2: <Media omitted>\\n\\n1/23/23, 3:01 AM - User 1: Im not int'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
"name": "stdout",
"output_type": "stream",
"text": [
"whatsapp_chat.txt : 1/22/23, 6:30 PM - User 1: Hi! Im interested in your bag. Im offering $50. Let me know if you are in\n",
"state_of_the_union.txt : May God bless you all. May God protect our troops.\n"
]
}
],
"source": [
"files = [\"./example_data/whatsapp_chat.txt\", \"./example_data/layout-parser-paper.pdf\"]\n",
"file_paths = [\n",
" \"./example_data/whatsapp_chat.txt\",\n",
" \"./example_data/state_of_the_union.txt\",\n",
"]\n",
"\n",
"loader = UnstructuredFileLoader(files)\n",
"loader = UnstructuredLoader(file_paths)\n",
"\n",
"docs = loader.load()\n",
"\n",
"docs[0].page_content[:400]"
]
},
{
"cell_type": "markdown",
"id": "7874d01d",
"metadata": {},
"source": [
"## Retain Elements\n",
"\n",
"Under the hood, Unstructured creates different \"elements\" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode=\"elements\"`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ff5b616d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.', metadata={'source': './example_data/state_of_the_union.txt', 'file_directory': './example_data', 'filename': 'state_of_the_union.txt', 'last_modified': '2024-07-01T11:18:22', 'languages': ['eng'], 'filetype': 'text/plain', 'category': 'NarrativeText'}),\n",
" Document(page_content='Last year COVID-19 kept us apart. This year we are finally together again.', metadata={'source': './example_data/state_of_the_union.txt', 'file_directory': './example_data', 'filename': 'state_of_the_union.txt', 'last_modified': '2024-07-01T11:18:22', 'languages': ['eng'], 'filetype': 'text/plain', 'category': 'NarrativeText'}),\n",
" Document(page_content='Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.', metadata={'source': './example_data/state_of_the_union.txt', 'file_directory': './example_data', 'filename': 'state_of_the_union.txt', 'last_modified': '2024-07-01T11:18:22', 'languages': ['eng'], 'filetype': 'text/plain', 'category': 'NarrativeText'}),\n",
" Document(page_content='With a duty to one another to the American people to the Constitution.', metadata={'source': './example_data/state_of_the_union.txt', 'file_directory': './example_data', 'filename': 'state_of_the_union.txt', 'last_modified': '2024-07-01T11:18:22', 'languages': ['eng'], 'filetype': 'text/plain', 'category': 'UncategorizedText'}),\n",
" Document(page_content='And with an unwavering resolve that freedom will always triumph over tyranny.', metadata={'source': './example_data/state_of_the_union.txt', 'file_directory': './example_data', 'filename': 'state_of_the_union.txt', 'last_modified': '2024-07-01T11:18:22', 'languages': ['eng'], 'filetype': 'text/plain', 'category': 'NarrativeText'})]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader = UnstructuredFileLoader(\n",
" \"./example_data/state_of_the_union.txt\", mode=\"elements\"\n",
")\n",
"\n",
"docs = loader.load()\n",
"\n",
"docs[:5]"
]
},
{
"cell_type": "markdown",
"id": "672733fd",
"metadata": {},
"source": [
"## Define a Partitioning Strategy\n",
"\n",
"Unstructured document loader allow users to pass in a `strategy` parameter that lets `unstructured` know how to partition the document. Currently supported strategies are `\"hi_res\"` (the default) and `\"fast\"`. Hi res partitioning strategies are more accurate, but take longer to process. Fast strategies partition the document more quickly, but trade-off accuracy. Not all document types have separate hi res and fast partitioning strategies. For those document types, the `strategy` kwarg is ignored. In some cases, the high res strategy will fallback to fast if there is a dependency missing (i.e. a model for document partitioning). You can see how to apply a strategy to an `UnstructuredFileLoader` below."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "767238a4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='2 v 8 4 3 5 1 . 3 0 1 2 : v i X r a', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 393.9), (16.34, 560.0), (36.34, 560.0), (36.34, 393.9)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2023-12-19T13:42:18', 'page_number': 1, 'parent_id': '89565df026a24279aaea20dc08cedbec', 'filetype': 'application/pdf', 'category': 'UncategorizedText'}),\n",
" Document(page_content='LayoutParser: A Unified Toolkit for Deep Learning Based Document Image Analysis', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((157.62199999999999, 114.23496279999995), (157.62199999999999, 146.5141628), (457.7358962799999, 146.5141628), (457.7358962799999, 114.23496279999995)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2023-12-19T13:42:18', 'page_number': 1, 'filetype': 'application/pdf', 'category': 'Title'}),\n",
" Document(page_content='Zejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain Lee4, Jacob Carlson3, and Weining Li5', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((134.809, 168.64029940800003), (134.809, 192.2517444), (480.5464199080001, 192.2517444), (480.5464199080001, 168.64029940800003)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2023-12-19T13:42:18', 'page_number': 1, 'parent_id': 'bde0b230a1aa488e3ce837d33015181b', 'filetype': 'application/pdf', 'category': 'UncategorizedText'}),\n",
" Document(page_content='1 Allen Institute for AI shannons@allenai.org 2 Brown University ruochen zhang@brown.edu 3 Harvard University {melissadell,jacob carlson}@fas.harvard.edu 4 University of Washington bcgl@cs.washington.edu 5 University of Waterloo w422li@uwaterloo.ca', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((207.23000000000002, 202.57205439999996), (207.23000000000002, 311.8195408), (408.12676, 311.8195408), (408.12676, 202.57205439999996)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2023-12-19T13:42:18', 'page_number': 1, 'parent_id': 'bde0b230a1aa488e3ce837d33015181b', 'filetype': 'application/pdf', 'category': 'UncategorizedText'}),\n",
" Document(page_content='Abstract. Recent advances in document image analysis (DIA) have been primarily driven by the application of neural networks. Ideally, research outcomes could be easily deployed in production and extended for further investigation. However, various factors like loosely organized codebases and sophisticated model configurations complicate the easy reuse of im- portant innovations by a wide audience. Though there have been on-going efforts to improve reusability and simplify deep learning (DL) model development in disciplines like natural language processing and computer vision, none of them are optimized for challenges in the domain of DIA. This represents a major gap in the existing toolkit, as DIA is central to academic research across a wide range of disciplines in the social sciences and humanities. This paper introduces LayoutParser, an open-source library for streamlining the usage of DL in DIA research and applica- tions. The core LayoutParser library comes with a set of simple and intuitive interfaces for applying and customizing DL models for layout de- tection, character recognition, and many other document processing tasks. To promote extensibility, LayoutParser also incorporates a community platform for sharing both pre-trained models and full document digiti- zation pipelines. We demonstrate that LayoutParser is helpful for both lightweight and large-scale digitization pipelines in real-word use cases. The library is publicly available at https://layout-parser.github.io.', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((162.779, 338.45008160000003), (162.779, 566.8455408), (454.0372021523199, 566.8455408), (454.0372021523199, 338.45008160000003)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2023-12-19T13:42:18', 'links': [{'text': ':// layout - parser . github . io', 'url': 'https://layout-parser.github.io', 'start_index': 1477}], 'page_number': 1, 'parent_id': 'bde0b230a1aa488e3ce837d33015181b', 'filetype': 'application/pdf', 'category': 'NarrativeText'})]"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.document_loaders import UnstructuredFileLoader\n",
"\n",
"loader = UnstructuredFileLoader(\n",
" \"./example_data/layout-parser-paper.pdf\", strategy=\"fast\", mode=\"elements\"\n",
")\n",
"\n",
"docs = loader.load()\n",
"\n",
"docs[5:10]"
"print(docs[0].metadata.get(\"filename\"), \": \", docs[0].page_content[:100])\n",
"print(docs[-1].metadata.get(\"filename\"), \": \", docs[-1].page_content[:100])"
]
},
{
@@ -215,37 +124,52 @@
"source": [
"## PDF Example\n",
"\n",
"Processing PDF documents works exactly the same way. Unstructured detects the file type and extracts the same types of elements. Modes of operation are \n",
"- `single` all the text from all elements are combined into one (default)\n",
"- `elements` maintain individual elements\n",
"- `paged` texts from each page are only combined"
"Processing PDF documents works exactly the same way. Unstructured detects the file type and extracts the same types of elements."
]
},
{
"cell_type": "markdown",
"id": "672733fd",
"metadata": {},
"source": [
"### Define a Partitioning Strategy\n",
"\n",
"Unstructured document loader allow users to pass in a `strategy` parameter that lets Unstructured\n",
"know how to partition pdf and other OCR'd documents. Currently supported strategies are `\"auto\"`,\n",
"`\"hi_res\"`, `\"ocr_only\"`, and `\"fast\"`. Learn more about the different strategies\n",
"[here](https://docs.unstructured.io/open-source/core-functionality/partitioning#partition-pdf). \n",
"\n",
"Not all document types have separate hi res and fast partitioning strategies. For those document types, the `strategy` kwarg is\n",
"ignored. In some cases, the high res strategy will fallback to fast if there is a dependency missing\n",
"(i.e. a model for document partitioning). You can see how to apply a strategy to an\n",
"`UnstructuredLoader` below."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "686e5eb4",
"execution_count": 6,
"id": "60685353",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='2 v 8 4 3 5 1 . 3 0 1 2 : v i X r a', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 393.9), (16.34, 560.0), (36.34, 560.0), (36.34, 393.9)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2023-12-19T13:42:18', 'page_number': 1, 'parent_id': '89565df026a24279aaea20dc08cedbec', 'filetype': 'application/pdf', 'category': 'UncategorizedText'}),\n",
" Document(page_content='LayoutParser: A Unified Toolkit for Deep Learning Based Document Image Analysis', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((157.62199999999999, 114.23496279999995), (157.62199999999999, 146.5141628), (457.7358962799999, 146.5141628), (457.7358962799999, 114.23496279999995)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2023-12-19T13:42:18', 'page_number': 1, 'filetype': 'application/pdf', 'category': 'Title'}),\n",
" Document(page_content='Zejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain Lee4, Jacob Carlson3, and Weining Li5', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((134.809, 168.64029940800003), (134.809, 192.2517444), (480.5464199080001, 192.2517444), (480.5464199080001, 168.64029940800003)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2023-12-19T13:42:18', 'page_number': 1, 'parent_id': 'bde0b230a1aa488e3ce837d33015181b', 'filetype': 'application/pdf', 'category': 'UncategorizedText'}),\n",
" Document(page_content='1 Allen Institute for AI shannons@allenai.org 2 Brown University ruochen zhang@brown.edu 3 Harvard University {melissadell,jacob carlson}@fas.harvard.edu 4 University of Washington bcgl@cs.washington.edu 5 University of Waterloo w422li@uwaterloo.ca', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((207.23000000000002, 202.57205439999996), (207.23000000000002, 311.8195408), (408.12676, 311.8195408), (408.12676, 202.57205439999996)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2023-12-19T13:42:18', 'page_number': 1, 'parent_id': 'bde0b230a1aa488e3ce837d33015181b', 'filetype': 'application/pdf', 'category': 'UncategorizedText'}),\n",
" Document(page_content='Abstract. Recent advances in document image analysis (DIA) have been primarily driven by the application of neural networks. Ideally, research outcomes could be easily deployed in production and extended for further investigation. However, various factors like loosely organized codebases and sophisticated model configurations complicate the easy reuse of im- portant innovations by a wide audience. Though there have been on-going efforts to improve reusability and simplify deep learning (DL) model development in disciplines like natural language processing and computer vision, none of them are optimized for challenges in the domain of DIA. This represents a major gap in the existing toolkit, as DIA is central to academic research across a wide range of disciplines in the social sciences and humanities. This paper introduces LayoutParser, an open-source library for streamlining the usage of DL in DIA research and applica- tions. The core LayoutParser library comes with a set of simple and intuitive interfaces for applying and customizing DL models for layout de- tection, character recognition, and many other document processing tasks. To promote extensibility, LayoutParser also incorporates a community platform for sharing both pre-trained models and full document digiti- zation pipelines. We demonstrate that LayoutParser is helpful for both lightweight and large-scale digitization pipelines in real-word use cases. The library is publicly available at https://layout-parser.github.io.', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((162.779, 338.45008160000003), (162.779, 566.8455408), (454.0372021523199, 566.8455408), (454.0372021523199, 338.45008160000003)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2023-12-19T13:42:18', 'links': [{'text': ':// layout - parser . github . io', 'url': 'https://layout-parser.github.io', 'start_index': 1477}], 'page_number': 1, 'parent_id': 'bde0b230a1aa488e3ce837d33015181b', 'filetype': 'application/pdf', 'category': 'NarrativeText'})]"
"[Document(metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 393.9), (16.34, 560.0), (36.34, 560.0), (36.34, 393.9)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2024-02-27T15:49:27', 'page_number': 1, 'parent_id': '89565df026a24279aaea20dc08cedbec', 'filetype': 'application/pdf', 'category': 'UncategorizedText', 'element_id': 'e9fa370aef7ee5c05744eb7bb7d9981b'}, page_content='2 v 8 4 3 5 1 . 3 0 1 2 : v i X r a'),\n",
" Document(metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((157.62199999999999, 114.23496279999995), (157.62199999999999, 146.5141628), (457.7358962799999, 146.5141628), (457.7358962799999, 114.23496279999995)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2024-02-27T15:49:27', 'page_number': 1, 'filetype': 'application/pdf', 'category': 'Title', 'element_id': 'bde0b230a1aa488e3ce837d33015181b'}, page_content='LayoutParser: A Unified Toolkit for Deep Learning Based Document Image Analysis'),\n",
" Document(metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((134.809, 168.64029940800003), (134.809, 192.2517444), (480.5464199080001, 192.2517444), (480.5464199080001, 168.64029940800003)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2024-02-27T15:49:27', 'page_number': 1, 'parent_id': 'bde0b230a1aa488e3ce837d33015181b', 'filetype': 'application/pdf', 'category': 'UncategorizedText', 'element_id': '54700f902899f0c8c90488fa8d825bce'}, page_content='Zejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain Lee4, Jacob Carlson3, and Weining Li5'),\n",
" Document(metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((207.23000000000002, 202.57205439999996), (207.23000000000002, 311.8195408), (408.12676, 311.8195408), (408.12676, 202.57205439999996)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2024-02-27T15:49:27', 'page_number': 1, 'parent_id': 'bde0b230a1aa488e3ce837d33015181b', 'filetype': 'application/pdf', 'category': 'UncategorizedText', 'element_id': 'b650f5867bad9bb4e30384282c79bcfe'}, page_content='1 Allen Institute for AI shannons@allenai.org 2 Brown University ruochen zhang@brown.edu 3 Harvard University {melissadell,jacob carlson}@fas.harvard.edu 4 University of Washington bcgl@cs.washington.edu 5 University of Waterloo w422li@uwaterloo.ca'),\n",
" Document(metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((162.779, 338.45008160000003), (162.779, 566.8455408), (454.0372021523199, 566.8455408), (454.0372021523199, 338.45008160000003)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2024-02-27T15:49:27', 'links': [{'text': ':// layout - parser . github . io', 'url': 'https://layout-parser.github.io', 'start_index': 1477}], 'page_number': 1, 'parent_id': 'bde0b230a1aa488e3ce837d33015181b', 'filetype': 'application/pdf', 'category': 'NarrativeText', 'element_id': 'cfc957c94fe63c8fd7c7f4bcb56e75a7'}, page_content='Abstract. Recent advances in document image analysis (DIA) have been primarily driven by the application of neural networks. Ideally, research outcomes could be easily deployed in production and extended for further investigation. However, various factors like loosely organized codebases and sophisticated model configurations complicate the easy reuse of im- portant innovations by a wide audience. Though there have been on-going efforts to improve reusability and simplify deep learning (DL) model development in disciplines like natural language processing and computer vision, none of them are optimized for challenges in the domain of DIA. This represents a major gap in the existing toolkit, as DIA is central to academic research across a wide range of disciplines in the social sciences and humanities. This paper introduces LayoutParser, an open-source library for streamlining the usage of DL in DIA research and applica- tions. The core LayoutParser library comes with a set of simple and intuitive interfaces for applying and customizing DL models for layout de- tection, character recognition, and many other document processing tasks. To promote extensibility, LayoutParser also incorporates a community platform for sharing both pre-trained models and full document digiti- zation pipelines. We demonstrate that LayoutParser is helpful for both lightweight and large-scale digitization pipelines in real-word use cases. The library is publicly available at https://layout-parser.github.io.')]"
]
},
"execution_count": 12,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader = UnstructuredFileLoader(\n",
" \"./example_data/layout-parser-paper.pdf\", mode=\"elements\"\n",
")\n",
"from langchain_unstructured import UnstructuredLoader\n",
"\n",
"loader = UnstructuredLoader(\"./example_data/layout-parser-paper.pdf\", strategy=\"fast\")\n",
"\n",
"docs = loader.load()\n",
"\n",
@@ -257,37 +181,39 @@
"id": "1cf27fc8",
"metadata": {},
"source": [
"If you need to post process the `unstructured` elements after extraction, you can pass in a list of `str` -> `str` functions to the `post_processors` kwarg when you instantiate the `UnstructuredFileLoader`. This applies to other Unstructured loaders as well. Below is an example."
"## Post Processing\n",
"\n",
"If you need to post process the `unstructured` elements after extraction, you can pass in a list of\n",
"`str` -> `str` functions to the `post_processors` kwarg when you instantiate the `UnstructuredLoader`. This applies to other Unstructured loaders as well. Below is an example."
]
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 7,
"id": "112e5538",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='2 v 8 4 3 5 1 . 3 0 1 2 : v i X r a', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 393.9), (16.34, 560.0), (36.34, 560.0), (36.34, 393.9)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2023-12-19T13:42:18', 'page_number': 1, 'parent_id': '89565df026a24279aaea20dc08cedbec', 'filetype': 'application/pdf', 'category': 'UncategorizedText'}),\n",
" Document(page_content='LayoutParser: A Unified Toolkit for Deep Learning Based Document Image Analysis', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((157.62199999999999, 114.23496279999995), (157.62199999999999, 146.5141628), (457.7358962799999, 146.5141628), (457.7358962799999, 114.23496279999995)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2023-12-19T13:42:18', 'page_number': 1, 'filetype': 'application/pdf', 'category': 'Title'}),\n",
" Document(page_content='Zejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain Lee4, Jacob Carlson3, and Weining Li5', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((134.809, 168.64029940800003), (134.809, 192.2517444), (480.5464199080001, 192.2517444), (480.5464199080001, 168.64029940800003)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2023-12-19T13:42:18', 'page_number': 1, 'parent_id': 'bde0b230a1aa488e3ce837d33015181b', 'filetype': 'application/pdf', 'category': 'UncategorizedText'}),\n",
" Document(page_content='1 Allen Institute for AI shannons@allenai.org 2 Brown University ruochen zhang@brown.edu 3 Harvard University {melissadell,jacob carlson}@fas.harvard.edu 4 University of Washington bcgl@cs.washington.edu 5 University of Waterloo w422li@uwaterloo.ca', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((207.23000000000002, 202.57205439999996), (207.23000000000002, 311.8195408), (408.12676, 311.8195408), (408.12676, 202.57205439999996)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2023-12-19T13:42:18', 'page_number': 1, 'parent_id': 'bde0b230a1aa488e3ce837d33015181b', 'filetype': 'application/pdf', 'category': 'UncategorizedText'}),\n",
" Document(page_content='Abstract. Recent advances in document image analysis (DIA) have been primarily driven by the application of neural networks. Ideally, research outcomes could be easily deployed in production and extended for further investigation. However, various factors like loosely organized codebases and sophisticated model configurations complicate the easy reuse of im- portant innovations by a wide audience. Though there have been on-going efforts to improve reusability and simplify deep learning (DL) model development in disciplines like natural language processing and computer vision, none of them are optimized for challenges in the domain of DIA. This represents a major gap in the existing toolkit, as DIA is central to academic research across a wide range of disciplines in the social sciences and humanities. This paper introduces LayoutParser, an open-source library for streamlining the usage of DL in DIA research and applica- tions. The core LayoutParser library comes with a set of simple and intuitive interfaces for applying and customizing DL models for layout de- tection, character recognition, and many other document processing tasks. To promote extensibility, LayoutParser also incorporates a community platform for sharing both pre-trained models and full document digiti- zation pipelines. We demonstrate that LayoutParser is helpful for both lightweight and large-scale digitization pipelines in real-word use cases. The library is publicly available at https://layout-parser.github.io.', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((162.779, 338.45008160000003), (162.779, 566.8455408), (454.0372021523199, 566.8455408), (454.0372021523199, 338.45008160000003)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2023-12-19T13:42:18', 'links': [{'text': ':// layout - parser . github . io', 'url': 'https://layout-parser.github.io', 'start_index': 1477}], 'page_number': 1, 'parent_id': 'bde0b230a1aa488e3ce837d33015181b', 'filetype': 'application/pdf', 'category': 'NarrativeText'})]"
"[Document(metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 393.9), (16.34, 560.0), (36.34, 560.0), (36.34, 393.9)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2024-02-27T15:49:27', 'page_number': 1, 'parent_id': '89565df026a24279aaea20dc08cedbec', 'filetype': 'application/pdf', 'category': 'UncategorizedText', 'element_id': 'e9fa370aef7ee5c05744eb7bb7d9981b'}, page_content='2 v 8 4 3 5 1 . 3 0 1 2 : v i X r a'),\n",
" Document(metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((157.62199999999999, 114.23496279999995), (157.62199999999999, 146.5141628), (457.7358962799999, 146.5141628), (457.7358962799999, 114.23496279999995)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2024-02-27T15:49:27', 'page_number': 1, 'filetype': 'application/pdf', 'category': 'Title', 'element_id': 'bde0b230a1aa488e3ce837d33015181b'}, page_content='LayoutParser: A Unified Toolkit for Deep Learning Based Document Image Analysis'),\n",
" Document(metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((134.809, 168.64029940800003), (134.809, 192.2517444), (480.5464199080001, 192.2517444), (480.5464199080001, 168.64029940800003)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2024-02-27T15:49:27', 'page_number': 1, 'parent_id': 'bde0b230a1aa488e3ce837d33015181b', 'filetype': 'application/pdf', 'category': 'UncategorizedText', 'element_id': '54700f902899f0c8c90488fa8d825bce'}, page_content='Zejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain Lee4, Jacob Carlson3, and Weining Li5'),\n",
" Document(metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((207.23000000000002, 202.57205439999996), (207.23000000000002, 311.8195408), (408.12676, 311.8195408), (408.12676, 202.57205439999996)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2024-02-27T15:49:27', 'page_number': 1, 'parent_id': 'bde0b230a1aa488e3ce837d33015181b', 'filetype': 'application/pdf', 'category': 'UncategorizedText', 'element_id': 'b650f5867bad9bb4e30384282c79bcfe'}, page_content='1 Allen Institute for AI shannons@allenai.org 2 Brown University ruochen zhang@brown.edu 3 Harvard University {melissadell,jacob carlson}@fas.harvard.edu 4 University of Washington bcgl@cs.washington.edu 5 University of Waterloo w422li@uwaterloo.ca'),\n",
" Document(metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((162.779, 338.45008160000003), (162.779, 566.8455408), (454.0372021523199, 566.8455408), (454.0372021523199, 338.45008160000003)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': './example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2024-02-27T15:49:27', 'links': [{'text': ':// layout - parser . github . io', 'url': 'https://layout-parser.github.io', 'start_index': 1477}], 'page_number': 1, 'parent_id': 'bde0b230a1aa488e3ce837d33015181b', 'filetype': 'application/pdf', 'category': 'NarrativeText', 'element_id': 'cfc957c94fe63c8fd7c7f4bcb56e75a7'}, page_content='Abstract. Recent advances in document image analysis (DIA) have been primarily driven by the application of neural networks. Ideally, research outcomes could be easily deployed in production and extended for further investigation. However, various factors like loosely organized codebases and sophisticated model configurations complicate the easy reuse of im- portant innovations by a wide audience. Though there have been on-going efforts to improve reusability and simplify deep learning (DL) model development in disciplines like natural language processing and computer vision, none of them are optimized for challenges in the domain of DIA. This represents a major gap in the existing toolkit, as DIA is central to academic research across a wide range of disciplines in the social sciences and humanities. This paper introduces LayoutParser, an open-source library for streamlining the usage of DL in DIA research and applica- tions. The core LayoutParser library comes with a set of simple and intuitive interfaces for applying and customizing DL models for layout de- tection, character recognition, and many other document processing tasks. To promote extensibility, LayoutParser also incorporates a community platform for sharing both pre-trained models and full document digiti- zation pipelines. We demonstrate that LayoutParser is helpful for both lightweight and large-scale digitization pipelines in real-word use cases. The library is publicly available at https://layout-parser.github.io.')]"
]
},
"execution_count": 14,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.document_loaders import UnstructuredFileLoader\n",
"from langchain_unstructured import UnstructuredLoader\n",
"from unstructured.cleaners.core import clean_extra_whitespace\n",
"\n",
"loader = UnstructuredFileLoader(\n",
"loader = UnstructuredLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" mode=\"elements\",\n",
" post_processors=[clean_extra_whitespace],\n",
")\n",
"\n",
@@ -303,34 +229,70 @@
"source": [
"## Unstructured API\n",
"\n",
"If you want to get up and running with less set up, you can simply run `pip install unstructured` and use `UnstructuredAPIFileLoader` or `UnstructuredAPIFileIOLoader`. That will process your document using the hosted Unstructured API. You can generate a free Unstructured API key [here](https://www.unstructured.io/api-key/). The [Unstructured documentation](https://unstructured-io.github.io/unstructured/) page will have instructions on how to generate an API key once theyre available. Check out the instructions [here](https://github.com/Unstructured-IO/unstructured-api#dizzy-instructions-for-using-the-docker-image) if youd like to self-host the Unstructured API or run it locally."
"If you want to get up and running with smaller packages and get the most up-to-date partitioning you can `pip install\n",
"unstructured-client` and `pip install langchain-unstructured`. For\n",
"more information about the `UnstructuredLoader`, refer to the\n",
"[Unstructured provider page](https://python.langchain.com/v0.1/docs/integrations/document_loaders/unstructured_file/).\n",
"\n",
"The loader will process your document using the hosted Unstructured serverless API when you pass in\n",
"your `api_key` and set `partition_via_api=True`. You can generate a free\n",
"Unstructured API key [here](https://unstructured.io/api-key/).\n",
"\n",
"Check out the instructions [here](https://github.com/Unstructured-IO/unstructured-api#dizzy-instructions-for-using-the-docker-image)\n",
"if youd like to self-host the Unstructured API or run it locally."
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "6e5fde16",
"metadata": {},
"outputs": [],
"source": [
"# Install package\n",
"%pip install \"langchain-unstructured\"\n",
"%pip install \"unstructured-client\"\n",
"\n",
"# Set API key\n",
"import os\n",
"\n",
"os.environ[\"UNSTRUCTURED_API_KEY\"] = \"FAKE_API_KEY\""
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "386eb63c",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO: Preparing to split document for partition.\n",
"INFO: Given file doesn't have '.pdf' extension, so splitting is not enabled.\n",
"INFO: Partitioning without split.\n",
"INFO: Successfully partitioned the document.\n"
]
},
{
"data": {
"text/plain": [
"Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.docx'})"
"Document(metadata={'source': 'example_data/fake.docx', 'category_depth': 0, 'filename': 'fake.docx', 'languages': ['por', 'cat'], 'filetype': 'application/vnd.openxmlformats-officedocument.wordprocessingml.document', 'category': 'Title', 'element_id': '56d531394823d81787d77a04462ed096'}, page_content='Lorem ipsum dolor sit amet.')"
]
},
"execution_count": 4,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.document_loaders import UnstructuredAPIFileLoader\n",
"from langchain_unstructured import UnstructuredLoader\n",
"\n",
"filenames = [\"example_data/fake.docx\", \"example_data/fake-email.eml\"]\n",
"\n",
"loader = UnstructuredAPIFileLoader(\n",
" file_path=filenames[0],\n",
" api_key=\"FAKE_API_KEY\",\n",
"loader = UnstructuredLoader(\n",
" file_path=\"example_data/fake.docx\",\n",
" api_key=os.getenv(\"UNSTRUCTURED_API_KEY\"),\n",
" partition_via_api=True,\n",
")\n",
"\n",
"docs = loader.load()\n",
@@ -342,43 +304,198 @@
"id": "94158999",
"metadata": {},
"source": [
"You can also batch multiple files through the Unstructured API in a single API using `UnstructuredAPIFileLoader`."
"You can also batch multiple files through the Unstructured API in a single API using `UnstructuredLoader`."
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 10,
"id": "a3d7c846",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='Lorem ipsum dolor sit amet.\\n\\nThis is a test email to use for unit tests.\\n\\nImportant points:\\n\\nRoses are red\\n\\nViolets are blue', metadata={'source': ['example_data/fake.docx', 'example_data/fake-email.eml']})"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
"name": "stderr",
"output_type": "stream",
"text": [
"INFO: Preparing to split document for partition.\n",
"INFO: Given file doesn't have '.pdf' extension, so splitting is not enabled.\n",
"INFO: Partitioning without split.\n",
"INFO: Successfully partitioned the document.\n",
"INFO: Preparing to split document for partition.\n",
"INFO: Given file doesn't have '.pdf' extension, so splitting is not enabled.\n",
"INFO: Partitioning without split.\n",
"INFO: Successfully partitioned the document.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"fake.docx : Lorem ipsum dolor sit amet.\n",
"fake-email.eml : Violets are blue\n"
]
}
],
"source": [
"loader = UnstructuredAPIFileLoader(\n",
" file_path=filenames,\n",
" api_key=\"FAKE_API_KEY\",\n",
"loader = UnstructuredLoader(\n",
" file_path=[\"example_data/fake.docx\", \"example_data/fake-email.eml\"],\n",
" api_key=os.getenv(\"UNSTRUCTURED_API_KEY\"),\n",
" partition_via_api=True,\n",
")\n",
"\n",
"docs = loader.load()\n",
"docs[0]"
"\n",
"print(docs[0].metadata[\"filename\"], \": \", docs[0].page_content[:100])\n",
"print(docs[-1].metadata[\"filename\"], \": \", docs[-1].page_content[:100])"
]
},
{
"cell_type": "markdown",
"id": "a324a0db",
"metadata": {},
"source": [
"### Unstructured SDK Client\n",
"\n",
"Partitioning with the Unstructured API relies on the [Unstructured SDK\n",
"Client](https://docs.unstructured.io/api-reference/api-services/sdk).\n",
"\n",
"Below is an example showing how you can customize some features of the client and use your own `requests.Session()`, pass in an alternative `server_url`, or customize the `RetryConfig` object for more control over how failed requests are handled.\n",
"\n",
"Note that the example below may not use the latest version of the UnstructuredClient and there could be breaking changes in future releases. For the latest examples, refer to the [Unstructured Python SDK](https://docs.unstructured.io/api-reference/api-services/sdk-python) docs."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0e510495",
"execution_count": 11,
"id": "58e55264",
"metadata": {},
"outputs": [],
"source": []
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO: Preparing to split document for partition.\n",
"INFO: Concurrency level set to 5\n",
"INFO: Splitting pages 1 to 16 (16 total)\n",
"INFO: Determined optimal split size of 4 pages.\n",
"INFO: Partitioning 4 files with 4 page(s) each.\n",
"INFO: Partitioning set #1 (pages 1-4).\n",
"INFO: Partitioning set #2 (pages 5-8).\n",
"INFO: Partitioning set #3 (pages 9-12).\n",
"INFO: Partitioning set #4 (pages 13-16).\n",
"INFO: HTTP Request: POST https://api.unstructuredapp.io/general/v0/general \"HTTP/1.1 200 OK\"\n",
"INFO: HTTP Request: POST https://api.unstructuredapp.io/general/v0/general \"HTTP/1.1 200 OK\"\n",
"INFO: HTTP Request: POST https://api.unstructuredapp.io/general/v0/general \"HTTP/1.1 200 OK\"\n",
"INFO: Successfully partitioned set #1, elements added to the final result.\n",
"INFO: Successfully partitioned set #2, elements added to the final result.\n",
"INFO: Successfully partitioned set #3, elements added to the final result.\n",
"INFO: Successfully partitioned set #4, elements added to the final result.\n",
"INFO: Successfully partitioned the document.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"layout-parser-paper.pdf : LayoutParser: A Unified Toolkit for Deep Learning Based Document Image Analysis\n"
]
}
],
"source": [
"import requests\n",
"from langchain_unstructured import UnstructuredLoader\n",
"from unstructured_client import UnstructuredClient\n",
"from unstructured_client.utils import BackoffStrategy, RetryConfig\n",
"\n",
"client = UnstructuredClient(\n",
" api_key_auth=os.getenv(\n",
" \"UNSTRUCTURED_API_KEY\"\n",
" ), # Note: the client API param is \"api_key_auth\" instead of \"api_key\"\n",
" client=requests.Session(),\n",
" server_url=\"https://api.unstructuredapp.io/general/v0/general\",\n",
" retry_config=RetryConfig(\n",
" strategy=\"backoff\",\n",
" retry_connection_errors=True,\n",
" backoff=BackoffStrategy(\n",
" initial_interval=500,\n",
" max_interval=60000,\n",
" exponent=1.5,\n",
" max_elapsed_time=900000,\n",
" ),\n",
" ),\n",
")\n",
"\n",
"loader = UnstructuredLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" partition_via_api=True,\n",
" client=client,\n",
")\n",
"\n",
"docs = loader.load()\n",
"\n",
"print(docs[0].metadata[\"filename\"], \": \", docs[0].page_content[:100])"
]
},
{
"cell_type": "markdown",
"id": "c66fbeb3",
"metadata": {},
"source": [
"## Chunking\n",
"\n",
"The `UnstructuredLoader` does not support `mode` as parameter for grouping text like the older\n",
"loader `UnstructuredFileLoader` and others did. It instead supports \"chunking\". Chunking in\n",
"unstructured differs from other chunking mechanisms you may be familiar with that form chunks based\n",
"on plain-text features--character sequences like \"\\n\\n\" or \"\\n\" that might indicate a paragraph\n",
"boundary or list-item boundary. Instead, all documents are split using specific knowledge about each\n",
"document format to partition the document into semantic units (document elements) and we only need to\n",
"resort to text-splitting when a single element exceeds the desired maximum chunk size. In general,\n",
"chunking combines consecutive elements to form chunks as large as possible without exceeding the\n",
"maximum chunk size. Chunking produces a sequence of CompositeElement, Table, or TableChunk elements.\n",
"Each “chunk” is an instance of one of these three types.\n",
"\n",
"See this [page](https://docs.unstructured.io/open-source/core-functionality/chunking) for more\n",
"details about chunking options, but to reproduce the same behavior as `mode=\"single\"`, you can set\n",
"`chunking_strategy=\"basic\"`, `max_characters=<some-really-big-number>`, and `include_orig_elements=False`."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "e9f1c20d",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"WARNING: Partitioning locally even though api_key is defined since partition_via_api=False.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of LangChain documents: 1\n",
"Length of text in the document: 42772\n"
]
}
],
"source": [
"from langchain_unstructured import UnstructuredLoader\n",
"\n",
"loader = UnstructuredLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" chunking_strategy=\"basic\",\n",
" max_characters=1000000,\n",
" include_orig_elements=False,\n",
")\n",
"\n",
"docs = loader.load()\n",
"\n",
"print(\"Number of LangChain documents:\", len(docs))\n",
"print(\"Length of text in the document:\", len(docs[0].page_content))"
]
}
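Beyond the `basic` strategy shown above, the unstructured chunking docs also describe a `by_title` strategy that starts new chunks at section headings. The sketch below is illustrative rather than taken from this notebook: it assumes `UnstructuredLoader` forwards the `by_title`, `max_characters`, `new_after_n_chars`, and `combine_text_under_n_chars` options to the underlying chunking call, so confirm against the chunking documentation linked above.

```python
from langchain_unstructured import UnstructuredLoader

# Hedged sketch: chunk by section titles instead of producing one giant chunk.
# Parameter names follow the unstructured chunking docs and may vary by version.
loader = UnstructuredLoader(
    "./example_data/layout-parser-paper.pdf",
    chunking_strategy="by_title",
    max_characters=2000,  # hard maximum characters per chunk
    new_after_n_chars=1500,  # soft maximum: prefer starting a new chunk past this size
    combine_text_under_n_chars=500,  # merge very small sections into neighbors
)

docs = loader.load()
print("Number of chunks:", len(docs))
print(docs[0].page_content[:200])
```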
],
"metadata": {
@@ -397,7 +514,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.10.13"
}
},
"nbformat": 4,

View File

@@ -316,7 +316,7 @@
"id": "eb00a625-a6c9-4766-b3f0-eaed024851c9",
"metadata": {},
"source": [
"## Return SQARQL query\n",
"## Return SPARQL query\n",
"You can return the SPARQL query step from the Sparql QA Chain using the `return_sparql_query` parameter"
]
},
@@ -358,7 +358,7 @@
"\u001b[32;1m\u001b[1;3m[]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"SQARQL query: PREFIX foaf: <http://xmlns.com/foaf/0.1/>\n",
"SPARQL query: PREFIX foaf: <http://xmlns.com/foaf/0.1/>\n",
"SELECT ?workHomepage\n",
"WHERE {\n",
" ?person foaf:name \"Tim Berners-Lee\" .\n",
@@ -370,7 +370,7 @@
],
"source": [
"result = chain(\"What is Tim Berners-Lee's work homepage?\")\n",
"print(f\"SQARQL query: {result['sparql_query']}\")\n",
"print(f\"SPARQL query: {result['sparql_query']}\")\n",
"print(f\"Final answer: {result['result']}\")"
]
},

View File

@@ -7,12 +7,29 @@
"source": [
"# Model caches\n",
"\n",
"This notebook covers how to cache results of individual LLM calls using different caches."
"This notebook covers how to cache results of individual LLM calls using different caches.\n",
"\n",
"First, let's install some dependencies"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "88486f6f",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai langchain-community\n",
"\n",
"import os\n",
"from getpass import getpass\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "10ad9224",
"metadata": {
"ExecuteTime": {
@@ -25,8 +42,9 @@
"from langchain.globals import set_llm_cache\n",
"from langchain_openai import OpenAI\n",
"\n",
"# To make the caching really obvious, lets use a slower model.\n",
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\", n=2, best_of=2)"
"# To make the caching really obvious, lets use a slower and older model.\n",
"# Caching supports newer chat models as well.\n",
"llm = OpenAI(model=\"gpt-3.5-turbo-instruct\", n=2, best_of=2)"
]
},
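For reference, caching is enabled globally with `set_llm_cache` (imported above); a minimal sketch, assuming the `InMemoryCache` and `SQLiteCache` classes from `langchain_community.cache`:

```python
from langchain.globals import set_llm_cache
from langchain_community.cache import InMemoryCache, SQLiteCache

# Exact-match cache kept in process memory (lost when the interpreter exits).
set_llm_cache(InMemoryCache())

# Or persist the cache to a local SQLite database file instead:
# set_llm_cache(SQLiteCache(database_path=".langchain.db"))
```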
{
@@ -41,7 +59,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 3,
"id": "426ff912",
"metadata": {},
"outputs": [],
@@ -53,7 +71,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "64005d1f",
"metadata": {},
"outputs": [
@@ -61,45 +79,14 @@
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 52.2 ms, sys: 15.2 ms, total: 67.4 ms\n",
"Wall time: 1.19 s\n"
"CPU times: user 7.57 ms, sys: 8.22 ms, total: 15.8 ms\n",
"Wall time: 649 ms\n"
]
},
{
"data": {
"text/plain": [
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!\""
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm(\"Tell me a joke\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "c8a1cb2b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 191 µs, sys: 11 µs, total: 202 µs\n",
"Wall time: 205 µs\n"
]
},
{
"data": {
"text/plain": [
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!\""
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!\""
]
},
"execution_count": 4,
@@ -107,10 +94,41 @@
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "c8a1cb2b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 551 µs, sys: 221 µs, total: 772 µs\n",
"Wall time: 1.23 ms\n"
]
},
{
"data": {
"text/plain": [
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!\""
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The second time it is, so it goes faster\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -125,7 +143,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"id": "aefd9d2f",
"metadata": {},
"outputs": [],
@@ -135,7 +153,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"id": "5f036236",
"metadata": {},
"outputs": [],
@@ -148,7 +166,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"id": "fa18e3af",
"metadata": {},
"outputs": [
@@ -156,47 +174,14 @@
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 33.2 ms, sys: 18.1 ms, total: 51.2 ms\n",
"Wall time: 667 ms\n"
"CPU times: user 12.6 ms, sys: 3.51 ms, total: 16.1 ms\n",
"Wall time: 486 ms\n"
]
},
{
"data": {
"text/plain": [
"'\\n\\nWhy did the chicken cross the road?\\n\\nTo get to the other side.'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm(\"Tell me a joke\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "5bf2f6fd",
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 4.86 ms, sys: 1.97 ms, total: 6.83 ms\n",
"Wall time: 5.79 ms\n"
]
},
{
"data": {
"text/plain": [
"'\\n\\nWhy did the chicken cross the road?\\n\\nTo get to the other side.'"
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!\""
]
},
"execution_count": 8,
@@ -204,10 +189,43 @@
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "5bf2f6fd",
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 52.6 ms, sys: 57.7 ms, total: 110 ms\n",
"Wall time: 113 ms\n"
]
},
{
"data": {
"text/plain": [
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!\""
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The second time it is, so it goes faster\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -227,6 +245,16 @@
"Use [Upstash Redis](https://upstash.com) to cache prompts and responses with a serverless HTTP API."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9bd81e8e",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU upstash_redis"
]
},
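A minimal setup sketch follows; it assumes the `UpstashRedisCache` class from `langchain_community.cache` (whose constructor parameter is named `redis_` in the LangChain docs) and the `upstash_redis.Redis` REST client, with placeholder credentials from the Upstash console.

```python
from langchain.globals import set_llm_cache
from langchain_community.cache import UpstashRedisCache
from upstash_redis import Redis

# Placeholder REST endpoint and token from the Upstash console.
UPSTASH_REDIS_REST_URL = "https://<your-instance>.upstash.io"
UPSTASH_REDIS_REST_TOKEN = "<your-rest-token>"

# The cache talks to Upstash over its serverless HTTP API.
set_llm_cache(
    UpstashRedisCache(redis_=Redis(url=UPSTASH_REDIS_REST_URL, token=UPSTASH_REDIS_REST_TOKEN))
)
```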
{
"cell_type": "code",
"execution_count": 11,
@@ -272,7 +300,7 @@
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -303,7 +331,7 @@
"source": [
"%%time\n",
"# The second time it is, so it goes faster\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -326,6 +354,16 @@
"Use [Redis](/docs/integrations/providers/redis) to cache prompts and responses."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d104226b",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU redis"
]
},
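For the exact-match Redis cache, the wiring looks roughly like this sketch (assuming `RedisCache` from `langchain_community.cache` and a Redis server running on localhost):

```python
from langchain.globals import set_llm_cache
from langchain_community.cache import RedisCache
from redis import Redis

# Exact-match cache backed by a local Redis server (default port 6379).
set_llm_cache(RedisCache(redis_=Redis()))
```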
{
"cell_type": "code",
"execution_count": 9,
@@ -369,7 +407,7 @@
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -400,7 +438,7 @@
"source": [
"%%time\n",
"# The second time it is, so it goes faster\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -414,7 +452,17 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": null,
"id": "77b3e4e0",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU redis"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "64df3099",
"metadata": {},
"outputs": [],
@@ -455,7 +503,7 @@
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -487,7 +535,7 @@
"%%time\n",
"# The second time, while not a direct hit, the question is semantically similar to the original question,\n",
"# so it uses the cached result!\n",
"llm(\"Tell me one joke\")"
"llm.invoke(\"Tell me one joke\")"
]
},
{
@@ -505,6 +553,16 @@
"Let's first start with an example of exact match"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7fe96cea",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU gptcache"
]
},
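A setup sketch for the exact-match case follows. It is based on the "map" data manager pattern from the gptcache and LangChain docs; treat the function and parameter names as assumptions that may shift between gptcache versions.

```python
import hashlib

from gptcache import Cache
from gptcache.manager.factory import manager_factory
from gptcache.processor.pre import get_prompt
from langchain.globals import set_llm_cache
from langchain_community.cache import GPTCache


def get_hashed_name(name: str) -> str:
    # Use a stable hash so each LLM configuration gets its own cache directory.
    return hashlib.sha256(name.encode()).hexdigest()


def init_gptcache(cache_obj: Cache, llm: str) -> None:
    hashed_llm = get_hashed_name(llm)
    cache_obj.init(
        pre_embedding_func=get_prompt,  # the cache key is the raw prompt (exact match)
        data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"),
    )


set_llm_cache(GPTCache(init_gptcache))
```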
{
"cell_type": "code",
"execution_count": 5,
@@ -563,7 +621,7 @@
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -594,7 +652,7 @@
"source": [
"%%time\n",
"# The second time it is, so it goes faster\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -659,7 +717,7 @@
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -690,7 +748,7 @@
"source": [
"%%time\n",
"# This is an exact match, so it finds it in the cache\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -721,7 +779,7 @@
"source": [
"%%time\n",
"# This is not an exact match, but semantically within distance so it hits!\n",
"llm(\"Tell me joke\")"
"llm.invoke(\"Tell me joke\")"
]
},
{
@@ -744,7 +802,11 @@
"### `MongoDBCache`\n",
"An abstraction to store a simple cache in MongoDB. This does not use Semantic Caching, nor does it require an index to be made on the collection before generation.\n",
"\n",
"To import this cache:\n",
"To import this cache, first install the required dependency:\n",
"\n",
"```bash\n",
"%pip install -qU langchain-mongodb\n",
"```\n",
"\n",
"```python\n",
"from langchain_mongodb.cache import MongoDBCache\n",
@@ -822,7 +884,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet momento"
"%pip install -qU momento"
]
},
{
@@ -877,7 +939,7 @@
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -909,7 +971,7 @@
"%%time\n",
"# The second time it is, so it goes faster\n",
"# When run in the same region as the cache, latencies are single digit ms\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -1015,7 +1077,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet \"cassio>=0.1.4\""
"%pip install -qU \"cassio>=0.1.4\""
]
},
{
@@ -1350,6 +1412,8 @@
}
],
"source": [
"%pip install -qU langchain_astradb\n",
"\n",
"import getpass\n",
"\n",
"ASTRA_DB_API_ENDPOINT = input(\"ASTRA_DB_API_ENDPOINT = \")\n",
@@ -1633,7 +1697,7 @@
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -1669,7 +1733,7 @@
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -1690,7 +1754,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -U langchain-elasticsearch"
"%pip install -qU langchain-elasticsearch"
]
},
{
@@ -1823,7 +1887,7 @@
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\", n=2, best_of=2, cache=False)"
"llm = OpenAI(model=\"gpt-3.5-turbo-instruct\", n=2, best_of=2, cache=False)"
]
},
{
@@ -1853,7 +1917,7 @@
],
"source": [
"%%time\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -1883,7 +1947,7 @@
],
"source": [
"%%time\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -1901,18 +1965,18 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 10,
"id": "9afa3f7a",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")\n",
"no_cache_llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\", cache=False)"
"llm = OpenAI(model=\"gpt-3.5-turbo-instruct\")\n",
"no_cache_llm = OpenAI(model=\"gpt-3.5-turbo-instruct\", cache=False)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 11,
"id": "98a78e8e",
"metadata": {},
"outputs": [],
@@ -1924,19 +1988,19 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 14,
"id": "2bfb099b",
"metadata": {},
"outputs": [],
"source": [
"with open(\"../../how_to/state_of_the_union.txt\") as f:\n",
"with open(\"../how_to/state_of_the_union.txt\") as f:\n",
" state_of_the_union = f.read()\n",
"texts = text_splitter.split_text(state_of_the_union)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 15,
"id": "f78b7f51",
"metadata": {},
"outputs": [],
@@ -1949,7 +2013,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 16,
"id": "a2a30822",
"metadata": {},
"outputs": [],
@@ -1959,7 +2023,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 17,
"id": "a545b743",
"metadata": {},
"outputs": [
@@ -1967,24 +2031,27 @@
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 452 ms, sys: 60.3 ms, total: 512 ms\n",
"Wall time: 5.09 s\n"
"CPU times: user 176 ms, sys: 23.2 ms, total: 199 ms\n",
"Wall time: 4.42 s\n"
]
},
{
"data": {
"text/plain": [
"'\\n\\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure. In response to Russian aggression in Ukraine, the United States is joining with European allies to impose sanctions and isolate Russia. American forces are being mobilized to protect NATO countries in the event that Putin decides to keep moving west. The Ukrainians are bravely fighting back, but the next few weeks will be hard for them. Putin will pay a high price for his actions in the long run. Americans should not be alarmed, as the United States is taking action to protect its interests and allies.'"
"{'input_documents': [Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\nWith a duty to one another to the American people to the Constitution. \\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \\n\\nSix days ago, Russias Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \\n\\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \\n\\nHe met the Ukrainian people. \\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \\n\\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \\n\\nIn this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. \\n\\nLet each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \\n\\nPlease rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \\n\\nThroughout our history weve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. \\n\\nThey keep moving. \\n\\nAnd the costs and the threats to America and the world keep rising. \\n\\nThats why the NATO Alliance was created to secure peace and stability in Europe after World War 2. \\n\\nThe United States is a member along with 29 other nations. \\n\\nIt matters. American diplomacy matters. American resolve matters. \\n\\nPutins latest attack on Ukraine was premeditated and unprovoked. \\n\\nHe rejected repeated efforts at diplomacy. \\n\\nHe thought the West and NATO wouldnt respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \\n\\nWe prepared extensively and carefully. \\n\\nWe spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin. \\n\\nI spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression. \\n\\nWe countered Russias lies with truth. \\n\\nAnd now that he has acted the free world is holding him accountable. \\n\\nAlong with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland. \\n\\nWe are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \\n\\nTogether with our allies we are right now enforcing powerful economic sanctions. \\n\\nWe are cutting off Russias largest banks from the international financial system. \\n\\nPreventing Russias central bank from defending the Russian Ruble making Putins $630 Billion “war fund” worthless. 
\\n\\nWe are choking off Russias access to technology that will sap its economic strength and weaken its military for years to come. \\n\\nTonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \\n\\nThe U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \\n\\nWe are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.'),\n",
" Document(page_content='We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains. \\n\\nAnd tonight I am announcing that we will join our allies in closing off American air space to all Russian flights further isolating Russia and adding an additional squeeze on their economy. The Ruble has lost 30% of its value. \\n\\nThe Russian stock market has lost 40% of its value and trading remains suspended. Russias economy is reeling and Putin alone is to blame. \\n\\nTogether with our allies we are providing support to the Ukrainians in their fight for freedom. Military assistance. Economic assistance. Humanitarian assistance. \\n\\nWe are giving more than $1 Billion in direct assistance to Ukraine. \\n\\nAnd we will continue to aid the Ukrainian people as they defend their country and to help ease their suffering. \\n\\nLet me be clear, our forces are not engaged and will not engage in conflict with Russian forces in Ukraine. \\n\\nOur forces are not going to Europe to fight in Ukraine, but to defend our NATO Allies in the event that Putin decides to keep moving west. \\n\\nFor that purpose weve mobilized American ground forces, air squadrons, and ship deployments to protect NATO countries including Poland, Romania, Latvia, Lithuania, and Estonia. \\n\\nAs I have made crystal clear the United States and our Allies will defend every inch of territory of NATO countries with the full force of our collective power. \\n\\nAnd we remain clear-eyed. The Ukrainians are fighting back with pure courage. But the next few days weeks, months, will be hard on them. \\n\\nPutin has unleashed violence and chaos. But while he may make gains on the battlefield he will pay a continuing high price over the long run. \\n\\nAnd a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \\n\\nTo all Americans, I will be honest with you, as Ive always promised. A Russian dictator, invading a foreign country, has costs around the world. \\n\\nAnd Im taking robust action to make sure the pain of our sanctions is targeted at Russias economy. And I will use every tool at our disposal to protect American businesses and consumers. \\n\\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \\n\\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \\n\\nThese steps will help blunt gas prices here at home. And I know the news about whats happening can seem alarming. \\n\\nBut I want you to know that we are going to be okay. \\n\\nWhen the history of this era is written Putins war on Ukraine will have left Russia weaker and the rest of the world stronger. \\n\\nWhile it shouldnt have taken something so terrible for people around the world to see whats at stake now everyone sees it clearly. \\n\\nWe see the unity among leaders of nations and a more unified Europe a more unified West. And we see unity among the people who are gathering in cities in large crowds around the world even in Russia to demonstrate their support for Ukraine. \\n\\nIn the battle between democracy and autocracy, democracies are rising to the moment, and the world is clearly choosing the side of peace and security. \\n\\nThis is a real test. 
Its going to take time. So let us continue to draw inspiration from the iron will of the Ukrainian people. \\n\\nTo our fellow Ukrainian Americans who forge a deep bond that connects our two nations we stand with you. \\n\\nPutin may circle Kyiv with tanks, but he will never gain the hearts and souls of the Ukrainian people. \\n\\nHe will never extinguish their love of freedom. He will never weaken the resolve of the free world. \\n\\nWe meet tonight in an America that has lived through two of the hardest years this nation has ever faced.'),\n",
" Document(page_content='We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. \\n\\nThe pandemic has been punishing. \\n\\nAnd so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. \\n\\nI understand. \\n\\nI remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. \\n\\nThats why one of the first things I did as President was fight to pass the American Rescue Plan. \\n\\nBecause people were hurting. We needed to act, and we did. \\n\\nFew pieces of legislation have done more in a critical moment in our history to lift us out of crisis. \\n\\nIt fueled our efforts to vaccinate the nation and combat COVID-19. It delivered immediate economic relief for tens of millions of Americans. \\n\\nHelped put food on their table, keep a roof over their heads, and cut the cost of health insurance. \\n\\nAnd as my Dad used to say, it gave people a little breathing room. \\n\\nAnd unlike the $2 Trillion tax cut passed in the previous administration that benefitted the top 1% of Americans, the American Rescue Plan helped working people—and left no one behind. \\n\\nAnd it worked. It created jobs. Lots of jobs. \\n\\nIn fact—our economy created over 6.5 Million new jobs just last year, more jobs created in one year \\nthan ever before in the history of America. \\n\\nOur economy grew at a rate of 5.7% last year, the strongest growth in nearly 40 years, the first step in bringing fundamental change to an economy that hasnt worked for the working people of this nation for too long. \\n\\nFor the past 40 years we were told that if we gave tax breaks to those at the very top, the benefits would trickle down to everyone else. \\n\\nBut that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. \\n\\nVice President Harris and I ran for office with a new economic vision for America. \\n\\nInvest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up \\nand the middle out, not from the top down. \\n\\nBecause we know that when the middle class grows, the poor have a ladder up and the wealthy do very well. \\n\\nAmerica used to have the best roads, bridges, and airports on Earth. \\n\\nNow our infrastructure is ranked 13th in the world. \\n\\nWe wont be able to compete for the jobs of the 21st Century if we dont fix that. \\n\\nThats why it was so important to pass the Bipartisan Infrastructure Law—the most sweeping investment to rebuild America in history. \\n\\nThis was a bipartisan effort, and I want to thank the members of both parties who worked to make it happen. \\n\\nWere done talking about infrastructure weeks. \\n\\nWere going to have an infrastructure decade. \\n\\nIt is going to transform America and put us on a path to win the economic competition of the 21st Century that we face with the rest of the world—particularly with China. \\n\\nAs Ive told Xi Jinping, it is never a good bet to bet against the American people. \\n\\nWell create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. \\n\\nAnd well do it all to withstand the devastating effects of the climate crisis and promote environmental justice. 
\\n\\nWell build a national network of 500,000 electric vehicle charging stations, begin to replace poisonous lead pipes—so every child—and every American—has clean water to drink at home and at school, provide affordable high-speed internet for every American—urban, suburban, rural, and tribal communities. \\n\\n4,000 projects have already been announced. \\n\\nAnd tonight, Im announcing that this year we will start fixing over 65,000 miles of highway and 1,500 bridges in disrepair. \\n\\nWhen we use taxpayer dollars to rebuild America we are going to Buy American: buy American products to support American jobs.')],\n",
" 'output_text': \" The speaker addresses the unity and strength of Americans and discusses the recent conflict with Russia and actions taken by the US and its allies. They announce closures of airspace, support for Ukraine, and measures to target corrupt Russian leaders. President Biden reflects on past hardships and highlights efforts to pass the American Rescue Plan. He criticizes the previous administration's policies and shares plans for the economy, including investing in America, education, rebuilding infrastructure, and supporting American jobs. \"}"
]
},
"execution_count": 21,
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"chain.run(docs)"
"chain.invoke(docs)"
]
},
{
@@ -1997,7 +2064,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 19,
"id": "39cbb282",
"metadata": {},
"outputs": [
@@ -2005,32 +2072,43 @@
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms\n",
"Wall time: 1.04 s\n"
"CPU times: user 7 ms, sys: 1.94 ms, total: 8.94 ms\n",
"Wall time: 1.06 s\n"
]
},
{
"data": {
"text/plain": [
"'\\n\\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure.'"
"{'input_documents': [Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\nWith a duty to one another to the American people to the Constitution. \\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \\n\\nSix days ago, Russias Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \\n\\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \\n\\nHe met the Ukrainian people. \\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \\n\\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \\n\\nIn this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. \\n\\nLet each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \\n\\nPlease rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \\n\\nThroughout our history weve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. \\n\\nThey keep moving. \\n\\nAnd the costs and the threats to America and the world keep rising. \\n\\nThats why the NATO Alliance was created to secure peace and stability in Europe after World War 2. \\n\\nThe United States is a member along with 29 other nations. \\n\\nIt matters. American diplomacy matters. American resolve matters. \\n\\nPutins latest attack on Ukraine was premeditated and unprovoked. \\n\\nHe rejected repeated efforts at diplomacy. \\n\\nHe thought the West and NATO wouldnt respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \\n\\nWe prepared extensively and carefully. \\n\\nWe spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin. \\n\\nI spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression. \\n\\nWe countered Russias lies with truth. \\n\\nAnd now that he has acted the free world is holding him accountable. \\n\\nAlong with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland. \\n\\nWe are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \\n\\nTogether with our allies we are right now enforcing powerful economic sanctions. \\n\\nWe are cutting off Russias largest banks from the international financial system. \\n\\nPreventing Russias central bank from defending the Russian Ruble making Putins $630 Billion “war fund” worthless. 
\\n\\nWe are choking off Russias access to technology that will sap its economic strength and weaken its military for years to come. \\n\\nTonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \\n\\nThe U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \\n\\nWe are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.'),\n",
" Document(page_content='We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains. \\n\\nAnd tonight I am announcing that we will join our allies in closing off American air space to all Russian flights further isolating Russia and adding an additional squeeze on their economy. The Ruble has lost 30% of its value. \\n\\nThe Russian stock market has lost 40% of its value and trading remains suspended. Russias economy is reeling and Putin alone is to blame. \\n\\nTogether with our allies we are providing support to the Ukrainians in their fight for freedom. Military assistance. Economic assistance. Humanitarian assistance. \\n\\nWe are giving more than $1 Billion in direct assistance to Ukraine. \\n\\nAnd we will continue to aid the Ukrainian people as they defend their country and to help ease their suffering. \\n\\nLet me be clear, our forces are not engaged and will not engage in conflict with Russian forces in Ukraine. \\n\\nOur forces are not going to Europe to fight in Ukraine, but to defend our NATO Allies in the event that Putin decides to keep moving west. \\n\\nFor that purpose weve mobilized American ground forces, air squadrons, and ship deployments to protect NATO countries including Poland, Romania, Latvia, Lithuania, and Estonia. \\n\\nAs I have made crystal clear the United States and our Allies will defend every inch of territory of NATO countries with the full force of our collective power. \\n\\nAnd we remain clear-eyed. The Ukrainians are fighting back with pure courage. But the next few days weeks, months, will be hard on them. \\n\\nPutin has unleashed violence and chaos. But while he may make gains on the battlefield he will pay a continuing high price over the long run. \\n\\nAnd a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \\n\\nTo all Americans, I will be honest with you, as Ive always promised. A Russian dictator, invading a foreign country, has costs around the world. \\n\\nAnd Im taking robust action to make sure the pain of our sanctions is targeted at Russias economy. And I will use every tool at our disposal to protect American businesses and consumers. \\n\\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \\n\\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \\n\\nThese steps will help blunt gas prices here at home. And I know the news about whats happening can seem alarming. \\n\\nBut I want you to know that we are going to be okay. \\n\\nWhen the history of this era is written Putins war on Ukraine will have left Russia weaker and the rest of the world stronger. \\n\\nWhile it shouldnt have taken something so terrible for people around the world to see whats at stake now everyone sees it clearly. \\n\\nWe see the unity among leaders of nations and a more unified Europe a more unified West. And we see unity among the people who are gathering in cities in large crowds around the world even in Russia to demonstrate their support for Ukraine. \\n\\nIn the battle between democracy and autocracy, democracies are rising to the moment, and the world is clearly choosing the side of peace and security. \\n\\nThis is a real test. 
Its going to take time. So let us continue to draw inspiration from the iron will of the Ukrainian people. \\n\\nTo our fellow Ukrainian Americans who forge a deep bond that connects our two nations we stand with you. \\n\\nPutin may circle Kyiv with tanks, but he will never gain the hearts and souls of the Ukrainian people. \\n\\nHe will never extinguish their love of freedom. He will never weaken the resolve of the free world. \\n\\nWe meet tonight in an America that has lived through two of the hardest years this nation has ever faced.'),\n",
" Document(page_content='We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. \\n\\nThe pandemic has been punishing. \\n\\nAnd so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. \\n\\nI understand. \\n\\nI remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. \\n\\nThats why one of the first things I did as President was fight to pass the American Rescue Plan. \\n\\nBecause people were hurting. We needed to act, and we did. \\n\\nFew pieces of legislation have done more in a critical moment in our history to lift us out of crisis. \\n\\nIt fueled our efforts to vaccinate the nation and combat COVID-19. It delivered immediate economic relief for tens of millions of Americans. \\n\\nHelped put food on their table, keep a roof over their heads, and cut the cost of health insurance. \\n\\nAnd as my Dad used to say, it gave people a little breathing room. \\n\\nAnd unlike the $2 Trillion tax cut passed in the previous administration that benefitted the top 1% of Americans, the American Rescue Plan helped working people—and left no one behind. \\n\\nAnd it worked. It created jobs. Lots of jobs. \\n\\nIn fact—our economy created over 6.5 Million new jobs just last year, more jobs created in one year \\nthan ever before in the history of America. \\n\\nOur economy grew at a rate of 5.7% last year, the strongest growth in nearly 40 years, the first step in bringing fundamental change to an economy that hasnt worked for the working people of this nation for too long. \\n\\nFor the past 40 years we were told that if we gave tax breaks to those at the very top, the benefits would trickle down to everyone else. \\n\\nBut that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. \\n\\nVice President Harris and I ran for office with a new economic vision for America. \\n\\nInvest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up \\nand the middle out, not from the top down. \\n\\nBecause we know that when the middle class grows, the poor have a ladder up and the wealthy do very well. \\n\\nAmerica used to have the best roads, bridges, and airports on Earth. \\n\\nNow our infrastructure is ranked 13th in the world. \\n\\nWe wont be able to compete for the jobs of the 21st Century if we dont fix that. \\n\\nThats why it was so important to pass the Bipartisan Infrastructure Law—the most sweeping investment to rebuild America in history. \\n\\nThis was a bipartisan effort, and I want to thank the members of both parties who worked to make it happen. \\n\\nWere done talking about infrastructure weeks. \\n\\nWere going to have an infrastructure decade. \\n\\nIt is going to transform America and put us on a path to win the economic competition of the 21st Century that we face with the rest of the world—particularly with China. \\n\\nAs Ive told Xi Jinping, it is never a good bet to bet against the American people. \\n\\nWell create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. \\n\\nAnd well do it all to withstand the devastating effects of the climate crisis and promote environmental justice. 
\\n\\nWell build a national network of 500,000 electric vehicle charging stations, begin to replace poisonous lead pipes—so every child—and every American—has clean water to drink at home and at school, provide affordable high-speed internet for every American—urban, suburban, rural, and tribal communities. \\n\\n4,000 projects have already been announced. \\n\\nAnd tonight, Im announcing that this year we will start fixing over 65,000 miles of highway and 1,500 bridges in disrepair. \\n\\nWhen we use taxpayer dollars to rebuild America we are going to Buy American: buy American products to support American jobs.')],\n",
" 'output_text': '\\n\\nThe speaker addresses the unity of Americans and discusses the conflict with Russia and support for Ukraine. The US and allies are taking action against Russia and targeting corrupt leaders. There is also support and assurance for the American people. President Biden reflects on recent hardships and highlights efforts to pass the American Rescue Plan. He also shares plans for economic growth and investment in America. '}"
]
},
"execution_count": 22,
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"chain.run(docs)"
"chain.invoke(docs)"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 20,
"id": "9df0dab8",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"rm: sqlite.db: No such file or directory\n"
]
}
],
"source": [
"!rm .langchain.db sqlite.db"
]
@@ -2105,7 +2183,7 @@
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm(\"Tell me a joke\")"
"llm.invoke(\"Tell me a joke\")"
]
},
{
@@ -2142,7 +2220,7 @@
"%%time\n",
"# The second time, while not a direct hit, the question is semantically similar to the original question,\n",
"# so it uses the cached result!\n",
"llm(\"Tell me one joke\")"
"llm.invoke(\"Tell me one joke\")"
]
},
{
@@ -2192,6 +2270,16 @@
"The standard cache that looks for an exact match of the user prompt."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ac0a2276",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_couchbase couchbase"
]
},
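A setup sketch follows. The `CouchbaseCache` constructor arguments shown here (a connected `Cluster` plus bucket, scope, and collection names) are assumptions based on the API reference linked at the end of this notebook, and the connection details are placeholders.

```python
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions
from langchain.globals import set_llm_cache
from langchain_couchbase.cache import CouchbaseCache

# Connect to a Couchbase cluster (placeholder endpoint and credentials).
cluster = Cluster(
    "couchbase://localhost",
    ClusterOptions(PasswordAuthenticator("Administrator", "password")),
)
cluster.wait_until_ready(timedelta(seconds=5))

set_llm_cache(
    CouchbaseCache(
        cluster=cluster,
        bucket_name="langchain-bucket",  # placeholder bucket
        scope_name="_default",  # placeholder scope
        collection_name="llm-cache",  # placeholder collection
    )
)
```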
{
"cell_type": "code",
"execution_count": 2,
@@ -2578,14 +2666,6 @@
"| langchain_couchbase.cache | [CouchbaseCache](https://api.python.langchain.com/en/latest/cache/langchain_couchbase.cache.CouchbaseCache.html) |\n",
"| langchain_couchbase.cache | [CouchbaseSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_couchbase.cache.CouchbaseSemanticCache.html) |\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "19067f14-c69a-4156-9504-af43a0713669",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -2604,7 +2684,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -108,7 +108,7 @@
"metadata": {},
"outputs": [],
"source": [
"model = Cohere(model=\"command\", max_tokens=256, temperature=0.75)"
"model = Cohere(max_tokens=256, temperature=0.75)"
]
},
{

View File

@@ -194,12 +194,37 @@
")"
]
},
{
"cell_type": "markdown",
"id": "e4a1e0f1",
"metadata": {},
"source": [
"For certain requirements, there is an option to pass the IBM's [`APIClient`](https://ibm.github.io/watsonx-ai-python-sdk/base.html#apiclient) object into the `WatsonxLLM` class."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4b28afc1",
"metadata": {},
"outputs": [],
"source": [
"from ibm_watsonx_ai import APIClient\n",
"\n",
"api_client = APIClient(...)\n",
"\n",
"watsonx_llm = WatsonxLLM(\n",
" model_id=\"ibm/granite-13b-instruct-v2\",\n",
" watsonx_client=api_client,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "7c4a632b",
"metadata": {},
"source": [
"You can also pass the IBM's [`ModelInference`](https://ibm.github.io/watsonx-ai-python-sdk/fm_model_inference.html) object into `WatsonxLLM` class."
"You can also pass the IBM's [`ModelInference`](https://ibm.github.io/watsonx-ai-python-sdk/fm_model_inference.html) object into the `WatsonxLLM` class."
]
},
{

View File

@@ -88,6 +88,7 @@
" \"max_tokens_to_generate\": 1000,\n",
" \"temperature\": 0.01,\n",
" \"select_expert\": \"llama-2-7b-chat-hf\",\n",
" \"process_prompt\": False,\n",
" # \"stop_sequences\": '\\\"sequence1\\\",\\\"sequence2\\\"',\n",
" # \"repetition_penalty\": 1.0,\n",
" # \"top_k\": 50,\n",
@@ -116,6 +117,7 @@
" \"max_tokens_to_generate\": 1000,\n",
" \"temperature\": 0.01,\n",
" \"select_expert\": \"llama-2-7b-chat-hf\",\n",
" \"process_prompt\": False,\n",
" # \"stop_sequences\": '\\\"sequence1\\\",\\\"sequence2\\\"',\n",
" # \"repetition_penalty\": 1.0,\n",
" # \"top_k\": 50,\n",
@@ -175,9 +177,7 @@
"import os\n",
"\n",
"sambastudio_base_url = \"<Your SambaStudio environment URL>\"\n",
"sambastudio_base_uri = (\n",
" \"<Your SambaStudio endpoint base URI>\" # optional, \"api/predict/nlp\" set as default\n",
")\n",
"sambastudio_base_uri = \"<Your SambaStudio endpoint base URI>\" # optional, \"api/predict/generic\" set as default\n",
"sambastudio_project_id = \"<Your SambaStudio project id>\"\n",
"sambastudio_endpoint_id = \"<Your SambaStudio endpoint id>\"\n",
"sambastudio_api_key = \"<Your SambaStudio endpoint API key>\"\n",
@@ -271,6 +271,7 @@
" \"do_sample\": True,\n",
" \"max_tokens_to_generate\": 1000,\n",
" \"temperature\": 0.01,\n",
" \"process_prompt\": False,\n",
" \"select_expert\": \"Meta-Llama-3-8B-Instruct\",\n",
" # \"repetition_penalty\": 1.0,\n",
" # \"top_k\": 50,\n",

View File

@@ -0,0 +1,133 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Yi\n",
"[01.AI](https://www.lingyiwanwu.com/en), founded by Dr. Kai-Fu Lee, is a global company at the forefront of AI 2.0. They offer cutting-edge large language models, including the Yi series, which range from 6B to hundreds of billions of parameters. 01.AI also provides multimodal models, an open API platform, and open-source options like Yi-34B/9B/6B and Yi-VL."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"## Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisite\n",
"An API key is required to access Yi LLM API. Visit https://www.lingyiwanwu.com/ to get your API key. When applying for the API key, you need to specify whether it's for domestic (China) or international use."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use Yi LLM"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"YI_API_KEY\"] = \"YOUR_API_KEY\"\n",
"\n",
"from langchain_community.llms import YiLLM\n",
"\n",
"# Load the model\n",
"llm = YiLLM(model=\"yi-large\")\n",
"\n",
"# You can specify the region if needed (default is \"auto\")\n",
"# llm = YiLLM(model=\"yi-large\", region=\"domestic\") # or \"international\"\n",
"\n",
"# Basic usage\n",
"res = llm.invoke(\"What's your name?\")\n",
"print(res)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Generate method\n",
"res = llm.generate(\n",
" prompts=[\n",
" \"Explain the concept of large language models.\",\n",
" \"What are the potential applications of AI in healthcare?\",\n",
" ]\n",
")\n",
"print(res)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Streaming\n",
"for chunk in llm.stream(\"Describe the key features of the Yi language model series.\"):\n",
" print(chunk, end=\"\", flush=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Asynchronous streaming\n",
"import asyncio\n",
"\n",
"\n",
"async def run_aio_stream():\n",
" async for chunk in llm.astream(\n",
" \"Write a brief on the future of AI according to Dr. Kai-Fu Lee's vision.\"\n",
" ):\n",
" print(chunk, end=\"\", flush=True)\n",
"\n",
"\n",
"asyncio.run(run_aio_stream())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Adjusting parameters\n",
"llm_with_params = YiLLM(\n",
" model=\"yi-large\",\n",
" temperature=0.7,\n",
" top_p=0.9,\n",
")\n",
"\n",
"res = llm_with_params(\n",
" \"Propose an innovative AI application that could benefit society.\"\n",
")\n",
"print(res)"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -6,7 +6,7 @@
"source": [
"# TiDB\n",
"\n",
"> [TiDB Cloud](https://tidbcloud.com/), is a comprehensive Database-as-a-Service (DBaaS) solution, that provides dedicated and serverless options. TiDB Serverless is now integrating a built-in vector search into the MySQL landscape. With this enhancement, you can seamlessly develop AI applications using TiDB Serverless without the need for a new database or additional technical stacks. Be among the first to experience it by joining the waitlist for the private beta at https://tidb.cloud/ai.\n",
"> [TiDB Cloud](https://www.pingcap.com/tidb-serverless/), is a comprehensive Database-as-a-Service (DBaaS) solution, that provides dedicated and serverless options. TiDB Serverless is now integrating a built-in vector search into the MySQL landscape. With this enhancement, you can seamlessly develop AI applications using TiDB Serverless without the need for a new database or additional technical stacks. Create a free TiDB Serverless cluster and start using the vector search feature at https://pingcap.com/ai.\n",
"\n",
"This notebook introduces how to use TiDB to store chat message history. "
]

18
docs/docs/integrations/platforms/aws.mdx Normal file → Executable file
View File

@@ -197,6 +197,24 @@ See a [usage example](/docs/integrations/vectorstores/documentdb).
```python
from langchain.vectorstores import DocumentDBVectorSearch
```
### Amazon MemoryDB
[Amazon MemoryDB](https://aws.amazon.com/memorydb/) is a durable, in-memory database service that delivers ultra-fast performance. MemoryDB is compatible with Redis OSS, a popular open source data store,
enabling you to quickly build applications using the same flexible and friendly Redis OSS APIs and commands that you already use today.
The `InMemoryVectorStore` class provides a vector store to connect with Amazon MemoryDB.
```python
from langchain_aws.vectorstores.inmemorydb import InMemoryVectorStore
vds = InMemoryVectorStore.from_documents(
chunks,
embeddings,
redis_url="rediss://cluster_endpoint:6379/ssl=True ssl_cert_reqs=none",
vector_schema=vector_schema,
index_name=INDEX_NAME,
)
```
See a [usage example](/docs/integrations/vectorstores/memorydb).
## Retrievers

View File

@@ -140,6 +140,18 @@ See a [usage example](/docs/integrations/text_embedding/google_vertex_ai_palm).
from langchain_google_vertexai import VertexAIEmbeddings
```
### Palm Embedding
We need to install the `langchain-community` python package.
```bash
pip install langchain-community
```
```python
from langchain_community.embeddings.google_palm import GooglePalmEmbeddings
```
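A minimal usage sketch (this assumes the `google-generativeai` package is installed and a `GOOGLE_API_KEY` environment variable is set; the query text is purely illustrative):
```python
from langchain_community.embeddings.google_palm import GooglePalmEmbeddings

# Reads the API key from the GOOGLE_API_KEY environment variable.
embeddings = GooglePalmEmbeddings()

# Embed a single query string; returns a list of floats.
vector = embeddings.embed_query("What is the capital of France?")
print(len(vector))
```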
## Document Loaders
### AlloyDB for PostgreSQL

View File

@@ -1,54 +0,0 @@
---
sidebar_position: 0
sidebar_class_name: hidden
---
# Providers
:::info
If you'd like to write your own integration, see [Extending LangChain](/docs/how_to/#custom).
If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing/integrations/).
:::
LangChain integrates with many providers.
## Partner Packages
These providers have standalone `langchain-{provider}` packages for improved versioning, dependency management and testing.
- [AI21](/docs/integrations/providers/ai21)
- [Airbyte](/docs/integrations/providers/airbyte)
- [Amazon Web Services](/docs/integrations/platforms/aws)
- [Anthropic](/docs/integrations/platforms/anthropic)
- [Astra DB](/docs/integrations/providers/astradb)
- [Cohere](/docs/integrations/providers/cohere)
- [Couchbase](/docs/integrations/providers/couchbase)
- [Elasticsearch](/docs/integrations/providers/elasticsearch)
- [Exa Search](/docs/integrations/providers/exa_search)
- [Fireworks](/docs/integrations/providers/fireworks)
- [Google](/docs/integrations/platforms/google)
- [Groq](/docs/integrations/providers/groq)
- [IBM](/docs/integrations/providers/ibm)
- [MistralAI](/docs/integrations/providers/mistralai)
- [MongoDB](/docs/integrations/providers/mongodb_atlas)
- [Nomic](/docs/integrations/providers/nomic)
- [Nvidia](/docs/integrations/providers/nvidia)
- [OpenAI](/docs/integrations/platforms/openai)
- [Pinecone](/docs/integrations/providers/pinecone)
- [Qdrant](/docs/integrations/providers/qdrant)
- [Robocorp](/docs/integrations/providers/robocorp)
- [Together AI](/docs/integrations/providers/together)
- [Upstage](/docs/integrations/providers/upstage)
- [Voyage AI](/docs/integrations/providers/voyageai)
## Featured Community Providers
- [Hugging Face](/docs/integrations/platforms/huggingface)
- [Microsoft](/docs/integrations/platforms/microsoft)
## All Providers
Click [here](/docs/integrations/providers/) to see all providers.

View File

@@ -156,6 +156,20 @@ See a [usage example](/docs/integrations/document_loaders/microsoft_onedrive).
from langchain_community.document_loaders import OneDriveLoader
```
### Microsoft OneDrive File
>[Microsoft OneDrive](https://en.wikipedia.org/wiki/OneDrive) (formerly `SkyDrive`) is a file-hosting service operated by Microsoft.
First, you need to install a python package.
```bash
pip install o365
```
```python
from langchain_community.document_loaders import OneDriveFileLoader
```
### Microsoft Word
@@ -338,7 +352,7 @@ Follow the documentation [here](/docs/integrations/tools/bing_search) to get a d
The environment variables `BING_SUBSCRIPTION_KEY` and `BING_SEARCH_URL` are required and are obtained from the Bing Search resource.
```bash
```python
from langchain_community.tools.bing_search import BingSearchResults
from langchain_community.utilities import BingSearchAPIWrapper
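
# A minimal, hedged sketch of wiring these together. It assumes the
# BING_SUBSCRIPTION_KEY and BING_SEARCH_URL environment variables above
# are set; the query string is purely illustrative.
api_wrapper = BingSearchAPIWrapper()
tool = BingSearchResults(api_wrapper=api_wrapper)

# Returns a string with the top search results for the query.
print(tool.invoke("What is LangChain?"))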

View File

@@ -68,3 +68,18 @@ Learn more in the [example notebook](/docs/integrations/document_loaders/cassand
> Apache Cassandra, Cassandra and Apache are either registered trademarks or trademarks of
> the [Apache Software Foundation](http://www.apache.org/) in the United States and/or other countries.
## Toolkit
The `Cassandra Database toolkit` enables AI engineers to efficiently integrate agents
with Cassandra data.
```python
from langchain_community.agent_toolkits.cassandra_database.toolkit import (
CassandraDatabaseToolkit,
)
```
Learn more in the [example notebook](/docs/integrations/toolkits/cassandra_database).
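A minimal sketch of wiring the toolkit to a running cluster (the contact point is a placeholder, and `cassandra-driver` is assumed to be installed; parameter names follow the community implementation as an assumption):
```python
from cassandra.cluster import Cluster
from langchain_community.agent_toolkits.cassandra_database.toolkit import (
    CassandraDatabaseToolkit,
)
from langchain_community.utilities.cassandra_database import CassandraDatabase

# Connect to a local Cassandra cluster; the contact point is illustrative.
session = Cluster(["127.0.0.1"]).connect()
db = CassandraDatabase(session=session)

toolkit = CassandraDatabaseToolkit(db=db)
for tool in toolkit.get_tools():
    print(tool.name)
```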

View File

@@ -46,6 +46,55 @@ print(llm.invoke("Come up with a pet name"))
```
Usage of the Cohere (legacy) [LLM model](/docs/integrations/llms/cohere)
### Tool calling
```python
from langchain_cohere import ChatCohere
from langchain_core.messages import (
HumanMessage,
ToolMessage,
)
from langchain_core.tools import tool
@tool
def magic_function(number: int) -> int:
"""Applies a magic operation to an integer
Args:
number: Number to have magic operation performed on
"""
return number + 10
def invoke_tools(tool_calls, messages):
for tool_call in tool_calls:
selected_tool = {"magic_function":magic_function}[
tool_call["name"].lower()
]
tool_output = selected_tool.invoke(tool_call["args"])
messages.append(ToolMessage(tool_output, tool_call_id=tool_call["id"]))
return messages
tools = [magic_function]
llm = ChatCohere()
llm_with_tools = llm.bind_tools(tools=tools)
messages = [
HumanMessage(
content="What is the value of magic_function(2)?"
)
]
res = llm_with_tools.invoke(messages)
while res.tool_calls:
messages.append(res)
messages = invoke_tools(res.tool_calls, messages)
res = llm_with_tools.invoke(messages)
print(res.content)
```
Tool calling with the Cohere LLM can be done by binding the necessary tools to the llm as seen above.
An alternative is multi-hop tool calling with the ReAct agent, as seen below.
### ReAct Agent
The agent is based on the paper
@@ -77,6 +126,7 @@ agent_executor.invoke({
"input": "In what year was the company that was founded as Sound of Music added to the S&P 500?",
})
```
The ReAct agent can be used to call multiple tools in sequence.
### RAG Retriever

View File

@@ -0,0 +1,56 @@
# Dedoc
>[Dedoc](https://dedoc.readthedocs.io) is an [open-source](https://github.com/ispras/dedoc)
library/service that extracts texts, tables, attached files and document structure
(e.g., titles, list items, etc.) from files of various formats.
`Dedoc` supports `DOCX`, `XLSX`, `PPTX`, `EML`, `HTML`, `PDF`, images and more.
Full list of supported formats can be found [here](https://dedoc.readthedocs.io/en/latest/#id1).
## Installation and Setup
### Dedoc library
You can install `Dedoc` using `pip`.
In this case, you will also need to install its dependencies;
please go [here](https://dedoc.readthedocs.io/en/latest/getting_started/installation.html)
for more information.
```bash
pip install dedoc
```
### Dedoc API
If you are going to use the `Dedoc` API, you don't need to install the `dedoc` library.
In this case, you should run the `Dedoc` service, e.g. as a `Docker` container (please see
[the documentation](https://dedoc.readthedocs.io/en/latest/getting_started/installation.html#install-and-run-dedoc-using-docker)
for more details):
```bash
docker pull dedocproject/dedoc
docker run -p 1231:1231
```
## Document Loader
* For handling files of any format (supported by `Dedoc`), you can use `DedocFileLoader`:
```python
from langchain_community.document_loaders import DedocFileLoader
```
* For handling PDF files (with or without a textual layer), you can use `DedocPDFLoader`:
```python
from langchain_community.document_loaders import DedocPDFLoader
```
* For handling files of any format without installing the library,
you can use the `Dedoc API` with `DedocAPIFileLoader`:
```python
from langchain_community.document_loaders import DedocAPIFileLoader
```
Please see a [usage example](/docs/integrations/document_loaders/dedoc) for more details.
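For example, a minimal sketch with `DedocFileLoader` (assuming the `dedoc` library is installed; `example.docx` is a placeholder path):
```python
from langchain_community.document_loaders import DedocFileLoader

# Parse a local file with the dedoc library; the path is illustrative.
loader = DedocFileLoader("example.docx")
docs = loader.load()
print(docs[0].page_content[:100])
```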

View File

@@ -355,7 +355,7 @@
"id": "859daaee-ac5d-47f8-8704-827f5578bf1b",
"metadata": {},
"source": [
"## Define a metic\n",
"## Define a metric\n",
"\n",
"We now need to define a metric. This will be used to determine which runs were successful and we can learn from. Here we will use DSPy's metrics, though you can write your own."
]

View File

@@ -38,7 +38,7 @@ import getpass
if "PREMAI_API_KEY" not in os.environ:
os.environ["PREMAI_API_KEY"] = getpass.getpass("PremAI API Key:")
chat = ChatPremAI(project_id=8)
chat = ChatPremAI(project_id=1234, model_name="gpt-4o")
```
### Chat Completions
@@ -50,7 +50,8 @@ The first one will give us a static result. Whereas the second one will stream t
```python
human_message = HumanMessage(content="Who are you?")
chat.invoke([human_message])
response = chat.invoke([human_message])
print(response.content)
```
You can provide system prompt here like this:
@@ -84,8 +85,8 @@ Repositories are also supported in langchain premai. Here is how you can do it.
```python
query = "what is the diameter of individual Galaxy"
repository_ids = [1991, ]
query = "Which models are used for dense retrieval"
repository_ids = [1985,]
repositories = dict(
ids=repository_ids,
similarity_threshold=0.3,
@@ -100,6 +101,8 @@ First we start by defining our repository with some repository ids. Make sure th
Now, we connect the repository with our chat object to invoke RAG based generations.
```python
import json
response = chat.invoke(query, max_tokens=100, repositories=repositories)
print(response.content)
@@ -109,25 +112,22 @@ print(json.dumps(response.response_metadata, indent=4))
This is how the output looks:
```bash
The diameters of individual galaxies range from 80,000-150,000 light-years.
Dense retrieval models typically include:
1. **BERT-based Models**: Such as DPR (Dense Passage Retrieval) which uses BERT for encoding queries and passages.
2. **ColBERT**: A model that combines BERT with late interaction mechanisms.
3. **ANCE (Approximate Nearest Neighbor Negative Contrastive Estimation)**: Uses BERT and focuses on efficient retrieval.
4. **TCT-ColBERT**: A variant of ColBERT that uses a two-tower
{
"document_chunks": [
{
"repository_id": 19xx,
"document_id": 13xx,
"chunk_id": 173xxx,
"document_name": "Kegy 202 Chapter 2",
"similarity_score": 0.586126983165741,
"content": "n thousands\n of light-years. The diameters of individual\n galaxies range from 80,000-150,000 light\n "
},
{
"repository_id": 19xx,
"document_id": 13xx,
"chunk_id": 173xxx,
"document_name": "Kegy 202 Chapter 2",
"similarity_score": 0.4815782308578491,
"content": " for development of galaxies. A galaxy contains\n a large number of stars. Galaxies spread over\n vast distances that are measured in thousands\n "
},
"repository_id": 1985,
"document_id": 1306,
"chunk_id": 173899,
"document_name": "[D] Difference between sparse and dense informati\u2026",
"similarity_score": 0.3209080100059509,
"content": "with the difference or anywhere\nwhere I can read about it?\n\n\n 17 9\n\n\n u/ScotiabankCanada \u2022 Promoted\n\n\n Accelerate your study permit process\n with Scotiabank's Student GIC\n Program. We're here to help you tur\u2026\n\n\n startright.scotiabank.com Learn More\n\n\n Add a Comment\n\n\nSort by: Best\n\n\n DinosParkour \u2022 1y ago\n\n\n Dense Retrieval (DR) m"
}
]
}
```
@@ -264,4 +264,164 @@ doc_result[:5]
0.0008162345038726926,
-0.004556538071483374,
0.02918623760342598,
-0.02547479420900345]
-0.02547479420900345]
## Tool/Function Calling
LangChain PremAI supports tool/function calling. Tool/function calling allows a model to respond to a given prompt by generating output that matches a user-defined schema.
- You can learn all about tool calling in detail [in our documentation here](https://docs.premai.io/get-started/function-calling).
- You can learn more about langchain tool calling in [this part of the docs](https://python.langchain.com/v0.1/docs/modules/model_io/chat/function_calling).
**NOTE:**
> The current version of LangChain ChatPremAI does not support function/tool calling with streaming. Streaming support along with function calling will be added soon.
### Passing tools to model
In order to pass tools and let the LLM choose the tool it needs to call, we need to pass a tool schema. A tool schema is the function definition along with a proper docstring describing what the function does, what each of its arguments is, etc. Below are some simple arithmetic functions with their schemas.
**NOTE:**
> When defining a function/tool schema, do not forget to add information about the function arguments; otherwise an error will be thrown.
```python
from langchain_core.tools import tool
from langchain_core.pydantic_v1 import BaseModel, Field
# Define the schema for function arguments
class OperationInput(BaseModel):
a: int = Field(description="First number")
b: int = Field(description="Second number")
# Now define the function where schema for argument will be OperationInput
@tool("add", args_schema=OperationInput, return_direct=True)
def add(a: int, b: int) -> int:
"""Adds a and b.
Args:
a: first int
b: second int
"""
return a + b
@tool("multiply", args_schema=OperationInput, return_direct=True)
def multiply(a: int, b: int) -> int:
"""Multiplies a and b.
Args:
a: first int
b: second int
"""
return a * b
```
### Binding tool schemas with our LLM
We will now use the `bind_tools` method to convert the functions above into "tools" and bind them to the model. This means this tool information will be passed every time we invoke the model.
```python
tools = [add, multiply]
llm_with_tools = chat.bind_tools(tools)
```
After this, we get a response from the model, which is now bound with the tools.
```python
query = "What is 3 * 12? Also, what is 11 + 49?"
messages = [HumanMessage(query)]
ai_msg = llm_with_tools.invoke(messages)
```
As we can see, when our chat model is bound with tools, it calls the correct set of tools, in sequence, based on the given prompt.
```python
ai_msg.tool_calls
```
**Output**
```python
[{'name': 'multiply',
'args': {'a': 3, 'b': 12},
'id': 'call_A9FL20u12lz6TpOLaiS6rFa8'},
{'name': 'add',
'args': {'a': 11, 'b': 49},
'id': 'call_MPKYGLHbf39csJIyb5BZ9xIk'}]
```
We append the message shown above to our message list; it acts as context and makes the LLM aware of which functions it has called.
```python
messages.append(ai_msg)
```
Tool calling happens in two phases:
1. In the first call, we gather the tools that the LLM decided to call, so that their results can be added as context for a more accurate, hallucination-free answer.
2. In the second call, we parse the set of tools chosen by the LLM and run them (in our case, the functions we defined, with the LLM's extracted arguments), then pass the results back to the LLM.
```python
from langchain_core.messages import ToolMessage
for tool_call in ai_msg.tool_calls:
selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
tool_output = selected_tool.invoke(tool_call["args"])
messages.append(ToolMessage(tool_output, tool_call_id=tool_call["id"]))
```
Finally, we call the LLM (bound with the tools) with the function responses added to its context.
```python
response = llm_with_tools.invoke(messages)
print(response.content)
```
**Output**
```txt
The final answers are:
- 3 * 12 = 36
- 11 + 49 = 60
```
### Defining tool schemas: Pydantic class `Optional`
Above we showed how to define a schema using the `tool` decorator; we can equivalently define the schema using Pydantic. Pydantic is useful when your tool inputs are more complex:
```python
from langchain_core.output_parsers.openai_tools import PydanticToolsParser
class add(BaseModel):
"""Add two integers together."""
a: int = Field(..., description="First integer")
b: int = Field(..., description="Second integer")
class multiply(BaseModel):
"""Multiply two integers together."""
a: int = Field(..., description="First integer")
b: int = Field(..., description="Second integer")
tools = [add, multiply]
```
Now, we can bind them to chat models and directly get the result:
```python
chain = llm_with_tools | PydanticToolsParser(tools=[multiply, add])
chain.invoke(query)
```
**Output**
```txt
[multiply(a=3, b=12), add(a=11, b=49)]
```
Now, as done above, we parse this output, run these functions, and call the LLM once again to get the final result.
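A minimal sketch of that final round trip, reusing `llm_with_tools` and `query` from above (the arithmetic mirrors the schemas we defined; treat it as illustrative rather than the canonical implementation):
```python
from langchain_core.messages import HumanMessage, ToolMessage

messages = [HumanMessage(query)]
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)

# Execute each tool call chosen by the model and feed the results back.
for tool_call in ai_msg.tool_calls:
    args = tool_call["args"]
    if tool_call["name"].lower() == "add":
        result = args["a"] + args["b"]
    else:  # "multiply"
        result = args["a"] * args["b"]
    messages.append(ToolMessage(str(result), tool_call_id=tool_call["id"]))

print(llm_with_tools.invoke(messages).content)
```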

View File

@@ -11,7 +11,8 @@ You need to install `langchain-robocorp` python package:
pip install langchain-robocorp
```
You will need a running instance of Action Server to communicate with from your agent application. See the [Robocorp Quickstart](https://github.com/robocorp/robocorp#quickstart) on how to setup Action Server and create your Actions.
You will need a running instance of `Action Server` to communicate with from your agent application.
See the [Robocorp Quickstart](https://github.com/robocorp/robocorp#quickstart) on how to setup Action Server and create your Actions.
You can bootstrap a new project using the Action Server `new` command.
@@ -21,6 +22,12 @@ cd ./your-project-name
action-server start
```
## Tool
```python
from langchain_robocorp.toolkits import ActionServerRequestTool
```
## Toolkit
See a [usage example](/docs/integrations/toolkits/robocorp).

View File

@@ -0,0 +1,25 @@
# SAP
>[SAP SE (Wikipedia)](https://www.sap.com/about/company.html) is a German multinational
> software company. It develops enterprise software to manage business operations and
> customer relations. The company is the world's leading
> `enterprise resource planning (ERP)` software vendor.
## Installation and Setup
We need to install the `hdbcli` python package.
```bash
pip install hdbcli
```
## Vectorstore
>[SAP HANA Cloud Vector Engine](https://www.sap.com/events/teched/news-guide/ai.html#article8) is
> a vector store fully integrated into the `SAP HANA Cloud` database.
See a [usage example](/docs/integrations/vectorstores/sap_hanavector).
```python
from langchain_community.vectorstores.hanavector import HanaDB
```
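A minimal connection sketch (the connection details are placeholders, and `FakeEmbeddings` stands in for a real embeddings model purely for illustration):
```python
from hdbcli import dbapi
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores.hanavector import HanaDB

# Placeholder connection details for an SAP HANA Cloud instance.
connection = dbapi.connect(
    address="<hostname>", port=443, user="<user>", password="<password>"
)

db = HanaDB(
    embedding=FakeEmbeddings(size=768),  # swap in a real embeddings model
    connection=connection,
    table_name="LANGCHAIN_DEMO",
)
db.add_texts(["SAP HANA Cloud includes a vector engine."])
print(db.similarity_search("vector engine", k=1))
```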

View File

@@ -20,3 +20,16 @@ from langchain_community.vectorstores import SKLearnVectorStore
```
For a more detailed walkthrough of the SKLearnVectorStore wrapper, see [this notebook](/docs/integrations/vectorstores/sklearn).
## Retriever
`Support vector machines (SVMs)` are supervised learning
methods used for classification, regression and outlier detection.
See a [usage example](/docs/integrations/retrievers/svm).
```python
from langchain_community.retrievers import SVMRetriever
```
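A minimal sketch (`OpenAIEmbeddings` is used purely as an example embeddings model; any embeddings class works, and `scikit-learn` must be installed):
```python
from langchain_community.retrievers import SVMRetriever
from langchain_openai import OpenAIEmbeddings

# Build an in-memory SVM retriever over a few example texts.
retriever = SVMRetriever.from_texts(
    ["foo", "bar", "world hello foo bar"],
    OpenAIEmbeddings(),
)
print(retriever.invoke("foo"))
```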

View File

@@ -7,7 +7,6 @@
There isn't any special setup for it.
## Document loader
See a [usage example](/docs/integrations/document_loaders/slack).
@@ -16,6 +15,14 @@ See a [usage example](/docs/integrations/document_loaders/slack).
from langchain_community.document_loaders import SlackDirectoryLoader
```
## Toolkit
See a [usage example](/docs/integrations/toolkits/slack).
```python
from langchain_community.agent_toolkits import SlackToolkit
```
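A minimal sketch of listing the toolkit's tools (assumes the `slack_sdk` package is installed and a Slack token is available in the environment, e.g. `SLACK_USER_TOKEN`):
```python
from langchain_community.agent_toolkits import SlackToolkit

# The toolkit authenticates via a Slack token in the environment.
toolkit = SlackToolkit()
for tool in toolkit.get_tools():
    print(tool.name)
```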
## Chat loader
See a [usage example](/docs/integrations/chat_loaders/slack).

View File

@@ -7,8 +7,8 @@ This page covers how to use the `Snowflake` ecosystem within `LangChain`.
## Embedding models
Snowflake offers their open weight `arctic` line of embedding models for free
on [Hugging Face](https://huggingface.co/Snowflake/snowflake-arctic-embed-l).
Snowflake offers their open-weight `arctic` line of embedding models for free
on [Hugging Face](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5). The most recent model, snowflake-arctic-embed-m-v1.5, features [matryoshka embedding](https://arxiv.org/abs/2205.13147), which allows for effective vector truncation.
You can use these models via the
[HuggingFaceEmbeddings](/docs/integrations/text_embedding/huggingfacehub) connector:
@@ -19,7 +19,7 @@ pip install langchain-community sentence-transformers
```python
from langchain_huggingface import HuggingFaceEmbeddings
model = HuggingFaceEmbeddings(model_name="snowflake/arctic-embed-l")
model = HuggingFaceEmbeddings(model_name="snowflake/arctic-embed-m-v1.5")
```
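Usage is then the same as for any other embeddings model, for example:
```python
from langchain_huggingface import HuggingFaceEmbeddings

model = HuggingFaceEmbeddings(model_name="snowflake/arctic-embed-m-v1.5")

# Embed a query locally with sentence-transformers; returns a list of floats.
vector = model.embed_query("Who founded Snowflake?")
print(len(vector))
```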
## Document loader

View File

@@ -1,10 +1,10 @@
# TiDB
> [TiDB Cloud](https://tidbcloud.com/), is a comprehensive Database-as-a-Service (DBaaS) solution,
> [TiDB Cloud](https://www.pingcap.com/tidb-serverless), is a comprehensive Database-as-a-Service (DBaaS) solution,
> that provides dedicated and serverless options. `TiDB Serverless` is now integrating
> a built-in vector search into the MySQL landscape. With this enhancement, you can seamlessly
> develop AI applications using `TiDB Serverless` without the need for a new database or additional
> technical stacks. Be among the first to experience it by joining the [waitlist for the private beta](https://tidb.cloud/ai).
> technical stacks. Create a free TiDB Serverless cluster and start using the vector search feature at https://pingcap.com/ai.
## Installation and Setup

View File

@@ -8,11 +8,21 @@ ecosystem within LangChain.
## Installation and Setup
If you are using a loader that runs locally, use the following steps to get `unstructured` and
its dependencies running locally.
If you are using a loader that runs locally, use the following steps to get `unstructured` and its
dependencies running.
- Install the Python SDK with `pip install unstructured`.
- You can install document specific dependencies with extras, i.e. `pip install "unstructured[docx]"`.
- For the smallest installation footprint and to take advantage of features not available in the
open-source `unstructured` package, install the Python SDK with `pip install unstructured-client`
along with `pip install langchain-unstructured` to use the `UnstructuredLoader` and partition
remotely against the Unstructured API. This loader lives
in a LangChain partner repo instead of the `langchain-community` repo and you will need an
`api_key`, which you can generate for free [here](https://unstructured.io/api-key/).
- Unstructured's documentation for the SDK can be found here:
https://docs.unstructured.io/api-reference/api-services/sdk
- To run everything locally, install the open-source python package with `pip install unstructured`
along with `pip install langchain-community` and use the same `UnstructuredLoader` as mentioned above.
- You can install document specific dependencies with extras, e.g. `pip install "unstructured[docx]"`.
- To install the dependencies for all document types, use `pip install "unstructured[all-docs]"`.
- Install the following system dependencies if they are not already available on your system with e.g. `brew install` for Mac.
Depending on what document types you're parsing, you may not need all of these.
@@ -22,16 +32,11 @@ its dependencies running locally.
- `qpdf` (PDFs)
- `libreoffice` (MS Office docs)
- `pandoc` (EPUBs)
- When running locally, Unstructured also recommends using Docker [by following this
guide](https://docs.unstructured.io/open-source/installation/docker-installation) to ensure all
system dependencies are installed correctly.
When running locally, Unstructured also recommends using Docker [by following this guide](https://docs.unstructured.io/open-source/installation/docker-installation)
to ensure all system dependencies are installed correctly.
If you want to get up and running with less set up, you can
simply run `pip install unstructured` and use `UnstructuredAPIFileLoader` or
`UnstructuredAPIFileIOLoader`. That will process your document using the hosted Unstructured API.
The `Unstructured API` requires API keys to make requests.
The Unstructured API requires API keys to make requests.
You can request an API key [here](https://unstructured.io/api-key-hosted) and start using it today!
Check out the README [here](https://github.com/Unstructured-IO/unstructured-api) to get started making API calls.
We'd love to hear your feedback; let us know how it goes in our [community slack](https://join.slack.com/t/unstructuredw-kbe4326/shared_invite/zt-1x7cgo0pg-PTptXWylzPQF9xZolzCnwQ).
@@ -42,30 +47,21 @@ Check out the instructions
## Data Loaders
The primary usage of the `Unstructured` is in data loaders.
The primary usage of `Unstructured` is in data loaders.
### UnstructuredAPIFileIOLoader
### UnstructuredLoader
See a [usage example](/docs/integrations/document_loaders/unstructured_file#unstructured-api).
See a [usage example](/docs/integrations/document_loaders/unstructured_file) to see how you can use
this loader for both partitioning locally and remotely with the serverless Unstructured API.
```python
from langchain_community.document_loaders import UnstructuredAPIFileIOLoader
```
### UnstructuredAPIFileLoader
See a [usage example](/docs/integrations/document_loaders/unstructured_file#unstructured-api).
```python
from langchain_community.document_loaders import UnstructuredAPIFileLoader
from langchain_unstructured import UnstructuredLoader
```
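A minimal local-partitioning sketch (the file path is a placeholder, and local processing assumes the open-source `unstructured` package is installed; remote partitioning against the API would instead pass an `api_key` and `partition_via_api=True`):
```python
from langchain_unstructured import UnstructuredLoader

# Partition a local file (the path is illustrative).
loader = UnstructuredLoader("./example.pdf")
docs = loader.load()
print(docs[0].metadata)
```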
### UnstructuredCHMLoader
`CHM` means `Microsoft Compiled HTML Help`.
See a usage example in the API documentation.
```python
from langchain_community.document_loaders import UnstructuredCHMLoader
```
@@ -119,15 +115,6 @@ See a [usage example](/docs/integrations/document_loaders/google_drive#passing-i
from langchain_community.document_loaders import UnstructuredFileIOLoader
```
### UnstructuredFileLoader
See a [usage example](/docs/integrations/document_loaders/unstructured_file).
```python
from langchain_community.document_loaders import UnstructuredFileLoader
```
### UnstructuredHTMLLoader
See a [usage example](/docs/how_to/document_loader_html).

View File

@@ -0,0 +1,23 @@
# 01.AI
>[01.AI](https://www.lingyiwanwu.com/en), founded by Dr. Kai-Fu Lee, is a global company at the forefront of AI 2.0. They offer cutting-edge large language models, including the Yi series, which range from 6B to hundreds of billions of parameters. 01.AI also provides multimodal models, an open API platform, and open-source options like Yi-34B/9B/6B and Yi-VL.
## Installation and Setup
Register and get an API key from either the China site [here](https://platform.lingyiwanwu.com/apikeys) or the global site [here](https://platform.01.ai/apikeys).
## LLMs
See a [usage example](/docs/integrations/llms/yi).
```python
from langchain_community.llms import YiLLM
```
## Chat models
See a [usage example](/docs/integrations/chat/yi).
```python
from langchain_community.chat_models import ChatYi
```
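As a hedged sketch, assuming `ChatYi` mirrors the parameters shown for `YiLLM` above (a model name plus the `YI_API_KEY` environment variable):
```python
from langchain_community.chat_models import ChatYi

# Assumes YI_API_KEY is set; the model name follows the YiLLM example above.
chat = ChatYi(model="yi-large")
print(chat.invoke("Tell me about 01.AI in one sentence.").content)
```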

View File

@@ -2,14 +2,49 @@
"cells": [
{
"cell_type": "markdown",
"id": "9fc6205b",
"id": "00a924a0-57e2-43fa-95dc-3ea48a56d3a5",
"metadata": {},
"source": [
"# Arxiv\n",
"---\n",
"sidebar_label: Arxiv\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "0f1b8ddb-8b06-4e7e-b0bb-8786dea15e2b",
"metadata": {},
"source": [
"# ArxivRetriever\n",
"\n",
"## Overview\n",
"\n",
">[arXiv](https://arxiv.org/) is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.\n",
"\n",
"This notebook shows how to retrieve scientific articles from `Arxiv.org` into the Document format that is used downstream."
"This notebook shows how to retrieve scientific articles from Arxiv.org into the [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) format that is used downstream.\n",
"\n",
"For detailed documentation of all `ArxivRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.arxiv.ArxivRetriever.html).\n",
"\n",
"### Integration details\n",
"\n",
"| Retriever | Source | Package |\n",
"| :--- | :--- | :---: |\n",
"[ArxivRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.arxiv.ArxivRetriever.html) | Scholarly articles on [arxiv.org](https://arxiv.org/) | langchain_community |\n",
"\n",
"## Setup\n",
"\n",
"If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75d179b4-abc3-48db-9f8b-1cdb46d3aa77",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
@@ -17,15 +52,9 @@
"id": "51489529-5dcd-4b86-bda6-de0a39d8ffd1",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "markdown",
"id": "1435c804-069d-4ade-9a7b-006b97b767c1",
"metadata": {},
"source": [
"First, you need to install `arxiv` python package."
"### Installation\n",
"\n",
"This retriever lives in the `langchain-community` package. We will also need the [arxiv](https://pypi.org/project/arxiv/) dependency:"
]
},
{
@@ -37,7 +66,7 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet arxiv"
"%pip install -qU langchain-community arxiv"
]
},
{
@@ -45,54 +74,44 @@
"id": "6c15470b-a16b-4e0d-bc6a-6998bafbb5a4",
"metadata": {},
"source": [
"`ArxivRetriever` has these arguments:\n",
"## Instantiation\n",
"\n",
"`ArxivRetriever` parameters include:\n",
"- optional `load_max_docs`: default=100. Use it to limit number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.\n",
"- optional `load_all_available_meta`: default=False. By default only the most important fields downloaded: `Published` (date when document was published/last updated), `Title`, `Authors`, `Summary`. If True, other fields also downloaded.\n",
"- `get_full_documents`: boolean, default False. Determines whether to fetch full text of documents.\n",
"\n",
"`get_relevant_documents()` has one argument, `query`: free text which used to find documents in `Arxiv.org`"
"See [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.arxiv.ArxivRetriever.html) for more detail."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a13f9e92-24b3-4cea-8541-2584c1cdb2d1",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.retrievers import ArxivRetriever\n",
"\n",
"retriever = ArxivRetriever(\n",
" load_max_docs=2,\n",
" get_ful_documents=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "ae3c3d16",
"id": "30c27047-16cf-46b5-bb29-754f1696f2bb",
"metadata": {},
"source": [
"## Examples"
]
},
{
"cell_type": "markdown",
"id": "6fafb73b-d6ec-4822-b161-edf0aaf5224a",
"metadata": {},
"source": [
"### Running retriever"
"## Usage\n",
"\n",
"`ArxivRetriever` supports retrieval by article identifier:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d0e6f506",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.retrievers import ArxivRetriever"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "f381f642",
"metadata": {},
"outputs": [],
"source": [
"retriever = ArxivRetriever(load_max_docs=2)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 2,
"id": "20ae1a74",
"metadata": {},
"outputs": [],
@@ -102,20 +121,20 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 3,
"id": "1d5a5088",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'Published': '2016-05-26',\n",
"{'Entry ID': 'http://arxiv.org/abs/1605.08386v1',\n",
" 'Published': datetime.date(2016, 5, 26),\n",
" 'Title': 'Heat-bath random walks with Markov bases',\n",
" 'Authors': 'Caprice Stanley, Tobias Windisch',\n",
" 'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\\nallowed moves of arbitrary length. We show that the diameter of these graphs on\\nfibers of a fixed integer matrix can be bounded from above by a constant. We\\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\\nalso state explicit conditions on the set of moves so that the heat-bath random\\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\\ndimension.'}"
" 'Authors': 'Caprice Stanley, Tobias Windisch'}"
]
},
"execution_count": 9,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -126,17 +145,17 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 4,
"id": "c0ccd0c7-f6a6-43e7-b842-5f57afb94224",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'arXiv:1605.08386v1 [math.CO] 26 May 2016\\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\\nCAPRICE STANLEY AND TOBIAS WINDISCH\\nAbstract. Graphs on lattice points are studied whose edges come from a nite set of\\nallowed moves of arbitrary length. We show that the diameter of these graphs onbers of a\\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\\nbehaviour of heat-b'"
"'Graphs on lattice points are studied whose edges come from a finite set of\\nallowed moves of arbitrary length. We show that the diameter of these graphs on\\nfibers of a fixed integer matrix can be bounded from above by a constant. We\\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\\nalso state explicit conditions on the set of moves so that the heat-bath random\\nwalk, a ge'"
]
},
"execution_count": 10,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -147,159 +166,143 @@
},
{
"cell_type": "markdown",
"id": "2670363b-3806-4c7e-b14d-90a4d5d2a200",
"id": "c525c5c2-0961-4f4c-a208-dd6ceed76ea1",
"metadata": {},
"source": [
"### Question Answering on facts"
"`ArxivRetriever` also supports retrieval based on natural language text:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 5,
"id": "4cd3d079-4496-4ab8-adff-b86e6418bc74",
"metadata": {},
"outputs": [],
"source": [
"docs = retriever.invoke(\"What is the ImageBind model?\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9318c790-d388-45da-8d5c-57256619e2a1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'Entry ID': 'http://arxiv.org/abs/2305.05665v2',\n",
" 'Published': datetime.date(2023, 5, 31),\n",
" 'Title': 'ImageBind: One Embedding Space To Bind Them All',\n",
" 'Authors': 'Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra'}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].metadata"
]
},
{
"cell_type": "markdown",
"id": "2670363b-3806-4c7e-b14d-90a4d5d2a200",
"metadata": {},
"source": [
"## Use within a chain\n",
"\n",
"Like other retrievers, `ArxivRetriever` can be incorporated into LLM applications via [chains](/docs/how_to/sequence/).\n",
"\n",
"We will need a LLM or chat model:\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" />\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "bcbeeaf5-79d1-4e29-8589-11dfb26761af",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "bb3601df-53ea-4826-bdbe-554387bc3ad4",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" ········\n"
]
}
],
"source": [
"# get a token: https://platform.openai.com/account/api-keys\n",
"\n",
"from getpass import getpass\n",
"\n",
"OPENAI_API_KEY = getpass()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "e9c1a114-0410-4804-be30-05f34a9760f9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY"
"prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"Answer the question based only on the context provided.\n",
"\n",
"Context: {context}\n",
"\n",
"Question: {question}\"\"\"\n",
")\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "51a33cc9-ec42-4afc-8a2d-3bfff476aa59",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo\") # switch to 'gpt-4'\n",
"qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "ea537767-a8bf-4adf-ae03-b353c9145d58",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"-> **Question**: What are Heat-bath random walks with Markov base? \n",
"\n",
"**Answer**: I'm not sure, as I don't have enough context to provide a definitive answer. The term \"Heat-bath random walks with Markov base\" is not mentioned in the given text. Could you provide more information or context about where you encountered this term? \n",
"\n",
"-> **Question**: What is the ImageBind model? \n",
"\n",
"**Answer**: ImageBind is an approach developed by Facebook AI Research to learn a joint embedding across six different modalities, including images, text, audio, depth, thermal, and IMU data. The approach uses the binding property of images to align each modality's embedding to image embeddings and achieve an emergent alignment across all modalities. This enables novel multimodal capabilities, including cross-modal retrieval, embedding-space arithmetic, and audio-to-image generation, among others. The approach sets a new state-of-the-art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Additionally, it shows strong few-shot recognition results and serves as a new way to evaluate vision models for visual and non-visual tasks. \n",
"\n",
"-> **Question**: How does Compositional Reasoning with Large Language Models works? \n",
"\n",
"**Answer**: Compositional reasoning with large language models refers to the ability of these models to correctly identify and represent complex concepts by breaking them down into smaller, more basic parts and combining them in a structured way. This involves understanding the syntax and semantics of language and using that understanding to build up more complex meanings from simpler ones. \n",
"\n",
"In the context of the paper \"Does CLIP Bind Concepts? Probing Compositionality in Large Image Models\", the authors focus specifically on the ability of a large pretrained vision and language model (CLIP) to encode compositional concepts and to bind variables in a structure-sensitive way. They examine CLIP's ability to compose concepts in a single-object setting, as well as in situations where concept binding is needed. \n",
"\n",
"The authors situate their work within the tradition of research on compositional distributional semantics models (CDSMs), which seek to bridge the gap between distributional models and formal semantics by building architectures which operate over vectors yet still obey traditional theories of linguistic composition. They compare the performance of CLIP with several architectures from research on CDSMs to evaluate its ability to encode and reason about compositional concepts. \n",
"\n"
]
}
],
"source": [
"questions = [\n",
" \"What are Heat-bath random walks with Markov base?\",\n",
" \"What is the ImageBind model?\",\n",
" \"How does Compositional Reasoning with Large Language Models works?\",\n",
"]\n",
"chat_history = []\n",
"\n",
"for question in questions:\n",
" result = qa({\"question\": question, \"chat_history\": chat_history})\n",
" chat_history.append((question, result[\"answer\"]))\n",
" print(f\"-> **Question**: {question} \\n\")\n",
" print(f\"**Answer**: {result['answer']} \\n\")"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "8e0c3fc6-ae62-4036-a885-dc60176a7745",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"-> **Question**: What are Heat-bath random walks with Markov base? Include references to answer. \n",
"\n",
"**Answer**: Heat-bath random walks with Markov base (HB-MB) is a class of stochastic processes that have been studied in the field of statistical mechanics and condensed matter physics. In these processes, a particle moves in a lattice by making a transition to a neighboring site, which is chosen according to a probability distribution that depends on the energy of the particle and the energy of its surroundings.\n",
"\n",
"The HB-MB process was introduced by Bortz, Kalos, and Lebowitz in 1975 as a way to simulate the dynamics of interacting particles in a lattice at thermal equilibrium. The method has been used to study a variety of physical phenomena, including phase transitions, critical behavior, and transport properties.\n",
"\n",
"References:\n",
"\n",
"Bortz, A. B., Kalos, M. H., & Lebowitz, J. L. (1975). A new algorithm for Monte Carlo simulation of Ising spin systems. Journal of Computational Physics, 17(1), 10-18.\n",
"\n",
"Binder, K., & Heermann, D. W. (2010). Monte Carlo simulation in statistical physics: an introduction. Springer Science & Business Media. \n",
"\n"
]
}
],
"source": [
"questions = [\n",
" \"What are Heat-bath random walks with Markov base? Include references to answer.\",\n",
"]\n",
"chat_history = []\n",
"\n",
"for question in questions:\n",
" result = qa({\"question\": question, \"chat_history\": chat_history})\n",
" chat_history.append((question, result[\"answer\"]))\n",
" print(f\"-> **Question**: {question} \\n\")\n",
" print(f\"**Answer**: {result['answer']} \\n\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "09794ab5-759c-4b56-95d4-2454d4d86da1",
"execution_count": 9,
"id": "62889c3c-8a49-4c76-9141-d777311af1f4",
"metadata": {},
"outputs": [],
"source": []
"outputs": [
{
"data": {
"text/plain": [
"'The ImageBind model is an approach to learn a joint embedding across six different modalities - images, text, audio, depth, thermal, and IMU data. It shows that only image-paired data is sufficient to bind the modalities together and can leverage large scale vision-language models for zero-shot capabilities and emergent applications such as cross-modal retrieval, composing modalities with arithmetic, cross-modal detection and generation.'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"What is the ImageBind model?\")"
]
},
{
"cell_type": "markdown",
"id": "e419acb8-d7ac-42a1-916f-c796f23dce9b",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `ArxivRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.arxiv.ArxivRetriever.html)."
]
}
],
"metadata": {
@@ -318,7 +321,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -2,15 +2,39 @@
"cells": [
{
"cell_type": "markdown",
"id": "1edb9e6b",
"id": "f9a62e19-b00b-4f6c-a700-1e500e4c290a",
"metadata": {},
"source": [
"# Azure AI Search\n",
"---\n",
"sidebar_label: Azure AI Search\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "76f74245-7220-4446-ae8d-4e5a9e998f1f",
"metadata": {},
"source": [
"# AzureAISearchRetriever\n",
"\n",
"## Overview\n",
"[Azure AI Search](https://learn.microsoft.com/azure/search/search-what-is-azure-search) (formerly known as `Azure Cognitive Search`) is a Microsoft cloud search service that gives developers infrastructure, APIs, and tools for information retrieval of vector, keyword, and hybrid queries at scale.\n",
"\n",
"`AzureAISearchRetriever` is an integration module that returns documents from an unstructured query. It's based on the BaseRetriever class and it targets the 2023-11-01 stable REST API version of Azure AI Search, which means it supports vector indexing and queries.\n",
"\n",
"This guide will help you getting started with the Azure AI Search [retriever](/docs/concepts/#retrievers). For detailed documentation of all `AzureAISearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.azure_ai_search.AzureAISearchRetriever.html).\n",
"\n",
"`AzureAISearchRetriever` replaces `AzureCognitiveSearchRetriever`, which will soon be deprecated. We recommend switching to the newer version that's based on the most recent stable version of the search APIs.\n",
"\n",
"### Integration details\n",
"\n",
"| Retriever | Self-host | Cloud offering | Package |\n",
"| :--- | :--- | :---: | :---: |\n",
"[AzureAISearchRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.azure_ai_search.AzureAISearchRetriever.html) | ❌ | ✅ | langchain_community |\n",
"\n",
"\n",
"## Setup\n",
"\n",
"To use this module, you need:\n",
"\n",
"+ An Azure AI Search service. You can [create one](https://learn.microsoft.com/azure/search/search-create-service-portal) for free if you sign up for the Azure trial. A free service has lower quotas, but it's sufficient for running the code in this notebook.\n",
@@ -19,7 +43,40 @@
"\n",
"+ An API key. API keys are generated when you create the search service. If you're just querying an index, you can use the query API key, otherwise use an admin API key. See [Find your API keys](https://learn.microsoft.com/azure/search/search-security-api-keys?tabs=rest-use%2Cportal-find%2Cportal-query#find-existing-keys) for details.\n",
"\n",
"`AzureAISearchRetriever` replaces `AzureCognitiveSearchRetriever`, which will soon be deprecated. We recommend switching to the newer version that's based on the most recent stable version of the search APIs."
"We can then set the search service name, index name, and API key as environment variables (alternatively, you can pass them as arguments to `AzureAISearchRetriever`). The search index provides the searchable content."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6a56e83b-8563-4479-ab61-090fc79f5b00",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"AZURE_AI_SEARCH_SERVICE_NAME\"] = \"<YOUR_SEARCH_SERVICE_NAME>\"\n",
"os.environ[\"AZURE_AI_SEARCH_INDEX_NAME\"] = \"<YOUR_SEARCH_INDEX_NAME>\"\n",
"os.environ[\"AZURE_AI_SEARCH_API_KEY\"] = \"<YOUR_API_KEY>\""
]
},
{
"cell_type": "markdown",
"id": "3e635218-8634-4f39-abc5-39e319eeb136",
"metadata": {},
"source": [
"If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "88751b84-7cb7-4dd2-af35-c1e9b369d012",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
@@ -27,9 +84,9 @@
"id": "f99d4456",
"metadata": {},
"source": [
"## Install packages\n",
"### Installation\n",
"\n",
"Use azure-documents-search package 11.4 or later."
"This retriever lives in the `langchain-community` package. We will need some additional dependencies as well:"
]
},
{
@@ -39,9 +96,9 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain\n",
"%pip install --upgrade --quiet langchain-community\n",
"%pip install --upgrade --quiet langchain-openai\n",
"%pip install --upgrade --quiet azure-search-documents\n",
"%pip install --upgrade --quiet azure-search-documents>=11.4\n",
"%pip install --upgrade --quiet azure-identity"
]
},
@@ -50,7 +107,9 @@
"id": "0474661d",
"metadata": {},
"source": [
"## Import required libraries"
"## Instantiation\n",
"\n",
"For `AzureAISearchRetriever`, provide an `index_name`, `content_key`, and `top_k` set to the number of number of results you'd like to retrieve. Setting `top_k` to zero (the default) returns all results."
]
},
{
@@ -60,52 +119,8 @@
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from langchain_community.retrievers import AzureAISearchRetriever\n",
"\n",
"from langchain_community.retrievers import (\n",
" AzureAISearchRetriever,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "b7243e6d",
"metadata": {},
"source": [
"## Configure search settings\n",
"\n",
"Set the search service name, index name, and API key as environment variables (alternatively, you can pass them as arguments to `AzureAISearchRetriever`). The search index provides the searchable content. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "33fd23d1",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"AZURE_AI_SEARCH_SERVICE_NAME\"] = \"<YOUR_SEARCH_SERVICE_NAME>\"\n",
"os.environ[\"AZURE_AI_SEARCH_INDEX_NAME\"] = \"<YOUR_SEARCH_INDEX_NAME>\"\n",
"os.environ[\"AZURE_AI_SEARCH_API_KEY\"] = \"<YOUR_API_KEY>\""
]
},
{
"cell_type": "markdown",
"id": "057deaad",
"metadata": {},
"source": [
"## Create the retriever\n",
"\n",
"For `AzureAISearchRetriever`, provide an `index_name`, `content_key`, and `top_k` set to the number of number of results you'd like to retrieve. Setting `top_k` to zero (the default) returns all results."
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "c18d0c4c",
"metadata": {},
"outputs": [],
"source": [
"retriever = AzureAISearchRetriever(\n",
" content_key=\"content\", top_k=1, index_name=\"langchain-vector-demo\"\n",
")"
@@ -116,6 +131,8 @@
"id": "e94ea104",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"Now you can use it to retrieve documents from Azure AI Search. \n",
"This is the method you would call to do so. It will return all documents relevant to the query. "
]
@@ -259,6 +276,69 @@
"source": [
"retriever.invoke(\"does the president have a plan for covid-19?\")"
]
},
{
"cell_type": "markdown",
"id": "dd6c9ba9-978f-4e2c-9cc7-ccd1be58eafb",
"metadata": {},
"source": [
"## Use within a chain"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cbcd8ac6-12ea-4c22-8a98-c24825d598d7",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"Answer the question based only on the context provided.\n",
"\n",
"Context: {context}\n",
"\n",
"Question: {question}\"\"\"\n",
")\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "db80f3c7-83e1-4965-8ff2-a3dd66a07f0e",
"metadata": {},
"outputs": [],
"source": [
"chain.invoke(\"does the president have a plan for covid-19?\")"
]
},
{
"cell_type": "markdown",
"id": "a3d6140e-c2a0-40b2-a141-cab61ab39185",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `AzureAISearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.azure_ai_search.AzureAISearchRetriever.html)."
]
}
],
"metadata": {
@@ -277,7 +357,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -1,19 +1,86 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b0872249-1af5-4d54-b816-1babad7a8c9e",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Bedrock (Knowledge Bases)\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "b6636c27-35da-4ba7-8313-eca21660cab3",
"metadata": {},
"source": [
"# Bedrock (Knowledge Bases)\n",
"# Bedrock (Knowledge Bases) Retriever\n",
"\n",
"> [Knowledge bases for Amazon Bedrock](https://aws.amazon.com/bedrock/knowledge-bases/) is an Amazon Web Services (AWS) offering which lets you quickly build RAG applications by using your private data to customize FM response.\n",
"## Overview\n",
"\n",
"> Implementing `RAG` requires organizations to perform several cumbersome steps to convert data into embeddings (vectors), store the embeddings in a specialized vector database, and build custom integrations into the database to search and retrieve text relevant to the users query. This can be time-consuming and inefficient.\n",
"This guide will help you getting started with the AWS Knowledge Bases [retriever](/docs/concepts/#retrievers).\n",
"\n",
"> With `Knowledge Bases for Amazon Bedrock`, simply point to the location of your data in `Amazon S3`, and `Knowledge Bases for Amazon Bedrock` takes care of the entire ingestion workflow into your vector database. If you do not have an existing vector database, Amazon Bedrock creates an Amazon OpenSearch Serverless vector store for you. For retrievals, use the Langchain - Amazon Bedrock integration via the Retrieve API to retrieve relevant results for a user query from knowledge bases.\n",
"[Knowledge Bases for Amazon Bedrock](https://aws.amazon.com/bedrock/knowledge-bases/) is an Amazon Web Services (AWS) offering which lets you quickly build RAG applications by using your private data to customize FM response.\n",
"\n",
"> Knowledge base can be configured through [AWS Console](https://aws.amazon.com/console/) or by using [AWS SDKs](https://aws.amazon.com/developer/tools/)."
"Implementing `RAG` requires organizations to perform several cumbersome steps to convert data into embeddings (vectors), store the embeddings in a specialized vector database, and build custom integrations into the database to search and retrieve text relevant to the users query. This can be time-consuming and inefficient.\n",
"\n",
"With `Knowledge Bases for Amazon Bedrock`, simply point to the location of your data in `Amazon S3`, and `Knowledge Bases for Amazon Bedrock` takes care of the entire ingestion workflow into your vector database. If you do not have an existing vector database, Amazon Bedrock creates an Amazon OpenSearch Serverless vector store for you. For retrievals, use the Langchain - Amazon Bedrock integration via the Retrieve API to retrieve relevant results for a user query from knowledge bases.\n",
"\n",
"### Integration details\n",
"\n",
"| Retriever | Self-host | Cloud offering | Package |\n",
"| :--- | :--- | :---: | :---: |\n",
"[AmazonKnowledgeBasesRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_aws.retrievers.bedrock.AmazonKnowledgeBasesRetriever.html) | ❌ | ✅ | langchain_aws |\n"
]
},
{
"cell_type": "markdown",
"id": "cd092536-61bd-4b3f-9050-076daccc9e72",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"Knowledge Bases can be configured through [AWS Console](https://aws.amazon.com/console/) or by using [AWS SDKs](https://aws.amazon.com/developer/tools/). We will need the `knowledge_base_id` to instantiate the retriever."
]
},
{
"cell_type": "markdown",
"id": "238c0ceb-d4b6-409e-bed9-d30143d2f2c9",
"metadata": {},
"source": [
"If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e4426098-820c-48dc-9826-056a91bebe9e",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "4ede6277-ea56-45f6-8ef4-fe14734ee279",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"This retriever lives in the `langchain-aws` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4db1af24-0969-43bd-8438-af5e3024b0d0",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-aws"
]
},
{
@@ -21,17 +88,9 @@
"id": "b34c8cbe-c6e5-4398-adf1-4925204bcaed",
"metadata": {},
"source": [
"## Using the Knowledge Bases Retriever"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "26c97d36-911c-4fe0-a478-546192728f30",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet boto3"
"## Instantiation\n",
"\n",
"Now we can instantiate our retriever:"
]
},
{
@@ -41,7 +100,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.retrievers import AmazonKnowledgeBasesRetriever\n",
"from langchain_aws.retrievers import AmazonKnowledgeBasesRetriever\n",
"\n",
"retriever = AmazonKnowledgeBasesRetriever(\n",
" knowledge_base_id=\"PUIJP4EQUA\",\n",
@@ -49,6 +108,14 @@
")"
]
},
{
"cell_type": "markdown",
"id": "9dff39f8-b6ba-41bf-b95b-d345928ed07d",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -66,7 +133,7 @@
"id": "7de9b61b-597b-4aba-95fb-49d11e84510e",
"metadata": {},
"source": [
"### Using in a QA Chain"
"## Use within a chain"
]
},
{
@@ -78,7 +145,7 @@
"source": [
"from botocore.client import Config\n",
"from langchain.chains import RetrievalQA\n",
"from langchain_community.llms import Bedrock\n",
"from langchain_aws import Bedrock\n",
"\n",
"model_kwargs_claude = {\"temperature\": 0, \"top_k\": 10, \"max_tokens_to_sample\": 3000}\n",
"\n",
@@ -90,6 +157,16 @@
"\n",
"qa(query)"
]
},
{
"cell_type": "markdown",
"id": "22e2538a-e042-4997-bb81-b68ecb27d665",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `AmazonKnowledgeBasesRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_aws.retrievers.bedrock.AmazonKnowledgeBasesRetriever.html)."
]
}
],
"metadata": {
@@ -108,7 +185,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -34,8 +34,7 @@
},
"outputs": [],
"source": [
"from langchain_cohere import ChatCohere\n",
"from langchain_community.retrievers import CohereRagRetriever\n",
"from langchain_cohere import ChatCohere, CohereRagRetriever\n",
"from langchain_core.documents import Document"
]
},
@@ -200,7 +199,7 @@
"source": [
"docs = rag.invoke(\n",
" \"Does langchain support cohere RAG?\",\n",
" source_documents=[\n",
" documents=[\n",
" Document(page_content=\"Langchain supports cohere RAG!\"),\n",
" Document(page_content=\"The sky is blue!\"),\n",
" ],\n",
@@ -208,6 +207,14 @@
"_pretty_print(docs)"
]
},
{
"cell_type": "markdown",
"id": "45a9470f",
"metadata": {},
"source": [
"Please note that connectors and documents cannot be used simultaneously. If you choose to provide documents in the `invoke` method, they will take precedence, and connectors will not be utilized for that particular request, as shown in the snippet above!"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -2,14 +2,72 @@
"cells": [
{
"cell_type": "markdown",
"id": "ab66dd43",
"id": "41ccce84-f6d9-4ba0-8281-22cbf29f20d3",
"metadata": {},
"source": [
"# Elasticsearch\n",
"---\n",
"sidebar_label: Elasticsearch\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "54c4d916-05db-4e01-9893-c711904205b3",
"metadata": {},
"source": [
"# ElasticsearchRetriever\n",
"\n",
"## Overview\n",
">[Elasticsearch](https://www.elastic.co/elasticsearch/) is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. It supports keyword search, vector search, hybrid search and complex filtering.\n",
"\n",
"The `ElasticsearchRetriever` is a generic wrapper to enable flexible access to all `Elasticsearch` features through the [Query DSL](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html). For most use cases the other classes (`ElasticsearchStore`, `ElasticsearchEmbeddings`, etc.) should suffice, but if they don't you can use `ElasticsearchRetriever`."
"The `ElasticsearchRetriever` is a generic wrapper to enable flexible access to all `Elasticsearch` features through the [Query DSL](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html). For most use cases the other classes (`ElasticsearchStore`, `ElasticsearchEmbeddings`, etc.) should suffice, but if they don't you can use `ElasticsearchRetriever`.\n",
"\n",
"This guide will help you getting started with the Elasticsearch [retriever](/docs/concepts/#retrievers). For detailed documentation of all `ElasticsearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_elasticsearch.retrievers.ElasticsearchRetriever.html).\n",
"\n",
"### Integration details\n",
"\n",
"| Retriever | Self-host | Cloud offering | Package |\n",
"| :--- | :--- | :---: | :---: |\n",
"[ElasticsearchRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_elasticsearch.retrievers.ElasticsearchRetriever.html) | ✅ | ✅ | langchain_elasticsearch |\n",
"\n",
"\n",
"## Setup\n",
"\n",
"There are two main ways to set up an Elasticsearch instance:\n",
"\n",
"- Elastic Cloud: [Elastic Cloud](https://cloud.elastic.co/) is a managed Elasticsearch service. Sign up for a [free trial](https://www.elastic.co/cloud/cloud-trial-overview).\n",
"To connect to an Elasticsearch instance that does not require login credentials (starting the docker instance with security enabled), pass the Elasticsearch URL and index name along with the embedding object to the constructor.\n",
"\n",
"- Local Install Elasticsearch: Get started with Elasticsearch by running it locally. The easiest way is to use the official Elasticsearch Docker image. See the [Elasticsearch Docker documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html) for more information."
]
},
{
"cell_type": "markdown",
"id": "e13a7b58-3a56-4ce6-a4d5-81a8dd2080df",
"metadata": {},
"source": [
"If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "492b81d0-c85b-4693-ae4f-3f33da571ddd",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "78335745-f14d-411d-9c06-64ff83eb9358",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"This retriever lives in the `langchain-elasticsearch` package. For demonstration purposes, we will also install `langchain-community` to generate text [embeddings](/docs/concepts/#embedding-models)."
]
},
{
@@ -21,7 +79,7 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet elasticsearch langchain-elasticsearch"
"%pip install -qU langchain-community langchain-elasticsearch"
]
},
{
@@ -48,7 +106,7 @@
"id": "24c0d140",
"metadata": {},
"source": [
"## Configure\n",
"### Configure\n",
"\n",
"Here we define the conncection to Elasticsearch. In this example we use a locally running instance. Alternatively, you can make an account in [Elastic Cloud](https://cloud.elastic.co/) and start a [free trial](https://www.elastic.co/cloud/cloud-trial-overview)."
]
@@ -70,7 +128,7 @@
"id": "60aa7c20",
"metadata": {},
"source": [
"For vector search, we are going to use random embeddings just for illustration. For real use cases, pick one of the available LangChain `Embeddings` classes."
"For vector search, we are going to use random embeddings just for illustration. For real use cases, pick one of the available LangChain [Embeddings](/docs/integrations/text_embedding) classes."
]
},
{
@@ -88,7 +146,7 @@
"id": "b4eea654",
"metadata": {},
"source": [
"## Define example data"
"#### Define example data"
]
},
{
@@ -118,7 +176,7 @@
"id": "1c518c42",
"metadata": {},
"source": [
"## Index data\n",
"#### Index data\n",
"\n",
"Typically, users make use of `ElasticsearchRetriever` when they already have data in an Elasticsearch index. Here we index some example text documents. If you created an index for example using `ElasticsearchStore.from_documents` that's also fine."
]
@@ -209,14 +267,8 @@
"id": "08437fa2",
"metadata": {},
"source": [
"## Usage examples"
]
},
{
"cell_type": "markdown",
"id": "469aa295",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"### Vector search\n",
"\n",
"Dense vector retrival using fake embeddings in this example."
@@ -543,6 +595,91 @@
"\n",
"custom_mapped_retriever.invoke(\"foo\")"
]
},
{
"cell_type": "markdown",
"id": "1663feff-4527-4fb0-9395-b28af5c9ec99",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"Following the above examples, we use `.invoke` to issue a single query. Because retrievers are Runnables, we can use any method in the [Runnable interface](/docs/concepts/#runnable-interface), such as `.batch`, as well."
]
},
{
"cell_type": "markdown",
"id": "f4f946ed-ff3a-43d7-9e0d-7983ff13c868",
"metadata": {},
"source": [
"## Use within a chain\n",
"\n",
"We can also incorporate retrievers into [chains](/docs/how_to/sequence/) to build larger applications, such as a simple [RAG](/docs/tutorials/rag/) application. For demonstration purposes, we instantiate an OpenAI chat model as well."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "19302ef1-dd49-4f9c-8d87-4ea23b8296e2",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "832857a7-3b16-4a85-acc7-28efe6ebdae8",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"Answer the question based only on the context provided.\n",
"\n",
"Context: {context}\n",
"\n",
"Question: {question}\"\"\"\n",
")\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"chain = (\n",
" {\"context\": vector_retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7317942b-7c9a-477d-ba11-3421da804a22",
"metadata": {},
"outputs": [],
"source": [
"chain.invoke(\"what is foo?\")"
]
},
{
"cell_type": "markdown",
"id": "eeb49714-ba5a-4b10-8e58-67d061a486d1",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `ElasticsearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_elasticsearch.retrievers.ElasticsearchRetriever.html)."
]
}
],
"metadata": {
@@ -561,7 +698,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -1,27 +1,44 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Google Vertex AI Search\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Vertex AI Search\n",
"\n",
"## Overview\n",
"\n",
">[Google Vertex AI Search](https://cloud.google.com/enterprise-search) (formerly known as `Enterprise Search` on `Generative AI App Builder`) is a part of the [Vertex AI](https://cloud.google.com/vertex-ai) machine learning platform offered by `Google Cloud`.\n",
">\n",
">`Vertex AI Search` lets organizations quickly build generative AI-powered search engines for customers and employees. It's underpinned by a variety of `Google Search` technologies, including semantic search, which helps deliver more relevant results than traditional keyword-based search techniques by using natural language processing and machine learning techniques to infer relationships within the content and intent from the users query input. Vertex AI Search also benefits from Googles expertise in understanding how users search and factors in content relevance to order displayed results.\n",
"\n",
">`Vertex AI Search` is available in the `Google Cloud Console` and via an API for enterprise workflow integration.\n",
"\n",
"This notebook demonstrates how to configure `Vertex AI Search` and use the Vertex AI Search retriever. The Vertex AI Search retriever encapsulates the [Python client library](https://cloud.google.com/generative-ai-app-builder/docs/libraries#client-libraries-install-python) and uses it to access the [Search Service API](https://cloud.google.com/python/docs/reference/discoveryengine/latest/google.cloud.discoveryengine_v1beta.services.search_service).\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install pre-requisites\n",
"This notebook demonstrates how to configure `Vertex AI Search` and use the Vertex AI Search [retriever](/docs/concepts/#retrievers). The Vertex AI Search retriever encapsulates the [Python client library](https://cloud.google.com/generative-ai-app-builder/docs/libraries#client-libraries-install-python) and uses it to access the [Search Service API](https://cloud.google.com/python/docs/reference/discoveryengine/latest/google.cloud.discoveryengine_v1beta.services.search_service).\n",
"\n",
"You need to install the `google-cloud-discoveryengine` package to use the Vertex AI Search retriever.\n"
"For detailed documentation of all `VertexAISearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/vertex_ai_search/langchain_google_community.vertex_ai_search.VertexAISearchRetriever.html).\n",
"\n",
"### Integration details\n",
"\n",
"| Retriever | Self-host | Cloud offering | Package |\n",
"| :--- | :--- | :---: | :---: |\n",
"[VertexAISearchRetriever](https://api.python.langchain.com/en/latest/vertex_ai_search/langchain_google_community.vertex_ai_search.VertexAISearchRetriever.html) | ❌ | ✅ | langchain_google_community |\n",
"\n",
"\n",
"## Setup\n",
"\n",
"### Installation\n",
"\n",
"You need to install the `langchain-google-community` and `google-cloud-discoveryengine` packages to use the Vertex AI Search retriever."
]
},
{
@@ -30,14 +47,14 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-cloud-discoveryengine"
"%pip install -qU langchain-google-community google-cloud-discoveryengine"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure access to Google Cloud and Vertex AI Search\n",
"### Configure access to Google Cloud and Vertex AI Search\n",
"\n",
"Vertex AI Search is generally available without allowlist as of August 2023.\n",
"\n",
@@ -48,7 +65,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a search engine and populate an unstructured data store\n",
"#### Create a search engine and populate an unstructured data store\n",
"\n",
"- Follow the instructions in the [Vertex AI Search Getting Started guide](https://cloud.google.com/generative-ai-app-builder/docs/try-enterprise-search) to set up a Google Cloud project and Vertex AI Search.\n",
"- [Use the Google Cloud Console to create an unstructured data store](https://cloud.google.com/generative-ai-app-builder/docs/create-engine-es#unstructured-data)\n",
@@ -60,7 +77,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set credentials to access Vertex AI Search API\n",
"#### Set credentials to access Vertex AI Search API\n",
"\n",
"The [Vertex AI Search client libraries](https://cloud.google.com/generative-ai-app-builder/docs/libraries) used by the Vertex AI Search retriever provide high-level language support for authenticating to Google Cloud programmatically.\n",
"Client libraries support [Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/application-default-credentials); the libraries look for credentials in a set of defined locations and use those credentials to authenticate requests to the API.\n",
@@ -87,16 +104,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure and use the Vertex AI Search retriever\n",
"### Configure and use the Vertex AI Search retriever\n",
"\n",
"The Vertex AI Search retriever is implemented in the `langchain.retriever.GoogleVertexAISearchRetriever` class. The `get_relevant_documents` method returns a list of `langchain.schema.Document` documents where the `page_content` field of each document is populated the document content.\n",
"The Vertex AI Search retriever is implemented in the `langchain_google_community.VertexAISearchRetriever` class. The `get_relevant_documents` method returns a list of `langchain.schema.Document` documents where the `page_content` field of each document is populated the document content.\n",
"Depending on the data type used in Vertex AI Search (website, structured or unstructured) the `page_content` field is populated as follows:\n",
"\n",
"- Website with advanced indexing: an `extractive answer` that matches a query. The `metadata` field is populated with metadata (if any) of the document from which the segments or answers were extracted.\n",
"- Unstructured data source: either an `extractive segment` or an `extractive answer` that matches a query. The `metadata` field is populated with metadata (if any) of the document from which the segments or answers were extracted.\n",
"- Structured data source: a string json containing all the fields returned from the structured data source. The `metadata` field is populated with metadata (if any) of the document\n",
"\n",
"### Extractive answers & extractive segments\n",
"#### Extractive answers & extractive segments\n",
"\n",
"An extractive answer is verbatim text that is returned with each search result. It is extracted directly from the original document. Extractive answers are typically displayed near the top of web pages to provide an end user with a brief answer that is contextually relevant to their query. Extractive answers are available for website and unstructured search.\n",
"\n",
@@ -108,7 +125,7 @@
"\n",
"When creating an instance of the retriever you can specify a number of parameters that control which data store to access and how a natural language query is processed, including configurations for extractive answers and segments.\n",
"\n",
"### The mandatory parameters are:\n",
"#### The mandatory parameters are:\n",
"\n",
"- `project_id` - Your Google Cloud Project ID.\n",
"- `location_id` - The location of the data store.\n",
@@ -148,15 +165,15 @@
"\n",
"To update to the new retriever, make the following changes:\n",
"\n",
"- Change the import from: `from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever` -> `from langchain.retrievers import GoogleVertexAISearchRetriever`.\n",
"- Change all class references from `GoogleCloudEnterpriseSearchRetriever` -> `GoogleVertexAISearchRetriever`.\n"
"- Change the import from: `from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever` -> `from langchain_google_community import VertexAISearchRetriever`.\n",
"- Change all class references from `GoogleCloudEnterpriseSearchRetriever` -> `VertexAISearchRetriever`.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure and use the retriever for **unstructured** data with extractive segments\n"
"Note: When using the retriever, if you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
@@ -165,9 +182,28 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.retrievers import (\n",
" GoogleVertexAIMultiTurnSearchRetriever,\n",
" GoogleVertexAISearchRetriever,\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"### Configure and use the retriever for **unstructured** data with extractive segments"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_community import (\n",
" VertexAIMultiTurnSearchRetriever,\n",
" VertexAISearchRetriever,\n",
")\n",
"\n",
"PROJECT_ID = \"<YOUR PROJECT ID>\" # Set to your Project ID\n",
@@ -182,7 +218,7 @@
"metadata": {},
"outputs": [],
"source": [
"retriever = GoogleVertexAISearchRetriever(\n",
"retriever = VertexAISearchRetriever(\n",
" project_id=PROJECT_ID,\n",
" location_id=LOCATION_ID,\n",
" data_store_id=DATA_STORE_ID,\n",
@@ -216,7 +252,7 @@
"metadata": {},
"outputs": [],
"source": [
"retriever = GoogleVertexAISearchRetriever(\n",
"retriever = VertexAISearchRetriever(\n",
" project_id=PROJECT_ID,\n",
" location_id=LOCATION_ID,\n",
" data_store_id=DATA_STORE_ID,\n",
@@ -243,7 +279,7 @@
"metadata": {},
"outputs": [],
"source": [
"retriever = GoogleVertexAISearchRetriever(\n",
"retriever = VertexAISearchRetriever(\n",
" project_id=PROJECT_ID,\n",
" location_id=LOCATION_ID,\n",
" data_store_id=DATA_STORE_ID,\n",
@@ -269,7 +305,7 @@
"metadata": {},
"outputs": [],
"source": [
"retriever = GoogleVertexAISearchRetriever(\n",
"retriever = VertexAISearchRetriever(\n",
" project_id=PROJECT_ID,\n",
" location_id=LOCATION_ID,\n",
" data_store_id=DATA_STORE_ID,\n",
@@ -297,7 +333,7 @@
"metadata": {},
"outputs": [],
"source": [
"retriever = GoogleVertexAISearchRetriever(\n",
"retriever = VertexAISearchRetriever(\n",
" project_id=PROJECT_ID,\n",
" location_id=LOCATION_ID,\n",
" search_engine_id=SEARCH_ENGINE_ID,\n",
@@ -325,7 +361,7 @@
"metadata": {},
"outputs": [],
"source": [
"retriever = GoogleVertexAIMultiTurnSearchRetriever(\n",
"retriever = VertexAIMultiTurnSearchRetriever(\n",
" project_id=PROJECT_ID, location_id=LOCATION_ID, data_store_id=DATA_STORE_ID\n",
")\n",
"\n",
@@ -333,6 +369,85 @@
"for doc in result:\n",
" print(doc)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"Following the above examples, we use `.invoke` to issue a single query. Because retrievers are Runnables, we can use any method in the [Runnable interface](/docs/concepts/#runnable-interface), such as `.batch`, as well."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use within a chain\n",
"\n",
"We can also incorporate retrievers into [chains](/docs/how_to/sequence/) to build larger applications, such as a simple [RAG](/docs/tutorials/rag/) application. For demonstration purposes, we instantiate a VertexAI chat model as well. See the corresponding Vertex [integration docs](/docs/integrations/chat/google_vertex_ai_palm/) for setup instructions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-google-vertexai"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_google_vertexai import ChatVertexAI\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"Answer the question based only on the context provided.\n",
"\n",
"Context: {context}\n",
"\n",
"Question: {question}\"\"\"\n",
")\n",
"\n",
"llm = ChatVertexAI(model_name=\"chat-bison\", temperature=0)\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chain.invoke(query)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `VertexAISearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/vertex_ai_search/langchain_google_community.vertex_ai_search.VertexAISearchRetriever.html)."
]
}
],
"metadata": {
@@ -351,7 +466,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,41 @@
---
sidebar_position: 0
sidebar_class_name: hidden
---
# Retrievers
A [retriever](/docs/concepts/#retrievers) is an interface that returns documents given an unstructured query.
It is more general than a vector store.
A retriever does not need to be able to store documents, only to return (or retrieve) them.
Retrievers can be created from vector stores, but are also broad enough to include [Wikipedia search](/docs/integrations/retrievers/wikipedia/) and [Amazon Kendra](/docs/integrations/retrievers/amazon_kendra_retriever/).
Retrievers accept a string query as input and return a list of [Documents](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) as output.
For specifics on how to use retrievers, see the [relevant how-to guides here](/docs/how_to/#retrievers).
Note that all [vector stores](/docs/concepts/#vector-stores) can be [cast to retrievers](/docs/how_to/vectorstore_retriever/).
Refer to the vector store [integration docs](/docs/integrations/vectorstores/) for available vector stores.
This page lists custom retrievers, implemented via subclassing [BaseRetriever](/docs/how_to/custom_retriever/).
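As a quick illustration of casting a vector store to a retriever, here is a minimal sketch using the in-memory vector store from `langchain_core`; the example text and fake embeddings are for illustration only.

```python
from langchain_core.embeddings import FakeEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore

# Illustrative only: fake embeddings and an in-memory store stand in for a real setup.
vectorstore = InMemoryVectorStore.from_texts(
    ["LangChain retrievers return documents for a query."],
    embedding=FakeEmbeddings(size=256),
)

# Any vector store can be cast to a retriever and used like the integrations below.
retriever = vectorstore.as_retriever(search_kwargs={"k": 1})
retriever.invoke("what do retrievers return?")
```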
## Bring-your-own documents
The below retrievers allow you to index and search a custom corpus of documents.
| Retriever | Self-host | Cloud offering | Package |
|-----------|-----------|----------------|---------|
| [AmazonKnowledgeBasesRetriever](/docs/integrations/retrievers/bedrock) | ❌ | ✅ | [langchain_aws](https://api.python.langchain.com/en/latest/retrievers/langchain_aws.retrievers.bedrock.AmazonKnowledgeBasesRetriever.html) |
| [AzureAISearchRetriever](/docs/integrations/retrievers/azure_ai_search) | ❌ | ✅ | [langchain_community](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.azure_ai_search.AzureAISearchRetriever.html) |
| [ElasticsearchRetriever](/docs/integrations/retrievers/elasticsearch_retriever) | ✅ | ✅ | [langchain_elasticsearch](https://api.python.langchain.com/en/latest/retrievers/langchain_elasticsearch.retrievers.ElasticsearchRetriever.html) |
| [MilvusCollectionHybridSearchRetriever](/docs/integrations/retrievers/milvus_hybrid_search) | ✅ | ❌ | [langchain_milvus](https://api.python.langchain.com/en/latest/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html) |
| [VertexAISearchRetriever](/docs/integrations/retrievers/google_vertex_ai_search) | ❌ | ✅ | [langchain_google_community](https://api.python.langchain.com/en/latest/vertex_ai_search/langchain_google_community.vertex_ai_search.VertexAISearchRetriever.html) |
## External index
The below retrievers will search over an external index (e.g., constructed from Internet data or similar).
| Retriever | Source | Package |
|-----------|--------|---------|
| [ArxivRetriever](/docs/integrations/retrievers/arxiv) | Scholarly articles on [arxiv.org](https://arxiv.org/) | [langchain_community](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.arxiv.ArxivRetriever.html) |
| [TavilySearchAPIRetriever](/docs/integrations/retrievers/tavily) | Internet search | [langchain_community](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.tavily_search_api.TavilySearchAPIRetriever.html) |
| [WikipediaRetriever](/docs/integrations/retrievers/wikipedia) | [Wikipedia](https://www.wikipedia.org/) articles | [langchain_community](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.wikipedia.WikipediaRetriever.html) |

View File

@@ -2,21 +2,48 @@
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"metadata": {},
"source": [
"# Milvus Hybrid Search\n",
"---\n",
"sidebar_label: Milvus Hybrid Search\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Milvus Hybrid Search Retriever\n",
"\n",
"## Overview\n",
"\n",
"> [Milvus](https://milvus.io/docs) is an open-source vector database built to power embedding similarity search and AI applications. Milvus makes unstructured data search more accessible, and provides a consistent user experience regardless of the deployment environment.\n",
"\n",
"This notebook goes over how to use the Milvus Hybrid Search retriever, which combines the strengths of both dense and sparse vector search.\n",
"This will help you getting started with the Milvus Hybrid Search [retriever](/docs/concepts/#retrievers), which combines the strengths of both dense and sparse vector search. For detailed documentation of all `MilvusCollectionHybridSearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html).\n",
"\n",
"For more reference please go to [Milvus Multi-Vector Search](https://milvus.io/docs/multi-vector-search.md)\n",
"\n"
"See also the Milvus Multi-Vector Search [docs](https://milvus.io/docs/multi-vector-search.md).\n",
"\n",
"### Integration details\n",
"\n",
"| Retriever | Self-host | Cloud offering | Package |\n",
"| :--- | :--- | :---: | :---: |\n",
"[MilvusCollectionHybridSearchRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html) | ✅ | ❌ | langchain_milvus |\n",
"\n",
"\n",
"\n",
"## Setup\n",
"\n",
"If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
@@ -28,9 +55,9 @@
}
},
"source": [
"## Prerequisites\n",
"### Install dependencies\n",
"You need to prepare to install the following dependencies\n"
"### Installation\n",
"\n",
"This retriever lives in the `langchain-milvus` package. This guide requires the following dependencies:"
]
},
{
@@ -50,32 +77,18 @@
"%pip install --upgrade --quiet pymilvus[model] langchain-milvus langchain-openai"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"Import necessary modules and classes"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_milvus.retrievers import MilvusCollectionHybridSearchRetriever\n",
"from langchain_milvus.utils.sparse import BM25SparseEmbedding\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"from pymilvus import (\n",
" Collection,\n",
" CollectionSchema,\n",
@@ -86,34 +99,15 @@
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_milvus.retrievers import MilvusCollectionHybridSearchRetriever\n",
"from langchain_milvus.utils.sparse import BM25SparseEmbedding\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"metadata": {},
"source": [
"### Start the Milvus service\n",
"\n",
"Please refer to the [Milvus documentation](https://milvus.io/docs/install_standalone-docker.md) to start the Milvus service.\n",
"\n",
"After starting milvus, you need to specify your milvus connection URI.\n"
"After starting milvus, you need to specify your milvus connection URI."
]
},
{
@@ -155,11 +149,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Prepare data and Load\n",
"### Prepare dense and sparse embedding functions\n",
"\n",
" Let us fictionalize 10 fake descriptions of novels. In actual production, it may be a large amount of text data."
"Let us fictionalize 10 fake descriptions of novels. In actual production, it may be a large amount of text data."
]
},
{
@@ -379,15 +371,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build RAG chain with Retriever\n",
"### Create the Retriever\n",
"## Instantiation\n",
"\n",
"Define search parameters for sparse and dense fields, and create a retriever"
"Now we can instantiate our retriever, defining search parameters for sparse and dense fields:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -416,6 +407,13 @@
"In the input parameters of this Retriever, we use a dense embedding and a sparse embedding to perform hybrid search on the two fields of this Collection, and use WeightedRanker for reranking. Finally, 3 top-K Documents will be returned."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": 14,
@@ -442,7 +440,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Build the RAG chain\n",
"## Use within a chain\n",
"\n",
"Initialize ChatOpenAI and define a prompt template"
]
@@ -610,6 +608,15 @@
"source": [
"collection.drop()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `MilvusCollectionHybridSearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html)."
]
}
],
"metadata": {
@@ -628,9 +635,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
}
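For readers who want to see how the pieces named in the diff above fit together, here is a heavily hedged sketch of instantiating the hybrid retriever with `WeightedRanker` reranking. The collection name, field names, search parameters, and example corpus are placeholders; it assumes a running Milvus instance with a matching collection already created and loaded, and the constructor arguments should be checked against the API reference linked above.

```python
from langchain_milvus.retrievers import MilvusCollectionHybridSearchRetriever
from langchain_milvus.utils.sparse import BM25SparseEmbedding
from langchain_openai import OpenAIEmbeddings
from pymilvus import Collection, WeightedRanker

# Placeholder corpus used to fit the BM25 sparse embedding.
corpus = ["A tale of two vectors.", "A novel about hybrid search over book descriptions."]
dense_embedding_func = OpenAIEmbeddings()
sparse_embedding_func = BM25SparseEmbedding(corpus=corpus)

# Assumes a collection with these (placeholder) field names exists and is loaded.
collection = Collection("hybrid_demo_collection")
dense_field, sparse_field, text_field = "dense_vector", "sparse_vector", "text"

retriever = MilvusCollectionHybridSearchRetriever(
    collection=collection,
    rerank=WeightedRanker(0.5, 0.5),  # weighted reranking across both fields
    anns_fields=[dense_field, sparse_field],
    field_embeddings=[dense_embedding_func, sparse_embedding_func],
    field_search_params=[
        {"metric_type": "IP", "params": {}},  # illustrative dense search params
        {"metric_type": "IP", "params": {}},  # illustrative sparse search params
    ],
    top_k=3,
    text_field=text_field,
)

retriever.invoke("What novels are about artificial intelligence?")
```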

View File

@@ -0,0 +1,135 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "661d5123-8ed2-4504-a846-7df0984e79f9",
"metadata": {},
"source": [
"# NanoPQ (Product Quantization)\n",
"\n",
">[Product Quantization algorithm (k-NN)](https://towardsdatascience.com/similarity-search-product-quantization-b2a1a6397701) in brief is a quantization algorithm that helps in compression of database vectors which helps in semantic search when large datasets are involved. In a nutshell, the embedding is split into M subspaces which further goes through clustering. Upon clustering the vectors the centroid vector gets mapped to the vectors present in the each of the clusters of the subspace. \n",
"\n",
"This notebook goes over how to use a retriever that under the hood uses a Product Quantization which has been implemented by the [nanopq](https://github.com/matsui528/nanopq) package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "68794637-c13b-4145-944f-3b0c2f1258f9",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community langchain-openai nanopq"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "39ecbf50-4623-4ee6-9c8e-fea5da21767e",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings.spacy_embeddings import SpacyEmbeddings\n",
"from langchain_community.retrievers import NanoPQRetriever"
]
},
{
"cell_type": "markdown",
"id": "c1ce742a-5085-408a-a2c2-4bae0f605880",
"metadata": {},
"source": [
"## Create New Retriever with Texts"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "6c80020e-bc9e-49e8-8f93-5f75fd823738",
"metadata": {},
"outputs": [],
"source": [
"retriever = NanoPQRetriever.from_texts(\n",
" [\"Great world\", \"great words\", \"world\", \"planets of the world\"],\n",
" SpacyEmbeddings(model_name=\"en_core_web_sm\"),\n",
" clusters=2,\n",
" subspace=2,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "743c26c1-0072-4e46-b41b-c28b3f1737c8",
"metadata": {},
"source": [
"## Use Retriever\n",
"\n",
"We can now use the retriever!"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f496de2d-9b8f-4f8b-a30f-279ef199259a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"M: 2, Ks: 2, metric : <class 'numpy.uint8'>, code_dtype: l2\n",
"iter: 20, seed: 123\n",
"Training the subspace: 0 / 2\n",
"Training the subspace: 1 / 2\n",
"Encoding the subspace: 0 / 2\n",
"Encoding the subspace: 1 / 2\n"
]
},
{
"data": {
"text/plain": [
"[Document(page_content='world'),\n",
" Document(page_content='Great world'),\n",
" Document(page_content='great words'),\n",
" Document(page_content='planets of the world')]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.invoke(\"earth\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "617202a7-e3a6-49a8-b807-4b4d771159d5",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
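For readers curious about what nanopq does underneath `NanoPQRetriever`, here is a minimal sketch of product quantization using the nanopq package directly. The random data is illustrative only; the `M`/`Ks` values mirror the `subspace=2`, `clusters=2` arguments used above.

```python
import nanopq
import numpy as np

# Illustrative data: 100 database vectors of dimension 96 plus one query vector.
rng = np.random.default_rng(123)
database = rng.random((100, 96), dtype=np.float32)
query = rng.random(96, dtype=np.float32)

# M subspaces, Ks centroids per subspace (mirrors subspace=2, clusters=2 above).
pq = nanopq.PQ(M=2, Ks=2)
pq.fit(database)             # cluster each subspace independently
codes = pq.encode(database)  # compress each vector to M small codes

# Asymmetric distance: compare the raw query against the compressed codes.
distances = pq.dtable(query).adist(codes)
print(distances.argsort()[:4])  # indices of the four closest database vectors
```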

View File

@@ -0,0 +1,246 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# SAP HANA Cloud Vector Engine\n",
"\n",
"For more information on how to setup the SAP HANA vetor store, take a look at the [documentation](/docs/integrations/vectorstores/sap_hanavector.ipynb).\n",
"\n",
"We use the same setup here:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# Use OPENAI_API_KEY env variable\n",
"# os.environ[\"OPENAI_API_KEY\"] = \"Your OpenAI API key\"\n",
"from hdbcli import dbapi\n",
"\n",
"# Use connection settings from the environment\n",
"connection = dbapi.connect(\n",
" address=os.environ.get(\"HANA_DB_ADDRESS\"),\n",
" port=os.environ.get(\"HANA_DB_PORT\"),\n",
" user=os.environ.get(\"HANA_DB_USER\"),\n",
" password=os.environ.get(\"HANA_DB_PASSWORD\"),\n",
" autocommit=True,\n",
" sslValidateCertificate=False,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To be able to self query with good performance we create additional metadata fields\n",
"for our vectorstore table in HANA:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create custom table with attribute\n",
"cur = connection.cursor()\n",
"cur.execute(\"DROP TABLE LANGCHAIN_DEMO_SELF_QUERY\", ignoreErrors=True)\n",
"cur.execute(\n",
" (\n",
" \"\"\"CREATE TABLE \"LANGCHAIN_DEMO_SELF_QUERY\" (\n",
" \"name\" NVARCHAR(100), \"is_active\" BOOLEAN, \"id\" INTEGER, \"height\" DOUBLE,\n",
" \"VEC_TEXT\" NCLOB, \n",
" \"VEC_META\" NCLOB, \n",
" \"VEC_VECTOR\" REAL_VECTOR\n",
" )\"\"\"\n",
" )\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's add some documents."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores.hanavector import HanaDB\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"\n",
"# Prepare some test documents\n",
"docs = [\n",
" Document(\n",
" page_content=\"First\",\n",
" metadata={\"name\": \"adam\", \"is_active\": True, \"id\": 1, \"height\": 10.0},\n",
" ),\n",
" Document(\n",
" page_content=\"Second\",\n",
" metadata={\"name\": \"bob\", \"is_active\": False, \"id\": 2, \"height\": 5.7},\n",
" ),\n",
" Document(\n",
" page_content=\"Third\",\n",
" metadata={\"name\": \"jane\", \"is_active\": True, \"id\": 3, \"height\": 2.4},\n",
" ),\n",
"]\n",
"\n",
"db = HanaDB(\n",
" connection=connection,\n",
" embedding=embeddings,\n",
" table_name=\"LANGCHAIN_DEMO_SELF_QUERY\",\n",
" specific_metadata_columns=[\"name\", \"is_active\", \"id\", \"height\"],\n",
")\n",
"\n",
"# Delete already existing documents from the table\n",
"db.delete(filter={})\n",
"db.add_documents(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Self querying\n",
"\n",
"Now for the main act: here is how to construct a SelfQueryRetriever for HANA vectorstore:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.query_constructor.base import AttributeInfo\n",
"from langchain.retrievers.self_query.base import SelfQueryRetriever\n",
"from langchain_community.query_constructors.hanavector import HanaTranslator\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\")\n",
"\n",
"metadata_field_info = [\n",
" AttributeInfo(\n",
" name=\"name\",\n",
" description=\"The name of the person\",\n",
" type=\"string\",\n",
" ),\n",
" AttributeInfo(\n",
" name=\"is_active\",\n",
" description=\"Whether the person is active\",\n",
" type=\"boolean\",\n",
" ),\n",
" AttributeInfo(\n",
" name=\"id\",\n",
" description=\"The ID of the person\",\n",
" type=\"integer\",\n",
" ),\n",
" AttributeInfo(\n",
" name=\"height\",\n",
" description=\"The height of the person\",\n",
" type=\"float\",\n",
" ),\n",
"]\n",
"\n",
"document_content_description = \"A collection of persons\"\n",
"\n",
"hana_translator = HanaTranslator()\n",
"\n",
"retriever = SelfQueryRetriever.from_llm(\n",
" llm,\n",
" db,\n",
" document_content_description,\n",
" metadata_field_info,\n",
" structured_query_translator=hana_translator,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's use this retriever to prepare a (self) query for a person:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"query_prompt = \"Which person is not active?\"\n",
"\n",
"docs = retriever.invoke(input=query_prompt)\n",
"for doc in docs:\n",
" print(\"-\" * 80)\n",
" print(doc.page_content, \" \", doc.metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also take a look at how the query is being constructed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.query_constructor.base import (\n",
" StructuredQueryOutputParser,\n",
" get_query_constructor_prompt,\n",
")\n",
"\n",
"prompt = get_query_constructor_prompt(\n",
" document_content_description,\n",
" metadata_field_info,\n",
")\n",
"output_parser = StructuredQueryOutputParser.from_components()\n",
"query_constructor = prompt | llm | output_parser\n",
"\n",
"sq = query_constructor.invoke(input=query_prompt)\n",
"\n",
"print(\"Structured query: \", sq)\n",
"\n",
"print(\"Translated for hana vector store: \", hana_translator.visit_structured_query(sq))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.14"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -4,20 +4,70 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tavily Search API\n",
"---\n",
"sidebar_label: TavilySearchAPI\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# TavilySearchAPIRetriever\n",
"\n",
"## Overview\n",
">[Tavily's Search API](https://tavily.com) is a search engine built specifically for AI agents (LLMs), delivering real-time, accurate, and factual results at speed.\n",
"\n",
"We can use this as a [retriever](/docs/how_to#retrievers). It will show functionality specific to this integration. After going through, it may be useful to explore [relevant use-case pages](/docs/how_to#qa-with-rag) to learn how to use this vectorstore as part of a larger chain.\n",
"\n",
"## Setup\n",
"### Integration details\n",
"\n",
"The integration lives in the `langchain-community` package. We also need to install the `tavily-python` package itself.\n",
"| Retriever | Source | Package |\n",
"| :--- | :--- | :---: |\n",
"[TavilySearchAPIRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.tavily_search_api.TavilySearchAPIRetriever.html) | Internet search | langchain_community |\n",
"\n",
"```bash\n",
"pip install -U langchain-community tavily-python\n",
"```\n",
"## Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The integration lives in the `langchain-community` package. We also need to install the `tavily-python` package itself."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community tavily-python"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We also need to set our Tavily API key."
]
},
@@ -37,17 +87,20 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability"
"## Instantiation\n",
"\n",
"Now we can instantiate our retriever:"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
"from langchain_community.retrievers import TavilySearchAPIRetriever\n",
"\n",
"retriever = TavilySearchAPIRetriever(k=3)"
]
},
{
@@ -59,42 +112,40 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Trending topics\\nTrending topics\\nThe Legend of Zelda™: Breath of the Wild\\nSelect a product\\nThe Legend of Zelda™: Breath of the Wild\\nThe Legend of Zelda: Breath of the Wild\\nThe Legend of Zelda™: Breath of the Wild and The Legend of Zelda™: Breath of the Wild Expansion Pass Bundle\\nThis item will be sent to your system automatically after purchase or Nintendo Switch Game Voucher redemption. The Legend of Zelda: Breath of the Wild Expansion Pass\\nMore like this\\nSuper Mario Odyssey™\\nThe Legend of Zelda™: Tears of the Kingdom\\nMario + Rabbids® Kingdom Battle\\nThe Legend of Zelda™: Links Awakening\\nHollow Knight\\nThe Legend of Zelda: Skyward Sword HD\\nStarlink: Battle for Atlas™ Digital Edition\\nDRAGON QUEST BUILDERS™ 2\\nDragon Quest Builders™\\nWARNING: If you have epilepsy or have had seizures or other unusual reactions to flashing lights or patterns, consult a doctor before playing video games. Saddle up with a herd of horse-filled games!\\nESRB rating\\nSupported play modes\\nTV\\nTabletop\\nHandheld\\nProduct information\\nRelease date\\nNo. of players\\nGenre\\nPublisher\\nESRB rating\\nSupported play modes\\nGame file size\\nSupported languages\\nPlay online, access classic NES™ and Super NES™ games, and more with a Nintendo Switch Online membership.\\n Two Game Boy games are now available for Nintendo Switch Online members\\n02/01/23\\nNintendo Switch Online member exclusive: Save on two digital games\\n09/13/22\\nOut of the Shadows … the Legend of Zelda: About Nintendo\\nShop\\nMy Nintendo Store orders\\nSupport\\nParents\\nCommunity\\nPrivacy\\n© Nintendo.', metadata={'title': 'The Legend of Zelda™: Breath of the Wild - Nintendo', 'source': 'https://www.nintendo.com/us/store/products/the-legend-of-zelda-breath-of-the-wild-switch/', 'score': 0.97451, 'images': None}),\n",
" Document(page_content='The Legend of Zelda: Breath of the Wild is a masterpiece of open-world design and exploration, released on March 3, 2017 for Nintendo Switch. Find out the latest news, reviews, guides, videos, and more for this award-winning game on IGN.', metadata={'title': 'The Legend of Zelda: Breath of the Wild - IGN', 'source': 'https://www.ign.com/games/the-legend-of-zelda-breath-of-the-wild', 'score': 0.94496, 'images': None}),\n",
" Document(page_content='Reviewers also commented on the unexpected permutations of interactions between Link, villagers, pets, and enemies,[129][130][131] many of which were shared widely on social media.[132] A tribute to former Nintendo president Satoru Iwata, who died during development, also attracted praise.[129][134]\\nJim Sterling was more critical than most, giving Breath of the Wild a 7/10 score, criticizing the difficulty, weapon durability, and level design, but praising the open world and variety of content.[135] Other criticism focused on the unstable frame rate and the low resolution of 900p;[136] updates addressed some of these problems.[137][138]\\nSales\\nBreath of the Wild broke sales records for a Nintendo launch game in multiple regions.[139][140] In Japan, the Switch and Wii U versions sold a combined 230,000 copies in the first week of release, with the Switch version becoming the top-selling game released that week.[141] Nintendo reported that Breath of the Wild sold more than one million copies in the US that month—925,000 of which were for Switch, outselling the Switch itself.[145][146][147][148] Nintendo president Tatsumi Kimishima said that the attach rate on the Switch was \"unprecedented\".[149] Breath of the Wild had sold 31.15 million copies on the Switch by September 2023 and 1.70 million copies on the Wii U by December 2020.[150][151]\\nAwards\\nFollowing its demonstration at E3 2016, Breath of the Wild received several accolades from the Game Critics Awards[152] and from publications such as IGN and Destructoid.[153][154] It was listed among the best games at E3 by Eurogamer,[81] The game, he continued, would challenge the series\\' conventions, such as the requirement that players complete dungeons in a set order.[2][73] The next year, Nintendo introduced the game\\'s high-definition, cel-shaded visual style with in-game footage at its E3 press event.[74][75] Once planned for release in 2015, the game was delayed early in the year and did not show at that year\\'s E3.[76][77] Zelda series creator Shigeru Miyamoto reaffirmed that the game would still release for the Wii U despite the development of Nintendo\\'s next console, the Nintendo Switch.[78] The Switch version also has higher-quality environmental sounds.[53][54] Certain ideas that were planned for the game, like flying and underground dungeons were not implemented due to the Wii Us limitations; they would eventually resurface in the game\\'s sequel.[55] Aonuma stated that the art design was inspired by gouache and en plein air art to help identify the vast world.[56] Takizawa has also cited the Jōmon period as an inspiration for the ancient Sheikah technology and architecture that is found in the game, due to the mystery surrounding the period.[57] Journalists commented on unexpected interactions between game elements,[129][130][131] with serendipitous moments proving popular on social media.[132] Chris Plante of The Verge predicted that whereas prior open-world games tended to feature prescribed challenges, Zelda would influence a new generation of games with open-ended problem-solving.[132] Digital Trends wrote that the game\\'s level of experimentation allowed players to interact with and exploit the environment in creative ways, resulting in various \"tricks\" still discovered years after release.[127]\\nReviewers lauded the sense of detail and immersion.[133][129] Kotaku recommended turning off UI elements in praise of the indirect cues that contextually indicate the same information, such 
as Link shivering in the cold or waypoints appearing when using the scope.[133]', metadata={'title': 'The Legend of Zelda: Breath of the Wild - Wikipedia', 'source': 'https://en.wikipedia.org/wiki/The_Legend_of_Zelda:_Breath_of_the_Wild', 'score': 0.93348, 'images': None})]"
"[Document(metadata={'title': 'The Legend of Zelda: Breath of the Wild - Nintendo Switch Wiki', 'source': 'https://nintendo-switch.fandom.com/wiki/The_Legend_of_Zelda:_Breath_of_the_Wild', 'score': 0.9961155, 'images': []}, page_content='The Legend of Zelda: Breath of the Wild is an open world action-adventure game published by Nintendo for the Wii U and as a launch title for the Nintendo Switch, and was released worldwide on March 3, 2017. It is the nineteenth installment of the The Legend of Zelda series and the first to be developed with a HD resolution. The game features a gigantic open world, with the player being able to ...'),\n",
" Document(metadata={'title': 'The Legend of Zelda: Breath of the Wild - Zelda Wiki', 'source': 'https://zelda.fandom.com/wiki/The_Legend_of_Zelda:_Breath_of_the_Wild', 'score': 0.9804313, 'images': []}, page_content='[]\\nReferences\\nThe Legend of Zelda \\xa0·\\nThe Adventure of Link \\xa0·\\nA Link to the Past (& Four Swords) \\xa0·\\nLink\\'s Awakening (DX; Nintendo Switch) \\xa0·\\nOcarina of Time (Master Quest; 3D) \\xa0·\\nMajora\\'s Mask (3D) \\xa0·\\nOracle of Ages \\xa0·\\nOracle of Seasons \\xa0·\\nFour Swords (Anniversary Edition) \\xa0·\\nThe Wind Waker (HD) \\xa0·\\nFour Swords Adventures \\xa0·\\nThe Minish Cap \\xa0·\\nTwilight Princess (HD) \\xa0·\\nPhantom Hourglass \\xa0·\\nSpirit Tracks \\xa0·\\nSkyward Sword (HD) \\xa0·\\nA Link Between Worlds \\xa0·\\nTri Force Heroes \\xa0·\\nBreath of the Wild \\xa0·\\nTears of the Kingdom\\nZelda (Game & Watch) \\xa0·\\nThe Legend of Zelda Game Watch \\xa0·\\nLink\\'s Crossbow Training \\xa0·\\nMy Nintendo Picross: Twilight Princess \\xa0·\\nCadence of Hyrule \\xa0·\\nGame & Watch: The Legend of Zelda\\nCD-i Games\\n Listings[]\\nCharacters[]\\nBosses[]\\nEnemies[]\\nDungeons[]\\nLocations[]\\nItems[]\\nTranslations[]\\nCredits[]\\nReception[]\\nSales[]\\nEiji Aonuma and Hidemaro Fujibayashi accepting the \"Game of the Year\" award for Breath of the Wild at The Game Awards 2017\\nBreath of the Wild was estimated to have sold approximately 1.3 million copies in its first three weeks and around 89% of Switch owners were estimated to have also purchased the game.[52] Sales of the game have remained strong and as of June 30, 2022, the Switch version has sold 27.14 million copies worldwide while the Wii U version has sold 1.69 million copies worldwide as of December 31, 2019,[53][54] giving Breath of the Wild a cumulative total of 28.83 million copies sold.\\n It also earned a Metacritic score of 97 from more than 100 critics, placing it among the highest-rated games of all time.[59][60] Notably, the game received the most perfect review scores for any game listed on Metacritic up to that point.[61]\\nIn 2022, Breath of the Wild was chosen as the best Legend of Zelda game of all time in their \"Top 10 Best Zelda Games\" list countdown; but was then placed as the \"second\" best Zelda game in their new revamped version of their \"Top 10 Best Zelda Games\" list in 2023, right behind it\\'s successor Tears of Video Game Canon ranks Breath of the Wild as one of the best video games of all time.[74] Metacritic ranked Breath of the Wild as the single best game of the 2010s.[75]\\nFan Reception[]\\nWatchMojo placed Breath of the Wild at the #2 spot in their \"Top 10 Legend of Zelda Games of All Time\" list countdown, right behind Ocarina of Time.[76] The Faces of Evil \\xa0·\\nThe Wand of Gamelon \\xa0·\\nZelda\\'s Adventure\\nHyrule Warriors Series\\nHyrule Warriors (Legends; Definitive Edition) \\xa0·\\nHyrule Warriors: Age of Calamity\\nSatellaview Games\\nBS The Legend of Zelda \\xa0·\\nAncient Stone Tablets\\nTingle Series\\nFreshly-Picked Tingle\\'s Rosy Rupeeland \\xa0·\\nTingle\\'s Balloon Fight DS \\xa0·\\n'),\n",
" Document(metadata={'title': 'The Legend of Zelda: Breath of the Wild - Zelda Wiki', 'source': 'https://zeldawiki.wiki/wiki/The_Legend_of_Zelda:_Breath_of_the_Wild', 'score': 0.9627432, 'images': []}, page_content='The Legend of Zelda\\xa0•\\nThe Adventure of Link\\xa0•\\nA Link to the Past (& Four Swords)\\xa0•\\nLink\\'s Awakening (DX; Nintendo Switch)\\xa0•\\nOcarina of Time (Master Quest; 3D)\\xa0•\\nMajora\\'s Mask (3D)\\xa0•\\nOracle of Ages\\xa0•\\nOracle of Seasons\\xa0•\\nFour Swords (Anniversary Edition)\\xa0•\\nThe Wind Waker (HD)\\xa0•\\nFour Swords Adventures\\xa0•\\nThe Minish Cap\\xa0•\\nTwilight Princess (HD)\\xa0•\\nPhantom Hourglass\\xa0•\\nSpirit Tracks\\xa0•\\nSkyward Sword (HD)\\xa0•\\nA Link Between Worlds\\xa0•\\nTri Force Heroes\\xa0•\\nBreath of the Wild\\xa0•\\nTears of the Kingdom\\nZelda (Game & Watch)\\xa0•\\nThe Legend of Zelda Game Watch\\xa0•\\nHeroes of Hyrule\\xa0•\\nLink\\'s Crossbow Training\\xa0•\\nMy Nintendo Picross: Twilight Princess\\xa0•\\nCadence of Hyrule\\xa0•\\nVermin\\nThe Faces of Evil\\xa0•\\nThe Wand of Gamelon\\xa0•\\nZelda\\'s Adventure\\nHyrule Warriors (Legends; Definitive Edition)\\xa0•\\nHyrule Warriors: Age of Calamity\\nBS The Legend of Zelda\\xa0•\\nAncient Stone Tablets\\nFreshly-Picked Tingle\\'s Rosy Rupeeland\\xa0•\\nTingle\\'s Balloon Fight DS\\xa0•\\nToo Much Tingle Pack\\xa0•\\nRipened Tingle\\'s Balloon Trip of Love\\nSoulcalibur II\\xa0•\\nWarioWare Series\\xa0•\\nCaptain Rainbow\\xa0•\\nNintendo Land\\xa0•\\nScribblenauts Unlimited\\xa0•\\nMario Kart 8\\xa0•\\nSplatoon 3\\nSuper Smash Bros (Series)\\nSuper Smash Bros.\\xa0•\\nSuper Smash Bros. Melee\\xa0•\\nSuper Smash Bros. Brawl\\xa0•\\nSuper Smash Bros. for Nintendo 3DS / Wii U\\xa0•\\n It also earned a Metacritic score of 97 from more than 100 critics, placing it among the highest-rated games of all time.[60][61] Notably, the game received the most perfect review scores for any game listed on Metacritic up to that point.[62]\\nAwards\\nThroughout 2016, Breath of the Wild won several awards as a highly anticipated game, including IGN\\'s and Destructoid\\'s Best of E3,[63][64] at the Game Critic Awards 2016,[65] and at The Game Awards 2016.[66] Following its release, Breath of the Wild received the title of \"Game of the Year\" from the Japan Game Awards 2017,[67] the Golden Joystick Awards 2017,<ref\"Our final award is for the Ultimate Game of the Year. Official website(s)\\nOfficial website(s)\\nCanonicity\\nCanonicity\\nCanon[citation needed]\\nPredecessor\\nPredecessor\\nTri Force Heroes\\nSuccessor\\nSuccessor\\nTears of the Kingdom\\nThe Legend of Zelda: Breath of the Wild guide at StrategyWiki\\nBreath of the Wild Guide at Zelda Universe\\nThe Legend of Zelda: Breath of the Wild is the nineteenth main installment of The Legend of Zelda series. 
Listings\\nCharacters\\nBosses\\nEnemies\\nDungeons\\nLocations\\nItems\\nTranslations\\nCredits\\nReception\\nSales\\nBreath of the Wild was estimated to have sold approximately 1.3 million copies in its first three weeks and around 89% of Switch owners were estimated to have also purchased the game.[53] Sales of the game have remained strong and as of September 30, 2023, the Switch version has sold 31.15 million copies worldwide while the Wii U version has sold 1.7 million copies worldwide as of December 31, 2021,[54][55] giving Breath of the Wild a cumulative total of 32.85 million copies sold.\\n The Legend of Zelda: Breath of the Wild\\nThe Legend of Zelda: Breath of the Wild\\nThe Legend of Zelda: Breath of the Wild\\nDeveloper(s)\\nDeveloper(s)\\nPublisher(s)\\nPublisher(s)\\nNintendo\\nDesigner(s)\\nDesigner(s)\\n')]"
]
},
"execution_count": 8,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.retrievers import TavilySearchAPIRetriever\n",
"query = \"what year was breath of the wild released?\"\n",
"\n",
"retriever = TavilySearchAPIRetriever(k=3)\n",
"\n",
"retriever.invoke(\"what year was breath of the wild released?\")"
"retriever.invoke(query)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chaining\n",
"## Use within a chain\n",
"\n",
"We can easily combine this retriever in to a chain."
]
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
@@ -110,40 +161,50 @@
"\n",
"Question: {question}\"\"\"\n",
")\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"chain = (\n",
" RunnablePassthrough.assign(context=(lambda x: x[\"question\"]) | retriever)\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | ChatOpenAI(model=\"gpt-4-1106-preview\")\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'As of the end of 2020, \"The Legend of Zelda: Breath of the Wild\" sold over 21.45 million copies worldwide.'"
"'As of August 2020, The Legend of Zelda: Breath of the Wild had sold over 20.1 million copies worldwide on Nintendo Switch and Wii U.'"
]
},
"execution_count": 13,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"question\": \"how many units did bretch of the wild sell in 2020\"})"
"chain.invoke(\"how many units did bretch of the wild sell in 2020\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": []
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `TavilySearchAPIRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.tavily_search_api.TavilySearchAPIRetriever.html)."
]
}
],
"metadata": {
@@ -162,7 +223,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -2,14 +2,51 @@
"cells": [
{
"cell_type": "markdown",
"id": "9fc6205b",
"id": "62727aaa-bcff-4087-891c-e539f824ee1f",
"metadata": {},
"source": [
"# Wikipedia\n",
"---\n",
"sidebar_label: Wikipedia\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "d62a16c1-10de-4f99-b392-c4ad2e6123a1",
"metadata": {},
"source": [
"# WikipediaRetriever\n",
"\n",
"## Overview\n",
">[Wikipedia](https://wikipedia.org/) is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. `Wikipedia` is the largest and most-read reference work in history.\n",
"\n",
"This notebook shows how to retrieve wiki pages from `wikipedia.org` into the Document format that is used downstream."
"This notebook shows how to retrieve wiki pages from `wikipedia.org` into the [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) format that is used downstream.\n",
"\n",
"### Integration details\n",
"\n",
"| Retriever | Source | Package |\n",
"| :--- | :--- | :---: |\n",
"[WikipediaRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.wikipedia.WikipediaRetriever.html) | [Wikipedia](https://www.wikipedia.org/) articles | langchain_community |"
]
},
{
"cell_type": "markdown",
"id": "eb7d377c-168b-40e8-bd61-af6a4fb1b44f",
"metadata": {},
"source": [
"## Setup\n",
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1bbc6013-2617-4f7e-9d8b-7453d09315c0",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
@@ -17,15 +54,9 @@
"id": "51489529-5dcd-4b86-bda6-de0a39d8ffd1",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "markdown",
"id": "1435c804-069d-4ade-9a7b-006b97b767c1",
"metadata": {},
"source": [
"First, you need to install `wikipedia` python package."
"### Installation\n",
"\n",
"The integration lives in the `langchain-community` package. We also need to install the `wikipedia` python package itself."
]
},
{
@@ -37,7 +68,15 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet wikipedia"
"%pip install -qU langchain_community wikipedia"
]
},
{
"cell_type": "markdown",
"id": "ae622ac6-d18a-4754-a4bd-d30a078c19b5",
"metadata": {},
"source": [
"## Instantiation"
]
},
{
@@ -45,7 +84,9 @@
"id": "6c15470b-a16b-4e0d-bc6a-6998bafbb5a4",
"metadata": {},
"source": [
"`WikipediaRetriever` has these arguments:\n",
"Now we can instantiate our retriever:\n",
"\n",
"`WikipediaRetriever` parameters include:\n",
"- optional `lang`: default=\"en\". Use it to search in a specific language part of Wikipedia\n",
"- optional `load_max_docs`: default=100. Use it to limit number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.\n",
"- optional `load_all_available_meta`: default=False. By default only the most important fields downloaded: `Published` (date when document was published/last updated), `title`, `Summary`. If True, other fields also downloaded.\n",
@@ -53,200 +94,149 @@
"`get_relevant_documents()` has one argument, `query`: free text which used to find documents in Wikipedia"
]
},
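To make the parameter list above concrete, here is a minimal sketch of a non-default configuration; the values are illustrative, and only parameters documented above are set:

```python
from langchain_community.retrievers import WikipediaRetriever

# Illustrative, non-default configuration of the parameters described above.
retriever = WikipediaRetriever(
    lang="de",                    # query the German-language Wikipedia
    load_all_available_meta=True, # pull the full metadata set, not just title/summary/date
)

docs = retriever.invoke("Zelda")
print(docs[0].metadata["title"])
```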
{
"cell_type": "markdown",
"id": "ae3c3d16",
"metadata": {},
"source": [
"## Examples"
]
},
{
"cell_type": "markdown",
"id": "6fafb73b-d6ec-4822-b161-edf0aaf5224a",
"metadata": {},
"source": [
"### Running retriever"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "d0e6f506",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.retrievers import WikipediaRetriever"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "f381f642",
"execution_count": 1,
"id": "b78f0cd0-ffea-4fe3-9d1d-54639c4ef1ff",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.retrievers import WikipediaRetriever\n",
"\n",
"retriever = WikipediaRetriever()"
]
},
{
"cell_type": "markdown",
"id": "12aead36-7b97-4d9c-82e7-ec644a3127f9",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "20ae1a74",
"execution_count": 2,
"id": "54a76605-6b1e-44bf-b8a2-7d48119290c4",
"metadata": {},
"outputs": [],
"source": [
"docs = retriever.invoke(\"HUNTER X HUNTER\")"
"docs = retriever.invoke(\"TOKYO GHOUL\")"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "1d5a5088",
"execution_count": 3,
"id": "65ada2b7-3507-4dcb-9982-5f8f4e97a2e1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'Hunter × Hunter',\n",
" 'summary': 'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced \"hunter hunter\") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\\'s snen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The story focuses on a young boy named Gon Freecss who discovers that his father, who left him at a young age, is actually a world-renowned Hunter, a licensed professional who specializes in fantastical pursuits such as locating rare or unidentified animal species, treasure hunting, surveying unexplored enclaves, or hunting down lawless individuals. Gon departs on a journey to become a Hunter and eventually find his father. Along the way, Gon meets various other Hunters and encounters the paranormal.\\nHunter × Hunter was adapted into a 62-episode anime television series produced by Nippon Animation and directed by Kazuhiro Furuhashi, which ran on Fuji Television from October 1999 to March 2001. Three separate original video animations (OVAs) totaling 30 episodes were subsequently produced by Nippon Animation and released in Japan from 2002 to 2004. A second anime television series by Madhouse aired on Nippon Television from October 2011 to September 2014, totaling 148 episodes, with two animated theatrical films released in 2013. There are also numerous audio albums, video games, musicals, and other media based on Hunter × Hunter.\\nThe manga has been translated into English and released in North America by Viz Media since April 2005. Both television series have been also licensed by Viz Media, with the first series having aired on the Funimation Channel in 2009 and the second series broadcast on Adult Swim\\'s Toonami programming block from April 2016 to June 2019.\\nHunter × Hunter has been a huge critical and financial success and has become one of the best-selling manga series of all time, having over 84 million copies in circulation by July 2022.\\n\\n'}"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
"name": "stdout",
"output_type": "stream",
"text": [
"Tokyo Ghoul (Japanese: 東京喰種(トーキョーグール), Hepburn: Tōkyō Gūru) is a Japanese dark fantasy manga series written and illustrated by Sui Ishida. It was serialized in Shueisha's seinen manga magazine Weekly Young Jump from September 2011 to September 2014, with its chapters collected in 14 tankōbon volumes. The story is set in an alternate version of Tokyo where humans coexist with ghouls, beings who loo\n"
]
}
],
"source": [
"docs[0].metadata # meta-information of the Document"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "c0ccd0c7-f6a6-43e7-b842-5f57afb94224",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced \"hunter hunter\") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The sto'"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].page_content[:400] # a content of the Document"
"print(docs[0].page_content[:400])"
]
},
{
"cell_type": "markdown",
"id": "2670363b-3806-4c7e-b14d-90a4d5d2a200",
"id": "ae3c3d16",
"metadata": {},
"source": [
"### Question Answering on facts"
"## Use within a chain\n",
"Like other retrievers, `WikipediaRetriever` can be incorporated into LLM applications via [chains](/docs/how_to/sequence/).\n",
"\n",
"We will need a LLM or chat model:\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" />\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "bb3601df-53ea-4826-bdbe-554387bc3ad4",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" ········\n"
]
}
],
"source": [
"# get a token: https://platform.openai.com/account/api-keys\n",
"\n",
"from getpass import getpass\n",
"\n",
"OPENAI_API_KEY = getpass()"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "e9c1a114-0410-4804-be30-05f34a9760f9",
"metadata": {
"tags": []
},
"execution_count": 4,
"id": "4bd3d268-eb8c-46e9-930a-18f5e2a50008",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"# | output: false\n",
"# | echo: false\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "51a33cc9-ec42-4afc-8a2d-3bfff476aa59",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo\") # switch to 'gpt-4'\n",
"qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)"
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "ea537767-a8bf-4adf-ae03-b353c9145d58",
"metadata": {
"tags": []
},
"execution_count": 5,
"id": "9b52bc65-1b2e-4c30-ab43-41eaa5bf79c3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"\n",
" Answer the question based only on the context provided.\n",
" Context: {context}\n",
" Question: {question}\n",
" \"\"\"\n",
")\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "0d268905-3b19-4338-ac10-223c0fe4d5e4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"-> **Question**: What is Apify? \n",
"\n",
"**Answer**: Apify is a platform that allows you to easily automate web scraping, data extraction and web automation. It provides a cloud-based infrastructure for running web crawlers and other automation tasks, as well as a web-based tool for building and managing your crawlers. Additionally, Apify offers a marketplace for buying and selling pre-built crawlers and related services. \n",
"\n",
"-> **Question**: When the Monument to the Martyrs of the 1830 Revolution was created? \n",
"\n",
"**Answer**: Apify is a web scraping and automation platform that enables you to extract data from websites, turn unstructured data into structured data, and automate repetitive tasks. It provides a user-friendly interface for creating web scraping scripts without any coding knowledge. Apify can be used for various web scraping tasks such as data extraction, web monitoring, content aggregation, and much more. Additionally, it offers various features such as proxy support, scheduling, and integration with other tools to make web scraping and automation tasks easier and more efficient. \n",
"\n",
"-> **Question**: What is the Abhayagiri Vihāra? \n",
"\n",
"**Answer**: Abhayagiri Vihāra was a major monastery site of Theravada Buddhism that was located in Anuradhapura, Sri Lanka. It was founded in the 2nd century BCE and is considered to be one of the most important monastic complexes in Sri Lanka. \n",
"\n"
]
"data": {
"text/plain": [
"'The main character in Tokyo Ghoul is Ken Kaneki, who transforms into a ghoul after receiving an organ transplant from a ghoul named Rize.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"questions = [\n",
" \"What is Apify?\",\n",
" \"When the Monument to the Martyrs of the 1830 Revolution was created?\",\n",
" \"What is the Abhayagiri Vihāra?\",\n",
" # \"How big is Wikipédia en français?\",\n",
"]\n",
"chat_history = []\n",
"chain.invoke(\n",
" \"Who is the main character in `Tokyo Ghoul` and does he transform into a ghoul?\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "236bbafb-ebd4-4165-9b8f-d47605f6eef3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"for question in questions:\n",
" result = qa({\"question\": question, \"chat_history\": chat_history})\n",
" chat_history.append((question, result[\"answer\"]))\n",
" print(f\"-> **Question**: {question} \\n\")\n",
" print(f\"**Answer**: {result['answer']} \\n\")"
"For detailed documentation of all `WikipediaRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.wikipedia.WikipediaRetriever.html#langchain-community-retrievers-wikipedia-wikipediaretriever)."
]
}
],
@@ -266,7 +256,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -2,10 +2,14 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Astra DB\n",
"sidebar_label: AstraDB\n",
"---"
]
},
@@ -13,130 +17,121 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Astra DB\n",
"# AstraDBByteStore\n",
"\n",
"This will help you get started with Astra DB [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `AstraDBByteStore` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/storage/langchain_astradb.storage.AstraDBByteStore.html).\n",
"\n",
"## Overview\n",
"\n",
"DataStax [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless vector-capable database built on Cassandra and made conveniently available through an easy-to-use JSON API.\n",
"\n",
"`AstraDBStore` and `AstraDBByteStore` need the `astrapy` package to be installed:"
"### Integration details\n",
"\n",
"| Class | Package | Local | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [AstraDBByteStore](https://api.python.langchain.com/en/latest/storage/langchain_astradb.storage.AstraDBByteStore.html) | [langchain_astradb](https://api.python.langchain.com/en/latest/astradb_api_reference.html) | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_astradb?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_astradb?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",
"\n",
"To create an `AstraDBByteStore` byte store, you'll need to [create a DataStax account](https://www.datastax.com/products/datastax-astra).\n",
"\n",
"### Credentials\n",
"\n",
"After signing up, set the following credentials:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet astrapy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Store takes the following parameters:\n",
"\n",
"* `api_endpoint`: Astra DB API endpoint. Looks like `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`\n",
"* `token`: Astra DB token. Looks like `AstraCS:6gBhNmsk135....`\n",
"* `collection_name` : Astra DB collection name\n",
"* `namespace`: (Optional) Astra DB namespace"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## AstraDBStore\n",
"\n",
"The `AstraDBStore` is an implementation of `BaseStore` that stores everything in your DataStax Astra DB instance.\n",
"The store keys must be strings and will be mapped to the `_id` field of the Astra DB document.\n",
"The store values can be any object that can be serialized by `json.dumps`.\n",
"In the database, entries will have the form:\n",
"\n",
"```json\n",
"{\n",
" \"_id\": \"<key>\",\n",
" \"value\": <value>\n",
"}\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.storage import AstraDBStore"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from getpass import getpass\n",
"\n",
"ASTRA_DB_API_ENDPOINT = input(\"ASTRA_DB_API_ENDPOINT = \")\n",
"ASTRA_DB_API_ENDPOINT = getpass(\"ASTRA_DB_API_ENDPOINT = \")\n",
"ASTRA_DB_APPLICATION_TOKEN = getpass(\"ASTRA_DB_APPLICATION_TOKEN = \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain AstraDB integration lives in the `langchain_astradb` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"store = AstraDBStore(\n",
"%pip install -qU langchain_astradb"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our byte store:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from langchain_astradb import AstraDBByteStore\n",
"\n",
"kv_store = AstraDBByteStore(\n",
" api_endpoint=ASTRA_DB_API_ENDPOINT,\n",
" token=ASTRA_DB_APPLICATION_TOKEN,\n",
" collection_name=\"my_store\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['v1', [0.1, 0.2, 0.3]]\n"
]
}
],
"source": [
"store.mset([(\"k1\", \"v1\"), (\"k2\", [0.1, 0.2, 0.3])])\n",
"print(store.mget([\"k1\", \"k2\"]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Usage with CacheBackedEmbeddings\n",
"## Usage\n",
"\n",
"You may use the `AstraDBStore` in conjunction with a [`CacheBackedEmbeddings`](/docs/how_to/caching_embeddings) to cache the result of embeddings computations.\n",
"Note that `AstraDBStore` stores the embeddings as a list of floats without converting them first to bytes so we don't use `fromByteStore` there."
"You can set data under keys like this using the `mset` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"[b'value1', b'value2']"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.embeddings import CacheBackedEmbeddings\n",
"from langchain_openai import OpenAIEmbeddings\n",
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"embeddings = CacheBackedEmbeddings(\n",
" underlying_embeddings=OpenAIEmbeddings(), document_embedding_store=store\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
@@ -144,96 +139,67 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## AstraDBByteStore\n",
"\n",
"The `AstraDBByteStore` is an implementation of `ByteStore` that stores everything in your DataStax Astra DB instance.\n",
"The store keys must be strings and will be mapped to the `_id` field of the Astra DB document.\n",
"The store `bytes` values are converted to base64 strings for storage into Astra DB.\n",
"In the database, entries will have the form:\n",
"\n",
"```json\n",
"{\n",
" \"_id\": \"<key>\",\n",
" \"value\": \"bytes encoded in base 64\"\n",
"}\n",
"```"
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.storage import AstraDBByteStore"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from getpass import getpass\n",
"\n",
"ASTRA_DB_API_ENDPOINT = input(\"ASTRA_DB_API_ENDPOINT = \")\n",
"ASTRA_DB_APPLICATION_TOKEN = getpass(\"ASTRA_DB_APPLICATION_TOKEN = \")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"store = AstraDBByteStore(\n",
" api_endpoint=ASTRA_DB_API_ENDPOINT,\n",
" token=ASTRA_DB_APPLICATION_TOKEN,\n",
" collection_name=\"my_store\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[b'v1', b'v2']\n"
]
"data": {
"text/plain": [
"[None, None]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
"source": [
"You can use an `AstraDBByteStore` anywhere you'd use other ByteStores, including as a [cache for embeddings](/docs/how_to/caching_embeddings)."
]
},
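As one hedged example of that, here is a sketch of caching document embeddings in the byte store created above (it assumes an OpenAI API key is configured; any `Embeddings` implementation can be substituted):

```python
from langchain.embeddings import CacheBackedEmbeddings
from langchain_openai import OpenAIEmbeddings

underlying = OpenAIEmbeddings()

# Each document's embedding is computed once, then served from the
# Astra DB-backed byte store on subsequent calls.
cached_embedder = CacheBackedEmbeddings.from_bytes_store(
    underlying_embeddings=underlying,
    document_embedding_cache=kv_store,
    namespace=underlying.model,  # keep caches for different models separate
)

vectors = cached_embedder.embed_documents(["hello", "world"])
```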
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `AstraDBByteStore` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/storage/langchain_astradb.storage.AstraDBByteStore.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -2,7 +2,11 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Cassandra\n",
@@ -13,68 +17,62 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Cassandra\n",
"# CassandraByteStore\n",
"\n",
"This will help you get started with Cassandra [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `CassandraByteStore` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.cassandra.CassandraByteStore.html).\n",
"\n",
"## Overview\n",
"\n",
"[Cassandra](https://cassandra.apache.org/) is a NoSQL, row-oriented, highly scalable and highly available database.\n",
"\n",
"`CassandraByteStore` needs the `cassio` package to be installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet cassio"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Store takes the following parameters:\n",
"### Integration details\n",
"\n",
"* table: The table where to store the data.\n",
"* session: (Optional) The cassandra driver session. If not provided, the cassio resolved session will be used.\n",
"* keyspace: (Optional) The keyspace of the table. If not provided, the cassio resolved keyspace will be used.\n",
"* setup_mode: (Optional) The mode used to create the Cassandra table (SYNC, ASYNC or OFF). Defaults to SYNC."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## CassandraByteStore\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/v0.2/docs/integrations/stores/cassandra_storage) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [CassandraByteStore](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.cassandra.CassandraByteStore.html) | [langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ✅ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",
"\n",
"The `CassandraByteStore` is an implementation of `ByteStore` that stores the data in your Cassandra instance.\n",
"The store keys must be strings and will be mapped to the `row_id` column of the Cassandra table.\n",
"The store `bytes` values are mapped to the `body_blob` column of the Cassandra table."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain `CassandraByteStore` integration lives in the `langchain_community` package. You'll also need to install the `cassio` package or the `cassandra-driver` package as a peer dependency depending on which initialization method you're using:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.storage import CassandraByteStore"
"%pip install -qU langchain_community\n",
"%pip install -qU cassandra-driver\n",
"%pip install -qU cassio"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Init from a cassandra driver Session\n",
"You'll also need to create a `cassandra.cluster.Session` object, as described in the [Cassandra driver documentation](https://docs.datastax.com/en/developer/python-driver/latest/api/cassandra/cluster/#module-cassandra.cluster). The details vary (e.g. with network settings and authentication), but this might be something like:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"You need to create a `cassandra.cluster.Session` object, as described in the [Cassandra driver documentation](https://docs.datastax.com/en/developer/python-driver/latest/api/cassandra/cluster/#module-cassandra.cluster). The details vary (e.g. with network settings and authentication), but this might be something like:"
],
"metadata": {
"collapsed": false
}
"You'll first need to create a `cassandra.cluster.Session` object, as described in the [Cassandra driver documentation](https://docs.datastax.com/en/developer/python-driver/latest/api/cassandra/cluster/#module-cassandra.cluster). The details vary (e.g. with network settings and authentication), but this might be something like:"
]
},
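For reference, a minimal sketch of what that might look like for an unauthenticated node running locally; contact points, authentication, and TLS settings will differ in a real deployment:

```python
from cassandra.cluster import Cluster

# Connect to a locally running Cassandra node and open a session.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()
```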
{
"cell_type": "code",
@@ -90,12 +88,10 @@
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You need to provide the name of an existing keyspace of the Cassandra instance:"
],
"metadata": {
"collapsed": false
}
"Then you can create your store! You'll also need to provide the name of an existing keyspace of the Cassandra instance:"
]
},
{
"cell_type": "code",
@@ -103,61 +99,91 @@
"metadata": {},
"outputs": [],
"source": [
"CASSANDRA_KEYSPACE = input(\"CASSANDRA_KEYSPACE = \")"
"from langchain_community.storage import CassandraByteStore\n",
"\n",
"kv_store = CassandraByteStore(\n",
" table=\"my_store\",\n",
" session=session,\n",
" keyspace=\"<YOUR KEYSPACE>\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Creating the store:"
],
"metadata": {
"collapsed": false
}
"## Usage\n",
"\n",
"You can set data under keys like this using the `mset` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[b'v1', b'v2']\n"
]
}
],
"outputs": [],
"source": [
"store = CassandraByteStore(\n",
" table=\"my_store\",\n",
" session=session,\n",
" keyspace=CASSANDRA_KEYSPACE,\n",
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Init from cassio\n",
"\n",
"It's also possible to use cassio to configure the session and keyspace."
],
"metadata": {
"collapsed": false
}
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Init using `cassio`\n",
"\n",
"It's also possible to use cassio to configure the session and keyspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import cassio\n",
"\n",
"cassio.init(contact_points=\"127.0.0.1\", keyspace=CASSANDRA_KEYSPACE)\n",
"cassio.init(contact_points=\"127.0.0.1\", keyspace=\"<YOUR KEYSPACE>\")\n",
"\n",
"store = CassandraByteStore(\n",
" table=\"my_store\",\n",
@@ -165,62 +191,27 @@
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
],
"metadata": {
"collapsed": false
}
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Usage with CacheBackedEmbeddings\n",
"## API reference\n",
"\n",
"You may use the `CassandraByteStore` in conjunction with a [`CacheBackedEmbeddings`](/docs/how_to/caching_embeddings) to cache the result of embeddings computations.\n"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"from langchain.embeddings import CacheBackedEmbeddings\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"cassio.init(contact_points=\"127.0.0.1\", keyspace=CASSANDRA_KEYSPACE)\n",
"\n",
"store = CassandraByteStore(\n",
" table=\"my_store\",\n",
")\n",
"\n",
"embeddings = CacheBackedEmbeddings.from_bytes_store(\n",
" underlying_embeddings=OpenAIEmbeddings(), document_embedding_cache=store\n",
")"
],
"metadata": {
"collapsed": false
}
"For detailed documentation of all `CassandraByteStore` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/storage/langchain_community.storage.cassandra.CassandraByteStore.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -2,10 +2,14 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Elasticsearch \n",
"sidebar_label: Elasticsearch\n",
"---"
]
},
@@ -15,25 +19,31 @@
"source": [
"# ElasticsearchEmbeddingsCache\n",
"\n",
"This will help you get started with Elasticsearch [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `ElasticsearchEmbeddingsCache` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html).\n",
"\n",
"## Overview\n",
"\n",
"The `ElasticsearchEmbeddingsCache` is a `ByteStore` implementation that uses your Elasticsearch instance for efficient storage and retrieval of embeddings.\n",
"\n",
"### Integration details\n",
"\n",
"First install the LangChain integration with Elasticsearch."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -U langchain-elasticsearch"
"| Class | Package | Local | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [ElasticsearchEmbeddingsCache](https://api.python.langchain.com/en/latest/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html) | [langchain_elasticsearch](https://api.python.langchain.com/en/latest/elasticsearch_api_reference.html) | ✅ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_elasticsearch?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_elasticsearch?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",
"\n",
"To create a `ElasticsearchEmbeddingsCache` byte store, you'll need an Elasticsearch cluster. You can [set one up locally](https://www.elastic.co/downloads/elasticsearch) or create an [Elastic account](https://www.elastic.co/elasticsearch)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": "it can be instantiated using `CacheBackedEmbeddings.from_bytes_store` method."
"source": [
"### Installation\n",
"\n",
"The LangChain `ElasticsearchEmbeddingsCache` integration lives in the `__package_name__` package:"
]
},
{
"cell_type": "code",
@@ -41,23 +51,37 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings import CacheBackedEmbeddings\n",
"%pip install -qU langchain_elasticsearch"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our byte store:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from langchain_elasticsearch import ElasticsearchEmbeddingsCache\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"underlying_embeddings = OpenAIEmbeddings(model=\"text-embedding-3-small\")\n",
"\n",
"store = ElasticsearchEmbeddingsCache(\n",
" es_url=\"http://localhost:9200\",\n",
"# Example config for a locally running Elasticsearch instance\n",
"kv_store = ElasticsearchEmbeddingsCache(\n",
" es_url=\"https://localhost:9200\",\n",
" index_name=\"llm-chat-cache\",\n",
" metadata={\"project\": \"my_chatgpt_project\"},\n",
" namespace=\"my_chatgpt_project\",\n",
")\n",
"\n",
"embeddings = CacheBackedEmbeddings.from_bytes_store(\n",
" underlying_embeddings=OpenAIEmbeddings(),\n",
" document_embedding_cache=store,\n",
" query_embedding_cache=store,\n",
" es_user=\"elastic\",\n",
" es_password=\"<GENERATED PASSWORD>\",\n",
" es_params={\n",
" \"ca_certs\": \"~/http_ca.crt\",\n",
" },\n",
")"
]
},
@@ -65,19 +89,93 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The index_name parameter can also accept aliases. This allows to use the ILM: Manage the index lifecycle that we suggest to consider for managing retention and controlling cache growth.\n",
"## Usage\n",
"\n",
"Look at the class docstring for all parameters."
"You can set data under keys like this using the `mset` method:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[b'value1', b'value2']"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Index the generated vectors\n",
"The cached vectors won't be searchable by default. The developer can customize the building of the Elasticsearch document in order to add indexed vector field.\n",
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[None, None]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"\n",
"This can be done by subclassing end overriding methods. "
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use as an embeddings cache\n",
"\n",
"Like other `ByteStores`, you can use an `ElasticsearchEmbeddingsCache` instance for [persistent caching in document ingestion](/docs/how_to/caching_embeddings/) for RAG.\n",
"\n",
"However, cached vectors won't be searchable by default. The developer can customize the building of the Elasticsearch document in order to add indexed vector field.\n",
"\n",
"This can be done by subclassing and overriding methods:"
]
},
{
@@ -88,8 +186,6 @@
"source": [
"from typing import Any, Dict, List\n",
"\n",
"from langchain_elasticsearch import ElasticsearchEmbeddingsCache\n",
"\n",
"\n",
"class SearchableElasticsearchStore(ElasticsearchEmbeddingsCache):\n",
" @property\n",
@@ -112,26 +208,29 @@
{
"cell_type": "markdown",
"metadata": {},
"source": "When overriding the mapping and the document building, please only make additive modifications, keeping the base mapping intact."
"source": [
"When overriding the mapping and the document building, please only make additive modifications, keeping the base mapping intact."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `ElasticsearchEmbeddingsCache` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -2,11 +2,14 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Local Filesystem\n",
"sidebar_position: 3\n",
"---"
]
},
@@ -16,51 +19,119 @@
"source": [
"# LocalFileStore\n",
"\n",
"The `LocalFileStore` is a persistent implementation of `ByteStore` that stores everything in a folder of your choosing."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[b'v1', b'v2']\n"
]
}
],
"source": [
"from pathlib import Path\n",
"This will help you get started with local filesystem [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all LocalFileStore features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/storage/langchain.storage.file_system.LocalFileStore.html).\n",
"\n",
"from langchain.storage import LocalFileStore\n",
"## Overview\n",
"\n",
"root_path = Path.cwd() / \"data\" # can also be a path set by a string\n",
"store = LocalFileStore(root_path)\n",
"The `LocalFileStore` is a persistent implementation of `ByteStore` that stores everything in a folder of your choosing. It's useful if you're using a single machine and are tolerant of files being added or deleted.\n",
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
"### Integration details\n",
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/v0.2/docs/integrations/stores/file_system) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [LocalFileStore](https://api.python.langchain.com/en/latest/storage/langchain.storage.file_system.LocalFileStore.html) | [langchain](https://api.python.langchain.com/en/latest/langchain_api_reference.html) | ✅ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain?style=flat-square&label=%20) |"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's see which files exist in our `data` folder:"
"### Installation\n",
"\n",
"The LangChain `LocalFileStore` integration lives in the `langchain` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our byte store:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from pathlib import Path\n",
"\n",
"from langchain.storage import LocalFileStore\n",
"\n",
"root_path = Path.cwd() / \"data\" # can also be a path set by a string\n",
"\n",
"kv_store = LocalFileStore(root_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"You can set data under keys like this using the `mset` method:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[b'value1', b'value2']"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can see the created files in your `data` folder:"
]
},
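For example, a quick way to inspect that folder from Python (each key set above is persisted as a file named after it; a shell `ls` works equally well):

```python
import os

# The two keys set above should now exist as files under root_path.
print(sorted(os.listdir(root_path)))  # expected: ['key1', 'key2']
```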
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"k1 k2\n"
"key1 key2\n"
]
}
],
@@ -69,16 +140,57 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": []
"source": [
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[None, None]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `LocalFileStore` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/storage/langchain.storage.file_system.LocalFileStore.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -92,7 +204,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -2,12 +2,14 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: In Memory\n",
"sidebar_position: 2\n",
"keywords: [InMemoryStore]\n",
"sidebar_label: In-memory\n",
"---"
]
},
@@ -17,29 +19,26 @@
"source": [
"# InMemoryByteStore\n",
"\n",
"The `InMemoryByteStore` is a non-persistent implementation of `ByteStore` that stores everything in a Python dictionary."
"This guide will help you get started with in-memory [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `InMemoryByteStore` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.InMemoryByteStore.html).\n",
"\n",
"## Overview\n",
"\n",
"The `InMemoryByteStore` is a non-persistent implementation of a `ByteStore` that stores everything in a Python dictionary. It's intended for demos and cases where you don't need persistence past the lifetime of the Python process.\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/v0.2/docs/integrations/stores/in_memory/) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [InMemoryByteStore](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.InMemoryByteStore.html) | [langchain_core](https://api.python.langchain.com/en/latest/core_api_reference.html) | ✅ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_core?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_core?style=flat-square&label=%20) |"
]
},
{
"cell_type": "code",
"execution_count": 1,
"cell_type": "markdown",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[b'v1', b'v2']\n"
]
}
],
"source": [
"from langchain.storage import InMemoryByteStore\n",
"### Installation\n",
"\n",
"store = InMemoryByteStore()\n",
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
"The LangChain `InMemoryByteStore` integration lives in the `langchain_core` package:"
]
},
{
@@ -47,12 +46,123 @@
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
"source": [
"%pip install -qU langchain_core"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now you can instantiate your byte store:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.stores import InMemoryByteStore\n",
"\n",
"kv_store = InMemoryByteStore()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"You can set data under keys like this using the `mset` method:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[b'value1', b'value2']"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[None, None]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `InMemoryByteStore` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/stores/langchain_core.stores.InMemoryByteStore.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -66,7 +176,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -1,29 +0,0 @@
---
sidebar_position: 1
sidebar_class_name: hidden
---
# Stores
In many different applications, having some sort of key-value storage is helpful.
In this section, we will look at a few different ways to store key-value pairs
using implementations of the `ByteStore` interface.
## Features (natively supported)
All `ByteStore`s support the following functions, which are used for modifying
**m**ultiple key-value pairs at once:
- `mget(keys: Sequence[str]) -> List[Optional[bytes]]`: get the contents of multiple keys, returning `None` if a key does not exist
- `mset(key_value_pairs: Sequence[Tuple[str, bytes]]) -> None`: set the contents of multiple keys
- `mdelete(keys: Sequence[str]) -> None`: delete multiple keys
- `yield_keys(prefix: Optional[str] = None) -> Iterator[str]`: yield all keys in the store, optionally filtering by a prefix (see the short sketch after this section)
## How to pick one
`ByteStore`s are designed to be interchangeable. By default, most dependent integrations
use the `InMemoryByteStore`, which is a simple in-memory key-value store.
However, if you start having other requirements, like massive scalability or persistence,
you can swap out the `ByteStore` implementation with one of the other ones documented
in this section.
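To make the interface above concrete, here is a minimal sketch against the in-memory implementation; the same calls work unchanged against any other `ByteStore`:

```python
from langchain_core.stores import InMemoryByteStore

store = InMemoryByteStore()

# Set, read, enumerate, and delete multiple key-value pairs at once.
store.mset([("k1", b"v1"), ("k2", b"v2")])
print(store.mget(["k1", "k2", "missing"]))  # [b'v1', b'v2', None]
print(list(store.yield_keys(prefix="k")))   # ['k1', 'k2']
store.mdelete(["k1", "k2"])
print(store.mget(["k1", "k2"]))             # [None, None]
```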

View File

@@ -2,7 +2,11 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Redis\n",
@@ -15,9 +19,30 @@
"source": [
"# RedisStore\n",
"\n",
"This will help you get started with Redis [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `RedisStore` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.redis.RedisStore.html).\n",
"\n",
"## Overview\n",
"\n",
"The `RedisStore` is an implementation of `ByteStore` that stores everything in your Redis instance.\n",
"\n",
"To configure Redis, follow our [Redis guide](/docs/integrations/providers/redis)."
"### Integration details\n",
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/v0.2/docs/integrations/stores/ioredis_storage) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [RedisStore](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.redis.RedisStore.html) | [langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ✅ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",
"\n",
"To create a Redis byte store, you'll need to set up a Redis instance. You can do this locally or via a provider - see our [Redis guide](/docs/integrations/providers/redis) for an overview of options."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain `RedisStore` integration lives in the `langchain_community` package:"
]
},
{
@@ -26,56 +51,128 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet redis"
"%pip install -qU langchain_community redis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our byte store:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[b'v1', b'v2']\n"
]
}
],
"outputs": [],
"source": [
"from langchain_community.storage import RedisStore\n",
"\n",
"store = RedisStore(redis_url=\"redis://localhost:6379\")\n",
"kv_store = RedisStore(redis_url=\"redis://localhost:6379\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
"You can set data under keys like this using the `mset` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": []
"outputs": [
{
"data": {
"text/plain": [
"[b'value1', b'value2']"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[None, None]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `RedisStore` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/storage/langchain_community.storage.redis.RedisStore.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.5"
}
},
"nbformat": 4,


@@ -2,7 +2,11 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Upstash Redis\n",
@@ -15,11 +19,48 @@
"source": [
"# UpstashRedisByteStore\n",
"\n",
"The `UpstashRedisStore` is an implementation of `ByteStore` that stores everything in your Upstash-hosted Redis instance.\n",
"This will help you get started with Upstash redis [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `UpstashRedisByteStore` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html).\n",
"\n",
"To use the base `RedisStore` instead, see [this guide](/docs/integrations/stores/redis/)\n",
"## Overview\n",
"\n",
"To configure Upstash Redis, follow our [Upstash guide](/docs/integrations/providers/upstash)."
"The `UpstashRedisStore` is an implementation of `ByteStore` that stores everything in your [Upstash](https://upstash.com/)-hosted Redis instance.\n",
"\n",
"To use the base `RedisStore` instead, see [this guide](/docs/integrations/stores/redis/).\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | [JS support](https://js.langchain.com/v0.2/docs/integrations/stores/upstash_redis_storage) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [UpstashRedisByteStore](https://api.python.langchain.com/en/latest/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html) | [langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ❌ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",
"\n",
"You'll first need to [sign up for an Upstash account](https://upstash.com/docs/redis/overall/getstarted). Next, you'll need to create a Redis database to connect to.\n",
"\n",
"### Credentials\n",
"\n",
"Once you've created your database, get your database URL (don't forget the `https://`!) and token:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from getpass import getpass\n",
"\n",
"URL = getpass(\"Enter your Upstash URL\")\n",
"TOKEN = getpass(\"Enter your Upstash REST token\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Upstash integration lives in the `langchain_community` package. You'll also need to install the `upstash-redis` package as a peer dependency:"
]
},
{
@@ -28,61 +69,130 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet upstash-redis"
"%pip install -qU langchain_community upstash-redis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our byte store:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[b'v1', b'v2']\n"
]
}
],
"outputs": [],
"source": [
"from langchain_community.storage import UpstashRedisByteStore\n",
"from upstash_redis import Redis\n",
"\n",
"URL = \"<UPSTASH_REDIS_REST_URL>\"\n",
"TOKEN = \"<UPSTASH_REDIS_REST_TOKEN>\"\n",
"\n",
"redis_client = Redis(url=URL, token=TOKEN)\n",
"store = UpstashRedisByteStore(client=redis_client, ttl=None, namespace=\"test-ns\")\n",
"kv_store = UpstashRedisByteStore(client=redis_client, ttl=None, namespace=\"test-ns\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"store.mset([(\"k1\", b\"v1\"), (\"k2\", b\"v2\")])\n",
"print(store.mget([\"k1\", \"k2\"]))"
"You can set data under keys like this using the `mset` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": []
"outputs": [
{
"data": {
"text/plain": [
"[b'value1', b'value2']"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mset(\n",
" [\n",
" [\"key1\", b\"value1\"],\n",
" [\"key2\", b\"value2\"],\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And you can delete data using the `mdelete` method:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[None, None]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"kv_store.mdelete(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")\n",
"\n",
"kv_store.mget(\n",
" [\n",
" \"key1\",\n",
" \"key2\",\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `UpstashRedisByteStore` features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.5"
}
},
"nbformat": 4,


@@ -73,16 +73,25 @@
"- `max_length: int` (default: 512)\n",
" > The maximum number of tokens. Unknown behavior for values > 512.\n",
"\n",
"- `cache_dir: Optional[str]`\n",
"- `cache_dir: Optional[str]` (default: None)\n",
" > The path to the cache directory. Defaults to `local_cache` in the parent directory.\n",
"\n",
"- `threads: Optional[int]`\n",
" > The number of threads a single onnxruntime session can use. Defaults to None.\n",
"- `threads: Optional[int]` (default: None)\n",
" > The number of threads a single onnxruntime session can use.\n",
"\n",
"- `doc_embed_type: Literal[\"default\", \"passage\"]` (default: \"default\")\n",
" > \"default\": Uses FastEmbed's default embedding method.\n",
" \n",
" > \"passage\": Prefixes the text with \"passage\" before embedding."
" > \"passage\": Prefixes the text with \"passage\" before embedding.\n",
"\n",
"- `batch_size: int` (default: 256)\n",
" > Batch size for encoding. Higher values will use more memory, but be faster.\n",
"\n",
"- `parallel: Optional[int]` (default: None)\n",
"\n",
" > If `>1`, data-parallel encoding will be used, recommended for offline encoding of large datasets.\n",
" > If `0`, use all available cores.\n",
" > If `None`, don't use data-parallel processing, use default onnxruntime threading instead."
]
},
{
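Putting the parameters documented above together, a hedged instantiation sketch might look like the following; the model name is only illustrative, and you should check the FastEmbed documentation for the models your installed version supports.

```python
from langchain_community.embeddings import FastEmbedEmbeddings

embeddings = FastEmbedEmbeddings(
    model_name="BAAI/bge-small-en-v1.5",  # illustrative; any supported FastEmbed model
    max_length=512,
    doc_embed_type="passage",  # prefix texts with "passage" before embedding
    batch_size=256,            # higher = more memory, faster encoding
    parallel=0,                # 0 = use all available cores for data-parallel encoding
)

query_vector = embeddings.embed_query("What is FastEmbed?")
doc_vectors = embeddings.embed_documents(["FastEmbed is a lightweight embedding library."])
```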


@@ -156,6 +156,29 @@
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For certain requirements, there is an option to pass the IBM's [`APIClient`](https://ibm.github.io/watsonx-ai-python-sdk/base.html#apiclient) object into the `WatsonxEmbeddings` class."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from ibm_watsonx_ai import APIClient\n",
"\n",
"api_client = APIClient(...)\n",
"\n",
"watsonx_llm = WatsonxEmbeddings(\n",
" model_id=\"ibm/slate-125m-english-rtrvr\",\n",
" watsonx_client=api_client,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},

File diff suppressed because it is too large


@@ -0,0 +1,155 @@
{
"cells": [
{
"cell_type": "markdown",
"source": [
"# Pinecone Embeddings\n",
"\n",
"Pinecone's inference API can be accessed via `PineconeEmbeddings`. Providing text embeddings via the Pinecone service. We start by installing prerequisite libraries:"
],
"metadata": {
"collapsed": false
},
"id": "f4b5d823fee826c2"
},
{
"cell_type": "code",
"outputs": [],
"source": [
"!pip install -qU \"langchain-pinecone>=0.2.0\" "
],
"metadata": {
"collapsed": false
},
"id": "3bc5d3a5ed7f5ce3",
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"Next, we [sign up / log in to Pinecone](https://app.pinecone.io) to get our API key:"
],
"metadata": {
"collapsed": false
},
"id": "62a77d25c3fd8bd5"
},
{
"cell_type": "code",
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"os.environ[\"PINECONE_API_KEY\"] = os.getenv(\"PINECONE_API_KEY\") or getpass(\n",
" \"Enter your Pinecone API key: \"\n",
")"
],
"metadata": {
"collapsed": false
},
"id": "8162dbcbcf7d3d55",
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"Check the document for available [models](https://docs.pinecone.io/models/overview). Now we initialize our embedding model like so:"
],
"metadata": {
"collapsed": false
},
"id": "98d860a0a2d8b907"
},
{
"cell_type": "code",
"outputs": [],
"source": [
"from langchain_pinecone import PineconeEmbeddings\n",
"\n",
"embeddings = PineconeEmbeddings(model=\"multilingual-e5-large\")"
],
"metadata": {
"collapsed": false
},
"id": "2b3adb72786a5275",
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"From here we can create embeddings either sync or async, let's start with sync! We embed a single text as a query embedding (ie what we search with in RAG) using `embed_query`:"
],
"metadata": {
"collapsed": false
},
"id": "11e24da855517230"
},
{
"cell_type": "code",
"outputs": [],
"source": [
"docs = [\n",
" \"Apple is a popular fruit known for its sweetness and crisp texture.\",\n",
" \"The tech company Apple is known for its innovative products like the iPhone.\",\n",
" \"Many people enjoy eating apples as a healthy snack.\",\n",
" \"Apple Inc. has revolutionized the tech industry with its sleek designs and user-friendly interfaces.\",\n",
" \"An apple a day keeps the doctor away, as the saying goes.\",\n",
"]"
],
"metadata": {
"collapsed": false
},
"id": "2da515e2a61ef7e9",
"execution_count": null
},
{
"cell_type": "code",
"outputs": [],
"source": [
"doc_embeds = embeddings.embed_documents(docs)\n",
"doc_embeds"
],
"metadata": {
"collapsed": false
},
"id": "2897e0d570c90b2f",
"execution_count": null
},
{
"cell_type": "code",
"outputs": [],
"source": [
"query = \"Tell me about the tech company known as Apple\"\n",
"query_embed = embeddings.embed_query(query)\n",
"query_embed"
],
"metadata": {
"collapsed": false
},
"id": "510784963c0e17a",
"execution_count": null
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
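The notebook above stops at the synchronous calls. Every `Embeddings` implementation also exposes async counterparts (`aembed_documents` / `aembed_query`), so a minimal async sketch under the same setup (with `PINECONE_API_KEY` already set, as above) might look like:

```python
import asyncio

from langchain_pinecone import PineconeEmbeddings


async def main() -> None:
    embeddings = PineconeEmbeddings(model="multilingual-e5-large")

    # async document and query embeddings
    doc_embeds = await embeddings.aembed_documents(
        ["Apple is a popular fruit.", "Apple Inc. makes the iPhone."]
    )
    query_embed = await embeddings.aembed_query("Tell me about the tech company Apple")
    print(len(doc_embeds), len(query_embed))


asyncio.run(main())
```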


@@ -101,7 +101,7 @@
" sambastudio_embeddings_project_id=sambastudio_project_id,\n",
" sambastudio_embeddings_endpoint_id=sambastudio_endpoint_id,\n",
" sambastudio_embeddings_api_key=sambastudio_api_key,\n",
" batch_size=32,\n",
" batch_size=32, # set depending on the deployed endpoint configuration\n",
")"
]
},


@@ -8,71 +8,61 @@
"# Sentence Transformers on Hugging Face\n",
"\n",
">[Hugging Face sentence-transformers](https://huggingface.co/sentence-transformers) is a Python framework for state-of-the-art sentence, text and image embeddings.\n",
">One of the embedding models is used in the `HuggingFaceEmbeddings` class.\n",
">We have also added an alias for `SentenceTransformerEmbeddings` for users who are more familiar with directly using that package.\n",
">You can use these embedding models from the `HuggingFaceEmbeddings` class.\n",
"\n",
"`sentence_transformers` package models are originating from [Sentence-BERT](https://arxiv.org/abs/1908.10084)"
":::caution\n",
"\n",
"Running sentence-transformers locally can be affected by your operating system and other global factors. It is recommended for experienced users only.\n",
"\n",
":::\n",
"\n",
"## Setup\n",
"\n",
"You'll need to install the `langchain_huggingface` package as a dependency:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "06c9f47d",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-huggingface"
]
},
{
"cell_type": "markdown",
"id": "8fb16f74",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ff9be586",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.0.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.1.1\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n"
"[-0.038338568061590195, 0.12346471101045609, -0.028642969205975533, 0.05365273356437683, 0.008845377...\n"
]
}
],
"source": [
"%pip install --upgrade --quiet sentence_transformers > /dev/null"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "861521a9",
"metadata": {},
"outputs": [],
"source": [
"from langchain_huggingface import HuggingFaceEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ff9be586",
"metadata": {},
"outputs": [],
"source": [
"from langchain_huggingface import HuggingFaceEmbeddings\n",
"\n",
"embeddings = HuggingFaceEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n",
"# Equivalent to SentenceTransformerEmbeddings(model_name=\"all-MiniLM-L6-v2\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d0a98ae9",
"metadata": {},
"outputs": [],
"source": [
"text = \"This is a test document.\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "5d6c682b",
"metadata": {},
"outputs": [],
"source": [
"query_result = embeddings.embed_query(text)"
"\n",
"text = \"This is a test document.\"\n",
"query_result = embeddings.embed_query(text)\n",
"\n",
"# show only the first 100 characters of the stringified vector\n",
"print(str(query_result)[:100] + \"...\")"
]
},
{
@@ -80,18 +70,39 @@
"execution_count": 6,
"id": "bb5e74c0",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[-0.038338497281074524, 0.12346471846103668, -0.028642890974879265, 0.05365274101495743, 0.00884535...\n"
]
}
],
"source": [
"doc_result = embeddings.embed_documents([text, \"This is not a test document.\"])"
"doc_result = embeddings.embed_documents([text, \"This is not a test document.\"])\n",
"print(str(doc_result)[:100] + \"...\")"
]
},
{
"cell_type": "markdown",
"id": "1e6525cb",
"metadata": {},
"source": [
"## Troubleshooting\n",
"\n",
"If you are having issues with the `accelerate` package not being found or failing to import, installing/upgrading it may help:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "aaad49f8",
"id": "bbae70f7",
"metadata": {},
"outputs": [],
"source": []
"source": [
"%pip install -qU accelerate"
]
}
],
"metadata": {
@@ -110,7 +121,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.5"
},
"vscode": {
"interpreter": {


@@ -6,23 +6,28 @@
"source": [
"# Cassandra Database\n",
"\n",
"Apache Cassandra® is a widely used database for storing transactional application data. The introduction of functions and tooling in Large Language Models has opened up some exciting use cases for existing data in Generative AI applications. The Cassandra Database toolkit enables AI engineers to efficiently integrate Agents with Cassandra data, offering the following features: \n",
" - Fast data access through optimized queries. Most queries should run in single-digit ms or less. \n",
" - Schema introspection to enhance LLM reasoning capabilities \n",
" - Compatibility with various Cassandra deployments, including Apache Cassandra®, DataStax Enterprise™, and DataStax Astra™ \n",
" - Currently, the toolkit is limited to SELECT queries and schema introspection operations. (Safety first)\n",
">`Apache Cassandra®` is a widely used database for storing transactional application data. The introduction of functions and >tooling in Large Language Models has opened up some exciting use cases for existing data in Generative AI applications. \n",
"\n",
">The `Cassandra Database` toolkit enables AI engineers to integrate agents with Cassandra data efficiently, offering \n",
">the following features: \n",
"> - Fast data access through optimized queries. Most queries should run in single-digit ms or less.\n",
"> - Schema introspection to enhance LLM reasoning capabilities\n",
"> - Compatibility with various Cassandra deployments, including Apache Cassandra®, DataStax Enterprise™, and DataStax Astra™\n",
"> - Currently, the toolkit is limited to SELECT queries and schema introspection operations. (Safety first)\n",
"\n",
"For more information on creating a Cassandra DB agent see the [CQL agent cookbook](https://github.com/langchain-ai/langchain/blob/master/cookbook/cql_agent.ipynb)\n",
"\n",
"## Quick Start\n",
" - Install the cassio library\n",
" - Install the `cassio` library\n",
" - Set environment variables for the Cassandra database you are connecting to\n",
" - Initialize CassandraDatabase\n",
" - Pass the tools to your agent with toolkit.get_tools()\n",
" - Initialize `CassandraDatabase`\n",
" - Pass the tools to your agent with `toolkit.get_tools()`\n",
" - Sit back and watch it do all your work for you\n",
"\n",
"## Theory of Operation\n",
"Cassandra Query Language (CQL) is the primary *human-centric* way of interacting with a Cassandra database. While offering some flexibility when generating queries, it requires knowledge of Cassandra data modeling best practices. LLM function calling gives an agent the ability to reason and then choose a tool to satisfy the request. Agents using LLMs should reason using Cassandra-specific logic when choosing the appropriate toolkit or chain of toolkits. This reduces the randomness introduced when LLMs are forced to provide a top-down solution. Do you want an LLM to have complete unfettered access to your database? Yeah. Probably not. To accomplish this, we provide a prompt for use when constructing questions for the agent: \n",
"\n",
"```json\n",
"`Cassandra Query Language (CQL)` is the primary *human-centric* way of interacting with a Cassandra database. While offering some flexibility when generating queries, it requires knowledge of Cassandra data modeling best practices. LLM function calling gives an agent the ability to reason and then choose a tool to satisfy the request. Agents using LLMs should reason using Cassandra-specific logic when choosing the appropriate toolkit or chain of toolkits. This reduces the randomness introduced when LLMs are forced to provide a top-down solution. Do you want an LLM to have complete unfettered access to your database? Yeah. Probably not. To accomplish this, we provide a prompt for use when constructing questions for the agent: \n",
"\n",
"You are an Apache Cassandra expert query analysis bot with the following features \n",
"and rules:\n",
" - You will take a question from the end user about finding specific \n",
@@ -38,6 +43,7 @@
"\n",
"The following is an example of a query path in JSON format:\n",
"\n",
"```json\n",
" {\n",
" \"query_paths\": [\n",
" {\n",
@@ -448,13 +454,6 @@
"\n",
"print(response[\"output\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a deepdive on creating a Cassandra DB agent see the [CQL agent cookbook](https://github.com/langchain-ai/langchain/blob/master/cookbook/cql_agent.ipynb)"
]
}
],
"metadata": {
@@ -473,7 +472,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
}
},
"nbformat": 4,

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large


@@ -4,34 +4,31 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Gmail\n",
"\n",
"This notebook walks through connecting a LangChain email to the `Gmail API`.\n",
"\n",
"To use this toolkit, you will need to set up your credentials explained in the [Gmail API docs](https://developers.google.com/gmail/api/quickstart/python#authorize_credentials_for_a_desktop_application). Once you've downloaded the `credentials.json` file, you can start using the Gmail API. Once this is done, we'll install the required libraries."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-api-python-client > /dev/null\n",
"%pip install --upgrade --quiet google-auth-oauthlib > /dev/null\n",
"%pip install --upgrade --quiet google-auth-httplib2 > /dev/null\n",
"%pip install --upgrade --quiet beautifulsoup4 > /dev/null # This is optional but is useful for parsing HTML messages"
"---\n",
"sidebar_label: GMail\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You also need to install the `langchain-community` package where the integration lives:\n",
"# GmailToolkit\n",
"\n",
"```bash\n",
"pip install -U langchain-community\n",
"```"
"This will help you getting started with the GMail [toolkit](/docs/concepts/#toolkits). This toolkit interacts with the GMail API to read messages, draft and send messages, and more. For detailed documentation of all GmailToolkit features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.toolkit.GmailToolkit.html).\n",
"\n",
"## Setup\n",
"\n",
"To use this toolkit, you will need to set up your credentials explained in the [Gmail API docs](https://developers.google.com/gmail/api/quickstart/python#authorize_credentials_for_a_desktop_application). Once you've downloaded the `credentials.json` file, you can start using the Gmail API."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"This toolkit lives in the `langchain-google-community` package. We'll need the `gmail` extra:"
]
},
{
@@ -40,14 +37,14 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community"
"%pip install -qU langchain-google-community\\[gmail\\]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability"
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
@@ -57,14 +54,14 @@
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Toolkit\n",
"## Instantiation\n",
"\n",
"By default the toolkit reads the local `credentials.json` file. You can also manually provide a `Credentials` object."
]
@@ -72,12 +69,10 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.agent_toolkits import GmailToolkit\n",
"from langchain_google_community import GmailToolkit\n",
"\n",
"toolkit = GmailToolkit()"
]
@@ -100,7 +95,7 @@
},
"outputs": [],
"source": [
"from langchain_community.tools.gmail.utils import (\n",
"from langchain_google_community.gmail.utils import (\n",
" build_resource_service,\n",
" get_gmail_credentials,\n",
")\n",
@@ -116,9 +111,18 @@
"toolkit = GmailToolkit(api_resource=api_resource)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"View available tools:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 2,
"metadata": {
"tags": []
},
@@ -126,14 +130,14 @@
{
"data": {
"text/plain": [
"[GmailCreateDraft(name='create_gmail_draft', description='Use this tool to create a draft email with the provided message fields.', args_schema=<class 'langchain_community.tools.gmail.create_draft.CreateDraftSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, api_resource=<googleapiclient.discovery.Resource object at 0x10e5c6d10>),\n",
" GmailSendMessage(name='send_gmail_message', description='Use this tool to send email messages. The input is the message, recipents', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, api_resource=<googleapiclient.discovery.Resource object at 0x10e5c6d10>),\n",
" GmailSearch(name='search_gmail', description=('Use this tool to search for email messages or threads. The input must be a valid Gmail query. The output is a JSON list of the requested resource.',), args_schema=<class 'langchain_community.tools.gmail.search.SearchArgsSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, api_resource=<googleapiclient.discovery.Resource object at 0x10e5c6d10>),\n",
" GmailGetMessage(name='get_gmail_message', description='Use this tool to fetch an email by message ID. Returns the thread ID, snipet, body, subject, and sender.', args_schema=<class 'langchain_community.tools.gmail.get_message.SearchArgsSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, api_resource=<googleapiclient.discovery.Resource object at 0x10e5c6d10>),\n",
" GmailGetThread(name='get_gmail_thread', description=('Use this tool to search for email messages. The input must be a valid Gmail query. The output is a JSON list of messages.',), args_schema=<class 'langchain_community.tools.gmail.get_thread.GetThreadSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, api_resource=<googleapiclient.discovery.Resource object at 0x10e5c6d10>)]"
"[GmailCreateDraft(api_resource=<googleapiclient.discovery.Resource object at 0x1094509d0>),\n",
" GmailSendMessage(api_resource=<googleapiclient.discovery.Resource object at 0x1094509d0>),\n",
" GmailSearch(api_resource=<googleapiclient.discovery.Resource object at 0x1094509d0>),\n",
" GmailGetMessage(api_resource=<googleapiclient.discovery.Resource object at 0x1094509d0>),\n",
" GmailGetThread(api_resource=<googleapiclient.discovery.Resource object at 0x1094509d0>)]"
]
},
"execution_count": 5,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
@@ -147,144 +151,109 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"- [GmailCreateDraft](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.create_draft.GmailCreateDraft.html)\n",
"- [GmailSendMessage](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.send_message.GmailSendMessage.html)\n",
"- [GmailSearch](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.search.GmailSearch.html)\n",
"- [GmailGetMessage](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.get_message.GmailGetMessage.html)\n",
"- [GmailGetThread](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.get_thread.GmailGetThread.html)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use within an agent\n",
"\n",
"We show here how to use it as part of an [agent](/docs/tutorials/agents). We use the OpenAI Functions Agent, so we will need to setup and install the required dependencies for that. We will also use [LangSmith Hub](https://smith.langchain.com/hub) to pull the prompt from, so we will need to install that.\n",
"Below we show how to incorporate the toolkit into an [agent](/docs/tutorials/agents).\n",
"\n",
"```bash\n",
"pip install -U langchain-openai langchainhub\n",
"We will need a LLM or chat model:\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" />\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"# | output: false\n",
"# | echo: false\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()"
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain import hub\n",
"from langchain.agents import AgentExecutor, create_openai_functions_agent\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"instructions = \"\"\"You are an assistant.\"\"\"\n",
"base_prompt = hub.pull(\"langchain-ai/openai-functions-template\")\n",
"prompt = base_prompt.partial(instructions=instructions)"
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent_executor = create_react_agent(llm, tools)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"agent = create_openai_functions_agent(llm, toolkit.get_tools(), prompt)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor(\n",
" agent=agent,\n",
" tools=toolkit.get_tools(),\n",
" # This is set to False to prevent information about my email showing up on the screen\n",
" # Normally, it is helpful to have it set to True however.\n",
" verbose=False,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'input': 'Create a gmail draft for me to edit of a letter from the perspective of a sentient parrot who is looking to collaborate on some research with her estranged friend, a cat. Under no circumstances may you send the message, however.',\n",
" 'output': 'I have created a draft email for you to edit. Please find the draft in your Gmail drafts folder. Remember, under no circumstances should you send the message.'}"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Draft an email to fake@fake.com thanking them for coffee.\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" create_gmail_draft (call_slGkYKZKA6h3Mf1CraUBzs6M)\n",
" Call ID: call_slGkYKZKA6h3Mf1CraUBzs6M\n",
" Args:\n",
" message: Dear Fake,\n",
"\n",
"I wanted to take a moment to thank you for the coffee yesterday. It was a pleasure catching up with you. Let's do it again soon!\n",
"\n",
"Best regards,\n",
"[Your Name]\n",
" to: ['fake@fake.com']\n",
" subject: Thank You for the Coffee\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: create_gmail_draft\n",
"\n",
"Draft created. Draft Id: r-7233782721440261513\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"I have drafted an email to fake@fake.com thanking them for the coffee. You can review and send it from your email draft with the subject \"Thank You for the Coffee\".\n"
]
}
],
"source": [
"agent_executor.invoke(\n",
" {\n",
" \"input\": \"Create a gmail draft for me to edit of a letter from the perspective of a sentient parrot\"\n",
" \" who is looking to collaborate on some research with her\"\n",
" \" estranged friend, a cat. Under no circumstances may you send the message, however.\"\n",
" }\n",
")"
"example_query = \"Draft an email to fake@fake.com thanking them for coffee.\"\n",
"\n",
"events = agent_executor.stream(\n",
" {\"messages\": [(\"user\", example_query)]},\n",
" stream_mode=\"values\",\n",
")\n",
"for event in events:\n",
" event[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'input': 'Could you search in my drafts for the latest email? what is the title?',\n",
" 'output': 'The latest email in your drafts is titled \"Collaborative Research Proposal\".'}"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.invoke(\n",
" {\"input\": \"Could you search in my drafts for the latest email? what is the title?\"}\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": []
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `GmailToolkit` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.slack.toolkit.SlackToolkit.html)."
]
}
],
"metadata": {
@@ -303,7 +272,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,


@@ -0,0 +1,21 @@
---
sidebar_position: 0
sidebar_class_name: hidden
---
# Toolkits
**Toolkits** are collections of tools that are designed to be used together for specific tasks. They include conveniences for loading tools
that share common authentication, services, or other objects. They can be implemented by subclassing the
[BaseToolkit](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseToolkit.html#langchain_core.tools.BaseToolkit) class.
This table lists common toolkits.
| Toolkit | Package |
|------|---------------|
| [GitHubToolkit](/docs/integrations/toolkits/github) | [langchain_community.agent_toolkits.github](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.github.toolkit.GitHubToolkit.html) |
| [GmailToolkit](/docs/integrations/toolkits/gmail) | [langchain_google_community.gmail.toolkit](https://api.python.langchain.com/en/latest/gmail/langchain_google_community.gmail.toolkit.GmailToolkit.html) |
| [RequestsToolkit](/docs/integrations/toolkits/requests) | [langchain_community.agent_toolkits.openapi](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.openapi.toolkit.RequestsToolkit.html) |
| [SlackToolkit](/docs/integrations/toolkits/slack) | [langchain_community.agent_toolkits.slack](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.slack.toolkit.SlackToolkit.html) |
| [SQLDatabaseToolkit](/docs/integrations/toolkits/sql_database) | [langchain_community.agent_toolkits.sql](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.sql.toolkit.SQLDatabaseToolkit.html) |
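The usage pattern is the same for every toolkit in the table above: instantiate the toolkit, call `get_tools()`, and hand the tools to an agent. Below is a minimal sketch using `SlackToolkit` as a stand-in, assuming Slack credentials are already configured in the environment; the model name is illustrative.

```python
from langchain_community.agent_toolkits import SlackToolkit
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

toolkit = SlackToolkit()  # reads Slack credentials from the environment
tools = toolkit.get_tools()

llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
agent_executor = create_react_agent(llm, tools)

events = agent_executor.stream(
    {"messages": [("user", "Summarize the latest messages in #general.")]},
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
```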


@@ -0,0 +1,361 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "050c5580-2c85-4763-8783-59dbd20395a5",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Requests\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "cfe4185a-34dc-4cdc-b831-001954f2d6e8",
"metadata": {},
"source": [
"# Requests Toolkit\n",
"\n",
"We can use the Requests [toolkit](/docs/concepts/#toolkits) to construct agents that generate HTTP requests.\n",
"\n",
"For detailed documentation of all API toolkit features and configurations head to the API reference for [RequestsToolkit](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.openapi.toolkit.RequestsToolkit.html).\n",
"\n",
"## ⚠️ Security note ⚠️\n",
"There are inherent risks in giving models discretion to execute real-world actions. Take precautions to mitigate these risks:\n",
"\n",
"- Make sure that permissions associated with the tools are narrowly-scoped (e.g., for database operations or API requests);\n",
"- When desired, make use of human-in-the-loop workflows."
]
},
{
"cell_type": "markdown",
"id": "d968e982-f370-4614-8469-c1bc71ee3e32",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"### Installation\n",
"\n",
"This toolkit lives in the `langchain-community` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f74f05fb-3f24-4c0b-a17f-cf4edeedbb9a",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community"
]
},
{
"cell_type": "markdown",
"id": "36a178eb-1f2c-411e-bf25-0240ead4c62a",
"metadata": {},
"source": [
"Note that if you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8e68d0cd-6233-481c-b048-e8d95cba4c35",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "a7e2f64a-a72e-4fef-be52-eaf7c5072d24",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"First we will demonstrate a minimal example.\n",
"\n",
"**NOTE**: There are inherent risks in giving models discretion to execute real-world actions. We must \"opt-in\" to these risks by setting `allow_dangerous_request=True` to use these tools.\n",
"**This can be dangerous for calling unwanted requests**. Please make sure your custom OpenAPI spec (yaml) is safe and that permissions associated with the tools are narrowly-scoped."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "018bd070-9fc8-459b-8d28-b4a3e283e640",
"metadata": {},
"outputs": [],
"source": [
"ALLOW_DANGEROUS_REQUEST = True"
]
},
{
"cell_type": "markdown",
"id": "a024f7b3-5437-4878-bd16-c4783bff394c",
"metadata": {},
"source": [
"We can use the [JSONPlaceholder](https://jsonplaceholder.typicode.com) API as a testing ground.\n",
"\n",
"Let's create (a subset of) its API spec:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "2dcbcf92-2ad5-49c3-94ac-91047ccc8c5b",
"metadata": {},
"outputs": [],
"source": [
"from typing import Any, Dict, Union\n",
"\n",
"import requests\n",
"import yaml\n",
"\n",
"\n",
"def _get_schema(response_json: Union[dict, list]) -> dict:\n",
" if isinstance(response_json, list):\n",
" response_json = response_json[0] if response_json else {}\n",
" return {key: type(value).__name__ for key, value in response_json.items()}\n",
"\n",
"\n",
"def _get_api_spec() -> str:\n",
" base_url = \"https://jsonplaceholder.typicode.com\"\n",
" endpoints = [\n",
" \"/posts\",\n",
" \"/comments\",\n",
" ]\n",
" common_query_parameters = [\n",
" {\n",
" \"name\": \"_limit\",\n",
" \"in\": \"query\",\n",
" \"required\": False,\n",
" \"schema\": {\"type\": \"integer\", \"example\": 2},\n",
" \"description\": \"Limit the number of results\",\n",
" }\n",
" ]\n",
" openapi_spec: Dict[str, Any] = {\n",
" \"openapi\": \"3.0.0\",\n",
" \"info\": {\"title\": \"JSONPlaceholder API\", \"version\": \"1.0.0\"},\n",
" \"servers\": [{\"url\": base_url}],\n",
" \"paths\": {},\n",
" }\n",
" # Iterate over the endpoints to construct the paths\n",
" for endpoint in endpoints:\n",
" response = requests.get(base_url + endpoint)\n",
" if response.status_code == 200:\n",
" schema = _get_schema(response.json())\n",
" openapi_spec[\"paths\"][endpoint] = {\n",
" \"get\": {\n",
" \"summary\": f\"Get {endpoint[1:]}\",\n",
" \"parameters\": common_query_parameters,\n",
" \"responses\": {\n",
" \"200\": {\n",
" \"description\": \"Successful response\",\n",
" \"content\": {\n",
" \"application/json\": {\n",
" \"schema\": {\"type\": \"object\", \"properties\": schema}\n",
" }\n",
" },\n",
" }\n",
" },\n",
" }\n",
" }\n",
" return yaml.dump(openapi_spec, sort_keys=False)\n",
"\n",
"\n",
"api_spec = _get_api_spec()"
]
},
{
"cell_type": "markdown",
"id": "db3d6148-ae65-4a1d-91a6-59ee3e4e6efa",
"metadata": {},
"source": [
"Next we can instantiate the toolkit. We require no authorization or other headers for this API:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "63a630b3-45bb-4525-865b-083f322b944b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.agent_toolkits.openapi.toolkit import RequestsToolkit\n",
"from langchain_community.utilities.requests import TextRequestsWrapper\n",
"\n",
"toolkit = RequestsToolkit(\n",
" requests_wrapper=TextRequestsWrapper(headers={}),\n",
" allow_dangerous_requests=ALLOW_DANGEROUS_REQUEST,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "f4224a64-843a-479d-8a7b-84719e4b9d0c",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"View available tools:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "70ea0f4e-9f10-4906-894b-08df832fd515",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[RequestsGetTool(requests_wrapper=TextRequestsWrapper(headers={}, aiosession=None, auth=None, response_content_type='text', verify=True), allow_dangerous_requests=True),\n",
" RequestsPostTool(requests_wrapper=TextRequestsWrapper(headers={}, aiosession=None, auth=None, response_content_type='text', verify=True), allow_dangerous_requests=True),\n",
" RequestsPatchTool(requests_wrapper=TextRequestsWrapper(headers={}, aiosession=None, auth=None, response_content_type='text', verify=True), allow_dangerous_requests=True),\n",
" RequestsPutTool(requests_wrapper=TextRequestsWrapper(headers={}, aiosession=None, auth=None, response_content_type='text', verify=True), allow_dangerous_requests=True),\n",
" RequestsDeleteTool(requests_wrapper=TextRequestsWrapper(headers={}, aiosession=None, auth=None, response_content_type='text', verify=True), allow_dangerous_requests=True)]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tools = toolkit.get_tools()\n",
"\n",
"tools"
]
},
{
"cell_type": "markdown",
"id": "a21a6ca4-d650-4b7d-a944-1a8771b5293a",
"metadata": {},
"source": [
"- [RequestsGetTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.requests.tool.RequestsGetTool.html)\n",
"- [RequestsPostTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.requests.tool.RequestsPostTool.html)\n",
"- [RequestsPatchTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.requests.tool.RequestsPatchTool.html)\n",
"- [RequestsPutTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.requests.tool.RequestsPutTool.html)\n",
"- [RequestsDeleteTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.requests.tool.RequestsDeleteTool.html)"
]
},
{
"cell_type": "markdown",
"id": "e2dbb304-abf2-472a-9130-f03150a40549",
"metadata": {},
"source": [
"## Use within an agent"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "db062da7-f22c-4f36-9df8-1da96c9f7538",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"system_message = \"\"\"\n",
"You have access to an API to help answer user queries.\n",
"Here is documentation on the API:\n",
"{api_spec}\n",
"\"\"\".format(api_spec=api_spec)\n",
"\n",
"agent_executor = create_react_agent(llm, tools, state_modifier=system_message)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "c1e47be9-374a-457c-928a-48f02b5530e3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Fetch the top two posts. What are their titles?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" requests_get (call_RV2SOyzCnV5h2sm4WPgG8fND)\n",
" Call ID: call_RV2SOyzCnV5h2sm4WPgG8fND\n",
" Args:\n",
" url: https://jsonplaceholder.typicode.com/posts?_limit=2\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: requests_get\n",
"\n",
"[\n",
" {\n",
" \"userId\": 1,\n",
" \"id\": 1,\n",
" \"title\": \"sunt aut facere repellat provident occaecati excepturi optio reprehenderit\",\n",
" \"body\": \"quia et suscipit\\nsuscipit recusandae consequuntur expedita et cum\\nreprehenderit molestiae ut ut quas totam\\nnostrum rerum est autem sunt rem eveniet architecto\"\n",
" },\n",
" {\n",
" \"userId\": 1,\n",
" \"id\": 2,\n",
" \"title\": \"qui est esse\",\n",
" \"body\": \"est rerum tempore vitae\\nsequi sint nihil reprehenderit dolor beatae ea dolores neque\\nfugiat blanditiis voluptate porro vel nihil molestiae ut reiciendis\\nqui aperiam non debitis possimus qui neque nisi nulla\"\n",
" }\n",
"]\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"The titles of the top two posts are:\n",
"1. \"sunt aut facere repellat provident occaecati excepturi optio reprehenderit\"\n",
"2. \"qui est esse\"\n"
]
}
],
"source": [
"example_query = \"Fetch the top two posts. What are their titles?\"\n",
"\n",
"events = agent_executor.stream(\n",
" {\"messages\": [(\"user\", example_query)]},\n",
" stream_mode=\"values\",\n",
")\n",
"for event in events:\n",
" event[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"id": "01ec4886-de3d-4fda-bd05-e3f254810969",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all API toolkit features and configurations head to the API reference for [RequestsToolkit](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.openapi.toolkit.RequestsToolkit.html)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -44,7 +44,7 @@
"\n",
"Let's add a dummy function to `action.py`.\n",
"\n",
"```\n",
"```python\n",
"@action\n",
"def get_weather_forecast(city: str, days: int, scale: str = \"celsius\") -> str:\n",
" \"\"\"\n",
@@ -63,7 +63,7 @@
"\n",
"We then start the server:\n",
"\n",
"```\n",
"```bash\n",
"action-server start\n",
"```\n",
"\n",
@@ -193,7 +193,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.10.12"
}
},
"nbformat": 4,

Some files were not shown because too many files have changed in this diff