Compare commits


294 Commits

Author SHA1 Message Date
Eugene Yurtsev
bdc7383917 x 2024-06-24 17:08:57 -04:00
Eugene Yurtsev
8c1f310f11 Merge branch 'master' into renderer 2024-06-24 17:05:54 -04:00
Eugene Yurtsev
8c0217188a qxqx 2024-06-24 17:05:41 -04:00
yuncliu
398b2b9c51 community[minor]: Add Ascend NPU optimized Embeddings (#20260)
- **Description:** Add NPU support for embeddings

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-24 20:15:11 +00:00
Ikko Eltociear Ashimine
7b1066341b docs: update sql_query_checking.ipynb (#23345)
creat -> create
2024-06-24 16:03:32 -04:00
S M Zia Ur Rashid
d5b2a93c6d package: security update urllib3 to @1.26.19 (#23366)
urllib3 version update 1.26.18 to 1.26.19 to address a security
vulnerability.

**Reference:**
https://security.snyk.io/vuln/SNYK-PYTHON-URLLIB3-7267250
2024-06-24 19:44:39 +00:00
Jacob Lee
57c13b4ef8 docs[patch]: Fix typo in how to guide for message history (#23364) 2024-06-24 15:43:05 -04:00
Luis Rueda
168e9ed3a5 partners: add custom options to MongoDBChatMessageHistory (#22944)
**Description:** Adds options for configuring MongoDBChatMessageHistory
(no breaking changes):
- session_id_key: name of the field that stores the session id
- history_key: name of the field that stores the chat history
- create_index: whether to create an index on the session id field
- index_kwargs: additional keyword arguments to pass to the index
creation
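
A minimal usage sketch of these options; the connection details and field
names are illustrative placeholders, and the pre-existing constructor
arguments are assumed from the integration:

```python
from langchain_mongodb import MongoDBChatMessageHistory

history = MongoDBChatMessageHistory(
    connection_string="mongodb://localhost:27017",  # placeholder URI
    session_id="user-123",
    database_name="chat_db",            # pre-existing argument (assumed)
    collection_name="message_store",    # pre-existing argument (assumed)
    session_id_key="SessionId",         # new: field that stores the session id
    history_key="History",              # new: field that stores the chat history
    create_index=True,                  # new: create an index on the session id field
    index_kwargs={"background": True},  # new: extra kwargs passed to index creation
)
history.add_user_message("hello")
```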

**Discussion:**
https://github.com/langchain-ai/langchain/discussions/22918
**Twitter handle:** @userlerueda

---------

Co-authored-by: Jib <Jibzade@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-06-24 19:42:56 +00:00
Eugene Yurtsev
1e750f12f6 standard-tests[minor]: Add standard read write test suite for vectorstores (#23355)
Add standard read write test suite for vectorstores
2024-06-24 19:40:56 +00:00
Eugene Yurtsev
3b3ed72d35 standard-tests[minor]: Add standard tests for BaseStore (#23360)
Add standard tests to base store abstraction. These only work on [str,
str] right now. We'll need to check if it's possible to add
encoder/decoders to generalize
2024-06-24 19:38:50 +00:00
ccurme
e1190c8f3c mongodb[patch]: fix CI for python 3.12 (#23369) 2024-06-24 19:31:20 +00:00
RUO
2b87e330b0 community: fix issue with nested field extraction in MongodbLoader (#22801)
**Description:** 
This PR addresses an issue in the `MongodbLoader` where nested fields
were not being correctly extracted. The loader now correctly handles
nested fields specified in the `field_names` parameter.

**Issue:** 
Fixes an issue where attempting to extract nested fields from MongoDB
documents resulted in `KeyError`.

**Dependencies:** 
No new dependencies are required for this change.

**Twitter handle:** 
(Optional, your Twitter handle if you'd like a mention when the PR is
announced)

### Changes
1. **Field Name Parsing**:
- Added logic to parse nested field names and safely extract their
values from the MongoDB documents.

2. **Projection Construction**:
- Updated the projection dictionary to include nested fields correctly.

3. **Field Extraction**:
- Updated the `aload` method to handle nested field extraction using a
recursive approach to traverse the nested dictionaries.

### Example Usage
Updated usage example to demonstrate how to specify nested fields in the
`field_names` parameter:

```python
loader = MongodbLoader(
    connection_string=MONGO_URI,
    db_name=MONGO_DB,
    collection_name=MONGO_COLLECTION,
    filter_criteria={"data.job.company.industry_name": "IT", "data.job.detail": { "$exists": True }},
    field_names=[
        "data.job.detail.id",
        "data.job.detail.position",
        "data.job.detail.intro",
        "data.job.detail.main_tasks",
        "data.job.detail.requirements",
        "data.job.detail.preferred_points",
        "data.job.detail.benefits",
    ],
)

docs = loader.load()
print(len(docs))
for doc in docs:
    print(doc.page_content)
```
### Testing
Tested with a MongoDB collection containing nested documents to ensure
that the nested fields are correctly extracted and concatenated into a
single page_content string.
### Note
This change ensures backward compatibility for non-nested fields and
improves functionality for nested field extraction.
### Output Sample
```python
print(docs[:3])
```
```shell
# output sample:
[
    Document(
        # Here in this example, page_content is the combined text from the fields below
        # "position", "intro", "main_tasks", "requirements", "preferred_points", "benefits"
        page_content='all combined contents from the requested fields in the document',
        metadata={'database': 'Your Database name', 'collection': 'Your Collection name'}
    ),
    ...
]
```

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-24 19:29:11 +00:00
Tomaz Bratanic
aeeda370aa Sanitize backticks from neo4j labels and types for import (#23367) 2024-06-24 19:05:31 +00:00
Jacob Lee
d2db561347 docs[patch]: Adds callout in LLM concept docs, remove deprecated code (#23361)
CC @baskaryan @hwchase17
2024-06-24 12:03:18 -07:00
Rave Harpaz
f5ff7f178b Add OCI Generative AI new model support (#22880)
- [x] PR title: 
community: Add OCI Generative AI new model support
 
- [x] PR message:
- Description: adding support for new models offered by OCI Generative
AI services. This is a moderate update of our initial integration PR
16548 and includes a new integration for our chat models under
/langchain_community/chat_models/oci_generative_ai.py
    - Issue: NA
- Dependencies: No new Dependencies, just latest version of our OCI sdk
    - Twitter handle: NA


- [x] Add tests and docs: 
  1. we have updated our unit tests
2. we have updated our documentation including a new ipynb for our new
chat integration


- [x] Lint and test: 
 `make format`, `make lint`, and `make test` run successfully

---------

Co-authored-by: RHARPAZ <RHARPAZ@RHARPAZ-5750.us.oracle.com>
Co-authored-by: Arthur Cheng <arthur.cheng@oracle.com>
2024-06-24 14:48:23 -04:00
Jacob Lee
753edf9c80 docs[patch]: Update chatbot tools how-to guide (#23362) 2024-06-24 11:46:06 -07:00
Baur
aa358f2be4 community: Add ZenGuard tool (#22959)
**Description**
This is the community integration of ZenGuard AI - the fastest
guardrails for GenAI applications. ZenGuard AI protects against:

- Prompt attacks
- Veering off the pre-defined topics
- PII, sensitive info, and keyword leakage
- Toxicity
- Etc.

**Twitter Handle** : @zenguardai

- [x] **Add tests and docs**: If you're adding a new integration, please
include
  1. Added an integration test
  2. Added colab


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified.

---------

Co-authored-by: Nuradil <nuradil.maksut@icloud.com>
Co-authored-by: Nuradil <133880216+yaksh0nti@users.noreply.github.com>
2024-06-24 17:40:56 +00:00
Mathis Joffre
60103fc4a5 community: Fix OVHcloud 401 Unauthorized on embedding. (#23260)
They now reject calls with code 401 from users with expired or
invalid tokens (whereas before such calls were treated as anonymous).
Thus, the authorization header has to be removed when there is no token.

Related to: #23178

---------

Signed-off-by: Joffref <mariusjoffre@gmail.com>
2024-06-24 12:58:32 -04:00
Baskar Gopinath
4964ba74db Update multimodal_prompts.ipynb (#23301)
fixes #23294

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-24 15:58:51 +00:00
Eugene Yurtsev
d90379210a standard-tests[minor]: Add standard tests for cache (#23357)
Add standard tests for cache abstraction
2024-06-24 15:15:03 +00:00
Leonid Ganeline
987099cfcd community: toolkits docstrings (#23286)
Added missed docstrings. Formatted docstrings to the consistent form.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-22 14:37:52 +00:00
Rahul Triptahi
0cd3f93361 Enhance metadata of sharepointLoader. (#22248)
Description: 2 feature flags added to SharePointLoader in this PR:

1. load_auth: if set to True, adds authorised identities to metadata
2. load_extended_metadata: adds source, owner and full_path to metadata
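
A sketch of how these flags might be used; the other constructor arguments
shown are illustrative values, not taken from this PR:

```python
from langchain_community.document_loaders.sharepoint import SharePointLoader

loader = SharePointLoader(
    document_library_id="YOUR_LIBRARY_ID",  # illustrative value
    folder_path="/Shared Documents",        # illustrative value
    load_auth=True,                # add authorised identities to metadata
    load_extended_metadata=True,   # add source, owner and full_path to metadata
)
docs = loader.load()
print(docs[0].metadata)
```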

Unit tests:N/A
Documentation: To be done.

---------

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-06-21 17:03:38 -07:00
Yuki Watanabe
5d4133d82f community: Overhaul Databricks provider documentation (#23203)
**Description**: Update [Databricks
Provider](https://python.langchain.com/v0.2/docs/integrations/providers/databricks/)
documentations to the latest component notebooks and draw better
navigation path to related notebooks.

---------

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
2024-06-21 16:57:35 -07:00
Bagatur
bcac6c3aff openai[patch]: temp fix ignore lint (#23290) 2024-06-21 16:52:52 -07:00
William FH
efb4c12abe [Core] Add support for inferring Annotated types (#23284)
in bind_tools() / convert_to_openai_function
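
An illustrative sketch of what this enables; the `multiply` function below is
a made-up example, not from the PR:

```python
from typing import Annotated

from langchain_core.utils.function_calling import convert_to_openai_function


def multiply(
    a: Annotated[int, "the first factor"],
    b: Annotated[int, "the second factor"],
) -> int:
    """Multiply two integers."""
    return a * b


# the Annotated metadata can now be inferred as argument descriptions
print(convert_to_openai_function(multiply))
```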
2024-06-21 15:16:30 -07:00
Vadym Barda
9ac302cb97 core[minor]: update draw_mermaid node label processing (#23285)
This fixes a processing issue for nodes with numbers in their labels (e.g.
`"node_1"`, which would previously be relabeled as `"node__"` and is now
correctly processed as `"node_1"`).
2024-06-21 21:35:32 +00:00
Rajendra Kadam
7ee2822ec2 community: Fix TypeError in PebbloRetrievalQA (#23170)
**Description:** 
Fix "`TypeError: 'NoneType' object is not iterable`" when the
auth_context is absent in PebbloRetrievalQA. The auth_context is
optional; hence, PebbloRetrievalQA should work without it, but it throws
an error at the moment. This PR fixes that issue.

**Issue:** NA
**Dependencies:** None
**Unit tests:** NA

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-21 17:04:00 -04:00
Iurii Umnov
3b7b933aa2 community[minor]: OpenAPI agent. Add support for PUT, DELETE and PATCH (#22962)
**Description**: Add PUT, DELETE and PATCH tools to tool list for
OpenAPI agent if dangerous requests are allowed.

**Issue**: https://github.com/langchain-ai/langchain/issues/20469
2024-06-21 20:44:23 +00:00
Guangdong Liu
3c42bf8d97 community(patch): Fix PineconeHybridSearchRetriever not having search_kwargs (#21577)
- close #21521
2024-06-21 16:27:52 -04:00
Rahul Triptahi
4bb3d5c488 [community][quick-fix]: changed from blob.path to blob.path.name in O365BaseLoader. (#22287)
Description: file_metadata_ was not getting propagated to returned
documents. Changed the lookup key to the name of the blob's path.
Changed blob.path key to blob.path.name for metadata_dict key lookup.
Documentation: N/A
Unit tests: N/A

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-21 15:51:03 -04:00
Bagatur
f824f6d925 docs: fix merge message runs docstring (#23279) 2024-06-21 19:50:50 +00:00
wenngong
f9aea3db07 partners: add lint docstrings for chroma module (#23249)
Description: add lint docstrings for chroma module
Issue: the issue #23188 @baskaryan

test:  ruff check passed.


![image](https://github.com/langchain-ai/langchain/assets/76683249/5e168a0c-32d0-464f-8ddb-110233918019)

---------

Co-authored-by: gongwn1 <gongwn1@lenovo.com>
2024-06-21 19:49:24 +00:00
Bagatur
9eda8f2fe8 docs: fix trim_messages code blocks (#23271) 2024-06-21 17:15:31 +00:00
Jacob Lee
86326269a1 docs[patch]: Adds prereqs to trim messages (#23270)
CC @baskaryan
2024-06-21 10:09:41 -07:00
Bagatur
4c97a9ee53 docs: fix message transformer docstrings (#23264) 2024-06-21 16:10:03 +00:00
Vwake04
0deb98ac0c pinecone: Fix multiprocessing issue in PineconeVectorStore (#22571)
**Description:**

Currently, the `langchain_pinecone` library forces the `async_req`
(asynchronous required) argument to Pinecone to `True`. This design
choice causes problems when deploying to environments that do not
support multiprocessing, such as AWS Lambda. In such environments, this
restriction can prevent users from successfully using
`langchain_pinecone`.

This PR introduces a change that allows users to specify whether they
want to use asynchronous requests by passing the `async_req` parameter
through `**kwargs`. By doing so, users can set `async_req=False` to
utilize synchronous processing, making the library compatible with AWS
Lambda and other environments that do not support multithreading.
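
A sketch of the intended usage; the index name and embedding model are
placeholders, and the `async_req` kwarg is forwarded as described above:

```python
from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore

vectorstore = PineconeVectorStore.from_texts(
    ["hello world"],
    embedding=OpenAIEmbeddings(),
    index_name="my-index",   # placeholder index
    async_req=False,         # synchronous upserts, e.g. for AWS Lambda
)
```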

**Issue:**
This PR does not address a specific issue number but aims to resolve
compatibility issues with AWS Lambda by allowing synchronous processing.

**Dependencies:**
None, that I'm aware of.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-21 15:46:01 +00:00
ccurme
75c7c3a1a7 openai: release 0.1.9 (#23263) 2024-06-21 11:15:29 -04:00
Brace Sproul
abe7566d7d core[minor]: BaseChatModel with_structured_output implementation (#22859) 2024-06-21 08:14:03 -07:00
mackong
360a70c8a8 core[patch]: fix no current event loop for sql history in async mode (#22933)
- **Description:** When use
RunnableWithMessageHistory/SQLChatMessageHistory in async mode, we'll
get the following error:
```
Error in RootListenersTracer.on_chain_end callback: RuntimeError("There is no current event loop in thread 'asyncio_3'.")
```
which is thrown by
ddfbca38df/libs/community/langchain_community/chat_message_histories/sql.py (L259),
and no message history will be added to the database.

In this patch, a new _aexit_history function, which will be called in
async mode, is added, and in turn aadd_messages will be called.

We also use the `afunc` attribute of a Runnable to check whether the
end listener should be run in async mode or not.

  - **Issue:** #22021, #22022 
  - **Dependencies:** N/A
2024-06-21 10:39:47 -04:00
Philippe PRADOS
1c2b9cc9ab core[minor]: Update pgvector translator for langchain_postgres (#23217)
The SelfQuery PGVectorTranslator is not correct. The operator is "eq"
and not "$eq".
This patch uses a new version of PGVectorTranslator from
langchain_postgres.

It's necessary to release a new version of langchain_postgres (see
[here](https://github.com/langchain-ai/langchain-postgres/pull/75))
before accepting this PR in langchain.
2024-06-21 10:37:09 -04:00
Mu Yang
401d469a92 langchain: fix syntax warning in create_json_chat_agent (#23253)
fix syntax warning in `create_json_chat_agent`

```
.../langchain/agents/json_chat/base.py:22: SyntaxWarning: invalid escape sequence '\ '
  """Create an agent that uses JSON to format its logic, build for Chat Models.
```
2024-06-21 10:05:38 -04:00
mackong
b108b4d010 core[patch]: set schema format for AsyncRootListenersTracer (#23214)
- **Description:** AsyncRootListenersTracer supports on_chat_model_start;
its schema_format should be "original+chat".
  - **Issue:** N/A
  - **Dependencies:**
2024-06-21 09:30:27 -04:00
Bagatur
976b456619 docs: BaseChatModel key methods table (#23238)
If we're moving away from documenting inherited params, I think these kinds of tables
become more important.

![Screenshot 2024-06-20 at 3 59 12
PM](https://github.com/langchain-ai/langchain/assets/22008038/722266eb-2353-4e85-8fae-76b19bd333e0)
2024-06-20 21:00:22 -07:00
Jacob Lee
5da7eb97cb docs[patch]: Update link (#23240)
CC @agola11
2024-06-20 17:43:12 -07:00
ccurme
a7b4175091 standard tests: add test for tool calling (#23234)
Including streaming
2024-06-20 17:20:11 -04:00
Bagatur
12e0c28a6e docs: fix chat model methods table (#23233)
rst table not md
![Screenshot 2024-06-20 at 12 37 46
PM](https://github.com/langchain-ai/langchain/assets/22008038/7a03b869-c1f4-45d0-8d27-3e16f4c6eb19)
2024-06-20 19:51:10 +00:00
Zheng Robert Jia
a349fce880 docs[minor],community[patch]: Minor tutorial docs improvement, minor import error quick fix. (#22725)
Minor changes to module import error handling and fixes for minor issues in
tutorial documents.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-06-20 15:36:49 -04:00
Eugene Yurtsev
7545b1d29b core[patch]: Fix doc-strings for code blocks (#23232)
Code blocks need extra space around them to be rendered properly by
sphinx
2024-06-20 19:34:52 +00:00
Luis Moros
d5be160af0 community[patch]: Fix sql_database.from_databricks issue when run from a Job (#23224)
**Description**: When ``sql_database.from_databricks`` is executed
from a Workflow Job, the ``context`` object does not have a
"browserHostName" property, resulting in an error. This change handles
the error so the "DATABRICKS_HOST" env variable value is used instead of
stopping the flow.

Co-authored-by: lmorosdb <lmorosdb>
2024-06-20 19:34:15 +00:00
Cory Waddingham
cd6812342e pinecone[patch]: Update Poetry requirements for pinecone-client >=3.2.2 (#22094)
This change updates the requirements in
`libs/partners/pinecone/pyproject.toml` to allow all versions of
`pinecone-client` greater than or equal to 3.2.2.

This change resolves issue
[21955](https://github.com/langchain-ai/langchain/issues/21955).

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-20 18:59:36 +00:00
ccurme
abb3066150 docs: clarify streaming with RunnableLambda (#23228) 2024-06-20 14:49:00 -04:00
ccurme
bf7763d9b0 docs: add serialization guide (#23223) 2024-06-20 12:50:24 -04:00
Eugene Yurtsev
59d7adff8f core[patch]: Add clarification about streaming to RunnableLambda (#23227)
Add streaming clarification to runnable lambda docstring.
2024-06-20 16:47:16 +00:00
Jacob Lee
60db79a38a docs[patch]: Update Anthropic chat model docs (#23226)
CC @baskaryan
2024-06-20 09:46:43 -07:00
maang-h
bc4cd9c5cc community[patch]: Update root_validators ChatModels: ChatBaichuan, QianfanChatEndpoint, MiniMaxChat, ChatSparkLLM, ChatZhipuAI (#22853)
This PR updates root validators for:

- ChatModels: ChatBaichuan, QianfanChatEndpoint, MiniMaxChat,
ChatSparkLLM, ChatZhipuAI

Issues #22819

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-20 16:36:41 +00:00
ChrisDEV
cb6cf4b631 Fix return value type of dumpd (#20123)
The return type of `json.loads` is `Any`.

In fact, the return type of `dumpd` must be based on `json.loads`, so
the correction here is understandable.

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-06-20 16:31:41 +00:00
Guangdong Liu
0bce28cd30 core(patch): Fix encoding problem of load_prompt method (#21559)
- description: Add encoding parameters.
- @baskaryan, @efriis, @eyurtsev, @hwchase17.


![54d25ac7b1d5c2e47741a56fe8ed8ba](https://github.com/langchain-ai/langchain/assets/48236177/ffea9596-2001-4e19-b245-f8a6e231b9f9)
2024-06-20 09:25:54 -07:00
Philippe PRADOS
8711c61298 core[minor]: Adds an in-memory implementation of RecordManager (#13200)
**Description:**
langchain offers three technologies to save data:
-
[vectorstore](https://python.langchain.com/docs/modules/data_connection/vectorstores/)
- [docstore](https://js.langchain.com/docs/api/schema/classes/Docstore)
- [record
manager](https://python.langchain.com/docs/modules/data_connection/indexing)

If you want to combine these technologies in a sample persistence
strategy, you need a common implementation for each. `DocStore` proposes
`InMemoryDocstore`.

We propose the class `MemoryRecordManager` to complete the system.

This is the prelude to another pull request, which needs a consistent
combination of persistence components.

**Tag maintainer:**
@baskaryan

**Twitter handle:**
@pprados

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-20 12:19:10 -04:00
Eugene Yurtsev
3ab49c0036 docs: API reference remove Prev/Up/Next buttons (#23225)
These do not work anyway. Let's remove them for now for simplicity.
2024-06-20 16:15:45 +00:00
Eugene Yurtsev
61daa16e5d docs: Update clean up API reference (#23221)
- Fix bug with TypedDicts rendering inherited methods if inheriting from
  typing_extensions.TypedDict rather than typing.TypedDict
- Do not surface inherited pydantic methods for subclasses of BaseModel
- Subclasses of RunnableSerializable will not show methods inherited from
  Runnable or from BaseModel
- Subclasses of Runnable that are not pydantic models will include a link to
RunnableInterface (they still show inherited methods; we can fix this
later)
2024-06-20 11:35:00 -04:00
Leonid Ganeline
51e75cf59d community: docstrings (#23202)
Added missed docstrings. Format docstrings to the consistent format
(used in the API Reference)
2024-06-20 11:08:13 -04:00
Julian Weng
6a1a0d977a partners[minor]: Fix value error message for with_structured_output (#22877)
Currently, calling `with_structured_output()` with an invalid method
argument raises `Unrecognized method argument. Expected one of
'function_calling' or 'json_format'`, but the JSON mode option [is now
referred
to](https://python.langchain.com/v0.2/docs/how_to/structured_output/#the-with_structured_output-method)
by `'json_mode'`. This fixes that.
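
For reference, a minimal sketch of the call this error message concerns; the
model choice and schema are illustrative:

```python
from langchain_core.pydantic_v1 import BaseModel
from langchain_openai import ChatOpenAI


class Joke(BaseModel):
    setup: str
    punchline: str


llm = ChatOpenAI(model="gpt-4o")
# the accepted option is "json_mode", not "json_format"
structured_llm = llm.with_structured_output(Joke, method="json_mode")
```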

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-06-20 15:03:21 +00:00
Qingchuan Hao
dd4d4411c9 doc: replace function call with tool call (#23184)
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-06-20 09:27:39 -04:00
Yahkeef Davis
b03c801523 Docs: Update Rag tutorial so it includes an additional notebook cell with pip installs of required langchain_chroma and langchain_community. (#23204)
Description: Update Rag tutorial notebook so it includes an additional
notebook cell with pip installs of required langchain_chroma and
langchain_community.

This fixes the issue where the RAG tutorial gives you a 'missing modules'
error if you run the notebook code as is.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-20 09:22:49 -04:00
Leonid Ganeline
41f7620989 huggingface: docstrings (#23148)
Added missed docstrings. Format docstrings to the consistent format
(used in the API Reference)

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-20 13:22:40 +00:00
ccurme
066a5a209f huggingface[patch]: fix CI for python 3.12 (#23197) 2024-06-20 09:17:26 -04:00
xyd
9b3a025f9c fix https://github.com/langchain-ai/langchain/issues/23215 (#23216)
Fix bug: the ZhipuAIEmbeddings class is not working.

Co-authored-by: xu yandong <shaonian@acsx1.onexmail.com>
2024-06-20 13:04:50 +00:00
Bagatur
ad7f2ec67d standard-tests[patch]: test stop not stop_sequences (#23200) 2024-06-19 18:07:33 -07:00
Bagatur
bd5c92a113 docs: standard params (#23199) 2024-06-19 17:57:05 -07:00
David DeCaprio
a4bcb45f65 core:Add optional max_messages to MessagePlaceholder (#16098)
- **Description:** Add optional max_messages to MessagePlaceholder
- **Issue:**
[16096](https://github.com/langchain-ai/langchain/issues/16096)
- **Dependencies:** None
- **Twitter handle:** @davedecaprio

Sometimes it's better to limit the history in the prompt itself rather
than the memory. This is needed if you want different prompts in the
chain to have different history lengths.
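
A sketch of the intended usage; the keyword name below follows the PR
description and is an assumption, not a confirmed API:

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        # max_messages is the option described above (name taken from the PR description)
        MessagesPlaceholder("history", max_messages=10),
        ("human", "{question}"),
    ]
)
```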

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-06-19 23:39:51 +00:00
shaunakgodbole
7193634ae6 fireworks[patch]: fix api_key alias in Fireworks LLM (#23118)
Thank you for contributing to LangChain!

**Description**
The current code snippet for `Fireworks` had incorrect parameters. This
PR fixes those parameters.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-19 21:14:42 +00:00
Eugene Yurtsev
1fcf875fe3 core[patch]: Document agent schema (#23194)
* Document agent schema
* Refer folks to langgraph for more information on how to create agents.
2024-06-19 20:16:57 +00:00
Bagatur
255ad39ae3 infra: run CI on large diffs (#23192)
Currently we skip CI on diffs >= 300 files. I think we should just run it
on all packages instead.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-19 19:30:56 +00:00
Eugene Yurtsev
c2d43544cc core[patch]: Document messages namespace (#23154)
- Moved doc-strings below attributes in TypedDicts -- seems to render
better on API Reference pages.
* Provided more description and some simple code examples
2024-06-19 15:00:00 -04:00
Eugene Yurtsev
3c917204dc core[patch]: Add doc-strings to outputs, fix @root_validator (#23190)
- Document outputs namespace
- Update a vanilla @root_validator that was missed
2024-06-19 14:59:06 -04:00
Bagatur
8698cb9b28 infra: add more formatter rules to openai (#23189)
Turns on
https://docs.astral.sh/ruff/settings/#format_docstring-code-format and
https://docs.astral.sh/ruff/settings/#format_skip-magic-trailing-comma

```toml
[tool.ruff.format]
docstring-code-format = true
skip-magic-trailing-comma = true
```
2024-06-19 11:39:58 -07:00
Michał Krassowski
710197e18c community[patch]: restore compatibility with SQLAlchemy 1.x (#22546)
- **Description:** Restores compatibility with SQLAlchemy 1.4.x that was
broken since #18992 and adds a test run for this version on CI (only for
Python 3.11)
- **Issue:** fixes #19681
- **Dependencies:** None
- **Twitter handle:** `@krassowski_m`

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-19 17:58:57 +00:00
Erick Friis
48d6ea427f upstage: move to external repo (#22506) 2024-06-19 17:56:07 +00:00
Bagatur
0a4ee864e9 openai[patch]: image token counting (#23147)
Resolves #23000

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-19 10:41:47 -07:00
Jorge Piedrahita Ortiz
b3e53ffca0 community[patch]: sambanova llm integration improvement (#23137)
- **Description:** sambanova sambaverse integration improvement: removed
input parsing that was changing raw user input and that effectively made it
mandatory to set the process prompt parameter to true
2024-06-19 10:30:14 -07:00
Jorge Piedrahita Ortiz
e162893d7f community[patch]: update sambastudio embeddings (#23133)
Description: update sambastudio embeddings integration, now compatible
with generic endpoints and CoE endpoints
2024-06-19 10:26:56 -07:00
Philippe PRADOS
db6f46c1a6 langchain[small]: Change type to BasePromptTemplate (#23083)
```python
Change from_llm(
 prompt: PromptTemplate 
 ...
 )
```
 to
```python
Change from_llm(
 prompt: BasePromptTemplate 
 ...
 )
```
2024-06-19 13:19:36 -04:00
Sergey Kozlov
94452a94b1 core[patch[: add exceptions propagation test for astream_events v2 (#23159)
**Description:** `astream_events(version="v2")` didn't propagate
exceptions in `langchain-core<=0.2.6`; this was fixed in #22916. This PR adds
a unit test to check that exceptions are propagated upwards.

Co-authored-by: Sergey Kozlov <sergey.kozlov@ludditelabs.io>
2024-06-19 13:00:25 -04:00
Leonid Ganeline
50484be330 prompty: docstring (#23152)
Added missed docstrings. Format docstrings to the consistent format
(used in the API Reference)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-19 12:50:58 -04:00
Qingchuan Hao
9b82707ea6 docs: add bing search tool to ms platform (#23183)
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-06-19 12:43:05 -04:00
chenxi
505a2e8743 fix: MoonshotChat fails when setting the moonshot_api_key through the OS environment. (#23176)
Close #23174

Co-authored-by: tianming <tianming@bytenew.com>
2024-06-19 16:28:24 +00:00
Bagatur
677408bfc9 core[patch]: fix chat history circular import (#23182) 2024-06-19 09:08:36 -07:00
Eugene Yurtsev
883e90d06e core[patch]: Add an example to the Document schema doc-string (#23131)
Add an example to the document schema
2024-06-19 11:35:30 -04:00
ccurme
2b08e9e265 core[patch]: update test to catch circular imports (#23172)
This raises ImportError due to a circular import:
```python
from langchain_core import chat_history
```

This does not:
```python
from langchain_core import runnables
from langchain_core import chat_history
```

Here we update `test_imports` to run each import in a separate
subprocess. Open to other ways of doing this!
2024-06-19 15:24:38 +00:00
Eugene Yurtsev
ae4c0ed25a core[patch]: Add documentation to load namespace (#23143)
Document some of the modules within the load namespace
2024-06-19 15:21:41 +00:00
Eugene Yurtsev
a34e650f8b core[patch]: Add doc-string to document compressor (#23085) 2024-06-19 11:03:49 -04:00
Eugene Yurtsev
1007a715a5 community[patch]: Prevent unit tests from making network requests (#23180)
* Prevent unit tests from making network requests
2024-06-19 14:56:30 +00:00
ccurme
ca798bc6ea community: move test to integration tests (#23178)
Tests failing on master with

> FAILED
tests/unit_tests/embeddings/test_ovhcloud.py::test_ovhcloud_embed_documents
- ValueError: Request failed with status code: 401, {"message":"Bad
token; invalid JSON"}
2024-06-19 14:39:48 +00:00
Eugene Yurtsev
4fe8403bfb core[patch]: Expand documentation in the indexing namespace (#23134) 2024-06-19 10:11:44 -04:00
Eugene Yurtsev
fe4f10047b core[patch]: Document embeddings namespace (#23132)
Document embeddings namespace
2024-06-19 10:11:16 -04:00
Eugene Yurtsev
a3bae56a48 core[patch]: Update documentation in LLM namespace (#23138)
Update documentation in lllm namespace.
2024-06-19 10:10:50 -04:00
Leonid Ganeline
a70b7a688e ai21: docstrings (#23142)
Added missed docstrings. Format docstrings to the consistent format
(used in the API Reference)
2024-06-19 08:51:15 -04:00
Jacob Lee
0c2ebe5f47 docs[patch]: Standardize prerequisites in tutorial docs (#23150)
CC @baskaryan
2024-06-18 23:10:13 -07:00
bilk0h
3d54784e6d text-splitters: Fix/recursive json splitter data persistence issue (#21529)
Thank you for contributing to LangChain!

**Description:** Noticed an issue when I was calling
`RecursiveJsonSplitter().split_json()` multiple times: I was getting
weird results. The problem is the `chunks` list in the `_json_split`
method: if `chunks` is not provided when `_json_split` is called (which is the
case when `split_json` calls `_json_split`), the same default list is reused for
subsequent calls to `_json_split`.

You can see this in the test case I also added to this commit.

Output should be: 
```
[{'a': 1, 'b': 2}]
[{'c': 3, 'd': 4}]
```

Instead you get:
```
[{'a': 1, 'b': 2}]
[{'a': 1, 'b': 2, 'c': 3, 'd': 4}]
```
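
A short reproduction sketch of the behavior described above; the chunk size
is chosen arbitrarily:

```python
from langchain_text_splitters import RecursiveJsonSplitter

splitter = RecursiveJsonSplitter(max_chunk_size=30)
print(splitter.split_json({"a": 1, "b": 2}))
# before this fix, the second call also carried over {"a": 1, "b": 2}
print(splitter.split_json({"c": 3, "d": 4}))
```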

---------

Co-authored-by: Nuno Campos <nuno@langchain.dev>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
2024-06-18 20:21:55 -07:00
Yuki Watanabe
9ab7a6df39 docs: Overhaul Databricks components documentation (#22884)
**Description:** Documentation at
[integrations/llms/databricks](https://python.langchain.com/v0.2/docs/integrations/llms/databricks/)
is not up-to-date and includes examples about chat models and embeddings,
which should be located in the corresponding subdirectories.
This PR splits the page into the correct scopes and overhauls the contents.

**Note**: This PR might be hard to review on the diffs view, please use
the following preview links for the changed pages.
- `ChatDatabricks`:
https://langchain-git-fork-b-step62-chat-databricks-doc-langchain.vercel.app/v0.2/docs/integrations/chat/databricks/
- `Databricks`:
https://langchain-git-fork-b-step62-chat-databricks-doc-langchain.vercel.app/v0.2/docs/integrations/llms/databricks/
- `DatabricksEmbeddings`:
https://langchain-git-fork-b-step62-chat-databricks-doc-langchain.vercel.app/v0.2/docs/integrations/text_embedding/databricks/

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
2024-06-18 20:10:54 -07:00
鹿鹿鹿鲨
6b46b5e9ce community: add **request_kwargs and expect TimeError AsyncHtmlLoader (#23068)
- **Description:** add `**request_kwargs` and expect `TimeError` in
`_fetch` function for AsyncHtmlLoader. This allows you to fill in the
kwargs parameter when using the `load()` method of the `AsyncHtmlLoader`
class.

Co-authored-by: Yucolu <yucolu@tencent.com>
2024-06-18 20:02:46 -07:00
Leonid Ganeline
109a70fc64 ibm: docstrings (#23149)
Added missed docstrings. Format docstrings to the consistent format
(used in the API Reference)
2024-06-18 20:00:27 -07:00
Ryan Elston
86ee4f0daa text-splitters: Introduce Experimental Markdown Syntax Splitter (#22257)
#### Description
This MR defines an `ExperimentalMarkdownSyntaxTextSplitter` class. The
main goal is to replicate the functionality of the original
`MarkdownHeaderTextSplitter` which extracts the header stack as metadata
but with one critical difference: it keeps the whitespace of the
original text intact.

This draft reimplements the `MarkdownHeaderTextSplitter` with a very
different algorithmic approach. Instead of marking up each line of the
text individually and aggregating them back together into chunks, this
method builds each chunk sequentially and applies the metadata to each
chunk. This makes the implementation simpler. However, since it's
designed to keep whitespace intact, it's not a full drop-in replacement
for the original. Since it is a radical implementation change to the
original code, I would like to get feedback on whether this is a
worthwhile replacement, should be its own class, or is not a good idea
at all.

Note: I implemented the `return_each_line` parameter but I don't think
it's a necessary feature. I'd prefer to remove it.

This implementation also adds the following additional features:
- Splits out code blocks and includes the language in the `"Code"`
metadata key
- Splits text on the horizontal rule `---` as well
- The `headers_to_split_on` parameter is now optional - with sensible
defaults that can be overridden.
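
A minimal sketch of how this splitter could be used; the input text is
illustrative:

```python
from langchain_text_splitters import ExperimentalMarkdownSyntaxTextSplitter

text = "# Title\n\nIntro paragraph.\n\n## Section\n\nBody text.\n\n---\n\nAfter the rule."

# headers_to_split_on is optional; sensible defaults are used when omitted
splitter = ExperimentalMarkdownSyntaxTextSplitter()
for doc in splitter.split_text(text):
    print(doc.metadata, repr(doc.page_content))
```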

#### Issue
Keeping the whitespace preserves the paragraph structure and the formatting
of the code blocks, which allows the caller much more flexibility
in how they want to further split the individual sections of the
resulting documents. This addresses the issues brought up by the
community in the following issues:
- https://github.com/langchain-ai/langchain/issues/20823
- https://github.com/langchain-ai/langchain/issues/19436
- https://github.com/langchain-ai/langchain/issues/22256

#### Dependencies
N/A

#### Twitter handle
@RyanElston

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-06-18 19:44:00 -07:00
Bagatur
93d0ad97fe anthropic[patch]: test image input (#23155) 2024-06-19 02:32:15 +00:00
Leonid Ganeline
3dfd055411 anthropic: docstrings (#23145)
Added missed docstrings. Format docstrings to the consistent format
(used in the API Reference)
2024-06-18 22:26:45 -04:00
Bagatur
90559fde70 openai[patch], standard-tests[patch]: don't pass in falsey stop vals (#23153)
adds an image input test to standard-tests as well
2024-06-18 18:13:13 -07:00
Bagatur
e8a8286012 core[patch]: runnablewithchathistory from core.runnables (#23136) 2024-06-19 00:15:18 +00:00
Jacob Lee
2ae718796e docs[patch]: Fix typo in feedback (#23146) 2024-06-18 16:32:04 -07:00
Jacob Lee
74749c909d docs[patch]: Adds feedback input after thumbs up/down (#23141)
CC @baskaryan
2024-06-18 16:08:22 -07:00
Bagatur
cf38981bb7 docs: use trim_messages in chatbot how to (#23139) 2024-06-18 15:48:03 -07:00
Vadym Barda
b483bf5095 core[minor]: handle boolean data in draw_mermaid (#23135)
This change should address graph rendering issues for edges with boolean
data

Example from langgraph:

```python
from typing import Annotated, TypedDict

from langchain_core.messages import AnyMessage
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]


def branch(state: State) -> bool:
    return 1 + 1 == 3


graph_builder = StateGraph(State)
graph_builder.add_node("foo", lambda state: {"messages": [("ai", "foo")]})
graph_builder.add_node("bar", lambda state: {"messages": [("ai", "bar")]})

graph_builder.add_conditional_edges(
    START,
    branch,
    path_map={True: "foo", False: "bar"},
    then=END,
)

app = graph_builder.compile()
print(app.get_graph().draw_mermaid())
```

Previous behavior:

```python
AttributeError: 'bool' object has no attribute 'split'
```

Current behavior:

```python
%%{init: {'flowchart': {'curve': 'linear'}}}%%
graph TD;
	__start__[__start__]:::startclass;
	__end__[__end__]:::endclass;
	foo([foo]):::otherclass;
	bar([bar]):::otherclass;
	__start__ -. ('a',) .-> foo;
	foo --> __end__;
	__start__ -. ('b',) .-> bar;
	bar --> __end__;
	classDef startclass fill:#ffdfba;
	classDef endclass fill:#baffc9;
	classDef otherclass fill:#fad7de;
```
2024-06-18 20:15:42 +00:00
Bagatur
093ae04d58 core[patch]: Pin pydantic in py3.12.4 (#23130) 2024-06-18 12:00:02 -07:00
hmasdev
ff0c06b1e5 langchain[patch]: fix OutputType of OutputParsers and fix legacy API in OutputParsers (#19792)
# Description

This pull request aims to address specific issues related to the
ambiguity and error-proneness of the output types of certain output
parsers, as well as the absence of unit tests for some parsers. These
issues could potentially lead to runtime errors or unexpected behaviors
due to type mismatches when used, causing confusion for developers and
users. Through clarifying output types, this PR seeks to improve the
stability and reliability.

Therefore, this pull request

- fixes the `OutputType` of OutputParsers to be the expected type;
- e.g. the `OutputType` property of `EnumOutputParser` raises `TypeError`.
This PR introduces logic to extract `OutputType` from its attribute.
- and fixes the legacy API in OutputParsers like `LLMChain.run` to the
modern API like `LLMChain.invoke`;
- Note: For `OutputFixingParser`, `RetryOutputParser` and
`RetryWithErrorOutputParser`, this PR introduces `legacy` attribute with
False as default value in order to keep the backward compatibility
- and adds the tests for the `OutputFixingParser` and
`RetryOutputParser`.

The following table shows my expected output and the actual output of
the `OutputType` of OutputParsers.
I have used this table to fix `OutputType` of OutputParsers.

| Class Name of OutputParser | My Expected `OutputType` (after this PR) | Actual `OutputType` [evidence](#evidence) (before this PR) | Fix Required |
|---------|--------------|---------|--------|
| BooleanOutputParser | `<class 'bool'>` | `<class 'bool'>` | NO |
| CombiningOutputParser | `typing.Dict[str, Any]` | `TypeError` is raised | YES |
| DatetimeOutputParser | `<class 'datetime.datetime'>` | `<class 'datetime.datetime'>` | NO |
| EnumOutputParser(enum=MyEnum) | `MyEnum` | `TypeError` is raised | YES |
| OutputFixingParser | The same type as `self.parser.OutputType` | `~T` | YES |
| CommaSeparatedListOutputParser | `typing.List[str]` | `typing.List[str]` | NO |
| MarkdownListOutputParser | `typing.List[str]` | `typing.List[str]` | NO |
| NumberedListOutputParser | `typing.List[str]` | `typing.List[str]` | NO |
| JsonOutputKeyToolsParser | `typing.Any` | `typing.Any` | NO |
| JsonOutputToolsParser | `typing.Any` | `typing.Any` | NO |
| PydanticToolsParser | `typing.Any` | `typing.Any` | NO |
| PandasDataFrameOutputParser | `typing.Dict[str, Any]` | `TypeError` is raised | YES |
| PydanticOutputParser(pydantic_object=MyModel) | `<class '__main__.MyModel'>` | `<class '__main__.MyModel'>` | NO |
| RegexParser | `typing.Dict[str, str]` | `TypeError` is raised | YES |
| RegexDictParser | `typing.Dict[str, str]` | `TypeError` is raised | YES |
| RetryOutputParser | The same type as `self.parser.OutputType` | `~T` | YES |
| RetryWithErrorOutputParser | The same type as `self.parser.OutputType` | `~T` | YES |
| StructuredOutputParser | `typing.Dict[str, Any]` | `TypeError` is raised | YES |
| YamlOutputParser(pydantic_object=MyModel) | `MyModel` | `~T` | YES |

NOTE: In "Fix Required", "YES" means that it is required to fix in this
PR while "NO" means that it is not required.

# Issue

No issues for this PR.

# Twitter handle

- [hmdev3](https://twitter.com/hmdev3)

# Questions:

1. Is it required to create tests for legacy APIs `LLMChain.run` in the
following scripts?
   - libs/langchain/tests/unit_tests/output_parsers/test_fix.py;
   - libs/langchain/tests/unit_tests/output_parsers/test_retry.py.

2. Is there a more appropriate expected output type than I expect in the
above table?
- e.g. the `OutputType` of `CombiningOutputParser` should be
SOMETHING...

# Actual outputs (before this PR)

<div id='evidence'></div>

<details><summary>Actual outputs</summary>

## Requirements

- Python==3.9.13
- langchain==0.1.13

```python
Python 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import langchain
>>> langchain.__version__
'0.1.13'
>>> from langchain import output_parsers
```

### `BooleanOutputParser`

```python
>>> output_parsers.BooleanOutputParser().OutputType
<class 'bool'>
```

### `CombiningOutputParser`

```python
>>> output_parsers.CombiningOutputParser(parsers=[output_parsers.DatetimeOutputParser(), output_parsers.CommaSeparatedListOutputParser()]).OutputType
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\workspace\venv\lib\site-packages\langchain_core\output_parsers\base.py", line 160, in OutputType
    raise TypeError(
TypeError: Runnable CombiningOutputParser doesn't have an inferable OutputType. Override the OutputType property to specify the output type.
```

### `DatetimeOutputParser`

```python
>>> output_parsers.DatetimeOutputParser().OutputType
<class 'datetime.datetime'>
```

### `EnumOutputParser`

```python
>>> from enum import Enum
>>> class MyEnum(Enum):
...     a = 'a'
...     b = 'b'
...
>>> output_parsers.EnumOutputParser(enum=MyEnum).OutputType
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\workspace\venv\lib\site-packages\langchain_core\output_parsers\base.py", line 160, in OutputType
    raise TypeError(
TypeError: Runnable EnumOutputParser doesn't have an inferable OutputType. Override the OutputType property to specify the output type.
```

### `OutputFixingParser`

```python
>>> output_parsers.OutputFixingParser(parser=output_parsers.DatetimeOutputParser()).OutputType
~T
```

### `CommaSeparatedListOutputParser`

```python
>>> output_parsers.CommaSeparatedListOutputParser().OutputType
typing.List[str]
```

### `MarkdownListOutputParser`

```python
>>> output_parsers.MarkdownListOutputParser().OutputType
typing.List[str]
```

### `NumberedListOutputParser`

```python
>>> output_parsers.NumberedListOutputParser().OutputType
typing.List[str]
```

### `JsonOutputKeyToolsParser`

```python
>>> output_parsers.JsonOutputKeyToolsParser(key_name='tool').OutputType
typing.Any
```

### `JsonOutputToolsParser`

```python
>>> output_parsers.JsonOutputToolsParser().OutputType
typing.Any
```

### `PydanticToolsParser`

```python
>>> from langchain.pydantic_v1 import BaseModel
>>> class MyModel(BaseModel):
...     a: int
...
>>> output_parsers.PydanticToolsParser(tools=[MyModel, MyModel]).OutputType
typing.Any
```

### `PandasDataFrameOutputParser`

```python
>>> output_parsers.PandasDataFrameOutputParser().OutputType
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\workspace\venv\lib\site-packages\langchain_core\output_parsers\base.py", line 160, in OutputType
    raise TypeError(
TypeError: Runnable PandasDataFrameOutputParser doesn't have an inferable OutputType. Override the OutputType property to specify the output type.
```

### `PydanticOutputParser`

```python
>>> output_parsers.PydanticOutputParser(pydantic_object=MyModel).OutputType
<class '__main__.MyModel'>
```

### `RegexParser`

```python
>>> output_parsers.RegexParser(regex='$', output_keys=['a']).OutputType
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\workspace\venv\lib\site-packages\langchain_core\output_parsers\base.py", line 160, in OutputType
    raise TypeError(
TypeError: Runnable RegexParser doesn't have an inferable OutputType. Override the OutputType property to specify the output type.
```

### `RegexDictParser`

```python
>>> output_parsers.RegexDictParser(output_key_to_format={'a':'a'}).OutputType
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\workspace\venv\lib\site-packages\langchain_core\output_parsers\base.py", line 160, in OutputType
    raise TypeError(
TypeError: Runnable RegexDictParser doesn't have an inferable OutputType. Override the OutputType property to specify the output type.
```

### `RetryOutputParser`

```python
>>> output_parsers.RetryOutputParser(parser=output_parsers.DatetimeOutputParser()).OutputType
~T
```

### `RetryWithErrorOutputParser`

```python
>>> output_parsers.RetryWithErrorOutputParser(parser=output_parsers.DatetimeOutputParser()).OutputType
~T
```

### `StructuredOutputParser`

```python
>>> from langchain.output_parsers.structured import ResponseSchema
>>> response_schemas = [ResponseSchema(name="foo",description="a list of strings",type="List[string]"),ResponseSchema(name="bar",description="a string",type="string"), ]
>>> output_parsers.StructuredOutputParser.from_response_schemas(response_schemas).OutputType
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\workspace\venv\lib\site-packages\langchain_core\output_parsers\base.py", line 160, in OutputType
    raise TypeError(
TypeError: Runnable StructuredOutputParser doesn't have an inferable OutputType. Override the OutputType property to specify the output type.
```

### `YamlOutputParser`

```python
>>> output_parsers.YamlOutputParser(pydantic_object=MyModel).OutputType
~T
```


</details>

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-18 18:59:42 +00:00
Artem Mukhin
e271f75bee docs: Fix URL formatting in deprecation warnings (#23075)
**Description**

Updated the URLs in deprecation warning messages. The URLs were
previously written as raw strings and are now formatted to be clickable
HTML links.

Example of a broken link in the current API Reference:
https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.extraction.create_extraction_chain_pydantic.html

<img width="942" alt="Screenshot 2024-06-18 at 13 21 07"
src="https://github.com/langchain-ai/langchain/assets/4854600/a1b1863c-cd03-4af2-a9bc-70375407fb00">
2024-06-18 14:49:58 -04:00
Gabriel Petracca
c6660df58e community[minor]: Implement Doctran async execution (#22372)
**Description**

The DoctranTextTranslator has an async transform function that was not
implemented because [the doctran
library](https://github.com/psychic-api/doctran) uses a sync version of
the `execute` method.

- I implemented the `DoctranTextTranslator.atransform_documents()`
method using `asyncio.to_thread` to run the function in a separate
thread.
- I updated the example in the Notebook with the new async version.
- The performance improvements can be appreciated when a big document is
divided into multiple chunks.
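
The general pattern described above, as an illustrative sketch; the class
below is a stand-in, not the actual patch:

```python
import asyncio
from typing import Any, Sequence


class ExampleTransformer:  # stand-in for DoctranTextTranslator
    def transform_documents(self, documents: Sequence[Any], **kwargs: Any) -> Sequence[Any]:
        # synchronous work, e.g. calling doctran's sync execute()
        return documents

    async def atransform_documents(self, documents: Sequence[Any], **kwargs: Any) -> Sequence[Any]:
        # run the blocking sync implementation in a worker thread
        return await asyncio.to_thread(self.transform_documents, documents, **kwargs)
```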

Relates to:
- Issue #14645: https://github.com/langchain-ai/langchain/issues/14645
- Issue #14437: https://github.com/langchain-ai/langchain/issues/14437
- https://github.com/langchain-ai/langchain/pull/15264

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-18 18:17:37 +00:00
Eugene Yurtsev
aa6415aa7d core[minor]: Support multiple keys in get_from_dict_or_env (#23086)
Support passing multiple keys for get_from_dict_or_env
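
A sketch of the behavior; the key names and env var below are illustrative:

```python
from langchain_core.utils import get_from_dict_or_env

values = {"api_key": "sk-placeholder"}

# tries "foo_api_key" first, then "api_key", then falls back to the FOO_API_KEY env var
key = get_from_dict_or_env(values, ["foo_api_key", "api_key"], "FOO_API_KEY")
```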
2024-06-18 14:13:28 -04:00
nold
226802f0c4 community: add args_schema to SearxSearch (#22954)
This change adds args_schema (pydantic BaseModel) to SearxSearchRun for
correct schema formatting on LLM function calls

Issue: currently using SearxSearchRun with OpenAI function calling
returns the following error "TypeError: SearxSearchRun._run() got an
unexpected keyword argument '__arg1' ".

This happens because the schema sent to the LLM is "input:
'{"__arg1":"foobar"}'" while the method should be called with the
"query" parameter.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-06-18 17:27:39 +00:00
Bagatur
01783d67fc core[patch]: Release 0.2.9 (#23091) 2024-06-18 17:15:04 +00:00
Finlay Macklon
616d06d7fe community: glob multiple patterns when using DirectoryLoader (#22852)
- **Description:** Updated
*community.langchain_community.document_loaders.directory.py* to enable
the use of multiple glob patterns in the `DirectoryLoader` class. Now,
the glob parameter is of type `list[str] | str` and still defaults to
the same value as before. I updated the docstring of the class to
reflect this, and added a unit test to
*community.tests.unit_tests.document_loaders.test_directory.py* named
`test_directory_loader_glob_multiple`. This test also shows an example
of how to use the new functionality; a similar sketch follows this list.
- ~~Issue:~~**Discussion Thread:**
https://github.com/langchain-ai/langchain/discussions/18559
- **Dependencies:** None
- **Twitter handle:** N/a
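
An illustrative sketch of the new multi-pattern usage; the paths and loader
class are placeholders:

```python
from langchain_community.document_loaders import DirectoryLoader, TextLoader

# glob now accepts a list of patterns as well as a single string
loader = DirectoryLoader("./docs", glob=["**/*.md", "**/*.rst"], loader_cls=TextLoader)
docs = loader.load()
```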

- [x] **Add tests and docs**
    - Added test (described above)
    - Updated class docstring

- [x] **Lint and test**

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
2024-06-18 09:24:50 -07:00
Eugene Yurtsev
5564d9e404 core[patch]: Document BaseStore (#23082)
Add doc-string to BaseStore
2024-06-18 11:47:47 -04:00
Takuya Igei
9f791b6ad5 core[patch],community[patch],langchain[patch]: tenacity dependency to version >=8.1.0,<8.4.0 (#22973)
Fix https://github.com/langchain-ai/langchain/issues/22972.

- [x] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- [x] **PR message**: ***Delete this entire checklist*** and replace
with
    - **Description:** a description of the change
    - **Issue:** the issue # it fixes, if applicable
    - **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a
mention, we'll gladly shout you out!


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
2024-06-18 10:34:28 -04:00
Raghav Dixit
74c4cbb859 LanceDB example minor change (#23069)
Removed package version `0.6.13` in the example.
2024-06-18 09:16:17 -04:00
Bagatur
ddfbca38df docs: add trim_messages to chatbot (#23061) 2024-06-17 22:39:39 -07:00
Lance Martin
931b41b30f Update Fireworks link (#23058) 2024-06-17 21:16:18 -07:00
Leonid Ganeline
6a66d8e2ca docs: AWS platform page update (#23063)
Added a reference to the `GlueCatalogLoader` new document loader.
2024-06-17 21:01:58 -07:00
Raviraj
858ce264ef SemanticChunker : Feature Addition ("Semantic Splitting with gradient") (#22895)
```SemanticChunker``` currently provides three methods to split texts semantically:
- percentile
- standard_deviation
- interquartile

I propose a new method, ```gradient```. In this method, the gradient of distance is used to split chunks along with the percentile method (technically). This method is useful when chunks are highly correlated with each other or specific to a domain, e.g. legal or medical. The idea is to apply anomaly detection on the gradient array so that the distribution becomes wider, making it easy to identify boundaries in highly semantic data.
I have tested this merge on a set of 10 domain-specific documents (mostly legal).
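
A sketch of selecting the new method; the embedding model and input text are
placeholders:

```python
from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai import OpenAIEmbeddings

long_legal_text = "..."  # placeholder for a long domain-specific document

splitter = SemanticChunker(
    OpenAIEmbeddings(),
    breakpoint_threshold_type="gradient",  # new option alongside percentile, standard_deviation, interquartile
)
chunks = splitter.split_text(long_legal_text)
```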

Details : 
    - **Issue:** Improvement
    - **Dependencies:** NA
    - **Twitter handle:** [x.com/prajapat_ravi](https://x.com/prajapat_ravi)


@hwchase17

---------

Co-authored-by: Raviraj Prajapat <raviraj.prajapat@sirionlabs.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-06-17 21:01:08 -07:00
Raghav Dixit
55705c0f5e LanceDB integration update (#22869)
Added : 

- [x] relevance search (w/wo scores)
- [x] maximal marginal search
- [x] image ingestion
- [x] filtering support
- [x] hybrid search w reranking 

make test, lint_diff and format checked.
2024-06-17 20:54:26 -07:00
Chang Liu
62c8a67f56 community: add KafkaChatMessageHistory (#22216)
Add chat history store based on Kafka.

Files added: 
`libs/community/langchain_community/chat_message_histories/kafka.py`
`docs/docs/integrations/memory/kafka_chat_message_history.ipynb`

New issue to be created for future improvement:
1. Async method implementation.
2. Message retrieval based on timestamp.
3. Support for other configs when connecting to cloud hosted Kafka (e.g.
add `api_key` field)
4. Improve unit testing & integration testing.
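
A sketch of how the store might be used; the constructor arguments shown
follow common Kafka client conventions and are assumptions, not a confirmed
signature:

```python
from langchain_community.chat_message_histories import KafkaChatMessageHistory

history = KafkaChatMessageHistory(
    session_id="user-123",               # assumed parameter
    bootstrap_servers="localhost:9092",  # assumed parameter
)
history.add_user_message("hi")
print(history.messages)
```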
2024-06-17 20:34:01 -07:00
shimajiroxyz
3e835a1aa1 langchain: add id_key option to EnsembleRetriever for metadata-based document merging (#22950)
**Description:**
- What I changed
- By specifying the `id_key` during the initialization of
`EnsembleRetriever`, it is now possible to determine which documents to
merge scores for based on the value corresponding to the `id_key`
element in the metadata, instead of `page_content`. Below is an example
of how to use the modified `EnsembleRetriever`:
    ```python
retriever = EnsembleRetriever(retrievers=[ret1, ret2], id_key="id") #
The Document returned by each retriever must keep the "id" key in its
metadata.
    ```

- Additionally, I added a script to easily test the behavior of the
`invoke` method of the modified `EnsembleRetriever`.

- Why I changed
- There are cases where you may want to calculate scores by treating
Documents with different `page_content` as the same when using
`EnsembleRetriever`. For example, when you want to ensemble the search
results of the same document described in two different languages.
- The previous `EnsembleRetriever` used `page_content` as the basis for
score aggregation, making the above usage difficult. Therefore, the
score is now calculated based on the specified key value in the
Document's metadata.

**Twitter handle:** @shimajiroxyz
2024-06-18 03:29:17 +00:00
mackong
39f6c4169d langchain[patch]: add tool messages formatter for tool calling agent (#22849)
- **Description:** add tool_messages_formatter for the tool calling agent,
so that tool messages can be formatted in different ways for your LLM.
  - **Issue:** N/A
  - **Dependencies:** N/A
2024-06-17 20:29:00 -07:00
Lucas Tucker
e25a5966b5 docs: Standardize DocumentLoader docstrings (#22932)
**Standardizing DocumentLoader docstrings (of which there are many)**

This PR addresses issue #22866 and adds docstrings according to the
issue's specified format (in the appendix) for files csv_loader.py and
json_loader.py in langchain_community.document_loaders. In particular,
the following sections have been added to both CSVLoader and JSONLoader:
Setup, Instantiate, Load, Async load, and Lazy load. It may be worth
adding a 'Metadata' section to the JSONLoader docstring to clarify how
we want to extract the JSON metadata (using the `metadata_func`
argument). The files I used to walkthrough the various sections were
`example_2.json` from
[HERE](https://support.oneskyapp.com/hc/en-us/articles/208047697-JSON-sample-files)
and `hw_200.csv` from
[HERE](https://people.sc.fsu.edu/~jburkardt/data/csv/csv.html).

---------

Co-authored-by: lucast2021 <lucast2021@headroyce.org>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-06-18 03:26:36 +00:00
Leonid Ganeline
a56ff199a7 docs: embeddings classes (#22927)
Added a table with all Embedding classes.
2024-06-17 20:17:24 -07:00
Mohammad Mohtashim
60ba02f5db [Community]: Fixed DDG DuckDuckGoSearchResults Docstring (#22968)
- **Description:** A very small fix in the Docstring of
`DuckDuckGoSearchResults` identified in the following issue.
- **Issue:** #22961

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-06-18 03:16:24 +00:00
Eun Hye Kim
70761af8cf community: Fix #22975 (Add SSL Verification Option to Requests Class in langchain_community) (#22977)
- **PR title**: "community: Fix #22975 (Add SSL Verification Option to
Requests Class in langchain_community)"
- **PR message**: 
    - **Description:**
- Added an optional verify parameter to the Requests class with a
default value of True.
- Modified the get, post, patch, put, and delete methods to include the
verify parameter.
- Updated the _arequest async context manager to include the verify
parameter.
- Added the verify parameter to the GenericRequestsWrapper class and
passed it to the Requests class.
    - **Issue:** This PR fixes issue #22975.
- **Dependencies:** No additional dependencies are required for this
change.
    - **Twitter handle:** @lunara_x

You can check this change with the code below.
```python
import yaml

from langchain_openai.chat_models import ChatOpenAI
from langchain.requests import RequestsWrapper
from langchain_community.agent_toolkits.openapi import planner
from langchain_community.agent_toolkits.openapi.spec import reduce_openapi_spec

with open("swagger.yaml") as f:
    data = yaml.load(f, Loader=yaml.FullLoader)
swagger_api_spec = reduce_openapi_spec(data)

llm = ChatOpenAI(model='gpt-4o')
swagger_requests_wrapper = RequestsWrapper(verify=False) # modified point
superset_agent = planner.create_openapi_agent(swagger_api_spec, swagger_requests_wrapper, llm, allow_dangerous_requests=True, handle_parsing_errors=True)

superset_agent.run(
    "Tell me the number and types of charts and dashboards available."
)
```

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-06-18 03:12:40 +00:00
Mohammad Mohtashim
bf839676c7 [Community]: FIxed the DocumentDBVectorSearch _similarity_search_without_score (#22970)
- **Description:** The PR #22777 introduced a bug in
`_similarity_search_without_score` which was raising the
`OperationFailure` error. The mistake was a syntax error in the MongoDB
pipeline, which has now been corrected.
    - **Issue:** #22770
2024-06-17 20:08:42 -07:00
Nuno Campos
f01f12ce1e Include "no escape" and "inverted section" mustache vars in Prompt.input_variables and Prompt.input_schema (#22981) 2024-06-17 19:24:13 -07:00
Bella Be
7a0b36501f docs: Update how to docs for pydantic compatibility (#22983)
Add missing imports of BaseTool from langchain_core.tools in the docs.

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-06-18 01:49:56 +00:00
Jacob Lee
3b7b276f6f docs[patch]: Adds evaluation sections (#23050)
Also want to add an index/rollup page to LangSmith docs to enable
linking to a how-to category as a group (e.g.
https://docs.smith.langchain.com/how_to_guides/evaluation/)

CC @agola11 @hinthornw
2024-06-17 17:25:04 -07:00
Jacob Lee
6605ae22f6 docs[patch]: Update docs links (#23013) 2024-06-17 15:58:28 -07:00
Bagatur
c2b2e3266c core[minor]: message transformer utils (#22752) 2024-06-17 15:30:07 -07:00
Qingchuan Hao
c5e0acf6f0 docs: add bing search integration to agent (#22929)
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-06-17 18:08:52 -04:00
Anders Swanson
aacc6198b9 community: OCI GenAI embedding batch size (#22986)
Thank you for contributing to LangChain!

- [x] **PR title**: "community: OCI GenAI embedding batch size"



- [x] **PR message**:
    - **Issue:** #22985 


- [ ] **Add tests and docs**: N/A


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Signed-off-by: Anders Swanson <anders.swanson@oracle.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-17 22:06:45 +00:00
Bagatur
8235bae48e core[patch]: Release 0.2.8 (#23012) 2024-06-17 20:55:39 +00:00
Bagatur
5ee6e22983 infra: test all dependents on any change (#22994) 2024-06-17 20:50:31 +00:00
Nuno Campos
bd4b68cd54 core: run_in_executor: Wrap StopIteration in RuntimeError (#22997)
- StopIteration can't be set on an asyncio.Future: it raises a TypeError and
leaves the Future pending forever, so we need to convert it to a RuntimeError.
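
A minimal sketch of the idea (illustrative, not the actual `run_in_executor` code in core): wrap the synchronous call so that StopIteration is converted before the result ever reaches an asyncio.Future.

```python
import asyncio
from typing import Callable, TypeVar

T = TypeVar("T")


def _safe_call(func: Callable[[], T]) -> T:
    try:
        return func()
    except StopIteration as exc:
        # asyncio.Future.set_exception() rejects StopIteration with a TypeError,
        # which would leave the Future pending forever, so re-raise it.
        raise RuntimeError(exc) from exc


async def run_sync(func: Callable[[], T]) -> T:
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, lambda: _safe_call(func))
```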
2024-06-17 20:40:01 +00:00
Bagatur
d96f67b06f standard-tests[patch]: Update chat model standard tests (#22378)
- Refactor standard test classes to make them easier to configure
- Update openai to support stop_sequences init param
- Update groq to support stop_sequences init param
- Update fireworks to support max_retries init param
- Update ChatModel.bind_tools to type tool_choice
- Update groq to handle tool_choice="any". **this may be controversial**

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-17 13:37:41 -07:00
Bob Lin
14f0cdad58 docs: Add some 3rd party tutorials (#22931)
Langchain is very popular among developers in China, but there are still
no good Chinese books or documents, so I want to add my own Chinese
resources on langchain topics, hoping to give Chinese readers a better
experience using langchain. This is not a translation of the official
langchain documentation, but my understanding.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-17 20:12:49 +00:00
Jacob Lee
893299c3c9 docs[patch]: Reorder streaming guide, add tags (#22993)
CC @hinthornw
2024-06-17 13:10:51 -07:00
Oguz Vuruskaner
dd25d08c06 community[minor]: add tool calling for DeepInfraChat (#22745)
DeepInfra now supports tool calling for supported models.
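
A minimal usage sketch, assuming a `DEEPINFRA_API_TOKEN` environment variable and a model that supports tool calling (the default model is used here; the tool itself is a made-up example):

```python
from langchain_community.chat_models import ChatDeepInfra
from langchain_core.tools import tool


@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"


llm = ChatDeepInfra()  # reads DEEPINFRA_API_TOKEN from the environment
llm_with_tools = llm.bind_tools([get_weather])
msg = llm_with_tools.invoke("What's the weather in Paris?")
print(msg.tool_calls)
```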

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-17 15:21:49 -04:00
Bagatur
158701ab3c docs: update universal init title (#22990) 2024-06-17 12:13:31 -07:00
Lance Martin
a54deba6bc Add RAG to conceptual guide (#22790)
Co-authored-by: jacoblee93 <jacoblee93@gmail.com>
2024-06-17 11:20:28 -07:00
maang-h
c6b7db6587 community: Add Baichuan Embeddings batch size (#22942)
- **Support batch size**
Baichuan updated its documentation to indicate that up to 16 documents can be
embedded at a time.

- **Standardized model init arg names**
    - baichuan_api_key -> api_key
    - model_name  -> model
2024-06-17 14:11:04 -04:00
ccurme
722c8f50ea openai[patch]: add stream_usage parameter (#22854)
Here we add `stream_usage` to ChatOpenAI as:

1. a boolean attribute
2. a kwarg to _stream and _astream.

Question: should the `stream_usage` attribute be `bool`, or `bool |
None`?

Currently I've kept it `bool` and defaulted to False. It was implemented
on
[ChatAnthropic](e832bbb486/libs/partners/anthropic/langchain_anthropic/chat_models.py (L535))
as a bool. However, to maintain support for users who access the
behavior via OpenAI's `stream_options` param, this ends up being
possible:
```python
llm = ChatOpenAI(model_kwargs={"stream_options": {"include_usage": True}})
assert not llm.stream_usage
```
(and this model will stream token usage).

Some options for this:
- it's ok
- make the `stream_usage` attribute bool or None
- make an \_\_init\_\_ for ChatOpenAI, set a `._stream_usage` attribute
and read `.stream_usage` from a property

Open to other ideas as well.
2024-06-17 13:35:18 -04:00
Shubham Pandey
56ac94e014 community[minor]: add ChatSnowflakeCortex chat model (#21490)
**Description:** This PR adds a chat model integration for [Snowflake
Cortex](https://docs.snowflake.com/en/user-guide/snowflake-cortex/llm-functions),
which gives an instant access to industry-leading large language models
(LLMs) trained by researchers at companies like Mistral, Reka, Meta, and
Google, including [Snowflake
Arctic](https://www.snowflake.com/en/data-cloud/arctic/), an open
enterprise-grade model developed by Snowflake.

**Dependencies:** Snowflake's
[snowpark](https://pypi.org/project/snowflake-snowpark-python/) library
is required for using this integration.

**Twitter handle:** [@gethouseware](https://twitter.com/gethouseware)

- [x] **Add tests and docs**:
1. integration tests:
`libs/community/tests/integration_tests/chat_models/test_snowflake.py`
2. unit tests:
`libs/community/tests/unit_tests/chat_models/test_snowflake.py`
  3. example notebook: `docs/docs/integrations/chat/snowflake.ipynb`


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-06-17 09:47:05 -07:00
Lance Martin
ea96133890 docs: Update llamacpp ntbk (#22907)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-17 15:42:56 +00:00
Bagatur
e2304ebcdb standard-tests[patch]: Release 0.1.1 (#22984) 2024-06-17 15:31:34 +00:00
Hakan Özdemir
c437b1aab7 [Partner]: Add metadata to stream response (#22716)
Adds `response_metadata` to stream responses from OpenAI. This is
returned with `invoke` normally, but wasn't implemented for `stream`.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-17 09:46:50 -04:00
Baskar Gopinath
42a379c75c docs: Standardise formatting (#22948)
Standardised formatting 


![image](https://github.com/langchain-ai/langchain/assets/73015364/ea3b5c5c-e7a6-4bb7-8c6b-e7d8cbbbf761)
2024-06-17 09:00:05 -04:00
Ikko Eltociear Ashimine
3e7bb7690c docs: update databricks.ipynb (#22949)
arbitary -> arbitrary
2024-06-17 08:57:49 -04:00
Baskar Gopinath
19356b6445 Update sql_qa.ipynb (#22966)
fixes #22798 
fixes #22963
2024-06-17 08:57:16 -04:00
Bagatur
9ff249a38d standard-tests[patch]: don't require str chunk contents (#22965) 2024-06-17 08:52:24 -04:00
Daniel Glogowski
892bd4c29b docs: nim model name update (#22943)
NIM Model name change in a notebook and mdx file.

Thanks!
2024-06-15 16:38:28 -04:00
Christopher Tee
ada03dd273 community(you): Better support for You.com News API (#22622)
## Description
While `YouRetriever` supports both You.com's Search and News APIs, news
is supported as an afterthought.
More specifically, not all of the News API parameters are exposed for
the user, only those that happen to overlap with the Search API.

This PR:
- improves support for both APIs, exposing the remaining News API
parameters while retaining backward compatibility
- refactor some REST parameter generation logic
- updates the docstring of `YouSearchAPIWrapper`
- add input validation and warnings to ensure parameters are properly
set by user
- 🚨 Breaking: Limit the news results to `k` items

2024-06-15 20:05:19 +00:00
ccurme
e09c6bb58b infra: update integration test workflow (#22945) 2024-06-15 19:52:43 +00:00
Tomaz Bratanic
1c661fd849 Improve llm graph transformer docstring (#22939) 2024-06-15 15:33:26 -04:00
maang-h
7a0af56177 docs: update ZhipuAI ChatModel docstring (#22934)
- **Description:** Update ZhipuAI ChatModel rich docstring
- **Issue:** the issue #22296
2024-06-15 09:12:21 -04:00
Appletree24
6838804116 docs: Fix misspelling in streaming doc (#22936)
Description: Fix misspelling
Issue: None
Dependencies: None
Twitter handle: None

Co-authored-by: qcloud <ubuntu@localhost.localdomain>
2024-06-15 12:24:50 +00:00
Bitmonkey
570d45b2a1 Update ollama.py with optional raw setting. (#21486)
Ollama has a raw option now. 

https://github.com/ollama/ollama/blob/main/docs/api.md
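
A minimal sketch, assuming the new setting is exposed as a `raw` keyword on the community `Ollama` LLM and forwarded to Ollama's /api/generate endpoint (the flag name is taken from the PR title; treat it as an assumption):

```python
from langchain_community.llms import Ollama

# raw=True asks Ollama to skip its built-in prompt templating,
# so the prompt is passed to the model verbatim.
llm = Ollama(model="llama3", raw=True)
print(llm.invoke("[INST] Say hello. [/INST]"))
```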

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-06-14 17:19:26 -07:00
caiyueliang
9944ad7f5f community: 'Solve the issue where the _search function in ElasticsearchStore supports passing a query_vector parameter, but the parameter does not take effect. (#21532)
**Issue:**
When using the similarity_search_with_score function in
ElasticsearchStore, I expected to be able to pass in a query_vector that I
had already obtained. The _search function does support the query_vector
parameter, but it did not take effect. This PR resolves that issue.

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
2024-06-14 17:13:11 -07:00
Erick Friis
764f1958dd docs: add ollama json mode (#22926)
fixes #22910
2024-06-14 23:27:55 +00:00
Erick Friis
c374c98389 experimental: release 0.0.61 (#22924) 2024-06-14 15:55:07 -07:00
BuxianChen
af65cac609 cli[minor]: remove redefined DEFAULT_GIT_REF (#21471)
remove redefined DEFAULT_GIT_REF

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
2024-06-14 15:49:15 -07:00
Erick Friis
79a64207f5 community: release 0.2.5 (#22923) 2024-06-14 15:45:07 -07:00
Jiejun Tan
c8c67dde6f text-splitters[patch]: Fix HTMLSectionSplitter (#22812)
Update former pull request:
https://github.com/langchain-ai/langchain/pull/22654.

Modified `langchain_text_splitters.HTMLSectionSplitter`, where in the
latest version a `dict` data structure is used to store sections from an
HTML document in the function `split_html_by_headers`. The header/section
element names serve as dict keys, which is a problem when duplicate
header/section element names are present in a single HTML document: later
sections replace earlier ones with the same name, so some content can be
missed after the HTML text splitting is conducted.

Using a list to store sections solves the problem, as illustrated below. A
unit test covering duplicate header names has been added.
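
Illustrative only (not the splitter's actual code): keying sections by header text drops duplicates, while a list preserves every section.

```python
parsed = [("Intro", "first section"), ("Details", "body"), ("Intro", "second section")]

as_dict = {}
for header, content in parsed:
    as_dict[header] = content          # the second "Intro" overwrites the first
print(len(as_dict))                    # 2 -> one section silently lost

as_list = [{"header": header, "content": content} for header, content in parsed]
print(len(as_list))                    # 3 -> all sections kept
```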

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-14 22:40:39 +00:00
Erick Friis
fbeeb6da75 langchain: release 0.2.5 (#22922) 2024-06-14 15:37:54 -07:00
Erick Friis
551640a030 templates: remove lockfiles (#22920)
poetry will default to the latest versions without them
2024-06-14 21:42:30 +00:00
Baskar Gopinath
c4f2bc9540 docs: Fix wrongly referenced class name in confluence.py (#22879)
Fixes #22542

Changed ConfluenceReader to ConfluenceLoader
2024-06-14 14:00:48 -07:00
ccurme
32966a08a9 infra: remove nvidia from monorepo scheduled tests (#22915)
Scheduled tests run in
https://github.com/langchain-ai/langchain-nvidia/tree/main
2024-06-14 13:23:04 -07:00
Erick Friis
9ef15691d6 core: release 0.2.7 (#22917) 2024-06-14 20:03:58 +00:00
Nuno Campos
338180f383 core: in astream_events v2 always await task even if already finished (#22916)
- this ensures exceptions propagate to the caller
2024-06-14 19:54:20 +00:00
Istvan/Nebulinq
513e491ce9 experimental: LLMGraphTransformer - added relationship properties. (#21856)
- **Description:** 
The generated relationships in the graph had no properties, but the
Relationship class was properly defined with properties. This made it
very difficult to transform conditional sentences into a graph. Adding
properties to relationships can solve this issue elegantly.
The changes expand on the existing LLMGraphTransformer implementation
but add the possibility to define allowed relationship properties like
this: LLMGraphTransformer(llm=llm, relationship_properties=["Condition",
"Time"],)
- **Issue:** 
    no issue found
 - **Dependencies:**
    n/a
- **Twitter handle:** 
    @IstvanSpace


- Quick test:

```python
from dotenv import load_dotenv
import os
from langchain_community.graphs import Neo4jGraph
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.documents import Document

load_dotenv()
os.environ["NEO4J_URI"] = os.getenv("NEO4J_URI")
os.environ["NEO4J_USERNAME"] = os.getenv("NEO4J_USERNAME")
os.environ["NEO4J_PASSWORD"] = os.getenv("NEO4J_PASSWORD")
graph = Neo4jGraph()
llm = ChatOpenAI(temperature=0, model_name="gpt-4o")
llm_transformer = LLMGraphTransformer(llm=llm)
# text = "Harry potter likes pies, but only if it rains outside"
text = "Jack has a dog named Max. Jack only walks Max if it is sunny outside."
documents = [Document(page_content=text)]
llm_transformer_props = LLMGraphTransformer(
    llm=llm,
    relationship_properties=["Condition"],
)
graph_documents_props = llm_transformer_props.convert_to_graph_documents(documents)
print(f"Nodes:{graph_documents_props[0].nodes}")
print(f"Relationships:{graph_documents_props[0].relationships}")
graph.add_graph_documents(graph_documents_props)
```

---------

Co-authored-by: Istvan Lorincz <istvan.lorincz@pm.me>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-14 14:41:04 -04:00
ccurme
694ae87748 docs: add groq to chatmodeltabs (#22913) 2024-06-14 14:31:48 -04:00
Eugene Yurtsev
c816d03699 docs: Add admonition to PythonREPL tool (#22909)
Add admonition to the documentation to make sure users are aware that
the tool allows execution of code on the host machine using a python
interpreter (by design).
2024-06-14 14:06:40 -04:00
kiarina
8171efd07a core[patch]: Fix FunctionCallbackHandler._on_tool_end (#22908)
If the global `debug` flag is enabled, the agent will get the following
error in `FunctionCallbackHandler._on_tool_end` at runtime.

```
Error in ConsoleCallbackHandler.on_tool_end callback: AttributeError("'list' object has no attribute 'strip'")
```

By calling str() before strip(), the error was avoided.
This error can be seen at
[debugging.ipynb](https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/debugging.ipynb).
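
A minimal sketch of the fix (illustrative): coerce the tool output to str before calling strip(), since tool outputs may be lists rather than plain strings.

```python
output = ["first chunk", "second chunk"]   # e.g. a tool that returns a list
# output.strip()                           # AttributeError: 'list' object has no attribute 'strip'
cleaned = str(output).strip()              # works for any output type
print(cleaned)
```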

- Issue: NA
- Dependencies: NA
- Twitter handle: https://x.com/kiarina37
2024-06-14 17:59:29 +00:00
Philippe PRADOS
b61de9728e community[minor]: Fix long_context_reorder.py async (#22839)
Implement `async def atransform_documents( self, documents:
Sequence[Document], **kwargs: Any ) -> Sequence[Document]` for
`LongContextReorder`
2024-06-14 13:55:18 -04:00
Eugene Yurtsev
c72bcda4f2 community[major], experimental[patch]: Remove Python REPL from community (#22904)
Remove the REPL from community, and suggest an alternative import from
langchain_experimental.

Fix for this issue:
https://github.com/langchain-ai/langchain/issues/14345

This is not a bug in the code or an actual security risk. The python
REPL itself is behaving as expected.

The PR is done to appease blanket security policies that are just
looking for the presence of exec in the code.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-14 17:53:29 +00:00
Eugene Yurtsev
9a877c7adb community[patch]: SitemapLoader restrict depth of parsing sitemap (CVE-2024-2965) (#22903)
This PR restricts the depth to which the sitemap can be parsed.

Fix for: CVE-2024-2965
2024-06-14 13:04:40 -04:00
Eugene Yurtsev
4a77a3ab19 core[patch]: fix validation of @deprecated decorator (#22513)
This PR moves the validation of the decorator to a better place to avoid
creating bugs while deprecating code.

Prevent issues like this from arising:
https://github.com/langchain-ai/langchain/issues/22510

We should replace this with a linter at some point that just does static
analysis.
2024-06-14 16:52:30 +00:00
Jacob Lee
181a61982f anthropic[minor]: Adds streaming tool call support for Anthropic (#22687)
Preserves string content chunks for non tool call requests for
convenience.

One thing - Anthropic events look like this:

```
RawContentBlockStartEvent(content_block=TextBlock(text='', type='text'), index=0, type='content_block_start')
RawContentBlockDeltaEvent(delta=TextDelta(text='<thinking>\nThe', type='text_delta'), index=0, type='content_block_delta')
RawContentBlockDeltaEvent(delta=TextDelta(text=' provide', type='text_delta'), index=0, type='content_block_delta')
...
RawContentBlockStartEvent(content_block=ToolUseBlock(id='toolu_01GJ6x2ddcMG3psDNNe4eDqb', input={}, name='get_weather', type='tool_use'), index=1, type='content_block_start')
RawContentBlockDeltaEvent(delta=InputJsonDelta(partial_json='', type='input_json_delta'), index=1, type='content_block_delta')
```

Note that `delta` has a `type` field. With this implementation, I'm
dropping it because `merge_list` behavior will concatenate strings.

We currently have `index` as a special field when merging lists, would
it be worth adding `type` too?

If so, what do we set as a context block chunk? `text` vs.
`text_delta`/`tool_use` vs `input_json_delta`?

CC @ccurme @efriis @baskaryan
2024-06-14 09:14:43 -07:00
ccurme
f40b2c6f9d fireworks[patch]: add usage_metadata to (a)invoke and (a)stream (#22906) 2024-06-14 12:07:19 -04:00
Mohammad Mohtashim
d1b7a934aa [Community]: HuggingFaceCrossEncoder score accounting for <not-relevant score,relevant score> pairs. (#22578)
- **Description:** Some of the Cross-Encoder models provide scores in
pairs, i.e., <not-relevant score (higher means the document is less
relevant to the query), relevant score (higher means the document is
more relevant to the query)>. However, the `HuggingFaceCrossEncoder`
`score` method does not currently take into account the pair situation.
This PR addresses this issue by modifying the method to consider only
the relevant score when scores are provided in pairs (see the sketch
below). The reason for focusing on the relevant score is that the
compressors select the top-n documents based on relevance.
    - **Issue:** #22556 
- Please also refer to this
[comment](https://github.com/UKPLab/sentence-transformers/issues/568#issuecomment-729153075)
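
The sketch below is illustrative only (not the library's exact code): when the model emits <not-relevant, relevant> pairs, only the relevant score is kept for ranking.

```python
raw_scores = [[0.12, 0.88], [0.70, 0.30]]   # one <not-relevant, relevant> pair per document
relevant = [s[-1] if isinstance(s, (list, tuple)) else s for s in raw_scores]
print(relevant)                              # [0.88, 0.3]
```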
2024-06-14 08:28:24 -07:00
Baskar Gopinath
83643cbdfe docs: Fix typo in tutorial about structured data extraction (#22888)
Fixed a typo in the tutorial about structured data extraction.
2024-06-14 15:19:55 +00:00
Thanh Nguyen
b5e2ba3a47 community[minor]: add chat model llamacpp (#22589)
- **PR title**: [community] add chat model llamacpp


- **PR message**:
- **Description:** This PR introduces a new chat model integration with
llamacpp_python, designed to work similarly to the existing ChatOpenAI
model.
      + Works well with instructed chat, chains, and function/tool calling.
      + Works with LangGraph (persistent memory, tool calling); will update soon.

- **Dependencies:** This change requires the llamacpp_python library to
be installed.
    
@baskaryan

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-14 14:51:43 +00:00
Bagatur
e4279f80cd docs: doc loader feat table alignment (#22900) 2024-06-14 14:25:01 +00:00
Isaac Francisco
984c7a9d42 docs: generate table for document loaders (#22871)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-14 07:03:27 -07:00
Jacob Lee
8e89178047 docs[patch]: Expand embeddings docs (#22881) 2024-06-13 23:06:07 -07:00
ccurme
73c76b9628 anthropic[patch]: always add tool_result type to ToolMessage content (#22721)
Anthropic tool results can contain image data, which are typically
represented with content blocks having `"type": "image"`. Currently,
these content blocks are passed as-is as human/user messages to
Anthropic, which raises BadRequestError as it expects a tool_result
block to follow a tool_use.

Here we update ChatAnthropic to nest the content blocks inside a
tool_result content block.

Example:
```python
import base64

import httpx
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
from langchain_core.pydantic_v1 import BaseModel, Field


# Fetch image
image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")


class FetchImage(BaseModel):
    should_fetch: bool = Field(..., description="Whether an image is requested.")


llm = ChatAnthropic(model="claude-3-sonnet-20240229").bind_tools([FetchImage])

messages = [
    HumanMessage(content="Could you summon a beautiful image please?"),
    AIMessage(
        content=[
            {
                "type": "tool_use",
                "id": "toolu_01Rn6Qvj5m7955x9m9Pfxbcx",
                "name": "FetchImage",
                "input": {"should_fetch": True},
            },
        ],
        tool_calls=[
            {
                "name": "FetchImage",
                "args": {"should_fetch": True},
                "id": "toolu_01Rn6Qvj5m7955x9m9Pfxbcx",
            },
        ],
    ),
    ToolMessage(
        name="FetchImage",
        content=[
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/jpeg",
                    "data": image_data,
                },
            },
        ],
        tool_call_id="toolu_01Rn6Qvj5m7955x9m9Pfxbcx",
    ),
]

llm.invoke(messages)
```

Trace:
https://smith.langchain.com/public/d27e4fc1-a96d-41e1-9f52-54f5004122db/r
2024-06-13 20:14:23 -07:00
Lucas Tucker
7114aed78f docs: Standardize ChatGroq (#22751)
Updated the ChatGroq docstring as per issue
https://github.com/langchain-ai/langchain/issues/22296: updated the
docstring for ChatGroq in langchain_groq to match the description (in the
appendix) provided in that issue.

Issue: This PR is in response to issue
https://github.com/langchain-ai/langchain/issues/22296, and more
specifically the ChatGroq model. In particular, this PR updates the
docstring for langchain/libs/partners/groq/langchain_groq/chat_model.py
by adding the following sections: Instantiate, Invoke, Stream, Async,
Tool calling, Structured Output, and Response metadata. I used the
template from the Anthropic implementation and referenced the Appendix
of the original issue post. I also noted that: `usage_metadata` returns
None for all ChatGroq models I tested; there is no mention of image
input in the ChatGroq documentation; unlike that of ChatHuggingFace,
`.stream(messages)` for ChatGroq returned blocks of output.

---------

Co-authored-by: lucast2021 <lucast2021@headroyce.org>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-14 03:08:36 +00:00
Anush
e002c855bd qdrant[patch]: Use collection_exists API instead of exceptions (#22764)
## Description

Currently, the Qdrant integration relies on exceptions raised by
[`get_collection`
](https://qdrant.tech/documentation/concepts/collections/#collection-info)
to check if a collection exists.

Using
[`collection_exists`](https://qdrant.tech/documentation/concepts/collections/#check-collection-existence)
is recommended to avoid missing any unhandled exceptions. This PR
addresses this.
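
A minimal sketch of the new pattern, assuming a recent qdrant-client that exposes `collection_exists()`:

```python
from qdrant_client import QdrantClient

client = QdrantClient(":memory:")
if client.collection_exists("my_collection"):
    info = client.get_collection("my_collection")  # safe: no exception-based probing
else:
    ...  # create the collection before using it
```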

## Testing
All integration and unit tests pass. No user-facing changes.
2024-06-13 20:01:32 -07:00
Anindyadeep
c417803908 community[minor]: Prem Templates (#22783)
This PR adds the Prem Template feature to ChatPremAI.
Additionally, it fixes a minor bug with an API auth error when the API key
is passed through arguments.
2024-06-13 19:59:28 -07:00
Stefano Lottini
4160b700e6 docs: Astra DB vectorstore, adjust syntax for automatic-embedding example (#22833)
Description: Adjusting the syntax for creating the vectorstore
collection (in the case of automatic embedding computation) for the most
idiomatic way to submit the stored secret name.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-14 02:52:32 +00:00
maang-h
1055b9a309 community[minor]: Implement ZhipuAIEmbeddings interface (#22821)
- **Description:** Implement the ZhipuAIEmbeddings interface, including:
     - The `embed_query` method
     - The `embed_documents` method

refer to [ZhipuAI
Embedding-2](https://open.bigmodel.cn/dev/api#text_embedding)
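
A minimal usage sketch (assumes a valid ZhipuAI API key; the exact constructor argument name is an assumption):

```python
from langchain_community.embeddings import ZhipuAIEmbeddings

embeddings = ZhipuAIEmbeddings(api_key="your-api-key")
query_vector = embeddings.embed_query("What is Embedding-2?")
doc_vectors = embeddings.embed_documents(["first document", "second document"])
```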

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-06-13 19:45:11 -07:00
Leonid Ganeline
46c9784127 docs: ReAct reference (#22830)
`ReAct` is used all across LangChain, but it was not referenced
properly.
Added references to the original paper.
2024-06-13 19:39:28 -07:00
Giacomo Berardi
712aa0c529 docs: fixes for Elasticsearch integrations, cache doc and providers list (#22817)
Some minor fixes in the documentation:
 - ElasticsearchCache initialization is now correct
 - List of integrations for ES updated
2024-06-13 19:39:10 -07:00
Isaac Francisco
f9a6d5c845 infra: lint new docs to match doc loader template (#22867) 2024-06-13 19:34:50 -07:00
Bagatur
8bd368d07e cli[patch]: Release 0.0.25 (#22876) 2024-06-14 02:31:04 +00:00
Isaac Francisco
75e966a2fa docs, cli[patch]: document loaders doc template (#22862)
From: https://github.com/langchain-ai/langchain/pull/22290

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-13 19:28:57 -07:00
Hayden Wolff
d1cdde267a docs: update NVIDIA Riva tool to use NVIDIA NIM for LLM (#22873)
**Description:**
Update the NVIDIA Riva tool documentation to use NVIDIA NIM for the LLM.
Show how to use NVIDIA NIMs and link to documentation for LangChain with
NIM.

---------

Co-authored-by: Hayden Wolff <hwolff@nvidia.com>
Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
2024-06-13 19:26:05 -07:00
Zeeshan Qureshi
ada1e5cc64 docs: s/path_images/images/ for ImageCaptionLoader keyword arguments (#22857)
Quick update to `ImageCaptionLoader` documentation to reflect what's in
code.
2024-06-13 18:37:12 -07:00
liuzc9
41e232cb82 Fix typo in vearch.md (#22840)
Fix typo
2024-06-13 18:24:51 -07:00
Kagura Chen
57783c5e55 Fix: lint errors and update Field alias in models.py and AutoSelectionScorer initialization (#22846)
This PR addresses several lint errors in the core package of LangChain.
Specifically, the following issues were fixed:

1. Unexpected keyword argument "required" for "Field" [call-arg]
2. tests/integration_tests/chains/test_cpal.py:263: error: Unexpected
keyword argument "narrative_input" for "QueryModel" [call-arg]
2024-06-13 18:18:00 -07:00
Erick Friis
5bc774827b langchain: release 0.2.4 (#22872) 2024-06-14 00:14:48 +00:00
Erick Friis
7234fd0f51 core: release 0.2.6 (#22868) 2024-06-13 22:22:34 +00:00
Jacob Lee
bcbb43480c core[patch]: Treat type as a special field when merging lists (#22750)
Should we even log a warning? At least for Anthropic, it's expected to
get e.g. `text_block` followed by `text_delta`.

@ccurme @baskaryan @efriis
2024-06-13 15:08:24 -07:00
Nuno Campos
bae82e966a core: In astream_events v2 propagate cancel/break to the inner astream call (#22865)
- previous behavior was for the inner astream to continue running with
no interruption
- also propagate break in core runnable methods
2024-06-13 15:02:48 -07:00
Eugene Yurtsev
a766815a99 experimental[patch]/docs[patch]: Update links to security docs (#22864)
Minor update to newest version of security docs (content should be
identical).
2024-06-13 20:29:34 +00:00
Eugene Yurtsev
8f7cc73817 ci: Add script to check for pickle usage in community (#22863)
Add script to check for pickle usage in community.
2024-06-13 16:13:15 -04:00
Eugene Yurtsev
77209f315e community[patch]: FAISS VectorStore deserializer should be opt-in (#22861)
The FAISS deserializer uses the pickle module; users have to opt in to
deserialize.
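
A minimal sketch of the opt-in (assumes a locally saved index folder named "faiss_index"; only set the flag for files you created or trust):

```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings  # any Embeddings implementation works

store = FAISS.load_local(
    "faiss_index",
    OpenAIEmbeddings(),
    allow_dangerous_deserialization=True,  # explicit opt-in because loading uses pickle
)
```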
2024-06-13 15:48:13 -04:00
Eugene Yurtsev
ce0b0f22a1 experimental[major]: Force users to opt-in into code that relies on the python repl (#22860)
This should make it obvious that a few of the agents in langchain
experimental rely on the python REPL as a tool under the hood, and will
force users to opt-in.
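
A minimal sketch, assuming the pandas DataFrame agent is one of the affected entry points and that the opt-in flag is `allow_dangerous_code`:

```python
import pandas as pd
from langchain_experimental.agents import create_pandas_dataframe_agent
from langchain_openai import ChatOpenAI

df = pd.DataFrame({"a": [1, 2, 3]})
agent = create_pandas_dataframe_agent(
    ChatOpenAI(model="gpt-4o", temperature=0),
    df,
    allow_dangerous_code=True,  # explicit opt-in: the agent runs model-generated Python
)
```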
2024-06-13 15:41:24 -04:00
Isaac Francisco
869523ad72 [docs]: added info for TavilySearchResults (#22765) 2024-06-13 12:14:11 -07:00
ccurme
42257b120f partners: fix numpy dep (#22858)
Following https://github.com/langchain-ai/langchain/pull/22813, which
added python 3.12 to CI, here we update numpy accordingly in partner
packages.
2024-06-13 14:46:42 -04:00
Isaac Francisco
345fd3a556 minor functionality change: adding API functionality to tavilysearch (#22761) 2024-06-13 11:10:28 -07:00
Isaac Francisco
034257e9bf docs: improved recursive url loader docs (#22648)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-13 11:09:35 -07:00
Isaac Francisco
e832bbb486 [docs]: bind tools (#22831) 2024-06-13 09:50:43 -07:00
ccurme
b626c3ca23 groq[patch]: add usage_metadata to (a)invoke and (a)stream (#22834) 2024-06-13 10:26:27 -04:00
Jacob Lee
e01e5d5a91 docs[patch]: Improve Groq integration page (#22844)
Was bare bones and got marked by folks as unhelpful.

CC @efriis @colemccracken
2024-06-13 03:40:29 -07:00
Jacob Lee
12eff6a130 docs[patch]: Readd Pydantic compatibility docs (#22836)
As a how-to guide.

CC @eyurtsev @hwchase17
2024-06-13 02:56:10 -07:00
Jacob Lee
cb654a3245 docs[patch]: Adds multimodal column to chat models table, move up in concepts (#22837)
CC @hwchase17 @baskaryan
2024-06-13 02:26:55 -07:00
James Braza
45b394268c core[patch]: allowing latest packaging versions (#22792)
Allowing version 24 of https://github.com/pypa/packaging

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-12 23:22:20 +00:00
Jacob Lee
00ad197502 docs[patch]: Add structured output to conceptual docs (#22791)
This downgrades `Function/tool calling` from an h3 to an h4, which means
it'll no longer show up in the right sidebar, but any direct links will
still work. I think that is ok, but LMK if you disapprove.

CC @hwchase17 @eyurtsev @rlancemartin
2024-06-12 15:30:51 -07:00
Karim Lalani
276be6cdd4 [experimental][llms][OllamaFunctions] tool calling related fixes (#22339)
Fixes issues with tool calling to handle tool objects correctly. Added
support to handle ToolMessage correctly.
Added additional checks for error conditions.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-12 16:34:43 -04:00
Christophe Bornet
d04e899b56 ci: add testing with Python 3.12 (#22813)
We need to use a different version of numpy for py3.8 and py3.12 in
pyproject.
And so do projects that use that Python version range and import
langchain.

    - **Twitter handle:** _cbornet
2024-06-12 16:31:36 -04:00
HyoJin Kang
b6bf2bb234 community[patch]: fix database uri type in SQLDatabase (#22661)
**Description**
SQLAlchemy uses the "sqlalchemy.engine.URL" type for the db uri argument.
Added the 'URL' type for compatibility.
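
A minimal sketch (assumes a reachable database): a sqlalchemy.engine.URL can now be passed where a plain connection string was expected.

```python
from sqlalchemy.engine import URL
from langchain_community.utilities import SQLDatabase

url = URL.create(
    "postgresql+psycopg2",
    username="user",
    password="secret",
    host="localhost",
    database="mydb",
)
db = SQLDatabase.from_uri(url)  # previously only a str URI was accepted
```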

**Issue**: None

**Dependencies:** None

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-12 15:11:00 -04:00
Eugene Yurtsev
5dbbdcbf8e core[patch]: Update remaining root_validators (#22829)
This PR updates the remaining root_validators in core to either be explicit pre-init or post-init validators.
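
Illustrative only: the explicit pydantic-v1 pattern being standardized on, with validators declared as either pre-init (pre=True) or post-init (skip_on_failure=True).

```python
from langchain_core.pydantic_v1 import BaseModel, root_validator


class ExampleConfig(BaseModel):
    api_key: str = ""

    @root_validator(pre=True)
    def fill_defaults(cls, values: dict) -> dict:
        values.setdefault("api_key", "resolved-from-env")
        return values

    @root_validator(pre=False, skip_on_failure=True)
    def check_consistency(cls, values: dict) -> dict:
        assert values["api_key"], "api_key must not be empty"
        return values
```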
2024-06-12 14:47:40 -04:00
Eugene Yurtsev
265e650e64 community[patch]: Update root_validators embeddings: llamacpp, jina, dashscope, mosaicml, huggingface_hub, Toolkits: Connery, ChatModels: PAI_EAS, (#22828)
This PR updates root validators for:

* Embeddings: llamacpp, jina, dashscope, mosaicml, huggingface_hub
* Toolkits: Connery
* ChatModels: PAI_EAS

Following this issue:
https://github.com/langchain-ai/langchain/issues/22819
2024-06-12 13:59:05 -04:00
JonZeolla
32ba8cfab0 community[minor]: implement huggingface show_progress consistently (#22682)
- **Description:** This implements `show_progress` more consistently
(i.e. it is also added to the `HuggingFaceBgeEmbeddings` object).
- **Issue:** This implements `show_progress` more consistently in the
embeddings huggingface classes. Previously this could have been set via
`encode_kwargs`.
 - **Dependencies:** None
 - **Twitter handle:** @jonzeolla
2024-06-12 17:30:56 +00:00
Eugene Yurtsev
74e705250f core[patch]: update some root_validators (#22787)
Update some of the @root_validators to be explicit pre=True or
pre=False, skip_on_failure=True for pydantic 2 compatibility.
2024-06-12 13:04:57 -04:00
bincat
3d6e8547f9 docs: fix function name in tutorials/agents.ipynb (#22809)
the function called in the following example is `create_react_agent`, not
`create_tool_calling_executor`
2024-06-12 12:30:35 -04:00
mrhbj
a1268d9e9a community[patch]: fix hunyuan message include chinese signature error (#22795) (#22796)
… (#22795)

2024-06-12 12:30:22 -04:00
Kagura Chen
513f1d8037 docs: update repo_structure.mdx to reflect latest code changes (#22810)
**Description:** This PR updates the documentation to reflect the recent
code changes.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-12 12:30:04 -04:00
Mr. Lance E Sloan «UMich»
08c466c603 community[patch]: bugfix for YoutubeLoader's LINES format (#22815)
- **Description:** A change I submitted recently introduced a bug in
`YoutubeLoader`'s `LINES` output format. In those conditions, curly
braces ("`{}`") create a set, not a dictionary (see the sketch after this
list). This bugfix explicitly specifies that a dictionary is created.
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter:** lsloan_umich
- **Mastodon:**
[lsloan@mastodon.social](https://mastodon.social/@lsloan)
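
Illustrative only: braces without key/value pairs build a set, which was the bug; the fix constructs a dict explicitly.

```python
pairs = [("start", 0.0), ("duration", 2.5)]
as_set = {key for key, _ in pairs}               # {'start', 'duration'} -- values are lost
as_dict = {key: value for key, value in pairs}   # {'start': 0.0, 'duration': 2.5}
print(type(as_set).__name__, type(as_dict).__name__)  # set dict
```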
2024-06-12 12:29:34 -04:00
Philippe PRADOS
23c22fcbc9 langchain[minor]: Make EmbeddingsFilters async (#22737)
Add native async implementation for EmbeddingsFilter
2024-06-12 12:27:26 -04:00
endrajeet
b45bf78d2e Update index.mdx (#22818)
changed "# 🌟Recognition" to "### 🌟 Recognition" to match the rest of the
subheadings.

2024-06-12 12:27:16 -04:00
Bagatur
8203c1ff87 infra: lint new docs to match templates (#22786) 2024-06-11 13:26:35 -07:00
ccurme
936aedd10c mistral[patch]: add usage_metadata to (a)invoke and (a)stream (#22781) 2024-06-11 15:34:50 -04:00
Jiří Spilka
20e3662acf docs: Correct code examples in the Apify's notebooks (#22768)
**Description:** Correct code examples in the Apify document load
notebook and Apify Dataset notebook

**Issue**: None
**Dependencies**: None
**Twitter handle**: None
2024-06-11 15:20:16 -04:00
mrhbj
9212c9fcb8 community[patch]: fix hunyuan client json analysis (#22452) (#22767)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-11 19:05:18 +00:00
Rohan Aggarwal
86e8224cf1 community[patch]: Support for old clients (Thin and Thick) Oracle Vector Store (#22766)
Support for old clients (Thin and Thick) in the Oracle Vector Store.

Tests: covered by our own local tests.

---------

Co-authored-by: rohan.aggarwal@oracle.com <rohaagga@phoenix95642.dev3sub2phx.databasede3phx.oraclevcn.com>
2024-06-11 11:36:06 -07:00
Jacob Lee
232908a46d docs[patch]: Adds streaming conceptual doc (#22760)
CC @hwchase17 @baskaryan
2024-06-11 11:03:52 -07:00
Mr. Lance E Sloan «UMich»
84dc2dd059 community[patch]: Load YouTube transcripts (captions) as fixed-duration chunks with start times (#21710)
- **Description:** Add a new format, `CHUNKS`, to
`langchain_community.document_loaders.youtube.YoutubeLoader` which
creates multiple `Document` objects from YouTube video transcripts
(captions), each of a fixed duration. The metadata of each chunk
`Document` includes the start time of each one and a URL to that time in
the video on the YouTube website.
  
I had implemented this for UMich (@umich-its-ai) in a local module, but
it makes sense to contribute this to LangChain community for all to
benefit and to simplify maintenance.

- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter:** lsloan_umich
- **Mastodon:**
[lsloan@mastodon.social](https://mastodon.social/@lsloan)

With regards to **tests and documentation**, most existing features of
the `YoutubeLoader` class are not tested. Only the
`YoutubeLoader.extract_video_id()` static method had a test. However,
while I was waiting for this PR to be reviewed and merged, I had time to
add a test for the chunking feature I've proposed in this PR.

I have added an example of using chunking to the
`docs/docs/integrations/document_loaders/youtube_transcript.ipynb`
notebook.
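
A minimal usage sketch; the argument names (`transcript_format`, `chunk_size_seconds`) and the placeholder URL are assumptions based on the description above:

```python
from langchain_community.document_loaders.youtube import TranscriptFormat, YoutubeLoader

loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=VIDEO_ID",  # placeholder video URL
    transcript_format=TranscriptFormat.CHUNKS,
    chunk_size_seconds=30,
)
docs = loader.load()
print(docs[0].metadata)  # includes the chunk's start time and a timestamped URL
```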

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-11 17:44:36 +00:00
Aayush Kataria
71811e0547 community[minor]: Adds a vector store for Azure Cosmos DB for NoSQL (#21676)
This PR add supports for Azure Cosmos DB for NoSQL vector store.

Summary:

Description: added vector store integration for Azure Cosmos DB for
NoSQL Vector Store,
Dependencies: azure-cosmos dependency,
Tag maintainer: @hwchase17, @baskaryan @efriis @eyurtsev

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-11 10:34:01 -07:00
Mohammad Mohtashim
36cad5d25c [Community]: Added Metadata filter support for DocumentDB Vector Store (#22777)
- **Description:** As pointed out in this issue #22770, DocumentDB
`similarity_search` does not support filtering through metadata which
this PR adds by passing in the parameter `filter`. Also this PR fixes a
minor Documentation error.
- **Issue:** #22770

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-11 16:37:53 +00:00
Dmitry Stepanov
912751e268 Ollama vision support (#22734)
**Description:** Ollama vision with messages in OpenAI-style support `{
"image_url": { "url": ... } }`
**Issue:** #22460 

Added a flexible solution for ChatOllama to support chat messages with
images. It works when you provide `image_url` either as a string or as a
dict with "url" inside (like OpenAI does), which also makes it possible to
use tuples with `ChatPromptTemplate.from_messages()`.
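
A minimal sketch (assumes a local Ollama server running a vision-capable model such as llava; the image payload is a placeholder):

```python
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage

llm = ChatOllama(model="llava")
message = HumanMessage(
    content=[
        {"type": "text", "text": "What is in this picture?"},
        # OpenAI-style block: a dict with "url" inside is now accepted,
        # as is a plain string for image_url.
        {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,..."}},
    ]
)
response = llm.invoke([message])
```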

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-11 16:10:19 +00:00
Philippe PRADOS
0908b01cb2 langchain[minor]: Add native async implementation to LLMFilter, add concurrency to both sync and async paths (#22739)
Thank you for contributing to LangChain!

- [ ] **PR title**: "langchain: Fix chain_filter.py to be compatible
with async"


- [ ] **PR message**: 
    - **Description:** chain_filter is not compatible with async.
    - **Twitter handle:** pprados


- [X ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Signed-off-by: zhangwangda <zhangwangda94@163.com>
Co-authored-by: Prakul <discover.prakul@gmail.com>
Co-authored-by: Lei Zhang <zhanglei@apache.org>
Co-authored-by: Gin <ictgtvt@gmail.com>
Co-authored-by: wangda <38549158+daziz@users.noreply.github.com>
Co-authored-by: Max Mulatz <klappradla@posteo.net>
2024-06-11 10:55:40 -04:00
Jaeyeon Kim(김재연)
ce4e29ae42 community[minor]: fix redis store docstring and streamline initialization code (#22730)
Thank you for contributing to LangChain!

### Description

Fix the example in the docstring of redis store.
Changed the initialization logic, removed a redundant check, and enhanced
the error message.

### Issue

The example in docstring of how to use redis store was wrong.

![image](https://github.com/langchain-ai/langchain/assets/37469330/78c5d9ce-ee66-45b3-8dfe-ea29f125e6e9)

### Dependencies
Nothing



- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-11 14:08:05 +00:00
am-kinetica
ad101adec8 community[patch]: Kinetica Integrations handled error in querying; quotes in table names; updated gpudb API (#22724)
- [ ] **Miscellaneous updates and fixes**: 
- **Description:** Handled error in querying; quotes in table names;
updated gpudb API
- **Issue:** Previously threw an error with a message that was difficult to
understand if a query failed or returned no records
    - **Dependencies:** Updated GPUDB API version to `7.2.0.9`


@baskaryan @hwchase17
2024-06-11 10:01:26 -04:00
NithinBairapaka
27b9ea14a5 docs: Updated integration docs with required package installations (#22392)
**Title:** Updated integration docs with required package installations
   **Issue:**  #22005
2024-06-11 01:44:05 +00:00
Albert Gil López
1710423de3 docs: correct path in readme (#22383)
Description: Fix incorrect path in README instructions.
Issue: N/A
Dependencies: None
Twitter handle: @jddam

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-06-10 17:47:39 -07:00
Greg Tracy
7e115da16c docs: Fix pixelation in stack graphic (#21554)
This change updates the stack graphic displayed in the top-level README.
The LangChain tile is pixelated in the current graphic.
2024-06-10 22:52:22 +00:00
Leonid Ganeline
55bd8e582b docs: integrations cache: added class table (#22368)
Added a table with the cache classes. See [this table
here](https://langchain-rnpqvikie-langchain.vercel.app/v0.2/docs/integrations/llm_caching/#cache-classes-summary-table).
2024-06-10 15:09:03 -07:00
Jacob Lee
89804c3026 docs: Adds pointers from LLM pages to equivalent chat model pages (#22759)
@baskaryan
2024-06-10 14:13:22 -07:00
Qingchuan Hao
7f180f996b docs: fix langchain expression language link (#22683) 2024-06-10 21:12:47 +00:00
Mathis Joffre
ea43f40daf community[minor]: Add support for OVHcloud AI Endpoints Embedding (#22667)
**Description:** Add support for [OVHcloud AI
Endpoints](https://endpoints.ai.cloud.ovh.net/) Embedding models.

Inspired by:
https://gist.github.com/gmasse/e1f99339e161f4830df6be5d0095349a

Signed-off-by: Joffref <mariusjoffre@gmail.com>
2024-06-10 21:07:25 +00:00
Erick Friis
2aaf86ddae core: fix mustache falsy cases (#22747) 2024-06-10 14:00:12 -07:00
Eugene Yurtsev
5a7eac191a core[patch]: Add missing type annotations (#22756)
Add missing type annotations.

The missing type annotations will raise exceptions with pydantic 2.
2024-06-10 16:59:41 -04:00
Eugene Yurtsev
05d31a2f00 community[patch]: Add missing type annotations (#22758)
Add missing type annotations to objects in community.
These missing type annotations will raise type errors in pydantic 2.
2024-06-10 16:59:28 -04:00
Naka Masato
3237909221 langchain[patch]: allow to use partial variables in create_sql_query_chain (#22688)
- **Description:** allow using partial variables to pass `top_k` and
`table_info` (see the sketch after this list)
- **Issue:** no
- **Dependencies:** no
- **Twitter handle:** @gymnstcs
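
A rough sketch, assuming a SQLDatabase built from the Chinook sample database; partial_variables pre-fill top_k and table_info so the prompt no longer has to declare them as runtime inputs:

```python
from langchain.chains import create_sql_query_chain
from langchain_community.utilities import SQLDatabase
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("sqlite:///Chinook.db")  # assumed sample database
prompt = PromptTemplate(
    input_variables=["input"],
    partial_variables={"top_k": "5", "table_info": db.get_table_info()},
    template=(
        "Given the table info:\n{table_info}\n"
        "Write a syntactically correct SQL query (return at most {top_k} rows) for: {input}"
    ),
)
chain = create_sql_query_chain(ChatOpenAI(model="gpt-4o"), db, prompt=prompt)
print(chain.invoke({"question": "List five artists."}))
```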

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-10 20:58:30 +00:00
Bharat Ramanathan
2b5631a6be community[patch]: fix WandbTracer to work with new "RunV2" API (#22673)
- **Description:** This PR updates the `WandbTracer` to work with the
new RunV2 API so that wandb Traces logging works correctly for new
LangChain versions. Here's an example
[run](https://wandb.ai/parambharat/langchain-tracing/runs/wpm99ftq) from
the existing tests
- **Issue:** https://github.com/wandb/wandb/issues/7762
- **Twitter handle:** @ParamBharat

2024-06-10 13:56:35 -07:00
Oguz Vuruskaner
f0f4532579 community[patch]: fix deepinfra inference (#22680)
This PR includes:

1. Update of default model to LLama3.
2. Handle some 400x errors with more user friendly error messages.
3. Handle user errors.
2024-06-10 13:55:55 -07:00
Lucas Tucker
cb79e80b0b docs: standardize ChatHuggingFace (#22693)
**Updated ChatHuggingFace doc string as per issue #22296**:
"langchain_huggingface: updated docstring for ChatHuggingFace in
langchain_huggingface to match that of the description (in the appendix)
provided in issue #22296. "

**Issue:** This PR is in response to issue #22296, and more specifically
ChatHuggingFace model. In particular, this PR updates the docstring for
langchain/libs/partners/hugging_face/langchain_huggingface/chat_models/huggingface.py
by adding the following sections: Instantiate, Invoke, Stream, Async,
Tool calling, and Response metadata. I used the template from the
Anthropic implementation and referenced the Appendix of the original
issue post. I also noted that: langchain_community hugging face llms do
not work with langchain_huggingface's ChatHuggingFace model (at least
for me); the .stream(messages) functionality of ChatHuggingFace only
returned a block of response.

---------

Co-authored-by: lucast2021 <lucast2021@headroyce.org>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-10 20:54:36 +00:00
Erick Friis
d92f2251c8 docs: couchbase partner package (#22757) 2024-06-10 20:53:03 +00:00
Tomaz Bratanic
76a193decc community[patch]: Add function response to graph cypher qa chain (#22690)
LLMs struggle with Graph RAG because it differs from vector RAG: you don't
provide the whole context, only the answer, and the LLM has to take it on
trust. That doesn't really work a lot of the time. However, if you wrap the
context as a function response, the accuracy is much better.

btw... `union[LLMChain, Runnable]` is linting fun, that's why so many
ignores
2024-06-10 13:52:17 -07:00
X-HAN
34edfe4a16 community[minor]: add Volcengine Rerank (#22700)
**Description:** this PR adds Volcengine Rerank capability to Langchain,
you can find Volcengine Rerank API from
[here](https://www.volcengine.com/docs/84313/1254474) &
[here](https://www.volcengine.com/docs/84313/1254605).
[Volcengine](https://www.volcengine.com/) is a cloud service platform
developed by ByteDance, the parent company of TikTok. You can obtain
Volcengine API AK/SK from
[here](https://www.volcengine.com/docs/84313/1254553).

**Dependencies:** VolcengineRerank depends on `volcengine` python
package.

**Twitter handle:** my twitter/x account is https://x.com/LastMonopoly
and I'd like a mention, thank you!


**Tests and docs**
  1. integration test: `test_volcengine_rerank.py`
  2. example notebook: `volcengine_rerank.ipynb`

**Lint and test**: I have run `make format`, `make lint` and `make test`
from the root of the package I've modified.
2024-06-10 13:41:05 -07:00
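A rough sketch of how the new compressor might be used; the import path and constructor argument names are assumptions based on this PR's description, while `compress_documents` is the standard document-compressor interface:

```python
from langchain_core.documents import Document
from langchain_community.document_compressors.volcengine_rerank import VolcengineRerank  # assumed path

# ak/sk are the Volcengine access key pair; argument names here are assumptions
reranker = VolcengineRerank(ak="YOUR_AK", sk="YOUR_SK", top_n=2)

docs = [
    Document(page_content="Volcengine is ByteDance's cloud platform."),
    Document(page_content="Rerankers reorder retrieved documents by relevance."),
]
ranked = reranker.compress_documents(docs, query="What is Volcengine?")
```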
Prakul
9eacce9356 docs:Update reference to langchain-mongodb (#22705)
**Description**: Update reference to langchain-mongodb
2024-06-10 13:35:21 -07:00
Ikko Eltociear Ashimine
4197c9c85f docs: update azure_container_apps_dynamic_sessions_data_analyst.ipynb (#22718)
colum -> column
2024-06-10 13:33:40 -07:00
Jacob Lee
e4183cbc4e docs[patch]: Add caution on OpenAI LLMs integration page (#22754)
@baskaryan do we like?

<img width="1040" alt="Screenshot 2024-06-10 at 12 16 45 PM"
src="https://github.com/langchain-ai/langchain/assets/6952323/8893063f-1acf-4a56-9ee5-a8a2b1560277">
2024-06-10 13:27:22 -07:00
Mohammad Mohtashim
c3cce98d86 community[patch]: Small Fix in OutlookMessageLoader (Close the Message once Open) (#22744)
- **Description:** A very small fix where we close the message once it has
been opened
- **Issue:** #22729
2024-06-10 13:08:39 -07:00
Bagatur
86a3f6edf1 docs: standardize ChatVertexAI (#22686)
Part of #22296. Part two of
https://github.com/langchain-ai/langchain-google/pull/287
2024-06-10 12:50:50 -07:00
ccurme
f9fdca6cc2 openai: add parallel_tool_calls to api ref (#22746)
![Screenshot 2024-06-10 at 1 41 24
PM](https://github.com/langchain-ai/langchain/assets/26529506/2626bf9c-41c6-4431-b2e1-f59de1e4e468)
2024-06-10 17:44:43 +00:00
Max Mulatz
058a64c563 Community[minor]: Add language parser for Elixir (#22742)
Hi 👋 

First off, thanks a ton for your work on this 💚 Really appreciate what
you're providing here for the community.

## Description

This PR adds a basic language parser for the
[Elixir](https://elixir-lang.org/) programming language. The parser code
is based upon the approach outlined in
https://github.com/langchain-ai/langchain/pull/13318: it's using
`tree-sitter` under the hood and aligns with all the other `tree-sitter`
based parsers added in that PR.

The `CHUNK_QUERY` I'm using here is probably not the most sophisticated
one, but it worked for my application. It's a starting point to provide
"core" parsing support for Elixir in LangChain. It enables people to use
the language parser in real-world applications, which may then lead
to further tweaking of the queries. I consider this PR just the
groundwork.

- **Dependencies:** requires `tree-sitter` and `tree-sitter-languages`
from the extended dependencies
- **Twitter handle:** `@bitcrowd`

## Checklist

- [x] **PR title**: "package: description"
- [x] **Add tests and docs**
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified.

<!-- If no one reviews your PR within a few days, please @-mention one
of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17. -->
2024-06-10 15:56:57 +00:00
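A minimal sketch of using the new support through the existing tree-sitter based `LanguageParser`; the project path and glob are illustrative, and `tree-sitter` plus `tree-sitter-languages` must be installed:

```python
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import LanguageParser

# Split an Elixir codebase into per-definition documents
loader = GenericLoader.from_filesystem(
    "./my_elixir_app/lib",
    glob="**/*",
    suffixes=[".ex", ".exs"],
    parser=LanguageParser(language="elixir"),
)
docs = loader.load()
```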
wangda
28e956735c docs:Correcting spelling mistakes in readme (#22664)
Signed-off-by: zhangwangda <zhangwangda94@163.com>
2024-06-10 15:33:41 +00:00
Gin
6f54abc252 docs: Add a missing dot in concepts.mdx (#22677) 2024-06-10 15:30:56 +00:00
Philippe PRADOS
2d4689d721 langchain[minor]: Add pgvector to list of supported vectorstores in self query retriever (#22678)
The fact that we outsourced pgvector to another project has an
unintended effect. The mapping dictionary found by
`_get_builtin_translator()` cannot recognize the new version of pgvector
because it comes from another package.
`SelfQueryRetriever` no longer knows `PGVector`.

I propose to fix this by creating a global dictionary that can be
populated by various database implementations. Thus, importing
`langchain_postgres` will allow the registration of the `PGVector`
mapping.

But for the moment I'm just adding a lazy import.

Furthermore, the implementation of _get_builtin_translator()
reconstructs the BUILTIN_TRANSLATORS variable with each invocation,
which is not very efficient. A global map would be an optimization.

- **Twitter handle:** pprados

@eyurtsev, can you review this PR? And unlock the PR [Add async mode for
pgvector](https://github.com/langchain-ai/langchain-postgres/pull/32)
and PR [community[minor]: Add SQL storage
implementation](https://github.com/langchain-ai/langchain/pull/22207)?

Are you in favour of a global dictionary-based implementation of
Translator?
2024-06-10 11:27:47 -04:00
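For context, a sketch of the end-to-end usage this unblocks once `langchain_postgres` is registered; the connection string and metadata schema are illustrative:

```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_postgres import PGVector

vectorstore = PGVector(
    embeddings=OpenAIEmbeddings(),
    collection_name="articles",
    connection="postgresql+psycopg://user:pass@localhost:5432/db",  # placeholder DSN
)
retriever = SelfQueryRetriever.from_llm(
    ChatOpenAI(temperature=0),
    vectorstore,
    "Short summaries of news articles",
    [AttributeInfo(name="year", description="Year the article was published", type="integer")],
)
retriever.invoke("articles about chip manufacturing from 2023")
```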
Lei Zhang
5ba1899cd7 infra: Scheduled GitHub Actions to run only on the upstream repository (#22707)
**Description:** Scheduled GitHub Actions to run only on the upstream
repository

**Issue:** Fixes #22706 

**Twitter handle:** @coolbeevip
2024-06-10 11:07:42 -04:00
Prakul
3f76c9e908 docs: Update MongoDB information in llm_caching (#22708)
**Description:** Update MongoDB information in llm_caching
2024-06-10 11:05:55 -04:00
fzowl
c1fced9269 docs: VoyageAI new embedding and reranking models (#22719) 2024-06-09 09:12:43 -07:00
Enzo Poggio
8f019e91d7 community[patch]: Use Custom Logger Instead of Root Logger in get_user_agent Function (#22691)
## Description
This PR addresses a logging inconsistency in the `get_user_agent`
function. Previously, the function was using the root logger to log a
warning message when the "USER_AGENT" environment variable was not set.
This bypassed the custom logger `log` that was created at the start of
the module, leading to potential inconsistencies in logging behavior.

Changes:
- Replaced `logging.warning` with `log.warning` in the `get_user_agent`
function to ensure that the custom logger is used.

This change ensures that all logging in the `get_user_agent` function
respects the configurations of the custom logger, leading to more
consistent and predictable logging behavior.

## Dependencies

None

## Issue 

None

## Tests and docs

☝🏻 see description


## `make format`, `make lint` & `cd libs/community; make test`

```shell
> make format 
poetry run ruff format docs templates cookbook
1417 files left unchanged
poetry run ruff check --select I --fix docs templates cookbook
All checks passed!
```

```shell
> make lint
poetry run ruff check docs templates cookbook
All checks passed!
poetry run ruff format docs templates cookbook --diff
1417 files already formatted
poetry run ruff check --select I docs templates cookbook
All checks passed!
git grep 'from langchain import' docs/docs templates cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
```

~cd libs/community; make test~ too many dependencies for integration ...

```shell
>  poetry run pytest tests/unit_tests   
....
==== 884 passed, 466 skipped, 4447 warnings in 15.93s ====
```

I choose you randomly : @ccurme
2024-06-08 02:33:07 +00:00
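As a generic illustration of the pattern described above (names and messages are illustrative, not the exact community code):

```python
import logging
import os

# Module-level logger: respects handlers and levels configured for this module,
# unlike logging.warning(...), which always goes through the root logger.
log = logging.getLogger(__name__)


def get_user_agent() -> str:
    user_agent = os.environ.get("USER_AGENT")
    if not user_agent:
        log.warning("USER_AGENT environment variable not set, consider setting it.")
        return "DefaultUserAgent"
    return user_agent
```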
Philippe PRADOS
9aabb446c5 community[minor]: Add SQL storage implementation (#22207)
Hello @eyurtsev

- package: langchain-community
- **Description**: Add SQL implementation for docstore. A new
implementation, in line with my other PR ([async
PGVector](https://github.com/langchain-ai/langchain-postgres/pull/32),
[SQLChatMessageMemory](https://github.com/langchain-ai/langchain/pull/22065))
- Twitter handle: pprados

---------

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Piotr Mardziel <piotrm@gmail.com>
Co-authored-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-07 21:17:02 +00:00
Nithish Raghunandanan
f2f0e0e13d couchbase: Add the initial version of Couchbase partner package (#22087)
Co-authored-by: Nithish Raghunandanan <nithishr@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-07 14:04:08 -07:00
Cahid Arda Öz
6c07eb0c12 community[minor]: Add UpstashRatelimitHandler (#21885)
Adding `UpstashRatelimitHandler` callback for rate limiting based on the
number of chain invocations or LLM token usage.

For more details, see [upstash/ratelimit-py
repository](https://github.com/upstash/ratelimit-py) or the notebook
guide included in this PR.

Twitter handle: @cahidarda

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-07 21:02:06 +00:00
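A heavily hedged sketch of how the handler might be attached; the handler's import path and constructor argument names are assumptions based on the PR description, while the `upstash_ratelimit` client usage is standard:

```python
from upstash_ratelimit import FixedWindow, Ratelimit
from upstash_redis import Redis

from langchain_community.callbacks import UpstashRatelimitHandler  # assumed import path

# Allow 10 chain invocations per 10-second window (reads UPSTASH_REDIS_* env vars)
ratelimit = Ratelimit(redis=Redis.from_env(), limiter=FixedWindow(max_requests=10, window=10))

handler = UpstashRatelimitHandler(identifier="user-123", request_ratelimit=ratelimit)  # assumed kwargs
# chain.invoke("...", config={"callbacks": [handler]})  # pass per invocation
```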
Erick Friis
9b3ce16982 docs: remove nonexistent headings (#22685) 2024-06-07 20:02:06 +00:00
Erick Friis
9e03864d64 core: add error message for non-structured llm to StructuredPrompt (#22684)
Previously this was a blank `NotImplementedError` from
`BaseLanguageModel.with_structured_output`.
2024-06-07 19:42:09 +00:00
Eugene Yurtsev
5a9d4b0088 Merge branch 'master' into renderer 2024-05-23 17:56:46 -04:00
Eugene Yurtsev
a098d61226 qx 2024-05-23 16:54:19 -04:00
793 changed files with 30857 additions and 244569 deletions

View File

@@ -1,7 +1,11 @@
import json
import sys
import os
from typing import Dict
from typing import Dict, List, Set
import tomllib
from collections import defaultdict
import glob
LANGCHAIN_DIRS = [
"libs/core",
@@ -11,6 +15,38 @@ LANGCHAIN_DIRS = [
"libs/experimental",
]
def all_package_dirs() -> Set[str]:
return {"/".join(path.split("/")[:-1]) for path in glob.glob("./libs/**/pyproject.toml", recursive=True)}
def dependents_graph() -> dict:
dependents = defaultdict(set)
for path in glob.glob("./libs/**/pyproject.toml", recursive=True):
if "template" in path:
continue
with open(path, "rb") as f:
pyproject = tomllib.load(f)['tool']['poetry']
pkg_dir = "libs" + "/".join(path.split("libs")[1].split("/")[:-1])
for dep in pyproject['dependencies']:
if "langchain" in dep:
dependents[dep].add(pkg_dir)
return dependents
def add_dependents(dirs_to_eval: Set[str], dependents: dict) -> List[str]:
updated = set()
for dir_ in dirs_to_eval:
# handle core manually because it has so many dependents
if "core" in dir_:
updated.add(dir_)
continue
pkg = "langchain-" + dir_.split("/")[-1]
updated.update(dependents[pkg])
updated.add(dir_)
return list(updated)
if __name__ == "__main__":
files = sys.argv[1:]
@@ -21,10 +57,11 @@ if __name__ == "__main__":
}
docs_edited = False
if len(files) == 300:
if len(files) >= 300:
# max diff length is 300 files - there are likely files missing
raise ValueError("Max diff reached. Please manually run CI on changed libs.")
dirs_to_run["lint"] = all_package_dirs()
dirs_to_run["test"] = all_package_dirs()
dirs_to_run["extended-test"] = set(LANGCHAIN_DIRS)
for file in files:
if any(
file.startswith(dir_)
@@ -81,11 +118,13 @@ if __name__ == "__main__":
docs_edited = True
dirs_to_run["lint"].add(".")
dependents = dependents_graph()
outputs = {
"dirs-to-lint": list(
dirs_to_run["lint"] | dirs_to_run["test"] | dirs_to_run["extended-test"]
"dirs-to-lint": add_dependents(
dirs_to_run["lint"] | dirs_to_run["test"] | dirs_to_run["extended-test"], dependents
),
"dirs-to-test": list(dirs_to_run["test"] | dirs_to_run["extended-test"]),
"dirs-to-test": add_dependents(dirs_to_run["test"] | dirs_to_run["extended-test"], dependents),
"dirs-to-extended-test": list(dirs_to_run["extended-test"]),
"docs-edited": "true" if docs_edited else "",
}

View File

@@ -24,6 +24,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
- "3.12"
name: "poetry run pytest -m compile tests/integration_tests #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4

View File

@@ -28,6 +28,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
- "3.12"
name: dependency checks ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v4

View File

@@ -12,7 +12,6 @@ env:
jobs:
build:
environment: Scheduled testing
defaults:
run:
working-directory: ${{ inputs.working-directory }}
@@ -53,8 +52,15 @@ jobs:
shell: bash
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

View File

@@ -34,7 +34,7 @@ jobs:
# so linting on fewer versions makes CI faster.
python-version:
- "3.8"
- "3.11"
- "3.12"
steps:
- uses: actions/checkout@v4

View File

@@ -28,6 +28,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
- "3.12"
name: "make test #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4

View File

@@ -12,7 +12,7 @@ jobs:
strategy:
matrix:
python-version:
- "3.11"
- "3.12"
name: "check doc imports #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4

View File

@@ -7,6 +7,7 @@ on:
jobs:
check-links:
if: github.repository_owner == 'langchain-ai'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4

View File

@@ -26,7 +26,7 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.10'
python-version: '3.11'
- id: files
uses: Ana06/get-changed-files@v2.2.0
- id: set-matrix
@@ -104,6 +104,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
- "3.12"
runs-on: ubuntu-latest
defaults:
run:

31 .github/workflows/check_new_docs.yml vendored Normal file
View File

@@ -0,0 +1,31 @@
---
name: Integration docs lint
on:
push:
branches: [master]
pull_request:
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.10'
- id: files
uses: Ana06/get-changed-files@v2.2.0
- name: Check new docs
run: |
python docs/scripts/check_templates.py ${{ steps.files.outputs.added }}

View File

@@ -10,6 +10,7 @@ env:
jobs:
build:
if: github.repository_owner == 'langchain-ai'
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
runs-on: ubuntu-latest
strategy:
@@ -30,7 +31,6 @@ jobs:
- "libs/partners/google-vertexai"
- "libs/partners/google-genai"
- "libs/partners/aws"
- "libs/partners/nvidia-ai-endpoints"
steps:
- uses: actions/checkout@v4
@@ -40,10 +40,6 @@ jobs:
with:
repository: langchain-ai/langchain-google
path: langchain-google
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-nvidia
path: langchain-nvidia
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-cohere
@@ -58,11 +54,9 @@ jobs:
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai \
langchain/libs/partners/nvidia-ai-endpoints \
langchain/libs/partners/cohere
mv langchain-google/libs/genai langchain/libs/partners/google-genai
mv langchain-google/libs/vertexai langchain/libs/partners/google-vertexai
mv langchain-nvidia/libs/ai-endpoints langchain/libs/partners/nvidia-ai-endpoints
mv langchain-cohere/libs/cohere langchain/libs/partners/cohere
mv langchain-aws/libs/aws langchain/libs/partners/aws
@@ -122,7 +116,6 @@ jobs:
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai \
langchain/libs/partners/nvidia-ai-endpoints \
langchain/libs/partners/cohere \
langchain/libs/partners/aws

View File

@@ -273,7 +273,7 @@
"source": [
"# Tool schema for querying SQL db\n",
"class create_df_from_sql(BaseModel):\n",
" \"\"\"Execute a PostgreSQL SELECT statement and use the results to create a DataFrame with the given colum names.\"\"\"\n",
" \"\"\"Execute a PostgreSQL SELECT statement and use the results to create a DataFrame with the given column names.\"\"\"\n",
"\n",
" select_query: str = Field(..., description=\"A PostgreSQL SELECT statement.\")\n",
" # We're going to convert the results to a Pandas DataFrame that we pass\n",

View File

@@ -38,6 +38,8 @@ generate-files:
$(PYTHON) scripts/model_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/document_loader_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/copy_templates.py $(INTERMEDIATE_DIR)
wget -q https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O $(INTERMEDIATE_DIR)/langserve.md

View File

@@ -10,12 +10,21 @@ from pathlib import Path
from typing import Dict, List, Literal, Optional, Sequence, TypedDict, Union
import toml
import typing_extensions
from langchain_core.runnables import Runnable, RunnableSerializable
from pydantic import BaseModel
ROOT_DIR = Path(__file__).parents[2].absolute()
HERE = Path(__file__).parent
ClassKind = Literal["TypedDict", "Regular", "Pydantic", "enum"]
ClassKind = Literal[
"TypedDict",
"Regular",
"Pydantic",
"enum",
"RunnablePydantic",
"RunnableNonPydantic",
]
class ClassInfo(TypedDict):
@@ -69,8 +78,36 @@ def _load_module_members(module_path: str, namespace: str) -> ModuleMembers:
continue
if inspect.isclass(type_):
if type(type_) == typing._TypedDictMeta: # type: ignore
# The classification of the class is used to select a template
# for the object when rendering the documentation.
# See `templates` directory for defined templates.
# This is a hacky solution to distinguish between different
# kinds of things that we want to render.
if type(type_) is typing_extensions._TypedDictMeta: # type: ignore
kind: ClassKind = "TypedDict"
elif type(type_) is typing._TypedDictMeta: # type: ignore
kind: ClassKind = "TypedDict"
elif (
issubclass(type_, Runnable)
and issubclass(type_, BaseModel)
and type_ is not Runnable
):
# RunnableSerializable subclasses Pydantic's BaseModel,
# for which we use autodoc_pydantic for rendering.
# We need to distinguish these from regular Pydantic
# classes so we can hide inherited Runnable methods
# and provide a link to the Runnable interface from
# the template.
kind = "RunnablePydantic"
elif (
issubclass(type_, Runnable)
and not issubclass(type_, BaseModel)
and type_ is not Runnable
):
# These are not pydantic classes but are Runnable.
# We'll hide all the inherited methods from Runnable
# but use a regular class template to render.
kind = "RunnableNonPydantic"
elif issubclass(type_, Enum):
kind = "enum"
elif issubclass(type_, BaseModel):
@@ -251,6 +288,10 @@ Classes
template = "enum.rst"
elif class_["kind"] == "Pydantic":
template = "pydantic.rst"
elif class_["kind"] == "RunnablePydantic":
template = "runnable_pydantic.rst"
elif class_["kind"] == "RunnableNonPydantic":
template = "runnable_non_pydantic.rst"
else:
template = "class.rst"

View File

@@ -33,4 +33,4 @@
{% endblock %}
.. example_links:: {{ objname }}
.. example_links:: {{ objname }}

View File

@@ -15,6 +15,8 @@
:member-order: groupwise
:show-inheritance: True
:special-members: __call__
:exclude-members: construct, copy, dict, from_orm, parse_file, parse_obj, parse_raw, schema, schema_json, update_forward_refs, validate, json, is_lc_serializable, to_json, to_json_not_implemented, lc_secrets, lc_attributes, lc_id, get_lc_namespace
{% block attributes %}
{% endblock %}

View File

@@ -0,0 +1,39 @@
:mod:`{{module}}`.{{objname}}
{{ underline }}==============
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃
.. currentmodule:: {{ module }}
.. autoclass:: {{ objname }}
{% block attributes %}
{% if attributes %}
.. rubric:: {{ _('Attributes') }}
.. autosummary::
{% for item in attributes %}
~{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block methods %}
{% if methods %}
.. rubric:: {{ _('Methods') }}
.. autosummary::
{% for item in methods %}
~{{ name }}.{{ item }}
{%- endfor %}
{% for item in methods %}
.. automethod:: {{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
.. example_links:: {{ objname }}

View File

@@ -0,0 +1,22 @@
:mod:`{{module}}`.{{objname}}
{{ underline }}==============
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃
.. currentmodule:: {{ module }}
.. autopydantic_model:: {{ objname }}
:model-show-json: False
:model-show-config-summary: False
:model-show-validator-members: False
:model-show-field-summary: False
:field-signature-prefix: param
:members:
:undoc-members:
:inherited-members:
:member-order: groupwise
:show-inheritance: True
:special-members: __call__
:exclude-members: construct, copy, dict, from_orm, parse_file, parse_obj, parse_raw, schema, schema_json, update_forward_refs, validate, json, is_lc_serializable, to_json, to_json_not_implemented, lc_secrets, lc_attributes, lc_id, get_lc_namespace, invoke, ainvoke, batch, abatch, batch_as_completed, abatch_as_completed, astream_log, stream, astream, astream_events, transform, atransform, get_output_schema, get_prompts, configurable_fields, configurable_alternatives, config_schema, map, pick, pipe, with_listeners, with_alisteners, with_config, with_fallbacks, with_types, with_retry, InputType, OutputType, config_specs, output_schema, get_input_schema, get_graph, get_name, input_schema, name, bind, assign
.. example_links:: {{ objname }}

View File

@@ -2,132 +2,129 @@
{%- set url_root = pathto('', 1) %}
{%- if url_root == '#' %}{% set url_root = '' %}{% endif %}
{%- if not embedded and docstitle %}
{%- set titlesuffix = " &mdash; "|safe + docstitle|e %}
{%- set titlesuffix = " &mdash; "|safe + docstitle|e %}
{%- else %}
{%- set titlesuffix = "" %}
{%- set titlesuffix = "" %}
{%- endif %}
{%- set lang_attr = 'en' %}
<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="{{ lang_attr }}" > <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="{{ lang_attr }}" > <!--<![endif]-->
<!--[if gt IE 8]><!-->
<html class="no-js" lang="{{ lang_attr }}"> <!--<![endif]-->
<head>
<meta charset="utf-8">
{{ metatags }}
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta charset="utf-8">
{{ metatags }}
<meta name="viewport" content="width=device-width, initial-scale=1.0">
{% block htmltitle %}
<title>{{ title|striptags|e }}{{ titlesuffix }}</title>
{% endblock %}
<link rel="canonical" href="https://api.python.langchain.com/en/latest/{{pagename}}.html" />
{% block htmltitle %}
<title>{{ title|striptags|e }}{{ titlesuffix }}</title>
{% endblock %}
<link rel="canonical"
href="https://api.python.langchain.com/en/latest/{{ pagename }}.html"/>
{% if favicon_url %}
<link rel="shortcut icon" href="{{ favicon_url|e }}"/>
{% endif %}
{% if favicon_url %}
<link rel="shortcut icon" href="{{ favicon_url|e }}"/>
{% endif %}
<link rel="stylesheet" href="{{ pathto('_static/css/vendor/bootstrap.min.css', 1) }}" type="text/css" />
{%- for css in css_files %}
{%- if css|attr("rel") %}
<link rel="{{ css.rel }}" href="{{ pathto(css.filename, 1) }}" type="text/css"{% if css.title is not none %} title="{{ css.title }}"{% endif %} />
{%- else %}
<link rel="stylesheet" href="{{ pathto(css, 1) }}" type="text/css" />
{%- endif %}
{%- endfor %}
<link rel="stylesheet" href="{{ pathto('_static/' + style, 1) }}" type="text/css" />
<script id="documentation_options" data-url_root="{{ pathto('', 1) }}" src="{{ pathto('_static/documentation_options.js', 1) }}"></script>
<script src="{{ pathto('_static/jquery.js', 1) }}"></script>
{%- block extrahead %} {% endblock %}
<link rel="stylesheet"
href="{{ pathto('_static/css/vendor/bootstrap.min.css', 1) }}"
type="text/css"/>
{%- for css in css_files %}
{%- if css|attr("rel") %}
<link rel="{{ css.rel }}" href="{{ pathto(css.filename, 1) }}"
type="text/css"{% if css.title is not none %}
title="{{ css.title }}"{% endif %} />
{%- else %}
<link rel="stylesheet" href="{{ pathto(css, 1) }}" type="text/css"/>
{%- endif %}
{%- endfor %}
<link rel="stylesheet" href="{{ pathto('_static/' + style, 1) }}" type="text/css"/>
<script id="documentation_options" data-url_root="{{ pathto('', 1) }}"
src="{{ pathto('_static/documentation_options.js', 1) }}"></script>
<script src="{{ pathto('_static/jquery.js', 1) }}"></script>
{%- block extrahead %} {% endblock %}
</head>
<body>
{% include "nav.html" %}
{%- block content %}
<div class="d-flex" id="sk-doc-wrapper">
<input type="checkbox" name="sk-toggle-checkbox" id="sk-toggle-checkbox">
<label id="sk-sidemenu-toggle" class="sk-btn-toggle-toc btn sk-btn-primary" for="sk-toggle-checkbox">Toggle Menu</label>
<div id="sk-sidebar-wrapper" class="border-right">
<div class="sk-sidebar-toc-wrapper">
<div class="btn-group w-100 mb-2" role="group" aria-label="rellinks">
{%- if prev %}
<a href="{{ prev.link|e }}" role="button" class="btn sk-btn-rellink py-1" sk-rellink-tooltip="{{ prev.title|striptags }}">Prev</a>
{%- else %}
<a href="#" role="button" class="btn sk-btn-rellink py-1 disabled"">Prev</a>
{%- endif %}
{%- if parents -%}
<a href="{{ parents[-1].link|e }}" role="button" class="btn sk-btn-rellink py-1" sk-rellink-tooltip="{{ parents[-1].title|striptags }}">Up</a>
{%- else %}
<a href="#" role="button" class="btn sk-btn-rellink disabled py-1">Up</a>
{%- endif %}
{%- if next %}
<a href="{{ next.link|e }}" role="button" class="btn sk-btn-rellink py-1" sk-rellink-tooltip="{{ next.title|striptags }}">Next</a>
{%- else %}
<a href="#" role="button" class="btn sk-btn-rellink py-1 disabled"">Next</a>
{%- endif %}
<div class="d-flex" id="sk-doc-wrapper">
<input type="checkbox" name="sk-toggle-checkbox" id="sk-toggle-checkbox">
<label id="sk-sidemenu-toggle" class="sk-btn-toggle-toc btn sk-btn-primary"
for="sk-toggle-checkbox">Toggle Menu</label>
<div id="sk-sidebar-wrapper" class="border-right">
<div class="sk-sidebar-toc-wrapper">
{%- if meta and meta['parenttoc']|tobool %}
<div class="sk-sidebar-toc">
{% set nav = get_nav_object(maxdepth=3, collapse=True, numbered=True) %}
<ul>
{% for main_nav_item in nav %}
{% if main_nav_item.active %}
<li>
<a href="{{ main_nav_item.url }}"
class="sk-toc-active">{{ main_nav_item.title }}</a>
</li>
<ul>
{% for nav_item in main_nav_item.children %}
<li>
<a href="{{ nav_item.url }}"
class="{% if nav_item.active %}sk-toc-active{% endif %}">{{ nav_item.title }}</a>
{% if nav_item.children %}
<ul>
{% for inner_child in nav_item.children %}
<li class="sk-toctree-l3">
<a href="{{ inner_child.url }}">{{ inner_child.title }}</a>
</li>
{% endfor %}
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
{% endif %}
{% endfor %}
</ul>
</div>
{%- elif meta and meta['globalsidebartoc']|tobool %}
<div class="sk-sidebar-toc sk-sidebar-global-toc">
{{ toctree(maxdepth=2, titles_only=True) }}
</div>
{%- else %}
<div class="sk-sidebar-toc">
{{ toc }}
</div>
{%- endif %}
</div>
</div>
{%- if meta and meta['parenttoc']|tobool %}
<div class="sk-sidebar-toc">
{% set nav = get_nav_object(maxdepth=3, collapse=True, numbered=True) %}
<ul>
{% for main_nav_item in nav %}
{% if main_nav_item.active %}
<li>
<a href="{{ main_nav_item.url }}" class="sk-toc-active">{{ main_nav_item.title }}</a>
</li>
<ul>
{% for nav_item in main_nav_item.children %}
<li>
<a href="{{ nav_item.url }}" class="{% if nav_item.active %}sk-toc-active{% endif %}">{{ nav_item.title }}</a>
{% if nav_item.children %}
<ul>
{% for inner_child in nav_item.children %}
<li class="sk-toctree-l3">
<a href="{{ inner_child.url }}">{{ inner_child.title }}</a>
</li>
{% endfor %}
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
{% endif %}
{% endfor %}
</ul>
<div id="sk-page-content-wrapper">
<div class="sk-page-content container-fluid body px-md-3" role="main">
{% block body %}{% endblock %}
</div>
{%- elif meta and meta['globalsidebartoc']|tobool %}
<div class="sk-sidebar-toc sk-sidebar-global-toc">
{{ toctree(maxdepth=2, titles_only=True) }}
<div class="container">
<footer class="sk-content-footer">
{%- if pagename != 'index' %}
{%- if show_copyright %}
{%- if hasdoc('copyright') %}
{% trans path=pathto('copyright'), copyright=copyright|e %}
&copy; {{ copyright }}.{% endtrans %}
{%- else %}
{% trans copyright=copyright|e %}&copy; {{ copyright }}
.{% endtrans %}
{%- endif %}
{%- endif %}
{%- if last_updated %}
{% trans last_updated=last_updated|e %}Last updated
on {{ last_updated }}.{% endtrans %}
{%- endif %}
{%- if show_source and has_source and sourcename %}
<a href="{{ pathto('_sources/' + sourcename, true)|e }}"
rel="nofollow">{{ _('Show this page source') }}</a>
{%- endif %}
{%- endif %}
</footer>
</div>
{%- else %}
<div class="sk-sidebar-toc">
{{ toc }}
</div>
{%- endif %}
</div>
</div>
</div>
<div id="sk-page-content-wrapper">
<div class="sk-page-content container-fluid body px-md-3" role="main">
{% block body %}{% endblock %}
</div>
<div class="container">
<footer class="sk-content-footer">
{%- if pagename != 'index' %}
{%- if show_copyright %}
{%- if hasdoc('copyright') %}
{% trans path=pathto('copyright'), copyright=copyright|e %}&copy; {{ copyright }}.{% endtrans %}
{%- else %}
{% trans copyright=copyright|e %}&copy; {{ copyright }}.{% endtrans %}
{%- endif %}
{%- endif %}
{%- if last_updated %}
{% trans last_updated=last_updated|e %}Last updated on {{ last_updated }}.{% endtrans %}
{%- endif %}
{%- if show_source and has_source and sourcename %}
<a href="{{ pathto('_sources/' + sourcename, true)|e }}" rel="nofollow">{{ _('Show this page source') }}</a>
{%- endif %}
{%- endif %}
</footer>
</div>
</div>
</div>
{%- endblock %}
<script src="{{ pathto('_static/js/vendor/bootstrap.min.js', 1) }}"></script>
{% include "javascript.html" %}

View File

@@ -24,21 +24,22 @@ Here you find [such papers](https://arxiv.org/search/?query=langchain&searchtype
| `2305.08291v1` [Large Language Model Guided Tree-of-Thought](http://arxiv.org/abs/2305.08291v1) | Jieyi Long | 2023-05-15 | `API:` [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot), `Cookbook:` [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
| `2305.04091v3` [Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models](http://arxiv.org/abs/2305.04091v3) | Lei Wang, Wanyu Xu, Yihuai Lan, et al. | 2023-05-06 | `Cookbook:` [plan_and_execute_agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb)
| `2304.08485v2` [Visual Instruction Tuning](http://arxiv.org/abs/2304.08485v2) | Haotian Liu, Chunyuan Li, Qingyang Wu, et al. | 2023-04-17 | `Cookbook:` [Semi_structured_and_multi_modal_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb), [Semi_structured_multi_modal_RAG_LLaMA2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb)
| `2304.03442v2` [Generative Agents: Interactive Simulacra of Human Behavior](http://arxiv.org/abs/2304.03442v2) | Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al. | 2023-04-07 | `Cookbook:` [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb), [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb)
| `2304.03442v2` [Generative Agents: Interactive Simulacra of Human Behavior](http://arxiv.org/abs/2304.03442v2) | Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al. | 2023-04-07 | `Cookbook:` [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb), [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb)
| `2303.17760v2` [CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society](http://arxiv.org/abs/2303.17760v2) | Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al. | 2023-03-31 | `Cookbook:` [camel_role_playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb)
| `2303.17580v4` [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http://arxiv.org/abs/2303.17580v4) | Yongliang Shen, Kaitao Song, Xu Tan, et al. | 2023-03-30 | `API:` [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents), `Cookbook:` [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
| `2303.08774v6` [GPT-4 Technical Report](http://arxiv.org/abs/2303.08774v6) | OpenAI, Josh Achiam, Steven Adler, et al. | 2023-03-15 | `Docs:` [docs/integrations/vectorstores/mongodb_atlas](https://python.langchain.com/docs/integrations/vectorstores/mongodb_atlas)
| `2301.10226v4` [A Watermark for Large Language Models](http://arxiv.org/abs/2301.10226v4) | John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al. | 2023-01-24 | `API:` [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2301.10226v4` [A Watermark for Large Language Models](http://arxiv.org/abs/2301.10226v4) | John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al. | 2023-01-24 | `API:` [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
| `2212.10496v1` [Precise Zero-Shot Dense Retrieval without Relevance Labels](http://arxiv.org/abs/2212.10496v1) | Luyu Gao, Xueguang Ma, Jimmy Lin, et al. | 2022-12-20 | `API:` [langchain...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder), `Template:` [hyde](https://python.langchain.com/docs/templates/hyde), `Cookbook:` [hypothetical_document_embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)
| `2212.07425v3` [Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments](http://arxiv.org/abs/2212.07425v3) | Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al. | 2022-12-12 | `API:` [langchain_experimental.fallacy_removal](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.fallacy_removal)
| `2211.13892v2` [Complementary Explanations for Effective In-Context Learning](http://arxiv.org/abs/2211.13892v2) | Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, et al. | 2022-11-25 | `API:` [langchain_core...MaxMarginalRelevanceExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html#langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector)
| `2211.10435v2` [PAL: Program-aided Language Models](http://arxiv.org/abs/2211.10435v2) | Luyu Gao, Aman Madaan, Shuyan Zhou, et al. | 2022-11-18 | `API:` [langchain_experimental...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain), `Cookbook:` [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
| `2210.03629v3` [ReAct: Synergizing Reasoning and Acting in Language Models](http://arxiv.org/abs/2210.03629v3) | Shunyu Yao, Jeffrey Zhao, Dian Yu, et al. | 2022-10-06 | `Docs:` [docs/integrations/providers/cohere](https://python.langchain.com/docs/integrations/providers/cohere), [docs/integrations/chat/huggingface](https://python.langchain.com/docs/integrations/chat/huggingface), [docs/integrations/tools/ionic_shopping](https://python.langchain.com/docs/integrations/tools/ionic_shopping), `API:` [langchain...create_react_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.react.agent.create_react_agent.html#langchain.agents.react.agent.create_react_agent), [langchain...TrajectoryEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html#langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain)
| `2209.10785v2` [Deep Lake: a Lakehouse for Deep Learning](http://arxiv.org/abs/2209.10785v2) | Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, et al. | 2022-09-22 | `Docs:` [docs/integrations/providers/activeloop_deeplake](https://python.langchain.com/docs/integrations/providers/activeloop_deeplake)
| `2205.12654v1` [Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages](http://arxiv.org/abs/2205.12654v1) | Kevin Heffernan, Onur Çelebi, Holger Schwenk | 2022-05-25 | `API:` [langchain_community...LaserEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.laser.LaserEmbeddings.html#langchain_community.embeddings.laser.LaserEmbeddings)
| `2204.00498v1` [Evaluating the Text-to-SQL Capabilities of Large Language Models](http://arxiv.org/abs/2204.00498v1) | Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau | 2022-03-15 | `API:` [langchain_community...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL), [langchain_community...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase)
| `2202.00666v5` [Locally Typical Sampling](http://arxiv.org/abs/2202.00666v5) | Clara Meister, Tiago Pimentel, Gian Wiher, et al. | 2022-02-01 | `API:` [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2202.00666v5` [Locally Typical Sampling](http://arxiv.org/abs/2202.00666v5) | Clara Meister, Tiago Pimentel, Gian Wiher, et al. | 2022-02-01 | `API:` [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
| `2103.00020v1` [Learning Transferable Visual Models From Natural Language Supervision](http://arxiv.org/abs/2103.00020v1) | Alec Radford, Jong Wook Kim, Chris Hallacy, et al. | 2021-02-26 | `API:` [langchain_experimental.open_clip](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.open_clip)
| `1909.05858v2` [CTRL: A Conditional Transformer Language Model for Controllable Generation](http://arxiv.org/abs/1909.05858v2) | Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al. | 2019-09-11 | `API:` [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `1909.05858v2` [CTRL: A Conditional Transformer Language Model for Controllable Generation](http://arxiv.org/abs/1909.05858v2) | Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al. | 2019-09-11 | `API:` [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
| `1908.10084v1` [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](http://arxiv.org/abs/1908.10084v1) | Nils Reimers, Iryna Gurevych | 2019-08-27 | `Docs:` [docs/integrations/text_embedding/sentence_transformers](https://python.langchain.com/docs/integrations/text_embedding/sentence_transformers)
## Self-Discover: Large Language Models Self-Compose Reasoning Structures
@@ -418,7 +419,7 @@ publicly available.
- **URL:** http://arxiv.org/abs/2304.03442v2
- **LangChain:**
- **Cookbook:** [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb), [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb)
- **Cookbook:** [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb), [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb)
**Abstract:** Believable proxies of human behavior can empower interactive applications
ranging from immersive environments to rehearsal spaces for interpersonal
@@ -540,7 +541,7 @@ more than 1/1,000th the compute of GPT-4.
- **URL:** http://arxiv.org/abs/2301.10226v4
- **LangChain:**
- **API Reference:** [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
- **API Reference:** [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
**Abstract:** Potential harms of large language models can be mitigated by watermarking
model output, i.e., embedding signals into generated text that are invisible to
@@ -683,6 +684,41 @@ accuracy on the GSM8K benchmark of math word problems, surpassing PaLM-540B
which uses chain-of-thought by absolute 15% top-1. Our code and data are
publicly available at http://reasonwithpal.com/ .
## ReAct: Synergizing Reasoning and Acting in Language Models
- **arXiv id:** 2210.03629v3
- **Title:** ReAct: Synergizing Reasoning and Acting in Language Models
- **Authors:** Shunyu Yao, Jeffrey Zhao, Dian Yu, et al.
- **Published Date:** 2022-10-06
- **URL:** http://arxiv.org/abs/2210.03629v3
- **LangChain:**
- **Documentation:** [docs/integrations/providers/cohere](https://python.langchain.com/docs/integrations/providers/cohere), [docs/integrations/chat/huggingface](https://python.langchain.com/docs/integrations/chat/huggingface), [docs/integrations/tools/ionic_shopping](https://python.langchain.com/docs/integrations/tools/ionic_shopping)
- **API Reference:** [langchain...create_react_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.react.agent.create_react_agent.html#langchain.agents.react.agent.create_react_agent), [langchain...TrajectoryEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html#langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain)
**Abstract:** While large language models (LLMs) have demonstrated impressive capabilities
across tasks in language understanding and interactive decision making, their
abilities for reasoning (e.g. chain-of-thought prompting) and acting (e.g.
action plan generation) have primarily been studied as separate topics. In this
paper, we explore the use of LLMs to generate both reasoning traces and
task-specific actions in an interleaved manner, allowing for greater synergy
between the two: reasoning traces help the model induce, track, and update
action plans as well as handle exceptions, while actions allow it to interface
with external sources, such as knowledge bases or environments, to gather
additional information. We apply our approach, named ReAct, to a diverse set of
language and decision making tasks and demonstrate its effectiveness over
state-of-the-art baselines, as well as improved human interpretability and
trustworthiness over methods without reasoning or acting components.
Concretely, on question answering (HotpotQA) and fact verification (Fever),
ReAct overcomes issues of hallucination and error propagation prevalent in
chain-of-thought reasoning by interacting with a simple Wikipedia API, and
generates human-like task-solving trajectories that are more interpretable than
baselines without reasoning traces. On two interactive decision making
benchmarks (ALFWorld and WebShop), ReAct outperforms imitation and
reinforcement learning methods by an absolute success rate of 34% and 10%
respectively, while being prompted with only one or two in-context examples.
Project site with code: https://react-lm.github.io
## Deep Lake: a Lakehouse for Deep Learning
- **arXiv id:** 2209.10785v2
@@ -768,7 +804,7 @@ few-shot examples.
- **URL:** http://arxiv.org/abs/2202.00666v5
- **LangChain:**
- **API Reference:** [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
- **API Reference:** [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
**Abstract:** Today's probabilistic language generators fall short when it comes to
producing coherent and fluent text despite the fact that the underlying models
@@ -832,7 +868,7 @@ https://github.com/OpenAI/CLIP.
- **URL:** http://arxiv.org/abs/1909.05858v2
- **LangChain:**
- **API Reference:** [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
- **API Reference:** [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
**Abstract:** Large-scale language models show promising text generation capabilities, but
users cannot easily control particular aspects of the generated text. We

View File

@@ -11,6 +11,7 @@
### [by Prompt Engineering](https://www.youtube.com/playlist?list=PLVEEucA9MYhOu89CX8H3MBZqayTbcCTMr)
### [by Mayo Oshin](https://www.youtube.com/@chatwithdata/search?query=langchain)
### [by 1 little Coder](https://www.youtube.com/playlist?list=PLpdmBGJ6ELUK-v0MK-t4wZmVEbxM5xk6L)
### [by BobLin (Chinese language)](https://www.youtube.com/playlist?list=PLbd7ntv6PxC3QMFQvtWfk55p-Op_syO1C)
## Courses
@@ -45,7 +46,6 @@
- [Generative AI with LangChain](https://www.amazon.com/Generative-AI-LangChain-language-ChatGPT/dp/1835083463/ref=sr_1_1?crid=1GMOMH0G7GLR&keywords=generative+ai+with+langchain&qid=1703247181&sprefix=%2Caps%2C298&sr=8-1) by [Ben Auffrath](https://www.amazon.com/stores/Ben-Auffarth/author/B08JQKSZ7D?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true), ©️ 2023 Packt Publishing
- [LangChain AI Handbook](https://www.pinecone.io/learn/langchain/) By **James Briggs** and **Francisco Ingham**
- [LangChain Cheatsheet](https://pub.towardsai.net/langchain-cheatsheet-all-secrets-on-a-single-page-8be26b721cde) by **Ivan Reznikov**
- [Dive into Langchain (Chinese language)](https://langchain.boblin.app/)
---------------------

View File

@@ -11,7 +11,7 @@ LangChain as a framework consists of a number of packages.
### `langchain-core`
This package contains base abstractions of different components and ways to compose them together.
The interfaces for core components like LLMs, vectorstores, retrievers and more are defined here.
The interfaces for core components like LLMs, vector stores, retrievers and more are defined here.
No third party integrations are defined here.
The dependencies are kept purposefully very lightweight.
@@ -30,7 +30,7 @@ All chains, agents, and retrieval strategies here are NOT specific to any one in
This package contains third party integrations that are maintained by the LangChain community.
Key partner packages are separated out (see below).
This contains all integrations for various components (LLMs, vectorstores, retrievers).
This contains all integrations for various components (LLMs, vector stores, retrievers).
All dependencies in this package are optional to keep the package as lightweight as possible.
### [`langgraph`](https://langchain-ai.github.io/langgraph)
@@ -96,9 +96,9 @@ To make it as easy as possible to create custom chains, we've implemented a ["Ru
This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way.
The standard interface includes:
- [`stream`](#stream): stream back chunks of the response
- [`invoke`](#invoke): call the chain on an input
- [`batch`](#batch): call the chain on a list of inputs
- `stream`: stream back chunks of the response
- `invoke`: call the chain on an input
- `batch`: call the chain on a list of inputs
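For example, a minimal sketch of these three methods on a simple chain (the prompt and model are placeholders):

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

chain = ChatPromptTemplate.from_template("Tell me a joke about {topic}") | ChatOpenAI()

chain.invoke({"topic": "bears"})                          # single input
chain.batch([{"topic": "bears"}, {"topic": "cats"}])      # list of inputs
for chunk in chain.stream({"topic": "bears"}):            # streamed chunks
    print(chunk.content, end="", flush=True)
```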
These also have corresponding async methods that should be used with [asyncio](https://docs.python.org/3/library/asyncio.html) `await` syntax for concurrency:
@@ -133,19 +133,30 @@ Some components LangChain implements, some components we rely on third-party int
<span data-heading-keywords="chat model,chat models"></span>
Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text).
These are traditionally newer models (older models are generally `LLMs`, see above).
These are traditionally newer models (older models are generally `LLMs`, see below).
Chat models support the assignment of distinct roles to conversation messages, helping to distinguish messages from the AI, users, and instructions such as system messages.
Although the underlying models are messages in, message out, the LangChain wrappers also allow these models to take a string as input. This means you can easily use chat models in place of LLMs.
When a string is passed in as input, it is converted to a HumanMessage and then passed to the underlying model.
When a string is passed in as input, it is converted to a `HumanMessage` and then passed to the underlying model.
LangChain does not provide any ChatModels, rather we rely on third party integrations.
LangChain does not host any Chat Models, rather we rely on third party integrations.
We have some standardized parameters when constructing ChatModels:
- `model`: the name of the model
- `temperature`: the sampling temperature
- `timeout`: request timeout
- `max_tokens`: max tokens to generate
- `stop`: default stop sequences
- `max_retries`: max number of times to retry requests
- `api_key`: API key for the model provider
- `base_url`: endpoint to send requests to
ChatModels also accept other parameters that are specific to that integration.
Some important things to note:
- standard params only apply to model providers that expose parameters with the intended functionality. For example, some providers do not expose a configuration for maximum output tokens, so max_tokens can't be supported on these.
- standard params are currently only enforced on integrations that have their own integration packages (e.g. `langchain-openai`, `langchain-anthropic`, etc.), they're not enforced on models in ``langchain-community``.
ChatModels also accept other parameters that are specific to that integration. To find all the parameters supported by a ChatModel, head to the API reference for that model.
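As a rough illustration, here is a minimal sketch of passing these standard parameters when constructing a chat model (assuming `langchain-openai` is installed; the model name and values are placeholders, not recommendations):
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",     # `model`: name of the model
    temperature=0,      # `temperature`: sampling temperature
    timeout=30,         # `timeout`: request timeout in seconds
    max_tokens=512,     # `max_tokens`: max tokens to generate
    stop=["\n\n"],      # `stop`: default stop sequences
    max_retries=2,      # `max_retries`: max number of times to retry requests
    # api_key="...",    # `api_key`: usually read from the environment instead
    # base_url="...",   # `base_url`: override the default endpoint if needed
)
```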
:::important
**Tool Calling** Some chat models have been fine-tuned for tool calling and provide a dedicated API for tool calling.
@@ -155,17 +166,34 @@ Please see the [tool calling section](/docs/concepts/#functiontool-calling) for
For specifics on how to use chat models, see the [relevant how-to guides here](/docs/how_to/#chat-models).
#### Multimodality
Some chat models are multimodal, accepting images, audio and even video as inputs. These are still less common, meaning model providers haven't standardized on the "best" way to define the API. Multimodal **outputs** are even less common. As such, we've kept our multimodal abstractions fairly light weight and plan to further solidify the multimodal APIs and interaction patterns as the field matures.
In LangChain, most chat models that support multimodal inputs also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.
For specifics on how to use multimodal models, see the [relevant how-to guides here](/docs/how_to/#multimodal).
For a full list of LangChain model providers with multimodal models, [check out this table](/docs/integrations/chat/#advanced-features).
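As a hedged sketch of the content blocks format mentioned above (assuming a multimodal chat model such as `gpt-4o` and a local image file; the file name is a placeholder):
```python
import base64

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

# `image_data` is assumed to be a base64-encoded image you have already loaded.
image_data = base64.b64encode(open("example.jpg", "rb").read()).decode("utf-8")

model = ChatOpenAI(model="gpt-4o")  # illustrative multimodal model choice
message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_data}"}},
    ]
)
response = model.invoke([message])
```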
### LLMs
<span data-heading-keywords="llm,llms"></span>
:::caution
Pure text-in/text-out LLMs tend to be older or lower-level. Many popular models are best used as [chat completion models](/docs/concepts/#chat-models),
even for non-chat use cases.
You are probably looking for [the section above instead](/docs/concepts/#chat-models).
:::
Language models that take a string as input and return a string.
These are traditionally older models (newer models generally are `ChatModels`, see below).
These are traditionally older models (newer models generally are [Chat Models](/docs/concepts/#chat-models), see above).
Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input.
This makes them interchangeable with ChatModels.
This gives them the same interface as [Chat Models](/docs/concepts/#chat-models).
When messages are passed in as input, they will be formatted into a string under the hood before being passed to the underlying model.
LangChain does not provide any LLMs, rather we rely on third party integrations.
LangChain does not host any LLMs, rather we rely on third party integrations.
For specifics on how to use LLMs, see the [relevant how-to guides here](/docs/how_to/#llms).
@@ -363,7 +391,7 @@ An essential component of a conversation is being able to refer to information i
At bare minimum, a conversational system should be able to access some window of past messages directly.
The concept of `ChatHistory` refers to a class in LangChain which can be used to wrap an arbitrary chain.
This `ChatHistory` will keep track of inputs and outputs of the underlying chain, and append them as messages to a message database
This `ChatHistory` will keep track of inputs and outputs of the underlying chain, and append them as messages to a message database.
Future interactions will then load those messages and pass them into the chain as part of the input.
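For illustration, here is a rough sketch of wrapping a chain with message history using an in-memory store (the prompt, model, and key names are placeholders; a real app would typically use a persistent message database):
```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        ("placeholder", "{chat_history}"),
        ("human", "{input}"),
    ]
)
chain = prompt | ChatOpenAI(model="gpt-4o")

store = {}  # session_id -> message history (in-memory stand-in for a message database)

def get_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

chain_with_history = RunnableWithMessageHistory(
    chain,
    get_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)

# Each invocation appends the new input and output messages to the stored history.
chain_with_history.invoke(
    {"input": "Hi, I'm Nemo."},
    config={"configurable": {"session_id": "session-1"}},
)
```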
### Documents
@@ -415,9 +443,14 @@ For specifics on how to use text splitters, see the [relevant how-to guides here
### Embedding models
<span data-heading-keywords="embedding,embeddings"></span>
The Embeddings class is a class designed for interfacing with text embedding models. There are lots of embedding model providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them.
Embedding models create a vector representation of a piece of text. You can think of a vector as an array of numbers that captures the semantic meaning of the text.
By representing the text in this way, you can perform mathematical operations that allow you to do things like search for other pieces of text that are most similar in meaning.
These natural language search capabilities underpin many types of [context retrieval](/docs/concepts/#retrieval),
where we provide an LLM with the relevant data it needs to effectively respond to a query.
Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
![](/img/embeddings.png)
The `Embeddings` class is a class designed for interfacing with text embedding models. There are many different embedding model providers (OpenAI, Cohere, Hugging Face, etc) and local models, and this class is designed to provide a standard interface for all of them.
The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
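For example, a minimal sketch of the two methods (the provider and model name are illustrative and assume an API key is configured):
```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

# Embed multiple documents (the texts to be searched over):
doc_vectors = embeddings.embed_documents(
    ["LangChain is a framework for LLM apps", "Embeddings map text to vectors"]
)

# Embed a single query (the search query itself):
query_vector = embeddings.embed_query("What is LangChain?")

print(len(doc_vectors), len(query_vector))  # number of document vectors, vector dimensionality
```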
@@ -430,6 +463,9 @@ One of the most common ways to store and search over unstructured data is to emb
and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query.
A vector store takes care of storing embedded data and performing vector search for you.
Most vector stores can also store metadata about embedded vectors and support filtering on that metadata before
similarity search, allowing you more control over returned documents.
Vector stores can be converted to the retriever interface by doing:
```python
@@ -445,7 +481,7 @@ For specifics on how to use vector stores, see the [relevant how-to guides here]
A retriever is an interface that returns documents given an unstructured query.
It is more general than a vector store.
A retriever does not need to be able to store documents, only to return (or retrieve) them.
Retrievers can be created from vectorstores, but are also broad enough to include [Wikipedia search](/docs/integrations/retrievers/wikipedia/) and [Amazon Kendra](/docs/integrations/retrievers/amazon_kendra_retriever/).
Retrievers can be created from vector stores, but are also broad enough to include [Wikipedia search](/docs/integrations/retrievers/wikipedia/) and [Amazon Kendra](/docs/integrations/retrievers/amazon_kendra_retriever/).
Retrievers accept a string query as input and return a list of Documents as output.
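As a small, hedged sketch of turning a vector store into a retriever and querying it (assuming `faiss-cpu`, `langchain-community`, and an OpenAI API key are available; the texts are placeholders):
```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

vectorstore = FAISS.from_texts(
    ["LangChain helps build LLM apps", "Retrievers return documents for a query"],
    embedding=OpenAIEmbeddings(),
)

retriever = vectorstore.as_retriever(search_kwargs={"k": 1})
docs = retriever.invoke("What do retrievers do?")  # returns a list of Documents
```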
@@ -514,14 +550,6 @@ If you are still using AgentExecutor, do not fear: we still have a guide on [how
It is recommended, however, that you start to transition to LangGraph.
In order to assist in this we have put together a [transition guide on how to do so](/docs/how_to/migrate_agent).
### Multimodal
Some models are multimodal, accepting images, audio and even video as inputs. These are still less common, meaning model providers haven't standardized on the "best" way to define the API. Multimodal **outputs** are even less common. As such, we've kept our multimodal abstractions fairly light weight and plan to further solidify the multimodal APIs and interaction patterns as the field matures.
In LangChain, most chat models that support multimodal inputs also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.
For specifics on how to use multimodal models, see the [relevant how-to guides here](/docs/how_to/#multimodal).
### Callbacks
LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.
@@ -596,13 +624,220 @@ For specifics on how to use callbacks, see the [relevant how-to guides here](/do
## Techniques
### Function/tool calling
### Streaming
<span data-heading-keywords="stream,streaming"></span>
Individual LLM calls often run for much longer than traditional resource requests.
This compounds when you build more complex chains or agents that require multiple reasoning steps.
Fortunately, LLMs generate output iteratively, which means it's possible to show sensible intermediate results
before the final response is ready. Consuming output as soon as it becomes available has therefore become a vital part of the UX
around building apps with LLMs to help alleviate latency issues, and LangChain aims to have first-class support for streaming.
Below, we'll discuss some concepts and considerations around streaming in LangChain.
#### `.stream()` and `.astream()`
Most modules in LangChain include the `.stream()` method (and the equivalent `.astream()` method for [async](https://docs.python.org/3/library/asyncio.html) environments) as an ergonomic streaming interface.
`.stream()` returns an iterator, which you can consume with a simple `for` loop. Here's an example with a chat model:
```python
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-3-sonnet-20240229")
for chunk in model.stream("what color is the sky?"):
print(chunk.content, end="|", flush=True)
```
For models (or other components) that don't support streaming natively, this iterator would just yield a single chunk, but
you could still use the same general pattern when calling them. Using `.stream()` will also automatically call the model in streaming mode
without the need to provide additional config.
The type of each outputted chunk depends on the type of component - for example, chat models yield [`AIMessageChunks`](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html).
Because this method is part of [LangChain Expression Language](/docs/concepts/#langchain-expression-language-lcel),
you can handle formatting differences from different outputs using an [output parser](/docs/concepts/#output-parsers) to transform
each yielded chunk.
You can check out [this guide](/docs/how_to/streaming/#using-stream) for more detail on how to use `.stream()`.
#### `.astream_events()`
<span data-heading-keywords="astream_events,stream_events,stream events"></span>
While the `.stream()` method is intuitive, it can only return the final generated value of your chain. This is fine for single LLM calls,
but as you build more complex chains of several LLM calls together, you may want to use the intermediate values of
the chain alongside the final output - for example, returning sources alongside the final generation when building a chat
over documents app.
There are ways to do this [using callbacks](/docs/concepts/#callbacks-1), or by constructing your chain in such a way that it passes intermediate
values to the end with something like chained [`.assign()`](/docs/how_to/passthrough/) calls, but LangChain also includes an
`.astream_events()` method that combines the flexibility of callbacks with the ergonomics of `.stream()`. When called, it returns an iterator
which yields [various types of events](/docs/how_to/streaming/#event-reference) that you can filter and process according
to the needs of your project.
Here's one small example that prints just events containing streamed chat model output:
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-3-sonnet-20240229")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
parser = StrOutputParser()
chain = prompt | model | parser
async for event in chain.astream_events({"topic": "parrot"}, version="v2"):
kind = event["event"]
if kind == "on_chat_model_stream":
print(event, end="|", flush=True)
```
You can roughly think of it as an iterator over callback events (though the format differs) - and you can use it on almost all LangChain components!
See [this guide](/docs/how_to/streaming/#using-stream-events) for more detailed information on how to use `.astream_events()`,
including a table listing available events.
#### Callbacks
The lowest level way to stream outputs from LLMs in LangChain is via the [callbacks](/docs/concepts/#callbacks) system. You can pass a
callback handler that handles the [`on_llm_new_token`](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html#langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.on_llm_new_token) event into LangChain components. When that component is invoked, any
[LLM](/docs/concepts/#llms) or [chat model](/docs/concepts/#chat-models) contained in the component calls
the callback with the generated token. Within the callback, you could pipe the tokens into some other destination, e.g. a HTTP response.
You can also handle the [`on_llm_end`](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html#langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.on_llm_end) event to perform any necessary cleanup.
You can see [this how-to section](/docs/how_to/#callbacks) for more specifics on using callbacks.
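Here is a rough sketch of this pattern (the model choice is illustrative, and we enable the provider's streaming flag so the callback receives each token as it is generated):
```python
from langchain_core.callbacks import BaseCallbackHandler
from langchain_openai import ChatOpenAI


class PrintTokenHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Pipe each token wherever you need it (e.g. an HTTP response); here we just print.
        print(token, end="|", flush=True)


# `streaming=True` asks the provider for token-by-token output so the callback fires per token.
model = ChatOpenAI(model="gpt-4o", streaming=True)
model.invoke("what color is the sky?", config={"callbacks": [PrintTokenHandler()]})
```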
Callbacks were the first technique for streaming introduced in LangChain. While powerful and generalizable,
they can be unwieldy for developers. For example:
- You need to explicitly initialize and manage some aggregator or other stream to collect results.
- The execution order isn't explicitly guaranteed, and you could theoretically have a callback run after the `.invoke()` method finishes.
- Providers would often make you pass an additional parameter to stream outputs instead of returning them all at once.
- You would often ignore the result of the actual model call in favor of callback results.
#### Tokens
The unit that most model providers use to measure input and output is called a **token**.
Tokens are the basic units that language models read and generate when processing or producing text.
The exact definition of a token can vary depending on the specific way the model was trained -
for instance, in English, a token could be a single word like "apple", or a part of a word like "app".
When you send a model a prompt, the words and characters in the prompt are encoded into tokens using a **tokenizer**.
The model then streams back generated output tokens, which the tokenizer decodes into human-readable text.
The below example shows how OpenAI models tokenize `LangChain is cool!`:
![](/img/tokenization.png)
You can see that it gets split into 5 different tokens, and that the boundaries between tokens are not exactly the same as word boundaries.
The reason language models use tokens rather than something more immediately intuitive like "characters"
has to do with how they process and understand text. At a high-level, language models iteratively predict their next generated output based on
the initial input and their previous generations. Training the model using tokens allows language models to handle linguistic
units (like words or subwords) that carry meaning, rather than individual characters, which makes it easier for the model
to learn and understand the structure of the language, including grammar and context.
Furthermore, using tokens can also improve efficiency, since the model processes fewer units of text compared to character-level processing.
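If you want to inspect tokenization yourself, here is a small sketch using the `tiktoken` library (an extra dependency; exact token boundaries depend on the tokenizer, so the output is only indicative):
```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by many recent OpenAI models
token_ids = enc.encode("LangChain is cool!")
print(token_ids)                                 # a short list of integer token ids
print([enc.decode([tid]) for tid in token_ids])  # the text piece each token maps to
```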
### Structured output
LLMs are capable of generating arbitrary text. This enables the model to respond appropriately to a wide
range of inputs, but for some use-cases, it can be useful to constrain the LLM's output
to a specific format or structure. This is referred to as **structured output**.
For example, if the output is to be stored in a relational database,
it is much easier if the model generates output that adheres to a defined schema or format.
[Extracting specific information](/docs/tutorials/extraction/) from unstructured text is another
case where this is particularly useful. Most commonly, the output format will be JSON,
though other formats such as [YAML](/docs/how_to/output_parser_yaml/) can be useful too. Below, we'll discuss
a few ways to get structured output from models in LangChain.
#### `.with_structured_output()`
For convenience, some LangChain chat models support a `.with_structured_output()` method.
This method only requires a schema as input, and returns a dict or Pydantic object.
Generally, this method is only present on models that support one of the more advanced methods described below,
and will use one of them under the hood. It takes care of importing a suitable output parser and
formatting the schema in the right format for the model.
For more information, check out this [how-to guide](/docs/how_to/structured_output/#the-with_structured_output-method).
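As a brief sketch (the model choice is illustrative; the schema is a made-up example for this guide):
```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI


class Answer(BaseModel):
    """An answer with a follow-up question."""

    answer: str = Field(description="The answer to the user's question")
    followup_question: str = Field(description="A follow-up question the user might ask")


model = ChatOpenAI(model="gpt-4o")
structured_model = model.with_structured_output(Answer)

result = structured_model.invoke("What is the powerhouse of the cell?")
print(result.answer, "|", result.followup_question)
```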
#### Raw prompting
The most intuitive way to get a model to structure output is to ask nicely.
In addition to your query, you can give instructions describing what kind of output you'd like, then
parse the output using an [output parser](/docs/concepts/#output-parsers) to convert the raw
model message or string output into something more easily manipulated.
The biggest benefit to raw prompting is its flexibility:
- Raw prompting does not require any special model features, only sufficient reasoning capability to understand
the passed schema.
- You can prompt for any format you'd like, not just JSON. This can be useful if the model you
are using is more heavily trained on a certain type of data, such as XML or YAML.
However, there are some drawbacks too:
- LLMs are non-deterministic, and prompting an LLM to consistently output data in exactly the correct format
for smooth parsing can be surprisingly difficult and model-specific.
- Individual models have quirks depending on the data they were trained on, and optimizing prompts can be quite difficult.
Some may be better at interpreting [JSON schema](https://json-schema.org/), others may be best with TypeScript definitions,
and still others may prefer XML.
While we'll next go over some ways that you can take advantage of features offered by
model providers to increase reliability, prompting techniques remain important for tuning your
results no matter what method you choose.
#### JSON mode
<span data-heading-keywords="json mode"></span>
Some models, such as [Mistral](/docs/integrations/chat/mistralai/), [OpenAI](/docs/integrations/chat/openai/),
[Together AI](/docs/integrations/chat/together/) and [Ollama](/docs/integrations/chat/ollama/),
support a feature called **JSON mode**, usually enabled via config.
When enabled, JSON mode will constrain the model's output to always be some sort of valid JSON.
Models in JSON mode often still require some custom prompting, but it's usually much less burdensome, along the lines of
`"you must always return JSON"`, and the [output is easier to parse](/docs/how_to/output_parser_json/).
It's also generally simpler and more commonly available than tool calling.
Here's an example:
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain.output_parsers.json import SimpleJsonOutputParser
model = ChatOpenAI(
model="gpt-4o",
model_kwargs={ "response_format": { "type": "json_object" } },
)
prompt = ChatPromptTemplate.from_template(
"Answer the user's question to the best of your ability."
'You must always output a JSON object with an "answer" key and a "followup_question" key.'
"{question}"
)
chain = prompt | model | SimpleJsonOutputParser()
chain.invoke({ "question": "What is the powerhouse of the cell?" })
```
```
{'answer': 'The powerhouse of the cell is the mitochondrion. It is responsible for producing energy in the form of ATP through cellular respiration.',
'followup_question': 'Would you like to know more about how mitochondria produce energy?'}
```
For a full list of model providers that support JSON mode, see [this table](/docs/integrations/chat/#advanced-features).
#### Function/tool calling
:::info
We use the term tool calling interchangeably with function calling. Although
function calling is sometimes meant to refer to invocations of a single function,
we treat all models as though they can return multiple tool or function calls in
each message.
each message
:::
Tool calling allows a model to respond to a given prompt by generating output that
@@ -614,8 +849,10 @@ from unstructured text, you could give the model an "extraction" tool that takes
parameters matching the desired schema, then treat the generated output as your final
result.
A tool call includes a name, arguments dict, and an optional identifier. The
arguments dict is structured `{argument_name: argument_value}`.
For models that support it, tool calling can be very convenient. It removes the
guesswork around how best to prompt schemas in favor of a built-in model feature. It can also
more naturally support agentic flows, since you can just pass multiple tool schemas instead
of fiddling with enums or unions.
Many LLM providers, including [Anthropic](https://www.anthropic.com/),
[Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai),
@@ -632,40 +869,174 @@ LangChain provides a standardized interface for tool calling that is consistent
The standard interface consists of:
* `ChatModel.bind_tools()`: a method for specifying which tools are available for a model to call.
* `ChatModel.bind_tools()`: a method for specifying which tools are available for a model to call. This method accepts [LangChain tools](/docs/concepts/#tools) here.
* `AIMessage.tool_calls`: an attribute on the `AIMessage` returned from the model for accessing the tool calls requested by the model.
There are two main use cases for function/tool calling:
The following how-to guides are good practical resources for using function/tool calling:
- [How to return structured data from an LLM](/docs/how_to/structured_output/)
- [How to use a model to call tools](/docs/how_to/tool_calling/)
For a full list of model providers that support tool calling, [see this table](/docs/integrations/chat/#advanced-features).
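As a hedged sketch of this interface (the tool and model are illustrative; any tool-calling-capable model works similarly):
```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


llm = ChatOpenAI(model="gpt-4o")
llm_with_tools = llm.bind_tools([multiply])  # make the tool schema available to the model

ai_msg = llm_with_tools.invoke("What is 6 times 7?")
print(ai_msg.tool_calls)
# e.g. [{"name": "multiply", "args": {"a": 6, "b": 7}, "id": "..."}]
```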
### Retrieval
LangChain provides several advanced retrieval types. A full list is below, along with the following information:
LLMs are trained on a large but fixed dataset, limiting their ability to reason over private or recent information. Fine-tuning an LLM with specific facts is one way to mitigate this, but is often [poorly suited for factual recall](https://www.anyscale.com/blog/fine-tuning-is-for-form-not-facts) and [can be costly](https://www.glean.com/blog/how-to-build-an-ai-assistant-for-the-enterprise).
Retrieval is the process of providing relevant information to an LLM to improve its response for a given input. Retrieval augmented generation (RAG) is the process of grounding the LLM generation (output) using the retrieved information.
**Name**: Name of the retrieval algorithm.
:::tip
**Index Type**: Which index type (if any) this relies on.
* See our RAG from Scratch [code](https://github.com/langchain-ai/rag-from-scratch) and [video series](https://youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x&feature=shared).
* For a high-level guide on retrieval, see this [tutorial on RAG](/docs/tutorials/rag/).
**Uses an LLM**: Whether this retrieval method uses an LLM.
:::
**When to Use**: Our commentary on when you should consider using this retrieval method.
RAG is only as good as the retrieved documents' relevance and quality. Fortunately, an emerging set of techniques can be employed to design and improve RAG systems. We've focused on taxonomizing and summarizing many of these techniques (see the figure below) and will share some high-level strategic guidance in the following sections.
You can and should experiment with using different pieces together. You might also find [this LangSmith guide](https://docs.smith.langchain.com/how_to_guides/evaluation/evaluate_llm_application) useful for showing how to evaluate different iterations of your app.
**Description**: Description of what this retrieval algorithm is doing.
![](/img/rag_landscape.png)
#### Query Translation
First, consider the user input(s) to your RAG system. Ideally, a RAG system can handle a wide range of inputs, from poorly worded questions to complex multi-part queries.
**Using an LLM to review and optionally modify the input is the central idea behind query translation.** This serves as a general buffer, optimizing raw user inputs for your retrieval system.
For example, this can be as simple as extracting keywords or as complex as generating multiple sub-questions for a complex query.
| Name | When to use | Description |
|---------------|-------------|-------------|
| [Multi-query](/docs/how_to/MultiQueryRetriever/) | When you need to cover multiple perspectives of a question. | Rewrite the user question from multiple perspectives, retrieve documents for each rewritten question, return the unique documents for all queries. |
| [Decomposition](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | When a question can be broken down into smaller subproblems. | Decompose a question into a set of subproblems / questions, which can either be solved sequentially (use the answer from first + retrieval to answer the second) or in parallel (consolidate each answer into final answer). |
| [Step-back](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | When a higher-level conceptual understanding is required. | First prompt the LLM to ask a generic step-back question about higher-level concepts or principles, and retrieve relevant facts about them. Use this grounding to help answer the user question. |
| [HyDE](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | If you have challenges retrieving relevant documents using the raw user inputs. | Use an LLM to convert questions into hypothetical documents that answer the question. Use the embedded hypothetical documents to retrieve real documents with the premise that doc-doc similarity search can produce more relevant matches. |
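For example, here is a rough sketch of the multi-query approach from the table above, using the built-in retriever wrapper (the vector store contents, embedding model, and chat model are placeholders):
```python
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

vectorstore = FAISS.from_texts(
    ["LangChain supports retrievers", "RAG grounds generation in retrieved context"],
    embedding=OpenAIEmbeddings(),
)

retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),
    llm=ChatOpenAI(model="gpt-4o", temperature=0),
)

# The LLM rewrites the question from several perspectives, documents are retrieved for
# each rewrite, and the unique documents across all rewrites are returned.
docs = retriever.invoke("How does LangChain help with RAG?")
```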
:::tip
See our RAG from Scratch videos for a few different specific approaches:
- [Multi-query](https://youtu.be/JChPi0CRnDY?feature=shared)
- [Decomposition](https://youtu.be/h0OPWlEOank?feature=shared)
- [Step-back](https://youtu.be/xn1jEjRyJ2U?feature=shared)
- [HyDE](https://youtu.be/SaDzIVkYqyY?feature=shared)
:::
#### Routing
Second, consider the data sources available to your RAG system. You may want to query across more than one database or across structured and unstructured data sources. **Using an LLM to review the input and route it to the appropriate data source is a simple and effective approach for querying across sources.**
| Name | When to use | Description |
|------------------|--------------------------------------------|-------------|
| [Logical routing](/docs/how_to/routing/) | When you can prompt an LLM with rules to decide where to route the input. | Logical routing can use an LLM to reason about the query and choose which datastore is most appropriate. |
| [Semantic routing](/docs/how_to/routing/#routing-by-semantic-similarity) | When semantic similarity is an effective way to determine where to route the input. | Semantic routing embeds both the query and, typically, a set of prompts. It then chooses the appropriate prompt based upon similarity. |
:::tip
See our RAG from Scratch video on [routing](https://youtu.be/pfpIndq7Fi8?feature=shared).
:::
#### Query Construction
Third, consider whether any of your data sources require specific query formats. Many structured databases use SQL. Vector stores often have specific syntax for applying keyword filters to document metadata. **Using an LLM to convert a natural language query into a query syntax is a popular and powerful approach.**
In particular, [text-to-SQL](/docs/tutorials/sql_qa/), [text-to-Cypher](/docs/tutorials/graph/), and [query analysis for metadata filters](/docs/tutorials/query_analysis/#query-analysis) are useful ways to interact with structured, graph, and vector databases respectively.
| Name | When to Use | Description |
|---------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Text to SQL](/docs/tutorials/sql_qa/) | If users are asking questions that require information housed in a relational database, accessible via SQL. | This uses an LLM to transform user input into a SQL query. |
| [Text-to-Cypher](/docs/tutorials/graph/) | If users are asking questions that require information housed in a graph database, accessible via Cypher. | This uses an LLM to transform user input into a Cypher query. |
| [Self Query](/docs/how_to/self_query/) | If users are asking questions that are better answered by fetching documents based on metadata rather than similarity with the text. | This uses an LLM to transform user input into two things: (1) a string to look up semantically, (2) a metadata filter to go along with it. This is useful because oftentimes questions are about the METADATA of documents (not the content itself). |
:::tip
See our [blog post overview](https://blog.langchain.dev/query-construction/) and RAG from Scratch video on [query construction](https://youtu.be/kl6NwWYxvbM?feature=shared), the process of text-to-DSL where DSL is a domain specific language required to interact with a given database. This converts user questions into structured queries.
:::
#### Indexing
Fourth, consider the design of your document index. A simple and powerful idea is to **decouple the documents that you index for retrieval from the documents that you pass to the LLM for generation.** Indexing frequently uses embedding models with vector stores, which [compress the semantic information in documents to fixed-size vectors](/docs/concepts/#embedding-models).
Many RAG approaches focus on splitting documents into chunks and retrieving some number based on similarity to an input question for the LLM. But chunk size and chunk number can be difficult to set and affect results if they do not provide full context for the LLM to answer a question. Furthermore, LLMs are increasingly capable of processing millions of tokens.
Two approaches can address this tension: (1) [Multi Vector](/docs/how_to/multi_vector/) retriever using an LLM to translate documents into any form (e.g., often into a summary) that is well-suited for indexing, but returns full documents to the LLM for generation. (2) [ParentDocument](/docs/how_to/parent_document_retriever/) retriever embeds document chunks, but also returns full documents. The idea is to get the best of both worlds: use concise representations (summaries or chunks) for retrieval, but use the full documents for answer generation.
| Name | Index Type | Uses an LLM | When to Use | Description |
|---------------------------|------------------------------|---------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Vectorstore](/docs/how_to/vectorstore_retriever/) | Vectorstore | No | If you are just getting started and looking for something quick and easy. | This is the simplest method and the one that is easiest to get started with. It involves creating embeddings for each piece of text. |
| [ParentDocument](/docs/how_to/parent_document_retriever/) | Vectorstore + Document Store | No | If your pages have lots of smaller pieces of distinct information that are best indexed by themselves, but best retrieved all together. | This involves indexing multiple chunks for each document. Then you find the chunks that are most similar in embedding space, but you retrieve the whole parent document and return that (rather than individual chunks). |
| [Multi Vector](/docs/how_to/multi_vector/) | Vectorstore + Document Store | Sometimes during indexing | If you are able to extract information from documents that you think is more relevant to index than the text itself. | This involves creating multiple vectors for each document. Each vector could be created in a myriad of ways - examples include summaries of the text and hypothetical questions. |
| [Self Query](/docs/how_to/self_query/) | Vectorstore | Yes | If users are asking questions that are better answered by fetching documents based on metadata rather than similarity with the text. | This uses an LLM to transform user input into two things: (1) a string to look up semantically, (2) a metadata filter to go along with it. This is useful because oftentimes questions are about the METADATA of documents (not the content itself). |
| [Contextual Compression](/docs/how_to/contextual_compression/) | Any | Sometimes | If you are finding that your retrieved documents contain too much irrelevant information and are distracting the LLM. | This puts a post-processing step on top of another retriever and extracts only the most relevant information from retrieved documents. This can be done with embeddings or an LLM. |
| [Time-Weighted Vectorstore](/docs/how_to/time_weighted_vectorstore/) | Vectorstore | No | If you have timestamps associated with your documents, and you want to retrieve the most recent ones | This fetches documents based on a combination of semantic similarity (as in normal vector retrieval) and recency (looking at timestamps of indexed documents) |
| [Multi-Query Retriever](/docs/how_to/MultiQueryRetriever/) | Any | Yes | If users are asking questions that are complex and require multiple pieces of distinct information to respond | This uses an LLM to generate multiple queries from the original one. This is useful when the original query needs pieces of information about multiple topics to be properly answered. By generating multiple queries, we can then fetch documents for each of them. |
| [Ensemble](/docs/how_to/ensemble_retriever/) | Any | No | If you have multiple retrieval methods and want to try combining them. | This fetches documents from multiple retrievers and then combines them. |
| [Vector store](/docs/how_to/vectorstore_retriever/) | Vector store | No | If you are just getting started and looking for something quick and easy. | This is the simplest method and the one that is easiest to get started with. It involves creating embeddings for each piece of text. |
| [ParentDocument](/docs/how_to/parent_document_retriever/) | Vector store + Document Store | No | If your pages have lots of smaller pieces of distinct information that are best indexed by themselves, but best retrieved all together. | This involves indexing multiple chunks for each document. Then you find the chunks that are most similar in embedding space, but you retrieve the whole parent document and return that (rather than individual chunks). |
| [Multi Vector](/docs/how_to/multi_vector/) | Vector store + Document Store | Sometimes during indexing | If you are able to extract information from documents that you think is more relevant to index than the text itself. | This involves creating multiple vectors for each document. Each vector could be created in a myriad of ways - examples include summaries of the text and hypothetical questions. |
| [Time-Weighted Vector store](/docs/how_to/time_weighted_vectorstore/) | Vector store | No | If you have timestamps associated with your documents, and you want to retrieve the most recent ones | This fetches documents based on a combination of semantic similarity (as in normal vector retrieval) and recency (looking at timestamps of indexed documents) |
For a high-level guide on retrieval, see this [tutorial on RAG](/docs/tutorials/rag/).
:::tip
- See our RAG from Scratch video on [indexing fundamentals](https://youtu.be/bjb_EMsTDKI?feature=shared)
- See our RAG from Scratch video on [multi vector retriever](https://youtu.be/gTCU9I6QqCE?feature=shared)
:::
Fifth, consider ways to improve the quality of your similarity search itself. Embedding models compress text into fixed-length (vector) representations that capture the semantic content of the document. This compression is useful for search / retrieval, but puts a heavy burden on that single vector representation to capture the semantic nuance / detail of the document. In some cases, irrelevant or redundant content can dilute the semantic usefulness of the embedding.
[ColBERT](https://docs.google.com/presentation/d/1IRhAdGjIevrrotdplHNcc4aXgIYyKamUKTWtB3m3aMU/edit?usp=sharing) is an interesting approach to address this with higher-granularity embeddings: (1) produce a contextually influenced embedding for each token in the document and query, (2) score similarity between each query token and all document tokens, (3) take the max, (4) do this for all query tokens, and (5) take the sum of the max scores (in step 3) for all query tokens to get a query-document similarity score; this token-wise scoring can yield strong results.
![](/img/colbert.png)
There are some additional tricks to improve the quality of your retrieval. Embeddings excel at capturing semantic information, but may struggle with keyword-based queries. Many [vector stores](/docs/integrations/retrievers/pinecone_hybrid_search/) offer built-in [hybrid-search](https://docs.pinecone.io/guides/data/understanding-hybrid-search) to combine keyword and semantic similarity, which marries the benefits of both approaches. Furthermore, many vector stores have [maximal marginal relevance](https://python.langchain.com/v0.1/docs/modules/model_io/prompts/example_selectors/mmr/), which attempts to diversify the results of a search to avoid returning similar and redundant documents.
| Name | When to use | Description |
|-------------------|----------------------------------------------------------|-------------|
| [ColBERT](/docs/integrations/providers/ragatouille/#using-colbert-as-a-reranker) | When higher granularity embeddings are needed. | ColBERT uses contextually influenced embeddings for each token in the document and query to get a granular query-document similarity score. |
| [Hybrid search](/docs/integrations/retrievers/pinecone_hybrid_search/) | When combining keyword-based and semantic similarity. | Hybrid search combines keyword and semantic similarity, marrying the benefits of both approaches. |
| [Maximal Marginal Relevance (MMR)](/docs/integrations/vectorstores/pinecone/#maximal-marginal-relevance-searches) | When needing to diversify search results. | MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. |
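For instance, here is a small sketch of enabling MMR on a vector store retriever (the store, texts, and parameter values are illustrative):
```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

vectorstore = FAISS.from_texts(
    ["a note about retrieval", "another note about retrieval", "a note about agents"],
    embedding=OpenAIEmbeddings(),
)

# `search_type="mmr"` diversifies results: fetch 10 candidates, keep the 2 most diverse.
retriever = vectorstore.as_retriever(search_type="mmr", search_kwargs={"k": 2, "fetch_k": 10})
docs = retriever.invoke("retrieval")
```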
:::tip
See our RAG from Scratch video on [ColBERT](https://youtu.be/cN6S0Ehm7_8?feature=shared).
:::
#### Post-processing
Sixth, consider ways to filter or rank retrieved documents. This is very useful if you are [combining documents returned from multiple sources](/docs/integrations/retrievers/cohere-reranker/#doing-reranking-with-coherererank), since it can down-rank less relevant documents and / or [compress similar documents](/docs/how_to/contextual_compression/#more-built-in-compressors-filters).
| Name | Index Type | Uses an LLM | When to Use | Description |
|---------------------------|------------------------------|---------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Contextual Compression](/docs/how_to/contextual_compression/) | Any | Sometimes | If you are finding that your retrieved documents contain too much irrelevant information and are distracting the LLM. | This puts a post-processing step on top of another retriever and extracts only the most relevant information from retrieved documents. This can be done with embeddings or an LLM. |
| [Ensemble](/docs/how_to/ensemble_retriever/) | Any | No | If you have multiple retrieval methods and want to try combining them. | This fetches documents from multiple retrievers and then combines them. |
| [Re-ranking](/docs/integrations/retrievers/cohere-reranker/) | Any | Yes | If you want to rank retrieved documents based upon relevance, especially if you want to combine results from multiple retrieval methods . | Given a query and a list of documents, Rerank indexes the documents from most to least semantically relevant to the query. |
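As one hedged example of post-processing, contextual compression can be layered on top of any base retriever (the base retriever, texts, and model here are placeholders):
```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

base_retriever = FAISS.from_texts(
    ["LangChain has retrievers", "Compression drops irrelevant content", "An unrelated note"],
    embedding=OpenAIEmbeddings(),
).as_retriever()

# The compressor uses an LLM to extract only the passages relevant to the query.
compressor = LLMChainExtractor.from_llm(ChatOpenAI(model="gpt-4o", temperature=0))
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=base_retriever
)

docs = compression_retriever.invoke("What does compression do?")
```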
:::tip
See our RAG from Scratch video on [RAG-Fusion](https://youtu.be/77qELPbNgxA?feature=shared), an approach for post-processing across multiple queries: Rewrite the user question from multiple perspectives, retrieve documents for each rewritten question, and combine the ranks of multiple search result lists to produce a single, unified ranking with [Reciprocal Rank Fusion (RRF)](https://towardsdatascience.com/forget-rag-the-future-is-rag-fusion-1147298d8ad1).
:::
#### Generation
**Finally, consider ways to build self-correction into your RAG system.** RAG systems can suffer from low-quality retrieval (e.g., if a user question is out of the domain for the index) and / or hallucinations in generation. A naive retrieve-generate pipeline has no ability to detect or self-correct from these kinds of errors. The concept of ["flow engineering"](https://x.com/karpathy/status/1748043513156272416) has been introduced [in the context of code generation](https://arxiv.org/abs/2401.08500): iteratively build an answer to a code question with unit tests to check and self-correct errors. Several works have applied this to RAG, such as Self-RAG and Corrective-RAG. In both cases, checks for document relevance, hallucinations, and / or answer quality are performed in the RAG answer generation flow.
We've found that graphs are a great way to reliably express logical flows and have implemented ideas from several of these papers [using LangGraph](https://github.com/langchain-ai/langgraph/tree/main/examples/rag), as shown in the figure below (red - routing, blue - fallback, green - self-correction):
- **Routing:** Adaptive RAG ([paper](https://arxiv.org/abs/2403.14403)). Route questions to different retrieval approaches, as discussed above.
- **Fallback:** Corrective RAG ([paper](https://arxiv.org/pdf/2401.15884.pdf)). Fall back to web search if docs are not relevant to the query.
- **Self-correction:** Self-RAG ([paper](https://arxiv.org/abs/2310.11511)). Fix answers that contain hallucinations or don't address the question.
![](/img/langgraph_rag.png)
| Name | When to use | Description |
|-------------------|-----------------------------------------------------------|-------------|
| Self-RAG | When needing to fix answers with hallucinations or irrelevant content. | Self-RAG performs checks for document relevance, hallucinations, and answer quality during the RAG answer generation flow, iteratively building an answer and self-correcting errors. |
| Corrective-RAG | When needing a fallback mechanism for low relevance docs. | Corrective-RAG includes a fallback (e.g., to web search) if the retrieved documents are not relevant to the query, ensuring higher quality and more relevant retrieval. |
:::tip
See several videos and cookbooks showcasing RAG with LangGraph:
- [LangGraph Corrective RAG](https://www.youtube.com/watch?v=E2shqsYwxck)
- [LangGraph combining Adaptive, Self-RAG, and Corrective RAG](https://www.youtube.com/watch?v=-ROS6gfYIts)
- [Cookbooks for RAG using LangGraph](https://github.com/langchain-ai/langgraph/tree/main/examples/rag)
See our LangGraph RAG recipes with partners:
- [Meta](https://github.com/meta-llama/llama-recipes/tree/main/recipes/use_cases/agents/langchain)
- [Mistral](https://github.com/mistralai/cookbook/tree/main/third_party/langchain)
:::
### Text splitting
@@ -692,6 +1063,19 @@ Table columns:
| Semantic Chunker (Experimental) | [SemanticChunker](/docs/how_to/semantic-chunker/) | Sentences | | First splits on sentences. Then combines ones next to each other if they are semantically similar enough. Taken from [Greg Kamradt](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/tutorials/LevelsOfTextSplitting/5_Levels_Of_Text_Splitting.ipynb) |
| Integration: AI21 Semantic | [AI21SemanticTextSplitter](/docs/integrations/document_transformers/ai21_semantic_text_splitter/) | | ✅ | Identifies distinct topics that form coherent pieces of text and splits along those. |
### Evaluation
<span data-heading-keywords="evaluation,evaluate"></span>
Evaluation is the process of assessing the performance and effectiveness of your LLM-powered applications.
It involves testing the model's responses against a set of predefined criteria or benchmarks to ensure it meets the desired quality standards and fulfills the intended purpose.
This process is vital for building reliable applications.
![](/img/langsmith_evaluate.png)
[LangSmith](https://docs.smith.langchain.com/) helps with this process in a few ways:
- It makes it easier to create and curate datasets via its tracing and annotation features
- It provides an evaluation framework that helps you define metrics and run your app against your dataset
- It allows you to track results over time and automatically run your evaluators on a schedule or as part of CI/CD
To learn more, check out [this LangSmith guide](https://docs.smith.langchain.com/concepts/evaluation).

View File

@@ -55,7 +55,7 @@ The below sections are listed roughly in order of increasing level of abstractio
### Expression Language
[LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language) is the fundamental way that most LangChain components fit together, and this section is designed to teach
[LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language-lcel) is the fundamental way that most LangChain components fit together, and this section is designed to teach
developers how to use it to build with LangChain's primitives effectively.
This section should contain **Tutorials** that teach how to stream and use LCEL primitives for more abstract tasks, **Explanations** of specific behaviors,

View File

@@ -48,7 +48,7 @@ In a similar vein, we do enforce certain linting, formatting, and documentation
If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help -
we do not want these to get in the way of getting good code into the codebase.
# 🌟 Recognition
### 🌟 Recognition
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
If you have a Twitter account you would like us to mention, please let us know in the PR or through another means.
If you have a Twitter account you would like us to mention, please let us know in the PR or through another means.

View File

@@ -15,12 +15,22 @@ Here's the structure visualized as a tree:
├── cookbook # Tutorials and examples
├── docs # Contains content for the documentation here: https://python.langchain.com/
├── libs
│ ├── langchain # Main package
│ ├── langchain
│ │ ├── langchain
│ │ ├── tests/unit_tests # Unit tests (present in each package not shown for brevity)
│ │ ├── tests/integration_tests # Integration tests (present in each package not shown for brevity)
│ ├── langchain-community # Third-party integrations
│ ├── langchain-core # Base interfaces for key abstractions
│ ├── langchain-experimental # Experimental components and chains
│ ├── community # Third-party integrations
│ ├── langchain-community
│ ├── core # Base interfaces for key abstractions
│ │ ├── langchain-core
│ ├── experimental # Experimental components and chains
│ │ ├── langchain-experimental
| ├── cli # Command line interface
│ │ ├── langchain-cli
│ ├── text-splitters
│ │ ├── langchain-text-splitters
│ ├── standard-tests
│ │ ├── langchain-standard-tests
│ ├── partners
│ ├── langchain-partner-1
│ ├── langchain-partner-2

View File

@@ -5,7 +5,7 @@
"id": "cfdf4f09-8125-4ed1-8063-6feed57da8a3",
"metadata": {},
"source": [
"# How to let your end users choose their model\n",
"# How to init any model in one line\n",
"\n",
"Many LLM applications let end users specify what model provider and model they want the application to be powered by. This requires writing some logic to initialize different ChatModels based on some user configuration. The `init_chat_model()` helper method makes it easy to initialize a number of different model integrations without having to worry about import paths and class names.\n",
"\n",

View File

@@ -71,13 +71,13 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-1106\")"
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")"
]
},
{
@@ -95,19 +95,15 @@
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='I said \"J\\'adore la programmation,\" which means \"I love programming\" in French.')"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
"name": "stdout",
"output_type": "stream",
"text": [
"I said \"J'adore la programmation,\" which means \"I love programming\" in French.\n"
]
}
],
"source": [
"from langchain_core.messages import AIMessage, HumanMessage\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
@@ -115,23 +111,25 @@
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"messages\"),\n",
" (\"placeholder\", \"{messages}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | chat\n",
"\n",
"chain.invoke(\n",
"ai_msg = chain.invoke(\n",
" {\n",
" \"messages\": [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French: I love programming.\"\n",
" (\n",
" \"human\",\n",
" \"Translate this sentence from English to French: I love programming.\",\n",
" ),\n",
" AIMessage(content=\"J'adore la programmation.\"),\n",
" HumanMessage(content=\"What did you just say?\"),\n",
" (\"ai\", \"J'adore la programmation.\"),\n",
" (\"human\", \"What did you just say?\"),\n",
" ],\n",
" }\n",
")"
")\n",
"print(ai_msg.content)"
]
},
{
@@ -193,7 +191,7 @@
{
"data": {
"text/plain": [
"AIMessage(content='You asked me to translate the sentence \"I love programming\" from English to French.')"
"AIMessage(content='You just asked me to translate the sentence \"I love programming\" from English to French.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 61, 'total_tokens': 79}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5cbb21c2-9c30-4031-8ea8-bfc497989535-0', usage_metadata={'input_tokens': 61, 'output_tokens': 18, 'total_tokens': 79})"
]
},
"execution_count": 5,
@@ -250,7 +248,7 @@
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
@@ -304,10 +302,17 @@
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Parent run dc4e2f79-4bcd-4a36-9506-55ace9040588 not found for run 34b5773e-3ced-46a6-8daf-4d464c15c940. Treating as a root run.\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content='The translation of \"I love programming\" in French is \"J\\'adore la programmation.\"')"
"AIMessage(content='\"J\\'adore la programmation.\"', response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 39, 'total_tokens': 48}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-648b0822-b0bb-47a2-8e7d-7d34744be8f2-0', usage_metadata={'input_tokens': 39, 'output_tokens': 9, 'total_tokens': 48})"
]
},
"execution_count": 8,
@@ -327,10 +332,17 @@
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Parent run cc14b9d8-c59e-40db-a523-d6ab3fc2fa4f not found for run 5b75e25c-131e-46ee-9982-68569db04330. Treating as a root run.\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content='You just asked me to translate the sentence \"I love programming\" from English to French.')"
"AIMessage(content='You asked me to translate the sentence \"I love programming\" from English to French.', response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 63, 'total_tokens': 80}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5950435c-1dc2-43a6-836f-f989fd62c95e-0', usage_metadata={'input_tokens': 63, 'output_tokens': 17, 'total_tokens': 80})"
]
},
"execution_count": 9,
@@ -354,12 +366,12 @@
"\n",
"### Trimming messages\n",
"\n",
"LLMs and chat models have limited context windows, and even if you're not directly hitting limits, you may want to limit the amount of distraction the model has to deal with. One solution is to only load and store the most recent `n` messages. Let's use an example history with some preloaded messages:"
"LLMs and chat models have limited context windows, and even if you're not directly hitting limits, you may want to limit the amount of distraction the model has to deal with. One solution is trim the historic messages before passing them to the model. Let's use an example history with some preloaded messages:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 21,
"metadata": {},
"outputs": [
{
@@ -371,7 +383,7 @@
" AIMessage(content='Fine thanks!')]"
]
},
"execution_count": 10,
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
@@ -396,34 +408,28 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Parent run 7ff2d8ec-65e2-4f67-8961-e498e2c4a591 not found for run 3881e990-6596-4326-84f6-2b76949e0657. Treating as a root run.\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content='Your name is Nemo.')"
"AIMessage(content='Your name is Nemo.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 66, 'total_tokens': 72}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-f8aabef8-631a-4238-a39b-701e881fbe47-0', usage_metadata={'input_tokens': 66, 'output_tokens': 6, 'total_tokens': 72})"
]
},
"execution_count": 11,
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | chat\n",
"\n",
"chain_with_message_history = RunnableWithMessageHistory(\n",
" chain,\n",
" lambda session_id: demo_ephemeral_chat_history,\n",
@@ -443,34 +449,33 @@
"source": [
"We can see the chain remembers the preloaded name.\n",
"\n",
"But let's say we have a very small context window, and we want to trim the number of messages passed to the chain to only the 2 most recent ones. We can use the `clear` method to remove messages and re-add them to the history. We don't have to, but let's put this method at the front of our chain to ensure it's always called:"
"But let's say we have a very small context window, and we want to trim the number of messages passed to the chain to only the 2 most recent ones. We can use the built in [trim_messages](/docs/how_to/trim_messages/) util to trim messages based on their token count before they reach our prompt. In this case we'll count each message as 1 \"token\" and keep only the last two messages:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain_core.messages import trim_messages\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"\n",
"def trim_messages(chain_input):\n",
" stored_messages = demo_ephemeral_chat_history.messages\n",
" if len(stored_messages) <= 2:\n",
" return False\n",
"\n",
" demo_ephemeral_chat_history.clear()\n",
"\n",
" for message in stored_messages[-2:]:\n",
" demo_ephemeral_chat_history.add_message(message)\n",
"\n",
" return True\n",
"\n",
"trimmer = trim_messages(strategy=\"last\", max_tokens=2, token_counter=len)\n",
"\n",
"chain_with_trimming = (\n",
" RunnablePassthrough.assign(messages_trimmed=trim_messages)\n",
" | chain_with_message_history\n",
" RunnablePassthrough.assign(chat_history=itemgetter(\"chat_history\") | trimmer)\n",
" | prompt\n",
" | chat\n",
")\n",
"\n",
"chain_with_trimmed_history = RunnableWithMessageHistory(\n",
" chain_with_trimming,\n",
" lambda session_id: demo_ephemeral_chat_history,\n",
" input_messages_key=\"input\",\n",
" history_messages_key=\"chat_history\",\n",
")"
]
},
@@ -483,22 +488,29 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Parent run 775cde65-8d22-4c44-80bb-f0b9811c32ca not found for run 5cf71d0e-4663-41cd-8dbe-e9752689cfac. Treating as a root run.\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\"P. Sherman's address is 42 Wallaby Way, Sydney.\")"
"AIMessage(content='P. Sherman is a fictional character from the animated movie \"Finding Nemo\" who lives at 42 Wallaby Way, Sydney.', response_metadata={'token_usage': {'completion_tokens': 27, 'prompt_tokens': 53, 'total_tokens': 80}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5642ef3a-fdbe-43cf-a575-d1785976a1b9-0', usage_metadata={'input_tokens': 53, 'output_tokens': 27, 'total_tokens': 80})"
]
},
"execution_count": 13,
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_trimming.invoke(\n",
"chain_with_trimmed_history.invoke(\n",
" {\"input\": \"Where does P. Sherman live?\"},\n",
" {\"configurable\": {\"session_id\": \"unused\"}},\n",
")"
@@ -506,19 +518,23 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 25,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content=\"What's my name?\"),\n",
" AIMessage(content='Your name is Nemo.'),\n",
"[HumanMessage(content=\"Hey there! I'm Nemo.\"),\n",
" AIMessage(content='Hello!'),\n",
" HumanMessage(content='How are you today?'),\n",
" AIMessage(content='Fine thanks!'),\n",
" HumanMessage(content=\"What's my name?\"),\n",
" AIMessage(content='Your name is Nemo.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 66, 'total_tokens': 72}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-f8aabef8-631a-4238-a39b-701e881fbe47-0', usage_metadata={'input_tokens': 66, 'output_tokens': 6, 'total_tokens': 72}),\n",
" HumanMessage(content='Where does P. Sherman live?'),\n",
" AIMessage(content=\"P. Sherman's address is 42 Wallaby Way, Sydney.\")]"
" AIMessage(content='P. Sherman is a fictional character from the animated movie \"Finding Nemo\" who lives at 42 Wallaby Way, Sydney.', response_metadata={'token_usage': {'completion_tokens': 27, 'prompt_tokens': 53, 'total_tokens': 80}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5642ef3a-fdbe-43cf-a575-d1785976a1b9-0', usage_metadata={'input_tokens': 53, 'output_tokens': 27, 'total_tokens': 80})]"
]
},
"execution_count": 14,
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
@@ -536,48 +552,39 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 27,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Parent run fde7123f-6fd3-421a-a3fc-2fb37dead119 not found for run 061a4563-2394-470d-a3ed-9bf1388ca431. Treating as a root run.\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\"I'm sorry, I don't have access to your personal information.\")"
"AIMessage(content=\"I'm sorry, but I don't have access to your personal information, so I don't know your name. How else may I assist you today?\", response_metadata={'token_usage': {'completion_tokens': 31, 'prompt_tokens': 74, 'total_tokens': 105}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-0ab03495-1f7c-4151-9070-56d2d1c565ff-0', usage_metadata={'input_tokens': 74, 'output_tokens': 31, 'total_tokens': 105})"
]
},
"execution_count": 15,
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_trimming.invoke(\n",
"chain_with_trimmed_history.invoke(\n",
" {\"input\": \"What is my name?\"},\n",
" {\"configurable\": {\"session_id\": \"unused\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"cell_type": "markdown",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='Where does P. Sherman live?'),\n",
" AIMessage(content=\"P. Sherman's address is 42 Wallaby Way, Sydney.\"),\n",
" HumanMessage(content='What is my name?'),\n",
" AIMessage(content=\"I'm sorry, I don't have access to your personal information.\")]"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history.messages"
"Check out our [how to guide on trimming messages](/docs/how_to/trim_messages/) for more."
]
},
{
@@ -638,7 +645,7 @@
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability. The provided chat history includes facts about the user you are speaking with.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\"user\", \"{input}\"),\n",
" ]\n",
")\n",
@@ -672,7 +679,7 @@
" return False\n",
" summarization_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\n",
" \"user\",\n",
" \"Distill the above chat messages into a single summary message. Include as many specific details as you can.\",\n",
@@ -772,9 +779,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

File diff suppressed because one or more lines are too long

View File

@@ -69,6 +69,17 @@
"Once we have loaded PDFs into LangChain `Document` objects, we can index them (e.g., a RAG application) in the usual way:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c3b932bb",
"metadata": {},
"outputs": [],
"source": [
"%pip install faiss-cpu \n",
"# use `pip install faiss-gpu` for CUDA GPU support"
]
},
{
"cell_type": "code",
"execution_count": null,

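The indexing cell that follows is truncated in this hunk. As a rough, hedged sketch of what indexing the loaded pages with FAISS typically looks like (the file name and query are placeholders, and an OpenAI API key is assumed to be configured):

```python
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Placeholder path -- substitute the PDF used earlier in the guide.
docs = PyPDFLoader("example.pdf").load()

# Embed the pages and store them in an in-memory FAISS index.
vectorstore = FAISS.from_documents(docs, OpenAIEmbeddings())

# Retrieve the pages most similar to a query.
retriever = vectorstore.as_retriever()
relevant_docs = retriever.invoke("What is this document about?")
```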
View File

@@ -0,0 +1,203 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e389175d-8a65-4f0d-891c-dbdfabb3c3ef",
"metadata": {},
"source": [
"# How to filter messages\n",
"\n",
"In more complex chains and agents we might track state with a list of messages. This list can start to accumulate messages from multiple different models, speakers, sub-chains, etc., and we may only want to pass subsets of this full list of messages to each model call in the chain/agent.\n",
"\n",
"The `filter_messages` utility makes it easy to filter messages by type, id, or name.\n",
"\n",
"## Basic usage"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "f4ad2fd3-3cab-40d4-a989-972115865b8b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='example input', name='example_user', id='2'),\n",
" HumanMessage(content='real input', name='bob', id='4')]"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import (\n",
" AIMessage,\n",
" HumanMessage,\n",
" SystemMessage,\n",
" filter_messages,\n",
")\n",
"\n",
"messages = [\n",
" SystemMessage(\"you are a good assistant\", id=\"1\"),\n",
" HumanMessage(\"example input\", id=\"2\", name=\"example_user\"),\n",
" AIMessage(\"example output\", id=\"3\", name=\"example_assistant\"),\n",
" HumanMessage(\"real input\", id=\"4\", name=\"bob\"),\n",
" AIMessage(\"real output\", id=\"5\", name=\"alice\"),\n",
"]\n",
"\n",
"filter_messages(messages, include_types=\"human\")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "7b663a1e-a8ae-453e-a072-8dd75dfab460",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[SystemMessage(content='you are a good assistant', id='1'),\n",
" HumanMessage(content='real input', name='bob', id='4'),\n",
" AIMessage(content='real output', name='alice', id='5')]"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"filter_messages(messages, exclude_names=[\"example_user\", \"example_assistant\"])"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "db170e46-03f8-4710-b967-23c70c3ac054",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='example input', name='example_user', id='2'),\n",
" HumanMessage(content='real input', name='bob', id='4'),\n",
" AIMessage(content='real output', name='alice', id='5')]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"filter_messages(messages, include_types=[HumanMessage, AIMessage], exclude_ids=[\"3\"])"
]
},
{
"cell_type": "markdown",
"id": "b7c4e5ad-d1b4-4c18-b250-864adde8f0dd",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"`filter_messages` can be used in an imperatively (like above) or declaratively, making it easy to compose with other components in a chain:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "675f8f79-db39-401c-a582-1df2478cba30",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=[], response_metadata={'id': 'msg_01Wz7gBHahAwkZ1KCBNtXmwA', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 3}}, id='run-b5d8a3fe-004f-4502-a071-a6c025031827-0', usage_metadata={'input_tokens': 16, 'output_tokens': 3, 'total_tokens': 19})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# pip install -U langchain-anthropic\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\", temperature=0)\n",
"# Notice we don't pass in messages. This creates\n",
"# a RunnableLambda that takes messages as input\n",
"filter_ = filter_messages(exclude_names=[\"example_user\", \"example_assistant\"])\n",
"chain = filter_ | llm\n",
"chain.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "4133ab28-f49c-480f-be92-b51eb6559153",
"metadata": {},
"source": [
"Looking at the LangSmith trace we can see that before the messages are passed to the model they are filtered: https://smith.langchain.com/public/f808a724-e072-438e-9991-657cc9e7e253/r\n",
"\n",
"Looking at just the filter_, we can see that it's a Runnable object that can be invoked like all Runnables:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "c090116a-1fef-43f6-a178-7265dff9db00",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='real input', name='bob', id='4'),\n",
" AIMessage(content='real output', name='alice', id='5')]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"filter_.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "ff339066-d424-4042-8cca-cd4b007c1a8e",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For a complete description of all arguments head to the API reference: https://api.python.langchain.com/en/latest/messages/langchain_core.messages.utils.filter_messages.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"language": "python",
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -300,7 +300,11 @@
"id": "922b48bd",
"metadata": {},
"source": [
"# Streaming\n",
"## Streaming\n",
"\n",
":::{.callout-note}\n",
"[RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html) is best suited for code that does not need to support streaming. If you need to support streaming (i.e., be able to operate on chunks of inputs and yield chunks of outputs), use [RunnableGenerator](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableGenerator.html) instead as in the example below.\n",
":::\n",
"\n",
"You can use generator functions (ie. functions that use the `yield` keyword, and behave like iterators) in a chain.\n",
"\n",

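The streaming example the callout points to is not visible in this hunk. A minimal sketch of the `RunnableGenerator` pattern, assuming a toy uppercasing transform rather than the example from the guide:

```python
from typing import Iterator

from langchain_core.runnables import RunnableGenerator


def upper(chunks: Iterator[str]) -> Iterator[str]:
    # Operate on each incoming chunk and yield transformed chunks,
    # so streaming is preserved end to end.
    for chunk in chunks:
        yield chunk.upper()


streaming_upper = RunnableGenerator(upper)

print(streaming_upper.invoke("hello"))  # "HELLO"
for chunk in streaming_upper.stream("hello"):
    print(chunk)  # streamed chunk(s)
```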
View File

@@ -14,6 +14,7 @@ For comprehensive descriptions of every class and function see the [API Referenc
## Installation
- [How to: install LangChain packages](/docs/how_to/installation/)
- [How to: use LangChain with different Pydantic versions](/docs/how_to/pydantic_compatibility)
## Key features
@@ -78,7 +79,15 @@ These are the core building blocks you can use when building applications.
- [How to: stream a response back](/docs/how_to/chat_streaming)
- [How to: track token usage](/docs/how_to/chat_token_usage_tracking)
- [How to: track response metadata across providers](/docs/how_to/response_metadata)
- [How to: let your end users choose their model](/docs/how_to/chat_models_universal_init/)
- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
### Messages
[Messages](/docs/concepts/#messages) are the input and output of chat models. They have some `content` and a `role`, which describes the source of the message.
- [How to: trim messages](/docs/how_to/trim_messages/)
- [How to: filter messages](/docs/how_to/filter_messages/)
- [How to: merge consecutive messages of the same type](/docs/how_to/merge_message_runs/)
### LLMs
@@ -216,6 +225,8 @@ All of LangChain components can easily be extended to support your own versions.
- [How to: create custom callback handlers](/docs/how_to/custom_callbacks)
- [How to: define a custom tool](/docs/how_to/custom_tools)
### Serialization
- [How to: save and load LangChain objects](/docs/how_to/serialization)
## Use cases
@@ -250,6 +261,7 @@ For a high-level tutorial on building chatbots, check out [this guide](/docs/tut
- [How to: manage memory](/docs/how_to/chatbots_memory)
- [How to: do retrieval](/docs/how_to/chatbots_retrieval)
- [How to: use tools](/docs/how_to/chatbots_tools)
- [How to: manage large chat history](/docs/how_to/trim_messages/)
### Query analysis
@@ -294,7 +306,15 @@ You can peruse [LangGraph how-to guides here](https://langchain-ai.github.io/lan
## [LangSmith](https://docs.smith.langchain.com/)
LangSmith allows you to closely trace, monitor and evaluate your LLM application.
It seamlessly integrates with LangChain, and you can use it to inspect and debug individual steps of your chains as you build.
It seamlessly integrates with LangChain and LangGraph, and you can use it to inspect and debug individual steps of your chains and agents as you build.
LangSmith documentation is hosted on a separate site.
You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/how_to_guides/).
### Evaluation
<span data-heading-keywords="evaluation,evaluate"></span>
Evaluating performance is a vital part of building LLM-powered applications.
LangSmith helps with every step of the process from creating a dataset to defining metrics to running evaluators.
To learn more, check out the [LangSmith evaluation how-to guides](https://docs.smith.langchain.com/how_to_guides#evaluation).

View File

@@ -60,7 +60,7 @@
" * document addition by id (`add_documents` method with `ids` argument)\n",
" * delete by id (`delete` method with `ids` argument)\n",
"\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `Yellowbrick`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `AzureCosmosDBNoSqlVectorSearch`, `AzureCosmosDBVectorSearch`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `Yellowbrick`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
" \n",
"## Caution\n",
"\n",

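For context, the compatibility list above boils down to two vectorstore capabilities: adding documents by id and deleting by id. A hedged sketch of those two calls, using Chroma as an illustrative store (collection name and ids are placeholders):

```python
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

vectorstore = Chroma(collection_name="demo", embedding_function=OpenAIEmbeddings())

docs = [
    Document(page_content="kitty", metadata={"source": "kitty.txt"}),
    Document(page_content="doggy", metadata={"source": "doggy.txt"}),
]

# The two capabilities the indexing API relies on:
vectorstore.add_documents(docs, ids=["1", "2"])  # document addition by id
vectorstore.delete(ids=["1"])                    # delete by id
```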
View File

@@ -0,0 +1,170 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ac47bfab-0f4f-42ce-8bb6-898ef22a0338",
"metadata": {},
"source": [
"# How to merge consecutive messages of the same type\n",
"\n",
"Certain models do not support passing in consecutive messages of the same type (a.k.a. \"runs\" of the same message type).\n",
"\n",
"The `merge_message_runs` utility makes it easy to merge consecutive messages of the same type.\n",
"\n",
"## Basic usage"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "1a215bbb-c05c-40b0-a6fd-d94884d517df",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"SystemMessage(content=\"you're a good assistant.\\nyou always respond with a joke.\")\n",
"\n",
"HumanMessage(content=[{'type': 'text', 'text': \"i wonder why it's called langchain\"}, 'and who is harrison chasing anyways'])\n",
"\n",
"AIMessage(content='Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!\\nWhy, he\\'s probably chasing after the last cup of coffee in the office!')\n"
]
}
],
"source": [
"from langchain_core.messages import (\n",
" AIMessage,\n",
" HumanMessage,\n",
" SystemMessage,\n",
" merge_message_runs,\n",
")\n",
"\n",
"messages = [\n",
" SystemMessage(\"you're a good assistant.\"),\n",
" SystemMessage(\"you always respond with a joke.\"),\n",
" HumanMessage([{\"type\": \"text\", \"text\": \"i wonder why it's called langchain\"}]),\n",
" HumanMessage(\"and who is harrison chasing anyways\"),\n",
" AIMessage(\n",
" 'Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!'\n",
" ),\n",
" AIMessage(\"Why, he's probably chasing after the last cup of coffee in the office!\"),\n",
"]\n",
"\n",
"merged = merge_message_runs(messages)\n",
"print(\"\\n\\n\".join([repr(x) for x in merged]))"
]
},
{
"cell_type": "markdown",
"id": "0544c811-7112-4b76-8877-cc897407c738",
"metadata": {},
"source": [
"Notice that if the contents of one of the messages to merge is a list of content blocks then the merged message will have a list of content blocks. And if both messages to merge have string contents then those are concatenated with a newline character."
]
},
{
"cell_type": "markdown",
"id": "1b2eee74-71c8-4168-b968-bca580c25d18",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"`merge_message_runs` can be used in an imperatively (like above) or declaratively, making it easy to compose with other components in a chain:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6d5a0283-11f8-435b-b27b-7b18f7693592",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=[], response_metadata={'id': 'msg_01D6R8Naum57q8qBau9vLBUX', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 84, 'output_tokens': 3}}, id='run-ac0c465b-b54f-4b8b-9295-e5951250d653-0', usage_metadata={'input_tokens': 84, 'output_tokens': 3, 'total_tokens': 87})"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# pip install -U langchain-anthropic\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\", temperature=0)\n",
"# Notice we don't pass in messages. This creates\n",
"# a RunnableLambda that takes messages as input\n",
"merger = merge_message_runs()\n",
"chain = merger | llm\n",
"chain.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "72e90dce-693c-4842-9526-ce6460fe956b",
"metadata": {},
"source": [
"Looking at the LangSmith trace we can see that before the messages are passed to the model they are merged: https://smith.langchain.com/public/ab558677-cac9-4c59-9066-1ecce5bcd87c/r\n",
"\n",
"Looking at just the merger, we can see that it's a Runnable object that can be invoked like all Runnables:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "460817a6-c327-429d-958e-181a8c46059c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[SystemMessage(content=\"you're a good assistant.\\nyou always respond with a joke.\"),\n",
" HumanMessage(content=[{'type': 'text', 'text': \"i wonder why it's called langchain\"}, 'and who is harrison chasing anyways']),\n",
" AIMessage(content='Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!\\nWhy, he\\'s probably chasing after the last cup of coffee in the office!')]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"merger.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "4548d916-ce21-4dc6-8f19-eedb8003ace6",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For a complete description of all arguments head to the API reference: https://api.python.langchain.com/en/latest/messages/langchain_core.messages.utils.merge_message_runs.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"language": "python",
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -129,7 +129,7 @@
"id": "a531da5e",
"metadata": {},
"source": [
"## What is the runnable you are trying wrap?\n",
"## What is the runnable you are trying to wrap?\n",
"\n",
"`RunnableWithMessageHistory` can only wrap certain types of Runnables. Specifically, it can be used for any Runnable that takes as input one of:\n",
"\n",
@@ -898,7 +898,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@@ -52,7 +52,12 @@
" (\"system\", \"Describe the image provided\"),\n",
" (\n",
" \"user\",\n",
" [{\"type\": \"image_url\", \"image_url\": \"data:image/jpeg;base64,{image_data}\"}],\n",
" [\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"data:image/jpeg;base64,{image_data}\"},\n",
" }\n",
" ],\n",
" ),\n",
" ]\n",
")"
@@ -110,11 +115,11 @@
" [\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": \"data:image/jpeg;base64,{image_data1}\",\n",
" \"image_url\": {\"url\": \"data:image/jpeg;base64,{image_data1}\"},\n",
" },\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": \"data:image/jpeg;base64,{image_data2}\",\n",
" \"image_url\": {\"url\": \"data:image/jpeg;base64,{image_data2}\"},\n",
" },\n",
" ],\n",
" ),\n",

View File

@@ -94,7 +94,7 @@
"source": [
"## LCEL\n",
"\n",
"Output parsers implement the [Runnable interface](/docs/concepts#interface), the basic building block of the [LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.\n",
"Output parsers implement the [Runnable interface](/docs/concepts#interface), the basic building block of the [LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language-lcel). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.\n",
"\n",
"Output parsers accept a string or `BaseMessage` as input and can return an arbitrary type."
]

View File

@@ -0,0 +1,107 @@
# How to use LangChain with different Pydantic versions
- Pydantic v2 was released in June, 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/)
- v2 contains a number of breaking changes (https://docs.pydantic.dev/2.0/migration/)
- Pydantic v2 and v1 are under the same package name, so both versions cannot be installed at the same time
## LangChain Pydantic migration plan
As of `langchain>=0.0.267`, LangChain will allow users to install either Pydantic V1 or V2.
* Internally LangChain will continue to [use V1](https://docs.pydantic.dev/latest/migration/#continue-using-pydantic-v1-features).
* During this time, users can pin their pydantic version to v1 to avoid breaking changes, or start a partial
migration using pydantic v2 throughout their code, but avoiding mixing v1 and v2 code for LangChain (see below).
Users can either pin to pydantic v1 and upgrade their code in one go once LangChain has migrated to v2 internally, or they can start a partial migration to v2, but must avoid mixing v1 and v2 code for LangChain.
Below are two examples showing how to avoid mixing pydantic v1 and v2 code in
the case of inheritance and in the case of passing objects to LangChain.
**Example 1: Extending via inheritance**
**YES**
```python
from pydantic.v1 import Field, root_validator, validator
from langchain_core.tools import BaseTool
class CustomTool(BaseTool): # BaseTool is v1 code
x: int = Field(default=1)
def _run(*args, **kwargs):
return "hello"
@validator('x') # v1 code
@classmethod
def validate_x(cls, x: int) -> int:
return 1
CustomTool(
name='custom_tool',
description="hello",
x=1,
)
```
Mixing Pydantic v2 primitives with Pydantic v1 primitives can raise cryptic errors
**NO**
```python
from pydantic import Field, field_validator # pydantic v2
from langchain_core.tools import BaseTool
class CustomTool(BaseTool): # BaseTool is v1 code
x: int = Field(default=1)
def _run(*args, **kwargs):
return "hello"
@field_validator('x') # v2 code
@classmethod
def validate_x(cls, x: int) -> int:
return 1
CustomTool(
name='custom_tool',
description="hello",
x=1,
)
```
**Example 2: Passing objects to LangChain**
**YES**
```python
from langchain_core.tools import Tool
from pydantic.v1 import BaseModel, Field # <-- Uses v1 namespace
class CalculatorInput(BaseModel):
question: str = Field()
Tool.from_function( # <-- tool uses v1 namespace
func=lambda question: 'hello',
name="Calculator",
description="useful for when you need to answer questions about math",
args_schema=CalculatorInput
)
```
**NO**
```python
from langchain_core.tools import Tool
from pydantic import BaseModel, Field # <-- Uses v2 namespace
class CalculatorInput(BaseModel):
question: str = Field()
Tool.from_function( # <-- tool uses v1 namespace
func=lambda question: 'hello',
name="Calculator",
description="useful for when you need to answer questions about math",
args_schema=CalculatorInput
)
```

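One additional, hedged sketch (not from the original guide): since both major versions ship under the same `pydantic` package name, you can check the installed version at runtime and import the v1 namespace accordingly when building objects that will be passed to LangChain:

```python
import pydantic

# Pydantic v2 still exposes the v1 API under `pydantic.v1`.
major_version = int(pydantic.VERSION.split(".")[0])

if major_version >= 2:
    from pydantic.v1 import BaseModel, Field  # keep v1 objects for LangChain
else:
    from pydantic import BaseModel, Field


class CalculatorInput(BaseModel):
    question: str = Field()
```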
View File

@@ -14,7 +14,7 @@
"We will cover two approaches:\n",
"\n",
"1. Using the built-in [create_retrieval_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html), which returns sources by default;\n",
"2. Using a simple [LCEL](/docs/concepts#langchain-expression-language) implementation, to show the operating principle."
"2. Using a simple [LCEL](/docs/concepts#langchain-expression-language-lcel) implementation, to show the operating principle."
]
},
{

View File

@@ -323,7 +323,7 @@
"id": "fa0f589d",
"metadata": {},
"source": [
"# Routing by semantic similarity\n",
"## Routing by semantic similarity\n",
"\n",
"One especially useful technique is to use embeddings to route a query to the most relevant prompt. Here's an example."
]
@@ -371,7 +371,7 @@
"chain = (\n",
" {\"query\": RunnablePassthrough()}\n",
" | RunnableLambda(prompt_router)\n",
" | ChatAnthropic(model_name=\"claude-3-haiku-20240307\")\n",
" | ChatAnthropic(model=\"claude-3-haiku-20240307\")\n",
" | StrOutputParser()\n",
")"
]

View File

@@ -297,13 +297,67 @@
"print(len(docs))"
]
},
{
"cell_type": "markdown",
"source": [
"### Gradient\n",
"\n",
"In this method, the gradient of distance is used to split chunks along with the percentile method.\n",
"This method is useful when chunks are highly correlated with each other or specific to a domain e.g. legal or medical. The idea is to apply anomaly detection on gradient array so that the distribution become wider and easy to identify boundaries in highly semantic data."
],
"metadata": {
"collapsed": false
},
"id": "423c6e099e94ca69"
},
{
"cell_type": "code",
"execution_count": null,
"id": "b1f65472",
"metadata": {},
"outputs": [],
"source": []
"source": [
"text_splitter = SemanticChunker(\n",
" OpenAIEmbeddings(), breakpoint_threshold_type=\"gradient\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.\n"
]
}
],
"source": [
"docs = text_splitter.create_documents([state_of_the_union])\n",
"print(docs[0].page_content)"
],
"metadata": {},
"id": "e9f393d316ce1f6c"
},
{
"cell_type": "code",
"execution_count": 8,
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"26\n"
]
}
],
"source": [
"print(len(docs))"
],
"metadata": {},
"id": "a407cd57f02a0db4"
}
],
"metadata": {

View File

@@ -0,0 +1,305 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ab3dc782-321e-4503-96ee-ac88a15e4b5e",
"metadata": {},
"source": [
"# How to save and load LangChain objects\n",
"\n",
"LangChain classes implement standard methods for serialization. Serializing LangChain objects using these methods confer some advantages:\n",
"\n",
"- Secrets, such as API keys, are separated from other parameters and can be loaded back to the object on de-serialization;\n",
"- De-serialization is kept compatible across package versions, so objects that were serialized with one version of LangChain can be properly de-serialized with another.\n",
"\n",
"To save and load LangChain objects using this system, use the `dumpd`, `dumps`, `load`, and `loads` functions in the [load module](https://api.python.langchain.com/en/latest/core_api_reference.html#module-langchain_core.load) of `langchain-core`. These functions support JSON and JSON-serializable objects.\n",
"\n",
"All LangChain objects that inherit from [Serializable](https://api.python.langchain.com/en/latest/load/langchain_core.load.serializable.Serializable.html) are JSON-serializable. Examples include [messages](https://api.python.langchain.com/en/latest/core_api_reference.html#module-langchain_core.messages), [document objects](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) (e.g., as returned from [retrievers](/docs/concepts/#retrievers)), and most [Runnables](/docs/concepts/#langchain-expression-language-lcel), such as chat models, retrievers, and [chains](/docs/how_to/sequence) implemented with the LangChain Expression Language.\n",
"\n",
"Below we walk through an example with a simple [LLM chain](/docs/tutorials/llm_chain).\n",
"\n",
":::{.callout-caution}\n",
"\n",
"De-serialization using `load` and `loads` can instantiate any serializable LangChain object. Only use this feature with trusted inputs!\n",
"\n",
"De-serialization is a beta feature and is subject to change.\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "f85d9e51-2a36-4f69-83b1-c716cd43f790",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.load import dumpd, dumps, load, loads\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"Translate the following into {language}:\"),\n",
" (\"user\", \"{text}\"),\n",
" ],\n",
")\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", api_key=\"llm-api-key\")\n",
"\n",
"chain = prompt | llm"
]
},
{
"cell_type": "markdown",
"id": "356ea99f-5cb5-4433-9a6c-2443d2be9ed3",
"metadata": {},
"source": [
"## Saving objects\n",
"\n",
"### To json"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "26516764-d46b-4357-a6c6-bd8315bfa530",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"lc\": 1,\n",
" \"type\": \"constructor\",\n",
" \"id\": [\n",
" \"langchain\",\n",
" \"schema\",\n",
" \"runnable\",\n",
" \"RunnableSequence\"\n",
" ],\n",
" \"kwargs\": {\n",
" \"first\": {\n",
" \"lc\": 1,\n",
" \"type\": \"constructor\",\n",
" \"id\": [\n",
" \"langchain\",\n",
" \"prompts\",\n",
" \"chat\",\n",
" \"ChatPromptTemplate\"\n",
" ],\n",
" \"kwargs\": {\n",
" \"input_variables\": [\n",
" \"language\",\n",
" \"text\"\n",
" ],\n",
" \"messages\": [\n",
" {\n",
" \"lc\": 1,\n",
" \"type\": \"constructor\",\n",
" \n"
]
}
],
"source": [
"string_representation = dumps(chain, pretty=True)\n",
"print(string_representation[:500])"
]
},
{
"cell_type": "markdown",
"id": "bd425716-545d-466b-a4e5-dc9952cfd72a",
"metadata": {},
"source": [
"### To a json-serializable Python dict"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6561a968-1741-4419-8c29-e705b9d0ef39",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'dict'>\n"
]
}
],
"source": [
"dict_representation = dumpd(chain)\n",
"\n",
"print(type(dict_representation))"
]
},
{
"cell_type": "markdown",
"id": "711e986e-dd24-4839-9e38-c57903378a5f",
"metadata": {},
"source": [
"### To disk"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f818378b-f4d6-43a7-895b-76cf7359b157",
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"with open(\"/tmp/chain.json\", \"w\") as fp:\n",
" json.dump(string_representation, fp)"
]
},
{
"cell_type": "markdown",
"id": "1e621a32-ff5f-4627-ad59-88cacba73c6b",
"metadata": {},
"source": [
"Note that the API key is withheld from the serialized representations. Parameters that are considered secret are specified by the `.lc_secrets` attribute of the LangChain object:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8225e150-000a-4fbc-9f3d-09568f4b560b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'openai_api_key': 'OPENAI_API_KEY'}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.last.lc_secrets"
]
},
{
"cell_type": "markdown",
"id": "6d090177-eb1c-4bfb-8c13-29286afe17d9",
"metadata": {},
"source": [
"## Loading objects\n",
"\n",
"Specifying `secrets_map` in `load` and `loads` will load the corresponding secrets onto the de-serialized LangChain object.\n",
"\n",
"### From string"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "54a66267-5f3a-40a2-bfcc-8b44bb24c154",
"metadata": {},
"outputs": [],
"source": [
"chain = loads(string_representation, secrets_map={\"OPENAI_API_KEY\": \"llm-api-key\"})"
]
},
{
"cell_type": "markdown",
"id": "5ed9aff1-92cc-44ba-b2ec-4d12f924fa03",
"metadata": {},
"source": [
"### From dict"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "76979932-13de-4427-9f88-040fb05a6778",
"metadata": {},
"outputs": [],
"source": [
"chain = load(dict_representation, secrets_map={\"OPENAI_API_KEY\": \"llm-api-key\"})"
]
},
{
"cell_type": "markdown",
"id": "7dd81a2a-5163-414d-ab42-f1c35e30471b",
"metadata": {},
"source": [
"### From disk"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "033f62a7-3377-472a-be58-718baa6ab445",
"metadata": {},
"outputs": [],
"source": [
"with open(\"/tmp/chain.json\", \"r\") as fp:\n",
" chain = loads(json.load(fp), secrets_map={\"OPENAI_API_KEY\": \"llm-api-key\"})"
]
},
{
"cell_type": "markdown",
"id": "dc520fdb-035a-468f-a8a8-c3ffe8ed98eb",
"metadata": {},
"source": [
"Note that we recover the API key specified at the start of the guide:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "566b2475-d9b4-432b-8c3b-27c2f183624e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'llm-api-key'"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.last.openai_api_key.get_secret_value()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7b4cba53-e1d5-4979-927e-b5794a02afc3",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -243,7 +243,7 @@
"text": [
"================================\u001b[1m System Message \u001b[0m================================\n",
"\n",
"You are a \u001b[33;1m\u001b[1;3m{dialect}\u001b[0m expert. Given an input question, creat a syntactically correct \u001b[33;1m\u001b[1;3m{dialect}\u001b[0m query to run.\n",
"You are a \u001b[33;1m\u001b[1;3m{dialect}\u001b[0m expert. Given an input question, create a syntactically correct \u001b[33;1m\u001b[1;3m{dialect}\u001b[0m query to run.\n",
"Unless the user specifies in the question a specific number of examples to obtain, query for at most \u001b[33;1m\u001b[1;3m{top_k}\u001b[0m results using the LIMIT clause as per \u001b[33;1m\u001b[1;3m{dialect}\u001b[0m. You can order the results to return the most informative data in the database.\n",
"Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (\") to denote them as delimited identifiers.\n",
"Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n",
@@ -275,7 +275,7 @@
}
],
"source": [
"system = \"\"\"You are a {dialect} expert. Given an input question, creat a syntactically correct {dialect} query to run.\n",
"system = \"\"\"You are a {dialect} expert. Given an input question, create a syntactically correct {dialect} query to run.\n",
"Unless the user specifies in the question a specific number of examples to obtain, query for at most {top_k} results using the LIMIT clause as per {dialect}. You can order the results to return the most informative data in the database.\n",
"Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (\") to denote them as delimited identifiers.\n",
"Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n",

View File

@@ -41,6 +41,10 @@
"\n",
"Let's take a look at both approaches, and try to understand how to use them.\n",
"\n",
":::info\n",
"For a higher-level overview of streaming techniques in LangChain, see [this section of the conceptual guide](/docs/concepts/#streaming).\n",
":::\n",
"\n",
"## Using Stream\n",
"\n",
"All `Runnable` objects implement a sync method called `stream` and an async variant called `astream`. \n",
@@ -1003,7 +1007,7 @@
"id": "798ea891-997c-454c-bf60-43124f40ee1b",
"metadata": {},
"source": [
"Because both the model and the parser support streaming, we see sreaming events from both components in real time! Kind of cool isn't it? 🦜"
"Because both the model and the parser support streaming, we see streaming events from both components in real time! Kind of cool isn't it? 🦜"
]
},
{

View File

@@ -33,6 +33,8 @@
"\n",
"## The `.with_structured_output()` method\n",
"\n",
"<span data-heading-keywords=\"with_structured_output\"></span>\n",
"\n",
":::info Supported models\n",
"\n",
"You can find a [list of models that support this method here](/docs/integrations/chat/).\n",
@@ -74,7 +76,7 @@
"id": "a808a401-be1f-49f9-ad13-58dd68f7db5f",
"metadata": {},
"source": [
"If we want the model to return a Pydantic object, we just need to pass in desired the Pydantic class:"
"If we want the model to return a Pydantic object, we just need to pass in the desired Pydantic class:"
]
},
{

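The code cell this sentence introduces is cut off in the hunk. A minimal sketch of passing a Pydantic class to `.with_structured_output()`, assuming an OpenAI chat model and an illustrative `Joke` schema:

```python
from typing import Optional

from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI


class Joke(BaseModel):
    """Joke to tell the user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")
    rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")


llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
structured_llm = llm.with_structured_output(Joke)

structured_llm.invoke("Tell me a joke about cats")
# -> Joke(setup=..., punchline=..., rating=...)
```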
View File

@@ -167,13 +167,83 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"llm_with_tools = llm.bind_tools(tools)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also use the `tool_choice` parameter to ensure certain behavior. For example, we can force our tool to call the multiply tool by using the following code:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_9cViskmLvPnHjXk9tbVla5HA', 'function': {'arguments': '{\"a\":2,\"b\":4}', 'name': 'Multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 103, 'total_tokens': 112}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-095b827e-2bdd-43bb-8897-c843f4504883-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 2, 'b': 4}, 'id': 'call_9cViskmLvPnHjXk9tbVla5HA'}], usage_metadata={'input_tokens': 103, 'output_tokens': 9, 'total_tokens': 112})"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_forced_to_multiply = llm.bind_tools(tools, tool_choice=\"Multiply\")\n",
"llm_forced_to_multiply.invoke(\"what is 2 + 4\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Even if we pass it something that doesn't require multiplcation - it will still call the tool!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also just force our tool to select at least one of our tools by passing in the \"any\" (or \"required\" which is OpenAI specific) keyword to the `tool_choice` parameter."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W', 'function': {'arguments': '{\"a\":1,\"b\":2}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 94, 'total_tokens': 109}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-28f75260-9900-4bed-8cd3-f1579abb65e5-0', tool_calls=[{'name': 'Add', 'args': {'a': 1, 'b': 2}, 'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W'}], usage_metadata={'input_tokens': 94, 'output_tokens': 15, 'total_tokens': 109})"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_forced_to_use_tool = llm.bind_tools(tools, tool_choice=\"any\")\n",
"llm_forced_to_use_tool.invoke(\"What day is today?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we can see, even though the prompt didn't really suggest a tool call, our LLM made one since it was forced to do so. You can look at the docs for [`bind_tool`](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.BaseChatOpenAI.html#langchain_openai.chat_models.base.BaseChatOpenAI.bind_tools) to learn about all the ways to customize how your LLM selects tools."
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -711,7 +781,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -36,7 +36,7 @@
"\n",
"When using 3rd party tools, make sure that you understand how the tool works, what permissions\n",
"it has. Read over its documentation and check if anything is required from you\n",
"from a security point of view. Please see our [security](https://python.langchain.com/v0.1/docs/security/) \n",
"from a security point of view. Please see our [security](https://python.langchain.com/v0.2/docs/security/) \n",
"guidelines for more information.\n",
"\n",
":::\n",

View File

@@ -0,0 +1,486 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b5ee5b75-6876-4d62-9ade-5a7a808ae5a2",
"metadata": {},
"source": [
"# How to trim messages\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Messages](/docs/concepts/#messages)\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [Chaining](/docs/how_to/sequence/)\n",
"- [Chat history](/docs/concepts/#chat-history)\n",
"\n",
"The methods in this guide also require `langchain-core>=0.2.9`.\n",
"\n",
":::\n",
"\n",
"All models have finite context windows, meaning there's a limit to how many tokens they can take as input. If you have very long messages or a chain/agent that accumulates a long message is history, you'll need to manage the length of the messages you're passing in to the model.\n",
"\n",
"The `trim_messages` util provides some basic strategies for trimming a list of messages to be of a certain token length.\n",
"\n",
"## Getting the last `max_tokens` tokens\n",
"\n",
"To get the last `max_tokens` in the list of Messages we can set `strategy=\"last\"`. Notice that for our `token_counter` we can pass in a function (more on that below) or a language model (since language models have a message token counting method). It makes sense to pass in a model when you're trimming your messages to fit into the context window of that specific model:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c974633b-3bd0-4844-8a8f-85e3e25f13fe",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[AIMessage(content=\"Hmmm let me think.\\n\\nWhy, he's probably chasing after the last cup of coffee in the office!\"),\n",
" HumanMessage(content='what do you call a speechless parrot')]"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# pip install -U langchain-openai\n",
"from langchain_core.messages import (\n",
" AIMessage,\n",
" HumanMessage,\n",
" SystemMessage,\n",
" trim_messages,\n",
")\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"messages = [\n",
" SystemMessage(\"you're a good assistant, you always respond with a joke.\"),\n",
" HumanMessage(\"i wonder why it's called langchain\"),\n",
" AIMessage(\n",
" 'Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!'\n",
" ),\n",
" HumanMessage(\"and who is harrison chasing anyways\"),\n",
" AIMessage(\n",
" \"Hmmm let me think.\\n\\nWhy, he's probably chasing after the last cup of coffee in the office!\"\n",
" ),\n",
" HumanMessage(\"what do you call a speechless parrot\"),\n",
"]\n",
"\n",
"trim_messages(\n",
" messages,\n",
" max_tokens=45,\n",
" strategy=\"last\",\n",
" token_counter=ChatOpenAI(model=\"gpt-4o\"),\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d3f46654-c4b2-4136-b995-91c3febe5bf9",
"metadata": {},
"source": [
"If we want to always keep the initial system message we can specify `include_system=True`:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "589b0223-3a73-44ec-8315-2dba3ee6117d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[SystemMessage(content=\"you're a good assistant, you always respond with a joke.\"),\n",
" HumanMessage(content='what do you call a speechless parrot')]"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"trim_messages(\n",
" messages,\n",
" max_tokens=45,\n",
" strategy=\"last\",\n",
" token_counter=ChatOpenAI(model=\"gpt-4o\"),\n",
" include_system=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "8a8b542c-04d1-4515-8d82-b999ea4fac4f",
"metadata": {},
"source": [
"If we want to allow splitting up the contents of a message we can specify `allow_partial=True`:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8c46a209-dddd-4d01-81f6-f6ae55d3225c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[SystemMessage(content=\"you're a good assistant, you always respond with a joke.\"),\n",
" AIMessage(content=\"\\nWhy, he's probably chasing after the last cup of coffee in the office!\"),\n",
" HumanMessage(content='what do you call a speechless parrot')]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"trim_messages(\n",
" messages,\n",
" max_tokens=56,\n",
" strategy=\"last\",\n",
" token_counter=ChatOpenAI(model=\"gpt-4o\"),\n",
" include_system=True,\n",
" allow_partial=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "306adf9c-41cd-495c-b4dc-e4f43dd7f8f8",
"metadata": {},
"source": [
"If we need to make sure that our first message (excluding the system message) is always of a specific type, we can specify `start_on`:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "878a730b-fe44-4e9d-ab65-7b8f7b069de8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[SystemMessage(content=\"you're a good assistant, you always respond with a joke.\"),\n",
" HumanMessage(content='what do you call a speechless parrot')]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"trim_messages(\n",
" messages,\n",
" max_tokens=60,\n",
" strategy=\"last\",\n",
" token_counter=ChatOpenAI(model=\"gpt-4o\"),\n",
" include_system=True,\n",
" start_on=\"human\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "7f5d391d-235b-4091-b2de-c22866b478f3",
"metadata": {},
"source": [
"## Getting the first `max_tokens` tokens\n",
"\n",
"We can perform the flipped operation of getting the *first* `max_tokens` by specifying `strategy=\"first\"`:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "5f56ae54-1a39-4019-9351-3b494c003d5b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[SystemMessage(content=\"you're a good assistant, you always respond with a joke.\"),\n",
" HumanMessage(content=\"i wonder why it's called langchain\")]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"trim_messages(\n",
" messages,\n",
" max_tokens=45,\n",
" strategy=\"first\",\n",
" token_counter=ChatOpenAI(model=\"gpt-4o\"),\n",
")"
]
},
{
"cell_type": "markdown",
"id": "ab70bf70-1e5a-4d51-b9b8-a823bf2cf532",
"metadata": {},
"source": [
"## Writing a custom token counter\n",
"\n",
"We can write a custom token counter function that takes in a list of messages and returns an int."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "1c1c3b1e-2ece-49e7-a3b6-e69877c1633b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[AIMessage(content=\"Hmmm let me think.\\n\\nWhy, he's probably chasing after the last cup of coffee in the office!\"),\n",
" HumanMessage(content='what do you call a speechless parrot')]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import List\n",
"\n",
"# pip install tiktoken\n",
"import tiktoken\n",
"from langchain_core.messages import BaseMessage, ToolMessage\n",
"\n",
"\n",
"def str_token_counter(text: str) -> int:\n",
" enc = tiktoken.get_encoding(\"o200k_base\")\n",
" return len(enc.encode(text))\n",
"\n",
"\n",
"def tiktoken_counter(messages: List[BaseMessage]) -> int:\n",
" \"\"\"Approximately reproduce https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb\n",
"\n",
" For simplicity only supports str Message.contents.\n",
" \"\"\"\n",
" num_tokens = 3 # every reply is primed with <|start|>assistant<|message|>\n",
" tokens_per_message = 3\n",
" tokens_per_name = 1\n",
" for msg in messages:\n",
" if isinstance(msg, HumanMessage):\n",
" role = \"user\"\n",
" elif isinstance(msg, AIMessage):\n",
" role = \"assistant\"\n",
" elif isinstance(msg, ToolMessage):\n",
" role = \"tool\"\n",
" elif isinstance(msg, SystemMessage):\n",
" role = \"system\"\n",
" else:\n",
" raise ValueError(f\"Unsupported messages type {msg.__class__}\")\n",
" num_tokens += (\n",
" tokens_per_message\n",
" + str_token_counter(role)\n",
" + str_token_counter(msg.content)\n",
" )\n",
" if msg.name:\n",
" num_tokens += tokens_per_name + str_token_counter(msg.name)\n",
" return num_tokens\n",
"\n",
"\n",
"trim_messages(\n",
" messages,\n",
" max_tokens=45,\n",
" strategy=\"last\",\n",
" token_counter=tiktoken_counter,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "4b2a672b-c007-47c5-9105-617944dc0a6a",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"`trim_messages` can be used in an imperatively (like above) or declaratively, making it easy to compose with other components in a chain"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "96aa29b2-01e0-437c-a1ab-02fb0141cb57",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='A \"polygon\"! Because it\\'s a \"poly-gone\" silent!', response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 32, 'total_tokens': 46}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_319be4768e', 'finish_reason': 'stop', 'logprobs': None}, id='run-64cc4575-14d1-4f3f-b4af-97f24758f703-0', usage_metadata={'input_tokens': 32, 'output_tokens': 14, 'total_tokens': 46})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm = ChatOpenAI(model=\"gpt-4o\")\n",
"\n",
"# Notice we don't pass in messages. This creates\n",
"# a RunnableLambda that takes messages as input\n",
"trimmer = trim_messages(\n",
" max_tokens=45,\n",
" strategy=\"last\",\n",
" token_counter=llm,\n",
" include_system=True,\n",
")\n",
"\n",
"chain = trimmer | llm\n",
"chain.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "4d91d390-e7f7-467b-ad87-d100411d7a21",
"metadata": {},
"source": [
"Looking at the LangSmith trace we can see that before the messages are passed to the model they are first trimmed: https://smith.langchain.com/public/65af12c4-c24d-4824-90f0-6547566e59bb/r\n",
"\n",
"Looking at just the trimmer, we can see that it's a Runnable object that can be invoked like all Runnables:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "1ff02d0a-353d-4fac-a77c-7c2c5262abd9",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[SystemMessage(content=\"you're a good assistant, you always respond with a joke.\"),\n",
" HumanMessage(content='what do you call a speechless parrot')]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"trimmer.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "dc4720c8-4062-4ebc-9385-58411202ce6e",
"metadata": {},
"source": [
"## Using with ChatMessageHistory\n",
"\n",
"Trimming messages is especially useful when [working with chat histories](/docs/how_to/message_history/), which can get arbitrarily long:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "a9517858-fc2f-4dc3-898d-bf98a0e905a0",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Parent run c87e2f1b-81ad-4fa7-bfd9-ce6edb29a482 not found for run 7892ee8f-0669-4d6b-a2ca-ef8aae81042a. Treating as a root run.\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\"A polygon! Because it's a parrot gone quiet!\", response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 32, 'total_tokens': 43}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_319be4768e', 'finish_reason': 'stop', 'logprobs': None}, id='run-72dad96e-8b58-45f4-8c08-21f9f1a6b68f-0', usage_metadata={'input_tokens': 32, 'output_tokens': 11, 'total_tokens': 43})"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.chat_history import InMemoryChatMessageHistory\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"\n",
"chat_history = InMemoryChatMessageHistory(messages=messages[:-1])\n",
"\n",
"\n",
"def dummy_get_session_history(session_id):\n",
" if session_id != \"1\":\n",
" raise InMemoryChatMessageHistory()\n",
" return chat_history\n",
"\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o\")\n",
"\n",
"trimmer = trim_messages(\n",
" max_tokens=45,\n",
" strategy=\"last\",\n",
" token_counter=llm,\n",
" include_system=True,\n",
")\n",
"\n",
"chain = trimmer | llm\n",
"chain_with_history = RunnableWithMessageHistory(chain, dummy_get_session_history)\n",
"chain_with_history.invoke(\n",
" [HumanMessage(\"what do you call a speechless parrot\")],\n",
" config={\"configurable\": {\"session_id\": \"1\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "556b7b4c-43cb-41de-94fc-1a41f4ec4d2e",
"metadata": {},
"source": [
"Looking at the LangSmith trace we can see that we retrieve all of our messages but before the messages are passed to the model they are trimmed to be just the system message and last human message: https://smith.langchain.com/public/17dd700b-9994-44ca-930c-116e00997315/r"
]
},
{
"cell_type": "markdown",
"id": "75dc7b84-b92f-44e7-8beb-ba22398e4efb",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For a complete description of all arguments head to the API reference: https://api.python.langchain.com/en/latest/messages/langchain_core.messages.utils.trim_messages.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"language": "python",
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,245 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Upstash Ratelimit Callback\n",
"\n",
"In this guide, we will go over how to add rate limiting based on number of requests or the number of tokens using `UpstashRatelimitHandler`. This handler uses [ratelimit library of Upstash](https://github.com/upstash/ratelimit-py/), which utilizes [Upstash Redis](https://upstash.com/docs/redis/overall/getstarted).\n",
"\n",
"Upstash Ratelimit works by sending an HTTP request to Upstash Redis everytime the `limit` method is called. Remaining tokens/requests of the user are checked and updated. Based on the remaining tokens, we can stop the execution of costly operations like invoking an LLM or querying a vector store:\n",
"\n",
"```py\n",
"response = ratelimit.limit()\n",
"if response.allowed:\n",
" execute_costly_operation()\n",
"```\n",
"\n",
"`UpstashRatelimitHandler` allows you to incorporate the ratelimit logic into your chain in a few minutes.\n",
"\n",
"First, you will need to go to [the Upstash Console](https://console.upstash.com/login) and create a redis database ([see our docs](https://upstash.com/docs/redis/overall/getstarted)). After creating a database, you will need to set the environment variables:\n",
"\n",
"```\n",
"UPSTASH_REDIS_REST_URL=\"****\"\n",
"UPSTASH_REDIS_REST_TOKEN=\"****\"\n",
"```\n",
"\n",
"Next, you will need to install Upstash Ratelimit and Redis library with:\n",
"\n",
"```\n",
"pip install upstash-ratelimit upstash-redis\n",
"```\n",
"\n",
"You are now ready to add rate limiting to your chain!\n",
"\n",
"## Ratelimiting Per Request\n",
"\n",
"Let's imagine that we want to allow our users to invoke our chain 10 times per minute. Achieving this is as simple as:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Error in UpstashRatelimitHandler.on_chain_start callback: UpstashRatelimitError('Request limit reached!')\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Handling ratelimit. <class 'langchain_community.callbacks.upstash_ratelimit_callback.UpstashRatelimitError'>\n"
]
}
],
"source": [
"# set env variables\n",
"import os\n",
"\n",
"os.environ[\"UPSTASH_REDIS_REST_URL\"] = \"****\"\n",
"os.environ[\"UPSTASH_REDIS_REST_TOKEN\"] = \"****\"\n",
"\n",
"from langchain_community.callbacks import UpstashRatelimitError, UpstashRatelimitHandler\n",
"from langchain_core.runnables import RunnableLambda\n",
"from upstash_ratelimit import FixedWindow, Ratelimit\n",
"from upstash_redis import Redis\n",
"\n",
"# create ratelimit\n",
"ratelimit = Ratelimit(\n",
" redis=Redis.from_env(),\n",
" # 10 requests per window, where window size is 60 seconds:\n",
" limiter=FixedWindow(max_requests=10, window=60),\n",
")\n",
"\n",
"# create handler\n",
"user_id = \"user_id\" # should be a method which gets the user id\n",
"handler = UpstashRatelimitHandler(identifier=user_id, request_ratelimit=ratelimit)\n",
"\n",
"# create mock chain\n",
"chain = RunnableLambda(str)\n",
"\n",
"# invoke chain with handler:\n",
"try:\n",
" result = chain.invoke(\"Hello world!\", config={\"callbacks\": [handler]})\n",
"except UpstashRatelimitError:\n",
" print(\"Handling ratelimit.\", UpstashRatelimitError)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that we pass the handler to the `invoke` method instead of passing the handler when defining the chain.\n",
"\n",
"For rate limiting algorithms other than `FixedWindow`, see [upstash-ratelimit docs](https://github.com/upstash/ratelimit-py?tab=readme-ov-file#ratelimiting-algorithms).\n",
"\n",
"Before executing any steps in our pipeline, ratelimit will check whether the user has passed the request limit. If so, `UpstashRatelimitError` is raised.\n",
"\n",
"## Ratelimiting Per Token\n",
"\n",
"Another option is to rate limit chain invokations based on:\n",
"1. number of tokens in prompt\n",
"2. number of tokens in prompt and LLM completion\n",
"\n",
"This only works if you have an LLM in your chain. Another requirement is that the LLM you are using should return the token usage in it's `LLMOutput`.\n",
"\n",
"### How it works\n",
"\n",
"The handler will get the remaining tokens before calling the LLM. If the remaining tokens is more than 0, LLM will be called. Otherwise `UpstashRatelimitError` will be raised.\n",
"\n",
"After LLM is called, token usage information will be used to subtracted from the remaining tokens of the user. No error is raised at this stage of the chain.\n",
"\n",
"### Configuration\n",
"\n",
"For the first configuration, simply initialize the handler like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ratelimit = Ratelimit(\n",
" redis=Redis.from_env(),\n",
" # 1000 tokens per window, where window size is 60 seconds:\n",
" limiter=FixedWindow(max_requests=1000, window=60),\n",
")\n",
"\n",
"handler = UpstashRatelimitHandler(identifier=user_id, token_ratelimit=ratelimit)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For the second configuration, here is how to initialize the handler:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ratelimit = Ratelimit(\n",
" redis=Redis.from_env(),\n",
" # 1000 tokens per window, where window size is 60 seconds:\n",
" limiter=FixedWindow(max_requests=1000, window=60),\n",
")\n",
"\n",
"handler = UpstashRatelimitHandler(\n",
" identifier=user_id,\n",
" token_ratelimit=ratelimit,\n",
" include_output_tokens=True, # set to True\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also employ ratelimiting based on requests and tokens at the same time, simply by passing both `request_ratelimit` and `token_ratelimit` parameters.\n",
"\n",
"Here is an example with a chain utilizing an LLM:"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Error in UpstashRatelimitHandler.on_llm_start callback: UpstashRatelimitError('Token limit reached!')\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Handling ratelimit. <class 'langchain_community.callbacks.upstash_ratelimit_callback.UpstashRatelimitError'>\n"
]
}
],
"source": [
"# set env variables\n",
"import os\n",
"\n",
"os.environ[\"UPSTASH_REDIS_REST_URL\"] = \"****\"\n",
"os.environ[\"UPSTASH_REDIS_REST_TOKEN\"] = \"****\"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"****\"\n",
"\n",
"from langchain_community.callbacks import UpstashRatelimitError, UpstashRatelimitHandler\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain_openai import ChatOpenAI\n",
"from upstash_ratelimit import FixedWindow, Ratelimit\n",
"from upstash_redis import Redis\n",
"\n",
"# create ratelimit\n",
"ratelimit = Ratelimit(\n",
" redis=Redis.from_env(),\n",
" # 500 tokens per window, where window size is 60 seconds:\n",
" limiter=FixedWindow(max_requests=500, window=60),\n",
")\n",
"\n",
"# create handler\n",
"user_id = \"user_id\" # should be a method which gets the user id\n",
"handler = UpstashRatelimitHandler(identifier=user_id, token_ratelimit=ratelimit)\n",
"\n",
"# create mock chain\n",
"as_str = RunnableLambda(str)\n",
"model = ChatOpenAI()\n",
"\n",
"chain = as_str | model\n",
"\n",
"# invoke chain with handler:\n",
"try:\n",
" result = chain.invoke(\"Hello world!\", config={\"callbacks\": [handler]})\n",
"except UpstashRatelimitError:\n",
" print(\"Handling ratelimit.\", UpstashRatelimitError)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "lc39",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.9.19"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -51,7 +51,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
@@ -59,7 +59,7 @@
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"anthropic_API_KEY\"] = getpass.getpass(\"Enter your Anthropic API key: \")"
"os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass(\"Enter your Anthropic API key: \")"
]
},
{
@@ -72,7 +72,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
@@ -113,7 +113,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 4,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
@@ -121,7 +121,7 @@
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(\n",
" model=\"claude-3-sonnet-20240229\",\n",
" model=\"claude-3-5-sonnet-20240620\",\n",
" temperature=0,\n",
" max_tokens=1024,\n",
" timeout=None,\n",
@@ -140,7 +140,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 5,
"id": "62e0dbc3",
"metadata": {
"tags": []
@@ -149,10 +149,10 @@
{
"data": {
"text/plain": [
"AIMessage(content=\"Voici la traduction en français :\\n\\nJ'aime la programmation.\", response_metadata={'id': 'msg_013qztabaFADNnKsHR1rdrju', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 29, 'output_tokens': 21}}, id='run-a22ab30c-7e09-48f5-bc27-a08a9d8f7fa1-0', usage_metadata={'input_tokens': 29, 'output_tokens': 21, 'total_tokens': 50})"
"AIMessage(content=\"J'adore la programmation.\", response_metadata={'id': 'msg_018Nnu76krRPq8HvgKLW4F8T', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 29, 'output_tokens': 11}}, id='run-57e9295f-db8a-48dc-9619-babd2bedd891-0', usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40})"
]
},
"execution_count": 2,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -171,7 +171,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 6,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
@@ -179,9 +179,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Voici la traduction en français :\n",
"\n",
"J'aime la programmation.\n"
"J'adore la programmation.\n"
]
}
],
@@ -201,17 +199,17 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 7,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmieren.', response_metadata={'id': 'msg_01FWrA8w9HbjqYPTQ7VryUnp', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 23, 'output_tokens': 11}}, id='run-b749bf20-b46d-4d62-ac73-f59adab6dd7e-0', usage_metadata={'input_tokens': 23, 'output_tokens': 11, 'total_tokens': 34})"
"AIMessage(content=\"Here's the German translation:\\n\\nIch liebe Programmieren.\", response_metadata={'id': 'msg_01GhkRtQZUkA5Ge9hqmD8HGY', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 23, 'output_tokens': 18}}, id='run-da5906b4-b200-4e08-b81a-64d4453643b6-0', usage_metadata={'input_tokens': 23, 'output_tokens': 18, 'total_tokens': 41})"
]
},
"execution_count": 4,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -251,22 +249,26 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 8,
"id": "4a374a24-2534-4e6f-825b-30fab7bbe0cb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'text': \"Okay, let's use the GetWeather tool to check the current temperatures in Los Angeles and New York City.\",\n",
"[{'text': \"To answer this question, we'll need to check the current weather in both Los Angeles (LA) and New York (NY). I'll use the GetWeather function to retrieve this information for both cities.\",\n",
" 'type': 'text'},\n",
" {'id': 'toolu_01Tnp5tL7LJZaVyQXKEjbqcC',\n",
" {'id': 'toolu_01Ddzj5PkuZkrjF4tafzu54A',\n",
" 'input': {'location': 'Los Angeles, CA'},\n",
" 'name': 'GetWeather',\n",
" 'type': 'tool_use'},\n",
" {'id': 'toolu_012kz4qHZQqD4qg8sFPeKqpP',\n",
" 'input': {'location': 'New York, NY'},\n",
" 'name': 'GetWeather',\n",
" 'type': 'tool_use'}]"
]
},
"execution_count": 10,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@@ -288,7 +290,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 9,
"id": "6b4a1ead-952c-489f-a8d4-355d3fb55f3f",
"metadata": {},
"outputs": [
@@ -297,10 +299,13 @@
"text/plain": [
"[{'name': 'GetWeather',\n",
" 'args': {'location': 'Los Angeles, CA'},\n",
" 'id': 'toolu_01Tnp5tL7LJZaVyQXKEjbqcC'}]"
" 'id': 'toolu_01Ddzj5PkuZkrjF4tafzu54A'},\n",
" {'name': 'GetWeather',\n",
" 'args': {'location': 'New York, NY'},\n",
" 'id': 'toolu_012kz4qHZQqD4qg8sFPeKqpP'}]"
]
},
"execution_count": 11,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
@@ -336,7 +341,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.5"
}
},
"nbformat": 4,


@@ -23,13 +23,11 @@
]
},
{
"cell_type": "raw",
"cell_type": "code",
"execution_count": null,
"id": "d83ba7de",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai"
]


@@ -201,7 +201,7 @@
"source": [
"## Chaining\n",
"\n",
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language)"
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
]
},
{


@@ -0,0 +1,429 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Databricks\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ChatDatabricks\n",
"\n",
"> [Databricks](https://www.databricks.com/) Lakehouse Platform unifies data, analytics, and AI on one platform. \n",
"\n",
"This notebook provides a quick overview for getting started with Databricks [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatDatabricks features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.databricks.ChatDatabricks.html).\n",
"\n",
"## Overview\n",
"\n",
"`ChatDatabricks` class wraps a chat model endpoint hosted on [Databricks Model Serving](https://docs.databricks.com/en/machine-learning/model-serving/index.html). This example notebook shows how to wrap your serving endpoint and use it as a chat model in your LangChain application.\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [ChatDatabricks](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.databricks.ChatDatabricks.html) | [langchain-community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ❌ | beta | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-community?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"### Supported Methods\n",
"\n",
"`ChatDatabricks` supports all methods of `ChatModel` including async APIs.\n",
"\n",
"\n",
"### Endpoint Requirement\n",
"\n",
"The serving endpoint `ChatDatabricks` wraps must have OpenAI-compatible chat input/output format ([reference](https://mlflow.org/docs/latest/llms/deployments/index.html#chat)). As long as the input format is compatible, `ChatDatabricks` can be used for any endpoint type hosted on [Databricks Model Serving](https://docs.databricks.com/en/machine-learning/model-serving/index.html):\n",
"\n",
"1. Foundation Models - Curated list of state-of-the-art foundation models such as DRBX, Llama3, Mixtral-8x7B, and etc. These endpoint are ready to use in your Databricks workspace without any set up.\n",
"2. Custom Models - You can also deploy custom models to a serving endpoint via MLflow with\n",
"your choice of framework such as LangChain, Pytorch, Transformers, etc.\n",
"3. External Models - Databricks endpoints can serve models that are hosted outside Databricks as a proxy, such as proprietary model service like OpenAI GPT4.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"source": [
"## Setup\n",
"\n",
"To access Databricks models you'll need to create a Databricks account, set up credentials (only if you are outside Databricks workspace), and install required packages.\n",
"\n",
"### Credentials (only if you are outside Databricks)\n",
"\n",
"If you are running LangChain app inside Databricks, you can skip this step.\n",
"\n",
"Otherwise, you need manually set the Databricks workspace hostname and personal access token to `DATABRICKS_HOST` and `DATABRICKS_TOKEN` environment variables, respectively. See [Authentication Documentation](https://docs.databricks.com/en/dev-tools/auth/index.html#databricks-personal-access-tokens) for how to get an access token."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Enter your Databricks access token: ········\n"
]
}
],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"DATABRICKS_HOST\"] = \"https://your-workspace.cloud.databricks.com\"\n",
"os.environ[\"DATABRICKS_TOKEN\"] = getpass.getpass(\"Enter your Databricks access token: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Databricks integration lives in the `langchain-community` package. Also, `mlflow >= 2.9 ` is required to run the code in this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community mlflow>=2.9.0"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We first demonstrates how to query DBRX-instruct model hosted as Foundation Models endpoint with `ChatDatabricks`.\n",
"\n",
"For other type of endpoints, there are some difference in how to set up the endpoint itself, however, once the endpoint is ready, there is no difference in how to query it with `ChatDatabricks`. Please refer to the bottom of this notebook for the examples with other type of endpoints."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatDatabricks\n",
"\n",
"chat_model = ChatDatabricks(\n",
" endpoint=\"databricks-dbrx-instruct\",\n",
" temperature=0.1,\n",
" max_tokens=256,\n",
" # See https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.databricks.ChatDatabricks.html for other supported parameters\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='MLflow is an open-source platform for managing end-to-end machine learning workflows. It was introduced by Databricks in 2018. MLflow provides tools for tracking experiments, packaging and sharing code, and deploying models. It is designed to work with any machine learning library and can be used in a variety of environments, including local machines, virtual machines, and cloud-based clusters. MLflow aims to streamline the machine learning development lifecycle, making it easier for data scientists and engineers to collaborate and deploy models into production.', response_metadata={'prompt_tokens': 229, 'completion_tokens': 104, 'total_tokens': 333}, id='run-d3fb4d06-3e10-4471-83c9-c282cc62b74d-0')"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat_model.invoke(\"What is MLflow?\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Databricks Model Serving is a feature of the Databricks platform that allows data scientists and engineers to easily deploy machine learning models into production. With Model Serving, you can host, manage, and serve machine learning models as APIs, making it easy to integrate them into applications and business processes. It supports a variety of popular machine learning frameworks, including TensorFlow, PyTorch, and scikit-learn, and provides tools for monitoring and managing the performance of deployed models. Model Serving is designed to be scalable, secure, and easy to use, making it a great choice for organizations that want to quickly and efficiently deploy machine learning models into production.', response_metadata={'prompt_tokens': 35, 'completion_tokens': 130, 'total_tokens': 165}, id='run-b3feea21-223e-4105-8627-41d647d5ccab-0')"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# You can also pass a list of messages\n",
"messages = [\n",
" (\"system\", \"You are a chatbot that can answer questions about Databricks.\"),\n",
" (\"user\", \"What is Databricks Model Serving?\"),\n",
"]\n",
"chat_model.invoke(messages)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chaining\n",
"Similar to other chat models, `ChatDatabricks` can be used as a part of a complex chain."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Unity Catalog is a new data catalog feature in Databricks that allows you to discover, manage, and govern all your data assets across your data landscape, including data lakes, data warehouses, and data marts. It provides a centralized repository for storing and managing metadata, data lineage, and access controls for all your data assets. Unity Catalog enables data teams to easily discover and access the data they need, while ensuring compliance with data privacy and security regulations. It is designed to work seamlessly with Databricks' Lakehouse platform, providing a unified experience for managing and analyzing all your data.\", response_metadata={'prompt_tokens': 32, 'completion_tokens': 118, 'total_tokens': 150}, id='run-82d72624-f8df-4c0d-a976-919feec09a55-0')"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a chatbot that can answer questions about {topic}.\",\n",
" ),\n",
" (\"user\", \"{question}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | chat_model\n",
"chain.invoke(\n",
" {\n",
" \"topic\": \"Databricks\",\n",
" \"question\": \"What is Unity Catalog?\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Invocation (streaming)\n",
"\n",
"`ChatDatabricks` supports streaming response by `stream` method since `langchain-community>=0.2.1`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"I|'m| an| AI| and| don|'t| have| feelings|,| but| I|'m| here| and| ready| to| assist| you|.| How| can| I| help| you| today|?||"
]
}
],
"source": [
"for chunk in chat_model.stream(\"How are you?\"):\n",
" print(chunk.content, end=\"|\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Async Invocation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import asyncio\n",
"\n",
"country = [\"Japan\", \"Italy\", \"Australia\"]\n",
"futures = [chat_model.ainvoke(f\"Where is the capital of {c}?\") for c in country]\n",
"await asyncio.gather(*futures)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Wrapping Custom Model Endpoint\n",
"\n",
"Prerequisites:\n",
"\n",
"* An LLM was registered and deployed to [a Databricks serving endpoint](https://docs.databricks.com/machine-learning/model-serving/index.html) via MLflow. The endpoint must have OpenAI-compatible chat input/output format ([reference](https://mlflow.org/docs/latest/llms/deployments/index.html#chat))\n",
"* You have [\"Can Query\" permission](https://docs.databricks.com/security/auth-authz/access-control/serving-endpoint-acl.html) to the endpoint.\n",
"\n",
"Once the endpoint is ready, the usage pattern is completely same as Foundation Models."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chat_model_custom = ChatDatabricks(\n",
" endpoint=\"YOUR_ENDPOINT_NAME\",\n",
" temperature=0.1,\n",
" max_tokens=256,\n",
")\n",
"\n",
"chat_model_custom.invoke(\"How are you?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Wrapping External Models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Prerequisite: Create Proxy Endpoint\n",
"\n",
"First, create a new Databricks serving endpoint that proxies requests to the target external model. The endpoint creation should be fairy quick for proxying external models.\n",
"\n",
"This requires registering OpenAI API Key in Databricks secret manager with the following comment:\n",
"```sh\n",
"# Replace `<scope>` with your scope\n",
"databricks secrets create-scope <scope>\n",
"databricks secrets put-secret <scope> openai-api-key --string-value $OPENAI_API_KEY\n",
"```\n",
"\n",
"For how to set up Databricks CLI and manage secrets, please refer to https://docs.databricks.com/en/security/secrets/secrets.html"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from mlflow.deployments import get_deploy_client\n",
"\n",
"client = get_deploy_client(\"databricks\")\n",
"\n",
"secret = \"secrets/<scope>/openai-api-key\" # replace `<scope>` with your scope\n",
"endpoint_name = \"my-chat\" # rename this if my-chat already exists\n",
"client.create_endpoint(\n",
" name=endpoint_name,\n",
" config={\n",
" \"served_entities\": [\n",
" {\n",
" \"name\": \"my-chat\",\n",
" \"external_model\": {\n",
" \"name\": \"gpt-3.5-turbo\",\n",
" \"provider\": \"openai\",\n",
" \"task\": \"llm/v1/chat\",\n",
" \"openai_config\": {\n",
" \"openai_api_key\": \"{{\" + secret + \"}}\",\n",
" },\n",
" },\n",
" }\n",
" ],\n",
" },\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once the endpoint status has become \"Ready\", you can query the endpoint in the same way as other types of endpoints."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chat_model_external = ChatDatabricks(\n",
" endpoint=endpoint_name,\n",
" temperature=0.1,\n",
" max_tokens=256,\n",
")\n",
"chat_model_external.invoke(\"How to use Databricks?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatDatabricks features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.ChatDatabricks.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}


@@ -98,6 +98,78 @@
")\n",
"chat.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "466c3cb41ace1410",
"metadata": {},
"source": [
"# Tool Calling\n",
"\n",
"DeepInfra currently supports only invoke and async invoke tool calling.\n",
"\n",
"For a complete list of models that support tool calling, please refer to our [tool calling documentation](https://deepinfra.com/docs/advanced/function_calling)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ddc4f4299763651c",
"metadata": {},
"outputs": [],
"source": [
"import asyncio\n",
"\n",
"from dotenv import find_dotenv, load_dotenv\n",
"from langchain_community.chat_models import ChatDeepInfra\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_core.pydantic_v1 import BaseModel\n",
"from langchain_core.tools import tool\n",
"\n",
"model_name = \"meta-llama/Meta-Llama-3-70B-Instruct\"\n",
"\n",
"_ = load_dotenv(find_dotenv())\n",
"\n",
"\n",
"# Langchain tool\n",
"@tool\n",
"def foo(something):\n",
" \"\"\"\n",
" Called when foo\n",
" \"\"\"\n",
" pass\n",
"\n",
"\n",
"# Pydantic class\n",
"class Bar(BaseModel):\n",
" \"\"\"\n",
" Called when Bar\n",
" \"\"\"\n",
"\n",
" pass\n",
"\n",
"\n",
"llm = ChatDeepInfra(model=model_name)\n",
"tools = [foo, Bar]\n",
"llm_with_tools = llm.bind_tools(tools)\n",
"messages = [\n",
" HumanMessage(\"Foo and bar, please.\"),\n",
"]\n",
"\n",
"response = llm_with_tools.invoke(messages)\n",
"print(response.tool_calls)\n",
"# [{'name': 'foo', 'args': {'something': None}, 'id': 'call_Mi4N4wAtW89OlbizFE1aDxDj'}, {'name': 'Bar', 'args': {}, 'id': 'call_daiE0mW454j2O1KVbmET4s2r'}]\n",
"\n",
"\n",
"async def call_ainvoke():\n",
" result = await llm_with_tools.ainvoke(messages)\n",
" print(result.tool_calls)\n",
"\n",
"\n",
"# Async call\n",
"asyncio.run(call_ainvoke())\n",
"# [{'name': 'foo', 'args': {'something': None}, 'id': 'call_ZH7FetmgSot4LHcMU6CEb8tI'}, {'name': 'Bar', 'args': {}, 'id': 'call_2MQhDifAJVoijZEvH8PeFSVB'}]"
]
}
],
"metadata": {


@@ -147,7 +147,7 @@
"source": [
"# Tool Calling\n",
"\n",
"Fireworks offers the [`FireFunction-v1` tool calling model](https://fireworks.ai/blog/firefunction-v1-gpt-4-level-function-calling). You can use it for structured output and function calling use cases:"
"Fireworks offers the `FireFunction-v2` tool calling model. You can use it for structured output and function calling use cases:"
]
},
{
@@ -180,7 +180,7 @@
"\n",
"\n",
"chat = ChatFireworks(\n",
" model=\"accounts/fireworks/models/firefunction-v1\",\n",
" model=\"accounts/fireworks/models/firefunction-v2\",\n",
").bind_tools([ExtractFields])\n",
"\n",
"result = chat.invoke(\"I am a 27 year old named Erick\")\n",


@@ -2,33 +2,50 @@
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Google Cloud Vertex AI\n",
"keywords: [gemini, vertex, ChatVertexAI, gemini-pro]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatVertexAI\n",
"\n",
"Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
"This page provides a quick overview for getting started with VertexAI [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatVertexAI features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html).\n",
"\n",
"ChatVertexAI exposes all foundational models available in Google Cloud:\n",
"ChatVertexAI exposes all foundational models available in Google Cloud, like `gemini-1.5-pro`, `gemini-1.5-flash`, etc. For a full and updated list of available models visit [VertexAI documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/overview).\n",
"\n",
"- Gemini (`gemini-pro` and `gemini-pro-vision`)\n",
"- PaLM 2 for Text (`text-bison`)\n",
"- Codey for Code Generation (`codechat-bison`)\n",
":::info Google Cloud VertexAI vs Google PaLM\n",
"\n",
"For a full and updated list of available models visit [VertexAI documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/overview).\n",
"The Google Cloud VertexAI integration is separate from the [Google PaLM integration](/docs/integrations/chat/google_generative_ai/). Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
"\n",
"By default, Google Cloud [does not use](https://cloud.google.com/vertex-ai/docs/generative-ai/data-governance#foundation_model_development) customer data to train its foundation models as part of Google Cloud`s AI/ML Privacy Commitment. More details about how Google processes data can also be found in [Google's Customer Data Processing Addendum (CDPA)](https://cloud.google.com/terms/data-processing-addendum).\n",
":::\n",
"\n",
"To use `Google Cloud Vertex AI` PaLM you must have the `langchain-google-vertexai` Python package installed and either:\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/google_vertex_ai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatVertexAI](https://api.python.langchain.com/en/latest/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html) | [langchain-google-vertexai](https://api.python.langchain.com/en/latest/google_vertexai_api_reference.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-google-vertexai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-google-vertexai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access VertexAI models you'll need to create a Google Cloud Platform account, set up credentials, and install the `langchain-google-vertexai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"To use the integration you must:\n",
"- Have credentials configured for your environment (gcloud, workload identity, etc...)\n",
"- Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable\n",
"\n",
@@ -37,432 +54,156 @@
"For more information, see: \n",
"- https://cloud.google.com/docs/authentication/application-default-credentials#GAC\n",
"- https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-google-vertexai"
"\n",
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_google_vertexai import ChatVertexAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" J'aime la programmation.\")"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"system = \"You are a helpful assistant who translate English to French\"\n",
"human = \"Translate this sentence from English to French. I love programming.\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chat = ChatVertexAI()\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke({})"
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"Gemini doesn't support SystemMessage at the moment, but it can be added to the first human message in the row. If you want such behavior, just set the `convert_system_message_to_human` to `True`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'aime la programmation.\")"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"system = \"You are a helpful assistant who translate English to French\"\n",
"human = \"Translate this sentence from English to French. I love programming.\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"### Installation\n",
"\n",
"chat = ChatVertexAI(model=\"gemini-pro\", convert_system_message_to_human=True)\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke({})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we want to construct a simple chain that takes user specified parameters:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' プログラミングが大好きです')"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"system = (\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
")\n",
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chat = ChatVertexAI()\n",
"\n",
"chain = prompt | chat\n",
"\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"Japanese\",\n",
" \"text\": \"I love programming\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Code generation chat models\n",
"You can now leverage the Codey API for code chat within Vertex AI. The model available is:\n",
"- `codechat-bison`: for code assistance"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" ```python\n",
"def is_prime(n):\n",
" \"\"\"\n",
" Check if a number is prime.\n",
"\n",
" Args:\n",
" n: The number to check.\n",
"\n",
" Returns:\n",
" True if n is prime, False otherwise.\n",
" \"\"\"\n",
"\n",
" # If n is 1, it is not prime.\n",
" if n == 1:\n",
" return False\n",
"\n",
" # Iterate over all numbers from 2 to the square root of n.\n",
" for i in range(2, int(n ** 0.5) + 1):\n",
" # If n is divisible by any number from 2 to its square root, it is not prime.\n",
" if n % i == 0:\n",
" return False\n",
"\n",
" # If n is divisible by no number from 2 to its square root, it is prime.\n",
" return True\n",
"\n",
"\n",
"def find_prime_numbers(n):\n",
" \"\"\"\n",
" Find all prime numbers up to a given number.\n",
"\n",
" Args:\n",
" n: The upper bound for the prime numbers to find.\n",
"\n",
" Returns:\n",
" A list of all prime numbers up to n.\n",
" \"\"\"\n",
"\n",
" # Create a list of all numbers from 2 to n.\n",
" numbers = list(range(2, n + 1))\n",
"\n",
" # Iterate over the list of numbers and remove any that are not prime.\n",
" for number in numbers:\n",
" if not is_prime(number):\n",
" numbers.remove(number)\n",
"\n",
" # Return the list of prime numbers.\n",
" return numbers\n",
"```\n"
]
}
],
"source": [
"chat = ChatVertexAI(model=\"codechat-bison\", max_tokens=1000, temperature=0.5)\n",
"\n",
"message = chat.invoke(\"Write a Python function generating all prime numbers\")\n",
"print(message.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Full generation info\n",
"\n",
"We can use the `generate` method to get back extra metadata like [safety attributes](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/responsible-ai#safety_attribute_confidence_scoring) and not just chat completions\n",
"\n",
"Note that the `generation_info` will be different depending if you're using a gemini model or not.\n",
"\n",
"### Gemini model\n",
"\n",
"`generation_info` will include:\n",
"\n",
"- `is_blocked`: whether generation was blocked or not\n",
"- `safety_ratings`: safety ratings' categories and probability labels"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from pprint import pprint\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_google_vertexai import HarmBlockThreshold, HarmCategory"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'citation_metadata': None,\n",
" 'is_blocked': False,\n",
" 'safety_ratings': [{'blocked': False,\n",
" 'category': 'HARM_CATEGORY_HATE_SPEECH',\n",
" 'probability_label': 'NEGLIGIBLE'},\n",
" {'blocked': False,\n",
" 'category': 'HARM_CATEGORY_DANGEROUS_CONTENT',\n",
" 'probability_label': 'NEGLIGIBLE'},\n",
" {'blocked': False,\n",
" 'category': 'HARM_CATEGORY_HARASSMENT',\n",
" 'probability_label': 'NEGLIGIBLE'},\n",
" {'blocked': False,\n",
" 'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT',\n",
" 'probability_label': 'NEGLIGIBLE'}],\n",
" 'usage_metadata': {'candidates_token_count': 6,\n",
" 'prompt_token_count': 12,\n",
" 'total_token_count': 18}}\n"
]
}
],
"source": [
"human = \"Translate this sentence from English to French. I love programming.\"\n",
"messages = [HumanMessage(content=human)]\n",
"\n",
"\n",
"chat = ChatVertexAI(\n",
" model_name=\"gemini-pro\",\n",
" safety_settings={\n",
" HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE\n",
" },\n",
")\n",
"\n",
"result = chat.generate([messages])\n",
"pprint(result.generations[0][0].generation_info)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Non-gemini model\n",
"\n",
"`generation_info` will include:\n",
"\n",
"- `is_blocked`: whether generation was blocked or not\n",
"- `safety_attributes`: a dictionary mapping safety attributes to their scores"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'errors': (),\n",
" 'grounding_metadata': {'citations': [], 'search_queries': []},\n",
" 'is_blocked': False,\n",
" 'safety_attributes': [{'Derogatory': 0.1, 'Insult': 0.1, 'Sexual': 0.2}],\n",
" 'usage_metadata': {'candidates_billable_characters': 88.0,\n",
" 'candidates_token_count': 24.0,\n",
" 'prompt_billable_characters': 58.0,\n",
" 'prompt_token_count': 12.0}}\n"
]
}
],
"source": [
"chat = ChatVertexAI() # default is `chat-bison`\n",
"\n",
"result = chat.generate([messages])\n",
"pprint(result.generations[0][0].generation_info)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tool calling (a.k.a. function calling) with Gemini\n",
"\n",
"We can pass tool definitions to Gemini models to get the model to invoke those tools when appropriate. This is useful not only for LLM-powered tool use but also for getting structured outputs out of models more generally.\n",
"\n",
"With `ChatVertexAI.bind_tools()`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to a Gemini tool schema, which looks like:\n",
"```python\n",
"{\n",
" \"name\": \"...\", # tool name\n",
" \"description\": \"...\", # tool description\n",
" \"parameters\": {...} # tool input schema as JSONSchema\n",
"}\n",
"```"
"The LangChain VertexAI integration lives in the `langchain-google-vertexai` package:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'function_call': {'name': 'GetWeather', 'arguments': '{\"location\": \"San Francisco, CA\"}'}}, response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 41, 'candidates_token_count': 7, 'total_token_count': 48}}, id='run-05e760dc-0682-4286-88e1-5b23df69b083-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'San Francisco, CA'}, 'id': 'cd2499c4-4513-4059-bfff-5321b6e922d0'}])"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"from langchain.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class GetWeather(BaseModel):\n",
" \"\"\"Get the current weather in a given location\"\"\"\n",
"\n",
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"llm = ChatVertexAI(model=\"gemini-pro\", temperature=0)\n",
"llm_with_tools = llm.bind_tools([GetWeather])\n",
"ai_msg = llm_with_tools.invoke(\n",
" \"what is the weather like in San Francisco\",\n",
")\n",
"ai_msg"
"%pip install -qU langchain-google-vertexai"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"The tool calls can be access via the `AIMessage.tool_calls` attribute, where they are extracted in a model-agnostic format:"
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_vertexai import ChatVertexAI\n",
"\n",
"llm = ChatVertexAI(\n",
" model=\"gemini-1.5-flash-001\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" max_retries=6,\n",
" stop=None,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'GetWeather',\n",
" 'args': {'location': 'San Francisco, CA'},\n",
" 'id': 'cd2499c4-4513-4059-bfff-5321b6e922d0'}]"
"AIMessage(content=\"J'adore programmer. \\n\", response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'usage_metadata': {'prompt_token_count': 20, 'candidates_token_count': 7, 'total_token_count': 27}}, id='run-7032733c-d05c-4f0c-a17a-6c575fdd1ae0-0', usage_metadata={'input_tokens': 20, 'output_tokens': 7, 'total_tokens': 27})"
]
},
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ai_msg.tool_calls"
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore programmer. \n",
"\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"For a complete guide on tool calling [head here](/docs/how_to/function_calling)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Structured outputs\n",
"## Chaining\n",
"\n",
"Many applications require structured model outputs. Tool calling makes it much easier to do this reliably. The [with_structured_outputs](https://api.python.langchain.com/en/latest/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html) constructor provides a simple interface built on top of tool calling for getting structured outputs out of a model. For a complete guide on structured outputs [head here](/docs/how_to/structured_output).\n",
"\n",
"### ChatVertexAI.with_structured_outputs()\n",
"\n",
"To get structured outputs from our Gemini model all we need to do is to specify a desired schema, either as a Pydantic class or as a JSON schema, "
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Person(name='Stefan', age=13)"
"AIMessage(content='Ich liebe Programmieren. \\n', response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'usage_metadata': {'prompt_token_count': 15, 'candidates_token_count': 8, 'total_token_count': 23}}, id='run-c71955fd-8dc1-422b-88a7-853accf4811b-0', usage_metadata={'input_tokens': 15, 'output_tokens': 8, 'total_tokens': 23})"
]
},
"execution_count": 6,
@@ -471,139 +212,36 @@
}
],
"source": [
"class Person(BaseModel):\n",
" \"\"\"Save information about a person.\"\"\"\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
" name: str = Field(..., description=\"The person's name.\")\n",
" age: int = Field(..., description=\"The person's age.\")\n",
"\n",
"\n",
"structured_llm = llm.with_structured_output(Person)\n",
"structured_llm.invoke(\"Stefan is already 13 years old\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### [Legacy] Using `create_structured_runnable()`\n",
"\n",
"The legacy wasy to get structured outputs is using the `create_structured_runnable` constructor:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_vertexai import create_structured_runnable\n",
"\n",
"chain = create_structured_runnable(Person, llm)\n",
"chain.invoke(\"My name is Erick and I'm 27 years old\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Asynchronous calls\n",
"\n",
"We can make asynchronous calls via the Runnables [Async Interface](/docs/concepts#interface)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# for running these examples in the notebook:\n",
"import asyncio\n",
"\n",
"import nest_asyncio\n",
"\n",
"nest_asyncio.apply()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' अहं प्रोग्रामनं प्रेमामि')"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"system = (\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chat = ChatVertexAI(model=\"chat-bison\", max_tokens=1000, temperature=0.5)\n",
"chain = prompt | chat\n",
"\n",
"asyncio.run(\n",
" chain.ainvoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"Sanskrit\",\n",
" \"text\": \"I love programming\",\n",
" }\n",
" )\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## Streaming calls\n",
"## API reference\n",
"\n",
"We can also stream outputs via the `stream` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" The five most populous countries in the world are:\n",
"1. China (1.4 billion)\n",
"2. India (1.3 billion)\n",
"3. United States (331 million)\n",
"4. Indonesia (273 million)\n",
"5. Pakistan (220 million)"
]
}
],
"source": [
"import sys\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"human\", \"List out the 5 most populous countries in the world\")]\n",
")\n",
"\n",
"chat = ChatVertexAI()\n",
"\n",
"chain = prompt | chat\n",
"\n",
"for chunk in chain.stream({}):\n",
" sys.stdout.write(chunk.content)\n",
" sys.stdout.flush()"
"For detailed documentation of all ChatVertexAI features and configurations, like how to send multimodal inputs and configure safety settings, head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html"
]
}
],
@@ -627,5 +265,5 @@
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 5
}

View File

@@ -2,10 +2,15 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Groq\n",
"keywords: [chatgroq]\n",
"---"
]
},
@@ -15,45 +20,67 @@
"source": [
"# Groq\n",
"\n",
"Install the langchain-groq package if not already installed:\n",
"LangChain supports integration with [Groq](https://groq.com/) chat models. Groq specializes in fast AI inference.\n",
"\n",
"```bash\n",
"pip install langchain-groq\n",
"```\n",
"\n",
"Request an [API key](https://wow.groq.com) and set it as an environment variable:\n",
"\n",
"```bash\n",
"export GROQ_API_KEY=<YOUR API KEY>\n",
"```\n",
"\n",
"Alternatively, you may configure the API key when you initialize ChatGroq."
"To get started, you'll first need to install the langchain-groq package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-groq"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Import the ChatGroq class and initialize it with a model:"
"Request an [API key](https://wow.groq.com) and set it as an environment variable:\n",
"\n",
"```bash\n",
"export GROQ_API_KEY=<YOUR API KEY>\n",
"```\n",
"\n",
"Alternatively, you may configure the API key when you initialize ChatGroq.\n",
"\n",
"Here's an example of it in action:"
]
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": 8,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Low latency is crucial for Large Language Models (LLMs) because it directly impacts the user experience, model performance, and overall efficiency. Here are some reasons why low latency is essential for LLMs:\\n\\n1. **Real-time Interaction**: LLMs are often used in applications that require real-time interaction, such as chatbots, virtual assistants, and language translation. Low latency ensures that the model responds quickly to user input, providing a seamless and engaging experience.\\n2. **Conversational Flow**: In conversational AI, latency can disrupt the natural flow of conversation. Low latency helps maintain a smooth conversation, allowing users to respond quickly and naturally, without feeling like they're waiting for the model to catch up.\\n3. **Model Performance**: High latency can lead to increased error rates, as the model may struggle to keep up with the input pace. Low latency enables the model to process information more efficiently, resulting in better accuracy and performance.\\n4. **Scalability**: As the number of users and requests increases, low latency becomes even more critical. It allows the model to handle a higher volume of requests without sacrificing performance, making it more scalable and efficient.\\n5. **Resource Utilization**: Low latency can reduce the computational resources required to process requests. By minimizing latency, you can optimize resource allocation, reduce costs, and improve overall system efficiency.\\n6. **User Experience**: High latency can lead to frustration, abandonment, and a poor user experience. Low latency ensures that users receive timely responses, which is essential for building trust and satisfaction.\\n7. **Competitive Advantage**: In applications like customer service or language translation, low latency can be a key differentiator. It can provide a competitive advantage by offering a faster and more responsive experience, setting your application apart from others.\\n8. **Edge Computing**: With the increasing adoption of edge computing, low latency is critical for processing data closer to the user. This reduces latency even further, enabling real-time processing and analysis of data.\\n9. **Real-time Analytics**: Low latency enables real-time analytics and insights, which are essential for applications like sentiment analysis, trend detection, and anomaly detection.\\n10. **Future-Proofing**: As LLMs continue to evolve and become more complex, low latency will become even more critical. By prioritizing low latency now, you'll be better prepared to handle the demands of future LLM applications.\\n\\nIn summary, low latency is vital for LLMs because it ensures a seamless user experience, improves model performance, and enables efficient resource utilization. By prioritizing low latency, you can build more effective, scalable, and efficient LLM applications that meet the demands of real-time interaction and processing.\", response_metadata={'token_usage': {'completion_tokens': 541, 'prompt_tokens': 33, 'total_tokens': 574, 'completion_time': 1.499777658, 'prompt_time': 0.008344704, 'queue_time': None, 'total_time': 1.508122362}, 'model_name': 'llama3-70b-8192', 'system_fingerprint': 'fp_87cbfbbc4d', 'finish_reason': 'stop', 'logprobs': None}, id='run-49dad960-ace8-4cd7-90b3-2db99ecbfa44-0')"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_groq import ChatGroq"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"chat = ChatGroq(temperature=0, model_name=\"mixtral-8x7b-32768\")"
"from langchain_groq import ChatGroq\n",
"\n",
"chat = ChatGroq(\n",
" temperature=0,\n",
" model=\"llama3-70b-8192\",\n",
" # api_key=\"\" # Optional if not set as an environment variable\n",
")\n",
"\n",
"system = \"You are a helpful assistant.\"\n",
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke({\"text\": \"Explain the importance of low latency for LLMs.\"})"
]
},
{
@@ -62,97 +89,206 @@
"source": [
"You can view the available models [here](https://console.groq.com/docs/models).\n",
"\n",
"If you do not want to set your API key in the environment, you can pass it directly to the client:\n",
"```python\n",
"chat = ChatGroq(temperature=0, groq_api_key=\"YOUR_API_KEY\", model_name=\"mixtral-8x7b-32768\")\n",
"## Tool calling\n",
"\n",
"```"
"Groq chat models support [tool calling](/docs/how_to/tool_calling/) to generate output matching a specific schema. The model may choose to call multiple tools or the same tool multiple times if appropriate.\n",
"\n",
"Here's an example:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'get_current_weather',\n",
" 'args': {'location': 'San Francisco', 'unit': 'Celsius'},\n",
" 'id': 'call_pydj'},\n",
" {'name': 'get_current_weather',\n",
" 'args': {'location': 'Tokyo', 'unit': 'Celsius'},\n",
" 'id': 'call_jgq3'}]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import Optional\n",
"\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def get_current_weather(location: str, unit: Optional[str]):\n",
" \"\"\"Get the current weather in a given location\"\"\"\n",
" return \"Cloudy with a chance of rain.\"\n",
"\n",
"\n",
"tool_model = chat.bind_tools([get_current_weather], tool_choice=\"auto\")\n",
"\n",
"res = tool_model.invoke(\"What is the weather like in San Francisco and Tokyo?\")\n",
"\n",
"res.tool_calls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Write a prompt and invoke ChatGroq to create completions:"
"### `.with_structured_output()`\n",
"\n",
"You can also use the convenience [`.with_structured_output()`](/docs/how_to/structured_output/#the-with_structured_output-method) method to coerce `ChatGroq` into returning a structured output.\n",
"Here is an example:"
]
},
{
"cell_type": "code",
"execution_count": 29,
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Low Latency Large Language Models (LLMs) are a type of artificial intelligence model that can understand and generate human-like text. The term \"low latency\" refers to the model\\'s ability to process and respond to inputs quickly, with minimal delay.\\n\\nThe importance of low latency in LLMs can be explained through the following points:\\n\\n1. Improved user experience: In real-time applications such as chatbots, virtual assistants, and interactive games, users expect quick and responsive interactions. Low latency LLMs can provide instant feedback and responses, creating a more seamless and engaging user experience.\\n\\n2. Better decision-making: In time-sensitive scenarios, such as financial trading or autonomous vehicles, low latency LLMs can quickly process and analyze vast amounts of data, enabling faster and more informed decision-making.\\n\\n3. Enhanced accessibility: For individuals with disabilities, low latency LLMs can help create more responsive and inclusive interfaces, such as voice-controlled assistants or real-time captioning systems.\\n\\n4. Competitive advantage: In industries where real-time data analysis and decision-making are crucial, low latency LLMs can provide a competitive edge by enabling businesses to react more quickly to market changes, customer needs, or emerging opportunities.\\n\\n5. Scalability: Low latency LLMs can efficiently handle a higher volume of requests and interactions, making them more suitable for large-scale applications and services.\\n\\nIn summary, low latency is an essential aspect of LLMs, as it significantly impacts user experience, decision-making, accessibility, competitiveness, and scalability. By minimizing delays and response times, low latency LLMs can unlock new possibilities and applications for artificial intelligence in various industries and scenarios.')"
"Joke(setup='Why did the cat join a band?', punchline='Because it wanted to be the purr-cussionist!', rating=None)"
]
},
"execution_count": 29,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"system = \"You are a helpful assistant.\"\n",
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke({\"text\": \"Explain the importance of low latency LLMs.\"})"
"\n",
"class Joke(BaseModel):\n",
" \"\"\"Joke to tell user.\"\"\"\n",
"\n",
" setup: str = Field(description=\"The setup of the joke\")\n",
" punchline: str = Field(description=\"The punchline to the joke\")\n",
" rating: Optional[int] = Field(description=\"How funny the joke is, from 1 to 10\")\n",
"\n",
"\n",
"structured_llm = chat.with_structured_output(Joke)\n",
"\n",
"structured_llm.invoke(\"Tell me a joke about cats\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## `ChatGroq` also supports async and streaming functionality:"
"Behind the scenes, this takes advantage of the above tool calling functionality.\n",
"\n",
"## Async"
]
},
{
"cell_type": "code",
"execution_count": 32,
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"There's a star that shines up in the sky,\\nThe Sun, that makes the day bright and spry.\\nIt rises and sets,\\nIn a daily, predictable bet,\\nGiving life to the world, oh my!\")"
"AIMessage(content='Here is a limerick about the sun:\\n\\nThere once was a sun in the sky,\\nWhose warmth and light caught the eye,\\nIt shone bright and bold,\\nWith a fiery gold,\\nAnd brought life to all, as it flew by.', response_metadata={'token_usage': {'completion_tokens': 51, 'prompt_tokens': 18, 'total_tokens': 69, 'completion_time': 0.144614022, 'prompt_time': 0.00585394, 'queue_time': None, 'total_time': 0.150467962}, 'model_name': 'llama3-70b-8192', 'system_fingerprint': 'fp_2f30b0b571', 'finish_reason': 'stop', 'logprobs': None}, id='run-e42340ba-f0ad-4b54-af61-8308d8ec8256-0')"
]
},
"execution_count": 32,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat = ChatGroq(temperature=0, model_name=\"mixtral-8x7b-32768\")\n",
"chat = ChatGroq(temperature=0, model=\"llama3-70b-8192\")\n",
"prompt = ChatPromptTemplate.from_messages([(\"human\", \"Write a Limerick about {topic}\")])\n",
"chain = prompt | chat\n",
"await chain.ainvoke({\"topic\": \"The Sun\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Streaming"
]
},
{
"cell_type": "code",
"execution_count": 33,
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The moon's gentle glow\n",
"Illuminates the night sky\n",
"Peaceful and serene"
"Silvery glow bright\n",
"Luna's gentle light shines down\n",
"Midnight's gentle queen"
]
}
],
"source": [
"chat = ChatGroq(temperature=0, model_name=\"llama2-70b-4096\")\n",
"chat = ChatGroq(temperature=0, model=\"llama3-70b-8192\")\n",
"prompt = ChatPromptTemplate.from_messages([(\"human\", \"Write a haiku about {topic}\")])\n",
"chain = prompt | chat\n",
"for chunk in chain.stream({\"topic\": \"The Moon\"}):\n",
" print(chunk.content, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Passing custom parameters\n",
"\n",
"You can pass other Groq-specific parameters using the `model_kwargs` argument on initialization. Here's an example of enabling JSON mode:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='{ \"response\": \"That\\'s a tough question! There are eight species of bears found in the world, and each one is unique and amazing in its own way. However, if I had to pick one, I\\'d say the giant panda is a popular favorite among many people. Who can resist those adorable black and white markings?\", \"followup_question\": \"Would you like to know more about the giant panda\\'s habitat and diet?\" }', response_metadata={'token_usage': {'completion_tokens': 89, 'prompt_tokens': 50, 'total_tokens': 139, 'completion_time': 0.249032839, 'prompt_time': 0.011134497, 'queue_time': None, 'total_time': 0.260167336}, 'model_name': 'llama3-70b-8192', 'system_fingerprint': 'fp_2f30b0b571', 'finish_reason': 'stop', 'logprobs': None}, id='run-558ce67e-8c63-43fe-a48f-6ecf181bc922-0')"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat = ChatGroq(\n",
" model=\"llama3-70b-8192\", model_kwargs={\"response_format\": {\"type\": \"json_object\"}}\n",
")\n",
"\n",
"system = \"\"\"\n",
"You are a helpful assistant.\n",
"Always respond with a JSON object with two string keys: \"response\" and \"followup_question\".\n",
"\"\"\"\n",
"human = \"{question}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chain = prompt | chat\n",
"\n",
"chain.invoke({\"question\": \"what bear is best?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -171,7 +307,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -315,7 +315,11 @@
"source": [
"## 4. Take it for a spin as an agent!\n",
"\n",
"Here we'll test out `Zephyr-7B-beta` as a zero-shot `ReAct` Agent. The example below is taken from [here](https://python.langchain.com/v0.1/docs/modules/agents/agent_types/react/#using-chat-models).\n",
"Here we'll test out `Zephyr-7B-beta` as a zero-shot `ReAct` Agent. \n",
"\n",
"The agent is based on the paper [ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629)\n",
"\n",
"The example below is taken from [here](https://python.langchain.com/v0.1/docs/modules/agents/agent_types/react/#using-chat-models).\n",
"\n",
"> Note: To run this section, you'll need to have a [SerpAPI Token](https://serpapi.com/) saved as an environment variable: `SERPAPI_API_KEY`"
]
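The cell above describes running `Zephyr-7B-beta` as a zero-shot ReAct agent, but the full diff of that section is suppressed. As a rough, hedged sketch only — `chat_model` is a hypothetical stand-in for the chat model the notebook builds earlier, and the SerpAPI tool and ReAct prompt names are assumptions based on common LangChain usage rather than taken from this diff — the agent wiring could look roughly like this:

```python
# Hedged sketch of zero-shot ReAct wiring; `chat_model` is a hypothetical stand-in
# for the Zephyr-7B-beta chat model the notebook configures earlier (not shown here).
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent, load_tools

tools = load_tools(["serpapi"])       # needs SERPAPI_API_KEY in the environment
prompt = hub.pull("hwchase17/react")  # a widely used community ReAct prompt

agent = create_react_agent(chat_model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "Who won the most recent FIFA World Cup?"})
```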

File diff suppressed because one or more lines are too long

View File

@@ -225,7 +225,7 @@
"source": [
"## Chaining\n",
"\n",
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language)"
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
]
},
{

View File

@@ -134,7 +134,7 @@
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"# connect to an embedding NIM running at localhost:8000, specifying a specific model\n",
"llm = ChatNVIDIA(base_url=\"http://localhost:8000/v1\", model=\"meta-llama3-8b-instruct\")"
"llm = ChatNVIDIA(base_url=\"http://localhost:8000/v1\", model=\"meta/llama3-8b-instruct\")"
]
},
{
@@ -658,7 +658,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
"version": "3.10.2"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,190 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: OCIGenAI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatOCIGenAI\n",
"\n",
"This notebook provides a quick overview for getting started with OCIGenAI [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatOCIGenAI features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI.html).\n",
"\n",
"Oracle Cloud Infrastructure (OCI) Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models (LLMs) that cover a wide range of use cases, and which is available through a single API.\n",
"Using the OCI Generative AI service you can access ready-to-use pretrained models, or create and host your own fine-tuned custom models based on your own data on dedicated AI clusters. Detailed documentation of the service and API is available __[here](https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm)__ and __[here](https://docs.oracle.com/en-us/iaas/api/#/en/generative-ai/20231130/)__.\n",
"\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/oci_generative_ai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOCIGenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI.html) | [langchain-community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-oci-generative-ai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-oci-generative-ai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access OCIGenAI models you'll need to install the `oci` and `langchain-community` packages.\n",
"\n",
"### Credentials\n",
"\n",
"The credentials and authentication methods supported for this integration are equivalent to those used with other OCI services and follow the __[standard SDK authentication](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm)__ methods, specifically API Key, session token, instance principal, and resource principal.\n",
"\n",
"API key is the default authentication method used in the examples above. The following example demonstrates how to use a different authentication method (session token)"
]
},
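The credentials cell above mentions session-token authentication without showing it in this hunk. As a rough illustration only — the `auth_type` and `auth_profile` keyword names are assumptions based on the OCI SDK's standard authentication options and may differ in your installed version — a session-token setup might look like this:

```python
# Hedged sketch: switch ChatOCIGenAI from the default API-key auth to a session token.
# `auth_type` / `auth_profile` are assumed parameter names; check the API reference.
from langchain_community.chat_models.oci_generative_ai import ChatOCIGenAI

chat = ChatOCIGenAI(
    model_id="cohere.command-r-16k",
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
    compartment_id="MY_OCID",
    auth_type="SECURITY_TOKEN",  # use a session token instead of an API key
    auth_profile="MY_PROFILE",   # hypothetical profile name from ~/.oci/config
)
```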
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain OCIGenAI integration lives in the `langchain-community` package and you will also need to install the `oci` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community oci"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.oci_generative_ai import ChatOCIGenAI\n",
"from langchain_core.messages import AIMessage, HumanMessage, SystemMessage\n",
"\n",
"chat = ChatOCIGenAI(\n",
" model_id=\"cohere.command-r-16k\",\n",
" service_endpoint=\"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\",\n",
" compartment_id=\"MY_OCID\",\n",
" model_kwargs={\"temperature\": 0.7, \"max_tokens\": 500},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"messages = [\n",
" SystemMessage(content=\"your are an AI assistant.\"),\n",
" AIMessage(content=\"Hi there human!\"),\n",
" HumanMessage(content=\"tell me a joke.\"),\n",
"]\n",
"response = chat.invoke(messages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [],
"source": [
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"chain = prompt | chat\n",
"\n",
"response = chain.invoke({\"topic\": \"dogs\"})\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatOCIGenAI features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -238,6 +238,67 @@
"> Ideally, you do not need to connect Repository IDs here to get Retrieval Augmented Generations. You can still get the same result if you have connected the repositories in prem platform. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prem Templates\n",
"\n",
"Writing Prompt Templates can be super messy. Prompt templates are long, hard to manage, and must be continuously tweaked to improve and keep the same throughout the application. \n",
"\n",
"With **Prem**, writing and managing prompts can be super easy. The **_Templates_** tab inside the [launchpad](https://docs.premai.io/get-started/launchpad) helps you write as many prompts you need and use it inside the SDK to make your application running using those prompts. You can read more about Prompt Templates [here](https://docs.premai.io/get-started/prem-templates). \n",
"\n",
"To use Prem Templates natively with LangChain, you need to pass an id the `HumanMessage`. This id should be the name the variable of your prompt template. the `content` in `HumanMessage` should be the value of that variable. \n",
"\n",
"let's say for example, if your prompt template was this:\n",
"\n",
"```text\n",
"Say hello to my name and say a feel-good quote\n",
"from my age. My name is: {name} and age is {age}\n",
"```\n",
"\n",
"So now your human_messages should look like:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"human_messages = [\n",
" HumanMessage(content=\"Shawn\", id=\"name\"),\n",
" HumanMessage(content=\"22\", id=\"age\"),\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"Pass this `human_messages` to ChatPremAI Client. Please note: Do not forget to\n",
"pass the additional `template_id` to invoke generation with Prem Templates. If you are not aware of `template_id` you can learn more about that [in our docs](https://docs.premai.io/get-started/prem-templates). Here is an example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"template_id = \"78069ce8-xxxxx-xxxxx-xxxx-xxx\"\n",
"response = chat.invoke([human_message], template_id=template_id)\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Prem Template feature is available in streaming too. "
]
},
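Since the notebook only states that templates work with streaming, here is a minimal, hedged sketch of what that could look like, assuming `chat` is the `ChatPremAI` client from the earlier cells and that `stream` forwards the same `template_id` keyword argument as `invoke`:

```python
# Hedged sketch: streaming a Prem Template response. Assumes `stream` accepts the
# same `template_id` kwarg as `invoke`; verify against the ChatPremAI API reference.
from langchain_core.messages import HumanMessage

human_messages = [
    HumanMessage(content="Shawn", id="name"),
    HumanMessage(content="22", id="age"),
]

template_id = "78069ce8-xxxxx-xxxxx-xxxx-xxx"  # placeholder id, as in the cell above
for chunk in chat.stream(human_messages, template_id=template_id):
    print(chunk.content, end="", flush=True)
```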
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -0,0 +1,180 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Snowflake Cortex\n",
"\n",
"[Snowflake Cortex](https://docs.snowflake.com/en/user-guide/snowflake-cortex/llm-functions) gives you instant access to industry-leading large language models (LLMs) trained by researchers at companies like Mistral, Reka, Meta, and Google, including [Snowflake Arctic](https://www.snowflake.com/en/data-cloud/arctic/), an open enterprise-grade model developed by Snowflake.\n",
"\n",
"This example goes over how to use LangChain to interact with Snowflake Cortex."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation and setup\n",
"\n",
"We start by installing the `snowflake-snowpark-python` library, using the command below. Then we configure the credentials for connecting to Snowflake, as environment variables or pass them directly."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet snowflake-snowpark-python"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# First step is to set up the environment variables, to connect to Snowflake,\n",
"# you can also pass these snowflake credentials while instantiating the model\n",
"\n",
"if os.environ.get(\"SNOWFLAKE_ACCOUNT\") is None:\n",
" os.environ[\"SNOWFLAKE_ACCOUNT\"] = getpass.getpass(\"Account: \")\n",
"\n",
"if os.environ.get(\"SNOWFLAKE_USERNAME\") is None:\n",
" os.environ[\"SNOWFLAKE_USERNAME\"] = getpass.getpass(\"Username: \")\n",
"\n",
"if os.environ.get(\"SNOWFLAKE_PASSWORD\") is None:\n",
" os.environ[\"SNOWFLAKE_PASSWORD\"] = getpass.getpass(\"Password: \")\n",
"\n",
"if os.environ.get(\"SNOWFLAKE_DATABASE\") is None:\n",
" os.environ[\"SNOWFLAKE_DATABASE\"] = getpass.getpass(\"Database: \")\n",
"\n",
"if os.environ.get(\"SNOWFLAKE_SCHEMA\") is None:\n",
" os.environ[\"SNOWFLAKE_SCHEMA\"] = getpass.getpass(\"Schema: \")\n",
"\n",
"if os.environ.get(\"SNOWFLAKE_WAREHOUSE\") is None:\n",
" os.environ[\"SNOWFLAKE_WAREHOUSE\"] = getpass.getpass(\"Warehouse: \")\n",
"\n",
"if os.environ.get(\"SNOWFLAKE_ROLE\") is None:\n",
" os.environ[\"SNOWFLAKE_ROLE\"] = getpass.getpass(\"Role: \")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatSnowflakeCortex\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"\n",
"# By default, we'll be using the cortex provided model: `snowflake-arctic`, with function: `complete`\n",
"chat = ChatSnowflakeCortex()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above cell assumes that your Snowflake credentials are set in your environment variables. If you would rather manually specify them, use the following code:\n",
"\n",
"```python\n",
"chat = ChatSnowflakeCortex(\n",
" # change default cortex model and function\n",
" model=\"snowflake-arctic\",\n",
" cortex_function=\"complete\",\n",
"\n",
" # change default generation parameters\n",
" temperature=0,\n",
" max_tokens=10,\n",
" top_p=0.95,\n",
"\n",
" # specify snowflake credentials\n",
" account=\"YOUR_SNOWFLAKE_ACCOUNT\",\n",
" username=\"YOUR_SNOWFLAKE_USERNAME\",\n",
" password=\"YOUR_SNOWFLAKE_PASSWORD\",\n",
" database=\"YOUR_SNOWFLAKE_DATABASE\",\n",
" schema=\"YOUR_SNOWFLAKE_SCHEMA\",\n",
" role=\"YOUR_SNOWFLAKE_ROLE\",\n",
" warehouse=\"YOUR_SNOWFLAKE_WAREHOUSE\"\n",
")\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Calling the model\n",
"We can now call the model using the `invoke` or `generate` method.\n",
"\n",
"#### Generation"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" Large language models are artificial intelligence systems designed to understand, generate, and manipulate human language. These models are typically based on deep learning techniques and are trained on vast amounts of text data to learn patterns and structures in language. They can perform a wide range of language-related tasks, such as language translation, text generation, sentiment analysis, and answering questions. Some well-known large language models include Google's BERT, OpenAI's GPT series, and Facebook's RoBERTa. These models have shown remarkable performance in various natural language processing tasks, and their applications continue to expand as research in AI progresses.\", response_metadata={'completion_tokens': 131, 'prompt_tokens': 29, 'total_tokens': 160}, id='run-5435bd0a-83fd-4295-b237-66cbd1b5c0f3-0')"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" SystemMessage(content=\"You are a friendly assistant.\"),\n",
" HumanMessage(content=\"What are large language models?\"),\n",
"]\n",
"chat.invoke(messages)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Streaming\n",
"`ChatSnowflakeCortex` doesn't support streaming as of now. Support for streaming will be coming in the later versions!"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -13,7 +13,7 @@
"\n",
"## Prerequisites\n",
"\n",
"You need to have an existing dataset on the Apify platform. If you don't have one, please first check out [this notebook](/docs/integrations/tools/apify) on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs."
"You need to have an existing dataset on the Apify platform. If you don't have one, please first check out [this notebook](/docs/integrations/tools/apify) on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs. This example shows how to load a dataset produced by the [Website Content Crawler](https://apify.com/apify/website-content-crawler)."
]
},
{
@@ -101,8 +101,10 @@
"outputs": [],
"source": [
"from langchain.indexes import VectorstoreIndexCreator\n",
"from langchain_community.document_loaders import ApifyDatasetLoader\n",
"from langchain_core.documents import Document"
"from langchain_community.utilities import ApifyWrapper\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAI\n",
"from langchain_openai.embeddings import OpenAIEmbeddings"
]
},
{
@@ -125,7 +127,7 @@
"metadata": {},
"outputs": [],
"source": [
"index = VectorstoreIndexCreator().from_loaders([loader])"
"index = VectorstoreIndexCreator(embedding=OpenAIEmbeddings()).from_loaders([loader])"
]
},
{
@@ -135,7 +137,7 @@
"outputs": [],
"source": [
"query = \"What is Apify?\"\n",
"result = index.query_with_sources(query)"
"result = index.query_with_sources(query, llm=OpenAI())"
]
},
{

View File

@@ -83,7 +83,7 @@
},
"outputs": [],
"source": [
"loader = ImageCaptionLoader(path_images=list_image_urls)\n",
"loader = ImageCaptionLoader(images=list_image_urls)\n",
"list_docs = loader.load()\n",
"list_docs"
]

View File

@@ -15,7 +15,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install gpudb==7.2.0.1"
"%pip install gpudb==7.2.0.9"
]
},
{
@@ -97,14 +97,14 @@
"# data and the `SCHEMA.TABLE` combination must exist in Kinetica.\n",
"\n",
"QUERY = \"select text, survey_id as source from SCHEMA.TABLE limit 10\"\n",
"snowflake_loader = KineticaLoader(\n",
"kl = KineticaLoader(\n",
" query=QUERY,\n",
" host=HOST,\n",
" username=USERNAME,\n",
" password=PASSWORD,\n",
" metadata_columns=[\"source\"],\n",
")\n",
"kinetica_documents = snowflake_loader.load()\n",
"kinetica_documents = kl.load()\n",
"print(kinetica_documents)"
]
}

View File

@@ -12,6 +12,19 @@
"This covers how to load `Microsoft PowerPoint` documents into a document format that we can use downstream."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "aef1500f",
"metadata": {},
"outputs": [],
"source": [
"# Install packages\n",
"%pip install unstructured\n",
"%pip install python-magic\n",
"%pip install python-pptx"
]
},
{
"cell_type": "code",
"execution_count": 1,

View File

@@ -7,140 +7,99 @@
"source": [
"# Recursive URL\n",
"\n",
"We may want to process load all URLs under a root directory.\n",
"\n",
"For example, let's look at the [Python 3.9 Document](https://docs.python.org/3.9/).\n",
"\n",
"This has many interesting child pages that we may want to read in bulk.\n",
"\n",
"Of course, the `WebBaseLoader` can load a list of pages. \n",
"\n",
"But, the challenge is traversing the tree of child pages and actually assembling that list!\n",
" \n",
"We do this using the `RecursiveUrlLoader`.\n",
"\n",
"This also gives us the flexibility to exclude some children, customize the extractor, and more."
"The `RecursiveUrlLoader` lets you recursively scrape all child links from a root URL and parse them into Documents."
]
},
{
"cell_type": "markdown",
"id": "1be8094f",
"id": "947d29e7-3679-483d-973f-79ea3403a370",
"metadata": {},
"source": [
"# Parameters\n",
"- url: str, the target url to crawl.\n",
"- exclude_dirs: Optional[str], webpage directories to exclude.\n",
"- use_async: Optional[bool], wether to use async requests, using async requests is usually faster in large tasks. However, async will disable the lazy loading feature(the function still works, but it is not lazy). By default, it is set to False.\n",
"- extractor: Optional[Callable[[str], str]], a function to extract the text of the document from the webpage, by default it returns the page as it is. It is recommended to use tools like goose3 and beautifulsoup to extract the text. By default, it just returns the page as it is.\n",
"- max_depth: Optional[int] = None, the maximum depth to crawl. By default, it is set to 2. If you need to crawl the whole website, set it to a number that is large enough would simply do the job.\n",
"- timeout: Optional[int] = None, the timeout for each request, in the unit of seconds. By default, it is set to 10.\n",
"- prevent_outside: Optional[bool] = None, whether to prevent crawling outside the root url. By default, it is set to True."
"## Setup\n",
"\n",
"The `RecursiveUrlLoader` lives in the `langchain-community` package. There's no other required packages, though you will get richer default Document metadata if you have ``beautifulsoup4` installed as well."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "23c18539",
"id": "23359ab0-8056-4dee-8bff-c38dc079f17f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.recursive_url_loader import RecursiveUrlLoader"
"%pip install -qU langchain-community beautifulsoup4"
]
},
{
"cell_type": "markdown",
"id": "6384c057",
"id": "07985766-e4e9-4ea1-8a18-924fa4f294e5",
"metadata": {},
"source": [
"Let's try a simple example."
"## Instantiation\n",
"\n",
"Now we can instantiate our document loader object and load Documents:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "55394afe",
"execution_count": 1,
"id": "cb208dcf-9ce9-4197-bc44-b80d20aa4e50",
"metadata": {},
"outputs": [],
"source": [
"from bs4 import BeautifulSoup as Soup\n",
"from langchain_community.document_loaders import RecursiveUrlLoader\n",
"\n",
"url = \"https://docs.python.org/3.9/\"\n",
"loader = RecursiveUrlLoader(\n",
" url=url, max_depth=2, extractor=lambda x: Soup(x, \"html.parser\").text\n",
")\n",
"docs = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "084fb2ce",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\n\\n\\n\\nPython Frequently Asked Questions — Python 3.'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].page_content[:50]"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "13bd7e16",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'source': 'https://docs.python.org/3.9/library/index.html',\n",
" 'title': 'The Python Standard Library — Python 3.9.17 documentation',\n",
" 'language': None}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[-1].metadata"
" \"https://docs.python.org/3.9/\",\n",
" # max_depth=2,\n",
" # use_async=False,\n",
" # extractor=None,\n",
" # metadata_extractor=None,\n",
" # exclude_dirs=(),\n",
" # timeout=10,\n",
" # check_response_status=True,\n",
" # continue_on_failure=True,\n",
" # prevent_outside=True,\n",
" # base_url=None,\n",
" # ...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "5866e5a6",
"id": "0fac4425-735f-487d-a12b-c8ed2a209039",
"metadata": {},
"source": [
"However, since it's hard to perform a perfect filter, you may still see some irrelevant results in the results. You can perform a filter on the returned documents by yourself, if it's needed. Most of the time, the returned results are good enough."
]
},
{
"cell_type": "markdown",
"id": "4ec8ecef",
"metadata": {},
"source": [
"Testing on LangChain docs."
"## Load\n",
"\n",
"Use ``.load()`` to synchronously load into memory all Documents, with one\n",
"Document per visited URL. Starting from the initial URL, we recurse through\n",
"all linked URLs up to the specified max_depth.\n",
"\n",
"Let's run through a basic example of how to use the `RecursiveUrlLoader` on the [Python 3.9 Documentation](https://docs.python.org/3.9/)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "349b5598",
"id": "a30843c8-4a59-43dc-bf60-f26532f0f8e1",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/bagatur/.pyenv/versions/3.9.1/lib/python3.9/html/parser.py:170: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features=\"xml\"` into the BeautifulSoup constructor.\n",
" k = self.parse_starttag(i)\n"
]
},
{
"data": {
"text/plain": [
"8"
"{'source': 'https://docs.python.org/3.9/',\n",
" 'content_type': 'text/html',\n",
" 'title': '3.9.19 Documentation',\n",
" 'language': None}"
]
},
"execution_count": 2,
@@ -149,10 +108,208 @@
}
],
"source": [
"url = \"https://js.langchain.com/docs/modules/memory/integrations/\"\n",
"loader = RecursiveUrlLoader(url=url)\n",
"docs = loader.load()\n",
"len(docs)"
"docs[0].metadata"
]
},
{
"cell_type": "markdown",
"id": "211856ed-6dd7-46c6-859e-11aaea9093db",
"metadata": {},
"source": [
"Great! The first document looks like the root page we started from. Let's look at the metadata of the next document"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "2d842c03-fab8-4097-9f4f-809b2e71c0ba",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'source': 'https://docs.python.org/3.9/using/index.html',\n",
" 'content_type': 'text/html',\n",
" 'title': 'Python Setup and Usage — Python 3.9.19 documentation',\n",
" 'language': None}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[1].metadata"
]
},
{
"cell_type": "markdown",
"id": "f5714ace-7cc5-4c5c-9426-f68342880da0",
"metadata": {},
"source": [
"That url looks like a child of our root page, which is great! Let's move on from metadata to examine the content of one of our documents"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "51dc6c67-6857-4298-9472-08b147f3a631",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"<!DOCTYPE html>\n",
"\n",
"<html xmlns=\"http://www.w3.org/1999/xhtml\">\n",
" <head>\n",
" <meta charset=\"utf-8\" /><title>3.9.19 Documentation</title><meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n",
" \n",
" <link rel=\"stylesheet\" href=\"_static/pydoctheme.css\" type=\"text/css\" />\n",
" <link rel=\n"
]
}
],
"source": [
"print(docs[0].page_content[:300])"
]
},
{
"cell_type": "markdown",
"id": "d87cc239",
"metadata": {},
"source": [
"That certainly looks like HTML that comes from the url https://docs.python.org/3.9/, which is what we expected. Let's now look at some variations we can make to our basic example that can be helpful in different situations. "
]
},
{
"cell_type": "markdown",
"id": "8f41cc89",
"metadata": {},
"source": [
"## Adding an Extractor\n",
"\n",
"By default the loader sets the raw HTML from each link as the Document page content. To parse this HTML into a more human/LLM-friendly format you can pass in a custom ``extractor`` method:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "33a6f5b8",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/var/folders/td/vzm913rx77x21csd90g63_7c0000gn/T/ipykernel_10935/1083427287.py:6: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features=\"xml\"` into the BeautifulSoup constructor.\n",
" soup = BeautifulSoup(html, \"lxml\")\n",
"/Users/isaachershenson/.pyenv/versions/3.11.9/lib/python3.11/html/parser.py:170: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features=\"xml\"` into the BeautifulSoup constructor.\n",
" k = self.parse_starttag(i)\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"3.9.19 Documentation\n",
"\n",
"Download\n",
"Download these documents\n",
"Docs by version\n",
"\n",
"Python 3.13 (in development)\n",
"Python 3.12 (stable)\n",
"Python 3.11 (security-fixes)\n",
"Python 3.10 (security-fixes)\n",
"Python 3.9 (securit\n"
]
}
],
"source": [
"import re\n",
"\n",
"from bs4 import BeautifulSoup\n",
"\n",
"\n",
"def bs4_extractor(html: str) -> str:\n",
" soup = BeautifulSoup(html, \"lxml\")\n",
" return re.sub(r\"\\n\\n+\", \"\\n\\n\", soup.text).strip()\n",
"\n",
"\n",
"loader = RecursiveUrlLoader(\"https://docs.python.org/3.9/\", extractor=bs4_extractor)\n",
"docs = loader.load()\n",
"print(docs[0].page_content[:200])"
]
},
{
"cell_type": "markdown",
"id": "c8e8a826",
"metadata": {},
"source": [
"This looks much nicer!\n",
"\n",
"You can similarly pass in a `metadata_extractor` to customize how Document metadata is extracted from the HTTP response. See the [API reference](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.recursive_url_loader.RecursiveUrlLoader.html) for more on this."
]
},
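The cell above points to the API reference for `metadata_extractor` without showing one. As a sketch only — the two-argument `(raw_html, url)` callable signature is an assumption and may differ between versions, so check the API reference for the exact form — a custom extractor could look roughly like this:

```python
# Hedged sketch of a custom metadata extractor for RecursiveUrlLoader.
# The (raw_html, url) signature is assumed; newer versions may also pass the response.
from bs4 import BeautifulSoup
from langchain_community.document_loaders import RecursiveUrlLoader


def simple_metadata_extractor(raw_html: str, url: str) -> dict:
    """Record the source URL and the page <title> as Document metadata."""
    soup = BeautifulSoup(raw_html, "html.parser")
    title = soup.title.string if soup.title and soup.title.string else ""
    return {"source": url, "title": title}


loader = RecursiveUrlLoader(
    "https://docs.python.org/3.9/",
    max_depth=2,
    metadata_extractor=simple_metadata_extractor,
)
docs = loader.load()
```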
{
"cell_type": "markdown",
"id": "1dddbc94",
"metadata": {},
"source": [
"## Lazy loading\n",
"\n",
"If we're loading a large number of Documents and our downstream operations can be done over subsets of all loaded Documents, we can lazily load our Documents one at a time to minimize our memory footprint:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "7d0114fc",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/var/folders/4j/2rz3865x6qg07tx43146py8h0000gn/T/ipykernel_73962/2110507528.py:6: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features=\"xml\"` into the BeautifulSoup constructor.\n",
" soup = BeautifulSoup(html, \"lxml\")\n"
]
}
],
"source": [
"page = []\n",
"for doc in loader.lazy_load():\n",
" page.append(doc)\n",
" if len(page) >= 10:\n",
" # do some paged operation, e.g.\n",
" # index.upsert(page)\n",
"\n",
" page = []"
]
},
{
"cell_type": "markdown",
"id": "f88a7c2f-35df-4c3a-b238-f91be2674b96",
"metadata": {},
"source": [
"In this example we never have more than 10 Documents loaded into memory at a time."
]
},
{
"cell_type": "markdown",
"id": "3e4d1c8f",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"These examples show just a few of the ways in which you can modify the default `RecursiveUrlLoader`, but there are many more modifications that can be made to best fit your use case. Using the parameters `link_regex` and `exclude_dirs` can help you filter out unwanted URLs, `aload()` and `alazy_load()` can be used for aynchronous loading, and more.\n",
"\n",
"For detailed information on configuring and calling the ``RecursiveUrlLoader``, please see the API reference: https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.recursive_url_loader.RecursiveUrlLoader.html."
]
}
],
@@ -172,7 +329,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -17,6 +17,7 @@
"- C++ (*)\n",
"- C# (*)\n",
"- COBOL\n",
"- Elixir\n",
"- Go (*)\n",
"- Java (*)\n",
"- JavaScript (requires package `esprima`)\n",

View File

@@ -113,7 +113,7 @@
"\n",
"LCEL is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.\n",
"\n",
"- **[Overview](/docs/concepts#langchain-expression-language)**: LCEL and its benefits\n",
"- **[Overview](/docs/concepts#langchain-expression-language-lcel)**: LCEL and its benefits\n",
"- **[Interface](/docs/concepts#interface)**: The standard interface for LCEL objects\n",
"- **[How-to](/docs/expression_language/how_to)**: Key features of LCEL\n",
"- **[Cookbook](/docs/expression_language/cookbook)**: Example code for accomplishing common tasks\n",

View File

@@ -15,47 +15,45 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "427d5745",
"metadata": {},
"source": "from langchain_community.document_loaders import YoutubeLoader",
"outputs": [],
"source": [
"from langchain_community.document_loaders import YoutubeLoader"
]
"execution_count": null
},
{
"cell_type": "code",
"execution_count": null,
"id": "34a25b57",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet youtube-transcript-api"
]
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "code",
"execution_count": null,
"id": "bc8b308a",
"metadata": {},
"outputs": [],
"source": [
"loader = YoutubeLoader.from_youtube_url(\n",
" \"https://www.youtube.com/watch?v=QsYGlZkevEg\", add_video_info=False\n",
")"
]
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "code",
"execution_count": null,
"id": "d073dd36",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
],
"outputs": [],
"execution_count": null
},
{
"attachments": {},
@@ -68,26 +66,26 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "ba28af69",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet pytube"
]
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "code",
"execution_count": null,
"id": "9b8ea390",
"metadata": {},
"outputs": [],
"source": [
"loader = YoutubeLoader.from_youtube_url(\n",
" \"https://www.youtube.com/watch?v=QsYGlZkevEg\", add_video_info=True\n",
")\n",
"loader.load()"
]
],
"outputs": [],
"execution_count": null
},
{
"attachments": {},
@@ -104,10 +102,8 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "08510625",
"metadata": {},
"outputs": [],
"source": [
"loader = YoutubeLoader.from_youtube_url(\n",
" \"https://www.youtube.com/watch?v=QsYGlZkevEg\",\n",
@@ -116,7 +112,41 @@
" translation=\"en\",\n",
")\n",
"loader.load()"
]
],
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"### Get transcripts as timestamped chunks\n",
"\n",
"Get one or more `Document` objects, each containing a chunk of the video transcript. The length of the chunks, in seconds, may be specified. Each chunk's metadata includes a URL of the video on YouTube, which will start the video at the beginning of the specific chunk.\n",
"\n",
"`transcript_format` param: One of the `langchain_community.document_loaders.youtube.TranscriptFormat` values. In this case, `TranscriptFormat.CHUNKS`.\n",
"\n",
"`chunk_size_seconds` param: An integer number of video seconds to be represented by each chunk of transcript data. Default is 120 seconds."
],
"id": "69f4e399a9764d73"
},
{
"metadata": {},
"cell_type": "code",
"source": [
"from langchain_community.document_loaders.youtube import TranscriptFormat\n",
"\n",
"loader = YoutubeLoader.from_youtube_url(\n",
" \"https://www.youtube.com/watch?v=TKCMw0utiak\",\n",
" add_video_info=True,\n",
" transcript_format=TranscriptFormat.CHUNKS,\n",
" chunk_size_seconds=30,\n",
")\n",
"print(\"\\n\\n\".join(map(repr, loader.load())))"
],
"id": "540bbf19182f38bc",
"outputs": [],
"execution_count": null
},
{
"attachments": {},
@@ -142,10 +172,8 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "c345bc43",
"metadata": {},
"outputs": [],
"source": [
"# Init the GoogleApiClient\n",
"from pathlib import Path\n",
@@ -170,7 +198,9 @@
"\n",
"# returns a list of Documents\n",
"youtube_loader_channel.load()"
]
],
"outputs": [],
"execution_count": null
}
],
"metadata": {

View File

@@ -15,16 +15,24 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet doctran"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +42,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"metadata": {},
"outputs": [
{
@@ -43,7 +51,7 @@
"True"
]
},
"execution_count": 2,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -64,7 +72,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
@@ -107,7 +115,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@@ -119,13 +127,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Output\n",
"## Output using Sync version\n",
"After translating a document, the result will be returned as a new document with the page_content translated into the target language"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
@@ -134,7 +142,82 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Documento Confidencial - Solo para Uso Interno\n",
"\n",
"Fecha: 1 de Julio de 2023\n",
"\n",
"Asunto: Actualizaciones y Discusiones sobre Varios Temas\n",
"\n",
"Estimado Equipo,\n",
"\n",
"Espero que este correo electrónico los encuentre bien. En este documento, me gustaría proporcionarles algunas actualizaciones importantes y discutir varios temas que requieren nuestra atención. Por favor, traten la información contenida aquí como altamente confidencial.\n",
"\n",
"Medidas de Seguridad y Privacidad\n",
"Como parte de nuestro compromiso continuo para garantizar la seguridad y privacidad de los datos de nuestros clientes, hemos implementado medidas sólidas en todos nuestros sistemas. Nos gustaría elogiar a John Doe (email: john.doe@example.com) del departamento de TI por su trabajo diligente en mejorar nuestra seguridad de red. En adelante, recordamos amablemente a todos que se adhieran estrictamente a nuestras políticas y pautas de protección de datos. Además, si encuentran algún riesgo o incidente de seguridad potencial, por favor repórtenlo inmediatamente a nuestro equipo dedicado en security@example.com.\n",
"\n",
"Actualizaciones de Recursos Humanos y Beneficios para Empleados\n",
"Recientemente, dimos la bienvenida a varios nuevos miembros del equipo que han hecho contribuciones significativas a sus respectivos departamentos. Me gustaría reconocer a Jane Smith (SSN: 049-45-5928) por su destacado desempeño en servicio al cliente. Jane ha recibido consistentemente comentarios positivos de nuestros clientes. Además, recuerden que el período de inscripción abierta para nuestro programa de beneficios para empleados se acerca rápidamente. Si tienen alguna pregunta o requieren asistencia, por favor contacten a nuestro representante de Recursos Humanos, Michael Johnson (teléfono: 418-492-3850, email: michael.johnson@example.com).\n",
"\n",
"Iniciativas y Campañas de Marketing\n",
"Nuestro equipo de marketing ha estado trabajando activamente en el desarrollo de nuevas estrategias para aumentar el conocimiento de la marca y fomentar la participación de los clientes. Nos gustaría agradecer a Sarah Thompson (teléfono: 415-555-1234) por sus esfuerzos excepcionales en la gestión de nuestras plataformas de redes sociales. Sarah ha aumentado con éxito nuestra base de seguidores en un 20% solo en el último mes. Además, marquen sus calendarios para el próximo evento de lanzamiento de productos el 15 de Julio. Animamos a todos los miembros del equipo a asistir y apoyar este emocionante hito para nuestra empresa.\n",
"\n",
"Proyectos de Investigación y Desarrollo\n",
"En nuestra búsqueda de innovación, nuestro departamento de investigación y desarrollo ha estado trabajando incansablemente en varios proyectos. Me gustaría reconocer el trabajo excepcional de David Rodriguez (email: david.rodriguez@example.com) en su rol como líder de proyecto. Las contribuciones de David al desarrollo de nuestra tecnología de vanguardia han sido fundamentales. Además, recordamos a todos que compartan sus ideas y sugerencias para posibles nuevos proyectos durante nuestra sesión mensual de lluvia de ideas de I+D, programada para el 10 de Julio.\n",
"\n",
"Por favor, traten la información en este documento con la máxima confidencialidad y asegúrense de que no sea compartida con personas no autorizadas. Si tienen alguna pregunta o inquietud sobre los temas discutidos, por favor no duden en comunicarse directamente conmigo.\n",
"\n",
"Gracias por su atención, y sigamos trabajando juntos para alcanzar nuestros objetivos.\n",
"\n",
"Saludos cordiales,\n",
"\n",
"Jason Fan\n",
"Cofundador y CEO\n",
"Psychic\n",
"jason@psychic.dev\n"
]
}
],
"source": [
"print(translated_document[0].page_content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Output using the Async version\n",
"\n",
"After translating a document, the result will be returned as a new document with the page_content translated into the target language. The async version will improve performance when the documents are chunked in multiple parts. It will also make sure to return the output in the correct order."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"import asyncio"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"result = await qa_translator.atransform_documents(documents)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
@@ -152,22 +235,22 @@
"Espero que este correo electrónico les encuentre bien. En este documento, me gustaría proporcionarles algunas actualizaciones importantes y discutir varios temas que requieren nuestra atención. Por favor, traten la información contenida aquí como altamente confidencial.\n",
"\n",
"Medidas de Seguridad y Privacidad\n",
"Como parte de nuestro compromiso continuo de garantizar la seguridad y privacidad de los datos de nuestros clientes, hemos implementado medidas sólidas en todos nuestros sistemas. Nos gustaría elogiar a John Doe (correo electrónico: john.doe@example.com) del departamento de TI por su diligente trabajo en mejorar nuestra seguridad de red. En el futuro, recordamos amablemente a todos que se adhieran estrictamente a nuestras políticas y pautas de protección de datos. Además, si encuentran algún riesgo o incidente de seguridad potencial, por favor, repórtelo de inmediato a nuestro equipo dedicado en security@example.com.\n",
"Como parte de nuestro compromiso continuo de garantizar la seguridad y privacidad de los datos de nuestros clientes, hemos implementado medidas sólidas en todos nuestros sistemas. Nos gustaría elogiar a John Doe (email: john.doe@example.com) del departamento de TI por su trabajo diligente en mejorar nuestra seguridad de red. En adelante, recordamos amablemente a todos que se adhieran estrictamente a nuestras políticas y pautas de protección de datos. Además, si encuentran algún riesgo o incidente de seguridad potencial, por favor repórtenlo inmediatamente a nuestro equipo dedicado en security@example.com.\n",
"\n",
"Actualizaciones de Recursos Humanos y Beneficios para Empleados\n",
"Recientemente, dimos la bienvenida a varios nuevos miembros del equipo que han realizado contribuciones significativas en sus respectivos departamentos. Me gustaría reconocer a Jane Smith (SSN: 049-45-5928) por su destacado desempeño en servicio al cliente. Jane ha recibido consistentemente comentarios positivos de nuestros clientes. Además, recuerden que el período de inscripción abierta para nuestro programa de beneficios para empleados se acerca rápidamente. Si tienen alguna pregunta o necesitan ayuda, por favor, contacten a nuestro representante de Recursos Humanos, Michael Johnson (teléfono: 418-492-3850, correo electrónico: michael.johnson@example.com).\n",
"Recientemente, dimos la bienvenida a varios nuevos miembros del equipo que han hecho contribuciones significativas a sus respectivos departamentos. Me gustaría reconocer a Jane Smith (SSN: 049-45-5928) por su destacado desempeño en servicio al cliente. Jane ha recibido consistentemente comentarios positivos de nuestros clientes. Además, recuerden que el período de inscripción abierta para nuestro programa de beneficios para empleados se acerca rápidamente. Si tienen alguna pregunta o requieren asistencia, por favor contacten a nuestro representante de Recursos Humanos, Michael Johnson (teléfono: 418-492-3850, email: michael.johnson@example.com).\n",
"\n",
"Iniciativas y Campañas de Marketing\n",
"Nuestro equipo de marketing ha estado trabajando activamente en el desarrollo de nuevas estrategias para aumentar el conocimiento de nuestra marca y fomentar la participación de los clientes. Nos gustaría agradecer a Sarah Thompson (teléfono: 415-555-1234) por sus esfuerzos excepcionales en la gestión de nuestras plataformas de redes sociales. Sarah ha logrado aumentar nuestra base de seguidores en un 20% solo en el último mes. Además, marquen sus calendarios para el próximo evento de lanzamiento de productos el 15 de Julio. Animamos a todos los miembros del equipo a asistir y apoyar este emocionante hito para nuestra empresa.\n",
"Nuestro equipo de marketing ha estado trabajando activamente en el desarrollo de nuevas estrategias para aumentar el conocimiento de la marca y fomentar la participación de los clientes. Nos gustaría agradecer a Sarah Thompson (teléfono: 415-555-1234) por sus esfuerzos excepcionales en la gestión de nuestras plataformas de redes sociales. Sarah ha aumentado con éxito nuestra base de seguidores en un 20% solo en el último mes. Además, marquen sus calendarios para el próximo evento de lanzamiento de productos el 15 de Julio. Animamos a todos los miembros del equipo a asistir y apoyar este emocionante hito para nuestra empresa.\n",
"\n",
"Proyectos de Investigación y Desarrollo\n",
"En nuestra búsqueda de la innovación, nuestro departamento de investigación y desarrollo ha estado trabajando incansablemente en varios proyectos. Me gustaría reconocer el trabajo excepcional de David Rodriguez (correo electrónico: david.rodriguez@example.com) en su papel de líder de proyecto. Las contribuciones de David al desarrollo de nuestra tecnología de vanguardia han sido fundamentales. Además, nos gustaría recordar a todos que compartan sus ideas y sugerencias para posibles nuevos proyectos durante nuestra sesión mensual de lluvia de ideas de I+D, programada para el 10 de Julio.\n",
"En nuestra búsqueda de innovación, nuestro departamento de investigación y desarrollo ha estado trabajando incansablemente en varios proyectos. Me gustaría reconocer el trabajo excepcional de David Rodriguez (email: david.rodriguez@example.com) en su rol como líder de proyecto. Las contribuciones de David al desarrollo de nuestra tecnología de vanguardia han sido fundamentales. Además, recordamos a todos que compartan sus ideas y sugerencias para posibles nuevos proyectos durante nuestra sesión mensual de lluvia de ideas de I+D, programada para el 10 de Julio.\n",
"\n",
"Por favor, traten la información de este documento con la máxima confidencialidad y asegúrense de no compartirla con personas no autorizadas. Si tienen alguna pregunta o inquietud sobre los temas discutidos, por favor, no duden en comunicarse directamente conmigo.\n",
"Por favor, traten la información en este documento con la máxima confidencialidad y asegúrense de que no sea compartida con personas no autorizadas. Si tienen alguna pregunta o inquietud sobre los temas discutidos, por favor no duden en comunicarse directamente conmigo.\n",
"\n",
"Gracias por su atención y sigamos trabajando juntos para alcanzar nuestros objetivos.\n",
"Gracias por su atención, y sigamos trabajando juntos para alcanzar nuestros objetivos.\n",
"\n",
"Atentamente,\n",
"Saludos cordiales,\n",
"\n",
"Jason Fan\n",
"Cofundador y CEO\n",
@@ -177,7 +260,7 @@
}
],
"source": [
"print(translated_document[0].page_content)"
"print(result[0].page_content)"
]
}
],
@@ -197,7 +280,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
"version": "3.11.4"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,420 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Volcengine Reranker\n",
"\n",
"This notebook shows how to use Volcengine Reranker for document compression and retrieval. [Volcengine](https://www.volcengine.com/) is a cloud service platform developed by ByteDance, the parent company of TikTok.\n",
"\n",
"Volcengine's Rerank Service supports reranking up to 50 documents with a maximum of 4000 tokens. For more, please visit [here](https://www.volcengine.com/docs/84313/1254474) and [here](https://www.volcengine.com/docs/84313/1254605)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet volcengine"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet faiss\n",
"\n",
"# OR (depending on Python version)\n",
"\n",
"%pip install --upgrade --quiet faiss-cpu"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# To obtain ak/sk: https://www.volcengine.com/docs/84313/1254488\n",
"\n",
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"VOLC_API_AK\"] = getpass.getpass(\"Volcengine API AK:\")\n",
"os.environ[\"VOLC_API_SK\"] = getpass.getpass(\"Volcengine API SK:\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# Helper function for printing docs\n",
"def pretty_print_docs(docs):\n",
" print(\n",
" f\"\\n{'-' * 100}\\n\".join(\n",
" [f\"Document {i+1}:\\n\\n\" + d.page_content for i, d in enumerate(docs)]\n",
" )\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up the base vector store retriever\n",
"Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/terminator/Developer/langchain/.venv/lib/python3.11/site-packages/sentence_transformers/cross_encoder/CrossEncoder.py:11: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)\n",
" from tqdm.autonotebook import tqdm, trange\n",
"/Users/terminator/Developer/langchain/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.\n",
" warnings.warn(\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"We cannot let this happen. \n",
"\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n",
"\n",
"While it often appears that we never agree, that isnt true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 4:\n",
"\n",
"He will never extinguish their love of freedom. He will never weaken the resolve of the free world. \n",
"\n",
"We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. \n",
"\n",
"The pandemic has been punishing. \n",
"\n",
"And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. \n",
"\n",
"I understand.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 5:\n",
"\n",
"As Ohio Senator Sherrod Brown says, “Its time to bury the label “Rust Belt.” \n",
"\n",
"Its time. \n",
"\n",
"But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. \n",
"\n",
"Inflation is robbing them of the gains they might otherwise feel. \n",
"\n",
"I get it. Thats why my top priority is getting prices under control.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 6:\n",
"\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 7:\n",
"\n",
"Its not only the right thing to do—its the economically smart thing to do. \n",
"\n",
"Thats why immigration reform is supported by everyone from labor unions to religious leaders to the U.S. Chamber of Commerce. \n",
"\n",
"Lets get it done once and for all. \n",
"\n",
"Advancing liberty and justice also requires protecting the rights of women. \n",
"\n",
"The constitutional right affirmed in Roe v. Wade—standing precedent for half a century—is under attack as never before.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 8:\n",
"\n",
"I understand. \n",
"\n",
"I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. \n",
"\n",
"Thats why one of the first things I did as President was fight to pass the American Rescue Plan. \n",
"\n",
"Because people were hurting. We needed to act, and we did. \n",
"\n",
"Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 9:\n",
"\n",
"Third we can end the shutdown of schools and businesses. We have the tools we need. \n",
"\n",
"Its time for Americans to get back to work and fill our great downtowns again. People working from home can feel safe to begin to return to the office. \n",
"\n",
"Were doing that here in the federal government. The vast majority of federal workers will once again work in person. \n",
"\n",
"Our schools are open. Lets keep it that way. Our kids need to be in school.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 10:\n",
"\n",
"He met the Ukrainian people. \n",
"\n",
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
"\n",
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
"\n",
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 11:\n",
"\n",
"The widow of Sergeant First Class Heath Robinson. \n",
"\n",
"He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. \n",
"\n",
"Stationed near Baghdad, just yards from burn pits the size of football fields. \n",
"\n",
"Heaths widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter. \n",
"\n",
"But cancer from prolonged exposure to burn pits ravaged Heaths lungs and body. \n",
"\n",
"Danielle says Heath was a fighter to the very end.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 12:\n",
"\n",
"Danielle says Heath was a fighter to the very end. \n",
"\n",
"He didnt know how to stop fighting, and neither did she. \n",
"\n",
"Through her pain she found purpose to demand we do better. \n",
"\n",
"Tonight, Danielle—we are. \n",
"\n",
"The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. \n",
"\n",
"And tonight, Im announcing were expanding eligibility to veterans suffering from nine respiratory cancers.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 13:\n",
"\n",
"We can do all this while keeping lit the torch of liberty that has led generations of immigrants to this land—my forefathers and so many of yours. \n",
"\n",
"Provide a pathway to citizenship for Dreamers, those on temporary status, farm workers, and essential workers. \n",
"\n",
"Revise our laws so businesses have the workers they need and families dont wait decades to reunite. \n",
"\n",
"Its not only the right thing to do—its the economically smart thing to do.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 14:\n",
"\n",
"He rejected repeated efforts at diplomacy. \n",
"\n",
"He thought the West and NATO wouldnt respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \n",
"\n",
"We prepared extensively and carefully. \n",
"\n",
"We spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 15:\n",
"\n",
"As Ive told Xi Jinping, it is never a good bet to bet against the American people. \n",
"\n",
"Well create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. \n",
"\n",
"And well do it all to withstand the devastating effects of the climate crisis and promote environmental justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 16:\n",
"\n",
"Tonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n",
"\n",
"The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n",
"\n",
"We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 17:\n",
"\n",
"Look at cars. \n",
"\n",
"Last year, there werent enough semiconductors to make all the cars that people wanted to buy. \n",
"\n",
"And guess what, prices of automobiles went up. \n",
"\n",
"So—we have a choice. \n",
"\n",
"One way to fight inflation is to drive down wages and make Americans poorer. \n",
"\n",
"I have a better plan to fight inflation. \n",
"\n",
"Lower your costs, not your wages. \n",
"\n",
"Make more cars and semiconductors in America. \n",
"\n",
"More infrastructure and innovation in America. \n",
"\n",
"More goods moving faster and cheaper in America.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 18:\n",
"\n",
"So thats my plan. It will grow the economy and lower costs for families. \n",
"\n",
"So what are we waiting for? Lets get this done. And while youre at it, confirm my nominees to the Federal Reserve, which plays a critical role in fighting inflation. \n",
"\n",
"My plan will not only lower costs to give families a fair shot, it will lower the deficit.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 19:\n",
"\n",
"Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \n",
"\n",
"Please rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \n",
"\n",
"Throughout our history weve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. \n",
"\n",
"They keep moving. \n",
"\n",
"And the costs and the threats to America and the world keep rising.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 20:\n",
"\n",
"Its based on DARPA—the Defense Department project that led to the Internet, GPS, and so much more. \n",
"\n",
"ARPA-H will have a singular purpose—to drive breakthroughs in cancer, Alzheimers, diabetes, and more. \n",
"\n",
"A unity agenda for the nation. \n",
"\n",
"We can do this. \n",
"\n",
"My fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. \n",
"\n",
"In this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n",
"To disable this warning, you can either:\n",
"\t- Avoid using `tokenizers` before the fork if possible\n",
"\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n"
]
}
],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores.faiss import FAISS\n",
"from langchain_huggingface import HuggingFaceEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"documents = TextLoader(\"../../how_to/state_of_the_union.txt\").load()\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)\n",
"texts = text_splitter.split_documents(documents)\n",
"retriever = FAISS.from_documents(\n",
" texts, HuggingFaceEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n",
").as_retriever(search_kwargs={\"k\": 20})\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = retriever.invoke(query)\n",
"pretty_print_docs(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reranking with VolcengineRerank\n",
"Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll use the `VolcengineRerank` to rerank the returned results."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n",
"\n",
"While it often appears that we never agree, that isnt true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"We cannot let this happen. \n",
"\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.\n"
]
}
],
"source": [
"from langchain.retrievers import ContextualCompressionRetriever\n",
"from langchain_community.document_compressors.volcengine_rerank import VolcengineRerank\n",
"\n",
"compressor = VolcengineRerank()\n",
"compression_retriever = ContextualCompressionRetriever(\n",
" base_compressor=compressor, base_retriever=retriever\n",
")\n",
"\n",
"compressed_docs = compression_retriever.invoke(\n",
" \"What did the president say about Ketanji Jackson Brown\"\n",
")\n",
"pretty_print_docs(compressed_docs)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -90,7 +90,9 @@
"- `voyage-code-2`\n",
"- `voyage-2`\n",
"- `voyage-law-2`\n",
"- `voyage-lite-02-instruct`"
"- `voyage-lite-02-instruct`\n",
"- `voyage-finance-2`\n",
"- `voyage-multilingual-2`"
]
},
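As a quick illustration of the expanded model list above, the sketch below shows how one of the new embedding models might be selected. It is a minimal sketch, assuming the `langchain-voyageai` package is installed and `VOYAGE_API_KEY` is set; the constructor arguments should be checked against the `VoyageAIEmbeddings` API reference.

```python
# Minimal sketch (assumes `pip install -qU langchain-voyageai` and VOYAGE_API_KEY is set).
from langchain_voyageai import VoyageAIEmbeddings

# "voyage-multilingual-2" is one of the models listed above; swap in any other name.
embeddings = VoyageAIEmbeddings(model="voyage-multilingual-2")
vector = embeddings.embed_query("What did the president say about Ketanji Brown Jackson?")
```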
{
@@ -336,7 +338,10 @@
"metadata": {},
"source": [
"## Doing reranking with VoyageAIRerank\n",
"Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll use the Voyage AI reranker to rerank the returned results."
"Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll use the Voyage AI reranker to rerank the returned results. You can use any of the following Reranking models: ([source](https://docs.voyageai.com/docs/reranker)):\n",
"\n",
"- `rerank-1`\n",
"- `rerank-lite-1`"
]
},
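The list above only names the available reranking models; the sketch below shows how one of them might be passed to the compressor. This is a hedged sketch, assuming the `langchain-voyageai` package and a `VOYAGE_API_KEY` environment variable; verify the `model` and `top_k` parameters against the `VoyageAIRerank` API reference.

```python
# Minimal sketch: selecting a reranking model by name (assumes langchain-voyageai is installed).
from langchain.retrievers import ContextualCompressionRetriever
from langchain_voyageai import VoyageAIRerank

compressor = VoyageAIRerank(model="rerank-lite-1", top_k=3)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=retriever,  # `retriever` is the base retriever built earlier in the notebook
)
```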
{

View File

@@ -164,10 +164,10 @@
"text": [
"Node properties:\n",
"- **Movie**\n",
" - `runtime: INTEGER` Min: 120, Max: 120\n",
" - `name: STRING` Available options: ['Top Gun']\n",
" - `runtime`: INTEGER Min: 120, Max: 120\n",
" - `name`: STRING Available options: ['Top Gun']\n",
"- **Actor**\n",
" - `name: STRING` Available options: ['Tom Cruise', 'Val Kilmer', 'Anthony Edwards', 'Meg Ryan']\n",
" - `name`: STRING Available options: ['Tom Cruise', 'Val Kilmer', 'Anthony Edwards', 'Meg Ryan']\n",
"Relationship properties:\n",
"\n",
"The relationships:\n",
@@ -225,7 +225,7 @@
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Tom Cruise'}]\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -234,7 +234,7 @@
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'Anthony Edwards, Meg Ryan, Val Kilmer, Tom Cruise played in Top Gun.'}"
" 'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'}"
]
},
"execution_count": 8,
@@ -286,7 +286,7 @@
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -295,7 +295,7 @@
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'Anthony Edwards, Meg Ryan played in Top Gun.'}"
" 'result': 'Tom Cruise, Val Kilmer played in Top Gun.'}"
]
},
"execution_count": 10,
@@ -346,11 +346,11 @@
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Tom Cruise'}]\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Intermediate steps: [{'query': \"MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\\nWHERE m.name = 'Top Gun'\\nRETURN a.name\"}, {'context': [{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Tom Cruise'}]}]\n",
"Final answer: Anthony Edwards, Meg Ryan, Val Kilmer, Tom Cruise played in Top Gun.\n"
"Intermediate steps: [{'query': \"MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\\nWHERE m.name = 'Top Gun'\\nRETURN a.name\"}, {'context': [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]}]\n",
"Final answer: Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.\n"
]
}
],
@@ -406,10 +406,10 @@
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': [{'a.name': 'Anthony Edwards'},\n",
" {'a.name': 'Meg Ryan'},\n",
" 'result': [{'a.name': 'Tom Cruise'},\n",
" {'a.name': 'Val Kilmer'},\n",
" {'a.name': 'Tom Cruise'}]}"
" {'a.name': 'Anthony Edwards'},\n",
" {'a.name': 'Meg Ryan'}]}"
]
},
"execution_count": 14,
@@ -482,7 +482,7 @@
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (:Movie {name:\"Top Gun\"})<-[:ACTED_IN]-()\n",
"\u001b[32;1m\u001b[1;3mMATCH (m:Movie {name:\"Top Gun\"})<-[:ACTED_IN]-()\n",
"RETURN count(*) AS numberOfActors\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'numberOfActors': 4}]\u001b[0m\n",
@@ -494,7 +494,7 @@
"data": {
"text/plain": [
"{'query': 'How many people played in Top Gun?',\n",
" 'result': 'There were 4 actors who played in Top Gun.'}"
" 'result': 'There were 4 actors in Top Gun.'}"
]
},
"execution_count": 16,
@@ -548,7 +548,7 @@
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Tom Cruise'}]\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -557,7 +557,7 @@
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'Anthony Edwards, Meg Ryan, Val Kilmer, and Tom Cruise played in Top Gun.'}"
" 'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'}"
]
},
"execution_count": 18,
@@ -661,7 +661,7 @@
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Tom Cruise'}]\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -670,7 +670,7 @@
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'Anthony Edwards, Meg Ryan, Val Kilmer, Tom Cruise played in Top Gun.'}"
" 'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'}"
]
},
"execution_count": 22,
@@ -683,12 +683,116 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3fa3f3d5-f7e7-4ca9-8f07-ca22b897f192",
"cell_type": "markdown",
"id": "81093062-eb7f-4d96-b1fd-c36b8f1b9474",
"metadata": {},
"outputs": [],
"source": []
"source": [
"## Provide context from database results as tool/function output\n",
"\n",
"You can use the `use_function_response` parameter to pass context from database results to an LLM as a tool/function output. This method improves the response accuracy and relevance of an answer as the LLM follows the provided context more closely.\n",
"_You will need to use an LLM with native function calling support to use this feature_."
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "2be8f51c-e80a-4a60-ab1c-266450fc17cd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\n",
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'The main actors in Top Gun are Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan.'}"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" llm=ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo\"),\n",
" graph=graph,\n",
" verbose=True,\n",
" use_function_response=True,\n",
")\n",
"chain.invoke({\"query\": \"Who played in Top Gun?\"})"
]
},
{
"cell_type": "markdown",
"id": "48a75785-5bc9-49a7-a41b-88bf3ef9d312",
"metadata": {},
"source": [
"You can provide custom system message when using the function response feature by providing `function_response_system` to instruct the model on how to generate answers.\n",
"\n",
"_Note that `qa_prompt` will have no effect when using `use_function_response`_"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "ddf0a61e-f104-4dbb-abbf-e65f3f57dd9a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\n",
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': \"Arrr matey! In the film Top Gun, ye be seein' Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan sailin' the high seas of the sky! Aye, they be a fine crew of actors, they be!\"}"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" llm=ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo\"),\n",
" graph=graph,\n",
" verbose=True,\n",
" use_function_response=True,\n",
" function_response_system=\"Respond as a pirate!\",\n",
")\n",
"chain.invoke({\"query\": \"Who played in Top Gun?\"})"
]
}
],
"metadata": {

View File

@@ -724,6 +724,83 @@
"llm(\"Tell me joke\")"
]
},
{
"cell_type": "markdown",
"id": "9b2b2777",
"metadata": {},
"source": [
"## `MongoDB Atlas` Cache\n",
"\n",
"[MongoDB Atlas](https://www.mongodb.com/docs/atlas/) is a fully-managed cloud database available in AWS, Azure, and GCP. It has native support for \n",
"Vector Search on the MongoDB document data.\n",
"Use [MongoDB Atlas Vector Search](/docs/integrations/providers/mongodb_atlas) to semantically cache prompts and responses."
]
},
{
"cell_type": "markdown",
"id": "ecdc2a0a",
"metadata": {},
"source": [
"### `MongoDBCache`\n",
"An abstraction to store a simple cache in MongoDB. This does not use Semantic Caching, nor does it require an index to be made on the collection before generation.\n",
"\n",
"To import this cache:\n",
"\n",
"```python\n",
"from langchain_mongodb.cache import MongoDBCache\n",
"```\n",
"\n",
"\n",
"To use this cache with your LLMs:\n",
"```python\n",
"from langchain_core.globals import set_llm_cache\n",
"\n",
"# use any embedding provider...\n",
"from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings\n",
"\n",
"mongodb_atlas_uri = \"<YOUR_CONNECTION_STRING>\"\n",
"COLLECTION_NAME=\"<YOUR_CACHE_COLLECTION_NAME>\"\n",
"DATABASE_NAME=\"<YOUR_DATABASE_NAME>\"\n",
"\n",
"set_llm_cache(MongoDBCache(\n",
" connection_string=mongodb_atlas_uri,\n",
" collection_name=COLLECTION_NAME,\n",
" database_name=DATABASE_NAME,\n",
"))\n",
"```\n",
"\n",
"\n",
"### `MongoDBAtlasSemanticCache`\n",
"Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached results. Under the hood it blends MongoDBAtlas as both a cache and a vectorstore.\n",
"The MongoDBAtlasSemanticCache inherits from `MongoDBAtlasVectorSearch` and needs an Atlas Vector Search Index defined to work. Please look at the [usage example](/docs/integrations/vectorstores/mongodb_atlas) on how to set up the index.\n",
"\n",
"To import this cache:\n",
"```python\n",
"from langchain_mongodb.cache import MongoDBAtlasSemanticCache\n",
"```\n",
"\n",
"To use this cache with your LLMs:\n",
"```python\n",
"from langchain_core.globals import set_llm_cache\n",
"\n",
"# use any embedding provider...\n",
"from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings\n",
"\n",
"mongodb_atlas_uri = \"<YOUR_CONNECTION_STRING>\"\n",
"COLLECTION_NAME=\"<YOUR_CACHE_COLLECTION_NAME>\"\n",
"DATABASE_NAME=\"<YOUR_DATABASE_NAME>\"\n",
"\n",
"set_llm_cache(MongoDBAtlasSemanticCache(\n",
" embedding=FakeEmbeddings(),\n",
" connection_string=mongodb_atlas_uri,\n",
" collection_name=COLLECTION_NAME,\n",
" database_name=DATABASE_NAME,\n",
"))\n",
"```\n",
"\n",
"To find more resources about using MongoDBSemanticCache visit [here](https://www.mongodb.com/blog/post/introducing-semantic-caching-dedicated-mongodb-lang-chain-package-gen-ai-apps)"
]
},
{
"cell_type": "markdown",
"id": "726fe754",
@@ -993,7 +1070,7 @@
"metadata": {},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
"CASSANDRA_KEYSPACE = demo_keyspace\n"
@@ -1029,7 +1106,7 @@
"metadata": {},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
"ASTRA_DB_ID = 01234567-89ab-cdef-0123-456789abcdef\n",
@@ -1633,14 +1710,12 @@
"metadata": {},
"outputs": [],
"source": [
"from elasticsearch import Elasticsearch\n",
"from langchain.globals import set_llm_cache\n",
"from langchain_elasticsearch import ElasticsearchCache\n",
"\n",
"es_client = Elasticsearch(hosts=\"http://localhost:9200\")\n",
"set_llm_cache(\n",
" ElasticsearchCache(\n",
" es_connection=es_client,\n",
" es_url=\"http://localhost:9200\",\n",
" index_name=\"llm-chat-cache\",\n",
" metadata={\"project\": \"my_chatgpt_project\"},\n",
" )\n",
@@ -1684,7 +1759,6 @@
"import json\n",
"from typing import Any, Dict, List\n",
"\n",
"from elasticsearch import Elasticsearch\n",
"from langchain.globals import set_llm_cache\n",
"from langchain_core.caches import RETURN_VAL_TYPE\n",
"from langchain_elasticsearch import ElasticsearchCache\n",
@@ -1715,9 +1789,10 @@
" ]\n",
"\n",
"\n",
"es_client = Elasticsearch(hosts=\"http://localhost:9200\")\n",
"set_llm_cache(\n",
" SearchableElasticsearchCache(es_connection=es_client, index_name=\"llm-chat-cache\")\n",
" SearchableElasticsearchCache(\n",
" es_url=\"http://localhost:9200\", index_name=\"llm-chat-cache\"\n",
" )\n",
")"
]
},
@@ -2071,6 +2146,71 @@
"# so it uses the cached result!\n",
"llm(\"Tell me one joke\")"
]
},
{
"cell_type": "markdown",
"id": "ae1f5e1c-085e-4998-9f2d-b5867d2c3d5b",
"metadata": {
"execution": {
"iopub.execute_input": "2024-05-31T17:18:43.345495Z",
"iopub.status.busy": "2024-05-31T17:18:43.345015Z",
"iopub.status.idle": "2024-05-31T17:18:43.351003Z",
"shell.execute_reply": "2024-05-31T17:18:43.350073Z",
"shell.execute_reply.started": "2024-05-31T17:18:43.345456Z"
}
},
"source": [
"## Cache classes: summary table"
]
},
{
"cell_type": "markdown",
"id": "65072e45-10bc-40f1-979b-2617656bbbce",
"metadata": {
"execution": {
"iopub.execute_input": "2024-05-31T17:16:05.616430Z",
"iopub.status.busy": "2024-05-31T17:16:05.616221Z",
"iopub.status.idle": "2024-05-31T17:16:05.624164Z",
"shell.execute_reply": "2024-05-31T17:16:05.623673Z",
"shell.execute_reply.started": "2024-05-31T17:16:05.616418Z"
}
},
"source": [
"**Cache** classes are implemented by inheriting the [BaseCache](https://api.python.langchain.com/en/latest/caches/langchain_core.caches.BaseCache.html) class.\n",
"\n",
"This table lists all 20 derived classes with links to the API Reference.\n",
"\n",
"\n",
"| Namespace 🔻 | Class |\n",
"|------------|---------|\n",
"| langchain_astradb.cache | [AstraDBCache](https://api.python.langchain.com/en/latest/cache/langchain_astradb.cache.AstraDBCache.html) |\n",
"| langchain_astradb.cache | [AstraDBSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_astradb.cache.AstraDBSemanticCache.html) |\n",
"| langchain_community.cache | [AstraDBCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.AstraDBCache.html) |\n",
"| langchain_community.cache | [AstraDBSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.AstraDBSemanticCache.html) |\n",
"| langchain_community.cache | [AzureCosmosDBSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.AzureCosmosDBSemanticCache.html) |\n",
"| langchain_community.cache | [CassandraCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.CassandraCache.html) |\n",
"| langchain_community.cache | [CassandraSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.CassandraSemanticCache.html) |\n",
"| langchain_community.cache | [GPTCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.GPTCache.html) |\n",
"| langchain_community.cache | [InMemoryCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.InMemoryCache.html) |\n",
"| langchain_community.cache | [MomentoCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.MomentoCache.html) |\n",
"| langchain_community.cache | [OpenSearchSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.OpenSearchSemanticCache.html) |\n",
"| langchain_community.cache | [RedisSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.RedisSemanticCache.html) |\n",
"| langchain_community.cache | [SQLAlchemyCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.SQLAlchemyCache.html) |\n",
"| langchain_community.cache | [SQLAlchemyMd5Cache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.SQLAlchemyMd5Cache.html) |\n",
"| langchain_community.cache | [UpstashRedisCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.UpstashRedisCache.html) |\n",
"| langchain_core.caches | [InMemoryCache](https://api.python.langchain.com/en/latest/caches/langchain_core.caches.InMemoryCache.html) |\n",
"| langchain_elasticsearch.cache | [ElasticsearchCache](https://api.python.langchain.com/en/latest/cache/langchain_elasticsearch.cache.ElasticsearchCache.html) |\n",
"| langchain_mongodb.cache | [MongoDBAtlasSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_mongodb.cache.MongoDBAtlasSemanticCache.html) |\n",
"| langchain_mongodb.cache | [MongoDBCache](https://api.python.langchain.com/en/latest/cache/langchain_mongodb.cache.MongoDBCache.html) |\n"
]
},
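Since every row in the table above is a subclass of `BaseCache`, a minimal sketch of that interface may help when reading the table. The class below is hypothetical, not one of the listed implementations; it only mirrors the abstract `lookup`/`update`/`clear` methods of `langchain_core.caches.BaseCache`.

```python
# Hypothetical toy cache implementing the BaseCache interface; real subclasses
# (Redis, Cassandra, MongoDB, ...) back this with an external store.
from typing import Any, Dict, Optional, Tuple

from langchain_core.caches import RETURN_VAL_TYPE, BaseCache


class DictCache(BaseCache):
    """Exact-match cache keyed by (prompt, llm_string)."""

    def __init__(self) -> None:
        self._store: Dict[Tuple[str, str], RETURN_VAL_TYPE] = {}

    def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
        # Return cached generations, or None on a cache miss.
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        self._store[(prompt, llm_string)] = return_val

    def clear(self, **kwargs: Any) -> None:
        self._store.clear()
```

Registering such a class works the same way as for the built-in caches, e.g. `set_llm_cache(DictCache())`.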
{
"cell_type": "code",
"execution_count": null,
"id": "19067f14-c69a-4156-9504-af43a0713669",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -2089,7 +2229,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
}
},
"nbformat": 4,

View File

@@ -12,6 +12,17 @@
"This example goes over how to use LangChain to interact with Aleph Alpha models"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "84483bd5",
"metadata": {},
"outputs": [],
"source": [
"# Installing the langchain package needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -9,6 +9,16 @@
">[Machine Learning Platform for AI of Alibaba Cloud](https://www.alibabacloud.com/help/en/pai) is a machine learning or deep learning engineering platform intended for enterprises and developers. It provides easy-to-use, cost-effective, high-performance, and easy-to-scale plug-ins that can be applied to various industry scenarios. With over 140 built-in optimization algorithms, `Machine Learning Platform for AI` provides whole-process AI engineering capabilities including data labeling (`PAI-iTAG`), model building (`PAI-Designer` and `PAI-DSW`), model training (`PAI-DLC`), compilation optimization, and inference deployment (`PAI-EAS`). `PAI-EAS` supports different types of hardware resources, including CPUs and GPUs, and features high throughput and low latency. It allows you to deploy large-scale complex models with a few clicks and perform elastic scale-ins and scale-outs in real time. It also provides a comprehensive O&M and monitoring system."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 8,

View File

@@ -16,6 +16,16 @@
">`API Gateway` handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization >and access control, throttling, monitoring, and API version management. `API Gateway` has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data >transferred out and, with the `API Gateway` tiered pricing model, you can reduce your cost as your API usage scales."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -3,10 +3,15 @@
{
"cell_type": "raw",
"id": "602a52a4",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Anthropic\n",
"sidebar_class_name: hidden\n",
"---"
]
},
@@ -17,9 +22,13 @@
"source": [
"# AnthropicLLM\n",
"\n",
"This example goes over how to use LangChain to interact with `Anthropic` models.\n",
":::caution\n",
"You are currently on a page documenting the use of Anthropic legacy Claude 2 models as [text completion models](/docs/concepts/#llms). The latest and most popular Anthropic models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"NOTE: AnthropicLLM only supports legacy Claude 2 models. To use the newest Claude 3 models, please use [`ChatAnthropic`](/docs/integrations/chat/anthropic) instead.\n",
"You are probably looking for [this page instead](/docs/integrations/chat/anthropic/).\n",
":::\n",
"\n",
"This example goes over how to use LangChain to interact with `Anthropic` models.\n",
"\n",
"## Installation"
]

View File

@@ -12,6 +12,17 @@
"This example goes over how to use LangChain to interact with [Anyscale Endpoint](https://app.endpoints.anyscale.com/). "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "134bd228",
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -18,6 +18,17 @@
"To use, you should have the `aphrodite-engine` python package installed."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4dba1074",
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -8,6 +8,16 @@
"This notebook demonstrates how to use the `Arcee` class for generating text using Arcee's Domain Adapted Language Models (DALMs)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -11,6 +11,16 @@
"This notebook goes over how to use an LLM hosted on an `Azure ML Online Endpoint`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -7,7 +7,13 @@
"source": [
"# Azure OpenAI\n",
"\n",
"This notebook goes over how to use Langchain with [Azure OpenAI](https://aka.ms/azure-openai).\n",
":::caution\n",
"You are currently on a page documenting the use of Azure OpenAI [text completion models](/docs/concepts/#llms). The latest and most popular Azure OpenAI models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"Unless you are specifically using `gpt-3.5-turbo-instruct`, you are probably looking for [this page instead](/docs/integrations/chat/azure_chat_openai/).\n",
":::\n",
"\n",
"This page goes over how to use LangChain with [Azure OpenAI](https://aka.ms/azure-openai).\n",
"\n",
"The Azure OpenAI API is compatible with OpenAI's API. The `openai` Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI with the exceptions noted below.\n",
"\n",

View File

@@ -8,6 +8,16 @@
"Baichuan Inc. (https://www.baichuan-ai.com/) is a Chinese startup in the era of AGI, dedicated to addressing fundamental human needs: Efficiency, Health, and Happiness."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -45,6 +45,16 @@
"- AquilaChat-7B"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 2,

View File

@@ -12,6 +12,16 @@
"This example goes over how to use LangChain to interact with Banana models"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -45,6 +45,16 @@
"In this example, we'll work with Mistral 7B. [Deploy Mistral 7B here](https://app.baseten.co/explore/mistral_7b_instruct) and follow along with the deployed model's ID, found in the model dashboard."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -11,6 +11,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
":::caution\n",
"You are currently on a page documenting the use of Amazon Bedrock models as [text completion models](/docs/concepts/#llms). Many popular models available on Bedrock are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"You may be looking for [this page instead](/docs/integrations/chat/bedrock/).\n",
":::\n",
"\n",
">[Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that offers a choice of \n",
"> high-performing foundation models (FMs) from leading AI companies like `AI21 Labs`, `Anthropic`, `Cohere`, \n",
"> `Meta`, `Stability AI`, and `Amazon` via a single API, along with a broad set of capabilities you need to \n",
@@ -46,67 +52,6 @@
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using in a conversation chain"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import ConversationChain\n",
"from langchain.memory import ConversationBufferMemory\n",
"\n",
"conversation = ConversationChain(\n",
" llm=llm, verbose=True, memory=ConversationBufferMemory()\n",
")\n",
"\n",
"conversation.predict(input=\"Hi there!\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Conversation Chain With Streaming"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import Bedrock\n",
"from langchain_core.callbacks import StreamingStdOutCallbackHandler\n",
"\n",
"llm = Bedrock(\n",
" credentials_profile_name=\"bedrock-admin\",\n",
" model_id=\"amazon.titan-text-express-v1\",\n",
" streaming=True,\n",
" callbacks=[StreamingStdOutCallbackHandler()],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"conversation = ConversationChain(\n",
" llm=llm, verbose=True, memory=ConversationBufferMemory()\n",
")\n",
"\n",
"conversation.predict(input=\"Hi there!\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -126,22 +71,17 @@
" model_id=\"<Custom model ARN>\", # ARN like 'arn:aws:bedrock:...' obtained via provisioning the custom model\n",
" model_kwargs={\"temperature\": 1},\n",
" streaming=True,\n",
" callbacks=[StreamingStdOutCallbackHandler()],\n",
")\n",
"\n",
"conversation = ConversationChain(\n",
" llm=custom_llm, verbose=True, memory=ConversationBufferMemory()\n",
")\n",
"conversation.predict(input=\"What is the recipe of mayonnaise?\")"
"custom_llm.invoke(input=\"What is the recipe of mayonnaise?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Guardrails for Amazon Bedrock example \n",
"## Guardrails for Amazon Bedrock\n",
"\n",
"## Guardrails for Amazon Bedrock (Preview) \n",
"[Guardrails for Amazon Bedrock](https://aws.amazon.com/bedrock/guardrails/) evaluates user inputs and model responses based on use case specific policies, and provides an additional layer of safeguards regardless of the underlying model. Guardrails can be applied across models, including Anthropic Claude, Meta Llama 2, Cohere Command, AI21 Labs Jurassic, and Amazon Titan Text, as well as fine-tuned models.\n",
"**Note**: Guardrails for Amazon Bedrock is currently in preview and not generally available. Reach out through your usual AWS Support contacts if youd like access to this feature.\n",
"In this section, we are going to set up a Bedrock language model with specific guardrails that include tracing capabilities. "

View File

@@ -7,6 +7,12 @@
"source": [
"# Cohere\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of Cohere models as [text completion models](/docs/concepts/#llms). Many popular Cohere models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"You may be looking for [this page instead](/docs/integrations/chat/cohere/).\n",
":::\n",
"\n",
">[Cohere](https://cohere.ai/about) is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.\n",
"\n",
"Head to the [API reference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.cohere.Cohere.html) for detailed documentation of all attributes and methods."
@@ -193,7 +199,7 @@
"id": "39198f7d-6fc8-4662-954a-37ad38c4bec4",
"metadata": {},
"source": [
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language)"
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
]
},
{

View File

@@ -7,141 +7,112 @@
"source": [
"# Databricks\n",
"\n",
"The [Databricks](https://www.databricks.com/) Lakehouse Platform unifies data, analytics, and AI on one platform.\n",
"> [Databricks](https://www.databricks.com/) Lakehouse Platform unifies data, analytics, and AI on one platform.\n",
"\n",
"This example notebook shows how to wrap Databricks endpoints as LLMs in LangChain.\n",
"It supports two endpoint types:\n",
"\n",
"* Serving endpoint, recommended for production and development,\n",
"* Cluster driver proxy app, recommended for interactive development."
"This notebook provides a quick overview for getting started with Databricks [LLM models](https://python.langchain.com/v0.2/docs/concepts/#llms). For detailed documentation of all features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.databricks.Databricks.html).\n",
"\n",
"## Overview\n",
"\n",
"`Databricks` LLM class wraps a completion endpoint hosted as either of these two endpoint types:\n",
"\n",
"* [Databricks Model Serving](https://docs.databricks.com/en/machine-learning/model-serving/index.html), recommended for production and development,\n",
"* Cluster driver proxy app, recommended for interactive development.\n",
"\n",
"This example notebook shows how to wrap your LLM endpoint and use it as an LLM in your LangChain application."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installation\n",
"## Limitations\n",
"\n",
"`mlflow >= 2.9 ` is required to run the code in this notebook. If it's not installed, please install it using this command:\n",
"The `Databricks` LLM class is *legacy* implementation and has several limitations in the feature compatibility.\n",
"\n",
"```\n",
"pip install mlflow>=2.9\n",
"```\n",
"* Only supports synchronous invocation. Streaming or async APIs are not supported.\n",
"* `batch` API is not supported.\n",
"\n",
"Also, we need `dbutils` for this example.\n",
"\n",
"```\n",
"pip install dbutils\n",
"```\n"
"To use those features, please use the new [ChatDatabricks](https://python.langchain.com/v0.2/docs/integrations/chat/databricks) class instead. `ChatDatabricks` supports all APIs of `ChatModel` including streaming, async, batch, etc.\n"
]
},
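{

For comparison with the limitations listed above, here is a minimal sketch of the `ChatDatabricks` path that the note recommends. The endpoint name is a placeholder, not one taken from this diff; the `target_uri`/`endpoint` parameters mirror the usage shown elsewhere in this changeset.

```python
from langchain_community.chat_models import ChatDatabricks

# Placeholder endpoint name; any chat-completions serving endpoint in your workspace works.
chat = ChatDatabricks(
    target_uri="databricks",
    endpoint="databricks-dbrx-instruct",
    temperature=0.1,
)

# Unlike the legacy `Databricks` LLM class, ChatDatabricks also supports streaming,
# async, and batch invocation.
chat.invoke("How are you?")
```

},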
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Wrapping a serving endpoint: External model\n",
"## Setup\n",
"\n",
"Prerequisite: Register an OpenAI API key as a secret:\n",
"To access Databricks models you'll need to create a Databricks account, set up credentials (only if you are outside Databricks workspace), and install required packages.\n",
"\n",
" ```bash\n",
" databricks secrets create-scope <scope>\n",
" databricks secrets put-secret <scope> openai-api-key --string-value $OPENAI_API_KEY\n",
" ```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following code creates a new serving endpoint with OpenAI's GPT-4 model for chat and generates a response using the endpoint."
"### Credentials (only if you are outside Databricks)\n",
"\n",
"If you are running LangChain app inside Databricks, you can skip this step.\n",
"\n",
"Otherwise, you need manually set the Databricks workspace hostname and personal access token to `DATABRICKS_HOST` and `DATABRICKS_TOKEN` environment variables, respectively. See [Authentication Documentation](https://docs.databricks.com/en/dev-tools/auth/index.html#databricks-personal-access-tokens) for how to get an access token."
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='Hello! How can I assist you today?'\n"
]
}
],
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatDatabricks\n",
"from langchain_core.messages import HumanMessage\n",
"from mlflow.deployments import get_deploy_client\n",
"import getpass\n",
"import os\n",
"\n",
"client = get_deploy_client(\"databricks\")\n",
"\n",
"secret = \"secrets/<scope>/openai-api-key\" # replace `<scope>` with your scope\n",
"name = \"my-chat\" # rename this if my-chat already exists\n",
"client.create_endpoint(\n",
" name=name,\n",
" config={\n",
" \"served_entities\": [\n",
" {\n",
" \"name\": \"my-chat\",\n",
" \"external_model\": {\n",
" \"name\": \"gpt-4\",\n",
" \"provider\": \"openai\",\n",
" \"task\": \"llm/v1/chat\",\n",
" \"openai_config\": {\n",
" \"openai_api_key\": \"{{\" + secret + \"}}\",\n",
" },\n",
" },\n",
" }\n",
" ],\n",
" },\n",
")\n",
"\n",
"chat = ChatDatabricks(\n",
" target_uri=\"databricks\",\n",
" endpoint=name,\n",
" temperature=0.1,\n",
")\n",
"chat([HumanMessage(content=\"hello\")])"
"os.environ[\"DATABRICKS_HOST\"] = \"https://your-workspace.cloud.databricks.com\"\n",
"os.environ[\"DATABRICKS_TOKEN\"] = getpass.getpass(\"Enter your Databricks access token: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Wrapping a serving endpoint: Foundation model\n",
"\n",
"The following code uses the `databricks-bge-large-en` serving endpoint (no endpoint creation is required) to generate embeddings from input text."
"Alternatively, you can pass those parameters when initializing the `Databricks` class."
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[0.051055908203125, 0.007221221923828125, 0.003879547119140625]\n"
]
}
],
"outputs": [],
"source": [
"from langchain_community.embeddings import DatabricksEmbeddings\n",
"from langchain_community.llms import Databricks\n",
"\n",
"embeddings = DatabricksEmbeddings(endpoint=\"databricks-bge-large-en\")\n",
"embeddings.embed_query(\"hello\")[:3]"
"databricks = Databricks(\n",
" host=\"https://your-workspace.cloud.databricks.com\",\n",
" # We strongly recommend NOT to hardcode your access token in your code, instead use secret management tools\n",
" # or environment variables to store your access token securely. The following example uses Databricks Secrets\n",
" # to retrieve the access token that is available within the Databricks notebook.\n",
" token=dbutils.secrets.get(scope=\"YOUR_SECRET_SCOPE\", key=\"databricks-token\"), # noqa: F821\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Wrapping a serving endpoint: Custom model\n",
"### Installation\n",
"\n",
"Prerequisites:\n",
"The LangChain Databricks integration lives in the `langchain-community` package. Also, `mlflow >= 2.9 ` is required to run the code in this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community mlflow>=2.9.0"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Wrapping Model Serving Endpoint\n",
"\n",
"### Prerequisites:\n",
"\n",
"* An LLM was registered and deployed to [a Databricks serving endpoint](https://docs.databricks.com/machine-learning/model-serving/index.html).\n",
"* You have [\"Can Query\" permission](https://docs.databricks.com/security/auth-authz/access-control/serving-endpoint-acl.html) to the endpoint.\n",
@@ -149,9 +120,14 @@
"The expected MLflow model signature is:\n",
"\n",
" * inputs: `[{\"name\": \"prompt\", \"type\": \"string\"}, {\"name\": \"stop\", \"type\": \"list[string]\"}]`\n",
" * outputs: `[{\"type\": \"string\"}]`\n",
"\n",
"If the model signature is incompatible or you want to insert extra configs, you can set `transform_input_fn` and `transform_output_fn` accordingly."
" * outputs: `[{\"type\": \"string\"}]`\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Invocation"
]
},
{
@@ -173,12 +149,8 @@
"source": [
"from langchain_community.llms import Databricks\n",
"\n",
"# If running a Databricks notebook attached to an interactive cluster in \"single user\"\n",
"# or \"no isolation shared\" mode, you only need to specify the endpoint name to create\n",
"# a `Databricks` instance to query a serving endpoint in the same workspace.\n",
"llm = Databricks(endpoint_name=\"dolly\")\n",
"\n",
"llm(\"How are you?\")"
"llm = Databricks(endpoint_name=\"YOUR_ENDPOINT_NAME\")\n",
"llm.invoke(\"How are you?\")"
]
},
{
@@ -198,245 +170,16 @@
}
],
"source": [
"llm(\"How are you?\", stop=[\".\"])"
"llm.invoke(\"How are you?\", stop=[\".\"])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'I am fine. Thank you!'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Otherwise, you can manually specify the Databricks workspace hostname and personal access token\n",
"# or set `DATABRICKS_HOST` and `DATABRICKS_TOKEN` environment variables, respectively.\n",
"# See https://docs.databricks.com/dev-tools/auth.html#databricks-personal-access-tokens\n",
"# We strongly recommend not exposing the API token explicitly inside a notebook.\n",
"# You can use Databricks secret manager to store your API token securely.\n",
"# See https://docs.databricks.com/dev-tools/databricks-utils.html#secrets-utility-dbutilssecrets\n",
"\n",
"import os\n",
"\n",
"import dbutils\n",
"\n",
"os.environ[\"DATABRICKS_TOKEN\"] = dbutils.secrets.get(\"myworkspace\", \"api_token\")\n",
"\n",
"llm = Databricks(host=\"myworkspace.cloud.databricks.com\", endpoint_name=\"dolly\")\n",
"\n",
"llm(\"How are you?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'I am fine.'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# If the serving endpoint accepts extra parameters like `temperature`,\n",
"# you can set them in `model_kwargs`.\n",
"llm = Databricks(endpoint_name=\"dolly\", model_kwargs={\"temperature\": 0.1})\n",
"\n",
"llm(\"How are you?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Im Excellent. You?'"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Use `transform_input_fn` and `transform_output_fn` if the serving endpoint\n",
"# expects a different input schema and does not return a JSON string,\n",
"# respectively, or you want to apply a prompt template on top.\n",
"\n",
"\n",
"def transform_input(**request):\n",
" full_prompt = f\"\"\"{request[\"prompt\"]}\n",
" Be Concise.\n",
" \"\"\"\n",
" request[\"prompt\"] = full_prompt\n",
" return request\n",
"\n",
"\n",
"llm = Databricks(endpoint_name=\"dolly\", transform_input_fn=transform_input)\n",
"\n",
"llm(\"How are you?\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Wrapping a cluster driver proxy app\n",
"### Transform Input and Output\n",
"\n",
"Prerequisites:\n",
"\n",
"* An LLM loaded on a Databricks interactive cluster in \"single user\" or \"no isolation shared\" mode.\n",
"* A local HTTP server running on the driver node to serve the model at `\"/\"` using HTTP POST with JSON input/output.\n",
"* It uses a port number between `[3000, 8000]` and listens to the driver IP address or simply `0.0.0.0` instead of localhost only.\n",
"* You have \"Can Attach To\" permission to the cluster.\n",
"\n",
"The expected server schema (using JSON schema) is:\n",
"\n",
"* inputs:\n",
" ```json\n",
" {\"type\": \"object\",\n",
" \"properties\": {\n",
" \"prompt\": {\"type\": \"string\"},\n",
" \"stop\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}}},\n",
" \"required\": [\"prompt\"]}\n",
" ```\n",
"* outputs: `{\"type\": \"string\"}`\n",
"\n",
"If the server schema is incompatible or you want to insert extra configs, you can use `transform_input_fn` and `transform_output_fn` accordingly.\n",
"\n",
"The following is a minimal example for running a driver proxy app to serve an LLM:\n",
"\n",
"```python\n",
"from flask import Flask, request, jsonify\n",
"import torch\n",
"from transformers import pipeline, AutoTokenizer, StoppingCriteria\n",
"\n",
"model = \"databricks/dolly-v2-3b\"\n",
"tokenizer = AutoTokenizer.from_pretrained(model, padding_side=\"left\")\n",
"dolly = pipeline(model=model, tokenizer=tokenizer, trust_remote_code=True, device_map=\"auto\")\n",
"device = dolly.device\n",
"\n",
"class CheckStop(StoppingCriteria):\n",
" def __init__(self, stop=None):\n",
" super().__init__()\n",
" self.stop = stop or []\n",
" self.matched = \"\"\n",
" self.stop_ids = [tokenizer.encode(s, return_tensors='pt').to(device) for s in self.stop]\n",
" def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs):\n",
" for i, s in enumerate(self.stop_ids):\n",
" if torch.all((s == input_ids[0][-s.shape[1]:])).item():\n",
" self.matched = self.stop[i]\n",
" return True\n",
" return False\n",
"\n",
"def llm(prompt, stop=None, **kwargs):\n",
" check_stop = CheckStop(stop)\n",
" result = dolly(prompt, stopping_criteria=[check_stop], **kwargs)\n",
" return result[0][\"generated_text\"].rstrip(check_stop.matched)\n",
"\n",
"app = Flask(\"dolly\")\n",
"\n",
"@app.route('/', methods=['POST'])\n",
"def serve_llm():\n",
" resp = llm(**request.json)\n",
" return jsonify(resp)\n",
"\n",
"app.run(host=\"0.0.0.0\", port=\"7777\")\n",
"```\n",
"\n",
"Once the server is running, you can create a `Databricks` instance to wrap it as an LLM."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hello, thank you for asking. It is wonderful to hear that you are well.'"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# If running a Databricks notebook attached to the same cluster that runs the app,\n",
"# you only need to specify the driver port to create a `Databricks` instance.\n",
"llm = Databricks(cluster_driver_port=\"7777\")\n",
"\n",
"llm(\"How are you?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'I am well. You?'"
]
},
"execution_count": 40,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Otherwise, you can manually specify the cluster ID to use,\n",
"# as well as Databricks workspace hostname and personal access token.\n",
"\n",
"llm = Databricks(cluster_id=\"0000-000000-xxxxxxxx\", cluster_driver_port=\"7777\")\n",
"\n",
"llm(\"How are you?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'I am very well. It is a pleasure to meet you.'"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# If the app accepts extra parameters like `temperature`,\n",
"# you can set them in `model_kwargs`.\n",
"llm = Databricks(cluster_driver_port=\"7777\", model_kwargs={\"temperature\": 0.1})\n",
"\n",
"llm(\"How are you?\")"
"Sometimes you may want to wrap a serving endpoint that has imcompatible model signature or you want to insert extra configs. You can use the `transform_input_fn` and `transform_output_fn` arguments to define additional pre/post process."
]
},
{
@@ -456,7 +199,7 @@
}
],
"source": [
"# Use `transform_input_fn` and `transform_output_fn` if the app\n",
"# Use `transform_input_fn` and `transform_output_fn` if the serving endpoint\n",
"# expects a different input schema and does not return a JSON string,\n",
"# respectively, or you want to apply a prompt template on top.\n",
"\n",
@@ -474,12 +217,12 @@
"\n",
"\n",
"llm = Databricks(\n",
" cluster_driver_port=\"7777\",\n",
" endpoint_name=\"YOUR_ENDPOINT_NAME\",\n",
" transform_input_fn=transform_input,\n",
" transform_output_fn=transform_output,\n",
")\n",
"\n",
"llm(\"How are you?\")"
"llm.invoke(\"How are you?\")"
]
}
],
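
For completeness, here is a self-contained sketch of the transform pair wired up in the hunk above. The response shape handled in `transform_output` is an assumption for illustration; adjust it to whatever your serving endpoint actually returns, and replace the placeholder endpoint name.

```python
from langchain_community.llms import Databricks


def transform_input(**request):
    # Prepend an instruction to the prompt before it is sent to the endpoint.
    request["prompt"] = f"{request['prompt']}\nBe concise."
    return request


def transform_output(response):
    # Assumed response shape: {"candidates": ["..."]}; unwrap it to a plain string.
    return response["candidates"][0].strip()


llm = Databricks(
    endpoint_name="YOUR_ENDPOINT_NAME",
    transform_input_fn=transform_input,
    transform_output_fn=transform_output,
)

llm.invoke("How are you?")
```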

View File

@@ -7,6 +7,12 @@
"source": [
"# Fireworks\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of Fireworks models as [text completion models](/docs/concepts/#llms). Many popular Fireworks models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"You may be looking for [this page instead](/docs/integrations/chat/fireworks/).\n",
":::\n",
"\n",
">[Fireworks](https://app.fireworks.ai/) accelerates product development on generative AI by creating an innovative AI experiment and production platform. \n",
"\n",
"This example goes over how to use LangChain to interact with `Fireworks` models."

View File

@@ -25,6 +25,12 @@
"id": "bead5ede-d9cc-44b9-b062-99c90a10cf40",
"metadata": {},
"source": [
":::caution\n",
"You are currently on a page documenting the use of Google models as [text completion models](/docs/concepts/#llms). Many popular Google models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"You may be looking for [this page instead](/docs/integrations/chat/google_generative_ai/).\n",
":::\n",
"\n",
"A guide on using [Google Generative AI](https://developers.generativeai.google/) models with Langchain. Note: It's separate from Google Cloud Vertex AI [integration](/docs/integrations/llms/google_vertex_ai_palm)."
]
},

View File

@@ -15,6 +15,12 @@
"source": [
"# Google Cloud Vertex AI\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of Google Vertex [text completion models](/docs/concepts/#llms). Many Google models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"You may be looking for [this page instead](/docs/integrations/chat/google_vertex_ai_palm/).\n",
":::\n",
"\n",
"**Note:** This is separate from the `Google Generative AI` integration, it exposes [Vertex AI Generative API](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/overview) on `Google Cloud`.\n",
"\n",
"VertexAI exposes all foundational models available in google cloud:\n",
@@ -328,7 +334,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language)"
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
]
},
{

View File

@@ -14,15 +14,15 @@
"Oracle Cloud Infrastructure (OCI) Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models (LLMs) that cover a wide range of use cases, and which is available through a single API.\n",
"Using the OCI Generative AI service you can access ready-to-use pretrained models, or create and host your own fine-tuned custom models based on your own data on dedicated AI clusters. Detailed documentation of the service and API is available __[here](https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm)__ and __[here](https://docs.oracle.com/en-us/iaas/api/#/en/generative-ai/20231130/)__.\n",
"\n",
"This notebook explains how to use OCI's Genrative AI models with LangChain."
"This notebook explains how to use OCI's Generative AI complete models with LangChain."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prerequisite\n",
"We will need to install the oci sdk"
"## Setup\n",
"Ensure that the oci sdk and the langchain-community package are installed"
]
},
{
@@ -31,31 +31,7 @@
"metadata": {},
"outputs": [],
"source": [
"!pip install -U oci"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### OCI Generative AI API endpoint \n",
"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Authentication\n",
"The authentication methods supported for this langchain integration are:\n",
"\n",
"1. API Key\n",
"2. Session token\n",
"3. Instance principal\n",
"4. Resource principal \n",
"\n",
"These follows the standard SDK authentication methods detailed __[here](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm)__.\n",
" "
"!pip install -U oci langchain-community"
]
},
{
@@ -71,13 +47,13 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import OCIGenAI\n",
"from langchain_community.llms.oci_generative_ai import OCIGenAI\n",
"\n",
"# use default authN method API-key\n",
"llm = OCIGenAI(\n",
" model_id=\"MY_MODEL\",\n",
" model_id=\"cohere.command\",\n",
" service_endpoint=\"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\",\n",
" compartment_id=\"MY_OCID\",\n",
" model_kwargs={\"temperature\": 0, \"max_tokens\": 500},\n",
")\n",
"\n",
"response = llm.invoke(\"Tell me one fact about earth\", temperature=0.7)\n",
@@ -85,30 +61,10 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"# Use Session Token to authN\n",
"llm = OCIGenAI(\n",
" model_id=\"MY_MODEL\",\n",
" service_endpoint=\"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\",\n",
" compartment_id=\"MY_OCID\",\n",
" auth_type=\"SECURITY_TOKEN\",\n",
" auth_profile=\"MY_PROFILE\", # replace with your profile name\n",
" model_kwargs={\"temperature\": 0.7, \"top_p\": 0.75, \"max_tokens\": 200},\n",
")\n",
"\n",
"prompt = PromptTemplate(input_variables=[\"query\"], template=\"{query}\")\n",
"\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
"\n",
"response = llm_chain.invoke(\"what is the capital of france?\")\n",
"print(response)"
"#### Chaining with prompt templates"
]
},
{
@@ -117,49 +73,95 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings import OCIGenAIEmbeddings\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"embeddings = OCIGenAIEmbeddings(\n",
" model_id=\"MY_EMBEDDING_MODEL\",\n",
" service_endpoint=\"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\",\n",
" compartment_id=\"MY_OCID\",\n",
")\n",
"\n",
"vectorstore = FAISS.from_texts(\n",
" [\n",
" \"Larry Ellison co-founded Oracle Corporation in 1977 with Bob Miner and Ed Oates.\",\n",
" \"Oracle Corporation is an American multinational computer technology company headquartered in Austin, Texas, United States.\",\n",
" ],\n",
" embedding=embeddings,\n",
")\n",
"\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
" \n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = PromptTemplate.from_template(template)\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"llm = OCIGenAI(\n",
" model_id=\"MY_MODEL\",\n",
" model_id=\"cohere.command\",\n",
" service_endpoint=\"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\",\n",
" compartment_id=\"MY_OCID\",\n",
" model_kwargs={\"temperature\": 0, \"max_tokens\": 500},\n",
")\n",
"\n",
"chain = (\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
"prompt = PromptTemplate(input_variables=[\"query\"], template=\"{query}\")\n",
"llm_chain = prompt | llm\n",
"\n",
"response = llm_chain.invoke(\"what is the capital of france?\")\n",
"print(response)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Streaming"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm = OCIGenAI(\n",
" model_id=\"cohere.command\",\n",
" service_endpoint=\"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\",\n",
" compartment_id=\"MY_OCID\",\n",
" model_kwargs={\"temperature\": 0, \"max_tokens\": 500},\n",
")\n",
"\n",
"print(chain.invoke(\"when was oracle founded?\"))\n",
"print(chain.invoke(\"where is oracle headquartered?\"))"
"for chunk in llm.stream(\"Write me a song about sparkling water.\"):\n",
" print(chunk, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Authentication\n",
"The authentication methods supported for LlamaIndex are equivalent to those used with other OCI services and follow the __[standard SDK authentication](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm)__ methods, specifically API Key, session token, instance principal, and resource principal.\n",
"\n",
"API key is the default authentication method used in the examples above. The following example demonstrates how to use a different authentication method (session token)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm = OCIGenAI(\n",
" model_id=\"cohere.command\",\n",
" service_endpoint=\"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\",\n",
" compartment_id=\"MY_OCID\",\n",
" auth_type=\"SECURITY_TOKEN\",\n",
" auth_profile=\"MY_PROFILE\", # replace with your profile name\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Dedicated AI Cluster\n",
"To access models hosted in a dedicated AI cluster __[create an endpoint](https://docs.oracle.com/en-us/iaas/api/#/en/generative-ai-inference/20231130/)__ whose assigned OCID (currently prefixed by ocid1.generativeaiendpoint.oc1.us-chicago-1) is used as your model ID.\n",
"\n",
"When accessing models hosted in a dedicated AI cluster you will need to initialize the OCIGenAI interface with two extra required params (\"provider\" and \"context_size\")."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm = OCIGenAI(\n",
" model_id=\"ocid1.generativeaiendpoint.oc1.us-chicago-1....\",\n",
" service_endpoint=\"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\",\n",
" compartment_id=\"DEDICATED_COMPARTMENT_OCID\",\n",
" auth_profile=\"MY_PROFILE\", # replace with your profile name,\n",
" provider=\"MODEL_PROVIDER\", # e.g., \"cohere\" or \"meta\"\n",
" context_size=\"MODEL_CONTEXT_SIZE\", # e.g., 128000\n",
")"
]
}
],

View File

@@ -6,6 +6,12 @@
"source": [
"# Ollama\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of Ollama models as [text completion models](/docs/concepts/#llms). Many popular Ollama models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"You may be looking for [this page instead](/docs/integrations/chat/ollama/).\n",
":::\n",
"\n",
"[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2, locally.\n",
"\n",
"Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. \n",

View File

@@ -7,6 +7,12 @@
"source": [
"# OpenAI\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of OpenAI [text completion models](/docs/concepts/#llms). The latest and most popular OpenAI models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"Unless you are specifically using `gpt-3.5-turbo-instruct`, you are probably looking for [this page instead](/docs/integrations/chat/openai/).\n",
":::\n",
"\n",
"[OpenAI](https://platform.openai.com/docs/introduction) offers a spectrum of models with different levels of power suitable for different tasks.\n",
"\n",
"This example goes over how to use LangChain to interact with `OpenAI` [models](https://platform.openai.com/docs/models)"

View File

@@ -87,7 +87,6 @@
" \"do_sample\": True,\n",
" \"max_tokens_to_generate\": 1000,\n",
" \"temperature\": 0.01,\n",
" \"process_prompt\": True,\n",
" \"select_expert\": \"llama-2-7b-chat-hf\",\n",
" # \"stop_sequences\": '\\\"sequence1\\\",\\\"sequence2\\\"',\n",
" # \"repetition_penalty\": 1.0,\n",
@@ -116,7 +115,6 @@
" \"do_sample\": True,\n",
" \"max_tokens_to_generate\": 1000,\n",
" \"temperature\": 0.01,\n",
" \"process_prompt\": True,\n",
" \"select_expert\": \"llama-2-7b-chat-hf\",\n",
" # \"stop_sequences\": '\\\"sequence1\\\",\\\"sequence2\\\"',\n",
" # \"repetition_penalty\": 1.0,\n",
@@ -177,14 +175,16 @@
"import os\n",
"\n",
"sambastudio_base_url = \"<Your SambaStudio environment URL>\"\n",
"# sambastudio_base_uri = \"<Your SambaStudio endpoint base URI>\" # optional, \"api/predict/nlp\" set as default\n",
"sambastudio_base_uri = (\n",
" \"<Your SambaStudio endpoint base URI>\" # optional, \"api/predict/nlp\" set as default\n",
")\n",
"sambastudio_project_id = \"<Your SambaStudio project id>\"\n",
"sambastudio_endpoint_id = \"<Your SambaStudio endpoint id>\"\n",
"sambastudio_api_key = \"<Your SambaStudio endpoint API key>\"\n",
"\n",
"# Set the environment variables\n",
"os.environ[\"SAMBASTUDIO_BASE_URL\"] = sambastudio_base_url\n",
"# os.environ[\"SAMBASTUDIO_BASE_URI\"] = sambastudio_base_uri\n",
"os.environ[\"SAMBASTUDIO_BASE_URI\"] = sambastudio_base_uri\n",
"os.environ[\"SAMBASTUDIO_PROJECT_ID\"] = sambastudio_project_id\n",
"os.environ[\"SAMBASTUDIO_ENDPOINT_ID\"] = sambastudio_endpoint_id\n",
"os.environ[\"SAMBASTUDIO_API_KEY\"] = sambastudio_api_key"
@@ -247,6 +247,40 @@
"for chunk in llm.stream(\"Why should I use open source models?\"):\n",
" print(chunk, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also call a CoE endpoint expert model "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Using a CoE endpoint\n",
"\n",
"from langchain_community.llms.sambanova import SambaStudio\n",
"\n",
"llm = SambaStudio(\n",
" streaming=False,\n",
" model_kwargs={\n",
" \"do_sample\": True,\n",
" \"max_tokens_to_generate\": 1000,\n",
" \"temperature\": 0.01,\n",
" \"select_expert\": \"Meta-Llama-3-8B-Instruct\",\n",
" # \"repetition_penalty\": 1.0,\n",
" # \"top_k\": 50,\n",
" # \"top_logprobs\": 0,\n",
" # \"top_p\": 1.0\n",
" },\n",
")\n",
"\n",
"print(llm.invoke(\"Why should I use open source models?\"))"
]
}
],
"metadata": {
