Compare commits

..

106 Commits

Author SHA1 Message Date
isaac hershenson
0ecbe70b37 fixing image issues 2024-07-24 10:06:28 -07:00
isaac hershenson
984762c721 fmt 2024-07-23 16:58:49 -07:00
isaac hershenson
05fbb43ef6 typo 2024-07-22 16:13:50 -07:00
isaac hershenson
57de61a5ee batch embeddings 2024-07-22 14:19:01 -07:00
MarkYQJ
df357f82ca ignore the first turn to apply "history" mechanism (#14118)
Previously, a meaningless string "system: " was generated when building the
condensed question; this increases the probability of producing an improper
condensed question and misunderstanding the user's question. Below is a case:
- Original Question: Can you explain the arguments of Meilisearch?
- Condense Question
  - What are the benefits of using Meilisearch? (by CodeLlama)
  - What are the reasons for using Meilisearch? (by GPT-4)

The condensed questions (whether from CodeLlama or GPT-4) differ from the
original one.

By checking the content of each dialogue turn, the history string is generated
only when the turn's content is not empty.
Since there is nothing before the first turn, the "history" mechanism is
ignored at the very first turn.

With this change, the condensed question becomes "What are the arguments for
using Meilisearch?".


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-22 20:11:17 +00:00
Bagatur
236e957abb core,groq,openai,mistralai,robocorp,fireworks,anthropic[patch]: Update BaseModel subclass and instance checks to handle both v1 and proper namespaces (#24417)
After this PR chat models will correctly handle pydantic 2 with
bind_tools and with_structured_output.


```python
import pydantic
print(pydantic.__version__)
```
2.8.2

```python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class Add(BaseModel):
    x: int
    y: int

model = ChatOpenAI().bind_tools([Add])
print(model.invoke('2 + 5').tool_calls)

model = ChatOpenAI().with_structured_output(Add)
print(type(model.invoke('2 + 5')))
```

```
[{'name': 'Add', 'args': {'x': 2, 'y': 5}, 'id': 'call_PNUFa4pdfNOYXxIMHc6ps2Do', 'type': 'tool_call'}]
<class '__main__.Add'>
```


```python
from langchain_openai import ChatOpenAI
from pydantic.v1 import BaseModel, Field

class Add(BaseModel):
    x: int
    y: int

model = ChatOpenAI().bind_tools([Add])
print(model.invoke('2 + 5').tool_calls)

model = ChatOpenAI().with_structured_output(Add)
print(type(model.invoke('2 + 5')))
```

```python
[{'name': 'Add', 'args': {'x': 2, 'y': 5}, 'id': 'call_hhiHYP441cp14TtrHKx3Upg0', 'type': 'tool_call'}]
<class '__main__.Add'>
```

Addresses issues: https://github.com/langchain-ai/langchain/issues/22782

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-22 20:07:39 +00:00
C K Ashby
199e64d372 Please spell Lex's name correctly Fridman (#24517)
https://www.youtube.com/watch?v=ZIyB9e_7a4c

2024-07-22 19:38:32 +00:00
Erick Friis
1f01c0fd98 infra: remove core from min version pr testing (#24507)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-22 17:46:15 +00:00
Naka Masato
884f76e05a fix: load google credentials properly in GoogleDriveLoader (#12871)
- **Description:** 
- Fix #12870: set scope in `default` func (ref:
https://google-auth.readthedocs.io/en/master/reference/google.auth.html)
- Moved the code to load default credentials to the bottom for clarity
of the logic
	- Add docstring and comment for each credential loading logic
- **Issue:** https://github.com/langchain-ai/langchain/issues/12870
- **Dependencies:** no dependencies change
- **Tag maintainer:** for a quicker response, tag the relevant
maintainer (see below),
- **Twitter handle:** @gymnstcs
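
A rough sketch of the scope-setting idea from the first bullet, using the public `google.auth` API (the scope value and helper below are illustrative, not the loader's exact code):

```python
import google.auth


def load_default_credentials(scopes: list[str]):
    # google.auth.default() only applies scopes when they are passed explicitly,
    # so the loader forwards its Drive scopes here instead of relying on defaults.
    credentials, _project_id = google.auth.default(scopes=scopes)
    return credentials


creds = load_default_credentials(["https://www.googleapis.com/auth/drive.readonly"])
```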


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-22 17:43:33 +00:00
Erick Friis
a45337ea07 ollama: release 0.1.0 (#24510) 2024-07-22 10:35:26 -07:00
Isaac Francisco
1318d534af [docs]: minor react change (#24509) 2024-07-22 10:25:01 -07:00
Jorge Piedrahita Ortiz
10e3982b59 community: sambanova integration minor changes (#24503)
- Minor changes in the SambaNova LLM integration:
  - default API
  - docstrings
- Minor changes in docs
2024-07-22 17:06:35 +00:00
maang-h
721f709dec community: Improve QianfanChatEndpoint tool result to model (#24466)
- **Description:** When `QianfanChatEndpoint` uses a tool result to answer
questions, the tool's content is required to be in dict format. We could
require users to return a dict when calling the tool, but to stay consistent
with the other chat models, I think this modification is necessary.
2024-07-22 11:29:00 -04:00
Chaunte W. Lacewell
02f0a29293 Cookbook: Add Visual RAG example using VDMS (#24353)
- **Description:** Adds a notebook demonstrating visual RAG, which uses video
scene descriptions generated by open-source vision models (e.g. video-llama,
video-llava) as text embeddings and video frames as image embeddings to
perform vector similarity search using VDMS.
  - **Issue:** N/A
  - **Dependencies:** N/A
2024-07-22 11:16:06 -04:00
ccurme
dcba7df2fe community[patch]: deprecate langchain_community Chroma in favor of langchain_chroma (#24474) 2024-07-22 11:00:13 -04:00
ccurme
0f7569ddbc core[patch]: enable RunnableWithMessageHistory without config (#23775)
Feedback that `RunnableWithMessageHistory` is unwieldy compared to
ConversationChain and similar legacy abstractions is common.

Legacy chains using memory typically had no explicit notion of threads
or separate sessions. To use `RunnableWithMessageHistory`, users are
forced to introduce this concept into their code. This possibly felt
like unnecessary boilerplate.

Here we enable `RunnableWithMessageHistory` to run without a config if
the `get_session_history` callable has no arguments. This enables
minimal implementations like the following:
```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
memory = InMemoryChatMessageHistory()
chain = RunnableWithMessageHistory(llm, lambda: memory)

chain.invoke("Hi I'm Bob")  # Hello Bob!
chain.invoke("What is my name?")  # Your name is Bob.
```
2024-07-22 10:36:53 -04:00
Mohammad Mohtashim
5ade0187d0 [Community]: Prompts fixed for ZERO_SHOT_REACT agent type in create_sql_agent function (#23693)
- **Description:** The correct prompts for ZERO_SHOT_REACT were not being used
in the `create_sql_agent` function: the specific `SQL_PREFIX` and `SQL_SUFFIX`
prompts were not applied when the client did not provide any prompts. This is
fixed.
- **Issue:** #23585

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-22 14:04:20 +00:00
ZhangShenao
0f6737cbfe [Vector Store] Fix function add_texts in TencentVectorDB (#24469)
Regardless of whether `embedding_func` is set, the 'text' attribute of the
document should be assigned; otherwise the `page_content` of the documents in
the final search results will be lost.
2024-07-22 09:50:22 -04:00
남광우
7ab82eb8cc langchain: Copy libs/standard-tests folder when building devcontainer (#24470)
### Description

* Fix the `libs/langchain/dev.Dockerfile` file: copy the
`libs/standard-tests` folder when building the devcontainer.
* The `poetry install --no-interaction --no-ansi --with dev,test,docs`
command requires this folder, but it was not copied.

### Reference

#### Error message when building the devcontainer from the master branch

```
...

[2024-07-20T14:27:34.779Z] ------
 > [langchain langchain-dev-dependencies 7/7] RUN poetry install --no-interaction --no-ansi --with dev,test,docs:
0.409 
0.409 Directory ../standard-tests does not exist
------

...
```

#### After the fix

Build success at vscode:

<img width="866" alt="image"
src="https://github.com/user-attachments/assets/10db1b50-6fcf-4dfe-83e1-d93c96aa2317">
2024-07-22 13:46:38 +00:00
rbrugaro
37b89fb7fc fix RAG with quantized embeddings notebook (#24422)
1. Fix the HuggingFacePipeline import error by using the newer partner package
2. Switch to IPEXModelForCausalLM for performance

There are no dependency changes, since optimum-intel is also needed for
QuantizedBiEncoderEmbeddings.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-22 13:44:03 +00:00
Thomas Meike
40c02cedaf langchain[patch]: add async methods to ConversationSummaryBufferMemory (#20956)
Added asynchronously callable methods according to the
ConversationSummaryBufferMemory API documentation.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-22 09:21:43 -04:00
Steve Sharp
cecd875cdc docs: Update streaming.ipynb (typo fix) (#24483)
**Description:** Fixes typo `Le'ts` -> `Let's`.

2024-07-22 11:09:13 +00:00
Sheng Han Lim
0c6a3fdd6b langchain: Update ContextualCompressionRetriever base_retriever type to RetrieverLike (#24192)
**Description:**
When initializing retrievers with `configurable_fields` as base
retriever, `ContextualCompressionRetriever` validation fails with the
following error:

```
ValidationError: 1 validation error for ContextualCompressionRetriever
base_retriever
  Can't instantiate abstract class BaseRetriever with abstract method _get_relevant_documents (type=type_error)
```

Example code:

```python
esearch_retriever = VertexAISearchRetriever(
    project_id=GCP_PROJECT_ID,
    location_id="global",
    data_store_id=SEARCH_ENGINE_ID,
).configurable_fields(
    filter=ConfigurableField(id="vertex_search_filter", name="Vertex Search Filter")
)

# rerank documents with Vertex AI Rank API
reranker = VertexAIRank(
    project_id=GCP_PROJECT_ID,
    location_id=GCP_REGION,
    ranking_config="default_ranking_config",
)

retriever_with_reranker = ContextualCompressionRetriever(
    base_compressor=reranker, base_retriever=esearch_retriever
)
```

The issue seems to stem from `ContextualCompressionRetriever` insisting that
base retrievers strictly inherit from `BaseRetriever`, which does not account
for cases where retrievers are chained or have configurable fields defined.


0a1e475a30/libs/langchain/langchain/retrievers/contextual_compression.py (L15-L22)

This PR proposes that the base_retriever type be set to `RetrieverLike`,
similar to how `EnsembleRetriever` validates its list of retrievers:


0a1e475a30/libs/langchain/langchain/retrievers/ensemble.py (L58-L75)
2024-07-21 14:23:19 -04:00
clement.l
d98b830e4b community: add flag to toggle progress bar (#24463)
- **Description:** Add a flag to determine whether to show progress bar 
- **Issue:** n/a
- **Dependencies:** n/a
- **Twitter handle:** n/a

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-20 13:18:02 +00:00
chuanbei888
6b08a33fa4 community: fix QianfanChatEndpoint default model (#24464)
The default model for baidu_qianfan_endpoint has been changed from
ERNIE-Bot-turbo to ERNIE-Lite-8K.
2024-07-20 13:00:29 +00:00
Nuno Campos
947628311b core[patch]: Accept configurable keys top-level (#23806)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-07-20 03:49:00 +00:00
Jesus Martinez
c1d1fc13c2 langchain[patch]: Remove multiagent return_direct validation (#24419)
**Description:**

When you use agents with multi-input tools and some of these tools have
`return_direct=True`, LangChain throws a validator error.
This change is also implemented on the [JS
side](https://github.com/langchain-ai/langchainjs/pull/4643).

**Issue**:
This MR resolves #19843

**Dependencies:**

None

Co-authored-by: Jesus Martinez <jesusabraham.martinez@tyson.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-07-20 03:27:43 +00:00
Will Badart
74e3d796f1 core[patch]: ensure iterator_ in scope for _atransform_stream_with_config except (#24454)
Before, if an exception was raised in the outer `try` block in
`Runnable._atransform_stream_with_config` before `iterator_` is
assigned, the corresponding `finally` block would blow up with an
`UnboundLocalError`:

```txt
UnboundLocalError: cannot access local variable 'iterator_' where it is not associated with a value
```

By assigning an initial value to `iterator_` before entering the `try`
block, this commit ensures that the `finally` can run, and not bury the
"true" exception under a "During handling of the above exception [...]"
traceback.

Thanks for your consideration!
2024-07-20 03:24:04 +00:00
maang-h
7b28359719 docs: Add ChatSparkLLM docstrings (#24449)
- **Description:**
  - Add `ChatSparkLLM` docstrings (issue #22296)
  - Support the `stream` method
2024-07-19 20:19:14 -07:00
Eugene Yurtsev
5e48f35fba core[minor]: Relax constraints on type checking for tools and parsers (#24459)
This will allow tools and parsers to accept pydantic models from any of
the
following namespaces:

* pydantic.BaseModel with pydantic 1
* pydantic.BaseModel with pydantic 2
* pydantic.v1.BaseModel with pydantic 2
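
A hedged sketch of what this enables for tools, assuming pydantic 2 is installed (the `Add` schemas and tools below are illustrative, not from this PR):

```python
from pydantic import BaseModel as BaseModelV2      # pydantic 2 namespace
from pydantic.v1 import BaseModel as BaseModelV1   # pydantic.v1 compatibility namespace

from langchain_core.tools import tool


class AddV2(BaseModelV2):
    x: int
    y: int


class AddV1(BaseModelV1):
    x: int
    y: int


# Either schema class should now be accepted as a tool's args_schema.
@tool(args_schema=AddV2)
def add_v2(x: int, y: int) -> int:
    """Add two numbers."""
    return x + y


@tool(args_schema=AddV1)
def add_v1(x: int, y: int) -> int:
    """Add two numbers."""
    return x + y


print(add_v2.invoke({"x": 2, "y": 5}), add_v1.invoke({"x": 2, "y": 5}))
```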
2024-07-19 21:47:34 -04:00
Isaac Francisco
838464de25 ollama: init package (#23615)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-20 00:43:29 +00:00
Erick Friis
f4ee3c8a22 infra: add min version testing to pr test flow (#24358)
xfailing some sql tests that do not currently work on sqlalchemy v1

#22207 was very much not sqlalchemy v1 compatible. 

Moving forward, implementations should be compatible with both to pass
CI
2024-07-19 22:03:19 +00:00
Erick Friis
50cb0a03bc docs: advanced feature note (#24456)
fixes #24430
2024-07-19 20:05:59 +00:00
Bagatur
842065a9cc community[patch]: Release 0.2.9 (#24453) 2024-07-19 12:50:22 -07:00
Bagatur
27ad6a4bb3 langchain[patch]: Release 0.2.10 (#24452) 2024-07-19 12:50:13 -07:00
Bagatur
dda9438e87 community[patch]: gpt-4o-mini costs (#24421) 2024-07-19 19:02:44 +00:00
Eugene Yurtsev
604dfe2d99 community[patch]: Force opt-in for WebResearchRetriever (CVE-2024-3095) (#24451)
This PR addresses the issue raised by (CVE-2024-3095)

https://huntr.com/bounties/e62d4895-2901-405b-9559-38276b6a5273

Unfortunately, we didn't do a good job writing the initial report: it points
at both the wrong package and the wrong code.

The affected code is the WebResearchRetriever, not the AsyncHTMLLoader, and
the WebResearchRetriever lives in langchain-community.

The vulnerable code lives here: 

0bd3f4e129/libs/community/langchain_community/retrievers/web_research.py (L233-L233)


This PR adds a forced opt-in for users to make sure they are aware of
the risk and can mitigate by configuring a proxy:


0bd3f4e129/libs/community/langchain_community/retrievers/web_research.py (L84-L84)
2024-07-19 18:51:35 +00:00
Bagatur
f101c759ed docs: how to pass runtime secrets (#24450) 2024-07-19 18:36:28 +00:00
Asi Greenholts
372c27f2e5 community[minor]: [GoogleApiYoutubeLoader] Replace API used in _get_document_for_channel from search to playlistItem (#24034)
- **Description:** The search API has a limit of 500 results; playlistItems
doesn't. Also added an exception class to the except clause to catch another
common error.
- **Issue:** None
- **Dependencies:** None
- **Twitter handle:** @TupleType

---------

Co-authored-by: asi-cider <88270351+asi-cider@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-19 14:04:34 -04:00
Rafael Pereira
6a45bf9554 community[minor]: GraphCypherQAChain to accept additional inputs as provided by the user for cypher generation (#24300)
**Description:** This PR introduces a change to the
`cypher_generation_chain` to dynamically concatenate inputs. This
improvement aims to streamline the input handling process and make the
method more flexible. The change involves updating the arguments
dictionary with all elements from the `inputs` dictionary, ensuring that
all necessary inputs are dynamically appended. This will ensure that any
cypher generation template will not require a new `_call` method patch.

**Issue:** This PR fixes issue #24260.
2024-07-19 14:03:14 -04:00
Philippe PRADOS
f5856680fe community[minor]: add mongodb byte store (#23876)
The `MongoDBStore` can manage only documents; it's not possible to use
MongoDB for `CacheBackedEmbeddings`.

With this new implementation, it's possible to use:
```python
CacheBackedEmbeddings.from_bytes_store(
    underlying_embeddings=embeddings,
    document_embedding_cache=MongoDBByteStore(
      connection_string=db_uri,
      db_name=db_name,
      collection_name=collection_name,
  ),
)
```
and use MongoDB to cache the embeddings!
2024-07-19 13:54:12 -04:00
yabooung
07715f815b community[minor]: Add ability to specify file encoding and json encoding for FileChatMessageHistory (#24258)
Description:
Add UTF-8 encoding support

Issue:
Inability to properly handle characters from certain languages (e.g.,
Korean)

Fix:
Implement UTF-8 encoding in FileChatMessageHistory

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-19 13:53:21 -04:00
Dristy Srivastava
020cc1cf3e Community[minor]: Added checksum in while send data to pebblo-cloud (#23968)
- **Description:**
  - Updated the checksum in doc metadata
  - Send the checksum and remove the actual content when sending data to
`pebblo-cloud`, if `classifier-location` is `pebblo-cloud`, in the
`/loader/doc` API
  - Add `pb_id` (i.e. the Pebblo id) to doc metadata
  - Refactoring as needed
  - Send the `content-checksum` and remove the actual content when sending
data to `pebblo-cloud`, if `classifier-location` is `pebblo-cloud`, in the
`prompt` API
- **Issue:** NA
- **Dependencies:** NA
- **Tests:** Updated
- **Docs:** NA

---------

Co-authored-by: dristy.cd <dristy@clouddefense.io>
2024-07-19 13:52:54 -04:00
Eun Hye Kim
9aae8ef416 core[patch]: Fix utils.json_schema.dereference_refs (#24335 KeyError: 400 in JSON schema processing) (#24337)
Description:
This PR fixes a KeyError: 400 that occurs in the JSON schema processing
within the reduce_openapi_spec function. The _retrieve_ref function in
json_schema.py was modified to handle missing components gracefully by
continuing to the next component if the current one is not found. This
ensures that the OpenAPI specification is fully interpreted and the
agent executes without errors.

Issue:
Fixes issue #24335

Dependencies:
No additional dependencies are required for this change.

Twitter handle:
@lunara_x
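
A generic sketch of the "skip missing components" idea (an illustrative helper, not the actual `_retrieve_ref` implementation):

```python
from typing import Any, Optional


def retrieve_ref_or_none(ref: str, schema: dict) -> Optional[Any]:
    """Resolve a JSON-schema $ref such as '#/components/schemas/Pet',
    returning None instead of raising KeyError when a component is missing."""
    node: Any = schema
    for part in ref.lstrip("#/").split("/"):
        if not isinstance(node, dict) or part not in node:
            return None  # missing component: the caller can skip it and continue
        node = node[part]
    return node


spec = {"components": {"schemas": {"Pet": {"type": "object"}}}}
print(retrieve_ref_or_none("#/components/schemas/Pet", spec))    # {'type': 'object'}
print(retrieve_ref_or_none("#/components/responses/400", spec))  # None
```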
2024-07-19 13:31:00 -04:00
keval dekivadiya
06f47678ae community[minor]: Add TextEmbed Embedding Integration (#22946)
**Description:**

**TextEmbed** is a high-performance embedding inference server designed
to provide a high-throughput, low-latency solution for serving
embeddings. It supports various sentence-transformer models and includes
the ability to deploy image and text embedding models. TextEmbed offers
flexibility and scalability for diverse applications.

- **PyPI Package:** [TextEmbed on
PyPI](https://pypi.org/project/textembed/)
- **Docker Image:** [TextEmbed on Docker
Hub](https://hub.docker.com/r/kevaldekivadiya/textembed)
- **GitHub Repository:** [TextEmbed on
GitHub](https://github.com/kevaldekivadiya2415/textembed)

**PR Description**
This PR adds functionality for embedding documents and queries using the
`TextEmbedEmbeddings` class. The implementation allows for both
synchronous and asynchronous embedding requests to a TextEmbed API
endpoint. The class handles batching and permuting of input texts to
optimize the embedding process.

**Example Usage:**

```python
from langchain_community.embeddings import TextEmbedEmbeddings

# Initialise the embeddings class
embeddings = TextEmbedEmbeddings(model="your-model-id", api_key="your-api-key", api_url="your_api_url")

# Define a list of documents
documents = [
    "Data science involves extracting insights from data.",
    "Artificial intelligence is transforming various industries.",
    "Cloud computing provides scalable computing resources over the internet.",
    "Big data analytics helps in understanding large datasets.",
    "India has a diverse cultural heritage."
]

# Define a query
query = "What is the cultural heritage of India?"

# Embed all documents
document_embeddings = embeddings.embed_documents(documents)

# Embed the query
query_embedding = embeddings.embed_query(query)

# Print embeddings for each document
for i, embedding in enumerate(document_embeddings):
    print(f"Document {i+1} Embedding:", embedding)

# Print the query embedding
print("Query Embedding:", query_embedding)

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-07-19 17:30:25 +00:00
Shikanime Deva
9c3da11910 Fix MultiQueryRetriever breaking Embeddings with empty lines (#21093)
Fix MultiQueryRetriever breaking Embeddings with empty lines

```
[chain/end] [1:chain:ConversationalRetrievalChain > 2:retriever:Retriever > 3:retriever:Retriever > 4:chain:LLMChain] [2.03s] Exiting Chain run with output:
[outputs]
> /workspaces/Sfeir/sncf/metabot-backend/.venv/lib/python3.11/site-packages/langchain/retrievers/multi_query.py(116)_aget_relevant_documents()
-> if self.include_original:
(Pdb) queries
['## Alternative questions for "Hello, tell me about phones?":', '', '1. **What are the latest trends in smartphone technology?** (Focuses on recent advancements)', '2. **How has the mobile phone industry evolved over the years?** (Historical perspective)', '3. **What are the different types of phones available in the market, and which one is best for me?** (Categorization and recommendation)']
```

Example of failure on VertexAIEmbeddings

```
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.INVALID_ARGUMENT
	details = "The text content is empty."
	debug_error_string = "UNKNOWN:Error received from peer ipv4:142.250.184.234:443 {created_time:"2024-04-30T09:57:45.625698408+00:00", grpc_status:3, grpc_message:"The text content is empty."}"
```

Fixes: #15959
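
A minimal sketch of the fix's idea: drop blank lines from the LLM's generated alternative queries before they reach the embedding model (illustrative helper, not the retriever's exact code).

```python
def clean_queries(llm_output: str) -> list[str]:
    # Splitting on newlines can yield empty strings (e.g. blank separator lines),
    # and embedding an empty string makes some providers (like Vertex AI) error out.
    return [line.strip() for line in llm_output.split("\n") if line.strip()]


raw = '## Alternative questions for "Hello, tell me about phones?":\n\n1. What are the latest trends?'
print(clean_queries(raw))
```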

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-19 17:13:12 +00:00
John Kelly
5affbada61 langchain: Add aadd_documents to ParentDocumentRetriever (#23969)
- **Description:** Add an async version of `add_documents` to
`ParentDocumentRetriever`
-  **Twitter handle:** @johnkdev

---------

Co-authored-by: John Kelly <j.kelly@mwam.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-19 13:12:39 -04:00
Andrew Benton
f9d64d22e5 community[minor]: Add Riza Python/JS code execution tool (#23995)
- **Description:** Add Riza Python/JS code execution tool
- **Issue:** N/A
- **Dependencies:** an optional dependency on the `rizaio` pypi package
- **Twitter handle:** [@rizaio](https://x.com/rizaio)

[Riza](https://riza.io) is a safe code execution environment for
agent-generated Python and JavaScript that's easy to integrate into
langchain apps. This PR adds two new tool classes to the community
package.
2024-07-19 17:03:22 +00:00
Ben Chambers
3691701d58 community[minor]: Add keybert-based link extractor (#24311)
- **Description:** Add a `KeybertLinkExtractor` for graph vectorstores.
This allows extracting links from keywords in a Document and linking
nodes that have common keywords.
- **Issue:** None
- **Dependencies:** None.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-19 12:25:07 -04:00
Erick Friis
ef049769f0 core[patch]: Release 0.2.22 (#24423)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-07-19 09:09:24 -07:00
Bagatur
cd19ba9a07 core[patch]: core lint fix (#24447) 2024-07-19 09:01:22 -07:00
Ben Chambers
83f3d95ffa community[minor]: GLiNER link extraction (#24314)
- **Description:** This allows extracting links between documents with
common named entities using [GLiNER](https://github.com/urchade/GLiNER).
- **Issue:** None
- **Dependencies:** None

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-19 15:34:54 +00:00
Anas Khan
b5acb91080 Mask API keys for various LLM/ChatModel Modules (#13885)
**Description:** 
- Added masking of the API Keys for the modules:
  - `langchain/chat_models/openai.py`
  - `langchain/llms/openai.py`
  - `langchain/llms/google_palm.py`
  - `langchain/chat_models/google_palm.py`
  - `langchain/llms/edenai.py`

- Updated the modules to utilize `SecretStr` from pydantic to securely
manage API keys.
- Added unit/integration tests
- `langchain/chat_models/azure_openai.py` used the `openai_api_key` derived
from the `ChatOpenAI` class and assumed it was a `str`, so we changed it to
expect `SecretStr` instead.

**Issue:** https://github.com/langchain-ai/langchain/issues/12165 ,
**Dependencies:** none,
**Tag maintainer:** @eyurtsev
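
A small sketch of the `SecretStr` pattern used for the masking (generic pydantic usage, not the exact module code):

```python
from pydantic import BaseModel, SecretStr


class ExampleLLMSettings(BaseModel):
    api_key: SecretStr  # masked in repr/str, so it won't leak into logs


settings = ExampleLLMSettings(api_key=SecretStr("sk-super-secret"))
print(settings.api_key)                      # **********
print(settings.api_key.get_secret_value())   # sk-super-secret, only on explicit request
```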

---------

Co-authored-by: HassanA01 <anikeboss@gmail.com>
Co-authored-by: Aneeq Hassan <aneeq.hassan@utoronto.ca>
Co-authored-by: kristinspenc <kristinspenc2003@gmail.com>
Co-authored-by: faisalt14 <faisalt14@gmail.com>
Co-authored-by: Harshil-Patel28 <76663814+Harshil-Patel28@users.noreply.github.com>
Co-authored-by: kristinspenc <146893228+kristinspenc@users.noreply.github.com>
Co-authored-by: faisalt14 <90787271+faisalt14@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-19 15:23:34 +00:00
ccurme
f99369a54c community[patch]: fix formatting (#24443)
Somehow this got through CI:
https://github.com/langchain-ai/langchain/pull/24363
2024-07-19 14:38:53 +00:00
Ben Chambers
242b085be7 Merge pull request #24315
* community: Add Hierarchy link extractor

* add example

* lint
2024-07-19 09:42:26 -04:00
Rhuan Barros
c3308f31bc Merge pull request #24363
* important email fields
2024-07-19 09:41:20 -04:00
Piotr Romanowski
c50dd79512 docs: Update langchain-openai package version in chat_token_usage_tracking (#24436)
This PR updates the docs to mention the correct version of the
`langchain-openai` package required to use the `stream_usage` parameter.

As can be seen in the details of this [merge
commit](722c8f50ea),
that functionality is available only in `langchain-openai >= 0.1.9`,
while the docs state it's available in `langchain-openai >= 0.1.8`.
2024-07-19 13:07:37 +00:00
Han Sol Park
aade9bfde5 Mask API key for ChatOpenAI based chat_models (#14293)
- **Description**: Mask the API key for ChatOpenAI-based chat_models (openai,
azureopenai, anyscale, everlyai).
Changes were made to all chat_models based on ChatOpenAI, since all of them
assumed that openai_api_key is a str rather than SecretStr.
  - **Issue:**: #12165 
  - **Dependencies:**  N/A
  - **Tag maintainer:** @eyurtsev
  - **Twitter handle:** N/A

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-19 02:25:38 +00:00
William FH
0ee6ed76ca [Evaluation] Pass in seed directly (#24403)
Adding a test now.
2024-07-18 19:12:28 -07:00
Nuno Campos
62b6965d2a core: In ensure_config don't copy dunder configurable keys to metadata (#24420) 2024-07-18 22:28:52 +00:00
Eugene Yurtsev
ef22ebe431 standard-tests[patch]: Add pytest assert rewrites (#24408)
This will surface nice error messages in subclasses that fail assertions.
2024-07-18 21:41:11 +00:00
Eugene Yurtsev
f62b323108 core[minor]: Support all versions of pydantic base model in argsschema (#24418)
This adds support to any pydantic base model for tools.

The only potential issue is that `get_input_schema()` will not always
return a v1 base model.
2024-07-18 17:14:23 -04:00
Prakul
b2bc15e640 docs: Update mongodb README.md (#24412)
2024-07-18 14:02:34 -07:00
Evan Harris
61ea7bf60b Add a ListRerank document compressor (#13311)
- **Description:** This PR adds a new document compressor called
`ListRerank`. It's derived from `BaseDocumentCompressor`. It's a near-exact
implementation of the approach introduced in this paper: [Zero-Shot Listwise
Document Reranking with a Large Language
Model](https://arxiv.org/pdf/2305.02156.pdf), which that paper finds to
outperform pointwise reranking, which is somewhat implemented in LangChain as
[LLMChainFilter](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/retrievers/document_compressors/chain_filter.py).
- **Issue:** None
- **Dependencies:** None
- **Tag maintainer:** @hwchase17 @izzymsft
- **Twitter handle:** @HarrisEMitchell

Notes:
1. I didn't add anything to `docs`. I wasn't exactly sure which patterns
to follow as [cohere reranker is under
Retrievers](https://python.langchain.com/docs/integrations/retrievers/cohere-reranker)
with other external document retrieval integrations, but other
contextual compression is
[here](https://python.langchain.com/docs/modules/data_connection/retrievers/contextual_compression/).
Happy to contribute to either with some direction.
2. I followed syntax, docstrings, implementation patterns, etc. as well
as I could looking at nearby modules. One thing I didn't do was put the
default prompt in a separate `.py` file like [Chain
Filter](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/retrievers/document_compressors/chain_filter_prompt.py)
and [Chain
Extract](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/retrievers/document_compressors/chain_extract_prompt.py).
Happy to follow that pattern if it would be preferred.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-18 20:34:38 +00:00
Srijan Dubey
4c651ba13a Adding LangChain v0.2 support for nvidia ai endpoint, langchain-nvidia-ai-endpoints. Removed deprecated classes from nvidia_ai_endpoints.ipynb (#24411)
Description: Added LangChain v0.2 support for the NVIDIA AI endpoint.
Implemented in-memory storage for chains using `RunnableWithMessageHistory`,
which is analogous to using `ConversationChain` with the default
`ConversationBufferMemory` in v0.1. That class is deprecated in favor of
`RunnableWithMessageHistory` in LangChain v0.2.

Issue: None

Dependencies: None.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-18 15:59:26 -04:00
Erick Friis
334fc1ed1c mongodb: release 0.1.7 (#24409) 2024-07-18 18:13:27 +00:00
ccurme
ba74341eee docs: update tool calling how-to to pass functions to bind_tools (#24402) 2024-07-18 08:53:48 -07:00
Harrison Chase
3adf710f1d docs: improve docs on tools (#24404)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-18 08:52:12 -07:00
Eun Hye Kim
07c5c60f63 community: fix tool appending logic and update planner prompt in OpenAPI agent toolkit (#24384)
**Description:**
- Updated the format for the 'Action' section in the planner prompt to
ensure it must be one of the tools without additional words. Adjusted
the phrasing from "should be" to "must be" for clarity and
enforceability.
- Corrected the tool appending logic in the
`_create_api_controller_agent` function to ensure that
`RequestsDeleteToolWithParsing` and `RequestsPatchToolWithParsing` are
properly added to the tools list for "DELETE" and "PATCH" operations.

**Issue:** #24382

**Dependencies:** None

**Twitter handle:** @lunara_x

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-18 13:37:46 +00:00
Casey Clements
aade1550c6 docs: Adds MongoDBAtlasVectorSearch to VectorStore list compatible with Indexing API (#24374)
Adds MongoDBAtlasVectorSearch to list of VectorStores compatible with
the Indexing API.

(One line change.)

As of `langchain-mongodb = "0.1.7"`, the requirements that the
VectorStore have both add_documents and delete methods with an ids kwarg
is satisfied. #23535 contains the implementation of that, and has been
merged.
2024-07-18 09:37:29 -04:00
Chen Xiabin
63c60a31f0 [fix] baidu qianfan AiMessage with usage_metadata (#24389)
Constructing the AIMessage usage_metadata raised an error; this fixes it.
2024-07-18 09:28:16 -04:00
João Dinis Ferreira
242de9aa5e docs: remove redundant --quiet option in pip install command (#24397)
- **Description:** Removes a redundant option in a `pip install` command
in the documentation.
- **Issue:** N/A
- **Dependencies:** N/A
2024-07-18 13:24:42 +00:00
ZhangShenao
916b813107 community[patch]: Fix spelling error in ConversationVectorStoreTokenBufferMemory doc-string (#24385)
Fix word spelling error in `ConversationVectorStoreTokenBufferMemory`
2024-07-18 12:27:36 +00:00
Rajendra Kadam
1c65529fd7 community[minor]: [PebbloSafeLoader] Rename loader type and add SharePointLoader to supported loaders (#24393)
- [x] **PR title**: [PebbloSafeLoader] Rename loader type and add
SharePointLoader to supported loaders
    - **Description:** Minor fixes in the PebbloSafeLoader:
        - Renamed the loader type from `remote_db` to `cloud_folder`.
- Added `SharePointLoader` to the list of loaders supported by
PebbloSafeLoader.
    - **Issue:** NA
    - **Dependencies:** NA
- [x] **Add tests and docs**: NA
2024-07-18 08:23:12 -04:00
Eugene Yurtsev
6182a402f1 experimental[patch]: block a few more things from PALValidator (#24379)
* Please see the security warning already in the existing class.
* The approach here is fundamentally insecure, as it relies on a blocklist
approach rather than only running allowed nodes. So users should only use
this code if it is running in a properly sandboxed environment.
2024-07-18 08:22:45 -04:00
Paolo Ráez
0dec72cab0 Community[patch]: Missing "stream" parameter in cloudflare_workersai (#23987)
### Description
Missing "stream" parameter. Without it, you'd never receive a stream of
tokens when using stream() or astream()

### Issue
No existing issue available
2024-07-18 02:09:39 +00:00
Eugene Yurtsev
570566b858 core[patch]: Update API reference for astream events (#24359)
Update the API reference for astream events to include information about
custom events.
2024-07-17 21:48:53 -04:00
Bagatur
f9baaae3ec docs: clean up tool how to titles (#24373) 2024-07-17 17:08:31 -07:00
Bagatur
4da1df568a docs: tools concepts (#24368) 2024-07-17 17:08:16 -07:00
Erick Friis
96ccba9c27 infra: 15s retry wait on test pypi (#24375) 2024-07-17 23:41:22 +00:00
Bagatur
a4c101ae97 core[patch]: Release 0.2.21 (#24372) 2024-07-17 22:44:35 +00:00
William FH
c5a07e2dd8 core[patch]: add InjectedToolArg annotation (#24279)
```python
from typing_extensions import Annotated
from langchain_core.tools import tool, InjectedToolArg
from langchain_anthropic import ChatAnthropic

@tool
def multiply(x: int, y: int, not_for_model: Annotated[dict, InjectedToolArg]) -> str:
    """multiply."""
    return x * y 

ChatAnthropic(model='claude-3-sonnet-20240229',).bind_tools([multiply]).invoke('5 times 3').tool_calls
'''
-> [{'name': 'multiply',
  'args': {'x': 5, 'y': 3},
  'id': 'toolu_01Y1QazYWhu4R8vF4hF4z9no',
  'type': 'tool_call'}]
'''
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-07-17 15:28:40 -07:00
Erick Friis
80f3d48195 openai: release 0.1.18 (#24369) 2024-07-17 22:26:33 +00:00
Bagatur
7d83189b19 openai[patch]: use model_name in AzureOpenAI.ls_model_name (#24366) 2024-07-17 15:24:05 -07:00
Nithish Raghunandanan
eb26b5535a couchbase: Add chat message history (#24356)
**Description:** : Add support for chat message history using Couchbase

- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Nithish Raghunandanan <nithishr@users.noreply.github.com>
2024-07-17 15:22:42 -07:00
Eugene Yurtsev
96bac8e20d core[patch]: Fix regression requiring input_variables in few chat prompt templates (#24360)
* Fix regression that requires users passing input_variables=[].

* Regression introduced by my own changes to this PR:
https://github.com/langchain-ai/langchain/pull/22851
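
A minimal sketch of the restored behavior, constructing a chat prompt template without passing `input_variables` explicitly (generic `langchain-core` usage):

```python
from langchain_core.prompts import ChatPromptTemplate

# Input variables are inferred from the message templates;
# passing input_variables=[] is not required.
prompt = ChatPromptTemplate.from_messages(
    [("system", "You are a {role}."), ("human", "{question}")]
)
print(sorted(prompt.input_variables))  # ['question', 'role']
```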
2024-07-17 18:14:57 -04:00
Brice Fotzo
034a8c7c1b community: support advanced text extraction options for pdf documents (#20265)
**Description:** 
- Updated constructors in PyPDFParser and PyPDFLoader to handle
`extraction_mode` and additional kwargs, aligning with the capabilities
of `PageObject.extract_text()` from pypdf.

- Added `test_pypdf_loader_with_layout` along with a corresponding
example text file to validate layout extraction from PDFs.

**Issue:** fixes #19735 

**Dependencies:** This change requires updating the pypdf dependency
from version 3.4.0 to at least 4.0.0.

Additional changes include the addition of a new test
test_pypdf_loader_with_layout and an example text file to ensure the
functionality of layout extraction from PDFs aligns with the new
capabilities.
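
A hedged usage sketch of the option described above, assuming the `extraction_mode` keyword is forwarded to pypdf's `extract_text()` (the file path is illustrative):

```python
from langchain_community.document_loaders import PyPDFLoader

# "layout" asks pypdf to preserve the page's visual layout when extracting text.
loader = PyPDFLoader("example_data/layout-parser-paper.pdf", extraction_mode="layout")
docs = loader.load()
print(docs[0].page_content[:200])
```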

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-17 20:47:09 +00:00
hmasdev
a402de3dae langchain[patch]: fix wrong dict key in OutputFixingParser, RetryOutputParser and RetryWithErrorOutputParser (#23967)
# Description
This PR aims to solve a bug in `OutputFixingParser`, `RetryOutputParser`,
and `RetryWithErrorOutputParser`.
The bug is that the wrong keyword argument was given to `retry_chain`:
the correct keyword argument is 'completion', but 'input' was used.

This pull request makes the following changes:
1. correct a `dict` key given to `retry_chain`;
2. add a test when using the default prompt.
   - `NAIVE_FIX_PROMPT` for `OutputFixingParser`;
   - `NAIVE_RETRY_PROMPT` for `RetryOutputParser`;
   - `NAIVE_RETRY_WITH_ERROR_PROMPT` for `RetryWithErrorOutputParser`;
3. ~~add comments on `retry_chain` input and output types~~ clarify
`InputType` and `OutputType` of `retry_chain`

# Issue
The bug is pointed out in
https://github.com/langchain-ai/langchain/pull/19792#issuecomment-2196512928

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-17 20:34:46 +00:00
Casey Clements
a47f69a120 partners/mongodb : Significant MongoDBVectorSearch ID enhancements (#23535)
## Description

This pull-request improves the treatment of document IDs in
`MongoDBAtlasVectorSearch`.

Class method signatures of add_documents, add_texts, delete, and from_texts
now include an `ids: Optional[List[str]]` keyword argument, permitting the
user greater control.
Note that, as before, IDs may also be inferred from `Document.metadata['_id']`
if present, but this is no longer required.
IDs can also optionally be returned from searches.
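
A hedged usage sketch of the new `ids` keyword (connection string, index name, and embedding model are placeholders, not from this PR):

```python
from langchain_mongodb import MongoDBAtlasVectorSearch
from langchain_openai import OpenAIEmbeddings
from pymongo import MongoClient

client = MongoClient("<your-atlas-connection-string>")  # placeholder URI
collection = client["db_name"]["collection_name"]

vectorstore = MongoDBAtlasVectorSearch(
    collection=collection,
    embedding=OpenAIEmbeddings(),
    index_name="vector_index",
)

# Explicit IDs on write give the caller control over later updates and deletes...
ids = vectorstore.add_texts(
    ["MongoDB Atlas supports vector search", "IDs can now be passed explicitly"],
    ids=["doc-1", "doc-2"],
)

# ...and the same IDs can be used to delete just those documents.
vectorstore.delete(ids=["doc-1"])
```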

This PR closes the following JIRA issues.

* [PYTHON-4446](https://jira.mongodb.org/browse/PYTHON-4446)
MongoDBVectorSearch delete / add_texts function rework
* [PYTHON-4435](https://jira.mongodb.org/browse/PYTHON-4435) Add support
for "Indexing"
* [PYTHON-4534](https://jira.mongodb.org/browse/PYTHON-4534) Ensure
datetimes are json-serializable

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-17 13:26:20 -07:00
Erick Friis
cc2cbfabfc milvus: release 0.1.2 (#24365) 2024-07-17 19:42:44 +00:00
Eugene Yurtsev
9e4a0e76f6 core[patch]: Fix one unit test for chat prompt template (#24362)
Minor change that fixes a unit test that had missing assertions.
2024-07-17 18:56:48 +00:00
Erick Friis
81639243e2 openai: release 0.1.17 (#24361) 2024-07-17 18:50:42 +00:00
Erick Friis
61976a4147 pinecone: release 0.1.2 (#24355) 2024-07-17 17:09:07 +00:00
Bagatur
b5360e2e5f community[patch]: Release 0.2.8 (#24354) 2024-07-17 17:07:27 +00:00
ccurme
4cf67084d3 openai[patch]: fix key collision and _astream (#24345)
Fixes small issues introduced in
https://github.com/langchain-ai/langchain/pull/24150 (unreleased).
2024-07-17 12:59:26 -04:00
Luis Moros
bcb5f354ad community: Fix SQLDatabse.from_databricks issue when ran from Job (#24346)
- Description: When SQLDatabase.from_databricks is run from a Databricks
Workflow job, line 205 (default_host = context.browserHostName) throws an
``AttributeError``, as the ``context`` object has no ``browserHostName``
attribute. The fix handles the exception and sets the ``default_host``
variable to None.
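
A sketch of the described fallback (the `context` object and attribute name come from the description above; the wrapper function itself is illustrative):

```python
def resolve_default_host(context: object):
    try:
        return context.browserHostName
    except AttributeError:
        # Inside a Databricks Workflow job the context object has no
        # browserHostName attribute, so fall back to None instead of crashing.
        return None
```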

---------

Co-authored-by: lmorosdb <lmorosdb>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-07-17 12:40:12 -04:00
Bagatur
24e9b48d15 langchain[patch]: Release 0.2.9 (#24327) 2024-07-17 09:39:57 -07:00
Rafael Pereira
cf28708e7b Neo4j: Update with non-deprecated cypher methods, and new method to associate relationship embeddings (#23725)
**Description:** At the moment the Neo4j wrapper uses setVectorProperty,
which is deprecated
([link](https://neo4j.com/docs/operations-manual/5/reference/procedures/#procedure_db_create_setVectorProperty)).
I replaced it with the non-deprecated version.

Neo4j recently introduced a new Cypher method, "setRelationshipVectorProperty",
to associate embeddings with relationships. In this PR I also implemented a
new method to perform this association, keeping the same format used in the
"add_embeddings" method that associates embeddings with nodes.
I also included a test case for this new method.
2024-07-17 12:37:47 -04:00
maang-h
2a3288b15d docs: Add ChatBaichuan docstrings (#24348)
- **Description:** Add ChatBaichuan rich docstrings.
- **Issue:** the issue #22296
2024-07-17 12:00:16 -04:00
Srijan Dubey
1792684e8f removed deprecated classes from pipelineai.ipynb, added support for LangChain v0.2 for PipelineAI integration (#24333)
Description: Added LangChain v0.2 support for the PipelineAI integration.
Removed deprecated classes, replaced LLMChain with the Runnable interface,
and added StrOutputParser, which parses the LLMResult into the most likely
string.

Issue: None

Dependencies: None.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-17 13:48:32 +00:00
Tobias Sette
e60ad12521 docs(infobip.ipynb): fix typo (#24328) 2024-07-17 13:33:34 +00:00
Rafael Pereira
fc41730e28 neo4j: Fix test for order-insensitive comparison and floating-point precision issues (#24338)
**Description:** 
This PR addresses two main issues in the `test_neo4jvector.py`:
1. **Order-insensitive Comparison:** Modified the
`test_retrieval_dictionary` to ensure that it passes regardless of the
order of returned values by parsing `page_content` into a structured
format (dictionary) before comparison.
2. **Floating-point Precision:** Updated
`test_neo4jvector_relevance_score` to handle minor floating-point
precision differences by using the `isclose` function for comparing
relevance scores with a relative tolerance.
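
A tiny sketch of both comparison strategies (generic illustration, not the test code itself; the YAML parsing assumes PyYAML):

```python
from math import isclose

import yaml

# 1. Relative tolerance absorbs minor floating-point differences between runs
#    (scores taken from the error output below).
assert isclose(1.0000014305114746, 1.0, rel_tol=1e-5)

# 2. Parsing page_content into a dict makes the comparison order-insensitive.
assert yaml.safe_load("name: John\nage: 30\n") == yaml.safe_load("age: 30\nname: John\n")
```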

Errors addressed:

- **test_neo4jvector_relevance_score:**
  ```
AssertionError: assert [(Document(page_content='foo', metadata={'page':
'0'}), 1.0000014305114746), (Document(page_content='bar',
metadata={'page': '1'}), 0.9998371005058289),
(Document(page_content='baz', metadata={'page': '2'}),
0.9993508458137512)] == [(Document(page_content='foo', metadata={'page':
'0'}), 1.0), (Document(page_content='bar', metadata={'page': '1'}),
0.9998376369476318), (Document(page_content='baz', metadata={'page':
'2'}), 0.9993523359298706)]
At index 0 diff: (Document(page_content='foo', metadata={'page': '0'}),
1.0000014305114746) != (Document(page_content='foo', metadata={'page':
'0'}), 1.0)
  Full diff:
  - [(Document(page_content='foo', metadata={'page': '0'}), 1.0),
+ [(Document(page_content='foo', metadata={'page': '0'}),
1.0000014305114746),
? +++++++++++++++
- (Document(page_content='bar', metadata={'page': '1'}),
0.9998376369476318),
? ^^^ ------
+ (Document(page_content='bar', metadata={'page': '1'}),
0.9998371005058289),
? ^^^^^^^^^
- (Document(page_content='baz', metadata={'page': '2'}),
0.9993523359298706),
? ----------
+ (Document(page_content='baz', metadata={'page': '2'}),
0.9993508458137512),
? ++++++++++
  ]
  ```

- **test_retrieval_dictionary:**
  ```
AssertionError: assert [Document(page_content='skills:\n- Python\n- Data
Analysis\n- Machine Learning\nname: John\nage: 30\n')] ==
[Document(page_content='skills:\n- Python\n- Data Analysis\n- Machine
Learning\nage: 30\nname: John\n')]
At index 0 diff: Document(page_content='skills:\n- Python\n- Data
Analysis\n- Machine Learning\nname: John\nage: 30\n') !=
Document(page_content='skills:\n- Python\n- Data Analysis\n- Machine
Learning\nage: 30\nname: John\n')
  Full diff:
- [Document(page_content='skills:\n- Python\n- Data Analysis\n- Machine
Learning\nage: 30\nname: John\n')]
? ---------
+ [Document(page_content='skills:\n- Python\n- Data Analysis\n- Machine
Learning\nage: John\nage: 30\n')]
? +++++++++
  ```
2024-07-17 09:28:25 -04:00
Erick Friis
47ed7f766a infra: fix release prerelease deps bug (#24323) 2024-07-16 15:13:41 -07:00
Bagatur
80e7cd6cff core[patch]: Release 0.2.20 (#24322) 2024-07-16 15:04:36 -07:00
Erick Friis
6c3e65a878 infra: prerelease dep checking on release (#23269) 2024-07-16 21:48:15 +00:00
Eugene Yurtsev
616196c620 Docs: Add how to dispatch custom callback events (#24278)
* Add how-to guide for dispatching custom callback events.
* Add links from index to the how to guide
* Add link from streaming from within a tool
* Update versionadded to correct release
https://github.com/langchain-ai/langchain/releases/tag/langchain-core%3D%3D0.2.15
2024-07-16 17:38:32 -04:00
298 changed files with 18014 additions and 2681 deletions

View File

@@ -0,0 +1,35 @@
import sys

import tomllib

if __name__ == "__main__":
    # Get the TOML file path from the command line argument
    toml_file = sys.argv[1]

    # read toml file
    with open(toml_file, "rb") as file:
        toml_data = tomllib.load(file)

    # see if we're releasing an rc
    version = toml_data["tool"]["poetry"]["version"]
    releasing_rc = "rc" in version

    # if not, iterate through dependencies and make sure none allow prereleases
    if not releasing_rc:
        dependencies = toml_data["tool"]["poetry"]["dependencies"]
        for lib in dependencies:
            dep_version = dependencies[lib]
            dep_version_string = (
                dep_version["version"] if isinstance(dep_version, dict) else dep_version
            )
            if "rc" in dep_version_string:
                raise ValueError(
                    f"Dependency {lib} has a prerelease version. Please remove this."
                )
            if isinstance(dep_version, dict) and dep_version.get(
                "allow-prereleases", False
            ):
                raise ValueError(
                    f"Dependency {lib} has allow-prereleases set to true. Please remove this."
                )

View File

@@ -1,6 +1,11 @@
 import sys
-import tomllib
+if sys.version_info >= (3, 11):
+    import tomllib
+else:
+    # for python 3.10 and below, which doesnt have stdlib tomllib
+    import tomli as tomllib
 from packaging.version import parse as parse_version
 import re
@@ -12,6 +17,8 @@ MIN_VERSION_LIBS = [
     "SQLAlchemy",
 ]
+SKIP_IF_PULL_REQUEST = ["langchain-core"]
 def get_min_version(version: str) -> str:
     # base regex for x.x.x with cases for rc/post/etc
@@ -38,7 +45,7 @@ def get_min_version(version: str) -> str:
     raise ValueError(f"Unrecognized version format: {version}")
-def get_min_version_from_toml(toml_path: str):
+def get_min_version_from_toml(toml_path: str, versions_for: str):
     # Parse the TOML file
     with open(toml_path, "rb") as file:
         toml_data = tomllib.load(file)
@@ -51,6 +58,10 @@ def get_min_version_from_toml(toml_path: str):
     # Iterate over the libs in MIN_VERSION_LIBS
     for lib in MIN_VERSION_LIBS:
+        if versions_for == "pull_request" and lib in SKIP_IF_PULL_REQUEST:
+            # some libs only get checked on release because of simultaneous
+            # changes
+            continue
         # Check if the lib is present in the dependencies
         if lib in dependencies:
             # Get the version string
@@ -71,8 +82,10 @@ def get_min_version_from_toml(toml_path: str):
 if __name__ == "__main__":
     # Get the TOML file path from the command line argument
     toml_file = sys.argv[1]
+    versions_for = sys.argv[2]
+    assert versions_for in ["release", "pull_request"]
     # Call the function to get the minimum versions
-    min_versions = get_min_version_from_toml(toml_file)
+    min_versions = get_min_version_from_toml(toml_file, versions_for)
     print(" ".join([f"{lib}=={version}" for lib, version in min_versions.items()]))

View File

@@ -189,7 +189,7 @@ jobs:
           --extra-index-url https://test.pypi.org/simple/ \
           "$PKG_NAME==$VERSION" || \
           ( \
-            sleep 5 && \
+            sleep 15 && \
             poetry run pip install \
               --extra-index-url https://test.pypi.org/simple/ \
               "$PKG_NAME==$VERSION" \
@@ -221,12 +221,17 @@
         run: make tests
         working-directory: ${{ inputs.working-directory }}
+      - name: Check for prerelease versions
+        working-directory: ${{ inputs.working-directory }}
+        run: |
+          poetry run python $GITHUB_WORKSPACE/.github/scripts/check_prerelease_dependencies.py pyproject.toml
+
       - name: Get minimum versions
         working-directory: ${{ inputs.working-directory }}
         id: min-version
         run: |
           poetry run pip install packaging
-          min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml)"
+          min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml release)"
           echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
           echo "min-versions=$min_versions"

View File

@@ -65,3 +65,21 @@ jobs:
           # grep will exit non-zero if the target message isn't found,
           # and `set -e` above will cause the step to fail.
           echo "$STATUS" | grep 'nothing to commit, working tree clean'
+
+      - name: Get minimum versions
+        working-directory: ${{ inputs.working-directory }}
+        id: min-version
+        run: |
+          poetry run pip install packaging tomli
+          min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml pull_request)"
+          echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
+          echo "min-versions=$min_versions"
+
+      - name: Run unit tests with minimum dependency versions
+        if: ${{ steps.min-version.outputs.min-versions != '' }}
+        env:
+          MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
+        run: |
+          poetry run pip install --force-reinstall $MIN_VERSIONS --editable .
+          make tests
+        working-directory: ${{ inputs.working-directory }}

View File

@@ -64,7 +64,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install -U langchain openai chromadb langchain-experimental # (newest versions required for multi-modal)"
"! pip install -U langchain openai langchain-chroma langchain-experimental # (newest versions required for multi-modal)"
]
},
{
@@ -355,7 +355,7 @@
"\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_chroma import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",

View File

@@ -37,7 +37,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -U --quiet langchain langchain_community openai chromadb langchain-experimental\n",
"%pip install -U --quiet langchain langchain-chroma langchain-community openai langchain-experimental\n",
"%pip install --quiet \"unstructured[all-docs]\" pypdf pillow pydantic lxml pillow matplotlib chromadb tiktoken"
]
},
@@ -344,8 +344,8 @@
"\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.embeddings import VertexAIEmbeddings\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"\n",
"\n",

View File

@@ -7,7 +7,7 @@
"metadata": {},
"outputs": [],
"source": [
"pip install -U langchain umap-learn scikit-learn langchain_community tiktoken langchain-openai langchainhub chromadb langchain-anthropic"
"pip install -U langchain umap-learn scikit-learn langchain_community tiktoken langchain-openai langchainhub langchain-chroma langchain-anthropic"
]
},
{
@@ -645,7 +645,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import Chroma\n",
"from langchain_chroma import Chroma\n",
"\n",
"# Initialize all_texts with leaf_texts\n",
"all_texts = leaf_texts.copy()\n",

View File

@@ -58,4 +58,5 @@ Notebook | Description
[two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player dungeons & dragons game, where a dialogue simulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
[wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple wikibase agent that utilizes sparql generation, with testing done on http://wikidata.org.
[oracleai_demo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/oracleai_demo.ipynb) | This guide outlines how to utilize Oracle AI Vector Search alongside Langchain for an end-to-end RAG pipeline, providing step-by-step examples. The process includes loading documents from various sources using OracleDocLoader, summarizing them either within or outside the database with OracleSummary, and generating embeddings similarly through OracleEmbeddings. It also covers chunking documents according to specific requirements using Advanced Oracle Capabilities from OracleTextSplitter, and finally, storing and indexing these documents in a Vector Store for querying with OracleVS.
[rag-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag-locally-on-intel-cpu.ipynb) | Performs Retrieval-Augmented Generation (RAG) on locally downloaded open-source models using LangChain and open-source tools, executed on an Intel Xeon CPU. Shows an example of applying RAG to a Llama 2 model so it can answer queries about Intel's Q1 2024 earnings release.
[visual_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/visual_RAG_vdms.ipynb) | Performs Visual Retrieval-Augmented-Generation (RAG) using videos and scene descriptions generated by open source models.

View File

@@ -39,7 +39,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain unstructured[all-docs] pydantic lxml langchainhub"
"! pip install langchain langchain-chroma unstructured[all-docs] pydantic lxml langchainhub"
]
},
{
@@ -320,7 +320,7 @@
"\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_chroma import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",

View File

@@ -59,7 +59,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain unstructured[all-docs] pydantic lxml"
"! pip install langchain langchain-chroma unstructured[all-docs] pydantic lxml"
]
},
{
@@ -375,7 +375,7 @@
"\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_chroma import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",

View File

@@ -59,7 +59,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain unstructured[all-docs] pydantic lxml"
"! pip install langchain langchain-chroma unstructured[all-docs] pydantic lxml"
]
},
{
@@ -378,8 +378,8 @@
"\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.embeddings import GPT4AllEmbeddings\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"\n",
"# The vectorstore to use to index the child chunks\n",

View File

@@ -19,7 +19,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install -U langchain openai chromadb langchain-experimental # (newest versions required for multi-modal)"
"! pip install -U langchain openai langchain_chroma langchain-experimental # (newest versions required for multi-modal)"
]
},
{
@@ -132,7 +132,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import Chroma\n",
"from langchain_chroma import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"baseline = Chroma.from_texts(\n",

View File

@@ -28,7 +28,7 @@
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_chroma import Chroma\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",

View File

@@ -14,7 +14,7 @@
}
],
"source": [
"%pip install -qU langchain-airbyte"
"%pip install -qU langchain-airbyte langchain_chroma"
]
},
{
@@ -123,7 +123,7 @@
"outputs": [],
"source": [
"import tiktoken\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_chroma import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"enc = tiktoken.get_encoding(\"cl100k_base\")\n",

View File

@@ -39,7 +39,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain docugami==0.0.8 dgml-utils==0.3.0 pydantic langchainhub chromadb hnswlib --upgrade --quiet"
"! pip install langchain docugami==0.0.8 dgml-utils==0.3.0 pydantic langchainhub langchain-chroma hnswlib --upgrade --quiet"
]
},
{
@@ -547,7 +547,7 @@
"\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_community.vectorstores.chroma import Chroma\n",
"from langchain_chroma import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",

View File

@@ -84,7 +84,7 @@
}
],
"source": [
"%pip install --quiet pypdf chromadb tiktoken openai \n",
"%pip install --quiet pypdf langchain-chroma tiktoken openai \n",
"%pip uninstall -y langchain-fireworks\n",
"%pip install --editable /mnt/disks/data/langchain/libs/partners/fireworks"
]
@@ -138,7 +138,7 @@
"all_splits = text_splitter.split_documents(data)\n",
"\n",
"# Add to vectorDB\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_chroma import Chroma\n",
"from langchain_fireworks.embeddings import FireworksEmbeddings\n",
"\n",
"vectorstore = Chroma.from_documents(\n",

View File

@@ -170,7 +170,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import Chroma\n",
"from langchain_chroma import Chroma\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"with open(\"../../state_of_the_union.txt\") as f:\n",

View File

@@ -7,7 +7,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain_community tiktoken langchain-openai langchainhub chromadb langchain langgraph"
"! pip install langchain-chroma langchain_community tiktoken langchain-openai langchainhub langchain langgraph"
]
},
{
@@ -30,8 +30,8 @@
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"urls = [\n",

View File

@@ -7,7 +7,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain_community tiktoken langchain-openai langchainhub chromadb langchain langgraph tavily-python"
"! pip install langchain-chroma langchain_community tiktoken langchain-openai langchainhub langchain langgraph tavily-python"
]
},
{
@@ -77,8 +77,8 @@
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"urls = [\n",
@@ -180,8 +180,8 @@
"from langchain.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema import Document\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.tools.tavily_search import TavilySearchResults\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.messages import BaseMessage, FunctionMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",

View File

@@ -7,7 +7,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain_community tiktoken langchain-openai langchainhub chromadb langchain langgraph"
"! pip install langchain-chroma langchain_community tiktoken langchain-openai langchainhub langchain langgraph"
]
},
{
@@ -86,8 +86,8 @@
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"urls = [\n",
@@ -188,7 +188,7 @@
"from langchain.output_parsers import PydanticOutputParser\n",
"from langchain.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_chroma import Chroma\n",
"from langchain_core.messages import BaseMessage, FunctionMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",

View File

@@ -58,7 +58,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install -U langchain openai chromadb langchain-experimental # (newest versions required for multi-modal)"
"! pip install -U langchain openai langchain-chroma langchain-experimental # (newest versions required for multi-modal)"
]
},
{
@@ -187,7 +187,7 @@
"\n",
"import chromadb\n",
"import numpy as np\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_chroma import Chroma\n",
"from langchain_experimental.open_clip import OpenCLIPEmbeddings\n",
"from PIL import Image as _PILImage\n",
"\n",

View File

@@ -58,7 +58,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install -U langchain-nomic langchain_community tiktoken langchain-openai chromadb langchain"
"! pip install -U langchain-nomic langchain-chroma langchain-community tiktoken langchain-openai langchain"
]
},
{
@@ -167,7 +167,7 @@
"source": [
"import os\n",
"\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_chroma import Chroma\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_nomic import NomicEmbeddings\n",

View File

@@ -56,7 +56,7 @@
},
"outputs": [],
"source": [
"! pip install -U langchain-nomic langchain_community tiktoken langchain-openai chromadb langchain # (newest versions required for multi-modal)"
"! pip install -U langchain-nomic langchain-chroma langchain-community tiktoken langchain-openai langchain # (newest versions required for multi-modal)"
]
},
{
@@ -194,7 +194,7 @@
"\n",
"import chromadb\n",
"import numpy as np\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_chroma import Chroma\n",
"from langchain_nomic import NomicEmbeddings\n",
"from PIL import Image as _PILImage\n",
"\n",

View File

@@ -20,8 +20,8 @@
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter"
]

View File

@@ -80,7 +80,7 @@
"outputs": [],
"source": [
"from langchain.schema import Document\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_chroma import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings()"

View File

@@ -36,10 +36,10 @@
"from bs4 import BeautifulSoup as Soup\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryByteStore, LocalFileStore\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.document_loaders.recursive_url_loader import (\n",
" RecursiveUrlLoader,\n",
")\n",
"from langchain_community.vectorstores import Chroma\n",
"\n",
"# For our example, we'll load docs from the web\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
@@ -370,13 +370,14 @@
],
"source": [
"import torch\n",
"from langchain.llms.huggingface_pipeline import HuggingFacePipeline\n",
"from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n",
"from langchain_huggingface.llms import HuggingFacePipeline\n",
"from optimum.intel.ipex import IPEXModelForCausalLM\n",
"from transformers import AutoTokenizer, pipeline\n",
"\n",
"model_id = \"Intel/neural-chat-7b-v3-3\"\n",
"tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
"model = AutoModelForCausalLM.from_pretrained(\n",
" model_id, device_map=\"auto\", torch_dtype=torch.bfloat16\n",
"model = IPEXModelForCausalLM.from_pretrained(\n",
" model_id, torch_dtype=torch.bfloat16, export=True\n",
")\n",
"\n",
"pipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer, max_new_tokens=100)\n",
@@ -581,7 +582,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
"version": "3.10.14"
}
},
"nbformat": 4,

View File

@@ -740,7 +740,7 @@ Even this relatively large model will most likely fail to generate more complica
```bash
poetry run pip install pyyaml chromadb
poetry run pip install pyyaml langchain_chroma
import yaml
```
@@ -994,7 +994,7 @@ from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.chains.sql_database.prompt import _sqlite_prompt, PROMPT_SUFFIX
from langchain_huggingface import HuggingFaceEmbeddings
from langchain.prompts.example_selector.semantic_similarity import SemanticSimilarityExampleSelector
from langchain_community.vectorstores import Chroma
from langchain_chroma import Chroma
example_prompt = PromptTemplate(
input_variables=["table_info", "input", "sql_cmd", "sql_result", "answer"],

View File

@@ -22,7 +22,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install --quiet pypdf chromadb tiktoken openai langchain-together"
"! pip install --quiet pypdf tiktoken openai langchain-chroma langchain-together"
]
},
{
@@ -45,8 +45,8 @@
"all_splits = text_splitter.split_documents(data)\n",
"\n",
"# Add to vectorDB\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.embeddings import OpenAIEmbeddings\n",
"from langchain_community.vectorstores import Chroma\n",
"\n",
"\"\"\"\n",
"from langchain_together.embeddings import TogetherEmbeddings\n",

File diff suppressed because one or more lines are too long

View File

@@ -236,7 +236,7 @@ This is where information like log-probs and token usage may be stored.
These represent a decision from a language model to call a tool. They are included as part of an `AIMessage` output.
They can be accessed from there with the `.tool_calls` property.
This property returns a list of dictionaries. Each dictionary has the following keys:
This property returns a list of `ToolCall`s. A `ToolCall` is a dictionary with the following arguments:
- `name`: The name of the tool that should be called.
- `args`: The arguments to that tool.
@@ -246,13 +246,18 @@ This property returns a list of dictionaries. Each dictionary has the following
This represents a system message, which tells the model how to behave. Not every model provider supports this.
#### FunctionMessage
This represents the result of a function call. In addition to `role` and `content`, this message has a `name` parameter which conveys the name of the function that was called to produce this result.
#### ToolMessage
This represents the result of a tool call. This is distinct from a FunctionMessage in order to match OpenAI's `function` and `tool` message types. In addition to `role` and `content`, this message has a `tool_call_id` parameter which conveys the id of the call to the tool that was called to produce this result.
This represents the result of a tool call. In addition to `role` and `content`, this message has:
- a `tool_call_id` field which conveys the id of the call to the tool that was called to produce this result.
- an `artifact` field which can be used to pass along arbitrary artifacts of the tool execution which are useful to track but which should not be sent to the model.
#### (Legacy) FunctionMessage
This is a legacy message type, corresponding to OpenAI's legacy function-calling API. ToolMessage should be used instead to correspond to the updated tool-calling API.
This represents the result of a function call. In addition to `role` and `content`, this message has a `name` parameter which conveys the name of the function that was called to produce this result.
### Prompt templates
@@ -496,35 +501,87 @@ For specifics on how to use retrievers, see the [relevant how-to guides here](/d
### Tools
<span data-heading-keywords="tool,tools"></span>
Tools are interfaces that an agent, a chain, or a chat model / LLM can use to interact with the world.
Tools are utilities designed to be called by a model: their inputs are designed to be generated by models, and their outputs are designed to be passed back to models.
Tools are needed whenever you want a model to control parts of your code or call out to external APIs.
A tool consists of the following components:
A tool consists of:
1. The name of the tool
2. A description of what the tool does
3. JSON schema of what the inputs to the tool are
4. The function to call
5. Whether the result of a tool should be returned directly to the user (only relevant for agents)
1. The name of the tool.
2. A description of what the tool does.
3. A JSON schema defining the inputs to the tool.
4. A function (and, optionally, an async variant of the function).
The name, description and JSON schema are provided as context
to the LLM, allowing the LLM to determine how to use the tool
appropriately.
When a tool is bound to a model, the name, description and JSON schema are provided as context to the model.
Given a list of tools and a set of instructions, a model can request to call one or more tools with specific inputs.
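As an illustrative sketch (not from the original text), these components can come straight from a decorated Python function: the `@tool` decorator derives the tool's name from the function name, its description from the docstring, and its JSON schema from the type hints.
```python
from langchain_core.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


# The decorator derives each component from the function itself:
print(multiply.name)         # "multiply"
print(multiply.description)  # "Multiply two integers."
print(multiply.args)         # JSON-schema properties for `a` and `b`
```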
Typical usage may look like the following:
Given a list of available tools and a prompt, an LLM can request
that one or more tools be invoked with appropriate arguments.
```python
tools = [...] # Define a list of tools
llm_with_tools = llm.bind_tools(tools)
ai_msg = llm_with_tools.invoke("do xyz...") # AIMessage(tool_calls=[ToolCall(...), ...], ...)
```
Generally, when designing tools to be used by a chat model or LLM, it is important to keep in mind the following:
The `AIMessage` returned from the model MAY have `tool_calls` associated with it.
Read [this guide](/docs/concepts/#aimessage) for more information on what the response type may look like.
- Chat models that have been fine-tuned for tool calling will be better at tool calling than non-fine-tuned models.
- Non fine-tuned models may not be able to use tools at all, especially if the tools are complex or require multiple tool calls.
- Models will perform better if the tools have well-chosen names, descriptions, and JSON schemas.
- Simpler tools are generally easier for models to use than more complex tools.
Once the chosen tools are invoked, the results can be passed back to the model so that it can complete whatever task
it's performing.
There are generally two different ways to invoke the tool and pass back the response:
For specifics on how to use tools, see the [relevant how-to guides here](/docs/how_to/#tools).
#### Invoke with just the arguments
To use an existing pre-built tool, see [here](/docs/integrations/tools/) for a list of pre-built tools.
When you invoke a tool with just the arguments, you will get back the raw tool output (usually a string).
This generally looks like:
```python
# You will want to check first that the LLM actually returned tool calls
tool_call = ai_msg.tool_calls[0] # ToolCall(args={...}, id=..., ...)
tool_output = tool.invoke(tool_call["args"])
tool_message = ToolMessage(content=tool_output, tool_call_id=tool_call["id"], name=tool_call["name"])
```
Note that the `content` field will generally be passed back to the model.
If you do not want the raw tool response to be passed to the model, but you still want to keep it around,
you can transform the tool output for the model while passing the raw output along as an artifact (read more about [`ToolMessage.artifact` here](/docs/concepts/#toolmessage))
```python
... # Same code as above
response_for_llm = transform(response)
tool_message = ToolMessage(content=response_for_llm, tool_call_id=tool_call["id"], name=tool_call["name"], artifact=tool_output)
```
#### Invoke with `ToolCall`
The other way to invoke a tool is to call it with the full `ToolCall` that was generated by the model.
When you do this, the tool will return a ToolMessage.
The benefit of this is that you don't have to write the logic yourself to transform the tool output into a ToolMessage.
This generally looks like:
```python
tool_call = ai_msg.tool_calls[0] # ToolCall(args={...}, id=..., ...)
tool_message = tool.invoke(tool_call)
# -> ToolMessage(content="tool result foobar...", tool_call_id=..., name="tool_name")
```
If you are invoking the tool this way and want to include an [artifact](/docs/concepts/#toolmessage) for the ToolMessage, you will need to have the tool return two things.
Read more about [defining tools that return artifacts here](/docs/how_to/tool_artifacts/).
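As a hedged sketch of what that looks like (assuming a recent `langchain-core` release that supports `response_format="content_and_artifact"` on the `@tool` decorator), the tool function returns a `(content, artifact)` tuple; invoking it with a full `ToolCall` then produces a `ToolMessage` whose `artifact` field carries the raw data:

```python
import random
from typing import List, Tuple

from langchain_core.tools import tool


@tool(response_format="content_and_artifact")
def generate_random_ints(low: int, high: int, size: int) -> Tuple[str, List[int]]:
    """Generate `size` random integers between `low` and `high`."""
    samples = [random.randint(low, high) for _ in range(size)]
    # The first element (content) goes to the model; the second is stored
    # on the resulting ToolMessage as `artifact`.
    return f"Generated {size} random ints.", samples
```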
#### Best practices
When designing tools to be used by a model, it is important to keep in mind that:
- Chat models that have explicit [tool-calling APIs](/docs/concepts/#functiontool-calling) will be better at tool calling than non-fine-tuned models.
- Models will perform better if the tools have well-chosen names, descriptions, and JSON schemas. This is another form of prompt engineering.
- Simple, narrowly scoped tools are easier for models to use than complex tools.
#### Related
For specifics on how to use tools, see the [tools how-to guides](/docs/how_to/#tools).
To use a pre-built tool, see the [tool integration docs](/docs/integrations/tools/).
### Toolkits
<span data-heading-keywords="toolkit,toolkits"></span>
Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.
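For instance (a sketch using one integration purely as an illustration, assuming `langchain-community` is installed), toolkits expose their bundled tools through a `get_tools()` method:

```python
from langchain_community.agent_toolkits import FileManagementToolkit

# Scope the bundled file tools (read, write, list, ...) to one directory.
toolkit = FileManagementToolkit(root_dir="/tmp/toolkit-demo")

for t in toolkit.get_tools():
    print(t.name)
```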

View File

@@ -153,7 +153,7 @@
"\n",
" def parse(self, text: str) -> List[str]:\n",
" lines = text.strip().split(\"\\n\")\n",
" return lines\n",
" return list(filter(None, lines)) # Remove empty lines\n",
"\n",
"\n",
"output_parser = LineListOutputParser()\n",

View File

@@ -0,0 +1,342 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to dispatch custom callback events\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Callbacks](/docs/concepts/#callbacks)\n",
"- [Custom callback handlers](/docs/how_to/custom_callbacks)\n",
"- [Astream Events API](/docs/concepts/#astream_events) the `astream_events` method will surface custom callback events.\n",
":::\n",
"\n",
"In some situations, you may want to dipsatch a custom callback event from within a [Runnable](/docs/concepts/#runnable-interface) so it can be surfaced\n",
"in a custom callback handler or via the [Astream Events API](/docs/concepts/#astream_events).\n",
"\n",
"For example, if you have a long running tool with multiple steps, you can dispatch custom events between the steps and use these custom events to monitor progress.\n",
"You could also surface these custom events to an end user of your application to show them how the current task is progressing.\n",
"\n",
"To dispatch a custom event you need to decide on two attributes for the event: the `name` and the `data`.\n",
"\n",
"| Attribute | Type | Description |\n",
"|-----------|------|----------------------------------------------------------------------------------------------------------|\n",
"| name | str | A user defined name for the event. |\n",
"| data | Any | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |\n",
"\n",
"\n",
":::{.callout-important}\n",
"* Dispatching custom callback events requires `langchain-core>=0.2.15`.\n",
"* Custom callback events can only be dispatched from within an existing `Runnable`.\n",
"* If using `astream_events`, you must use `version='v2'` to see custom events.\n",
"* Sending or rendering custom callbacks events in LangSmith is not yet supported.\n",
":::\n",
"\n",
"\n",
":::caution COMPATIBILITY\n",
"LangChain cannot automatically propagate configuration, including callbacks necessary for astream_events(), to child runnables if you are running async code in python<=3.10. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n",
"\n",
"If you are running python<=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n",
"\n",
"If you are running python>=3.11, the `RunnableConfig` will automatically propagate to child runnables in async environment. However, it is still a good idea to propagate the `RunnableConfig` manually if your code may run in other Python versions.\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"%pip install -qU langchain-core"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Astream Events API\n",
"\n",
"The most useful way to consume custom events is via the [Astream Events API](/docs/concepts/#astream_events).\n",
"\n",
"We can use the `async` `adispatch_custom_event` API to emit custom events in an async setting. \n",
"\n",
"\n",
":::{.callout-important}\n",
"\n",
"To see custom events via the astream events API, you need to use the newer `v2` API of `astream_events`.\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chain_start', 'data': {'input': 'hello world'}, 'name': 'foo', 'tags': [], 'run_id': 'f354ffe8-4c22-4881-890a-c1cad038a9a6', 'metadata': {}, 'parent_ids': []}\n",
"{'event': 'on_custom_event', 'run_id': 'f354ffe8-4c22-4881-890a-c1cad038a9a6', 'name': 'event1', 'tags': [], 'metadata': {}, 'data': {'x': 'hello world'}, 'parent_ids': []}\n",
"{'event': 'on_custom_event', 'run_id': 'f354ffe8-4c22-4881-890a-c1cad038a9a6', 'name': 'event2', 'tags': [], 'metadata': {}, 'data': 5, 'parent_ids': []}\n",
"{'event': 'on_chain_stream', 'run_id': 'f354ffe8-4c22-4881-890a-c1cad038a9a6', 'name': 'foo', 'tags': [], 'metadata': {}, 'data': {'chunk': 'hello world'}, 'parent_ids': []}\n",
"{'event': 'on_chain_end', 'data': {'output': 'hello world'}, 'run_id': 'f354ffe8-4c22-4881-890a-c1cad038a9a6', 'name': 'foo', 'tags': [], 'metadata': {}, 'parent_ids': []}\n"
]
}
],
"source": [
"from langchain_core.callbacks.manager import (\n",
" adispatch_custom_event,\n",
")\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain_core.runnables.config import RunnableConfig\n",
"\n",
"\n",
"@RunnableLambda\n",
"async def foo(x: str) -> str:\n",
" await adispatch_custom_event(\"event1\", {\"x\": x})\n",
" await adispatch_custom_event(\"event2\", 5)\n",
" return x\n",
"\n",
"\n",
"async for event in foo.astream_events(\"hello world\", version=\"v2\"):\n",
" print(event)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In python <= 3.10, you must propagate the config manually!"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chain_start', 'data': {'input': 'hello world'}, 'name': 'bar', 'tags': [], 'run_id': 'c787b09d-698a-41b9-8290-92aaa656f3e7', 'metadata': {}, 'parent_ids': []}\n",
"{'event': 'on_custom_event', 'run_id': 'c787b09d-698a-41b9-8290-92aaa656f3e7', 'name': 'event1', 'tags': [], 'metadata': {}, 'data': {'x': 'hello world'}, 'parent_ids': []}\n",
"{'event': 'on_custom_event', 'run_id': 'c787b09d-698a-41b9-8290-92aaa656f3e7', 'name': 'event2', 'tags': [], 'metadata': {}, 'data': 5, 'parent_ids': []}\n",
"{'event': 'on_chain_stream', 'run_id': 'c787b09d-698a-41b9-8290-92aaa656f3e7', 'name': 'bar', 'tags': [], 'metadata': {}, 'data': {'chunk': 'hello world'}, 'parent_ids': []}\n",
"{'event': 'on_chain_end', 'data': {'output': 'hello world'}, 'run_id': 'c787b09d-698a-41b9-8290-92aaa656f3e7', 'name': 'bar', 'tags': [], 'metadata': {}, 'parent_ids': []}\n"
]
}
],
"source": [
"from langchain_core.callbacks.manager import (\n",
" adispatch_custom_event,\n",
")\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain_core.runnables.config import RunnableConfig\n",
"\n",
"\n",
"@RunnableLambda\n",
"async def bar(x: str, config: RunnableConfig) -> str:\n",
" \"\"\"An example that shows how to manually propagate config.\n",
"\n",
" You must do this if you're running python<=3.10.\n",
" \"\"\"\n",
" await adispatch_custom_event(\"event1\", {\"x\": x}, config=config)\n",
" await adispatch_custom_event(\"event2\", 5, config=config)\n",
" return x\n",
"\n",
"\n",
"async for event in bar.astream_events(\"hello world\", version=\"v2\"):\n",
" print(event)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Async Callback Handler\n",
"\n",
"You can also consume the dispatched event via an async callback handler."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Received event event1 with data: {'x': 1}, with tags: ['foo', 'bar'], with metadata: {} and run_id: a62b84be-7afd-4829-9947-7165df1f37d9\n",
"Received event event2 with data: 5, with tags: ['foo', 'bar'], with metadata: {} and run_id: a62b84be-7afd-4829-9947-7165df1f37d9\n"
]
},
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import Any, Dict, List, Optional\n",
"from uuid import UUID\n",
"\n",
"from langchain_core.callbacks import AsyncCallbackHandler\n",
"from langchain_core.callbacks.manager import (\n",
" adispatch_custom_event,\n",
")\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain_core.runnables.config import RunnableConfig\n",
"\n",
"\n",
"class AsyncCustomCallbackHandler(AsyncCallbackHandler):\n",
" async def on_custom_event(\n",
" self,\n",
" name: str,\n",
" data: Any,\n",
" *,\n",
" run_id: UUID,\n",
" tags: Optional[List[str]] = None,\n",
" metadata: Optional[Dict[str, Any]] = None,\n",
" **kwargs: Any,\n",
" ) -> None:\n",
" print(\n",
" f\"Received event {name} with data: {data}, with tags: {tags}, with metadata: {metadata} and run_id: {run_id}\"\n",
" )\n",
"\n",
"\n",
"@RunnableLambda\n",
"async def bar(x: str, config: RunnableConfig) -> str:\n",
" \"\"\"An example that shows how to manually propagate config.\n",
"\n",
" You must do this if you're running python<=3.10.\n",
" \"\"\"\n",
" await adispatch_custom_event(\"event1\", {\"x\": x}, config=config)\n",
" await adispatch_custom_event(\"event2\", 5, config=config)\n",
" return x\n",
"\n",
"\n",
"async_handler = AsyncCustomCallbackHandler()\n",
"await foo.ainvoke(1, {\"callbacks\": [async_handler], \"tags\": [\"foo\", \"bar\"]})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sync Callback Handler\n",
"\n",
"Let's see how to emit custom events in a sync environment using `dispatch_custom_event`.\n",
"\n",
"You **must** call `dispatch_custom_event` from within an existing `Runnable`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Received event event1 with data: {'x': 1}, with tags: ['foo', 'bar'], with metadata: {} and run_id: 27b5ce33-dc26-4b34-92dd-08a89cb22268\n",
"Received event event2 with data: {'x': 1}, with tags: ['foo', 'bar'], with metadata: {} and run_id: 27b5ce33-dc26-4b34-92dd-08a89cb22268\n"
]
},
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import Any, Dict, List, Optional\n",
"from uuid import UUID\n",
"\n",
"from langchain_core.callbacks import BaseCallbackHandler\n",
"from langchain_core.callbacks.manager import (\n",
" dispatch_custom_event,\n",
")\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain_core.runnables.config import RunnableConfig\n",
"\n",
"\n",
"class CustomHandler(BaseCallbackHandler):\n",
" def on_custom_event(\n",
" self,\n",
" name: str,\n",
" data: Any,\n",
" *,\n",
" run_id: UUID,\n",
" tags: Optional[List[str]] = None,\n",
" metadata: Optional[Dict[str, Any]] = None,\n",
" **kwargs: Any,\n",
" ) -> None:\n",
" print(\n",
" f\"Received event {name} with data: {data}, with tags: {tags}, with metadata: {metadata} and run_id: {run_id}\"\n",
" )\n",
"\n",
"\n",
"@RunnableLambda\n",
"def foo(x: int, config: RunnableConfig) -> int:\n",
" dispatch_custom_event(\"event1\", {\"x\": x})\n",
" dispatch_custom_event(\"event2\", {\"x\": x})\n",
" return x\n",
"\n",
"\n",
"handler = CustomHandler()\n",
"foo.invoke(1, {\"callbacks\": [handler], \"tags\": [\"foo\", \"bar\"]})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"You've seen how to emit custom events, you can check out the more in depth guide for [astream events](/docs/how_to/streaming/#using-stream-events) which is the easiest way to leverage custom events."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -16,7 +16,7 @@
"\n",
"Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n",
"\n",
"This guide requires `langchain-openai >= 0.1.8`."
"This guide requires `langchain-openai >= 0.1.9`."
]
},
{
@@ -153,7 +153,7 @@
"\n",
"#### OpenAI\n",
"\n",
"For example, OpenAI will return a message [chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html) at the end of a stream with token usage information. This behavior is supported by `langchain-openai >= 0.1.8` and can be enabled by setting `stream_usage=True`. This attribute can also be set when `ChatOpenAI` is instantiated.\n",
"For example, OpenAI will return a message [chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html) at the end of a stream with token usage information. This behavior is supported by `langchain-openai >= 0.1.9` and can be enabled by setting `stream_usage=True`. This attribute can also be set when `ChatOpenAI` is instantiated.\n",
"\n",
"```{=mdx}\n",
":::note\n",

View File

@@ -220,6 +220,57 @@
"pretty_print_docs(compressed_docs)"
]
},
{
"cell_type": "markdown",
"id": "14002ec8-7ee5-4f91-9315-dd21c3808776",
"metadata": {},
"source": [
"### `LLMListwiseRerank`\n",
"\n",
"[LLMListwiseRerank](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank.html) uses [zero-shot listwise document reranking](https://arxiv.org/pdf/2305.02156) and functions similarly to `LLMChainFilter` as a robust but more expensive option. It is recommended to use a more powerful LLM.\n",
"\n",
"Note that `LLMListwiseRerank` requires a model with the [with_structured_output](/docs/integrations/chat/) method implemented."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "4ab9ee9f-917e-4d6f-9344-eb7f01533228",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n"
]
}
],
"source": [
"from langchain.retrievers.document_compressors import LLMListwiseRerank\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"\n",
"_filter = LLMListwiseRerank.from_llm(llm, top_n=1)\n",
"compression_retriever = ContextualCompressionRetriever(\n",
" base_compressor=_filter, base_retriever=retriever\n",
")\n",
"\n",
"compressed_docs = compression_retriever.invoke(\n",
" \"What did the president say about Ketanji Jackson Brown\"\n",
")\n",
"pretty_print_docs(compressed_docs)"
]
},
{
"cell_type": "markdown",
"id": "7194da42",
@@ -295,7 +346,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"id": "617a1756",
"metadata": {},
"outputs": [],

View File

@@ -5,7 +5,7 @@
"id": "9a8bceb3-95bd-4496-bb9e-57655136e070",
"metadata": {},
"source": [
"# How to use Runnables as Tools\n",
"# How to convert Runnables as Tools\n",
"\n",
":::info Prerequisites\n",
"\n",
@@ -541,7 +541,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -5,7 +5,7 @@
"id": "5436020b",
"metadata": {},
"source": [
"# How to create custom tools\n",
"# How to create tools\n",
"\n",
"When constructing an agent, you will need to provide it with a list of `Tool`s that it can use. Besides the actual function that is called, the Tool consists of several components:\n",
"\n",

View File

@@ -44,6 +44,7 @@ This highlights functionality that is core to using LangChain.
- [How to: inspect runnables](/docs/how_to/inspect)
- [How to: add fallbacks to a runnable](/docs/how_to/fallbacks)
- [How to: migrate chains to LCEL](/docs/how_to/migrate_chains)
- [How to: pass runtime secrets to a runnable](/docs/how_to/runnable_runtime_secrets)
## Components
@@ -185,19 +186,21 @@ Indexing is the process of keeping your vectorstore in-sync with the underlying
LangChain [Tools](/docs/concepts/#tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. Refer [here](/docs/integrations/tools/) for a list of pre-built tools.
- [How to: create custom tools](/docs/how_to/custom_tools)
- [How to: use built-in tools and built-in toolkits](/docs/how_to/tools_builtin)
- [How to: convert Runnables to tools](/docs/how_to/convert_runnable_to_tool)
- [How to: use chat model to call tools](/docs/how_to/tool_calling)
- [How to: pass tool results back to model](/docs/how_to/tool_results_pass_to_model)
- [How to: add ad-hoc tool calling capability to LLMs and chat models](/docs/how_to/tools_prompting)
- [How to: create tools](/docs/how_to/custom_tools)
- [How to: use built-in tools and toolkits](/docs/how_to/tools_builtin)
- [How to: use chat models to call tools](/docs/how_to/tool_calling)
- [How to: pass tool outputs to chat models](/docs/how_to/tool_results_pass_to_model)
- [How to: pass run time values to tools](/docs/how_to/tool_runtime)
- [How to: add a human in the loop to tool usage](/docs/how_to/tools_human)
- [How to: handle errors when calling tools](/docs/how_to/tools_error)
- [How to: disable parallel tool calling](/docs/how_to/tool_choice)
- [How to: access the `RunnableConfig` object within a custom tool](/docs/how_to/tool_configure)
- [How to: stream events from child runs within a custom tool](/docs/how_to/tool_stream_events)
- [How to: return extra artifacts from a tool](/docs/how_to/tool_artifacts/)
- [How to: add a human-in-the-loop for tools](/docs/how_to/tools_human)
- [How to: handle tool errors](/docs/how_to/tools_error)
- [How to: force models to call a tool](/docs/how_to/tool_choice)
- [How to: disable parallel tool calling](/docs/how_to/tool_calling_parallel)
- [How to: access the `RunnableConfig` from a tool](/docs/how_to/tool_configure)
- [How to: stream events from a tool](/docs/how_to/tool_stream_events)
- [How to: return artifacts from a tool](/docs/how_to/tool_artifacts/)
- [How to: convert Runnables to tools](/docs/how_to/convert_runnable_to_tool)
- [How to: add ad-hoc tool calling capability to models](/docs/how_to/tools_prompting)
- [How to: pass in runtime secrets](/docs/how_to/runnable_runtime_secrets)
### Multimodal
@@ -225,6 +228,7 @@ For in depth how-to guides for agents, please check out [LangGraph](https://lang
- [How to: pass callbacks into a module constructor](/docs/how_to/callbacks_constructor)
- [How to: create custom callback handlers](/docs/how_to/custom_callbacks)
- [How to: use callbacks in async environments](/docs/how_to/callbacks_async)
- [How to: dispatch custom callback events](/docs/how_to/callbacks_custom_events)
### Custom
@@ -237,6 +241,7 @@ All of LangChain components can easily be extended to support your own versions.
- [How to: write a custom output parser class](/docs/how_to/output_parser_custom)
- [How to: create custom callback handlers](/docs/how_to/custom_callbacks)
- [How to: define a custom tool](/docs/how_to/custom_tools)
- [How to: dispatch custom callback events](/docs/how_to/callbacks_custom_events)
### Serialization
- [How to: save and load LangChain objects](/docs/how_to/serialization)

View File

@@ -60,7 +60,7 @@
" * document addition by id (`add_documents` method with `ids` argument)\n",
" * delete by id (`delete` method with `ids` argument)\n",
"\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `AzureCosmosDBNoSqlVectorSearch`, `AzureCosmosDBVectorSearch`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SingleStoreDB`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `Yellowbrick`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `AzureCosmosDBNoSqlVectorSearch`, `AzureCosmosDBVectorSearch`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MongoDBAtlasVectorSearch`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SingleStoreDB`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `Yellowbrick`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
" \n",
"## Caution\n",
"\n",

View File

@@ -0,0 +1,78 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "6fcd2994-0092-4fa3-9bb1-c9c84babadc5",
"metadata": {},
"source": [
"# How to pass runtime secrets to runnables\n",
"\n",
":::info Requires `langchain-core >= 0.2.22`\n",
"\n",
":::\n",
"\n",
"We can pass in secrets to our runnables at runtime using the `RunnableConfig`. Specifically we can pass in secrets with a `__` prefix to the `configurable` field. This will ensure that these secrets aren't traced as part of the invocation:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "92e42e91-c277-49de-aa7a-dfb5c993c817",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"7"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.runnables import RunnableConfig\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def foo(x: int, config: RunnableConfig) -> int:\n",
" \"\"\"Sum x and a secret int\"\"\"\n",
" return x + config[\"configurable\"][\"__top_secret_int\"]\n",
"\n",
"\n",
"foo.invoke({\"x\": 5}, {\"configurable\": {\"__top_secret_int\": 2, \"traced_key\": \"bar\"}})"
]
},
{
"cell_type": "markdown",
"id": "ae3a4fb9-2ce7-46b2-b654-35dff0ae7197",
"metadata": {},
"source": [
"Looking at the LangSmith trace for this run, we can see that \"traced_key\" was recorded (as part of Metadata) while our secret int was not: https://smith.langchain.com/public/aa7e3289-49ca-422d-a408-f6b927210170/r"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-311",
"language": "python",
"name": "poetry-venv-311"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -452,7 +452,7 @@
"source": [
"#### Generator Functions\n",
"\n",
"Le'ts fix the streaming using a generator function that can operate on the **input stream**.\n",
"Let's fix the streaming using a generator function that can operate on the **input stream**.\n",
"\n",
":::{.callout-tip}\n",
"A generator function (a function that uses `yield`) allows writing code that operates on **input streams**\n",

View File

@@ -5,11 +5,12 @@
"id": "503e36ae-ca62-4f8a-880c-4fe78ff5df93",
"metadata": {},
"source": [
"# How to return extra artifacts from a tool\n",
"# How to return artifacts from a tool\n",
"\n",
":::info Prerequisites\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [ToolMessage](/docs/concepts/#toolmessage)\n",
"- [Tools](/docs/concepts/#tools)\n",
"- [Function/tool calling](/docs/concepts/#functiontool-calling)\n",
"\n",

View File

@@ -17,7 +17,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to use a model to call tools\n",
"# How to use chat models to call tools\n",
"\n",
":::info Prerequisites\n",
"\n",
@@ -82,30 +82,24 @@
"## Passing tools to chat models\n",
"\n",
"Chat models that support tool calling features implement a `.bind_tools` method, which \n",
"receives a list of LangChain [tool objects](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html#langchain_core.tools.BaseTool) \n",
"receives a list of functions, Pydantic models, or LangChain [tool objects](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html#langchain_core.tools.BaseTool) \n",
"and binds them to the chat model in its expected format. Subsequent invocations of the \n",
"chat model will include tool schemas in its calls to the LLM.\n",
"\n",
"For example, we can define the schema for custom tools using the `@tool` decorator \n",
"on Python functions:"
"For example, below we implement simple tools for arithmetic:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def add(a: int, b: int) -> int:\n",
" \"\"\"Adds a and b.\"\"\"\n",
" return a + b\n",
"\n",
"\n",
"@tool\n",
"def multiply(a: int, b: int) -> int:\n",
" \"\"\"Multiplies a and b.\"\"\"\n",
" return a * b\n",
@@ -118,12 +112,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Or below, we define the schema using [Pydantic](https://docs.pydantic.dev):"
"LangChain also implements a `@tool` decorator that allows for further control of the tool schema, such as tool names and argument descriptions. See the how-to guide [here](/docs/how_to/custom_tools/#creating-tools-from-functions) for detail.\n",
"\n",
"We can also define the schema using [Pydantic](https://docs.pydantic.dev):"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
@@ -343,7 +339,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -4,7 +4,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Disabling parallel tool calling (OpenAI only)\n",
"# How to disable parallel tool calling\n",
"\n",
":::info OpenAI-specific\n",
"\n",
"This API is currently only supported by OpenAI.\n",
"\n",
":::\n",
"\n",
"OpenAI tool calling performs tool calling in parallel by default. That means that if we ask a question like \"What is the weather in Tokyo, New York, and Chicago?\" and we have a tool for getting the weather, it will call the tool 3 times in parallel. We can force it to call only a single tool once by using the ``parallel_tool_call`` parameter."
]
@@ -99,10 +105,24 @@
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python"
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to force tool calling behavior\n",
"# How to force models to call a tool\n",
"\n",
":::info Prerequisites\n",
"\n",
@@ -125,10 +125,24 @@
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python"
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to access the RunnableConfig object within a custom tool\n",
"# How to access the RunnableConfig from a tool\n",
"\n",
":::info Prerequisites\n",
"\n",
@@ -110,7 +110,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -124,9 +124,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to pass tool outputs to the model\n",
"# How to pass tool outputs to chat models\n",
"\n",
":::info Prerequisites\n",
"This guide assumes familiarity with the following concepts:\n",

View File

@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to pass run time values to a tool\n",
"# How to pass run time values to tools\n",
"\n",
":::info Prerequisites\n",
"\n",
@@ -15,26 +15,25 @@
"- [How to use a model to call tools](/docs/how_to/tool_calling)\n",
":::\n",
"\n",
":::{.callout-info} Supported models\n",
"\n",
"This how-to guide uses models with native tool calling capability.\n",
"You can find a [list of all models that support tool calling](/docs/integrations/chat/).\n",
"\n",
":::\n",
"\n",
":::{.callout-info} Using with LangGraph\n",
":::info Using with LangGraph\n",
"\n",
"If you're using LangGraph, please refer to [this how-to guide](https://langchain-ai.github.io/langgraph/how-tos/pass-run-time-values-to-tools/)\n",
"which shows how to create an agent that keeps track of a given user's favorite pets.\n",
":::\n",
"\n",
":::caution Added in `langchain-core==0.2.21`\n",
"\n",
"Must have `langchain-core>=0.2.21` to use this functionality.\n",
"\n",
":::\n",
"\n",
"You may need to bind values to a tool that are only known at runtime. For example, the tool logic may require using the ID of the user who made the request.\n",
"\n",
"Most of the time, such values should not be controlled by the LLM. In fact, allowing the LLM to control the user ID may lead to a security risk.\n",
"\n",
"Instead, the LLM should only control the parameters of the tool that are meant to be controlled by the LLM, while other parameters (such as user ID) should be fixed by the application logic.\n",
"\n",
"This how-to guide shows a simple design pattern that creates the tool dynamically at run time and binds to them appropriate values."
"This how-to guide shows you how to prevent the model from generating certain tool arguments and injecting them in directly at runtime."
]
},
{
@@ -57,23 +56,12 @@
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpython -m pip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"%pip install -qU langchain langchain_openai\n",
"# %pip install -qU langchain langchain_openai\n",
"\n",
"import os\n",
"from getpass import getpass\n",
@@ -90,10 +78,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Passing request time information\n",
"## Hiding arguments from the model\n",
"\n",
"The idea is to create the tool dynamically at request time, and bind to it the appropriate information. For example,\n",
"this information may be the user ID as resolved from the request itself."
"We can use the InjectedToolArg annotation to mark certain parameters of our Tool, like `user_id` as being injected at runtime, meaning they shouldn't be generated by the model"
]
},
{
@@ -104,46 +91,88 @@
"source": [
"from typing import List\n",
"\n",
"from langchain_core.output_parsers import JsonOutputParser\n",
"from langchain_core.tools import BaseTool, tool"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import InjectedToolArg, tool\n",
"from typing_extensions import Annotated\n",
"\n",
"user_to_pets = {}\n",
"\n",
"\n",
"def generate_tools_for_user(user_id: str) -> List[BaseTool]:\n",
" \"\"\"Generate a set of tools that have a user id associated with them.\"\"\"\n",
"@tool(parse_docstring=True)\n",
"def update_favorite_pets(\n",
" pets: List[str], user_id: Annotated[str, InjectedToolArg]\n",
") -> None:\n",
" \"\"\"Add the list of favorite pets.\n",
"\n",
" @tool\n",
" def update_favorite_pets(pets: List[str]) -> None:\n",
" \"\"\"Add the list of favorite pets.\"\"\"\n",
" user_to_pets[user_id] = pets\n",
" Args:\n",
" pets: List of favorite pets to set.\n",
" user_id: User's ID.\n",
" \"\"\"\n",
" user_to_pets[user_id] = pets\n",
"\n",
" @tool\n",
" def delete_favorite_pets() -> None:\n",
" \"\"\"Delete the list of favorite pets.\"\"\"\n",
" if user_id in user_to_pets:\n",
" del user_to_pets[user_id]\n",
"\n",
" @tool\n",
" def list_favorite_pets() -> None:\n",
" \"\"\"List favorite pets if any.\"\"\"\n",
" return user_to_pets.get(user_id, [])\n",
"@tool(parse_docstring=True)\n",
"def delete_favorite_pets(user_id: Annotated[str, InjectedToolArg]) -> None:\n",
" \"\"\"Delete the list of favorite pets.\n",
"\n",
" return [update_favorite_pets, delete_favorite_pets, list_favorite_pets]"
" Args:\n",
" user_id: User's ID.\n",
" \"\"\"\n",
" if user_id in user_to_pets:\n",
" del user_to_pets[user_id]\n",
"\n",
"\n",
"@tool(parse_docstring=True)\n",
"def list_favorite_pets(user_id: Annotated[str, InjectedToolArg]) -> None:\n",
" \"\"\"List favorite pets if any.\n",
"\n",
" Args:\n",
" user_id: User's ID.\n",
" \"\"\"\n",
" return user_to_pets.get(user_id, [])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Verify that the tools work correctly"
"If we look at the input schemas for these tools, we'll see that user_id is still listed:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'update_favorite_petsSchema',\n",
" 'description': 'Add the list of favorite pets.',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}},\n",
" 'user_id': {'title': 'User Id',\n",
" 'description': \"User's ID.\",\n",
" 'type': 'string'}},\n",
" 'required': ['pets', 'user_id']}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"update_favorite_pets.get_input_schema().schema()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"But if we look at the tool call schema, which is what is passed to the model for tool-calling, user_id has been removed:"
]
},
{
@@ -152,46 +181,60 @@
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'eugene': ['cat', 'dog']}\n",
"['cat', 'dog']\n"
]
"data": {
"text/plain": [
"{'title': 'update_favorite_pets',\n",
" 'description': 'Add the list of favorite pets.',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}}},\n",
" 'required': ['pets']}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"update_pets, delete_pets, list_pets = generate_tools_for_user(\"eugene\")\n",
"update_pets.invoke({\"pets\": [\"cat\", \"dog\"]})\n",
"print(user_to_pets)\n",
"print(list_pets.invoke({}))"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"\n",
"def handle_run_time_request(user_id: str, query: str):\n",
" \"\"\"Handle run time request.\"\"\"\n",
" tools = generate_tools_for_user(user_id)\n",
" llm_with_tools = llm.bind_tools(tools)\n",
" prompt = ChatPromptTemplate.from_messages(\n",
" [(\"system\", \"You are a helpful assistant.\")],\n",
" )\n",
" chain = prompt | llm_with_tools\n",
" return llm_with_tools.invoke(query)"
"update_favorite_pets.tool_call_schema.schema()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This code will allow the LLM to invoke the tools, but the LLM is **unaware** of the fact that a **user ID** even exists!"
"So when we invoke our tool, we need to pass in user_id:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'123': ['lizard', 'dog']}\n",
"['lizard', 'dog']\n"
]
}
],
"source": [
"user_id = \"123\"\n",
"update_favorite_pets.invoke({\"pets\": [\"lizard\", \"dog\"], \"user_id\": user_id})\n",
"print(user_to_pets)\n",
"print(list_favorite_pets.invoke({\"user_id\": user_id}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"But when the model calls the tool, no user_id argument will be generated:"
]
},
{
@@ -204,7 +247,8 @@
"text/plain": [
"[{'name': 'update_favorite_pets',\n",
" 'args': {'pets': ['cats', 'parrots']},\n",
" 'id': 'call_jJvjPXsNbFO5MMgW0q84iqCN'}]"
" 'id': 'call_W3cn4lZmJlyk8PCrKN4PRwqB',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 6,
@@ -213,30 +257,349 @@
}
],
"source": [
"ai_message = handle_run_time_request(\n",
" \"eugene\", \"my favorite animals are cats and parrots.\"\n",
")\n",
"ai_message.tool_calls"
"tools = [\n",
" update_favorite_pets,\n",
" delete_favorite_pets,\n",
" list_favorite_pets,\n",
"]\n",
"llm_with_tools = llm.bind_tools(tools)\n",
"ai_msg = llm_with_tools.invoke(\"my favorite animals are cats and parrots\")\n",
"ai_msg.tool_calls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
":::{.callout-important}\n",
"## Injecting arguments at runtime"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we want to actually execute our tools using the model-generated tool call, we'll need to inject the user_id ourselves:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'update_favorite_pets',\n",
" 'args': {'pets': ['cats', 'parrots'], 'user_id': '123'},\n",
" 'id': 'call_W3cn4lZmJlyk8PCrKN4PRwqB',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from copy import deepcopy\n",
"\n",
"Chat models only output requests to invoke tools, they don't actually invoke the underlying tools.\n",
"from langchain_core.runnables import chain\n",
"\n",
"To see how to invoke the tools, please refer to [how to use a model to call tools](https://python.langchain.com/v0.2/docs/how_to/tool_calling).\n",
":::"
"\n",
"@chain\n",
"def inject_user_id(ai_msg):\n",
" tool_calls = []\n",
" for tool_call in ai_msg.tool_calls:\n",
" tool_call_copy = deepcopy(tool_call)\n",
" tool_call_copy[\"args\"][\"user_id\"] = user_id\n",
" tool_calls.append(tool_call_copy)\n",
" return tool_calls\n",
"\n",
"\n",
"inject_user_id.invoke(ai_msg)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And now we can chain together our model, injection code, and the actual tools to create a tool-executing chain:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[ToolMessage(content='null', name='update_favorite_pets', tool_call_id='call_HUyF6AihqANzEYxQnTUKxkXj')]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tool_map = {tool.name: tool for tool in tools}\n",
"\n",
"\n",
"@chain\n",
"def tool_router(tool_call):\n",
" return tool_map[tool_call[\"name\"]]\n",
"\n",
"\n",
"chain = llm_with_tools | inject_user_id | tool_router.map()\n",
"chain.invoke(\"my favorite animals are cats and parrots\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looking at the user_to_pets dict, we can see that it's been updated to include cats and parrots:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'123': ['cats', 'parrots']}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"user_to_pets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Other ways of annotating args\n",
"\n",
"Here are a few other ways of annotating our tool args:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'UpdateFavoritePetsSchema',\n",
" 'description': 'Update list of favorite pets',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}},\n",
" 'user_id': {'title': 'User Id',\n",
" 'description': \"User's ID.\",\n",
" 'type': 'string'}},\n",
" 'required': ['pets', 'user_id']}"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"from langchain_core.tools import BaseTool\n",
"\n",
"\n",
"class UpdateFavoritePetsSchema(BaseModel):\n",
" \"\"\"Update list of favorite pets\"\"\"\n",
"\n",
" pets: List[str] = Field(..., description=\"List of favorite pets to set.\")\n",
" user_id: Annotated[str, InjectedToolArg] = Field(..., description=\"User's ID.\")\n",
"\n",
"\n",
"@tool(args_schema=UpdateFavoritePetsSchema)\n",
"def update_favorite_pets(pets, user_id):\n",
" user_to_pets[user_id] = pets\n",
"\n",
"\n",
"update_favorite_pets.get_input_schema().schema()"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'update_favorite_pets',\n",
" 'description': 'Update list of favorite pets',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}}},\n",
" 'required': ['pets']}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"update_favorite_pets.tool_call_schema.schema()"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'UpdateFavoritePetsSchema',\n",
" 'description': 'Update list of favorite pets',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}},\n",
" 'user_id': {'title': 'User Id',\n",
" 'description': \"User's ID.\",\n",
" 'type': 'string'}},\n",
" 'required': ['pets', 'user_id']}"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import Optional, Type\n",
"\n",
"\n",
"class UpdateFavoritePets(BaseTool):\n",
" name: str = \"update_favorite_pets\"\n",
" description: str = \"Update list of favorite pets\"\n",
" args_schema: Optional[Type[BaseModel]] = UpdateFavoritePetsSchema\n",
"\n",
" def _run(self, pets, user_id):\n",
" user_to_pets[user_id] = pets\n",
"\n",
"\n",
"UpdateFavoritePets().get_input_schema().schema()"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'update_favorite_pets',\n",
" 'description': 'Update list of favorite pets',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}}},\n",
" 'required': ['pets']}"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"UpdateFavoritePets().tool_call_schema.schema()"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'update_favorite_petsSchema',\n",
" 'description': 'Use the tool.\\n\\nAdd run_manager: Optional[CallbackManagerForToolRun] = None\\nto child implementations to enable tracing.',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}},\n",
" 'user_id': {'title': 'User Id', 'type': 'string'}},\n",
" 'required': ['pets', 'user_id']}"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"class UpdateFavoritePets2(BaseTool):\n",
" name: str = \"update_favorite_pets\"\n",
" description: str = \"Update list of favorite pets\"\n",
"\n",
" def _run(self, pets: List[str], user_id: Annotated[str, InjectedToolArg]) -> None:\n",
" user_to_pets[user_id] = pets\n",
"\n",
"\n",
"UpdateFavoritePets2().get_input_schema().schema()"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'update_favorite_pets',\n",
" 'description': 'Update list of favorite pets',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}}},\n",
" 'required': ['pets']}"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"UpdateFavoritePets2().tool_call_schema.schema()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "poetry-venv-311",
"language": "python",
"name": "python3"
"name": "poetry-venv-311"
},
"language_info": {
"codemirror_mode": {
@@ -248,7 +611,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to stream events from child runs within a custom tool\n",
"# How to stream events from a tool\n",
"\n",
":::info Prerequisites\n",
"\n",
@@ -22,10 +22,11 @@
"\n",
"LangChain cannot automatically propagate configuration, including callbacks necessary for `astream_events()`, to child runnables if you are running `async` code in `python<=3.10`. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n",
"\n",
"If you are running `python>=3.11`, configuration will automatically propagate to child runnables in async environments, and you don't need to access the `RunnableConfig` object for that tool as shown in this guide. However, it is still a good idea if your code may run in other Python versions.\n",
"If you are running python<=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n",
"\n",
"If you are running python>=3.11, the `RunnableConfig` will automatically propagate to child runnables in async environment. However, it is still a good idea to propagate the `RunnableConfig` manually if your code may run in older Python versions.\n",
"\n",
"This guide also requires `langchain-core>=0.2.16`.\n",
"\n",
":::\n",
"\n",
"Say you have a custom tool that calls a chain that condenses its input by prompting a chat model to return only 10 words, then reversing the output. First, define it in a naive way:\n",
@@ -268,6 +269,7 @@
"\n",
"- Pass [runtime values to tools](/docs/how_to/tool_runtime)\n",
"- Pass [tool results back to a model](/docs/how_to/tool_results_pass_to_model)\n",
"- [Dispatch custom callback events](/docs/how_to/callbacks_custom_events)\n",
"\n",
"You can also check out some more specific uses of tool calling:\n",
"\n",
@@ -278,7 +280,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -292,9 +294,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -228,7 +228,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -540,7 +540,7 @@
"id": "137662a6"
},
"source": [
"## Example usage within a Conversation Chains"
"## Example usage within RunnableWithMessageHistory "
]
},
{
@@ -550,7 +550,7 @@
"id": "79efa62d"
},
"source": [
"Like any other integration, ChatNVIDIA is fine to support chat utilities like conversation buffers by default. Below, we show the [LangChain ConversationBufferMemory](https://python.langchain.com/docs/modules/memory/types/buffer) example applied to the `mistralai/mixtral-8x22b-instruct-v0.1` model."
"Like any other integration, ChatNVIDIA is fine to support chat utilities like RunnableWithMessageHistory which is analogous to using `ConversationChain`. Below, we show the [LangChain RunnableWithMessageHistory](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html) example applied to the `mistralai/mixtral-8x22b-instruct-v0.1` model."
]
},
{
@@ -572,8 +572,19 @@
},
"outputs": [],
"source": [
"from langchain.chains import ConversationChain\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain_core.chat_history import InMemoryChatMessageHistory\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"\n",
"# store is a dictionary that maps session IDs to their corresponding chat histories.\n",
"store = {} # memory is maintained outside the chain\n",
"\n",
"\n",
"# A function that returns the chat history for a given session ID.\n",
"def get_session_history(session_id: str) -> InMemoryChatMessageHistory:\n",
" if session_id not in store:\n",
" store[session_id] = InMemoryChatMessageHistory()\n",
" return store[session_id]\n",
"\n",
"\n",
"chat = ChatNVIDIA(\n",
" model=\"mistralai/mixtral-8x22b-instruct-v0.1\",\n",
@@ -582,24 +593,18 @@
" top_p=1.0,\n",
")\n",
"\n",
"conversation = ConversationChain(llm=chat, memory=ConversationBufferMemory())"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f644ff28",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 268
},
"id": "f644ff28",
"outputId": "bae354cc-2118-4e01-ce20-a717ac94d27d"
},
"outputs": [],
"source": [
"conversation.invoke(\"Hi there!\")[\"response\"]"
"# Define a RunnableConfig object, with a `configurable` key. session_id determines thread\n",
"config = {\"configurable\": {\"session_id\": \"1\"}}\n",
"\n",
"conversation = RunnableWithMessageHistory(\n",
" chat,\n",
" get_session_history,\n",
")\n",
"\n",
"conversation.invoke(\n",
" \"Hi I'm Srijan Dubey.\", # input or query\n",
" config=config,\n",
")"
]
},
{
@@ -616,26 +621,30 @@
},
"outputs": [],
"source": [
"conversation.invoke(\"I'm doing well! Just having a conversation with an AI.\")[\n",
" \"response\"\n",
"]"
"conversation.invoke(\n",
" \"I'm doing well! Just having a conversation with an AI.\",\n",
" config=config,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "LyD1xVKmVSs4",
"id": "uHIMZxVSVNBC",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 350
"height": 284
},
"id": "LyD1xVKmVSs4",
"outputId": "a1714513-a8fd-4d14-f974-233e39d5c4f5"
"id": "uHIMZxVSVNBC",
"outputId": "79acc89d-a820-4f2c-bac2-afe99da95580"
},
"outputs": [],
"source": [
"conversation.invoke(\"Tell me about yourself.\")[\"response\"]"
"conversation.invoke(\n",
" \"Tell me about yourself.\",\n",
" config=config,\n",
")"
]
}
],

View File

@@ -2,6 +2,7 @@
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
@@ -11,6 +12,7 @@
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatOllama\n",
@@ -23,6 +25,18 @@
"\n",
"For a complete list of supported models and model variants, see the [Ollama model library](https://github.com/jmorganca/ollama#model-library).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/ollama) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOllama](https://api.python.langchain.com/en/latest/chat_models/langchain_ollama.chat_models.ChatOllama.html) | [langchain-ollama](https://api.python.langchain.com/en/latest/ollama_api_reference.html) | ✅ | ❌ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-ollama?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-ollama?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"First, follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:\n",
@@ -40,307 +54,202 @@
"* Specify the exact version of the model of interest as such `ollama pull vicuna:13b-v1.5-16k-q4_0` (View the [various tags for the `Vicuna`](https://ollama.ai/library/vicuna/tags) model in this instance)\n",
"* To view all pulled models, use `ollama list`\n",
"* To chat directly with a model from the command line, use `ollama run <name-of-model>`\n",
"* View the [Ollama documentation](https://github.com/jmorganca/ollama) for more commands. Run `ollama help` in the terminal to see available commands too.\n",
"\n",
"## Usage\n",
"\n",
"You can see a full list of supported parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html).\n",
"\n",
"If you are using a LLaMA `chat` model (e.g., `ollama pull llama3`) then you can use the `ChatOllama` interface.\n",
"\n",
"This includes [special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) for system message and user input.\n",
"\n",
"## Interacting with Models \n",
"\n",
"Here are a few ways to interact with pulled local models\n",
"\n",
"#### In the terminal:\n",
"\n",
"* All of your local models are automatically served on `localhost:11434`\n",
"* Run `ollama run <name-of-model>` to start interacting via the command line directly\n",
"\n",
"#### Via an API\n",
"\n",
"Send an `application/json` request to the API endpoint of Ollama to interact.\n",
"\n",
"```bash\n",
"curl http://localhost:11434/api/generate -d '{\n",
" \"model\": \"llama3\",\n",
" \"prompt\":\"Why is the sky blue?\"\n",
"}'\n",
"```\n",
"\n",
"See the Ollama [API documentation](https://github.com/jmorganca/ollama/blob/main/docs/api.md) for all endpoints.\n",
"\n",
"#### Via LangChain\n",
"\n",
"See a typical basic example of using Ollama via the `ChatOllama` chat model in your LangChain application. \n",
"\n",
"View the [API Reference for ChatOllama](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.ollama.ChatOllama.html#langchain_community.chat_models.ollama.ChatOllama) for more."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why did the astronaut break up with his girlfriend?\n",
"\n",
"Because he needed space!\n"
]
}
],
"source": [
"# LangChain supports many other chat models. Here, we're using Ollama\n",
"from langchain_community.chat_models import ChatOllama\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"# supports many more optional parameters. Hover on your `ChatOllama(...)`\n",
"# class to view the latest available supported parameters\n",
"llm = ChatOllama(model=\"llama3\")\n",
"prompt = ChatPromptTemplate.from_template(\"Tell me a short joke about {topic}\")\n",
"\n",
"# using LangChain Expressive Language chain syntax\n",
"# learn more about the LCEL on\n",
"# /docs/concepts/#langchain-expression-language-lcel\n",
"chain = prompt | llm | StrOutputParser()\n",
"\n",
"# for brevity, response is printed in terminal\n",
"# You can use LangServe to deploy your application for\n",
"# production\n",
"print(chain.invoke({\"topic\": \"Space travel\"}))"
"* View the [Ollama documentation](https://github.com/jmorganca/ollama) for more commands. Run `ollama help` in the terminal to see available commands too.\n"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"LCEL chains, out of the box, provide extra functionalities, such as streaming of responses, and async support"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why\n",
" did\n",
" the\n",
" astronaut\n",
" break\n",
" up\n",
" with\n",
" his\n",
" girlfriend\n",
" before\n",
" going\n",
" to\n",
" Mars\n",
"?\n",
"\n",
"\n",
"Because\n",
" he\n",
" needed\n",
" space\n",
"!\n",
"\n"
]
}
],
"source": [
"topic = {\"topic\": \"Space travel\"}\n",
"\n",
"for chunks in chain.stream(topic):\n",
" print(chunks)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For streaming async support, here's an example - all possible via the single chain created above."
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"topic = {\"topic\": \"Space travel\"}\n",
"\n",
"async for chunks in chain.astream(topic):\n",
" print(chunks)"
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"Take a look at the [LangChain Expressive Language (LCEL) Interface](/docs/concepts#interface) for the other available interfaces for use when a chain is created.\n",
"### Installation\n",
"\n",
"## Building from source\n",
"\n",
"For up to date instructions on building from source, check the Ollama documentation on [Building from Source](https://github.com/ollama/ollama?tab=readme-ov-file#building)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Extraction\n",
" \n",
"Use the latest version of Ollama and supply the [`format`](https://github.com/jmorganca/ollama/blob/main/docs/api.md#json-mode) flag. The `format` flag will force the model to produce the response in JSON.\n",
"\n",
"> **Note:** You can also try out the experimental [OllamaFunctions](/docs/integrations/chat/ollama_functions) wrapper for convenience."
"The LangChain Ollama integration lives in the `langchain-ollama` package:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatOllama\n",
"%pip install -qU langchain-ollama"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"llm = ChatOllama(model=\"llama3\", format=\"json\", temperature=0)"
"Now we can instantiate our model object and generate chat completions:\n",
"\n",
"- TODO: Update model instantiation with relevant params."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_ollama import ChatOllama\n",
"\n",
"llm = ChatOllama(\n",
" model=\"llama3\",\n",
" temperature=0,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='{ \"morning\": \"blue\", \"noon\": \"clear blue\", \"afternoon\": \"hazy yellow\", \"evening\": \"orange-red\" }\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n ' id='run-e893700f-e2d0-4df8-ad86-17525dcee318-0'\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"messages = [\n",
" HumanMessage(\n",
" content=\"What color is the sky at different times of the day? Respond using JSON\"\n",
" )\n",
"]\n",
"\n",
"chat_model_response = llm.invoke(messages)\n",
"print(chat_model_response)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Name: John\n",
"Age: 35\n",
"Likes: Pizza\n"
]
}
],
"source": [
"import json\n",
"\n",
"from langchain_community.chat_models import ChatOllama\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"json_schema = {\n",
" \"title\": \"Person\",\n",
" \"description\": \"Identifying information about a person.\",\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"name\": {\"title\": \"Name\", \"description\": \"The person's name\", \"type\": \"string\"},\n",
" \"age\": {\"title\": \"Age\", \"description\": \"The person's age\", \"type\": \"integer\"},\n",
" \"fav_food\": {\n",
" \"title\": \"Fav Food\",\n",
" \"description\": \"The person's favorite food\",\n",
" \"type\": \"string\",\n",
" },\n",
" },\n",
" \"required\": [\"name\", \"age\"],\n",
"}\n",
"\n",
"llm = ChatOllama(model=\"llama2\")\n",
"\n",
"messages = [\n",
" HumanMessage(\n",
" content=\"Please tell me about a person using the following JSON schema:\"\n",
" ),\n",
" HumanMessage(content=\"{dumps}\"),\n",
" HumanMessage(\n",
" content=\"Now, considering the schema, tell me about a person named John who is 35 years old and loves pizza.\"\n",
" ),\n",
"]\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(messages)\n",
"dumps = json.dumps(json_schema, indent=2)\n",
"\n",
"chain = prompt | llm | StrOutputParser()\n",
"\n",
"print(chain.invoke({\"dumps\": dumps}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Multi-modal\n",
"\n",
"Ollama has support for multi-modal LLMs, such as [bakllava](https://ollama.ai/library/bakllava) and [llava](https://ollama.ai/library/llava).\n",
"\n",
"Browse the full set of versions for models with `tags`, such as [Llava](https://ollama.ai/library/llava/tags).\n",
"\n",
"Download the desired LLM via `ollama pull bakllava`\n",
"\n",
"Be sure to update Ollama so that you have the most recent version to support multi-modal.\n",
"\n",
"Check out the typical example of how to use ChatOllama multi-modal support below:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "62e0dbc3",
"metadata": {
"scrolled": true
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
"AIMessage(content='Je adore le programmation.\\n\\n(Note: \"programmation\" is the feminine form of the noun in French, but if you want to use the masculine form, it would be \"le programme\" instead.)' response_metadata={'model': 'llama3', 'created_at': '2024-07-04T04:20:28.138164Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 1943337750, 'load_duration': 1128875, 'prompt_eval_count': 33, 'prompt_eval_duration': 322813000, 'eval_count': 43, 'eval_duration': 1618213000} id='run-ed8c17ab-7fc2-4c90-a88a-f6273b49bc78-0')\n"
]
}
],
"source": [
"!pip install --upgrade --quiet pillow"
"from langchain_core.messages import AIMessage\n",
"\n",
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 8,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Je adore le programmation.\n",
"\n",
"(Note: \"programmation\" is the feminine form of the noun in French, but if you want to use the masculine form, it would be \"le programme\" instead.)\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmieren!\\n\\n(Note: \"Ich liebe\" means \"I love\", \"Programmieren\" is the verb for \"programming\")', response_metadata={'model': 'llama3', 'created_at': '2024-07-04T04:22:33.864132Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 1310800083, 'load_duration': 1782000, 'prompt_eval_count': 16, 'prompt_eval_duration': 250199000, 'eval_count': 29, 'eval_duration': 1057192000}, id='run-cbadbe59-2de2-4ec0-a18a-b3220226c3d2-0')"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "4c5e0197",
"metadata": {},
"source": [
"## Multi-modal\n",
"\n",
"Ollama has support for multi-modal LLMs, such as [bakllava](https://ollama.com/library/bakllava) and [llava](https://ollama.com/library/llava).\n",
"\n",
" ollama pull bakllava\n",
"\n",
"Be sure to update Ollama so that you have the most recent version to support multi-modal."
]
},
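As a rough sketch of what a multi-modal call can look like (the image path and prompt below are placeholders, and the exact content-block format may vary across versions):

```python
import base64

from langchain_core.messages import HumanMessage
from langchain_ollama import ChatOllama

llm = ChatOllama(model="bakllava", temperature=0)

# Read and base64-encode a local image (placeholder path).
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

message = HumanMessage(
    content=[
        {"type": "text", "text": "What is shown in this image?"},
        {"type": "image_url", "image_url": f"data:image/jpeg;base64,{image_b64}"},
    ]
)
print(llm.invoke([message]).content)
```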
{
"cell_type": "code",
"execution_count": 11,
"id": "36c9b1c2",
"metadata": {},
"outputs": [
{
@@ -399,7 +308,8 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 12,
"id": "32b3ba7b",
"metadata": {},
"outputs": [
{
@@ -411,8 +321,8 @@
}
],
"source": [
"from langchain_community.chat_models import ChatOllama\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_ollama import ChatOllama\n",
"\n",
"llm = ChatOllama(model=\"bakllava\", temperature=0)\n",
"\n",
@@ -449,20 +359,12 @@
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## Concurrency Features\n",
"## API reference\n",
"\n",
"Ollama supports concurrency inference for a single model, and or loading multiple models simulatenously (at least [version 0.1.33](https://github.com/ollama/ollama/releases)).\n",
"\n",
"Start the Ollama server with:\n",
"\n",
"* `OLLAMA_NUM_PARALLEL`: Handle multiple requests simultaneously for a single model\n",
"* `OLLAMA_MAX_LOADED_MODELS`: Load multiple models simultaneously\n",
"\n",
"Example: `OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=4 ollama serve`\n",
"\n",
"Learn more about configuring Ollama server in [the official guide](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server)."
"For detailed documentation of all ChatOllama features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_ollama.chat_models.ChatOllama.html"
]
}
],
@@ -482,9 +384,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.8"
"version": "3.12.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 5
}

View File

@@ -162,7 +162,7 @@
"metadata": {},
"outputs": [],
"source": [
"!poetry run pip install --upgrade langchain-openai tiktoken chromadb hnswlib"
"!poetry run pip install --upgrade langchain-openai tiktoken langchain-chroma hnswlib"
]
},
{
@@ -211,7 +211,7 @@
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain_community.vectorstores.chroma import Chroma\n",
"from langchain_chroma import Chroma\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings\n",
"\n",
"embedding = OpenAIEmbeddings()\n",
@@ -365,7 +365,7 @@
"source": [
"from langchain.chains.query_constructor.schema import AttributeInfo\n",
"from langchain.retrievers.self_query.base import SelfQueryRetriever\n",
"from langchain_community.vectorstores.chroma import Chroma\n",
"from langchain_chroma import Chroma\n",
"\n",
"EXCLUDE_KEYS = [\"id\", \"xpath\", \"structure\"]\n",
"metadata_field_info = [\n",
@@ -540,7 +540,7 @@
"source": [
"from langchain.retrievers.multi_vector import MultiVectorRetriever, SearchType\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_community.vectorstores.chroma import Chroma\n",
"from langchain_chroma import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"# The vectorstore to use to index the child chunks\n",

View File

@@ -33,7 +33,7 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet transformers --quiet"
"%pip install --upgrade --quiet transformers"
]
},
{

View File

@@ -1,10 +1,21 @@
{
"cells": [
{
"cell_type": "markdown",
"cell_type": "raw",
"id": "67db2992",
"metadata": {},
"source": [
"# Ollama\n",
"---\n",
"sidebar_label: Ollama\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "9597802c",
"metadata": {},
"source": [
"# OllamaLLM\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of Ollama models as [text completion models](/docs/concepts/#llms). Many popular Ollama models are [chat completion models](/docs/concepts/#chat-models).\n",
@@ -12,21 +23,35 @@
"You may be looking for [this page instead](/docs/integrations/chat/ollama/).\n",
":::\n",
"\n",
"[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2, locally.\n",
"\n",
"Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. \n",
"\n",
"It optimizes setup and configuration details, including GPU usage.\n",
"\n",
"For a complete list of supported models and model variants, see the [Ollama model library](https://github.com/ollama/ollama#model-library).\n",
"This page goes over how to use LangChain to interact with `Ollama` models.\n",
"\n",
"## Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "59c710c4",
"metadata": {},
"outputs": [],
"source": [
"# install package\n",
"%pip install -U langchain-ollama"
]
},
{
"cell_type": "markdown",
"id": "0ee90032",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"First, follow [these instructions](https://github.com/ollama/ollama) to set up and run a local Ollama instance:\n",
"First, follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:\n",
"\n",
"* [Download](https://ollama.ai/download) and install Ollama onto the available supported platforms (including Windows Subsystem for Linux)\n",
"* Fetch available LLM model via `ollama pull <name-of-model>`\n",
" * View a list of available models via the [model library](https://ollama.ai/library) and pull to use locally with the command `ollama pull llama3`\n",
" * View a list of available models via the [model library](https://ollama.ai/library)\n",
" * e.g., `ollama pull llama3`\n",
"* This will download the default tagged version of the model. Typically, the default points to the latest, smallest sized-parameter model.\n",
"\n",
"> On Mac, the models will be download to `~/.ollama/models`\n",
@@ -34,194 +59,67 @@
"> On Linux (or WSL), the models will be stored at `/usr/share/ollama/.ollama/models`\n",
"\n",
"* Specify the exact version of the model of interest as such `ollama pull vicuna:13b-v1.5-16k-q4_0` (View the [various tags for the `Vicuna`](https://ollama.ai/library/vicuna/tags) model in this instance)\n",
"* To view all pulled models on your local instance, use `ollama list`\n",
"* To view all pulled models, use `ollama list`\n",
"* To chat directly with a model from the command line, use `ollama run <name-of-model>`\n",
"* View the [Ollama documentation](https://github.com/ollama/ollama) for more commands. \n",
"* Run `ollama help` in the terminal to see available commands too.\n",
"* View the [Ollama documentation](https://github.com/jmorganca/ollama) for more commands. Run `ollama help` in the terminal to see available commands too.\n",
"\n",
"## Usage\n",
"\n",
"You can see a full list of supported parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.ollama.Ollama.html).\n",
"\n",
"If you are using a LLaMA `chat` model (e.g., `ollama pull llama3`) then you can use the `ChatOllama` [interface](https://python.langchain.com/v0.2/docs/integrations/chat/ollama/).\n",
"\n",
"This includes [special tokens](https://ollama.com/library/llama3) for system message and user input.\n",
"\n",
"## Interacting with Models \n",
"\n",
"Here are a few ways to interact with pulled local models\n",
"\n",
"#### In the terminal:\n",
"\n",
"* All of your local models are automatically served on `localhost:11434`\n",
"* Run `ollama run <name-of-model>` to start interacting via the command line directly\n",
"\n",
"#### Via the API\n",
"\n",
"Send an `application/json` request to the API endpoint of Ollama to interact.\n",
"\n",
"```bash\n",
"curl http://localhost:11434/api/generate -d '{\n",
" \"model\": \"llama3\",\n",
" \"prompt\":\"Why is the sky blue?\"\n",
"}'\n",
"```\n",
"\n",
"See the Ollama [API documentation](https://github.com/ollama/ollama/blob/main/docs/api.md) for all endpoints.\n",
"\n",
"#### via LangChain\n",
"\n",
"See a typical basic example of using [Ollama chat model](https://python.langchain.com/v0.2/docs/integrations/chat/ollama/) in your LangChain application."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Here's one:\\n\\nWhy don't scientists trust atoms?\\n\\nBecause they make up everything!\\n\\nHope that made you smile! Do you want to hear another one?\""
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.llms import Ollama\n",
"\n",
"llm = Ollama(\n",
" model=\"llama3\"\n",
") # assuming you have Ollama installed and have llama3 model pulled with `ollama pull llama3 `\n",
"\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To stream tokens, use the `.stream(...)` method:"
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"id": "035dea0f",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"S\n",
"ure\n",
",\n",
" here\n",
"'\n",
"s\n",
" one\n",
":\n",
"\n",
"\n",
"\n",
"\n",
"Why\n",
" don\n",
"'\n",
"t\n",
" scient\n",
"ists\n",
" trust\n",
" atoms\n",
"?\n",
"\n",
"\n",
"B\n",
"ecause\n",
" they\n",
" make\n",
" up\n",
" everything\n",
"!\n",
"\n",
"\n",
"\n",
"\n",
"I\n",
" hope\n",
" you\n",
" found\n",
" that\n",
" am\n",
"using\n",
"!\n",
" Do\n",
" you\n",
" want\n",
" to\n",
" hear\n",
" another\n",
" one\n",
"?\n",
"\n"
]
"data": {
"text/plain": [
"'A great start!\\n\\nLangChain is a type of AI model that uses language processing techniques to generate human-like text based on input prompts or chains of reasoning. In other words, it can have a conversation with humans, understanding the context and responding accordingly.\\n\\nHere\\'s a possible breakdown:\\n\\n* \"Lang\" likely refers to its focus on natural language processing (NLP) and linguistic analysis.\\n* \"Chain\" suggests that LangChain is designed to generate text in response to a series of connected ideas or prompts, rather than simply generating random text.\\n\\nSo, what do you think LangChain\\'s capabilities might be?'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"Tell me a joke\"\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_ollama.llms import OllamaLLM\n",
"\n",
"for chunks in llm.stream(query):\n",
" print(chunks)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To learn more about the LangChain Expressive Language and the available methods on an LLM, see the [LCEL Interface](/docs/concepts#interface)"
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"model = OllamaLLM(model=\"llama3\")\n",
"\n",
"chain = prompt | model\n",
"\n",
"chain.invoke({\"question\": \"What is LangChain?\"})"
]
},
{
"cell_type": "markdown",
"id": "e2d85456",
"metadata": {},
"source": [
"## Multi-modal\n",
"\n",
"Ollama has support for multi-modal LLMs, such as [bakllava](https://ollama.ai/library/bakllava) and [llava](https://ollama.ai/library/llava).\n",
"Ollama has support for multi-modal LLMs, such as [bakllava](https://ollama.com/library/bakllava) and [llava](https://ollama.com/library/llava).\n",
"\n",
"`ollama pull bakllava`\n",
" ollama pull bakllava\n",
"\n",
"Be sure to update Ollama so that you have the most recent version to support multi-modal."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import Ollama\n",
"\n",
"bakllava = Ollama(model=\"bakllava\")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "4043e202",
"metadata": {},
"outputs": [
{
@@ -279,7 +177,8 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 4,
"id": "79aaf863",
"metadata": {},
"outputs": [
{
@@ -288,38 +187,24 @@
"'90%'"
]
},
"execution_count": 8,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_with_image_context = bakllava.bind(images=[image_b64])\n",
"from langchain_ollama import OllamaLLM\n",
"\n",
"llm = OllamaLLM(model=\"bakllava\")\n",
"\n",
"llm_with_image_context = llm.bind(images=[image_b64])\n",
"llm_with_image_context.invoke(\"What is the dollar based gross retention rate:\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Concurrency Features\n",
"\n",
"Ollama supports concurrency inference for a single model, and or loading multiple models simulatenously (at least [version 0.1.33](https://github.com/ollama/ollama/releases)).\n",
"\n",
"Start the Ollama server with:\n",
"\n",
"* `OLLAMA_NUM_PARALLEL`: Handle multiple requests simultaneously for a single model\n",
"* `OLLAMA_MAX_LOADED_MODELS`: Load multiple models simultaneously\n",
"\n",
"Example: `OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=4 ollama serve`\n",
"\n",
"Learn more about configuring Ollama server in [the official guide](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "Python 3.11.1 64-bit",
"language": "python",
"name": "python3"
},
@@ -333,9 +218,14 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.8"
"version": "3.12.3"
},
"vscode": {
"interpreter": {
"hash": "e971737741ff4ec9aff7dc6155a1060a59a8a6d52c757dbbe66bf8ee389494b1"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 5
}

View File

@@ -50,8 +50,8 @@
"source": [
"import os\n",
"\n",
"from langchain.chains import LLMChain\n",
"from langchain_community.llms import PipelineAI\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate"
]
},
@@ -123,7 +123,7 @@
"metadata": {},
"outputs": [],
"source": [
"llm_chain = LLMChain(prompt=prompt, llm=llm)"
"llm_chain = prompt | llm | StrOutputParser()"
]
},
{
@@ -142,7 +142,7 @@
"source": [
"question = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n",
"\n",
"llm_chain.run(question)"
"llm_chain.invoke(question)"
]
}
],

View File

@@ -88,6 +88,7 @@
" \"max_tokens_to_generate\": 1000,\n",
" \"temperature\": 0.01,\n",
" \"select_expert\": \"llama-2-7b-chat-hf\",\n",
" \"process_prompt\": False,\n",
" # \"stop_sequences\": '\\\"sequence1\\\",\\\"sequence2\\\"',\n",
" # \"repetition_penalty\": 1.0,\n",
" # \"top_k\": 50,\n",
@@ -116,6 +117,7 @@
" \"max_tokens_to_generate\": 1000,\n",
" \"temperature\": 0.01,\n",
" \"select_expert\": \"llama-2-7b-chat-hf\",\n",
" \"process_prompt\": False,\n",
" # \"stop_sequences\": '\\\"sequence1\\\",\\\"sequence2\\\"',\n",
" # \"repetition_penalty\": 1.0,\n",
" # \"top_k\": 50,\n",
@@ -175,9 +177,7 @@
"import os\n",
"\n",
"sambastudio_base_url = \"<Your SambaStudio environment URL>\"\n",
"sambastudio_base_uri = (\n",
" \"<Your SambaStudio endpoint base URI>\" # optional, \"api/predict/nlp\" set as default\n",
")\n",
"sambastudio_base_uri = \"<Your SambaStudio endpoint base URI>\" # optional, \"api/predict/generic\" set as default\n",
"sambastudio_project_id = \"<Your SambaStudio project id>\"\n",
"sambastudio_endpoint_id = \"<Your SambaStudio endpoint id>\"\n",
"sambastudio_api_key = \"<Your SambaStudio endpoint API key>\"\n",
@@ -271,6 +271,7 @@
" \"do_sample\": True,\n",
" \"max_tokens_to_generate\": 1000,\n",
" \"temperature\": 0.01,\n",
" \"process_prompt\": False,\n",
" \"select_expert\": \"Meta-Llama-3-8B-Instruct\",\n",
" # \"repetition_penalty\": 1.0,\n",
" # \"top_k\": 50,\n",

View File

@@ -0,0 +1,325 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a283d2fd-e26e-4811-a486-d3cf0ecf6749",
"metadata": {},
"source": [
"# Couchbase\n",
"> Couchbase is an award-winning distributed NoSQL cloud database that delivers unmatched versatility, performance, scalability, and financial value for all of your cloud, mobile, AI, and edge computing applications. Couchbase embraces AI with coding assistance for developers and vector search for their applications.\n",
"\n",
"This notebook goes over how to use the `CouchbaseChatMessageHistory` class to store the chat message history in a Couchbase cluster\n"
]
},
{
"cell_type": "markdown",
"id": "ff868a6c-3e17-4c3d-8d32-67b01f4d7bcc",
"metadata": {},
"source": [
"## Set Up Couchbase Cluster\n",
"To run this demo, you need a Couchbase Cluster. \n",
"\n",
"You can work with both [Couchbase Capella](https://www.couchbase.com/products/capella/) and your self-managed Couchbase Server."
]
},
{
"cell_type": "markdown",
"id": "41fa85e7-6968-45e4-a445-de305d80f332",
"metadata": {},
"source": [
"## Install Dependencies\n",
"`CouchbaseChatMessageHistory` lives inside the `langchain-couchbase` package. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "b744ca05-b8c6-458c-91df-f50ca2c20b3c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet langchain-couchbase"
]
},
{
"cell_type": "markdown",
"id": "41f29205-6452-493b-ba18-8a3b006bcca4",
"metadata": {},
"source": [
"## Create Couchbase Connection Object\n",
"We create a connection to the Couchbase cluster initially and then pass the cluster object to the Vector Store. \n",
"\n",
"Here, we are connecting using the username and password. You can also connect using any other supported way to your cluster. \n",
"\n",
"For more information on connecting to the Couchbase cluster, please check the [Python SDK documentation](https://docs.couchbase.com/python-sdk/current/hello-world/start-using-sdk.html#connect)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f394908e-f5fe-408a-84d7-b97fdebcfa26",
"metadata": {},
"outputs": [],
"source": [
"COUCHBASE_CONNECTION_STRING = (\n",
" \"couchbase://localhost\" # or \"couchbases://localhost\" if using TLS\n",
")\n",
"DB_USERNAME = \"Administrator\"\n",
"DB_PASSWORD = \"Password\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "ad4dce21-d80c-465a-b709-fd366ba5ce35",
"metadata": {},
"outputs": [],
"source": [
"from datetime import timedelta\n",
"\n",
"from couchbase.auth import PasswordAuthenticator\n",
"from couchbase.cluster import Cluster\n",
"from couchbase.options import ClusterOptions\n",
"\n",
"auth = PasswordAuthenticator(DB_USERNAME, DB_PASSWORD)\n",
"options = ClusterOptions(auth)\n",
"cluster = Cluster(COUCHBASE_CONNECTION_STRING, options)\n",
"\n",
"# Wait until the cluster is ready for use.\n",
"cluster.wait_until_ready(timedelta(seconds=5))"
]
},
{
"cell_type": "markdown",
"id": "e3d0210c-e2e6-437a-86f3-7397a1899fef",
"metadata": {},
"source": [
"We will now set the bucket, scope, and collection names in the Couchbase cluster that we want to use for storing the message history.\n",
"\n",
"Note that the bucket, scope, and collection need to exist before using them to store the message history."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e8c7f846-a5c4-4465-a40e-4a9a23ac71bd",
"metadata": {},
"outputs": [],
"source": [
"BUCKET_NAME = \"langchain-testing\"\n",
"SCOPE_NAME = \"_default\"\n",
"COLLECTION_NAME = \"conversational_cache\""
]
},
{
"cell_type": "markdown",
"id": "283959e1-6af7-4768-9211-5b0facc6ef65",
"metadata": {},
"source": [
"## Usage\n",
"In order to store the messages, you need the following:\n",
"- Couchbase Cluster object: Valid connection to the Couchbase cluster\n",
"- bucket_name: Bucket in cluster to store the chat message history\n",
"- scope_name: Scope in bucket to store the message history\n",
"- collection_name: Collection in scope to store the message history\n",
"- session_id: Unique identifier for the session\n",
"\n",
"Optionally you can configure the following:\n",
"- session_id_key: Field in the chat message documents to store the `session_id`\n",
"- message_key: Field in the chat message documents to store the message content\n",
"- create_index: Used to specify if the index needs to be created on the collection. By default, an index is created on the `message_key` and the `session_id_key` of the documents"
]
},
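For illustration, a minimal sketch that also sets the optional fields (the field names come from the list above; the values shown are placeholders rather than the documented defaults):

```python
from langchain_couchbase.chat_message_histories import CouchbaseChatMessageHistory

# `cluster`, BUCKET_NAME, SCOPE_NAME, and COLLECTION_NAME are defined in the cells above.
message_history = CouchbaseChatMessageHistory(
    cluster=cluster,
    bucket_name=BUCKET_NAME,
    scope_name=SCOPE_NAME,
    collection_name=COLLECTION_NAME,
    session_id="test-session",
    session_id_key="session_id",  # field used to store the session id (illustrative value)
    message_key="message",  # field used to store the message content (illustrative value)
    create_index=False,  # skip index creation if a suitable index already exists
)
```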
{
"cell_type": "code",
"execution_count": 5,
"id": "43c3b2d5-aae2-44a9-9e9f-f10adf054cfa",
"metadata": {},
"outputs": [],
"source": [
"from langchain_couchbase.chat_message_histories import CouchbaseChatMessageHistory\n",
"\n",
"message_history = CouchbaseChatMessageHistory(\n",
" cluster=cluster,\n",
" bucket_name=BUCKET_NAME,\n",
" scope_name=SCOPE_NAME,\n",
" collection_name=COLLECTION_NAME,\n",
" session_id=\"test-session\",\n",
")\n",
"\n",
"message_history.add_user_message(\"hi!\")\n",
"\n",
"message_history.add_ai_message(\"how are you doing?\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e7e348ef-79e9-481c-aeef-969ae03dea6a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='hi!'), AIMessage(content='how are you doing?')]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"message_history.messages"
]
},
{
"cell_type": "markdown",
"id": "c8b942a7-93fa-4cd9-8414-d047135c2733",
"metadata": {},
"source": [
"## Chaining\n",
"The chat message history class can be used with [LCEL Runnables](https://python.langchain.com/v0.2/docs/how_to/message_history/)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8a9f0d91-d1d6-481d-8137-ea11229f485a",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "946d45aa-5a61-49ae-816b-1c3949c56d9a",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are a helpful assistant.\"),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"\n",
"# Create the LCEL runnable\n",
"chain = prompt | ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "20dfd838-b549-42ed-b3ba-ac005f7e024c",
"metadata": {},
"outputs": [],
"source": [
"chain_with_history = RunnableWithMessageHistory(\n",
" chain,\n",
" lambda session_id: CouchbaseChatMessageHistory(\n",
" cluster=cluster,\n",
" bucket_name=BUCKET_NAME,\n",
" scope_name=SCOPE_NAME,\n",
" collection_name=COLLECTION_NAME,\n",
" session_id=session_id,\n",
" ),\n",
" input_messages_key=\"question\",\n",
" history_messages_key=\"history\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "17bd09f4-896d-433d-bb9a-369a06e7aa8a",
"metadata": {},
"outputs": [],
"source": [
"# This is where we configure the session id\n",
"config = {\"configurable\": {\"session_id\": \"testing\"}}"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "4bda1096-2fc2-40d7-a046-0d5d8e3a8f75",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Hello Bob! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 22, 'total_tokens': 32}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-a0f8a29e-ddf4-4e06-a1fe-cf8c325a2b72-0', usage_metadata={'input_tokens': 22, 'output_tokens': 10, 'total_tokens': 32})"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_history.invoke({\"question\": \"Hi! I'm bob\"}, config=config)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "1cfb31da-51bb-4c5f-909a-b7118b0ae08d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Your name is Bob.', response_metadata={'token_usage': {'completion_tokens': 5, 'prompt_tokens': 43, 'total_tokens': 48}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-f764a9eb-999e-4042-96b6-fe47b7ae4779-0', usage_metadata={'input_tokens': 43, 'output_tokens': 5, 'total_tokens': 48})"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_history.invoke({\"question\": \"Whats my name\"}, config=config)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -89,3 +89,23 @@ set_llm_cache(
)
)
```
## Chat Message History
Use Couchbase as the storage for your chat messages.
See a [usage example](/docs/integrations/memory/couchbase_chat_message_history).
To use the chat message history in your applications:
```python
from langchain_couchbase.chat_message_histories import CouchbaseChatMessageHistory
message_history = CouchbaseChatMessageHistory(
cluster=cluster,
bucket_name=BUCKET_NAME,
scope_name=SCOPE_NAME,
collection_name=COLLECTION_NAME,
session_id="test-session",
)
message_history.add_user_message("hi!")
```
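Stored turns can be read back from the same collection; a short follow-up sketch using the methods shown in the notebook above:
```python
# Append an assistant turn and read the full conversation back
message_history.add_ai_message("how are you doing?")
print(message_history.messages)
# [HumanMessage(content='hi!'), AIMessage(content='how are you doing?')]
```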

File diff suppressed because it is too large

View File

@@ -101,7 +101,7 @@
" sambastudio_embeddings_project_id=sambastudio_project_id,\n",
" sambastudio_embeddings_endpoint_id=sambastudio_endpoint_id,\n",
" sambastudio_embeddings_api_key=sambastudio_api_key,\n",
" batch_size=32,\n",
" batch_size=32, # set depending on the deployed endpoint configuration\n",
")"
]
},

View File

@@ -0,0 +1,174 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# TextEmbed - Embedding Inference Server\n",
"\n",
"TextEmbed is a high-throughput, low-latency REST API designed for serving vector embeddings. It supports a wide range of sentence-transformer models and frameworks, making it suitable for various applications in natural language processing.\n",
"\n",
"## Features\n",
"\n",
"- **High Throughput & Low Latency:** Designed to handle a large number of requests efficiently.\n",
"- **Flexible Model Support:** Works with various sentence-transformer models.\n",
"- **Scalable:** Easily integrates into larger systems and scales with demand.\n",
"- **Batch Processing:** Supports batch processing for better and faster inference.\n",
"- **OpenAI Compatible REST API Endpoint:** Provides an OpenAI compatible REST API endpoint.\n",
"- **Single Line Command Deployment:** Deploy multiple models via a single command for efficient deployment.\n",
"- **Support for Embedding Formats:** Supports binary, float16, and float32 embeddings formats for faster retrieval.\n",
"\n",
"## Getting Started\n",
"\n",
"### Prerequisites\n",
"\n",
"Ensure you have Python 3.10 or higher installed. You will also need to install the required dependencies."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installation via PyPI\n",
"\n",
"1. **Install the required dependencies:**\n",
"\n",
" ```bash\n",
" pip install -U textembed\n",
" ```\n",
"\n",
"2. **Start the TextEmbed server with your desired models:**\n",
"\n",
" ```bash\n",
" python -m textembed.server --models sentence-transformers/all-MiniLM-L12-v2 --workers 4 --api-key TextEmbed \n",
" ```\n",
"\n",
"For more information, please read the [documentation](https://github.com/kevaldekivadiya2415/textembed/blob/main/docs/setup.md)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Import"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings import TextEmbedEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"embeddings = TextEmbedEmbeddings(\n",
" model=\"sentence-transformers/all-MiniLM-L12-v2\",\n",
" api_url=\"http://0.0.0.0:8000/v1\",\n",
" api_key=\"TextEmbed\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Embed your documents"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"# Define a list of documents\n",
"documents = [\n",
" \"Data science involves extracting insights from data.\",\n",
" \"Artificial intelligence is transforming various industries.\",\n",
" \"Cloud computing provides scalable computing resources over the internet.\",\n",
" \"Big data analytics helps in understanding large datasets.\",\n",
" \"India has a diverse cultural heritage.\",\n",
"]\n",
"\n",
"# Define a query\n",
"query = \"What is the cultural heritage of India?\""
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"# Embed all documents\n",
"document_embeddings = embeddings.embed_documents(documents)\n",
"\n",
"# Embed the query\n",
"query_embedding = embeddings.embed_query(query)"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'Data science involves extracting insights from data.': 0.05121298956322118,\n",
" 'Artificial intelligence is transforming various industries.': -0.0060612142358469345,\n",
" 'Cloud computing provides scalable computing resources over the internet.': -0.04877402795301714,\n",
" 'Big data analytics helps in understanding large datasets.': 0.016582168576929422,\n",
" 'India has a diverse cultural heritage.': 0.7408992963028144}"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Compute Similarity\n",
"import numpy as np\n",
"\n",
"scores = np.array(document_embeddings) @ np.array(query_embedding).T\n",
"dict(zip(documents, scores))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "check10",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.14"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
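The raw dot-product scores in the last cell assume the served model returns roughly unit-normalized vectors. A minimal sketch of explicit cosine similarity with NumPy, reusing the `documents`, `document_embeddings`, and `query_embedding` objects defined above:
```python
import numpy as np

# Normalize both sides so the score is cosine similarity regardless of
# whether the endpoint returns unit-length vectors.
doc_matrix = np.array(document_embeddings)
query_vec = np.array(query_embedding)

cosine_scores = (doc_matrix @ query_vec) / (
    np.linalg.norm(doc_matrix, axis=1) * np.linalg.norm(query_vec)
)
print(dict(zip(documents, cosine_scores)))
```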

View File

@@ -22,7 +22,7 @@
"`InfobipAPIWrapper` uses name parameters where you can provide credentials:\n",
"\n",
"- `infobip_api_key` - [API Key](https://www.infobip.com/docs/essentials/api-authentication#api-key-header) that you can find in your [developer tools](https://portal.infobip.com/dev/api-keys)\n",
"- `infobip_base_url` - [Base url](https://www.infobip.com/docs/essentials/base-url) for Infobip API. You can use default value `https://api.infobip.com/`.\n",
"- `infobip_base_url` - [Base url](https://www.infobip.com/docs/essentials/base-url) for Infobip API. You can use the default value `https://api.infobip.com/`.\n",
"\n",
"You can also provide `infobip_api_key` and `infobip_base_url` as environment variables `INFOBIP_API_KEY` and `INFOBIP_BASE_URL`."
]
@@ -60,7 +60,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sending a Email"
"## Sending an Email"
]
},
{

View File

@@ -0,0 +1,183 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "7d143c73",
"metadata": {},
"source": [
"# Riza Code Interpreter\n",
"\n",
"> The Riza Code Interpreter is a WASM-based isolated environment for running Python or JavaScript generated by AI agents.\n",
"\n",
"In this notebook we'll create an example of an agent that uses Python to solve a problem that an LLM can't solve on its own:\n",
"counting the number of 'r's in the word \"strawberry.\"\n",
"\n",
"Before you get started grab an API key from the [Riza dashboard](https://dashboard.riza.io). For more guides and a full API reference\n",
"head over to the [Riza Code Interpreter API documentation](https://docs.riza.io)."
]
},
{
"cell_type": "markdown",
"id": "894aa87a",
"metadata": {},
"source": [
"Make sure you have the necessary dependencies installed."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8265cf7f",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community rizaio"
]
},
{
"cell_type": "markdown",
"id": "e085eb51",
"metadata": {},
"source": [
"Set up your API keys as an environment variable."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "45ba8936",
"metadata": {},
"outputs": [],
"source": [
"%env ANTHROPIC_API_KEY=<your_anthropic_api_key_here>\n",
"%env RIZA_API_KEY=<your_riza_api_key_here>"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "efe26fd9-6e33-4f5f-b49b-ea74fa6c4915",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.tools.riza.command import ExecPython"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "cd5b952e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor, create_tool_calling_agent\n",
"from langchain_anthropic import ChatAnthropic\n",
"from langchain_core.prompts import ChatPromptTemplate"
]
},
{
"cell_type": "markdown",
"id": "7bd0b610",
"metadata": {},
"source": [
"Initialize the `ExecPython` tool."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "32f1543f",
"metadata": {},
"outputs": [],
"source": [
"tools = [ExecPython()]"
]
},
{
"cell_type": "markdown",
"id": "24f952d5",
"metadata": {},
"source": [
"Initialize an agent using Anthropic's Claude Haiku model."
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "71831ea8",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatAnthropic(model=\"claude-3-haiku-20240307\", temperature=0)\n",
"\n",
"prompt_template = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant. Make sure to use a tool if you need to solve a problem.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" (\"placeholder\", \"{agent_scratchpad}\"),\n",
" ]\n",
")\n",
"\n",
"agent = create_tool_calling_agent(llm, tools, prompt_template)\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 34,
"id": "36b24036",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `riza_exec_python` with `{'code': 'word = \"strawberry\"\\nprint(word.count(\"r\"))'}`\n",
"responded: [{'id': 'toolu_01JwPLAAqqCNCjVuEnK8Fgut', 'input': {}, 'name': 'riza_exec_python', 'type': 'tool_use', 'index': 0, 'partial_json': '{\"code\": \"word = \\\\\"strawberry\\\\\"\\\\nprint(word.count(\\\\\"r\\\\\"))\"}'}]\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m3\n",
"\u001b[0m\u001b[32;1m\u001b[1;3m[{'text': '\\n\\nThe word \"strawberry\" contains 3 \"r\" characters.', 'type': 'text', 'index': 0}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\n",
"The word \"strawberry\" contains 3 \"r\" characters.\n"
]
}
],
"source": [
"# Ask a tough question\n",
"result = agent_executor.invoke({\"input\": \"how many rs are in strawberry?\"})\n",
"print(result[\"output\"][0][\"text\"])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
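The `ExecPython` tool can also be called outside the agent loop. A minimal sketch, assuming the tool accepts a `code` argument as shown in the agent trace above and that `RIZA_API_KEY` is set:
```python
from langchain_community.tools.riza.command import ExecPython

tool = ExecPython()
# Run a one-off snippet in the isolated interpreter and print what it wrote to stdout
result = tool.invoke({"code": 'print("strawberry".count("r"))'})
print(result)
```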

View File

@@ -72,7 +72,7 @@
}
],
"source": [
"tool.run(\"lex friedman\")"
"tool.run(\"lex fridman\")"
]
},
{

View File

@@ -150,7 +150,13 @@ hide_table_of_contents: true
## Advanced features
The following table shows all the chat models that support one or more advanced features.
The following table shows all the chat model classes that support one or more advanced features.
:::info
While all these LangChain classes support the indicated advanced feature, you may have
to open the provider-specific documentation to learn which hosted models or backends support
the feature.
:::
{table}

View File

@@ -25,6 +25,7 @@ fireworks-ai>=0.9.0,<0.10
friendli-client>=1.2.4,<2
geopandas>=0.13.1
gitpython>=3.1.32,<4
gliner>=0.2.7
google-cloud-documentai>=2.20.1,<3
gql>=3.4.1,<4
gradientai>=1.4.0,<2
@@ -37,6 +38,7 @@ javelin-sdk>=0.1.8,<0.2
jinja2>=3,<4
jq>=1.4.1,<2
jsonschema>1
keybert>=0.8.5
lxml>=4.9.3,<6.0
markdownify>=0.11.6,<0.12
motor>=3.3.1,<4
@@ -60,7 +62,7 @@ psychicapi>=0.8.0,<0.9
py-trello>=0.19.0,<0.20
pyjwt>=2.8.0,<3
pymupdf>=1.22.3,<2
pypdf>=3.4.0,<4
pypdf>=3.4.0,<5
pypdfium2>=4.10.0,<5
pyspark>=3.4.0,<4
rank-bm25>=0.2.2,<0.3

View File

@@ -292,17 +292,21 @@ def _create_api_controller_agent(
)
if "DELETE" in allowed_operations:
delete_llm_chain = LLMChain(llm=llm, prompt=PARSING_DELETE_PROMPT)
RequestsDeleteToolWithParsing( # type: ignore[call-arg]
requests_wrapper=requests_wrapper,
llm_chain=delete_llm_chain,
allow_dangerous_requests=allow_dangerous_requests,
tools.append(
RequestsDeleteToolWithParsing( # type: ignore[call-arg]
requests_wrapper=requests_wrapper,
llm_chain=delete_llm_chain,
allow_dangerous_requests=allow_dangerous_requests,
)
)
if "PATCH" in allowed_operations:
patch_llm_chain = LLMChain(llm=llm, prompt=PARSING_PATCH_PROMPT)
RequestsPatchToolWithParsing( # type: ignore[call-arg]
requests_wrapper=requests_wrapper,
llm_chain=patch_llm_chain,
allow_dangerous_requests=allow_dangerous_requests,
tools.append(
RequestsPatchToolWithParsing( # type: ignore[call-arg]
requests_wrapper=requests_wrapper,
llm_chain=patch_llm_chain,
allow_dangerous_requests=allow_dangerous_requests,
)
)
if not tools:
raise ValueError("Tools not found")

View File

@@ -25,6 +25,7 @@ from langchain_core.prompts.chat import (
from langchain_community.agent_toolkits.sql.prompt import (
SQL_FUNCTIONS_SUFFIX,
SQL_PREFIX,
SQL_SUFFIX,
)
from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
from langchain_community.tools.sql_database.tool import (
@@ -140,8 +141,9 @@ def create_sql_agent(
toolkit = toolkit or SQLDatabaseToolkit(llm=llm, db=db) # type: ignore[arg-type]
agent_type = agent_type or AgentType.ZERO_SHOT_REACT_DESCRIPTION
tools = toolkit.get_tools() + list(extra_tools)
if prefix is None:
prefix = SQL_PREFIX
if prompt is None:
prefix = prefix or SQL_PREFIX
prefix = prefix.format(dialect=toolkit.dialect, top_k=top_k)
else:
if "top_k" in prompt.input_variables:
@@ -170,10 +172,10 @@ def create_sql_agent(
)
template = "\n\n".join(
[
react_prompt.PREFIX,
prefix,
"{tools}",
format_instructions,
react_prompt.SUFFIX,
suffix or SQL_SUFFIX,
]
)
prompt = PromptTemplate.from_template(template)

View File

@@ -8,6 +8,12 @@ from langchain_core.messages import AIMessage
from langchain_core.outputs import ChatGeneration, LLMResult
MODEL_COST_PER_1K_TOKENS = {
# GPT-4o-mini input
"gpt-4o-mini": 0.00015,
"gpt-4o-mini-2024-07-18": 0.00015,
# GPT-4o-mini output
"gpt-4o-mini-completion": 0.0006,
"gpt-4o-mini-2024-07-18-completion": 0.0006,
# GPT-4o input
"gpt-4o": 0.005,
"gpt-4o-2024-05-13": 0.005,

View File

@@ -24,6 +24,7 @@ from langchain_core.output_parsers import (
from langchain_core.prompts import BasePromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import Runnable
from langchain_core.utils.pydantic import is_basemodel_subclass
from langchain_community.output_parsers.ernie_functions import (
JsonOutputFunctionsParser,
@@ -94,7 +95,7 @@ def _get_python_function_arguments(function: Callable, arg_descriptions: dict) -
for arg, arg_type in annotations.items():
if arg == "return":
continue
if isinstance(arg_type, type) and issubclass(arg_type, BaseModel):
if isinstance(arg_type, type) and is_basemodel_subclass(arg_type):
# Mypy error:
# "type" has no attribute "schema"
properties[arg] = arg_type.schema() # type: ignore[attr-defined]
@@ -156,7 +157,7 @@ def convert_to_ernie_function(
"""
if isinstance(function, dict):
return function
elif isinstance(function, type) and issubclass(function, BaseModel):
elif isinstance(function, type) and is_basemodel_subclass(function):
return cast(Dict, convert_pydantic_to_ernie_function(function))
elif callable(function):
return convert_python_function_to_ernie_function(function)
@@ -185,7 +186,7 @@ def get_ernie_output_parser(
only the function arguments and not the function name.
"""
function_names = [convert_to_ernie_function(f)["name"] for f in functions]
if isinstance(functions[0], type) and issubclass(functions[0], BaseModel):
if isinstance(functions[0], type) and is_basemodel_subclass(functions[0]):
if len(functions) > 1:
pydantic_schema: Union[Dict, Type[BaseModel]] = {
name: fn for name, fn in zip(function_names, functions)

View File

@@ -311,12 +311,15 @@ class GraphCypherQAChain(Chain):
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
callbacks = _run_manager.get_child()
question = inputs[self.input_key]
args = {
"question": question,
"schema": self.graph_schema,
}
args.update(inputs)
intermediate_steps: List = []
generated_cypher = self.cypher_generation_chain.run(
{"question": question, "schema": self.graph_schema}, callbacks=callbacks
)
generated_cypher = self.cypher_generation_chain.run(args, callbacks=callbacks)
# Extract Cypher code if it is wrapped in backticks
generated_cypher = extract_cypher(generated_cypher)

View File

@@ -124,6 +124,11 @@ class PebbloRetrievalQA(Chain):
),
"doc": doc.page_content,
"vector_db": self.retriever.vectorstore.__class__.__name__,
**(
{"pb_checksum": doc.metadata.get("pb_checksum")}
if doc.metadata.get("pb_checksum")
else {}
),
}
for doc in docs
if isinstance(doc, Document)
@@ -457,25 +462,24 @@ class PebbloRetrievalQA(Chain):
if self.api_key:
if self.classifier_location == "local":
if pebblo_resp:
payload["response"] = (
json.loads(pebblo_resp.text)
.get("retrieval_data", {})
.get("response", {})
)
payload["context"] = (
json.loads(pebblo_resp.text)
.get("retrieval_data", {})
.get("context", [])
)
payload["prompt"] = (
json.loads(pebblo_resp.text)
.get("retrieval_data", {})
.get("prompt", {})
)
resp = json.loads(pebblo_resp.text)
if resp:
payload["response"].update(
resp.get("retrieval_data", {}).get("response", {})
)
payload["response"].pop("data")
payload["prompt"].update(
resp.get("retrieval_data", {}).get("prompt", {})
)
payload["prompt"].pop("data")
context = payload["context"]
for context_data in context:
context_data.pop("doc")
payload["context"] = context
else:
payload["response"] = None
payload["context"] = None
payload["prompt"] = None
payload["response"] = {}
payload["prompt"] = {}
payload["context"] = []
headers.update({"x-api-key": self.api_key})
pebblo_cloud_url = f"{PEBBLO_CLOUD_URL}{PROMPT_URL}"
try:

View File

@@ -129,6 +129,7 @@ class Context(BaseModel):
retrieved_from: Optional[str]
doc: Optional[str]
vector_db: str
pb_checksum: Optional[str]
class Prompt(BaseModel):

View File

@@ -1,6 +1,6 @@
import json
from pathlib import Path
from typing import List
from typing import List, Optional
from langchain_core.chat_history import (
BaseChatMessageHistory,
@@ -11,21 +11,33 @@ from langchain_core.messages import BaseMessage, messages_from_dict, messages_to
class FileChatMessageHistory(BaseChatMessageHistory):
"""Chat message history that stores history in a local file."""
def __init__(self, file_path: str) -> None:
def __init__(
self,
file_path: str,
*,
encoding: Optional[str] = None,
ensure_ascii: bool = True,
) -> None:
"""Initialize the file path for the chat history.
Args:
file_path: The path to the local file to store the chat history.
encoding: The encoding to use for file operations. Defaults to None.
ensure_ascii: If True, escape non-ASCII in JSON. Defaults to True.
"""
self.file_path = Path(file_path)
self.encoding = encoding
self.ensure_ascii = ensure_ascii
if not self.file_path.exists():
self.file_path.touch()
self.file_path.write_text(json.dumps([]))
self.file_path.write_text(
json.dumps([], ensure_ascii=self.ensure_ascii), encoding=self.encoding
)
@property
def messages(self) -> List[BaseMessage]: # type: ignore
"""Retrieve the messages from the local file"""
items = json.loads(self.file_path.read_text())
items = json.loads(self.file_path.read_text(encoding=self.encoding))
messages = messages_from_dict(items)
return messages
@@ -33,8 +45,12 @@ class FileChatMessageHistory(BaseChatMessageHistory):
"""Append the message to the record in the local file"""
messages = messages_to_dict(self.messages)
messages.append(messages_to_dict([message])[0])
self.file_path.write_text(json.dumps(messages))
self.file_path.write_text(
json.dumps(messages, ensure_ascii=self.ensure_ascii), encoding=self.encoding
)
def clear(self) -> None:
"""Clear session memory from the local file"""
self.file_path.write_text(json.dumps([]))
self.file_path.write_text(
json.dumps([], ensure_ascii=self.ensure_ascii), encoding=self.encoding
)
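A short usage sketch of the new parameters, assuming the standard `langchain_community` import path; `ensure_ascii=False` keeps non-ASCII messages human-readable in the JSON file:
```python
from langchain_community.chat_message_histories import FileChatMessageHistory

# Store the history as UTF-8 without escaping non-ASCII characters
history = FileChatMessageHistory(
    "chat_history.json",
    encoding="utf-8",
    ensure_ascii=False,
)
history.add_user_message("你好!")
print(history.messages)
```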

View File

@@ -32,7 +32,6 @@ from sqlalchemy.engine import Engine
from sqlalchemy.ext.asyncio import (
AsyncEngine,
AsyncSession,
async_sessionmaker,
create_async_engine,
)
from sqlalchemy.orm import (
@@ -44,6 +43,12 @@ from sqlalchemy.orm import (
sessionmaker,
)
try:
from sqlalchemy.ext.asyncio import async_sessionmaker
except ImportError:
# dummy for sqlalchemy < 2
async_sessionmaker = type("async_sessionmaker", (type,), {}) # type: ignore
logger = logging.getLogger(__name__)

View File

@@ -106,10 +106,145 @@ async def aconnect_httpx_sse(
class ChatBaichuan(BaseChatModel):
"""Baichuan chat models API by Baichuan Intelligent Technology.
"""Baichuan chat model integration.
For more information, see https://platform.baichuan-ai.com/docs/api
"""
Setup:
To use, you should have the environment variable ``BAICHUAN_API_KEY`` set with
your API key.
.. code-block:: bash
export BAICHUAN_API_KEY="your-api-key"
Key init args — completion params:
model: Optional[str]
Name of Baichuan model to use.
max_tokens: Optional[int]
Max number of tokens to generate.
streaming: Optional[bool]
Whether to stream the results or not.
temperature: Optional[float]
Sampling temperature.
top_p: Optional[float]
What probability mass to use.
top_k: Optional[int]
What search sampling control to use.
Key init args — client params:
api_key: Optional[str]
Baichuan API key. If not passed in, will be read from env var BAICHUAN_API_KEY.
base_url: Optional[str]
Base URL for API requests.
See full list of supported init args and their descriptions in the params section.
Instantiate:
.. code-block:: python
from langchain_community.chat_models import ChatBaichuan
chat = ChatBaichuan(
api_key=api_key,
model='Baichuan4',
# temperature=...,
# other params...
)
Invoke:
.. code-block:: python
messages = [
("system", "你是一名专业的翻译家,可以将用户的中文翻译为英文。"),
("human", "我喜欢编程。"),
]
chat.invoke(messages)
.. code-block:: python
AIMessage(
content='I enjoy programming.',
response_metadata={
'token_usage': {
'prompt_tokens': 93,
'completion_tokens': 5,
'total_tokens': 98
},
'model': 'Baichuan4'
},
id='run-944ff552-6a93-44cf-a861-4e4d849746f9-0'
)
Stream:
.. code-block:: python
for chunk in chat.stream(messages):
print(chunk)
.. code-block:: python
content='I' id='run-f99fcd6f-dd31-46d5-be8f-0b6a22bf77d8'
content=' enjoy programming.' id='run-f99fcd6f-dd31-46d5-be8f-0b6a22bf77d8
.. code-block:: python
stream = chat.stream(messages)
full = next(stream)
for chunk in stream:
full += chunk
full
.. code-block:: python
AIMessageChunk(
content='I like programming.',
id='run-74689970-dc31-461d-b729-3b6aa93508d2'
)
Async:
.. code-block:: python
await chat.ainvoke(messages)
# stream
# async for chunk in chat.astream(messages):
# print(chunk)
# batch
# await chat.abatch([messages])
.. code-block:: python
AIMessage(
content='I enjoy programming.',
response_metadata={
'token_usage': {
'prompt_tokens': 93,
'completion_tokens': 5,
'total_tokens': 98
},
'model': 'Baichuan4'
},
id='run-952509ed-9154-4ff9-b187-e616d7ddfbba-0'
)
Response metadata
.. code-block:: python
ai_msg = chat.invoke(messages)
ai_msg.response_metadata
.. code-block:: python
{
'token_usage': {
'prompt_tokens': 93,
'completion_tokens': 5,
'total_tokens': 98
},
'model': 'Baichuan4'
}
""" # noqa: E501
@property
def lc_secrets(self) -> Dict[str, str]:

View File

@@ -1,3 +1,4 @@
import json
import logging
import uuid
from operator import itemgetter
@@ -39,11 +40,17 @@ from langchain_core.output_parsers.openai_tools import (
PydanticToolsParser,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr, root_validator
from langchain_core.pydantic_v1 import (
BaseModel,
Field,
SecretStr,
root_validator,
)
from langchain_core.runnables import Runnable, RunnableMap, RunnablePassthrough
from langchain_core.tools import BaseTool
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_core.utils.pydantic import is_basemodel_subclass
logger = logging.getLogger(__name__)
@@ -65,7 +72,7 @@ def convert_message_to_dict(message: BaseMessage) -> dict:
elif isinstance(message, (FunctionMessage, ToolMessage)):
message_dict = {
"role": "function",
"content": message.content,
"content": _create_tool_content(message.content),
"name": message.name or message.additional_kwargs.get("name"),
}
else:
@@ -74,6 +81,20 @@ def convert_message_to_dict(message: BaseMessage) -> dict:
return message_dict
def _create_tool_content(content: Union[str, List[Union[str, Dict[Any, Any]]]]) -> str:
"""Convert tool content to dict scheme."""
if isinstance(content, str):
try:
if isinstance(json.loads(content), dict):
return content
else:
return json.dumps({"tool_result": content})
except json.JSONDecodeError:
return json.dumps({"tool_result": content})
else:
return json.dumps({"tool_result": content})
def _convert_dict_to_message(_dict: Mapping[str, Any]) -> AIMessage:
content = _dict.get("result", "") or ""
additional_kwargs: Mapping[str, Any] = {}
@@ -109,9 +130,9 @@ def _convert_dict_to_message(_dict: Mapping[str, Any]) -> AIMessage:
content=content,
additional_kwargs=msg_additional_kwargs,
usage_metadata=UsageMetadata(
input_tokens=usage.prompt_tokens,
output_tokens=usage.completion_tokens,
total_tokens=usage.total_tokens,
input_tokens=usage.get("prompt_tokens", 0),
output_tokens=usage.get("completion_tokens", 0),
total_tokens=usage.get("total_tokens", 0),
),
)
@@ -340,13 +361,13 @@ class QianfanChatEndpoint(BaseChatModel):
In the case of other model, passing these params will not affect the result.
"""
model: str = "ERNIE-Bot-turbo"
model: str = "ERNIE-Lite-8K"
"""Model name.
you could get from https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu
preset models are mapping to an endpoint.
`model` will be ignored if `endpoint` is set.
Default is ERNIE-Bot-turbo.
Default is ERNIE-Lite-8K.
"""
endpoint: Optional[str] = None
@@ -754,7 +775,7 @@ class QianfanChatEndpoint(BaseChatModel):
""" # noqa: E501
if kwargs:
raise ValueError(f"Received unsupported arguments {kwargs}")
is_pydantic_schema = isinstance(schema, type) and issubclass(schema, BaseModel)
is_pydantic_schema = isinstance(schema, type) and is_basemodel_subclass(schema)
llm = self.bind_tools([schema])
if is_pydantic_schema:
output_parser: OutputParserLike = PydanticToolsParser(

View File

@@ -57,6 +57,7 @@ from langchain_core.runnables import Runnable, RunnableMap, RunnablePassthrough
from langchain_core.tools import BaseTool
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_core.utils.pydantic import is_basemodel_subclass
from langchain_community.utilities.requests import Requests
@@ -443,7 +444,7 @@ class ChatEdenAI(BaseChatModel):
if kwargs:
raise ValueError(f"Received unsupported arguments {kwargs}")
llm = self.bind_tools([schema], tool_choice="required")
if isinstance(schema, type) and issubclass(schema, BaseModel):
if isinstance(schema, type) and is_basemodel_subclass(schema):
output_parser: OutputParserLike = PydanticToolsParser(
tools=[schema], first_tool_only=True
)
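This is one of several places in the changeset that swap `issubclass(..., BaseModel)` for `is_basemodel_subclass`, so that schemas defined against either the `langchain_core.pydantic_v1` shim or a native pydantic 2 `BaseModel` are recognized. A minimal sketch of the difference, assuming pydantic 2 is installed:
```python
from pydantic import BaseModel  # pydantic 2 namespace
from langchain_core.pydantic_v1 import BaseModel as BaseModelV1
from langchain_core.utils.pydantic import is_basemodel_subclass


class Point(BaseModel):
    x: int
    y: int


# A pydantic 2 model is not a subclass of the v1 shim, so the old check misses it;
# the helper accepts models from either namespace.
print(issubclass(Point, BaseModelV1))  # False
print(is_basemodel_subclass(Point))  # True
```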

View File

@@ -8,7 +8,7 @@ from typing import TYPE_CHECKING, Dict, Optional, Set
from langchain_core.messages import BaseMessage
from langchain_core.pydantic_v1 import Field, root_validator
from langchain_core.utils import get_from_dict_or_env
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
from langchain_community.adapters.openai import convert_message_to_dict
from langchain_community.chat_models.openai import (
@@ -79,10 +79,12 @@ class ChatEverlyAI(ChatOpenAI):
@root_validator(pre=True)
def validate_environment_override(cls, values: dict) -> dict:
"""Validate that api key and python package exists in environment."""
values["openai_api_key"] = get_from_dict_or_env(
values,
"everlyai_api_key",
"EVERLYAI_API_KEY",
values["openai_api_key"] = convert_to_secret_str(
get_from_dict_or_env(
values,
"everlyai_api_key",
"EVERLYAI_API_KEY",
)
)
values["openai_api_base"] = DEFAULT_API_BASE

View File

@@ -46,10 +46,15 @@ from langchain_core.output_parsers.openai_tools import (
parse_tool_call,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import BaseModel, Field, root_validator
from langchain_core.pydantic_v1 import (
BaseModel,
Field,
root_validator,
)
from langchain_core.runnables import Runnable, RunnableMap, RunnablePassthrough
from langchain_core.tools import BaseTool
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_core.utils.pydantic import is_basemodel_subclass
class ChatLlamaCpp(BaseChatModel):
@@ -525,7 +530,7 @@ class ChatLlamaCpp(BaseChatModel):
if kwargs:
raise ValueError(f"Received unsupported arguments {kwargs}")
is_pydantic_schema = isinstance(schema, type) and issubclass(schema, BaseModel)
is_pydantic_schema = isinstance(schema, type) and is_basemodel_subclass(schema)
if schema is None:
raise ValueError(
"schema must be specified when method is 'function_calling'. "

View File

@@ -92,34 +92,116 @@ def _convert_delta_to_message_chunk(
class ChatSparkLLM(BaseChatModel):
"""iFlyTek Spark large language model.
"""IFlyTek Spark chat model integration.
To use, you should pass `app_id`, `api_key`, `api_secret`
as a named parameter to the constructor OR set environment
variables ``IFLYTEK_SPARK_APP_ID``, ``IFLYTEK_SPARK_API_KEY`` and
``IFLYTEK_SPARK_API_SECRET``
Setup:
To use, you should have the environment variables ``IFLYTEK_SPARK_API_KEY``,
``IFLYTEK_SPARK_API_SECRET`` and ``IFLYTEK_SPARK_APP_ID`` set.
Example:
Key init args — completion params:
model: Optional[str]
Name of IFLYTEK SPARK model to use.
temperature: Optional[float]
Sampling temperature.
top_k: Optional[float]
What search sampling control to use.
streaming: Optional[bool]
Whether to stream the results or not.
Key init args — client params:
api_key: Optional[str]
IFLYTEK SPARK API KEY. If not passed in will be read from env var IFLYTEK_SPARK_API_KEY.
api_secret: Optional[str]
IFLYTEK SPARK API SECRET. If not passed in will be read from env var IFLYTEK_SPARK_API_SECRET.
api_url: Optional[str]
Base URL for API requests.
timeout: Optional[int]
Timeout for requests.
See full list of supported init args and their descriptions in the params section.
Instantiate:
.. code-block:: python
client = ChatSparkLLM(
spark_app_id="<app_id>",
spark_api_key="<api_key>",
spark_api_secret="<api_secret>"
)
from langchain_community.chat_models import ChatSparkLLM
Extra info:
1. Get app_id, api_key, api_secret from the iFlyTek Open Platform Console:
https://console.xfyun.cn/services/bm35
2. By default, iFlyTek Spark LLM V3.5 is invoked.
If you need to invoke other versions, please configure the corresponding
parameters (spark_api_url and spark_llm_domain) according to the documentation:
https://www.xfyun.cn/doc/spark/Web.html
3. It is necessary to ensure that the app_id used has a license for
the corresponding model version.
4. If you encounter problems during use, try getting help at:
https://console.xfyun.cn/workorder/commit
"""
chat = ChatSparkLLM(
api_key=api_key,
api_secret=ak,
model='Spark4.0 Ultra',
# temperature=...,
# other params...
)
Invoke:
.. code-block:: python
messages = [
("system", "你是一名专业的翻译家,可以将用户的中文翻译为英文。"),
("human", "我喜欢编程。"),
]
chat.invoke(messages)
.. code-block:: python
AIMessage(
content='I like programming.',
response_metadata={
'token_usage': {
'question_tokens': 3,
'prompt_tokens': 16,
'completion_tokens': 4,
'total_tokens': 20
}
},
id='run-af8b3531-7bf7-47f0-bfe8-9262cb2a9d47-0'
)
Stream:
.. code-block:: python
for chunk in chat.stream(messages):
print(chunk)
.. code-block:: python
content='I' id='run-fdbb57c2-2d32-4516-b894-6c5a67605d83'
content=' like programming' id='run-fdbb57c2-2d32-4516-b894-6c5a67605d83'
content='.' id='run-fdbb57c2-2d32-4516-b894-6c5a67605d83'
.. code-block:: python
stream = chat.stream(messages)
full = next(stream)
for chunk in stream:
full += chunk
full
.. code-block:: python
AIMessageChunk(
content='I like programming.',
id='run-aca2fa82-c2e4-4835-b7e2-865ddd3c46cb'
)
Response metadata
.. code-block:: python
ai_msg = chat.invoke(messages)
ai_msg.response_metadata
.. code-block:: python
{
'token_usage': {
'question_tokens': 3,
'prompt_tokens': 16,
'completion_tokens': 4,
'total_tokens': 20
}
}
""" # noqa: E501
@classmethod
def is_lc_serializable(cls) -> bool:
@@ -257,7 +339,7 @@ class ChatSparkLLM(BaseChatModel):
[_convert_message_to_dict(m) for m in messages],
self.spark_user_id,
self.model_kwargs,
self.streaming,
streaming=True,
)
for content in self.client.subscribe(timeout=self.request_timeout):
if "data" not in content:
@@ -274,9 +356,10 @@ class ChatSparkLLM(BaseChatModel):
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
stream: Optional[bool] = None,
**kwargs: Any,
) -> ChatResult:
if self.streaming:
if stream or self.streaming:
stream_iter = self._stream(
messages=messages, stop=stop, run_manager=run_manager, **kwargs
)

View File

@@ -53,11 +53,16 @@ from langchain_core.outputs import (
ChatGenerationChunk,
ChatResult,
)
from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr
from langchain_core.pydantic_v1 import (
BaseModel,
Field,
SecretStr,
)
from langchain_core.runnables import Runnable, RunnableMap, RunnablePassthrough
from langchain_core.tools import BaseTool
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_core.utils.pydantic import is_basemodel_subclass
from requests.exceptions import HTTPError
from tenacity import (
before_sleep_log,
@@ -865,7 +870,7 @@ class ChatTongyi(BaseChatModel):
"""
if kwargs:
raise ValueError(f"Received unsupported arguments {kwargs}")
is_pydantic_schema = isinstance(schema, type) and issubclass(schema, BaseModel)
is_pydantic_schema = isinstance(schema, type) and is_basemodel_subclass(schema)
llm = self.bind_tools([schema])
if is_pydantic_schema:
output_parser: OutputParserLike = PydanticToolsParser(

View File

@@ -41,6 +41,7 @@ class BlackboardLoader(WebBaseLoader):
basic_auth: Optional[Tuple[str, str]] = None,
cookies: Optional[dict] = None,
continue_on_failure: bool = False,
show_progress: bool = True,
):
"""Initialize with blackboard course url.
@@ -56,12 +57,15 @@ class BlackboardLoader(WebBaseLoader):
occurs loading a url, emitting a warning instead of raising an
exception. Setting this to True makes the loader more robust, but also
may result in missing data. Default: False
show_progress: whether to show a progress bar while loading. Default: True
Raises:
ValueError: If blackboard course url is invalid.
"""
super().__init__(
web_paths=(blackboard_course_url), continue_on_failure=continue_on_failure
web_paths=(blackboard_course_url),
continue_on_failure=continue_on_failure,
show_progress=show_progress,
)
# Get base url
try:

View File

@@ -20,6 +20,7 @@ class GitbookLoader(WebBaseLoader):
base_url: Optional[str] = None,
content_selector: str = "main",
continue_on_failure: bool = False,
show_progress: bool = True,
):
"""Initialize with web page and whether to load all paths.
@@ -36,6 +37,7 @@ class GitbookLoader(WebBaseLoader):
occurs loading a url, emitting a warning instead of raising an
exception. Setting this to True makes the loader more robust, but also
may result in missing data. Default: False
show_progress: whether to show a progress bar while loading. Default: True
"""
self.base_url = base_url or web_page
if self.base_url.endswith("/"):
@@ -43,7 +45,11 @@ class GitbookLoader(WebBaseLoader):
if load_all_paths:
# set web_path to the sitemap if we want to crawl all paths
web_page = f"{self.base_url}/sitemap.xml"
super().__init__(web_paths=(web_page,), continue_on_failure=continue_on_failure)
super().__init__(
web_paths=(web_page,),
continue_on_failure=continue_on_failure,
show_progress=show_progress,
)
self.load_all_paths = load_all_paths
self.content_selector = content_selector
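A brief usage sketch of the new flag; the URL is a placeholder, and `BlackboardLoader` gains the same keyword in the diff above:
```python
from langchain_community.document_loaders import GitbookLoader

# Crawl every page listed in the sitemap without printing a progress bar
loader = GitbookLoader(
    "https://docs.gitbook.com",
    load_all_paths=True,
    show_progress=False,
)
docs = loader.load()
```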

View File

@@ -7,7 +7,6 @@
# 4. For service accounts visit
# https://cloud.google.com/iam/docs/service-accounts-create
import os
from pathlib import Path
from typing import Any, Dict, List, Optional, Sequence, Union
@@ -108,7 +107,13 @@ class GoogleDriveLoader(BaseLoader, BaseModel):
return v
def _load_credentials(self) -> Any:
"""Load credentials."""
"""Load credentials.
The order of loading credentials:
1. Service account key if file exists
2. Token path (for OAuth Client) if file exists
3. Credentials path (for OAuth Client) if file exists
4. Default credentials. if no credentials found, raise DefaultCredentialsError
"""
# Adapted from https://developers.google.com/drive/api/v3/quickstart/python
try:
from google.auth import default
@@ -126,30 +131,31 @@ class GoogleDriveLoader(BaseLoader, BaseModel):
)
creds = None
# From service account
if self.service_account_key.exists():
return service_account.Credentials.from_service_account_file(
str(self.service_account_key), scopes=SCOPES
)
# From Oauth Client
if self.token_path.exists():
creds = Credentials.from_authorized_user_file(str(self.token_path), SCOPES)
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
elif "GOOGLE_APPLICATION_CREDENTIALS" not in os.environ:
creds, project = default()
creds = creds.with_scopes(SCOPES)
# no need to write to file
if creds:
return creds
else:
elif self.credentials_path.exists():
flow = InstalledAppFlow.from_client_secrets_file(
str(self.credentials_path), SCOPES
)
creds = flow.run_local_server(port=0)
with open(self.token_path, "w") as token:
token.write(creds.to_json())
if creds:
with open(self.token_path, "w") as token:
token.write(creds.to_json())
# From Application Default Credentials
if not creds:
creds, _ = default(scopes=SCOPES)
return creds
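A minimal sketch of the first branch in that resolution order (a service account key); the folder id and key path are placeholders for illustration:
```python
from langchain_community.document_loaders import GoogleDriveLoader

# With a service account key present, the loader authenticates with it and
# never falls through to the OAuth or default-credentials branches.
loader = GoogleDriveLoader(
    folder_id="<your-folder-id>",
    service_account_key="keys/service_account.json",
)
docs = loader.load()
```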

View File

@@ -6,6 +6,7 @@ import warnings
from typing import (
TYPE_CHECKING,
Any,
Dict,
Iterable,
Iterator,
Mapping,
@@ -27,6 +28,7 @@ if TYPE_CHECKING:
import pdfplumber.page
import pypdf._page
import pypdfium2._helpers.page
from pypdf import PageObject
from textractor.data.text_linearization_config import TextLinearizationConfig
@@ -83,10 +85,17 @@ class PyPDFParser(BaseBlobParser):
"""Load `PDF` using `pypdf`"""
def __init__(
self, password: Optional[Union[str, bytes]] = None, extract_images: bool = False
self,
password: Optional[Union[str, bytes]] = None,
extract_images: bool = False,
*,
extraction_mode: str = "plain",
extraction_kwargs: Optional[Dict[str, Any]] = None,
):
self.password = password
self.extract_images = extract_images
self.extraction_mode = extraction_mode
self.extraction_kwargs = extraction_kwargs or {}
def lazy_parse(self, blob: Blob) -> Iterator[Document]: # type: ignore[valid-type]
"""Lazily parse the blob."""
@@ -98,11 +107,23 @@ class PyPDFParser(BaseBlobParser):
"`pip install pypdf`"
)
def _extract_text_from_page(page: "PageObject") -> str:
"""
Extract text from image given the version of pypdf.
"""
if pypdf.__version__.startswith("3"):
return page.extract_text()
else:
return page.extract_text(
extraction_mode=self.extraction_mode, **self.extraction_kwargs
)
with blob.as_bytes_io() as pdf_file_obj: # type: ignore[attr-defined]
pdf_reader = pypdf.PdfReader(pdf_file_obj, password=self.password)
yield from [
Document(
page_content=page.extract_text()
page_content=_extract_text_from_page(page=page)
+ self._extract_images_from_page(page),
metadata={"source": blob.source, "page": page_number}, # type: ignore[attr-defined]
)

View File

@@ -171,6 +171,9 @@ class PyPDFLoader(BasePDFLoader):
password: Optional[Union[str, bytes]] = None,
headers: Optional[Dict] = None,
extract_images: bool = False,
*,
extraction_mode: str = "plain",
extraction_kwargs: Optional[Dict] = None,
) -> None:
"""Initialize with a file path."""
try:
@@ -180,7 +183,12 @@ class PyPDFLoader(BasePDFLoader):
"pypdf package not found, please install it with " "`pip install pypdf`"
)
super().__init__(file_path, headers=headers)
self.parser = PyPDFParser(password=password, extract_images=extract_images)
self.parser = PyPDFParser(
password=password,
extract_images=extract_images,
extraction_mode=extraction_mode,
extraction_kwargs=extraction_kwargs,
)
def lazy_load(
self,
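A short sketch of the new keyword on the loader; `"layout"` is the pypdf extraction mode that preserves positional whitespace (only forwarded on newer pypdf versions per the version check above), and the file path is a placeholder:
```python
from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader(
    "example.pdf",
    extraction_mode="layout",  # passed through to pypdf's extract_text when supported
)
pages = loader.load()
print(pages[0].page_content[:200])
```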

View File

@@ -5,7 +5,7 @@ import logging
import os
import uuid
from http import HTTPStatus
from typing import Any, Dict, Iterator, List, Optional, Union
from typing import Any, Dict, Iterator, List, Optional
import requests # type: ignore
from langchain_core.documents import Document
@@ -61,7 +61,7 @@ class PebbloSafeLoader(BaseLoader):
self.source_path = get_loader_full_path(self.loader)
self.source_owner = PebbloSafeLoader.get_file_owner_from_path(self.source_path)
self.docs: List[Document] = []
self.docs_with_id: Union[List[IndexedDocument], List[Document], List] = []
self.docs_with_id: List[IndexedDocument] = []
loader_name = str(type(self.loader)).split(".")[-1].split("'")[0]
self.source_type = get_loader_type(loader_name)
self.source_path_size = self.get_source_size(self.source_path)
@@ -89,17 +89,13 @@ class PebbloSafeLoader(BaseLoader):
list: Documents fetched from load method of the wrapped `loader`.
"""
self.docs = self.loader.load()
# Add pebblo-specific metadata to docs
self._add_pebblo_specific_metadata()
if not self.load_semantic:
self._classify_doc(self.docs, loading_end=True)
return self.docs
self.docs_with_id = self._index_docs()
classified_docs = self._classify_doc(self.docs_with_id, loading_end=True)
self.docs_with_id = self._add_semantic_to_docs(
self.docs_with_id, classified_docs
)
self.docs = self._unindex_docs(self.docs_with_id) # type: ignore
classified_docs = self._classify_doc(loading_end=True)
self._add_pebblo_specific_metadata(classified_docs)
if self.load_semantic:
self.docs = self._add_semantic_to_docs(classified_docs)
else:
self.docs = self._unindex_docs() # type: ignore
return self.docs
def lazy_load(self) -> Iterator[Document]:
@@ -125,19 +121,14 @@ class PebbloSafeLoader(BaseLoader):
self.docs = []
break
self.docs = list((doc,))
# Add pebblo-specific metadata to docs
self._add_pebblo_specific_metadata()
if not self.load_semantic:
self._classify_doc(self.docs, loading_end=True)
yield self.docs[0]
self.docs_with_id = self._index_docs()
classified_doc = self._classify_doc()
self._add_pebblo_specific_metadata(classified_doc)
if self.load_semantic:
self.docs = self._add_semantic_to_docs(classified_doc)
else:
self.docs_with_id = self._index_docs()
classified_doc = self._classify_doc(self.docs)
self.docs_with_id = self._add_semantic_to_docs(
self.docs_with_id, classified_doc
)
self.docs = self._unindex_docs(self.docs_with_id) # type: ignore
yield self.docs[0]
self.docs = self._unindex_docs()
yield self.docs[0]
@classmethod
def set_discover_sent(cls) -> None:
@@ -147,13 +138,12 @@ class PebbloSafeLoader(BaseLoader):
def set_loader_sent(cls) -> None:
cls._loader_sent = True
def _classify_doc(self, loaded_docs: list, loading_end: bool = False) -> list:
def _classify_doc(self, loading_end: bool = False) -> dict:
"""Send documents fetched from loader to pebblo-server. Then send
classified documents to Daxa cloud(If api_key is present). Internal method.
Args:
loaded_docs (list): List of documents fetched from loader's load operation.
loading_end (bool, optional): Flag indicating the halt of data
loading by loader. Defaults to False.
"""
@@ -163,9 +153,8 @@ class PebbloSafeLoader(BaseLoader):
}
if loading_end is True:
PebbloSafeLoader.set_loader_sent()
doc_content = [doc.dict() for doc in loaded_docs]
doc_content = [doc.dict() for doc in self.docs_with_id]
docs = []
classified_docs = []
for doc in doc_content:
doc_metadata = doc.get("metadata", {})
doc_authorized_identities = doc_metadata.get("authorized_identities", [])
@@ -183,12 +172,12 @@ class PebbloSafeLoader(BaseLoader):
page_content = str(doc.get("page_content"))
page_content_size = self.calculate_content_size(page_content)
self.source_aggregate_size += page_content_size
doc_id = doc.get("id", None) or 0
doc_id = doc.get("pb_id", None) or 0
docs.append(
{
"doc": page_content,
"source_path": doc_source_path,
"id": doc_id,
"pb_id": doc_id,
"last_modified": doc.get("metadata", {}).get("last_modified"),
"file_owner": doc_source_owner,
**(
@@ -221,6 +210,7 @@ class PebbloSafeLoader(BaseLoader):
self.source_aggregate_size
)
payload = Doc(**payload).dict(exclude_unset=True)
classified_docs = {}
# Raw payload to be sent to classifier
if self.classifier_location == "local":
load_doc_url = f"{self.classifier_url}{LOADER_DOC_URL}"
@@ -228,7 +218,10 @@ class PebbloSafeLoader(BaseLoader):
pebblo_resp = requests.post(
load_doc_url, headers=headers, json=payload, timeout=300
)
classified_docs = json.loads(pebblo_resp.text).get("docs", None)
# Updating the structure of pebblo response docs for efficient searching
for classified_doc in json.loads(pebblo_resp.text).get("docs", []):
classified_docs.update({classified_doc["pb_id"]: classified_doc})
if pebblo_resp.status_code not in [
HTTPStatus.OK,
HTTPStatus.BAD_GATEWAY,
@@ -257,7 +250,21 @@ class PebbloSafeLoader(BaseLoader):
if self.api_key:
if self.classifier_location == "local":
payload["docs"] = classified_docs
docs = payload["docs"]
for doc_data in docs:
classified_data = classified_docs.get(doc_data["pb_id"], {})
doc_data.update(
{
"pb_checksum": classified_data.get("pb_checksum", None),
"loader_source_path": classified_data.get(
"loader_source_path", None
),
"entities": classified_data.get("entities", {}),
"topics": classified_data.get("topics", {}),
}
)
doc_data.pop("doc")
headers.update({"x-api-key": self.api_key})
pebblo_cloud_url = f"{PEBBLO_CLOUD_URL}{LOADER_DOC_URL}"
try:
@@ -453,33 +460,29 @@ class PebbloSafeLoader(BaseLoader):
List[IndexedDocument]: A list of IndexedDocument objects with unique IDs.
"""
docs_with_id = [
IndexedDocument(id=hex(i)[2:], **doc.dict())
IndexedDocument(pb_id=str(i), **doc.dict())
for i, doc in enumerate(self.docs)
]
return docs_with_id
def _add_semantic_to_docs(
self, docs_with_id: List[IndexedDocument], classified_docs: List[dict]
) -> List[Document]:
def _add_semantic_to_docs(self, classified_docs: Dict) -> List[Document]:
"""
Adds semantic metadata to the given list of documents.
Args:
docs_with_id (List[IndexedDocument]): A list of IndexedDocument objects
containing the documents with their IDs.
classified_docs (List[dict]): A list of dictionaries containing the
classified documents.
classified_docs (Dict): A dictionary of dictionaries containing the
classified documents with pb_id as key.
Returns:
List[Document]: A list of Document objects with added semantic metadata.
"""
indexed_docs = {
doc.id: Document(page_content=doc.page_content, metadata=doc.metadata)
for doc in docs_with_id
doc.pb_id: Document(page_content=doc.page_content, metadata=doc.metadata)
for doc in self.docs_with_id
}
for classified_doc in classified_docs:
doc_id = classified_doc.get("id")
for classified_doc in classified_docs.values():
doc_id = classified_doc.get("pb_id")
if doc_id in indexed_docs:
self._add_semantic_to_doc(indexed_docs[doc_id], classified_doc)
@@ -487,19 +490,16 @@ class PebbloSafeLoader(BaseLoader):
return semantic_metadata_docs
def _unindex_docs(self, docs_with_id: List[IndexedDocument]) -> List[Document]:
def _unindex_docs(self) -> List[Document]:
"""
Converts a list of IndexedDocument objects to a list of Document objects.
Args:
docs_with_id (List[IndexedDocument]): A list of IndexedDocument objects.
Returns:
List[Document]: A list of Document objects.
"""
docs = [
Document(page_content=doc.page_content, metadata=doc.metadata)
for i, doc in enumerate(docs_with_id)
for i, doc in enumerate(self.docs_with_id)
]
return docs
@@ -522,12 +522,16 @@ class PebbloSafeLoader(BaseLoader):
)
return doc
def _add_pebblo_specific_metadata(self) -> None:
def _add_pebblo_specific_metadata(self, classified_docs: dict) -> None:
"""Add Pebblo specific metadata to documents."""
for doc in self.docs:
for doc in self.docs_with_id:
doc_metadata = doc.metadata
doc_metadata["full_path"] = get_full_path(
doc_metadata.get(
"full_path", doc_metadata.get("source", self.source_path)
)
)
doc_metadata["pb_id"] = doc.pb_id
doc_metadata["pb_checksum"] = classified_docs.get(doc.pb_id, {}).get(
"pb_checksum", None
)

View File

@@ -58,6 +58,8 @@ class WebBaseLoader(BaseLoader):
bs_get_text_kwargs: Optional[Dict[str, Any]] = None,
bs_kwargs: Optional[Dict[str, Any]] = None,
session: Any = None,
*,
show_progress: bool = True,
) -> None:
"""Initialize loader.
@@ -69,6 +71,7 @@ class WebBaseLoader(BaseLoader):
raise_for_status: Raise an exception if http status code denotes an error.
bs_get_text_kwargs: kwargs for beatifulsoup4 get_text
bs_kwargs: kwargs for beatifulsoup4 web page parsing
show_progress: Show progress bar when loading pages.
"""
# web_path kept for backwards-compatibility.
if web_path and web_paths:
@@ -91,6 +94,7 @@ class WebBaseLoader(BaseLoader):
self.default_parser = default_parser
self.requests_kwargs = requests_kwargs or {}
self.raise_for_status = raise_for_status
self.show_progress = show_progress
self.bs_get_text_kwargs = bs_get_text_kwargs or {}
self.bs_kwargs = bs_kwargs or {}
if session:
@@ -177,11 +181,14 @@ class WebBaseLoader(BaseLoader):
task = asyncio.ensure_future(self._fetch_with_rate_limit(url, semaphore))
tasks.append(task)
try:
from tqdm.asyncio import tqdm_asyncio
if self.show_progress:
from tqdm.asyncio import tqdm_asyncio
return await tqdm_asyncio.gather(
*tasks, desc="Fetching pages", ascii=True, mininterval=1
)
return await tqdm_asyncio.gather(
*tasks, desc="Fetching pages", ascii=True, mininterval=1
)
else:
return await asyncio.gather(*tasks)
except ImportError:
warnings.warn("For better logging of progress, `pip install tqdm`")
return await asyncio.gather(*tasks)
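A brief sketch of the flag on the base loader itself; when disabled, pages are gathered with plain `asyncio.gather` instead of the tqdm wrapper shown above:
```python
from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader(
    web_paths=["https://example.com", "https://example.org"],
    show_progress=False,  # keyword-only; defaults to True
)
docs = loader.aload()  # the async fetch path where the progress bar would otherwise appear
```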

View File

@@ -7,6 +7,7 @@ from enum import Enum
from pathlib import Path
from typing import Any, Dict, Generator, List, Optional, Sequence, Union
from urllib.parse import parse_qs, urlparse
from xml.etree.ElementTree import ParseError # OK: trusted-source
from langchain_core.documents import Document
from langchain_core.pydantic_v1 import root_validator
@@ -28,6 +29,8 @@ class GoogleApiClient:
As the google api expects credentials you need to set up a google account and
register your Service. "https://developers.google.com/docs/api/quickstart/python"
*Security Note*: Note that parsing of the transcripts relies on the standard
xml library but the input is viewed as trusted in this case.
Example:
@@ -437,6 +440,14 @@ class GoogleApiYoutubeLoader(BaseLoader):
channel_id = response["items"][0]["id"]["channelId"]
return channel_id
def _get_uploads_playlist_id(self, channel_id: str) -> str:
request = self.youtube_client.channels().list(
part="contentDetails",
id=channel_id,
)
response = request.execute()
return response["items"][0]["contentDetails"]["relatedPlaylists"]["uploads"]
def _get_document_for_channel(self, channel: str, **kwargs: Any) -> List[Document]:
try:
from youtube_transcript_api import (
@@ -452,10 +463,11 @@ class GoogleApiYoutubeLoader(BaseLoader):
)
channel_id = self._get_channel_id(channel)
request = self.youtube_client.search().list(
uploads_playlist_id = self._get_uploads_playlist_id(channel_id)
request = self.youtube_client.playlistItems().list(
part="id,snippet",
channelId=channel_id,
maxResults=50, # adjust this value to retrieve more or fewer videos
playlistId=uploads_playlist_id,
maxResults=50,
)
video_ids = []
while request is not None:
@@ -463,23 +475,20 @@ class GoogleApiYoutubeLoader(BaseLoader):
# Add each video ID to the list
for item in response["items"]:
if not item["id"].get("videoId"):
continue
meta_data = {"videoId": item["id"]["videoId"]}
video_id = item["snippet"]["resourceId"]["videoId"]
meta_data = {"videoId": video_id}
if self.add_video_info:
item["snippet"].pop("thumbnails")
meta_data.update(item["snippet"])
try:
page_content = self._get_transcripe_for_video_id(
item["id"]["videoId"]
)
page_content = self._get_transcripe_for_video_id(video_id)
video_ids.append(
Document(
page_content=page_content,
metadata=meta_data,
)
)
except (TranscriptsDisabled, NoTranscriptFound) as e:
except (TranscriptsDisabled, NoTranscriptFound, ParseError) as e:
if self.continue_on_failure:
logger.error(
"Error fetching transscript "

View File

@@ -213,6 +213,9 @@ if TYPE_CHECKING:
from langchain_community.embeddings.tensorflow_hub import (
TensorflowHubEmbeddings,
)
from langchain_community.embeddings.textembed import (
TextEmbedEmbeddings,
)
from langchain_community.embeddings.titan_takeoff import (
TitanTakeoffEmbed,
)
@@ -308,6 +311,7 @@ __all__ = [
"SpacyEmbeddings",
"SparkLLMTextEmbeddings",
"TensorflowHubEmbeddings",
"TextEmbedEmbeddings",
"TitanTakeoffEmbed",
"VertexAIEmbeddings",
"VolcanoEmbeddings",
@@ -392,6 +396,7 @@ _module_lookup = {
"VolcanoEmbeddings": "langchain_community.embeddings.volcengine",
"VoyageEmbeddings": "langchain_community.embeddings.voyageai",
"XinferenceEmbeddings": "langchain_community.embeddings.xinference",
"TextEmbedEmbeddings": "langchain_community.embeddings.textembed",
"TitanTakeoffEmbed": "langchain_community.embeddings.titan_takeoff",
"PremAIEmbeddings": "langchain_community.embeddings.premai",
"YandexGPTEmbeddings": "langchain_community.embeddings.yandex",

View File

@@ -0,0 +1,356 @@
"""
TextEmbed: Embedding Inference Server
TextEmbed provides a high-throughput, low-latency solution for serving embeddings.
It supports various sentence-transformer models.
Now, it includes the ability to deploy image embedding models.
TextEmbed offers flexibility and scalability for diverse applications.
TextEmbed is maintained by Keval Dekivadiya and is licensed under the Apache-2.0 license.
""" # noqa: E501
import asyncio
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Callable, Dict, List, Optional, Tuple, Union
import aiohttp
import numpy as np
import requests
from langchain_core.embeddings import Embeddings
from langchain_core.pydantic_v1 import BaseModel, Extra, root_validator
from langchain_core.utils import get_from_dict_or_env
__all__ = ["TextEmbedEmbeddings"]
class TextEmbedEmbeddings(BaseModel, Embeddings):
"""
A class to handle embedding requests to the TextEmbed API.
Attributes:
model : The TextEmbed model ID to use for embeddings.
api_url : The base URL for the TextEmbed API.
api_key : The API key for authenticating with the TextEmbed API.
client : The TextEmbed client instance.
Example:
.. code-block:: python
from langchain_community.embeddings import TextEmbedEmbeddings
embeddings = TextEmbedEmbeddings(
model="sentence-transformers/clip-ViT-B-32",
api_url="http://localhost:8000/v1",
api_key="<API_KEY>"
)
For more information: https://github.com/kevaldekivadiya2415/textembed/blob/main/docs/setup.md
""" # noqa: E501
model: str
"""Underlying TextEmbed model id."""
api_url: str = "http://localhost:8000/v1"
"""Endpoint URL to use."""
api_key: str = "None"
"""API Key for authentication"""
client: Any = None
"""TextEmbed client."""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@root_validator(pre=False, skip_on_failure=True)
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and URL exist in the environment.
Args:
values (Dict): Dictionary of values to validate.
Returns:
Dict: Validated values.
"""
values["api_url"] = get_from_dict_or_env(values, "api_url", "API_URL")
values["api_key"] = get_from_dict_or_env(values, "api_key", "API_KEY")
values["client"] = AsyncOpenAITextEmbedEmbeddingClient(
host=values["api_url"], api_key=values["api_key"]
)
return values
def embed_documents(self, texts: List[str]) -> List[List[float]]:
"""Call out to TextEmbed's embedding endpoint.
Args:
texts (List[str]): The list of texts to embed.
Returns:
List[List[float]]: List of embeddings, one for each text.
"""
embeddings = self.client.embed(
model=self.model,
texts=texts,
)
return embeddings
async def aembed_documents(self, texts: List[str]) -> List[List[float]]:
"""Async call out to TextEmbed's embedding endpoint.
Args:
texts (List[str]): The list of texts to embed.
Returns:
List[List[float]]: List of embeddings, one for each text.
"""
embeddings = await self.client.aembed(
model=self.model,
texts=texts,
)
return embeddings
def embed_query(self, text: str) -> List[float]:
"""Call out to TextEmbed's embedding endpoint for a single query.
Args:
text (str): The text to embed.
Returns:
List[float]: Embeddings for the text.
"""
return self.embed_documents([text])[0]
async def aembed_query(self, text: str) -> List[float]:
"""Async call out to TextEmbed's embedding endpoint for a single query.
Args:
text (str): The text to embed.
Returns:
List[float]: Embeddings for the text.
"""
embeddings = await self.aembed_documents([text])
return embeddings[0]
class AsyncOpenAITextEmbedEmbeddingClient:
"""
A client to handle synchronous and asynchronous requests to the TextEmbed API.
Attributes:
host (str): The base URL for the TextEmbed API.
api_key (str): The API key for authenticating with the TextEmbed API.
aiosession (Optional[aiohttp.ClientSession]): The aiohttp session for async requests.
_batch_size (int): Maximum batch size for a single request.
""" # noqa: E501
def __init__(
self,
host: str = "http://localhost:8000/v1",
api_key: Union[str, None] = None,
aiosession: Optional[aiohttp.ClientSession] = None,
) -> None:
self.host = host
self.api_key = api_key
self.aiosession = aiosession
if self.host is None or len(self.host) < 3:
raise ValueError("Parameter `host` must be set to a valid URL")
self._batch_size = 256
@staticmethod
def _permute(
texts: List[str], sorter: Callable = len
) -> Tuple[List[str], Callable]:
"""
Sorts texts by length in descending order and provides a function to restore the original order.
Args:
texts (List[str]): List of texts to sort.
sorter (Callable, optional): Sorting function, defaults to length.
Returns:
Tuple[List[str], Callable]: Sorted texts and a function to restore original order.
""" # noqa: E501
if len(texts) == 1:
return texts, lambda t: t
length_sorted_idx = np.argsort([-sorter(sen) for sen in texts])
texts_sorted = [texts[idx] for idx in length_sorted_idx]
return texts_sorted, lambda unsorted_embeddings: [
unsorted_embeddings[idx] for idx in np.argsort(length_sorted_idx)
]
def _batch(self, texts: List[str]) -> List[List[str]]:
"""
Splits a list of texts into batches of size max `self._batch_size`.
Args:
texts (List[str]): List of texts to split.
Returns:
List[List[str]]: List of batches of texts.
"""
if len(texts) == 1:
return [texts]
batches = []
for start_index in range(0, len(texts), self._batch_size):
batches.append(texts[start_index : start_index + self._batch_size])
return batches
@staticmethod
def _unbatch(batch_of_texts: List[List[Any]]) -> List[Any]:
"""
Merges batches of texts into a single list.
Args:
batch_of_texts (List[List[Any]]): List of batches of texts.
Returns:
List[Any]: Merged list of texts.
"""
if len(batch_of_texts) == 1 and len(batch_of_texts[0]) == 1:
return batch_of_texts[0]
texts = []
for sublist in batch_of_texts:
texts.extend(sublist)
return texts
def _kwargs_post_request(self, model: str, texts: List[str]) -> Dict[str, Any]:
"""
Builds the kwargs for the POST request, used by sync method.
Args:
model (str): The model to use for embedding.
texts (List[str]): List of texts to embed.
Returns:
Dict[str, Any]: Dictionary of POST request parameters.
"""
return dict(
url=f"{self.host}/embedding",
headers={
"accept": "application/json",
"content-type": "application/json",
"Authorization": f"Bearer {self.api_key}",
},
json=dict(
input=texts,
model=model,
),
)
def _sync_request_embed(
self, model: str, batch_texts: List[str]
) -> List[List[float]]:
"""
Sends a synchronous request to the embedding endpoint.
Args:
model (str): The model to use for embedding.
batch_texts (List[str]): Batch of texts to embed.
Returns:
List[List[float]]: List of embeddings for the batch.
Raises:
Exception: If the response status is not 200.
"""
response = requests.post(
**self._kwargs_post_request(model=model, texts=batch_texts)
)
if response.status_code != 200:
raise Exception(
f"TextEmbed responded with an unexpected status message "
f"{response.status_code}: {response.text}"
)
return [e["embedding"] for e in response.json()["data"]]
def embed(self, model: str, texts: List[str]) -> List[List[float]]:
"""
Embeds a list of texts synchronously.
Args:
model (str): The model to use for embedding.
texts (List[str]): List of texts to embed.
Returns:
List[List[float]]: List of embeddings for the texts.
"""
perm_texts, unpermute_func = self._permute(texts)
perm_texts_batched = self._batch(perm_texts)
# Request
map_args = (
self._sync_request_embed,
[model] * len(perm_texts_batched),
perm_texts_batched,
)
if len(perm_texts_batched) == 1:
embeddings_batch_perm = list(map(*map_args))
else:
with ThreadPoolExecutor(32) as p:
embeddings_batch_perm = list(p.map(*map_args))
embeddings_perm = self._unbatch(embeddings_batch_perm)
embeddings = unpermute_func(embeddings_perm)
return embeddings
async def _async_request(
self, session: aiohttp.ClientSession, **kwargs: Dict[str, Any]
) -> List[List[float]]:
"""
Sends an asynchronous request to the embedding endpoint.
Args:
session (aiohttp.ClientSession): The aiohttp session for the request.
kwargs (Dict[str, Any]): Dictionary of POST request parameters.
Returns:
List[List[float]]: List of embeddings for the request.
Raises:
Exception: If the response status is not 200.
"""
async with session.post(**kwargs) as response: # type: ignore
if response.status != 200:
raise Exception(
f"TextEmbed responded with an unexpected status message "
f"{response.status}: {response.text}"
)
embedding = (await response.json())["data"]
return [e["embedding"] for e in embedding]
async def aembed(self, model: str, texts: List[str]) -> List[List[float]]:
"""
Embeds a list of texts asynchronously.
Args:
model (str): The model to use for embedding.
texts (List[str]): List of texts to embed.
Returns:
List[List[float]]: List of embeddings for the texts.
"""
perm_texts, unpermute_func = self._permute(texts)
perm_texts_batched = self._batch(perm_texts)
async with aiohttp.ClientSession(
connector=aiohttp.TCPConnector(limit=32)
) as session:
embeddings_batch_perm = await asyncio.gather(
*[
self._async_request(
session=session,
**self._kwargs_post_request(model=model, texts=t),
)
for t in perm_texts_batched
]
)
embeddings_perm = self._unbatch(embeddings_batch_perm)
embeddings = unpermute_func(embeddings_perm)
return embeddings
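For reference, a minimal usage sketch of the class added above, assuming a TextEmbed server is reachable at the default local URL (the model id and API key are placeholders):

```python
# Sketch only, not part of the diff: exercises both the sync and async paths.
import asyncio

from langchain_community.embeddings import TextEmbedEmbeddings

embeddings = TextEmbedEmbeddings(
    model="sentence-transformers/all-MiniLM-L6-v2",  # placeholder model id
    api_url="http://localhost:8000/v1",
    api_key="<API_KEY>",
)

# Synchronous path: texts are length-sorted, split into batches of up to 256,
# embedded (in parallel threads when there is more than one batch), then
# restored to the caller's original order.
vectors = embeddings.embed_documents(["hello world", "goodbye world"])
print(len(vectors), len(vectors[0]))

# Asynchronous path: same batching and reordering, but requests go through aiohttp.
query_vector = asyncio.run(embeddings.aembed_query("hello world"))
print(len(query_vector))
```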

View File

@@ -1,7 +1,19 @@
from langchain_community.graph_vectorstores.extractors.gliner_link_extractor import (
GLiNERInput,
GLiNERLinkExtractor,
)
from langchain_community.graph_vectorstores.extractors.hierarchy_link_extractor import (
HierarchyInput,
HierarchyLinkExtractor,
)
from langchain_community.graph_vectorstores.extractors.html_link_extractor import (
HtmlInput,
HtmlLinkExtractor,
)
from langchain_community.graph_vectorstores.extractors.keybert_link_extractor import (
KeybertInput,
KeybertLinkExtractor,
)
from langchain_community.graph_vectorstores.extractors.link_extractor import (
LinkExtractor,
)
@@ -10,8 +22,16 @@ from langchain_community.graph_vectorstores.extractors.link_extractor_adapter im
)
__all__ = [
"LinkExtractor",
"LinkExtractorAdapter",
"GLiNERInput",
"GLiNERLinkExtractor",
"HierarchyInput",
"HierarchyLinkExtractor",
"HtmlInput",
"HtmlLinkExtractor",
"KeybertInput",
"KeybertLinkExtractor",
"LinkExtractor",
"LinkExtractor",
"LinkExtractorAdapter",
"LinkExtractorAdapter",
]
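With these exports in place, the new extractors can be imported directly from the package; the optional `gliner`/`keybert` dependencies are only needed when the corresponding class is instantiated:

```python
from langchain_community.graph_vectorstores.extractors import (
    GLiNERLinkExtractor,
    HierarchyLinkExtractor,
    KeybertLinkExtractor,
)
```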

View File

@@ -0,0 +1,71 @@
from typing import Any, Dict, Iterable, List, Optional, Set, Union
from langchain_core.documents import Document
from langchain_core.graph_vectorstores.links import Link
from langchain_community.graph_vectorstores.extractors.link_extractor import (
LinkExtractor,
)
# TypeAlias is not available in Python 3.9, so we can't use it (or the newer `type` statement).
GLiNERInput = Union[str, Document]
class GLiNERLinkExtractor(LinkExtractor[GLiNERInput]):
"""Link documents with common named entities using GLiNER <https://github.com/urchade/GLiNER>."""
def __init__(
self,
labels: List[str],
*,
kind: str = "entity",
model: str = "urchade/gliner_mediumv2.1",
extract_kwargs: Optional[Dict[str, Any]] = None,
):
"""Extract keywords using GLiNER.
Example:
.. code-block:: python
extractor = GLiNERLinkExtractor(
labels=["Person", "Award", "Date", "Competitions", "Teams"]
)
results = extractor.extract_one("some long text...")
Args:
labels: List of kinds of entities to extract.
kind: Kind of links to produce with this extractor.
model: GLiNER model to use.
extract_kwargs: Keyword arguments to pass to GLiNER.
"""
try:
from gliner import GLiNER
self._model = GLiNER.from_pretrained(model)
except ImportError:
raise ImportError(
"gliner is required for GLiNERLinkExtractor. "
"Please install it with `pip install gliner`."
) from None
self._labels = labels
self._kind = kind
self._extract_kwargs = extract_kwargs or {}
def extract_one(self, input: GLiNERInput) -> Set[Link]: # noqa: A002
return next(iter(self.extract_many([input])))
def extract_many(
self,
inputs: Iterable[GLiNERInput],
) -> Iterable[Set[Link]]:
strs = [i if isinstance(i, str) else i.page_content for i in inputs]
for entities in self._model.batch_predict_entities(
strs, self._labels, **self._extract_kwargs
):
yield {
Link.bidir(kind=f"{self._kind}:{e['label']}", tag=e["text"])
for e in entities
}
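A short usage sketch for the extractor above (assumes `pip install gliner` and that the default model can be downloaded; the labels and text are illustrative):

```python
# Sketch: each detected entity becomes a bidirectional link with
# kind "entity:<label>" and the entity text as the tag.
from langchain_community.graph_vectorstores.extractors import GLiNERLinkExtractor

extractor = GLiNERLinkExtractor(
    labels=["Person", "Award", "Date", "Competitions", "Teams"]
)
links = extractor.extract_one(
    "Cristiano Ronaldo was born on 5 February 1985 and plays for Al Nassr FC."
)
for link in links:
    print(link)
```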

View File

@@ -0,0 +1,108 @@
from typing import Callable, List, Set
from langchain_core.documents import Document
from langchain_core.graph_vectorstores.links import Link
from langchain_community.graph_vectorstores.extractors.link_extractor import (
LinkExtractor,
)
from langchain_community.graph_vectorstores.extractors.link_extractor_adapter import (
LinkExtractorAdapter,
)
# TypeAlias is not available in Python 3.9, so we can't use it (or the newer `type` statement).
HierarchyInput = List[str]
_PARENT: str = "p:"
_CHILD: str = "c:"
_SIBLING: str = "s:"
class HierarchyLinkExtractor(LinkExtractor[HierarchyInput]):
def __init__(
self,
*,
kind: str = "hierarchy",
parent_links: bool = True,
child_links: bool = False,
sibling_links: bool = False,
):
"""Extract links from a document hierarchy.
Example:
.. code-block:: python
# Given three paths (in this case, within the "Root" document):
h1 = ["Root", "H1"]
h1a = ["Root", "H1", "a"]
h1b = ["Root", "H1", "b"]
# Parent links `h1a` and `h1b` to `h1`.
# Child links `h1` to `h1a` and `h1b`.
# Sibling links `h1a` and `h1b` together (both directions).
Example use with documents:
.. code-block:: python
transformer = LinkExtractorTransformer([
HierarchyLinkExtractor().as_document_extractor(
# Assumes the "path" to each document is in the metadata.
# Could split strings, etc.
lambda doc: doc.metadata.get("path", [])
)
])
linked = transformer.transform_documents(docs)
Args:
kind: Kind of links to produce with this extractor.
parent_links: Link from a section to its parent.
child_links: Link from a section to its children.
sibling_links: Link from a section to other sections with the same parent.
"""
self._kind = kind
self._parent_links = parent_links
self._child_links = child_links
self._sibling_links = sibling_links
def as_document_extractor(
self, hierarchy: Callable[[Document], HierarchyInput]
) -> LinkExtractor[Document]:
"""Create a LinkExtractor from `Document`.
Args:
hierarchy: Function that returns the path for the given document.
Returns:
A `LinkExtractor[Document]` suitable for application to `Documents` directly
or with `LinkExtractorTransformer`.
"""
return LinkExtractorAdapter(underlying=self, transform=hierarchy)
def extract_one(
self,
input: HierarchyInput,
) -> Set[Link]:
this_path = "/".join(input)
parent_path = None
links = set()
if self._parent_links:
# This is linked from everything with this parent path.
links.add(Link.incoming(kind=self._kind, tag=_PARENT + this_path))
if self._child_links:
# This is linked to every child with this as its "parent" path.
links.add(Link.outgoing(kind=self._kind, tag=_CHILD + this_path))
if len(input) >= 1:
parent_path = "/".join(input[0:-1])
if self._parent_links and len(input) > 1:
# This is linked to the nodes with the given parent path.
links.add(Link.outgoing(kind=self._kind, tag=_PARENT + parent_path))
if self._child_links and len(input) > 1:
# This is linked from every node with the given parent path.
links.add(Link.incoming(kind=self._kind, tag=_CHILD + parent_path))
if self._sibling_links:
# This is a sibling of everything with the same parent.
links.add(Link.bidir(kind=self._kind, tag=_SIBLING + parent_path))
return links
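To make the tag scheme concrete, here is a sketch of what `extract_one` produces for a simple path, following the `_PARENT`/`_CHILD`/`_SIBLING` prefixes defined in this file (the path values are illustrative):

```python
from langchain_community.graph_vectorstores.extractors import HierarchyLinkExtractor

extractor = HierarchyLinkExtractor(
    parent_links=True, child_links=True, sibling_links=True
)
links = extractor.extract_one(["Root", "H1", "a"])
# Expected tags (unordered):
#   incoming "p:Root/H1/a"  -- this node can be linked from its own children
#   outgoing "c:Root/H1/a"  -- this node links down to its children
#   outgoing "p:Root/H1"    -- link up to the parent section
#   incoming "c:Root/H1"    -- link down from the parent section
#   bidir    "s:Root/H1"    -- siblings sharing the parent "Root/H1"
for link in links:
    print(link)
```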

View File

@@ -0,0 +1,73 @@
from typing import Any, Dict, Iterable, Optional, Set, Union
from langchain_core.documents import Document
from langchain_core.graph_vectorstores.links import Link
from langchain_community.graph_vectorstores.extractors.link_extractor import (
LinkExtractor,
)
KeybertInput = Union[str, Document]
class KeybertLinkExtractor(LinkExtractor[KeybertInput]):
def __init__(
self,
*,
kind: str = "kw",
embedding_model: str = "all-MiniLM-L6-v2",
extract_keywords_kwargs: Optional[Dict[str, Any]] = None,
):
"""Extract keywords using KeyBERT <https://maartengr.github.io/KeyBERT/>.
Example:
.. code-block:: python
extractor = KeybertLinkExtractor()
results = extractor.extract_one(PAGE_1)
Args:
kind: Kind of links to produce with this extractor.
embedding_model: Name of the embedding model to use with KeyBERT.
extract_keywords_kwargs: Keyword arguments to pass to KeyBERT's
`extract_keywords` method.
"""
try:
import keybert
self._kw_model = keybert.KeyBERT(model=embedding_model)
except ImportError:
raise ImportError(
"keybert is required for KeybertLinkExtractor. "
"Please install it with `pip install keybert`."
) from None
self._kind = kind
self._extract_keywords_kwargs = extract_keywords_kwargs or {}
def extract_one(self, input: KeybertInput) -> Set[Link]: # noqa: A002
keywords = self._kw_model.extract_keywords(
input if isinstance(input, str) else input.page_content,
**self._extract_keywords_kwargs,
)
return {Link.bidir(kind=self._kind, tag=kw[0]) for kw in keywords}
def extract_many(
self,
inputs: Iterable[KeybertInput],
) -> Iterable[Set[Link]]:
inputs = list(inputs)
if len(inputs) == 1:
# Even though we pass a list, if it contains one item, keybert will
# flatten it. This means it's easier to just call the special case
# for one item.
yield self.extract_one(inputs[0])
elif len(inputs) > 1:
strs = [i if isinstance(i, str) else i.page_content for i in inputs]
extracted = self._kw_model.extract_keywords(
strs, **self._extract_keywords_kwargs
)
for keywords in extracted:
yield {Link.bidir(kind=self._kind, tag=kw[0]) for kw in keywords}
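And a corresponding sketch for the KeyBERT extractor (assumes `pip install keybert`, which pulls in sentence-transformers; the text is illustrative):

```python
# Sketch: each extracted keyword becomes a bidirectional "kw" link, e.g. tag="learning".
from langchain_community.graph_vectorstores.extractors import KeybertLinkExtractor

extractor = KeybertLinkExtractor()  # kind="kw", embedding model "all-MiniLM-L6-v2"
links = extractor.extract_one(
    "Supervised learning is the machine learning task of learning a function "
    "that maps an input to an output based on example input-output pairs."
)
for link in links:
    print(link)
```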

View File

@@ -70,7 +70,9 @@ class CloudflareWorkersAI(LLM):
"""Call Cloudflare Workers API"""
headers = {"Authorization": f"Bearer {self.api_token}"}
data = {"prompt": prompt, "stream": self.streaming, **params}
response = requests.post(self.endpoint_url, headers=headers, json=data)
response = requests.post(
self.endpoint_url, headers=headers, json=data, stream=self.streaming
)
return response
def _process_response(self, response: requests.Response) -> str:
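The point of this change is that `requests` buffers the whole response body unless `stream=True` is passed, so streamed tokens from the Workers AI endpoint only arrived once generation had finished. A generic illustration of the pattern (the URL is a placeholder, not the Cloudflare endpoint):

```python
import requests

# With stream=True, the body is read incrementally as the server sends it,
# so each chunk can be processed as soon as it arrives.
resp = requests.post(
    "https://example.com/generate",  # placeholder endpoint
    json={"prompt": "hi", "stream": True},
    stream=True,
)
for line in resp.iter_lines():
    if line:
        print(line.decode("utf-8"))
```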

View File

@@ -190,6 +190,7 @@ class Sambaverse(LLM):
"top_p": 1.0,
"repetition_penalty": 1.0,
"top_k": 50,
"process_prompt": False
},
)
"""
@@ -672,7 +673,7 @@ class SambaStudio(LLM):
Example:
.. code-block:: python
from langchain_community.llms.sambanova import Sambaverse
from langchain_community.llms.sambanova import SambaStudio
SambaStudio(
sambastudio_base_url="your-SambaStudio-environment-URL",
sambastudio_base_uri="your-SambaStudio-base-URI",
@@ -687,6 +688,8 @@ class SambaStudio(LLM):
"top_p": 1.0,
"repetition_penalty": 1,
"top_k": 50,
#"process_prompt": False,
#"select_expert": "Meta-Llama-3-8B-Instruct"
},
)
"""
@@ -741,7 +744,7 @@ class SambaStudio(LLM):
values,
"sambastudio_base_uri",
"SAMBASTUDIO_BASE_URI",
default="api/predict/nlp",
default="api/predict/generic",
)
values["sambastudio_project_id"] = get_from_dict_or_env(
values, "sambastudio_project_id", "SAMBASTUDIO_PROJECT_ID"

View File

@@ -1,6 +1,6 @@
import logging
import re
from typing import List, Optional
from typing import Any, List, Optional
from langchain.chains import LLMChain
from langchain.chains.prompt_selector import ConditionalPromptSelector
@@ -81,6 +81,35 @@ class WebResearchRetriever(BaseRetriever):
"check .netrc for proxy configuration",
)
allow_dangerous_requests: bool = False
"""A flag to force users to acknowledge the risks of SSRF attacks when using
this retriever.
Users should set this flag to `True` if they have taken the necessary precautions
to prevent SSRF attacks when using this retriever.
For example, users can run the requests through a properly configured
proxy and prevent the crawler from accidentally crawling internal resources.
"""
def __init__(self, **kwargs: Any) -> None:
"""Initialize the retriever."""
allow_dangerous_requests = kwargs.get("allow_dangerous_requests", False)
if not allow_dangerous_requests:
raise ValueError(
"WebResearchRetriever crawls URLs surfaced through "
"the provided search engine. It is possible that some of those URLs "
"will end up pointing to machines residing on an internal network, "
"leading"
"to an SSRF (Server-Side Request Forgery) attack. "
"To protect yourself against that risk, you can run the requests "
"through a proxy and prevent the crawler from accidentally crawling "
"internal resources."
"If've taken the necessary precautions, you can set "
"`allow_dangerous_requests` to `True`."
)
super().__init__(**kwargs)
@classmethod
def from_llm(
cls,
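This turns the SSRF risk into an explicit opt-in: constructing the retriever without `allow_dangerous_requests=True` now raises immediately, before any other validation. A sketch of the new behaviour (assuming the module path `langchain_community.retrievers.web_research`):

```python
from langchain_community.retrievers.web_research import WebResearchRetriever

try:
    # No opt-in flag: the constructor raises with the SSRF warning above.
    WebResearchRetriever()
except ValueError as err:
    print(err)

# After putting a proxy in front of the crawler, opt in explicitly.
# The remaining from_llm arguments (llm, vectorstore, search) are placeholders here.
# retriever = WebResearchRetriever.from_llm(..., allow_dangerous_requests=True)
```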

View File

@@ -25,9 +25,7 @@ if TYPE_CHECKING:
from langchain_community.storage.cassandra import (
CassandraByteStore,
)
from langchain_community.storage.mongodb import (
MongoDBStore,
)
from langchain_community.storage.mongodb import MongoDBByteStore, MongoDBStore
from langchain_community.storage.redis import (
RedisStore,
)
@@ -44,6 +42,7 @@ __all__ = [
"AstraDBStore",
"CassandraByteStore",
"MongoDBStore",
"MongoDBByteStore",
"RedisStore",
"SQLStore",
"UpstashRedisByteStore",
@@ -55,6 +54,7 @@ _module_lookup = {
"AstraDBStore": "langchain_community.storage.astradb",
"CassandraByteStore": "langchain_community.storage.cassandra",
"MongoDBStore": "langchain_community.storage.mongodb",
"MongoDBByteStore": "langchain_community.storage.mongodb",
"RedisStore": "langchain_community.storage.redis",
"SQLStore": "langchain_community.storage.sql",
"UpstashRedisByteStore": "langchain_community.storage.upstash_redis",

Some files were not shown because too many files have changed in this diff.