Compare commits

...

179 Commits

Author SHA1 Message Date
Leonid Ganeline
d9bfdc95ea docs[patch]: google platform page update (#14475)
Added missing tools

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
2023-12-08 17:40:44 -08:00
Leonid Ganeline
2fa81739b6 docs[patch]: microsoft platform page update (#14476)
Added `presidio` and `OneNote` references to `microsoft.mdx`; added link
and description to the `presidio` notebook

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
2023-12-08 17:40:30 -08:00
Yelin Zhang
84a57f5350 docs[patch]: add missing imports for local_llms (#14453)
Keeps it consistent with the rest of the docs and adds the missing
imports so the code example can be copy-pasted and run.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-12-08 17:23:29 -08:00
Harrison Chase
f5befe3b89 manual mapping (#14422) 2023-12-08 16:29:33 -08:00
Erick Friis
c24f277b7c langchain[patch], docs[patch]: use byte store in multivectorretriever (#14474) 2023-12-08 16:26:11 -08:00
Leonid Ganeline
1ef13661b9 docs[patch]: link and description cleanup (#14471)
Fixed inconsistencies; added links and descriptions

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
2023-12-08 15:24:38 -08:00
Lance Martin
6fbfc375b9 Update README and vectorstore path for multi-modal template (#14473) 2023-12-08 15:24:05 -08:00
Anish Nag
6da0cfea0e experimental[patch]: SmartLLMChain Output Key Customization (#14466)
**Description**
The `SmartLLMChain` was previously fixed to the output key "resolution".
Unfortunately, this prevents using multiple `SmartLLMChain`s in a
`SequentialChain` because of colliding output keys. This change simply
adds the option to customize the output key, allowing sequential
chaining. The default behavior is unchanged.

Now, it's possible to do the following:
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain_experimental.smart_llm import SmartLLMChain
from langchain.chains import SequentialChain

joke_prompt = PromptTemplate(
    input_variables=["content"],
    template="Tell me a joke about {content}.",
)
review_prompt = PromptTemplate(
    input_variables=["scale", "joke"],
    template="Rate the following joke from 1 to {scale}: {joke}"
)

llm = ChatOpenAI(temperature=0.9, model_name="gpt-4-32k")
joke_chain = SmartLLMChain(llm=llm, prompt=joke_prompt, output_key="joke")
review_chain = SmartLLMChain(llm=llm, prompt=review_prompt, output_key="review")

chain = SequentialChain(
    chains=[joke_chain, review_chain],
    input_variables=["content", "scale"],
    output_variables=["review"],
    verbose=True
)
response = chain.run({"content": "chickens", "scale": "10"})
print(response)
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-12-08 13:55:51 -08:00
Leonid Ganeline
0797358c1b docs networkxupdate (#14426)
Added setup instructions, a package description, and a link
2023-12-08 13:39:50 -08:00
Bagatur
300305e5e5 infra: add langchain-community release workflow (#14469) 2023-12-08 13:31:15 -08:00
Ben Flast
b32fcb550d Update mongodb_atlas docs for GA (#14425)
Updated the MongoDB Atlas Vector Search docs to indicate the service is
Generally Available, updated the example to use the new index
definition, and added an example that uses metadata pre-filtering for
semantic search

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-12-08 13:23:01 -08:00
Erick Friis
b3f226e8f8 core[patch], langchain[patch], experimental[patch]: import CI (#14414) 2023-12-08 11:28:55 -08:00
Leonid Ganeline
ba083887e5 docs Dependents updated statistics (#14461)
Updated statistics for the dependents (packages dependent on the
`langchain` package). Only packages with 100+ stars.
2023-12-08 14:14:41 -05:00
Eugene Yurtsev
37bee92b8a Use deepcopy in RunLogPatch (#14244)
This PR adds deepcopy usage in RunLogPatch.

I included a unit-test that shows an issue that was caused in LangServe
in the RemoteClient.

```python
import jsonpatch

s1 = {}
s2 = {'value': []}
s3 = {'value': ['a']}

ops0 = list(jsonpatch.JsonPatch.from_diff(None, s1))
ops1 = list(jsonpatch.JsonPatch.from_diff(s1, s2))
ops2 = list(jsonpatch.JsonPatch.from_diff(s2, s3))
ops = ops0 + ops1 + ops2

jsonpatch.apply_patch(None, ops)
{'value': ['a']}

jsonpatch.apply_patch(None, ops)
{'value': ['a', 'a']}

jsonpatch.apply_patch(None, ops)
{'value': ['a', 'a', 'a']}
```
2023-12-08 14:09:36 -05:00
Erick Friis
1d7e5c51aa langchain[patch]: xfail unstable vertex test (#14462) 2023-12-08 11:00:37 -08:00
Erick Friis
477b274a62 langchain[patch]: fix scheduled testing ci dep install (#14460) 2023-12-08 10:37:44 -08:00
Harrison Chase
02ee0073cf revoke serialization (#14456) 2023-12-08 10:31:05 -08:00
Erick Friis
ff0d5514c1 langchain[patch]: fix scheduled testing ci variables (#14459) 2023-12-08 10:27:21 -08:00
Erick Friis
1d725327eb langchain[patch]: Fix scheduled testing (#14428)
- integration tests in pyproject
- integration test fixes
2023-12-08 10:23:02 -08:00
Harrison Chase
7be3eb6fbd fix imports from core (#14430) 2023-12-08 09:33:35 -08:00
Leonid Ganeline
a05230a4ba docs[patch]: promptlayer pages update (#14416)
Updated the provider page by adding LLM and ChatLLM references; removed
content duplicated from the referenced LLM page.
Updated the callback page.
2023-12-07 15:48:10 -08:00
Leonid Ganeline
18aba7fdef docs: notebook linting (#14366)
Many Jupyter notebooks didn't pass linting. A list of these files is
presented in the [tool.ruff.lint.per-file-ignores] section of the
pyproject.toml. Addressed these bugs:
- fixed bugs; added missing imports; updated pyproject.toml

Only `document_loaders/tensorflow_datasets.ipynb` and
`cookbook/gymnasium_agent_simulation.ipynb` are not completely fixed;
I'm not sure about the imports.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-12-07 15:47:48 -08:00
Bagatur
52052cc7b9 experimental[patch]: Release 0.0.45 (#14418) 2023-12-07 15:01:39 -08:00
Bagatur
e4d6e55c5e langchain[patch]: Release 0.0.348 (#14417) 2023-12-07 14:52:43 -08:00
Bagatur
eb209e7ee3 core[patch]: Release 0.0.12 (#14415) 2023-12-07 14:37:00 -08:00
Bagatur
b2280fd874 core[patch], langchain[patch]: fix required deps (#14373) 2023-12-07 14:24:58 -08:00
Leonid Ganeline
7186faefb2 API Reference building script update (#13587)
Namespaces like `langchain.agents.format_scratchpad` were clogging the
API Reference sidebar.
This change removes those 3-level namespaces from the sidebar (this
issue was discussed with @efriis).

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-12-07 11:43:42 -08:00
Kacper Łukawski
76f30f5297 langchain[patch]: Rollback multiple keys in Qdrant (#14390)
This reverts commit 38813d7090. This is a
temporary fix, as I don't see a clear way to use multiple keys
with `Qdrant.from_texts`.

Context: #14378
2023-12-07 11:13:19 -08:00
Erick Friis
54040b00a4 langchain[patch]: fix ChatVertexAI streaming (#14369) 2023-12-07 09:46:11 -08:00
Bagatur
db6bf8b022 langchain[patch]: Release 0.0.347 (#14368) 2023-12-06 16:13:29 -08:00
Bagatur
a7271cf5bd core[patch]: Release 0.0.11 (#14367) 2023-12-06 15:53:49 -08:00
Nuno Campos
77c38df36c [core/minor] Runnables: Implement a context api (#14046)

---------

Co-authored-by: Brace Sproul <braceasproul@gmail.com>
2023-12-06 15:02:29 -08:00
Erick Friis
8f95a8206b core[patch]: message history error typo (#14361) 2023-12-06 14:20:10 -08:00
William FH
e5bd32ff6d Include run_id (#14331)
in the test run outputs
2023-12-06 14:07:45 -08:00
Bagatur
cc76f0e834 langchain[patch]: import nits (#14354)
import from core instead of langchain.schema
2023-12-06 11:45:05 -08:00
Bagatur
ce4d81f88b infra: ci matrix (#14306) 2023-12-06 11:43:03 -08:00
Jacob Lee
867ca6d0be Fix multi vector retriever subclassing (#14350)
Fixes #14342

@eyurtsev @baskaryan

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-12-06 11:12:50 -08:00
Erick Friis
7bdfc43766 core[patch], langchain[patch]: ByteStore (#14312) 2023-12-06 10:05:43 -08:00
Brace Sproul
b9087e765d docs[patch]: Fix broken link 'tip' in docs (#14349) 2023-12-06 09:44:54 -08:00
Eugene Yurtsev
0dea8cc62d Update doc-string in RunnableWithMessageHistory (#14262)
Update doc-string in RunnableWithMessageHistory
2023-12-06 12:31:46 -05:00
Erick Friis
2aaf8e11e0 docs[patch]: fix ipynb links (#14325)
Keeping it simple for now.

Still iterating on our docs build in pursuit of making everything mdxv2
compatible for docusaurus 3, and the fewer custom scripts we're reliant
on through that, the less likely the docs will break again.

Other things to consider in future:

Quarto rewriting in ipynbs:
https://quarto.org/docs/extensions/nbfilter.html (but this won't do
md/mdx files)

Docusaurus plugins for rewriting these paths
2023-12-06 09:29:07 -08:00
Jean-Baptiste dlb
38813d7090 Qdrant metadata payload keys (#13001)
- **Description:** Allows passing a list of keys as the
content_payload_key in Qdrant to retrieve multiple fields (the generated
document will contain the dictionary {field: value} as a string)
- **Issue:** Previously we were able to retrieve only one field from the
vector database when making a search
- **Twitter handle:** @jb_dlb

---------

Co-authored-by: Jean Baptiste De La Broise <jeanbaptiste.delabroise@mdpi.com>
2023-12-06 09:12:54 -08:00
Yuchen Liang
ad6dfb6220 feat: mask api key for cerebriumai llm (#14272)
- **Description:** Masking API key for CerebriumAI LLM to protect user
secrets.
 - **Issue:** #12165 
 - **Dependencies:** None
 - **Tag maintainer:** @eyurtsev
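
The masking pattern used across these API-key PRs, as a minimal sketch
(assuming pydantic's `SecretStr`; not the exact CerebriumAI code):

```python
from pydantic import SecretStr

# SecretStr masks the value in reprs and logs; the raw key is only
# revealed through an explicit accessor.
api_key = SecretStr("super-secret-key")
print(api_key)                     # **********
print(api_key.get_secret_value())  # super-secret-key
```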

---------

Signed-off-by: Yuchen Liang <yuchenl3@andrew.cmu.edu>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-12-06 09:06:00 -08:00
newfinder
d4d64daa1e Mask API key for baidu qianfan (#14281)
Description: This PR masks the Baidu Qianfan Chat_Models API key and
adds unit tests.
Issue: langchain-ai#12165
Tag maintainer: @eyurtsev

---------

Co-authored-by: xiayi <xiayi@bytedance.com>
2023-12-06 08:47:09 -08:00
cxumol
06e3316f54 feat(add): LLM integration of Cloudflare Workers AI (#14322)
Add [Text Generation by Cloudflare Workers
AI](https://developers.cloudflare.com/workers-ai/models/text-generation/).
It's a new LLM integration.

- Dependencies: N/A
2023-12-06 08:24:19 -08:00
Harutaka Kawamura
5efaedf488 Exclude max_tokens from request if it's None (#14334)
We found that a request with `max_tokens=None` results in the following
error in Anthropic:

```
HTTPError: 400 Client Error: Bad Request for url: https://oregon.staging.cloud.databricks.com/serving-endpoints/corey-anthropic/invocations. 
Response text: {"error_code":"INVALID_PARAMETER_VALUE","message":"INVALID_PARAMETER_VALUE: max_tokens was not of type Integer: null"}
```

This PR excludes `max_tokens` if it's None.
2023-12-06 08:23:17 -08:00
Nicolas Bondoux
86b08d7753 Fix typo in lcel example for rerank in doc (#14336)
fix typo in lcel example for rerank in doc
2023-12-06 08:21:41 -08:00
Matt Wells
e1ea191237 Demonstrate use of get_buffer_string (#13013)
**Description**

The docs for creating a RAG chain with Memory [currently use a manual
lambda](https://python.langchain.com/docs/expression_language/cookbook/retrieval#with-memory-and-returning-source-documents)
to format chat history messages. [There exists a helper method within
the
codebase](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/schema/messages.py#L14C15-L14C15)
to perform this task so I've updated the documentation to demonstrate
its usage

Also worth noting that the currently documented method of using the
included `_format_chat_history` function actually results in an error:

```
TypeError: 'HumanMessage' object is not subscriptable
```
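
A minimal sketch of the helper in question (assuming its import path
under `langchain.schema.messages` at the time):

```python
from langchain.schema.messages import AIMessage, HumanMessage, get_buffer_string

chat_history = [
    HumanMessage(content="What is LangChain?"),
    AIMessage(content="A framework for building LLM applications."),
]

# Renders the messages as "Human: ...\nAI: ..." without a manual lambda.
print(get_buffer_string(chat_history))
```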

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-12-05 20:08:50 -08:00
MinjiK
a1a11ffd78 Amadeus toolkit minor update (#13002)
- update the `Amadeus` toolkit with the ability to switch Amadeus environments
- update minor code explanations

---------

Co-authored-by: MinjiK <minji.kim@amadeus.com>
2023-12-05 20:08:34 -08:00
Alexandre Dumont
b05c46074b OpenAIEmbeddings: retry_min_seconds/retry_max_seconds parameters (#13138)
- **Description:** new parameters in OpenAIEmbeddings() constructor
(retry_min_seconds and retry_max_seconds) that allow parametrization by
the user of the former min_seconds and max_seconds that were hidden in
_create_retry_decorator() and _async_retry_decorator()
  - **Issue:** #9298, #12986
  - **Dependencies:** none
  - **Tag maintainer:** @hwchase17
  - **Twitter handle:** @adumont

Ran `make format`, `make lint`, and `make test`.
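
A minimal sketch of the new constructor parameters (values are illustrative):

```python
from langchain.embeddings import OpenAIEmbeddings

# Widen the retry backoff window that was previously hardcoded inside
# _create_retry_decorator() and _async_retry_decorator().
embeddings = OpenAIEmbeddings(retry_min_seconds=10, retry_max_seconds=60)
```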

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-12-05 20:08:17 -08:00
mogith-pn
9e5d146409 Updated integration with Clarifai python SDK functions (#13671)
Description:

Updated the functions to the new Clarifai Python SDK.
Enabled initialising the Clarifai class with a model URL.
Updated docs with examples of the new functions.
2023-12-05 20:08:00 -08:00
dudub12
8f403ea2d7 info sql tool remove whitespaces in table names (#13712)
Remove whitespace from the input of the ListSQLDatabaseTool for better
support. For example, the input "table1,table2,table3" will throw an
exception without the change, although it's a valid input.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-12-05 20:07:38 -08:00
balaba-max
64d5108f99 Feature: GitLab url from ENV (#14221)
- **Description:** add gitlab url from env
- **Issue:** no issue
- **Dependencies:** no

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-12-05 19:41:36 -08:00
kavinraj A S
ab6b41937a Fixed a typo in smart_llm prompt (#13052)
2023-12-05 19:16:18 -08:00
jeffpezzone
7c2ef06136 Adds "NIN" metadata filter for pgvector to all checking for set absence (#14205)
This PR adds support for metadata filters of the form:

`{"filter": {"key": { "NIN" : ["list", "of", "values"]}}}`

"IN" is already supported, so this is a quick & related update to add
"NIN"
2023-12-05 19:07:33 -08:00
lif
20d2b4a6ba feat: Increased compatibility with new and old versions for dalle (#14222)
- **Description:** Increased compatibility with all openai versions for
DALL·E.

This PR adds support for openai versions from 0.x through 1.3.
2023-12-05 17:31:28 -08:00
Wang Wei
7205bfdd00 feat: 1. Add system parameters, 2. Align with the QianfanChatEndpoint for function calling (#14275)
- **Description:** 
1. Add system parameters to the ERNIE LLM API to set the role of the
LLM.
2. Add support for the ERNIE-Bot-turbo-AI model according to the
document https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Alp0kdm0n.
3. For the function call of ErnieBotChat, align with the
QianfanChatEndpoint.

With this PR, the `QianfanChatEndpoint()` can use the `function calling`
ability with `create_ernie_fn_chain()`. The example is as the following:

```python
from langchain.prompts import ChatPromptTemplate
import json
from langchain.prompts.chat import (
    ChatPromptTemplate,
)

from langchain.chat_models import QianfanChatEndpoint
from langchain.chains.ernie_functions import (
    create_ernie_fn_chain,
)

def get_current_news(location: str) -> str:
    """Get the current news based on the location.'

    Args:
        location (str): The location to query.
    
    Returns:
        str: Current news based on the location.
    """

    news_info = {
        "location": location,
        "news": [
            "I have a Book.",
            "It's a nice day, today."
        ]
    }

    return json.dumps(news_info)

def get_current_weather(location: str, unit: str="celsius") -> str:
    """Get the current weather in a given location

    Args:
        location (str): location of the weather.
        unit (str): unit of the temperature.
    
    Returns:
        str: weather in the given location.
    """

    weather_info = {
        "location": location,
        "temperature": "27",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }
    return json.dumps(weather_info)

template = ChatPromptTemplate.from_messages([
    ("user", "{user_input}"),
])

chat = QianfanChatEndpoint(model="ERNIE-Bot-4")
chain = create_ernie_fn_chain([get_current_weather, get_current_news], chat, template, verbose=True)
res = chain.run("北京今天的新闻是什么?")
print(res)
```

The result of the above code:
```
> Entering new LLMChain chain...
Prompt after formatting:
Human: 北京今天的新闻是什么?
> Finished chain.
{'name': 'get_current_news', 'arguments': {'location': '北京'}}
```

For the `ErnieBotChat`, now can use the `system` parameter to set the
role of the LLM.

```python
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain
from langchain.chat_models import ErnieBotChat

llm = ErnieBotChat(model_name="ERNIE-Bot-turbo-AI", system="你是一个能力很强的机器人,你的名字叫 小叮当。无论问你什么问题,你都可以给出答案。")
prompt = ChatPromptTemplate.from_messages(
    [
        ("human", "{query}"),
    ]
)
chain = LLMChain(llm=llm, prompt=prompt, verbose=True)
res = chain.run(query="你是谁?")
print(res)
```

The result of the above code:

```
> Entering new LLMChain chain...
Prompt after formatting:
Human: 你是谁?
> Finished chain.
我是小叮当,一个智能机器人。我可以为你提供各种服务,包括回答问题、提供信息、进行计算等。如果你需要任何帮助,请随时告诉我,我会尽力为你提供最好的服务。
```
2023-12-05 17:28:31 -08:00
Leonid Kuligin
fd5be55a7b added get_num_tokens to GooglePalm (#14282)
added get_num_tokens to GooglePalm + a little bit of refactoring
2023-12-05 17:24:19 -08:00
Massimiliano Pronesti
c215a4c9ec feat(embeddings): text-embeddings-inference (#14288)
- **Description:** Added a notebook to illustrate how to use
`text-embeddings-inference` from huggingface. As
`HuggingFaceHubEmbeddings` was using a deprecated client, I took the
opportunity in this PR to update that too.

- **Issue:** #13286 

- **Dependencies**: None

- **Tag maintainer:** @baskaryan
2023-12-05 17:22:05 -08:00
Tim Van Wassenhove
85b88c33f3 Fixes issue-14295: Correctly pass along the kwargs (#14296)
- **Description:** Update code to correctly pass the kwargs 
  - **Issue:** #14295 
  - **Dependencies:**  - 
  - **Tag maintainer:** 

2023-12-05 17:14:00 -08:00
Alex Kira
62b59048de docs[patch] Add how-to doc for RunnablePassthrough and nav modifications (#14255)
- **Description:** Add How To docs for `RunnablePassthrough` with
examples. Also redo the ordering and some of the other How-To docs.
2023-12-05 17:01:07 -08:00
Bob Lin
5a23608c41 Add custom async generator example (#14299)
[Screenshot: custom async generator example](https://github.com/langchain-ai/langchain/assets/10000925/6b0fbd70-9f6b-4f91-b494-9e88676b4786)
2023-12-05 16:08:19 -08:00
Bob Lin
63fdc6e818 Update docs (#14294)
### Description

Fixed 3 doc issues:

1. `ConfigurableField` needs to be imported in
`docs/docs/expression_language/how_to/configure.ipynb`
2. use `error` instead of `RateLimitError()` in
`docs/docs/expression_language/how_to/fallbacks.ipynb`
3. I think it might be better to output the fixed json data (when I
looked at this example, I didn't understand its purpose at first, but
then I suddenly realized):
[Screenshot](https://github.com/langchain-ai/langchain/assets/10000925/7623ba13-7b56-4964-8c98-b7430fabc6de)
2023-12-05 16:08:03 -08:00
Jarkko Lagus
667ad6a5de Add support for CORS options for AzureSearch (#14305)
- **Description:** Add support for setting the CORS options when using
AzureSearch indexes
2023-12-05 16:05:40 -08:00
Karim Assi
9401539e43 Allow not enforcing function usage when a single function is passed to openai function executable (#14308)
- **Description:** allows not enforcing function usage when a single
function is passed to an openAI function executable (or corresponding
legacy chain). This is a desired feature in the case where the model
does not have enough information to call a function, and needs to get
back to the user.
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Tag maintainer:** N/A
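
A hedged sketch, assuming the flag is exposed on `create_openai_fn_chain`
as `enforce_single_function_usage`:

```python
from langchain.chains.openai_functions import create_openai_fn_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"Sunny in {location}"

prompt = ChatPromptTemplate.from_messages([("user", "{input}")])

# With enforcement off, the model may reply in plain text (e.g. to ask a
# clarifying question) instead of being forced to call the one function.
chain = create_openai_fn_chain(
    [get_weather],
    ChatOpenAI(model="gpt-3.5-turbo"),
    prompt,
    enforce_single_function_usage=False,
)
```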
2023-12-05 15:56:31 -08:00
Ran
d22c13ec48 Mask API key for Minimax LLM (#14309)
- **Description:** Added masking for the API key for Minimax LLM + tests
inspired by https://github.com/langchain-ai/langchain/pull/12418.
- **Issue:** fixes
https://github.com/langchain-ai/langchain/issues/12165
- **Dependencies:** this fix is dependent on Minimax instantiation fix
which is introduced in
https://github.com/langchain-ai/langchain/pull/13439, so merge this one
after.
  - **Tag maintainer:** @eyurtsev

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-12-05 15:42:00 -08:00
Lance Martin
29e993a5f2 Update OpenCLIP docs (#14319) 2023-12-05 15:31:10 -08:00
Eugene Yurtsev
a74c03da3c Add metadata to blob (#14162)
Add metadata to the blob object. This makes it easier
to make a pipeline that properly propagates metadata information
from raw content to the derived content.
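
A minimal sketch, assuming the new kwarg is exposed on `Blob.from_data`:

```python
from langchain.document_loaders.blob_loaders import Blob

# Attach source metadata to the raw blob so parsers can propagate it
# into the Documents they derive from it.
blob = Blob.from_data(
    b"raw bytes", mime_type="text/plain", metadata={"source": "s3://bucket/key"}
)
print(blob.metadata)
```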
2023-12-05 17:17:41 -05:00
Lance Martin
66848871fc Multi-modal RAG template (#14186)
* OpenCLIP embeddings
* GPT-4V

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-12-05 13:36:38 -08:00
James Braza
3b75d37cee Adding BaseChatMessageHistory.__str__ (#14311)
Adding `__str__` to base chat message history to make it easier to debug
2023-12-05 16:22:31 -05:00
James Braza
8b0060184d Fixing empty input variable crashing PromptTemplate validations (#14314)
- Fixes `input_variables=[""]` crashing validations with a template
`"{}"`
- Uses `__cause__` for proper `Exception` chaining in
`check_valid_template`
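
A generic sketch of the `raise ... from` pattern referenced (a
hypothetical validator, not the actual `check_valid_template` body):

```python
from typing import List

def check_template(template: str, input_variables: List[str]) -> None:
    try:
        template.format(**{var: "" for var in input_variables})
    except (KeyError, IndexError) as exc:
        # "raise ... from exc" sets __cause__, keeping the original
        # formatting error attached to the traceback.
        raise ValueError(f"Invalid template: {template!r}") from exc
```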
2023-12-05 13:13:08 -08:00
Leonid Ganeline
0f02e94565 docs: integrations/providers/ update (#14315)
- added missing provider files (from `integrations/Callbacks`)
- updated notebooks: added links; updated into consistent formats
2023-12-05 13:05:29 -08:00
Bagatur
6607cc6eab experimental[patch]: Release 0.0.44 (#14310) 2023-12-05 12:11:42 -08:00
Eugene Yurtsev
80637727ea hide api key: arcee (#14304)
Hide API key for Arcee

---------

Co-authored-by: raphael <raph.nunes95@gmail.com>
2023-12-05 14:49:55 -05:00
Bagatur
b2e756c0a8 langchain[patch]: Release 0.0.346 (#14307) 2023-12-05 11:38:52 -08:00
Bagatur
4a5a13aab3 core[patch]: Release 0.0.10 (#14303) 2023-12-05 10:20:57 -08:00
Eugene Yurtsev
7ad75edf8b Fix rag google cloud vertex ai template (#14300)
Fix template by exposing chain correctly
2023-12-05 09:38:04 -08:00
Eun Hye Kim
f758c8adc4 Fix #11737 issue (extra_tools option of create_pandas_dataframe_agent is not working) (#13203)
- **Description:** Fix #11737 issue (extra_tools option of
create_pandas_dataframe_agent is not working),
  - **Issue:** #11737 ,
  - **Dependencies:** no,
- **Tag maintainer:** @baskaryan, @eyurtsev, @hwchase17 I needed this
method at work, so I modified it myself and used it. There is a similar
issue (#11737) and PR (#13018) by @PyroGenesis, so I combined my code
into the original PR.
You may be busy, but it would be great help for me if you checked. Thank
you.
  - **Twitter handle:** @lunara_x 

If you need an .ipynb example about this, please tag me. 
I will share what I am working on after removing any work-related
content.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-12-04 20:54:08 -08:00
Sean Bearden
77a15fa988 Added ability to pass arguments to the Playwright browser (#13146)
- **Description:** Enhanced `create_sync_playwright_browser` and
`create_async_playwright_browser` functions to accept a list of
arguments. These arguments are now forwarded to
`browser.chromium.launch()` for customizable browser instantiation.
  - **Issue:** #13143
  - **Dependencies:** None
  - **Tag maintainer:** @eyurtsev,
  - **Twitter handle:** Dr_Bearden
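
A minimal sketch of the new parameter (assuming it is named `args`, per
the description):

```python
from langchain.tools.playwright.utils import create_sync_playwright_browser

# The list is forwarded to browser.chromium.launch() for customizable
# browser instantiation.
browser = create_sync_playwright_browser(args=["--disable-gpu", "--no-sandbox"])
```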

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-12-04 20:48:09 -08:00
Joan Fontanals
dcccf8fa66 adapt Jina Embeddings to new Jina AI Embedding API (#13658)
- **Description:** Adapt JinaEmbeddings to run with the new Jina AI
Embedding platform
- **Twitter handle:** https://twitter.com/JinaAI_

---------

Co-authored-by: Joan Fontanals Martinez <joan.fontanals.martinez@jina.ai>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-12-04 20:40:33 -08:00
Philippe PRADOS
e0c03d6c44 Pprados/lite google drive (#13175)
- Fix bug in the document
 - Add clarification on the use of langchain-google drive.
2023-12-04 20:31:21 -08:00
guillaumedelande
ea0afd07ca Update azuresearch.py following recent change from azure-search-documents library (#13472)
- **Description:** 

The reference library azure-search-documents changed in version 11.4.0:

1. Notebook explaining Azure AI Search updated with the most recent info
2. HnswVectorSearchAlgorithmConfiguration --> HnswAlgorithmConfiguration
3. PrioritizedFields(prioritized_content_fields) -->
SemanticPrioritizedFields(content_fields)
4. SemanticSettings --> SemanticSearch
5. VectorSearch(algorithm_configurations) -->
VectorSearch(configurations)

These changes are now reflected in LangChain: the default vector search
config from LangChain is now compatible with the officially released
library from Azure.

  - **Issue:**
Issue creating a new index (due to wrong class used for default vector
search configuration) if using latest version of azure-search-documents
with current langchain version
  - **Dependencies:** azure-search-documents>=11.4.0,
  - **Tag maintainer:** ,

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-12-04 20:29:20 -08:00
price-deshaw
5cb3393e20 update OpenAI function agents' llm validation (#13538)
- **Description:** This PR modifies the LLM validation in OpenAI
function agents to check whether the LLM supports OpenAI functions based
on a property (`supports_oia_functions`) instead of whether the LLM
passed to the agent `isinstance` of `ChatOpenAI`. This allows classes
that extend `BaseChatModel` to be passed to these agents as long as
they've been integrated with the OpenAI APIs and have this property set,
even if they don't extend `ChatOpenAI`.
  - **Issue:** N/A
  - **Dependencies:** none
2023-12-04 20:28:13 -08:00
Max Weng
74c7b799ef migrate openai audio api (#13557)
Migrates the OpenAI audio API for issue
https://github.com/langchain-ai/langchain/issues/13162, per the [openai
v1.0.0 Migration Guide](https://github.com/openai/openai-python/discussions/742).
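
A hedged sketch of the call shape this migration targets (the v1 client;
pre-1.0 code used `openai.Audio.transcribe(...)`):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
with open("speech.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio_file
    )
print(transcript.text)
```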

---------

Co-authored-by: Double Max <max@ground-map.com>
2023-12-04 20:27:54 -08:00
Arnaud Gelas
abbba6c7d8 openapi/planner.py: Deal with json in markdown output cases (#13576)
- **Description:** In openapi/planner deal with json in markdown output
cases
- **Issue:** In some cases LLMs could return json in markdown which
can't be loaded.
  - **Dependencies:**
  - **Tag maintainer:** @eyurtsev
  - **Twitter handle:**

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-12-04 20:27:22 -08:00
Harrison Chase
8eab4d95c0 Harrison/delegate from template (#14266)
Co-authored-by: M.R. Sopacua <144725145+msopacua@users.noreply.github.com>
2023-12-04 20:18:15 -08:00
Erick Friis
956d55de2b docs[patch]: chat model page names (#14264) 2023-12-04 20:08:41 -08:00
Nolan
b49104c2c9 Add missing doc key to metadata field in AzureSearch Vectorstore (#13328)
- **Description:** Adds doc key to metadata field when adding document
to Azure Search.
  - **Issue:** -,
  - **Dependencies:** -,
  - **Tag maintainer:** @eyurtsev,
  - **Twitter handle:** @finnless

Right now the document key with the name FIELDS_ID is not included in
the FIELDS_METADATA field, and therefore is not included in the Document
returned from a query. This is really annoying if you want to be able to
modify that item in the vectorstore.

Others' thoughts on this are welcome.
2023-12-04 19:53:27 -08:00
Jon Watte
e042e5df35 fix: call _on_llm_error() (#13581)
Description: There's a copy-paste typo where on_llm_error() calls
_on_chain_error() instead of _on_llm_error().
Issue: #13580 
Dependencies: None
Tag maintainer: @hwchase17 
Twitter handle: @jwatte

"Run `make format`, `make lint` and `make test` to check this locally."
The test scripts don't work in a plain Ubuntu LTS 20.04 system.
It looks like the dev container pulling is stuck. Or maybe the internet
is just ornery today.

---------

Co-authored-by: jwatte <jwatte@observeinc.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-12-04 19:44:50 -08:00
Hamza Ahmed
fcc8e5e839 Update geodataframe.py (#13573)
here it is validating shapely.geometry.point.Point: if not
isinstance(data_frame[page_content_column].iloc[0], gpd.GeoSeries):
raise ValueError(
f"Expected data_frame[{page_content_column}] to be a GeoSeries" you need
it to validate the geoSeries and not the shapely.geometry.point.Point

if not isinstance(data_frame[page_content_column], gpd.GeoSeries):
            raise ValueError(
f"Expected data_frame[{page_content_column}] to be a GeoSeries"

2023-12-04 19:44:30 -08:00
Harrison Chase
2213fc9711 Harrison/bookend ai (#14258)
Co-authored-by: stvhu-bookend <142813359+stvhu-bookend@users.noreply.github.com>
2023-12-04 19:42:15 -08:00
cxumol
0d47d15a9f add(feat): Text Embeddings by Cloudflare Workers AI (#14220)
Add [Text Embeddings by Cloudflare Workers
AI](https://developers.cloudflare.com/workers-ai/models/text-embeddings/).
It's a new integration.
Trying to align it with its langchain-js version counterpart
[here](https://api.js.langchain.com/classes/embeddings_cloudflare_workersai.CloudflareWorkersAIEmbeddings.html).
- Dependencies: N/A
- Done `make format`, `make lint`, `make spell_check`, and `make
integration_tests`; all checks passed
2023-12-04 19:25:05 -08:00
Harrison Chase
c51001f01e fix comet tracer (#14259) 2023-12-04 19:03:19 -08:00
Erick Friis
4351b99d2b docs[patch]: search experiment (#14254)
- npm
- search config
- custom
2023-12-04 16:58:26 -08:00
Harrison Chase
4fb72ff76f fake consistent embeddings cleanup (#14256)
delete code that could never be reached
2023-12-04 16:55:30 -08:00
Michael Landis
e26906c1dc feat: implement max marginal relevance for momento vector index (#13619)
**Description**

Implements `max_marginal_relevance_search` and
`max_marginal_relevance_search_by_vector` for the Momento Vector Index
vectorstore.

Additionally bumps the `momento` dependency in the lock file and adds
logging to the implementation.

**Dependencies**

 updates `momento` dependency in lock file

**Tag maintainer**

@baskaryan 

**Twitter handle**

Please tag @momentohq for Momento Vector Index and @mloml for the
contribution 🙇

2023-12-04 16:50:23 -08:00
deedy5
ee9abb6722 Bugfix duckduckgo_search news search (#13670)
- **Description:** Bugfix duckduckgo_search news search
- **Issue:** https://github.com/langchain-ai/langchain/issues/13648
- **Dependencies:** None
- **Tag maintainer:** @baskaryan

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-12-04 16:48:20 -08:00
Aliaksandr Kuzmik
676a077c4e Add CometTracer (#13661)
Hi! I'm Alex, Python SDK Team Lead from
[Comet](https://www.comet.com/site/).

This PR contains our new integration between langchain and Comet -
`CometTracer` class which uses new `comet_llm` python package for
submitting data to Comet.

No additional dependencies for the langchain package are required
directly, but if the user wants to use `CometTracer`, `comet-llm>=2.0.0`
should be installed. Otherwise an exception will be raised from
`CometTracer.__init__`.

A test for the feature is included.

There is also an already existing callback (and .ipynb file with
example) which ideally should be deprecated in favor of a new tracer. I
wasn't sure how exactly you'd prefer to do it. For example we could open
a separate PR for that.

I'm open to your ideas :)
2023-12-04 16:46:48 -08:00
Harrison Chase
921c4b5597 Harrison/searchapi (#14252)
Co-authored-by: SebastjanPrachovskij <86522260+SebastjanPrachovskij@users.noreply.github.com>
2023-12-04 16:34:15 -08:00
Ravidhu
224aa5151d Fix Sagemaker Endpoint documentation (#13660)
- **Description:** fixed the transform_input method in the example,
  - **Issue:** example didn't work,
  - **Dependencies:** None,
  - **Tag maintainer:** @baskaryan,
  - **Twitter handle:** @Ravidhu87
2023-12-04 16:28:29 -08:00
Colin Ulin
9f9cb71d26 Embaas - added backoff retries for network requests (#13679)
Running a large number of requests to Embaas' servers (or any server)
can result in intermittent network failures (both from local and
external network/service issues). This PR implements exponential backoff
retries to help mitigate this issue.
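
A generic sketch of the technique (exponential backoff with `tenacity`;
the endpoint URL is a placeholder, not the exact Embaas implementation):

```python
import requests
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(5), wait=wait_exponential(min=1, max=16))
def post_with_backoff(payload: dict) -> dict:
    # Transient network failures raise here and trigger a retry with
    # exponentially growing wait times (1s, 2s, 4s, ... capped at 16s).
    response = requests.post(
        "https://api.example.com/v1/embeddings", json=payload, timeout=10
    )
    response.raise_for_status()
    return response.json()
```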
2023-12-04 16:21:35 -08:00
Erick Friis
f26d88ca60 docs[patch]: fix columns (#14251) 2023-12-04 16:03:09 -08:00
Kastan Day
65faba91ad langchain[patch]: Adding new Github functions for reading pull requests (#9027)
The Github utilities are fantastic, so I'm adding support for deeper
interaction with pull requests. Agents should read "regular" comments
and review comments, and the content of PR files (with summarization or
`ctags` abbreviations).

Progress:
- [x] Add functions to read pull requests and the full content of
modified files.
- [x] Function to use Github's built in code / issues search.

Out of scope:
- Smarter summarization of file contents of large pull requests (`tree`
output, or ctags).
- Smarter functions to checkout PRs and edit the files incrementally
before bulk committing all changes.
- Docs example for creating two agents:
- One watches issues: For every new issue, open a PR with your best
attempt at fixing it.
- The other watches PRs: For every new PR && every new comment on a PR,
check the status and try to finish the job.


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-12-04 15:53:36 -08:00
Hynek Kydlíček
aa8ae31e5b core[patch]: add response kwarg to on_llm_error
# Dependencies
None

# Twitter handle
@HKydlicek

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-12-04 15:04:48 -08:00
Leonid Ganeline
1750cc464d docs[patch]: moved vectorstore notebook file (#14181)
The `/docs/integrations/toolkits/vectorstore` page is not an
integration page. The best place for it is `/docs/modules/agents/how_to/`.
- Moved the file
- Rerouted the page URL
2023-12-04 14:44:06 -08:00
Jacob Lee
a26c4a0930 Allow base_store to be used directly with MultiVectorRetriever (#14202)
Allow users to pass a generic `BaseStore[str, bytes]` to
MultiVectorRetriever, removing the need to use the `create_kv_docstore`
method. This encoding will now happen internally.

@rlancemartin @eyurtsev
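
A hedged sketch of the resulting usage (assuming the `InMemoryByteStore`
and `byte_store` names from the related ByteStore PRs in this range, and
faiss installed):

```python
from langchain.embeddings import FakeEmbeddings
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import InMemoryByteStore
from langchain.vectorstores import FAISS

# A generic BaseStore[str, bytes] now backs the retriever directly;
# Document (de)serialization happens internally instead of requiring
# create_kv_docstore.
retriever = MultiVectorRetriever(
    vectorstore=FAISS.from_texts(["chunk"], FakeEmbeddings(size=8)),
    byte_store=InMemoryByteStore(),
    id_key="doc_id",
)
```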

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2023-12-04 14:43:32 -08:00
Vincent Brouwers
67662564f3 langchain[patch]: Fix config arg detection for wrapped lambdarunnable (#14230)
**Description:**
When a RunnableLambda only receives a synchronous callback, this
callback is wrapped into an async one since #13408. However, this
wrapping with `(*args, **kwargs)` causes the `accepts_config` check at
[/libs/core/langchain_core/runnables/config.py#L342](ee94ef55ee/libs/core/langchain_core/runnables/config.py (L342))
to fail, as this checks for the presence of a "config" argument in the
method signature.

Adding `functools.wraps` around it resolves this.
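
A self-contained sketch of why `functools.wraps` fixes the check:

```python
import functools
import inspect

def sync_callback(value, config):
    return value

def wrap_plain(fn):
    def inner(*args, **kwargs):
        return fn(*args, **kwargs)
    return inner

def wrap_with_wraps(fn):
    @functools.wraps(fn)
    def inner(*args, **kwargs):
        return fn(*args, **kwargs)
    return inner

# Without wraps, "config" vanishes from the visible signature, so an
# accepts_config-style check fails; wraps exposes the original signature.
print("config" in inspect.signature(wrap_plain(sync_callback)).parameters)       # False
print("config" in inspect.signature(wrap_with_wraps(sync_callback)).parameters)  # True
```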
2023-12-04 14:18:30 -08:00
Jacob Lee
de86b84a70 Prefer byte store interface for Upstash BaseStore to match other Redis (#14201)
If we are not going to make the existing Docstore class also implement
`BaseStore[str, Document]`, IMO all base store implementations should
always be `[str, bytes]` so that they are more interchangeable.

CC @rlancemartin @eyurtsev
2023-12-04 14:17:33 -08:00
Harrison Chase
411aa9a41e Harrison/nasa tool (#14245)
Co-authored-by: Jacob Matias <88005863+matiasjacob25@users.noreply.github.com>
Co-authored-by: Karam Daid <karam.daid@mail.utoronto.ca>
Co-authored-by: Jumana <jumana.fanous@mail.utoronto.ca>
Co-authored-by: KaramDaid <38271127+KaramDaid@users.noreply.github.com>
Co-authored-by: Anna Chester <74325334+CodeMakesMeSmile@users.noreply.github.com>
Co-authored-by: Jumana <144748640+jfanous@users.noreply.github.com>
2023-12-04 13:43:11 -08:00
nceccarelli
5fea63327b Support Azure gov cloud in Azure Cognitive Search retriever (#13695)
- **Description:** The existing version hardcoded search.windows.net in
the base url. This is not compatible with the gov cloud. I am allowing
the user to override the default for gov cloud support.
- **Issue:** N/A, did not write up in an issue
- **Dependencies:** None

---------

Co-authored-by: Nicholas Ceccarelli <nceccarelli2@moog.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-12-04 12:56:35 -08:00
ealt
e09b876863 Fixes error loading Obsidian templates (#13888)
- **Description:** Obsidian templates can include
[variables](https://help.obsidian.md/Plugins/Templates#Template+variables)
using double curly braces. `ObsidianLoader` uses PyYaml to parse the
frontmatter of documents. This parsing throws an error when encountering
variables' curly braces. This is avoided by temporarily substituting
safe strings before parsing.
  - **Issue:** #13887
  - **Tag maintainer:** @hwchase17
2023-12-04 12:55:37 -08:00
Erick Friis
f6d68d78f3 nbdoc -> quarto (#14156)
Switches to a more maintained solution for building ipynb -> md files
(`quarto`)

Also bumps us down to python3.8 because it's significantly faster in the
vercel build step. Uses default openssl version instead of upgrading as
well.
2023-12-04 12:50:56 -08:00
Nithish Raghunandanan
eecfa3f9e5 Add Couchbase document loader (#13979)
**Description:** 
Adds the document loader for [Couchbase](http://couchbase.com/), a
distributed NoSQL database.
**Dependencies:** 
Added the Couchbase SDK as an optional dependency.
**Twitter handle:** nithishr

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-12-04 12:28:12 -08:00
Bob Lin
805e9bfc24 Add doc for the development of core and experimental sections (#13966)
### **Description**

Hi, I just started learning the source code of `langchain` and hope to
contribute code. However, according to the instructions in the
[CONTRIBUTING.md](https://github.com/langchain-ai/langchain/blob/master/.github/CONTRIBUTING.md)
document, I could not get the test command `make test` to run normally.
I found that many modules did not exist after [splitting
`langchain_core`](https://github.com/langchain-ai/langchain/discussions/13823),
so I updated the document.

### **Twitter handle** 

lin_bob57617
2023-12-04 12:27:57 -08:00
Muntaqa Mahmood
25f72944a0 Add: Steam API tool (#14008)
- **Description:** Our PR is an integration of a Steam API Tool that
makes recommendations on Steam games based on a user's Steam profile and
provides information on games based on user-provided queries.
- **Issue:** the issue # our PR implements:
https://github.com/langchain-ai/langchain/issues/12120
- **Dependencies:** python-steam-api library, steamspypi library and
decouple library
  - **Tag maintainer:** @baskaryan, @hwchase17 
  - **Twitter handle:** N/A

Hello langchain Maintainers,

We are a team of 4 University of Toronto students contributing to
langchain as part of our course [CSCD01 (link to course
page)](https://cscd01.com/work/open-source-project). We hope our changes
help the community. We have run make format, make lint and make test
locally before submitting the PR. To our knowledge, our changes do not
introduce any new errors.

Our PR integrates the python-steam-api, steamspypi and decouple
packages. We have added integration tests to test our python API
integration into langchain and an example notebook is also provided.

Our amazing team that contributed to this PR: @JohnY2002, @shenceyang,
@andrewqian2001 and @muntaqamahmood

Thank you in advance to all the maintainers for reviewing our PR!

---------

Co-authored-by: Shence <ysc1412799032@163.com>
Co-authored-by: JohnY2002 <johnyuan0526@gmail.com>
Co-authored-by: Andrew Qian <andrewqian2001@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: JohnY <94477598+JohnY2002@users.noreply.github.com>
2023-12-04 12:27:38 -08:00
Bob Lin
cd2028288e Add openai v2 adapter (#14063)
### Description

Starting from [openai version
1.0.0](17ac677995 (module-level-client)),
the camel case form of `openai.ChatCompletion` is no longer supported
and has been changed to lowercase `openai.chat.completions`. In
addition, the returned object only accepts attribute access instead of
index access:

```python
import openai

# optional; defaults to `os.environ['OPENAI_API_KEY']`
openai.api_key = '...'

# all client options can be configured just like the `OpenAI` instantiation counterpart
openai.base_url = "https://..."
openai.default_headers = {"x-foo": "true"}

completion = openai.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.choices[0].message.content)
```

So I implemented a compatible adapter that supports both attribute
access and index access:

```python
In [1]: from langchain.adapters import openai as lc_openai
   ...: messages = [{"role": "user", "content": "hi"}]

In [2]: result = lc_openai.chat.completions.create(
   ...:     messages=messages, model="gpt-3.5-turbo", temperature=0
   ...: )

In [3]: result.choices[0].message
Out[3]: {'role': 'assistant', 'content': 'Hello! How can I assist you today?'}

In [4]: result["choices"][0]["message"]
Out[4]: {'role': 'assistant', 'content': 'Hello! How can I assist you today?'}

In [5]: result = await lc_openai.chat.completions.acreate(
   ...:     messages=messages, model="gpt-3.5-turbo", temperature=0
   ...: )

In [6]: result.choices[0].message
Out[6]: {'role': 'assistant', 'content': 'Hello! How can I assist you today?'}

In [7]: result["choices"][0]["message"]
Out[7]: {'role': 'assistant', 'content': 'Hello! How can I assist you today?'}

In [8]: for rs in lc_openai.chat.completions.create(
    ...:     messages=messages, model="gpt-3.5-turbo", temperature=0, stream=True
    ...: ):
    ...:     print(rs.choices[0].delta)
    ...:     print(rs["choices"][0]["delta"])
    ...:
{'role': 'assistant', 'content': ''}
{'role': 'assistant', 'content': ''}
{'content': 'Hello'}
{'content': 'Hello'}
{'content': '!'}
{'content': '!'}

In [20]: async for rs in await lc_openai.chat.completions.acreate(
    ...:     messages=messages, model="gpt-3.5-turbo", temperature=0, stream=True
    ...: ):
    ...:     print(rs.choices[0].delta)
    ...:     print(rs["choices"][0]["delta"])
    ...:
{'role': 'assistant', 'content': ''}
{'role': 'assistant', 'content': ''}
{'content': 'Hello'}
{'content': 'Hello'}
{'content': '!'}
{'content': '!'}
...
```

### Twitter handle

[lin_bob57617](https://twitter.com/lin_bob57617)
2023-12-04 12:12:30 -08:00
billytrend-cohere
0f02081392 Add input_type override (#14068)
Add option to override input_type for cohere's v3 embeddings models

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-12-04 12:10:24 -08:00
Dmitrii Rashchenko
aaabc1574f Support of custom hugging face inference endpoints url (#14125)
- **Description:** to support not only publicly available Hugging Face
endpoints, but also protected ones (created with the "Inference
Endpoints" Hugging Face feature), I have added the ability to specify a
custom api_url. If not specified, the default behaviour won't change.
  - **Issue:** #9181,
  - **Dependencies:** no extra dependencies
2023-12-04 12:08:51 -08:00
Bob Lin
702a6d7044 Closed #14159 (#14165)
### Description

Fix: #14159

Use `from pydantic.v1 import BaseModel, Field` instead of `from pydantic
import BaseModel, Field`

### [lin_bob57617](https://twitter.com/lin_bob57617)
2023-12-04 12:06:04 -08:00
Perry Lee
641e401ba8 Shorten wget commands (#14211)
- **Description:** The commands can be more efficient if the output name
is set to the destination filename instead of renaming in the second
command.
2023-12-04 12:03:47 -08:00
Harrison Chase
e32185193e Harrison/embass (#14242)
Co-authored-by: Julius Lipp <lipp.julius@gmail.com>
2023-12-04 11:58:52 -08:00
umair mehmood
8504ec56e4 fixed: ModuleNotFoundError: No module named 'clarifai.auth' (#14215)
Updated the clarifai imports 

fixed: #14175 

@efriis 
@baskaryan
2023-12-04 11:53:34 -08:00
Hieu Lam
ca8a022cd9 Fixed OpenAIFunctionsAgent not returning when receiving AgentFinish (#14236)
**Description:** The condition check in the `return_stopped_response`
function of `OpenAIAgent` may not be correct: when the tools return an
`AgentFinish`, it does not work properly.


Thanks for review, @baskaryan, @eyurtsev, @hwchase17.
2023-12-04 11:43:04 -08:00
Unai Garay Maestre
6826feea14 Adds llm_chain_kwargs to BaseRetrievalQA.from_llm (#14224)
- **Description:** Adds `llm_chain_kwargs` to `BaseRetrievalQA.from_llm`
so these can be passed to the LLM at runtime,
- **Issue:** https://github.com/langchain-ai/langchain/issues/14216,
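
A hedged sketch of the new argument (fake components for
self-containment; assumes faiss is installed):

```python
from langchain.chains import RetrievalQA
from langchain.embeddings import FakeEmbeddings
from langchain.llms.fake import FakeListLLM
from langchain.vectorstores import FAISS

vectorstore = FAISS.from_texts(["hello world"], FakeEmbeddings(size=8))

# llm_chain_kwargs is forwarded to the underlying LLMChain constructor.
qa = RetrievalQA.from_llm(
    llm=FakeListLLM(responses=["hi"]),
    retriever=vectorstore.as_retriever(),
    llm_chain_kwargs={"verbose": True},
)
```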

---------

Signed-off-by: ugm2 <unaigaraymaestre@gmail.com>
2023-12-04 11:34:01 -08:00
James Braza
6ce5dab38c Clarifying descriptions in GuardrailsOutputParser (#14228)
Upstreaming knowledge from
https://github.com/guardrails-ai/guardrails/discussions/473 to LangChain
2023-12-04 11:33:22 -08:00
geret1
50aee687c6 langchain[patch]: Cerebrium model_api_request deprecation (#12704)
- **Description:** As part of my conversation with the Cerebrium team,
`model_api_request` will no longer be available in the cerebrium lib, so
it needs to be replaced.
  - **Issue:** #12705 12705,
  - **Dependencies:** Cerebrium team (agreed)
  - **Tag maintainer:** @eyurtsev 
  - **Twitter handle:** No official Twitter account sorry :D

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-12-04 09:26:32 -08:00
Harutaka Kawamura
ee94ef55ee docs[patch]: Update MLflow and Databricks docs (#14011)
Depends on #13699. Updates the existing mlflow and databricks examples.

---------

Co-authored-by: Ben Wilson <39283302+BenWilson2@users.noreply.github.com>
2023-12-03 16:07:09 -08:00
Leonid Ganeline
94bf733dae docs[patch]: AWS platform page update (#14160)
The `AWS` platform page was missing many integrations.
- added missing integration references to the `AWS` platform page
- added/updated descriptions and links in the referenced notebooks
- renamed two notebook files whose file names didn't match the page
titles, which generated an unordered ToC
- rerouted the URLs for the renamed files
- fixed `amazon_textract` notebook: removed failed cell outputs
2023-12-03 15:42:52 -08:00
Leonid Ganeline
74d4154bcc docs[patch]: added Templates Hub menu item (#14148)
This link was missing in Docs.
Added it.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-12-03 15:36:35 -08:00
William FH
246dc4f9cc langchain[patch]: Pass kwargs to chat fireworks (#14183)
Otherwise `.bind()` isn't really any good
2023-12-03 15:12:02 -08:00
Kaiboon Ee
e961c57fd2 langchain[patch]: Mask API key for Arcee LLM (#14193)
- **Description:** Mask API key for Arcee LLM and its associated unit
tests
  - **Issue:** https://github.com/langchain-ai/langchain/issues/12165
  - **Dependencies:** N/A
  - **Tag maintainer:** @eyurtsev
  - **Twitter handle:** `eekaiboon`

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-12-03 15:11:43 -08:00
Daniyar Supiyev
092f302c0f langchain[patch]: Asynchronous human-in-the-loop callback (#14195)
**Description:** Adds the possibility to use an asynchronous callback
handler in the human-in-the-loop validation tool. Very useful, for example,
if you want to implement validation over a Telegram bot.
**Issue:** -
**Dependencies:** -

---------

Co-authored-by: Daniyar_Supiyev <daniyar_supiyev@epam.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-12-03 14:57:07 -08:00
Leonid Ganeline
c660b0cf79 docs[patch]: moved semadb.mdx file (#14204)
The SemaDB.mdx file was placed in an extra sub-folder:
`https://python.langchain.com/docs/integrations/providers/providers/semadb`
- Moved file to the
`https://python.langchain.com/docs/integrations/providers/semadb`
- Added a redirect for the file URL
2023-12-03 14:36:47 -08:00
Mark Cusack
16c83f786c Adds the Yellowbrick Data Warehouse as a supported vector store (#13820)
- **Description** An integration to allow the Yellowbrick Data Warehouse
to function as a vector store

---------

Co-authored-by: markcusack <markcusack@markcusacksmac.lan>
Co-authored-by: markcusack <markcusack@Mark-Cusack-sMac.local>
2023-12-03 13:35:53 -08:00
Hendrik Hogertz
e6862e6e7d Fix Azure Openai function calling in streaming mode (#13768)
- **Description**: This PR addresses an issue with the OpenAI API
streaming response, where initially the key (arguments) is provided but
the value is None. Subsequently, it updates with {"arguments": "{\n"},
leading to a type inconsistency that causes an exception. The specific
error encountered is ValueError: additional_kwargs["arguments"] already
exists in this message, but with a different type. This change aims to
resolve this inconsistency and ensure smooth API interactions.
- **Issue**: None.
- **Dependencies**: None.
- **Tag maintainer**: @eyurtsev

This is an updated version of #13229 based on the refactored code.
Credit goes to @superken01.
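A toy illustration of the inconsistency (not the actual patch): the first streamed chunk carries a `None` placeholder that must be tolerated before string chunks can be concatenated.

```python
def merge_additional_kwargs(acc: dict, chunk: dict) -> dict:
    # Illustrative merge only; names and logic are assumptions.
    merged = dict(acc)
    for key, value in chunk.items():
        if merged.get(key) is None:   # first chunk: {"arguments": None}
            merged[key] = value
        elif isinstance(value, str):  # later chunks: {"arguments": "{\n"}
            merged[key] += value
    return merged

print(merge_additional_kwargs({"arguments": None}, {"arguments": "{\n"}))
# {'arguments': '{\n'}
```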

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-12-03 12:07:15 -08:00
Nicolò Boschi
e204657b3c AstraDB VectorStore: implement pre_delete_collection (#13780)
- **Description:** some vector stores have a flag for trying to delete the
collection before creating it (such as `pgvector`). This is a useful
flag when prototyping indexing pipelines and also for integration tests.
Added the bool flag `pre_delete_collection` to the constructor (default
False).
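A short sketch of the flag in use (endpoint and token are placeholders):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import AstraDB

# Drop any existing collection with this name before re-creating it,
# handy when prototyping indexing pipelines and in integration tests.
store = AstraDB(
    embedding=OpenAIEmbeddings(),
    collection_name="demo_collection",
    api_endpoint="https://<db-id>-<region>.apps.astra.datastax.com",  # placeholder
    token="AstraCS:...",  # placeholder
    pre_delete_collection=True,  # default: False
)
```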
  - **Tag maintainer:** @hemidactylus 
  - **Twitter handle:** nicoloboschi

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-12-03 12:06:20 -08:00
Chelsea E. Manning
2780d2d4dd Extend OpenAIEmbeddings class to support non-tiktoken based embeddings (#13884)
- **Description:** This extends `OpenAIEmbeddings` to add support for
non-`tiktoken` based embeddings, specifically for use with the new
`text-generation-webui` API (`--extensions openai`) which does not
support `tiktoken` encodings, but rather strings
  - **Issue:** Not found,
- **Dependencies:** HuggingFace `transformers.AutoTokenizer` is new
dependency for running the model without `tiktoken`
- **Tag maintainer:** @baskaryan based on last commit for
`langchain-core` refactor
  - **Twitter handle:** @xychelsea

Modified the tokenization process to be model-agnostic, allowing for
both OpenAI and non-OpenAI model tokenizations, by setting the new
default `bool` flag `tiktoken_enabled` to `False`. This requires
Hugging Face's AutoTokenizer and handling tokenization for models
requiring different preprocessing steps to generate a chunked string
request rather than a list of integers.

Updated the embeddings generation process to accommodate non-OpenAI
models. This includes converting tokenized text into embeddings using
OpenAI’s and Hugging Face’s model architectures.
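A sketch of the new flag against a local OpenAI-compatible server (URL and key are placeholders):

```python
from langchain.embeddings import OpenAIEmbeddings

# tiktoken_enabled=False sends chunked strings instead of tiktoken token IDs,
# so servers like text-generation-webui (--extensions openai) can embed them.
embeddings = OpenAIEmbeddings(
    tiktoken_enabled=False,
    openai_api_base="http://localhost:5001/v1",  # hypothetical local endpoint
    openai_api_key="sk-dummy",                   # placeholder
)
vector = embeddings.embed_query("hello world")
```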
2023-12-03 12:04:17 -08:00
Changgeng Zhao
9b59bde93d Update Hologres vector store: use hologres-vector (#13767)
Hi,
I made some code changes to the Hologres vector store to improve the
data insertion performance.
Also, this version of the code uses the `hologres-vector` library. This
library is more convenient for us to update and performs more
efficiently.
The code has passed the format/lint/spell check. I have run the unit
test for Hologres connecting to my own database.
Please check this PR again and tell me if anything needs to change.

Best,
Changgeng,
Developer @ Alibaba Cloud

Co-authored-by: Changgeng Zhao <zhaochanggeng.zcg@alibaba-inc.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-12-03 11:50:45 -08:00
Nicolò Boschi
0de7cf898d Ensure AstraDB integration tests clean up the environment (#13774)
- **Description:** currently astra_db integration tests might leave
orphan collections
  - **Tag maintainer:** @hemidactylus 
  - **Twitter handle:** nicoloboschi
2023-12-03 11:14:42 -08:00
Harrison Chase
7bc4c12477 delete stray test (#14200)
was added to an old path

also I'm not sure this is even really a test file, which is why I didn't
move it
2023-12-03 11:06:57 -08:00
Leonid Ganeline
283c2994de docs: Hugging Face platform page (#13831)
`Hugging Face` is definitely a platform. It includes many integrations
for many modules (LLM, Embedding, DocumentLoader, Tool).
So, a doc page was added that defines Hugging Face as a platform.
2023-12-03 11:06:43 -08:00
Chad Norvell
8a0951d934 Fix Mathpix PDF loader integration (#13949)
- **Description:** Fixes the Mathpix PDF loader API integration.
Specifically, ensures that Mathpix auth headers are provided for every
request, and ensures that we recognize all errors that can occur during
a request. Also, the option to provide API keys as kwargs never actually
worked before, but now that's fixed too.
  - **Issue:** #11249
  - **Dependencies:** None
2023-12-03 10:36:49 -08:00
gzyJoy
32d4bb4590 Added Slacktoolkit (#14012)
- **Description:** 
This PR introduces the Slack toolkit to LangChain, which allows users to
read and write to Slack using the Slack API. Specifically, we've added
the following tools.
1. get_channel: Provides a summary of all the channels in a workspace.
2. get_message: Gets the message history of a channel.
3. send_message: Sends a message to a channel.
4. schedule_message: Sends a message to a channel at a specific time and
date.

- **Issue:** This pull request addresses [Add Slack Toolkit
#11747](https://github.com/langchain-ai/langchain/issues/11747)
  - **Dependencies:** package `slack_sdk`
Note: For this toolkit to function you will need to add a Slack app to
your workspace. Additional info can be found
[here](https://slack.com/help/articles/202035138-Add-apps-to-your-Slack-workspace).
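A hedged sketch of wiring the toolkit into an agent (the import path and environment variable name are assumptions, not spelled out in this PR):

```python
from langchain.agents import AgentType, initialize_agent
from langchain.agents.agent_toolkits import SlackToolkit  # path assumed
from langchain.chat_models import ChatOpenAI

# Assumes a Slack app is installed in the workspace and its bot token is
# exported, e.g. as SLACK_BOT_TOKEN (exact variable name may differ).
toolkit = SlackToolkit()
agent = initialize_agent(
    tools=toolkit.get_tools(),
    llm=ChatOpenAI(temperature=0),
    agent=AgentType.OPENAI_FUNCTIONS,
)
agent.run("Send the message 'hello' to the #general channel.")
```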

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: ArianneLavada <ariannelavada@gmail.com>
Co-authored-by: ArianneLavada <84357335+ArianneLavada@users.noreply.github.com>
Co-authored-by: ariannelavada@gmail.com <you@example.com>
2023-12-03 10:25:38 -08:00
Richie
99e5ee6a84 fix(vectorstores): incorrect import for mongodb atlas DriverInfo (#14060)
- **Description:** fix `import` issue for the `mongodb atlas` vector store
integration
  - **Issue:** none
  - **Dependencies:** none

while trying to follow the official `langchain` [mongodb integration
guide](https://python.langchain.com/docs/integrations/vectorstores/mongodb_atlas),
an import error occurs.

It's caused by incorrect import location:
- `from pymongo import DriverInfo` should be `from pymongo.driver_info
import DriverInfo`
- reference: [pymongo's DriverInfo
class](https://pymongo.readthedocs.io/en/stable/api/pymongo/driver_info.html#pymongo.driver_info.DriverInfo)
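For reference, the working import (constructor arguments shown purely for illustration):

```python
from pymongo.driver_info import DriverInfo

driver_info = DriverInfo(name="langchain", version="0.0.345")
```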

Thanks!
2023-12-03 10:22:13 -08:00
ggeutzzang
03d6b94c29 Fix: (issue #14066) DOC: Summarization output broken (#14078)
- **Description:** As described in the issue below,
https://python.langchain.com/docs/use_cases/summarization
I've modified the Python code in the above notebook so that it works correctly.

I also modified the OpenAI LLM model to the latest version as shown
below.
`gpt-3.5-turbo-16k --> gpt-3.5-turbo-1106`
This is because it seems to be a bit more responsive.
  - **Issue:** : #14066
2023-12-03 10:13:57 -08:00
James Braza
3833882ab7 Removing extra StdOutCallbackHandler overridden methods (#14136)
Unnecessarily overridden methods:

- Give the idea the subclass is doing something special (when it isn't)
- Block CTRL-click to the actual method

This PR removes some unnecessarily overridden methods in
`StdOutCallbackHandler`

Supersedes https://github.com/langchain-ai/langchain/pull/12858
2023-12-03 09:38:49 -08:00
Bob Lin
ac449f186b Update docs to use new usage in openai>1.0.0 (#14163)
### Description

Use new
[APIs](https://github.com/openai/openai-python/blob/main/api.md#finetuning)

### Twitter handle

[lin_bob57617](https://twitter.com/lin_bob57617)
2023-12-03 09:37:35 -08:00
James Braza
052e23be3e Added Python logging tracer (#14190)
This PR creates a logging handler and adds a simple unit test of it

Supersedes https://github.com/langchain-ai/langchain/pull/12862

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-12-03 09:36:30 -08:00
Bob Lin
1ea48a31da Update fallback cases (#14164)
### Description

The `RateLimitError` initialization method has changed after openai v1,
and the usage of `patch` needs to be changed.
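For example, under `openai>=1.0` the error can no longer be constructed from a plain message; a sketch (keyword names per the openai v1 API, request/response values are dummies):

```python
import httpx
from openai import RateLimitError

# openai>=1.0 requires an httpx response and a body to build the error,
# which is what patched fallback examples now have to construct.
request = httpx.Request("POST", "https://api.openai.com/v1/chat/completions")
error = RateLimitError(
    "rate limited", response=httpx.Response(429, request=request), body=None
)
```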

### Twitter handle

[lin_bob57617](https://twitter.com/lin_bob57617)
2023-12-03 08:56:07 -08:00
Bob Lin
62505043be Closed #14069 (#14166)
### Description

Fix #14069

### Twitter handle

[lin_bob57617](https://twitter.com/lin_bob57617)
2023-12-03 08:55:25 -08:00
Yong woo Song
9938086df0 Fix Html2TextTransformer for shallow copy (#14197)
Hi,
There is some unintended behavior in Html2TextTransformer.
The current code **directly modifies the original documents that are
passed as arguments to the function.**
Therefore, not only the return value of the function but also the input
variables are modified simultaneously.
**To resolve this, I added unit test code as well.**

reference link: [Shallow vs Deep Copying of Python
Objects](https://realpython.com/copying-python-objects/)
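A toy illustration of the shallow-copy pitfall and the fix (stand-in transform, not the actual `Html2TextTransformer` code):

```python
import copy

from langchain.schema import Document

def transform_documents(documents: list[Document]) -> list[Document]:
    # Deep-copy first so the caller's documents are left untouched.
    new_documents = [copy.deepcopy(doc) for doc in documents]
    for doc in new_documents:
        doc.page_content = doc.page_content.upper()  # stand-in for html2text
    return new_documents

docs = [Document(page_content="<p>hello</p>")]
transformed = transform_documents(docs)
assert docs[0].page_content == "<p>hello</p>"  # input is unchanged
```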

Thanks! ☺️
2023-12-03 08:45:35 -08:00
h3l
818252b1f8 Fix: (issue #14127) Volc Engine MaaS import error (#14194)
- **Description:** fix Volc Engine MaaS import error
  - **Issue:** https://github.com/langchain-ai/langchain/issues/14127
  - **Dependencies:** None
  - **Tag maintainer:** @baskaryan 
  - **Twitter handle:**

Co-authored-by: lvzhong <lvzhong@bytedance.com>
2023-12-03 08:43:23 -08:00
Leonid Ganeline
6ae0194dc7 docs: integrations/toolkits/office365 notebook update (#14188)
Added more descriptions and authentication details.
2023-12-03 08:43:00 -08:00
Bagatur
0bdb434383 langchain[patch]: Release langchain 0.0.345 (#14184) 2023-12-02 15:53:49 -08:00
Bagatur
15c04a5670 core[patch]: Release 0.0.9 (#14182) 2023-12-02 14:40:56 -08:00
James Braza
bdb6ae2ed3 core[patch]: BaseTracer helper method for Run lookup (#14139)
I observed the same run ID extraction logic is repeated many times in
`BaseTracer`.

This PR creates a helper method for DRY code.
2023-12-02 14:05:50 -08:00
Harutaka Kawamura
41ee3be95f langchain[patch]: Support passing parameters to llms.Databricks and llms.Mlflow (#14100)
Before, we need to use `params` to pass extra parameters:

```python
from langchain.llms import Databricks

Databricks(..., params={"temperature": 0.0})
```

Now, we can directly specify extra params:

```python
from langchain.llms import Databricks

Databricks(..., temperature=0.0)
```
2023-12-01 19:27:18 -08:00
Abdul
82102c99b3 langchain[patch]: Running SQLDatabaseChain adds prefix "SQLQuery:\n" (#14058)
- **Issue:** https://github.com/langchain-ai/langchain/issues/12077

---------

Co-authored-by: Abdul Kader Maliyakkal <maliyakk@amazon.com>
2023-12-01 19:26:16 -08:00
Samuel Kemp
fd781c89cc langchain[minor]: add azure ai data document loader (#13404)
This PR adds an "Azure AI data" document loader, which allows Azure AI
users to load their registered data assets as a document object in
langchain.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-12-01 19:25:55 -08:00
James Braza
24385a00de core[minor], langchain[patch], experimental[patch]: Added missing py.typed to langchain_core (#14143)
See PR title.

From what I can see, `poetry` will auto-include this. Please let me know
if I am missing something here.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-12-01 19:15:23 -08:00
quantum00549
f7c257553d langchain[patch]: fixed a bug that was causing the streaming transfer to not work… (#10827)
… properly

Fixed a bug that was causing the streaming transfer to not work
properly.
 - **Description:**
1. The `on_llm_new_token` method in the streaming callback can now be
called properly in streaming mode.
2. In streaming mode, the LLM can now correctly output the complete
response instead of just the first token.
- **Tag maintainer:** @wangxuqi
- **Twitter handle:** @kGX7XJjuYxzX9Km

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-12-01 18:57:50 -08:00
Eugene Yurtsev
6d0209e0aa Improve file system blob loader and generic loader (#14004)
* Add support for passing a specific file to the file system blob loader
* Allow specifying a class parameter for the parser for the generic
loader

```python

class AudioLoader(GenericLoader):
    @staticmethod
    def get_parser(**kwargs):
        # Return the parser instance this loader should use.
        return MyAudioParser(**kwargs)
```

The intent of the GenericLoader is to provide on-ramps from different
sources (e.g., web, s3, file system).

An alternative is to use pipelining syntax or creating a Pipeline

```
FileSystemBlobLoader(...) | MyAudioParser
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-12-01 21:23:40 -05:00
Erick Friis
700428593a fix broken api docs links (#14154) 2023-12-01 17:17:52 -08:00
Bagatur
340b42d8ee docs[minor]: lcel why page (#14089) 2023-12-01 16:13:31 -08:00
Lance Martin
cbe4753e1a Update Open CLIP embd (#14155)
The prior default model required a large amount of RAM and often crashed
the Jupyter notebook kernel.
2023-12-01 15:13:20 -08:00
Erick Friis
b01d9d27d9 docs[patch]: docs local build (#14152) 2023-12-01 14:03:36 -08:00
Alex Kira
0caef3cde7 Change RunnableMap to RunnableParallel for consistency (#14142)
- **Description:** Change instances of RunnableMap to RunnableParallel,
as that should be the one used going forward. This makes it consistent
across the codebase.
2023-12-01 13:36:40 -08:00
Erick Friis
96f6b90349 templates[patch]: relock templates (#14149) 2023-12-01 13:35:54 -08:00
Martin Jul
e3a7c96a8e docs[patch]: Fix minor typos (casing) in quickstart (#14138)
Fix casing of API and LangChain in the description text for the
LangServe example server.
2023-12-01 13:29:53 -08:00
Erick Friis
8cf4cb9e48 docs[patch]: Fix templates/index (#14146) 2023-12-01 13:09:36 -08:00
Amyh102
b6d26d3f9f infra[patch]: Add unit tests for Huggingface dataset loader (#14053)
- **Description:** Add unit tests for huggingface dataset loader and
sample huggingface dataset for future tests. Updates dependencies for
`datasets` module.
- Adds coverage for [previous pull
request](https://github.com/langchain-ai/langchain/pull/13864)
  - **Tag maintainer:** @hwchase17

---------

Co-authored-by: Amy Han <amyhan@Amys-Air.lan>
Co-authored-by: Amy Han <amyhan@Amys-MacBook-Air.local>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-12-01 12:42:31 -08:00
Alex Kira
6eb40db353 docs[patch]: Add getting started section to LCEL doc (#14045)
### Description:
Doc addition for LCEL introduction. Adds a more basic starter guide for
using LCEL.

---------

Co-authored-by: Alex Kira <akira@Alexs-MBP.local.tld>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-12-01 12:23:43 -08:00
Govinda Totla
62a3473ac0 docs[patch]: add text_splitter.py test (#14025)
Description: Add HTMLHeaderTextSplitter unit test
Dependencies: none
2023-12-01 11:57:50 -08:00
Bagatur
7d5341dbd3 docs[patch]: add contribs to readme (#14137) 2023-12-01 11:34:28 -08:00
axiangcoding
1b36ddf16c docs[patch]: add deprecated note for ErnieChatBot (#14061)
- **Description:** just a little change to the ErnieChatBot class
description, suggesting users use a more suitable class
  - **Issue:** none,
  - **Dependencies:** none,
  - **Tag maintainer:** @baskaryan ,
  - **Twitter handle:** none
2023-12-01 11:16:31 -08:00
Alex Kira
1757258b2a docs[patch]: Add mermaid JS theme dependency to docusaurus (#14051)
- **Description:** Add mermaid JS dependency and configs to
documentation. Allows inline doc diagrams in markdown.
  - **Dependencies:** NPM package @docusaurus/theme-mermaid
2023-12-01 11:06:29 -08:00
Devin Dahoon Kim
32da0a4d71 langchain[patch]: use async_embed_with_retry in _aget_len_safe_embeddings (#14110)
**Description**

`embed_with_retry` is for sync operations and not for async operations.
Use `async_embed_with_retry` for appropriate async operations.


I'm using `OpenAIEmbeddings(http_client=httpx.AsyncClient())` with only
async operations.
However, I got an error when I used `embedding.aembed_documents` because
`embed_with_retry` uses the sync OpenAI client with an async HTTP client.
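A sketch of the async-only setup described above (mirrors the reporter's configuration):

```python
import asyncio

import httpx
from langchain.embeddings import OpenAIEmbeddings

async def main() -> None:
    # Async HTTP client + async embedding call; with this fix the async
    # retry path (async_embed_with_retry) is used under the hood.
    embeddings = OpenAIEmbeddings(http_client=httpx.AsyncClient())
    vectors = await embeddings.aembed_documents(["hello", "world"])
    print(len(vectors))

asyncio.run(main())
```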
2023-12-01 10:47:07 -08:00
lijie
371bcb7580 langchain[patch]: set maxsplit when parse python function docstring (#14121)
Description

When the description of an arg in a Python docstring contains ":",
`_parse_python_function_docstring` raises **ValueError: too many
values to unpack (expected 2)**.

A sample desc would be:
"""
Args: 
    error_arg: this is an arg with an additional ":" symbol
"""

Setting the `maxsplit` parameter fixes it.
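Illustrative before/after of the parsing fix (helper name hypothetical):

```python
def parse_arg_line(line: str) -> tuple[str, str]:
    # Before: name, desc = line.split(":") raised ValueError whenever the
    # description itself contained ":". Limiting the split fixes that.
    name, desc = line.split(":", maxsplit=1)
    return name.strip(), desc.strip()

print(parse_arg_line('error_arg: this is an arg with an additional ":" symbol'))
# ('error_arg', 'this is an arg with an additional ":" symbol')
```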
2023-12-01 10:46:53 -08:00
Harrison Chase
ae646701c4 Harrison/ibm (#14133)
Co-authored-by: Mateusz Szewczyk <139469471+MateuszOssGit@users.noreply.github.com>
2023-12-01 12:44:11 -05:00
621 changed files with 64526 additions and 32131 deletions

View File

@@ -72,9 +72,10 @@ tell Poetry to use the virtualenv python environment (`poetry config virtualenvs
### Core vs. Experimental
This repository contains two separate projects:
This repository contains three separate projects:
- `langchain`: core langchain code, abstractions, and use cases.
- `langchain.experimental`: see the [Experimental README](https://github.com/langchain-ai/langchain/tree/master/libs/experimental/README.md) for more information.
- `langchain_core`: contain interfaces for key abstractions as well as logic for combining them in chains (LCEL).
- `langchain_experimental`: see the [Experimental README](https://github.com/langchain-ai/langchain/tree/master/libs/experimental/README.md) for more information.
Each of these has its own development environment. Docs are run from the top-level makefile, but development
is split across separate test & release flows.
@@ -128,6 +129,24 @@ make docker_tests
There are also [integration tests and code-coverage](https://github.com/langchain-ai/langchain/tree/master/libs/langchain/tests/README.md) available.
### Only develop langchain_core or langchain_experimental
If you are only developing `langchain_core` or `langchain_experimental`, you can simply install the dependencies for the respective projects and run tests:
```bash
cd libs/core
poetry install --with test
make test
```
Or:
```bash
cd libs/experimental
poetry install --with test
make test
```
### Formatting and Linting
Run these locally before submitting a PR; the CI system will check also.

45 .github/scripts/check_diff.py vendored Normal file
View File

@@ -0,0 +1,45 @@
import json
import sys
ALL_DIRS = {
"libs/core",
"libs/langchain",
"libs/experimental",
}
if __name__ == "__main__":
files = sys.argv[1:]
dirs_to_run = set()
for file in files:
if any(
file.startswith(dir_)
for dir_ in (
".github/workflows",
".github/tools",
".github/actions",
"libs/core",
".github/scripts/check_diff.py",
)
):
dirs_to_run = ALL_DIRS
break
elif "libs/community" in file:
dirs_to_run.update(
("libs/community", "libs/langchain", "libs/experimental")
)
elif "libs/partners" in file:
partner_dir = file.split("/")[2]
dirs_to_run.update(
(f"libs/partners/{partner_dir}", "libs/langchain", "libs/experimental")
)
elif "libs/langchain" in file:
dirs_to_run.update(("libs/langchain", "libs/experimental"))
elif "libs/experimental" in file:
dirs_to_run.add("libs/experimental")
elif file.startswith("libs/"):
dirs_to_run = ALL_DIRS
break
else:
pass
print(json.dumps(list(dirs_to_run)))

View File

@@ -1,21 +1,24 @@
---
name: libs/langchain CI
name: langchain CI
on:
push:
branches: [master]
pull_request:
paths:
- ".github/actions/poetry_setup/action.yml"
- ".github/tools/**"
- ".github/workflows/_lint.yml"
- ".github/workflows/_test.yml"
- ".github/workflows/_pydantic_compatibility.yml"
- ".github/workflows/langchain_ci.yml"
- "libs/*"
- "libs/langchain/**"
- "libs/core/**"
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
workflow_call:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
workflow_dispatch:
inputs:
working-directory:
required: true
type: choice
default: 'libs/langchain'
options:
- libs/langchain
- libs/core
- libs/experimental
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
@@ -24,43 +27,39 @@ on:
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
group: ${{ github.workflow }}-${{ github.ref }}-${{ inputs.working-directory }}
cancel-in-progress: true
env:
POETRY_VERSION: "1.6.1"
WORKDIR: "libs/langchain"
jobs:
lint:
uses: ./.github/workflows/_lint.yml
with:
working-directory: libs/langchain
working-directory: ${{ inputs.working-directory }}
secrets: inherit
test:
uses: ./.github/workflows/_test.yml
with:
working-directory: libs/langchain
working-directory: ${{ inputs.working-directory }}
secrets: inherit
compile-integration-tests:
uses: ./.github/workflows/_compile_integration_test.yml
with:
working-directory: libs/langchain
working-directory: ${{ inputs.working-directory }}
secrets: inherit
pydantic-compatibility:
uses: ./.github/workflows/_pydantic_compatibility.yml
dependencies:
uses: ./.github/workflows/_dependencies.yml
with:
working-directory: libs/langchain
working-directory: ${{ inputs.working-directory }}
secrets: inherit
extended-tests:
runs-on: ubuntu-latest
defaults:
run:
working-directory: ${{ env.WORKDIR }}
strategy:
matrix:
python-version:
@@ -69,6 +68,9 @@ jobs:
- "3.10"
- "3.11"
name: Python ${{ matrix.python-version }} extended tests
defaults:
run:
working-directory: ${{ inputs.working-directory }}
steps:
- uses: actions/checkout@v4
@@ -77,19 +79,14 @@ jobs:
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: libs/langchain
working-directory: ${{ inputs.working-directory }}
cache-key: extended
- name: Install dependencies
shell: bash
run: |
echo "Running extended tests, installing dependencies with poetry..."
poetry install -E extended_testing
- name: Install langchain core editable
shell: bash
run: |
poetry run pip install -e ../core
poetry install -E extended_testing --with test
- name: Run extended tests
run: make extended_tests

View File

@@ -38,7 +38,7 @@ jobs:
- name: Install integration dependencies
shell: bash
run: poetry install --with=test_integration
run: poetry install --with=test_integration,test
- name: Check integration tests compile
shell: bash

View File

@@ -1,4 +1,4 @@
name: pydantic v1/v2 compatibility
name: dependencies
on:
workflow_call:
@@ -28,7 +28,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
name: Pydantic v1/v2 compatibility - Python ${{ matrix.python-version }}
name: dependencies - Python ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v4
@@ -44,6 +44,14 @@ jobs:
shell: bash
run: poetry install
- name: Check imports with base dependencies
shell: bash
run: poetry run make check_imports
- name: Install test dependencies
shell: bash
run: poetry install --with test
- name: Install langchain editable
working-directory: ${{ inputs.working-directory }}
if: ${{ inputs.langchain-location }}

View File

@@ -90,4 +90,31 @@ jobs:
- name: Analysing the code with our lint
working-directory: ${{ inputs.working-directory }}
run: |
make lint
make lint_package
- name: Install test dependencies
# Also installs dev/lint/test/typing dependencies, to ensure we have
# type hints for as many of our libraries as possible.
# This helps catch errors that require dependencies to be spotted, for example:
# https://github.com/langchain-ai/langchain/pull/10249/files#diff-935185cd488d015f026dcd9e19616ff62863e8cde8c0bee70318d3ccbca98341
#
# If you change this configuration, make sure to change the `cache-key`
# in the `poetry_setup` action above to stop using the old cache.
# It doesn't matter how you change it, any change will cause a cache-bust.
working-directory: ${{ inputs.working-directory }}
run: |
poetry install --with test
- name: Get .mypy_cache to speed up mypy
uses: actions/cache@v3
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "2"
with:
path: |
${{ env.WORKDIR }}/.mypy_cache
key: mypy-test-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', env.WORKDIR)) }}
- name: Analysing the code with our lint
working-directory: ${{ inputs.working-directory }}
run: |
make lint_tests

47 .github/workflows/check_diffs.yml vendored Normal file
View File

@@ -0,0 +1,47 @@
---
name: Check library diffs
on:
push:
branches: [master]
pull_request:
paths:
- ".github/actions/**"
- ".github/tools/**"
- ".github/workflows/**"
- "libs/**"
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v4
with:
python-version: '3.10'
- id: files
uses: Ana06/get-changed-files@v2.2.0
- id: set-matrix
run: echo "dirs-to-run=$(python .github/scripts/check_diff.py ${{ steps.files.outputs.all }})" >> $GITHUB_OUTPUT
outputs:
dirs-to-run: ${{ steps.set-matrix.outputs.dirs-to-run }}
ci:
needs: [ build ]
strategy:
matrix:
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-run) }}
uses: ./.github/workflows/_all_ci.yml
with:
working-directory: ${{ matrix.working-directory }}

View File

@@ -1,47 +0,0 @@
---
name: libs/cli CI
on:
push:
branches: [ master ]
pull_request:
paths:
- '.github/actions/poetry_setup/action.yml'
- '.github/tools/**'
- '.github/workflows/_lint.yml'
- '.github/workflows/_test.yml'
- '.github/workflows/_pydantic_compatibility.yml'
- '.github/workflows/langchain_cli_ci.yml'
- 'libs/cli/**'
- 'libs/*'
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
POETRY_VERSION: "1.6.1"
WORKDIR: "libs/cli"
jobs:
lint:
uses:
./.github/workflows/_lint.yml
with:
working-directory: libs/cli
langchain-location: ../langchain
secrets: inherit
test:
uses:
./.github/workflows/_test.yml
with:
working-directory: libs/cli
secrets: inherit

View File

@@ -0,0 +1,13 @@
---
name: libs/community Release
on:
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/community
secrets: inherit

View File

@@ -1,52 +0,0 @@
---
name: libs/langchain core CI
on:
push:
branches: [ master ]
pull_request:
paths:
- '.github/actions/poetry_setup/action.yml'
- '.github/tools/**'
- '.github/workflows/_lint.yml'
- '.github/workflows/_test.yml'
- '.github/workflows/_pydantic_compatibility.yml'
- '.github/workflows/langchain_core_ci.yml'
- 'libs/core/**'
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
POETRY_VERSION: "1.6.1"
WORKDIR: "libs/core"
jobs:
lint:
uses:
./.github/workflows/_lint.yml
with:
working-directory: libs/core
secrets: inherit
test:
uses:
./.github/workflows/_test.yml
with:
working-directory: libs/core
secrets: inherit
pydantic-compatibility:
uses:
./.github/workflows/_pydantic_compatibility.yml
with:
working-directory: libs/core
secrets: inherit

View File

@@ -1,136 +0,0 @@
---
name: libs/experimental CI
on:
push:
branches: [master]
pull_request:
paths:
- ".github/actions/poetry_setup/action.yml"
- ".github/tools/**"
- ".github/workflows/_lint.yml"
- ".github/workflows/_test.yml"
- ".github/workflows/langchain_experimental_ci.yml"
- "libs/*"
- "libs/experimental/**"
- "libs/langchain/**"
- "libs/core/**"
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
POETRY_VERSION: "1.6.1"
WORKDIR: "libs/experimental"
jobs:
lint:
uses: ./.github/workflows/_lint.yml
with:
working-directory: libs/experimental
secrets: inherit
test:
uses: ./.github/workflows/_test.yml
with:
working-directory: libs/experimental
secrets: inherit
compile-integration-tests:
uses: ./.github/workflows/_compile_integration_test.yml
with:
working-directory: libs/experimental
secrets: inherit
# It's possible that langchain-experimental works fine with the latest *published* langchain,
# but is broken with the langchain on `master`.
#
# We want to catch situations like that *before* releasing a new langchain, hence this test.
test-with-latest-langchain:
runs-on: ubuntu-latest
defaults:
run:
working-directory: ${{ env.WORKDIR }}
strategy:
matrix:
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
name: test with unpublished langchain - Python ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ env.WORKDIR }}
cache-key: unpublished-langchain
- name: Install dependencies
shell: bash
run: |
echo "Running tests with unpublished langchain, installing dependencies with poetry..."
poetry install
echo "Editably installing langchain outside of poetry, to avoid messing up lockfile..."
poetry run pip install -e ../langchain
poetry run pip install -e ../core
- name: Run tests
run: make test
extended-tests:
runs-on: ubuntu-latest
defaults:
run:
working-directory: ${{ env.WORKDIR }}
strategy:
matrix:
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
name: Python ${{ matrix.python-version }} extended tests
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: libs/experimental
cache-key: extended
- name: Install dependencies
shell: bash
run: |
echo "Running extended tests, installing dependencies with poetry..."
poetry install -E extended_testing
- name: Run extended tests
run: make extended_tests
- name: Ensure the tests did not create any additional files
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'

View File

@@ -52,13 +52,7 @@ jobs:
shell: bash
run: |
echo "Running scheduled tests, installing dependencies with poetry..."
poetry install --with=test_integration
poetry run pip install google-cloud-aiplatform
poetry run pip install "boto3>=1.28.57"
if [[ ${{ matrix.python-version }} != "3.8" ]]
then
poetry run pip install fireworks-ai
fi
poetry install --with=test_integration,test
- name: Run tests
shell: bash
@@ -68,7 +62,9 @@ jobs:
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_DEPLOYMENT_NAME }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
run: |
make scheduled_tests

View File

@@ -33,5 +33,4 @@ jobs:
./.github/workflows/_lint.yml
with:
working-directory: templates
langchain-location: ../libs/langchain
secrets: inherit

3 .gitignore vendored
View File

@@ -167,8 +167,7 @@ docs/node_modules/
docs/.docusaurus/
docs/.cache-loader/
docs/_dist
docs/api_reference/api_reference.rst
docs/api_reference/experimental_api_reference.rst
docs/api_reference/*api_reference.rst
docs/api_reference/_build
docs/api_reference/*/
!docs/api_reference/_static/

View File

@@ -41,7 +41,7 @@ spell_fix:
# LINTING AND FORMATTING
######################
lint:
lint lint_package lint_tests:
poetry run ruff docs templates cookbook
poetry run ruff format docs templates cookbook --diff
poetry run ruff --select I docs templates cookbook

View File

@@ -104,3 +104,7 @@ Please see [here](https://python.langchain.com) for full documentation, which in
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see [here](.github/CONTRIBUTING.md).
## 🌟 Contributors
[![langchain contributors](https://contrib.rocks/image?repo=langchain-ai/langchain&max=2000)](https://github.com/langchain-ai/langchain/graphs/contributors)

View File

@@ -31,7 +31,7 @@
"source": [
"import re\n",
"\n",
"from IPython.display import Image\n",
"from IPython.display import Image, display\n",
"from steamship import Block, Steamship"
]
},
@@ -180,7 +180,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.12"
}
},
"nbformat": 4,

View File

@@ -37,7 +37,8 @@
"source": [
"#!pip install qianfan\n",
"#!pip install bce-python-sdk\n",
"#!pip install elasticsearch == 7.11.0"
"#!pip install elasticsearch == 7.11.0\n",
"#!pip install sentence-transformers"
]
},
{
@@ -54,8 +55,10 @@
"metadata": {},
"outputs": [],
"source": [
"import sentence_transformers\n",
"from baidubce.auth.bce_credentials import BceCredentials\n",
"from baidubce.bce_client_configuration import BceClientConfiguration\n",
"from langchain.chains.retrieval_qa import RetrievalQA\n",
"from langchain.document_loaders.baiducloud_bos_directory import BaiduBOSDirectoryLoader\n",
"from langchain.embeddings.huggingface import HuggingFaceEmbeddings\n",
"from langchain.llms.baidu_qianfan_endpoint import QianfanLLMEndpoint\n",
@@ -161,15 +164,22 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"version": "3.9.17"
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49"
@@ -177,5 +187,5 @@
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -133,7 +133,7 @@
"from tqdm import tqdm\n",
"\n",
"for i in tqdm(range(len(title_embeddings))):\n",
" title = titles[i].replace(\"'\", \"''\")\n",
" title = song_titles[i].replace(\"'\", \"''\")\n",
" embedding = title_embeddings[i]\n",
" sql_command = (\n",
" f'UPDATE \"Track\" SET \"embeddings\" = ARRAY{embedding} WHERE \"Name\" ='\n",
@@ -681,9 +681,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.18"
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -187,7 +187,7 @@
" for key in path:\n",
" try:\n",
" current = current[key]\n",
" except:\n",
" except KeyError:\n",
" return None\n",
" return current\n",
"\n",

View File

@@ -9,13 +9,15 @@ SCRIPT_DIR="$(cd "$(dirname "$0")"; pwd)"
cd "${SCRIPT_DIR}"
mkdir -p ../_dist
rsync -ruv . ../_dist
rsync -ruv --exclude node_modules --exclude api_reference --exclude .venv --exclude .docusaurus . ../_dist
cd ../_dist
poetry run python scripts/model_feat_table.py
poetry run nbdoc_build --srcdir docs --pause 0
cp ../cookbook/README.md src/pages/cookbook.mdx
cp ../.github/CONTRIBUTING.md docs/contributing.md
mkdir -p docs/templates
cp ../templates/docs/INDEX.md docs/templates/index.md
wget https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O docs/langserve.md
poetry run python scripts/generate_api_reference_links.py
yarn install
yarn start
yarn
quarto preview docs

View File

@@ -196,11 +196,13 @@ def _load_package_modules(
return modules_by_namespace
def _construct_doc(pkg: str, members_by_namespace: Dict[str, ModuleMembers]) -> str:
def _construct_doc(
package_namespace: str, members_by_namespace: Dict[str, ModuleMembers]
) -> str:
"""Construct the contents of the reference.rst file for the given package.
Args:
pkg: The package name
package_namespace: The package top level namespace
members_by_namespace: The members of the package, dict organized by top level
module contains a list of classes and functions
inside of the top level namespace.
@@ -210,7 +212,7 @@ def _construct_doc(pkg: str, members_by_namespace: Dict[str, ModuleMembers]) ->
"""
full_doc = f"""\
=======================
``{pkg}`` API Reference
``{package_namespace}`` API Reference
=======================
"""
@@ -222,13 +224,13 @@ def _construct_doc(pkg: str, members_by_namespace: Dict[str, ModuleMembers]) ->
functions = _members["functions"]
if not (classes or functions):
continue
section = f":mod:`{pkg}.{module}`"
section = f":mod:`{package_namespace}.{module}`"
underline = "=" * (len(section) + 1)
full_doc += f"""\
{section}
{underline}
.. automodule:: {pkg}.{module}
.. automodule:: {package_namespace}.{module}
:no-members:
:no-inherited-members:
@@ -238,7 +240,7 @@ def _construct_doc(pkg: str, members_by_namespace: Dict[str, ModuleMembers]) ->
full_doc += f"""\
Classes
--------------
.. currentmodule:: {pkg}
.. currentmodule:: {package_namespace}
.. autosummary::
:toctree: {module}
@@ -270,7 +272,7 @@ Classes
full_doc += f"""\
Functions
--------------
.. currentmodule:: {pkg}
.. currentmodule:: {package_namespace}
.. autosummary::
:toctree: {module}
@@ -282,57 +284,57 @@ Functions
return full_doc
def _document_langchain_experimental() -> None:
"""Document the langchain_experimental package."""
# Generate experimental_api_reference.rst
exp_members = _load_package_modules(EXP_DIR)
exp_doc = ".. _experimental_api_reference:\n\n" + _construct_doc(
"langchain_experimental", exp_members
)
with open(EXP_WRITE_FILE, "w") as f:
f.write(exp_doc)
def _build_rst_file(package_name: str = "langchain") -> None:
"""Create a rst file for building of documentation.
Args:
package_name: Can be either "langchain" or "core" or "experimental".
"""
package_members = _load_package_modules(_package_dir(package_name))
with open(_out_file_path(package_name), "w") as f:
f.write(
_doc_first_line(package_name)
+ _construct_doc(package_namespace[package_name], package_members)
)
def _document_langchain_core() -> None:
"""Document the langchain_core package."""
# Generate core_api_reference.rst
core_members = _load_package_modules(CORE_DIR)
core_doc = ".. _core_api_reference:\n\n" + _construct_doc(
"langchain_core", core_members
)
with open(CORE_WRITE_FILE, "w") as f:
f.write(core_doc)
package_namespace = {
"langchain": "langchain",
"experimental": "langchain_experimental",
"core": "langchain_core",
}
def _document_langchain() -> None:
"""Document the main langchain package."""
# load top level module members
lc_members = _load_package_modules(PKG_DIR)
def _package_dir(package_name: str = "langchain") -> Path:
"""Return the path to the directory containing the documentation."""
return ROOT_DIR / "libs" / package_name / package_namespace[package_name]
# Add additional packages
tools = _load_package_modules(PKG_DIR, "tools")
agents = _load_package_modules(PKG_DIR, "agents")
schema = _load_package_modules(PKG_DIR, "schema")
lc_members.update(
{
"agents.output_parsers": agents["output_parsers"],
"agents.format_scratchpad": agents["format_scratchpad"],
"tools.render": tools["render"],
}
)
def _out_file_path(package_name: str = "langchain") -> Path:
"""Return the path to the file containing the documentation."""
name_prefix = {
"langchain": "",
"experimental": "experimental_",
"core": "core_",
}
return HERE / f"{name_prefix[package_name]}api_reference.rst"
lc_doc = ".. _api_reference:\n\n" + _construct_doc("langchain", lc_members)
with open(WRITE_FILE, "w") as f:
f.write(lc_doc)
def _doc_first_line(package_name: str = "langchain") -> str:
"""Return the path to the file containing the documentation."""
prefix = {
"langchain": "",
"experimental": "experimental",
"core": "core",
}
return f".. {prefix[package_name]}_api_reference:\n\n"
def main() -> None:
"""Generate the reference.rst file for each package."""
_document_langchain()
_document_langchain_experimental()
_document_langchain_core()
"""Generate the api_reference.rst file for each package."""
_build_rst_file(package_name="core")
_build_rst_file(package_name="langchain")
_build_rst_file(package_name="experimental")
if __name__ == "__main__":

File diff suppressed because it is too large

View File

@@ -1,5 +1,5 @@
---
sidebar_position: 2
sidebar_position: 3
---
# Cookbook

View File

@@ -146,7 +146,7 @@
"source": [
"### Branching and Merging\n",
"\n",
"You may want the output of one component to be processed by 2 or more other components. [RunnableMaps](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.base.RunnableMap.html) let you split or fork the chain so multiple components can process the input in parallel. Later, other components can join or merge the results to synthesize a final response. This type of chain creates a computation graph that looks like the following:\n",
"You may want the output of one component to be processed by 2 or more other components. [RunnableParallels](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableParallel.html#langchain_core.runnables.base.RunnableParallel) let you split or fork the chain so multiple components can process the input in parallel. Later, other components can join or merge the results to synthesize a final response. This type of chain creates a computation graph that looks like the following:\n",
"\n",
"```text\n",
" Input\n",

View File

@@ -317,7 +317,7 @@
"source": [
"## Simplifying input\n",
"\n",
"To make invocation even simpler, we can add a `RunnableMap` to take care of creating the prompt input dict for us:"
"To make invocation even simpler, we can add a `RunnableParallel` to take care of creating the prompt input dict for us:"
]
},
{
@@ -327,9 +327,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema.runnable import RunnableMap, RunnablePassthrough\n",
"from langchain.schema.runnable import RunnableParallel, RunnablePassthrough\n",
"\n",
"map_ = RunnableMap(foo=RunnablePassthrough())\n",
"map_ = RunnableParallel(foo=RunnablePassthrough())\n",
"chain = (\n",
" map_\n",
" | prompt\n",

View File

@@ -209,7 +209,10 @@
"id": "637f994a-5134-402a-bcf0-4de3911eaf49",
"metadata": {},
"source": [
":::tip [LangSmith trace](https://smith.langchain.com/public/60909eae-f4f1-43eb-9f96-354f5176f66f/r)\n",
":::tip\n",
"\n",
"[LangSmith trace](https://smith.langchain.com/public/60909eae-f4f1-43eb-9f96-354f5176f66f/r)\n",
"\n",
":::"
]
},
@@ -374,7 +377,10 @@
"id": "5a7e498b-dc68-4267-a35c-90ceffa91c46",
"metadata": {},
"source": [
":::tip [LangSmith trace](https://smith.langchain.com/public/3b27d47f-e4df-4afb-81b1-0f88b80ca97e/r)\n",
":::tip\n",
"\n",
"[LangSmith trace](https://smith.langchain.com/public/3b27d47f-e4df-4afb-81b1-0f88b80ca97e/r)\n",
"\n",
":::"
]
}

View File

@@ -31,7 +31,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 1,
"id": "33be32af",
"metadata": {},
"outputs": [],
@@ -48,7 +48,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 2,
"id": "bfc47ec1",
"metadata": {},
"outputs": [],
@@ -70,7 +70,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"id": "eae31755",
"metadata": {},
"outputs": [],
@@ -85,7 +85,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 4,
"id": "f3040b0c",
"metadata": {},
"outputs": [
@@ -95,7 +95,7 @@
"'Harrison worked at Kensho.'"
]
},
"execution_count": 5,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -106,7 +106,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 5,
"id": "e1d20c7c",
"metadata": {},
"outputs": [],
@@ -134,7 +134,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 6,
"id": "7ee8b2d4",
"metadata": {},
"outputs": [
@@ -144,7 +144,7 @@
"'Harrison ha lavorato a Kensho.'"
]
},
"execution_count": 7,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@@ -165,18 +165,20 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 21,
"id": "3f30c348",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import format_document\n",
"from langchain.schema.runnable import RunnableMap"
"from langchain.schema.messages import get_buffer_string\n",
"from langchain.schema.runnable import RunnableParallel\n",
"from langchain_core.messages import AIMessage, HumanMessage"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 8,
"id": "64ab1dbf",
"metadata": {},
"outputs": [],
@@ -194,7 +196,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 9,
"id": "7d628c97",
"metadata": {},
"outputs": [],
@@ -209,7 +211,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 10,
"id": "f60a5d0f",
"metadata": {},
"outputs": [],
@@ -226,39 +228,14 @@
},
{
"cell_type": "code",
"execution_count": 12,
"id": "7d007db6",
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Tuple\n",
"\n",
"\n",
"def _format_chat_history(chat_history: List[Tuple[str, str]]) -> str:\n",
" # chat history is of format:\n",
" # [\n",
" # (human_message_str, ai_message_str),\n",
" # ...\n",
" # ]\n",
" # see below for an example of how it's invoked\n",
" buffer = \"\"\n",
" for dialogue_turn in chat_history:\n",
" human = \"Human: \" + dialogue_turn[0]\n",
" ai = \"Assistant: \" + dialogue_turn[1]\n",
" buffer += \"\\n\" + \"\\n\".join([human, ai])\n",
" return buffer"
]
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 11,
"id": "5c32cc89",
"metadata": {},
"outputs": [],
"source": [
"_inputs = RunnableMap(\n",
"_inputs = RunnableParallel(\n",
" standalone_question=RunnablePassthrough.assign(\n",
" chat_history=lambda x: _format_chat_history(x[\"chat_history\"])\n",
" chat_history=lambda x: get_buffer_string(x[\"chat_history\"])\n",
" )\n",
" | CONDENSE_QUESTION_PROMPT\n",
" | ChatOpenAI(temperature=0)\n",
@@ -273,17 +250,17 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 12,
"id": "135c8205",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Harrison was employed at Kensho.', additional_kwargs={}, example=False)"
"AIMessage(content='Harrison was employed at Kensho.')"
]
},
"execution_count": 14,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@@ -299,17 +276,17 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 22,
"id": "424e7e7a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Harrison worked at Kensho.', additional_kwargs={}, example=False)"
"AIMessage(content='Harrison worked at Kensho.')"
]
},
"execution_count": 15,
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
@@ -318,7 +295,10 @@
"conversational_qa_chain.invoke(\n",
" {\n",
" \"question\": \"where did he work?\",\n",
" \"chat_history\": [(\"Who wrote this notebook?\", \"Harrison\")],\n",
" \"chat_history\": [\n",
" HumanMessage(content=\"Who wrote this notebook?\"),\n",
" AIMessage(content=\"Harrison\"),\n",
" ],\n",
" }\n",
")"
]
@@ -335,7 +315,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 14,
"id": "e31dd17c",
"metadata": {},
"outputs": [],
@@ -347,7 +327,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 15,
"id": "d4bffe94",
"metadata": {},
"outputs": [],
@@ -359,7 +339,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 16,
"id": "733be985",
"metadata": {},
"outputs": [],
@@ -373,7 +353,7 @@
"standalone_question = {\n",
" \"standalone_question\": {\n",
" \"question\": lambda x: x[\"question\"],\n",
" \"chat_history\": lambda x: _format_chat_history(x[\"chat_history\"]),\n",
" \"chat_history\": lambda x: get_buffer_string(x[\"chat_history\"]),\n",
" }\n",
" | CONDENSE_QUESTION_PROMPT\n",
" | ChatOpenAI(temperature=0)\n",
@@ -400,18 +380,18 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 17,
"id": "806e390c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'answer': AIMessage(content='Harrison was employed at Kensho.', additional_kwargs={}, example=False),\n",
" 'docs': [Document(page_content='harrison worked at kensho', metadata={})]}"
"{'answer': AIMessage(content='Harrison was employed at Kensho.'),\n",
" 'docs': [Document(page_content='harrison worked at kensho')]}"
]
},
"execution_count": 19,
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@@ -424,7 +404,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 18,
"id": "977399fd",
"metadata": {},
"outputs": [],
@@ -437,18 +417,18 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 19,
"id": "f94f7de4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': [HumanMessage(content='where did harrison work?', additional_kwargs={}, example=False),\n",
" AIMessage(content='Harrison was employed at Kensho.', additional_kwargs={}, example=False)]}"
"{'history': [HumanMessage(content='where did harrison work?'),\n",
" AIMessage(content='Harrison was employed at Kensho.')]}"
]
},
"execution_count": 21,
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
@@ -456,6 +436,38 @@
"source": [
"memory.load_memory_variables({})"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "88f2b7cd",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'answer': AIMessage(content='Harrison actually worked at Kensho.'),\n",
" 'docs': [Document(page_content='harrison worked at kensho')]}"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inputs = {\"question\": \"but where did he really work?\"}\n",
"result = final_chain.invoke(inputs)\n",
"result"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "207a2782",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -474,7 +486,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.1"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,493 @@
{
"cells": [
{
"cell_type": "raw",
"id": "366a0e68-fd67-4fe5-a292-5c33733339ea",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0\n",
"title: Get started\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "befa7fd1",
"metadata": {},
"source": [
"LCEL makes it easy to build complex chains from basic components, and supports out of the box functionality such as streaming, parallelism, and logging."
]
},
{
"cell_type": "markdown",
"id": "9a9acd2e",
"metadata": {},
"source": [
"## Basic example: prompt + model + output parser\n",
"\n",
"The most basic and common use case is chaining a prompt template and a model together. To see how this works, let's create a chain that takes a topic and generates a joke:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "466b65b3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Why did the ice cream go to therapy?\\n\\nBecause it had too many toppings and couldn't find its cone-fidence!\""
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"tell me a short joke about {topic}\")\n",
"model = ChatOpenAI()\n",
"output_parser = StrOutputParser()\n",
"\n",
"chain = prompt | model | output_parser\n",
"\n",
"chain.invoke({\"topic\": \"ice cream\"})"
]
},
{
"cell_type": "markdown",
"id": "81c502c5-85ee-4f36-aaf4-d6e350b7792f",
"metadata": {},
"source": [
"Notice this line of this code, where we piece together then different components into a single chain using LCEL:\n",
"\n",
"```\n",
"chain = prompt | model | output_parser\n",
"```\n",
"\n",
"The `|` symbol is similar to a [unix pipe operator](https://en.wikipedia.org/wiki/Pipeline_(Unix)), which chains together the different components feeds the output from one component as input into the next component. \n",
"\n",
"In this chain the user input is passed to the prompt template, then the prompt template output is passed to the model, then the model output is passed to the output parser. Let's take a look at each component individually to really understand what's going on. "
]
},
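To build intuition for what `|` is doing, here is a minimal sketch of how such composition could be modeled in plain Python. This is an illustration only, not LangChain's actual implementation:

```
class SimpleRunnable:
    """A toy runnable: wraps a function and supports `|` composition."""

    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # `self | other` returns a new runnable that feeds the output
        # of `self` into `other`.
        return SimpleRunnable(lambda value: other.invoke(self.invoke(value)))


upper = SimpleRunnable(str.upper)
exclaim = SimpleRunnable(lambda s: s + "!")

chain = upper | exclaim
chain.invoke("ice cream")  # -> 'ICE CREAM!'
```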
{
"cell_type": "markdown",
"id": "aa1b77fa",
"metadata": {},
"source": [
"### 1. Prompt\n",
"\n",
"`prompt` is a `BasePromptTemplate`, which means it takes in a dictionary of template variables and produces a `PromptValue`. A `PromptValue` is a wrapper around a completed prompt that can be passed to either an `LLM` (which takes a string as input) or `ChatModel` (which takes a sequence of messages as input). It can work with either language model type because it defines logic both for producing `BaseMessage`s and for producing a string."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "b8656990",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptValue(messages=[HumanMessage(content='tell me a short joke about ice cream')])"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt_value = prompt.invoke({\"topic\": \"ice cream\"})\n",
"prompt_value"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "e6034488",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='tell me a short joke about ice cream')]"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt_value.to_messages()"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "60565463",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Human: tell me a short joke about ice cream'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt_value.to_string()"
]
},
{
"cell_type": "markdown",
"id": "577f0f76",
"metadata": {},
"source": [
"### 2. Model\n",
"\n",
"The `PromptValue` is then passed to `model`. In this case our `model` is a `ChatModel`, meaning it will output a `BaseMessage`."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "33cf5f72",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Why did the ice cream go to therapy? \\n\\nBecause it had too many toppings and couldn't find its cone-fidence!\")"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"message = model.invoke(prompt_value)\n",
"message"
]
},
{
"cell_type": "markdown",
"id": "327e7db8",
"metadata": {},
"source": [
"If our `model` was an `LLM`, it would output a string."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "8feb05da",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nRobot: Why did the ice cream go to therapy? Because it had a rocky road.'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(model=\"gpt-3.5-turbo-instruct\")\n",
"llm.invoke(prompt_value)"
]
},
{
"cell_type": "markdown",
"id": "91847478",
"metadata": {},
"source": [
"### 3. Output parser\n",
"\n",
"And lastly we pass our `model` output to the `output_parser`, which is a `BaseOutputParser` meaning it takes either a string or a \n",
"`BaseMessage` as input. The `StrOutputParser` specifically simple converts any input into a string."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "533e59a8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Why did the ice cream go to therapy? \\n\\nBecause it had too many toppings and couldn't find its cone-fidence!\""
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"output_parser.invoke(message)"
]
},
{
"cell_type": "markdown",
"id": "9851e842",
"metadata": {},
"source": [
"### 4. Entire Pipeline\n",
"\n",
"To follow the steps along:\n",
"\n",
"1. We pass in user input on the desired topic as `{\"topic\": \"ice cream\"}`\n",
"2. The `prompt` component takes the user input, which is then used to construct a PromptValue after using the `topic` to construct the prompt. \n",
"3. The `model` component takes the generated prompt, and passes into the OpenAI LLM model for evaluation. The generated output from the model is a `ChatMessage` object. \n",
"4. Finally, the `output_parser` component takes in a `ChatMessage`, and transforms this into a Python string, which is returned from the invoke method. \n"
]
},
{
"cell_type": "markdown",
"id": "c4873109",
"metadata": {},
"source": [
"```mermaid\n",
"graph LR\n",
" A(Input: topic=ice cream) --> |Dict| B(PromptTemplate)\n",
" B -->|PromptValue| C(ChatModel) \n",
" C -->|ChatMessage| D(StrOutputParser)\n",
" D --> |String| F(Result)\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "fe63534d",
"metadata": {},
"source": [
":::info\n",
"\n",
"Note that if youre curious about the output of any components, you can always test out a smaller version of the chain such as `prompt` or `prompt | model` to see the intermediate results:\n",
"\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "11089b6f-23f8-474f-97ec-8cae8d0ca6d4",
"metadata": {},
"outputs": [],
"source": [
"input = {\"topic\": \"ice cream\"}\n",
"\n",
"prompt.invoke(input)\n",
"# > ChatPromptValue(messages=[HumanMessage(content='tell me a short joke about ice cream')])\n",
"\n",
"(prompt | model).invoke(input)\n",
"# > AIMessage(content=\"Why did the ice cream go to therapy?\\nBecause it had too many toppings and couldn't cone-trol itself!\")"
]
},
{
"cell_type": "markdown",
"id": "cc7d3b9d-e400-4c9b-9188-f29dac73e6bb",
"metadata": {},
"source": [
"## RAG Search Example\n",
"\n",
"For our next example, we want to run a retrieval-augmented generation chain to add some context when responding to questions. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "662426e8-4316-41dc-8312-9b58edc7e0c9",
"metadata": {},
"outputs": [],
"source": [
"# Requires:\n",
"# pip install langchain docarray\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnableParallel, RunnablePassthrough\n",
"from langchain.vectorstores import DocArrayInMemorySearch\n",
"\n",
"vectorstore = DocArrayInMemorySearch.from_texts(\n",
" [\"harrison worked at kensho\", \"bears like to eat honey\"],\n",
" embedding=OpenAIEmbeddings(),\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"model = ChatOpenAI()\n",
"output_parser = StrOutputParser()\n",
"\n",
"setup_and_retrieval = RunnableParallel(\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
")\n",
"chain = setup_and_retrieval | prompt | model | output_parser\n",
"\n",
"chain.invoke(\"where did harrison work?\")"
]
},
{
"cell_type": "markdown",
"id": "f0999140-6001-423b-970b-adf1dfdb4dec",
"metadata": {},
"source": [
"In this case, the composed chain is: "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5b88e9bb-f04a-4a56-87ec-19a0e6350763",
"metadata": {},
"outputs": [],
"source": [
"chain = setup_and_retrieval | prompt | model | output_parser"
]
},
{
"cell_type": "markdown",
"id": "6e929e15-40a5-4569-8969-384f636cab87",
"metadata": {},
"source": [
"To explain this, we first can see that the prompt template above takes in `context` and `question` as values to be substituted in the prompt. Before building the prompt template, we want to retrieve relevant documents to the search and include them as part of the context. \n",
"\n",
"As a preliminary step, weve setup the retriever using an in memory store, which can retrieve documents based on a query. This is a runnable component as well that can be chained together with other components, but you can also try to run it separately:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7319ef6-613b-4638-ad7d-4a2183702c1d",
"metadata": {},
"outputs": [],
"source": [
"retriever.invoke(\"where did harrison work?\")"
]
},
{
"cell_type": "markdown",
"id": "e6833844-f1c4-444c-a3d2-31b3c6b31d46",
"metadata": {},
"source": [
"We then use the `RunnableParallel` to prepare the expected inputs into the prompt by using the entries for the retrieved documents as well as the original user question, using the retriever for document search, and RunnablePassthrough to pass the users question:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dcbca26b-d6b9-4c24-806c-1ec8fdaab4ed",
"metadata": {},
"outputs": [],
"source": [
"setup_and_retrieval = RunnableParallel(\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
")"
]
},
{
"cell_type": "markdown",
"id": "68c721c1-048b-4a64-9d78-df54fe465992",
"metadata": {},
"source": [
"To review, the complete chain is:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1d5115a7-7b8e-458b-b936-26cc87ee81c4",
"metadata": {},
"outputs": [],
"source": [
"setup_and_retrieval = RunnableParallel(\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
")\n",
"chain = setup_and_retrieval | prompt | model | output_parser"
]
},
{
"cell_type": "markdown",
"id": "5c6f5f74-b387-48a0-bedd-1fae202cd10a",
"metadata": {},
"source": [
"With the flow being:\n",
"\n",
"1. The first steps create a `RunnableParallel` object with two entries. The first entry, `context` will include the document results fetched by the retriever. The second entry, `question` will contain the users original question. To pass on the question, we use `RunnablePassthrough` to copy this entry. \n",
"2. Feed the dictionary from the step above to the `prompt` component. It then takes the user input which is `question` as well as the retrieved document which is `context` to construct a prompt and output a PromptValue. \n",
"3. The `model` component takes the generated prompt, and passes into the OpenAI LLM model for evaluation. The generated output from the model is a `ChatMessage` object. \n",
"4. Finally, the `output_parser` component takes in a `ChatMessage`, and transforms this into a Python string, which is returned from the invoke method.\n",
"\n",
"```mermaid\n",
"graph LR\n",
" A(Question) --> B(RunnableParallel)\n",
" B -->|Question| C(Retriever)\n",
" B -->|Question| D(RunnablePassThrough)\n",
" C -->|context=retrieved docs| E(PromptTemplate)\n",
" D -->|question=Question| E\n",
" E -->|PromptValue| F(ChatModel) \n",
" F -->|ChatMessage| G(StrOutputParser)\n",
" G --> |String| H(Result)\n",
"```\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "8c2438df-164e-4bbe-b5f4-461695e45b0f",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"We recommend reading our [Why use LCEL](/docs/expression_language/why) section next to see a side-by-side comparison of the code needed to produce common functionality with and without LCEL."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -43,6 +43,7 @@
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema.runnable import ConfigurableField\n",
"\n",
"model = ChatOpenAI(temperature=0).configurable_fields(\n",
" temperature=ConfigurableField(\n",
@@ -594,7 +595,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.5"
}
},
"nbformat": 4,

View File

@@ -26,7 +26,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "d3e893bf",
"metadata": {},
"outputs": [],
@@ -44,19 +44,24 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "dfdd8bf5",
"metadata": {},
"outputs": [],
"source": [
"from unittest.mock import patch\n",
"\n",
"from openai.error import RateLimitError"
"import httpx\n",
"from openai import RateLimitError\n",
"\n",
"request = httpx.Request(\"GET\", \"/\")\n",
"response = httpx.Response(200, request=request)\n",
"error = RateLimitError(\"rate limit\", response=response, body=\"\")"
]
},
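The construction of `openai_llm` and the fallback chain `llm` used in the cells below is not shown in this diff; a typical setup, sketched here on the assumption that `ChatOpenAI` falls back to `ChatAnthropic`, looks like:

```
from langchain.chat_models import ChatAnthropic, ChatOpenAI

# max_retries=0 so the mocked RateLimitError surfaces immediately
openai_llm = ChatOpenAI(max_retries=0)
anthropic_llm = ChatAnthropic()

# Try OpenAI first; fall back to Anthropic on error.
llm = openai_llm.with_fallbacks([anthropic_llm])
```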
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 3,
"id": "e6fdffc1",
"metadata": {},
"outputs": [],
@@ -69,7 +74,7 @@
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": 4,
"id": "584461ab",
"metadata": {},
"outputs": [
@@ -83,10 +88,10 @@
],
"source": [
"# Let's use just the OpenAI LLm first, to show that we run into an error\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
" try:\n",
" print(openai_llm.invoke(\"Why did the chicken cross the road?\"))\n",
" except:\n",
" except RateLimitError:\n",
" print(\"Hit error\")"
]
},
@@ -106,10 +111,10 @@
],
"source": [
"# Now let's try with fallbacks to Anthropic\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
" try:\n",
" print(llm.invoke(\"Why did the chicken cross the road?\"))\n",
" except:\n",
" except RateLimitError:\n",
" print(\"Hit error\")"
]
},
@@ -148,10 +153,10 @@
" ]\n",
")\n",
"chain = prompt | llm\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
" try:\n",
" print(chain.invoke({\"animal\": \"kangaroo\"}))\n",
" except:\n",
" except RateLimitError:\n",
" print(\"Hit error\")"
]
},
@@ -185,10 +190,10 @@
")\n",
"\n",
"chain = prompt | llm\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
" try:\n",
" print(chain.invoke({\"animal\": \"kangaroo\"}))\n",
" except:\n",
" except RateLimitError:\n",
" print(\"Hit error\")"
]
},
@@ -286,7 +291,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.12"
}
},
"nbformat": 4,

View File

@@ -1,5 +1,16 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 2\n",
"title: \"RunnableLambda: Run Custom Functions\"\n",
"keywords: [RunnableLambda, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "fbc4bf6e",
@@ -7,14 +18,14 @@
"source": [
"# Run custom functions\n",
"\n",
"You can use arbitrary functions in the pipeline\n",
"You can use arbitrary functions in the pipeline.\n",
"\n",
"Note that all inputs to these functions need to be a SINGLE argument. If you have a function that accepts multiple arguments, you should write a wrapper that accepts a single input and unpacks it into multiple argument."
]
},
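As a concrete illustration of the single-argument rule above, a function of several arguments can be adapted with a small wrapper that takes one dict and unpacks it. The function names here are hypothetical:

```
from langchain.schema.runnable import RunnableLambda


def multiply(a: int, b: int) -> int:
    return a * b


def multiply_wrapper(inputs: dict) -> int:
    # Accept a single dict and unpack it into the real arguments.
    return multiply(inputs["a"], inputs["b"])


runnable = RunnableLambda(multiply_wrapper)
runnable.invoke({"a": 3, "b": 9})  # -> 27
```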
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 1,
"id": "6bb221b3",
"metadata": {},
"outputs": [],
@@ -56,17 +67,17 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 2,
"id": "5488ec85",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='3 + 9 equals 12.', additional_kwargs={}, example=False)"
"AIMessage(content='3 + 9 equals 12.')"
]
},
"execution_count": 5,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
@@ -82,12 +93,12 @@
"source": [
"## Accepting a Runnable Config\n",
"\n",
"Runnable lambdas can optionally accept a [RunnableConfig](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.config.RunnableConfig.html?highlight=runnableconfig#langchain.schema.runnable.config.RunnableConfig), which they can use to pass callbacks, tags, and other configuration information to nested runs."
"Runnable lambdas can optionally accept a [RunnableConfig](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.config.RunnableConfig.html#langchain_core.runnables.config.RunnableConfig), which they can use to pass callbacks, tags, and other configuration information to nested runs."
]
},
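A minimal sketch of a config-accepting function, relying on the documented behavior that `RunnableLambda` passes the invocation config to any wrapped function whose signature declares a `config` parameter:

```
from langchain.schema.runnable import RunnableConfig, RunnableLambda


def tag_report(text: str, config: RunnableConfig) -> str:
    # RunnableConfig behaves like a dict at runtime, so the tags
    # supplied at invocation time can be read here.
    tags = config.get("tags", [])
    return f"{text} (invoked with tags={tags})"


RunnableLambda(tag_report).invoke("hello", {"tags": ["my-tag"]})
# -> "hello (invoked with tags=['my-tag'])"
```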
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 3,
"id": "80b3b5f6-5d58-44b9-807e-cce9a46bf49f",
"metadata": {},
"outputs": [],
@@ -98,7 +109,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 4,
"id": "ff0daf0c-49dd-4d21-9772-e5fa133c5f36",
"metadata": {},
"outputs": [],
@@ -125,7 +136,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 5,
"id": "1a5e709e-9d75-48c7-bb9c-503251990505",
"metadata": {},
"outputs": [
@@ -133,6 +144,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'foo': 'bar'}\n",
"Tokens Used: 65\n",
"\tPrompt Tokens: 56\n",
"\tCompletion Tokens: 9\n",
@@ -145,9 +157,10 @@
"from langchain.callbacks import get_openai_callback\n",
"\n",
"with get_openai_callback() as cb:\n",
" RunnableLambda(parse_or_fix).invoke(\n",
" output = RunnableLambda(parse_or_fix).invoke(\n",
" \"{foo: bar}\", {\"tags\": [\"my-tag\"], \"callbacks\": [cb]}\n",
" )\n",
" print(output)\n",
" print(cb)"
]
},
@@ -176,7 +189,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.5"
}
},
"nbformat": 4,

View File

@@ -17,6 +17,13 @@
"Let's implement a custom output parser for comma-separated lists."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sync version"
]
},
{
"cell_type": "code",
"execution_count": 1,
@@ -57,7 +64,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 3,
"metadata": {},
"outputs": [
{
@@ -66,7 +73,7 @@
"'lion, tiger, wolf, gorilla, panda'"
]
},
"execution_count": 8,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -152,12 +159,81 @@
"list_chain.invoke({\"animal\": \"bear\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Async version"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": []
"source": [
"from typing import AsyncIterator\n",
"\n",
"\n",
"async def asplit_into_list(\n",
" input: AsyncIterator[str]\n",
") -> AsyncIterator[List[str]]: # async def\n",
" buffer = \"\"\n",
" async for (\n",
" chunk\n",
" ) in input: # `input` is a `async_generator` object, so use `async for`\n",
" buffer += chunk\n",
" while \",\" in buffer:\n",
" comma_index = buffer.index(\",\")\n",
" yield [buffer[:comma_index].strip()]\n",
" buffer = buffer[comma_index + 1 :]\n",
" yield [buffer.strip()]\n",
"\n",
"\n",
"list_chain = str_chain | asplit_into_list"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['lion']\n",
"['tiger']\n",
"['wolf']\n",
"['gorilla']\n",
"['panda']\n"
]
}
],
"source": [
"async for chunk in list_chain.astream({\"animal\": \"bear\"}):\n",
" print(chunk, flush=True)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['lion', 'tiger', 'wolf', 'gorilla', 'panda']"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await list_chain.ainvoke({\"animal\": \"bear\"})"
]
}
],
"metadata": {
@@ -176,7 +252,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.5"
}
},
"nbformat": 4,

View File

@@ -1,5 +1,5 @@
---
sidebar_position: 1
sidebar_position: 2
---
# How to

View File

@@ -1,29 +1,192 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e2596041-9b76-4e74-836f-e6235086bbf0",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0\n",
"title: \"RunnableParallel: Manipulating data\"\n",
"keywords: [RunnableParallel, RunnableMap, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "b022ab74-794d-4c54-ad47-ff9549ddb9d2",
"metadata": {},
"source": [
"# Parallelize steps\n",
"# Manipulating inputs & output\n",
"\n",
"RunnableParallel can be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence.\n",
"\n",
"Here the input to prompt is expected to be a map with keys \"context\" and \"question\". The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the \"question\" key.\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "267d1460-53c1-4fdb-b2c3-b6a1eb7fccff",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Harrison worked at Kensho.'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"from langchain.vectorstores import FAISS\n",
"\n",
"vectorstore = FAISS.from_texts(\n",
" [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"model = ChatOpenAI()\n",
"\n",
"retrieval_chain = (\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | model\n",
" | StrOutputParser()\n",
")\n",
"\n",
"retrieval_chain.invoke(\"where did harrison work?\")"
]
},
{
"cell_type": "markdown",
"id": "392cd4c4-e7ed-4ab8-934d-f7a4eca55ee1",
"metadata": {},
"source": [
"::: {.callout-tip}\n",
"Note that when composing a RunnableParallel with another Runnable we don't even need to wrap our dictionary in the RunnableParallel class — the type conversion is handled for us. In the context of a chain, these are equivalent:\n",
":::\n",
"\n",
"```\n",
"{\"context\": retriever, \"question\": RunnablePassthrough()}\n",
"```\n",
"\n",
"```\n",
"RunnableParallel({\"context\": retriever, \"question\": RunnablePassthrough()})\n",
"```\n",
"\n",
"```\n",
"RunnableParallel(context=retriever, question=RunnablePassthrough())\n",
"```\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "7c1b8baa-3a80-44f0-bb79-d22f79815d3d",
"metadata": {},
"source": [
"## Using itemgetter as shorthand\n",
"\n",
"Note that you can use Python's `itemgetter` as shorthand to extract data from the map when combining with `RunnableParallel`. You can find more information about itemgetter in the [Python Documentation](https://docs.python.org/3/library/operator.html#operator.itemgetter). \n",
"\n",
"In the example below, we use itemgetter to extract specific keys from the map:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "84fc49e1-2daf-4700-ae33-a0a6ed47d5f6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Harrison ha lavorato a Kensho.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"from langchain.vectorstores import FAISS\n",
"\n",
"vectorstore = FAISS.from_texts(\n",
" [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\n",
"Answer in the following language: {language}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"chain = (\n",
" {\n",
" \"context\": itemgetter(\"question\") | retriever,\n",
" \"question\": itemgetter(\"question\"),\n",
" \"language\": itemgetter(\"language\"),\n",
" }\n",
" | prompt\n",
" | model\n",
" | StrOutputParser()\n",
")\n",
"\n",
"chain.invoke({\"question\": \"where did harrison work\", \"language\": \"italian\"})"
]
},
{
"cell_type": "markdown",
"id": "bc2f9847-39aa-4fe4-9049-3a8969bc4bce",
"metadata": {},
"source": [
"## Parallelize steps\n",
"\n",
"RunnableParallel (aka. RunnableMap) makes it easy to execute multiple Runnables in parallel, and to return the output of these Runnables as a map."
]
},
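The chains invoked below are constructed earlier in the notebook; a sketch consistent with the output shown, where the exact prompt wording is an assumption:

```
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnableParallel

model = ChatOpenAI()
joke_chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | model
poem_chain = (
    ChatPromptTemplate.from_template("write a 2-line poem about {topic}") | model
)

# Run both chains in parallel and collect their outputs into a map.
map_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)
map_chain.invoke({"topic": "bear"})
```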
{
"cell_type": "code",
"execution_count": 2,
"id": "7e1873d6-d4b6-43ac-96a1-edcf178201e0",
"execution_count": 1,
"id": "31f18442-f837-463f-bef4-8729368f5f8b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'joke': AIMessage(content=\"Why don't bears wear shoes? \\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False),\n",
" 'poem': AIMessage(content=\"In woodland depths, bear prowls with might,\\nSilent strength, nature's sovereign, day and night.\", additional_kwargs={}, example=False)}"
"{'joke': AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\"),\n",
" 'poem': AIMessage(content=\"In the wild's embrace, bear roams free,\\nStrength and grace, a majestic decree.\")}"
]
},
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
@@ -44,69 +207,6 @@
"map_chain.invoke({\"topic\": \"bear\"})"
]
},
{
"cell_type": "markdown",
"id": "df867ae9-1cec-4c9e-9fef-21969b206af5",
"metadata": {},
"source": [
"## Manipulating outputs/inputs\n",
"Maps can be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "267d1460-53c1-4fdb-b2c3-b6a1eb7fccff",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Harrison worked at Kensho.'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"from langchain.vectorstores import FAISS\n",
"\n",
"vectorstore = FAISS.from_texts(\n",
" [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"retrieval_chain = (\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | model\n",
" | StrOutputParser()\n",
")\n",
"\n",
"retrieval_chain.invoke(\"where did harrison work?\")"
]
},
{
"cell_type": "markdown",
"id": "392cd4c4-e7ed-4ab8-934d-f7a4eca55ee1",
"metadata": {},
"source": [
"Here the input to prompt is expected to be a map with keys \"context\" and \"question\". The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the \"question\" key.\n",
"\n",
"Note that when composing a RunnableMap with another Runnable we don't even need to wrap our dictionary in the RunnableMap class — the type conversion is handled for us."
]
},
{
"cell_type": "markdown",
"id": "833da249-c0d4-4e5b-b3f8-cab549f0f7e1",
@@ -114,7 +214,7 @@
"source": [
"## Parallelism\n",
"\n",
"RunnableMaps are also useful for running independent processes in parallel, since each Runnable in the map is executed in parallel. For example, we can see our earlier `joke_chain`, `poem_chain` and `map_chain` all have about the same runtime, even though `map_chain` executes both of the other two."
"RunnableParallel are also useful for running independent processes in parallel, since each Runnable in the map is executed in parallel. For example, we can see our earlier `joke_chain`, `poem_chain` and `map_chain` all have about the same runtime, even though `map_chain` executes both of the other two."
]
},
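One way to check this claim in a notebook is to time each chain in its own cell; a sketch (each `%%timeit` magic must be the first line of a separate cell):

```
%%timeit
# joke_chain alone: roughly the same wall time as map_chain,
# because map_chain runs its members concurrently rather than serially.
joke_chain.invoke({"topic": "bear"})
```

Timing `poem_chain.invoke({"topic": "bear"})` and `map_chain.invoke({"topic": "bear"})` the same way should report comparable numbers.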
{
@@ -194,7 +294,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.6"
}
},
"nbformat": 4,

View File

@@ -251,7 +251,10 @@
"id": "da3d1feb-b4bb-4624-961c-7db2e1180df7",
"metadata": {},
"source": [
":::tip [Langsmith trace](https://smith.langchain.com/public/863a003b-7ca8-4b24-be9e-d63ec13c106e/r)\n",
":::tip\n",
"\n",
"[Langsmith trace](https://smith.langchain.com/public/863a003b-7ca8-4b24-be9e-d63ec13c106e/r)\n",
"\n",
":::"
]
},
@@ -290,9 +293,9 @@
],
"source": [
"from langchain.schema.messages import HumanMessage\n",
"from langchain.schema.runnable import RunnableMap\n",
"from langchain.schema.runnable import RunnableParallel\n",
"\n",
"chain = RunnableMap({\"output_message\": ChatAnthropic(model=\"claude-2\")})\n",
"chain = RunnableParallel({\"output_message\": ChatAnthropic(model=\"claude-2\")})\n",
"chain_with_history = RunnableWithMessageHistory(\n",
" chain,\n",
" lambda session_id: RedisChatMessageHistory(session_id, url=REDIS_URL),\n",
@@ -334,7 +337,10 @@
"id": "b898d1b1-11e6-4d30-a8dd-cc5e45533611",
"metadata": {},
"source": [
":::tip [LangSmith trace](https://smith.langchain.com/public/f6c3e1d1-a49d-4955-a9fa-c6519df74fa7/r)\n",
":::tip\n",
"\n",
"[LangSmith trace](https://smith.langchain.com/public/f6c3e1d1-a49d-4955-a9fa-c6519df74fa7/r)\n",
"\n",
":::"
]
},

View File

@@ -0,0 +1,159 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d35de667-0352-4bfb-a890-cebe7f676fe7",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 1\n",
"title: \"RunnablePassthrough: Passing data through\"\n",
"keywords: [RunnablePassthrough, RunnableParallel, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "b022ab74-794d-4c54-ad47-ff9549ddb9d2",
"metadata": {},
"source": [
"# Passing data through\n",
"\n",
"RunnablePassthrough allows to pass inputs unchanged or with the addition of extra keys. This typically is used in conjuction with RunnableParallel to assign data to a new key in the map. \n",
"\n",
"RunnablePassthrough() called on it's own, will simply take the input and pass it through. \n",
"\n",
"RunnablePassthrough called with assign (`RunnablePassthrough.assign(...)`) will take the input, and will add the extra arguments passed to the assign function. \n",
"\n",
"See the example below:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "03988b8d-d54c-4492-8707-1594372cf093",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'passed': {'num': 1}, 'extra': {'num': 1, 'mult': 3}, 'modified': 2}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.schema.runnable import RunnableParallel, RunnablePassthrough\n",
"\n",
"runnable = RunnableParallel(\n",
" passed=RunnablePassthrough(),\n",
" extra=RunnablePassthrough.assign(mult=lambda x: x[\"num\"] * 3),\n",
" modified=lambda x: x[\"num\"] + 1,\n",
")\n",
"\n",
"runnable.invoke({\"num\": 1})"
]
},
{
"cell_type": "markdown",
"id": "702c7acc-cd31-4037-9489-647df192fd7c",
"metadata": {},
"source": [
"As seen above, `passed` key was called with `RunnablePassthrough()` and so it simply passed on `{'num': 1}`. \n",
"\n",
"In the second line, we used `RunnablePastshrough.assign` with a lambda that multiplies the numerical value by 3. In this cased, `extra` was set with `{'num': 1, 'mult': 3}` which is the original value with the `mult` key added. \n",
"\n",
"Finally, we also set a third key in the map with `modified` which uses a labmda to set a single value adding 1 to the num, which resulted in `modified` key with the value of `2`."
]
},
{
"cell_type": "markdown",
"id": "15187a3b-d666-4b9b-a258-672fc51fe0e2",
"metadata": {},
"source": [
"## Retrieval Example\n",
"\n",
"In the example below, we see a use case where we use RunnablePassthrough along with RunnableMap. "
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "267d1460-53c1-4fdb-b2c3-b6a1eb7fccff",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Harrison worked at Kensho.'"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"from langchain.vectorstores import FAISS\n",
"\n",
"vectorstore = FAISS.from_texts(\n",
" [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"model = ChatOpenAI()\n",
"\n",
"retrieval_chain = (\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | model\n",
" | StrOutputParser()\n",
")\n",
"\n",
"retrieval_chain.invoke(\"where did harrison work?\")"
]
},
{
"cell_type": "markdown",
"id": "392cd4c4-e7ed-4ab8-934d-f7a4eca55ee1",
"metadata": {},
"source": [
"Here the input to prompt is expected to be a map with keys \"context\" and \"question\". The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the \"question\" key. In this case, the RunnablePassthrough allows us to pass on the user's question to the prompt and model. \n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,5 +1,16 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 3\n",
"title: \"RunnableBranch: Dynamically route logic based on input\"\n",
"keywords: [RunnableBranch, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "4b47436a",
@@ -63,7 +74,7 @@
"chain = (\n",
" PromptTemplate.from_template(\n",
" \"\"\"Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.\n",
" \n",
"\n",
"Do not respond with more than one word.\n",
"\n",
"<question>\n",
@@ -293,7 +304,7 @@
}
],
"source": [
"full_chain.invoke({\"question\": \"how do I use Anthroipc?\"})"
"full_chain.invoke({\"question\": \"how do I use Anthropic?\"})"
]
},
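For reference, the routing itself is done with a `RunnableBranch` of (condition, runnable) pairs plus a default. A sketch, where `anthropic_chain`, `langchain_chain`, and `general_chain` stand in for destination chains defined elsewhere in the notebook:

```
from langchain.schema.runnable import RunnableBranch

branch = RunnableBranch(
    # Each pair is (condition, runnable); the first matching condition wins.
    (lambda x: "anthropic" in x["topic"].lower(), anthropic_chain),
    (lambda x: "langchain" in x["topic"].lower(), langchain_chain),
    general_chain,  # default when no condition matches
)

# `chain` is the classifier built from the prompt above; its one-word
# label becomes the "topic" key that the branch conditions inspect.
full_chain = {"topic": chain, "question": lambda x: x["question"]} | branch
full_chain.invoke({"question": "how do I use Anthropic?"})
```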
{

View File

@@ -6,7 +6,7 @@
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0\n",
"sidebar_position: 1\n",
"title: Interface\n",
"---"
]
@@ -16,7 +16,7 @@
"id": "9a9acd2e",
"metadata": {},
"source": [
"To make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.base.Runnable.html#langchain.schema.runnable.base.Runnable) protocol. The `Runnable` protocol is implemented for most components. \n",
"To make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol. The `Runnable` protocol is implemented for most components. \n",
"This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way. \n",
"The standard interface includes:\n",
"\n",

File diff suppressed because it is too large

View File

@@ -344,7 +344,7 @@ category_chain = chat_prompt | ChatOpenAI() | CommaSeparatedListOutputParser()
app = FastAPI(
title="LangChain Server",
version="1.0",
description="A simple api server using Langchain's Runnable interfaces",
description="A simple API server using LangChain's Runnable interfaces",
)
# 3. Adding chain route

View File

@@ -12,7 +12,7 @@ Platforms with tracing capabilities like [LangSmith](/docs/langsmith/) and [Wand
For anyone building production-grade LLM applications, we highly recommend using a platform like this.
![LangSmith run](/img/run_details.png)
![LangSmith run](../../static/img/run_details.png)
## `set_debug` and `set_verbose`

View File

@@ -5,13 +5,13 @@
"id": "465cfbef-5bba-4b3b-b02d-fe2eba39db17",
"metadata": {},
"source": [
"# Evaluating Structured Output: JSON Evaluators\n",
"# JSON Evaluators\n",
"\n",
"Evaluating [extraction](https://python.langchain.com/docs/use_cases/extraction) and function calling applications often comes down to validation that the LLM's string output can be parsed correctly and how it compares to a reference object. The following JSON validators provide provide functionality to check your model's output in a consistent way.\n",
"Evaluating [extraction](https://python.langchain.com/docs/use_cases/extraction) and function calling applications often comes down to validation that the LLM's string output can be parsed correctly and how it compares to a reference object. The following `JSON` validators provide functionality to check your model's output consistently.\n",
"\n",
"## JsonValidityEvaluator\n",
"\n",
"The `JsonValidityEvaluator` is designed to check the validity of a JSON string prediction.\n",
"The `JsonValidityEvaluator` is designed to check the validity of a `JSON` string prediction.\n",
"\n",
"### Overview:\n",
"- **Requires Input?**: No\n",
@@ -377,7 +377,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
"version": "3.10.12"
}
},
"nbformat": 4,

View File

@@ -8,9 +8,12 @@
"# String Distance\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/string_distance.ipynb)\n",
"\n",
"One of the simplest ways to compare an LLM or chain's string output against a reference label is by using string distance measurements such as Levenshtein or postfix distance. This can be used alongside approximate/fuzzy matching criteria for very basic unit testing.\n",
">In information theory, linguistics, and computer science, the [Levenshtein distance (Wikipedia)](https://en.wikipedia.org/wiki/Levenshtein_distance) is a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other. It is named after the Soviet mathematician Vladimir Levenshtein, who considered this distance in 1965.\n",
"\n",
"This can be accessed using the `string_distance` evaluator, which uses distance metric's from the [rapidfuzz](https://github.com/maxbachmann/RapidFuzz) library.\n",
"\n",
"One of the simplest ways to compare an LLM or chain's string output against a reference label is by using string distance measurements such as `Levenshtein` or `postfix` distance. This can be used alongside approximate/fuzzy matching criteria for very basic unit testing.\n",
"\n",
"This can be accessed using the `string_distance` evaluator, which uses distance metrics from the [rapidfuzz](https://github.com/maxbachmann/RapidFuzz) library.\n",
"\n",
"**Note:** The returned scores are _distances_, meaning lower is typically \"better\".\n",
"\n",
@@ -213,9 +216,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
}
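A brief usage sketch of the `string_distance` evaluator described above; remember the score is a distance, so lower means a closer match:

```
from langchain.evaluation import load_evaluator

evaluator = load_evaluator("string_distance")

evaluator.evaluate_strings(
    prediction="The job is completely done.",
    reference="The job is done",
)
# -> {'score': <normalized distance>}  # lower is better
```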

View File

@@ -28,7 +28,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 1,
"id": "d3e893bf",
"metadata": {},
"outputs": [],
@@ -46,19 +46,24 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 2,
"id": "dfdd8bf5",
"metadata": {},
"outputs": [],
"source": [
"from unittest.mock import patch\n",
"\n",
"from openai.error import RateLimitError"
"import httpx\n",
"from openai import RateLimitError\n",
"\n",
"request = httpx.Request(\"GET\", \"/\")\n",
"response = httpx.Response(200, request=request)\n",
"error = RateLimitError(\"rate limit\", response=response, body=\"\")"
]
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 3,
"id": "e6fdffc1",
"metadata": {},
"outputs": [],
@@ -71,7 +76,7 @@
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": 4,
"id": "584461ab",
"metadata": {},
"outputs": [
@@ -85,10 +90,10 @@
],
"source": [
"# Let's use just the OpenAI LLm first, to show that we run into an error\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
" try:\n",
" print(openai_llm.invoke(\"Why did the chicken cross the road?\"))\n",
" except:\n",
" except RateLimitError:\n",
" print(\"Hit error\")"
]
},
@@ -108,10 +113,10 @@
],
"source": [
"# Now let's try with fallbacks to Anthropic\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
" try:\n",
" print(llm.invoke(\"Why did the chicken cross the road?\"))\n",
" except:\n",
" except RateLimitError:\n",
" print(\"Hit error\")"
]
},
@@ -150,10 +155,10 @@
" ]\n",
")\n",
"chain = prompt | llm\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
" try:\n",
" print(chain.invoke({\"animal\": \"kangaroo\"}))\n",
" except:\n",
" except RateLimitError:\n",
" print(\"Hit error\")"
]
},
@@ -431,7 +436,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.11.5"
}
},
"nbformat": 4,

View File

@@ -32,7 +32,7 @@
"1. `Base model`: What is the base-model and how was it trained?\n",
"2. `Fine-tuning approach`: Was the base-model fine-tuned and, if so, what [set of instructions](https://cameronrwolfe.substack.com/p/beyond-llama-the-power-of-open-llms#%C2%A7alpaca-an-instruction-following-llama-model) was used?\n",
"\n",
"![Image description](/img/OSS_LLM_overview.png)\n",
"![Image description](../../static/img/OSS_LLM_overview.png)\n",
"\n",
"The relative performance of these models can be assessed using several leaderboards, including:\n",
"\n",
@@ -55,7 +55,7 @@
"\n",
"In particular, see [this excellent post](https://finbarr.ca/how-is-llama-cpp-possible/) on the importance of quantization.\n",
"\n",
"![Image description](/img/llama-memory-weights.png)\n",
"![Image description](../../static/img/llama-memory-weights.png)\n",
"\n",
"With less precision, we radically decrease the memory needed to store the LLM in memory.\n",
"\n",
@@ -63,7 +63,7 @@
"\n",
"A Mac M2 Max is 5-6x faster than a M1 for inference due to the larger GPU memory bandwidth.\n",
"\n",
"![Image description](/img/llama_t_put.png)\n",
"![Image description](../../static/img/llama_t_put.png)\n",
"\n",
"## Quickstart\n",
"\n",
@@ -284,6 +284,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.callbacks.manager import CallbackManager\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain.llms import LlamaCpp\n",
"\n",
"llm = LlamaCpp(\n",

View File

@@ -8,6 +8,8 @@
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/privacy/presidio_data_anonymization/index.ipynb)\n",
"\n",
">[Presidio](https://microsoft.github.io/presidio/) (from the Latin *praesidium*, 'protection, garrison') helps to ensure sensitive data is properly managed and governed. It provides fast identification and anonymization modules for private entities in text and images such as credit card numbers, names, locations, social security numbers, bitcoin wallets, US phone numbers, financial data and more.\n",
"\n",
"## Use case\n",
"\n",
"Data anonymization is crucial before passing information to a language model like GPT-4 because it helps protect privacy and maintain confidentiality. If data is not anonymized, sensitive information such as names, addresses, contact numbers, or other identifiers linked to specific individuals could potentially be learned and misused. Hence, by obscuring or removing this personally identifiable information (PII), data can be used freely without compromising individuals' privacy rights or breaching data protection laws and regulations.\n",
@@ -530,7 +532,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.12"
}
},
"nbformat": 4,

View File

@@ -60,7 +60,7 @@
"\n",
" Firstly, the wallet contains my credit card with number 4111 1111 1111 1111, which is registered under my name and linked to my bank account, PL61109010140000071219812874.\n",
"\n",
" Additionally, the wallet had a driver's license - DL No: 999000680 issued to my name. It also houses my Social Security Number, 602-76-4532. \n",
" Additionally, the wallet had a driver's license - DL No: 999000680 issued to my name. It also houses my Social Security Number, 602-76-4532.\n",
"\n",
" What's more, I had my polish identity card there, with the number ABC123456.\n",
"\n",
@@ -68,7 +68,7 @@
"\n",
" In case any information arises regarding my wallet, please reach out to me on my phone number, 999-888-7777, or through my personal email, johndoe@example.com.\n",
"\n",
" Please consider this information to be highly confidential and respect my privacy. \n",
" Please consider this information to be highly confidential and respect my privacy.\n",
"\n",
" The bank has been informed about the stolen credit card and necessary actions have been taken from their end. They will be reachable at their official email, support@bankname.com.\n",
" My representative there is Victoria Cherry (her business phone: 987-654-3210).\n",
@@ -667,7 +667,11 @@
"from langchain.chat_models.openai import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnableLambda, RunnableMap, RunnablePassthrough\n",
"from langchain.schema.runnable import (\n",
" RunnableLambda,\n",
" RunnableParallel,\n",
" RunnablePassthrough,\n",
")\n",
"\n",
"# 6. Create anonymizer chain\n",
"template = \"\"\"Answer the question based only on the following context:\n",
@@ -680,7 +684,7 @@
"model = ChatOpenAI(temperature=0.3)\n",
"\n",
"\n",
"_inputs = RunnableMap(\n",
"_inputs = RunnableParallel(\n",
" question=RunnablePassthrough(),\n",
" # It is important to remember about question anonymization\n",
" anonymized_question=RunnableLambda(anonymizer.anonymize),\n",
@@ -882,7 +886,7 @@
"\n",
"\n",
"chain_with_deanonymization = (\n",
" RunnableMap({\"question\": RunnablePassthrough()})\n",
" RunnableParallel({\"question\": RunnablePassthrough()})\n",
" | {\n",
" \"context\": itemgetter(\"question\")\n",
" | retriever\n",

View File

@@ -7,7 +7,9 @@
"source": [
"# Amazon Comprehend Moderation Chain\n",
"\n",
"This notebook shows how to use [Amazon Comprehend](https://aws.amazon.com/comprehend/) to detect and handle `Personally Identifiable Information` (`PII`) and toxicity.\n",
">[Amazon Comprehend](https://aws.amazon.com/comprehend/) is a natural-language processing (NLP) service that uses machine learning to uncover valuable insights and connections in text.\n",
"\n",
"This notebook shows how to use `Amazon Comprehend` to detect and handle `Personally Identifiable Information` (`PII`) and toxicity.\n",
"\n",
"## Setting up"
]
@@ -1417,7 +1419,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,285 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "700a516b",
"metadata": {},
"source": [
"# OpenAI Adapter(Old)\n",
"\n",
"**Please ensure OpenAI library is less than 1.0.0; otherwise, refer to the newer doc [OpenAI Adapter](./openai).**\n",
"\n",
"A lot of people get started with OpenAI but want to explore other models. LangChain's integrations with many model providers make this easy to do so. While LangChain has it's own message and model APIs, we've also made it as easy as possible to explore other models by exposing an adapter to adapt LangChain models to the OpenAI api.\n",
"\n",
"At the moment this only deals with output and does not return other information (token counts, stop reasons, etc)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "6017f26a",
"metadata": {},
"outputs": [],
"source": [
"import openai\n",
"from langchain.adapters import openai as lc_openai"
]
},
{
"cell_type": "markdown",
"id": "b522ceda",
"metadata": {},
"source": [
"## ChatCompletion.create"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "1d22eb61",
"metadata": {},
"outputs": [],
"source": [
"messages = [{\"role\": \"user\", \"content\": \"hi\"}]"
]
},
{
"cell_type": "markdown",
"id": "d550d3ad",
"metadata": {},
"source": [
"Original OpenAI call"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "012d81ae",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'role': 'assistant', 'content': 'Hello! How can I assist you today?'}"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result = openai.ChatCompletion.create(\n",
" messages=messages, model=\"gpt-3.5-turbo\", temperature=0\n",
")\n",
"result[\"choices\"][0][\"message\"].to_dict_recursive()"
]
},
{
"cell_type": "markdown",
"id": "db5b5500",
"metadata": {},
"source": [
"LangChain OpenAI wrapper call"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "c67a5ac8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'role': 'assistant', 'content': 'Hello! How can I assist you today?'}"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"lc_result = lc_openai.ChatCompletion.create(\n",
" messages=messages, model=\"gpt-3.5-turbo\", temperature=0\n",
")\n",
"lc_result[\"choices\"][0][\"message\"]"
]
},
{
"cell_type": "markdown",
"id": "034ba845",
"metadata": {},
"source": [
"Swapping out model providers"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "f7c94827",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'role': 'assistant', 'content': ' Hello!'}"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"lc_result = lc_openai.ChatCompletion.create(\n",
" messages=messages, model=\"claude-2\", temperature=0, provider=\"ChatAnthropic\"\n",
")\n",
"lc_result[\"choices\"][0][\"message\"]"
]
},
{
"cell_type": "markdown",
"id": "cb3f181d",
"metadata": {},
"source": [
"## ChatCompletion.stream"
]
},
{
"cell_type": "markdown",
"id": "f7b8cd18",
"metadata": {},
"source": [
"Original OpenAI call"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "fd8cb1ea",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'role': 'assistant', 'content': ''}\n",
"{'content': 'Hello'}\n",
"{'content': '!'}\n",
"{'content': ' How'}\n",
"{'content': ' can'}\n",
"{'content': ' I'}\n",
"{'content': ' assist'}\n",
"{'content': ' you'}\n",
"{'content': ' today'}\n",
"{'content': '?'}\n",
"{}\n"
]
}
],
"source": [
"for c in openai.ChatCompletion.create(\n",
" messages=messages, model=\"gpt-3.5-turbo\", temperature=0, stream=True\n",
"):\n",
" print(c[\"choices\"][0][\"delta\"].to_dict_recursive())"
]
},
{
"cell_type": "markdown",
"id": "0b2a076b",
"metadata": {},
"source": [
"LangChain OpenAI wrapper call"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "9521218c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'role': 'assistant', 'content': ''}\n",
"{'content': 'Hello'}\n",
"{'content': '!'}\n",
"{'content': ' How'}\n",
"{'content': ' can'}\n",
"{'content': ' I'}\n",
"{'content': ' assist'}\n",
"{'content': ' you'}\n",
"{'content': ' today'}\n",
"{'content': '?'}\n",
"{}\n"
]
}
],
"source": [
"for c in lc_openai.ChatCompletion.create(\n",
" messages=messages, model=\"gpt-3.5-turbo\", temperature=0, stream=True\n",
"):\n",
" print(c[\"choices\"][0][\"delta\"])"
]
},
{
"cell_type": "markdown",
"id": "0fc39750",
"metadata": {},
"source": [
"Swapping out model providers"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "68f0214e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'role': 'assistant', 'content': ' Hello'}\n",
"{'content': '!'}\n",
"{}\n"
]
}
],
"source": [
"for c in lc_openai.ChatCompletion.create(\n",
" messages=messages,\n",
" model=\"claude-2\",\n",
" temperature=0,\n",
" stream=True,\n",
" provider=\"ChatAnthropic\",\n",
"):\n",
" print(c[\"choices\"][0][\"delta\"])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -7,6 +7,8 @@
"source": [
"# OpenAI Adapter\n",
"\n",
"**Please ensure OpenAI library is version 1.0.0 or higher; otherwise, refer to the older doc [OpenAI Adapter(Old)](./openai-old).**\n",
"\n",
"A lot of people get started with OpenAI but want to explore other models. LangChain's integrations with many model providers make this easy to do so. While LangChain has it's own message and model APIs, we've also made it as easy as possible to explore other models by exposing an adapter to adapt LangChain models to the OpenAI api.\n",
"\n",
"At the moment this only deals with output and does not return other information (token counts, stop reasons, etc)."
@@ -28,12 +30,12 @@
"id": "b522ceda",
"metadata": {},
"source": [
"## ChatCompletion.create"
"## chat.completions.create"
]
},
{
"cell_type": "code",
"execution_count": 29,
"execution_count": 2,
"id": "1d22eb61",
"metadata": {},
"outputs": [],
@@ -51,26 +53,29 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 3,
"id": "012d81ae",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'role': 'assistant', 'content': 'Hello! How can I assist you today?'}"
"{'content': 'Hello! How can I assist you today?',\n",
" 'role': 'assistant',\n",
" 'function_call': None,\n",
" 'tool_calls': None}"
]
},
"execution_count": 15,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result = openai.ChatCompletion.create(\n",
"result = openai.chat.completions.create(\n",
" messages=messages, model=\"gpt-3.5-turbo\", temperature=0\n",
")\n",
"result[\"choices\"][0][\"message\"].to_dict_recursive()"
"result.choices[0].message.model_dump()"
]
},
{
@@ -83,26 +88,48 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 4,
"id": "c67a5ac8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'role': 'assistant', 'content': 'Hello! How can I assist you today?'}"
"{'role': 'assistant', 'content': 'Hello! How can I help you today?'}"
]
},
"execution_count": 17,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"lc_result = lc_openai.ChatCompletion.create(\n",
"lc_result = lc_openai.chat.completions.create(\n",
" messages=messages, model=\"gpt-3.5-turbo\", temperature=0\n",
")\n",
"lc_result[\"choices\"][0][\"message\"]"
"\n",
"lc_result.choices[0].message # Attribute access"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "37a6e461-8608-47f6-ac45-12ad753c062a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'role': 'assistant', 'content': 'Hello! How can I help you today?'}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"lc_result[\"choices\"][0][\"message\"] # Also compatible with index access"
]
},
{
@@ -115,26 +142,26 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 6,
"id": "f7c94827",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'role': 'assistant', 'content': ' Hello!'}"
"{'role': 'assistant', 'content': 'Hello! How can I assist you today?'}"
]
},
"execution_count": 19,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"lc_result = lc_openai.ChatCompletion.create(\n",
"lc_result = lc_openai.chat.completions.create(\n",
" messages=messages, model=\"claude-2\", temperature=0, provider=\"ChatAnthropic\"\n",
")\n",
"lc_result[\"choices\"][0][\"message\"]"
"lc_result.choices[0].message"
]
},
{
@@ -142,7 +169,7 @@
"id": "cb3f181d",
"metadata": {},
"source": [
"## ChatCompletion.stream"
"## chat.completions.stream"
]
},
{
@@ -155,7 +182,7 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 7,
"id": "fd8cb1ea",
"metadata": {},
"outputs": [
@@ -163,25 +190,25 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'role': 'assistant', 'content': ''}\n",
"{'content': 'Hello'}\n",
"{'content': '!'}\n",
"{'content': ' How'}\n",
"{'content': ' can'}\n",
"{'content': ' I'}\n",
"{'content': ' assist'}\n",
"{'content': ' you'}\n",
"{'content': ' today'}\n",
"{'content': '?'}\n",
"{}\n"
"{'content': '', 'function_call': None, 'role': 'assistant', 'tool_calls': None}\n",
"{'content': 'Hello', 'function_call': None, 'role': None, 'tool_calls': None}\n",
"{'content': '!', 'function_call': None, 'role': None, 'tool_calls': None}\n",
"{'content': ' How', 'function_call': None, 'role': None, 'tool_calls': None}\n",
"{'content': ' can', 'function_call': None, 'role': None, 'tool_calls': None}\n",
"{'content': ' I', 'function_call': None, 'role': None, 'tool_calls': None}\n",
"{'content': ' assist', 'function_call': None, 'role': None, 'tool_calls': None}\n",
"{'content': ' you', 'function_call': None, 'role': None, 'tool_calls': None}\n",
"{'content': ' today', 'function_call': None, 'role': None, 'tool_calls': None}\n",
"{'content': '?', 'function_call': None, 'role': None, 'tool_calls': None}\n",
"{'content': None, 'function_call': None, 'role': None, 'tool_calls': None}\n"
]
}
],
"source": [
"for c in openai.ChatCompletion.create(\n",
"for c in openai.chat.completions.create(\n",
" messages=messages, model=\"gpt-3.5-turbo\", temperature=0, stream=True\n",
"):\n",
" print(c[\"choices\"][0][\"delta\"].to_dict_recursive())"
" print(c.choices[0].delta.model_dump())"
]
},
{
@@ -194,7 +221,7 @@
},
{
"cell_type": "code",
"execution_count": 30,
"execution_count": 8,
"id": "9521218c",
"metadata": {},
"outputs": [
@@ -217,10 +244,10 @@
}
],
"source": [
"for c in lc_openai.ChatCompletion.create(\n",
"for c in lc_openai.chat.completions.create(\n",
" messages=messages, model=\"gpt-3.5-turbo\", temperature=0, stream=True\n",
"):\n",
" print(c[\"choices\"][0][\"delta\"])"
" print(c.choices[0].delta)"
]
},
{
@@ -233,7 +260,7 @@
},
{
"cell_type": "code",
"execution_count": 31,
"execution_count": 9,
"id": "68f0214e",
"metadata": {},
"outputs": [
@@ -241,14 +268,22 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'role': 'assistant', 'content': ' Hello'}\n",
"{'role': 'assistant', 'content': ''}\n",
"{'content': 'Hello'}\n",
"{'content': '!'}\n",
"{'content': ' How'}\n",
"{'content': ' can'}\n",
"{'content': ' I'}\n",
"{'content': ' assist'}\n",
"{'content': ' you'}\n",
"{'content': ' today'}\n",
"{'content': '?'}\n",
"{}\n"
]
}
],
"source": [
"for c in lc_openai.ChatCompletion.create(\n",
"for c in lc_openai.chat.completions.create(\n",
" messages=messages,\n",
" model=\"claude-2\",\n",
" temperature=0,\n",
@@ -275,7 +310,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.11.5"
}
},
"nbformat": 4,

View File

@@ -7,8 +7,6 @@
"source": [
"# Argilla\n",
"\n",
"![Argilla - Open-source data platform for LLMs](https://argilla.io/og.png)\n",
"\n",
">[Argilla](https://argilla.io/) is an open-source data curation platform for LLMs.\n",
"> Using Argilla, everyone can build robust language models through faster data curation \n",
"> using both human and machine feedback. We provide support for each step in the MLOps cycle, \n",
@@ -410,7 +408,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.12"
},
"vscode": {
"interpreter": {

View File

@@ -7,12 +7,9 @@
"source": [
"# Context\n",
"\n",
"![Context - User Analytics for LLM Powered Products](https://with.context.ai/langchain.png)\n",
">[Context](https://context.ai/) provides user analytics for LLM-powered products and features.\n",
"\n",
"[Context](https://context.ai/) provides user analytics for LLM powered products and features.\n",
"\n",
"With Context, you can start understanding your users and improving their experiences in less than 30 minutes.\n",
"\n"
"With `Context`, you can start understanding your users and improving their experiences in less than 30 minutes.\n"
]
},
{
@@ -89,11 +86,9 @@
"metadata": {},
"source": [
"## Usage\n",
"### Using the Context callback within a chat model\n",
"### Context callback within a chat model\n",
"\n",
"The Context callback handler can be used to directly record transcripts between users and AI assistants.\n",
"\n",
"#### Example"
"The Context callback handler can be used to directly record transcripts between users and AI assistants."
]
},
{
@@ -132,7 +127,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using the Context callback within Chains\n",
"### Context callback within Chains\n",
"\n",
"The Context callback handler can also be used to record the inputs and outputs of chains. Note that intermediate steps of the chain are not recorded - only the starting inputs and final outputs.\n",
"\n",
@@ -149,9 +144,7 @@
">handler = ContextCallbackHandler(token)\n",
">chat = ChatOpenAI(temperature=0.9, callbacks=[callback])\n",
">chain = LLMChain(llm=chat, prompt=chat_prompt_template, callbacks=[callback])\n",
">```\n",
"\n",
"#### Example"
">```\n"
]
},
{
@@ -203,7 +196,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
},
"vscode": {
"interpreter": {

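For completeness, here is a minimal, self-contained sketch of the chat-model recording described above, assuming a `CONTEXT_API_TOKEN` environment variable is set (the message contents are illustrative):

```python
import os

from langchain.callbacks import ContextCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

# Assumes CONTEXT_API_TOKEN is set; get a token from the Context settings page.
token = os.environ["CONTEXT_API_TOKEN"]

chat = ChatOpenAI(temperature=0, callbacks=[ContextCallbackHandler(token)])

# The full transcript below is recorded to Context for analysis.
chat(
    [
        SystemMessage(content="You are a helpful assistant."),
        HumanMessage(content="I love programming."),
    ]
)
```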
View File

@@ -7,12 +7,14 @@
"source": [
"# Infino\n",
"\n",
">[Infino](https://github.com/infinohq/infino) is a scalable telemetry store designed for logs, metrics, and traces. Infino can function as a standalone observability solution or as the storage layer in your observability stack.\n",
"\n",
"This example shows how one can track the following while calling OpenAI and ChatOpenAI models via `LangChain` and [Infino](https://github.com/infinohq/infino):\n",
"\n",
"* prompt input,\n",
"* response from `ChatGPT` or any other `LangChain` model,\n",
"* latency,\n",
"* errors,\n",
"* prompt input\n",
"* response from `ChatGPT` or any other `LangChain` model\n",
"* latency\n",
"* errors\n",
"* number of tokens consumed"
]
},
@@ -454,7 +456,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
}
},
"nbformat": 4,

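In outline, the tracking described above boils down to attaching an `InfinoCallbackHandler` to the model. A minimal sketch, assuming an Infino server is running locally; the labels are illustrative:

```python
from langchain.callbacks import InfinoCallbackHandler
from langchain.llms import OpenAI

# model_id and model_version are free-form labels used to group tracked calls.
handler = InfinoCallbackHandler(model_id="test_openai", model_version="0.1")

# Prompt, response, latency, errors and token counts are reported to Infino.
llm = OpenAI(temperature=0.1, callbacks=[handler])
llm("Tell me a joke")
```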
View File

@@ -4,6 +4,9 @@
"cell_type": "markdown",
"metadata": {
"collapsed": true,
"jupyter": {
"outputs_hidden": true
},
"pycharm": {
"name": "#%% md\n"
}
@@ -11,17 +14,14 @@
"source": [
"# Label Studio\n",
"\n",
"<div>\n",
"<img src=\"https://labelstudio-pub.s3.amazonaws.com/lc/open-source-data-labeling-platform.png\" width=\"400\"/>\n",
"</div>\n",
"\n",
"Label Studio is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.\n",
">[Label Studio](https://labelstud.io/guide/get_started) is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.\n",
"\n",
"In this guide, you will learn how to connect a LangChain pipeline to Label Studio to:\n",
"In this guide, you will learn how to connect a LangChain pipeline to `Label Studio` to:\n",
"\n",
"- Aggregate all input prompts, conversations, and responses in a single LabelStudio project. This consolidates all the data in one place for easier labeling and analysis.\n",
"- Aggregate all input prompts, conversations, and responses in a single `Label Studio` project. This consolidates all the data in one place for easier labeling and analysis.\n",
"- Refine prompts and responses to create a dataset for supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) scenarios. The labeled data can be used to further train the LLM to improve its performance.\n",
"- Evaluate model responses through human feedback. LabelStudio provides an interface for humans to review and provide feedback on model responses, allowing evaluation and iteration."
"- Evaluate model responses through human feedback. `Label Studio` provides an interface for humans to review and provide feedback on model responses, allowing evaluation and iteration."
]
},
{
@@ -362,9 +362,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "labelops",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "labelops"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -376,9 +376,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 1
"nbformat_minor": 4
}

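As a rough sketch of the aggregation step described above, LangChain ships a `LabelStudioCallbackHandler` that sends prompts and completions to a Label Studio project. The URL, API key, and project name below are placeholder assumptions:

```python
import os

from langchain.callbacks import LabelStudioCallbackHandler
from langchain.llms import OpenAI

# Placeholder connection details for a local Label Studio instance (assumptions).
os.environ["LABEL_STUDIO_URL"] = "http://localhost:8080"
os.environ["LABEL_STUDIO_API_KEY"] = "<your-label-studio-api-key>"

llm = OpenAI(
    temperature=0,
    callbacks=[LabelStudioCallbackHandler(project_name="My Project")],
)
# Each call creates a task in the Label Studio project for human review.
llm("Tell me a joke")
```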
View File

@@ -1,6 +1,6 @@
# LLMonitor
[LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.
>[LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.
<video controls width='100%' >
<source src='https://llmonitor.com/videos/demo-annotated.mp4'/>

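A minimal sketch of wiring the callback into a chat model; the app ID below is a placeholder you would copy from the LLMonitor dashboard:

```python
from langchain.callbacks import LLMonitorCallbackHandler
from langchain.chat_models import ChatOpenAI

# Placeholder app ID (found on your LLMonitor dashboard).
handler = LLMonitorCallbackHandler(app_id="<your-llmonitor-app-id>")

chat = ChatOpenAI(callbacks=[handler])
chat.predict("Hello, how are you?")  # cost, usage and latency are reported to LLMonitor
```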
View File

@@ -7,13 +7,13 @@
"source": [
"# PromptLayer\n",
"\n",
"![PromptLayer](https://promptlayer.com/text_logo.png)\n",
">[PromptLayer](https://docs.promptlayer.com/introduction) is a platform for prompt engineering. It also helps with the LLM observability to visualize requests, version prompts, and track usage.\n",
">\n",
">While `PromptLayer` does have LLMs that integrate directly with LangChain (e.g. [`PromptLayerOpenAI`](https://python.langchain.com/docs/integrations/llms/promptlayer_openai)), using a callback is the recommended way to integrate `PromptLayer` with LangChain.\n",
"\n",
"[PromptLayer](https://promptlayer.com) is a an LLM observability platform that lets you visualize requests, version prompts, and track usage. In this guide we will go over how to setup the `PromptLayerCallbackHandler`. \n",
"In this guide, we will go over how to setup the `PromptLayerCallbackHandler`. \n",
"\n",
"While PromptLayer does have LLMs that integrate directly with LangChain (e.g. [`PromptLayerOpenAI`](https://python.langchain.com/docs/integrations/llms/promptlayer_openai)), this callback is the recommended way to integrate PromptLayer with LangChain.\n",
"\n",
"See [our docs](https://docs.promptlayer.com/languages/langchain) for more information."
"See [PromptLayer docs](https://docs.promptlayer.com/languages/langchain) for more information."
]
},
{
@@ -51,7 +51,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Usage\n",
"## Usage\n",
"\n",
"Getting started with `PromptLayerCallbackHandler` is fairly simple, it takes two optional arguments:\n",
"1. `pl_tags` - an optional list of strings that will be tracked as tags on PromptLayer.\n",
@@ -63,7 +63,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Simple OpenAI Example\n",
"## Simple OpenAI Example\n",
"\n",
"In this simple example we use `PromptLayerCallbackHandler` with `ChatOpenAI`. We add a PromptLayer tag named `chatopenai`"
]
@@ -99,7 +99,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### GPT4All Example"
"## GPT4All Example"
]
},
{
@@ -125,9 +125,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Full Featured Example\n",
"## Full Featured Example\n",
"\n",
"In this example we unlock more of the power of PromptLayer.\n",
"In this example, we unlock more of the power of `PromptLayer`.\n",
"\n",
"PromptLayer allows you to visually create, version, and track prompt templates. Using the [Prompt Registry](https://docs.promptlayer.com/features/prompt-registry), we can programmatically fetch the prompt template called `example`.\n",
"\n",
@@ -182,7 +182,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "base",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -196,7 +196,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8 (default, Apr 13 2021, 12:59:45) \n[Clang 10.0.0 ]"
"version": "3.10.12"
},
"vscode": {
"interpreter": {

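Pulling the pieces above together, a minimal sketch of the simple OpenAI example; the API key is a placeholder:

```python
import promptlayer
from langchain.callbacks import PromptLayerCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

promptlayer.api_key = "<your-promptlayer-api-key>"  # placeholder

# Requests made by this model are tagged `chatopenai` on PromptLayer.
chat = ChatOpenAI(callbacks=[PromptLayerCallbackHandler(pl_tags=["chatopenai"])])
chat([HumanMessage(content="What comes after 1, 2, 3?")])
```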
View File

@@ -7,14 +7,15 @@
"source": [
"# SageMaker Tracking\n",
"\n",
"This notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into SageMaker Experiments. Here, we use different scenarios to showcase the capability:\n",
">[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed service that is used to quickly and easily build, train and deploy machine learning (ML) models. \n",
"\n",
">[Amazon SageMaker Experiments](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html) is a capability of `Amazon SageMaker` that lets you organize, track, compare and evaluate ML experiments and model versions.\n",
"\n",
"This notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into `SageMaker Experiments`. Here, we use different scenarios to showcase the capability:\n",
"* **Scenario 1**: *Single LLM* - A case where a single LLM model is used to generate output based on a given prompt.\n",
"* **Scenario 2**: *Sequential Chain* - A case where a sequential chain of two LLM models is used.\n",
"* **Scenario 3**: *Agent with Tools (Chain of Thought)* - A case where multiple tools (search and math) are used in addition to an LLM.\n",
"\n",
"[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed service that is used to quickly and easily build, train and deploy machine learning (ML) models. \n",
"\n",
"[Amazon SageMaker Experiments](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html) is a capability of Amazon SageMaker that lets you organize, track, compare and evaluate ML experiments and model versions.\n",
"\n",
"In this notebook, we will create a single experiment to log the prompts from each scenario."
]
@@ -899,9 +900,9 @@
],
"instance_type": "ml.t3.large",
"kernelspec": {
"display_name": "conda_pytorch_p310",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "conda_pytorch_p310"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -913,7 +914,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.10"
"version": "3.10.12"
}
},
"nbformat": 4,

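A rough sketch of Scenario 1 (single LLM): the callback is constructed from a SageMaker Experiments `Run`, and everything logged while the run is open lands in that experiment. The experiment and run names are illustrative assumptions, and AWS credentials plus the `sagemaker` SDK are required:

```python
from langchain.callbacks import SageMakerCallbackHandler
from langchain.llms import OpenAI
from sagemaker.experiments.run import Run

# Illustrative experiment/run names (assumptions).
with Run(experiment_name="langchain-demo", run_name="scenario-1-single-llm") as run:
    handler = SageMakerCallbackHandler(run)
    llm = OpenAI(temperature=0.9, callbacks=[handler])
    llm("Write a tagline for colorful socks")  # prompt and hyperparameters are logged
```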
View File

@@ -9,12 +9,13 @@
"source": [
"# Trubrics\n",
"\n",
"![Trubrics](https://miro.medium.com/v2/resize:fit:720/format:webp/1*AhYbKO-v8F4u3hx2aDIqKg.png)\n",
"\n",
"[Trubrics](https://trubrics.com) is an LLM user analytics platform that lets you collect, analyse and manage user\n",
"prompts & feedback on AI models. In this guide we will go over how to setup the `TrubricsCallbackHandler`. \n",
">[Trubrics](https://trubrics.com) is an LLM user analytics platform that lets you collect, analyse and manage user\n",
"prompts & feedback on AI models.\n",
">\n",
">Check out [Trubrics repo](https://github.com/trubrics/trubrics-sdk) for more information on `Trubrics`.\n",
"\n",
"Check out [our repo](https://github.com/trubrics/trubrics-sdk) for more information on Trubrics."
"In this guide, we will go over how to set up the `TrubricsCallbackHandler`. \n"
]
},
{
@@ -347,9 +348,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "langchain",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "langchain"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -361,7 +362,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.12"
}
},
"nbformat": 4,

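A minimal sketch of the setup described above, assuming a Trubrics account whose credentials are supplied via the environment:

```python
import os

from langchain.callbacks import TrubricsCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Placeholder Trubrics account credentials (assumptions).
os.environ["TRUBRICS_EMAIL"] = "<your-trubrics-email>"
os.environ["TRUBRICS_PASSWORD"] = "<your-trubrics-password>"

chat = ChatOpenAI(callbacks=[TrubricsCallbackHandler()])
chat([HumanMessage(content="Tell me a joke")])  # the prompt/response pair is saved to Trubrics
```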
View File

@@ -1,11 +1,21 @@
{
"cells": [
{
"cell_type": "raw",
"id": "a016701c",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Anthropic\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "bf733a38-db84-4363-89e2-de6735c37230",
"metadata": {},
"source": [
"# Anthropic\n",
"# ChatAnthropic\n",
"\n",
"This notebook covers how to get started with Anthropic chat models."
]

View File

@@ -1,12 +1,22 @@
{
"cells": [
{
"cell_type": "raw",
"id": "31895fc4",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Anyscale\n",
"---"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "642fd21c-600a-47a1-be96-6e1438b421a9",
"metadata": {},
"source": [
"# Anyscale\n",
"# ChatAnyscale\n",
"\n",
"This notebook demonstrates the use of `langchain.chat_models.ChatAnyscale` for [Anyscale Endpoints](https://endpoints.anyscale.com/).\n",
"\n",
@@ -33,7 +43,7 @@
"metadata": {},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
" ········\n"

View File

@@ -1,11 +1,21 @@
{
"cells": [
{
"cell_type": "raw",
"id": "641f8cb0",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Azure OpenAI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "38f26d7a",
"metadata": {},
"source": [
"# Azure OpenAI\n",
"# AzureChatOpenAI\n",
"\n",
">[Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview) provides REST API access to OpenAI's powerful language models including the GPT-4, GPT-3.5-Turbo, and Embeddings model series. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or a web-based interface in the Azure OpenAI Studio.\n",
"\n",

View File

@@ -1,10 +1,19 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Azure ML Endpoint\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Azure ML Endpoint\n",
"# AzureMLChatOnlineEndpoint\n",
"\n",
">[Azure Machine Learning](https://azure.microsoft.com/en-us/products/machine-learning/) is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. `Azure Foundation Models` include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.\n",
">\n",

View File

@@ -1,10 +1,19 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Baichuan Chat\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Baichuan Chat\n",
"# ChatBaichuan\n",
"\n",
"Baichuan chat models API by Baichuan Intelligent Technology. For more information, see [https://platform.baichuan-ai.com/docs/api](https://platform.baichuan-ai.com/docs/api)"
]
@@ -63,7 +72,9 @@
"outputs": [
{
"data": {
"text/plain": "AIMessage(content='首先我们需要确定闰年的二月有多少天。闰年的二月有29天。\\n\\n然后我们可以计算你的月薪\\n\\n日薪 = 月薪 / (当月天数)\\n\\n所以你的月薪 = 日薪 * 当月天数\\n\\n将数值代入公式\\n\\n月薪 = 8元/天 * 29天 = 232元\\n\\n因此你在闰年的二月的月薪是232元。')"
"text/plain": [
"AIMessage(content='首先我们需要确定闰年的二月有多少天。闰年的二月有29天。\\n\\n然后我们可以计算你的月薪\\n\\n日薪 = 月薪 / (当月天数)\\n\\n所以你的月薪 = 日薪 * 当月天数\\n\\n将数值代入公式\\n\\n月薪 = 8元/天 * 29天 = 232元\\n\\n因此你在闰年的二月的月薪是232元。')"
]
},
"execution_count": 3,
"metadata": {},
@@ -76,16 +87,23 @@
},
{
"cell_type": "markdown",
"source": [
"## For ChatBaichuan with Streaming"
],
"metadata": {
"collapsed": false
}
},
"source": [
"## For ChatBaichuan with Streaming"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"ExecuteTime": {
"end_time": "2023-10-17T15:14:25.870044Z",
"start_time": "2023-10-17T15:14:25.863381Z"
},
"collapsed": false
},
"outputs": [],
"source": [
"chat = ChatBaichuan(\n",
@@ -93,22 +111,24 @@
" baichuan_secret_key=\"YOUR_SECRET_KEY\",\n",
" streaming=True,\n",
")"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-10-17T15:14:25.870044Z",
"start_time": "2023-10-17T15:14:25.863381Z"
}
}
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"ExecuteTime": {
"end_time": "2023-10-17T15:14:27.153546Z",
"start_time": "2023-10-17T15:14:25.868470Z"
},
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": "AIMessageChunk(content='首先我们需要确定闰年的二月有多少天。闰年的二月有29天。\\n\\n然后我们可以计算你的月薪\\n\\n日薪 = 月薪 / (当月天数)\\n\\n所以你的月薪 = 日薪 * 当月天数\\n\\n将数值代入公式\\n\\n月薪 = 8元/天 * 29天 = 232元\\n\\n因此你在闰年的二月的月薪是232元。')"
"text/plain": [
"AIMessageChunk(content='首先我们需要确定闰年的二月有多少天。闰年的二月有29天。\\n\\n然后我们可以计算你的月薪\\n\\n日薪 = 月薪 / (当月天数)\\n\\n所以你的月薪 = 日薪 * 当月天数\\n\\n将数值代入公式\\n\\n月薪 = 8元/天 * 29天 = 232元\\n\\n因此你在闰年的二月的月薪是232元。')"
]
},
"execution_count": 6,
"metadata": {},
@@ -117,14 +137,7 @@
],
"source": [
"chat([HumanMessage(content=\"我日薪8块钱请问在闰年的二月我月薪多少\")])"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-10-17T15:14:27.153546Z",
"start_time": "2023-10-17T15:14:25.868470Z"
}
}
]
}
],
"metadata": {

View File

@@ -1,11 +1,20 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Baidu Qianfan\n",
"---"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Baidu Qianfan\n",
"# QianfanChatEndpoint\n",
"\n",
"Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open-source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily.\n",
"\n",

View File

@@ -1,13 +1,31 @@
{
"cells": [
{
"cell_type": "raw",
"id": "fbc66410",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Bedrock Chat\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "bf733a38-db84-4363-89e2-de6735c37230",
"metadata": {},
"source": [
"# Bedrock Chat\n",
"# BedrockChat\n",
"\n",
"[Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case"
">[Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that offers a choice of \n",
"> high-performing foundation models (FMs) from leading AI companies like `AI21 Labs`, `Anthropic`, `Cohere`, \n",
"> `Meta`, `Stability AI`, and `Amazon` via a single API, along with a broad set of capabilities you need to \n",
"> build generative AI applications with security, privacy, and responsible AI. Using `Amazon Bedrock`, \n",
"> you can easily experiment with and evaluate top FMs for your use case, privately customize them with \n",
"> your data using techniques such as fine-tuning and `Retrieval Augmented Generation` (`RAG`), and build \n",
"> agents that execute tasks using your enterprise systems and data sources. Since `Amazon Bedrock` is \n",
"> serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy \n",
"> generative AI capabilities into your applications using the AWS services you are already familiar with.\n"
]
},
{
@@ -131,7 +149,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.10.12"
}
},
"nbformat": 4,

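A minimal sketch of calling a Bedrock-hosted chat model through LangChain, assuming AWS credentials are already configured and the chosen model is enabled in your account:

```python
from langchain.chat_models import BedrockChat
from langchain.schema import HumanMessage

# The model ID is an example; any chat-capable Bedrock FM enabled in your account works.
chat = BedrockChat(model_id="anthropic.claude-v2", model_kwargs={"temperature": 0.1})
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
```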
View File

@@ -1,11 +1,21 @@
{
"cells": [
{
"cell_type": "raw",
"id": "53fbf15f",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Cohere\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "bf733a38-db84-4363-89e2-de6735c37230",
"metadata": {},
"source": [
"# Cohere\n",
"# ChatCohere\n",
"\n",
"This notebook covers how to get started with Cohere chat models."
]

View File

@@ -1,13 +1,34 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Ernie Bot Chat\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ERNIE-Bot Chat\n",
"# ErnieBotChat\n",
"\n",
"[ERNIE-Bot](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/jlil56u11) is a large language model developed by Baidu, covering a huge amount of Chinese data.\n",
"This notebook covers how to get started with ErnieBot chat models."
"This notebook covers how to get started with ErnieBot chat models.\n",
"\n",
"**Note:** We recommend users using this class to switch to [Baidu Qianfan](./baidu_qianfan_endpoint). they are 3 why we recommend users to use `QianfanChatEndpoint`:\n",
"1. `QianfanChatEndpoint` support more LLM in the Qianfan platform.\n",
"2. `QianfanChatEndpoint` support streaming mode.\n",
"3. `QianfanChatEndpoint` support function calling usgage.\n",
"\n",
"Some tips for migration:\n",
"- change `ernie_client_id` to `qianfan_ak`, also change `ernie_client_secret` to `qianfan_sk`.\n",
"- install `qianfan` package. \n",
" ```\n",
" pip install qianfan\n",
" ```"
]
},
{

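The migration tips above, as a side-by-side sketch; the client ID and secret are placeholders:

```python
# Before: ErnieBotChat
# from langchain.chat_models import ErnieBotChat
# chat = ErnieBotChat(ernie_client_id="YOUR_ID", ernie_client_secret="YOUR_SECRET")

# After: QianfanChatEndpoint (requires `pip install qianfan`)
from langchain.chat_models import QianfanChatEndpoint
from langchain.schema import HumanMessage

chat = QianfanChatEndpoint(
    qianfan_ak="YOUR_ID",      # was ernie_client_id
    qianfan_sk="YOUR_SECRET",  # was ernie_client_secret
)
chat([HumanMessage(content="hello")])
```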
View File

@@ -1,11 +1,21 @@
{
"cells": [
{
"cell_type": "raw",
"id": "5e45f35c",
"metadata": {},
"source": [
"---\n",
"sidebar_label: EverlyAI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "642fd21c-600a-47a1-be96-6e1438b421a9",
"metadata": {},
"source": [
"# EverlyAI\n",
"# ChatEverlyAI\n",
"\n",
">[EverlyAI](https://everlyai.xyz) allows you to run your ML models at scale in the cloud. It also provides API access to [several LLM models](https://everlyai.xyz).\n",
"\n",

View File

@@ -1,12 +1,22 @@
{
"cells": [
{
"cell_type": "raw",
"id": "529aeba9",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Fireworks\n",
"---"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "642fd21c-600a-47a1-be96-6e1438b421a9",
"metadata": {},
"source": [
"# Fireworks\n",
"# ChatFireworks\n",
"\n",
">[Fireworks](https://app.fireworks.ai/) accelerates product development on generative AI by creating an innovative AI experiment and production platform. \n",
"\n",

View File

@@ -1,11 +1,20 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Google Cloud Vertex AI\n",
"---"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Cloud Vertex AI \n",
"# ChatVertexAI\n",
"\n",
"Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
"\n",
@@ -25,18 +34,18 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"#!pip install langchain google-cloud-aiplatform"
"!pip install -U google-cloud-aiplatform"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
@@ -46,43 +55,29 @@
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"chat = ChatVertexAI()"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [],
"source": [
"system = \"You are a helpful assistant who translate English to French\"\n",
"human = \"Translate this sentence from English to French. I love programming.\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"messages = prompt.format_messages()"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" J'aime la programmation.\", additional_kwargs={}, example=False)"
"AIMessage(content=\" J'aime la programmation.\")"
]
},
"execution_count": 9,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat(messages)"
"system = \"You are a helpful assistant who translate English to French\"\n",
"human = \"Translate this sentence from English to French. I love programming.\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chat = ChatVertexAI()\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke({})"
]
},
{
@@ -94,35 +89,29 @@
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"system = (\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
")\n",
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])"
]
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' 私はプログラミングが大好きです。', additional_kwargs={}, example=False)"
"AIMessage(content=' プログラミングが大好きです')"
]
},
"execution_count": 13,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"system = (\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
")\n",
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chain = prompt | chat\n",
"\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
@@ -153,20 +142,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat = ChatVertexAI(\n",
" model_name=\"codechat-bison\", max_output_tokens=1000, temperature=0.5\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 4,
"metadata": {
"tags": []
},
@@ -176,20 +152,39 @@
"output_type": "stream",
"text": [
" ```python\n",
"def is_prime(x): \n",
" if (x <= 1): \n",
"def is_prime(n):\n",
" if n <= 1:\n",
" return False\n",
" for i in range(2, x): \n",
" if (x % i == 0): \n",
" for i in range(2, n):\n",
" if n % i == 0:\n",
" return False\n",
" return True\n",
"\n",
"def find_prime_numbers(n):\n",
" prime_numbers = []\n",
" for i in range(2, n + 1):\n",
" if is_prime(i):\n",
" prime_numbers.append(i)\n",
" return prime_numbers\n",
"\n",
"print(find_prime_numbers(100))\n",
"```\n",
"\n",
"Output:\n",
"\n",
"```\n",
"[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]\n",
"```\n"
]
}
],
"source": [
"# For simple string in string out usage, we can use the `predict` method:\n",
"print(chat.predict(\"Write a Python function to identify all prime numbers\"))"
"chat = ChatVertexAI(\n",
" model_name=\"codechat-bison\", max_output_tokens=1000, temperature=0.5\n",
")\n",
"\n",
"message = chat.invoke(\"Write a Python function to identify all prime numbers\")\n",
"print(message.content)"
]
},
{
@@ -198,66 +193,47 @@
"source": [
"## Asynchronous calls\n",
"\n",
"We can make asynchronous calls via the `agenerate` and `ainvoke` methods."
"We can make asynchronous calls via the Runnables [Async Interface](/docs/expression_language/interface)"
]
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"# for running these examples in the notebook:\n",
"import asyncio\n",
"\n",
"# import nest_asyncio\n",
"# nest_asyncio.apply()"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[ChatGeneration(text=\" J'aime la programmation.\", generation_info=None, message=AIMessage(content=\" J'aime la programmation.\", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('223599ef-38f8-4c79-ac6d-a5013060eb9d'))])"
]
},
"execution_count": 35,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat = ChatVertexAI(\n",
" model_name=\"chat-bison\",\n",
" max_output_tokens=1000,\n",
" temperature=0.7,\n",
" top_p=0.95,\n",
" top_k=40,\n",
")\n",
"import nest_asyncio\n",
"\n",
"asyncio.run(chat.agenerate([messages]))"
"nest_asyncio.apply()"
]
},
{
"cell_type": "code",
"execution_count": 36,
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' अहं प्रोग्रामिंग प्रेमामि', additional_kwargs={}, example=False)"
"AIMessage(content=' Why do you love programming?')"
]
},
"execution_count": 36,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"system = (\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
")\n",
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"chain = prompt | chat\n",
"\n",
"asyncio.run(\n",
" chain.ainvoke(\n",
" {\n",
@@ -280,56 +256,51 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sys"
]
},
{
"cell_type": "code",
"execution_count": 32,
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" 1. China (1,444,216,107)\n",
"2. India (1,393,409,038)\n",
"3. United States (332,403,650)\n",
"4. Indonesia (273,523,615)\n",
"5. Pakistan (220,892,340)\n",
"6. Brazil (212,559,409)\n",
"7. Nigeria (206,139,589)\n",
"8. Bangladesh (164,689,383)\n",
"9. Russia (145,934,462)\n",
"10. Mexico (128,932,488)\n",
"11. Japan (126,476,461)\n",
"12. Ethiopia (115,063,982)\n",
"13. Philippines (109,581,078)\n",
"14. Egypt (102,334,404)\n",
"15. Vietnam (97,338,589)"
" The five most populous countries in the world are:\n",
"1. China (1.4 billion)\n",
"2. India (1.3 billion)\n",
"3. United States (331 million)\n",
"4. Indonesia (273 million)\n",
"5. Pakistan (220 million)"
]
}
],
"source": [
"import sys\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"human\", \"List out the 15 most populous countries in the world\")]\n",
" [(\"human\", \"List out the 5 most populous countries in the world\")]\n",
")\n",
"messages = prompt.format_messages()\n",
"for chunk in chat.stream(messages):\n",
"\n",
"chat = ChatVertexAI()\n",
"\n",
"chain = prompt | chat\n",
"\n",
"for chunk in chain.stream({}):\n",
" sys.stdout.write(chunk.content)\n",
" sys.stdout.flush()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "poetry-venv"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -341,7 +312,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.4"
},
"vscode": {
"interpreter": {

View File

@@ -1,10 +1,19 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Tencent Hunyuan\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tencent Hunyuan\n",
"# ChatHunyuan\n",
"\n",
"Hunyuan chat model API by Tencent. For more information, see [https://cloud.tencent.com/document/product/1729](https://cloud.tencent.com/document/product/1729)"
]
@@ -54,7 +63,9 @@
"outputs": [
{
"data": {
"text/plain": "AIMessage(content=\"J'aime programmer.\")"
"text/plain": [
"AIMessage(content=\"J'aime programmer.\")"
]
},
"execution_count": 3,
"metadata": {},
@@ -73,16 +84,23 @@
},
{
"cell_type": "markdown",
"source": [
"## For ChatHunyuan with Streaming"
],
"metadata": {
"collapsed": false
}
},
"source": [
"## For ChatHunyuan with Streaming"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2023-10-19T10:20:41.507720Z",
"start_time": "2023-10-19T10:20:41.496456Z"
},
"collapsed": false
},
"outputs": [],
"source": [
"chat = ChatHunyuan(\n",
@@ -91,22 +109,24 @@
" hunyuan_secret_key=\"YOUR_SECRET_KEY\",\n",
" streaming=True,\n",
")"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-10-19T10:20:41.507720Z",
"start_time": "2023-10-19T10:20:41.496456Z"
}
}
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"ExecuteTime": {
"end_time": "2023-10-19T10:20:46.275673Z",
"start_time": "2023-10-19T10:20:44.241097Z"
},
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": "AIMessageChunk(content=\"J'aime programmer.\")"
"text/plain": [
"AIMessageChunk(content=\"J'aime programmer.\")"
]
},
"execution_count": 3,
"metadata": {},
@@ -121,26 +141,19 @@
" )\n",
" ]\n",
")"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-10-19T10:20:46.275673Z",
"start_time": "2023-10-19T10:20:44.241097Z"
}
}
]
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"start_time": "2023-10-19T10:19:56.233477Z"
}
}
},
"collapsed": false
},
"outputs": [],
"source": []
}
],
"metadata": {

View File

@@ -1,10 +1,19 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Konko\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Konko\n",
"# ChatKonko\n",
"\n",
">[Konko](https://www.konko.ai/) API is a fully managed Web API designed to help application developers:\n",
"\n",

View File

@@ -1,12 +1,22 @@
{
"cells": [
{
"cell_type": "raw",
"id": "59148044",
"metadata": {},
"source": [
"---\n",
"sidebar_label: LiteLLM\n",
"---"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "bf733a38-db84-4363-89e2-de6735c37230",
"metadata": {},
"source": [
"# 🚅 LiteLLM\n",
"# ChatLiteLLM\n",
"\n",
"[LiteLLM](https://github.com/BerriAI/litellm) is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc. \n",
"\n",

View File

@@ -1,11 +1,21 @@
{
"cells": [
{
"cell_type": "raw",
"id": "7320f16b",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Llama 2 Chat\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "90a1faf2",
"metadata": {},
"source": [
"# Llama-2 Chat\n",
"# Llama2Chat\n",
"\n",
"This notebook shows how to augment Llama-2 `LLM`s with the `Llama2Chat` wrapper to support the [Llama-2 chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). Several `LLM` implementations in LangChain can be used as interface to Llama-2 chat models. These include [HuggingFaceTextGenInference](https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference), [LlamaCpp](https://python.langchain.com/docs/use_cases/question_answering/how_to/local_retrieval_qa), [GPT4All](https://python.langchain.com/docs/integrations/llms/gpt4all), ..., to mention a few examples. \n",
"\n",
@@ -721,7 +731,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
"version": "3.11.4"
}
},
"nbformat": 4,

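In outline, the wrapper takes any of the `LLM` implementations mentioned above and applies the Llama-2 chat prompt format to a message list. A sketch using `LlamaCpp`, with a placeholder weights path:

```python
from langchain.llms import LlamaCpp
from langchain.schema import HumanMessage, SystemMessage
from langchain_experimental.chat_models import Llama2Chat

# Placeholder path to local Llama-2 chat weights in GGUF format (assumption).
llm = LlamaCpp(model_path="/path/to/llama-2-7b-chat.gguf", max_tokens=200)

# Llama2Chat renders the messages into the Llama-2 [INST]/<<SYS>> prompt layout.
model = Llama2Chat(llm=llm)
model(
    [
        SystemMessage(content="You are a helpful assistant."),
        HumanMessage(content="Tell me a joke."),
    ]
)
```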
View File

@@ -1,11 +1,21 @@
{
"cells": [
{
"cell_type": "raw",
"id": "71b5cfca",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Llama API\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "90a1faf2",
"metadata": {},
"source": [
"# Llama API\n",
"# ChatLlamaAPI\n",
"\n",
"This notebook shows how to use LangChain with [LlamaAPI](https://llama-api.com/) - a hosted version of Llama2 that adds in support for function calling."
]

View File

@@ -1,11 +1,20 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: MiniMax\n",
"---"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# MiniMax\n",
"# MiniMaxChat\n",
"\n",
"[Minimax](https://api.minimax.chat) is a Chinese startup that provides LLM service for companies and individuals.\n",
"\n",

View File

@@ -1,10 +1,19 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Ollama\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Ollama\n",
"# ChatOllama\n",
"\n",
"[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as LLaMA2, locally.\n",
"\n",

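A minimal sketch, assuming a local Ollama server is running and the `llama2` model has been pulled (`ollama pull llama2`):

```python
from langchain.chat_models import ChatOllama
from langchain.schema import HumanMessage

chat = ChatOllama(model="llama2")  # talks to the local Ollama server
chat([HumanMessage(content="Why is the sky blue?")])
```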
View File

@@ -1,10 +1,19 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Ollama Functions\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Ollama Functions\n",
"# OllamaFunctions\n",
"\n",
"This notebook shows how to use an experimental wrapper around Ollama that gives it the same API as OpenAI Functions.\n",
"\n",

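In sketch form: the wrapper accepts an OpenAI-Functions-style schema via `bind`, and forcing `function_call` makes the model always answer with that function. The weather function below is illustrative, and the named model is assumed to be available locally:

```python
from langchain_experimental.llms.ollama_functions import OllamaFunctions

model = OllamaFunctions(model="mistral")  # assumes `ollama pull mistral` has been run

# Bind an OpenAI-Functions-style schema; the weather function is illustrative.
model = model.bind(
    functions=[
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    }
                },
                "required": ["location"],
            },
        }
    ],
    function_call={"name": "get_current_weather"},
)
model.invoke("What is the weather in Boston?")
```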
View File

@@ -1,11 +1,21 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: OpenAI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# OpenAI\n",
"# ChatOpenAI\n",
"\n",
"This notebook covers how to get started with OpenAI chat models."
]

View File

@@ -1,10 +1,19 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: AliCloud PAI EAS\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AliCloud PAI EAS\n",
"# PaiEasChatEndpoint\n",
"Machine Learning Platform for AI of Alibaba Cloud is a machine learning or deep learning engineering platform intended for enterprises and developers. It provides easy-to-use, cost-effective, high-performance, and easy-to-scale plug-ins that can be applied to various industry scenarios. With over 140 built-in optimization algorithms, Machine Learning Platform for AI provides whole-process AI engineering capabilities including data labeling (PAI-iTAG), model building (PAI-Designer and PAI-DSW), model training (PAI-DLC), compilation optimization, and inference deployment (PAI-EAS). PAI-EAS supports different types of hardware resources, including CPUs and GPUs, and features high throughput and low latency. It allows you to deploy large-scale complex models with a few clicks and perform elastic scale-ins and scale-outs in real time. It also provides a comprehensive O&M and monitoring system."
]
},

View File

@@ -1,12 +1,22 @@
{
"cells": [
{
"cell_type": "raw",
"id": "ce3672d3",
"metadata": {},
"source": [
"---\n",
"sidebar_label: PromptLayer ChatOpenAI\n",
"---"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "959300d4",
"metadata": {},
"source": [
"# PromptLayer ChatOpenAI\n",
"# PromptLayerChatOpenAI\n",
"\n",
"This example showcases how to connect to [PromptLayer](https://www.promptlayer.com) to start recording your ChatOpenAI requests."
]
@@ -119,12 +129,6 @@
"**The above request should now appear on your [PromptLayer dashboard](https://www.promptlayer.com).**"
]
},
{
"cell_type": "markdown",
"id": "05e9e2fe",
"metadata": {},
"source": []
},
{
"attachments": {},
"cell_type": "markdown",
@@ -142,6 +146,8 @@
"metadata": {},
"outputs": [],
"source": [
"import promptlayer\n",
"\n",
"chat = PromptLayerChatOpenAI(return_pl_id=True)\n",
"chat_results = chat.generate([[HumanMessage(content=\"I am a cat and I want\")]])\n",
"\n",
@@ -162,7 +168,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "base",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -176,7 +182,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8 (default, Apr 13 2021, 12:59:45) \n[Clang 10.0.0 ]"
"version": "3.10.12"
},
"vscode": {
"interpreter": {

View File

@@ -1,5 +1,14 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Tongyi Qwen\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {
@@ -9,7 +18,7 @@
}
},
"source": [
"# Tongyi Qwen\n",
"# ChatTongyi\n",
"Tongyi Qwen is a large language model developed by Alibaba's Damo Academy. It is capable of understanding user intent through natural language understanding and semantic analysis, based on user input in natural language. It provides services and assistance to users in different domains and tasks. By providing clear and detailed instructions, you can obtain results that better align with your expectations.\n",
"In this notebook, we will introduce how to use langchain with [Tongyi](https://www.aliyun.com/product/dashscope) mainly in `Chat` corresponding\n",
" to the package `langchain/chat_models` in langchain"
@@ -41,7 +50,7 @@
},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
" ········\n"

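A minimal sketch of the `Chat` usage described above, assuming the `dashscope` package is installed and the API key below is replaced with a real one:

```python
import os

from langchain.chat_models import ChatTongyi
from langchain.schema import HumanMessage

os.environ["DASHSCOPE_API_KEY"] = "<your-dashscope-api-key>"  # placeholder

chat = ChatTongyi()
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
```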
View File

@@ -1,5 +1,15 @@
{
"cells": [
{
"cell_type": "raw",
"id": "eb65deaa",
"metadata": {},
"source": [
"---\n",
"sidebar_label: vLLM Chat\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "eb7e5679-aa06-47e4-a1a3-b6b70e604017",

View File

@@ -1,5 +1,15 @@
{
"cells": [
{
"cell_type": "raw",
"id": "66107bdd",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Volc Enging Maas\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "404758628c7b20f6",
@@ -7,7 +17,7 @@
"collapsed": false
},
"source": [
"# Volc Engine Maas\n",
"# VolcEngineMaasChat\n",
"\n",
"This notebook provides you with a guide on how to get started with volc engine maas chat models."
]
@@ -86,7 +96,9 @@
"outputs": [
{
"data": {
"text/plain": "AIMessage(content='好的,这是一个笑话:\\n\\n为什么鸟儿不会玩电脑游戏\\n\\n因为它们没有翅膀')"
"text/plain": [
"AIMessage(content='好的,这是一个笑话:\\n\\n为什么鸟儿不会玩电脑游戏\\n\\n因为它们没有翅膀')"
]
},
"execution_count": 26,
"metadata": {},
@@ -141,7 +153,9 @@
"outputs": [
{
"data": {
"text/plain": "AIMessage(content='好的,这是一个笑话:\\n\\n三岁的女儿说她会造句了妈妈让她用“年轻”造句女儿说“妈妈减肥一年轻了好几斤”。')"
"text/plain": [
"AIMessage(content='好的,这是一个笑话:\\n\\n三岁的女儿说她会造句了妈妈让她用“年轻”造句女儿说“妈妈减肥一年轻了好几斤”。')"
]
},
"execution_count": 28,
"metadata": {},

View File

@@ -1,11 +1,21 @@
{
"cells": [
{
"cell_type": "raw",
"id": "b4154fbe",
"metadata": {},
"source": [
"---\n",
"sidebar_label: YandexGPT\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "af63c9db-e4bd-4d3b-a4d7-7927f5541734",
"metadata": {},
"source": [
"# YandexGPT\n",
"# ChatYandexGPT\n",
"\n",
"This notebook goes over how to use Langchain with [YandexGPT](https://cloud.yandex.com/en/services/yandexgpt) chat model.\n",
"\n",

View File

@@ -153,7 +153,7 @@
"source": [
"# Now all of the Tortoise's messages will take the AI message class\n",
"# which maps to the 'assistant' role in OpenAI's training format\n",
"alternating_sessions[0][\"messages\"][:3]"
"chat_sessions[0][\"messages\"][:3]"
]
},
{
@@ -191,7 +191,7 @@
}
],
"source": [
"training_data = convert_messages_for_finetuning(alternating_sessions)\n",
"training_data = convert_messages_for_finetuning(chat_sessions)\n",
"print(f\"Prepared {len(training_data)} dialogues for training\")"
]
},
@@ -243,16 +243,16 @@
" my_file.write((json.dumps({\"messages\": m}) + \"\\n\").encode(\"utf-8\"))\n",
"\n",
"my_file.seek(0)\n",
"training_file = openai.File.create(file=my_file, purpose=\"fine-tune\")\n",
"training_file = openai.files.create(file=my_file, purpose=\"fine-tune\")\n",
"\n",
"# OpenAI audits each training file for compliance reasons.\n",
"# This make take a few minutes\n",
"status = openai.File.retrieve(training_file.id).status\n",
"status = openai.files.retrieve(training_file.id).status\n",
"start_time = time.time()\n",
"while status != \"processed\":\n",
" print(f\"Status=[{status}]... {time.time() - start_time:.2f}s\", end=\"\\r\", flush=True)\n",
" time.sleep(5)\n",
" status = openai.File.retrieve(training_file.id).status\n",
" status = openai.files.retrieve(training_file.id).status\n",
"print(f\"File {training_file.id} ready after {time.time() - start_time:.2f} seconds.\")"
]
},
@@ -271,7 +271,7 @@
"metadata": {},
"outputs": [],
"source": [
"job = openai.FineTuningJob.create(\n",
"job = openai.fine_tuning.jobs.create(\n",
" training_file=training_file.id,\n",
" model=\"gpt-3.5-turbo\",\n",
")"
@@ -300,12 +300,12 @@
}
],
"source": [
"status = openai.FineTuningJob.retrieve(job.id).status\n",
"status = openai.fine_tuning.jobs.retrieve(job.id).status\n",
"start_time = time.time()\n",
"while status != \"succeeded\":\n",
" print(f\"Status=[{status}]... {time.time() - start_time:.2f}s\", end=\"\\r\", flush=True)\n",
" time.sleep(5)\n",
" job = openai.FineTuningJob.retrieve(job.id)\n",
" job = openai.fine_tuning.jobs.retrieve(job.id)\n",
" status = job.status"
]
},
@@ -416,7 +416,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
"version": "3.10.12"
}
},
"nbformat": 4,

View File

@@ -123,7 +123,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 6,
"id": "817bc077-c18a-473b-94a4-a7d810d583a8",
"metadata": {},
"outputs": [],
@@ -145,7 +145,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 7,
"id": "9e5ac127-b094-4584-9159-5a6d3d7315c7",
"metadata": {},
"outputs": [],
@@ -166,7 +166,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 8,
"id": "11d19e28-be49-4801-8065-1a58d13cd192",
"metadata": {},
"outputs": [
@@ -174,7 +174,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Status=[running]... 302.42s. 143.85s\r"
"Status=[running]... 429.55s. 46.34s\r"
]
}
],
@@ -190,20 +190,20 @@
" my_file.write((json.dumps({\"messages\": dialog}) + \"\\n\").encode(\"utf-8\"))\n",
"\n",
"my_file.seek(0)\n",
"training_file = openai.File.create(file=my_file, purpose=\"fine-tune\")\n",
"training_file = openai.files.create(file=my_file, purpose=\"fine-tune\")\n",
"\n",
"job = openai.FineTuningJob.create(\n",
"job = openai.fine_tuning.jobs.create(\n",
" training_file=training_file.id,\n",
" model=\"gpt-3.5-turbo\",\n",
")\n",
"\n",
"# Wait for the fine-tuning to complete (this may take some time)\n",
"status = openai.FineTuningJob.retrieve(job.id).status\n",
"status = openai.fine_tuning.jobs.retrieve(job.id).status\n",
"start_time = time.time()\n",
"while status != \"succeeded\":\n",
" print(f\"Status=[{status}]... {time.time() - start_time:.2f}s\", end=\"\\r\", flush=True)\n",
" time.sleep(5)\n",
" status = openai.FineTuningJob.retrieve(job.id).status\n",
" status = openai.fine_tuning.jobs.retrieve(job.id).status\n",
"\n",
"# Now your model is fine-tuned!"
]
@@ -220,16 +220,18 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 10,
"id": "3f472ca4-fa9b-485d-bd37-8ce3c59c44db",
"metadata": {},
"outputs": [],
"source": [
"# Get the fine-tuned model ID\n",
"job = openai.FineTuningJob.retrieve(job.id)\n",
"job = openai.fine_tuning.jobs.retrieve(job.id)\n",
"model_id = job.fine_tuned_model\n",
"\n",
"# Use the fine-tuned model in LangChain\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(\n",
" model=model_id,\n",
" temperature=1,\n",
@@ -238,10 +240,21 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 11,
"id": "7d3b5845-6385-42d1-9f7d-5ea798dc2cd9",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='[{\"s\": \"There were three ravens\", \"object\": \"tree\", \"relation\": \"sat on\"}, {\"s\": \"three ravens\", \"object\": \"a tree\", \"relation\": \"sat on\"}]')"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.invoke(\"There were three ravens sat on a tree.\")"
]
@@ -271,7 +284,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.5"
}
},
"nbformat": 4,

View File

@@ -35,7 +35,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "473adce5-c863-49e6-85c3-049e0ec2222e",
"metadata": {},
"outputs": [],
@@ -65,7 +65,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "9a36d27f-2f3b-4148-b94a-9436fe8b00e0",
"metadata": {},
"outputs": [],
@@ -105,7 +105,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"id": "89bcc676-27e8-40dc-a4d6-92cf28e0db58",
"metadata": {},
"outputs": [
@@ -144,7 +144,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"id": "cd44ff01-22cf-431a-8bf4-29a758d1fcff",
"metadata": {},
"outputs": [],
@@ -169,18 +169,10 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 5,
"id": "62da7d8f-5cfc-45a6-946e-2bcda2b0ba1f",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised ServiceUnavailableError: The server is overloaded or not ready yet..\n"
]
}
],
"outputs": [],
"source": [
"math_questions = [\n",
" \"What's 45/9?\",\n",
@@ -219,7 +211,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 6,
"id": "d6037992-050d-4ada-a061-860c124f0bf1",
"metadata": {},
"outputs": [],
@@ -231,7 +223,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 7,
"id": "0444919a-6f5a-4726-9916-4603b1420d0e",
"metadata": {},
"outputs": [],
@@ -266,7 +258,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 8,
"id": "817bc077-c18a-473b-94a4-a7d810d583a8",
"metadata": {},
"outputs": [],
@@ -288,7 +280,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 9,
"id": "9e5ac127-b094-4584-9159-5a6d3d7315c7",
"metadata": {},
"outputs": [],
@@ -309,7 +301,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 10,
"id": "11d19e28-be49-4801-8065-1a58d13cd192",
"metadata": {},
"outputs": [
@@ -317,7 +309,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Status=[running]... 346.26s. 31.70s\r"
"Status=[running]... 349.84s. 17.72s\r"
]
}
],
@@ -333,20 +325,20 @@
" my_file.write((json.dumps({\"messages\": dialog}) + \"\\n\").encode(\"utf-8\"))\n",
"\n",
"my_file.seek(0)\n",
"training_file = openai.File.create(file=my_file, purpose=\"fine-tune\")\n",
"training_file = openai.files.create(file=my_file, purpose=\"fine-tune\")\n",
"\n",
"job = openai.FineTuningJob.create(\n",
"job = openai.fine_tuning.jobs.create(\n",
" training_file=training_file.id,\n",
" model=\"gpt-3.5-turbo\",\n",
")\n",
"\n",
"# Wait for the fine-tuning to complete (this may take some time)\n",
"status = openai.FineTuningJob.retrieve(job.id).status\n",
"status = openai.fine_tuning.jobs.retrieve(job.id).status\n",
"start_time = time.time()\n",
"while status != \"succeeded\":\n",
" print(f\"Status=[{status}]... {time.time() - start_time:.2f}s\", end=\"\\r\", flush=True)\n",
" time.sleep(5)\n",
" status = openai.FineTuningJob.retrieve(job.id).status\n",
" status = openai.fine_tuning.jobs.retrieve(job.id).status\n",
"\n",
"# Now your model is fine-tuned!"
]
@@ -363,16 +355,18 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 11,
"id": "7f45b281-1dfa-43cb-bd28-99fa7e9f45d1",
"metadata": {},
"outputs": [],
"source": [
"# Get the fine-tuned model ID\n",
"job = openai.FineTuningJob.retrieve(job.id)\n",
"job = openai.fine_tuning.jobs.retrieve(job.id)\n",
"model_id = job.fine_tuned_model\n",
"\n",
"# Use the fine-tuned model in LangChain\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(\n",
" model=model_id,\n",
" temperature=1,\n",
@@ -381,17 +375,17 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 12,
"id": "7d3b5845-6385-42d1-9f7d-5ea798dc2cd9",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='{\\n \"num1\": 56,\\n \"num2\": 7,\\n \"operation\": \"/\"\\n}')"
"AIMessage(content='Let me calculate that for you.')"
]
},
"execution_count": 18,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@@ -425,7 +419,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.5"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,884 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1f3cebbe-079a-4bfe-b1a1-07bdac882ce2",
"metadata": {},
"source": [
"# Amazon Textract \n",
"\n",
">[Amazon Textract](https://docs.aws.amazon.com/managedservices/latest/userguide/textract.html) is a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents.\n",
">\n",
">It goes beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables. Today, many companies manually extract data from scanned documents such as PDFs, images, tables, and forms, or through simple OCR software that requires manual configuration (which often must be updated when the form changes). To overcome these manual and expensive processes, `Textract` uses ML to read and process any type of document, accurately extracting text, handwriting, tables, and other data with no manual effort. \n",
"\n",
"This sample demonstrates the use of `Amazon Textract` in combination with LangChain as a DocumentLoader.\n",
"\n",
"`Textract` supports`PDF`, `TIF`F, `PNG` and `JPEG` format.\n",
"\n",
"`Textract` supports these [document sizes, languages and characters](https://docs.aws.amazon.com/textract/latest/dg/limits-document.html)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a1aa66d4-85f2-42ad-a8d3-de7cea8d6c35",
"metadata": {},
"outputs": [],
"source": [
"#!pip install boto3 openai tiktoken python-dotenv"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "e4305a0d-37da-41f9-a52c-7d166d7dbabf",
"metadata": {},
"outputs": [],
"source": [
"#!pip install \"amazon-textract-caller>=0.2.0\""
]
},
{
"cell_type": "markdown",
"id": "400b25c6-befa-4730-a201-39ff112c8858",
"metadata": {},
"source": [
"## Sample 1\n",
"\n",
"The first example uses a local file, which internally will be send to Amazon Textract sync API [DetectDocumentText](https://docs.aws.amazon.com/textract/latest/dg/API_DetectDocumentText.html). \n",
"\n",
"Local files or URL endpoints like HTTP:// are limited to one page documents for Textract.\n",
"Multi-page documents have to reside on S3. This sample file is a jpeg."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1becee92-e82f-42d4-9b4e-b23d77cbe88d",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.document_loaders import AmazonTextractPDFLoader\n",
"\n",
"loader = AmazonTextractPDFLoader(\"example_data/alejandro_rosalez_sample-small.jpeg\")\n",
"documents = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "d566dc56-c9a9-44ec-84fb-a81928f90d40",
"metadata": {},
"source": [
"Output from the file"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "1272ce8c-d298-4059-ac0a-780bf5f82302",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Patient Information First Name: ALEJANDRO Last Name: ROSALEZ Date of Birth: 10/10/1982 Sex: M Marital Status: MARRIED Email Address: Address: 123 ANY STREET City: ANYTOWN State: CA Zip Code: 12345 Phone: 646-555-0111 Emergency Contact 1: First Name: CARLOS Last Name: SALAZAR Phone: 212-555-0150 Relationship to Patient: BROTHER Emergency Contact 2: First Name: JANE Last Name: DOE Phone: 650-555-0123 Relationship FRIEND to Patient: Did you feel fever or feverish lately? Yes No Are you having shortness of breath? Yes No Do you have a cough? Yes No Did you experience loss of taste or smell? Yes No Where you in contact with any confirmed COVID-19 positive patients? Yes No Did you travel in the past 14 days to any regions affected by COVID-19? Yes No Patient Information First Name: ALEJANDRO Last Name: ROSALEZ Date of Birth: 10/10/1982 Sex: M Marital Status: MARRIED Email Address: Address: 123 ANY STREET City: ANYTOWN State: CA Zip Code: 12345 Phone: 646-555-0111 Emergency Contact 1: First Name: CARLOS Last Name: SALAZAR Phone: 212-555-0150 Relationship to Patient: BROTHER Emergency Contact 2: First Name: JANE Last Name: DOE Phone: 650-555-0123 Relationship FRIEND to Patient: Did you feel fever or feverish lately? Yes No Are you having shortness of breath? Yes No Do you have a cough? Yes No Did you experience loss of taste or smell? Yes No Where you in contact with any confirmed COVID-19 positive patients? Yes No Did you travel in the past 14 days to any regions affected by COVID-19? Yes No ', metadata={'source': 'example_data/alejandro_rosalez_sample-small.jpeg', 'page': 1})]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"documents"
]
},
{
"cell_type": "markdown",
"id": "4cf7f19c-3635-453a-9c76-4baf98b8d7f4",
"metadata": {},
"source": [
"## Sample 2\n",
"The next sample loads a file from an HTTPS endpoint. \n",
"It has to be single page, as Amazon Textract requires all multi-page documents to be stored on S3."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "10374bfb-b325-451f-8bd0-c686710ab68c",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.document_loaders import AmazonTextractPDFLoader\n",
"\n",
"loader = AmazonTextractPDFLoader(\n",
" \"https://amazon-textract-public-content.s3.us-east-2.amazonaws.com/langchain/alejandro_rosalez_sample_1.jpg\"\n",
")\n",
"documents = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "16a2b6a3-7514-4c2c-a427-6847169af473",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Patient Information First Name: ALEJANDRO Last Name: ROSALEZ Date of Birth: 10/10/1982 Sex: M Marital Status: MARRIED Email Address: Address: 123 ANY STREET City: ANYTOWN State: CA Zip Code: 12345 Phone: 646-555-0111 Emergency Contact 1: First Name: CARLOS Last Name: SALAZAR Phone: 212-555-0150 Relationship to Patient: BROTHER Emergency Contact 2: First Name: JANE Last Name: DOE Phone: 650-555-0123 Relationship FRIEND to Patient: Did you feel fever or feverish lately? Yes No Are you having shortness of breath? Yes No Do you have a cough? Yes No Did you experience loss of taste or smell? Yes No Where you in contact with any confirmed COVID-19 positive patients? Yes No Did you travel in the past 14 days to any regions affected by COVID-19? Yes No Patient Information First Name: ALEJANDRO Last Name: ROSALEZ Date of Birth: 10/10/1982 Sex: M Marital Status: MARRIED Email Address: Address: 123 ANY STREET City: ANYTOWN State: CA Zip Code: 12345 Phone: 646-555-0111 Emergency Contact 1: First Name: CARLOS Last Name: SALAZAR Phone: 212-555-0150 Relationship to Patient: BROTHER Emergency Contact 2: First Name: JANE Last Name: DOE Phone: 650-555-0123 Relationship FRIEND to Patient: Did you feel fever or feverish lately? Yes No Are you having shortness of breath? Yes No Do you have a cough? Yes No Did you experience loss of taste or smell? Yes No Where you in contact with any confirmed COVID-19 positive patients? Yes No Did you travel in the past 14 days to any regions affected by COVID-19? Yes No ', metadata={'source': 'example_data/alejandro_rosalez_sample-small.jpeg', 'page': 1})]"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"documents"
]
},
{
"cell_type": "markdown",
"id": "3a9cd8ec-e663-4dc7-9db1-d2f575253141",
"metadata": {},
"source": [
"## Sample 3\n",
"\n",
"Processing a multi-page document requires the document to be on S3. The sample document resides in a bucket in us-east-2 and Textract needs to be called in that same region to be successful, so we set the region_name on the client and pass that in to the loader to ensure Textract is called from us-east-2. You could also to have your notebook running in us-east-2, setting the AWS_DEFAULT_REGION set to us-east-2 or when running in a different environment, pass in a boto3 Textract client with that region name like in the cell below."
]
},
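{
"cell_type": "code",
"execution_count": null,
"id": "5e7a9b3c-4d2f-4c8a-b1e6-region-env-sketch",
"metadata": {},
"outputs": [],
"source": [
"# Sketch of the environment-variable alternative mentioned above:\n",
"# set the default region before any boto3 client is created.\n",
"# import os\n",
"# os.environ[\"AWS_DEFAULT_REGION\"] = \"us-east-2\""
]
},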
{
"cell_type": "code",
"execution_count": 12,
"id": "8185e3e6-9599-4a47-8969-d6dcef3e6404",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import boto3\n",
"\n",
"textract_client = boto3.client(\"textract\", region_name=\"us-east-2\")\n",
"\n",
"file_path = \"s3://amazon-textract-public-content/langchain/layout-parser-paper.pdf\"\n",
"loader = AmazonTextractPDFLoader(file_path, client=textract_client)\n",
"documents = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "b8901eec-070d-4fd6-9d65-52211d332441",
"metadata": {},
"source": [
"Now getting the number of pages to validate the response (printing out the full response would be quite long...). We expect 16 pages."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "b23c01c8-cf69-4fe2-8141-4621edb7d79c",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"16"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(documents)"
]
},
{
"cell_type": "markdown",
"id": "b3e41b4d-b159-4274-89be-80d8159134ef",
"metadata": {},
"source": [
"## Using the AmazonTextractPDFLoader in an LangChain chain (e. g. OpenAI)\n",
"\n",
"The AmazonTextractPDFLoader can be used in a chain the same way the other loaders are used.\n",
"Textract itself does have a [Query feature](https://docs.aws.amazon.com/textract/latest/dg/API_Query.html), which offers similar functionality to the QA chain in this sample, which is worth checking out as well."
]
},
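{
"cell_type": "markdown",
"id": "f0e9d8c7-6a5b-4e3d-8c1a-queries-note",
"metadata": {},
"source": [
"As a point of comparison, here is a minimal sketch of calling the `Textract` Query feature directly via `boto3` (not part of the LangChain loader); it assumes a single-page document passed as bytes and the `QUERIES` feature type of [AnalyzeDocument](https://docs.aws.amazon.com/textract/latest/dg/API_AnalyzeDocument.html)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f0e9d8c7-6a5b-4e3d-8c1a-queries-code",
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch: ask Textract a natural-language question about a document.\n",
"import boto3\n",
"\n",
"textract = boto3.client(\"textract\", region_name=\"us-east-2\")\n",
"\n",
"with open(\"example_data/alejandro_rosalez_sample-small.jpeg\", \"rb\") as f:\n",
"    response = textract.analyze_document(\n",
"        Document={\"Bytes\": f.read()},\n",
"        FeatureTypes=[\"QUERIES\"],\n",
"        QueriesConfig={\"Queries\": [{\"Text\": \"What is the patient's first name?\"}]},\n",
"    )\n",
"\n",
"# QUERY_RESULT blocks hold the answers\n",
"print([b[\"Text\"] for b in response[\"Blocks\"] if b[\"BlockType\"] == \"QUERY_RESULT\"])"
]
},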
{
"cell_type": "code",
"execution_count": 14,
"id": "53c47b24-cc06-4256-9e5b-a82fc80bc55d",
"metadata": {},
"outputs": [],
"source": [
"# You can store your OPENAI_API_KEY in a .env file as well\n",
"# import os\n",
"# from dotenv import load_dotenv\n",
"\n",
"# load_dotenv()"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "a9ae004c-246c-4c7f-8458-191cd7424a9b",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Or set the OpenAI key in the environment directly\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"your-OpenAI-API-key\""
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "d52b089c-10ca-45fb-8669-8a1c5fee10d5",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"' The authors are Zejiang Shen, Ruochen Zhang, Melissa Dell, Benjamin Charles Germain Lee, Jacob Carlson, Weining Li, Gardner, M., Grus, J., Neumann, M., Tafjord, O., Dasigi, P., Liu, N., Peters, M., Schmitz, M., Zettlemoyer, L., Lukasz Garncarek, Powalski, R., Stanislawek, T., Topolski, B., Halama, P., Gralinski, F., Graves, A., Fernández, S., Gomez, F., Schmidhuber, J., Harley, A.W., Ufkes, A., Derpanis, K.G., He, K., Gkioxari, G., Dollár, P., Girshick, R., He, K., Zhang, X., Ren, S., Sun, J., Kay, A., Lamiroy, B., Lopresti, D., Mears, J., Jakeway, E., Ferriter, M., Adams, C., Yarasavage, N., Thomas, D., Zwaard, K., Li, M., Cui, L., Huang,'"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains.question_answering import load_qa_chain\n",
"from langchain.llms import OpenAI\n",
"\n",
"chain = load_qa_chain(llm=OpenAI(), chain_type=\"map_reduce\")\n",
"query = [\"Who are the autors?\"]\n",
"\n",
"chain.run(input_documents=documents, question=query)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1a09d18b-ab7b-468e-ae66-f92abf666b9b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"availableInstances": [
{
"_defaultOrder": 0,
"_isFastLaunch": true,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 4,
"name": "ml.t3.medium",
"vcpuNum": 2
},
{
"_defaultOrder": 1,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 8,
"name": "ml.t3.large",
"vcpuNum": 2
},
{
"_defaultOrder": 2,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.t3.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 3,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.t3.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 4,
"_isFastLaunch": true,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 8,
"name": "ml.m5.large",
"vcpuNum": 2
},
{
"_defaultOrder": 5,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.m5.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 6,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.m5.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 7,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 64,
"name": "ml.m5.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 8,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 128,
"name": "ml.m5.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 9,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 192,
"name": "ml.m5.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 10,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 256,
"name": "ml.m5.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 11,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 384,
"name": "ml.m5.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 12,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 8,
"name": "ml.m5d.large",
"vcpuNum": 2
},
{
"_defaultOrder": 13,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.m5d.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 14,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.m5d.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 15,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 64,
"name": "ml.m5d.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 16,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 128,
"name": "ml.m5d.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 17,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 192,
"name": "ml.m5d.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 18,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 256,
"name": "ml.m5d.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 19,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 384,
"name": "ml.m5d.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 20,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": true,
"memoryGiB": 0,
"name": "ml.geospatial.interactive",
"supportedImageNames": [
"sagemaker-geospatial-v1-0"
],
"vcpuNum": 0
},
{
"_defaultOrder": 21,
"_isFastLaunch": true,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 4,
"name": "ml.c5.large",
"vcpuNum": 2
},
{
"_defaultOrder": 22,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 8,
"name": "ml.c5.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 23,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.c5.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 24,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.c5.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 25,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 72,
"name": "ml.c5.9xlarge",
"vcpuNum": 36
},
{
"_defaultOrder": 26,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 96,
"name": "ml.c5.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 27,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 144,
"name": "ml.c5.18xlarge",
"vcpuNum": 72
},
{
"_defaultOrder": 28,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 192,
"name": "ml.c5.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 29,
"_isFastLaunch": true,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.g4dn.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 30,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.g4dn.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 31,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 64,
"name": "ml.g4dn.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 32,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 128,
"name": "ml.g4dn.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 33,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 4,
"hideHardwareSpecs": false,
"memoryGiB": 192,
"name": "ml.g4dn.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 34,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 256,
"name": "ml.g4dn.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 35,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 61,
"name": "ml.p3.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 36,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 4,
"hideHardwareSpecs": false,
"memoryGiB": 244,
"name": "ml.p3.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 37,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"hideHardwareSpecs": false,
"memoryGiB": 488,
"name": "ml.p3.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 38,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"hideHardwareSpecs": false,
"memoryGiB": 768,
"name": "ml.p3dn.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 39,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.r5.large",
"vcpuNum": 2
},
{
"_defaultOrder": 40,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.r5.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 41,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 64,
"name": "ml.r5.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 42,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 128,
"name": "ml.r5.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 43,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 256,
"name": "ml.r5.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 44,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 384,
"name": "ml.r5.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 45,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 512,
"name": "ml.r5.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 46,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 768,
"name": "ml.r5.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 47,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.g5.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 48,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.g5.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 49,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 64,
"name": "ml.g5.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 50,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 128,
"name": "ml.g5.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 51,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 256,
"name": "ml.g5.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 52,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 4,
"hideHardwareSpecs": false,
"memoryGiB": 192,
"name": "ml.g5.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 53,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 4,
"hideHardwareSpecs": false,
"memoryGiB": 384,
"name": "ml.g5.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 54,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"hideHardwareSpecs": false,
"memoryGiB": 768,
"name": "ml.g5.48xlarge",
"vcpuNum": 192
},
{
"_defaultOrder": 55,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"hideHardwareSpecs": false,
"memoryGiB": 1152,
"name": "ml.p4d.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 56,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"hideHardwareSpecs": false,
"memoryGiB": 1152,
"name": "ml.p4de.24xlarge",
"vcpuNum": 96
}
],
"instance_type": "ml.t3.medium",
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -23,8 +23,18 @@
"source": [
"from langchain.document_loaders import ArcGISLoader\n",
"\n",
"url = \"https://maps1.vcgov.org/arcgis/rest/services/Beaches/MapServer/7\"\n",
"loader = ArcGISLoader(url)"
"URL = \"https://maps1.vcgov.org/arcgis/rest/services/Beaches/MapServer/7\"\n",
"loader = ArcGISLoader(URL)\n",
"\n",
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "1e174ebd-bbbd-4a66-a644-51e0df12982d",
"metadata": {},
"source": [
"Let's measure loader latency."
]
},
{
@@ -261,7 +271,7 @@
"metadata": {},
"outputs": [],
"source": [
"loader_geom = ArcGISLoader(url, return_geometry=True)"
"loader_geom = ArcGISLoader(URL, return_geometry=True)"
]
},
{

View File

@@ -0,0 +1,174 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a634365e",
"metadata": {},
"source": [
"# Azure AI Data\n",
"\n",
">[Azure AI Studio](https://ai.azure.com/) provides the capability to upload data assets to cloud storage and register existing data assets from the following sources:\n",
"\n",
"- Microsoft OneLake\n",
"- Azure Blob Storage\n",
"- Azure Data Lake gen 2\n",
"\n",
"The benefit of this approach over `AzureBlobStorageContainerLoader` and `AzureBlobStorageFileLoader` is that authentication is handled seamlessly to cloud storage. You can use either *identity-based* data access control to the data or *credential-based* (e.g. SAS token, account key). In the case of credential-based data access you do not need to specify secrets in your code or set up key vaults - the system handles that for you.\n",
"\n",
"This notebook covers how to load document objects from a data asset in AI Studio."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "49815096",
"metadata": {},
"outputs": [],
"source": [
"#!pip install azureml-fsspec, azure-ai-generative"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "2f0cd6a5",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from azure.ai.resources.client import AIClient\n",
"from azure.identity import DefaultAzureCredential\n",
"from langchain.document_loaders import AzureAIDataLoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "08d40b11-e87a-426e-a6b0-89f24e47ce2c",
"metadata": {},
"outputs": [],
"source": [
"# Create a connection to your project\n",
"client = AIClient(\n",
" credential=DefaultAzureCredential(),\n",
" subscription_id=\"<subscription_id>\",\n",
" resource_group_name=\"<resource_group_name>\",\n",
" project_name=\"<project_name>\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "321cc7f1",
"metadata": {},
"outputs": [],
"source": [
"# get the latest version of your data asset\n",
"data_asset = client.data.get(name=\"<data_asset_name>\", label=\"latest\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "25d91cea-c5f2-4a53-ac19-442810451ec6",
"metadata": {},
"outputs": [],
"source": [
"# load the data asset\n",
"loader = AzureAIDataLoader(url=data_asset.path)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "2b11d155",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpaa9xl6ch/fake.docx'}, lookup_index=0)]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "0690c40a",
"metadata": {},
"source": [
"## Specifying a glob pattern\n",
"You can also specify a glob pattern for more finegrained control over what files to load. In the example below, only files with a `pdf` extension will be loaded."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "72d44781",
"metadata": {},
"outputs": [],
"source": [
"loader = AzureAIDataLoader(url=data_asset.path, glob=\"*.pdf\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2d3c32db",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader.load()"
]
},
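{
"cell_type": "markdown",
"id": "8b2a6c1e-9d4f-4a7b-b3e5-glob-note",
"metadata": {},
"source": [
"Glob patterns here appear to follow fsspec-style matching, so a recursive pattern such as `**/*.pdf` should also pick up matching files in subfolders - a sketch under that assumption:\n",
"\n",
"```python\n",
"loader = AzureAIDataLoader(url=data_asset.path, glob=\"**/*.pdf\")\n",
"```"
]
},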
{
"cell_type": "code",
"execution_count": null,
"id": "885dc280",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because one or more lines are too long

View File

@@ -30,6 +30,16 @@
"#!pip install datadog-api-client"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"DD_API_KEY = \"...\"\n",
"DD_APP_KEY = \"...\""
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -73,7 +83,7 @@
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -87,10 +97,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.11"
},
"orig_nbformat": 4
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -1,167 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"# Embaas\n",
"[embaas](https://embaas.io) is a fully managed NLP API service that offers features like embedding generation, document text extraction, document to embeddings and more. You can choose a [variety of pre-trained models](https://embaas.io/docs/models/embeddings).\n",
"\n",
"### Prerequisites\n",
"Create a free embaas account at [https://embaas.io/register](https://embaas.io/register) and generate an [API key](https://embaas.io/dashboard/api-keys)\n",
"\n",
"### Document Text Extraction API\n",
"The document text extraction API allows you to extract the text from a given document. The API supports a variety of document formats, including PDF, mp3, mp4 and more. For a full list of supported formats, check out the API docs (link below)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Set API key\n",
"embaas_api_key = \"YOUR_API_KEY\"\n",
"# or set environment variable\n",
"os.environ[\"EMBAAS_API_KEY\"] = \"YOUR_API_KEY\""
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"#### Using a blob (bytes)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from langchain.document_loaders.blob_loaders import Blob\n",
"from langchain.document_loaders.embaas import EmbaasBlobLoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"blob_loader = EmbaasBlobLoader()\n",
"blob = Blob.from_path(\"example.pdf\")\n",
"documents = blob_loader.load(blob)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-12T22:19:48.380467Z",
"start_time": "2023-06-12T22:19:48.366886Z"
},
"collapsed": false
},
"outputs": [],
"source": [
"# You can also directly create embeddings with your preferred embeddings model\n",
"blob_loader = EmbaasBlobLoader(params={\"model\": \"e5-large-v2\", \"should_embed\": True})\n",
"blob = Blob.from_path(\"example.pdf\")\n",
"documents = blob_loader.load(blob)\n",
"\n",
"print(documents[0][\"metadata\"][\"embedding\"])"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"#### Using a file"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from langchain.document_loaders.embaas import EmbaasLoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"file_loader = EmbaasLoader(file_path=\"example.pdf\")\n",
"documents = file_loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-12T22:24:31.894665Z",
"start_time": "2023-06-12T22:24:31.880857Z"
},
"collapsed": false
},
"outputs": [],
"source": [
"# Disable automatic text splitting\n",
"file_loader = EmbaasLoader(file_path=\"example.mp3\", params={\"should_chunk\": False})\n",
"documents = file_loader.load()"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"For more detailed information about the embaas document text extraction API, please refer to [the official embaas API documentation](https://embaas.io/api-reference)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

View File

@@ -65,6 +65,16 @@
"%pip install langchain -q"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2ab73cc1-d8e0-4b6d-bb03-9522b112fce5",
"metadata": {},
"outputs": [],
"source": [
"etherscanAPIKey = \"...\""
]
},
{
"cell_type": "code",
"execution_count": 1,

View File

@@ -217,7 +217,7 @@
"It's compatible with the ̀`langchain.document_loaders.GoogleDriveLoader` and can be used\n",
"in its place.\n",
"\n",
"To be compatible with containers, the authentication uses an environment variable ̀GOOGLE_ACCOUNT_FILE` to credential file (for user or service)."
"To be compatible with containers, the authentication uses an environment variable `̀GOOGLE_ACCOUNT_FILE` to credential file (for user or service)."
]
},
{
@@ -331,6 +331,7 @@
"Some pre-formated request are proposed (use `{query}`, `{folder_id}` and/or `{mime_type}`):\n",
"\n",
"You can customize the criteria to select the files. A set of predefined filter are proposed:\n",
"\n",
"| template | description |\n",
"| -------------------------------------- | --------------------------------------------------------------------- |\n",
"| gdrive-all-in-folder | Return all compatible files from a `folder_id` |\n",
@@ -401,6 +402,14 @@
"id": "375bb465-8f69-407b-94bd-ffa3718ef500",
"metadata": {},
"source": [
"The conversion can manage in Markdown format:\n",
"- bullet\n",
"- link\n",
"- table\n",
"- titles\n",
"\n",
"Set the attribut `return_link` to `True` to export links.\n",
"\n",
"#### Modes for GSlide and GSheet\n",
"The parameter mode accepts different values:\n",
"\n",
@@ -408,12 +417,6 @@
"- \"snippets\": return the description of each file (set in metadata of Google Drive files).\n",
"\n",
"\n",
"The conversion can manage in Markdown format:\n",
"- bullet\n",
"- link\n",
"- table\n",
"- titles\n",
"\n",
"The parameter `gslide_mode` accepts different values:\n",
"\n",
"- \"single\" : one document with &lt;PAGE BREAK&gt;\n",
@@ -503,14 +506,6 @@
" print(\"---\")\n",
" print(doc.page_content.strip()[:60] + \"...\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "51efa73a-4e2d-4f9c-abaf-6c9bde2ff69d",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

View File

@@ -74,7 +74,9 @@
"source": [
"# see https://python.langchain.com/docs/use_cases/summarization for more details\n",
"from langchain.chains.summarize import load_summarize_chain\n",
"from langchain.llms.fake import FakeListLLM\n",
"\n",
"llm = FakeListLLM()\n",
"chain = load_summarize_chain(llm, chain_type=\"map_reduce\")\n",
"chain.run(docs)"
]
@@ -96,7 +98,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.12"
}
},
"nbformat": 4,

View File

@@ -166,6 +166,9 @@
}
],
"source": [
"from langchain_core.documents import Document\n",
"\n",
"\n",
"def decode_to_str(item: tf.Tensor) -> str:\n",
" return item.numpy().decode(\"utf-8\")\n",
"\n",

Some files were not shown because too many files have changed in this diff.